A week ago today, I finally managed to get my committee in one room and presented my PhD proposal to them. The title of my thesis is “Interactive Activity Recognition for Practical Food Journaling”.
The proposal is very important because it is essentially a list of work items that I need to complete before I can defend my thesis. I am now done with this milestone and have a roadmap for the next several months in hand. Without a doubt, I will be writing more here about interactive activity recognition. Onward!
A new year is upon us, hello 2014. This is the time of the year when I look back and wonder how 12 months went by so quickly. I am sure I am not alone. The last couple of months of 2013 were particularly busy. In addition to personal trips during the holidays, I also had the opportunity to attend the 2013 SenseCam and Pervasive Imaging Conference in San Diego, CA.
This was a small conference focused primarily on applications of first-person point-of-view images. I presented our paper titled “Feasibility of Identifying Eating Moments from First-Person Images Leveraging Human Computation”, which can be downloaded here. Our work was very well received and I had the opportunity to meet several researchers at UCSD and beyond who are taking health and behavior assessment to a whole new level thanks to wearable cameras.
Next on the agenda are additional studies related to my thesis work and my thesis proposal, which will probably take place at the end of January. As @dgmacarthur wrote on Twitter:
“May 2014 bring you clean data, well-documented code, large and deeply phenotyped samples, and a clear path to clinical translation”
While getting ready to start my day this morning, I flipped through the latest edition of the “Communications of the ACM” magazine. Right at the beginning was a very compelling writeup by Jason Hong about privacy in the age of Google Glass and similar wearable devices. Jason makes two points. First, he draws a parallel between today’s privacy concerns around wearable devices and the reaction that greeted Mark Weiser’s “Invisible Computing” vision of ubiquitous computing. Back in the late 80s and early 90s, there was a similar outcry about the negative impact of technology, with the press running stories with titles such as “You’re Not Paranoid: They Really Are Watching You”.
According to Jason, it all comes down to expectations, and more importantly, the fact that we don’t know what to expect from these technologies, and whether they will be useful enough to offset the privacy concerns. The second point in Jason’s argument is that expectations of privacy change. He mentions Claude Fischer’s work observing that at first people objected to having landlines in their own homes and, of course, brings up the stir caused by the introduction of the Kodak camera when all of a sudden, any moment could be captured on film.
A good short read, and it’s also available online.
Have you heard of the AIRO? It is the latest device in the sea of activity and “well-being” trackers. This one deserves a little bit more attention because it claims to be able to automatically track not only sleep and exercise like the Fitbit and Up, but also stress and food intake. For the latter, the device is supposed to have an embedded spectrometer that can break down the nutritional intake of food consumed – this is what sets it apart. All in all, the AIRO is what the millions of people interested in tracking have been waiting for, the one wearable that tracks the key pillars of health: diet, stress, sleep and exercise. But is it real?
I am extremely skeptical. It all sounds good in theory, but in practice it’s a completely different story. First, there is food. Friends who are in the biomedical engineering space tell me that detecting “metabolites” through spectrometry is a promising direction, but unlikely to be developed enough to be productized by 2014. And based on my own research, I question the value of obtaining this information automatically. There is increasing evidence that when it comes to food, it is critically important that people are actively engaged in the food journaling process. Awareness of what one eats, and not just background data collection, is what leads to behavior change.
Second, there is stress. There are lots of researchers working on ways to capture stress levels in naturalistic settings. Galvanic skin response, heart rate variability (HRV) and voice features have been used to estimate emotional state. This is not my area of expertise so I cannot comment in much detail, but again, it is a really hard problem, particularly when it comes to evaluating the technology. Stress is highly subjective and variable from person to person, and not all forms of stress should be perceived negatively.
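To give a flavor of what these signals look like in practice, here is a toy sketch of one common HRV feature, RMSSD (the root mean square of successive differences between heartbeats), which is often used as a rough proxy for stress or arousal, since lower beat-to-beat variability tends to accompany higher sympathetic arousal. The RR intervals below are made-up numbers for illustration, not data from any real device:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR interval sequences in milliseconds:
relaxed = [820, 860, 810, 870, 830, 880]   # larger beat-to-beat swings
stressed = [700, 705, 698, 702, 701, 699]  # steadier, faster heartbeat

print(rmssd(relaxed), rmssd(stressed))  # relaxed shows the higher RMSSD
```

Of course, turning a single number like this into a reliable stress estimate in the wild is exactly the hard part, which is why I remain cautious about consumer claims in this space.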
Finally, we have sleep. Some believe that an accurate hypnogram can only be obtained through polysomnography. Thus, the notion that a smartphone app or wristband can wake you up at the perfect time so you feel as refreshed and rested as possible might be fundamentally flawed. A more detailed examination of this topic can be found here.
I hope the AIRO team proves me wrong on all these points.
While working on my dissertation proposal, I recently came across a paper by Bulling, Blanke and Schiele that discusses a number of techniques around human activity recognition using wearable sensors. The title of the paper is “A Tutorial on Human Activity Recognition Using Body-worn Inertial Sensors”.
The paper is a tutorial indeed, but assumes that the reader has a good amount of familiarity with the overall workflow of activity recognition (AR) systems. It presents a nice overview of related work, research challenges, and then goes into what it calls the Activity Recognition Chain (ARC), which comprises the stages of an AR system: data acquisition, pre-processing, segmentation, feature extraction, feature selection and classification. The case study comparing various configurations for each stage of the ARC is very nice as well.
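To make the stages of the ARC a bit more concrete, here is a minimal sketch of the chain in code. The stage names follow the paper, but the synthetic signal, the features and the trivial threshold classifier are my own stand-ins (and I skip pre-processing and feature selection entirely) – this is nothing like the configurations evaluated in the paper’s case study:

```python
import random
import statistics

def acquire(n=400, seed=7):
    """Data acquisition: a fake one-axis accelerometer stream covering
    two activities - low-variance 'still' and high-variance 'walking'."""
    rng = random.Random(seed)
    still = [rng.gauss(0.0, 0.05) for _ in range(n // 2)]
    walking = [rng.gauss(0.0, 0.60) for _ in range(n // 2)]
    return still + walking

def segment(signal, size=50):
    """Segmentation: fixed-size windows (no overlap, for simplicity)."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def features(window):
    """Feature extraction: mean and standard deviation of each window."""
    return (statistics.fmean(window), statistics.stdev(window))

def classify(feat, threshold=0.2):
    """Classification: a trivial threshold on within-window variability."""
    return "walking" if feat[1] > threshold else "still"

labels = [classify(features(w)) for w in segment(acquire())]
print(labels)  # the first windows come out 'still', the later ones 'walking'
```

In a real system each stage would be far richer (filtering, overlapping windows, dozens of features, a trained classifier), but the shape of the pipeline is the same, which is what makes the ARC such a useful organizing framework.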
Overall I found this to be a great resource and I will certainly use it as a reference in the future. If you would like to learn more about the challenges and approaches around AR, this is the paper for you.
Thanks to my friend and fellow graduate student Gabriel Reyes, here is my talk at Ubicomp in Zurich a few weeks ago. I presented our paper titled “Technological Approaches for Addressing Privacy Concerns When Recognizing Eating Behaviors with Wearable Cameras”. You can download the paper here. Gabriel recorded this video with his Google Glass. Headphones recommended.
We have known for quite some time that there are many issues with self-report methods (e.g. 24-hour dietary recall) used in nutritional epidemiology research. Today, a study by researchers at USC suggests that close to 40 years of nutritional surveillance data compiled through the National Health and Nutrition Examination Survey (NHANES) has been compromised by methodological limitations. According to the researchers, the level of energy intake has been underreported such that the numbers are not physiologically plausible.
“the ability to estimate population trends in caloric intake and generate public policy relevant to diet-health relationships is extremely limited”
The magnitude of this error is enormous, and it underscores the need for more accurate instruments to measure dietary/energy intake at the population level. Needless to say, this report will be front and center in my presentation when I propose my dissertation on the topic of food journaling in a few weeks.
Thanks to Sarita Yardi for bringing this report to my attention (on Twitter).
Earlier today I had the chance to read Kay et al.’s “There’s No Such Thing as Gaining a Pound: Reconsidering the Bathroom Scale User Interface” paper, which was presented at Ubicomp 2013 a few weeks ago. This was the second year in a row that Kay and Kientz got a best paper at the conference, so kudos to them.
I enjoyed reading this paper for many reasons. First of all, it addresses an issue that most of us take for granted – who would ever question the user interface of a weight scale? It is just a number reflecting body weight, right? As the authors point out, it is more complex than that. It is just a number, yes, but that number is associated with a wide range of feelings, perceptions and perhaps more importantly, misinterpretations.
Secondly, the paper answers its research questions through a series of qualitative and quantitative studies. Being more focused on quantitative studies myself, I always enjoy learning more about mixed-method approaches and how they can be successfully used in the space of personal informatics. I particularly like the within-day weight fluctuation study answering the question of how much a person’s weight typically varies during a single day.
And finally, I enjoyed the conclusion with a discussion of how the user interface of weight scales can be improved. One of the key opportunities is to educate people about weight fluctuations and de-emphasize the notion of current weight.
Academically and intellectually, September was the busiest month of the year for me, and the weeks leading up to it were quite intense as well. In light of conference travels, presentations and deadlines, there weren’t too many hours left in the day. Luckily all activities were tremendously exciting.
From September 8th to the 12th, I attended the Ubicomp 2013 conference in Zurich. I participated in the Doctoral School and also presented a paper on the last day of the conference. This year ISWC (International Symposium on Wearable Computers) was co-located with Ubicomp and I think this was a great move, since there was an obvious and productive overlap between the two conferences. I should also say that the Ubicomp and Pervasive conferences merged, and this year we had the first joint event, which is now officially called the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. The four-track conference was very well attended, with 700 people registering. More on Ubicomp 2013 in future posts.
Once back in Atlanta from Zurich, there was nothing but long hours and lots of work in preparation for a submission to CHI, whose Papers & Notes deadline was September 18th. At the end of the day we felt that we submitted a strong paper to be considered for the CHI 2014 program.
While doing some research in preparation for my thesis proposal, I decided to re-read one of the papers that I expect to become increasingly important in the field of personal informatics. It is Li et al.’s “A Stage-Based Model of Personal Informatics Systems”.
Based on surveys and interviews with 68 people, the authors suggest a model that describes personal informatics systems as a series of five stages (Preparation, Collection, Integration, Reflection, and Action). One of the most useful aspects of the paper in my view is the description of barriers people encountered in each stage. Some of the barriers in the Collection stage include “Remembering”, “Lack of Time”, and “Motivation”. These map exactly to the challenges that I’ve identified in my recent food journaling work.
If you are interested in the space of personal informatics, it is a paper I recommend. You can download it here.