One of the sponsors of my research at Georgia Tech is Intel. My group, the Ubicomp group, is a member of the Intel Science and Technology Center for Pervasive Computing (ISTC-PC), and together with other academic institutions and researchers, we explore how sensing and computing technologies are changing the landscape of a variety of domains, such as health and wellbeing.
Every year, Intel organizes a retreat with all ISTC-PC members. This year we all traveled to Suncadia, about an hour east of Seattle, for two days of face-to-face conversation. Topics covered included the future of sensing, privacy and security, activity recognition, and behavior journaling.
It was a very productive event with good discussions and the opportunity to meet many friends in the research community. And the location was truly spectacular.
As my research progresses, it is becoming clearer that my dissertation topic will focus on systems that help people keep track of what they eat. This is an interesting problem because it is not possible to track diet as easily and objectively as physical activity, which one can capture using an activity tracker like the Fitbit. Yet, diet is critical as a health determinant.
There is a large body of work around automatic dietary monitoring (ADM), and Oliver Amft (TU Eindhoven) and Gerhard Tröster (ETH Zurich) have explored this domain fairly extensively from the perspective of on-body sensing. Today I decided to take a closer look at one of their papers, an article for IEEE Pervasive Computing magazine titled “On-Body Sensing Solutions for Automatic Dietary Monitoring”. It was published in 2009.
The article discusses the motivation for ADM and the weaknesses of self-report techniques. One of the highlights is a discussion of ADM on-body sensing options, ranging from gastric activity to the thermic effect of food intake (TEF):
“TEF starts immediately after food reaches the stomach and peaks after roughly 60 minutes. For unrestrained eating in people of normal weight, skin temperature above the liver increased between 0.8 and 1.5K”
The approach they chose to investigate in more detail involves modeling the intake cycle, covering intake gestures, chewing, and swallowing, using a probabilistic context-free grammar. They frame it as a hierarchical recognition task that consists of detecting a sequence of sub-events.
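To make the hierarchical idea concrete, here is a toy sketch in Python. A real system would parse noisy sensor streams with a probabilistic grammar; this deterministic version, with made-up sub-event labels, only illustrates how low-level sub-events roll up into higher-level intake cycles:

```python
# Toy hierarchical recognizer: an intake cycle is modeled as one intake
# gesture, followed by one or more chews, followed by one or more swallows.
# The sub-event labels ("gesture", "chew", "swallow") are hypothetical.
def recognize_intake_cycles(events):
    cycles = []
    i = 0
    while i < len(events):
        if events[i] == "gesture":
            j = i + 1
            chews = 0
            while j < len(events) and events[j] == "chew":
                chews += 1
                j += 1
            swallows = 0
            while j < len(events) and events[j] == "swallow":
                swallows += 1
                j += 1
            if chews > 0 and swallows > 0:
                # A full cycle was found; record it and resume after it.
                cycles.append({"start": i, "chews": chews, "swallows": swallows})
                i = j
                continue
        i += 1
    return cycles

stream = ["gesture", "chew", "chew", "chew", "swallow",
          "gesture", "chew", "swallow", "swallow"]
print(recognize_intake_cycles(stream))  # two intake cycles
```

The point of the hierarchy is that each layer only has to solve a local problem: sub-event detectors work on raw sensor data, and the grammar layer works on sub-event sequences.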
It is a good read.
At TEDMED this year, Deborah Estrin made a strong case for leveraging our “digital breadcrumbs” to “help us get a clearer picture of our personal health”. She referred to these “breadcrumbs” as our “small data”, and told the story of her father’s decaying health; while invisible to his caregivers and doctors, his changing daily behaviors could have been picked up by his phone and computer. She finished her talk with a call-to-action: we need to get web services and mobile providers to give us our data back so that we can use it for our own purposes, such as for personal health inferences.
Needless to say, I agree with Estrin. Collecting digital traces of our lives is something that I have been doing for almost 10 years now, first with Slife Labs and now in my research. In fact, being plugged in to this concept has forced me to pay attention, and as a result, I’ve seen many efforts with similar messaging come and sometimes go (hello Attention Trust, YouData, the Locker Project). In my opinion, however, we don’t necessarily need access to what is centrally stored about us by AT&T, Verizon and Google to make an impact. By working at the node level, installing software on our phones, computers and soon TVs, we can collect all the data we need to build predictive models of human behavior. In fact, this has been done, and Estrin knows it too: she mentions the company Ginger.io in her talk. Additionally, many of these companies already provide APIs that give us access to much of the data we would want. From my experience, the biggest barrier to realizing the small data vision that Estrin proposes is getting the green light from the medical community, not only to validate and refine these approaches but also to incorporate them into traditional medical practices. That’s where the challenge lies; I do not see access to data as a barrier, at least not at the personal level.
Another significant health initiative related to personal activity data tracking was announced yesterday. The California Institute for Telecommunications and Information Technology (Calit2) in San Diego and the Robert Wood Johnson Foundation unveiled a project called Health Data Exploration, with the goal of bringing everyday activity data to researchers. In the spirit of the Quantified Self movement, it is framed as a conversation, an exploration, and not as a research program per se. But if I am correct, the ideal outcome would be to have companies like Fitbit, Jawbone and Nike open their activity databases for “the public good”.
There is a clear overlap between these two initiatives. I see the value of the enormous amount of data that companies like Fitbit have compiled. Analyzing that data would certainly lead to new insights about health and human behavior, so I am curious to see how the project unfolds. But again, as a researcher, today I feel that I am more restricted due to the skepticism of the medical community towards adopting new strategies and technologies than by lack of data. Even starting collaborations with medical professionals has been a challenge.
Here is what would be very exciting to me as a computer scientist working in the health domain: an avenue to work directly with doctors and patients. I am not sure how this could be structured, but in an age when health accountability matters, it might be possible to incentivize healthcare institutions to work alongside technologists more closely. Yes, I am aware of the difficulties associated with the regulatory process, getting FDA approval, dealing with HIPAA, etc., but my perspective is that we will have to deal with these hurdles eventually no matter which road we take. In practical terms, I would love to see a resource, perhaps a web site, acting as a meeting place where electrical engineers, biomedical engineers, computer scientists and even designers could propose studies to healthcare organizations around new and innovative technologies and approaches. One of these studies could be Estrin’s small data, perhaps applied in the context of congestive heart failure or Alzheimer’s disease. Doctors and caregivers would visit this resource and possibly sign up to collaborate, depending on their needs and areas of specialization. Realistic? I am not sure, especially considering that physical presence matters in these kinds of collaborations. But to me the value of a project like this is crystal clear. Especially if doctors and scientists really do show up to the party and decide to dance together.
One of the reasons why activity recognition in the home is interesting to me is because it has the potential to enable so many important health applications, such as remote monitoring. There are many people, especially older adults, who would much rather stay at home while battling chronic diseases than move into an assisted living facility or hospital. But without a range of supportive services, which are often prohibitively expensive, it becomes virtually impossible to properly care for someone in their own home.
There is a wide range of commercial products centered on supporting independent living, from fall detection to medication compliance systems. One piece of the puzzle that is missing is a communication channel that offers caregivers a holistic view of an individual’s patterns of daily living at home. This would enable caregivers to observe everyday behaviors on a regular basis and hopefully anticipate problems.
One type of sensor goes on pill boxes, while another measures whether people are eating and drinking on a regular schedule by indicating when refrigerator or pantry doors are opened; both use accelerometers. A third variety is a key fob with a Bluetooth Low Energy transmitter that lets the server know when the user is out of range, typically 125 meters (about 410 feet). This acts as a proxy for when the person has left home.
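As a rough illustration of the key fob idea, here is a minimal Python sketch of the out-of-range inference: the hub records a timestamp each time it hears the fob, and "away" simply means the fob has been silent for too long. The 10-minute grace period and function names are my own assumptions, not details of any particular product:

```python
# Infer "away from home" from a BLE key fob's last-seen timestamp.
# The threshold is a made-up value; too short and brief radio dropouts
# would look like departures, too long and detection lags badly.
AWAY_THRESHOLD_S = 600  # treat 10 minutes without a sighting as "away"

def is_away(last_seen_ts, now_ts, threshold_s=AWAY_THRESHOLD_S):
    """Return True if the fob has been out of range long enough."""
    return (now_ts - last_seen_ts) > threshold_s

print(is_away(last_seen_ts=1000, now_ts=1300))  # seen 5 minutes ago: False
print(is_away(last_seen_ts=1000, now_ts=1700))  # silent ~12 minutes: True
```

The grace period is what makes this robust as a proxy: BLE reception is flaky, so a single missed advertisement should not count as leaving the house.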
Researchers have attempted to use sensor networks this way, with moderate success. For example, Tapia took the idea of sensors in the environment and showed how one could learn more about an individual’s high-level activities from low-level sensors. Rantz and Skubic demonstrated how a sensor network could be used as an early-warning system for conditions such as urinary tract infections in older adults. The main limitation of these systems has been the large number of sensors required, sometimes on the order of 50-80 sensors per home. That is too many.
It’s clear that there is room for sensor networks in health monitoring at home, and Lively is betting that a few strategically located wireless sensors can tell us most of what we need to know about someone’s well-being. I agree with this direction, and I am curious to see how the company does, including whether it raises significant funds through Kickstarter, which would be a good indicator of demand for a product like this.
It seems like every other day a new story in the media reports on the rise of the smartphone as the uber-sensor for medical applications. Yesterday I had the chance to read the New York Times story “Apps Alert the Doctor When Trouble Looms“, which highlights apps that monitor patterns in device usage (e.g. phone calls made, text messages received) and link pattern deviations to a variety of possible medical conditions.
The notion of extracting behavior features from smartphone usage is an idea I am fond of. I am following Ginger.io, the company that brought this idea to market, very closely. So, naturally, I read the story with interest. Surprisingly, the most remarkable aspect of the story was not the story itself, but the comments from readers.
As of now, 8 out of 10 comments note that the notion of using a smartphone as a diagnostic tool or to alert a doctor is preposterous. The majority of these negative comments were “upvoted” using the “Recommend” link that the New York Times publishing platform provides. Many comments touched on the privacy issue:
This is insane Big Brotherism
No way around it, such is a Violation of Privacy!
An interesting perspective came from someone battling chronic conditions:
As someone dealing with several chronic conditions I feel qualified to state that this is the worst idea I have ever read. The possibilities for the app to go wrong are endless.
A doctor also chimed in:
You want to know the first thing i thought about as a doctor? Picturing me getting sued by a patient or their family and the lawyer saying: this app shows that you received this information, yet there is no record that you acted on it. im literally supposed to act on it, pull the chart, and then note what the app said and what i did, and why i did it, and why i didnt do something else. tort reform before i use this app.
First of all, the variety of viewpoints on an important topic from so many of the affected parties constitutes, in my opinion, one of the best practical examples of the Social Construction of Technology (SCOT) theory at play I have ever witnessed. In a few words, human action shapes technology and technology is interpreted differently by different social groups.
Secondly, as we move forward developing technologies that assess an individual’s health condition from everyday behavior, do we need to pay more attention to these loud voices asking us to stop? Or should we remember that just 10 years ago, the idea of sharing what we share today on social networks was equally ludicrous and ended up going mainstream anyway? As a technologist, I am not surprised to find myself leaning towards the latter. But I do believe it’s time for more effort to be put into thinking deeply about the privacy implications of this work.
What do you think?
Due to a very hectic schedule in the last few months, which included a paper deadline, the Ubicomp conference, a qualifying exam presentation and the beginning of classes, I haven’t been able to blog here nearly as often as I should. With a little bit more breathing room now, I will try to catch up in the next post or two.
Let me start with the ISTC retreat. Back in August, I had the opportunity to attend the first ISTC-PC retreat in the beautiful Alderbrook Resort & Spa, located about 2 hours from Seattle by car and ferry. It was a great event, and an opportunity to meet up with other researchers that comprise the relatively new Intel Science and Technology Center for Pervasive Computing.
Many professors and researchers from the University of Washington were in attendance and it was wonderful to see old friends and meet new ones. Intel was well represented and sent a large entourage of engineers and executives from many relevant units, from embedded systems to mobile devices and personal computers. I found it particularly rewarding to meet and have conversations with Deborah Estrin (Cornell-NY), Tanzeem Choudhury (Cornell) and their students, since we all share a deep interest in health modeling and applications.
It was encouraging to hear that Intel is very eager to enable our vision of the future as well. In particular, one of the issues they are addressing is how to make mobile device sensors less dependent on the CPU and associated chip infrastructure in the interest of conserving battery life. Designing so-called “sensor hubs” might be one direction to pursue.
Overall it was a great experience and I am looking forward to more interactions with ISTC collaborators and Intel in the future.
Today I am in Pittsburgh for Ubicomp 2012, presenting our work in recognizing activities in the home using infrastructure-mediated sensing:
Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing
Activity recognition in the home has been long recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge. Striking a balance between cost, privacy, ease of installation and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can estimate activities with overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. As far as we know, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.
Edison Thomaz, Vinay Bettadapura, Gabriel Reyes, Megha Sandesh, Grant Schindler, Thomas Plötz, Gregory D. Abowd, Irfan Essa
Download the full paper and leave your questions, comments and suggestions here.
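For readers curious about the vector space model angle the abstract mentions, here is a loose Python sketch of the general technique: represent an observation window as a vector of event counts and classify it by cosine similarity to labeled prototype vectors. The fixture-event features and prototype numbers below are invented for illustration and are not the actual features from the paper:

```python
import math

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical feature order:
# [faucet_on, faucet_off, toilet_flush, shower_valve]
prototypes = {
    "cooking":   [6, 6, 0, 0],   # many short faucet uses
    "shaving":   [3, 3, 0, 0],   # a few faucet uses
    "showering": [1, 1, 0, 2],   # dominated by shower valve events
}

def classify(vector):
    """Label a window by its most similar labeled prototype."""
    return max(prototypes, key=lambda label: cosine(vector, prototypes[label]))

print(classify([2, 2, 0, 3]))  # closest to the "showering" prototype
```

The appeal of this representation is that it sidesteps temporal modeling entirely: a window of water-fixture events becomes a single vector, and similarity in that space stands in for similarity of activities.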
Rock Health, a technology/startup incubator focused on health companies and applications, just released a very clear report on medical and wellness sensors. It’s a graphical, somewhat summarized version of a post I wrote a while back about health sensing technologies. Definitely worth a look:
The key message here is that health-focused sensors will be big. They project hundreds of millions of devices by 2014.
Lately, as part of my research in indirect health inference through infrastructure-mediated sensing techniques, I’ve been investigating options for remote data acquisition. It would be wonderful if I could find a platform that let me take an analog signal as input, send it through a pipeline of signal processing and machine learning algorithms, and submit results to a remote server.
At the high end, there are netbooks. Any Best Buy can sell you a complete netbook system for less than $300. For certain applications in data sensing, processing and communication, $300 is good enough. Unfortunately, a netbook is a bit too big and power hungry. Not to mention that its screen, graphics card and other features might go unused, inflating the cost of the device unnecessarily for the job it has been assigned to do.
At the other end of the spectrum are platforms like the Arduino, which is small and inexpensive but might not provide the processing power one needs. Are there any other alternatives? There are plenty of single-board computers out there, one of which is the Chumby Hacker Board, or CHB for short.
Recently, I’ve been following the development of an ARM-based platform called Raspberry Pi. The goal is to develop the cheapest possible computer with a basic level of functionality, for around $25. The team is already showing a prototype board, the size of a credit card, running Ubuntu:
Lots of details can be found here and they are expected to be shipping in December. I am really looking forward to experimenting with them.
The primary sensing technology that I’ve been using in my health modeling research this year is Hydrosense, a device that consists of a water pressure sensor and some additional hardware for signal acquisition, processing, communication and/or storage. By monitoring water pressure change patterns in a single-family home, Hydrosense lets us identify which water fixtures are in use and help us develop a good sense of which water-based activities are taking place. It’s a cornerstone of our activity and lifestyle recognition efforts.
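To give a flavor of what pressure-based sensing involves, here is a simplified Python sketch that flags candidate valve events in a pressure trace. Hydrosense's actual pipeline matches full pressure transients against per-fixture signatures; the threshold and readings below are made up for illustration:

```python
# Flag candidate valve-open/close events by thresholding sample-to-sample
# pressure changes. Opening a valve causes a sudden pressure drop at the
# sensing point; closing one causes a sudden rise.
def detect_events(pressure_psi, threshold=2.0):
    events = []
    for i in range(1, len(pressure_psi)):
        delta = pressure_psi[i] - pressure_psi[i - 1]
        if delta <= -threshold:
            events.append((i, "open"))   # sudden drop: a valve opened
        elif delta >= threshold:
            events.append((i, "close"))  # sudden rise: a valve closed
    return events

# A fabricated trace: steady, a fixture opens, runs, then closes.
trace = [55.0, 55.1, 50.3, 50.2, 50.1, 54.9, 55.0]
print(detect_events(trace))  # one "open" and one "close" candidate
```

The hard part, which this sketch skips entirely, is the next step: distinguishing a toilet valve from a kitchen faucet based on the shape of the transient, which is where the real signal processing lives.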
A few weeks ago, while at the CDC Public Health Informatics Conference, a question was posed to me: “What is the Hydrosense coverage right now considering that not all homes in the US are single-family homes and not all of them are connected to a water supply system?” I was intrigued by this question and decided to investigate further. Luckily I didn’t have to go very far. Wikipedia and the U.S. Census Bureau had all the numbers I was looking for.
There are 115.9 million homes in the US. About 70 million of these (60.3%) are detached single-family units. Eighty percent of single-family homes are occupied by their owners. In terms of water supply, 14.5% of Americans rely on their own water sources, typically wells. In this case, a well pump sends water to a storage tank with an air bladder that compresses as water is pumped in. At 40-60 psi, the pump stops. When water is used in the home, pressure drops, and when it goes below 20 psi, the pump starts again.
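The pump behavior described above is a classic hysteresis control loop, which can be sketched in a few lines of Python (cut-in and cut-off points as described, with 60 psi assumed as the cut-off):

```python
# Pressure-switch hysteresis for a well pump: start below the cut-in
# pressure, stop at the cut-off pressure, and otherwise hold state so the
# pump doesn't rapidly cycle on and off around a single threshold.
CUT_IN_PSI = 20.0
CUT_OFF_PSI = 60.0

def pump_state(pressure_psi, currently_on):
    """Return the pump's next on/off state given current tank pressure."""
    if pressure_psi < CUT_IN_PSI:
        return True          # pressure too low: start the pump
    if pressure_psi >= CUT_OFF_PSI:
        return False         # tank recharged: stop the pump
    return currently_on      # in between: hysteresis keeps the current state

print(pump_state(15.0, currently_on=False))  # True: start pumping
print(pump_state(40.0, currently_on=True))   # True: still filling
print(pump_state(60.0, currently_on=True))   # False: cut-off reached
```

The hysteresis band matters for sensing, too: in well-supplied homes the baseline pressure drifts between 20 and 60 psi, which any pressure-based inference would have to account for.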
More than 99% of the US has access to “complete plumbing facilities”, defined as having (1) hot and cold piped water, (2) bathtub or shower, and (3) flush toilet. Homes that lack such water facilities total 670,986, and are usually inhabited by the elderly, the poor and those living in rural areas. Alaska has the highest percentage of households without plumbing.
To sum it up, Hydrosense can be used today in about 60% of homes in the US. We would like to enhance it so that it works reliably in multi-family homes and apartment complexes as well, which would expand Hydrosense’s coverage to virtually every home in the country. Thinking globally, and especially in the context of developing countries, I now wonder how suitable Hydrosense is in other regions of the world.