Health-tech KOL Janae Sharp talks the brain, wearables and autism with the inventor of affective computing.
This is the first installment of a two-part story on Rosalind Picard, Sc.D., the pioneer of affective computing, a field that began at the Massachusetts Institute of Technology (MIT) Media Lab. She has two startups in the affective computing space, including Empatica, which makes a wearable to help with epilepsy management. Picard recently spoke at the InterSystems Global Summit, and I had the chance to talk with her about artificial intelligence (AI), depression and autism before her keynote. An edited rundown of our discussion follows.
Rosalind Picard started her early work with a question: Can we build computers that think the way the human brain does?
This desire to build a better computer meant she needed a better understanding of which parts of the brain were responsible for intelligence and thinking.
The neurologist Richard Cytowic, M.D., M.F.A., was doing brain imaging to figure out which part of the brain was responsible for synesthesia. His book, “The Man Who Tasted Shapes,” described people who could taste something and feel an object in their hands, or see colors when they heard music. One sense triggered impressions in another.
At the time, in their quest to pioneer a smarter way of computing, Picard and her colleagues at Harvard and other universities were primarily focused on the cortex.
But contrary to popular assumptions, the cortex was shutting down during episodes of synesthesia, meaning it wasn’t the part processing those sense impressions. Overall blood flow to the brain wasn’t dropping, so it had to be going somewhere else. Cytowic started to learn that the deep, old parts of the brain that control emotion, memory and attention were actually playing a very important role in sensory functioning and perception.
Picard: No one was looking at the parts of the brain that were actually responsible for this processing. The “emotion” parts of the brain were critically involved in perception, decision making and rational thinking.
I had this moment of like, “Oh, great — emotion. Here I am, this woman, trying to be taken seriously in engineering, and I’m going to tell them that emotion is important? Really, this is just not going to work.”
I was horrified: “Great, what a crappy finding. Just what I need to tank my career.”
I did a lot more work, thinking, “Please don’t let it be this — it must be something else.”
Lo and behold, it turns out that emotion is critically important, and nobody was paying attention to it in computer science or AI or engineering, or even really in brain science. There were only a few people starting to look at affective neuroscience: Antonio Damasio, Joseph LeDoux and Jaak Panksepp, each of whom came to the topic at various points.
I started reading all their stuff, and I thought, “I’m going to have to break it to all my fellow AI researchers that I think emotion is important.”
Telling them produced classic moments, like the quintessential stereotypical engineer standing toe to toe with me, looking at my feet, telling me how unimportant emotion is without once looking at my face.
After the negative feedback and taking that risk, I thought, “Hey, science is about trying to figure out what’s true, even if it makes us uncomfortable. We’ve got to figure it out.” So, I proposed a lot of experiments. I thought, “I’m going to bring the tools of engineering to this and bring rigorous, robust measurements, and we’re going to see if there is something real here, because it sure looks like there might be.”
Picard wrote the book “Affective Computing,” and with that a computational field was born. There's a journal on this subject, an international society and conferences, and affective computing has even made the Gartner hype cycle. The MIT Media Lab’s most requested demo is on the subject.
Picard: The point is that it’s actually gone from something embarrassing to something that almost every company coming in here wants to see.
So, we now know that what we thought might be important really is important, that it can be measured in more objective ways, and that it can help machines be more intelligent. It can have huge implications for your health: your physical health, your mental health, your relational health, your social-emotional health. We are trying to build technology that really helps people with that, as opposed to technologies that just make AI really smart.
Sharp: Your work also deals with patients with autism and tools that help with recognizing emotions. How did that develop?
Picard: My developments in the field of autism began when I was trying to get computers to understand our emotions better. One day my husband and I were leading 30 people on a bicycle trip to Cape Cod. One of the guys I hadn’t met, who was going to bicycle with us, asked what I did for work, and I said, “I’m working on giving computers the ability to read our emotions.”
And he said, “Oh, can you help my brother?” And I’m like, “What do you mean? Tell me about your brother.”
As I learned more about his brother and other people on the spectrum, I found that many of them, though not all, want to be able to do a better job of understanding emotions. The rest of us show our emotions on our faces and can gauge people's feelings by reading their facial expressions, but people with autism often don't get the same information from expressions. They can seem oblivious to the cues.
We started working to understand more about the needs of patients with autism. As we worked with them, we not only developed tools that could help some of them read other people's facial expressions (work that spun out as the company Affectiva, which now makes computing and wearables tools), but I also came to realize that we had been wrong in assuming that so many of them were having trouble reading other people's emotions. In fact, it was the other way around. One of them said to me, “Ros, my biggest problem is not with other people's emotions; the problem is that you are not reading my emotions.”
Hearing that was like, “Whoops! Stab me in the heart. I'm sorry. You know what? Am I screwing up? I know I have room to improve too, but help me out here.”
And she said, “You know, it's not just you — it’s everybody.”
I asked, “What is it that we're all missing?”
And she said, “You are not reading my stress. I am having enormous stress and anxiety and you guys are missing it.”
Sharp: Yes. The students with autism I’ve met need breaks from school. They are more overwhelmed by everyday input, and changes in their schedule are even more stressful.
Picard: A lot of us are just good at hiding our stress. People with autism, though, are often experiencing much more sensory overload and stress than the rest of us. And we misread it all the time.
They're suffering, and we're oblivious. Then, as I learned more about what we were missing, I realized we had built tools in the lab long ago that could measure that aspect of emotion. I thought, “You know, this might be helpful for these folks with autism.”
The problem is, everything we developed in the lab had all of these wires attached to it, and you just couldn't put it on a kid all day in school; they wouldn’t be able to wash their hands, they wouldn't be able to jump on the trampoline and run around the playground. So, I thought, “Technology's getting to the point where maybe we really could build a new version that could be mobile, that could go on everybody's bodies.”
And as we learned about the overload they were experiencing, we thought, “You know, we can’t keep bringing them in the lab — we need to give them a comfortable setting. We need to put a lab where they are. And the ‘lab’ has to be comfortable for them to wear.”
We built a lot of different versions: ankle bands, footwear, socks. At one point, the accounting people at MIT were calling my assistant, really worried, because they saw an order of 24 baby socks from my lab, and they thought that, as a new mom, I must be cheating MIT. We were putting sensors in the socks for babies born into families with autism, to measure the signals from the babies’ feet and ankles as part of a collaboration with Mass General Hospital.
Sharp: It was a giant MIT-sponsored baby shower for all the scientists.
Picard: Yes. We had to tell the accounting people why we were buying little baby shoes and socks: they were for wearables. One of my students graduated and went on to IBM, and she said, “I miss the days when I could just go buy a bra to put a sensor in, or shoes for sensors. At IBM, when I buy clothing, it kicks off an audit, and they get all upset and stop my research.”
We built the wearable sensors and tried to integrate them into clothing in a safe, comfortable, washable way so we could gather data outside the lab, instead of bringing some poor kid in for an hour. Whenever kids with autism would come in, they would be so stressed in a new environment that many of them shut down. Perceptually, everything they saw just looked white.
The data from our sensors were jaw-dropping: things we never expected to find, such as seizures and new patterns that indicate different kinds of stress. The data were much more specific than we originally expected. We learned from the wild, and that led to the work we're now doing with neurology patients with epilepsy and on forecasting mood, stress and health. It's like a personalized weather forecast, and a whole bunch of things I never would have thought possible if I hadn't seen the data.
The second part of our conversation with Picard will be published next week. Stay tuned.