This week on Data Book, the first human killed by a robot and the murky question of AI and legal liability.
In 1979, the world saw the first human ever killed by a robot. Robert Williams was a 25-year-old factory worker at a Ford plant in Flat Rock, Michigan. He was retrieving parts when a malfunctioning robotic arm struck him in the head. He died instantly.
This week on Data Book, we give you that story and more. The big question: Who is responsible when artificial intelligence (AI) fails and kills?
>> LISTEN: The High-Tech Hospital the World Wasn't Ready For
It’s true that the first-ever human death at the hands of a robot didn’t involve AI, at least not as we think of it today. But as self-driving cars and other smart, automated technologies proliferate, such deaths will likely become a bigger issue. Just this year, for instance, a self-driving car struck and killed a pedestrian in Arizona, the first fatality of its kind.
So, what does this mean for healthcare? As noted in this feature story, AI is infiltrating healthcare (in a good way), from diagnostics and logistics to care planning. Although experts say “true AI” has yet to take hold, algorithms trained on sprawling data sets will likely play an ever-larger role in medicine. And at some point, something will go wrong. Horribly wrong.
Our guest, writer Gautham Thomas, returns to Data Book to discuss the liability question. He wrote a feature story on this pressing topic for the June issue of our magazine (look for it soon), and he’s spoken with leading voices in the field.
If you like listening to Data Book, please subscribe, rate, and review us on iTunes, Google Play, Spotify, or just about any podcasting app. And feel free to tell a friend. Every Friday morning, we publish new episodes on the best stories and insights in big data, artificial intelligence, and cybersecurity.