"The majority of available user interfaces are targeted at average users. This one-size-fits-all thinking does not consider individual differences in abilities," Jussi Jokinen, PhD, said.
Touchscreen devices have been a revolution in ergonomics. They create intuitive, tactile interfaces to guide users through complex functions.
But they aren’t for everyone, at least not yet. A range of conditions can make touchscreens hard to use: tremors that make it difficult to tap the intended target on the screen, or cognitive impairments that make it hard to type the right word or remember the steps of a task.
A “one-size-fits-all” approach doesn’t work for touchscreen interfaces, according to Finnish researcher Jussi Jokinen, PhD. But it would be impractical to predict and test every alternative design that might make touchscreens easier to use for people with tremors. Artificial intelligence (AI) can help fill the gap.
“Previously, designers did not have detailed models that are based on psychological research and can be used to predict how different individuals perform in interactive tasks,” Jokinen said. His team from Aalto University in Finland collaborated with researchers from Kochi University of Technology in Japan to test a method that predictively optimizes an interface based on a user’s interactions with it. The process generates many candidate interfaces tailored to a person with a certain disability, evaluates each against a simulated model of that disability, and keeps the designs that perform best. It quickly automates a validation process that would take far longer if done with real users.
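The published optimizer is far more sophisticated, but the basic loop, generate candidate designs, score each against a simulated user, keep the best, can be sketched in a few lines. The sketch below is purely illustrative: the one-parameter design space (key size), the Gaussian-jitter tremor model, and all numbers are assumptions for demonstration, not the Aalto/Kochi researchers' psychologically grounded model.

```python
import random

# Illustrative sketch only: each candidate keyboard design is described by a
# single key size, and a user's tremor is approximated as Gaussian jitter
# added to every intended touch point. Neither the design space nor the noise
# model comes from the paper; both are placeholders for demonstration.

def simulate_typing_error_rate(key_size_mm, tremor_sd_mm, n_taps=5000, seed=0):
    """Estimate the fraction of taps that land outside the intended key."""
    rng = random.Random(seed)
    half = key_size_mm / 2.0
    misses = 0
    for _ in range(n_taps):
        # The intended touch is the key centre; tremor displaces it in x and y.
        dx = rng.gauss(0.0, tremor_sd_mm)
        dy = rng.gauss(0.0, tremor_sd_mm)
        if abs(dx) > half or abs(dy) > half:
            misses += 1
    return misses / n_taps

def optimize_key_size(candidate_sizes_mm, tremor_sd_mm):
    """Return the candidate design with the lowest predicted error rate."""
    scored = [(simulate_typing_error_rate(s, tremor_sd_mm), s)
              for s in candidate_sizes_mm]
    best_error, best_size = min(scored)
    return best_size, best_error

if __name__ == "__main__":
    # Compare a generic layout against larger keys for a simulated severe tremor.
    sizes = [6.0, 8.0, 10.0, 12.0]   # candidate key widths in millimetres (assumed)
    tremor = 2.5                     # assumed tremor amplitude (std. dev., mm)
    size, err = optimize_key_size(sizes, tremor)
    print(f"Best key size: {size} mm, predicted error rate: {err:.1%}")
```

The point of the simulation step is speed: thousands of virtual typing trials can be scored in seconds, whereas validating each candidate design with a real user would take hours or days.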
With a generic text interface, someone with a severe tremor could expect numerous missteps and typos. Tested with a real user who has essential tremor, “the model’s prediction and real-life observations coincided, and the user was able to type almost error-free messages,” according to a statement.
They began the research with text entry because it’s a fundamental, everyday task. They chose tremor as the condition to optimize for because it’s a physical disability that makes text entry difficult. But with the predictive method established, the work can be extended to other disabilities and functions.
“We have models for simulating how being a novice or an expert with an interface impacts user's performance. We can also model how memory impairments affect learning and everyday use of interfaces,” Jokinen said in a statement. “The important point is that no matter the ability or disability, there must be a psychologically valid theory behind modelling it.”
The work is early, but for those in health tech, the appeal is obvious. A major barrier to the spread of mHealth and patient engagement applications has been usability: quite often, the patients who stand to gain the most from remote care via a mobile device or tablet also have some cognitive or physical impairment that makes those applications hard to use. If AI can help tailor applications to their needs, that barrier could someday be removed.
“This is, of course, just a prototype interface, and not intended for consumer market,” Jokinen said. “I hope that designers pick up from here and, with the help of our model and optimizer, create individually targeted, polished interfaces.”
The team’s report was published this week in IEEE Pervasive Computing, an Institute of Electrical and Electronics Engineers publication.