The potential of AI in healthcare is approached from different angles and discussed broadly in numerous booklets, white papers, and publications. Today’s Digital Health Blog article is a comprehensive story, filled with examples and observations drawn from extensive international research experience, that will be helpful for understanding the background, trends, and key aspects of AI use in the healthcare industry.
The hero of our story is Giuseppe Riccardi, a Full Professor in the Department of Computer Science and Information Engineering at the University of Trento, Italy. Giuseppe is a pioneer in natural language understanding and conversational human-machine interaction who has applied his research in the health domain.
Giuseppe, please tell us more about your background and research focus.
I joined the University of Trento in 2005 after coming back from the United States, where I had spent 15 years working at Bell Labs – the place where most innovation was happening before the start of the 21st century. At Bell Labs, I dedicated my attention to various research problems in speech and language processing in different roles, leading the deployment of the first naturally speaking conversational system in the world, codenamed “How May I Help You?”.
Upon returning to Italy, I extended my research agenda in artificial intelligence to include the investigation of all signals exchanged by humans in their interactions, including natural language (speech, text, and multimodal) and physiological signals. The main goal of my research is to design and train computers in all forms (like smartphones, smartwatches, and robots) to live and work with humans and make their world better.
In the last 15 years, I have extended my efforts to AI applications in the health domain. I run an interdisciplinary research lab called the Signals and Interactive Systems Lab, which unites approximately fifteen people, including students, postdocs, and research engineers with backgrounds in electrical engineering, computer science, computational linguistics, and psychology. We study the signals generated in human interactions and collaborate on concept development and prototype testing with healthcare professionals, hospitals, and companies. Our lab has been part of various international research projects, including CO-ADAPT. (CO-ADAPT is a European research project centered around active aging with the use of conversational agents. You can read more about it in our interview with Dr. Giulio Jacucci – editorial note).
How long will it take for AI to reach mass adoption in healthcare? Nowadays we talk more and more about the accelerated application of technologies in health. Is the future of this mass adoption already here?
In my opinion, we still need quite some time. Let’s go back to pre-pandemic times, 2009 to be exact, and have a look at one example. I was already a professor here in Italy and I was invited to collaborate on a well-known IBM project called IBM Watson. As you may know, a few years after IBM Watson beat the champions of “Jeopardy,” having an impact in healthcare became the major objective for Watson. (IBM Watson thoroughly defeated two human champions in the game of Jeopardy in 2011 – editorial note).
Just think about it: a huge company like IBM, with all its resources and impressive research facilities like the Yorktown Heights Research Center, set an agenda to have an impact in health. By now we all know it did not go as expected in the sense of having a huge impact (you can read more about the IBM Watson Healthcare case here – editorial note). Rather, it was a story of figuring out why it’s so hard to do – and it took around ten years to understand what the issue was. In research, we know that failure is part of the path to success.
And what is the issue, in your opinion? What are the key factors in AI adoption and more generally, in healthcare transformation?
I could say that in the last fifteen years I’ve learned that transforming healthcare is really about setting up an equation in which you need to include “the people” as well as the latest and greatest from AI and other fields. An example that comes to mind is a pilot on hypertension that we ran in an Italian hospital in Turin to monitor the psychological state of patients using AI and various sensors and wearables. The objective of the pilot was to establish and analyze a correlation between the patient’s psychological state and their hypertensive state (you can check this link for more publications on the subject – editorial note).
What we realized during this pilot was that the patients were eager to share their problems and their data to understand the cause of hypertension. On the other hand, we found it more difficult to interact with the healthcare system and to interject at particular points of a patient’s journey. Of course, there is a big data problem as well, but from my perspective, the biggest and most understated problem is the human factor. You have all kinds of people in the hospital, from nurses to internal medicine MDs and surgeons, very different kinds of people, and many are concerned about the use of AI or other technology because they fear their work and role could change. This is a natural human response to novel events that may be threatening.
That’s why in our projects, we are looking for champions or early adopters: people in the healthcare system that are open to designing new pathways in the future and to implementing changes in the way they work and make their decisions. Hence, another crucial aspect is establishing new processes in the patient’s pathway: integrating into the hospital’s pipeline and changing the protocols to transform the way it works. An important advantage of the technology that comes with AI is the ability to measure the value of human decisions and behaviour, so AI systems can provide feedback and we humans can learn over time to change both, if necessary.
The current state of affairs in hospitals is that you have little empires, in a way, that don’t work organically amongst themselves or with inpatients and outpatients, yet the hospital is a really big place to do innovation. That’s why I would also underline the importance of creating synergies between people inside and outside the healthcare organizations and between people and technologies. An example is the above-mentioned CO-ADAPT project, where we use Personal Health Agents (PHAs) to engage patients in conversations and support psychologists in improving users’ mental health. It’s a very good case of establishing synergies and cooperation between people – HCPs, psychologists, patients, and AI scientists – and technology. PHAs transparently connect users and psychotherapists to provide so-called blended interventions. This is a first-of-its-kind intervention in the world using AI to help people.
How do we deal with the trust issues around AI? This becomes especially important when we talk about conversational agents that collect information about the day-to-day routines of patients.
It is a complex issue and, in my opinion, trust has multiple facets. First of all, I see trust in connection with the utility function maximized by a certain service or product. Think about Gmail: we all use it without actually reading any contract, even though our emails are analyzed by AI that suggests how to compose the phrases in your text. But we trust it in a way. That “way” is your personal utility threshold: if the service no longer satisfies it, you stop using it, although that is difficult nowadays. Then there are macro processes at the societal level that determine the minimum level of trust society will accept, enforced through government regulations.
In our research and experimentation, in addition to national regulations and ethical committee guidelines, we make sure that users are fully aware of the AI systems they are interacting with and who is responsible for their actions.
What are key aspects where AI is especially helpful when applied in healthcare?
The first is persistence. AI offers you a tool in the form of pervasive intervention: for instance, you can consult your PHA on your everyday nutrition and exercise choices. A PHA can stimulate positive change, which is especially important for chronic disease patients, such as those with hypertension or diabetes. Obviously, no doctor could follow and monitor you 24 hours a day – but a PHA can.
Another recent project I’m involved with, in collaboration with Anffas Associazione Nazionale and funded by the TIM Foundation, focuses on helping people with Asperger’s syndrome manage their everyday routines and problematic situations. People with this condition may have difficulty interacting socially, and something like not understanding a joke can be dramatic for their self-confidence. We are designing a PHA that would support these users, help them not feel alone, and accompany them during the day. We are developing a truly organic artificial intelligence that interacts with users and their caregivers, who can teach the PHA over time to improve its problem-solving support.
The second aspect is scalability. The pandemic has made the challenge of scalability especially visible: how can we help a large portion of the population with limited resources? Clearly, PHAs are a huge asset for the scalability of healthcare services such as blended mental health interventions, which were tested during the pandemic in the CO-ADAPT project together with IDEGO Digital, a company spearheading digital psychology.
Judging from your broad international experience in research and application of AI in healthcare, can you outline the main country-level differences in the field?
On the research side, I don’t see major differences. However, I do see them at the funding level: there is definitely much more research and venture funding available in the US than in Europe, especially at the national level in Italy. To give you some context, at the University of Trento, our Department of Computer Science and Information Engineering is very strong in AI – a top department in Italy and in the top 100 worldwide. With the kind of venture capital funding that would be accessible to us in the US, we could launch at least two startups per year in AI and related subfields.
Then, there are some cultural differences. In the area of mental health, for example, the US and Nordic markets would be more suitable for testing technologies and products because there is much less stigma around mental health issues than in Mediterranean countries. However, I think the pandemic may have started some cultural changes in this respect.
If we talk about the regulatory aspect, the US approach is very different from the European one. In the US, regulation is based on evidence: if a product effectively works and is being used by patients, regulation tends to evolve in a favourable direction. In Europe, regulation is written from a much more administrative perspective, with the aim of auditing products. In addition, the draft EU AI regulations released at the end of April may create a burden for big companies that were considering cooperation with, or acquisition of, startups, because they may see higher risk in entering partnerships with AI startups – and this may slow down innovation.