We’ve written much about how technology is remaking health and human services in consumer appointments, billing, evidence-based practices, and other respects (see The Urgent Care Opportunity, Does Your ‘Online Presence’ Have The Elements It Needs?, and Are There Evidence-Informed Tech Practices For Community Support Services?). But keeping up with so many potentially revolutionary tech trends — big data, cloud computing, mobile apps, blockchain, the Internet of Things, to name a few — is an increasingly challenging proposition. How can you tell what’s legitimate, and what’s sci-fi-style speculation?
One such development in the health and human service technology landscape is cognitive computing, which employs artificial-intelligence methodologies to create machines that think in ways similar to human beings. We’ve all seen depictions of this in science fiction, from C-3PO in Star Wars to WALL-E. But cognitive computing has made significant strides in reality during the past few years. The most famous example of its real-life application is “Watson,” an artificially intelligent, question-answering computer system built by IBM that’s been used in a variety of health care settings — as well as to win the game show Jeopardy (see Robot + Watson = ? and Watson Vs. Siri?).
At The 2016 OPEN MINDS Technology & Informatics Institute on November 10-11, we’ll hear about the future of cognitive computing from a leader in the field. Craig Rhinehart, director of Innovation and Market Development for IBM Watson Health, will open the event with a keynote address titled, “Cognitive Computing & Big Data: How They Will Shape The Future Of Care Delivery.”
What is cognitive computing and how will it affect care coordination in the future? For an answer to this, I recently spoke with Mr. Rhinehart, who covered the following points in our conversation:
Where does Watson fit into the “big picture” for technology and computing use in health care?
According to Mr. Rhinehart, advances in computing have come in three eras. In the first era, computers were counting machines, and in the second, they evolved into memory tools. In the third era, which we’re in now, they’re evolving into cognitive solutions. He noted:
A computer’s first value was as a counting machine, and its primary role was to count things. This enabled things like the Census and the Social Security Administration, where the counting of things was the most important function. That’s where the first wave of computing really impacted how people do their jobs.
And then there was a second wave, the programmable systems era. Fundamentally, there was a major technological advance: computers could now be programmed and could remember how to do things. You could predetermine a series of steps or sequences, so computers could perform far more advanced functions, such as landing a man on the moon, where the equations involved were far more complex than what counting systems, and really what human beings, could handle.
We are now on the precipice of a third era, the cognitive systems era. Watson is an example of a cognitive system. By cognitive system, we mean that systems are becoming more human-like in the way that they are used and in the way they do things. Instead of programming every step or function into a computer, now systems can learn from outcomes. If you think about what human beings do, we … observe things, and we use that with other information to make decisions, and then there is an outcome. We learn from that outcome, so that the next time we need to make a similar decision, we have that knowledge to draw upon. That cognitive model is where computers are moving to.
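The observe-decide-learn loop Mr. Rhinehart describes can be sketched in miniature. The following Python toy is purely illustrative, not how Watson works: the action names and success rates are invented, and the "learning" is just a running success count, but it shows a system preferring decisions that produced good outcomes in the past rather than following a fixed program.

```python
import random
from collections import defaultdict

class OutcomeLearner:
    """Toy sketch of the observe -> decide -> outcome -> learn loop.

    Tracks how often each action led to a good outcome and prefers
    the action with the best observed success rate. Hypothetical
    illustration only; no relation to any real cognitive system.
    """

    def __init__(self, actions):
        self.actions = list(actions)
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def decide(self):
        # Try every action at least once, then exploit past outcomes.
        untried = [a for a in self.actions if self.trials[a] == 0]
        if untried:
            return untried[0]
        return max(self.actions,
                   key=lambda a: self.successes[a] / self.trials[a])

    def learn(self, action, good_outcome):
        # Fold the observed outcome back into the knowledge base.
        self.trials[action] += 1
        if good_outcome:
            self.successes[action] += 1

learner = OutcomeLearner(["plan_a", "plan_b"])
# Simulated feedback: plan_b succeeds 90% of the time, plan_a 30%.
rates = {"plan_a": 0.3, "plan_b": 0.9}
random.seed(0)
for _ in range(200):
    action = learner.decide()
    learner.learn(action, random.random() < rates[action])

print(learner.decide())  # the system has learned to prefer plan_b
```

The point is the shape of the loop, not the arithmetic: each outcome changes what the system does next time, which is the distinction Mr. Rhinehart draws between programmed and cognitive systems.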
How can cognitive computing address the problem of too much data and not enough people to manage it?
Mr. Rhinehart’s solution hinged on recognizing a productive division of labor between people and computers, each of which provides a tremendous advantage as long as it is applied to the right job. He noted:
The first [benefit] is that cognitive computing offers the opportunity to have smarter systems that can make sense of all that information, so that the relevant information is brought to bear or applied when it is time to make a decision or to do something in a caregiving situation.
Human beings excel at things like common-sense dilemmas, morals, compassion, imagination, dreaming, abstraction, and generalizations. But we get tired, we need a little vacation, we need to sleep. Computers don’t have those challenges, and we want computers to be cognitive and help us apply information through things like understanding natural patterns in data, finding out what context is relevant to a situation, learning from those interactions, and eliminating bias. And of course, endless capacity: you are limited only by power and the systems you have, so there is a scalability factor. The challenge becomes, how do we take what computers are good at from a cognitive point of view and what human beings are good at, and blend the two?
Health care is moving to more integrated care and population health management – how will cognitive computing affect this care coordination?
Cognitive computing could offer a big step toward better care coordination by applying contextual and situational analysis to the medical record, at the same time that larger parts of that record are becoming available to medical professionals. Mr. Rhinehart noted:
I think the biggest advantage of a cognitive system is the ability to differentiate between data and knowledge, and to apply it in context. With the complete set of medical knowledge, you are not limited or bounded by any one person’s individual experience. This is where these technologies can help care coordination: the ability to make suggestions and offer helpful advice, best practices, if you will. A cognitive system has the previous knowledge and the previous outcomes.
In a population health scenario, I think the biggest eye-opener is using all that big data to discover new connections, trends, and patterns to help guide the right treatment. I’m reminded of a recent project in Texas, where a patient was admitted eight times in a six-month period. Each time they came in, they were asked the same questions and treated the same way. It turns out the patient was self-medicating and was really depressed. Without the ability to look across all the encounters, they may never have known they needed to ask that patient a certain question. It took different doctors and eight different situations, but once the data was brought together, it identified the issue on the very first attempt.
You don’t get that kind of insight without looking at big data. That’s where looking across population costs and outcomes shows the trends and patterns, and what they mean in the way that we treat people.
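The cross-encounter pattern in the Texas example can be illustrated with a small, purely hypothetical sketch: given admission dates per patient (the patient IDs, dates, and thresholds below are invented), flag anyone admitted a threshold number of times within a rolling window, something no single encounter would reveal.

```python
from datetime import date

# Hypothetical admission records; in reality these would come from
# encounter data spread across multiple systems.
admissions = {
    "patient_17": [date(2016, 1, 5), date(2016, 2, 1), date(2016, 2, 20),
                   date(2016, 3, 14), date(2016, 4, 2), date(2016, 4, 28),
                   date(2016, 5, 30), date(2016, 6, 21)],
    "patient_42": [date(2016, 3, 1)],
}

def frequent_admissions(records, window_days=183, threshold=8):
    """Return patient ids with >= threshold admissions inside any
    window_days-long window (roughly six months)."""
    flagged = []
    for pid, dates in records.items():
        dates = sorted(dates)
        for start in dates:
            in_window = [d for d in dates
                         if 0 <= (d - start).days <= window_days]
            if len(in_window) >= threshold:
                flagged.append(pid)
                break
    return flagged

print(frequent_admissions(admissions))  # only patient_17 is flagged
```

Each individual admission looks routine; only the aggregate view across encounters surfaces the pattern, which is exactly the insight the anecdote describes.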
If cognitive computing is the third wave in technology, what might the next wave be?
Mr. Rhinehart also sees advances in care management, the concept of contextual relevance, expansion of the types of data that will be made useful to computing systems, and how people will interact with those systems. He explained:
If you think about cognition as a concept and how we do things as human beings, I think we should expect computers to operate and function more like we do. We should expect computers to be more aware of the situation. We should expect them to be able to apply context, so that instead of hoping the information that comes back during a search is trustworthy, you are confident that it is trustworthy and relevant to the situation in which you are trying to use it. It comes down to the question: why do I have to think like the computer, and why can’t the computer think like me?
A specific example is the concept of contextual relevance. Imagine if you go into an emergency room, and you complain of shortness of breath. Today, doctors have to look through pages and pages of your medical history, if they even have access to that, which is often stored in different systems and rarely are those things all connected.
The doctor who now has to treat you only has a few minutes to read through an impossible amount of information, and what they are really looking for is anything relevant to shortness of breath. They aren’t trying to find everything in your medical history; they are looking for everything that’s relevant, on hand, right now. Some things across your history are relevant, and some are not. Having a system that is smart enough to understand the demographics of the person and what’s relevant in their history to their current situation is something that a cognitive system can do.
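As a purely illustrative sketch of contextual relevance (the complaint-to-term mapping, field names, and history entries below are all invented), a system might surface only the history entries linked to the presenting complaint instead of the full record:

```python
# Hypothetical mapping from a presenting complaint to related terms.
# A real cognitive system would learn these associations from medical
# knowledge; here they are hard-coded for illustration.
RELATED_TERMS = {
    "shortness of breath": {"asthma", "copd", "smoker",
                            "heart failure", "pneumonia"},
}

history = [
    {"date": "2014-03-02", "note": "Treated for pneumonia."},
    {"date": "2015-07-19", "note": "Sprained ankle, prescribed rest."},
    {"date": "2016-01-11", "note": "Long-term smoker, mild COPD."},
]

def relevant_entries(complaint, records):
    """Keep only the records mentioning a term linked to the complaint."""
    terms = RELATED_TERMS.get(complaint.lower(), set())
    return [r for r in records
            if any(t in r["note"].lower() for t in terms)]

for entry in relevant_entries("Shortness of breath", history):
    print(entry["date"], entry["note"])
# The sprained-ankle entry is filtered out as irrelevant.
```

The keyword match stands in for what Mr. Rhinehart describes much more ambitiously: understanding which parts of a history matter for the situation at hand.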
One, let’s make it more relevant. Two, let’s expand the information that is part of the knowledge base. Information in images, in recordings, or unstructured case notes and texts is generally not used in today’s care systems. Pull all that unstructured data in and use all the data about a person in context, including all their social data, their behavioral health data, their physical health data, and all these other little data pockets that get missed. There is a lot more data that can be brought to bear at the point of care.
Lastly, and probably most importantly, advances in care management. One of the goals of a care plan is to influence what the person does when they are not in front of the doctor or in the hospital, [meaning] how to influence positive behavior change that can lead to better adherence to treatment plans, which in turn can lead to better overall health outcomes. Today, understanding a person to that depth requires time and effort on the part of the care manager, who must ask a lot of questions and search through unstructured notes and other historical information that contains insights not necessarily captured in coded or structured data. Understanding all the psycho-social determinants of a person’s health, which research shows are more important to overall outcomes than just addressing clinical needs, is labor intensive and does not scale as increasing numbers of people are moved into value-based care delivery models.
The promise of cognitive computing is that it can not only sift through the unstructured data and all other medical and claims information about the person, but also comb non-health data such as demographic, census, geographic, social media, and credit data, to get a holistic view of the patient that could provide insights into barriers to treatment adherence. Do they suffer from mental health issues? Do they have transportation challenges? Do they live alone? Do they only use text messaging to communicate? Cognitive technologies hold the promise of connecting dots that get missed today, and of recommending next steps faster and more efficiently than a care manager can do manually. This will be powered by access to new cognitive knowledge bases containing previously unknown best practices and more, combined with interactive engagement tools and other smart technologies for automated interventions, extending and optimizing the reach of the care manager to more people in need while enhancing the patient experience at the same time.
You can listen to my complete discussion with Mr. Rhinehart online now. For more, join the 600-plus health and human service executives focused on delivering high-quality, cost-effective services to consumers with complex needs at The 2016 OPEN MINDS Technology & Informatics Institute. Can’t make it to the institute? Follow all of our institute coverage live on Twitter @openmindscircle – #OMTechnology.