Medically Reviewed: Dr Hanif Chattur
Thomas Davenport, president's distinguished professor of information technology and management, and Ravi Kalakota, managing director
Key Takeaways
- Improved Diagnosis and Treatment: AI can analyze medical images, detect diseases earlier, and recommend personalized treatment plans.
- Drug Discovery and Development: AI can accelerate drug discovery, reduce costs, and increase success rates.
- Personalized Medicine: AI can analyze patient data to tailor treatments and prevent diseases.
- Enhanced Patient Care: AI-powered virtual assistants can provide 24/7 support, monitor patient health, and improve patient engagement.
- Operational Efficiency: AI can streamline administrative tasks, reduce paperwork, and optimize resource allocation.
Introduction
Artificial intelligence (AI) and related technologies are increasingly prevalent in business and society, and are beginning to be applied to healthcare. These technologies have the potential to transform many aspects of patient care, as well as administrative processes within provider, payer and pharmaceutical organisations.
There are already a number of research studies suggesting that AI can perform as well as or better than humans at key healthcare tasks, such as diagnosing disease. Today, algorithms are already outperforming radiologists at spotting malignant tumours, and guiding researchers in how to construct cohorts for costly clinical trials. However, for a variety of reasons, we believe that it will be many years before AI replaces humans for broad medical process domains. In this article, we describe both the potential that AI offers to automate aspects of care and some of the barriers to rapid implementation of AI in healthcare.
Types of AI of relevance to healthcare
Artificial intelligence is not one technology, but rather a collection of them. Most of these technologies have immediate relevance to the healthcare field, but the specific processes and tasks they support vary widely. Some particular AI technologies of high importance to healthcare are defined and described below.
Machine learning – neural networks and deep learning
Machine learning is a statistical technique for fitting models to data and 'learning' by training models with data. Machine learning is one of the most common forms of AI; in a 2018 Deloitte survey of 1,100 US managers whose organizations were already pursuing AI, 63% of companies surveyed were employing machine learning in their businesses.1 It is a broad technique at the core of many approaches to AI, and there are many versions of it.
In healthcare, the most common application of traditional machine learning is precision medicine – predicting what treatment protocols are likely to succeed on a patient based on various patient attributes and the treatment context. The great majority of machine learning and precision medicine applications require a training dataset for which the outcome variable (e.g. onset of disease) is known; this is called supervised learning.
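As a minimal sketch of what supervised learning looks like in this setting, the example below trains a simple classifier on a hypothetical table of patient attributes with a known outcome label. The file name, column names and model choice are illustrative assumptions, not part of any specific clinical system.

```python
# Minimal supervised-learning sketch (illustrative only).
# Assumes a hypothetical CSV of patient attributes with a known
# outcome column, e.g. whether the patient later developed a disease.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = pd.read_csv("patients.csv")          # hypothetical training dataset
X = data[["age", "bmi", "blood_pressure"]]  # illustrative patient attributes
y = data["disease_onset"]                   # known outcome variable (the labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The essential point is the same regardless of the algorithm used: the model is fitted to historical cases whose outcomes are already known, then applied to new patients.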
A more complex form of machine learning is the neural network – a technology that has been available since the 1960s, has been well established in healthcare research for several decades, and has been used for categorization applications like determining whether a patient will acquire a particular disease. It views problems in terms of inputs, outputs and weights of variables or 'features' that associate inputs with outputs. It has been likened to the way that neurons process signals, but the analogy to the brain's function is relatively weak.
The most complex forms of machine learning involve deep learning, or neural network models with many levels of features or variables that predict outcomes. There may be thousands of hidden features in such models, which are uncovered by the faster processing of today’s graphics processing units and cloud architectures. A common application of deep learning in healthcare is recognition of potentially cancerous lesions in radiology images. Deep learning is increasingly being applied to radiomics, or the detection of clinically relevant features in imaging data beyond what can be perceived by the human eye. Both radiomics and deep learning are most commonly found in oncology-oriented image analysis. Their combination appears to promise greater accuracy in diagnosis than the previous generation of automated tools for image analysis, known as computer-aided detection or CAD.
Deep learning is also increasingly used for speech recognition and, as such, is a form of natural language processing (NLP), described below. Unlike earlier forms of statistical analysis, each feature in a deep learning model typically has little meaning to a human observer. As a result, the explanation of the model’s outcomes may be very difficult or impossible to interpret.
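To make the idea of models with 'many levels of features' concrete, here is a hedged sketch of a small convolutional network for binary image classification (for example, lesion versus no lesion), written with Keras. The input size, layer sizes and training call are arbitrary illustrative choices, not a clinically validated model.

```python
# Illustrative deep-learning sketch: a small convolutional network for
# binary image classification (e.g. lesion vs. no lesion). The 128x128
# greyscale input and layer sizes are arbitrary choices for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),   # low-level image features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # higher-level, 'hidden' features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # probability of a positive finding
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10)  # requires a labelled image dataset
```

The intermediate feature maps such a network learns are exactly the kind of internal representations that are hard for a human observer to interpret, which is why explainability is a recurring concern with deep learning in clinical use.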
Natural language processing
Making sense of human language has been a goal of AI researchers since the 1950s. This field, NLP, includes applications such as speech recognition, text analysis, translation and other goals related to language. There are two basic approaches to it: statistical and semantic NLP. Statistical NLP is based on machine learning (deep learning neural networks in particular) and has contributed to a recent increase in accuracy of recognition. It requires a large ‘corpus’ or body of language from which to learn.
In healthcare, the dominant applications of NLP involve the creation, understanding and classification of clinical documentation and published research. NLP systems can analyse unstructured clinical notes on patients, prepare reports (e.g. on radiology examinations), transcribe patient interactions and conduct conversational AI.
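As a simple, hedged example of the statistical approach applied to clinical documentation, the sketch below classifies short free-text notes with a bag-of-words model. The notes, labels and classification task are invented for illustration; real systems are trained on far larger corpora.

```python
# Illustrative statistical-NLP sketch: classify short free-text notes
# with a bag-of-words model. The notes and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports chest pain and shortness of breath",
    "routine follow-up, no acute complaints",
    "severe headache with nausea and photophobia",
    "annual wellness visit, vitals within normal limits",
]
labels = ["urgent", "routine", "urgent", "routine"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(notes, labels)

print(classifier.predict(["sudden onset chest pain radiating to left arm"]))
```

Modern clinical NLP increasingly replaces the bag-of-words step with deep learning language models, but the workflow of learning from a labelled corpus of text is the same.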
Rule-based expert systems
Expert systems based on collections of ‘if-then’ rules were the dominant technology for AI in the 1980s and were widely used commercially in that and later periods. In healthcare, they were widely employed for ‘clinical decision support’ purposes over the last couple of decades and are still in wide use today. Many electronic health record (EHR) providers furnish a set of rules with their systems today.
Expert systems require human experts and knowledge engineers to construct a series of rules in a particular knowledge domain. They work well up to a point and are easy to understand. However, when the number of rules is large (usually over several thousand) and the rules begin to conflict with each other, they tend to break down. Moreover, if the knowledge domain changes, changing the rules can be difficult and time-consuming. They are slowly being replaced in healthcare by approaches based more on data and machine learning algorithms.
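A toy sketch of an 'if-then' rule base in the spirit of clinical decision support is shown below. The thresholds, conditions and alert messages are invented purely to illustrate the mechanism and are not clinical guidance.

```python
# Toy 'if-then' rule base in the spirit of a clinical decision support
# expert system. Thresholds and messages are invented for illustration.
RULES = [
    (lambda p: p["systolic_bp"] >= 180,
     "Alert: possible hypertensive crisis"),
    (lambda p: p["temperature_c"] >= 38.0 and p["heart_rate"] > 90,
     "Alert: possible infection, consider further screening"),
    (lambda p: p["age"] >= 65 and p["on_anticoagulants"],
     "Reminder: review fall-risk precautions"),
]

def evaluate(patient: dict) -> list[str]:
    """Fire every rule whose condition matches the patient record."""
    return [message for condition, message in RULES if condition(patient)]

patient = {"systolic_bp": 185, "temperature_c": 37.1,
           "heart_rate": 80, "age": 70, "on_anticoagulants": True}
print(evaluate(patient))
```

The fragility described above follows directly from this structure: every new rule must be written, maintained and reconciled with the existing ones by hand.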
Physical robots
Physical robots are well known by this point, given that more than 200,000 industrial robots are installed each year around the world. They perform pre-defined tasks like lifting, repositioning, welding or assembling objects in places like factories and warehouses, and delivering supplies in hospitals. More recently, robots have become more collaborative with humans and are more easily trained by moving them through a desired task. They are also becoming more intelligent, as other AI capabilities are being embedded in their 'brains' (really their operating systems). Over time, it seems likely that the same improvements in intelligence that we've seen in other areas of AI will be incorporated into physical robots.
Surgical robots, initially approved in the USA in 2000, provide ‘superpowers’ to surgeons, improving their ability to see, create precise and minimally invasive incisions, stitch wounds and so forth.6 Important decisions are still made by human surgeons, however. Common surgical procedures using robotic surgery include gynaecologic surgery, prostate surgery and head and neck surgery.
Robotic process automation
This technology performs structured digital tasks for administrative purposes, i.e. those involving information systems, as if it were a human user following a script or rules. Compared to other forms of AI, it is inexpensive, easy to program and transparent in its actions. Robotic process automation (RPA) doesn't really involve robots – only computer programs on servers. It relies on a combination of workflow, business rules and 'presentation layer' integration with information systems to act like a semi-intelligent user of those systems. In healthcare, RPA is used for repetitive tasks like prior authorisation, updating patient records or billing. When combined with other technologies like image recognition, it can be used to extract data from, for example, faxed images in order to input it into transactional systems.7
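RPA tools are usually configured graphically rather than coded by hand, but the sketch below conveys the underlying idea of a scripted, rule-driven workflow acting on a transactional system. Every endpoint, field name and business rule here is hypothetical and exists only to illustrate the pattern.

```python
# Illustrative sketch of an RPA-style scripted workflow: read a queue of
# requests, apply a business rule, and post an update to a (hypothetical)
# transactional-system API. All field names and URLs are invented.
import requests

BASE_URL = "https://example-ehr.invalid/api"   # hypothetical endpoint

def process_prior_authorisations():
    pending = requests.get(f"{BASE_URL}/prior-auth?status=pending").json()
    for item in pending:
        # Invented business rule: auto-approve low-cost, in-network requests.
        if item["estimated_cost"] < 500 and item["in_network"]:
            decision = "approved"
        else:
            decision = "needs_review"
        requests.post(f"{BASE_URL}/prior-auth/{item['id']}",
                      json={"status": decision})

if __name__ == "__main__":
    process_prior_authorisations()
```

In practice the 'presentation layer' integration often means driving the same screens a human clerk would use, rather than calling a clean API as in this simplified sketch.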
We’ve described these technologies as individual ones, but increasingly they are being combined and integrated; robots are getting AI-based ‘brains’, image recognition is being integrated with RPA. Perhaps in the future these technologies will be so intermingled that composite solutions will be more likely or feasible.
The future of AI in healthcare
We believe that AI has an important role to play in the healthcare offerings of the future. In the form of machine learning, it is the primary capability behind the development of precision medicine, widely agreed to be a sorely needed advance in care. Although early efforts at providing diagnosis and treatment recommendations have proven challenging, we expect that AI will ultimately master that domain as well. Given the rapid advances in AI for imaging analysis, it seems likely that most radiology and pathology images will be examined at some point by a machine. Speech and text recognition are already employed for tasks like patient communication and capture of clinical notes, and their usage will increase.
The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to take place, AI systems must be approved by regulators, integrated with EHR systems, standardized to a sufficient degree that similar products work in a similar fashion, taught to clinicians, paid for by public or private payer organizations and updated over time in the field. These challenges will ultimately be overcome, but they will take much longer to do so than it will take for the technologies themselves to mature. As a result, we expect to see limited use of AI in clinical practice within 5 years and more extensive use within 10.
It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients. Over time, human clinicians may move toward tasks and job designs that draw on uniquely human skills like empathy, persuasion and big-picture integration. Perhaps the only healthcare providers who will lose their jobs over time may be those who refuse to work alongside artificial intelligence.