By Gerrit Van Wyk.
Technology never makes mistakes.
A few years ago, I questioned an ultrasound report during a conversation about a shared case with a specialist colleague, who responded, “An ultrasound is technology, and technology is never wrong.” Meredith Broussard calls the idea that technology can solve all problems technochauvinism.
Two articles appeared in the press in the past few weeks about the potential use of artificial intelligence (AI) in healthcare as a technological fix for the industry’s problems, and they deserve some comment. Several articles printed around the same time raise questions about the dangers of AI, which the healthcare authors didn’t address.
The first article explains that AI is a set of technologies allowing computers to simulate human intelligence, specifically problem solving and learning, which makes it potentially useful in healthcare. According to the second, AI is as good as or better than humans at many healthcare tasks, such as interpreting tests, answering patient questions, performing surgery, and so on, and patients have better outcomes when interacting with apps than with healthcare workers. In other words, AI has the potential to replace many if not most of the medical workforce. Both argue computers and AI can manage more data than humans, and more accurately.
What one can see here, and it is a problem with the AI conversation in general, is that the authors make numerous sweeping assumptions, all tied to a single underlying assumption about how our world works: namely, that it works like a machine.
People often write and think about computer technology and AI as if they work like the human brain. That is a useful metaphor for conversation but completely divorced from reality. The human brain is a network of roughly 86 billion neurons with something on the order of 500 trillion connections. In complexity it is remarkably like the cosmic web, one of the most complex entities known to us. Its function is to keep the body and bodily processes balanced by predicting what may happen next, as part of a staggeringly complex ongoing process.
Compare that to the architecture of a computer, with a known number of components and a handful of processors running in parallel, rather than tens of billions of interconnected processors running at the same time. If a computer part is damaged, you must remove or replace it; the brain repairs itself. Computers are designed by human inventors; brains evolved over millions of years to fulfill a critical function in human survival. It is true computers can find patterns in more data more quickly than humans can, but the information the brain searches for is generated and used very differently. In short, the architecture trying to simulate the human brain is nothing like it, and neither does its software work like a brain.
Computers need software to tell the hardware what to do, and software programs are mathematical procedures based on binary logic, designed to trigger specific outcomes. One may argue there is biological and social code giving instructions to the human brain, but its level of complexity cannot be compared to software code, as complex as software is.
A great deal of the perception of AI, particularly as described in the popular press and other media, is fictionalized. In the real world, people write software and AI algorithms; in other words, like the hardware they run on, both are human social artifacts incorporating human assumptions and biases. Software and AI convert data created and selected by humans into different data, and that is a big problem.
All the data used by algorithms is generated and collected by humans, which means it is socially constructed within specific contexts, contains noise, and is messy and incomplete, and for mathematical reasons that messiness cannot be computed directly. So programmers clean the data up and fill in missing values based on a best guess, without which computer programs, and AI, which require mathematical precision, cannot work. The phrase “the unreasonable effectiveness of data” captures this: AI seems to work well, provided you ignore the underlying data problems.
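To make that clean-up step concrete, here is a minimal sketch using pandas and an invented table of patient measurements; the column names and numbers are hypothetical, and mean-filling is only one of many imputation tricks, all of which replace measurements with guesses.

```python
import numpy as np
import pandas as pd

# Invented patient records with gaps, as real clinical data always has.
records = pd.DataFrame({
    "age":         [34, 51, np.nan, 47],
    "systolic_bp": [120, np.nan, 135, 142],
})

# A common clean-up: replace each missing value with the column mean.
# The pipeline now runs, but the filled-in numbers are guesses, not
# measurements, and downstream models treat them as if they were real.
imputed = records.fillna(records.mean())
print(imputed)
```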
A lot of AI uses machine-learning algorithms based on advanced statistics to search for patterns in massive amounts of data, something computers, thanks to their architecture, are much better at than humans. In supervised learning, humans tell the computer what patterns to look for by labeling the data; unsupervised learning searches for patterns without labels; and reinforcement learning is trained through trial-and-error feedback, like Pavlov’s dogs, to learn to achieve an objective. Because algorithms rest on many assumptions, they wittingly or unwittingly build in their programmers’ biases, which can and do have unintended consequences, often creating new problems in our social world.
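A minimal sketch of the first two paradigms, using scikit-learn on made-up two-column data; note that everything here, from the labels to the number of clusters, is a human choice, which is exactly where the assumptions and biases enter.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Made-up data: two loose clouds of points in two dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels supplied by a human

# Supervised: the human-provided labels tell the model what pattern to fit.
supervised = LogisticRegression().fit(X, y)

# Unsupervised: the model groups the data with no labels at all; whether
# the groups mean anything is still a human judgment.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(supervised.predict(X[:5]), clusters[:5])
```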
In short, AI doesn’t come close to simulating human intelligence, as the one author claims, and neither does it solve problems and learn the way humans do. Beyond that, the data supporting the idea that AI is better than physicians came from comparing AI to generalists, not to specialists in the relevant area, and hence contains noise; and AI, unlike humans, ignores context, a point missing from the paper. The problem with AI diagnosing many cases humans don’t is that it also diagnoses many new cases that require no investigation or treatment, and so risks triggering over-investigation and over-treatment to the detriment of the healthcare system. In other words, it needs flesh-and-blood human beings to make sense of what it spits out.
Finally, both authors assume AI algorithms can be written to cover something as complex as healthcare, with its several thousand medical diagnoses, of which patients can have more than one, each often paired with several tests and treatment options, all of which compounds the complexity. They further assume that the mechanical solution an algorithm produces will be meaningful without context in a very complex human social world, that the very imperfect data it uses can safely be ignored, and that the algorithms will be free of bias, which, given human complexity, is a stretch.
Many leaders, and some of the founders, of machine learning are alarmed that we are losing control of the algorithms we write, which some call Franken-algorithms, after the monster. Once algorithms start independently creating cascading new algorithms, the human interface, and therefore human control, recedes further, and we have no idea where that may lead.
Computers are fast, not intelligent, and, as experience shows, AI can be tricked for nefarious purposes, and it can magnify and entrench human biases. When algorithms start learning, we are no longer certain what their rules and parameters are, nor how they will interact with us, the physical world, and other algorithms. Once they go wild, they become erratic. Stock traders, for example, use competing algorithms that try to outwit each other without human oversight or control, and there has already been inexplicable, erratic behavior that, so far, hasn’t been a serious problem but could become a major issue. If stock markets go haywire, the worldwide social consequences will be catastrophic.
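As a toy illustration, and nothing like real market mechanics, here is a sketch of two hypothetical momentum-chasing algorithms reacting to each other; because each one’s buying feeds the very signal the other trades on, the price runs away without any human in the loop.

```python
# Toy feedback loop: two momentum-chasing agents trade against each other.
# Purely illustrative; no real market microstructure is modeled here.
history = [100.0]
for step in range(20):
    momentum = history[-1] - history[-2] if len(history) > 1 else 0.1
    demand_a = 1.0 * momentum   # agent A buys when the price is rising
    demand_b = 1.5 * momentum   # agent B chases the same signal, harder
    history.append(history[-1] + demand_a + demand_b)

# Each agent's trades amplify the signal the other reacts to,
# so the price spirals upward exponentially.
print([round(p, 2) for p in history])
```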
The authors I refer to assume, as most people do, that computers are deterministic and predictable and can therefore be controlled, which is not true. When AI code becomes a collective and its parts interact, it becomes, like all complex adaptive systems, unpredictable and uncontrollable. Can one risk Franken-algorithms, arising de novo or after a viral attack, in something as critical as healthcare?
Does that mean there is no place for computer technology and AI algorithms in healthcare? The answer is no. The mistake the authors make is the animistic fallacy: the idea that computers have little thinking minds like humans do, which is why, as the one author suggested, computers could in principle replace many healthcare workers.
There is a difference between information technology (IT) and information systems (IS). Information technology is the hardware, software, and algorithms we use; an information system is about the information and data humans need to be more efficient and make better decisions, with IT as one component of it. In other words, the mistake is looking at IT as an alternative and superior method of decision making and work, rather than as a tool for helping people do their work, and sadly, that is the default in healthcare. An IS is designed around the needs of people; IT is designed to replace people. It is for this reason that virtually all IT in healthcare is dysfunctional and not human friendly. Instead of being asked what they need, people are given computers designed by those who never worked on the front lines, and told to use them.
I agree with the writers that computers and AI have an important role to play in healthcare, but not in the way they visualize it. As Meredith Broussard pointed out, the only way to overcome the unreasonable-effectiveness-of-data problem is to accept the complexity of the human social world and of data collection, and to use what comes out pragmatically. That means humans making sense of the output and giving it context, which requires an information system. Computer technology and AI can be our friends, but if we don’t understand where they come from and what their limitations are, they can and will be our enemies.
There’s a joke about a new AI-designed airplane with an AI pilot. Dignitaries, politicians, scientists, investors, and the like were invited on the maiden flight, and after takeoff the AI captain began a presentation describing the marvels of the technology, then said: “… this is your captain speaking, this is your captain speaking, this is your captain speaking…” You don’t want an AI captain going into a loop in flight, and you certainly don’t want an AI doctor doing so during an intricate procedure or a healthcare crisis.