Book Title: Digitizing Diagnosis: Medicine, Minds, and Machines in Twentieth-Century America
Author: Andrew S. Lea
Publication Details: Johns Hopkins University Press, 2023, 256 pp., $54.95, hardcover
George Bernard Shaw wrote in an essay early in the 20th century:
An anonymous reviewer of a 1966 manuscript on the Medical Data Screen (MDS) stated, “Only ignorants, Luddites, and denialists would not agree that computers will be making better value judgments than the average physician in the foreseeable future” (p. 52). And in 2016, Geoffrey Hinton claimed that we should stop training radiologists: “It’s just completely obvious that in five years deep learning is going to do better than radiologists.”2-4 In 2023, although it is true that artificial intelligence (AI) can outperform radiologists in specific tasks,5 the belief that we no longer need new radiologists is not mainstream.
Tech news frequently gives the impression that a revolution is just around the corner. Andrew S. Lea’s Digitizing Diagnosis puts things into perspective by analyzing three main projects that sought to incorporate computer aids into history taking, diagnosis, and therapeutics. The first part of the book is dedicated to Keeve Brodman and the MDS; the second to HEME, a program designed to diagnose hematologic diseases; and the third to MYCIN, a digital aid for antibiotic prescribing. Drawing on thorough research of primary sources, Lea constructs a detailed and interesting story while discussing the more general aspects of computer aids and engaging with some of the relevant literature.
The first part is the most interesting. It intertwines Brodman’s personal history with the long project that would lead to the MDS, from the early 1940s to 1972, when Roche decided not to commercialize it. The MDS grew out of the Cornell Medical Index (CMI), which in turn had its roots in the Cornell Selectee Index used by the military to identify gay men, with an “explicit anti-queer bias” (p. 18). The issue of bias is recognized early in the history of medical programs and is addressed throughout the chapters. Although actors at the time did recognize the danger of algorithmic bias, the book (and probably the primary sources as well) leaves it unclear whether they saw beyond the technical bias to the social, gender, racial, and other forms of bias that could be present in their datasets.
With 195 yes-or-no questions to be answered in the waiting room, the CMI raised a debate about clinical information being gathered by someone other than the physician, and about its usefulness and applicability. A 1952 British Medical Journal editorial on the CMI stated,
In a way, the view of the “total patient” approximates that of the family physician. Interestingly, in a period of quickening specialization, the problem of the narrowed gaze was already being felt, and attempts were already being made to address it.
Lea himself notes that the cases he presents share similarities, which are fascinating and important to highlight but at times make the reading a bit repetitive. Just as interesting as what the author presents about this niche is what is not discussed. In discussing whether the computer systems were “better” than the doctors, or even “useful,” the book never mentions concepts such as sensitivity, predictive values, and likelihood ratios. Clinical epidemiology and evidence-based medicine receive only brief comment. In scholarly fashion, Lea raises more questions than he answers about the path ahead, noting the insufficiency of AI to address social and economic factors and the “fundamental complexities [that] remain” (p. 189).
Nonetheless, the book is a superb addition to the history of medicine and science. It underscores the social limitations as well as the technical ones (ie, the different strategies used for machine learning) and shows the great deal of subjectivity and arbitrariness that goes into creating computer systems. Perhaps most importantly, the text tells stories that are mostly forgotten: stories of failure. Despite decades of work by brilliant scientists trying to determine the best approach, and despite grants and important institutions backing the projects, the initiatives still failed to take off. In recent years, many AI programs have been promised to accomplish wonders.6,7 Take the example of IBM’s Watson: hailed as the future of health care, it has not delivered tremendous improvement.7-9 Even in areas where computers are now almost ubiquitous, such as electronic medical records, their performance is still far from optimal.10
Some applications of AI are already here,11 mostly in the field of image interpretation,5,12,13 but also in infectious disease surveillance14 and even in some preliminary clinical problem-solving.15 Finding the balance between technophilia and skepticism is difficult, especially for those who are not computer experts. Critically examining the lower-tech history of previous endeavors helps.
The history of computers in medicine may best be summarized by a comment from Dr Norman Sharpless, former director of the US National Cancer Institute, about IBM’s partnership with the University of North Carolina School of Medicine to train Watson in oncology: “We thought it would be easy, but it turned out to be really, really hard.”7