Researchers at the University of California, San Francisco (UCSF) have developed a new approach to diagnosing lung infections in critically ill patients. The method combines generative artificial intelligence analysis of medical records with a biomarker found in lung fluid, specifically the expression of the FABP4 gene, which is associated with inflammation reduction.
In an observational study involving critically ill adults, this combined approach achieved a correct diagnosis rate of 96%. It also outperformed intensive care clinicians in distinguishing between infectious and non-infectious causes of respiratory failure. The researchers estimate that if this model had been available when patients were admitted, inappropriate antibiotic use could have been reduced by more than 80%.
“We’ve devised a method that gives results much faster than a culture, and it could be easy to implement in the clinic,” said Chaz Langelier, M.D., Ph.D., associate professor of Medicine and senior author of the study published December 16 in Nature Communications. “We’re confident that it could lead to faster diagnosis and curtail the unnecessary use of antibiotics.”
The research team discovered that FABP4 is expressed at lower levels in infected lung cells than in healthy ones, which makes the gene a useful marker of infection.
The study examined data from two groups: one recruited before the COVID-19 pandemic (mainly bacterial infections) and another during the pandemic (mainly viral infections, including COVID-19). On its own, each diagnostic method, whether the FABP4 biomarker or the AI analysis, was about 80% accurate. Combined, the two reached the 96% accuracy reported above.
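The study does not describe how the two signals are fused, but the intuition that two independently ~80%-accurate signals can yield a much more confident combined call can be sketched with a simple log-odds combination under an independence assumption. Everything below, including the function name, the normalization, and the fusion rule, is a hypothetical illustration, not the study's actual model.

```python
import math

def combined_infection_probability(fabp4_expression: float,
                                   ai_probability: float) -> float:
    """Illustrative fusion of a biomarker signal with an AI-derived probability.

    fabp4_expression: normalized expression level in [0, 1]; lower values
        suggest infection, per the study's observation.
    ai_probability: the AI model's estimated probability of infection in [0, 1].
    """
    # Lower FABP4 expression -> stronger evidence of infection.
    biomarker_evidence = 1.0 - fabp4_expression

    def logit(p: float) -> float:
        # Convert a probability to log-odds, clamped to avoid infinities.
        p = min(max(p, 1e-6), 1 - 1e-6)
        return math.log(p / (1 - p))

    # Naive independence assumption with a 50/50 prior: summing the
    # log-odds of two agreeing signals raises the joint confidence.
    combined_logit = logit(biomarker_evidence) + logit(ai_probability)
    return 1.0 / (1.0 + math.exp(-combined_logit))

# Two agreeing signals, each ~80% confident, combine to ~94%.
print(round(combined_infection_probability(0.2, 0.8), 2))  # → 0.94
```

The point of the sketch is only that agreement between two imperfect, roughly independent tests compounds: each signal alone sits near 80%, while their combination lands well above either one.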
Doctors generally prescribed antibiotics for most patients diagnosed with pneumonia. In contrast, the biomarker-plus-AI model was more selective when assigning this diagnosis.
To further assess accuracy, researchers compared AI analysis using GPT-4 on UCSF’s privacy-protecting platform with evaluations by three physicians specializing in internal medicine and infectious diseases. Both approaches achieved similar rates of correct diagnoses; however, the AI placed greater emphasis on radiology reports, while the physicians relied more on clinical notes.
“It was almost showing a cultural difference, if you can say that about an AI,” said Natasha Spottiswoode, M.D., DPhil, assistant professor of Medicine and first author. “It shows how AI can complement the work physicians do.”
The research team published their AI prompts so other physicians can test them on HIPAA-compliant platforms.
“Using this is unbelievably simple, you don’t have to be a bioinformatician,” said Hoang Van Phan, Ph.D., co-first author.
The group is now working to validate this model as a clinical test and plans to adapt it for sepsis diagnosis next.
Other authors include Emily Lydon, M.D., Carolyn Calfee, M.D., MAS, Victoria Chu, M.D., MPH, Adolfo Cuesta, M.D., Ph.D., Alexander Kazberouk, M.D., MBA, Natalie Richmond, M.D., and Padmini Deosthale, MS, all from UCSF. The work received funding from the National Institutes of Health and the Chan Zuckerberg Biohub. No financial or personal conflicts of interest were reported by any authors.