In 2024, information systems Ph.D. student Ommo Clark penned an opinion piece for BusinessDay Nigeria exploring why many Nigerians diagnose and treat their medical conditions themselves, often turning to unreliable online information.
While the essay was inspired by firsthand experiences in her native country, the impulse to consult “Dr. Google” is a worrying global trend, Clark says, and one that has motivated her Ph.D. work. It’s unlikely that people will stop going online with health questions, so Clark is researching ways that AI could help patients, healthcare providers, public health officials, and content platforms better understand and evaluate the sea of medically related content on the internet.
A dual approach to misinformation

Clark’s efforts were recently recognized when one of her research papers, co-authored with information systems professor Karuna Joshi, won the Best Student Paper Award at the IEEE International Conference on Digital Health 2025, held in July in Helsinki, Finland. The paper, titled “Real-Time Detection of Online Health Misinformation using an Integrated Knowledgegraph-LLM Approach,” describes the results of combining two types of AI approaches (a concept sometimes called third-wave AI) to tackle the problem of identifying online health misinformation.
Clark and Joshi combined a large language model (LLM), which excels at understanding nuanced language, with knowledge graphs, which provide structured factual verification against curated medical knowledge. They found that the combined approach significantly outperformed either approach by itself.
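The intuition behind that hybrid design can be sketched in a few lines of Python. Everything below is illustrative, not the authors' system: the toy triple store, the stand-in LLM scorer, and the blending weights are all invented to show how a linguistic signal and a structured fact check might be combined.

```python
# Hypothetical sketch (not the paper's implementation): blending an
# LLM-style linguistic signal with a knowledge-graph fact check.

# Toy medical knowledge graph, stored as (subject, relation, object) triples.
KNOWLEDGE_GRAPH = {
    ("vitamin c", "treats", "scurvy"),
    ("insulin", "treats", "diabetes"),
}

def kg_verify(subject: str, relation: str, obj: str) -> float:
    """Structured check: 1.0 if the claim's triple is supported, else 0.0."""
    return 1.0 if (subject, relation, obj) in KNOWLEDGE_GRAPH else 0.0

def llm_plausibility(claim_text: str) -> float:
    """Stand-in for an LLM judging how credible the wording sounds.
    A real system would query a language model here; this toy version
    just penalizes hype words."""
    hyped = any(w in claim_text.lower() for w in ("miracle", "cure-all", "instantly"))
    return 0.2 if hyped else 0.7

def misinformation_risk(claim_text, subject, relation, obj, w_kg=0.6, w_llm=0.4):
    """Blend both signals; a higher score means higher misinformation risk."""
    support = w_kg * kg_verify(subject, relation, obj) + w_llm * llm_plausibility(claim_text)
    return 1.0 - support

# A factually grounded claim should score lower risk than a hyped,
# unsupported one.
low = misinformation_risk("Vitamin C treats scurvy.",
                          "vitamin c", "treats", "scurvy")
high = misinformation_risk("This miracle herb instantly cures diabetes.",
                           "miracle herb", "cures", "diabetes")
```

The point of the blend is the one Clark makes: the knowledge graph alone misses persuasive but factually empty phrasing, while the language signal alone cannot verify whether a treatment claim is medically supported.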
“The most significant takeaway is that effective health misinformation detection requires both linguistic understanding and structured medical knowledge. Neither alone is sufficient for the complexity of health discourse online,” Clark says.
Equally important, the researchers built robust privacy protections into the system, a critical piece that is missing from many current misinformation detection systems, Clark says.
Informing, not dictating
Going forward, the team is working to further improve the system by giving it the ability to understand the emotional undertones, cultural cues, stance, and persuasive structures of online health stories, in which people may describe personal experiences with health treatments. This “narrative” information is important, Clark says, because it illuminates why some stories can be particularly compelling. The researchers are also working to build a system that can evaluate the clinical risk of misinformation, sorting potentially harmless claims from those that could endanger a reader’s health.

The upgrades will produce a tool that gives users critical information and meaningful risk assessments without presenting a “true/false” judgment, Clark says. “This nuanced approach respects user autonomy,” she says. “Rather than censoring content, we are giving people the tools to make informed decisions about the health information they encounter. In this era of declining institutional trust, transparency about methodology and risk assessment rather than authoritative declarations may be more effective in protecting public health while preserving democratic discourse.”
Clark has already received positive feedback from potential users of such tools. A nurse practitioner at Retriever Integrated Health whom she talked to about her work immediately asked if the system could be integrated into Google. Healthcare practitioners consult evidence-based medical sources before diagnosing or prescribing, the nurse said, “but patients go to Google!”
Tags: COEIT, GraduateSchool, IS, Research
