Large Language Model-Based Evaluation of Medical Question Answering Systems: Algorithm Development and Case Study

Stud Health Technol Inform. 2024 Apr 26;313:22-27. doi: 10.3233/SHTI240006.

ABSTRACT

BACKGROUND: Healthcare systems are increasingly resource-constrained, leaving less time for important patient-provider interactions. Conversational agents (CAs) could be used to support the provision of information and to answer patients’ questions. However, information must be accessible to a variety of patient populations, which requires understanding questions expressed at different language levels.

METHODS: This study describes the use of Large Language Models (LLMs) to evaluate predefined medical content in CAs across simulated patient populations characterized by differing levels of health literacy. The evaluation framework includes both fully automated and semi-automated procedures to assess the performance of a CA.
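
To make the framework concrete, the sketch below illustrates the two LLM roles implied by the methods: simulating a patient question at a chosen health literacy level, and grading the CA's answer against predefined reference content. It assumes an OpenAI-style chat API; the model name, the prompts, the literacy strata, and the stubbed ask_conversational_agent function are illustrative assumptions, not the authors' actual pipeline.

    # A minimal, illustrative sketch of the two LLM roles described above
    # (question simulation and answer grading). Model name, prompts, literacy
    # strata, and the stubbed CA are assumptions, not the published method.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    LITERACY_LEVELS = ["low", "average", "high"]  # hypothetical strata

    def simulate_question(topic: str, literacy: str) -> str:
        """Have the LLM pose a patient question at a given literacy level."""
        resp = client.chat.completions.create(
            model="gpt-4",  # assumed model choice
            messages=[
                {"role": "system",
                 "content": (f"You are a patient with {literacy} health "
                             "literacy. Ask one short question about the "
                             "topic below, using vocabulary typical of your "
                             "literacy level.")},
                {"role": "user", "content": topic},
            ],
        )
        return resp.choices[0].message.content

    def ask_conversational_agent(question: str) -> str:
        """Stub for the CA under test; replace with the real system's call."""
        return "Screening mammography is offered every two years."

    def judge_answer(question: str, answer: str, reference: str) -> str:
        """Have the LLM grade the CA's answer against reference content."""
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": ("Rate whether the answer to the patient question "
                             "is consistent with the reference text. Reply "
                             "with 'correct', 'partially correct', or "
                             "'incorrect'.")},
                {"role": "user",
                 "content": (f"Question: {question}\nAnswer: {answer}\n"
                             f"Reference: {reference}")},
            ],
        )
        return resp.choices[0].message.content

    if __name__ == "__main__":
        reference = ("In the mammography screening programme, women are "
                     "invited to a screening examination every two years.")
        for level in LITERACY_LEVELS:
            question = simulate_question("mammography screening intervals", level)
            answer = ask_conversational_agent(question)
            print(level, "->", judge_answer(question, answer, reference))

The question-simulation step corresponds to the fully automated part of the framework; the grading step can be run fully automatically or used semi-automatically, with a human reviewing the LLM's verdicts.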

RESULTS: A case study in the domain of mammography shows that LLMs can simulate questions from different patient populations. However, the accuracy of the answers provided varies with the simulated level of health literacy.

CONCLUSIONS: Our scalable evaluation framework enables the simulation of patient populations with different health literacy levels and helps to evaluate domain-specific CAs, thus promoting their integration into clinical practice. Future research aims to extend the framework to CAs without predefined content and to apply LLMs to adapt medical information to the specific (health) literacy level of the user.

PMID:38682499 | DOI:10.3233/SHTI240006