
ChatGPT-4 as a tool for patient education: artificial intelligence and urological cancer-related queries

Cheng-Chieh Yang, Chin-Li Chen, Chien-Chang Kao, Ming-Hsin Yang, Chih-Wei Tsao, En Meng, Pei-Jhang Chiang*

Division of Urology, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan

 

Purpose

To evaluate the quality and efficacy of ChatGPT-4's responses to urological cancer-related queries as a tool for patient education.

 

Methods

A selection of 115 urological cancer-related questions, spanning prostate, bladder, kidney, testicular, penile, and external genital tumors, was posed to ChatGPT-4. Responses were critically assessed across several metrics, including understandability, actionability, length, readability, and accuracy.
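The abstract does not describe how the questions were submitted to the model. As a minimal sketch of one plausible workflow, assuming the official OpenAI Python SDK and illustrative placeholder questions (not the study's actual 115-item set), batch querying could look like this:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical placeholder questions; the study used 115 items
# spanning prostate, bladder, kidney, testicular, penile, and
# external genital tumor categories.
questions = [
    "What are the early warning signs of prostate cancer?",
    "How is bladder cancer diagnosed and staged?",
]

responses = {}
for q in questions:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": q}],
    )
    # Store each answer for downstream readability and accuracy scoring.
    responses[q] = reply.choices[0].message.content
```

The stored responses can then be passed to the understandability, actionability, readability, and accuracy assessments described above.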

 

Results

ChatGPT-4 consistently yielded high understandability across all cancer types, averaging a score of 91.7%. However, its actionability, an indicator of the practical applicability of the information, averaged 40.0%, with notable variation in the kidney and testicular cancer categories. The model's answers averaged around 395 words and 2,268 characters, structured into approximately 16 paragraphs and 21 sentences. Readability metrics exhibited slight variation, with Flesch Reading Ease scores ranging from 30.65 (kidney cancer) to 40 (penile cancer). The Flesch-Kincaid Grade Level consistently placed the text's complexity at the college level, with scores ranging from 12.15 to 13.4. Notably, the misinformation score was low across categories, underscoring the accuracy of the generated content. Passive-voice usage, a proxy for active reader engagement, varied but remained predominantly below 30%.
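The Flesch Reading Ease and Flesch-Kincaid Grade Level values reported above follow standard published formulas. The sketch below is an illustration of how such scores are computed, not the study's scoring pipeline; it uses a simplified vowel-group syllable heuristic rather than a dictionary-based counter:

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; dictionary-based syllable
    # counters are more accurate in practice.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    # Standard formulas:
    # FRE  = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
    return fre, fkgl

fre, fkgl = flesch_scores("Prostate cancer screening uses a blood test called PSA.")
print(f"FRE={fre:.2f}, FKGL={fkgl:.2f}")
```

Lower FRE values and higher FKGL values both indicate harder text, which is why the reported scores (FRE 30.65 to 40, FKGL 12.15 to 13.4) correspond to college-level reading difficulty.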

 

Conclusions

ChatGPT-4 demonstrates significant potential as a tool for urological patient education, given its consistently high understandability scores. However, the variability in actionability and text complexity necessitates further refinement. The tool's robust accuracy in information dissemination is commendable, yet category-specific refinements could optimize its efficacy for patient-centric communication. This study highlights both the potential and the challenges of integrating large language models into medical education platforms.

 

Keywords: ChatGPT, patient education tool, artificial intelligence, urological cancer
