ChatGPT outperforms humans on OBGYN exam: study

The artificial intelligence chatbot ChatGPT outperformed human candidates in a mock obstetrics and gynecology exam, excelling even in areas such as empathetic communication and the application of specialist knowledge.

ChatGPT scored an average of 77.2 percent on the OBGYN specialist exam, while human candidates only eked out a 73.7 percent average, a new study from the National University of Singapore reveals.

ChatGPT also took an average of just under three minutes to complete each station, well under the 10-minute time limit, the study noted.

The test, which had candidates complete a series of stations with evolving clinical scenarios, asked each participant to formulate a care plan and demonstrate a competent grasp of myriad issues including labor management, gynecologic oncology and postoperative care.


Not only did ChatGPT score higher than the human candidates, it also surprised the panel by performing strongly in empathetic communication and by developing answers to complex questions very quickly, the study said.

Although the ChatGPT candidate was concealed from the examiners, they were often able to figure out which responses were from the artificial intelligence based on their thoroughness and speed.

The technology’s major limitation, however, was its lack of “local ethnic knowledge,” the assessment explained.


While the human exam takers were able to communicate with patients in a fusion of English, Malay, Tamil, and Chinese dialects, ChatGPT relied on more scripted phrasing. The examiners therefore concluded that the human candidates were better able to “bridge…closeness and build trust” with patients.

“The arrival and increased use of ChatGPT has proven that it can be a viable resource in guiding medical education, possibly provide adjunct support for clinical care in real time, and even support the monitoring of medical treatment in patients,” Associate Professor Mahesh Choolani, who ran the study, concluded in a press release.

“In an era where accurate knowledge and information is instantly accessible, and these capabilities could be embedded within appropriate context by Generative AI in the foreseeable future, the need for future generations of medical doctors to clearly demonstrate the value and importance of the human touch is now saliently obvious,” he continued.


“As doctors and medical educators, we need to strongly emphasize and exemplify the use of soft skills, compassionate communication and knowledge application in medical training and clinical care.”

The results of the Singapore study come as more industries grapple with artificial intelligence’s implications for their work.

This week, homework help website Chegg announced it was cutting 4 percent of its staff as ChatGPT threatened to cut into its business.

The company plans to “execute against its AI strategy and to create long-term sustainable value for its students and investors,” it said in a statement.

