<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ChatGPT &#8211; Pharmacy Update Online</title>
	<atom:link href="https://pharmacyupdateonline.com/tag/chatgpt/feed/" rel="self" type="application/rss+xml" />
	<link>https://pharmacyupdateonline.com</link>
	<description></description>
	<lastBuildDate>Mon, 25 Aug 2025 11:28:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://pharmacyupdateonline.com/wp-content/uploads/2020/12/cropped-favicon-512x360.png</url>
	<title>ChatGPT &#8211; Pharmacy Update Online</title>
	<link>https://pharmacyupdateonline.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Head-to-head against AI, pharmacy students won</title>
		<link>https://pharmacyupdateonline.com/2025/08/head-to-head-against-ai-pharmacy-students-won/</link>
		
		<dc:creator><![CDATA[Charlie King]]></dc:creator>
		<pubDate>Thu, 14 Aug 2025 08:00:32 +0000</pubDate>
				<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Devices and Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[education technology]]></category>
		<category><![CDATA[pharmacy students]]></category>
		<category><![CDATA[PharmD student]]></category>
		<guid isPermaLink="false">https://pharmacyupdate.online/?p=18066</guid>

					<description><![CDATA[Students pursuing a Doctor of Pharmacy degree routinely take – and pass – rigorous exams to prove competency in several areas. Can ChatGPT accurately answer the same questions? [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Students pursuing a Doctor of Pharmacy degree routinely take – and pass – rigorous exams to prove competency in several areas. Can ChatGPT accurately answer the same questions? A new study by <a href="https://click.comms.arizona.edu/?qs=d764ee533f248502d7041094eb13503bba41ca1c7e91232b35915cab65243baa97ef9d6a0c270023aa2450b1821f2476c6678e48ecd430ad" target="_blank" rel="noopener">University of Arizona R. Ken Coit College of Pharmacy</a> researchers said no, it can’t.</p>
<p>Researchers found that ChatGPT 3.5, a form of artificial intelligence, fared worse than PharmD students in answering questions on therapeutics examinations that ensure students have the knowledge, skills, and critical thinking abilities to provide safe, effective and patient-centered care.</p>
<p>ChatGPT was less likely to correctly answer application-based questions (44%) compared with questions focused on recall of facts (80%). It also was less likely to answer case-based questions correctly (45%) compared with questions that weren’t focused on patient cases (74%). Overall, ChatGPT answered only 51% of the questions correctly.</p>
<p>The results provide additional insights into the uses and limitations of the technology and may also prove valuable in the development of pharmacy exam questions. The study findings appear in <em><a href="https://click.comms.arizona.edu/?qs=d764ee533f24850230f419d379e85d010b49fe3e79be385e2c76587ed6c68d2cc567e9072abe200ca0f614c4aea3987d616da3fd7a57d0aa" target="_blank" rel="noopener">Currents in Pharmacy Teaching and Learning</a></em>.</p>
<p>“AI has many potential uses in health care and education, and it’s not going away,” said <strong>Christopher Edwards, PharmD</strong>, an associate clinical professor of pharmacy practice and science. “One of the things we were hoping to answer with the study was if students wanted to use AI on an exam, how would they perform? I wanted to have data to show the students and tell them they can do well in the exams by studying hard and they don’t necessarily need these tools.”</p>
<p>A secondary goal was to find out what kinds of questions AI would struggle with. Coit College of Pharmacy Interim Dean <strong>Brian Erstad, PharmD</strong>, wasn’t surprised that ChatGPT did better with straightforward multiple choice and true-false questions and was less successful with application-based questions.</p>
<p>“The kinds of places where evidence is limited and judgment is required, which is often in a clinical setting, was where we found the technology somewhat lacking,” he said. “Ironically those are the kinds of questions clinicians are always facing.”</p>
<p>Edwards, Erstad, and <strong>Bernadette Cornelison, PharmD</strong>, an associate professor of pharmacy practice and science, evaluated answers to 210 questions from six exams in two pharmacotherapeutics courses that are part of the university’s Coit College of Pharmacy PharmD program.</p>
<p>The questions came from two courses: a first-year PharmD course focused on disorders treated with nonprescription medications, such as heartburn, diarrhea, atopic dermatitis, colds and allergies, and a second-year course that covered cardiology, neurology and critical care topics.</p>
<p>To compare the exam performances of pharmacy students and ChatGPT, the researchers calculated mean composite scores as a measure of the ability to answer questions correctly. For ChatGPT, they summed its scores on the individual exams and divided by the number of exams; for the students, they divided the sum of the mean class performance on each exam by the number of exams. Across the six exams, the mean composite score was 53 for ChatGPT, compared with 82 for the pharmacy students.</p>
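<p>A minimal sketch of that composite-score arithmetic is shown below; the per-exam scores are hypothetical placeholders, not the study's data, and only the two composite values (53 and 82) come from the paper:</p>
<pre><code># Mean composite score: sum of per-exam scores divided by the number of exams.
chatgpt_exam_scores = [48, 51, 53, 54, 55, 57]  # hypothetical per-exam scores
class_mean_scores = [80, 81, 82, 82, 83, 84]    # hypothetical per-exam class means

def mean_composite(scores):
    """Average the per-exam scores across all exams."""
    return sum(scores) / len(scores)

print(mean_composite(chatgpt_exam_scores))  # 53.0 (study: ChatGPT composite of 53)
print(mean_composite(class_mean_scores))    # 82.0 (study: student composite of 82)
</code></pre>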
<p>Educators, clinicians and others continue to debate the value of AI large language models, such as ChatGPT, in academic medicine. While such models will continue to play a range of roles in health care, pharmacy practice and other areas, many are concerned that relying too much on the technology could hamper the development of needed reasoning and critical thinking skills in students.</p>
<p>Both Erstad and Edwards acknowledged that in time, newer and more advanced technology may change these results.</p>
<p><strong>Image:</strong> Brian Erstad, PharmD, is the interim dean and a professor at the R. Ken Coit College of Pharmacy. <a href="https://www.eurekalert.org/multimedia/1086573">View more</a>. Credit: Photo by Kris Hanning, U of A Office of Research and Partnerships.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How good are AI doctors at medical conversations?</title>
		<link>https://pharmacyupdateonline.com/2025/01/how-good-are-ai-doctors-at-medical-conversations/</link>
		
		<dc:creator><![CDATA[Charlie King]]></dc:creator>
		<pubDate>Fri, 03 Jan 2025 08:00:07 +0000</pubDate>
				<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Devices and Technology]]></category>
		<category><![CDATA[Practices and Services]]></category>
		<category><![CDATA[Service Developments]]></category>
		<category><![CDATA[AI doctors]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[clinician workload]]></category>
		<category><![CDATA[medical conversations]]></category>
		<category><![CDATA[medical technology]]></category>
		<guid isPermaLink="false">https://www.pharmacyupdate.online/?p=15519</guid>

					<description><![CDATA[Artificial intelligence tools such as ChatGPT have been touted for their promise to alleviate clinician workload by triaging patients, taking medical histories and even providing preliminary diagnoses. These [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Artificial intelligence tools such as ChatGPT have been touted for their promise to alleviate clinician workload by triaging patients, taking medical histories and even providing preliminary diagnoses.</p>
<p>These tools, known as large-language models, are already being used by patients to make sense of their symptoms and medical tests results.</p>
<p>But while these AI models perform impressively on standardized medical tests, how well do they fare in situations that more closely mimic the real world?</p>
<p>Not that great, according to the findings of a new study led by researchers at Harvard Medical School and Stanford University.</p>
<p>For their analysis, published Jan. 2 in<em> <a href="https://www.nature.com/articles/s41591-024-03328-5">Nature Medicine</a></em>, the researchers designed an evaluation framework — or a test — called CRAFT-MD (Conversational Reasoning Assessment Framework for Testing in Medicine) and deployed it on four large-language models to see how well they performed in settings closely mimicking actual interactions with patients.</p>
<p>All four large-language models did well on medical exam-style questions, but their performance worsened when engaged in conversations more closely mimicking real-world interactions.</p>
<p>This gap, the researchers said, underscores a two-fold need: first, to create more realistic evaluations that better gauge the fitness of clinical AI models for use in the real world and, second, to improve the ability of these tools to make diagnoses based on more realistic interactions before they are deployed in the clinic.</p>
<p>Evaluation tools like CRAFT-MD, the research team said, can not only assess AI models more accurately for real-world fitness but could also help optimize their performance in the clinic.</p>
<p>&#8220;Our work reveals a striking paradox &#8211; while these AI models excel at medical board exams, they struggle with the basic back-and-forth of a doctor&#8217;s visit,&#8221; said study senior author <a href="https://pranavrajpurkar.com/">Pranav Rajpurkar</a>, assistant professor of biomedical informatics at Harvard Medical School. “The dynamic nature of medical conversations &#8211; the need to ask the right questions at the right time, to piece together scattered information, and to reason through symptoms &#8211; poses unique challenges that go far beyond answering multiple choice questions. When we switch from standardized tests to these natural conversations, even the most sophisticated AI models show significant drops in diagnostic accuracy.&#8221;</p>
<p><strong>A better test to check AI’s real-world performance</strong></p>
<p>Right now, developers test the performance of AI models by asking them to answer multiple choice medical questions, typically derived from the national exam for graduating medical students or from tests given to medical residents as part of their certification.</p>
<p>“This approach assumes that all relevant information is presented clearly and concisely, often with medical terminology or buzzwords that simplify the diagnostic process, but in the real world this process is far messier,” said study co-first author Shreya Johri, a doctoral student in the <a href="https://www.rajpurkarlab.hms.harvard.edu/">Rajpurkar Lab</a> at Harvard Medical School. “We need a testing framework that reflects reality better and is, therefore, better at predicting how well a model would perform.”</p>
<p>CRAFT-MD was designed to be one such more realistic gauge.</p>
<p>To simulate real-world interactions, CRAFT-MD evaluates how well large-language models can collect information about symptoms, medications, and family history and then make a diagnosis. One AI agent poses as a patient, answering questions in a conversational, natural style. Another AI agent grades the accuracy of the final diagnosis rendered by the large-language model. Human experts then evaluate each encounter for the ability to gather relevant patient information, for diagnostic accuracy when presented with scattered information, and for adherence to prompts.</p>
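<p>In outline, the framework pairs a patient agent with the model under test and then grades the result. The sketch below is a toy reconstruction of that loop as described above, not CRAFT-MD's actual code; every function, class, and value here is a stand-in:</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Vignette:
    symptoms: str          # what the patient agent is allowed to reveal
    true_diagnosis: str    # ground truth used by the grader agent

def toy_doctor(transcript):
    """Stand-in for the model under test: asks once, then commits to a diagnosis."""
    if not transcript:
        return ("question", "What symptoms are you having?")
    return ("diagnosis", "migraine")

def toy_patient(question, vignette):
    """Stand-in patient agent: answers conversationally, from the vignette only."""
    return "Well, lately I've had a " + vignette.symptoms + "."

def toy_grader(diagnosis, truth):
    """Stand-in grader agent: scores the final diagnosis against ground truth."""
    return 1.0 if diagnosis == truth else 0.0

def evaluate_vignette(doctor, patient, grader, vignette, max_turns=20):
    """Run one simulated doctor-patient encounter and grade the diagnosis."""
    transcript, diagnosis = [], None
    for _ in range(max_turns):
        kind, text = doctor(transcript)
        transcript.append(("doctor", text))
        if kind == "diagnosis":
            diagnosis = text
            break
        transcript.append(("patient", patient(text, vignette)))
    return grader(diagnosis, vignette.true_diagnosis)

case = Vignette(symptoms="throbbing one-sided headache with nausea",
                true_diagnosis="migraine")
print(evaluate_vignette(toy_doctor, toy_patient, toy_grader, case))  # 1.0
</code></pre>
<p>In the study itself, human experts additionally audit each transcript, a step this toy loop omits.</p>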
<p>The researchers used CRAFT-MD to test four AI models — both proprietary (commercial) and open-source — on 2,000 clinical vignettes featuring conditions common in primary care and across 12 medical specialties.</p>
<p>All AI models showed limitations, particularly in their ability to conduct clinical conversations and reason based on information given by patients. That, in turn, compromised their ability to take medical histories and render appropriate diagnoses. For example, the models often struggled to ask the right questions to gather pertinent patient history, missed critical information during history taking, and had difficulty synthesizing scattered information. Their accuracy declined when they were presented with open-ended information rather than multiple-choice answers. They also performed worse in back-and-forth exchanges — as most real-world conversations are — than when working from summarized conversations.</p>
<p><strong>Recommendations for optimizing AI’s real-world performance </strong></p>
<p>Based on these findings, the team offers a set of recommendations both for AI developers who design AI models and for regulators charged with evaluating and approving these tools.</p>
<p>These include:</p>
<ul>
<li>Use of conversational, open-ended questions that more accurately mirror unstructured doctor-patient interactions in the design, training, and testing of AI tools</li>
<li>Assessing models for their ability to ask the right questions and to extract the most essential information</li>
<li>Designing models capable of following multiple conversations and integrating information from them</li>
<li>Designing AI models capable of integrating textual data (notes from conversations) with non-textual data (images, EKGs)</li>
<li>Designing more sophisticated AI agents that can interpret non-verbal cues such as facial expressions, tone, and body language</li>
</ul>
<p>Additionally, the evaluation should include both AI agents and human experts, the researchers recommend, because relying solely on human experts is labor-intensive and expensive. For example, CRAFT-MD outpaced human evaluators, processing 10,000 conversations in 48 to 72 hours, plus 15-16 hours of expert evaluation. In contrast, human-based approaches would require extensive recruitment and an estimated 500 hours for patient simulations (nearly 3 minutes per conversation) and about 650 hours for expert evaluations (nearly 4 minutes per conversation). Using AI evaluators as a first line has the added advantage of eliminating the risk of exposing real patients to unverified AI tools.</p>
<p>The researchers said they expect that CRAFT-MD itself will also be updated and optimized periodically to integrate improved patient-AI models.</p>
<p>“As a physician scientist, I am interested in AI models that can augment clinical practice effectively and ethically,” said study co-senior author Roxana Daneshjou, assistant professor of Biomedical Data Science and Dermatology at Stanford University.  “CRAFT-MD creates a framework that more closely mirrors real-world interactions and thus it helps move the field forward when it comes to testing AI model performance in health care.”</p>
<p><strong>Authorship, funding, disclosures</strong></p>
<p>Publication DOI 10.1038/s41591-024-03328-5</p>
<p>Additional authors included Jaehwan Jeong and Hong-Yu Zhou, Harvard Medical School; Benjamin A. Tran, Georgetown University; Daniel I. Schlessinger, Northwestern University; Shannon Wongvibulsin, University of California-Los Angeles; Leandra A. Barnes, Zhuo Ran Cai and David Kim, Stanford University; and Eliezer M. Van Allen, Dana-Farber Cancer Institute.</p>
<p>The work was supported by the HMS Dean’s Innovation Award and a Microsoft Accelerate Foundation Models Research grant awarded to Pranav Rajpurkar. SJ received further support through the IIE Quad Fellowship.</p>
<p>Daneshjou reported receiving personal fees from DWA, Pfizer, L&#8217;Oreal, and VisualDx; stock options from MDAlgorithms and Revea outside the submitted work; and a patent for TrueImage pending. Schlessinger is the co-founder of FixMySkin Healing Balms, a shareholder in Appiell Inc. and K-Health, a consultant with Appiell Inc. and LuminDx, and an investigator for Abbvie and Sanofi. Van Allen serves as an advisor to Enara Bio, Manifold Bio, Monte Rosa, Novartis Institute for Biomedical Research, and Serinus Bio; receives research support from Novartis, BMS, Sanofi, and NextPoint; and holds equity in Tango Therapeutics, Genome Medical, Genomic Life, Enara Bio, Manifold Bio, Microsoft, Monte Rosa, Riva Therapeutics, Serinus Bio, and Syapse. Van Allen has filed for institutional patents on chromatin mutations and immunotherapy response and on methods for clinical interpretation, does intermittent legal consulting on patents for Foley &amp; Hoag, and serves on the editorial board of <em>Science Advances</em>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>When it comes to emergency care, ChatGPT overprescribes</title>
		<link>https://pharmacyupdateonline.com/2024/10/when-it-comes-to-emergency-care-chatgpt-overprescribes/</link>
		
		<dc:creator><![CDATA[Charlie King]]></dc:creator>
		<pubDate>Fri, 11 Oct 2024 08:00:10 +0000</pubDate>
				<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Devices and Technology]]></category>
		<category><![CDATA[Practices and Services]]></category>
		<category><![CDATA[Service Developments]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[emergency care]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[hospital treatment]]></category>
		<category><![CDATA[overprescribing]]></category>
		<guid isPermaLink="false">https://www.pharmacyupdate.online/?p=14743</guid>

					<description><![CDATA[Generative AI still needs to find the right balance between too little and too much care before it can help doctors make decisions in the Emergency Department.  If [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><em>Generative AI still needs to find the right balance between too little and too much care before it can help doctors make decisions in the Emergency Department. </em></p>
<p>If ChatGPT were cut loose in the Emergency Department, it might suggest unneeded x-rays and antibiotics for some patients and admit others who didn’t require hospital treatment, a new study from UC San Francisco has found.</p>
<p>The researchers said that, while the model could be prompted in ways that make its responses more accurate, it’s still no match for the clinical judgment of a human doctor.</p>
<p>“This is a valuable message to clinicians not to blindly trust these models,” said postdoctoral scholar <a href="https://urldefense.com/v3/__https://u7061146.ct.sendgrid.net/ls/click?upn=u001.gqh-2BaxUzlo7XKIuSly0rC-2F1FkALUKsUn-2F3xA6AKw-2BfnAyCTS26h3Omidw0Cayki5eNnr6P9Znnp8vBsmOK-2BNAg-3D-3DOzj7_Ylre-2F07SALHbMk99pbuxBBlZHa-2F5o-2FWUGLES1ilvhGlGrGuJ1CdsoY2F9sfrH7k-2BMHmtt2RnsCI1j8p9-2BLuYtBV5CWdcmaifv0I54GRP3ek7voj-2FNATkSVGIHcxVmHW4Usu3cV7DMSVoSAR9zOX2VDLnUcj9JVc-2FZQLsVesLdg4ZctJws9k-2FwWZFmUY51AJfqXzqZgr9pg8hqApE3tReeYBDLrKpWe0QzsrhjciVCRe435hMnpAOLqnq1spz94rO1dgJyWd2AnCzLVkJ7S8mHcBIAeezD4P4N9FXx9rIJEDTg5SNUq4e9mTrN4GRbUwNPUHl-2FsbEO-2FQeGH-2BG6mpFm5OqcVUjmYyZ9LgX8n-2FojVo-3D__;!!LQC6Cpwp!vVnuTTjXetSkERT5vPZ53cRfUkWPJtvXu_0p4B1R3-Xtes69-HHdFwm0yQ0n9s02hA9Q7ndWQLzWaDWodP3fkXEVNXAo$">Chris Williams</a>, MB BChir, lead author of the study, which appears Oct. 8 in <em><a href="https://urldefense.com/v3/__https://u7061146.ct.sendgrid.net/ls/click?upn=u001.gqh-2BaxUzlo7XKIuSly0rC0c9cga68YvkwlUdTt3oQwrrFLOCv4nmf73mUpOjLByBD-2FHPMx-2F6aPd7fHr-2BDqy1gA-3D-3DN3nH_Ylre-2F07SALHbMk99pbuxBBlZHa-2F5o-2FWUGLES1ilvhGlGrGuJ1CdsoY2F9sfrH7k-2BMHmtt2RnsCI1j8p9-2BLuYtBV5CWdcmaifv0I54GRP3ek7voj-2FNATkSVGIHcxVmHW4Usu3cV7DMSVoSAR9zOX2VDLnUcj9JVc-2FZQLsVesLdg4ZctJws9k-2FwWZFmUY51AJfqXzqZgr9pg8hqApE3tReeTrPrkknD6yCqgQ1EZrSeZAI6fGluv1KoSn1IQ09DesULi7I6CwhQ2oUEJBCPBwA4GhaztAS8Uefu13KxKpndjDvRlcaeWbSX6cDpFK1KEXwiNEbnXxO4m5CJSe0-2FDoz1k3pL-2B1E4QmFdsemRAhbS6A-3D__;!!LQC6Cpwp!vVnuTTjXetSkERT5vPZ53cRfUkWPJtvXu_0p4B1R3-Xtes69-HHdFwm0yQ0n9s02hA9Q7ndWQLzWaDWodP3fkcJMQX_E$">Nature Communications</a></em>. “ChatGPT can answer medical exam questions and help draft clinical notes, but it’s not currently designed for situations that call for multiple considerations, like the situations in an emergency department.”</p>
<p>Recently, Williams showed that ChatGPT, a large language model (LLM) that can be used for researching clinical applications of AI, was <a href="https://urldefense.com/v3/__https://u7061146.ct.sendgrid.net/ls/click?upn=u001.gqh-2BaxUzlo7XKIuSly0rC-2Fxhdmv9CVBkyVBEGvInt-2FaZJEVnrWLpqYiUipy0OJ9uUCoxCFb-2FOnXDtrcpZV2cebT56f36Ryvev1IDLG3Ex3YN5BHfkfynOqL5NrwxZSPUtRnf-2Bo6Ds2YxEzVhv4RlVg-3D-3DMni3_Ylre-2F07SALHbMk99pbuxBBlZHa-2F5o-2FWUGLES1ilvhGlGrGuJ1CdsoY2F9sfrH7k-2BMHmtt2RnsCI1j8p9-2BLuYtBV5CWdcmaifv0I54GRP3ek7voj-2FNATkSVGIHcxVmHW4Usu3cV7DMSVoSAR9zOX2VDLnUcj9JVc-2FZQLsVesLdg4ZctJws9k-2FwWZFmUY51AJfqXzqZgr9pg8hqApE3tReeXxZcf23eE-2FKva2uip-2BYLdztfKKSQpkgEEgSK-2FRrSaVp5S6QbS-2FaTpWXjo7njtZ-2BGtX3R-2BRthS8VCCsgFJHQvpcGpueVB15ndcvue7yaKkyEukU6jfFn4pKca6lRqveyFmenfb8WpaSXjZma8i7JPwI-3D__;!!LQC6Cpwp!vVnuTTjXetSkERT5vPZ53cRfUkWPJtvXu_0p4B1R3-Xtes69-HHdFwm0yQ0n9s02hA9Q7ndWQLzWaDWodP3fkTLgUmI8$">slightly better than humans</a> at determining which of two emergency patients was most acutely unwell, a straightforward choice between patient A and patient B.</p>
<p>With the current study, Williams challenged the AI model to perform a more complex task: providing the recommendations a physician makes after initially examining a patient in the ED. This includes deciding whether to admit the patient, get x-rays or other scans, or prescribe antibiotics.</p>
<p><strong>AI model is less accurate than a resident </strong></p>
<p>For each of the three decisions, the team compiled a set of 1,000 ED visits to analyze from an archive of more than 251,000 visits. Each set had the same ratio of “yes” to “no” responses for decisions on admission, radiology and antibiotics as is seen across UCSF Health’s Emergency Department.</p>
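<p>A minimal sketch of that kind of ratio-matched sampling appears below; it is an illustration with invented field names and an invented 30% admission rate, not the study's code or data:</p>
<pre><code>import random

def sample_matching_ratio(visits, label_key, target_yes_rate, n=1000, seed=0):
    """Draw n visits whose yes/no label ratio matches a target rate."""
    rng = random.Random(seed)
    yes = [v for v in visits if v[label_key]]
    no = [v for v in visits if not v[label_key]]
    n_yes = round(n * target_yes_rate)
    return rng.sample(yes, n_yes) + rng.sample(no, n - n_yes)

# Toy archive standing in for the 251,000-visit archive; the 30% admission
# rate here is an assumption made for the example, not UCSF's actual rate.
rng = random.Random(1)
archive = [{"note": "ED note " + str(i), "admitted": rng.choice([True] * 3 + [False] * 7)}
           for i in range(5000)]

subset = sample_matching_ratio(archive, "admitted", target_yes_rate=0.3)
print(len(subset))  # 1000 visits, roughly 300 "yes" and 700 "no"
</code></pre>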
<p>Using UCSF’s secure generative AI platform, which has broad privacy protections, the researchers entered doctors’ notes on each patient’s symptoms and examination findings into ChatGPT-3.5 and ChatGPT-4. Then, they tested each model’s accuracy on each set using a series of increasingly detailed prompts.</p>
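<p>The article does not reproduce the study's prompts, but escalation of prompt detail typically looks like the hypothetical sketch below; all prompt text and the ask_model stand-in are invented for illustration:</p>
<pre><code>BASE = "Decide yes or no: should this patient be admitted?"

# Hypothetical escalation from a bare question to a role- and context-rich prompt.
PROMPTS = [
    BASE,
    "You are an emergency physician. " + BASE,
    "You are an emergency physician. " + BASE +
    " Answer only 'yes' or 'no', weighing the harm of unnecessary admission"
    " against the risk of missed deterioration.",
]

def classify(note, ask_model):
    """Query the model once per prompt level; ask_model stands in for an LLM call."""
    return [ask_model(prompt + "\n\nED note:\n" + note) for prompt in PROMPTS]

# Toy stand-in model that always admits, mimicking the over-triage tendency noted above.
print(classify("Chest pain, troponin pending.", lambda prompt: "yes"))
</code></pre>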
<p>Overall, the AI models tended to recommend services more often than was needed. ChatGPT-4 was 8% less accurate than resident physicians, and ChatGPT-3.5 was 24% less accurate.</p>
<p>Williams said the AI’s tendency to overprescribe could be because the models are trained on the internet, where legitimate medical advice sites aren’t designed to answer emergency medical questions but rather to send readers to a doctor who can.</p>
<p>“These models are almost fine-tuned to say, ‘seek medical advice,’ which is quite right from a general public safety perspective,” he said. “But erring on the side of caution isn’t always appropriate in the ED setting, where unnecessary interventions could cause patients harm, strain resources and lead to higher costs for patients.”</p>
<p>He said models like ChatGPT will need better frameworks for evaluating clinical information before they are ready for the ED. The people who design those frameworks will need to strike a balance between making sure the AI doesn’t miss something serious, while keeping it from triggering unneeded exams and expenses.</p>
<p>This means researchers developing medical applications of AI, along with the wider clinical community and the public, need to consider where to draw those lines and how much to err on the side of caution.</p>
<p>“There’s no perfect solution,” he said. “But knowing that models like ChatGPT have these tendencies, we’re charged with thinking through how we want them to perform in clinical practice.”</p>
<p><strong>Authors:</strong> Additional authors include Brenda Miao, Aaron Kornblith, and Atul Butte, all of UCSF.</p>
<p><strong>Funding: </strong>The Eunice Kennedy Shriver National Institute of Child Health and Human Development and the National Institutes of Health (K23HD110716).</p>
<p><strong>Disclosures:</strong> Please see the paper.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Study finds ChatGPT shows promise as medication management tool, could help improve geriatric health care</title>
		<link>https://pharmacyupdateonline.com/2024/04/study-finds-chatgpt-shows-promise-as-medication-management-tool-could-help-improve-geriatric-health-care/</link>
		
		<dc:creator><![CDATA[Charlie King]]></dc:creator>
		<pubDate>Fri, 19 Apr 2024 08:00:11 +0000</pubDate>
				<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Devices and Technology]]></category>
		<category><![CDATA[Practices and Services]]></category>
		<category><![CDATA[Service Developments]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[care of the elderly]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[geriatric care]]></category>
		<category><![CDATA[medication management]]></category>
		<category><![CDATA[polypharmacy]]></category>
		<guid isPermaLink="false">https://www.pharmacyupdate.online/?p=12879</guid>

					<description><![CDATA[Polypharmacy, or the concurrent use of five or more medications, is common in older adults and increases the risk of adverse drug interactions. While deprescribing unnecessary drugs can [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Polypharmacy, or the concurrent use of five or more medications, is common in older adults and increases the risk of adverse drug interactions. While deprescribing unnecessary drugs can combat this risk, the decision-making process can be complex and time-consuming. Increasingly, there is a need for effective polypharmacy management tools that can support short-staffed primary care practitioners.</p>
<p>In a new study, researchers from the Mass General Brigham MESH Incubator found that ChatGPT, a generative artificial intelligence (AI) chatbot, showed promise as a tool to manage polypharmacy and deprescribing. These findings, published April 18<sup>th</sup> in the <em>Journal of Medical Systems</em>, demonstrate the first use case of such AI models in medication management.</p>
<p>To evaluate its utility, the investigators provided ChatGPT with different clinical scenarios and asked it a set of decision-making questions. Each scenario featured the same elderly patient taking a mixture of medications but included variations in cardiovascular disease (CVD) history and degree of impairment in activities of daily living (ADL).</p>
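<p>That design is essentially a small scenario grid. The sketch below reconstructs the idea with invented level labels and question wording; the study's actual scenarios may differ:</p>
<pre><code>from itertools import product

# One base patient varied along two axes; the level labels are assumptions.
cvd_history = ["no CVD history", "CVD history present"]
adl_impairment = ["no impairment", "mild impairment", "severe impairment"]

scenarios = [
    {"cvd": cvd, "adl": adl,
     "question": "Would you deprescribe any of this patient's medications? (yes/no)"}
    for cvd, adl in product(cvd_history, adl_impairment)
]
print(len(scenarios))  # 6 variants of the same base patient
</code></pre>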
<p>When asked yes or no questions about reducing prescribed drugs, ChatGPT consistently recommended deprescribing medications in patients without a history of CVD. However, it was more cautious when a history of CVD was introduced, and more likely to keep the patient’s medication regimen unchanged. In both cases, the researchers observed that the severity of ADL impairment did not seem to affect decision outcomes.</p>
<p>The team also noted that ChatGPT had a tendency to disregard pain and favored deprescribing pain medications over other drug types like statins or antihypertensives. In addition, ChatGPT responses varied when presented with the same scenario in new chat sessions — which the authors suggest could reflect inconsistency in commonly reported clinical deprescribing trends on which the model was trained.</p>
<p>More than 40 percent of older adults meet the criteria for polypharmacy. In recent years, seniors on Medicare have increasingly seen multiple specialists on their care teams, leaving primary care providers to oversee medication management. An effective AI tool could help support this practice, according to the researchers.</p>
<p>“Our study provides the first use case of ChatGPT as a clinical support tool for medication management,” said senior corresponding author Marc Succi, MD, Associate Chair of Innovation and Commercialization at Mass General Brigham Radiology and Executive Director of the MESH Incubator. “While caution should be taken to increase accuracy of such models, AI-assisted polypharmacy management could help alleviate the increasing burden on general practitioners. Further research with specifically trained AI tools may significantly enhance the care of aging patients.”</p>
<p>Arya Rao, lead author, MESH researcher and Harvard Medical School student, added: “Our findings suggest that AI-based tools can play an important role in ensuring safe medication practices for older adults; it is imperative that we continue to refine these tools to account for the complexities of medical decision-making.”</p>
<p>Read more in the <em>Journal of Medical Systems</em>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>ChatGPT can produce medical record notes ten times faster than doctors</title>
		<link>https://pharmacyupdateonline.com/2024/03/chatgpt-can-produce-medical-record-notes-ten-times-faster-than-doctors/</link>
		
		<dc:creator><![CDATA[Charlie King]]></dc:creator>
		<pubDate>Thu, 28 Mar 2024 08:00:04 +0000</pubDate>
				<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Devices and Technology]]></category>
		<category><![CDATA[Pharmaceutical Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Discharge documents]]></category>
		<category><![CDATA[hospital care]]></category>
		<category><![CDATA[medical record]]></category>
		<category><![CDATA[Orthopaedics]]></category>
		<guid isPermaLink="false">https://www.pharmacyupdate.online/?p=12672</guid>

					<description><![CDATA[The AI model ChatGPT can write administrative medical notes up to ten times faster than doctors without compromising quality. This is according to a new study conducted by [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><strong>The AI model ChatGPT can write administrative medical notes up to ten times faster than doctors without compromising quality. This is according to a new study conducted by researchers at Uppsala University Hospital and Uppsala University in collaboration with Danderyd Hospital and the University Hospital of Basel, Switzerland. They conducted a pilot study of just six virtual patient cases, which will now be followed up with an in-depth study of 1,000 authentic patient medical records.</strong></p>
<p>“For years, the debate has centred on how to improve the efficiency of healthcare. Thanks to advances in generative AI and language modelling, there are now opportunities to reduce the administrative burden on healthcare professionals. This will allow doctors to spend more time with their patients,” explains Cyrus Brodén, an orthopaedic physician and researcher at Uppsala University Hospital and Uppsala University.</p>
<p>Administrative tasks take up a large share of a doctor’s working hours, reducing the time for patient contact and contributing to a stressful work situation. The new study shows that ChatGPT can produce such notes up to ten times faster than doctors without compromising quality.</p>
<p>The aim of the study was to assess the quality and effectiveness of the ChatGPT tool when producing medical record notes. The researchers used six virtual patient cases that mimicked real cases in both structure and content. Discharge documents for each case were generated by orthopaedic physicians. ChatGPT-4 was then asked to generate the same notes. The quality assessment was carried out by an expert panel of 15 people who were unaware of the source of the documents. As a secondary metric, the time required to create the documents was compared.</p>
<p>“The results show that ChatGPT-4 and human-generated notes are comparable in quality overall, but ChatGPT-4 produced discharge documents ten times faster than the doctors,” notes Brodén.</p>
<p>“Our interpretation is that advanced large language models like ChatGPT-4 have the potential to change the way we work with administrative tasks in healthcare. I believe that generative AI will have a major impact on healthcare and that this could be the beginning of a very exciting development,” he maintains.</p>
<p>The plan is to launch an in-depth study shortly, with researchers collecting 1,000 medical patient records. Again, the aim is to use ChatGPT to produce similar administrative notes in the patient records.</p>
<p>“This will be an interesting and resource-intensive project involving many partners. We are already working actively to fulfil all data management and confidentiality requirements to get the study under way,” concludes Brodén.</p>
<p>Rosenberg, G. S., Magnéli, M., Barle, N., Kontakis, M. G., Müller, A. M., Wittauer, M., Gordon, M., &amp; Brodén, C. (2024). ChatGPT-4 generates orthopedic discharge documents faster than humans maintaining comparable quality: a pilot study of 6 cases. <em>Acta Orthopaedica</em>, <em>95</em>, 152-156. <a href="https://doi.org/10.2340/17453674.2024.40182">https://doi.org/10.2340/17453674.2024.40182</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google &#038; ChatGPT have mixed results in medical information queries</title>
		<link>https://pharmacyupdateonline.com/2023/08/google-chatgpt-have-mixed-results-in-medical-information-queries/</link>
		
		<dc:creator><![CDATA[Charlie King]]></dc:creator>
		<pubDate>Thu, 03 Aug 2023 08:00:43 +0000</pubDate>
				<category><![CDATA[Devices and Technology]]></category>
		<category><![CDATA[Medicines and Therapeutics]]></category>
		<category><![CDATA[Mental Health]]></category>
		<category><![CDATA[Pharmaceutical Technology]]></category>
		<category><![CDATA[Alzheimers]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[dementia]]></category>
		<category><![CDATA[Internet Research]]></category>
		<category><![CDATA[medical information queries]]></category>
		<guid isPermaLink="false">https://www.pharmacyupdate.online/?p=10016</guid>

					<description><![CDATA[When you need accurate information about a serious illness, should you go to Google or ChatGPT? An interdisciplinary study led by University of California, Riverside, computer scientists found [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>When you need accurate information about a serious illness, should you go to Google or ChatGPT?</p>
<p>An interdisciplinary study led by University of California, Riverside, computer scientists found that both internet information gathering services have strengths and weaknesses for people seeking information about Alzheimer&#8217;s disease and other forms of dementia. The team included clinical scientists from the University of Alabama and Florida International University.</p>
<p>Google provides the most current information, but query results are skewed by service and product providers seeking customers, the researchers found. ChatGPT, meanwhile, provides more objective information, but it can be outdated, and its narrative responses do not cite the sources of their information.</p>
<p>“If you pick the best features of both, you can build a better system, and I think that this is what will happen in the next couple of years,” said Vagelis Hristidis, a professor of computer science and engineering in UCR’s Bourns College of Engineering.</p>
<p>In their study, Hristidis and his co-authors submitted 60 queries to both Google and ChatGPT that would be typical submissions from people living with dementia and their families.</p>
<p>The researchers focused on dementia because more than 6 million Americans are impacted by Alzheimer&#8217;s disease or a related condition, said study co-author Nicole Ruggiano, a professor of social work at the University of Alabama.</p>
<p>&#8220;Research also shows that caregivers of people living with dementia are among the most engaged stakeholders in pursuing health information, since they often are tasked with making decisions for their loved one&#8217;s care,&#8221; Ruggiano said.</p>
<p>Half of the queries submitted by the researchers sought information about the disease processes, while the other half sought information on services that could assist patients and their families.</p>
<p>The results were mixed.</p>
<p>“Google has more up-to-date information, and covers everything,” Hristidis said. “Whereas ChatGPT is trained every few months. So, it is behind. Let&#8217;s say there&#8217;s some new medicine that just came out last week, you will not find it on ChatGPT.”</p>
<p>Though sometimes dated, ChatGPT provided more reliable and accurate information than Google. This is because its creators at OpenAI select the most reliable websites when training ChatGPT through computationally intensive machine learning. Yet users are left in the dark about the specific sources of its information because the resulting narratives are devoid of references.</p>
<p>Google, however, has a reliability problem because it essentially “covers everything from the reliable sources to advertisements,” Hristidis said.</p>
<p>In fact, advertisers pay Google for their website links to appear at the top of search result pages. So, users often first see links to websites of for-profit companies trying to sell them care-related services and products. Finding reliable information from Google searches thus requires a level of user skill and experience, Hristidis said.</p>
<p>Co-author Ellen Brown, an associate professor of nursing at Florida International University, pointed out that families need timely information about Alzheimer&#8217;s.</p>
<p>&#8220;Although there is no cure for the disease, many clinical trials are underway and recently a promising treatment for early stage Alzheimer&#8217;s disease was approved by the FDA,&#8221; Brown said. &#8220;Therefore, up-to-date information is important for families looking to learn about recent discoveries and available treatments.&#8221;</p>
<p>The authors of the study write that “the addition of both the source and the date of health-related information and availability in other languages may increase the value of these platforms for both non-medical and medical professionals.” It was published in the <em>Journal of Medical Internet Research</em> under the title &#8220;ChatGPT vs Google for Queries Related to Dementia and Other Cognitive Decline: Comparison of Results.”</p>
<p>Google and ChatGPT both scored low on readability, which makes their responses difficult to use for people with lower levels of education or limited health literacy.</p>
<p>“My prediction is that the readability is the easier thing to improve because there are already some tools, some AI methods, that can read and paraphrase text,” Hristidis said. “In terms of improving reliability, accuracy, and so on, that&#8217;s much harder. Don&#8217;t forget that it took scientists many decades of AI research to build ChatGPT. It is going to be slow improvements from where we are now.”</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
