
Health & Science Tech Digest: February 2023



Dear Readers, 

Progress in machine learning (ML), neural networks, and artificial intelligence (AI) is transforming industries, bringing sophisticated software and hardware that streamline simple work tasks, help determine patient diagnoses, and even enable human-like responses to our most interesting questions! In this month’s news and research roundup, we talk through some of the ways AI is changing health and science. We hope you enjoy February’s Health and Science Tech Digest.

Warmest regards,
DeAnne Canieso, PhD
Communications Manager

Get Our Health & Science Tech Digest Sent to You Monthly

Subscribe to our Health & Science Tech Huddle newsletter to get this curated list of relevant news, research, and resources sent to your inbox each month. Plus get additional resources designed to empower health & science innovators.

News


ChatGPT Sets Record for Fastest-Growing Application in History, But What Does It Mean for Humanity?

This month, OpenAI's popular natural language chatbot, ChatGPT, earned the title of fastest-growing consumer application in history. To put it into perspective, it took TikTok nine months to reach 100 million users, while Instagram took 2.5 years. ChatGPT reached 100 million in January, just shy of two months after launch. If you’re not in the know, ChatGPT is a natural language processing (NLP) technology trained on a massive corpus of textual data. It is now capable of understanding the context and nuances of language across a variety of domains, to the extent that it generates human-like responses to nearly any question.
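For readers curious what interacting with a model like this looks like programmatically, here is a minimal sketch using OpenAI's Python SDK (pre-1.0 interface). The model name, prompts, and health-education framing are illustrative assumptions, not anything from the digest:

```python
# A minimal sketch of querying a ChatGPT-style model via OpenAI's
# Python SDK (pre-1.0 interface). Model name and prompts are
# illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # keep real keys out of source code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed chat-capable model
    messages=[
        {"role": "system", "content": "You are a friendly health educator."},
        {"role": "user", "content": "In plain language, what does an A1C test measure?"},
    ],
)

print(response.choices[0].message["content"])  # the generated reply
```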

The implications are scary, but it is possible that generative AI like ChatGPT will significantly impact health and science. For example, ChatGPT could improve patient literacy by providing instant access to information and support. Likewise, providers could make more accurate and informed clinical decisions by using the chatbot to analyze patient history, symptoms, and test results. Despite the flurry of positivity surrounding the possibilities, I do urge that we slow our roll and understand what this could mean for humanity. For one, it may lead to us losing the ability to perform the most basic tasks. After all, the chatbot can write your email, give you relationship advice, and list discussion questions for a book in our CrossComm book club (yes, we did this)!

However, ChatGPT is only as good as the data it is fed, which is all the more reason to proceed with caution. Some of ChatGPT’s training data comes from a vast corpus of text from the internet, and those texts are written by humans. It is very possible that ChatGPT can perpetuate current societal prejudices and reinforce existing stereotypes and biases. In fact, it can amplify the biases in our language by learning and replicating them in its response generation. Further, if the data it is fed contains false or inaccurate information, the risks of use remain high, especially in healthcare. I can’t help but feel that the proliferation of AI can give us hope, but it should also give us pause to consider and establish ethical frameworks and guidelines sooner rather than later.


Robots Join the Hospital Ranks to Help Combat Nurse Burnout

In recent news, robots have joined the medical workforce. A hospital in Spokane is testing out incredibly cute four-foot robots named Moxi. While Moxi is not tasked with the important patient care that nurses give, it can run errands such as taking samples to the lab, delivering medication to providers, and grabbing equipment. The Spokane hospital found that its nurses were spending 70 minutes of a 12-hour workday doing tasks the robot can do, and it hopes to decrease burnout by lightening that workload.

This news pinpoints a key concern for healthcare. The emotional toll of burnout among nurses is a critical issue with far-reaching impacts on patient care. In one survey I found, 64% of nurses reported looking to leave their profession. And while the pandemic has brought more attention to nurse burnout, almost all nurses surveyed felt their mental health wasn’t a priority at work. If nurses continue to feel unsupported and distressed, there is an obvious downstream effect on patient care, including poor decision making that could lead to safety incidents.

This leaves us with a bigger question: can machines help? Helper robots may usher in a time of efficiency, especially by automating menial tasks for an already-struggling workforce. And over time, improvements in hardware and software will extend the scope of their capabilities (right now there are robots in development that help dress patients). But I can also imagine that a single mechanical malfunction could cost human lives. There are also the emotional interactions to consider that distinguish the human experience from robotic engagement. This dehumanization could impact patients, especially those with mental health conditions, which leads me to wonder: when is it appropriate to replace human interaction?


Research Roundup


Eye Gaze as a Biomarker in the Recognition of Autism Spectrum Disorder (ASD) Using Virtual Reality and Machine Learning: A Proof of Concept for Diagnosis

A diagnostic biomarker, eye gaze, is a variable researchers can now collect via a VR headset! Within the context of ASD, clinical research has been striving to identify objective and “unconscious” processes to help in the systematic diagnosis and earlier detection of ASD. Assessing the disorder is typically a challenge for experts because it requires observation in clinical settings, a limitation since those settings don’t necessarily capture real-world performance. Despite these limitations, eye gaze behaviors remain the most relevant indicator in the diagnosis of ASD.

Children with ASD show atypical eye patterns in social settings, such as less attention to faces, people, and social situations compared with typically developed (TD) children. The eye tracking measures collected in VR capture a child’s response to virtual environments with realistic stimuli. Researchers in the study created various virtual shopping and entertainment stores, as well as static and dynamic social and nonsocial stimuli, to assess eye gaze. In all, 13 variables were analyzed, which indicated statistically significant differences between autistic and TD participants. The authors additionally proposed the first supervised ML and eye-tracking paradigm able to distinguish between autistic and TD children.
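To give a feel for what a supervised ML and eye-tracking paradigm might look like in code, here is a minimal sketch. The feature matrix, labels, and model choice are hypothetical stand-ins for the study's actual 13 gaze variables and pipeline:

```python
# A minimal sketch (not the study's actual pipeline) of supervised
# classification over eye-gaze features. Data here is random filler.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row: one child's aggregated gaze metrics from a VR session,
# e.g., fixation time on faces vs. nonsocial stimuli, saccade counts.
X = np.random.rand(40, 13)       # 40 participants x 13 gaze variables
y = np.random.randint(0, 2, 40)  # 1 = ASD, 0 = typically developed (TD)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"Mean CV accuracy: {scores.mean():.2f}")
```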

The combined use of ML and VR brings within the realm of possibility the use of VR systems to detect hard-to-diagnose conditions such as ASD. While this research was exploratory in nature, it lays the groundwork for future studies in which VR systems can discriminate between two different groups by detecting patterns specific to a disorder. Moreover, advancements in ML techniques could help providers reach an earlier diagnosis through ecologically valid virtual environments that mimic real-world situations. Read the Research


A Validated Deep Learning Model to Predict Future Lung Cancer Risk

Sybil, a new lung cancer risk assessment tool developed by MIT researchers, uses AI to detect cancer in low-dose computed tomography (LDCT) scans. The endeavor was quite the challenge for researchers, as thoracic radiologists worked to label hundreds of LDCT scans with visible tumors to train Sybil effectively. Now, Sybil can accurately predict an individual’s future lung cancer risk out to six years from a single scan.

What makes this novel screening tool innovative is that current cancer prediction models require demographic data, clinical risk factors, and radiologic annotations to assess cancer risk. Sybil, however, can assess cancer risk based entirely on LDCT images. Researchers additionally validated Sybil in two large randomized controlled trials, establishing efficacy and showing that it could maintain accuracy across sex, age, and smoking history subgroups.
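To make the scan-only idea concrete, here is a toy sketch in PyTorch: a small 3D convolutional network maps an LDCT volume to six per-year risk probabilities. This is an illustrative assumption, not Sybil's actual architecture, which among other things enforces that predicted risk never decreases across the horizon:

```python
# A toy sketch (not Sybil itself): a 3D CNN maps an LDCT volume to
# cancer-risk probabilities for years 1-6 after the scan.
import torch
import torch.nn as nn

class RiskModel(nn.Module):
    def __init__(self, horizon_years: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(8, horizon_years)  # one logit per year

    def forward(self, scan: torch.Tensor) -> torch.Tensor:
        # scan: (batch, 1, depth, height, width) LDCT volume
        logits = self.head(self.encoder(scan))
        return torch.sigmoid(logits)  # per-year risk estimates in [0, 1]

model = RiskModel()
fake_scan = torch.randn(1, 1, 32, 64, 64)  # placeholder volume
print(model(fake_scan))  # six risk probabilities for one scan
```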

As deep learning AI becomes increasingly sophisticated and our medical scans can better predict risk for disease, I hope we continue to expand the types of data fed into these algorithms. The authors mentioned limitations to this study, including suboptimal population diversity and a lack of access to participants’ smoking data. Now that Sybil gives us the potential for early detection with scans alone, we might also one day find associations between the scans and our behaviors, insight that could show us a way toward prevention. More research is needed to link social determinants of health, as well as geographic variables, to LDCT data. Read the Research


Performance of ChatGPT on USMLE: Potential for AI-assisted Medical Education Using Large Language Models

Researchers tested ChatGPT’s ability to perform clinical reasoning by asking the AI chatbot 376 questions from the United States Medical Licensing Exam (USMLE). The exam covers a conceptually dense review of physicians’ knowledge, including basic science, medical management, clinical understanding, and bioethics. Results showed that ChatGPT currently approaches or exceeds the USMLE’s passing threshold, and its accuracy is rising.

In prior iterations, ChatGPT achieved 46% accuracy, which further tuning later raised to 50%. In the present study, the model exceeded 60%, roughly the passing threshold, in some analyses. Researchers additionally assessed the potential for AI-assisted human learning, noting that ChatGPT’s responses were structured in a way that a human learner could follow the language, logic, and directionality of the relationships in the explanations, to the extent that “ChatGPT possesses the partial ability to teach medicine.”

Whew! We’re heading into what some might label ethically dicey territory. Even OpenAI, the company that developed ChatGPT, asserts that the tool has a tendency to respond with 'plausible-sounding but incorrect or nonsensical answers.' Yet, ChatGPT is proving its accuracy over time, lending an air of credibility to its knowledge base. More and more users are accepting the plausibility of the text generated without considering that ChatGPT’s response might be disseminating false information. Should we be worried? Read the Research


Do you have thoughts about the topics in our digest? Feel free to contact me with your impressions and ideas, and be a part of the conversation in our next Health & Science Tech Digest.
