
Brandeis Marshall - DataedX

Rebel Tech Newsletter: Google’s Med-PaLM


October 10th, 2023

The Rebel Tech Newsletter is our safe place to critique data and tech algorithms, processes, and systems. We highlight a recent data article in the news and share resources to help you dig deeper to understand how our digital world operates. DataedX Group helps data educators, scholars and practitioners learn how to make responsible data connections. We help you source remedies and interventions based on the needs of your team or organization.


IN DATA NEWS

“Google’s Med-PaLM 2, an AI tool designed to answer questions about medical information, has been in testing at the Mayo Clinic research hospital, among others, since April. Med-PaLM 2 was trained on a curated set of medical expert demonstrations, which Google believes will make it better at healthcare conversations than generalized chatbots like Bard, Bing, and ChatGPT. In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM 2 than those of other doctors.”

Generative AI-assisted healthcare. Those words should make your heart skip a beat. For physicians and other medical professionals, alarm bells are ringing very, very loudly. The slow roll of adopting Med-PaLM 2 has commenced, starting with patient chart documentation and transcribing doctor-patient conversations. Google has shared only a sliver of its Med-PaLM research findings. Its touting of an 86.5% passing grade on a U.S. medical licensing exam makes me leery, since it’s well understood that standardized exams don’t assess critical thinking and problem solving, nor do they assess cultural competence. Transparency and accountability about what this tool captures digitally (even though Google says no patient data is used) and how it performs for different common medical conditions remain noticeably missing.

A step toward more adequate transparency and accountability would be to require the completion and public release of Med-PaLM’s algorithmic impact assessment. The Ada Lovelace Institute’s Algorithmic Impact Assessment (AIA) template is a must-read, and, in my opinion, it should be more widely adopted. It’s not perfect: it lacks strong recommendations on enforcing changes or abandoning particularly harmful algorithms. But this AIA asks important questions. The questions in Section 3 (Impact identification and scenarios) are strikingly relevant to this Med-PaLM discussion:

  1. What is the best-case scenario that could arise from the use of this system? Discuss when the system works as designed/intended, but also how failures, errors, mistakes, or unexpected behaviors would be handled.
  2. What kind of socio-environmental requirements are necessary for the success of this system in operation? E.g. stable connection to the internet, training for doctors and nurses, collaboration between particular clinical and administration staff, etc.
  3. What are likely challenges/hurdles to achieving the best-case scenario?
  4. What is the worst-case scenario to arise from the use of this system?
    1. When the system works as designed/intended
    2. When the system fails or doesn’t work as designed/intended in some way

It would be eye-opening to read the Med-PaLM team’s responses as a reflective exercise, along with participatory groups’ raw feedback and reactions. Remember, what Med-PaLM considers an inaccuracy directly affects a patient’s life. And only physicians and other medical professionals, not the tool, are mandated to abide by the Hippocratic Oath.

Like what you're reading? Find it informative and insightful? You can sponsor the Rebel Tech Newsletter and follow on LinkedIn.


DATA CONSCIENCE CORNER

"...our society engrosses itself in leveraging, operationalizing, powering, and monetizing AI under misguided notions of increasing profits, enhancing a person’s or a company’s cool factor, and making everyday life easier for the average person." (Data Conscience, pg. 163)

For every help that AI brings, there comes at least equal parts harm. This stance has been decades in the making. Governments around the globe have considered or established ways to limit the harmful effects of AI. Tech companies’ profits have been garnished through sanctions, penalties and algorithmic destruction, and these companies increasingly find themselves in ethical trouble over ill-conceived, under-vetted product and service releases. The cool factor of those releases lasts only for a viral post cycle.

The problems that actually need solving aren’t the ones being addressed. And the “average person” tech builds its products and services for looks less and less like its actual customers. We’re a multi-racial, multicultural, multilingual and multi-gendered society. So our everyday lives have become increasingly complicated with AI, as we individually try to sort out which AI tools are helping us, which are harming us and under what conditions.

Advice: Take 15-30 days before adopting new AI-infused tools, and take 30-60 days to assess whether existing AI-infused tools are benefiting you.


A WORD FOR BLACK WOMEN IN DATA

It’s never too late to win. And this isn’t a future win that you have to work on starting now, nor is it a big win like completing another data cert.

I’m talking about your overlooked incremental wins. You’ve made so much impact this year that you’ve already forgotten some of it. You’ve set and enforced more boundaries. You’ve completed that tough data project. You’ve taught yourself a new data skill and applied it to one of your work projects. You’ve done more paid public speaking engagements. You’ve become comfortable with calling yourself a data professional. Take inventory of the overlooked wins you’ve set aside and moved past already this year. Write them down. Say them out loud. Celebrate them openly.

Daily-ish rest routine suggestion: I drink two 8-oz glasses of warm water with 2 teaspoons of lime juice when I first wake up. It helps me anchor my day, ushering it in peacefully.


It has been just over two weeks since the Black Women in Data Summit 2023. Whew, it feels like it was just yesterday that we met, connected, laughed and enjoyed each other's company. If you missed it, you can share in the experience by viewing my keynote, THEE panel (featuring Dr. Mindelyn Anderson, Taye Johnson, Dr. Kenya Oduor and Monique Mills) and the all-attendee general workshop led by Laura E. Knights.


UPCOMING EVENTS

AI in Education Day at Portland Community College’s AI Symposium

Join me virtually on October 18th for AI in Education Day at Portland Community College’s AI Symposium. I’ll be speaking from 1-3 PM EDT. Yes, you read that right: a TWO-hour keynote that’ll feel more like a mini-workshop as we discuss key themes in Data Conscience. Bring your notebook 😊

Portland Community College has put together an awesome lineup of expert speakers and engaging sessions. Take a look at their full schedule here.

Follow us on social


LAUGHING IS GOOD FOR THE SOUL

Stay Rebel Techie,

Brandeis

Thanks for subscribing! If you like what you read or use it as a resource, please share the newsletter signup with three friends!

Brandeis Marshall - DataedX

Learn how to make more responsible data connections. I help educators, researchers and practitioners align data polices, practices and products for equity. Sign up for my Rebel Tech Newsletter!
