April 11th, 2023
The Rebel Tech Newsletter is our safe place to critique data and tech algorithms, processes and systems. We highlight a recent data article in the news and share resources to help you dig deeper to understand how our digital world operates. DataedX Group helps data educators, scholars and practitioners learn how to make responsible data connections. We help you source remedies and interventions based on the needs of your team or organization.
IN DATA NEWS
“Future of Life Institute (FLI) wrote an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter states. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” More than 30,000 people, including Tesla’s Elon Musk, Apple co-founder Steve Wozniak and politician Andrew Yang, have signed the letter, but AI researchers and experts vocally disagree with the letter’s proposal and approach.”
The past few weeks have been full of hubbub about the future of AI, GPT-4 and longtermism. Great reaction and follow-up commentary has been shared by Dr. Emily Bender, Dr. Timnit Gebru, Dr. Meg Mitchell and so many other PhD’d womxn AI scholars. They’ve stood up and raised important points that shouldn’t be ignored. No lies detected and in full agreement. So I’m not going to reiterate what they’ve already said.
Let’s focus our attention on just how mediocre the explanation of this AI pause request was. I’d give it a C. Here are two key irritation points, imho:
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” I have a TON of questions:
- Which AI labs? Which independent researchers? No names were shared. So I guess this is an open call and we all can imagine how wide the AI knowledge gaps will be to those who’d respond.
- Do these AI lab personnel and independent researchers know each other? People work with other people they trust.
- When are they supposed to meet up? It often takes me 2-3 weeks to schedule a Zoom with 2 other people.
- Who is leading them — one of these tech “leaders”? Without a strategy or plan, nothing productive will materialize.
“AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”
- Do AI developers want to work with policymakers? Do they even know how? The two sides’ points of view are almost exact opposites. AI developers don’t understand policymaking and policymakers don’t understand AI. A joint short cohort-based course would be needed first. But neither side has time because AI developers are caught up coding and policymakers are dialed in on writing.
- What sort of AI governance legislation or system can be developed in 6 months? AI development operates fast and haphazardly while the government works very, very, very slowly and strategically. I can’t reconcile these two work cultures arriving at a common place within 6 months. Nor do I envision any common place they could reach rising to the level of a robust AI governance system.
- Who is leading them — one of these tech “leaders”? Again, without a strategy or plan, nothing productive will materialize.
None of the originators of this #AIPause have stepped up to actually do any real, productive work. They haven’t even pledged money to fund at least white-led organizations to act on their flimsy requests. These early petition signers have run/are running multi-million- and multi-billion-dollar organizations, yet failed to supply any sort of business strategy for execution. Come on, now 🙄👀 This petition stands in stark contrast to their standard operating behaviors of business optimization, monetization and execution.
I share other unrelated thoughts in my Medium post released on Apr 10th.
|Click Here to Read the Entire Article on Vice|
Like what you're reading? Find it informative and insightful? You can sponsor the Rebel Tech Newsletter and follow on LinkedIn.
DATA CONSCIENCE CORNER
"The intended use of data, how it’s represented in digital systems, and the resulting impact of outcomes play heavily in the moral fabric of data management." pg 4, Data Conscience
A moral friction of data management that we don’t like to talk about, but need to, is the bolstering of eugenics aka social Darwinism aka race science within our digital infrastructure. It’s a debunked theory that certain ethnicities, religions and genders are better than others. Numerical data rooted in anthropometry (measurements and proportions of the human body) is weaponized as evidence that stereotypes are true: white people are naturally smarter and have larger brains than other ethnicities, Asian people are naturally great at math, Black people are dumb and dirty, men are better leaders than women, and so on. But this discriminatory theory and its oppression-driven stereotypes continue to be perpetuated and popularized under different names with an AI twist, e.g., longtermism and effective altruism. There’s a stench of “selective breeding” at the core of eugenics. No longer relying solely on numerical data, data in all of its forms is digitized and works in tandem to promote systemic -isms narratives. Your call-to-action is to speak up, in real time, about systemic -isms you witness and/or experience.
|Get Your Copy of Data Conscience Here!|
The Data Conscience Book Tour 2023
The Rebel Tech Newsletter gives you a peek at how to become an informed proactive data citizen. Being aware is the first step. Sharing what you've learned and putting it to good use are the next steps.
Let Dr. Marshall kickoff your company's inclusive tech journey.
There are always new opportunities to make this digital society a better place for all. Join the Data Conscience Book Tour. We are still accepting submissions for dates between March and December of this year. Complete the form below and we'll follow up to confirm scheduling for online or in-person appearances.
|Join the Data Conscience Book Tour Here!|
A WORD FOR BLACK WOMEN IN DATA
Daily-ish rest routine suggestion: Make your space smell good. A dozen drops of peppermint oil a few times a week has a calming effect. Give it a try.
Black Women in Data Summit
September 23-24, 2023
ATL | Online
$499 In-Person | $79 Online
|Grab your BWD Summit tix here!|
Community-Driven Approaches to Research in Technology & Society | CCC x MacArthur Foundation
On May 8-9, 2023, Dr. Marshall is attending the Community-Driven Approaches to Research in Technology & Society Visioning Workshop hosted by Computing Community Consortium (CCC) and the MacArthur Foundation. A group of researchers, educators and activists have been invited to discuss how to better conduct community-driven research in order to develop technology that benefits us all. The OSTP’s Blueprint for an Artificial Intelligence Bill of Rights will be anchoring our conversations. #intheroom #atthetable #tomakeimpact
Follow us on social
LAUGHING IS GOOD FOR THE SOUL
Stay Rebel Techie,
Thanks for subscribing! If you like what you read or use it as a resource, please share the newsletter signup with three friends!