
Brandeis Marshall - DataedX

Rebel Tech Newsletter: A New Era for AI...?


November 7th, 2023

The Rebel Tech Newsletter is our safe place to critique data and tech algorithms, processes, and systems. We highlight a recent data article in the news and share resources to help you dig deeper to understand how our digital world operates. DataedX Group helps data educators, scholars and practitioners learn how to make responsible data connections. We help you source remedies and interventions based on the needs of your team or organization.


IN DATA NEWS

“President Biden has issued an Executive Order to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.”

The full Executive Order on AI Development and Use is over 100 pages long and it has A LOT of information. I’ve read the whole document but digested it up to Section 4.5, Reducing the Risks Posed by Synthetic Content. There’s some hope and some heartburn. If you want to read the whole document, you can do so here.

The hope: This Executive Order (EO) is more informed, thoughtful and insightful than I expected. First, I appreciate the attempt by leadership to establish systemic governing protocols to evaluate AI development and use. Second, as a datahead, I appreciated the frequent mention of data (75 times). It’s as if they know you can’t discuss tech without discussing the data and understanding the assignment (what I say all the time): “data fuels the algorithms that make the tech” – well, now it has been rebranded as AI. It was also great to see that “data-driven decision-making” didn’t appear anywhere. Data doesn’t drive decisions; people do. This EO fundamentally gets that point. That was refreshing. Hence, a bunch of oversight committees/boards and reports are specified to help provide a landscape of AI in the US. Third, there is a clear recognition of the algorithm-based harms generated by AI. Mitigating the disparities in housing, healthcare and other systems is explicitly called out. Lastly, several sections are dedicated to insisting on professional development in AI for the current government workforce and on attracting AI talent to federal agencies. I’m glad that AI skills (where I’d include data skills as part of AI skills development) – from the mechanics of coding to techno-ethical impacts – are a noted priority. The responsibility of being more ethical with AI is on everybody. That deserves a round of applause.

Others, from former US government officials to AI ethics/responsible AI organizations, have shared their reflections. These summaries essentially support this US Administration’s wrangling of AI. You’re encouraged to read them. Now, with that said, you’re reading this commentary because you’re curious about what’s missing or lacking in this EO. Let me give my perspective.

The heartburn: First, there’s no explicit definition in this EO of what safe, secure and trustworthy development and use of AI is. The terms ‘safe AI’ and ‘AI safety’ make me pause because AI hasn’t been safe for historically excluded groups since its inception – so it’s improbable that’s what the term refers to in practice. Also, ‘safe AI’ has appeared more often in the mainstream media as a retooling of the ‘AI for good’ movement that became popular 7-10 years ago, with tendencies of extracting and exploiting historically excluded communities instead of prioritizing their needs. The term ‘secure AI’ tends to refer to combatting software and hardware vulnerabilities across our digital infrastructure. The national and international considerations for securing our cyber borders will forever be a top concern, but they do overshadow more local disparities enabled by AI. Secure AI is almost a near-legalized way to sanction algorithmic harms, e.g., facial recognition tools being used in policing or forensic genealogy. Trustworthy AI is the best of the lot, imho, since it’s often considered an approach that bridges the gap between the social and technical implications of AI. At least humans are intentionally in the loop with trustworthy AI. But in the computing sciences, especially the AI subfield, the social implications tend to be purposefully ignored. There’s a tendency to try to ‘improve’ technology as a way of addressing social ills – which is the foundation of the tech-can-solve-all-of-our-problems belief. It’s a belief I don’t share. It would be useful for these distinctions to be explicit.

Second, the emphasis on identifying cybersecurity and privacy concerns remains palpable. It stands in direct opposition to the lack of existing state-level and national data privacy laws. See the US state legislation tracker by IAPP for more information. It’d be nice to know whether all this cyber-border protection will shield US cities and states from ransomware attacks or protect people’s data from rising data breaches. I don’t know about you, but I’m surely tired of receiving data breach disclosure notifications every few months. I need these companies to get it together.

Third, the litany of oversight committees/boards and reports is a great start, but then what? I didn’t see any metrics or success criteria. The key objectives and end goals aren’t mentioned. I wonder if these committees/boards and reports will become pro forma. With no way to measure against a tangible outcome/deliverable, it feels like busy work, and progress will elude us. I have a whole sub-section in Data Conscience (Chapter 9’s Regulating the Tech Sector) that makes several suggestions. One recommendation is to create a new agency called the Center for Technological Civility (CTC) that’d be focused on protecting the public from digital harms and providing guidance on being digitally healthy. This CTC would centralize the bulk of these activities and hold many people/organizations accountable. That leads me to my last and biggest heartburn.

Lastly, the notion of accountability is implicitly treated as a post-product-release activity rather than a pre-release one. Accountability before an AI product’s release is possible – read Data Conscience Chapters 5-8, for example. Big Tech is still basically regulating itself. It’s as if a concession was made that the US government won’t forcefully try to regulate AI anymore. The existing anecdotal and infrequent lawsuits against Big Tech firms would remain – some fiscal penalties, but not enough to really hurt these companies significantly. There’s a seemingly ultra-positive tone that Big Tech will eventually do right by the public if given a chance. It has been 30+ years and they haven’t. There’s no evidence these companies will operate differently without punitive cause. It’s disheartening that real, substantive governing of AI isn’t likely to happen. Adding substantially more personnel to the Federal Trade Commission and other regulatory agencies would have been a good indicator of such a priority.

Like what you're reading? Find it informative and insightful? You can sponsor the Rebel Tech Newsletter and follow on LinkedIn.


DATA CONSCIENCE CORNER

"The great hope of data visualizations is that the pretty pictures, in the form of charts, graphs, dashboards, or some other type of media, will bring clarity to those who are able to view them. But data visualizations simply try to take information and then speculate on which outputs provide valuable knowledge and insights." pg 191-192 Data Conscience

Data visualizations can be very misleading. Take a few moments to review those visualization materials before making a judgment. There are assumptions you’re making (and likely assumptions the creator of those visualizations made too). Whether you’re a data practitioner or a data leader, visualizations are a regular part of your work life. Here’s the top question you can ask yourself and the team to help you evaluate the relevance of the pretty picture you’re seeing: How does the team verify outcomes associated with the data processes, algorithms and systems used to generate these visualizations? Clarifying the key factors contributing to the figures gives you an indication of their strengths and weaknesses. You’d be in a better position to make a more informed decision.


A WORD FOR BLACK WOMEN IN DATA

A Word of Encouragement: “Perfectionism is not human. To be human is to rest. I will open my heart to rest.” ~ Tricia Hersey of The Nap Ministry.

The time has come to leave perfectionism behind. It’s past time to let it go. It’s not too late to DELIVER you.

A Word for Promoting a Daily-ish Rest Routine: I watch a pre-Google TV show nightly. I’m a huge fan of whodunnit series. I'm currently re-watching Murder, She Wrote.

A Word about the Streams of Income Cohort: The Streams of Income (SOI) cohort guides Black women in the data industry in building another consistent income stream. We’re shifting mindsets, crafting our messaging, executing business mechanics and making more than just money during cohort sessions, open business-building hours and action-provoking assignments. It’s a 6-month engagement starting in December.

You don't think you can start a data side-hustle enterprise (SHE) because you're more scared of potential conflicts with your plantation job duties.

This is the 🥇 objection I hear from BWDs interested in my Streams of Income (SOI) cohort program.

Valid. You don't want to unnecessarily draw attention to yourself. Ok. But you still want to start building a SHE, and getting a new job ain't in the cards (the job search landscape is straight 🗑) -- so what are OTHER ways to leverage your data skills, expertise and experiences and turn them into 💰💰💰?

Here are 5 SHEs that won't draw your plantation job's attention:

1️⃣ data upskilling coach

2️⃣ data career coach

3️⃣ data strategist/consultant

4️⃣ data educational consultant

5️⃣ data expert speaker

Yeah sis, I know, you already do at least 1 of these activities on a volunteer, unpaid basis. You didn't think you could actually make 💴, build your confidence and meet other like-minded BWDs to support you. Contact me in my LinkedIn DMs with questions.

Join the next SOI cohort, enrollment closes Nov 17th.


UPCOMING EVENTS

October has been a high-engagement month!

  • DataedX led the coordination of the in-person AIAI Network KickOff on October 4, 2023. Dr. Brandeis with the rest of the AIAI Network team gave an overview of the initiative, announced our first seed grant round and sparked a solution-oriented conversation. About 70 people attended the event at Science Gallery Atlanta. Subscribe to stay up-to-date on justice-centered AI: https://aiai.network/newsletter/
  • An engaging virtual fireside chat with Prem Natarajan, Head of Enterprise AI, Data, and Analytics, on mitigating AI bias in tech systems was held on October 10, 2023 at Capital One’s 2023 Blacks in Tech Summit. It was wonderful to learn that over 1,000 Capital One team members tuned in live!
  • On October 14, 2023, Dr. Brandeis shared her journey into the responsible AI sector with 30 high school students and their undergraduate mentors participating in UChicago’s Data4All High School Bridge Workshop.
  • Dr. Brandeis delivered a timely virtual keynote workshop at Portland Community College’s AI Symposium on October 18th. Hundreds of participants joined a discussion covering history and culture, social justice and AI, and generative AI in and around higher education.
  • And rounding out October, Dr. Brandeis was again outside, this time in DC. She gave the keynote address at Online Learning Consortium’s Accelerate Conference on October 25th on effective ways for digital educators/leaders to responsibly integrate AI tools. That Maryland Ballroom was packed.

Follow us on social


LAUGHING IS GOOD FOR THE SOUL

Stay Rebel Techie,

Brandeis

Thanks for subscribing! If you like what you read or use it as a resource, please share the newsletter signup with three friends!

Brandeis Marshall - DataedX

Learn how to make more responsible data connections. I help educators, researchers and practitioners align data policies, practices and products for equity. Sign up for my Rebel Tech Newsletter!
