FlexOS | AI in HR Today with Anthony Onesto
Issue #27

Heretofore - Navigating the AI Legal Challenges

From Risk to Revolution: Why AI’s Legal Woes May Be Its Greatest Ally


I wrote just last week that change is the only constant, and that it's now moving at an accelerated rate. Today, the most significant and rapidly evolving change is the meteoric rise of Artificial Intelligence (AI). We're no longer talking about robots taking over factory floors; we're talking about AI revolutionizing all jobs, including HR and recruiting. Market-impacting technologies have always brought a surge of legal challenges with them, and AI is no different. Below, I dive into why the legal challenges facing AI in HR and recruiting aren't a roadblock, but rather a catalyst for improvement. You'll learn about the landscape of AI-related lawsuits and biases, and discover why fully restricting usage is not the answer; instead, a human-in-the-loop approach is crucial for navigating this exciting, yet sometimes chaotic, future.


Let's Look at the Numbers

Note: statistics on lawsuits focused solely on AI in HR and recruiting are still emerging, so let's look at broader trends and predictions that highlight the increasing legal scrutiny around AI:

  • Workday Lawsuit - A federal judge ruled that a class-action lawsuit against Workday can proceed. The suit alleges that its AI-powered hiring software discriminated against applicants over age 40, potentially affecting hundreds of thousands of job seekers.
  • Bias in AI Resume Screening - A University of Washington study found that AI resume-screening tools favored white-associated names 85% of the time, highlighting significant racial bias.
  • Regulatory Landscape - As of 2024, over 400 AI-related bills had been introduced across U.S. state legislatures, with 16 states enacting AI legislation, reflecting growing legal scrutiny of AI applications, including those in hiring.

Now, I know what some of you are thinking: "Lawsuits! Regulations! The end of AI as we know it!" And to that, I say, "History is sometimes the greatest teacher!" When the internet first emerged, when mobile phones became ubiquitous, and when social media became deeply ingrained in our daily lives, what do you think happened? Yes, lawsuits went up! Why? Because these technologies were new, unregulated, and, to be honest, somewhat messy. But did those lawsuits stop progress? Nope. They pushed us to be better, to think more critically about the technology, and ultimately to advocate for necessary legislation. We're still waiting on some of those laws, but the point is this: legal battles help refine and strengthen a technology, holding its owners and users accountable.

The Problem Isn't AI, It's Us (Sometimes)

Let's be real for a second: human recruiting and selection? It's far from perfect to begin with. We've got biases that are baked into our current systems. Most companies, even the ones that think they’ve got the secret sauce for interviewing, are mediocre at best. Do we know what our leaders are asking in interviews? Are those scorecards ever actually filled out? The problem isn't AI itself; it's when we bring the inherent flaws of the human experience into the AI, creating what Cathy O’Neil coined "Weapons of Math Destruction” in her book by the same name. We’ve seen firsthand how an AI-powered system can perpetuate existing biases if the data it learns from is already biased. For example, based on available data, women occupy only about 26% of positions in STEM (science, technology, engineering, and mathematics) fields. In Silicon Valley, publicly available reports indicate that approximately 17% of Google's tech workers are women, while the figures for Facebook and Yahoo stand at around 15% each. These numbers highlight the existing disparities that AI, if not carefully implemented, could amplify.

The Human-AI Partnership: Better Together

But here’s the crucial part: it’s never "AI or humans." That’s a false dichotomy. As highlighted in "Race Against the Machine," there will always be things machines do best, things humans do best, and then there’s the magic that happens when you combine the two. Currently, we need human intervention to guide and refine AI's capabilities, even though AI will surpass humans at specific tasks. Think about it: a human recruiter trying to scan infinite data points across LinkedIn, countless databases, and the entire internet to find the most qualified candidate? It’s an impossible feat. With their ability to process vast amounts of data and identify patterns, machines will augment humans in identifying potential candidates. While current tools on LinkedIn are helpful, they have their limitations. The future of recruiting, with AI at the helm, will leverage this incredible data-processing power to surface candidates that companies might otherwise miss. The cautionary tale is what is happening right now with Workday, so make sure you have a human in the loop during the process to ensure fairness and to educate the algorithms.
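To make the human-in-the-loop idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the `Candidate` record, the `ai_score`, the threshold); real screening systems are far more involved. The point it illustrates is the policy itself: the model is only allowed to *advance* candidates, never to reject anyone on its own. Every non-pass lands in a human review queue, which is exactly the safeguard the Workday case argues was missing.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    ai_score: float  # 0.0-1.0, produced by some hypothetical screening model

@dataclass
class ReviewQueue:
    auto_advanced: list = field(default_factory=list)
    needs_human_review: list = field(default_factory=list)

def triage(candidates, advance_threshold=0.8):
    """Route candidates: only clear passes advance automatically.
    Everyone else goes to a human reviewer -- no AI-only rejections."""
    queue = ReviewQueue()
    for c in candidates:
        if c.ai_score >= advance_threshold:
            queue.auto_advanced.append(c)
        else:
            # A human, not the model, makes the final call on these.
            queue.needs_human_review.append(c)
    return queue

pool = [Candidate("A", 0.92), Candidate("B", 0.55), Candidate("C", 0.78)]
result = triage(pool)
```

The design choice worth noticing: the threshold only gates who skips the queue, not who gets cut, so a miscalibrated or biased model can waste reviewer time but cannot quietly reject a protected group at scale.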

Embracing the Disruption

The impact of AI on recruiting is clear: yes, there will be lawsuits, and yes, there will be an increase in legal activity. But this isn’t a sign to slam the brakes on AI. Instead, it’s a sign that we’re pushing the boundaries, learning, and forcing ourselves to build more robust, ethical, and transparent tools. This pressure will ensure that AI tools, their creators, and their users are held accountable, ultimately making the recruiting process fairer and more efficient for everyone involved. Some will now advise sidestepping the disruption because of the legal challenges, but I believe we should embrace it and use it to build a better, business-impacting future for HR and recruiting.