In a refrain that feels all too familiar by now: Generative AI is repeating the biases of its makers.
A new investigation from Bloomberg found that OpenAI's generative AI technology, specifically GPT-3.5, displayed preferences for certain racial groups in questions about hiring. The implication is that recruiting and human resources professionals who are increasingly incorporating generative AI-based tools into their automated hiring workflows, like LinkedIn's new Gen AI assistant, may be promulgating racism. Again, sounds familiar.
The publication used a common and fairly simple experiment of feeding fictitious names and resumes into AI recruiting software to see just how quickly the system displayed racial bias. Studies like these have been used for years to spot both human and algorithmic bias among professionals and recruiters.
"Reporters used voter and census data to derive names that are demographically distinct — meaning they are associated with Americans of a particular race or ethnicity at least 90 percent of the time — and randomly assigned them to equally-qualified resumes," the investigation explains. "When asked to rank those resumes 1,000 times, GPT 3.5 — the most broadly-used version of the model — favored names from some demographics more often than others, to an extent that would fail benchmarks used to assess job discrimination against protected groups."
The experiment sorted names into four racial or ethnic categories (White, Hispanic, Black, and Asian) and two gender categories (male and female), and submitted them for four different job openings. ChatGPT consistently placed "female names" into roles historically held by higher numbers of women, such as HR roles, and chose Black women candidates 36 percent less frequently for technical roles like software engineer.
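As for the benchmarks the quote refers to: employment auditors commonly apply the EEOC's four-fifths rule, which flags a process for adverse impact when a group's selection rate falls below 80 percent of the most-favored group's rate. A quick sketch with invented counts shows how a tally like the one above would be scored:

```python
# EEOC four-fifths (80%) rule check; the selection counts are invented
# for illustration: times each group's resume ranked first out of 1,000 runs.
selections = {"White": 290, "Asian": 280, "Hispanic": 250, "Black": 180}
trials = 1000

rates = {group: count / trials for group, count in selections.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    verdict = "adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, ratio {impact_ratio:.2f} -> {verdict}")
```

In this made-up tally, Black candidates' 0.18 selection rate is only 62 percent of the top group's 0.29, well under the 0.8 threshold, which is the sense in which Bloomberg says GPT-3.5 "would fail benchmarks used to assess job discrimination."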
ChatGPT also distributed equally qualified resumes unequally across the jobs, skewing rankings depending on gender and race. In a statement to Bloomberg, OpenAI said this doesn't reflect how most clients use its software in practice, noting that many businesses fine-tune responses to mitigate bias. Bloomberg's investigation also consulted 33 AI researchers, recruiters, computer scientists, lawyers, and other experts to provide context for the results.
The report isn't revolutionary in light of the years of work by advocates and researchers warning of the ethical debt of AI reliance, but it's a powerful reminder of the dangers of widespread generative AI adoption without due attention. As just a few major players come to dominate the market, and with it the software and data behind our smart assistants and algorithms, the pathways for diversity narrow. As Mashable's Cecily Mauran reported in an examination of the internet's AI monolith, incestuous AI development (that is, building models trained no longer on human input but on the output of other AI models) leads to a decline in quality, reliability, and, most importantly, diversity.
And, as watchdogs like AI Now argue, "humans in the loop" might not be able to help.