Are we programming AI to be sexist?

On a certain level, there’s something of a child-parent relationship between AI technologies and their programmers. And according to a recent report, AI is being raised by a bunch of single dads.

The findings released last month by the World Economic Forum paint a dismal picture of gender representation in the American AI workforce: Less than a quarter of jobs in the fast-growing field are held by women, a gender gap roughly three times larger than in other industries’ talent pools. The WEF also noted that women working in AI “are less likely to be positioned in high-profile senior roles,” according to CNBC.

Gender imbalance has long been an issue across the tech industry. But because machine learning is unique in reflecting its programmer’s biases—both known and unknown—the lack of representation means we could be building old-world sexism into our state-of-the-art machines.

Auto Incorrect

Take the new autocomplete feature that Google rolled out for Gmail last year. The “Smart Compose” feature has been stripped of gender-based pronouns after early trials showed the predictive text might be inclined toward offensive assumptions.

Gmail product manager Paul Lambert told Reuters the problem was discovered after a research scientist typed, “I am meeting an investor next week,” which prompted the follow-up question: “Do you want to meet him?” instead of “her.”

After the Smart Compose engineering team tried several workarounds, they found the only effective fix was to eliminate gendered pronouns altogether. Lambert told Reuters the pronoun prohibition “affects fewer than 1 percent of cases where Smart Compose would propose something,” but Gmail has 1.5 billion users and Smart Compose assists with 11 percent of messages sent; at that scale, even a 1 percent edge case would have been a widespread problem.
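To put that scale in perspective, here is a rough back-of-envelope estimate in Python. The per-user message volume is an assumption for illustration; the 11 percent and “fewer than 1 percent” figures come from the Reuters report.

```python
# Back-of-envelope estimate of how many suggestions a "1 percent" case touches.
# messages_per_user_per_day is an assumed figure, not a Google statistic.
users = 1_500_000_000                 # Gmail users
messages_per_user_per_day = 5         # assumption for illustration
smart_compose_share = 0.11            # share of messages assisted by Smart Compose
gendered_pronoun_share = 0.01         # "fewer than 1 percent" of suggestions

affected = users * messages_per_user_per_day * smart_compose_share * gendered_pronoun_share
print(f"~{affected:,.0f} potentially affected suggestions per day")
# ~8,250,000 -- a tiny error rate is still millions of messages at Gmail's scale
```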

Digital Glass Ceiling

Incorrect assumptions about someone’s career are one thing, but in at least one case, AI was caught actively perpetuating gender imbalances in the workplace.

In October, Amazon was forced to pull a machine-learning program used for recruitment after it was discovered that the AI was penalizing resumes that included the word “women’s” (as in, “women’s chess club champion”). Interestingly, this bias developed because the AI was trained to vet candidates based on patterns in resumes submitted to Amazon over a 10-year period. Since most of those candidates were men, the algorithm essentially “taught itself that male candidates were preferable,” according to Reuters.
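To see how that kind of bias can creep in, consider the minimal sketch below. It is a hypothetical toy example, not Amazon’s system: a simple text classifier trained on historically skewed hiring outcomes ends up assigning a negative weight to a gender-associated word.

```python
# Toy illustration of bias learned from skewed historical data (hypothetical, not Amazon's model).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated "historical" resumes and outcomes (1 = advanced, 0 = rejected).
# In this toy history, resumes mentioning "women's" were mostly rejected.
resumes = [
    "chess club captain, software engineering intern",            # 1
    "software engineering intern, hackathon winner",              # 1
    "backend developer, open source contributor",                 # 1
    "women's chess club champion, software engineering intern",   # 0
    "women's coding society president, hackathon winner",         # 0
    "women's robotics team lead, data analyst",                   # 0
]
outcomes = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, outcomes)

# The tokenizer splits "women's" into the token "women"; inspect its learned weight.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])  # negative: the model penalizes the word
```

The model never sees gender directly; it simply learns that a word correlated with rejected resumes predicts rejection, which is exactly the pattern Reuters described.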

Amazon said that the software was never the sole factor in its hiring decisions. And while other companies—like Hilton Hotels and Goldman Sachs—are increasingly looking to use AI for hiring, the technology is not yet reliable enough to be used on its own. Nevertheless, a 2017 CareerBuilder survey revealed that 55 percent of HR managers in the U.S. said that AI would be a regular part of their work within five years.

Other Biases Emerge

Women aren’t the only group that can be unfairly subjected to the biases of AI. In 2016, ProPublica published a report detailing racial bias in the COMPAS program—or Correctional Offender Management Profiling for Alternative Sanctions—which used algorithmic data to help judges in some states decide parole and other sentencing conditions.

“COMPAS uses machine learning and historical data to predict the probability that a violent criminal will re-offend. Unfortunately, it incorrectly predicts black people are more likely to reoffend than they do,” a paper by AI professor Toby Walsh of the University of New South Wales stated.

This isn’t the only example of how algorithms can exacerbate the very problems they’re designed to solve. As the Brookings Institution notes, software that makes mortgage-approval decisions for applicants in middle-income and low-income neighborhoods would, over time, come to favor those in the middle-income neighborhoods, since they’re more likely to have higher incomes.

“Those approvals, in turn, will widen the wealth disparity between the neighborhoods, since loan recipients will disproportionally benefit from rising home values, and therefore see their future borrowing power rise even more,” the paper notes.
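The feedback loop the paper describes is easy to simulate. The sketch below is purely illustrative: the incomes, approval threshold, and 5 percent annual gain in borrowing power are assumptions, not figures from Brookings.

```python
# Illustrative simulation of an approval feedback loop (all numbers are assumptions).
middle_income = 60_000.0   # typical borrowing power, middle-income neighborhood
low_income = 40_000.0      # typical borrowing power, low-income neighborhood
THRESHOLD = 50_000.0       # hypothetical approval cutoff
GROWTH = 1.05              # approved households gain 5% borrowing power from rising home values

for year in range(1, 6):
    if middle_income >= THRESHOLD:
        middle_income *= GROWTH   # approvals compound into more future borrowing power
    if low_income >= THRESHOLD:
        low_income *= GROWTH      # never triggers: this group keeps getting denied
    print(f"year {year}: middle={middle_income:,.0f}  low={low_income:,.0f}  "
          f"gap={middle_income - low_income:,.0f}")
# The gap widens every year, even though the rule itself never changes.
```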

Counters Culture

The biggest challenge in AI programming may not be bias against women or minority groups but the differences between cultures. Asia—and China in particular, where AI has already been incorporated into everything from street cameras to news anchors—is making rapid strides in machine learning. The data culled from that region could differ significantly from the data underpinning machine-learning systems developed in the West.

“So you could imagine, for an example, data that comes from China and India—with combined population of 2.6 billion people when that data becomes widely available and used—there will be biases that we might not see in the West but may be very salient or very sensitive in our part of the world,” Eugene Tan Kheng Boon, associate professor of law at Singapore Management University, told CNBC.

Humans are the ones who will spot those biases. Even Google’s Smart Compose feature “need[s] a lot of human oversight” as it rolls out in new languages like Spanish, Portuguese, Italian, and French. “In each language, the net of inappropriateness has to cover something different,” Google engineer Prabhakar Raghavan told Reuters.

Fixing Wikipedia

Still, not all algorithms are problematic. San Francisco-based startup Primer has developed a software tool that helps identify notable female scientists missing from Wikipedia’s database. Dubbed Quicksilver, the software “uses machine-learning algorithms to scour news articles and scientific citations to find notable scientists missing from Wikipedia, and then write fully sourced draft entries for them,” according to Wired.

It has already identified thousands of female scientists who are eligible for a Wikipedia entry. Primer says that Quicksilver was fed “30,000 English Wikipedia articles about scientists, their Wikidata entries, and over 3 million sentences from news documents describing them and their work,” along with “names and affiliations of 200,000 authors of scientific papers.” The software spit back a list of 40,000 people notable enough for the digital encyclopedia.

“Quicksilver doubled the number of scientists potentially eligible for a Wikipedia article overnight,” researchers noted. Quicksilver won’t, however, automatically update the site, as Wikipedia’s human editors still need to review the entries for errors. But, given that an estimated 84 to 90 percent of the site’s editors are male, Quicksilver is proving invaluable at identifying noteworthy women.

Keen AI

Which brings us back to AI as an HR tool. Although AI has demonstrated its capacity for developing its own biases, it can still be used to help eliminate ours.

Consider studies that have shown biases against job candidates with “minority-sounding” names compared to those with more “white-sounding” names. As Forbes notes, AI can be programmed to disregard name-based data and focus only on specific skills and qualifications, helping to eliminate cultural biases.
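In practice, one straightforward approach is to strip identifying fields before a screening model ever sees the candidate. The sketch below uses hypothetical field names to show the idea; it is not any particular vendor’s implementation.

```python
# Hypothetical pre-processing step: drop identifying fields so a screening model
# can only weigh skills and qualifications.
BLOCKED_FIELDS = {"name", "email", "photo_url"}

def anonymize_candidate(candidate: dict) -> dict:
    """Return a copy of the candidate record without identifying fields."""
    return {k: v for k, v in candidate.items() if k not in BLOCKED_FIELDS}

candidate = {
    "name": "Jamal Washington",
    "email": "jamal@example.com",
    "skills": ["python", "sql", "project management"],
    "years_experience": 6,
}
print(anonymize_candidate(candidate))
# {'skills': ['python', 'sql', 'project management'], 'years_experience': 6}
```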

Others are using AI to improve other areas of the hiring process. According to Reuters, the Salt Lake City startup HireVue “analyzes candidates’ speech and facial expressions in video interviews to reduce reliance on resumes.”

We’re still in the nascent stages of understanding the technological and ethical implications of AI, both good and bad. But when it comes to programming these machines, one thing is clear: the more diversity that goes into building them, the better.