
How AI “Bakes in” Gender Biases—And What it Means for Women.

As children, we learn to extract patterns from the things we are exposed to in order to make meaning. These patterns get coded into our thinking as rules. Unless these rules are consciously unwound by later learning, inquiry, or data, they stay with us as bias and prejudice and inform all of our decisions. This is how inherent bias gets transferred from generation to generation, including gender bias.

Well, guess what? That’s what’s happening with machine learning: it’s codifying and hardening bias from prior data. And that is not good news for women.

Facebook, for example, initially censored Rosebud Woman ads as porn, simply because its standards algorithm could not conceive of intimate wellness products that reached beyond sex. It could not imagine that taking care of the vulva or vagina would benefit women’s own wellness and comfort. Similarly, artificial intelligence (AI) algorithms can’t see an image of a woman’s breasts, whether it appears as a Picasso line drawing in a breast cancer awareness ad or as a woman sharing vacation pics of herself in a swimsuit, as anything but porn. Why is this?

The answer is simple: AI is trained by humans, who give its algorithms huge sets of images to extrapolate from. The algorithms that govern digital life learn, in part, from datasets that are already available. Algorithms don’t anticipate the future; they build on historical bias, anchored in the past. They are self-fulfilling. Because they are trained on existing datasets and programmed to optimize for particular outcomes, they “bake in” gender biases and archaic values.
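To make that concrete, here is a minimal sketch (using made-up, synthetic data, not any company’s actual system) of how a model trained on historically skewed decisions simply learns to repeat the skew:

```python
# A minimal sketch of bias "baked in" from historical data. The dataset is
# synthetic: in the past, one group was approved less often at the same
# qualification level, and the model learns to reproduce that pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature 0: a genuine qualification score; feature 1: group membership (0 or 1).
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: group 1 was approved less often despite equal qualifications.
historical_approval = (qualification - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, historical_approval)

# Score two applicants with identical qualifications but different groups.
same_qualification = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_qualification)[:, 1])
# The group-1 applicant gets a lower approval probability despite identical merit.
```

Nothing in the code “decides” to discriminate; the unfairness arrives entirely through the historical labels the model is asked to imitate.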

Here are some recent examples:

— In November 2019, it was discovered that credit limits on the Apple Card, whose assessments are made by algorithms at Goldman Sachs, were dramatically lower for women than for men. This was true even in married couples where the woman had a higher credit rating. The bias was noted even by Apple co-founder Steve Wozniak, who indicated his credit limit is 10 times that of his wife. Though Goldman Sachs denied any wrongdoing, the algorithms are being scrutinized.

— In late 2018, Amazon was forced to scrap an AI recruiting tool that “did not like women.” This was because the algorithm had been trained on historically biased data drawn from the previous ten years, when men dominated the business and data worlds. “In effect, Amazon’s system taught itself that male candidates were preferable.”

— In 2016, a Microsoft Twitter bot called Tay (“thinking about you”) was designed “to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.” In short order, the algorithm began posting racist, sexually-charged tweets and images.

— For many years, clinical trials of new drugs have included more men than women; in many studies, women of childbearing age were excluded altogether. The resulting data points, programmed into medical databases and related AI algorithms, bias treatments and outcomes.

It’s easy to see how gender discrimination becomes institutionalized. Think about it: If you were an AI tasked with selecting the 50 best authors, scientists, artists, composers, lawyers, etc., and your findings were based on historical data sets, what do you think your results would be?

So, what do we do about this? How do we avoid gender bias when building machine learning sets for today’s increasingly influential AIs?

An article published a few weeks ago in the Harvard Business Review suggests four ways to do this. Not surprisingly, they involve getting more diversity on board when collecting samples: not just from women, but from minority groups and people with diverse backgrounds as well. And, somewhat amazingly, these AIs can actually recognize when they are favoring a gender or racial group unfairly. They have to be taught how, of course. But it’s possible, and necessary.
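One simple way a system can be “taught” to notice unfair treatment is to compare its decision rates across groups. The sketch below is illustrative only (the function name and the 0.2 tolerance are our own assumptions, not from the HBR article):

```python
# A minimal sketch of a demographic-parity style check: compare the model's
# positive-decision rates across groups and flag large gaps for human review.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive decisions for each group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

# Example: model decisions for 8 applicants, half from each group.
predictions = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

rates = selection_rates(predictions, groups)
gap = abs(rates[0] - rates[1])
print(rates, gap)

# If the gap exceeds a chosen tolerance, flag the model for review or retraining.
if gap > 0.2:
    print("Warning: selection rates differ substantially between groups.")
```

Checks like this don’t fix the underlying data, but they give teams a tripwire: a signal that the algorithm’s behavior deserves a closer, human look.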

“We have an obligation to create technology that is effective and fair for everyone,” says Josh Feast, the article’s author.

The men and women coding AI are very likely unaware of the built-in bias, and therefore are not striving to correct for it. Part of our responsibility, then, is to stay awake to the algorithms in use in our companies and in the services we license at work, and to call for attention to fair, humane tech. We're looking forward to the day when a female pediatrician has the same borrowing power as her male partner, because the algorithm knows what it's doing.

----

References:

Apple Card Algorithm: 

https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/

AI and recruiting:

https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Tay:

https://en.wikipedia.org/wiki/Tay_(bot)

Medical research:

https://www.nytimes.com/2019/09/09/health/breast-cancer-men.html

Possible Solutions:

https://hbr.org/2019/11/4-ways-to-address-gender-bias-in-ai
