
Facial recognition tools get it so wrong with Michelle Obama, Serena Williams, researcher finds

How three women in tech are dealing with Silicon Valley’s diversity problem, digital consent and algorithms designed to discriminate.

When coder, poet and digital activist Joy Buolamwini asked, “AI, ain’t I a woman?,” she received some unsettling answers. Inspired by the question that women’s rights activist and abolitionist Sojourner Truth asked more than 150 years ago, Buolamwini, a computer scientist at the MIT Media Lab, was researching facial recognition technology created by companies such as Amazon, Microsoft and Google. She saw algorithms misidentify Michelle Obama, Oprah Winfrey and Serena Williams as male. One analysis questioned whether Obama was wearing a “hairpiece” in a photo.

How else does artificial intelligence (AI) get it wrong? Buolamwini, joined by a Microsoft executive and the founder of a non-profit that uses data to increase diversity in the tech industry, discussed these technology-driven errors with Elaine Welteroth, former editor-in-chief of Teen Vogue, at the 10th annual Women in the World summit in New York City. They recognized the potential of AI — and also noted how the designers building these algorithms introduce their own biases, inadvertently or not, in a Silicon Valley tech sector dominated by white men.

“The flip side is that the same technology that can power can also divide and is dividing,” said Toni Townes-Whitley, the president of U.S. Regulated Industries at Microsoft.

Laura Gómez, who focuses on hiring at tech firms, grew up in Silicon Valley and was an early employee at Twitter. When Hugo Chávez, then president of Venezuela, first joined Twitter, Gómez said she asked higher-ups at the company whether he should have access to a platform that broadcast his messages unfiltered. There were no real answers, she said, and since then abuse has proliferated across digital platforms that remain unable to recognize it. She started her company, Atipica, to address the gap between the people who design technology and the people who use it.

Then there is the issue of consent. Just as in everyday life, consent is a growing concern when it comes to AI. “AI is infiltrating our lives in ways we are not even aware,” said Buolamwini, who also founded the Algorithmic Justice League. “We have your face-prints. When you go into a shop, we can identify you and give you information of your likes and preferences. Where is the consent? We are being robbed of choice,” she said.

This raises the question: Who is in charge? At the moment, U.S. tech titans are largely left to self-regulate and set their own ethical standards. While companies like Microsoft have created ethics committees to grapple with questions about AI and bias in algorithms and to establish good practices, broader efforts may be required. “We have a lot to learn,” Townes-Whitley said. “The one thing we are absolutely clear on is that we don’t want technologists making all these decisions.”

While acknowledging Microsoft’s leadership on ethics and regulation, Buolamwini was skeptical: “If we shouldn’t have self-regulation, but then you yourselves are writing the laws through lobbyists, then we’re not bringing in the voices. I hope there will also be the moral courage to make sure the critical voices are in the room.”

Joy Buolamwini, Toni Townes-Whitley and Laura Gómez, interviewed by Elaine Welteroth, at The 2019 Women In The World Summit in New York City, April 11, 2019.

Additional reporting by Vittoria Elliott.

