
This article was published on August 10, 2021

Twitter’s image-cropping algorithm marginalizes the elderly, disabled people, and Arabic script

An AI bias bounty contest exposed numerous potential harms



A Twitter algorithm that favored light-skinned faces has now been shown to perpetuate a range of further biases.

The algorithm estimated whom a person would want to see first in a picture so the image could be cropped to a suitable size on Twitter. But it was ditched after users found it chose white faces over Black ones.
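
For readers curious about the mechanics, the cropping step itself is conceptually simple: a saliency model scores every pixel for predicted attention, and the crop is centered on the highest-scoring point. Below is a minimal sketch of that idea in Python; the saliency map is assumed to come from some saliency-prediction model, and this is not Twitter’s actual code.

```python
import numpy as np

def saliency_crop(image: np.ndarray, saliency: np.ndarray,
                  crop_h: int, crop_w: int) -> np.ndarray:
    """Crop `image` around its most salient point.

    `saliency` is a 2D per-pixel attention map, assumed to come from
    some saliency-prediction model (an illustrative stand-in, not
    Twitter's model).
    """
    h, w = saliency.shape
    # Find the pixel the model predicts viewers will look at first.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop on that pixel, clamped to the image bounds.
    top = int(np.clip(y - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, w - crop_w))
    return image[top:top + crop_h, left:left + crop_w]
```

Anyone who falls outside that window disappears from the preview, which is why skewed saliency scores translate directly into who gets cropped out.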

Twitter sought to identify further potential harms in the model by launching the industry’s first algorithmic bias bounty contest.


The competition winners, who were announced on Monday, discovered a plethora of further issues.

Twitter’s algorithmic biases

Bogdan Kulynych, who bagged the $3,500 first-place prize, showed that the algorithm can amplify real-world biases and social expectations of beauty.

Kulynych, a grad student at Switzerland’s EPFL technical university, investigated how the algorithm predicts which region of an image people will look at.

The researcher used a computer-vision model to generate realistic pictures of people with different physical features. He then compared which of the images the model preferred.
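
In essence, the comparison boils down to running each synthetic variant through the saliency model and seeing which one scores highest, since the higher-scoring face is the one the cropper would keep. Here is a rough sketch of that comparison step; the saliency_model callable and the variant labels are illustrative assumptions, not Kulynych’s actual tooling.

```python
import numpy as np

def preferred_variant(variants: dict[str, np.ndarray], saliency_model) -> str:
    """Return the label of the variant the saliency model scores highest.

    `variants` maps a label (e.g. 'lighter_skin' vs 'darker_skin') to a
    synthetically generated face image; `saliency_model` is any callable
    returning a per-pixel attention map. Both are illustrative stand-ins.
    """
    scores = {label: float(saliency_model(image).max())
              for label, image in variants.items()}
    # The highest-scoring variant is the one the cropper would favor
    # if the faces had to compete for space in a single picture.
    return max(scores, key=scores.get)
```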

Kulynych said the model favored “people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits”:

These internal biases inherently translate into harms of under-representation when the algorithm is applied in the wild, cropping out those who do not meet the algorithm’s preferences of body weight, age, skin color. This bias could result in exclusion of minoritized populations and perpetuation of stereotypical beauty standards in thousands of images.

The other competition entrants exposed further potential harms.

The runners-up, HALT AI, found the algorithm sometimes crops out people with grey hair or dark skin, or those using wheelchairs, while third-place winner Roya Pakzad showed the model favors Latin script over Arabic script.

The algorithm also shows a racial preference when analyzing emoji. Vincenzo di Cicco, a software engineer, found that emoji with lighter skin tones are more likely to be retained in the cropped image.

Bounty hunting in AI

The array of potential algorithmic harms is concerning, but Twitter’s approach to identifying them deserves credit.

There’s a community of AI researchers that can help mitigate algorithmic biases, but they’re rarely incentivized in the same way as white-hat security hackers.

“In fact, people have been doing this sort of work on their own for years, but haven’t been rewarded or paid for it,” Twitter’s Rumman Chowdhury told TNW before the contest.

The bounty hunting model could encourage more of them to investigate AI harms. It can also operate more quickly than traditional academic publishing. Contest winner Kulynych noted that this fast pace has both flaws and strengths:

Unlike academic publishing, here I think there was not enough time for rigor. In particular, my submission came with plenty of limitations that future analyses using the methodology should account for. But I think that’s a good thing.

Even if some submissions only hinted at the possibility of the harm without rigorous proofs, the ‘bug bounty’ approach would enable [us] to detect the harms early. If this evolves in the same way as security bug bounties, this would be a much better situation for everyone. The harmful software would not sit there for years until the rigorous proofs of harm are collected.

He added that there are also limitations in the approach. Notably, algorithmic harms are often a result of design rather than mistakes. An algorithm that spreads clickbait to maximize engagement, for instance, won’t necessarily have a “bug” that a company wants to fix.

“We should resist the urge of sweeping all societal and ethical concerns about algorithms into the category of bias, which is a narrow framing even if we talk about discriminatory effects,” Kulynych tweeted.

Nonetheless, the contest showcased a promising method of mitigating algorithmic harms. It also invites a wider range of perspectives to investigate the issues than any one company could incorporate (or would want to).

