
Algorithms Can Hurt Us.

By Karen and Erica

A recent article in the Financial Times tells an interesting story about possible algorithmic bias at LinkedIn. 

What is LinkedIn? Here is what it says about itself and its mission: LinkedIn is “the world’s largest professional network with more than 1 billion members in more than 200 countries and territories worldwide.” Its vision is to “[c]reate economic opportunity for every member of the global workforce,” and its mission is “simple: connect the world’s professionals to make them more productive and successful.” It is, in short, a powerful tool used by professionals to connect. YouGov breaks down users in the U.S., estimating that 14% of users are retired and 28% are Boomers.

According to reports given to the FT, LinkedIn introduced a new algorithm early this year, and the effect on some users was catastrophic. Suspecting a gender issue, a number of women reissued their posts under male names, taking ChatGPT’s advice on how to make the posts sound more masculine. The Washington Post, which picked up the story, described one user’s experience:

Megan Cornish had spent months puzzling about her waning reach on LinkedIn when she decided to run a test: She recast her profile to seem more like a guy.

Within a week, her impressions on the careers website quadrupled.

“I wish I was kidding about this,” the mental health professional wrote late last month on LinkedIn, after describing how she used ChatGPT to give her profile a more masculine edge. When she asked it to make her content more “male coded,” the artificial intelligence chatbot axed words like “communicator” and “clinician advocate” and replaced them with language about “driving ethical growth in behavioral health,” Cornish detailed in a Substack post titled “LinkedIn Likes Me Better as a Man.”

Others apparently experienced dramatic declines in readership, though the Washington Post reported that of the many people repeating the experiment, only some saw a change in the number of followers. LinkedIn denies bias, and some experts suspect there is no intentional bias. The Post reported one possible explanation for the observed results:

“LinkedIn is a professional platform, and business language is very male,” [Carol Kulik, a professor in the Center for Workplace Excellence at the University of South Australia] said. While she doesn’t doubt LinkedIn’s assertion that its algorithm isn’t designed to suppress certain identity groups, “is it going to be sensitive to gendered language? Of course it is!”
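To see how that could play out mechanically, consider a feed model that learns from historical engagement. The sketch below is not LinkedIn’s actual system; it is a minimal, hypothetical illustration in plain Python, with invented data, of how word-level skew can emerge from training data alone:

```python
# Hypothetical sketch: a toy "reach" model learns the average engagement
# of each word from historical posts. No rule targets gendered language,
# yet the model reproduces whatever skew the history contains.

from collections import defaultdict

# Invented training history: (words in a post, past engagement score).
history = [
    (["driving", "growth", "results"], 100),
    (["driving", "strategy", "results"], 90),
    (["communicator", "clinician", "advocate"], 20),
    (["communicator", "empathy", "care"], 25),
]

# "Training": record the engagement observed alongside each word.
word_scores = defaultdict(list)
for words, engagement in history:
    for word in words:
        word_scores[word].append(engagement)

def predicted_reach(words):
    """Score a new post by the mean historical engagement of its words."""
    known = [sum(word_scores[w]) / len(word_scores[w])
             for w in words if w in word_scores]
    return sum(known) / len(known) if known else 0.0

print(predicted_reach(["communicator", "advocate"]))  # low  (21.25)
print(predicted_reach(["driving", "growth"]))         # high (97.50)
```

In this toy, no engineer ever wrote a rule to downrank “communicator”; the model simply inherits the skew in its history, which is exactly the sensitivity Kulik describes.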

Cindy Gallop is a prolific LinkedIn user who apparently experienced an extreme decline in readership. She joined another user to start a campaign called Fairness In The Feed, asking LinkedIn to revise its algorithm.

The social media companies have handed over control to AI at an alarming rate and appear to be ignoring reports of damaging trends. Nowhere is this more apparent than on LinkedIn. For months, women, global majority professionals, and anyone posting about things not considered corporate have seen their reach plummet. When women around the world reported an exponential rise in impressions simply by changing their profiles to men, we started a petition to bring the whole issue to light.

This is not the first time we have heard about algorithmic bias. Algorithmic bias is when bias happens within a computer program or system, and it is often discussed in relation to systems that operate on their own, like artificial intelligence. Algorithmic bias occurs in many areas where algorithms operate, for example in medicine, either because of the (often unconscious) biases of coders or because of the nature of the data the algorithms are fed.

Artificial intelligence (AI) has an astonishing potential in assisting clinical decision making and revolutionizing the field of health care. A major open challenge that AI will need to address before its integration in the clinical routine is that of algorithmic bias. Most AI algorithms need big datasets to learn from, but several groups of the human population have a long history of being absent or misrepresented in existing biomedical datasets. If the training data is misrepresentative of the population variability, AI is prone to reinforcing bias, which can lead to fatal outcomes, misdiagnoses, and lack of generalization. 
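The failure mode the quote describes is easy to reproduce in miniature. Below is a minimal, hypothetical sketch in plain Python, with invented numbers and no real clinical model: a single diagnostic threshold is fit to a dataset in which one group supplies 95% of the examples, and the threshold that maximizes overall accuracy quietly fails the underrepresented group:

```python
# Hypothetical sketch: a diagnostic marker behaves differently in two
# groups, but group B is only 5% of the training data. All numbers are
# invented for illustration.

import random

random.seed(0)

def sample(group, sick):
    # Disease shifts the marker to 6.0 in group A but only to 3.0 in
    # group B; healthy readings center at 2.0 in both groups.
    center = (6.0 if group == "A" else 3.0) if sick else 2.0
    return group, sick, random.gauss(center, 0.5)

data = [sample("A", random.random() < 0.5) for _ in range(950)] \
     + [sample("B", random.random() < 0.5) for _ in range(50)]

# "Training": pick the single threshold that maximizes overall accuracy.
best = max(
    (t / 10 for t in range(0, 80)),
    key=lambda t: sum((x >= t) == sick for _, sick, x in data),
)

# The majority-tuned threshold settles between A's healthy and sick
# clusters, so it misses most of group B's sick cases near 3.0.
for group in ("A", "B"):
    rows = [(sick, x) for g, sick, x in data if g == group]
    accuracy = sum((x >= best) == sick for sick, x in rows) / len(rows)
    print(f"group {group}: accuracy {accuracy:.2f}")
```

Overall accuracy here looks excellent, which is precisely why the harm is easy to miss: it is concentrated in the group the training data barely saw.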

Aside from the obvious problem of a business tool possibly exhibiting bias, for whatever reason, against female names or language perceived as feminine, this story presents a bigger question. The risk of algorithmic bias is known, and that bias can hurt the people it affects. Given that, what rules should govern the provider of a service that may be affected by algorithmic bias? A robust conversation about how to balance the interests of all participants when algorithms are in play seems in order.

We are at the beginning of the new age of AI. It’s exciting, but it’s complicated. AI needs rules. We all need to play a role in their formulation.


We want to hear what you have to say.

  1. Thank you for this article. I feel so validated! I recently removed myself from a networking group for female professionals after getting consistent pushback on bringing up this very issue. It was disheartening to be gaslit by other women when sharing my personal experience with LinkedIn and the disappearance of the voices of so many WOC on LinkedIn and in the US workforce.
    I appreciated the introduction of algorithmic bias. Great article.