OK, what is going on with LinkedIn’s algo?

One day in November, a product strategist we’ll call Michelle (not her real name) logged into her LinkedIn account and switched her gender to male. She also changed her name to Michael, she told TechCrunch.

She was partaking in an experiment called #WearthePants where women tested the hypothesis that LinkedIn’s new algorithm was biased against women. 

For months, some heavy LinkedIn users complained about seeing drops in engagement and impressions on the career-oriented social network. This came after the company’s vice president of engineering, Tim Jurka, said in August that the platform had “more recently” implemented LLMs to help surface content useful to users. 

Michelle (whose identity is known to TechCrunch) was suspicious about the changes because she has more than 10,000 followers and ghostwrites posts for her husband, who has only around 2,000. Yet she and her husband tend to get around the same number of post impressions, she said, despite her larger following. 

“The only significant variable was gender,” she said. 

Marilynn Joyner, a founder, also changed her profile gender. She’s been posting on LinkedIn consistently for two years and noticed in the last few months that her posts’ visibility declined. “I changed my gender on my profile from female to male, and my impressions jumped 238% within a day,” she told TechCrunch.

Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, and Lucy Ferguson, among others.

LinkedIn said that its “algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed” and that “a side-by-side snapshot of your own feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias” within the Feed. 

Social algorithm experts agree that explicit sexism may not have been a cause, although implicit bias may be at work.  

Platforms are “an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly,” Brandeis Marshall, a data ethics consultant, told TechCrunch.  

“The changing of one’s profile photo and name is just one such lever,” she said, adding that the algorithm is also influenced by, for example, how a user has interacted, and currently interacts, with other content.

“What we don’t know of is all the other levers that make this algorithm prioritize one person’s content over another. This is a more complicated problem than people assume,” Marshall said. 

Bro-coded

The #WearthePants experiment began with two entrepreneurs — Cindy Gallop and Jane Evans.

They asked two men to make and post the same content as them, curious to know whether gender was the reason so many women were seeing a dip in engagement. Gallop and Evans both have sizable followings, more than 150,000 combined, compared with the roughly 9,400 the two men had at the time.

Gallop reported that her post reached only 801 people, while the man who posted the exact same content reached 10,408 people, more than 100% of his followers. Other women then took part. Some, like Joyner, who uses LinkedIn to market her business, became concerned.

“I’d really love to see LinkedIn take accountability for any bias that may exist within its algorithm,” Joyner said. 

But LinkedIn, like other LLM-dependent search and social media platforms, offers scant detail on how its content-picking models were trained.

Marshall said that most of these platforms “innately have embedded a white, male, Western-centric viewpoint” due to who trained the models. Researchers find evidence of human biases like sexism and racism in popular LLMs because the models are trained on human-generated content, and humans are often directly involved in post-training or reinforcement learning.

Still, how any individual company implements its AI systems is shrouded in the secrecy of the algorithmic black box. 

LinkedIn says that the #WearthePants experiment could not have demonstrated gender bias against women. Jurka’s August statement said — and LinkedIn’s Head of Responsible AI and Governance, Sakshi Jain, reiterated in another post in November — that its systems are not using demographic information as a signal for visibility. 

Instead, LinkedIn told TechCrunch that it tests millions of posts to connect users to opportunities. Demographic data is used only for such testing, the company said, like seeing whether posts “from different creators compete on equal footing and that the scrolling experience, what you see in the feed, is consistent across audiences.”

LinkedIn has been noted for researching and adjusting its algorithm to try to provide a less biased experience for users.

It’s the unknown variables, Marshall said, that probably explain why some women saw increased impressions after changing their profile gender to male. Taking part in a viral trend, for example, can lead to an engagement boost; some accounts were posting for the first time in a long while, and the algorithm may have rewarded them for doing so.

Tone and writing style might also play a part. Michelle, for example, said that the week she posted as “Michael,” she adjusted her tone slightly, writing in a simpler, more direct style, as she does for her husband. That week, she said, impressions jumped 200% and engagements rose 27%.

She concluded the system was not “explicitly sexist,” but seemed to deem communication styles commonly associated with women “a proxy for lower value.” 

Stereotypically male writing is thought to be more concise and direct, while stereotypically female writing is imagined as softer and more emotional. If an LLM is trained to boost writing that matches male stereotypes, that’s a subtle, implicit bias. And as we previously reported, researchers have found that most LLMs are riddled with such biases.
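
To make that “proxy” argument concrete, here is a minimal, invented sketch in Python of how a ranker that never sees gender can still reproduce a gender gap through a correlated style feature. Every number and variable is hypothetical; this is an illustration of the mechanism, not LinkedIn’s system.

```python
# Toy sketch of proxy bias: the ranker never sees gender, but a style
# feature correlated with gender carries the signal anyway.
# All numbers are invented for illustration.
import random

random.seed(0)

# Simulated training history: each post has a "directness" style score
# (0.0 = hedged/soft, 1.0 = blunt/concise) and an observed engagement
# rate. Suppose past audiences engaged more with direct posts.
styles = [random.random() for _ in range(1000)]
history = [(s, 0.02 + 0.03 * s + random.gauss(0, 0.005)) for s in styles]

# "Train" a one-feature linear ranker (ordinary least squares, closed form).
n = len(history)
mean_x = sum(s for s, _ in history) / n
mean_y = sum(e for _, e in history) / n
slope = (sum((s - mean_x) * (e - mean_y) for s, e in history)
         / sum((s - mean_x) ** 2 for s, _ in history))
intercept = mean_y - slope * mean_x

def rank_score(style: float) -> float:
    """Predicted engagement; the ranker only ever sees writing style."""
    return intercept + slope * style

# The same idea, phrased in stereotypically different styles:
print(f"direct post:  {rank_score(0.9):.4f}")
print(f"softer post:  {rank_score(0.3):.4f}")
# The direct post scores higher. If style correlates with gender among
# real authors, the gap tracks gender without gender ever being an input.
```

Nothing in this toy model’s training data mentions gender; the bias rides in entirely on the correlated feature.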

Sarah Dean, an assistant professor of computer science at Cornell, said that platforms like LinkedIn often use entire profiles, in addition to user behavior, when determining content to boost. That includes jobs on a user’s profile and the type of content they usually engage with.

“Someone’s demographics can affect ‘both sides’ of the algorithm — what they see and who sees what they post,” Dean said. 

LinkedIn told TechCrunch that its AI systems look at hundreds of signals to determine what is pushed to a user, including insights from a person’s profile, network, and activity. 

“We run ongoing tests to understand what helps people find the most relevant, timely content for their careers,” the spokesperson said. “Member behavior also shapes the feed, what people click, save, and engage with changes daily, and what formats they like or don’t like. This behavior also naturally shapes what shows up in feeds alongside any updates from us.”
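
For a rough sense of what combining many signals might look like, here is a deliberately tiny, hypothetical scoring function. The signal names and weights are invented; a real feed ranker uses hundreds of learned signals, not four fixed weights.

```python
# Minimal sketch of multi-signal feed scoring, invented for illustration.
from dataclasses import dataclass

@dataclass
class PostSignals:
    topic_match: float       # how well the post matches the viewer's interests
    author_proximity: float  # connection strength between viewer and author
    recency: float           # freshness of the post, 0.0 (old) to 1.0 (new)
    past_engagement: float   # viewer's history with this author's content

# Hypothetical weights; in a real system these would be learned, not fixed.
WEIGHTS = {
    "topic_match": 0.4,
    "author_proximity": 0.25,
    "recency": 0.15,
    "past_engagement": 0.2,
}

def feed_score(p: PostSignals) -> float:
    """Weighted combination of signals; higher scores surface first."""
    return (WEIGHTS["topic_match"] * p.topic_match
            + WEIGHTS["author_proximity"] * p.author_proximity
            + WEIGHTS["recency"] * p.recency
            + WEIGHTS["past_engagement"] * p.past_engagement)

post = PostSignals(topic_match=0.8, author_proximity=0.3,
                   recency=0.9, past_engagement=0.1)
print(f"{feed_score(post):.3f}")  # 0.550
```

Even in this toy version, a profile change affects only some signals, which is part of why single-variable experiments like #WearthePants are hard to interpret.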

Chad Johnson, a sales expert active on LinkedIn, described the changes as deprioritizing likes, comments, and reposts. The LLM system “no longer cares how often you post or at what time of day,” Johnson wrote in a post. “It cares whether your writing shows understanding, clarity, and value.”

All of this makes it hard to determine the true cause of any #WearthePants results.

People just dislike the algo

Nevertheless, it seems like many people, across genders, either don’t like or don’t understand LinkedIn’s new algorithm — whatever it is. 

Shailvi Wakhulu, a data scientist, told TechCrunch that she’s averaged at least one post a day for five years and used to see thousands of impressions. Now she and her husband are lucky to see a few hundred. “It’s demotivating for content creators with a large loyal following,” she said.

One man told TechCrunch he saw about a 50% drop in engagement over the past few months. Still, another man said he’s seen post impressions and reach increase more than 100% in a similar time span. “This is largely because I write on specific topics for specific audiences, which is what the new algorithm is rewarding,” he told TechCrunch, adding that his clients are seeing a similar increase. 

But Marshall, who is Black, believes that posts about her professional expertise perform more poorly than posts related to her race. “If Black women only get interactions when they talk about black women but not when they talk about their particular expertise, then that’s a bias,” she said.

Dean, the Cornell researcher, believes the algorithm may simply be amplifying “whatever signals there already are.” It could be rewarding certain posts not because of the writer’s demographics, but because posts like them have historically drawn more response across the platform. While Marshall may have stumbled onto another area of implicit bias, her anecdotal evidence isn’t enough to determine that with certainty.
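
That “amplifying existing signals” dynamic can be sketched with a toy feedback loop: if a feed shows posts in proportion to their past engagement, an early lead compounds. Again, every number here is invented; this illustrates the general mechanism, not LinkedIn’s algorithm.

```python
# Toy rich-get-richer loop: a feed that ranks purely on past engagement
# amplifies whatever head start already exists. Invented numbers.
import random

random.seed(1)

# Two posts of identical quality; post A starts with a small lead.
engagement = {"A": 12, "B": 10}
QUALITY = 0.05  # both posts are equally likely to earn a reaction when shown

for _ in range(10_000):
    # The feed surfaces the post with more historical engagement more often.
    total = engagement["A"] + engagement["B"]
    shown = "A" if random.random() < engagement["A"] / total else "B"
    if random.random() < QUALITY:  # identical quality for both posts
        engagement[shown] += 1

print(engagement)
# A's small initial lead tends to persist and grow in absolute terms:
# the posts never differed in quality, only in early response.
```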

LinkedIn offered some insights into what works well now. The company said the user base has grown, and as a result, posting is up 15% year-over-year while comments are up 24%. “This means more competition in the feed,” the company said. Posts about professional insights and career lessons, industry news and analysis, and educational or informative content around work, business, and the economy are all doing well, it said.

If anything, people are just confused. “I want transparency,” Michelle said. 

However, content-picking algorithms have always been closely guarded secrets, and transparency can make them easier to game, so that’s a big ask. It’s one that’s unlikely ever to be satisfied.
