L&D Nexus Business Magazine

New AI tool can lower political temperature and partisan rhetoric through algorithm control

November 29, 2025
in Innovation


image: ©Kenneth Cheung | iStock

New Stanford-led research published in Science shows that a web-based tool can sharply reduce partisan animosity on X feeds without the platform's cooperation.

By reordering content to downrank anti-democratic posts, the study demonstrates a path toward lowering political polarisation and giving users control over the algorithms that shape their feeds.

A groundbreaking study from Stanford University has unveiled a web-based AI research tool capable of significantly cooling partisan rhetoric on social media platforms like X, without the platform's direct involvement.

The multidisciplinary research, published in the journal Science, not only offers a concrete way to reduce political polarisation but also paves the way for users to gain more control over the proprietary algorithms that shape their online experience.

Reducing the temperature of the feed

The researchers sought to counter the toxic cycle in which social media algorithms amplify emotionally charged, divisive content to maximise user engagement. Their tool, a browser extension, uses a large language model (LLM) to scan a user's X feed for posts containing anti-democratic attitudes and partisan animosity.

This harmful content includes things like advocating for violence or extreme measures against the opposing party. Instead of removing the content, the AI tool simply reorders the feed, pushing these incendiary posts lower down the timeline.
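The reorder-rather-than-remove idea can be illustrated with a short sketch. This is not the study's actual pipeline (the paper's classifier and scoring details are not described here): `animosity_score` stands in for the LLM classifier with a toy keyword heuristic, and a stable sort pushes flagged posts down while keeping every post in the feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def animosity_score(post: Post) -> float:
    """Placeholder for the LLM classifier: returns a score in [0, 1],
    higher meaning more anti-democratic or hostile content.
    Faked here with a tiny keyword heuristic for demonstration."""
    hostile_markers = ("they are the enemy", "destroy the other party")
    return 1.0 if any(m in post.text.lower() for m in hostile_markers) else 0.0

def downrank(feed: list[Post], threshold: float = 0.5) -> list[Post]:
    """Reorder, never delete: Python's sort is stable, so flagged posts
    sink to the bottom while all other posts keep their relative order."""
    return sorted(feed, key=lambda p: animosity_score(p) >= threshold)

feed = [
    Post("1", "They are the enemy and must be stopped."),
    Post("2", "Here is my recipe for lentil soup."),
    Post("3", "Turnout was high in my county this year."),
]
print([p.post_id for p in downrank(feed)])  # flagged post "1" moves last
```

Because the sort key is a boolean (`False` sorts before `True`) and the sort is stable, non-flagged posts 2 and 3 stay in their original order and the incendiary post simply drops to the end of the timeline.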

In a 10-day experiment conducted during the 2024 election with roughly 1,200 participants, those whose feeds had this toxic content downranked showed a measurable improvement in their views toward the opposing political party. The effect was universal, observed in both liberal and conservative users.

“Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science and the study’s senior author. “We have demonstrated an approach that lets researchers and end users have that power.”

Small change, major impact: controlling the political temperature

The study demonstrates that a subtle algorithmic change can have a substantial effect on political attitudes. On average, participants who experienced reduced exposure to toxic content reported attitudes toward the opposition that were 2 points warmer on a 100-point scale.

Researchers noted this is equivalent to the estimated change in political attitudes that typically occurs in the general U.S. population over a period of three years. Furthermore, downranking the content also reduced participants’ reported feelings of anger and sadness, highlighting the emotional toll of highly polarised feeds.

According to author Tiziano Piccardi, now an assistant professor at Johns Hopkins University, the findings clearly illustrate the causal link. “When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” he stated.

This success is particularly notable because previous attempts to mitigate polarisation, such as displaying posts chronologically, have shown mixed or negligible results.

By creating an AI tool that works independently of the platform and making the code available, the team has opened new doors for researchers and developers to create nuanced, effective interventions that could promote healthier democratic discourse and greater social trust.



Copyright © 2025 L&D Nexus Business Magazine.
