Last year, Petter Törnberg, a University of Amsterdam researcher, painted a grim picture of social media’s future. His study didn’t blame algorithms or user psychology for these platforms’ worst traits: echo chambers, attention inequality, and the amplification of extreme voices. Instead, he argued these problems are hardwired into the medium itself. Now, with two new papers and a preprint, Törnberg has doubled down on that conclusion: social media’s problems aren’t fixable with tweaks. They require a radical rethink.
The Architecture of Toxicity
Törnberg’s work centers on a counterintuitive idea: social media behaves nothing like the physical world. Unlike real-life interactions, where influence spreads gradually and organically, online platforms create a high-speed feedback loop. A few hyper-connected users dominate the conversation, while the vast majority remain passive. This structure isn’t accidental—it’s a direct result of how networks are designed to prioritize engagement over quality or balance.
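The attention-inequality claim can be made concrete with a minimal preferential-attachment sketch (not Törnberg’s model; the function and all parameters are invented for illustration): when newcomers tend to follow already-popular accounts, connections pile up on a small elite.

```python
import random

def preferential_attachment(n_users, m=2, seed=42):
    """Grow a follower network where each newcomer connects to m
    existing users with probability proportional to their degree."""
    rng = random.Random(seed)
    # Seed network: m+1 users, all connected to each other.
    degrees = [m] * (m + 1)
    endpoints = []  # flat edge list: a user appears once per connection
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            endpoints += [i, j]
    for new in range(m + 1, n_users):
        targets = set()
        while len(targets) < m:
            # Sampling a random endpoint picks users in proportion
            # to how many connections they already have.
            targets.add(rng.choice(endpoints))
        degrees.append(0)
        for t in targets:
            degrees[new] += 1
            degrees[t] += 1
            endpoints += [new, t]
    return degrees

degrees = preferential_attachment(10_000)
degrees.sort(reverse=True)
top_1pct = sum(degrees[:100]) / sum(degrees)
print(f"share of connections held by the top 1% of users: {top_1pct:.1%}")
```

Under uniform attachment the top 1% would hold about 1% of connections; here the rich-get-richer loop gives them a many-fold larger share.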
His most recent PLoS ONE study used a novel approach: agent-based modeling paired with large language models (LLMs). Essentially, he created AI-driven simulations of human behavior to test how echo chambers form. The experiments showed that even when users start with neutral or diverse viewpoints, the system naturally pushes them toward polarized clusters. The reason? Engagement thrives on conflict, not consensus.
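The paper’s actual simulations use LLM-driven agents; as a rough intuition pump only, here is a purely numeric stand-in where an engagement-weighted feed plus a mild "backfire" response to opposing posts pulls initially near-neutral agents toward the poles. The update rule and every parameter are invented for illustration.

```python
import random

def simulate(n_agents=120, rounds=200, feed_size=10, seed=7):
    """Toy opinion-dynamics sketch: opinions live in [-1, 1] and
    start clustered near zero (neutral)."""
    rng = random.Random(seed)
    opinions = [rng.gauss(0, 0.1) for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            # The feed oversamples extreme posts, since extremity
            # stands in for engagement in this toy model.
            weights = [abs(o) + 0.01 for o in opinions]
            feed = rng.choices(opinions, weights=weights, k=feed_size)
            shift = 0.0
            for post in feed:
                if post * opinions[i] >= 0:
                    shift += post - opinions[i]          # assimilation
                else:
                    shift -= 0.5 * (post - opinions[i])  # backfire
            updated = opinions[i] + 0.1 * shift / feed_size
            opinions[i] = max(-1.0, min(1.0, updated))
    return opinions

final = simulate()
polarization = sum(abs(o) for o in final) / len(final)
print(f"mean |opinion| after simulation: {polarization:.2f}")
```

Starting from a mean absolute opinion near 0.08, the population ends up split into clusters near the extremes, mirroring the qualitative finding that the system, not the users’ starting views, produces the polarization.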
Why Common Fixes Fail
Platforms have tried to mitigate these issues with familiar tools:
- Algorithm adjustments: Downranking divisive content often backfires, as it reduces overall engagement without addressing the root cause.
- Moderation policies: Targeting bad actors is like playing whack-a-mole—their replacements adapt quickly.
- Timeline changes: Chronological feeds slow the spread of extreme views but do little to curb the structural imbalance.
Törnberg’s research suggests these fixes treat symptoms, not the disease. The problem isn’t what users see—it’s how the system is wired to reward visibility. The loudest voices, not the most credible ones, dominate by design.
The Search for a Blue Sky Alternative
If incremental changes won’t work, what’s left? Törnberg hints at two possible directions:
- Decentralized architectures: Platforms without centralized control might reduce attention inequality by distributing influence more evenly. Think federated networks where users, not algorithms, set the rules.
- Incentive redesign: Rewarding quality over engagement could flip the script. For example, platforms might prioritize posts that spark constructive dialogue rather than outrage.
Neither solution is simple. Decentralized systems face coordination challenges, while incentive shifts require buy-in from both users and platform owners. Yet the alternative—accepting endless toxicity—isn’t sustainable.
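One way to picture incentive redesign is a ranking rule that rewards civil, ideologically mixed discussion and damps raw reaction counts, rather than ranking by engagement alone. This is a hypothetical sketch, not a proposal from the research: the `Post` fields, weights, and example numbers are all invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int          # raw engagement: likes, shares, replies
    reply_diversity: float  # 0..1: how ideologically mixed the repliers are
    civility: float         # 0..1: e.g. an inverted toxicity score

def engagement_score(p: Post) -> float:
    """Status quo: sheer volume wins."""
    return p.reactions

def constructive_score(p: Post) -> float:
    """Sketch of a redesigned incentive: log-damp volume so outrage
    can't win on reach alone, then weight by discussion quality."""
    return math.log1p(p.reactions) * (0.5 * p.reply_diversity + 0.5 * p.civility)

posts = [
    Post("outrage bait", reactions=9000, reply_diversity=0.1, civility=0.2),
    Post("nuanced take", reactions=400, reply_diversity=0.8, civility=0.9),
]
by_engagement = max(posts, key=engagement_score)
by_constructive = max(posts, key=constructive_score)
print(by_engagement.text, "|", by_constructive.text)  # → outrage bait | nuanced take
```

The same two posts trade places depending on the metric, which is the whole point: what a platform optimizes determines who gets heard.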
The Messy Path Forward
Social media isn’t going away, but its current form may be obsolete. The next wave of platforms won’t emerge from tweaking feeds or hiring more moderators. They’ll come from questioning the foundational assumptions of how online interaction should work. Until then, expect more of the same: polarization, outrage, and the slow erosion of trust.
The good news? Törnberg’s work proves the problem is solvable—if we’re willing to rethink the entire system.