Quick Thoughts on Revisiting the Facebook Dilemma

Three years ago, I wrote about the Facebook Dilemma and the challenges of regulating algorithms. I revisited those arguments after Meta's (formerly Facebook) recent decision to scale back content moderation in favor of greater free speech. Reactions predictably fell along partisan lines, with supporters praising the move and critics denouncing it. While free speech arguments may have merit, the unchecked proliferation of misinformation renders truth increasingly malleable. That is corrosive to democracy, where debates should center on the interpretation and use of facts, not on the facts themselves.

The problem is deeply complex. Facebook serves as the primary information source for many users. Is it like a newspaper, subject to journalistic standards and liable for libel? Section 230 of the Communications Decency Act protects platforms based on the premise that they merely host content rather than create it, placing liability on the content creator. Is Facebook more akin to a whiteboard, where the responsibility lies with users and community norms should govern content? Or is it like a public utility, creating a public good with potential for both broad benefit and harm, thus warranting government regulation?

Facebook’s nature blurs these categories, complicating regulatory solutions. It is not a newspaper because it does not create content but curates it, determining what users see, and regulating algorithms or curation raises thorny questions. It is not a whiteboard because it actively shapes the content displayed, undermining the premise that users and community norms alone can govern it. It is not a public utility because it operates as a private entity with profit motives. How do we regulate a company that profits from engagement while influencing public discourse?

As things stand, Facebook’s business model wins on all fronts: Section 230 shields it from public regulation, and the shift away from private content moderation leaves misinformation unchecked. Crowd-sourced moderation is an interesting concept, but it seems insufficient to address the scale and severity of the problem and might even reinforce it.

So where is the hope? I see three avenues, each replete with issues. First, can AI provide scalable solutions? Generative AI has made creating fake news, images, and videos alarmingly easy, making truth even harder to distinguish from fabrication, and it seems unlikely that current foundation models, trained on biased, unstructured internet data, can reliably police the problems they helped create. The evidence on AI-assisted fact-checking is mixed at best, though this could improve given the rapid pace of advancement. AI also offers tools to enhance transparency and user control: explainable AI could demystify algorithmic decisions, empowering users to adjust preferences and align content with their values. Second, there is hope that private regulatory frameworks, such as digital truth certification for content, can lead to satisfactory technical solutions (provenance metadata, digital signatures, watermarks, blockchain).
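To make the "digital truth certification" idea concrete, here is a minimal sketch of content provenance via digital signatures: a publisher signs a post together with its provenance metadata, and anyone holding the public key can detect tampering afterwards. The `certify`/`verify` helpers, the use of the third-party `cryptography` package, and the Ed25519 keys are illustrative assumptions, not a description of any actual certification scheme; production systems (e.g., C2PA-style provenance manifests) are considerably more involved.

```python
# Illustrative sketch only: a publisher signs content plus provenance metadata,
# and a verifier checks that neither has been altered. Requires the third-party
# `cryptography` package (pip install cryptography).
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def certify(content: str, metadata: dict, private_key: Ed25519PrivateKey) -> dict:
    """Bundle content with provenance metadata and sign the whole record."""
    record = {"content": content, "metadata": metadata}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return {**record, "signature": private_key.sign(payload).hex()}


def verify(certified: dict, public_key: Ed25519PublicKey) -> bool:
    """Return True only if neither the content nor its metadata was altered."""
    record = {"content": certified["content"], "metadata": certified["metadata"]}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(bytes.fromhex(certified["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()
    post = certify(
        "Quarterly inflation fell to 2.4%.",
        {"publisher": "example-news.org", "published": "2025-01-15"},
        publisher_key,
    )
    print(verify(post, publisher_key.public_key()))  # True: record intact
    post["content"] = "Quarterly inflation fell to 0.4%."
    print(verify(post, publisher_key.public_key()))  # False: tampering detected
```

Note the limit of this kind of certification: a valid signature establishes who published what and when, but says nothing about whether the claim itself is true, which is why provenance schemes complement rather than replace fact-checking.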

Finally, the ultimate solution may not lie in top-down regulation but in fostering awareness and critical thinking from the ground up. There was a quaint time when everything in the print media was considered “the truth.”  Today, the default stance should be “everything you see is false unless verified.”  Educating consumers to evaluate information critically and responsibly could alleviate many of these problems. However, in highly polarized times, this may be a long and arduous journey.

