Quick Thoughts on the Challenge of Regulating Algorithms


Much of the blame for societal polarization has been heaped on big tech, particularly social media companies. The argument implicates their recommendation algorithms, which are designed to maximize engagement because engagement translates into profit under the advertising business model. A consequence is the promotion of extreme content. It’s a fair argument, partially acknowledged by the companies themselves as they try to moderate extreme or misleading content. It’s also fair because the algorithms are proprietary and essentially a black box to the user. So it seems there is a giant behemoth manipulating the little guy with mysterious algorithms. These platform companies are protected from liability by Section 230, a 27-year-old federal law that shields social media platforms from liability for content (free speech) published by others. Section 230 has two key subsections that govern user-generated posts. The first protects platforms from legal liability for harmful content posted on their sites by third parties. The second allows platforms to police their sites for harmful content; it doesn’t require policing or removal, and it protects them from liability for their moderation decisions.
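To make the engagement argument concrete, here is a purely illustrative sketch of an "engagement-first" ranker. Nothing here reflects any platform's actual system; the feature names and weights are invented. The point it illustrates is that when the objective is a proxy for engagement, content that provokes strong reactions can rise to the top of the feed.

```python
# Illustrative only: a toy engagement-maximizing ranker.
# Features and weights are hypothetical, not any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_watch_time: float   # minutes a model expects the user to spend
    predicted_shares: float       # expected reshares
    outrage_score: float          # proxy for emotionally charged content

def engagement_score(post: Post) -> float:
    # The objective is engagement, not accuracy or civility, so content
    # that provokes strong reactions tends to score higher.
    return (0.6 * post.predicted_watch_time
            + 0.3 * post.predicted_shares
            + 0.1 * post.outrage_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest engagement score first: this is what gets promoted.
    return sorted(posts, key=engagement_score, reverse=True)
```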

The soundness of Section 230 is at issue in a case currently before the US Supreme Court, Gonzalez v. Google. Nohemi Gonzalez was killed in the 2015 Paris terrorist attacks. Her family (the plaintiffs) claims that YouTube (owned by Google) recommended terrorist training videos and should be held liable for those recommendations. The question before the court is whether Section 230 protects Google. The section is intended to protect the free speech of third parties on the platform, but what about the platform itself? Do algorithmic recommendations represent “actions” taken by the platform, and are they also protected? Does recommending content make the platform a publisher (e.g., like a newspaper that creates, curates, and points to content) subject to libel laws? Or are the recommendations equivalent to someone pointing you to the frozen section of the grocery store when you asked for directions?

If Section 230 is overturned, platforms would impose draconian content controls to protect themselves from being sued for libel. The big companies would survive, but smaller ones could be vulnerable to the loss of volume, and could be sued over any user-generated content (including reviews on TripAdvisor!). An easy case for the platforms is that, given the massive amount of content on the Internet, algorithms are necessary to provide recommendations. Without algorithms that select and rank content based on user data, things would be chaotic, so holding platforms liable for algorithmic recommendations could be a slippery slope. Despite the technological naïveté of the court, it is therefore unlikely that it will completely quash Section 230.

This is one of the many dilemmas we will face with respect to regulating AI. If the platforms cannot be regulated, and Section 230 cannot be dropped, then what is the solution? Many of the answers may lie in bridging the information asymmetry between the consumer and the algorithm. In the same spirit as explainable AI, perhaps we need a human-readable, rule-based explanation of how the algorithm arrived at its recommendation. Such transparency could allow consumers to adjust certain algorithmic parameters and assume some culpability for the content they see. This is a challenging problem that lies at the intersection of technical issues (AI and deep neural nets are a black box, even to their creators), business-model issues (loss of proprietary advantage), regulatory issues (the ability to define and enforce transparency), and a myriad of others. The problem is exponentially more challenging with generative AI, where the concept of holding someone liable itself gets murky.
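As a thought experiment, here is a small sketch of what such a transparent, user-adjustable recommendation explanation might look like. The feature names, weights, and the `explain_recommendation` function are all hypothetical; real systems are far more complex. The idea is simply that the user can see each factor's contribution to a recommendation and re-weight the factors themselves.

```python
# Hypothetical sketch of an explainable, user-adjustable recommendation score.
# All names and weights are invented for illustration.
DEFAULT_WEIGHTS = {"watch_time": 0.6, "similar_to_history": 0.3, "trending": 0.1}

def explain_recommendation(features: dict[str, float],
                           weights: dict[str, float]) -> str:
    # Each feature's contribution is weight * feature value.
    contributions = {k: weights.get(k, 0.0) * v for k, v in features.items()}
    total = sum(contributions.values())
    lines = [f"Recommended with score {total:.2f} because:"]
    for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  - {name} contributed {value:.2f} "
                     f"(weight {weights.get(name, 0.0):.2f})")
    return "\n".join(lines)

# A user who distrusts engagement-driven ranking could down-weight watch time:
user_weights = {**DEFAULT_WEIGHTS, "watch_time": 0.1}
print(explain_recommendation(
    {"watch_time": 0.9, "similar_to_history": 0.7, "trending": 0.4},
    user_weights,
))
```

The design choice here is the crux of the transparency argument: once the user can see and adjust the knobs, responsibility for the resulting feed is at least partly shared.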

