Section 230 and the Monopolization Problem
“Section 230” is a phrase some of you have likely heard in the news recently. It is commonly called “the law that created the internet.” The law, enacted in 1996, shields technology platforms from liability for the user content they transmit. For example, if John publishes a defamatory statement about Jane on Platform Y, Jane can sue John for his defamatory remarks, but, because of Section 230, she cannot sue Platform Y.
To most, this protection would seem intuitive. Why should a transmitter of information be held liable for the content it distributes? That would be like holding Verizon accountable when two people use its service to sell illegal drugs. But this analogy is only partly correct. First, modern internet platforms do not host mere conversations between two individuals; they are more analogous to theaters with billions of audience members. Second, platforms like Google and Facebook are not mere transmitters of information; they also harvest every ounce of data they can from their users and manipulate user posts and search rankings to maximize engagement. In short, there is far more to Section 230 than meets the eye, and careful reporting would help dispel much of the confusion surrounding this law.
In my latest op-ed, published in The Reboot, I discuss the complexities of Section 230, comment on some of the reform efforts, criticize commonly asserted arguments against reforming the law, and detail the actual problem Congress should be focusing on: the monopolization of the technology sector.
Here is the first paragraph:
In the early years of the commercial internet, back when companies like CompuServe and Prodigy dominated service, regulators faced a reckoning. They were forced to determine whether digital platforms should bear traditional publisher liability for user-generated content hosted on their sites.
CompuServe had escaped liability for defamation in 1991 because it made no effort to review or filter its users’ content. The company took an entirely hands-off approach, so it could not be said to have knowledge of what was posted. But in 1995, a court found Prodigy liable in another case because the company had decided to actively moderate its message boards. Because Prodigy policed some of the platform’s content, the court decided, it had assumed legal responsibility for all of it.