On December 8, a remarkable group of Internet policy experts and lawyers gathered in San Francisco for a conference on the 20th anniversary of the Supreme Court’s 1997 decision in Reno v. ACLU, a landmark ruling that struck down parts of a 1996 law that would have greatly restricted free speech online, while preserving a small part of the law that has in turn enabled online innovation and free speech to blossom.

Authored largely by Senator James Exon of Nebraska, the Communications Decency Act, passed in 1996, sought to impose decency standards on the Internet, much like those that existed for broadcast television. Under the law, posting speech such as “Fuck the CDA!” could have been a felony, for example, and some of the explicit accounts of sexual harassment now driving the #MeToo movement would also have been restricted. Indeed, if the CDA had survived as passed, it’s likely that most of our popular Internet forums today, such as Facebook and Twitter, would never have gotten off the ground.

Yet, nestled within the CDA was Section 230 (47 U.S.C. § 230), which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It’s this language that enables Internet users today to upload their own videos to YouTube, for example, or post photos and debate topics on Reddit. No other country has as broad a “safe harbor” provision for Internet services, and it’s one key reason why the United States has been host to the most innovative and rapidly growing companies in the world.

That the Supreme Court struck down the anti-free-speech provisions of the CDA in 1997 while leaving Section 230 intact is a testament to the work of a small group of men and women, lawyers and advocates, and the organizations that supported them: the American Civil Liberties Union, the American Library Association, the Electronic Frontier Foundation, the Center for Democracy and Technology, and many other concerned groups joined hands to bring their concerns to the courts—and to the public. In fact, in 1996 a number of web sites across the country took advantage of a recently added feature in the newest web browsers and turned their pages black in protest of the CDA—a protest that would be mirrored in 2012 during the SOPA/PIPA debates.

The legal and policy protections within Section 230 and the innovation they fostered have had a far-reaching impact on free speech in the digital age. But while celebrating the Supreme Court’s 1997 decision, the lawyers and experts on the stage in San Francisco earlier this month also questioned whether such a decision could be won today—and discussed whether Section 230 is now facing a mortal threat.

Then and Now

When the CDA was passed in 1996, there were only about 40 million Internet users on the planet (the U.N. estimates there are now more than three billion). And as online networks became increasingly popular, concern mounted about the dangerous, threatening material they might bring into the homes of Americans.

The July 3, 1995, cover of Time magazine portrayed a young child staring zombie-like at a lit screen beneath the headline “CYBERPORN,” echoing the kinds of concerns first debated during the growth of broadcast television. And in 1991, Vanity Fair published Annie Leibovitz’s famous cover photograph of a pregnant Demi Moore, whose artfully concealed nudity could have been deemed indecent under the CDA. In fact, the discussion around the image of Demi Moore arguably did more to raise broad, significant concerns over the CDA’s language than any scholarly debate.

To educate the District Court about the Internet (and subsequently the Supreme Court on appeal), the lawyers, working with a leading Internet architect, Harvard’s Scott Bradner, arranged for terminals and a T1 line to be brought into the courtroom so the judges could surf the early Web. This intervention—the first time any court in the country was hooked up to the Internet—clearly helped sway the judges.

"[J]ust as the strength of the Internet is chaos, so the strength of our liberty depends upon the chaos and cacophony of the unfettered speech the First Amendment protects,” wrote District Court Judge Stewart Dalzell in the 1996 decision in ACLU v. Janet Reno.

The following year, after the Supreme Court upheld the ruling that the CDA’s indecency provisions were an unconstitutional limitation on our First Amendment rights, the decision was rushed to the steps of the Court and uploaded to the Internet over a dial-up modem—a first. This demonstration that individuals could publish online instantly was an implicit affirmation of the underlying principle that user control at the network’s endpoints was both more acceptable and technically more effective than any regulation imposed on carriers.

Today, however, some 20 years after the Supreme Court’s final decision, the Internet is facing a barrage of new attacks, both from legislators and regulators around the world and from changes in the very fabric of the Internet itself.

For example, free speech today is increasingly threatened by bots and distributed denial-of-service (DDoS) attacks that can overwhelm all but the most powerful voices online. “Unlike in the 1990s, one almost needs a content delivery network [CDN] to intermediate and protect speech,” observed Alissa Cooper, the chair of the Internet Engineering Task Force. Indeed, the power of this intermediary content-distribution layer became obvious over the summer when one CDN, Cloudflare, banned the neo-Nazi, white supremacist web site the Daily Stormer.

In a blog post, Cloudflare’s Matthew Prince explained why Cloudflare had made its decision—and why the decision itself was so dangerous. “Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online,” Prince wrote.

Meanwhile, as the FCC’s controversial vote to repeal net neutrality rules grabbed headlines last week, another pair of 21st-century decency bills is quietly working its way through Congress: the Senate’s Stop Enabling Sex Traffickers Act (SESTA) and the House’s Fight Online Sex Trafficking Act (FOSTA).

After pushing back against House and Senate stipulations that would have gutted the broad protections of Section 230, some large technology companies and startup advocacy organizations are now giving their halting support to revised language, which, as currently written, would amend Section 230 to remove liability protections from hosting sites, such as Backpage, that knowingly foster or facilitate sex trafficking, particularly of minors.

This acceptance of alterations to Section 230 is highly controversial. Critics point out that our current laws are already sufficient to prosecute such bad actors; Internet-centric legislation is not necessary. And that some technology companies are suddenly willing to throw Backpage under the bus and accept changes to Section 230 fills other tech firms and civil rights attorneys with concern, if not outright dread—after all, if the threat of sex trafficking can yield such concessions today, what’s next?

These issues get to the very heart of the concerns for the next-generation Internet. It was obvious 20 years ago that a hosting platform such as AOL could never effectively shoulder “intermediary liability” for the vast amount of content its users post. Yet today, with Congress and the European Union ratcheting up concerns over “fake news” and the growing power of the largest Internet companies to frame social dialogue, there are breathless demands for such accountability.

FCC chairman Ajit Pai, the man responsible for repealing “net neutrality,” recently appeared on Fox News to share his concerns about the deletion of conservative political speech by Facebook, Google, and Twitter. In his remarks, Pai specifically called out Twitter’s decision to block an ad by Rep. Marsha Blackburn (R-Tenn.) that was based on false claims about Planned Parenthood, and YouTube’s decision to block advertising for some right-wing Internet personalities accused of violating YouTube’s terms of service.

“One of the things that people have suggested is it’s not Internet service providers, it’s some of the content companies that decide what you see on the Internet and, more importantly, what you don’t see,” Pai suggested when asked about the threats to free speech online. “Where’s the transparency there?”

20 Years Later

Technology can also be hoisted by its own petard. As AI (artificial intelligence) becomes more powerful and theoretically capable of detecting inappropriate or illegal speech, will the cries for regulation grow louder, bolstered by the assumption that the tools now exist to implement automatic takedowns with minimal human intervention?

As media critic Frédéric Filloux has opined, “The only hope for a serious pushback against misinformation will come from progress in artificial intelligence, machine learning, and natural language processing.” Already, platforms like YouTube and Facebook are walking a fine line, imposing some form of content review while trying to keep their platforms open and engaging.

Yet AIs can be no more impartial than the biased humans who build them. AIs will ultimately do no better than their human authors at drawing distinctions between, for example, eroticism, pornography, and explicit sexual display as social critique. And none of the largest technology platforms—Google, Facebook, Amazon, Apple—believes it will ever be able to protect us from speech that is truly, deeply hurtful and destructive while permitting continued engagement with provocative, critical, and sometimes indecent speech.

The Internet today is the most lightly regulated major infrastructure that humans have ever invented. But as it continues to link together people and ideas from across the globe, it is falling under increasing scrutiny as our complex contemporary societies are striving to come to terms with notions of diversity, community, and inclusion.

And not just here in America: the Internet is a tightly walled garden in China; India has been working to influence network governance through the International Telecommunication Union; parts of the Middle East still restrict Internet access for portions of their populations. And European regulators are said to be considering a host of measures, such as increased demands on Google’s search results, calls for algorithmic transparency, and new investigations into the impact of Google and Facebook on all forms of publishing.

But we’d all do well to remember this: what we want the Internet to become—just how raucous a debate we want it to support—is a discussion that will set the boundaries not only of the Internet but of the greater guarantee of free speech.