Briefing Paper

Would New Legislation Actually Make Kids Safer Online?

Analyzing the Consequences of Recent Youth Online Safety Proposals


April 6, 2023 • Briefing Paper No. 150

Over the past few years, an increasing number of headlines have highlighted disturbing trends and statistics about teenagers’ use of social media. In response, many policymakers have called for regulation to assuage the concerns of friends and constituents. But can government intervention actually solve the perceived social media crisis? Do the proposed policies accomplish their goals, and if so, at what cost?

The current concerns over teenage social media use echo past panics over technologies and entertainment blamed for negative behavior. Comic books, video games, and media depictions of teen pregnancy have all previously received blame for setting bad examples.1 Although the policymakers calling for action are motivated by well-intentioned concern for the next generation, child online safety policies raise significant problems for speech rights, parental choice, and digital literacy.

While federal policymakers have stated their intention to take regulatory action on social media, state legislatures are already considering bills and proposals. This brief will examine the general types of proposals that have appeared at the state level and evaluate their potential impacts on online platforms, innovation, and young people online. Further, the proposals examined could serve as models for federal legislation and for other states, amplifying the concerning consequences of intervention for users.

Model 1: Complete (or Nearly Complete) Bans on Minors’ Use of Social Media

The most extreme examples of social media regulatory action are policies that would ban teenagers from using online platforms. Although such proposals claim to target social media specifically, the amorphous definition of social media can encompass a variety of online services, including messaging services; review sites, like Yelp; or even online information distribution channels, such as message boards used by schools and churches. These proposals include a state‐​level bill in Texas and the MATURE (Making Age‐​Verification Technology Uniform, Robust, and Effective) Act, a federal proposal introduced by Sen. Josh Hawley (R‑MO).2

These draconian approaches to regulation would inevitably face legal challenges on First Amendment grounds. The Supreme Court has previously struck down a similar attempt to safeguard the internet for minors in Reno v. American Civil Liberties Union, finding that such laws have an impact on the speech rights of adults, not only the minors they seek to protect.3 Furthermore, young people themselves possess their own First Amendment rights that would be clearly violated by a full ban on social media access.4 Similarly, the Supreme Court upheld lower courts’ actions striking down a restrictive child protection regime for the internet, the Child Online Protection Act, in the 2000s.5

Finally, this blanket approach removes parents’ ability to choose what works best for their children and family. Parents may be concerned enough to prohibit their children from using social media or smartphones, but for many families, internet access may be a rite of passage like taking the bus or staying home alone, attained through a process of gradual trust and responsibility.6 Some parents, for example, may feel they are better able to provide guidance and positive influence while their children are still at home, preparing them to make choices on social media that reflect the family’s values, as they would for other content, such as movies or television shows. A social media ban for minors would prevent parents from choosing a gradual release of responsibility and would deny them the opportunity to supervise their children while teaching them to use the internet responsibly.

Beyond the First Amendment concerns, these laws could exacerbate the problems of harmful online usage rather than respond to them. A total ban would hurt many young people despite its stated intention to help. Proposals banning minors from using social media prioritize protecting a minority of users who may be predisposed to harmful behaviors, and who may find those behaviors exacerbated online, at the expense of users who find positive communities or need help online. For example, even in the infamous Instagram whistleblower statistics, while nearly one-third of teen girls felt Instagram made their body image worse, 45 percent felt it made no impact and more than 20 percent felt it made it better.7 The same often-cited internal research found that the majority of teenage girls surveyed felt Instagram helped assuage feelings of loneliness and sadness, and the majority of teenage boys surveyed felt the app helped with issues of anxiety, social comparison, fear of missing out, loneliness, and sadness.8

Many young people have found supportive communities online that they may struggle to replicate in the offline world. For example, teenagers who may feel different from their peers because of disability, race, culture, or religion may be able to connect with similar communities that support them and empower them to embrace their identity. While some may connect with negative influences, many will also find communities of peers. This generation of young people has more opportunities to connect and have its voice heard than previous generations thanks to technology, and the benefits of these communities should not be ignored merely because they exist online.

Banning social media entirely would also very likely prevent young people from accessing tools that could prepare them for their future in an internet-friendly world. Young adults who can access social media only once they reach legal adulthood may be slower to gain experience with technological tools that can aid them socially or in the workforce.9

Model 2: Age‐​Appropriate Design Codes

An age-appropriate design code is a regulatory measure that restricts platforms from displaying certain types of content deemed inappropriate for users younger than a certain age. These proposals are largely modeled on EU and UK actions proposing or instituting similar forms of regulation.10 Most notably, California enacted AB 2273 last year, which creates an age-appropriate design code requiring websites to take additional steps and collect more data on users to ensure certain information is not served to those younger than 18.11

Although these measures seem to directly target concerns about children and teenagers engaging with inappropriate content online, the policies have the potential to be abused for censorship and pose significant privacy implications. In addition, they may exacerbate mental health concerns by preventing young people from accessing online support resources.

To comply with age-appropriate design codes, online platforms would need to collect additional and more sensitive information to verify a user’s age.12 Merely asking for a birthdate to affirm that a user is over a certain age would no longer suffice. Instead, websites would be forced to verify and store information, such as government-issued identification, for all users and may even have to collect certain biometric information to verify that the ID matches the user. Increased data collection could make individuals less comfortable online, chilling their willingness to engage in speech. Such data collection could also attract nefarious actors.13 If pedophiles gained access to such a database, for example, they would know exactly which accounts to target. It would also create a honeypot of sensitive information, such as driver’s license numbers, addresses, and birthdates, that could raise the risk of identity fraud and other harms.
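To illustrate why such a database would make an attractive target, the following is a minimal, purely hypothetical sketch of the kind of per-user record an age-verification system might have to retain. The field names and structure are assumptions for illustration only and are not drawn from any specific bill or verification vendor.

```python
# Hypothetical illustration only: a per-user record that an age-verification
# mandate could require a platform (or its vendor) to collect and retain.
# Field names are assumptions, not taken from any specific law or product.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgeVerificationRecord:
    account_handle: str        # ties a possibly pseudonymous account to a real identity
    legal_name: str
    date_of_birth: date
    government_id_number: str  # e.g., a driver's license number
    id_document_scan: bytes    # image of the ID document itself
    selfie_biometric: bytes    # face data used to match the ID to the user

# A single breach of a table of such records exposes identity documents,
# birthdates, and biometric data, each linked to a specific account.
example = AgeVerificationRecord(
    account_handle="@hypothetical_user",
    legal_name="Jane Doe",
    date_of_birth=date(2008, 5, 1),
    government_id_number="D1234567",
    id_document_scan=b"<id image bytes>",
    selfie_biometric=b"<face template bytes>",
)
```

Even in this simplified form, every field is sensitive on its own, and together they connect online accounts to real-world identities.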

Second, the ambiguous definition of what is age appropriate could be abused by governments to censor access to certain information. If age-based restrictions are instituted at the state level, there could easily be disagreements about what information is appropriate for teenagers to access, particularly regarding issues of sexuality and reproduction. But even for less controversial topics, the definition of “age appropriate” could be manipulated by policymakers to prevent the flow of certain information. In the UK, for example, policymakers have debated whether content concerning illegal border crossings by boat is appropriate to be displayed to young people.14

Finally, age-appropriate design codes may hurt the very young people they are trying to help. While it may seem easy enough to simply ban sexual or violent content, these policies are blunt tools dealing in overbroad categories. Current proposals that target sexual, violent, or mental health content may restrict those broad categories wholesale, and in doing so push platforms to remove or block content produced to help reduce problematic behavior. Platforms might have to remove eating disorder recovery content alongside harmful content, or remove information about how to seek help for self-harming behaviors, because of a law intended to stop young people from consuming content promoting those harmful behaviors. It may become more difficult for a teenager experiencing or witnessing abuse to find the resources necessary to mitigate the situation. It could even prevent young people from gaining full information about current events, particularly those that involve violence, such as the war in Ukraine or mass shootings.

An age-appropriate design code may be less obviously problematic than a direct ban on young people’s online access, but it could have significant consequences, such as silencing voices, decreasing the privacy of users, and hampering the ability of young people to find help online. These consequences raise First Amendment concerns, as already seen in the legal challenge to the California law.15

Model 3: Limited‐​Topic Age‐​Appropriate Design Code

The third model that has appeared is an age‐​appropriate design code applied to a specific and limited type of content that is already regulated. The most notable example of this is Louisiana’s “porn bill,” which creates age verification requirements to access internet pornography.16

Unlike the general bills in Model 2, this proposal specifically targets pornography, a form of speech for which some regulation, particularly regarding minors’ access, has previously been upheld.17 As such, the law will likely face fewer and more limited First Amendment challenges, but it is not without additional tradeoffs and concerns for privacy and speech.

On the privacy front, much like general age-appropriate design codes, this proposal will require the collection of additional, more sensitive data to verify user identity and age.18 That data creates an alluring honeypot for hackers, subjecting individual consumers to increased risk of fraud and of having their data used for blackmail or other malicious purposes, as happened after the hack of the affair website Ashley Madison.19

The dynamic nature of the internet may also cause compliance difficulties without safe harbors or clarity around enforcement. For example, the Louisiana law’s requirements are triggered when more than 30 percent of a website’s content is pornographic. A small, new website could find itself targeted by attackers who wish not only to ruin the user experience but also to create legal problems for the site, spamming it with porn to push its share of problematic content over the threshold and force it into noncompliance, as the sketch below illustrates. Without a safe harbor or other legal mechanism providing reasonable time for compliance, the struggle to pinpoint violators on a constantly changing internet may lead to unintended consequences. General-content websites that allow some content that may be deemed subject to the law (Tumblr, for example) may be discouraged from carrying any user-generated content on certain subjects, or may have to review all posts before they are published, limiting and censoring users’ protected speech in the process. Such consequences were seen in the removal of certain subreddits and Craigslist groups following the passage of the Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act.20 Although these groups were not involved in sex trafficking, the platforms felt the content was too risky to continue to carry. Further, the costs of compliance will be more challenging for smaller platforms and those that use decentralized methods of content moderation, heightening barriers to entry in the internet speech marketplace.
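To make the spam-attack scenario concrete, here is a minimal back-of-the-envelope sketch. It assumes, purely for illustration, that the 30 percent trigger would be measured by counting individual posts; the statute does not necessarily define measurement this way, and the function shown is hypothetical.

```python
import math

# Illustrative arithmetic only: assuming the 30 percent trigger counted
# individual posts (an assumption for illustration, not the statute's text),
# how many pornographic spam posts would push a site over the line?

def spam_posts_to_exceed(legitimate_posts: int, threshold: float = 0.30) -> int:
    """Smallest number of spam posts s such that s / (legitimate_posts + s) > threshold."""
    # Solving s / (L + s) > t for the smallest integer s gives s > t * L / (1 - t).
    return math.floor(threshold * legitimate_posts / (1 - threshold)) + 1

for site_size in (100, 1_000, 100_000):
    print(f"{site_size:>7} legitimate posts -> {spam_posts_to_exceed(site_size)} spam posts")
```

Under that assumption, a forum with roughly 100 legitimate posts could be pushed past the trigger with only a few dozen spam posts, while a large platform would be far harder to move, which is why the compliance risk falls hardest on small and new sites.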

The Louisiana law’s vagueness creates compliance difficulties and would likely lead to legal challenges on such grounds. While targeting pornography is more tailored than the generally restrictive age‐​appropriate codes, it is not without its problems. Laws like Louisiana’s will be open to legal challenges based on the arbitrary nature of age limits and confusion over whether a website is subject to such enforcement.21

Conclusion

Although child online safety proposals are inspired by the good intentions of concerned parents and policymakers, the current proposals carry significant unintended consequences for parents, teenagers, and all online users. The underlying rise in teenagers’ mental health issues is a legitimate concern deserving further study, but reaching first for social media regulation reflects a failure to properly understand these issues and is not the right way to address them. Young people and all internet users would face significant and likely unconstitutional consequences that would silence their voices and diminish their privacy.

Because the issues facing each child and family are different, no regulatory regime is likely to resolve the difficult and nuanced challenges parents face in a digital age. Importantly, policymakers and parents should engage with young people to fully understand their changing online behaviors and experiences. The best solution is to empower and educate parents and young people to make responsible choices with technology. Such an approach allows individuals to address the concerns associated with harmful social media usage more directly while allowing the next generation to experience the benefits of positive social media use. If policymakers feel they must act on these concerns, they should consider less restrictive solutions that focus on educating and empowering young people and parents rather than onerous regulatory regimes.


Notes

1. “Pessimists Archive,” Pessimists Archive; and Melissa S. Kearney and Phillip Levine, “Media Influences on Social Outcomes: The Impact of MTV’s ’16 and Pregnant’ on Teen Childbearing,” Brookings Institution, January 22, 2018.

2. “Texas Bill Would Ban Social Media for Children under 18,” FOX 4 News Dallas‐​Fort Worth, December 8, 2022; and Jared Gans, “Hawley Proposes Ban on Social Media for Kids under 16,” The Hill, February 15, 2023.

3. Reno v. ACLU, 521 U.S. 844 (June 26, 1997).

4. See Nina Totenberg, “At Supreme Court, Mean Girls Meet 1st Amendment,” NPR, April 28, 2021.

5. American Civil Liberties Union v. Gonzales, 478 F. Supp. 2d 775 (E.D. Pa. March 22, 2007).

6. See Maggie Seaver, “This Is When Kids Are Old Enough to Stay Home Alone, according to Moms (and the Law),” Real Simple, July 29, 2022.

7. “Instagram Teen Annotated Research Deck 1,” DocumentCloud.org.

8. “Instagram Teen Annotated Research Deck 1,” DocumentCloud.org.

9. “Social Media Resume: Skills Employers Look For,” Maryville University (blog), Maryville University.

10. “Special Group on the EU Code of Conduct on Age‐​Appropriate Design,” Shaping Europe’s Digital Future, European Commission, February 2, 2023; and “Introduction to the Age Appropriate Design Code,” Information Commissioner’s Office.

11. Adi Robertson, “California Passes Sweeping Online Safety Rules for Kids,” The Verge, August 30, 2022.

12. Jackie Snow, “Why Age Verification Is So Difficult for Websites,” Wall Street Journal, February 27, 2022.

13. “Online Age‐​Verification System Could Create ‘Honeypot’ of Personal Data and Pornography‐​Viewing Habits, Privacy Groups Warn,” The Guardian, October 30, 2022.

14. Jon Fingas, “UK Bill Would Ban Videos Portraying Channel Immigrant Crossings in a ‘Positive Light,’ ” Engadget, January 19, 2023.

15. “NetChoice v. Bonta,” NetChoice, February 28, 2023.

16. “Liability for Publishers and Distributors of Material Harmful to Minors,” La. Rev. Stat. Ann. § 9:2800.29 (2021).

17. “Obscenity and Indecency: Constitutional Principles and Federal Statutes,” Congressional Research Service, April 28, 2009.

18. “Louisiana’s New Porn Law Carries User Privacy Risks,” NPR, January 8, 2023.

19. Zak Doffman, “Ashley Madison Hack Returns to ‘Haunt’ Its Victims: 32 Million Users Now Watch and Wait,” Forbes, February 1, 2021.

20. Ej Dickson, “Sex Workers Say Reddit Is Quietly Banning Them,” Rolling Stone, August 17, 2022; and Samantha Cole, “Craigslist Just Nuked Its Personal Ads Section because of a Sex‐​Trafficking Bill,” Vice, March 23, 2018.

21. Tim Cushing, “Louisiana Law Now Requires Age Verification at Any Site Containing More than One‐​Third Porn,” Techdirt (blog), January 3, 2023.