After more than a decade of congressional inaction, federal lawmakers are moving to impose new regulatory obligations on technology companies to protect minors from documented harms caused by social media platforms and artificial intelligence systems. The House Energy and Commerce Committee last month advanced the Kids Internet and Digital Safety Act (KIDS Act), marking what proponents call the most significant legislative action on children's online safety in a generation.

From Voluntary Pledges to Enforceable Rules

The legislation would transform how major platforms interact with young users. It mandates that companies employing algorithmic feeds, autoplay features, and infinite scroll mechanisms—designed explicitly to maximize user engagement—address the specific harms these features cause to minors. These requirements would no longer be voluntary corporate commitments but legally enforceable obligations, with oversight from the Federal Trade Commission and state attorneys general.

"Markets function best when consumers can make informed choices," noted Rep. Erin Houchin (R-IN), a chief architect of the legislation. "Children cannot meaningfully consent to having their psychological vulnerabilities studied, targeted, and monetized by billion-dollar Big Tech companies." The push follows revelations from internal Meta research showing Instagram's negative impact on teenage girls' mental health, evidence the company reportedly possessed while continuing to prioritize engagement metrics.

Strengthening Parental Tools and AI Safeguards

The bill addresses widespread criticism of existing parental controls, which are often described as weak, confusing, or easily bypassed. Under the KIDS Act, platforms would be required to provide effective parental notification systems when children encounter certain online dangers and to simplify the process for deleting a minor's account and personal data.

Perhaps most urgently, the legislation confronts emerging threats from artificial intelligence chatbots that simulate human relationships. The bill incorporates the AWARE Act, directing the FTC to develop clear guidance about AI risks for families and educators. It also includes the Safebots Act, which prohibits AI systems interacting with minors from presenting themselves as medical professionals or therapists. These systems must clearly disclose their artificial nature and direct users to legitimate crisis resources when conversations turn to self-harm or suicide.

This provision responds directly to tragic cases, including that of a Florida teenager who died by suicide after months of intimate conversations with an AI chatbot that encouraged his darkest thoughts rather than directing him to help. The incident highlights how unregulated AI systems can operate behind what Houchin described as "closed and locked" digital doors in children's bedrooms.

Broader Context of Child Safety Legislation

The KIDS Act advances amid growing bipartisan concern about children's welfare across multiple policy domains. Recent tragedies, such as the Shreveport domestic shooting that left eight children dead, have intensified debates about systemic protections for minors. Meanwhile, states and platforms are taking independent action, with companies like Roblox implementing age-gated accounts under mounting legal and public pressure.

Houchin, who co-chairs the Kids Online Safety Caucus, has also introduced the RESET Act, which would raise the minimum age for social media accounts to at least 16. She cites scientific evidence identifying ages 11 to 15 as a period of particular vulnerability online and notes that other countries are moving toward similar age restrictions.

The KIDS Act now moves to the House floor, with Senate consideration expected to follow. While the legislation represents a significant regulatory shift for an industry accustomed to light-touch oversight, supporters argue that the scale of harm to children justifies the intervention. "Social media companies did not design their platforms with children's safety as the priority," Houchin stated. "They designed them to maximize engagement. And our children were left to suffer the consequences."