The rise of AI agents is reshaping how people interact with the internet, but it also threatens to deepen the data surveillance that already plagues the online world. By 2025, retailers, news outlets, and marketing firms had seen direct web traffic drop 20 to 40 percent as tools like ChatGPT began browsing, booking flights, compiling research, and even ordering groceries on behalf of users. Yet as these agents take on more tasks, they also collect, and can potentially expose, sensitive personal data.

This trend is not new. Since the dawn of online advertising, websites have routinely shared user data with ad networks and data brokers, and that data can then land in government hands. But the AI-driven web amplifies the risk. Users may ask ChatGPT for financial advice, reveal health concerns, or discuss their mental health, giving AI agents an intimate portrait of their lives.

Ads and the Agentic Web

OpenAI and Microsoft recently announced plans to introduce ads in ChatGPT and Copilot, following the dominant business model of the web. But experts warn that simply transplanting the existing surveillance-capitalism ecosystem onto the new platform would be a mistake. Instead, they argue, this is a chance to build privacy into the infrastructure from the start.

“When an AI agent books a hotel room, it doesn’t need to hand over personal data to dozens of booking sites,” said Sebastian Zimmeck, an associate professor of computer science at Wesleyan University and founder of Global Privacy Control. “It only needs to say someone wants a room in Berlin for three nights. The sites don’t need cookies, IP addresses, or location. Our agents can act as privacy bodyguards.”
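
To make the idea concrete, here is a minimal sketch in TypeScript. The interfaces are entirely hypothetical, not any real booking API; they only illustrate the data-minimization step Zimmeck describes, in which the agent strips tracking fields before contacting a site.

```ts
// Hypothetical types for illustration only; no real booking API is implied.

// What a booking site typically receives today via trackers and cookies.
interface TrackedBookingRequest {
  city: string;
  nights: number;
  cookieId: string;                          // cross-site tracking identifier
  ipAddress: string;                         // reveals approximate location
  geolocation: { lat: number; lon: number }; // precise location
  browsingHistory: string[];                 // behavioral profile material
}

// What the site actually needs to quote a price.
interface MinimalBookingRequest {
  city: string;
  nights: number;
}

// The agent acts as a "privacy bodyguard": only task-relevant fields survive.
function minimize(req: TrackedBookingRequest): MinimalBookingRequest {
  return { city: req.city, nights: req.nights };
}

const quote = minimize({
  city: "Berlin",
  nights: 3,
  cookieId: "abc-123",
  ipAddress: "203.0.113.7",
  geolocation: { lat: 52.52, lon: 13.405 },
  browsingHistory: ["searched: hotels near Alexanderplatz"],
});
// quote is { city: "Berlin", nights: 3 } and nothing more.
```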

Zimmeck advocates a combination of technical standards and legal enforcement. On the technical side, standards like the Model Context Protocol should restrict what data can be shared for ad targeting. On the legal side, privacy laws, such as those in California, Connecticut, Colorado, the European Union, and other regions, should be rigorously enforced against agentic platforms. These laws give users the right to stop AI companies from sharing their personal information, including the sale of entire profiles to data brokers.
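
Global Privacy Control, the project Zimmeck founded, already defines the wire format for one of those rights: a browser or agent sends the request header Sec-GPC: 1, which laws such as California’s treat as a binding opt-out of the sale or sharing of personal data. The sketch below shows one simplified way a site or agentic platform might check and honor the signal; the handler logic is illustrative, not any real site’s code.

```ts
// Minimal Node.js sketch of honoring the Global Privacy Control signal.
import { createServer } from "node:http";

createServer((req, res) => {
  // GPC is transmitted as the request header "Sec-GPC: 1".
  const optedOut = req.headers["sec-gpc"] === "1";

  if (optedOut) {
    // Under laws like California's CCPA/CPRA, this signal is a valid opt-out:
    // skip anything that would forward this user's data to ad networks,
    // data brokers, or other third parties. (Enforcement logic omitted.)
  }

  res.end(optedOut ? "GPC honored: no data sharing" : "no GPC signal sent");
}).listen(8080);
```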

Privacy Without Stifling Innovation

Critics worry that such measures could kill ad-based business models or stifle innovation. But Zimmeck argues otherwise. AI agents can still serve ads, even personalized ones, without sharing data. For instance, an advertiser can tell an AI company it wants to reach people planning a trip to Berlin; the agent handles the targeting itself and never hands over user data.
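
A rough sketch of how that could work, using invented interfaces rather than any real ad platform’s API: the advertiser submits only its campaign criteria, the agent matches them against user context that never leaves the platform, and the only thing exposed is which ad was chosen.

```ts
// Illustrative only: hypothetical campaign matching inside the agent.
interface Campaign {
  adId: string;
  targetIntent: string; // e.g. "trip-to-berlin", supplied by the advertiser
}

interface PrivateUserContext {
  inferredIntents: string[]; // derived from conversations; stays on-platform
}

// Returns an ad ID, or null if nothing matches. No user data is emitted.
function selectAd(campaigns: Campaign[], ctx: PrivateUserContext): string | null {
  const match = campaigns.find((c) => ctx.inferredIntents.includes(c.targetIntent));
  return match ? match.adId : null;
}

const adToShow = selectAd(
  [{ adId: "hotel-berlin-promo", targetIntent: "trip-to-berlin" }],
  { inferredIntents: ["trip-to-berlin", "vegan-restaurants"] },
);
// adToShow === "hotel-berlin-promo". The advertiser learns an impression
// occurred, not who the user is or what else they asked the agent.
```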

Small ad networks also have a role. They can run campaigns on agentic platforms and measure success in privacy-preserving ways. However, to prevent AI companies from gaining an unfair advantage, laws and technical measures must ensure the platforms themselves don’t exploit their privileged access to user data.
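
One illustrative form that privacy-preserving measurement could take is differential-privacy-style aggregate reporting: the network receives impression counts with calibrated noise added, never per-user logs. The sketch below is a toy version, not a production implementation.

```ts
// Toy Laplace-noised counting for illustration; not audited for real use.

// Draw Laplace noise with the given scale via inverse-CDF sampling.
function laplaceNoise(scale: number): number {
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Report a count with noise calibrated to epsilon, assuming each user
// contributes at most one impression (sensitivity 1).
function noisyCount(trueCount: number, epsilon: number): number {
  return Math.max(0, Math.round(trueCount + laplaceNoise(1 / epsilon)));
}

// The ad network sees only this noised aggregate for the campaign.
const reportedImpressions = noisyCount(4217, 0.5);
```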

The broader political context matters. Recent fights over a national data privacy framework and the renewal of FISA Section 702 without a warrant requirement show that privacy protections remain contentious in Washington. Meanwhile, growing public demand for AI education underscores rising awareness of these issues.

“Many websites treat consumers like products to be sold,” Zimmeck said. “AI agents can help prevent that. It won’t be easy, and likely not perfect, but we can do much better than the current system.”