A class-action lawsuit filed yesterday by five major publishers and bestselling novelist Scott Turow is escalating the legal war over artificial intelligence and intellectual property, targeting Meta and its founder and CEO, Mark Zuckerberg, directly. The complaint alleges Meta used millions of copyrighted books and journal articles, some pulled from the known piracy hubs LibGen and Sci-Hub, to train its Llama large language model without authorization.
The lawsuit claims Zuckerberg “personally authorized and actively encouraged the infringement,” a charge that, if proven, could significantly raise the stakes for Meta. The company has not yet publicly responded to the allegations, but the case is already sending shockwaves through the publishing industry and beyond.
“This is shameless, damaging and unjust behavior,” Turow said in a statement, calling it “distressing and infuriating” that one of the world’s richest corporations would allegedly use pirated versions of his work to build a system capable of producing “competing material, including works supposedly in my style.” The fear, he added, isn’t just about copying—it’s about outright replacement.
The lawsuit argues that AI-generated books are already flooding marketplaces like Amazon, potentially crowding out human authors. Even more troubling, these systems can summarize entire novels so effectively that readers might skip buying the original altogether. In one example cited in the filing, Llama was prompted to mimic a travel writer’s voice and produced what the complaint called a “convincing rendition” of that style. When asked how it did it, the system essentially admitted it had been trained on vast amounts of text, including that author’s published work.
This case is part of a growing wave of legal challenges against AI companies. Anthropic recently agreed to a $1.5 billion settlement with writers over similar claims, and lawsuits against OpenAI, Google, and others are piling up. The core question is whether training AI on copyrighted material without consent constitutes fair use—or outright theft.
Congress is now facing mounting pressure to define what fair use looks like in the age of artificial intelligence. The economic stakes are high: AI can create jobs and expand access to information, but the cost of unregulated innovation, critics argue, is increasingly borne by the creators whose work makes these systems possible.
The lawsuit doesn’t just target Meta—it challenges the broader ethos of Silicon Valley’s “move fast and break things” culture. “AI can absolutely be a great thing,” said Lindsey Granger, a NewsNation contributor and co-host of The Hill’s “Rising.” “But the excuse that we need to move fast to compete cannot come at the expense of the people who have spent their lives creating the very content these systems are built on.” Granger’s column, an edited transcription of her on-air commentary, calls for regulation and oversight from Congress so that stealing intellectual property doesn’t become the norm in the name of innovation.
At its core, this case is about respect for creative work and deciding where to draw the line between inspiration and appropriation. Allowing AI companies to train on pirated data is a policy choice, not an inevitability, and the outcome of this lawsuit could set a precedent for how America balances technological progress with the rights of creators.
