Americans lost more than ten billion dollars to fraud in 2023, and job scams are among the fastest-growing categories. But the most dangerous scams no longer rely on fear—they exploit your dreams.

Take the case of a neurosurgeon, call him Doctor Dan, who works at one of the world's most prestigious universities. He received an email from a recruiter at a renowned hospital dangling a major step up in his career. By Saturday morning, after a 25-email exchange, he had sent over his CV. The problem? The recruiter was an AI chatbot, the job didn't exist, and the entire pitch had been crafted from his public resume.

Doctor Dan's friend, a journalist who teaches fraud detection to Fortune 500 executives, initially cheered the opportunity. 'This is HUGE,' she replied. She didn't spot the fake either. The scam was quiet, polite, and perfectly timed—no threats, no urgency, just a promise of the career he'd quietly wanted since medical school.

This is a new breed of fraud. For decades, we've been trained to spot scams by their smell: bad grammar, all caps, urgent threats like 'Your account has been hacked!' or 'Pay this toll now!' Those fear-based scams snare only a tired, distracted slice of the population at odd hours. But the new scams sell hope, and hope is more dangerous than fear. When a stranger threatens you, you slow down and verify. When a stranger offers your dream, you lean in and answer quickly.

The key enabler: large language models. Five years ago, personalizing a scam like this required a human to study a target's CV, publications, and skills—too expensive for volume. Today, AI can do that for 10,000 targets at once for the price of a cup of coffee. Personalization used to be the moat protecting sophisticated people. AI has drained the moat.

Doctor Dan's CV is online. So is yours. The same technology that built a fake job for him can build a fake anyone, tailored to anyone. In 2024, scammers cloned the voice and likeness of WPP CEO Mark Read in a WhatsApp and Teams meeting to trick staff into wiring funds. A finance worker at engineering firm Arup sent $25 million to fraudsters after a video call where all participants were deepfakes.

Recruitment scams grew by over 1,000 percent in a single quarter of 2025, according to McAfee. The Markup, a nonprofit newsroom, posted a job opening and watched scammers clone it, fabricate a contract, and hunt for applicants' banking details. For every scam reported, dozens more succeed—companies often hide their losses.

Doctor Dan got lucky: he shared only information that was already public, and the price was spam texts and a bruised ego. Someone less lucky might have handed over a Social Security number or banking details. As one analyst noted, Americans are already wary of declining ethics, but this threat is different. The old red flags no longer help: bad grammar, weird links, and manufactured urgency are exactly the things AI fixes.

The lesson: if you think you're too smart to be targeted, you're exactly who they're after. Hope sells more than fear, and in the age of AI, your dreams are the new vulnerability.