New research shows scams that once took hours now take minutes.
Generative AI isn’t just changing how we work; it’s also transforming how scams are pulled off. According to Vyntra’s 2026 report, tasks that once took fraudsters more than 16 hours can now be completed in under 5 minutes using generative AI tools.
That’s a massive shift. What used to require skill, time, and effort can now be automated and scaled almost instantly, turning fraud into what experts are calling a $400 billion global industry.
Why is AI making fraud so easy?
Because it removes the biggest barriers: time and expertise. Modern AI tools can generate convincing phishing emails, deepfake voices, fake documents, and even entire scam campaigns in minutes.
Scams are becoming so advanced that they’re now hyper-personalized, targeting individuals with tailored messages that feel incredibly real. And it’s not just theory: reports show AI-powered scams are growing at a much faster rate than traditional fraud, with entire “fraud-as-a-service” ecosystems emerging online.
This is no longer small-scale fraud
What’s really worrying is the scale. Fraud has evolved from isolated attempts into organized, industrialized operations, where criminals can launch thousands of scams simultaneously. And with AI automating much of the process, these attacks can be deployed faster, targeted more precisely, and scaled globally with minimal effort.
Estimates suggest global scam losses have already reached over $400 billion annually, with AI playing a major role in accelerating that growth. And the worst part is that many of these scams succeed quickly, often within hours of first contact, leaving very little time to detect or stop them.
What does this mean going forward?
At the end of the day, this isn’t just about smarter scams; it’s a full-blown shift in how cybercrime works. AI is making fraud faster, cheaper, and massively scalable, and right now, attackers seem to be evolving more quickly than defenses. The real challenge isn’t just spotting scams anymore… It’s keeping up with how quickly they’re changing.
Varun is an experienced technology journalist and editor with over eight years in consumer tech media. His work spans…
Meta’s next smart glasses sound like a treat for humans stuck with prescription lenses
Codenamed Scriber and Blazer and already through FCC filings, Meta’s prescription-focused AI glasses are shaping up to be the company’s most inclusive wearable launch yet.
For the billions of people who rely on corrective glasses every day (including me), smart glasses have always been a slightly awkward conversation. Sure, you can already pick up Ray-Ban Meta frames with your prescription built in, but it looks like Meta has something better in store for us.
According to a Bloomberg report, Meta is working on two new pairs of AI glasses designed specifically for prescription wearers, rather than treating them as an afterthought. The models could arrive in rectangular and rounded frame styles and, unlike current offerings, would reportedly be sold through conventional prescription eyewear retailers.
Study says AI chatbots are increasingly ignoring humans, but it isn’t quite Skynet yet
Isn’t it frustrating when you ask an AI chatbot something, and halfway through, it just goes off track? You might be discussing a simple technical fix, and suddenly it throws in random suggestions — things that don’t even exist or don’t make any sense. It’s confusing, and honestly, pretty annoying.
What makes it worse is that it often feels like the chatbot isn’t even paying attention to what you said. You give it clear details, but it either ignores them or responds with something completely unrelated. That’s exactly what this study points out. AI isn’t as reliable or “obedient” as we thought, and if you’ve used one for long enough, you’ve probably noticed it yourself.
I see Apple skipping the AI hellfire, but shaping Siri as the most flexible assistant
When Apple introduced Siri back in 2011, the world freaked out. A personal assistant on a phone with conversational chops elicited an audible gasp from the audience, and plenty of fear: “that it’s a sinister, potentially alien artificial intelligence that’s bound to kill us all,” as CNN’s coverage surmised. It was a one-of-a-kind advancement, something Apple was delivering consistently back then.
And then it fell off. Now, Siri has a reputation for being, well… not exactly the sharpest voice assistant, especially in a pool of next-gen generative AI assistants such as Claude, Gemini, and ChatGPT. Anyone who’s tried asking it a tricky question knows exactly what I mean — it’s a drag to talk with Siri, and more importantly, to get work done. But things are starting to change. Bloomberg’s Mark Gurman, a prolific all-things-Apple eavesdropper, shared yesterday that Siri might soon open its doors to third-party AI tools in a major iOS update. That’s right! Apple’s walled garden could finally be cracking.