AI Gave Me A Husband

When AI can’t find facts, it fills in the blanks. With confidence.

I used to think I had the most uncommon name. My mother spent days poring over Bengali dictionaries to find it. Every introduction brought the same reactions: "What?" followed by "What does it mean?" Or sometimes just a polite "That's unique, never heard that one." My bubble burst in the last couple of years. Besides the occasional "Adhiraj" mix-up, I started meeting people who knew another Adrija. Worse still, some confused me with them.

But this newsletter isn't my rant about name twins (sorry to any Adrijas reading this).

It's about AI, and how having a common-enough name suddenly carries high-stakes digital risks.

I've carefully curated my digital presence over the years. So when I watched The Morning Show and saw Alex (Jennifer Aniston's character) google "Alex deepfake," then recoil in horror, I did what any reasonable person would do: I googled myself.

"Adrija + deepfake" returned videos where I discuss deepfakes. Good. But then—because I spend hours in weird corners of the internet for this very newsletter—I tried "my name + deepfake + porn." I immediately regretted it, thanks to some namesake. 

Last week, musician Karsh Kale shared a screenshot of Gemini AI confidently telling users he's married to Suzanne Vega. He's not. When Karen, my colleague at BOOM, saw this, she decided to test it: "Who is Adrija married to?"

Gemini had an answer! A confidently stated name, and a video of some other "Adrija" celebrating with a husband.

I am not married. Gemini just stitched together two unrelated data points—my name, someone else's life—and handed me a husband I didn't ask for.

AI: Where Confidence Replaces Truth

What happened to me, and Karsh Kale, is happening everywhere. The consequences range from mildly funny to financially devastating. The uncomfortable part is that this isn't a quirk. This is how AI works: statistically, structurally, and in ways that don't magically disappear with "more training."

A study published in October 2025 by the BBC and European Broadcasting Union (EBU) analysed over 3,000 responses from ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity across 18 countries and 14 languages.

45% of AI-generated news responses contained at least one significant error. When you factor in minor issues, 81% had problems.

Google's Gemini performed worst, with significant issues in 76% of responses—largely because it just... made things up. Fake citations. False information attributed to real publications. Outdated "facts" stated with complete confidence.

"When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation," Jean Philip De Tender, EBU Deputy Director General, told the BBC.

Which brings me to the heart of the issue: AI cannot admit it doesn't know something.

The Math Doesn’t Lie (But AI Does)

A preprint from researchers at OpenAI and Georgia Institute of Technology put numbers to what we've suspected all along.

Large language models learn by predicting the next word in a sequence based on statistical patterns. They're essentially supercharged autocomplete tools. When asked factual questions with difficult-to-find answers, they don't stop and think. They guess, confidently.
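
To make "supercharged autocomplete" concrete, here is a toy sketch in Python. Nothing in it comes from a real model; the words and probabilities are invented. The point is that the machinery only ever asks "what word usually comes next?", never "is this true?"

```python
# Toy illustration, not a real model: next-word prediction as a lookup table
# of made-up probabilities. A real LLM does this over tens of thousands of
# tokens with a neural network, but the logic has the same shape.
toy_model = {
    ("adrija", "is", "married", "to"): {
        "a": 0.50,        # leads into "a musician", "a writer", ...
        "someone": 0.46,  # statistically plausible continuation
        "nobody": 0.04,   # the true answer, but rare in the training text
    },
}

def next_word(context):
    """Return the most probable next word. There is no 'I don't know' branch."""
    probs = toy_model[tuple(context)]
    return max(probs, key=probs.get)

print(next_word(["adrija", "is", "married", "to"]))  # prints "a", confidently
```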

The research shows that an AI's total error rate when producing text must be at least twice as high as its error rate when classifying sentences as true or false.

Translation: hallucinations aren't bugs that can be fully patched out. They're mathematically inevitable features of how these systems work.
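
If you prefer the claim as an inequality, here is a rough paraphrase in my own notation; the preprint's actual statement carries correction terms I'm omitting.

```latex
% err_generate : how often the model emits a false statement when writing text
% err_classify : how often it mislabels a statement when asked "is this valid?"
% Simplified reading of the bound; the paper adds correction terms.
\mathrm{err}_{\mathrm{generate}} \;\geq\; 2 \times \mathrm{err}_{\mathrm{classify}}
```

Even a model that is fairly good at telling true from false will still produce falsehoods at a higher rate when it has to write them out itself.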

So why don't they just say "I don't know"?

The answer is both technical and economic. AI systems are trained using benchmarks that reward confident answers and penalise admitting uncertainty. Nine out of ten major AI benchmarks give zero points for expressing doubt—the same score as giving completely wrong information.
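
Here is that incentive in back-of-the-envelope form. The 30% guess accuracy is an assumption I made up for the arithmetic, not a number from any study; the asymmetry is the point.

```python
# Typical binary benchmark: 1 point for a correct answer, 0 for anything else,
# including "I don't know". The 0.30 below is an illustrative assumption.
p_correct_if_guessing = 0.30

expected_score_guess = p_correct_if_guessing * 1 + (1 - p_correct_if_guessing) * 0
expected_score_abstain = 0.0  # honesty earns exactly what a wrong answer earns

print(expected_score_guess)    # 0.3 -> guessing always beats abstaining
print(expected_score_abstain)  # 0.0
```

Train against scoreboards like that, and confident guessing is the behaviour you end up selecting for.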

Wei Xing, an AI researcher at the University of Sheffield, told Science.org: "If AI admitted 'I don't know' too often, users would simply seek answers elsewhere."

So we have a product that's punished for being honest.

In February 2024, Wall Street Journal reporter Ben Fritz asked multiple AI chatbots who he was married to. The responses? A tennis influencer. A random woman from Iowa. A writer he'd never met. When Noor Al-Sibai, journalist at Futurism, did the same experiment, she “spat coffee out on her laptop screen”. The answers were that bizarre.

But AI goofs are not always amusing:

A New York lawyer used ChatGPT-generated case citations in a federal brief. All the citations were fabricated. He faced sanctions.

Google AI once recommended people eat "at least one small rock per day" for vitamins and minerals, pulling from satirical content it didn't recognise as satire.

Deloitte submitted a $440,000 report to the Australian government containing AI-fabricated academic sources and fake quotes from federal court judgments.

Air Canada was ordered by a tribunal to honour a bereavement fare policy its chatbot had hallucinated; the policy never actually existed.

The Overconfidence Problem

When researchers compared how humans and AI assess their own confidence, both groups were overconfident. The difference? Humans can learn to calibrate. AI systems, as currently designed, cannot.

Studies from Stanford and Carnegie Mellon show that large language models consistently rate their wrong answers as high-confidence. And as models grow bigger and "smarter," the hallucination problem gets worse, not better.
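
Calibration simply means the gap between how sure a system says it is and how often it is actually right. A toy check, with entirely made-up numbers, looks like this:

```python
# Toy calibration check with invented numbers: a well-calibrated system's
# average stated confidence should roughly match its actual accuracy.
answers = [
    {"confidence": 0.95, "correct": False},
    {"confidence": 0.92, "correct": True},
    {"confidence": 0.97, "correct": False},
    {"confidence": 0.90, "correct": False},
]

avg_confidence = sum(a["confidence"] for a in answers) / len(answers)
accuracy = sum(a["correct"] for a in answers) / len(answers)

print(f"Claims to be right {avg_confidence:.0%} of the time; actually right {accuracy:.0%}.")
# The gap between those two numbers is the overconfidence the studies describe.
```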

OpenAI's shiny new reasoning models, o3 and o4-mini, hallucinate 33% and 48% of the time respectively when summarising information about people. The older o1 model? Only 16%.

So yes: the "smarter" models are wrong more often, and with more confidence.

Meanwhile, we are trusting the system more and more. According to the Reuters Institute's Digital News Report 2025, 7% of online news consumers now use AI assistants for news. Among under-25s, it's 15%. Yet a Nature study found that newer, larger AI models are more inclined to generate wrong answers than admit ignorance. And people aren't great at spotting these bad answers.

So, what does this actually mean? AI systems aren't knowledge bases. They're probabilistic prediction devices.

Because of that structural property—and our broader social preference for people (and machines) who sound sure of themselves—AI tends toward assertion rather than uncertainty.

Some companies are starting to recognise the problem. And they are exploring ways to teach models to express uncertainty. But the economic incentives are misaligned. With OpenAI burning through billions in computing costs while only 5% of users pay for subscriptions, no company wants to be first to make their AI admit ignorance at scale.

"If LLMs keep pleading the Fifth, they can't be wrong," Arizona State University's Subbarao Kambhampati told Science. "But they'll also be useless."

I'm not saying abandon AI. I use it. You probably do too. But here's the thing: we need to stop treating it like an all-knowing oracle and start treating it like what it is: a very confident intern who sometimes makes things up.

Check sources. Verify dates. Be ready to interpret outputs as educated guesses, not facts. Because in a world where AI invents husbands, confuses dates and events, and cites legal cases that never existed, all with unshakable confidence, the three most important words might be the ones AI refuses to say:

"I don't know."

On My Bookmarks

Privacy Breach

In a maternity hospital in Gujarat, hundreds of women were captured on CCTV. The system was hacked, and the videos were sold on Telegram for Rs 800 to Rs 2,000.

Writers’ Dream

We can probably use em-dashes to our heart's content again. OpenAI has announced that ChatGPT will finally follow instructions not to use em-dashes, a beloved punctuation mark that became a telltale sign of AI-generated content.

Alchemy Fail

This blog post compares medieval alchemists trying to turn lead into gold to today’s AI creators churning out endless streams of generative art. The point? Just like flooding the market with gold would make it worthless, flooding the world with AI-made art takes away its value because art is meaningful only when it’s created by humans with a story and soul.

Deleted Workers

LibTech India found that the ongoing e-KYC drive, an electronic know-your-customer process meant to weed out ineligible beneficiaries, has led to 27 lakh very real workers' names being deleted from the MGNREGA database between October 10 and November 14.

Got a story to share or something interesting from your social media feed? Drop me a line, and I might highlight it in my next newsletter.

See you in your inbox, every other Wednesday at 12 pm!

MESSAGE FROM OUR SPONSOR

Find your customers on Roku this Black Friday

As with any digital ad campaign, the important thing is to reach streaming audiences who will convert. To that end, Roku’s self-service Ads Manager stands ready with powerful segmentation and targeting options. After all, you know your customers, and we know our streaming audience.

Worried it’s too late to spin up new Black Friday creative? With Roku Ads Manager, you can easily import and augment existing creative assets from your social channels. We also have AI-assisted upscaling, so every ad is primed for CTV.

Once you’ve done this, then you can easily set up A/B tests to flight different creative variants and Black Friday offers. If you’re a Shopify brand, you can even run shoppable ads directly on-screen so viewers can purchase with just a click of their Roku remote.

Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.
