What Editing Decode Taught Me About India in 2025

The stories that reminded me why slowing down the internet to ask better questions still matters.


I started Decode in 2022 because I wanted to understand how technology actually works in India—not the press release version, but the real one. How tools, platforms, policies, misinformation, and corporate narratives collide with people's lives. How the internet shapes power, and who pays when systems break down.

Three years in, 2025 has been clarifying and contradictory. India is now the eighth most digitised nation globally, with UPI clocking 17 billion transactions a month at the start of the year, and AI giants betting billions on data centers. The government pitches our digital public infrastructure as a model for the world. The keywords at press events are "inclusion" and "innovation."

But look closer, and 2025 also revealed something else: algorithms excluding people from food and jobs, data protection rules that empower the state over citizens, unemployment pushing people into jobs that turn them into scammers, content takedown orders issued by the thousands without transparency, and facial recognition systems deployed without statutory backing.

The gap between what technology promises and what it delivers has never felt wider. 

Image: Dinesh Hukmani/Shutterstock

I'm now less interested in whether systems work and more interested in what happens when they don't—or worse, when they work exactly as designed, but the design never considered the people using them.

Here are some Decode stories from 2025 that remind me why this journalism matters.

The degree costs money. The education? That's extra.

Medical students in India face a peculiar economics lesson before they ever touch a stethoscope. Government colleges charge around ₹18,000 a year. Private ones, somewhere between ₹60 lakh and a crore. Then there's the app subscription: ₹50,000 annually.

Not for entertainment. For education.

A student at Government Medical College Bettiah told Vipul Kumar she goes to class because attendance is mandatory, not because anyone's teaching. Her classmates scroll through Marrow lectures while professors drone on. One student couldn't get her app to work because she'd shared her login—the app caught her, blocked her account for three months, and she survived on pirated Telegram videos.

The apps openly admit they are no substitute for formal medical training. But they've become the syllabus anyway. Faculty shortages and overcrowded colleges have created a vacuum, and ed-tech filled it. With a price tag.

The uncomfortable truth: Indian medical students now pay twice. Once for the degree. Once for the education. And if you can't afford the second payment, good luck becoming a competent doctor.

Your face doesn't match. Try again tomorrow.

Eight months pregnant, Avni stood in an Anganwadi center staring into a phone camera. Third day in a row. The facial recognition system refused to believe she was herself.

"Maybe pull your dupatta over your head like in the earlier photo," the worker suggested gently.

Since July, India's nutrition scheme for pregnant women and children has mandated facial recognition to distribute take-home rations. Packets of panjiri, fortified flour, energy supplements—nothing gets handed over without the machine's approval.

The system rejects dark skin. Tired faces. Pregnancy weight. Bad lighting. 

Hera Rizwan reported on the Anganwadi workers—already underpaid, overworked, managing two centers each because their colleagues were fired for protesting—who now spend evenings coaxing apps into compliance. The technology was supposed to reduce their workload. Instead, it added surveillance, biometric verification, and the constant fear that if too many faces don't match, their jobs disappear.

Efficiency, in these moments, feels less like progress and more like cruelty with a user interface.

India’s misinformation problem, still amplified by old tricks

During the India-Pakistan tensions in May 2025, a familiar zombie rose from the dead: "Dance of the Hillary," the malware that wasn't.

WhatsApp groups exploded with warnings. Don't click Instagram links. Don't open videos. Pakistani hackers will wipe your phone clean. The panic was real: ATM queues stretched for hours in Kashmir, shopkeepers refused UPI payments, families stayed awake terrified.

The malware? Didn't exist. Never did. This exact hoax has been circulating since 2016, maybe earlier, resurrected whenever there's a crisis and people are already scared.

What fascinated me wasn't the fake malware—it was how little sophistication the deception needed. No deepfakes. No AI-generated voices. Just text messages that sounded official, forwarded by people you trust, during a moment when everyone's guard is already up.

India doesn't need cutting-edge AI to be misled. It needs a forward button, a moment of fear, and the appearance of authority. Old tricks still work. That should worry us more than deepfakes.

When virality becomes the business model

Aviator is an illegal betting game. It's also wildly popular on YouTube and Instagram, promoted by influencers with millions of followers, amplified by Meta's ad platform, and legitimised by AI-generated celebrity endorsements.

Alishan Jafri's investigation found 75 YouTube channels selling video slots to scammers. They'd post a "how to hack Aviator" video—complete with a fake Shahrukh Khan voiceover—charge ₹17,000 for the slot, keep it live for five days, then delete it. By then, thousands had already clicked through to Telegram groups where "predictors" promised to multiply their money.

One student lost ₹35,000. Tried to recover it through a prediction group. Was told he'd won ₹20,000—just invest ₹15,000 more to claim it. When he ran out of money, they turned hostile.

Platforms profit from the engagement. Influencers pocket the fees. Scammers vanish into Telegram. And the victims—engineering students, daily wage workers, anyone desperate enough to believe in easy money—bear the entire cost.

Everyone had plausible deniability. Illegality becomes negotiable when virality is high. Accountability is always someone else's problem.

When AI helps, and what that means

Kerala Police used AI to solve a 19-year-old murder case. They took grainy 2006 photographs of two suspects, aged them digitally, tried different hairstyles and features, and kept iterating until something clicked.

A wedding photo on Facebook matched one AI-generated image. That led them to Puducherry, where both men had been living under false names for nearly two decades. The arrests brought closure to a mother who'd spent 19 years seeking justice for her daughter and twin granddaughters.

It's a genuinely good outcome. AI sifted through data humans couldn't, revived leads that had gone cold, and helped investigators ask better questions.

But it also raises uncomfortable ones. What happens when facial recognition misidentifies someone? When AI suggests a match and police act on probability, not certainty? When tools built to assist judgment start replacing it?

The promise is real. So are the risks. And the difference between aid and overreach is thinner than most policy documents admit.

When digital IDs become gatekeepers

And then there’s APAAR, India's new digital education ID—a story I reported on.

The official line: voluntary. The reality: schools refusing admission without it, teachers threatened with salary cuts if targets aren't met, educational boards making the ID mandatory for exams, students trapped because their Aadhaar card has one extra letter their school register doesn't.

What stayed with me wasn't just the technical failures. It was watching access become conditional—not on learning or ability, but on whether your mother's phone could receive an OTP in a village with no signal. On how many phones a family has versus how many children. On whether three separate government databases agreed on the spelling of your name.

When access is digitised, exclusion scales silently. And the gap between "voluntary" policy and mandatory enforcement becomes a bureaucratic maze where children's futures get lost.

Editing Decode has made me sceptical of easy optimism and deeply curious about consequences. Technology doesn't fail equally. It fails downward, hitting hardest the people with the least power to push back.

These stories take weeks, sometimes months. They're reported by journalists who follow threads most outlets ignore. And with every story I edit, I realise that slowing down the internet long enough to ask better questions is still worth doing. The tiny BOOM team has been relentless—whether it's tracking misinformation as it goes viral, or investigating how that false information actually harms people.

If Decode's work feels important to you, share these stories. The algorithm won't surface them unless you become the messenger.

On My Bookmarks

Deception Deluge

Tech Policy has an end‑of‑year analysis on how hyper‑realistic AI video, increasingly complex content pipelines and uneven detection capacity are reshaping information warfare, regulation and platform responsibility.

Ban Bust

With Australia becoming the first country to lock under‑16s out of TikTok and Instagram, this Bloomberg piece punctures the hype. It shows how easy workarounds, privacy‑invasive age checks and a lack of investment in real mental‑health support make the ban more theatre than solution.

Humanity Against AI

Anthropic has a nine‑person “societal impacts” team whose whole job is to study how its chatbot is actually changing people’s lives, rather than just catching obvious misuse. The Verge followed this group as they mined millions of conversations with an internal tool called Clio, published inconvenient truths about bias, persuasion, economics and elections, and then tried to push those findings back into product and policy decisions inside a 2,000‑person AI company racing for market share.

A Trap

A developer building an on-device NSFW detector downloaded the widely cited NudeNet AI dataset to benchmark his tool, only to have Google suspend his entire account after its automated systems flagged hidden CSAM within the scraped images. Google ignored his appeals until reporters stepped in, showing how auto-moderation hurts good researchers while AI companies scrape freely.

Got a story to share or something interesting from your social media feed? Drop me a line, and I might highlight it in my next newsletter.

See you in your inbox, every other Wednesday at 12 pm!

