Kagi Translate's Silly 'Languages' Show Why We Still Love Playing With AI

You know that moment when you find a feature you weren’t supposed to find, and suddenly the whole internet wants to play with it too? That’s basically what happened this week with Kagi Translate. The paid search competitor quietly released a translation tool last year, but nobody really cared until people realized you could make it translate into “LinkedIn Speak,” “Gen Z slang,” or even “horny Margaret Thatcher.”

Yes, really.

It turns out the tool’s underlying AI doesn’t actually care what you ask for. Want your text converted to “rude man with a Boston accent”? Just type it in. How about “tiny little kitten”? Sure, why not. The whole thing feels like discovering a secret passage in a video game, except the secret passage is actually just bad input validation.

The Great LLM Party Trick Revival

Here’s what’s wild about this: it’s reminding the internet what was actually fun about AI before everything got so serious. Remember when ChatGPT first launched and people were delighted just asking it to write Vogon poetry or recreate forum arguments from the year 2000? That was genuinely entertaining. It felt like playing with a new kind of creative tool rather than worrying about whether robots would steal your job.

Kagi leaned into the chaos. Their social media team started encouraging people to use the LinkedIn translations to “fit right into that crowd,” which is the kind of self-aware humor you don’t usually see from companies. People started testing increasingly absurd translation targets. Someone tried Werner Herzog. Someone else tried Carl Sagan. The usual internet suspects showed up with political jokes and media criticism.

This whole thing happened because Kagi’s engineers didn’t build strong guardrails around what users could input. They built a thin wrapper for language transformation, passed whatever users typed more or less straight into the underlying LLM’s prompt, and watched to see what would happen.
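The setup the piece describes, user text flowing into the model’s prompt with no checks on the “language” field, can be sketched roughly like this. This is entirely hypothetical; `build_prompt` is an illustration of the pattern, not Kagi’s actual code:

```python
def build_prompt(text: str, target: str) -> str:
    """Interpolate the user-supplied target directly into the prompt,
    with no validation -- the 'bad input validation' the article describes.
    Hypothetical sketch; not Kagi's real prompt template."""
    return (
        f"Translate the following text into {target}. "
        f"Preserve the meaning but adopt the requested style.\n\n{text}"
    )

# Any string at all works as a "language":
prompt = build_prompt("Please see attached.", "LinkedIn Speak")
```

Because the target string is just interpolated text, the model treats “rude man with a Boston accent” no differently from “French,” which is exactly why the trick works.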

The Uncomfortable Truth About Minimal Oversight

Here’s where it gets sticky, though. The same flexibility that makes the tool fun also means you can ask it to emulate “someone who keeps saying slurs” and it’ll probably do it. No fancy jailbreak needed. Just type it in. The company didn’t sanitize inputs, and that’s the kind of oversight failure that becomes a bigger problem when you’re building tools at scale.

But unlike Google’s AI Overviews that hallucinate fake information or some startup’s sketchy AI therapy bot handing out medical advice, the stakes here are genuinely low. Nobody’s going to trust Kagi Translate to make important decisions. It’s not being used for anything critical. It’s just a toy that showed us something interesting about how technology companies can accidentally create fun when they leave enough room for play.

The thing is, even “toys” need some basic sense. You don’t have to go full corporate paranoia mode, but maybe catch the most egregious stuff before it ships. Kagi probably should have seen this coming, or at least thought about what might happen when you let users type arbitrary text into a language parameter field connected to an LLM.
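Catching “the most egregious stuff” doesn’t require corporate paranoia. One minimal screen, purely illustrative and not anything Kagi actually ships, would be a denylist check on the target field before it ever reaches the model; a real product would likely pair this with a proper moderation classifier:

```python
# Illustrative only: a crude denylist screen for the target-"language" field.
# The terms here are hypothetical and deliberately tiny; real moderation
# would use a maintained list plus a content-classification model.
BLOCKED_TERMS = {"slur", "racist"}

def is_allowed_target(target: str) -> bool:
    """Reject targets containing any blocked term (case-insensitive)."""
    lowered = target.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

is_allowed_target("tiny little kitten")              # permitted
is_allowed_target("someone who keeps saying slurs")  # blocked
```

Even a filter this naive would have caught the example above, which is the author’s point: some basic sense, not a lockdown.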

What This Actually Says About Us

The rush to test increasingly ridiculous translation targets says something about why people got interested in AI in the first place. It wasn’t the promise of productivity gains or workplace automation or whatever the industry evangelists were selling. It was the weirdness. It was the fact that you could ask a machine to do something completely unexpected and sometimes it would actually work.

Five years ago, you couldn’t just ask a tool to think like Werner Herzog or write like you were explaining tech to McKinsey consultants. Now you can, and while that’s not going to revolutionize anything, it’s genuinely delightful in a way that most tech isn’t anymore.

The real question is whether we’ll see more of this kind of playful experimentation or if every company will lock everything down so tight that using an AI tool becomes just another sterile, corporate experience. Kagi stumbled into something with early-internet vibes, even if it was partly accidental.

What happens when the fun part of AI becomes the thing people actually want to use?

Written by

Adam Makins

I’m a published content creator, brand copywriter, photographer, and social media manager. I help brands connect with their customers by developing engaging content that entertains, educates, and offers value to their audience.