The Teddy Bear Was Once Evil

What past tech panics reveal about our future with AI — and how to design better

Here’s the problem

Teddy bears. Walkmans. Comic books. The personal computer.

All once considered dangerous.

You read that right. The teddy bear was branded a “monstrous threat” to young girls because critics feared it would ruin their maternal instincts. Comic books were blamed for juvenile delinquency. The bicycle? Supposedly corrupted women’s morals and caused mental illness.

Sound ridiculous? That’s because hindsight has a sense of humor.

But these weren’t fringe opinions at the time — they were mainstream panic.

I’ve been revisiting this history through Pessimists Archive, an incredible site and Substack that catalogs tech panic through newspaper clippings from the 1800s to now. It’s a timeline of outrage — from the telegraph to the Walkman — and it’s a reminder that fear has always been part of innovation.

[Image: a screenshot of the timeline of inventions from pessimistsarchive.org]

And now?

We’ve added a “new” villain to the list: AI.

Why It Matters

What we fear says more about us than about the tools themselves.

Generative AI is the current lightning rod — ChatGPT, DALL·E, coding agents — but the same narratives are playing out:

  • “It’s making people lazy.”

  • “It’s killing creativity.”

  • “It’s replacing humans.”

Here’s the thing: AI is not ChatGPT. It’s not one product or one company. It’s a field — a spectrum — that’s been evolving since the 1950s. The same fears we’re hearing today? Dr. Norbert Wiener — the OG cyberneticist — voiced them 70 years ago. In fact, Pessimists Archive recently highlighted how even Wiener worried that machines could “escape our control” if we didn’t stay vigilant about the ethics of design.

And yet, most people I talk to still think AI = this chatbot that gets their grammar wrong or writes awkward haikus.

That’s why I’m writing this.

We’ve been wrestling with the morality of human-computer interaction for decades. The question isn’t if machines will take over. They won’t. They don’t have will, or desire, or agency.

The question is: What role do we want these systems to play in our lives?

For me, the answer begins with Douglas Engelbart.

You might not know his name, but you’ve used his ideas. Engelbart invented the mouse. He demoed video conferencing before the internet. In 1968, his “Mother of All Demos” showed a future most people couldn’t even imagine.

But what mattered most wasn’t what he built — it was why.

Engelbart didn’t believe in automation for efficiency’s sake. He believed in augmenting human intellect — building tools that expand our ability to think, collaborate, and create. Tools that make us more, not less.

That’s the camp I’m in.

As someone who didn’t grow up with 24/7 electricity, who didn’t have high-speed internet until the late 2000s, I never saw technology as inevitable. I saw it as power. And power needs purpose.

Automation has its place — dishwashers are great. But if all we do is automate in the name of speed, scale, and shareholder value, we risk building tools that erode human purpose at scale.

And that’s the real danger. Not Skynet. Not killer robots.

But tools that chip away at our agency, our creativity, our sense of meaning — quietly, invisibly, and with our permission.

So What Should Builders Do?

Whether you’re a founder, a PM, a designer, or an indie hacker — here’s my invitation:

1. Do no harm.

Sounds simple. It’s not. But it’s a worthy first principle.

As makers of tech, we can’t afford to think of ourselves as gadget makers. We’re shaping the infrastructure of everyday life.

And yet, there’s no Hippocratic oath for technologists. No formal pledge to center dignity, or safety, or human agency. Just shipping cycles and scale metrics.

We need our own code of care.

Because the truth is: your product will touch more people than you ever meet.

And often, the most vulnerable people will never show up in your user testing.

If you’re building AI systems, or tools that shape behavior, or platforms that move money, attention, or decisions — your work has massive ethical surface area.

So treat it like medicine.

Not because you’re operating on bodies — but because you’re operating on lives.

→ What assumptions are you encoding?

→ Who gets left out?

→ Who bears the cost if things go wrong?

This isn’t a moral flex. It’s a design imperative.

“Do no harm” doesn’t mean avoiding all risk.

It means owning the full impact of what you make — and choosing responsibility even when the roadmap doesn’t demand it.

That’s what real innovation looks like.

2. Understand not just users, but humans.

Too often, teams check the “user research” box by asking ChatGPT to generate a persona:

“This is Marissa. She’s 32. She lives in Austin. She hates friction and loves smoothies.”

That’s not research. That’s a Mad Lib.

We’ve mistaken synthetic patterns for lived reality — and it shows. Products are being built around quirky names, fake pain points, and shallow assumptions. All polished into pitch decks, yet totally disconnected from the messy complexity of actual human lives.

Demographics aren’t understanding.

Pain points aren’t behavior.

Real understanding means going deeper:

• What does this person care about?

• What are they trying to protect, prove, or preserve?

• What does a “good day” look like in their world?

• Where are they when they use your tool? What’s happening around them?

It’s not just what they do. It’s why they do it.

And how your invention might shift that dynamic — for better or worse.

Because every tool changes behavior. Whether we admit it or not.

And if you don’t understand the full context — their needs, abilities, values, limitations, and aspirations — you’re not designing with empathy. You’re designing with guesswork.

You are not the user. Even when you think you are.

That’s the real work. And it doesn’t live in a prompt. It lives in conversation.

Talk to people. Listen without trying to confirm your pitch.

Design like their lives — not your roadmap — are the real source of insight.

3. Take your idea on a journey.

When I work with founders or teams in the early stages of building, I walk them through a simple mental model I call the Multi-Room Test. It’s one of the best ways I know to confront both the promise and peril of your own invention.

There are three rooms:

🟢 1. The Disney Room

This is the utopia. Everything works out exactly as planned. Your tool uplifts people, saves them time, unlocks creativity, and makes the world a little more human. The most optimistic version of your product’s impact — play it out fully.

Now ask: Who benefits? How? What are the best use cases and why?

⚫ 2. The Black Mirror Room

Flip the switch. Now your invention causes harm — whether through misuse, unintended consequences, or system-level ripple effects. It’s addictive. It displaces jobs. It widens inequality. It gets into the wrong hands or is used in ways you didn’t anticipate.

Now ask: What damage could this do — even unintentionally? Who bears the cost?

⚪ 3. The Realist Room

This is where you land after walking through both extremes. Based on what you now know — not just from your imagination, but from talking to real people, not AI personas — what’s the likely outcome? What tradeoffs are worth it? What guardrails do you need?

The Realist Room is where thoughtful tech gets made.

You don’t emerge with guarantees. But you do emerge with clarity. Because when you’ve seen both utopia and dystopia, you stop designing in the dark.

Final Thought

Every generation has its tech panic.

But we don’t need to repeat the cycle.

We can meet this moment — this AI moment — with something deeper than fear:

Intention.

Design.

Responsibility.

Because the scariest thing isn’t that machines might become human.

It’s that we might forget what it means to be human in the first place.

Let’s not outsource that.

Over to You

What past tech panic do you remember?

What fears around AI feel real — and which feel recycled?

And how would you walk your own idea through the three rooms?

Comment below or hit reply — I’d love to hear how you’re navigating this moment.
