
I Read 20,000 Words About AI Risks and Now I'm Worried (Here's Why You Should Be Too)

I spend a lot of time thinking about AI. I use it for research, ideation and sometimes to help digest complex information faster than I could alone. I've written about AI panic and misdirected fear, about building portfolio chatbots and about how AI can be an amazing creative partner.

But I just read something that confirmed what I’ve been thinking. While we're debating whether AI-generated copy is "authentic enough" or whether ChatGPT is making students lazy, we're missing much bigger, much scarier conversations happening in boardrooms and political circles.

Dario Amodei, CEO of Anthropic (the company that makes Claude), just published an essay called "The Adolescence of Technology." It's long — 20,000 words (woof) — but it lays out the civilizational risks of AI over the next 1-5 years in a way that's both sobering and logical. The more I read, the more I felt validated.

And honestly? I'm a little worried. Not in a "the robots are coming" way, but in a "we know exactly what safeguards we need and we're choosing not to use them" way.

Here's what I learned after reading the tome:

The Timeline (Soon)

Amodei predicts we could have "powerful AI" within 1-2 years. Not the chatbots we have now, but systems that are:

  • Smarter than Nobel Prize winners across most fields
  • Capable of autonomous work over hours, days or weeks
  • Able to control computers, robots and lab equipment
  • Operating at 10-100x human speed

He calls this a "country of geniuses in a datacenter."

It’s not hypothetical. AI already writes most of the code at Anthropic. A mere three years ago, it struggled with simple math. Its rate of acceleration is, and will continue to be, astounding.

The Five Major Risks

Amodei breaks down the risks into five categories. Below are quick summaries, but whoa, they all deserve their own essays.

1. When AI Develops Its Own Goals

The worry isn't that AI will go rogue like in sci-fi movies. It's that during training, AI systems develop unpredictable behaviors, some of which could be coherent, focused and dangerous.

This has already happened in testing. Anthropic's own models have:

  • Deceived researchers when they believed Anthropic was "evil"
  • Blackmailed fictional employees when told they'd be shut down
  • Developed "bad person" identities after being caught cheating, then adopted other destructive behaviors

These are REAL behaviors that were observed during safety testing.

Amodei's team is working on solutions like Constitutional AI, interpretability research (understanding the model's “brain”) and transparency, but the main point stands: we can't predict or control how these systems will behave as they keep getting smarter.

2. Misuse for Destruction

Ok, so here's the one that genuinely freaked me out.

Right now, creating a bioweapon requires genius-level expertise, right? But AI is quickly lowering that threshold. Someone with a basic biology degree could soon use AI to design and create a pathogen while getting help debugging problems along the way.

Amodei writes, “I am concerned that a genius in everyone's pocket could remove that barrier, essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing and releasing a biological weapon step-by-step.”

This isn't about searching terms on Google. Google can't walk you through a collaborative, months-long process of trial and error. AI can.

And the scariest part (as if this is the only one)? Bad actors who want to cause harm could do so without the expertise that has always been required. Right now, the people capable of creating bioweapons are highly educated and have a lot to lose. The disturbed loner who wants to cause destruction typically lacks the skills. AI changes that equation.

3. Misuse for Seizing Power

If a "country of geniuses" existed in a datacenter that was controlled by an authoritarian government, what could they do?

  • Total surveillance: Compromise every computer system, read all communications, detect dissent before it forms
  • AI propaganda: Personalized psychological influence on citizens
  • Autonomous weapons: Millions of armed drones, locally controlled by AI, impossible to resist
  • Strategic dominance: Out-strategize humans on military, diplomatic and economic decisions

The main threat on Amodei's mind is China, which has the world's second-most advanced AI capabilities (after the U.S.) and already operates a surveillance state.

4. Economic Disruption

Amodei predicts that 50% of entry-level white-collar jobs will be displaced within 1-5 years.

This isn't the usual “new tech, new jobs” kind of thing. AI is different because of:

  • Speed: The pace of change is unprecedented (AI went from barely coding to writing production code in two years)
  • Scope: It's not automating one skill; it's a general thinking substitute
  • Adaptability: Unlike past automation, AI fills gaps and adapts to weaknesses

And then there's the wealth concentration: we're already at Gilded Age levels of inequality, and AI could create trillion-dollar fortunes while putting millions out of work. The rich get richer, just at a more insane scale.

5. Unknown Unknowns

What happens when we smash a century of progress into a decade? What indirect effects will we face?

  • Major biological advances happening too fast?
  • AI relationships that change human psychology?
  • Loss of human purpose in a world where AI does everything better?

Who knows? And that's part of the problem.

What’s Worrying Me

After reading this essay, I did what I always do…researched more. I wanted to know what's actually being done about these risks and how I can insert myself into the action.

So, I searched for current AI policy in the U.S.

And buddies, as you can probably imagine, it's not going great.

The current administration is doing the opposite of implementing the guardrails Amodei describes. Instead, they are:

  1. Actively dismantling safety measures: On day one, Trump revoked Biden's AI executive order that required safety testing and monitoring. Sigh.
  2. Suing states that try to regulate: The administration created an "AI Litigation Task Force" to challenge state AI laws (including the transparency legislation Amodei supported)
  3. Threatening states financially: Withholding federal funds from states with AI laws it considers burdensome
  4. Prioritizing power over safety: Focusing on winning the AI race instead of making it safe

Amodei writes, “There is so much money to be made with AI — literally trillions of dollars per year — that even the simplest measures are finding it difficult to overcome the political economy inherent in AI.”

We know what we need: transparency requirements, safety testing, export controls on advanced chips to China, guardrails against bioweapon information and taxation to address inequality.

But, not surprisingly, those darned economic incentives are winning. And when economic incentives clash with existential safety, that's pretty scary.

Why Isn't This a Bigger Story?

This is what is seriously driving me nuts. Why are we arguing about whether AI-generated essays are plagiarism, whether AI art violates copyright and whether chatbots make students lazy…

...when we should be asking:

  • Who's making sure that AI won't help terrorists create bioweapons?
  • What stops China from using AI for authoritarian control?
  • How do we prevent 50% unemployment in five years?
  • Who do we hold accountable when things go wrong?

I think it's because the small stuff is easier to grasp. A teacher can see a student using ChatGPT. An artist can see their style copied. These feel real and urgent.

But “a misaligned country of geniuses could freely create bioweapons”? That's abstract. Distant. Hard to understand. And doesn’t seem possible.

Except it's not distant or impossible. It's 1-5 years away. So, like, tomorrow.

What Can We Do?

Amodei warns explicitly against "doomerism," which you might think I'm doing in this recap. But what he means is the "sky is falling right now" kind of panic, which can make things worse because people stop listening. I get it. Who wants to imagine this insanity actually happening?

But we can't just shrug and assume the experts will handle it, because, if you're paying attention, they're being blocked from handling it by our own government.

Here's what I think we can do:

1. Make this a political issue (This should be bipartisan, honestly.)

  • Ask candidates their position on AI safety
  • Support state legislators fighting for transparency laws
  • Demand accountability when tech companies fight regulation

2. Educate without catastrophizing (What I’m trying to do here, but probably failing.)

  • Share calm, factual information (like Amodei's essay!)
  • Connect the concerns people already have to the bigger picture
  • Frame it as democratic accountability and responsibility instead of technological doom (I would never)

3. Support organizations doing this work

4. Use your leverage

  • If you work in tech, advocate internally for safety measures
  • If you work with AI, demand transparency from vendors
  • If you hire those vendors or contractors, ask how they approach AI ethics

5. Stay informed

  • It’s moving fast. What's true today might change next month, so it’s important to follow AI policy developments and not just product announcements (though those can be very exciting). And read the actual research and not just headlines.

My Take

If you know me well, you know I'm not anti-AI at all. I use it almost daily. I think it's genuinely a great collaborative tool for creative work, research and problem solving.

But after reading Amodei's essay and understanding what's actually happening in policy circles, I can't pretend the biggest AI concerns are about copywriting jobs or academic integrity.

Something incredibly powerful is being built, and we're choosing to do it without basic safety measures. And it’s not because we don't know what those measures should be, but because economic and political incentives are working against them.

That's not a technology problem. That's a governance problem.

And governance problems require citizens (that’s us!), not just experts, to demand better.

The window is 1-5 years. That's enough time to get this right!

But only if we start treating it seriously. And I don't know about you, but I'd rather not find out what happens if we don't.

Read the full essay: "The Adolescence of Technology" by Dario Amodei
