AI Panic and Misdirected Fear: Notes from a Public LinkedIn Series
This blog post grew out of a short series I started writing on LinkedIn. I didn’t plan it as a formal project. I was just reacting in real time to how conversations about AI keep collapsing into panic, moral judgment and oddly specific "tells." Since that’s where much of the conversation was happening, that’s where I wrote.
I also wanted these thoughts to live on my blog, and that’s the purpose of this post: pulling the ideas together in one spot.
I’m not an AI researcher. I’m not here to make predictions or act like I think I know everything. I’m a writer who follows this space closely and keeps seeing the same misunderstandings repeated LOUDLY (and sometimes every five minutes). What I’m seeing most often is a mix of real fear, real confusion and misdirected energy, and as we all know, that combination rarely leads to good outcomes.
The AI Writing Detection Panic Isn’t About Detection
Despite how confidently they’re marketed, AI writing detectors can’t reliably determine authorship. Many experts argue that detection itself is a dead end.
That hasn’t stopped people from obsessing over supposed “hints” like:
- em dashes
- paragraph length
- polished tone
- even intentional typos to seem more “human” (I can’t imagine doing this.)
Some of these observations sound plausible, but none of them prove anything.
Detection tools look for predictability and patterns. Unfortunately for those tools, humans also write in patterns, so style choices aren’t evidence of who (or what) wrote what.
(If em dashes are a smoking gun, a lot of writers from the last two centuries are in big trouble.)
I don’t think the fixation on being detected is really about technology. It’s about trust and identity. People worry AI will replace their voice, cheapen their skill or make effort harder to recognize. I get it. Those fears are understandable. But anxiety tends to latch onto whatever feels concrete, even when it’s wrong. That’s how misinformation spreads.
The reality is simple: you can’t reliably detect AI writing using current tools or stylistic markers, so instead of asking whether someone used AI, better questions to ask are:
Is the content accurate? Thoughtful? Misleading? Actually useful?!
If AI helped someone brainstorm or clarify ideas — with human editing and fact-checking still involved, of course — that’s not a crisis. That’s a tool doing its job.
The real problems worth focusing on are things like deepfakes, misinformation, data privacy, control and corporate misuse. People using AI to outline ideas are not the problem. Em dashes are not the problem.
The “AI Isn’t Green” Panic Misses the Point
Another common panic focuses on AI’s environmental impact.
To be clear: AI uses energy, and climate change is a defining crisis. Those concerns are valid.
But the way this issue is often framed online does more harm than good.
Yes, data centers consume significant power. Estimates suggest AI’s share of U.S. electricity could rise meaningfully over the next few years. That matters.
What’s missing is context.
Every modern digital system uses energy: aviation, crypto, streaming, cloud storage, the list goes on. But the question isn’t whether AI consumes power. It’s what we get from it, and whether specific uses justify their costs.
Infrastructure also evolves. Companies are investing in renewable energy, more efficient chips and better cooling systems. Whether that’s sufficient is a fair debate, but it’s not being ignored!
There’s also a quieter reality: AI is already being used to improve power grids, accelerate battery research and support clean energy development. The same technology people panic about could help reduce emissions long-term. That’s a really big deal.
What doesn’t help are slogans like:
“Every prompt uses a bottle of water” or “AI is destroying the planet.”
Guilt and scare tactics don’t lead to good policy. Governance does.
Not all AI use is equal. Generating low-value content isn’t the same as advancing research or helping people learn new skills (with verification and care).
A more productive focus would be transparency, efficiency, grid investment and clear tradeoffs alongside addressing AI harms happening right now, like bias, surveillance and disinformation.
The Common Thread
Both the detection panic and the environmental panic follow the same pattern:
they target individuals, simplify complex systems, and let real power structures off the hook.
Panic feels productive, but it isn’t.
In my opinion, accountability, regulation and thoughtful pressure on the correct targets are harder but way more effective.
So, for now, this is where I’ll pause. Definitely not with certainty, but with a simple request for better conversations: less panic, more nuance and better targeting. PLEASE!
(I’m not an AI researcher or climate scientist — just a writer trying to bring clarity to a chaotic moment in time. If I continue this series, I want to look next at writing itself and what really happens when people use AI as part of the creative process. Oh, and why blaming everyday users misses the point.)