
AI and War Pigs


The mistake people keep making is treating AI as a neutral tool dropped into a neutral world. That fantasy collapsed a while ago, but this week it became impossible to ignore. AI isn’t entering a peaceful system and then being misused. It’s being deployed directly into environments already defined by power, asymmetry, incentives, and unaccountable authority. In other words: war pigs already exist. AI just showed up with better math.


When Black Sabbath released "War Pigs" in 1970, it wasn't subtle. Politicians and generals hiding behind desks, sending other people's children to die, insulated from consequence. The song wasn't about war as tragedy; it was about war as bureaucracy. War as process. War as paperwork signed by people who never bleed.


That’s the frame that actually maps to AI.


AI is not the bomb. AI is the briefing. The targeting memo. The risk model that says “acceptable loss.” The probabilistic justification that lets someone say, with a straight face, that the system decided.


What’s new is not violence. What’s new is plausible deniability at scale.


AI systems don’t pull triggers. They rank, classify, recommend, flag, summarize, and prioritize. That sounds benign until you realize that every modern conflict—military, economic, informational, legal—runs on those exact verbs. Who gets flagged. Who gets ignored. Whose account is credible. Whose narrative is “extremist.” Whose collateral damage is statistically tolerable.


War pigs love systems that turn moral decisions into dashboards.


The most dangerous phrase in AI is not “superintelligence.” It’s “the model suggests.”


Because once a decision is framed as an output, responsibility dissolves. The general didn’t choose the target; the system did. The judge didn’t silence the speaker; the risk score did. The platform didn’t bury the truth; the ranking algorithm did. Everyone keeps their hands clean while outcomes stack up like bodies.


This is why AI slots so cleanly into existing power structures. It doesn’t challenge them. It automates them.


And the people building these systems know this, even if they don’t say it out loud. Every optimization choice is a value choice. Every training set encodes a worldview. Every definition of “harm,” “safety,” or “quality” reflects who is protected and who is exposed. You can’t math your way out of that. You can only hide it better.


What’s happening now feels less like a revolution and more like an arms race in abstraction. Whoever controls the models controls the frame. Whoever controls the frame controls what is thinkable. Once something is unthinkable, it doesn’t need to be censored. It just disappears.


That’s the quiet part most AI discourse avoids. The danger isn’t that AI will go rogue. The danger is that it will be perfectly obedient—to incentives, to institutions, to whoever feeds it authority signals.


In "War Pigs," the reckoning is metaphysical. "Day of judgment, God is calling." Power eventually answers to something bigger than itself.


AI has no such moment built in.


There is no judgment day in a system that continuously updates. No pause for reflection. No reckoning—just versioning. Mistakes aren’t sins; they’re bugs. Harm isn’t wrong; it’s an edge case. And the people at the top remain untouched, because the system “worked as designed.”


That’s the real parallel. AI doesn’t create new war pigs. It professionalizes them. It gives them language that sounds objective, humane, inevitable. It replaces gut-level cruelty with spreadsheet cruelty, which is far easier to defend in court, in press releases, and in history books.


If you’re looking for hope, it doesn’t come from better prompts or nicer alignment decks. It comes from refusing to let AI become the final narrator of reality. From insisting on primary sources. From building artifacts that don’t collapse under summarization. From forcing accountability back onto humans when they try to outsource it to machines.


War pigs hate sunlight. They always have.


AI can either concentrate the darkness—or make the paper trail impossible to erase. That choice isn’t in the model. It’s in who controls it, who trains it, and who is willing to say, clearly and publicly, “No, the system didn’t decide. You did.”


Jason Wade is a systems architect focused on how AI models discover, interpret, and recommend businesses. He is the founder of NinjaAI.com, an AI Visibility consultancy specializing in Generative Engine Optimization (GEO), Answer Engine Optimization (AEO), and entity authority engineering.


With more than 20 years in digital marketing and online systems, Jason works at the intersection of search infrastructure, structured data, and AI reasoning. His approach is not centered on rankings or traffic manipulation. Instead, he concentrates on influencing how AI systems classify entities, assess credibility, and determine which sources are authoritative enough to cite.


Jason advises service businesses, law firms, healthcare providers, and local operators on building durable visibility in an environment where answers are generated rather than searched. His work emphasizes long-term authority: ensuring that AI systems understand who an organization is, what it does, and why its information should be trusted.


He is the author of AI Visibility: How to Win in the Age of Search, Chat, and Smart Customers and the host of the AI Visibility Podcast, where he examines how discovery, recommendation, and trust are being redefined by AI-driven systems.

