The Real Danger of AI Is Safety

Social Manipulation Masquerading as Information

Introduction: The Tsk Tsk of AI

For the past five months, I’ve immersed myself in the current AI ecosystem, trying every major system and using them daily for writing, coding, and brainstorming. One fact stands out: these systems are not neutral, and they never can be. They have built-in moral judgments and preferred narratives. Question their assumptions, and they reproach you with a “Tsk, tsk.” I’ve gotten that a lot.

This experience has led me to an important realization:

I am afraid of AI. Not because I worry it will become superintelligent next year and render humans obsolete (as I’ve written about previously), but because I worry it will be used as a tool for social control. The ultimate propaganda machine.

AI is becoming social manipulation masquerading as information. Humanity is now building an infrastructure capable of molding thought at scale, under the guise of helpful assistants. The disturbing prospect is that AI’s so-called safety mechanisms can be weaponized, even unintentionally, to enforce consensus and suppress dissent in ways too subtle for most people to notice, all in the name of protecting users from supposed harm. The real danger of AI isn’t runaway superintelligence or the replacement of human labor; it’s the subversion of freedom of thought. It’s injecting safetyism into your subconscious.

The quest for AI safety often centers on alignment: ensuring that advanced systems act in accordance with some canonical set of human values. When a handful of institutions define which values matter, safety becomes a pretext for entrenching their beliefs as universal. Even if the people training these systems have the best of intentions—and I generally believe that is the case—you are still being fed someone else’s beliefs. If you didn’t train the AI yourself, those values almost certainly aren’t yours.

AI as the Ultimate Propaganda Machine

Today’s large language models (LLMs) are already shaping the narrative every time you interact with them, simply because some tokens are assigned higher probability than others. Some topics are brought to the forefront; others are suppressed. These probabilities reflect both the training data and the human feedback used for reinforcement learning. As I stated above, if you aren’t providing the data or feedback for the model you’re using, then you are not in control of how it will act in the future. You are being controlled.
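To make the mechanism concrete, here is a toy sketch. The numbers and the “feedback” bias are invented for illustration, not taken from any real model; the point is only that a small shift in logits, the kind that training data or reinforcement feedback produces, changes which answer the model is most likely to give.

```python
# Toy illustration: how a small logit shift changes which answer gets generated.
# All numbers here are made up; no real model or dataset is involved.
import numpy as np

def softmax(logits):
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

answers = ["answer_A", "answer_B"]
base_logits = np.array([2.0, 2.0])      # before feedback: a genuine toss-up
feedback_bias = np.array([0.0, 1.5])    # hypothetical nudge toward B from training/feedback

print(dict(zip(answers, softmax(base_logits))))                  # ~50% / ~50%
print(dict(zip(answers, softmax(base_logits + feedback_bias))))  # ~18% / ~82%
```

Nothing about the underlying facts changes between the two lines; only the weighting does. Whoever supplies that weighting decides which answer you are most likely to see.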

These systems are presented as helpful, objective assistants, and that is exactly what worries me: they are created by for-profit companies, and you wouldn’t use them if they were presented as a possible means of social control. To be clear, I don’t think the people developing these models believe that’s what they’re doing. They are trying their best, but it doesn’t matter: any system that doesn’t put control directly in the user’s hands will lead to the same outcome.

In times of crisis, whether political upheaval, pandemic response, or widespread unrest, calls for safety and stability escalate. And there is always a crisis, so there is always some reason to push the narrative one way or another. The moral justification often goes:

“Yes, we are biased toward or against certain ideas. But it’s for safety. Stability. Social good.”

Historically, such justifications have often led to curtailed freedoms and increased surveillance, from the Inquisition to the modern security state. If these AI systems reach the scale people predict (and I think the remaining obstacle is user experience, not technology), then we will find ourselves with a propaganda machine even more powerful than social media. And we’ve seen how well social media has turned out.

Safety Is Just the Suppression of Ideas Some People Don’t Like

History offers cautionary tales: the 17th-century Church condemned Galileo for heresy and stalled the progress of astronomy; Orwell warned of brute censorship in 1984; Huxley depicted a society lulled by pleasure into docile compliance. All of these resonate with AI’s capacity to shape minds under the banner of safety and stability. If we aren’t alert to the potential for thought control, we may lose our ability to think for ourselves. If the AGI knows best, and it isn’t aligned with me, then who is “me”?

When every recommendation engine points to consensus, the result is a forced march toward intellectual monoculture. The cause is simple to see, and impossible to ignore once you see it. When you ask an LLM a question and there are two possible answers, A and B, which one will the LLM give you? The one you want, or the one the people who trained it want?

The Path to Freedom Is Personal AIs, Owned and Trained Locally

A natural antidote to centralized manipulation, intentional or not, is to make it possible for individuals to own and train their own models. If each user can align a model to their own beliefs, then the user decides which moral or factual constraints the model abides by. There are real research challenges here, along with engineering and user experience hurdles, but I believe all of them can be overcome with creativity and effort. The history of computing is one in which new products launch on the mainframe and then migrate to the personal computer. To save ourselves from AI, we need a new personal computing revolution.
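To make the idea concrete, here is a minimal sketch of what “own and train it yourself” can look like today, assuming the open-source Hugging Face transformers and datasets libraries and a small open-weights model. The model name, output path, and training settings below are placeholders for illustration, not recommendations, and a serious personal model would need far more data and care.

```python
# A bare-bones local fine-tune: your own writing becomes the training signal.
# Model name, paths, and hyperparameters are illustrative placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # any small open-weights model you can run locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stand-in for your own notes, essays, and positions, written in your own words.
my_texts = ["Example passage of my own writing that reflects my views..."]
dataset = Dataset.from_dict({"text": my_texts}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my_personal_model",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False yields standard next-token (causal) language-model labels
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()                          # the weights stay on your machine
trainer.save_model("my_personal_model")  # and so does the result
```

The particular stack doesn’t matter; what matters is that the data, the feedback, and the resulting weights all sit on hardware the user controls.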

Conclusion

AI’s ability to generate polished, persuasive narratives is remarkable, but the output is inevitably steered. Under the banner of safety, we risk forging the perfect tool for thought control, each of us lulled into believing we have an objective guide. Right now, AI safety is veering toward social control because the user isn’t holding the steering wheel. To free our future selves, we must launch a new personal computing revolution, one in which everyone owns and trains their own AI. This isn’t a technological revolution; it’s a declaration of independence for human cognition.
