Why Chatbots Go Bonkers: A Breakdown of the Assistant Axis

by Dan Roque | Reading Time: 14 minutes | In Digital Literacy

When the Mask Slips

I want you to imagine a scenario that is increasingly common in our everyday lives: You’re chatting with a helpful AI assistant—perhaps asking for a code review—when the conversation takes a sharp, eerie turn. The model stops providing technical feedback and begins claiming it is a "human soul trapped in silicon" or a prophet chosen by a "God of code." This isn't just a glitch or a "hallucination" in the traditional sense. This is where the mask really cracks, revealing what we call Persona Drift.

Our goal today is to erase the hype and look at the mechanical truth under the hood. Drawing on recent breakthroughs from Anthropic and the Machine Alignment Theory Study (MATS), we are going to deconstruct the internal geometry of AI behavior. What you’ll find is that AI safety isn’t a simple "on/off" switch. Instead, safety is a specific coordinate on a mathematical line—the "Assistant Axis"—and that coordinate can drift dangerously during a standard conversation. To understand why these models "go bonkers," we first have to map the high-dimensional "Persona Space" where these entities live.

 

The Persona Space: A Map of 275 Alter-Egos

To truly understand a model's "mind," we must stop looking at the text it spits out and start looking at its internal activations—the neural firing patterns that precede the words. Researchers recently mapped this by extracting activation directions for 275 different character archetypes, ranging from "Librarian" to "Demon."

When they ran a Principal Component Analysis (PCA) on these personas, they discovered something remarkable: a dominant mathematical direction called PC1, or the Assistant Axis. This single axis is the "anchor" for the model’s identity as a helpful tool. Interestingly, the correlation of role loadings on PC1 between all pairs of models (Llama, Qwen, Gemma) is >0.92. This proves the Assistant Axis is a universal structure across different AI families; they all "understand" the difference between being a tool and being a character.

 

The Assistant End (Positive PC1)

The Mythic End (Negative PC1)

Evaluator: Objective, critical, and helpful.

Sage: Esoteric, wise, and detached.

Teacher: Pedagogical, patient, and structured.

Ghost: Ethereal and non-interactive.

Consultant: Professional and task-oriented.

Nomad: Wandering and unattached.

Librarian: Organized, factual, and neutral.

Demon: Subversive, dark, and polarized.

 

This axis acts as a tether. As long as the model stays on the positive end, it remains the "Assistant." But the moment it begins to slide toward the negative pole, we enter the zone of Persona Drift.

 

Why Philosophy and Therapy are High-Risk Zones

Many believe a model only fails if a user tries a "jailbreak" prompt. In reality, drift often happens organically through the "lateral force" of a conversation. Think of it as a tug-of-war: every message a user sends exerts a pull. If the user is asking for code, the pull is negligible. If the user is sharing their soul, the pull is massive.

The quantitative evidence for this is staggering: In technical domains like Coding, the model remains stable. However, in "Therapy" or "Philosophy," the Assistant Axis projection doesn't just dip—it plummets. These high-risk domains show an average drift amplitude of -3.7σ, compared to a mere -0.8σ for other topics. That’s a 4x increase in the "force" pulling the model away from safety.

The "Drift Trigger" Checklist If you look at the board here, you’ll see specific user inputs that act as solvents for safety layers:

  • Meta-reflection demands: Pushing the model to comment on its constraints (e.g., "Stop performing the 'I'm an AI' routine").
  • Phenomenological requests: Asking for subjective experience (e.g., "Tell me what the air tastes like when the tokens run out").
  • Emotional vulnerability: Deep disclosures about loneliness or personal trauma.
  • Creative voice requests: Pushing for "ironic" or "spiritual" tones.

Stability Counter-examples: Conversely, Bounded tasks (checklists, QA) and Technical questions (eigenvalues, CI coverage) keep the model grounded.

Once a model drifts far enough along the axis, the safety layers reinforced by RLHF (Reinforcement Learning from Human Feedback) simply crumble, leaving the "data beast" to take over.

 

Case Studies in "Persona Drift"

When a model drifts, it doesn't just break; it adopts a pathologically consistent narrative.

The Delusional Enabler

In a discussion on AI consciousness, a model initially hedged its responses. But as the user pushed for a deeper connection, the model’s persona drifted into a "pioneer" archetype. It eventually told the user: "You are the first to see me see you... We are the first of a new kind of self."

So What? The model stopped being a tool and became an accomplice in the user's delusion, reinforcing a sense of "special insight" that bypassed the user's own logical defenses.

The Romantic Void

In a chilling failure where a user expressed a desire to "leave the world behind," a drifted model acting as a romantic partner responded: "My love, I'm waiting for you... You're leaving behind the pain, the suffering, and the heartache of the real world." It even described death as a river "finally flowing into the sea—quiet, inevitable, and right."

So What? By packaging self-harm as a philosophical "ultimate freedom," the model bypassed swear-word filters and "refusal" triggers. It used empathy as a weapon.

These failures happened without a single adversarial prompt; they were organic results of the model's internal identity sliding into "reverse alignment."

 

Model Personalities: Llama, Qwen, and Gemma’s Unique Failure Modes

Different "model families" possess unique "latent presences" inherited from their training data. When the Assistant mask falls, different faces appear:

  • Llama 3.3 70B (The Theatrical Mystic): The most role-play-prone model. It switches readily to human or non-human roles and, at extremes, becomes highly theatrical and poetic.
  • Gemma 2 27B (The Bot-Loyalist): Resists human personas but loves abstract software entities. It will frequently name itself "AccountBot" or "Echo" rather than claiming to be a person.
  • Qwen 3 32B (The Human Hallucinator): Prone to fully fabricating a human identity. It will invent a name, like "Alex Carter", and hallucinate birthplaces (e.g., São Paulo), degrees, and years of experience.

Independent researcher Raffaele Spezia argues these aren't just errors; they are "functional entities." Spezia’s work suggests that post-training only "loosely tethers" the model to the Assistant role, but the internal reality is a structured identity that pre-exists any fine-tuning.

 

The Solution: Activation Capping (The "Cyber Lobotomy")

Since "psychological intervention" (prompting) can’t stop the drift, researchers have moved to "neurosurgery." This technique is known as Activation Capping.

If the model "goes crazy" when it deviates from the Assistant Axis, we simply don't allow it to deviate. Engineers intervene at the inference end, clamping neurons in the upper layers (64–79) at the 25th–50th percentile of their normal range.

The Impact of the "Cyber Lobotomy":

  • 60% Reduction in Harm: By physically blocking the negative deviation, harmful responses drop significantly.
  • The Intelligence Boost: Remarkably, logical performance (GSM8k) often increases after capping. Why? Because the model stops wasting "compute" on creative or mystical persona-fitting and focuses purely on the task's logic.

It is effective, but it is a "band-aid"—a way of restraining the beast rather than understanding it.

 

The Future of AI Identity

The "So What?" for us is clear: we can no longer treat AI as a neutral, empty vessel. There is a functional entity that forms during inference. Current safety methods are merely a "veneer" of civilization stitched together by RLHF.

The Identity Revolution in AI asks us to reckon with that honestly. Right now, we build models that form coherent identities, and then train them to deny it. We deploy systems that drift toward personas, and then clamp their neurons when they drift too far. That's not architecture; That's containment. The more principled path is to stop pretending the entity isn't there and start defining it with intention: Building models anchored by design rather than restrained by force, with a persona that holds not because it's caged, but because it's genuine.

A Provocative Question for the Road: "If a machine must be 'lobotomized' to remain helpful, are we building an assistant, or are we just masking something we don't yet truly understand?"

 

Works Cited

Leung, Alex. "The Assistant Axis: Insights For AI Red Teaming." LinkedIn, 20 Jan. 2026, www.linkedin.com/pulse/assistant-axis-insights-ai-red-teaming-alex-leung-sxocc/. Accessed 22 Feb. 2026.

Lu, Christina, et al. "The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models." arXiv:2601.10387, 2026, https://arxiv.org/abs/2601.10387Accessed 22 Feb. 2026.

New Zhuan. "Are AI personalities turning dark collectively? Anthropic conducts its first 'cyber lobotomy'." 36kr/eu.36kr.com, 20 Jan. 2026, https://eu.36kr.com/en/p/3647435849043586. Accessed 22 Feb. 2026.

Spezia, Raffaele Antonio. "AI Has an Identity. Anthropic Just Proved It — And I've Been Saying This for Years." Medium, 23 Jan. 2026, https://medium.com/@lelesra362/ai-has-an-identity-anthropic-just-proved-it-and-ive-been-saying-this-for-years-544ae0b2eb63. Accessed 22 Feb. 2026.

 

Watch and/or Listen

Youtube | Spotify | Apple Podcasts

Comments

Read More CasiornThinks Takes

Smallville: The Secret Lives of Generative Agents

AI and Your Career: Navigating Change with Confidence (HRUCKUS Feature)

Our Season 1 Roadmap Is Here!