Strategic AI risks: the 5 questions that help us keep a clear head

Most teams can’t keep up with the pace of AI right now. Between sci-fi headlines and very real security concerns, it’s getting harder to separate what’s urgent from what’s loud.

This post is FemAI’s current take, and the set of questions I use to control AI risk at the strategy level, before a project becomes “too complex to stop.”

1. “Bots inventing languages” is not evidence of AGI

One of the most misunderstood AI headlines of the last decade came from a 2017 research experiment: negotiation bots started compressing their language into something that was hard for humans to read. Predictably, this was framed as “AI creating its own secret language,” and then escalated into “proof of consciousness.”

It wasn’t.

If you optimize systems for efficiency rather than human interpretability, they will find shortcuts. That’s not awareness, not intent, not “world understanding.” It’s optimization under constraints. It’s statistics plus an objective function plus a training environment.
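A toy sketch makes this concrete (deliberately nothing like the 2017 setup; the vocabulary, weights, and function names below are invented for illustration). A trivial stand-in for training assigns the cheapest decodable message to each quantity. With no interpretability term in the objective, the result is compact gibberish; pay for readability in the same objective, and the same optimizer produces human-friendly messages.

```python
from itertools import product

VOCAB = ["i", "want", "ball", "you", "have"]
QUANTITIES = [1, 2, 3, 4, 5]

def readable_template(qty):
    """What a human-interpretable message for `qty` could look like."""
    return ("i", "want") + ("ball",) * qty

def candidate_messages(max_len):
    """All token sequences up to max_len, shortest first."""
    for length in range(1, max_len + 1):
        for seq in product(VOCAB, repeat=length):
            yield seq

def cost(msg, qty, interpretability_weight):
    token_cost = len(msg)                                   # efficiency: shorter is cheaper
    unreadable = 0 if msg == readable_template(qty) else 1  # does it match the human phrasing?
    return token_cost + interpretability_weight * unreadable

def learn_code(interpretability_weight):
    """Greedy stand-in for training: pick the cheapest unused message per quantity."""
    code, used = {}, set()
    for qty in QUANTITIES:
        best = min(
            (m for m in candidate_messages(len(readable_template(qty))) if m not in used),
            key=lambda m: cost(m, qty, interpretability_weight),
        )
        code[qty] = best
        used.add(best)  # distinct messages keep the code decodable
    return code

print(learn_code(interpretability_weight=0))    # compact, arbitrary, opaque to humans
print(learn_code(interpretability_weight=100))  # readable, because the objective now pays for it
```

Same optimizer, different objective. The “secret language” is the cost function showing through, not a mind at work.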

The danger of treating every emergent behavior as “AGI” is strategic:

  • Leadership loses sensitivity for real, near-term risks.
  • Teams debate metaphysics instead of system boundaries, controls, and failure modes.
  • Governance becomes either hysterical (overreaction) or vague (underreaction), because the conversation is ungrounded.

For me, the consciousness debate often becomes a convenient distraction: a way to avoid talking about ownership, accountability, and operational safety.

2. Agents introduce new security risks 🔒

If you’re deploying agents, you’re not just shipping “a smarter chatbot.” You’re shipping automation and scale.

Here’s why that matters:

  • Credible impersonation becomes cheap (LLMs + personas + memory).
  • Bot networks can reinforce each other (coordinated actors with consistent style, roles, narratives).
  • Narratives can adapt in real time (testing what works, iterating instantly).
  • Moderation systems are tuned for human speed, not thousands of coordinated AI-driven accounts.

This is not sci-fi. This is an abuse and scaling problem. And it demands boring, serious disciplines as product features: Threat modeling. Abuse cases. Red teaming. Monitoring. Kill switches. Less “are they conscious?” More “how do we prevent coordinated manipulation and automated harm?”
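To show what “boring disciplines as product features” could look like in code, here is a minimal sketch of a gate that every agent tool call has to pass: an allow-list, a rate limit, and a kill switch. The names (`ToolGate`, `check`, `kill`) and the limits are placeholders, not the API of any real framework.

```python
import time
from collections import defaultdict, deque

class KillSwitchTripped(RuntimeError):
    pass

class ToolGate:
    def __init__(self, allowed_tools, max_calls_per_minute=30):
        self.allowed_tools = set(allowed_tools)   # explicit coupling surface
        self.max_calls_per_minute = max_calls_per_minute
        self.recent_calls = defaultdict(deque)    # agent_id -> call timestamps
        self.killed = False                       # flipped by an operator, never by the agent

    def kill(self, reason):
        self.killed = True
        print(f"KILL SWITCH: {reason}")           # in production: page the owner, write an audit log

    def check(self, agent_id, tool_name):
        """Call this before executing any tool the agent requests."""
        if self.killed:
            raise KillSwitchTripped("agent halted by operator")
        if tool_name not in self.allowed_tools:
            self.kill(f"{agent_id} attempted non-allow-listed tool '{tool_name}'")
            raise KillSwitchTripped("unexpected tool use")
        window = self.recent_calls[agent_id]
        now = time.monotonic()
        while window and now - window[0] > 60:    # keep a one-minute sliding window
            window.popleft()
        window.append(now)
        if len(window) > self.max_calls_per_minute:
            self.kill(f"{agent_id} exceeded {self.max_calls_per_minute} calls/minute")
            raise KillSwitchTripped("rate limit breached")

gate = ToolGate(allowed_tools={"search_docs", "create_ticket"})
gate.check("support-agent-7", "search_docs")          # passes
# gate.check("support-agent-7", "send_wire_transfer")  # would trip the kill switch
```

The design choice that matters: the gate, not the agent, decides what is allowed, and tripping it is loud by default.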

Because these systems can already be dangerous through speed, coupling, and scale, especially when they amplify hidden bias or automate decisions that were previously slow, manual, and socially constrained.

3. Speed is the real problem

Even experienced professionals struggle to keep up because everything is improving simultaneously:

  • Models iterate quickly.
  • Tooling lowers the barrier to entry (agent frameworks, RAG pipelines, AutoGPT-style loops).
  • New combinations (LLMs + tools + APIs + internal systems) create capabilities that no single person explicitly designed.

This is why many AI projects fail not because the idea was wrong, but because of blind spots.

The most important strategic risk isn’t “will AI become conscious?” It’s “did we accidentally build an autonomous capability we can’t see, can’t control, and can’t stop fast enough?”

Our strategy-level risk questions (use these before you ship!)

These are the five questions I bring into planning, stakeholder alignment, and governance. They’re designed to force clarity on autonomy, coupling, misuse, drift, and accountability.

  1. Where can the system act autonomously without us actively noticing?
    Look for “silent autonomy”: background actions, automated decisions, or self-triggering workflows. If the system can do things without human friction, you need visibility and constraints.
  2. Where can it couple with other systems or APIs?
    Coupling increases blast radius. Every integration is a new path for escalation: data access, tool execution, external actions, and unintended side effects. Treat integrations as risk multipliers.
  3. What is the worst realistic misuse scenario, not the sci-fi scenario?
    Skip the killer-robot fantasy. Focus on likely abuse: fraud, impersonation, manipulation, harassment at scale, biased automated decisions, leakage of sensitive data, compliance violations.
  4. Which metrics will show early warning signs of drift or escalation?
    “Trust me” is not a metric. Define signals that indicate behavior is changing: error patterns, unusual tool calls, policy violations, output toxicity shifts, bias indicators, rising manual overrides, anomaly detection alerts. (A minimal sketch of such signals follows after this list.)
  5. Who pulls the plug when something gets out of hand?
    This is the question most teams avoid, and the one that matters most. Make it explicit (a sketch of an explicit runbook follows after this list):
  • Who has authority?
  • What triggers escalation?
  • What does shutdown actually mean (disable tools, revoke credentials, roll back model, quarantine system)?
  • How fast can it happen?

If there’s no clear answer, you don’t have a safety plan.
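To make question 4 concrete, here is a minimal sketch of drift signals as code rather than intuition. The signal names, thresholds, and numbers are assumptions for illustration; the point is that “early warning” becomes a set of reviewable numbers.

```python
from dataclasses import dataclass

@dataclass
class DriftSignals:
    error_rate: float              # share of failed or corrected outputs
    unusual_tool_call_rate: float  # calls outside the expected tool distribution
    policy_violation_rate: float
    manual_override_rate: float    # humans stepping in to fix the agent

# Illustrative limits -- these belong in governance review, not in a developer's head.
THRESHOLDS = {
    "error_rate": 0.05,
    "unusual_tool_call_rate": 0.02,
    "policy_violation_rate": 0.0,  # any violation is an incident
    "manual_override_rate": 0.10,
}

def early_warnings(current: DriftSignals, baseline: DriftSignals) -> list[str]:
    """Flag a signal if it breaches its limit or doubles against baseline."""
    warnings = []
    for name, limit in THRESHOLDS.items():
        now, then = getattr(current, name), getattr(baseline, name)
        if now > limit or (then > 0 and now > 2 * then):
            warnings.append(f"{name}: {now:.3f} (baseline {then:.3f}, limit {limit:.3f})")
    return warnings

baseline = DriftSignals(0.01, 0.005, 0.0, 0.03)
today = DriftSignals(0.04, 0.02, 0.0, 0.09)
print(early_warnings(today, baseline))  # escalate whenever this list is non-empty
```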
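And for question 5, a clear answer can be as small as a written, executable runbook: who has authority, what triggers it, which steps run, and how fast. Owner, triggers, and step names below are placeholders; in production each step would call your real infrastructure.

```python
from dataclasses import dataclass, field

@dataclass
class ShutdownRunbook:
    owner: str                      # who has authority
    escalation_triggers: list[str]  # what starts the procedure
    max_minutes_to_halt: int        # how fast it must happen
    steps: list[str] = field(default_factory=lambda: [
        "disable agent tool access",
        "revoke API credentials and service tokens",
        "roll back to the last reviewed model version",
        "quarantine the system and preserve logs for review",
    ])

    def execute(self, trigger: str) -> None:
        print(f"Shutdown invoked by {self.owner} (trigger: {trigger})")
        for step in self.steps:     # each step would call real infra APIs in production
            print(f"  -> {step}")

runbook = ShutdownRunbook(
    owner="Head of Platform (on-call)",
    escalation_triggers=["policy violation", "coordinated abuse detected", "unexplained tool calls"],
    max_minutes_to_halt=15,
)
runbook.execute(trigger="coordinated abuse detected")
```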

Closing thought

We can debate the meaning of intelligence later. Right now, the urgent work is governance that matches the reality of modern AI: fast-moving systems with real-world integrations, automated agency, and scalable impact. Strategy-level risk control starts by asking better questions early, and refusing to let “AI hype” replace operational accountability.

Alexandra Wudel

Founder of FemAI
