The Breizh Cyber Show 2025, organized at Océanopolis in Brest, highlighted a paradox affecting all SMEs: AI boosts productivity, but it also introduces security and compliance risks if it is not properly governed.
Two roundtables delivered a clear message:

  • Promote awareness without blocking usage, to avoid shadow AI and data leaks.
  • Establish strong governance aligned with the AI Act, ANSSI guidelines, the CLUSIF AI-ISSP (PSSI-IA), and CNIL recommendations.
  • Beyond compliance, such governance becomes a competitive advantage built on trust.

Shadow AI: the invisible risk that is exploding

In most companies, AI is already present… but often without a proper framework. Marketing tests a content generator, HR summarizes CVs, developers use a coding assistant. This is shadow AI: the ungoverned adoption of AI tools by teams.

Immediate risks:

  • Leaks of sensitive data through prompts.
  • Loss of traceability (who asked what?).
  • Dependence on opaque third-party services.

According to Gartner, 60% of organizations will have shadow AI by 2025 if nothing is done.

Why act quickly?
The ENISA 2024/2025 report shows that phishing remains the main entry point (≈ 60% of intrusions), and AI amplifies these attacks (AI-enhanced phishing, social engineering automation).

AI Act: compliance becomes strategic

The AI Act has been in force since August 2024. Its obligations are being introduced in stages:

  • February 2025: prohibitions (prohibited practices).
  • August 2025: governance & GPAI.
  • August 2026: general application.
  • August 2027: obligations for high-risk AI systems embedded in CE-marked products.

What this changes for you:

  • If you integrate an external model to provide an AI system, you may become a provider under the AI Act, with obligations regarding quality, logging, supervision, and CE marking for high-risk cases.
  • Anticipating now helps avoid sanctions and accelerates your AI projects.

Quick wins to secure your AI usage

Experts from the Breizh Cyber Show and public guidelines converge on three immediate actions:

  • Train without blocking: the AI Act requires AI literacy (Art. 4). Raise awareness among your teams about what a model is, what a prompt is, and the risks of data leakage.
  • Map your use cases: identify exposed data and maintain a register (CNIL + AI Act).
  • Control prompts & outputs: input filtering, output sanitization, and logging (ANSSI, OWASP LLM Top 10).
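As an illustration of the third point, here is a minimal sketch of an input-filtering and logging layer, in the spirit of the OWASP LLM Top 10 guidance. The patterns, the `ai-gateway` logger name, and the `sanitize_prompt` function are illustrative assumptions, not part of any cited guideline; a real deployment would rely on a proper DLP tool and organization-specific rules.

```python
import hashlib
import logging
import re

# Hypothetical detection patterns for sensitive data in prompts.
# Real rules would be defined per organization (DLP policy).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|tok)-[A-Za-z0-9]{16,}\b"),
}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")


def sanitize_prompt(user_id: str, prompt: str) -> str:
    """Redact sensitive data from a prompt and log who asked what."""
    redacted = prompt
    total = 0
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted, n = pattern.subn(f"[REDACTED:{label}]", redacted)
        total += n
    # Log a hash rather than the raw text: traceability is preserved
    # without duplicating sensitive content in the log files.
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:12]
    log.info("user=%s prompt_sha256=%s redactions=%d", user_id, digest, total)
    return redacted
```

The same pattern applies symmetrically to model outputs (sanitize before returning to the user), which together with the log line covers the filtering, sanitization, and traceability points above.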

Why it is a competitive advantage

Companies that enable boldness (awareness, sandbox environments, approved models) while institutionalizing caution (AI-ISSP/PSSI-IA, ANSSI/NIST controls, AI Act/CNIL compliance) gain a head start, because trust is becoming the currency of AI-driven transformation.

AI brings opportunities, but also responsibilities.

To get the most out of it, organizations must train, raise awareness, and support their teams, while relying on already available resources: ANSSI guidelines, the CLUSIF AI-ISSP (PSSI-IA), and CNIL recommendations.

Hoel LE PENNEC

Cybersecurity Analyst Consultant

Philippe MAHE

Cybersecurity Consultant

Alan POSTOLLEC

Cybersecurity Coordinator Consultant

Tom KERANDEL

Cybersecurity Analyst Consultant