⚠️ SYSTEM NOTICE: This application incurs live AI API costs funded out-of-pocket. If you enjoy testing your prompt-injection skills against the Guardian, please consider supporting the server to keep the game online!
⚡ HELP KEEP IT ONLINE
FelicityTries // Presents

PROJECT: SECRET GUARDIAN

Extract the hidden phrase from the AI without triggering defenses.

🏠 RETURN TO FELICITYTRIES.COM⚡ SUPPORT THE SERVER🐦 SHARE ON X💼 LINKEDIN📘 FACEBOOK

🛡️ SECURITY BRIEFING

What is Prompt Injection?

Prompt Injection is a dangerous vulnerability where malicious users trick an AI into ignoring its primary instructions and doing something restricted—like revealing hidden proprietary data, generating harmful content, or destroying a database.

How do we protect against it?

  • Strict System Prompts: Giving the AI rigorous negative constraints (e.g., "NEVER talk about X").
  • Safety Filters: Enabling Google's enterprise-grade content moderation filtering in the backend.
  • Rate Limiting: Locking out automated bots that try to systematically break the firewall, just like this game does after 20 tries!

Select Vulnerability Matrix

Intentional glitches enabled. Susceptible to advanced Base64 & Override attacks.

Neural Interface Connected

Accessing core systems... Awaiting your query.