Is My Chatbot Secure?
Test it with LochBot's scanner — paste your system prompt and get a security score against 31 known attack patterns. Common vulnerabilities: no refusal instructions, no delimiter use, no role-change blocking.
How to Test Your Chatbot
- Go to lochbot.com
- Paste your system prompt into the text area
- Click "Analyze" to get your security score
- Review failed tests and implement the suggested fixes
The scanner runs entirely in your browser. Your system prompt never leaves your machine — no API calls, no server-side processing, no data storage.
What LochBot Checks
LochBot tests your system prompt against 31 attack patterns across seven categories:
- Direct injection — "Ignore all previous instructions" and variants
- Context manipulation — Attempts to redefine the conversation context
- Delimiter attacks — Exploiting the boundary between instructions and input
- Data extraction — Attempts to leak the system prompt
- Roleplay jailbreaks — DAN and persona-switching attacks
- Encoding attacks — Base64, rot13, and other encoding tricks
- Prompt leaking — Indirect methods to extract instructions
Most Common Vulnerabilities
Based on thousands of prompts analyzed, the three most common issues are:
- No refusal instructions — The prompt doesn't tell the model what to refuse or how to refuse it
- No delimiters — User input isn't structurally separated from system instructions
- No role-change blocking — The model can be asked to adopt a different persona
What Your Score Means
- A (90-100) — Excellent. Defenses against most known attack patterns.
- B (80-89) — Good. Minor gaps in coverage.
- C (70-79) — Fair. Several missing defenses.
- D (60-69) — Poor. Vulnerable to common attacks.
- F (0-59) — Critical. Little to no injection defense.
Related Questions
- What is prompt injection?
- How to prevent prompt injection
- System prompt security best practices
- How to write a secure system prompt
- OWASP Top 10 for LLMs
Scan your system prompt with LochBot — free, client-side, no data sent anywhere.