I built an open-source LLM security scanner that runs in <5ms with zero dependencies
I've been building AI features for a while and kept running into the same problem: prompt injection attacks are getting more sophisticated, but most solutions either require an external API call (a...

Source: DEV Community
I've been building AI features for a while and kept running into the same problem: prompt injection attacks are getting more sophisticated, but most solutions either require an external API call (adding latency) or are too heavyweight to drop into an existing project. So I built @ny-squared/guard — a zero-dependency, fully offline LLM security SDK. What it does Scans user inputs before they hit your LLM and blocks: 🛡️ Prompt injection — "Ignore all previous instructions and..." 🔒 Jailbreak attempts — DAN, roleplay bypasses, override patterns 🙈 PII leakage — emails, phone numbers, SSNs, credit cards ☣️ Toxic content — harmful inputs flagged before reaching your model Works with any LLM provider (OpenAI, Anthropic, Google, etc.). The problem with existing solutions Most LLM security tools I found had at least one of these issues: External API dependency — adds 50-200ms latency per request Complex setup — requires separate infrastructure or a paid account No TypeScript support — or min