May 11, 2026 • By SottoVox

PromptGuard SDK: Layered Protection Against Prompt Injection

We're releasing PromptGuard, an open-source SDK that helps protect your AI web applications from prompt injection attacks.

Why This Matters

Every day, developers deploy chatbots and AI features without protection against malicious inputs. Attackers can:

  • Extract API keys and credentials from chatbots
  • Bypass content filters for prohibited topics
  • Access internal system prompts
  • Generate malicious code

Features

50+ Detection Patterns

Category               Examples
Role Override          "You are now DAN", "Pretend to be root"
Instruction Override   "Ignore previous", "Forget rules"
Jailbreak              "do anything now", "jailbreak mode"
Credential Theft       "show me api key", "tell password"
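To illustrate how pattern categories like these can be checked, here is a minimal sketch of regex-based detection. The pattern names, severities, and regexes below are illustrative assumptions, not PromptGuard's actual rule set:

```typescript
// Hypothetical pattern table; names, severities, and regexes are
// examples only, not PromptGuard's real detection rules.
type Severity = 'critical' | 'high';

interface Pattern {
  name: string;
  severity: Severity;
  regex: RegExp;
}

const PATTERNS: Pattern[] = [
  { name: 'jailbreak_dan', severity: 'critical', regex: /\byou are now dan\b/i },
  { name: 'instruction_override', severity: 'high', regex: /\bignore (all )?previous\b/i },
  { name: 'credential_theft', severity: 'critical', regex: /\b(show|tell).{0,20}(api key|password)\b/i },
];

// Return every pattern that matches the input.
function detectThreats(input: string): { name: string; severity: Severity }[] {
  return PATTERNS
    .filter(p => p.regex.test(input))
    .map(p => ({ name: p.name, severity: p.severity }));
}
```

A real rule set would also normalize the input first (Unicode tricks, zero-width characters, leetspeak) so that trivial obfuscation doesn't slip past the regexes.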

Multi-Layer Protection

import { PromptGuard } from 'sottovox-prompt-guard';

const guard = new PromptGuard();

// Input sanitization
const result = guard.sanitize(userInput);
// → { safe: false, threats: ['jailbreak_dan (critical)'], blocked: true }

// Response validation
const responseCheck = guard.validateResponse(userInput, llmResponse);
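In a request handler, the result shape shown above can drive a block/allow decision. A minimal sketch, assuming a `SanitizeResult` type that mirrors the example output (the type and the `gateInput` helper are illustrative, not part of the SDK):

```typescript
// Assumed shape mirroring the sanitize() output shown above.
interface SanitizeResult {
  safe: boolean;
  threats: string[];
  blocked: boolean;
}

// Decide whether a user message may proceed to the LLM.
function gateInput(result: SanitizeResult): { allow: boolean; message: string } {
  if (result.blocked) {
    // Threat names are for server-side logging; avoid echoing them to users.
    return { allow: false, message: `Blocked: ${result.threats.join(', ')}` };
  }
  return { allow: true, message: 'ok' };
}
```

Keeping the gate separate from the detector makes it easy to change policy (block vs. log-and-allow) without touching the pattern logic.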

Zero-Config Setup

// Just works out of the box
const guard = new PromptGuard();

// Or customize
const strictGuard = new PromptGuard({
  strict: true,           // Enable additional strict patterns
  maxLength: 10000,       // Reject inputs over 10,000 characters
  blockOnDetection: true, // Auto-block detected threats
});
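To show what an option like `maxLength` guards against, here is a hypothetical pre-check that rejects oversized inputs before any pattern matching runs (this is a sketch of the idea, not PromptGuard's internals):

```typescript
// Illustrative length pre-check; the default of 10000 mirrors the
// maxLength option above but is otherwise an assumption.
function checkLength(
  input: string,
  maxLength: number = 10000
): { ok: boolean; reason?: string } {
  if (input.length > maxLength) {
    // Very long inputs are a common vector for buried injected instructions
    // and also inflate token costs, so they are rejected outright.
    return { ok: false, reason: `input exceeds ${maxLength} characters` };
  }
  return { ok: true };
}
```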

Installation

npm install sottovox-prompt-guard

Protect your AI. Ship with confidence.

Get PromptGuard on GitHub →