
Gws Modelarmor Sanitize Prompt

Verified

Google Model Armor: Sanitize a user prompt through a Model Armor template.

57 downloads

About This Skill

# modelarmor +sanitize-prompt

> PREREQUISITE: Read `../gws-shared/SKILL.md` for auth, global flags, and security rules. If missing, run `gws generate-skills` to create it.

Sanitize a user prompt through a Model Armor template

Usage

```bash
gws modelarmor +sanitize-prompt --template <NAME>
```

Flags

| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| `--template` | ✓ | — | Full template resource name (`projects/PROJECT/locations/LOCATION/templates/TEMPLATE`) |
| `--text` | — | — | Text content to sanitize |
| `--json` | — | — | Full JSON request body (overrides `--text`) |
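Since `--template` requires the full resource name rather than a bare template ID, it can help to compose it from its parts. A minimal sketch (the project, location, and template values here are hypothetical placeholders):

```shell
# Compose the full Model Armor template resource name from its components
PROJECT=my-project
LOCATION=us-central1
TEMPLATE=block-pii
TMPL="projects/${PROJECT}/locations/${LOCATION}/templates/${TEMPLATE}"
echo "$TMPL"
```

The resulting `$TMPL` value is what you would pass to `--template`.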

Examples

```bash
gws modelarmor +sanitize-prompt --template projects/P/locations/L/templates/T --text 'user input'
echo 'prompt' | gws modelarmor +sanitize-prompt --template ...
```

Tips

  • If neither --text nor --json is given, reads from stdin.
  • For outbound safety, use +sanitize-response instead.
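When the prompt contains quotes or other characters that are awkward to hand-escape, building the `--json` body programmatically is safer than string interpolation. A sketch, assuming the request body wraps the text in a `userPromptData` field (verify the exact field name against the Model Armor API version you target):

```shell
# Build a JSON request body without hand-escaping the user text.
# python3 is used here only for reliable JSON quoting.
USER_TEXT='He said "hello" & left'
BODY=$(python3 -c 'import json,sys; print(json.dumps({"userPromptData": {"text": sys.argv[1]}}))' "$USER_TEXT")
echo "$BODY"
```

You would then pass the result as `--json "$BODY"`.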

See Also

  • gws-shared — Global flags and auth
  • gws-modelarmor — All commands for filtering user-generated content for safety

Use Cases

  • Sanitize user prompts through Google Model Armor templates before sending to LLMs
  • Filter harmful or adversarial content from AI agent inputs
  • Apply enterprise-grade prompt safety policies to AI applications
  • Protect LLM-powered services from prompt injection attacks
  • Enforce content safety standards across AI agent interactions

Pros & Cons

Pros

  • Compatible with multiple platforms, including Claude Code and OpenClaw
  • Well-documented, with detailed usage instructions and examples
  • Purpose-built for AI & machine learning tasks, with focused functionality

Cons

  • No built-in analytics or usage-metrics dashboard
  • Configuration may require familiarity with AI & machine learning concepts

FAQ

What does Gws Modelarmor Sanitize Prompt do?
Google Model Armor: Sanitize a user prompt through a Model Armor template.
What platforms support Gws Modelarmor Sanitize Prompt?
Gws Modelarmor Sanitize Prompt is available on Claude Code, OpenClaw.
What are the use cases for Gws Modelarmor Sanitize Prompt?
Sanitize user prompts through Google Model Armor templates before sending to LLMs. Filter harmful or adversarial content from AI agent inputs. Apply enterprise-grade prompt safety policies to AI applications.

