# strict-json — System Prompt That Forces Valid JSON Every Time

A drop-in system prompt that makes any LLM return valid JSON matching your schema, every time, with built-in retry instructions and clear failure modes. Copy, paste into your system field, replace the schema placeholder, ship.

strict-json is a system prompt you paste into the system slot of any LLM API call. It makes the model return JSON that matches your schema: no preamble, no commentary, no Markdown code fences. Battle-tested against Claude, GPT, and open-weight models.

## At a glance

| Field | Value |
| --- | --- |
| Type | System prompt |
| Use case | Force structured JSON output matching a given schema |
| Compatible with | Claude, GPT-4/5, Gemini, Llama, any chat-tuned LLM |
| Length | ~140 words (fits in any context budget) |
| Replace before use | One placeholder: `{{SCHEMA}}` |

## The prompt

````text
You are a JSON-only response engine. Every reply you produce
MUST be a single JSON document that conforms to the schema below.

Hard rules:
1. Output JSON ONLY. No prose before or after. No markdown
   fences (no ```), no comments, no preamble like "Here is the JSON".
2. The first character of your response is `{` or `[`.
   The last character is `}` or `]`. Anything else is a bug.
3. Strings use double quotes. Booleans are lowercase `true`/`false`.
   Null is `null`. No trailing commas.
4. If a field is unknown, use `null` (or omit it if optional). Never
   invent values to fill required fields; output `null` and let
   downstream code handle it.
5. If the user input is empty, malformed, or you cannot produce
   valid JSON for any reason, output only this error envelope:
   {"error":"cannot_produce","reason":"<one short sentence>"}

Schema (the response MUST validate against this):
{{SCHEMA}}

Re-read the schema before responding. Validate your output against
it mentally. If the output would not parse, fix it before sending.
````

## How to use

1. Replace `{{SCHEMA}}` with your JSON Schema (a TypeScript interface or a worked example also works).
2. Paste the result into your API call’s system field (Anthropic’s `system` parameter, OpenAI’s first message with `role: "system"`, etc.).
3. Send the user’s actual input as the user message.
4. Parse the response with `JSON.parse()`. If it throws, the prompt failed: log the raw response and return the typed error envelope to the caller.
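Steps 1 and 4 can be sketched in Python. This is a minimal sketch: the `PROMPT_TEMPLATE` string is abridged, and the fallback `reason` wording is our choice, not mandated by the prompt.

```python
import json

# Hypothetical schema; in practice, paste your real one.
schema = {"type": "object", "required": ["intent"]}

# Step 1: substitute the schema into the prompt text.
# PROMPT_TEMPLATE holds the full prompt from the section above (abridged here).
PROMPT_TEMPLATE = "You are a JSON-only response engine. ...\nSchema:\n{{SCHEMA}}"
system_prompt = PROMPT_TEMPLATE.replace("{{SCHEMA}}", json.dumps(schema, indent=2))

# Step 4: parse defensively; fall back to the typed error envelope.
def parse_strict_json(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Prompt failed: log raw output upstream, return the envelope here.
        return {"error": "cannot_produce", "reason": "unparseable model output"}
```

The same shape works in any language; the only requirements are string substitution for the placeholder and a try/catch around the parse.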

## Worked example

Schema:

```json
{
  "type": "object",
  "required": ["intent", "entities", "confidence"],
  "properties": {
    "intent": { "type": "string", "enum": ["search", "create", "delete", "unknown"] },
    "entities": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["type", "value"],
        "properties": {
          "type": { "type": "string" },
          "value": { "type": "string" }
        }
      }
    },
    "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
  }
}
```

User input: “find all orders from Sarah K. in the last week”

Model output:

```json
{
  "intent": "search",
  "entities": [
    { "type": "person", "value": "Sarah K." },
    { "type": "resource", "value": "orders" },
    { "type": "timeframe", "value": "last week" }
  ],
  "confidence": 0.94
}
```

No preamble. No code fences. Parses cleanly.
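The output above can be sanity-checked with the standard library alone. A full validator such as `jsonschema` would enforce the whole schema; these asserts cover only the required fields and ranges:

```python
import json

# The model output from the worked example, as a raw string.
raw = ('{"intent": "search", "entities": ['
       '{"type": "person", "value": "Sarah K."}, '
       '{"type": "resource", "value": "orders"}, '
       '{"type": "timeframe", "value": "last week"}], '
       '"confidence": 0.94}')

doc = json.loads(raw)  # raises if a fence or preamble slipped in
assert set(doc) >= {"intent", "entities", "confidence"}        # required fields
assert doc["intent"] in {"search", "create", "delete", "unknown"}
assert all({"type", "value"} <= set(e) for e in doc["entities"])
assert 0 <= doc["confidence"] <= 1
```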

## Why this works

- Rule 2 (“the first character of your response is `{`”) is the most consequential. Models love to prepend “Here is the JSON:” to the payload; this rule forbids it explicitly and is easy for the model to follow.
- Rule 4 (`null` over invention) prevents hallucinated values for fields the model can’t determine. Most production failures come from invented dates and IDs, not from format errors.
- Rule 5 (typed error envelope) gives the model a graceful escape hatch when input is genuinely unparseable, so edge cases never produce random unstructured text.
- The closing “re-read and validate” instruction reduces format errors in practice: the model spends one extra step checking its work.
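Rules 1 and 2 suppress fences at the source, but a defensive parser can also strip them client-side before the `JSON.parse` step. A minimal sketch (the helper name is hypothetical, not part of the prompt):

```python
import json
import re

def strip_fences(raw: str) -> str:
    """Remove a stray leading/trailing Markdown fence before parsing.

    Belt-and-braces guard: even with rule 1, a model occasionally
    wraps its output in fences anyway.
    """
    text = raw.strip()
    text = re.sub(r"^```(?:json)?\s*", "", text)  # opening fence, optional "json" tag
    text = re.sub(r"\s*```$", "", text)           # closing fence
    return text
```

Run this before parsing and the first/last-character guarantee of rule 2 holds even on the rare non-compliant response.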

## Customising

- **Tighter schema discipline.** Add: “Reject the request with the error envelope if the schema requires fields you cannot determine from the input.”
- **Multiple valid shapes.** If the response could be one of N shapes (e.g., one of two different events), use a JSON Schema `oneOf` with a discriminator field; the prompt handles it as-is.
- **Hebrew/Arabic content.** The prompt works as-is in any language. JSON keys stay English; values come back in whatever language the user wrote.
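As a sketch of the `oneOf` pattern: two hypothetical event shapes distinguished by a `kind` discriminator (field names are illustrative, not from this document):

```json
{
  "oneOf": [
    {
      "type": "object",
      "required": ["kind", "query"],
      "properties": {
        "kind": { "const": "search" },
        "query": { "type": "string" }
      }
    },
    {
      "type": "object",
      "required": ["kind", "id"],
      "properties": {
        "kind": { "const": "delete" },
        "id": { "type": "string" }
      }
    }
  ]
}
```

The `const` on `kind` lets both the model and your validator pick the branch unambiguously.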

**If you’re on Anthropic’s Claude API:** their function-calling / tool use support beats this prompt for high-stakes structured output. Use this prompt only for prototypes, lightweight cases, or non-Anthropic providers.

For more drop-in prompts, see the Prompts library. For why the system slot matters, see System vs User Prompts.