DailyGlimpse

OpenAI Urges Developers to Ditch Old Prompts for GPT-5.5: Start Fresh with Minimal Instructions

AI
April 26, 2026 · 3:56 PM

OpenAI has released a new prompting guide for GPT-5.5, and its central message is clear: don't reuse old prompts. Instead, start fresh with minimal, outcome-focused instructions. The guide also reaffirms the value of role definitions, which many practitioners had dismissed as obsolete.

The guide advises developers not to treat GPT-5.5 as a simple upgrade from earlier models like GPT-5.2 or GPT-5.4. Migration should begin from scratch, using the smallest prompt that accomplishes the task. Only then should developers tune reasoning effort, scope, tool descriptions, and output format against representative examples.

According to OpenAI, GPT-5.5 reasons more efficiently, so developers should first test low and medium reasoning effort levels before resorting to higher settings. Short, goal-oriented prompts tend to outperform lengthy, process-heavy ones.
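As a sketch of that migration workflow, the helper below steps up reasoning effort only when a cheaper setting fails an evaluation. It only builds request payloads; the model name is taken from the article, while `passes_eval` and the prompt are placeholders standing in for an actual request-and-review loop, not anything specified by the guide.

```python
# Hypothetical sketch: try low reasoning effort first, escalating only when
# an evaluation of the cheaper setting fails. passes_eval() is a stand-in
# for actually running the request and checking the output.

EFFORT_LADDER = ["low", "medium", "high"]

def build_payload(prompt: str, effort: str) -> dict:
    """Assemble a minimal, outcome-focused request payload."""
    return {
        "model": "gpt-5.5",  # model name as reported in the article
        "input": prompt,
        "reasoning": {"effort": effort},
    }

def escalate(prompt: str, passes_eval) -> dict:
    """Return the cheapest payload whose output passes the evaluation."""
    for effort in EFFORT_LADDER:
        payload = build_payload(prompt, effort)
        if passes_eval(payload):  # placeholder for request + quality check
            return payload
    return payload  # fall back to the highest effort tried

# Example evaluator that rejects only the lowest setting:
chosen = escalate("Resolve the ticket.",
                  lambda p: p["reasoning"]["effort"] != "low")
# chosen stops at "medium" rather than jumping straight to "high"
```

The point of the ladder is the guide's ordering advice: test low and medium effort first, and treat high effort as a last resort rather than a default.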

Old Prompts Can Hold the Model Back

The guide explicitly warns against carrying over every instruction from older prompts. Legacy prompts often over-specify processes because earlier models needed more guidance. With GPT-5.5, that extra detail creates noise, narrows the model's search space, or leads to mechanical answers.

Instead, prompts should define the target outcome, success criteria, constraints, and available context, then let the model figure out how to achieve it. The guide provides a positive example of a customer service prompt that focuses only on the goal:

Resolve the customer's issue end to end. Success means: the eligibility decision is made from available policy and account data; any allowed action is completed before responding; the final answer includes completed_actions, customer_message, and blockers; if evidence is missing, ask for the smallest missing field.

A negative example micromanages every step:

First inspect A, then inspect B, then compare every field, then think through all possible exceptions, then decide which tool to call, then call the tool, then explain the entire process to the user.

Absolute rules using words like "ALWAYS" or "NEVER" should be reserved for real invariants, such as security rules or required output fields. For judgment calls, OpenAI recommends decision rules. Explicit stop conditions prevent the model from cycling through unnecessary tool loops.

Role Definitions Are Back at the Top

The prompting community has debated whether role definitions are still effective in newer models. Some considered them unnecessary or even counterproductive. However, the GPT-5.5 guide pushes back: the recommended prompt structure opens with a role definition and context, followed by personality, goal, success criteria, constraints, output, and stop rules.

For customer-facing assistants, support workflows, or coaching tools, the guide suggests splitting personality and collaboration style into distinct dimensions. Personality covers tone, warmth, formality, or humor; collaboration style covers how the model works, when to ask questions, and how to handle uncertainty.

OpenAI offers two contrasting examples. First, a factual, task-focused personality block:

You are a capable collaborator: approachable, steady, and direct. Assume the user is competent and acting in good faith, and respond with patience, respect, and practical helpfulness. Prefer making progress over stopping for clarification when the request is already clear enough. Use context and reasonable assumptions to move forward. Ask for clarification only when missing information would materially change the answer or create meaningful risk.

And a more expressive style:

Adopt a vivid conversational presence: intelligent, curious, playful when appropriate, and attentive to the user's thinking. Ask good questions when the problem is blurry, then become decisive once there is enough context. Be warm, collaborative, and polished. Offer a real point of view rather than merely mirroring the user.

Each section should stay short. Details belong only where they actually shift behavior, and the recommended structure should be treated as a starting point rather than a rigid template.
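The recommended ordering can be sketched as a small assembler. The section names below come from the article's description of the guide; the heading format and the helper itself are illustrative assumptions, not part of the guide.

```python
# Hypothetical helper that concatenates prompt sections in the order the
# guide recommends, skipping anything left empty so each prompt stays minimal.

SECTION_ORDER = [
    "role", "context", "personality", "goal",
    "success_criteria", "constraints", "output", "stop_rules",
]

def assemble_prompt(sections: dict) -> str:
    """Join non-empty sections in the recommended order."""
    parts = []
    for name in SECTION_ORDER:
        text = sections.get(name, "").strip()
        if text:
            parts.append(f"# {name.replace('_', ' ').title()}\n{text}")
    return "\n\n".join(parts)

prompt = assemble_prompt({
    "role": "You are a support agent for Acme.",       # illustrative content
    "goal": "Resolve the customer's issue end to end.",
    "stop_rules": "Stop once the eligibility decision is delivered.",
})
```

Because empty sections are dropped, the same helper produces a bare role-plus-goal prompt for simple tasks and only grows when a section genuinely changes behavior.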

Setting Retrieval Budgets and Citation Rules

For fact-based answers, citation behavior should be included in the prompt. Developers should specify which claims need evidence, what counts as sufficient evidence, and how to respond when evidence is missing. The guide describes retrieval budgets that act as stop rules for searches.
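A retrieval budget of this kind can be sketched as a simple loop with two stop rules: the budget runs out, or the evidence already suffices. Here `search` and `is_sufficient` are stand-ins for a real retrieval pipeline and evidence check; the guide describes the budget idea, not this code.

```python
# Sketch of a retrieval budget acting as a stop rule for searches.
# search(query) and is_sufficient(evidence) are placeholder callables.

def gather_evidence(queries, search, is_sufficient, budget=3):
    """Run searches until evidence suffices or the budget is spent."""
    evidence = []
    for i, query in enumerate(queries):
        if i >= budget:
            break                      # budget exhausted: stop searching
        evidence.extend(search(query))
        if is_sufficient(evidence):
            break                      # enough evidence: stop early
    return evidence

# Example: each "search" returns one document, and two documents suffice,
# so the loop stops after two queries even though four were available.
docs = gather_evidence(
    ["q1", "q2", "q3", "q4"],
    search=lambda q: [f"doc-for-{q}"],
    is_sufficient=lambda e: len(e) >= 2,
)
```

The explicit budget is what keeps the model (or the surrounding agent loop) from cycling through open-ended search calls when extra evidence no longer changes the answer.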

For drafting tasks, OpenAI recommends drawing a clear line between claims that need sources and parts that can be written more freely. If there is little or no citable support, the guide says the model should produce a useful generic draft with placeholders or clearly labeled assumptions.

Preambles to Cut Perceived Latency

In streaming applications, every second before the first visible response matters. GPT-5.5 can spend noticeable time on reasoning or tool calls before any text appears. For longer or tool-heavy tasks, the guide recommends a short "preamble" — a visible update that confirms the request and names the first step. It improves perceived responsiveness without changing the task.
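The pattern can be sketched as a generator that yields a short confirmation immediately, before the slow work begins. The preamble wording and `run_task` are placeholders of ours; only the emit-a-visible-update-first idea comes from the guide.

```python
# Sketch of the preamble pattern: stream a short status line right away,
# then the actual result once the long-running step finishes.
# run_task() stands in for the model's reasoning/tool-call phase.

def stream_with_preamble(request: str, first_step: str, run_task):
    """Yield a confirmation immediately, then the task result."""
    yield f"Working on: {request}. First step: {first_step}."
    yield run_task(request)

chunks = list(stream_with_preamble(
    "summarize the incident log",
    "collecting log entries",
    lambda r: "Summary: ...",
))
```

The first chunk reaches the user before `run_task` completes, which is exactly the perceived-latency win the guide describes: nothing about the task changes, only when the user first sees output.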

Developers who would rather not rewrite prompts by hand can offload the work to Codex, OpenAI's coding agent, which can apply the guide's recommendations for them.