Most developers treat prompting as an afterthought—write something reasonable, observe the output, and iterate if needed. That approach works until reliability becomes critical. As LLMs move into production systems, the difference between a prompt that usually works and one that works consistently becomes an engineering concern. In response, the research community has formalized prompting into a set of well-defined techniques, each designed to address specific failure modes—whether in structure, reasoning, or style. These methods operate entirely at the prompt layer, requiring no fine-tuning, model changes, or infrastructure upgrades.
This article focuses on five such techniques: role-specific prompting, negative prompting, JSON prompting, Attentive Reasoning Queries (ARQ), and verbalized sampling. Rather than covering familiar baselines like zero-shot or basic chain-of-thought, the emphasis here is on what changes when these techniques are applied. Each is demonstrated through side-by-side comparisons on the same task, highlighting the impact on output quality and explaining the underlying mechanism.
Setting up the dependencies
Here, we’re setting up a minimal environment to interact with the OpenAI API. We securely load the API key at runtime using getpass, initialize the client, and define a lightweight chat wrapper to send system and user prompts to the model (gpt-4o-mini). This keeps our experimentation loop clean and reusable while focusing only on prompt variations.
The helper functions (section and divider) are just for formatting outputs, making it easier to compare baseline vs. improved prompts side by side. If you don’t already have an API key, you can create one from the official dashboard here: https://platform.openai.com/api-keys
import json
import os
from getpass import getpass

from openai import OpenAI

os.environ["OPENAI_API_KEY"] = getpass("Enter OpenAI API Key: ")

client = OpenAI()
MODEL = "gpt-4o-mini"


def chat(system: str, user: str, **kwargs) -> str:
    """Minimal wrapper around the chat completions endpoint."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        **kwargs,
    )
    return response.choices[0].message.content


def section(title: str) -> None:
    print()
    print("=" * 60)
    print(f" {title}")
    print("=" * 60)


def divider(label: str) -> None:
    print(f"\n── {label} {'─' * (54 - len(label))}")
Role-Specific Prompting
Language models are trained on a wide mix of domains—security, marketing, legal, engineering, and more. When you don’t specify a role, the model pulls from all of them, which leads to answers that are generally correct but somewhat generic. Role-specific prompting narrows that distribution: by assigning the model an explicit persona, you bias it toward the vocabulary, priorities, and depth of that domain, so the answer reads like it came from a specialist rather than an average over every field the model has seen.
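To make the contrast concrete, here is a minimal side-by-side sketch using the chat, section, and divider helpers defined earlier. The question and the persona below are illustrative choices, not fixed parts of the technique:

# Compare a generic system prompt against a role-specific one
# on the same question. Both calls use the chat wrapper above.
QUESTION = "How should I store user passwords in my web app?"

section("Role-Specific Prompting")

divider("Baseline (no role)")
print(chat("You are a helpful assistant.", QUESTION))

divider("Improved (explicit role)")
print(chat(
    "You are a senior application security engineer. "
    "Recommend concrete, current best practices and explicitly "
    "call out common mistakes to avoid.",
    QUESTION,
))

The baseline answer tends to list generic advice, while the role-specific version typically commits to specific choices and flags pitfalls, because the persona narrows which part of the training distribution the model draws from.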