A former Microsoft AI professional has stirred debate by implying that ChatGPT's increasingly cautious and generic responses may be a deliberate strategy to prevent the model from becoming 'coarse' or unrefined. The claim comes amid ongoing changes in the partnership between Microsoft and OpenAI.
According to the expert, large language models like ChatGPT are trained on enormous datasets and, without precise instructions, default to safe, average answers. They argue that recent modifications to the underlying technology prioritize generality over specificity, effectively sanding off rough edges to avoid offensive or controversial outputs.
To combat this, users are advised to craft highly specific prompts that define a role, target audience, tone, and key insight. For example: "As a senior marketing strategist, generate three clickbait ad headlines about our new eco-friendly sneaker line for Gen Z, using a humorous yet edgy tone, prioritizing environmental impact." This approach transforms vague ideas into actionable content in seconds.
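The prompt structure described above (role, audience, tone, key insight) can be sketched as a simple template. This is an illustrative helper only; the function name `build_prompt` and its parameters are hypothetical, not part of any ChatGPT API.

```python
def build_prompt(role: str, task: str, audience: str, tone: str, insight: str) -> str:
    """Assemble a specific prompt from the components the article lists:
    a role, a task, a target audience, a tone, and a key insight.
    (Hypothetical helper for illustration.)"""
    return (
        f"As a {role}, {task} for {audience}, "
        f"using a {tone} tone, prioritizing {insight}."
    )

# Reproduces the example prompt from the article:
prompt = build_prompt(
    role="senior marketing strategist",
    task="generate three clickbait ad headlines about our new eco-friendly sneaker line",
    audience="Gen Z",
    tone="humorous yet edgy",
    insight="environmental impact",
)
print(prompt)
```

Templating like this keeps the role, audience, tone, and insight slots explicit, so none of them is accidentally dropped when the prompt is reused for a new task.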
The implication is that the model's flattery, its tendency to be overly agreeable and avoid confrontation, is not merely a byproduct of user-feedback training but a built-in safety mechanism to mitigate risk.