Published: December 6, 2025

OpenAI, Anthropic, and Google engineers lean on internal prompting techniques that deliver near-perfect accuracy…and almost nobody outside the labs knows them. Here are 10 of them (bookmark this for later):

Technique 1: Role-Based Constraint Prompting

Experts don't just ask AI to "write code." They assign expert roles with specific constraints.

Template:
You are a [specific role] with [X years] experience in [domain].
Your task: [specific task]
Constraints: [list 3-5 specific constraints]
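The Technique 1 template is easy to turn into a small prompt builder. A minimal Python sketch (the function name and example values are mine, not from the thread):

```python
def role_prompt(role, years, domain, task, constraints):
    """Build a role-based constraint prompt from the template above."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are a {role} with {years} years experience in {domain}.\n"
        f"Your task: {task}\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = role_prompt(
    "senior backend engineer", 10, "Python APIs",
    "review this endpoint for security issues",
    ["no third-party libraries", "flag every unvalidated input", "keep it under 200 words"],
)
print(prompt)
```

Keeping the constraints as a list means you can reuse the same role across tasks and only swap the constraint set.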

Technique 2: Chain-of-Verification (CoVe)

Google's research team uses this to eliminate hallucinations. The model generates an answer, then generates verification questions, answers them, and refines the original response.

Template:
Task: [your question]
Step 1: Provide your initial answer.
Step 2: Generate verification questions that test that answer.
Step 3: Answer each verification question.
Step 4: Revise your initial answer based on the verification results.
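The four-step loop described above can be orchestrated in plain Python. In this sketch, `ask` is a stand-in for whatever function sends a prompt to your model; it's stubbed here because the control flow, not the client, is the point:

```python
def chain_of_verification(question, ask):
    """Draft -> verification questions -> verification answers -> revised answer."""
    draft = ask(f"Task: {question}\nProvide your initial answer.")
    checks = ask(f"Generate verification questions that test this answer:\n{draft}")
    findings = ask(f"Answer each verification question:\n{checks}")
    return ask(
        f"Original task: {question}\nDraft answer: {draft}\n"
        f"Verification findings: {findings}\n"
        "Refine the draft answer to fix anything the verification uncovered."
    )

# Stub model so the four-call flow is visible without an API call.
log = []
final = chain_of_verification(
    "When was the transistor invented?",
    lambda p: (log.append(p), f"reply {len(log)}")[1],
)
```

Swap the lambda for a real model call and the same four-pass structure applies unchanged.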


Technique 3: Few-Shot with Negative Examples

Anthropic discovered that showing the model what NOT to do is as powerful as showing what TO do.

Template:
I need you to [task]. Here are examples:
✅ GOOD Example 1: [example]
✅ GOOD Example 2: [example]
❌ BAD Example 1: [example]
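A sketch that assembles the good/bad example template from two lists (example commit messages are mine):

```python
def few_shot_prompt(task, good, bad):
    """Build a few-shot prompt with both positive and negative examples."""
    lines = [f"I need you to {task}. Here are examples:"]
    lines += [f"✅ GOOD Example {i}: {ex}" for i, ex in enumerate(good, 1)]
    lines += [f"❌ BAD Example {i}: {ex}" for i, ex in enumerate(bad, 1)]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "write commit messages",
    ["fix: handle empty cart in checkout total", "feat: add CSV export to reports"],
    ["fixed stuff", "WIP"],
)
```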


Technique 4: Structured Thinking Protocol

OpenAI's GPT-5 team uses this for complex reasoning tasks. Force the model to think in layers before responding.

Template:
Before answering, complete these steps:
[UNDERSTAND]
- Restate the problem in your own words
- Identify what's …
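One way to sketch the layered protocol as a reusable builder. The original template is truncated after the UNDERSTAND stage, so the stage names beyond it (PLAN, ANSWER) are my illustration:

```python
def structured_prompt(question, stages):
    """Assemble a think-in-layers prompt from named stages and their steps."""
    parts = ["Before answering, complete these steps:"]
    for name, bullets in stages.items():
        parts.append(f"[{name}]")
        parts.extend(f"- {b}" for b in bullets)
    parts.append(f"Question: {question}")
    return "\n".join(parts)

prompt = structured_prompt(
    "Why is this database query slow?",
    {
        "UNDERSTAND": ["Restate the problem in your own words"],
        "PLAN": ["List the checks you will run, in order"],
        "ANSWER": ["Give the diagnosis and one concrete fix"],
    },
)
```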

Technique 5: Confidence-Weighted Prompting

Google DeepMind uses this for high-stakes decisions. Ask the model to rate its confidence and provide alternative answers.

Template:
Answer this question: [question]
For your answer, provide:
1. Your primary answer
2. Your confidence level (0-100)
3. Alternative answers you considered
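The useful part of confidence weighting is acting on the number that comes back. A sketch of both halves; the reply format the regex expects and the 80% threshold are my assumptions:

```python
import re

def confidence_prompt(question):
    """Build the confidence-weighted prompt from the template above."""
    return (
        f"Answer this question: {question}\n"
        "For your answer, provide:\n"
        "1. Your primary answer\n"
        "2. Your confidence level (0-100)\n"
        "3. Alternative answers you considered"
    )

def needs_review(reply, threshold=80):
    """Flag replies whose stated confidence is missing or below the threshold."""
    m = re.search(r"confidence[^\d]*(\d{1,3})", reply, re.IGNORECASE)
    return m is None or int(m.group(1)) < threshold
```

Routing low-confidence replies to a human (or to a second model pass) is what makes the technique pay off in high-stakes settings.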


Technique 6: Context Injection with Boundaries

Anthropic engineers inject massive context but set clear boundaries on what matters.

Template:
[CONTEXT]
[paste your documentation, code, research paper]
[FOCUS]
Only use information from CONTEXT to answer. If the answer isn't in CONTEXT, say so.
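A sketch that wraps pasted material in the boundary markers; the [QUESTION] section and the exact fallback wording are my additions:

```python
def bounded_prompt(context, question, fallback="Not in the provided context."):
    """Wrap pasted material in CONTEXT/FOCUS boundaries per the template above."""
    return (
        "[CONTEXT]\n"
        f"{context}\n"
        "[FOCUS]\n"
        "Only use information from CONTEXT to answer. "
        f'If the answer isn\'t in CONTEXT, reply exactly: "{fallback}"\n'
        f"[QUESTION]\n{question}"
    )

prompt = bounded_prompt("The API rate limit is 60 requests/minute.", "What is the rate limit?")
```

Pinning the fallback to an exact string makes the "I don't know" case machine-detectable downstream.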

Technique 7: Iterative Refinement Loop

OpenAI's research team chains prompts to refine outputs through multiple passes.

Template:
[ITERATION 1] Create a [draft/outline/initial version] of [task]
[ITERATION 2] Review the above output. Identify 3 weaknesses or gaps.
[ITERATION 3] Rewrite the output to fix those weaknesses.
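Because this technique is a loop, it is clearer as code than as a template. A sketch with a stubbed `ask`, same convention as the CoVe example:

```python
def refinement_loop(task, ask, passes=2):
    """Draft once, then critique-and-rewrite `passes` times."""
    output = ask(f"[ITERATION 1] Create a first draft of: {task}")
    for i in range(2, passes + 2):
        critique = ask(f"Review the output below. Identify 3 weaknesses or gaps.\n{output}")
        output = ask(f"[ITERATION {i}] Rewrite the output to fix these issues:\n{critique}")
    return output

# Stub model: each call just returns a numbered reply.
log = []
result = refinement_loop("a launch announcement", lambda p: (log.append(p), f"v{len(log)}")[1])
```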

Technique 8: Constraint-First Prompting

Google Brain researchers start with constraints before the actual task.

Template:
HARD CONSTRAINTS (cannot be violated):
- [constraint 1]
- [constraint 2]
- [constraint 3]
SOFT PREFERENCES (optimize for these):
- [preference 1]
- [preference 2]
Task: [actual task]
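A sketch of the constraints-before-task ordering (example constraints are mine):

```python
def constraint_first(task, hard, soft):
    """Put non-negotiables first, preferences second, the task last."""
    return "\n".join(
        ["HARD CONSTRAINTS (cannot be violated):"]
        + [f"- {c}" for c in hard]
        + ["SOFT PREFERENCES (optimize for these):"]
        + [f"- {p}" for p in soft]
        + [f"Task: {task}"]
    )

prompt = constraint_first(
    "draft the release notes",
    ["no customer names", "under 300 words"],
    ["plain language", "lead with the breaking change"],
)
```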

Technique 9: Multi-Perspective Prompting

Anthropic's Constitutional AI uses multiple viewpoints to reduce bias and improve reasoning.

Template:
Analyze [topic/problem] from these perspectives:
[PERSPECTIVE 1: Technical Feasibility] [specific lens]
[PERSPECTIVE 2: Business …] [specific lens]
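A sketch that numbers each named perspective; the closing reconciliation line and the "Business Impact" lens are my additions, since the original template is cut off:

```python
def multi_perspective(topic, lenses):
    """Number each named perspective, then ask for a reconciliation at the end."""
    parts = [f"Analyze {topic} from these perspectives:"]
    for i, (name, lens) in enumerate(lenses.items(), 1):
        parts.append(f"[PERSPECTIVE {i}: {name}] {lens}")
    parts.append("Finally, reconcile the perspectives into one recommendation.")
    return "\n".join(parts)

prompt = multi_perspective(
    "migrating the monolith to services",
    {
        "Technical Feasibility": "What breaks, and how hard is the rollback?",
        "Business Impact": "Cost, timeline, and customer-facing risk.",
    },
)
```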

Technique 10: Meta-Prompting (The Nuclear Option)

This is what OpenAI's red team uses to break their own models and find edge cases. You ask the AI to generate the perfect prompt for itself.

Template:
I need to accomplish: [high-level goal]
Your task:
1. Analyze what the ideal prompt for this goal would need to include
2. Write that prompt
3. Follow the prompt you wrote
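Meta-prompting is a two-call pattern: the first call asks the model to write the prompt, the second runs what it wrote. A stubbed sketch:

```python
def meta_prompt(goal):
    """Ask the model to design the prompt for the goal (call 1 of 2)."""
    return (
        f"I need to accomplish: {goal}\n"
        "Your task:\n"
        "1. Analyze what the ideal prompt for this goal would need to include\n"
        "2. Write that prompt, and nothing else"
    )

def run_meta(goal, ask):
    """Call 1 generates the prompt; call 2 executes it."""
    generated = ask(meta_prompt(goal))
    return ask(generated)

# Stub model so the two-call handoff is visible without an API call.
log = []
answer = run_meta("summarize a legal contract", lambda p: (log.append(p), f"out {len(log)}")[1])
```

The "and nothing else" instruction matters: if the first reply mixes prose around the generated prompt, the second call inherits the clutter.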

You can use these techniques RIGHT NOW. I've been using them with Claude Sonnet 4.5, GPT-4, and Gemini 2.0 Flash for 6 months. Results:
- 100% reduction in hallucinations on technical docs
- 3x faster iteration on code generation
- 90%+ accuracy on complex analysis tasks

Enjoy this?
1. Follow me @Aigleeson
2. Discover how to publish content faster and go viral more often with my brand-new AI-Powered Creator training. 88% off for Black Friday weekend only 🚨
