Prompt Engineering
Crafting inputs to LLMs to get better outputs · few-shot examples, chain-of-thought triggers, role assignment, structured formats.
Basic
Prompt engineering techniques: give worked examples (few-shot), ask the model to think step-by-step (CoT), define a persona ("You are a senior Python reviewer"), specify output format ("Return JSON with keys..."), and use negative instructions ("Do not include explanations"). Unlike fine-tuning, prompt engineering is free, fast, and reversible. Modern prompting is less mystical than 2022-era "prompt wizardry" because models are better instruction-tuned, but structure still matters.
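The techniques above compose into a single prompt. A minimal sketch (the reviewer task and the few-shot example are illustrative, not from any real dataset):

```python
def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Assemble persona + few-shot examples + format spec + negative instruction."""
    parts = ["You are a senior Python reviewer."]              # role assignment
    for inp, out in examples:                                  # few-shot examples
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append('Return JSON with keys "issue" and "fix".')   # output format
    parts.append("Do not include explanations.")               # negative instruction
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "def add(a, b): return a - b",
    examples=[(
        "x = [1, 2, 3]; x.append(4, 5)",
        '{"issue": "append takes one argument", "fix": "x.extend([4, 5])"}',
    )],
)
print(prompt)
```

Each block is separated by a blank line so the model can distinguish instructions from examples; the trailing `Output:` cues the model to complete in the demonstrated format.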
Deep
Effective patterns: (1) Few-shot learning with 3-5 diverse examples, (2) Chain-of-thought triggers for math/logic, (3) Structured output via JSON schema or Pydantic, (4) Self-critique loops ("reflect on your answer"), (5) Decomposition ("break this into sub-tasks"). Anti-patterns: overly long preambles, conflicting instructions, emotional manipulation ("this is urgent!"). Claude's constitutional prompting, OpenAI's structured outputs, and Google's system-instruction API are the 2026 productized approaches. Frontier models in 2026 are robust enough that most prompt "hacks" from 2023 no longer matter.
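Pattern (3) can be sketched without any provider SDK: in production you would hand a JSON schema (or Pydantic model) to the provider's structured-output API for decode-time enforcement, but the client-side contract looks like this. The `verdict`/`confidence` schema and the model reply are hypothetical.

```python
import json

# Hypothetical schema: the keys and types we expect the model to return.
SCHEMA_KEYS = {"verdict": str, "confidence": float}

def parse_structured(raw: str) -> dict:
    """Parse a model reply and check it matches the expected keys and types."""
    data = json.loads(raw)
    for key, typ in SCHEMA_KEYS.items():
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if not isinstance(data[key], typ):
            raise TypeError(f"{key} should be {typ.__name__}")
    return data

# A well-formed (hypothetical) model reply passes; malformed ones raise.
reply = '{"verdict": "approve", "confidence": 0.92}'
result = parse_structured(reply)
print(result)
```

Decode-time enforcement makes replies valid by construction; a validator like this remains useful as a safety net at system boundaries.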
Expert
Prompt engineering as a formal discipline: OpenPrompt, LangChain prompt templates, DSPy for optimized few-shot selection. Few-shot example selection via semantic similarity outperforms random selection by 5-20%. Prompt compression (LLMLingua, H2Olingua) shrinks verbose prompts 2-10× with minimal quality loss. Chain-of-thought triggers lose effectiveness as models become CoT-native (trained to reason without explicit triggers). Structured outputs (JSON schema enforcement at decode time) have largely replaced ad-hoc format prompting.
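Similarity-based few-shot selection can be sketched as follows. Real pipelines embed with a sentence-encoder model; a toy bag-of-words vector stands in here so the example stays self-contained, and the example pool is invented.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy stand-in for a semantic embedding: token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(query: str, pool: list[str], k: int = 2) -> list[str]:
    """Pick the k pool examples most similar to the query."""
    q = embed(query)
    return sorted(pool, key=lambda ex: cosine(q, embed(ex)), reverse=True)[:k]

pool = [
    "Translate 'bonjour' to English",
    "Sum the numbers 3 and 4",
    "Translate 'gracias' to English",
]
selected = select_examples("Translate 'danke' to English", pool)
print(selected)  # the two translation examples, not the arithmetic one
```

Swapping `embed` for a real encoder (and the linear scan for a vector index) gives the retrieval setup the reported 5-20% gains refer to.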
Depending on why you're here
- Few-shot example selection · semantic similarity retrieval outperforms random
- Chain-of-thought triggers fade as models become CoT-native
- Prompt compression via LLMLingua or similar
- Start with clear instructions, add examples, iterate on edge cases
- Use structured outputs (JSON schema) over format prompts
- Test with the actual model you'll ship · behavior varies
- Prompt engineering as a profession is fading · it's a skill, not a job
- Prompt infra (DSPy, LangChain templates) is commodity
- Structured outputs killed the "prompt trickery" market
- The art of asking AI good questions
- Better prompts = better answers
- Less important with newer models but still matters
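"Iterate on edge cases" above can be made concrete with a tiny test harness: run the prompt over a small edge-case suite and report mismatches. `call_model` is a stub standing in for a real API call; its naive keyword heuristic deliberately fails on negation, which is exactly the kind of weakness edge-case testing surfaces.

```python
def call_model(prompt: str, text: str) -> str:
    # Stub: a real system would call your deployed model here.
    # This keyword heuristic mishandles negation on purpose.
    return "positive" if "love" in text.lower() else "negative"

PROMPT = "Classify the sentiment as 'positive' or 'negative'."

edge_cases = [
    ("I love this", "positive"),
    ("I do not love this", "negative"),  # negation: a classic edge case
    ("LOVE IT", "positive"),             # casing
]

failures = [(text, want, call_model(PROMPT, text))
            for text, want in edge_cases
            if call_model(PROMPT, text) != want]
print(f"{len(failures)} failing edge case(s): {failures}")
```

Keeping a suite like this versioned next to the prompt turns prompt iteration into ordinary regression testing, which also makes "test with the actual model you'll ship" cheap to honor.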
Prompt engineering as a standalone job is done. Prompt engineering as a skill is table stakes. Know what structured outputs are and move on.