
Prompt Engineering

Crafting inputs to LLMs to get better outputs · few-shot examples, chain-of-thought triggers, role assignment, structured formats.

TL;DR

Shape LLM outputs through the input alone: worked examples (few-shot), step-by-step reasoning triggers, role assignment, and explicit output formats. Unlike fine-tuning, no weights change.

Level 1

Prompt engineering techniques: give worked examples (few-shot), ask the model to think step-by-step (chain-of-thought, or CoT), define a persona ("You are a senior Python reviewer"), specify the output format ("Return JSON with keys..."), and use negative instructions ("Do not include explanations"). Unlike fine-tuning, prompt engineering is free, fast, and reversible. Modern prompting is less mystical than 2022-era "prompt wizardry" because models are better at following instructions, but structure still matters.
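The five techniques above compose naturally into one prompt. A minimal sketch, assuming a generic chat-completion API downstream; the code-review task and the few-shot examples are illustrative placeholders:

```python
# Sketch: assembling a prompt that combines the Level 1 techniques.
# The resulting string could be sent to any chat-completion API.

FEW_SHOT = [
    ("def add(a, b): return a + b", "OK: simple and correct."),
    ("def div(a, b): return a / b", "Issue: no guard against b == 0."),
]

def build_prompt(code_snippet: str) -> str:
    parts = [
        # Persona / role assignment
        "You are a senior Python reviewer.",
        # Chain-of-thought trigger
        "Think step-by-step before giving your verdict.",
        # Output format specification
        'Return JSON with keys "verdict" and "reason".',
        # Negative instruction
        "Do not include explanations outside the JSON.",
        "",
        # Few-shot worked examples
        *[f"Code: {code}\nReview: {review}" for code, review in FEW_SHOT],
        "",
        f"Code: {code_snippet}\nReview:",
    ]
    return "\n".join(parts)

print(build_prompt("def mul(a, b): return a * b"))
```

Note the ordering: role and rules first, examples in the middle, the live task last, so the model's continuation lands directly after `Review:`.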

Level 2

Effective patterns: (1) Few-shot learning with 3-5 diverse examples, (2) Chain-of-thought triggers for math/logic, (3) Structured output via JSON schema or Pydantic, (4) Self-critique loops ("reflect on your answer"), (5) Decomposition ("break this into sub-tasks"). Anti-patterns: overly long preambles, conflicting instructions, emotional manipulation ("this is urgent!"). Claude's constitutional prompting, OpenAI's structured outputs, and Google's system-instruction API are the 2026 productized approaches. Frontier models in 2026 are robust enough that most prompt "hacks" from 2023 no longer matter.
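Pattern (3), structured output, ultimately needs validation on the receiving end. A stdlib-only sketch of the fallback pattern for models that only *promise* JSON; in production you would prefer the provider's schema-enforced decoding (e.g., OpenAI structured outputs). The required keys and sample reply are illustrative:

```python
import json

REQUIRED_KEYS = {"verdict", "reason"}

def parse_review(raw: str) -> dict:
    """Parse a model reply and enforce the expected keys."""
    data = json.loads(raw)                # raises on non-JSON replies
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

reply = '{"verdict": "pass", "reason": "handles the edge cases"}'
print(parse_review(reply)["verdict"])     # → pass
```

The `ValueError` is the hook for a retry loop: on failure, re-prompt with the error message appended.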

Level 3

Prompt engineering as a formal discipline: OpenPrompt, LangChain prompt templates, and DSPy for optimized few-shot selection. Few-shot example selection via semantic similarity outperforms random selection by 5-20%. Prompt compression (e.g., LLMLingua) shrinks verbose prompts 2-10× with minimal quality loss. Chain-of-thought triggers lose effectiveness as models become CoT-native (trained to reason without explicit triggers). Structured outputs (JSON-schema enforcement at decode time) have largely replaced ad-hoc format prompting.
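Similarity-based few-shot selection is just nearest-neighbor retrieval over the example pool. A toy sketch: a real system would use dense embeddings (e.g., a sentence-transformer), while the bag-of-words cosine here only illustrates the retrieval step; the example pool is a placeholder:

```python
import math
from collections import Counter

POOL = [
    "Translate 'hello' to French",
    "Summarize this news article",
    "Translate 'goodbye' to Spanish",
    "Write a haiku about spring",
]

def embed(text: str) -> Counter:
    # Toy embedding: token counts. Swap in a real encoder in practice.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_examples(query: str, k: int = 2) -> list[str]:
    # Return the k pool examples most similar to the query.
    q = embed(query)
    return sorted(POOL, key=lambda ex: cosine(q, embed(ex)), reverse=True)[:k]

print(select_examples("Translate 'thanks' to German"))
```

For a translation query, the two translation examples win, which is exactly the effect behind the reported 5-20% gain over random selection.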

The takeaway for you
If you are a
Researcher
  • Few-shot example selection · semantic similarity retrieval outperforms random
  • Chain-of-thought triggers fade as models become CoT-native
  • Prompt compression via LLMLingua or similar
If you are a
Builder
  • Start with clear instructions, add examples, iterate on edge cases
  • Use structured outputs (JSON schema) over format prompts
  • Test with the actual model you'll ship · behavior varies
If you are a
Investor
  • Prompt engineering as a profession is fading · it's a skill, not a job
  • Prompt infra (DSPy, LangChain templates) is commodity
  • Structured outputs killed the "prompt trickery" market
If you are a
Curious · Normie
  • The art of asking AI good questions
  • Better prompts = better answers
  • Less important with newer models but still matters
Gecko's take

Prompt engineering as a standalone job is done. Prompt engineering as a skill is table stakes. Know what structured outputs are and move on.

Is it still a career? As a specialist role, mostly no; as a skill expected of every AI engineer, yes. The 2023 "prompt engineer" tier is gone by 2026.