Module 1 of 7
1. The Core Idea
Weak vs Strong Prompts
Constraints narrow the prediction space, producing better output
Weak Prompt
"Write about marketing."
Vague. The model has no constraints to guide its predictions, so the output will be generic and unfocused.
Strong Prompt
"Write a 200-word LinkedIn post about content marketing for B2B SaaS startups. Use a professional tone and include 3 actionable tips."
Specific. Constraints on length, audience, platform, tone, and structure produce focused output.
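The constraints in the strong prompt (length, platform, audience, tone, structure) can be composed programmatically. Below is a minimal, illustrative sketch of that idea; the `build_prompt` helper and its parameter names are assumptions for this example, not part of any library.

```python
def build_prompt(task, *, length=None, platform=None,
                 audience=None, tone=None, structure=None):
    """Assemble a prompt, appending each constraint that is specified."""
    parts = [task]
    if length:
        parts.append(f"Keep it to {length}.")
    if platform:
        parts.append(f"Format it for {platform}.")
    if audience:
        parts.append(f"Target audience: {audience}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    if structure:
        parts.append(f"Structure: {structure}.")
    return " ".join(parts)

# Weak: no constraints, so the model fills the gaps arbitrarily.
weak = build_prompt("Write about marketing.")

# Strong: every constraint from the example above is made explicit.
strong = build_prompt(
    "Write about content marketing.",
    length="200 words",
    platform="a LinkedIn post",
    audience="B2B SaaS startups",
    tone="professional",
    structure="include 3 actionable tips",
)
```

Each added keyword argument narrows the prediction space a little further, which is exactly the weak-to-strong progression shown above.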
This module introduces the foundational concept of prompt engineering: LLMs are prediction engines, and your prompt steers those predictions. It covers the essential configurations and the mindset to get started.
Model Comparison
Frontier models and their trade-offs for prompt engineering
| Model | Context | Strengths | Cost (input / output) |
|---|---|---|---|
| Claude Sonnet 4 | 200K | Coding, instruction following, safety | $3/$15 per M tokens |
| GPT-4o | 128K | Multimodal, speed, tool calling | $2.50/$10 per M tokens |
| Gemini 2.5 Pro | 1M | Long context, multilingual | $1.25/$10 per M tokens |
| Llama 3.3 70B | 128K | Open source, self-hosted, free | Free (local compute) |
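The per-million-token prices in the table translate directly into a per-request cost estimate. A rough sketch, using the table's prices as snapshot values (pricing changes over time, so treat the numbers as assumptions):

```python
# (input price, output price) in USD per million tokens, from the table above.
PRICES = {
    "Claude Sonnet 4": (3.00, 15.00),
    "GPT-4o": (2.50, 10.00),
    "Gemini 2.5 Pro": (1.25, 10.00),
    "Llama 3.3 70B": (0.00, 0.00),  # open source; cost is local compute
}

def estimate_cost(model, input_tokens, output_tokens):
    """USD cost of one call: tokens / 1,000,000 x price per million."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply on GPT-4o:
cost = estimate_cost("GPT-4o", 2_000, 500)  # → 0.01 (one cent)
```

Running the same 2,000/500-token call through Claude Sonnet 4 costs $0.0135, so the table's trade-offs show up quickly at scale.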