What is a Prompt?
The foundation of every AI interaction
A prompt is any input you provide to an AI model to get a desired response. It's the bridge between your intent and the machine's output. Think of it as giving instructions to an incredibly capable, but extremely literal, assistant.
The quality of your prompt directly determines the quality of the response. A vague prompt produces vague results. A precise, well-structured prompt produces focused, useful output.
Anatomy of a Prompt
Role: Assign the AI a specific identity or expertise to shape its perspective, vocabulary, and depth.
Context: Provide relevant background information, data, or documents the AI needs to give an accurate response.
Task: The core action you want the AI to perform. Be specific and clear about what exactly you need.
Output Format: Specify the desired format, length, tone, or structure of the output to ensure it meets your needs.
Examples: Provide input-output pairs to demonstrate exactly what you expect. This dramatically improves consistency.
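Putting the five parts together is mostly string assembly. A minimal sketch; the helper name and section labels here are illustrative, not a library API:

```python
def build_prompt(role, context, task, output_format, examples=None):
    """Combine the five anatomy parts into a single prompt string."""
    sections = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ]
    if examples:
        # Few-shot examples as input-output pairs
        shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
        sections.append(f"Examples:\n{shots}")
    return "\n\n".join(sections)

prompt = build_prompt(
    role="You are a senior copywriter.",
    context="The client sells ergonomic wireless mice to remote workers.",
    task="Write a 200-word product description.",
    output_format="Plain prose, two paragraphs.",
    examples=[("keyboard", "A quiet, compact board built for long workdays.")],
)
print(prompt)
```

Each part stays optional in spirit: a quick one-off prompt may only need a task, while production prompts usually benefit from all five sections.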
Golden Rules
Be specific. "Write a 200-word product description for a wireless mouse targeting remote workers" beats "Write about a mouse."
Provide context. The more relevant background you give, the more accurate the output.
Define the format. Tell the AI exactly how you want the response structured — JSON, bullets, table, prose.
Iterate. Prompting is a conversation. Refine based on what you get back.
Use positive instructions. Tell the model what to do, not just what to avoid.
Remember
The same prompt can produce different results across different models (GPT-4, Claude, Gemini). Always test your prompts on the specific model you plan to use in production.
Prompting Techniques
Parameters Lab
How each sampling setting affects the model's output
Temperature
At 0.7 — Balanced output. Good default for most tasks. Provides a mix of predictability and creativity. The model mostly picks expected words but occasionally surprises.
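Under the hood, temperature divides the model's logits before the softmax: lower values sharpen the distribution toward the most likely token, higher values flatten it. A toy sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # invented logits for three candidate tokens
low = softmax_with_temperature(logits, 0.1)   # near-deterministic
mid = softmax_with_temperature(logits, 0.7)   # balanced
high = softmax_with_temperature(logits, 1.5)  # flatter, more surprising
```

At 0.1 the top token gets essentially all the probability; at 1.5 the alternatives become real contenders.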
Top-P (Nucleus)
At 0.9 — Considers tokens covering 90% of probability mass. Good default. Allows reasonable diversity while filtering out very unlikely tokens.
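Nucleus sampling keeps the smallest set of highest-probability tokens whose cumulative mass reaches top-p, then renormalizes and samples from that set. A toy illustration with invented token probabilities:

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize the survivors."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break  # nucleus is complete
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zzz": 0.05}
filtered = nucleus_filter(probs, 0.9)  # the 0.05 tail token is cut
```

Unlike temperature, top-p adapts to the shape of the distribution: when the model is confident the nucleus is tiny, and when it is uncertain the nucleus widens.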
Max Tokens
At 500 tokens — Approximately 375 words. Good for paragraphs, summaries, and standard Q&A responses. Enough for a detailed answer without excessive length.
Frequency Penalty
At 0.0 — No penalty applied. The model freely reuses words. Good for technical writing where consistent terminology matters. Default for most APIs.
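A frequency penalty subtracts penalty × (number of times a token has already appeared) from that token's logit, so repeated words become progressively less likely. A simplified sketch of the OpenAI-style formula, with made-up logits:

```python
def apply_frequency_penalty(logits, counts, penalty):
    """Subtract penalty * (prior occurrences) from each token's logit,
    a simplified version of the OpenAI-style frequency penalty."""
    return {tok: logit - penalty * counts.get(tok, 0)
            for tok, logit in logits.items()}

logits = {"great": 2.0, "good": 1.8}  # invented logits
counts = {"great": 3}                  # "great" already used three times
adjusted = apply_frequency_penalty(logits, counts, 0.5)
# "great" falls to 2.0 - 0.5 * 3 = 0.5, so "good" now outranks it
```

This is why a penalty of 0.0 suits technical writing: forcing synonyms for terms like "function" or "endpoint" would hurt precision, not help style.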
Quick Reference: Parameter Presets
| Use Case | Temp | Top-P | Max Tokens | Freq Penalty |
|---|---|---|---|---|
| Factual Q&A | 0.1 | 0.8 | 200 | 0.0 |
| Code Generation | 0.2 | 0.9 | 1000 | 0.0 |
| Summarization | 0.3 | 0.85 | 300 | 0.3 |
| Email Drafting | 0.5 | 0.9 | 500 | 0.2 |
| Creative Writing | 0.9 | 0.95 | 2000 | 0.5 |
| Brainstorming | 1.0 | 0.95 | 1000 | 0.7 |
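In code, the preset table above is naturally a small lookup with per-call overrides. The preset names and helper below are illustrative, not a standard API:

```python
# Parameter presets from the quick-reference table above.
PRESETS = {
    "factual_qa":       {"temperature": 0.1, "top_p": 0.8,  "max_tokens": 200,  "frequency_penalty": 0.0},
    "code_generation":  {"temperature": 0.2, "top_p": 0.9,  "max_tokens": 1000, "frequency_penalty": 0.0},
    "summarization":    {"temperature": 0.3, "top_p": 0.85, "max_tokens": 300,  "frequency_penalty": 0.3},
    "email_drafting":   {"temperature": 0.5, "top_p": 0.9,  "max_tokens": 500,  "frequency_penalty": 0.2},
    "creative_writing": {"temperature": 0.9, "top_p": 0.95, "max_tokens": 2000, "frequency_penalty": 0.5},
    "brainstorming":    {"temperature": 1.0, "top_p": 0.95, "max_tokens": 1000, "frequency_penalty": 0.7},
}

def params_for(use_case, **overrides):
    """Fetch a preset, optionally overriding individual parameters."""
    params = dict(PRESETS[use_case])  # copy so presets stay pristine
    params.update(overrides)
    return params

short_summary = params_for("summarization", max_tokens=150)
```

Treat these values as starting points, not fixed rules: the same preset can behave differently across models, so tune from here.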
Prompt Frameworks
Battle-tested structures for writing effective prompts
CRISPE Framework
Capacity, Role, Insight, Statement, Personality, Experiment
A comprehensive framework that defines the AI's capacity, assigns a role, provides context (insight), states the task, sets a personality tone, and suggests experimentation.
Capacity: Expert marketing strategist
Role: CMO advisor for a SaaS startup
Insight: Budget is $10K/month, B2B focus
Statement: Create a 90-day growth plan
Personality: Data-driven, concise
Experiment: Include both conservative and aggressive strategies
RTF Framework
Role, Task, Format
The simplest effective framework. Define who the AI is, what it should do, and how to structure the output. Perfect for quick, everyday prompts.
Role: You are a UX researcher
Task: Analyze these 50 user feedback comments and identify the top 5 pain points
Format: Return as a numbered list with frequency count and example quote for each
CO-STAR Framework
Context, Objective, Style, Tone, Audience, Response
Developed by the GovTech Singapore team. Provides a structured approach that covers all essential aspects of a well-formed prompt.
Context: We're launching a new fintech app in India
Objective: Write a press release announcing the launch
Style: Professional, authoritative
Tone: Confident and forward-looking
Audience: Tech journalists and potential investors
Response: 400-word press release with headline, subheadline, 3 body paragraphs, and boilerplate
RACE Framework
Role, Action, Context, Expectation
A focused framework that combines role assignment with a clear action, contextual background, and explicit expectations for what good output looks like.
Role: Senior Python developer
Action: Refactor this function for better performance
Context: This runs 10K times/sec in a real-time pipeline
Expectation: Reduce time complexity, add type hints, include benchmarks before/after
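All four frameworks reduce to the same mechanic: labeled sections joined into one prompt. A single generic builder can therefore render any of them; the helper name here is ours, not part of any framework:

```python
def framework_prompt(**sections):
    """Render labeled sections, in the order given, as a framework-style prompt."""
    return "\n".join(
        f"{label.replace('_', ' ').title()}: {text}"
        for label, text in sections.items()
    )

# RTF: the simplest framework, three sections.
rtf = framework_prompt(
    role="You are a UX researcher",
    task="Analyze these 50 user feedback comments and identify the top 5 pain points",
    format="Return as a numbered list with frequency count and example quote for each",
)

# CO-STAR or CRISPE work the same way: just pass their six sections as keywords.
```

Choosing a framework is then mostly choosing which sections your task needs: RTF for quick everyday prompts, CO-STAR or CRISPE when audience, tone, and context all matter.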
AI Playgrounds & Tools
Practice prompting on real models — free and paid options
Free Playgrounds
Google AI Studio
Gemini models • Free tier
Full-featured playground for Gemini models. Supports text, image, and audio inputs. Adjust temperature, top-p, top-k, and system instructions. Free with generous limits.
HuggingChat
Open-source models • Free
Chat with Llama, Mistral, Qwen, and other open-source models. Free, no account required. Great for comparing open-source model behavior side-by-side.
Free LLM Playground
50+ models • Free • No signup
Access 50+ models from OpenAI, Anthropic, Google, and Meta — no signup, no API keys. Side-by-side comparison and shareable links. Daily limits apply.
LM Studio
Local models • Free • Desktop app
Run LLMs locally on your machine. Full parameter control (temperature, top-k, top-p, mirostat, repeat penalty). Perfect for privacy-sensitive experimentation.
Vercel AI Playground
Multi-model • Free
Compare responses across multiple models in real-time. Supports system prompts, temperature tuning, and streaming. Built by the Vercel team for developers.
Prompting Guide
Learn & Reference • Free
The most comprehensive open-source prompt engineering guide. Covers all techniques with examples. Essential reference for both beginners and advanced practitioners.
API & Premium Playgrounds
OpenAI Playground
GPT-4o, o3 • API credits
Full parameter control: temperature, top-p, frequency/presence penalty, stop sequences, logprobs. Chat, complete, and assistant modes. The gold standard for prompt testing.
Anthropic Workbench
Claude models • API credits
Test Claude models with system prompts, temperature, and max tokens. Supports extended thinking mode, tool use, and streaming. Generate API code snippets directly.
OpenRouter
200+ models • Pay per use
One API to access 200+ models from every major provider. Compare pricing, test prompts across models, and switch providers without code changes. Ideal for model evaluation.
Prompt Lab
Craft and structure your prompts before testing them on any playground above