Step 9 of 14

Prompt Building

The quality of what you get from AI depends heavily on how you ask. A vague prompt gets a vague answer; a precise prompt gets a precise answer. This guide teaches you how to build prompts that get reliable results.

This is the most important skill you'll learn. Whether you're using Claude Code, GSD, or Nexus — the quality of your prompt determines the quality of the result. Come back to this page whenever you're not getting good answers.

Why English?

All your prompts to AI should be in English. AI models are trained mostly on English text and technical documentation, so English prompts produce more accurate and more consistent results.

You don't need perfect English. Simple, clear sentences work best. "Fix the bug in login function" is better than a grammatically perfect paragraph that buries the instruction.

The 5-Part Prompt Structure

Every great prompt has up to 5 parts. You don't always need all five — but knowing them helps you build precise instructions.

1. Context: tells the AI what it's working with (the project, the situation, what already exists).
   Example: "This is a Python mining script for Bittensor subnet 18."

2. Task: the specific action you want done; use a verb such as create, fix, explain, refactor, analyze.
   Example: "Analyze the scoring function and explain how miners are ranked."

3. Constraints: rules, limits, or requirements; what to include and what to avoid.
   Example: "Don't modify the network layer. Only change the scoring logic."

4. Format: how you want the answer delivered (list, table, code, step-by-step).
   Example: "Give me a bullet-point summary, then the code."

5. Examples: show the AI what you expect by giving a sample of the desired output.
   Example: "For example: Metric: emission_rate, Value: 0.42, Meaning: this subnet..."

Start with Context + Task. These two are always required. Add Constraints, Format, and Examples when the task is complex or when the first result isn't what you wanted.
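In practice, the five parts are just pieces of text joined in a sensible order. Here is a minimal Python sketch of that idea; the `build_prompt` helper is hypothetical, for illustration only, and not part of Claude Code, GSD, or Nexus:

```python
def build_prompt(context="", task="", constraints="", fmt="", examples=""):
    """Assemble a prompt from the 5-part structure, skipping empty parts."""
    parts = [context, task, constraints, fmt, examples]
    return " ".join(p.strip() for p in parts if p.strip())

# Context + Task are always filled in; the rest are optional.
prompt = build_prompt(
    context="This is a Python mining script for Bittensor subnet 18.",
    task="Analyze the scoring function and explain how miners are ranked.",
    fmt="Give me a bullet-point summary, then the code.",
)
print(prompt)
```

Because empty parts are skipped, the same helper works for a quick two-part prompt or a full five-part one.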

Before / After Examples

The best way to understand prompt building is to see bad prompts fixed. Here are three real examples:

Example 1: Analyzing Code

Before (vague):
"Explain this code"

After (precise):
"This is the scoring function from Bittensor subnet 18 (in src/scoring.py). Explain how miners are ranked — what metrics are used, how weights are calculated, and what a miner needs to do to get a higher score. Give me a summary table with each metric, its weight, and what it measures."

Why it's better: The vague version could mean anything — Claude might explain syntax, or line-by-line logic, or the entire architecture. The precise version tells Claude exactly what you want to know (scoring, ranking, metrics) and exactly how to format it (summary table).

Example 2: Fixing a Bug

Before (vague):
"Fix the error"

After (precise):
"The miner crashes with 'ConnectionRefusedError' when it tries to register on the network. The error happens in src/miner.py at line 87. I think the problem is the subtensor endpoint URL. Fix it so it uses the correct finney endpoint (wss://entrypoint-finney.opentensor.ai:443) and add a retry mechanism that waits 5 seconds between attempts."

Why it's better: The vague version forces Claude to guess which error, which file, and what "fix" means. The precise version gives the exact error message, exact location, suspected cause, and desired solution — including specific technical details.
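For context, the fix that prompt describes might look roughly like this. This is a hedged sketch: `connect` is a stand-in for whatever connection call the real miner uses (not an actual Bittensor API), and the endpoint constant comes straight from the prompt above:

```python
import time

FINNEY_ENDPOINT = "wss://entrypoint-finney.opentensor.ai:443"

def connect_with_retry(connect, endpoint=FINNEY_ENDPOINT, attempts=3, delay=5):
    """Try to connect, waiting `delay` seconds between failed attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return connect(endpoint)
        except ConnectionRefusedError:
            if attempt == attempts:
                raise  # out of attempts: let the caller see the error
            time.sleep(delay)
```

Notice how every detail in the sketch (the endpoint, the 5-second wait, the retry) was pinned down by the precise prompt; none of it could be inferred from "Fix the error".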

Example 3: Building Something New

Before (vague):
"Make a dashboard"

After (precise):
"Create an HTML page called status.html that shows the mining status for our Bittensor subnets. Use a simple table with columns: Subnet Number, Subnet Name, Our Rank, Daily Emission, Status (mining/stopped). Read the data from a JSON file called data/status.json. Style it with dark background (#0f0f13) and purple accents (#7c6ef0) to match our existing site."

Why it's better: "Make a dashboard" gives Claude zero guidance — it could build a React app, a terminal dashboard, or a Jupyter notebook. The precise version specifies the technology (HTML), the data structure (table columns), the data source (JSON file), and the visual style (matching existing site).
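To see why pinning down the data structure matters, here is a rough Python sketch of the table-rendering logic that prompt describes. The `render_status_table` helper and the row format are assumptions for illustration, not code from any real dashboard:

```python
import json

# Column names taken directly from the precise prompt above.
COLUMNS = ["Subnet Number", "Subnet Name", "Our Rank", "Daily Emission", "Status"]

def render_status_table(rows):
    """Render mining-status rows (list of dicts) as an HTML table."""
    header = "".join(f"<th>{c}</th>" for c in COLUMNS)
    body = "".join(
        "<tr>" + "".join(f"<td>{row.get(c, '')}</td>" for c in COLUMNS) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{header}</tr>{body}</table>"

# Data would come from data/status.json; a literal stands in here.
rows = json.loads('[{"Subnet Number": 18, "Status": "mining"}]')
html = render_status_table(rows)
```

Every column name and the JSON data source were specified in the prompt; a vague "Make a dashboard" leaves all of these choices to chance.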

Power Phrases

These phrases make Claude work smarter. Add them to the end of your prompts when you need deeper, more careful responses:

"Think step by step"
  When to use: complex problems with multiple parts.
  What it does: forces Claude to break the problem down instead of jumping to a conclusion.

"Explain your reasoning"
  When to use: when you need to understand WHY, not just WHAT.
  What it does: Claude shows its thought process so you can verify the logic.

"What are the risks?"
  When to use: before making changes to production code or configuration.
  What it does: Claude identifies potential problems, edge cases, and failure modes.

"What am I missing?"
  When to use: when you've proposed a solution but aren't sure it's complete.
  What it does: Claude looks for gaps, edge cases, or overlooked requirements.

"Compare X and Y"
  When to use: when choosing between two approaches, libraries, or designs.
  What it does: Claude gives a structured comparison with pros and cons.

"Give me the simplest solution that works"
  When to use: when you want a practical solution, not an over-engineered one.
  What it does: prevents Claude from building a complex solution when a simple one will do.

"What would you change?"
  When to use: code review, when you want Claude to critique existing code.
  What it does: Claude identifies improvements, bugs, and style issues.

Combine power phrases. "Analyze this scoring function. Think step by step. What are the risks of changing the weight calculation? Explain your reasoning." — This gets you a thorough, well-structured analysis.

Bittensor Prompt Patterns

When working with Bittensor subnets, these prompt patterns will save you time. Copy and adapt them:

Subnet Analysis

I'm looking at Bittensor subnet [NUMBER]. Read the repository README and the scoring/validation code. Tell me: (1) What task this subnet performs, (2) How miners are scored, (3) What the emission rate per miner looks like, (4) How competitive it is for a new miner to enter. Format as a summary with a recommendation: MINE, SKIP, or WATCH.

Codebase Mapping

Map the codebase of this Bittensor subnet repository. Show me the file structure, identify the main entry points (miner.py, validator.py), and explain the scoring mechanism. Focus on what a miner needs to implement to earn rewards.

Scoring System Deep-Dive

Analyze the scoring/reward function in this subnet. Walk me through step by step: what inputs does the validator use, how are scores calculated, what weight does each metric have, and what's the minimum performance needed to stay profitable. Think step by step and explain your reasoning.

Mining Decision

Given these subnet metrics: emission per miner = [X], number of miners = [Y], our GPU = RTX 4090. Is it worth mining this subnet? Compare the expected daily reward against our hardware costs. What are the risks? What am I missing?

Always verify AI analysis. Claude gives excellent analysis, but subnet data changes constantly. Always cross-check emission rates and miner counts on taostats.io or through Nexus before making mining decisions.
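The comparison that prompt asks for is simple arithmetic, sketched below. The even emission split is a simplifying assumption (real rewards depend on scoring and weights), so treat this as a back-of-the-envelope check, not a profitability model:

```python
def daily_profit(subnet_emission_tao, miner_count, tao_price_usd,
                 power_kw, electricity_usd_per_kwh):
    """Rough expected daily profit for one miner, in USD.

    Assumes emission is split evenly across miners, which real
    subnets do not do; use it only as a first sanity check.
    """
    reward_usd = (subnet_emission_tao / miner_count) * tao_price_usd
    power_cost_usd = power_kw * 24 * electricity_usd_per_kwh
    return reward_usd - power_cost_usd
```

Plugging in your subnet's numbers (and your GPU's power draw) gives a quick signal before you ask Claude the fuller "Is it worth mining?" question, and before you verify the inputs on taostats.io.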

Quick Reference

Understanding code: "Read [file]. Explain [specific thing]. Format as [table/list/summary]."
Fixing a bug: "Error: [message] in [file] at [line]. I think [cause]. Fix it by [approach]."
Building something: "Create [thing] that [purpose]. Use [technology]. Include [features]. Style it [how]."
Choosing an approach: "Compare [A] and [B] for [use case]. Which is simpler? What are the risks?"
Review/improve: "Review [file]. What would you change? Think step by step."

Now that you know how to build prompts, learn how to manage entire projects with the GSD Workflow guide.