The SQL Query Optimization Prompt for People Who Didn’t Take a Database Class
A copy-paste prompt that finds the slow parts of your SQL queries, explains the fix in plain English, and works even if you’ve never heard the words “execution plan.”
SQL query optimization sits in a strange gap in developer education.
Most computer science programs cover databases, but only a fraction spend real time on query performance.
Bootcamps teach SQL syntax and basic CRUD, but rarely get into how queries actually execute under the hood.
The result is a generation of developers who can write working queries but freeze when one of those queries hits real production data and slows to a crawl.
The performance problem itself is older than databases.
A database is, at its core, a system designed to find specific rows in massive amounts of stored data.
When that lookup is structured well, finding a record in a billion rows takes milliseconds.
When it’s structured poorly, finding the same record can take minutes, because the database is forced to read every row in the table to check whether it matches.
The difference between fast and slow queries is almost always about whether the database can use a shortcut, and whether the query is written in a way that lets it.
The shortcuts have names: indexes, query planners, execution paths.
The names sound intimidating, but the underlying ideas are simple.
A slow query is almost always a query that prevented the database from using a shortcut it would otherwise have used.
There are four or five patterns that account for the majority of slow queries written by junior developers. Each pattern has a specific cause, a specific fix, and a specific way to recognize it. Once you know the patterns, optimization stops being mysterious. It becomes a checklist.
This post covers two things. First, a structured prompt you can paste into Claude or ChatGPT, along with your slow query, that walks you through the diagnosis the way a senior database engineer would. Second, the four patterns the prompt most commonly identifies, so over time you start recognizing them in your own code before the prompt has to.
Why query performance matters more than juniors usually realize
Most junior developers learn SQL on small datasets.
The user table has 50 rows. The orders table has 200.
Every query feels fast, regardless of how it’s written, because the database can read the entire table almost instantly at that scale.
This creates a false sense of competence.
A query that returns in 8 milliseconds against 200 rows might return in 11 seconds against 4 million rows, and the difference isn’t proportional.
Some inefficient queries get linearly slower as data grows. Others, like a nested-loop join over two growing tables, get quadratically slower or worse.
The query that worked fine in development can become an outage in production, and the developer who wrote it often doesn’t understand why.
There’s also a quiet career consequence to performance bugs.
Logic bugs are usually treated as misunderstandings of the requirement, which are forgivable.
Performance bugs are usually treated as judgment failures, which are remembered longer.
Junior developers who repeatedly ship slow queries get a reputation for not thinking about scale. That reputation is harder to shake than the reputation for a logic bug ever is.
The good news is that AI tools are particularly effective at SQL optimization.
The patterns are well-documented, the rules are unambiguous, and the diagnosis follows predictable steps. What used to require a senior with years of database experience can now be done by anyone willing to paste a query into a structured prompt.
The skill being purchased is recognition, not authorship.
The Prompt
Open Claude, ChatGPT, or your AI of choice. Paste in your query. Then paste this prompt below it:
I have a SQL query that’s running slowly in production. I’m not a database expert. Please do the following:
1. Walk me through what the query is doing in plain English, step by step, in the order the database actually executes it (not the order I wrote it).
2. Identify the most likely reason this query is slow. If there are multiple possible reasons, rank them by likelihood. Tell me what to check to confirm which one is the actual cause.
3. For the most likely cause, explain why it’s slow in language a non-database-engineer can understand. Use an analogy if useful.
4. Suggest a specific fix. Show me the rewritten query and explain what changed.
5. Tell me what indexes (if any) should exist on the tables involved, and how I’d verify whether they exist already.
6. If the fix requires me to ask my senior or DBA something, write out the exact question to ask.
Tables involved: [list table names]
Approximate row counts: [if you know them, paste here]
Database: [Postgres / MySQL / SQL Server / etc.]
That’s it.
Run that prompt against any slow query and you’ll usually get a useful answer in under 90 seconds.
The reason it works is that it forces the AI to do six different things in sequence, each of which catches a different category of problem, and explicitly asks for plain-English explanations rather than database jargon.
Why This Prompt Works When Generic Ones Don’t
A simple prompt like “why is this query slow?” usually gives you a generic answer: “consider adding indexes” or “check your joins.”
That answer is technically correct and practically useless. You knew that going in.
The structured prompt above does three things differently:
It separates diagnosis from fix: the first three steps figure out what’s wrong before suggesting a change. Most slow-query problems have multiple plausible causes, and jumping straight to a fix without diagnosis means you might fix the wrong thing
It forces ranked likelihoods: by asking the AI to rank causes by likelihood, you get a sense of how confident the suggestion is. If the AI says “this is most likely a missing index, but it could also be a sequential scan due to a function in the WHERE clause,” you know to verify both
It demands plain-English explanations: the explicit “language a non-database-engineer can understand” instruction is the part most beginners forget to include. AI tools default to technical density unless told otherwise
The two pieces of context at the bottom (table names, row counts, database type) matter more than they look.
Without them, the AI guesses. With them, it can give you specific advice.
If you don’t know the row counts, even rough estimates (“users table is around 500K, orders is around 2M”) meaningfully improve the output.
The 4 Most Common Slow-Query Patterns
After you’ve run the prompt above on a few queries, you’ll start to recognize patterns yourself.
Here are the four that account for the majority of slow queries written by junior developers, with the language to use when you spot them.
1. The Missing Index
This is the most common cause of slow queries, and it has a very specific signature: a query filters on a column (in a WHERE, JOIN, or ORDER BY) that doesn’t have an index on it.
The plain-English version: imagine looking up a person in a phonebook by last name. The phonebook is sorted by last name, so it’s fast.
Now imagine looking up a person in that same phonebook by their phone number. You’d have to read every entry. That’s what a database does without an index, on every query.
What to check: most databases have a way to ask “what indexes exist on this table?” In PostgreSQL’s psql shell, it’s \d table_name.
In MySQL, it’s SHOW INDEX FROM table_name.
If your slow query filters on a column that isn’t in any index, that’s almost always the problem.
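You can watch this happen yourself without a production database. Here is a minimal sketch using Python’s built-in sqlite3 module (the table and index names are hypothetical; in Postgres or MySQL you’d use EXPLAIN instead, but the planner’s decision is the same):

```python
import sqlite3

# In-memory SQLite database; "users" and "email" are stand-in names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # the detail column describes how the planner will run the query.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE email = 'a@example.com'"
before = plan(query)   # no index covers email: a full-table scan

conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)    # same query, now an index search

print(before)
print(after)
```

The query text never changes; only the available shortcut does. That is the whole story of the missing-index pattern.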
2. The N+1 Query
This isn’t strictly a “slow query” problem. It’s a “many fast queries pretending to be one operation” problem.
Your application code runs one query to get a list of items, then runs another query for each item to fetch related data, multiplying database round trips by the number of items.
This is the most common AI-generated bug in 2026. AI loves writing loops that fetch related data one row at a time, because it looks clean. It looks like this in code:
```python
users = db.query("SELECT * FROM users WHERE active = true")
for user in users:
    # one extra query per user (the f-string interpolation is also
    # a SQL injection risk, but that's a separate problem)
    orders = db.query(f"SELECT * FROM orders WHERE user_id = {user.id}")
```

What’s wrong: if there are 1,000 users, this runs 1,001 queries instead of 1. The fix is to use a single query with a JOIN, or to fetch all the orders for all users in one query and group them in memory.
What to check: count the number of database queries your endpoint makes. Most ORMs have a way to log this.
If a single API request makes more than a handful of queries, you almost certainly have an N+1.
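The “fetch once, group in memory” fix is only a few lines. Here is a minimal sketch with an in-memory SQLite database; the tables, columns, and sample rows are hypothetical stand-ins for the snippet above:

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 1), (2, 1), (3, 0);
    INSERT INTO orders VALUES (1, 1, 9.99), (2, 1, 5.00), (3, 2, 20.00);
""")

user_ids = [row[0] for row in
            conn.execute("SELECT id FROM users WHERE active = 1")]

# One query instead of len(user_ids) queries; "?" placeholders also
# avoid the SQL injection risk of string interpolation.
placeholders = ",".join("?" * len(user_ids))
rows = conn.execute(
    f"SELECT user_id, id, total FROM orders "
    f"WHERE user_id IN ({placeholders}) ORDER BY id",
    user_ids,
)

orders_by_user = defaultdict(list)
for user_id, order_id, total in rows:
    orders_by_user[user_id].append((order_id, total))

print(orders_by_user[1])  # [(1, 9.99), (2, 5.0)]
```

Two round trips total, no matter how many users there are. Most ORMs expose the same idea as “eager loading.”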
3. The Sequential Scan You Didn’t Expect
Sometimes a query has perfectly good indexes available, but the database refuses to use them. This usually happens because of a function or transformation applied to the indexed column in the WHERE clause.
The classic example: you have an index on created_at, but your query filters on WHERE DATE(created_at) = '2026-04-30'.
The function call wraps the column, and the database can no longer use the index. It has to read every row, apply the function, and check the result.
The plain-English version: an index on a column is a sorted list of that column’s values. The moment you wrap the column in a function, you’re no longer asking about the column itself, you’re asking about the result of the function.
The database has no shortcut for that. It has to compute the function for every row.
The fix: rewrite the query so that whatever transformation you wanted is on the other side of the comparison. Instead of WHERE DATE(created_at) = '2026-04-30', write WHERE created_at >= '2026-04-30' AND created_at < '2026-05-01'. The index works again.
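You can see the planner change its mind with the same EXPLAIN trick. A minimal sketch using SQLite’s date() function and query-plan output (the events table is hypothetical; Postgres and MySQL behave the same way with their own EXPLAIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE INDEX idx_events_created_at ON events (created_at)")

def plan(sql):
    # Collapse the planner's output rows into one readable string.
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function wrapped around the column: the index is unusable, so it scans.
wrapped = plan("SELECT * FROM events WHERE date(created_at) = '2026-04-30'")

# Range on the bare column: the same index works again.
ranged = plan("SELECT * FROM events WHERE created_at >= '2026-04-30' "
              "AND created_at < '2026-05-01'")

print(wrapped)
print(ranged)
```

Same data, same index, same logical question; the only difference is which side of the comparison the transformation lives on.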
4. The Over-Fetched Result
This one isn’t about query speed. It’s about query cost. Your query selects more data than the application actually needs, and the database spends time and memory shipping rows or columns to your code that get thrown away.
The most common version: SELECT * when you only need 3 columns.
On a wide table (with 30 columns, including text and JSON fields), SELECT * can be 10x slower than selecting only what you need.
The other version: queries that don’t have a LIMIT when they only need the first few results. Listing endpoints in particular often paginate in application code, but the underlying query has already fetched all 50,000 rows.
What to check: look at every SELECT * in your slow query and ask whether you actually need all those columns. Look at every query that returns a list and ask whether you actually need every row.
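The before-and-after is easy to sketch. This example uses an in-memory SQLite table with a deliberately wide notes column (all names and sizes are hypothetical) to show the shape of the fix, explicit columns plus a LIMIT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "user_id INTEGER, total REAL, notes TEXT)")
# 1,000 rows, each carrying a large text field the listing page never shows.
conn.executemany(
    "INSERT INTO orders (user_id, total, notes) VALUES (?, ?, ?)",
    [(i % 10, i * 1.5, "x" * 500) for i in range(1000)],
)

# Before: ships every row and every column, including the wide notes field.
all_rows = conn.execute("SELECT * FROM orders").fetchall()

# After: only the columns and rows the page actually renders.
page = conn.execute(
    "SELECT id, user_id, total FROM orders ORDER BY id LIMIT 20"
).fetchall()

print(len(all_rows), len(page))  # 1000 20
```

The second query does the pagination in the database, where it’s cheap, instead of in application memory, where it isn’t.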
What This Approach Won’t Fix
A few honest limits worth naming.
Schema design problems: if your tables are structured badly, no query optimization will fix them. AI can sometimes spot schema issues (“this query is slow because the data should probably live in a different shape”), but rewriting your schema is a bigger conversation than this prompt is built for. If the AI suggests this, escalate to your senior
Concurrency and locking issues: if your query is fast on its own but slow under real production load, the problem is probably about how it interacts with other queries running at the same time. AI can guess at this, but it can’t see your actual production traffic patterns. This category of problem usually requires real production observability tools
Database tuning: parameters like memory allocation, connection pooling, and query planner settings can dramatically affect performance. These are out of scope for application-level developers and almost always belong to the DBA or platform team. If the AI suggests changing database configuration, that’s not your job to do unilaterally
The prompt above handles the application-level optimization that juniors are realistically responsible for.
The deeper categories require either more context than AI has access to, or authority you don’t have yet.
What to Try This Week
Pick one query you’ve shipped that you’re not 100% sure is fast.
Maybe it’s a list endpoint.
Maybe it’s a search feature.
Maybe it’s a report that “feels a little slow.”
Run the prompt above on it.
See what it finds.
Even if the answer is “this query is fine,” you’ll have learned something about what “fine” looks like, which is the foundation for catching the next slow query before it ships.
The dev who runs this prompt on every query before merge is the dev who stops being a junior performance liability. That dev gets the next interesting ticket.
The dev who waits to be told there’s a problem in production gets the next bug-fix ticket.
You pick which one you want to be.
The best time to find a slow query is before it leaves your laptop.