The 9-Point Self Code Review Every Junior Developer Should Run Before Standup
Use AI to catch the embarrassing bugs in your own pull request, before your senior catches them in front of the whole team.
You open a pull request 30 minutes before standup and go get coffee. When you sit back down, your senior pings you in the channel.
“Hey, quick thing on your PR...”
The next 90 seconds will determine your standup.
If it’s a typo, you’re fine. If it’s a missing edge case, you’re embarrassed but okay.
If it’s “did you actually run this?”, your morning is ruined.
Every junior developer has been here.
The good news is that most of what your senior is about to catch can be caught by AI in under 10 minutes, before the PR is ever submitted. You just have to know what to ask.
This is a 9-point self code review checklist you can run on your own pull request, with AI, in less time than it takes to make coffee.
Each item targets one of the things real seniors actually flag, including the dumb stuff they’re too polite to mention twice.
Why self-review matters more in 2026 than it ever did
The math has shifted underneath you.
Industry analysis from late 2025 and early 2026 found that AI code generation tools have caused individual developers to output 2 to 3 times more code than before, while human review capacity has stayed flat.
Seniors are reviewing more code in the same number of hours they’ve always had.
The patience for avoidable mistakes is lower than it used to be.
The payoff for catching your own mistakes before submission is higher than it used to be.
There’s a useful framing from a popular dev.to post on AI-generated code review.
The author suggests treating every AI output as a pull request from a developer who never read your codebase, has no idea what your business does, and learned to code from outdated answers. That’s the lens.
Run that lens on your own code first, before your senior has to.
Now, the 9 prompts.
How to use this checklist
Open a fresh chat with your AI of choice (Claude, ChatGPT, Cursor’s chat, whichever you use).
Paste your diff or your changed files into the conversation. Then run the prompts below in order.
The full sequence takes about 8 to 10 minutes. Cheap insurance against the standup callout.
1. The “what does this actually do?” check
“Walk me through this code line by line in plain English. Focus on what it does, not what it’s supposed to do. Flag anything that seems off.”
This catches the gap between what you intended and what you actually wrote. You’d be surprised how often the code in front of you doesn’t do what you thought it did when you wrote it.
Having AI explain your code back to you in plain language exposes those gaps quickly.
This is also useful as a final sanity check after a long debugging session, when your brain has been deep in the code for hours and you’ve lost the ability to read it freshly.
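A classic example of the kind of gap this walkthrough surfaces. The snippet is invented, but the bug is real JavaScript behavior:

```typescript
// You meant: "sort the order totals from smallest to largest."
const totals = [5, 100, 25];

// What the code actually does: the default sort compares string
// representations, so 100 sorts before 25 because "1" < "2".
totals.sort();
console.log(totals); // [100, 25, 5]

// The version you meant to write:
totals.sort((a, b) => a - b);
console.log(totals); // [5, 25, 100]
```

You wrote “sort the totals.” The code sorted strings. A plain-English walkthrough catches that on the first line.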
2. The hardcoded-secret check
“Are there any hardcoded API keys, database passwords, tokens, or other secrets in this code? Any environment variables that should be loaded but aren’t?”
This is the highest-stakes item on the list.
Hardcoded secrets in production code are how careers stall and how teams lose customer trust.
In one well-publicized 2025 incident, an AI-generated pull request for the popular NX build tool introduced a command injection vulnerability: pull request titles were fed directly into shell commands without sanitization. The compromised package update that followed affected over a thousand developers.
Your senior will not be polite the second time you commit a hardcoded credential. Catch this one yourself.
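For concreteness, here’s a minimal before-and-after sketch of what this prompt flags. The `ApiClient` class and `PAYMENT_API_KEY` variable are hypothetical stand-ins for whatever your code actually uses:

```typescript
// Stand-in for whatever SDK client your code constructs.
class ApiClient {
  constructor(public readonly config: { apiKey: string }) {}
}

// Before: the key ships in the code and lives in git history forever.
// const client = new ApiClient({ apiKey: "sk_live_4f8a..." });

// After: the key is loaded from the environment, and startup fails
// loudly if it's missing instead of limping along with a bad value.
const apiKey = process.env.PAYMENT_API_KEY;
if (!apiKey) {
  throw new Error("PAYMENT_API_KEY is not set");
}
const client = new ApiClient({ apiKey });
```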
3. The null-and-empty check
“What happens if every input to every function is null, undefined, an empty string, an empty array, or zero? List the failure cases.”
AI tools love the happy path. They tend to skip edge cases, null states, and failure modes by default unless you specifically ask. The result is code that works perfectly in your tests and breaks the moment a real user does something unexpected.
The junior developer who handles edge cases gracefully looks senior. The one who doesn’t fields bug tickets for weeks afterward.
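Here’s the kind of failure case this prompt lists, using a hypothetical helper function:

```typescript
// Hypothetical helper: average order value across a customer's orders.
function averageOrderValue(orders: { total: number }[]): number {
  const sum = orders.reduce((acc, o) => acc + o.total, 0);
  return sum / orders.length; // empty array -> 0 / 0 -> NaN
}

console.log(averageOrderValue([{ total: 40 }, { total: 60 }])); // 50
console.log(averageOrderValue([])); // NaN, which then leaks into the UI

// The guarded version this prompt pushes you toward:
function averageOrderValueSafe(orders: { total: number }[]): number {
  if (orders.length === 0) return 0;
  return orders.reduce((acc, o) => acc + o.total, 0) / orders.length;
}
```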
4. The “does this match our codebase” check
“Here’s our codebase style guide and a sample file from our repo: [paste]. Does my code follow the same patterns, naming conventions, and architecture? Where does it diverge?”
This is the prompt that prevents you from looking like an outsider in your own team’s repository.
AI is excellent at matching style, but only if you give it the style to match. Without context, it defaults to whatever was most common in its training data, which is rarely your team’s specific conventions. Paste an example file. Paste your linting config. Paste your team’s style guide if one exists. The output gets dramatically better.
5. The N+1 query check
“Are there any database queries inside loops? Any places where I’m fetching one record at a time when I could fetch them all at once?”
The N+1 pattern is one of AI’s most common mistakes in backend code. The code looks clean: a loop that fetches related data for each item. But every iteration triggers another database round trip, and what should be one query becomes hundreds.
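A sketch of the shape to look for. The `db` object and its method names are illustrative, not any specific ORM:

```typescript
interface Order { id: string; userId: string; total: number }

// Hypothetical data-access layer; the method names are invented.
declare const db: {
  findOrdersByUserId(userId: string): Promise<Order[]>;
  findOrdersByUserIds(userIds: string[]): Promise<Order[]>;
};

// N+1: one query per user, so 200 users means 200 round trips.
async function ordersForUsersSlow(userIds: string[]): Promise<Order[]> {
  const all: Order[] = [];
  for (const userId of userIds) {
    all.push(...(await db.findOrdersByUserId(userId))); // query in a loop
  }
  return all;
}

// Batched: one round trip, however many users there are.
async function ordersForUsersFast(userIds: string[]): Promise<Order[]> {
  return db.findOrdersByUserIds(userIds); // e.g. WHERE user_id IN (...)
}
```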
Performance bugs that don’t show up in development hit production hard. They’re invisible at small scale and catastrophic at real scale. Catching them in self-review is significantly cheaper than catching them after a customer complains.
6. The “what did I forget to test” check
“What are the 5 test cases for this code that I’m probably missing? Focus on edge cases, error states, and unusual inputs.”
You’ll be tempted to skip this prompt because you tested the happy path and the obvious failure case. The fifth test case is always the one your senior asks about in review.
Common things AI will catch here that you won’t:
Concurrent access: What happens if two requests modify the same record at once?
Resource exhaustion: What if the input is 10,000 items instead of 10?
Authorization edge cases: What if a user is logged in but doesn’t have permission for this specific resource?
Time and timezone: What if this code runs at midnight UTC, or on the day daylight saving time changes?
You don’t need to write tests for every case AI suggests. You just need to have thought about them. That’s the part seniors check.
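If you do decide to write a couple of them, here’s a sketch of the shape, using Node’s built-in test runner and an invented `applyDiscount` function:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test.
function applyDiscount(total: number, percent: number): number {
  if (!Number.isFinite(total) || total < 0) throw new RangeError("bad total");
  if (percent < 0 || percent > 100) throw new RangeError("bad percent");
  return total * (1 - percent / 100);
}

// Happy path: the test you already wrote.
test("applies a 10% discount", () => {
  assert.equal(applyDiscount(100, 10), 90);
});

// The cases the prompt surfaces -- the ones reviews ask about.
test("zero total stays zero", () => {
  assert.equal(applyDiscount(0, 10), 0);
});

test("rejects a negative total", () => {
  assert.throws(() => applyDiscount(-5, 10), RangeError);
});

test("rejects a discount over 100%", () => {
  assert.throws(() => applyDiscount(100, 150), RangeError);
});
```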
7. The “did I leave debug code” check
“Are there any console.log, print, debugger statements, commented-out code, TODO comments without context, or temporary variables I forgot to clean up?”
This is the most embarrassing item on the list, because it’s the easiest to fix.
A pull request with console.log("HEREEEE") left in is a pull request that signals you weren’t paying attention.
Nobody respects it.
The fix takes 30 seconds. Run this prompt every single time, even when you’re sure you cleaned everything up. You haven’t.
8. The naming check
“Are any of my variable, function, or class names confusing, misleading, or inconsistent? Suggest better names where applicable.”
Naming is the single thing seniors comment on most often in code reviews, because naming is the thing that affects everyone who reads the code later, including future you.
A function called processData will get a comment asking what kind of processing and what kind of data.
A function called convertCsvRowsToInvoiceRecords describes itself and gets merged.
The naming bar in professional codebases is higher than in tutorials, and matching it is one of the fastest ways to look more senior than you are.
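In code, the difference is one signature. The invoice domain here is invented for illustration:

```typescript
type InvoiceRecord = { invoiceId: string; amountCents: number };

// Before: function processData(rows: string[][]) { ... }
// After: the signature answers the review comment before it's written.
function convertCsvRowsToInvoiceRecords(rows: string[][]): InvoiceRecord[] {
  return rows.map(([invoiceId, amount]) => ({
    invoiceId,
    amountCents: Math.round(parseFloat(amount) * 100),
  }));
}
```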
9. The “if this fails in production, what happens” check
“If this code throws an unexpected error in production, what would the user see? Is it logged? Does it crash anything else? Are errors handled gracefully or do they propagate silently?”
This is the senior-engineer question. It’s the one that separates “code that works” from “code that’s safe to ship.”
You don’t need to answer it perfectly as a junior. You just need to be able to discuss it intelligently when your senior asks. Running this prompt means you’ve thought about it before they bring it up. That’s often the real signal seniors are checking for: not whether your error handling is perfect, but whether you considered it at all.
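One minimal shape a good answer can take, with stand-in types for whatever framework and logger your team actually uses:

```typescript
// Stand-ins for your real framework types and logging setup.
type Request = { params: { id: string } };
type Response = {
  status(code: number): Response;
  json(body: unknown): void;
};
declare const logger: { error(message: string, meta?: unknown): void };
declare function fetchInvoice(id: string): Promise<unknown>;

async function getInvoice(req: Request, res: Response): Promise<void> {
  try {
    res.json(await fetchInvoice(req.params.id));
  } catch (err) {
    // Logged with enough context to find it in production...
    logger.error("fetchInvoice failed", { invoiceId: req.params.id, err });
    // ...while the user gets a stable message instead of a stack trace.
    res.status(500).json({ error: "Something went wrong. Please try again." });
  }
}
```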
What this checklist won’t catch
It’s worth being honest about the limits. Three things no amount of AI self-review will catch:
Architectural drift: If your code introduces a pattern that contradicts how your team builds things, AI doesn’t know that, because AI doesn’t know your team. The fix is to read 5 recent merged PRs from your team before submitting yours. Patterns become visible.
Domain logic errors: If your code calculates a tax incorrectly because you misunderstood a business rule, AI will happily confirm that the (wrong) calculation is implemented consistently. The fix is a 2-minute conversation with whoever owns the spec, before you write the code, not after.
Your specific senior’s hobby horses: Every senior has 1 or 2 things they care about more than anything else. It might be error handling. It might be logging. It might be naming. Learn theirs in your first month and weight those things heavier than this generic list. The easiest way to find out is to ask.
What to do before your next pull request
Before you submit your next PR, run all 9 prompts.
Time how long it takes. (It will be under 10 minutes.) Note which ones caught something real.
Within a month of doing this consistently, you’ll have a calibrated mental checklist tuned to your specific weaknesses.
Some prompts will catch issues for you every time. Others will rarely fire. That’s information about your own coding patterns, and it’s how you stop being the junior whose PRs always come back with comments and start being the junior whose PRs ship clean.
Senior approval is just the second pass. The first pass should always be you: catch your own bugs first.


