The Design Review Nobody Does
The meeting that would fix half your product problems
Every crypto team has engineering code review. Pull requests get examined. Other developers check the work before it ships. This is standard practice. Nobody questions it.
Almost no crypto team has equivalent design review. Screens get built and shipped without structured critique. The designer finishes, hands off to engineering, and moves on. Maybe a founder glances at it. Maybe someone in Discord says it looks good.
This is how products accumulate design debt. This is how interfaces become inconsistent. This is how obvious problems ship to users.
The design review is the meeting nobody does. It’s also the cheapest fix for product quality that exists.
What gets skipped
Design review means structured critique of design work before it ships. Multiple eyes examining the same screens with specific questions.
Does this match our existing patterns. Does this solve the user problem we identified. Does this create new problems. Is this accessible. Does this work on mobile. Does this handle edge cases. Does this fit our brand.
Simple questions. But if nobody’s asking them systematically, nobody’s catching the issues they’d reveal.
Most teams have informal feedback. Slack threads. Quick opinions. A founder saying “looks good” or “make the logo bigger.” This isn’t review. This is reaction.
Real review is structured. It has criteria. It happens at specific points in the process. It includes people with different perspectives. It produces actionable outcomes.
The gap between informal reaction and structured review is where quality lives.
Why teams skip it
Time pressure is the obvious reason. We need to ship. We don’t have time for meetings about meetings. Design review feels like process for process’s sake.
This is short-term thinking. The time you save skipping review gets spent later fixing problems that review would have caught. Usually with interest.
Small teams are another reason. We only have one designer. Who would review their work. But design review doesn’t require designers reviewing designers. Engineers can review for technical feasibility. Product managers can review for user needs. Founders can review for brand alignment. Different perspectives catch different problems.
Ego is a quieter reason. Critique feels like criticism. Designers don’t want their work questioned. Founders don’t want to seem like they’re micromanaging. So everyone stays polite and problems ship.
The best design cultures treat review as collaboration, not judgment. The goal isn’t to prove the work is bad. The goal is to make it better before users see it.
What good review looks like
A design review meeting needs structure or it becomes a rambling opinion session.
Start with context. The designer explains the problem being solved. What user need. What business goal. What constraints existed. This frames everything that follows.
Then show the work. Walk through the design. Explain decisions. Point out tradeoffs that were made. The designer should present, not just share a link.
Then structured questions. Not “what do you think” but specific prompts. Does this follow our existing patterns. Where does it deviate and why. What happens in error states. How does this look on mobile. What’s the loading state. What if the user has no data yet.
These questions should be consistent across reviews. A checklist that becomes habit.
Then open discussion. This is where broader feedback happens. But it comes after the structured questions, not instead of them.
Then decisions. What changes are needed. What’s approved to ship. What needs another round. Clear outcomes, not vague agreement to iterate.
Thirty minutes. Maybe an hour for complex work. That’s it.
Who should be in the room
The designer who did the work. Obviously.
Another designer if you have one. Peer review catches things self-review misses.
An engineer who’ll build it. They see feasibility issues designers miss. They catch animations that are expensive. They know what’s hard.
Someone representing users. Product manager, support lead, someone who talks to users. They catch assumptions designers make about user knowledge.
Optional: founder or design lead for brand alignment. But this person should observe more than direct. Their presence can shut down honest critique if they dominate.
Not everyone speaks on everything. Different people catch different things. The engineer might not have opinions on color. The PM might not have opinions on spacing. But having them present creates coverage.
The checklist
Every review should hit certain questions. Write them down. Use them every time.
Consistency. Does this match existing patterns in the product. If it deviates, is that intentional and justified.
Hierarchy. Is it clear what’s most important on each screen. Where does the eye go first. Is that right.
Copy. Does the text make sense. Is it consistent with our voice. Are labels clear.
States. What’s the empty state. Loading state. Error state. Success state. Edge cases that aren’t the happy path.
Accessibility. Is contrast sufficient. Are touch targets big enough. Does it work without color as the only indicator.
Responsive. Does this work on mobile. Tablet. Small laptop. Large monitor. Where does it break.
Technical. Is this buildable in reasonable time. Are there animations or interactions that are expensive. Does engineering see problems.
Brand. Does this feel like our product. Would users recognize this as us.
Run through these every time. It becomes automatic. Problems get caught before they ship.
The cost of skipping
Design debt compounds like code debt.
An inconsistent button style ships. Then another screen references it. Then another. Now changing it means changing twelve screens. So it stays.
An edge case gets missed. Users hit it. Support tickets pile up. Engineering scrambles to fix. The fix is rushed so it creates its own issues.
A pattern gets established that doesn’t scale. It works for version one. By version three it’s breaking and redesigning it means retraining users.
Every shortcut taken in review becomes interest paid later. Usually by people who weren’t in the room when the shortcut happened.
The thirty minutes you didn’t spend in review becomes the three days you spend fixing what shipped.
Starting from zero
If your team has no design review practice, start simple.
Pick one project. Before it ships, schedule thirty minutes. Invite the designer, an engineer, and someone who represents users.
Use the checklist. Go through each question. Note what you find.
Ship whatever you ship. But track whether review caught things that would have been problems.
Do it again next project. And the next. Build the habit before building the process.
You’ll find issues. You always find issues. The question is whether you find them in a meeting room or in production.
The culture shift
Design review only works if critique is safe.
If designers get defensive, feedback stops being honest. If founders override everything, review becomes performance. If engineers just say “that’s hard” without engaging, collaboration dies.
The culture has to treat review as making work better, not proving it wrong. The designer’s job isn’t to defend. Everyone’s job is to improve.
This is hard. It requires trust. It requires people who can separate their ego from their work. It requires leadership that models receiving feedback well.
But it’s the only way design quality scales. Individual talent gets you started. Process is how you maintain quality as you grow.
The design review nobody does is the most boring, obvious improvement available. No new tools. No new hires. Just a meeting with structure and the discipline to do it consistently.
Most teams won’t. That’s why most products have obvious problems that nobody caught.
Yours doesn’t have to be one of them.
Thank you :)
If your project needs design, brand, product, strategy, and leadership, let’s talk: hi@dragoon [dot] xyz | Follow: 0xDragoon