Designing for the Hack
Nobody wants to think about this. But every protocol should.
You’ve seen the pattern. Protocol gets exploited. Twitter goes silent for six hours. When they finally post, it’s a screenshot of a Notes app with “we’re investigating an incident.” The website still shows TVL numbers from before the drain. Users are panicking in Discord while mods say “please wait for official communication.”
This is a brand crisis made worse by zero preparation.
The design decisions you make in the first 72 hours after a hack determine whether your protocol recovers or dies. And almost no one has a playbook.
The Interface Moment
When Wormhole lost $320 million in February 2022, users who visited the bridge saw... the normal interface. For hours. People were still trying to use it while the exploit was ongoing.
Think about how absurd that is. The largest bridge hack in DeFi history is unfolding, and the product is still showing “Bridge Your Assets” like nothing’s wrong.
Compare that to how a bank handles a security incident. The moment something’s wrong, you see it. Clear messaging. Disabled functions. A number to call. The entire experience shifts to acknowledge reality.
Crypto has none of this infrastructure. Most protocols have exactly two states: fully operational and completely offline. There’s nothing in between. No designed experience for “something is wrong but we’re handling it.”
Here’s what your interface should communicate instantly when something goes wrong (there’s a rough sketch in code after the list):
Status clarity. Is the protocol paused? Partially functional? Which functions are affected? Users shouldn’t have to check Twitter to understand if they can use your product. The interface itself should be the source of truth.
Asset safety. If funds are at risk, say it directly. If user funds in the protocol are safe but the exploit affected something else, say that too. The vacuum of information is where panic breeds. People assume the worst when you say nothing.
Action guidance. Should users do anything? Revoke approvals? Move funds from related protocols? Wait and do nothing? Tell them specifically what to do and what not to do. Vague warnings without actionable advice just increase anxiety.
Timeline expectations. When’s the next update coming? “We’ll share more information within 2 hours” is infinitely better than silence. Even if you have nothing new to say, saying “no new information yet, next update in 1 hour” maintains trust.
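One way to make those four requirements concrete is to drive the whole interface from a single incident object. Here’s a minimal sketch in TypeScript; every name in it is a hypothetical illustration, not taken from any real protocol’s codebase:

```typescript
// Hypothetical incident-state model: each field maps to one of the four
// things above. All names are illustrative assumptions.
type FunctionStatus = "operational" | "degraded" | "paused";

interface IncidentState {
  active: boolean;
  headline: string;                          // "Deposits paused while we investigate"
  functions: Record<string, FunctionStatus>; // per-feature status, not one global flag
  fundsAtRisk: "yes" | "no" | "unknown";     // say it directly, even when the honest answer is "unknown"
  userActions: string[];                     // specific do's and don'ts, not vague warnings
  nextUpdateAt: string;                      // ISO timestamp: commit to a time even with nothing new to say
}

// Example of what the object might look like mid-incident.
const example: IncidentState = {
  active: true,
  headline: "Deposits paused while we investigate an exploit",
  functions: { deposits: "paused", withdrawals: "operational", governance: "operational" },
  fundsAtRisk: "unknown",
  userActions: ["Do not deposit", "Existing positions can be withdrawn normally"],
  nextUpdateAt: "2023-03-13T14:00:00Z",
};
```

When `active` is true, the banner, the status page, and the disabled buttons all render from the same object, so Twitter never knows more than your own product does.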
Pre-Building Crisis Components
You should have crisis UI components sitting in your design system right now, ready to deploy. This isn’t paranoid. It’s professional.
Most protocols spend six figures on security audits. They have detailed incident response plans for the technical side. Runbooks for the engineering team. War room procedures.
And then for the user-facing side? Nothing. The thing that actually determines whether users ever trust you again? Complete improvisation.
Here’s what to build before you need it (a sketch of the plumbing follows the list):
A real maintenance mode. Not a blank page. Not a generic “down for maintenance” message. A designed experience that can be customized quickly with specific incident information. Different severity levels with different visual treatments.
Alert banner system. You need banners that can communicate warnings, active incidents, and all-clear status. These should be part of your component library, tested, and deployable in minutes.
A status page that’s actually useful. Not just green checkmarks and red X’s. Something that can communicate nuance. “Deposits paused, withdrawals functioning, governance unaffected.” Users need granular understanding of what’s working.
Pre-written copy. Templated language for different scenarios. Obviously you’ll customize it, but having a starting point means faster response and fewer panicked typos. Write the copy for scenarios you hope never happen.
A crisis homepage variant. Your normal marketing homepage is wrong during a crisis. Showing “Earn 12% APY” while funds are being drained is tone-deaf at best. You need a homepage that can flip to incident-focused messaging instantly.
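As a rough illustration of how those pieces might hang together, here’s a hedged TypeScript sketch. The severity tiers, colors, and copy templates are all assumptions made for the example, not a prescription:

```typescript
// Hypothetical severity tiers for the banner system described above.
// Visual treatments are illustrative; use your own design tokens.
type Severity = "notice" | "warning" | "incident" | "resolved";

const BANNER_STYLES: Record<Severity, { background: string; icon: string }> = {
  notice:   { background: "#2563eb", icon: "info" },
  warning:  { background: "#d97706", icon: "alert-triangle" },
  incident: { background: "#dc2626", icon: "alert-octagon" },
  resolved: { background: "#16a34a", icon: "check-circle" },
};

// Pre-written copy: fill blanks under pressure instead of drafting from
// scratch. Visible {placeholders} make a half-edited message obvious.
const COPY_TEMPLATES: Record<Severity, string> = {
  notice:   "Scheduled maintenance: {function} unavailable from {start} to {end}.",
  warning:  "We are investigating unusual activity affecting {function}. {action}",
  incident: "Active incident: {function} is paused. Funds at risk: {risk}. Next update by {time}.",
  resolved: "The incident affecting {function} is resolved. Full report: {link}.",
};

// Flipping the homepage into crisis mode should be one config change
// (a feature flag or remotely fetched JSON), not an emergency deploy.
interface CrisisConfig {
  severity: Severity;
  message: string;        // rendered from a COPY_TEMPLATES entry
  statusPageUrl: string;
}
```

The exact shape doesn’t matter. What matters is that deciding what to say is separated from the mechanics of shipping it, so the mechanics take minutes instead of hours.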
The 72-Hour Brand Window
Euler Finance lost $197 million in a flash loan attack in March 2023. Within hours, they had a dedicated incident page. Regular updates with timestamps. Transparent communication as they negotiated with the attacker. Clear documentation of what happened and why.
They recovered most of the funds. More importantly for this conversation, their brand survived. The protocol is still building. Users came back.
The protocols that go dark, that fumble communication, that make users feel abandoned while the team figures things out internally? They don’t come back. The brand damage exceeds the financial damage.
After a hack, you have about three days to establish the narrative. Either you’re the team that responded professionally, communicated clearly, and took care of users. Or you’re the team that disappeared while everyone panicked.
This window matters because first impressions of crisis response are sticky. The crypto community has long memories. “Remember when [Protocol X] went dark for 18 hours during their exploit?” becomes part of your permanent reputation.
Visual Consistency as Signal
Here’s something subtle that matters more than you’d think: if your crisis communications are rushed screenshots, mismatched fonts, and obviously improvised graphics, you’re sending a message beyond the content itself.
You’re telling users the whole operation is falling apart.
Even in crisis, design quality signals competence. The protocol that posts an incident report with clear typography, careful formatting, and professional presentation is telling users: we have our shit together. We’re handling this.
The protocol that posts a blurry screenshot of a Notion doc is telling users: we were not prepared for this.
This doesn’t mean spending hours on design while users wait for information. It means having templates ready. Brand-consistent formats for incident reports. A status page that looks intentional.
The Post-Mortem as Brand Asset
After the immediate crisis, the post-mortem becomes one of the most important brand documents you’ll ever publish.
Wormhole’s post-incident analysis was detailed, technical, and honest about what went wrong. It explained the vulnerability, the attacker’s method, and the path forward. It turned a disaster into a demonstration of transparency and technical competence.
A good post-mortem should be:
Fast but thorough. Days, not weeks. You’re racing against speculation and misinformation.
Technically credible. The DeFi community will scrutinize this. Superficial explanations damage trust further.
Visually considered. Format it properly. Use diagrams where helpful. Make it readable for both technical and non-technical audiences.
Honest about failures. Attempting to minimize or deflect blame always backfires. The community respects teams that own their mistakes.
The Recovery Playbook
Some protocols have come back from massive hacks. Wormhole continued operating after Jump Crypto covered the loss. Euler negotiated the return of most stolen funds over several weeks of public, transparent communication with the attacker.
The ones that survive share a pattern. They treat the hack as a brand moment, not just a security incident. They over-communicate rather than under-communicate. They show their work publicly. They design the recovery experience with the same care they designed the original product.
The protocols that don’t survive often had recoverable situations. The funds weren’t always the fatal blow. The brand collapse was.
Most teams spend months designing their launch experience. The onboarding flow. The first-time user journey.
Almost none spend any time designing the worst-case experience. The crisis flow. The user journey when everything goes wrong.
The hack you’re not planning for will test your brand more than any marketing campaign ever could.
Maybe worth thinking about before you need to.
Thank you :)
If your project needs design, brand, product, strategy, and leadership, let’s talk. Work with me: hi@dragoon [dot] xyz | Follow: 0xDragoon