Zero fluff, pure trenches-tested insights from climbing to the top 50s on Cantina. This is the alpha that actually moved the needle — not theory, not vibes, just what works when you're deep in the code at 3 AM and the leaderboard is watching.
I've been in the competitive audit space for a while now. I've had findings confirmed, disputed, rejected, flipped back to confirmed, and everything in between. I've submitted too late and watched my work get invalidated by live fixes. I've underestimated "low" findings that turned into mediums during judging. I've made every mistake in the book — and I've taken notes on all of them.
After hitting 6x top 3 finishes on Cantina and climbing into the top 50s, I wanted to distill everything that actually mattered into one place. Not the stuff you read in every "how to get started in auditing" thread — the stuff that separates top 10 finishes from mid-pack results.
Here's what I've learned.
This one sounds obvious until you watch how most people actually audit. They fire up Slither, grep for known patterns, skim the docs, and start writing findings within an hour. That's not auditing. That's pattern-matching with extra steps.
The approach that consistently works for me? Live in the codebase. Don't just scan for patterns — trace execution flows, understand the state machine, map out how every external call interacts with internal accounting. If the protocol has a live dApp, use it. Click through every user flow. Deposit, withdraw, claim, stake, unstake. The edge cases that devs miss usually live in the flows they tested least.
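Mapping every external call site is tedious by hand, so I usually start with a quick script. Here's a minimal sketch of that idea, assuming a regex heuristic over Solidity source (the function names and patterns are my own, not a standard tool; treat the output as a starting list to trace manually, not a parser-grade result):

```python
import re

# Heuristic patterns for Solidity external-call sites. Regex over source is
# rough by design: the point is a to-do list of calls to trace, not a proof.
CALL_PATTERNS = [
    r"\.call\s*[({]",          # low-level call, with or without {value: ...}
    r"\.delegatecall\s*\(",
    r"\.staticcall\s*\(",
    r"\.transfer\s*\(",
    r"\.safeTransfer\w*\s*\(", # SafeERC20 helpers
]

def external_call_sites(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like external calls."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if stripped.startswith("//"):
            continue  # skip comments
        if any(re.search(p, stripped) for p in CALL_PATTERNS):
            hits.append((lineno, stripped))
    return hits

sample = """\
function withdraw(uint256 amount) external {
    balances[msg.sender] -= amount;
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok, "transfer failed");
}"""

for lineno, line in external_call_sites(sample):
    print(lineno, line)
```

For each hit, ask: what internal accounting was updated before this call, and what could a reentrant or reverting callee do about it? That question alone has paid for itself many times.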
The bugs reveal themselves when you understand the code and the flow — not when you're hunting keywords and patterns. Understanding comes first. Findings follow.
This is genuinely underrated alpha. Most security researchers skip the /test folder entirely or give it a cursory glance. That's a mistake.
Think about it: developers write tests for the scenarios they're worried about. The test suite is a window into their threat model. But the real gold isn't in what they tested — it's in what they didn't test.
The gaps and edge cases are literally mapped out for you. Spend serious time here. Most hunters skip this goldmine, and that's exactly why it's a goldmine.
Developers leak everything in their version control history. Fix commits tell you what broke before. PR discussions reveal what they were concerned about. Issue threads surface the problems that came up during development.
They're documenting their weak spots without realizing it, and it's all public. Read everything in there.
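A quick triage pass over the log helps before reading commit by commit. A minimal sketch of the idea, assuming you pipe in `git log --oneline` output (the keyword list and helper name are my own; tune them per repo):

```python
import re

# Commit messages that hint at a past problem are worth re-checking:
# was the fix complete, and does the same bug class exist elsewhere?
RISK_WORDS = re.compile(
    r"\b(fix|bug|revert|vuln|exploit|overflow|reentran\w*|audit)\b",
    re.IGNORECASE,
)

def risky_commits(log_lines: list[str]) -> list[str]:
    """Filter oneline log entries down to security-relevant messages."""
    return [line for line in log_lines if RISK_WORDS.search(line)]

# Made-up example log; in practice, feed in real `git log --oneline` output.
log = [
    "a1b2c3d add staking rewards",
    "d4e5f6a fix rounding bug in withdraw accounting",
    "b7c8d9e update README",
    "e0f1a2b revert fee change after audit feedback",
]

for line in risky_commits(log):
    print(line)
```

A commit like "fix rounding bug in withdraw accounting" is an invitation: check whether the same rounding issue survives in deposit, claim, or fee paths the fix didn't touch.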
In contests, sponsors often respond to questions with things like "This is safe because..." or "We handle that in X." It's natural to take their word for it. Don't.
Sponsors know their intent, not always their implementation reality. There's often a gap between what the code was designed to do and what it actually does. Your job is to prove assumptions with facts, not take them at face value.
Always cross-check sponsor responses against the actual code. Their replies describe the system they think they built — the code shows you the system they actually built.
Stuck on complex integrations in Hardhat or Foundry? Multi-contract interactions making your head spin? Don't give up. This is where most hunters quit, and that's exactly where you can shine.
AI tools can help you navigate complex test setups and multi-contract interactions. Use them to scaffold your PoC environment, understand unfamiliar testing frameworks, or debug weird compilation errors. The goal isn't to have the AI find bugs for you — it's to remove the friction that keeps you from proving the bugs you already found.
The hardest PoCs to write are often the most valuable findings. If it were easy to prove, someone else would have already submitted it. Push through the complexity — that's where the alpha lives.
This one doesn't get talked about enough. Competitive auditing isn't just about finding bugs — it's about defending them through the judging process. I've had findings disputed, confirmed, rejected, and flipped multiple times in a single contest. It's soul-draining but unavoidable.
You have to learn the game of escalations. Stand your ground with solid evidence. Never reply with emotions — analyze critically, respond with proof, not feelings.
Patience + persistence + clear proof = eventual recognition. Every time.
Severity can shift during judging based on impact assessment. That "low" or "info" you almost didn't bother proving? It might become a medium once the judges evaluate the full impact chain.
And here's the thing — if a finding gets downgraded unfairly, you need concrete proof for your escalation. If you skipped the PoC because it was "just a low," you've got nothing to fight with.
Cover all bases from day one. A PoC takes extra time upfront but can save (or earn) you significant payouts during judging. The ROI on thorough proof-of-concepts is almost always worth it.
This one I learned the hard way, during the Velvet contest, and it fundamentally changed my submission strategy.
If a protocol is applying fixes mid-contest, your carefully held finding could become a duplicate of a fix — or worse, entirely invalidated — because someone else submitted first and the team patched it. Speed over stealth when live fixes are in play. No exceptions.
Checklists and known vulnerability patterns get you started. They're the training wheels. But the unique, high-severity findings — the ones that land you in the top 3 — come from understanding the business logic deeply.
Everyone is running the same automated checks. Everyone has the same Solidity vulnerability checklists bookmarked. Your edge is manual analysis where others won't go. Reading every function, tracing every state transition, understanding why the protocol made specific design decisions and where those decisions break down.
The "just read the code" formula isn't gatekeeping — it's the actual secret. The unglamorous truth is that the best findings come from the deepest understanding, and there's no shortcut to that.
Until the final payout hits your wallet, everything is fluid. Severities change. Duplicates emerge. Escalations flip decisions. I've seen "confirmed Highs" get downgraded to nothing, and I've seen disputed findings get upgraded after a well-argued escalation.
Don't build castles in the air after seeing a few confirmed findings. Stay focused until the actual results drop. The leaderboard is a snapshot, not a guarantee.
Here's one that nobody talks about but everyone suffers from: escalations can start weeks or even months after you've moved on to other contests. You're deep in a completely different protocol when suddenly you need to defend a finding from three contests ago with surgical precision.
The protocol details fade, but you still need to argue with the same clarity you had on day one. This is brutal context switching, and it will cost you findings if you're not prepared.
So keep per-contest notes: the PoC, the affected code paths, and your severity reasoning. Your future self, three contests deep into a completely different protocol, will thank you when a judging dispute lands in your inbox at midnight.
The leaderboard rewards the ones who stay in the trenches longest — not the ones who run the fanciest tools. See you in the next contest. 🔥