
From the Trenches — Lessons That Took Me to 6x Top 3 Finishes in Competitive Audits

Zero fluff, pure trenches-tested insights from climbing to the top 50s on Cantina. This is the alpha that actually moved the needle — not theory, not vibes, just what works when you're deep in the code at 3 AM and the leaderboard is watching.

✍ 0xTheBlackPanther 📅 Feb 12, 2026

Why I'm Writing This

I've been in the competitive audit space for a while now. I've had findings confirmed, disputed, rejected, flipped back to confirmed, and everything in between. I've submitted too late and watched my work get invalidated by live fixes. I've underestimated "low" findings that turned into mediums during judging. I've made every mistake in the book — and I've taken notes on all of them.

After hitting 6x top 3 finishes on Cantina and climbing into the top 50s, I wanted to distill everything that actually mattered into one place. Not the stuff you read in every "how to get started in auditing" thread — the stuff that separates top 10 finishes from mid-pack results.

Here's what I've learned.

1. Read the Code for Understanding — Everything Else Is Secondary

This one sounds obvious until you watch how most people actually audit. They fire up Slither, grep for known patterns, skim the docs, and start writing findings within an hour. That's not auditing. That's pattern-matching with extra steps.

The approach that consistently works for me? Live in the codebase. Don't just scan for patterns — trace execution flows, understand the state machine, map out how every external call interacts with internal accounting. If the protocol has a live dApp, use it. Click through every user flow. Deposit, withdraw, claim, stake, unstake. The edge cases that devs miss usually live in the flows they tested least.

The bugs reveal themselves when you understand the code and the flow — not when you're hunting keywords and patterns. Understanding comes first. Findings follow.

2. Test Suites Are Treasure Maps

This is genuinely underrated alpha. Most security researchers skip the /test folder entirely or give it a cursory glance. That's a mistake.

Think about it: developers write tests for the scenarios they're worried about. The test suite is a window into their threat model. But the real gold isn't in what they tested — it's in what they didn't test.

What to look for in /test folders:
→ Scenarios they tested heavily = they know this is fragile
→ Missing edge cases = unexplored attack surface
→ Commented-out tests = something broke and they moved on
→ Hardcoded values in tests = assumptions worth challenging
→ No tests for a specific function = nobody validated it works correctly

The gaps and edge cases are literally mapped out for you. Spend serious time here. Most hunters skip this goldmine, and that's exactly why it's a goldmine.
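To make this concrete, here's a rough shell sketch of how I triage a test folder. Everything in it is hypothetical — the `demo/` layout, `Vault.sol`, and the function names are stand-ins for whatever the target repo actually uses — and the greps assume Foundry-style Solidity tests, so adjust the patterns for other frameworks:

```shell
# Hypothetical fixture standing in for a real target repo (src/ + Foundry-style test/).
mkdir -p demo/src demo/test
cat > demo/src/Vault.sol <<'EOF'
contract Vault {
    function deposit() external {}
    function withdraw() external {}
    function sweepFees() external {}
}
EOF
cat > demo/test/Vault.t.sol <<'EOF'
contract VaultTest {
    function testDeposit() public {}
    // function testWithdraw() public {}  // commented out -- something broke?
}
EOF

# Commented-out tests = something broke and they moved on:
grep -rn "// *function test" demo/test/

# Public/external functions never mentioned anywhere in test/ = nobody validated them:
for fn in $(grep -rhoE "function [A-Za-z0-9_]+" demo/src/ | awk '{print $2}' | sort -u); do
    grep -riq "$fn" demo/test/ || echo "untested: $fn"
done   # prints: untested: sweepFees
```

Crude greps, but they turn "spend serious time here" into a five-minute first pass that hands you a shortlist of unvalidated functions before you even start reading.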

3. Git History Is Pure Alpha

Developers leak everything in their version control history. Fix commits tell you what broke before. PR discussions reveal what they were concerned about. Issue threads surface the problems that came up during development.

They're basically documenting their weak spots — and it's all public.

Fix commits — What broke before? Is the fix complete? Did it introduce new issues?
PR discussions — What edge cases did reviewers flag? Were they all addressed?
Issue threads — What problems surfaced in development or earlier audits?
Reverted commits — What did they try and abandon? Why?

Read everything in there. Devs document their blind spots without realizing it.
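A handful of git queries covers most of this. Sketch below — the throwaway repo and its commit messages are invented purely for the demo; on a real target you'd run the same queries (plus pickaxe searches like `git log -S <functionName>`) against the actual history:

```shell
# Throwaway repo with an invented history, just to show the queries.
mkdir -p demo-repo && cd demo-repo
git init -q
git config user.email "demo@example.com" && git config user.name "demo"
git commit -q --allow-empty -m "feat: add reward accounting"
git commit -q --allow-empty -m "fix: reward rounding underflow"      # what broke before?
git commit -q --allow-empty -m 'Revert "feat: batch withdrawals"'    # what did they abandon?

# Fix commits: a map of past fragility. Is each fix complete?
git log --oneline -i --grep="fix"

# Reverted commits: abandoned approaches worth asking "why?" about.
git log --oneline --grep="Revert"

# On a real repo: which commits ever touched a given function? (pickaxe)
# git log --oneline -S "sweepFees"
```

Then read the diffs of whatever these surface with `git show <hash>` — the commit message tells you where to dig, the diff tells you whether the fix actually holds.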

4. Never Trust Sponsor Comments Blindly

In contests, sponsors often respond to questions with things like "This is safe because..." or "We handle that in X." It's natural to take their word for it. Don't.

Sponsors know their intent, not always their implementation reality. There's often a gap between what the code was designed to do and what it actually does. Your job is to prove assumptions with facts, not take them at face value.

Always cross-check sponsor responses against the actual code. Their replies describe the system they think they built — the code shows you the system they actually built.

5. Complex PoCs Don't Have to Stop You

Stuck on complex integrations in Hardhat or Foundry? Multi-contract interactions making your head spin? Don't give up. This is where most hunters quit — and that's exactly where you can shine.

AI tools can help you navigate complex test setups and multi-contract interactions. Use them to scaffold your PoC environment, understand unfamiliar testing frameworks, or debug weird compilation errors. The goal isn't to have the AI find bugs for you — it's to remove the friction that keeps you from proving the bugs you already found.

The hardest PoCs to write are often the most valuable findings. If it were easy to prove, someone else would have already submitted it. Push through the complexity — that's where the alpha lives.

6. Patience + Escalation Mastery = Top Ranks

This one doesn't get talked about enough. Competitive auditing isn't just about finding bugs — it's about defending them through the judging process. I've had findings disputed, confirmed, rejected, and flipped multiple times in a single contest. It's soul-draining but unavoidable.

You have to learn the game of escalations. Stand your ground with solid evidence. Never reply with emotions — analyze critically, respond with proof, not feelings.

The escalation formula:
1. State the issue clearly and concisely
2. Reference the exact code paths involved
3. Provide concrete proof (PoC, traces, calculations)
4. Address counter-arguments directly with evidence
5. Stay professional — let the logic speak

Patience + persistence + clear proof = eventual recognition. Every time.

7. PoC Everything — Even Lows and Infos

Severity can shift during judging based on impact assessment. That "low" or "info" you almost didn't bother proving? It might become a medium once the judges evaluate the full impact chain.

And here's the thing — if a finding gets downgraded unfairly, you need concrete proof for your escalation. If you skipped the PoC because it was "just a low," you've got nothing to fight with.

Cover all bases from day one. A PoC takes extra time upfront but can save (or earn) you significant payouts during judging. The ROI on thorough proof-of-concepts is almost always worth it.

8. Live Fixes Changed the Game

This one hit me hard. I learned it during the Velvet contest, and it fundamentally changed my submission strategy.

// OLD STRATEGY
Submit late in the contest to avoid giving away attack vectors to other hunters. Stealth mode. Hold your cards close.

// NEW REALITY
Submit High/Medium findings EARLY if live fixes are happening during the contest. Speed beats stealth when fixes can invalidate your work.

If a protocol is applying fixes mid-contest, your carefully held finding could become a duplicate of a fix — or worse, entirely invalidated — because someone else submitted first and the team patched it. Speed over stealth when live fixes are in play. No exceptions.

9. Go Beyond Pattern Matching

Checklists and known vulnerability patterns get you started. They're the training wheels. But the unique, high-severity findings — the ones that land you in the top 3 — come from understanding the business logic deeply.

Everyone is running the same automated checks. Everyone has the same Solidity vulnerability checklists bookmarked. Your edge is manual analysis where others won't go. Reading every function, tracing every state transition, understanding why the protocol made specific design decisions and where those decisions break down.

The "just read the code" formula isn't gatekeeping — it's the actual secret. The unglamorous truth is that the best findings come from the deepest understanding, and there's no shortcut to that.

10. Never Celebrate Too Early

Until the final payout hits your wallet, everything is fluid. Severities change. Duplicates emerge. Escalations flip decisions. I've seen "confirmed Highs" get downgraded to nothing, and I've seen disputed findings get upgraded after a well-argued escalation.

Don't build castles in the air after seeing a few confirmed findings. Stay focused until the actual results drop. The leaderboard is a snapshot, not a guarantee.

11. Context Loss Is the Silent Killer

Here's one that nobody talks about but everyone suffers from: escalations can start weeks or even months after you've moved on to other contests. You're deep in a completely different protocol when suddenly you need to defend a finding from three contests ago with surgical precision.

The protocol details fade, but you still need to argue with the same clarity you had on day one. This is brutal context switching, and it will cost you findings if you're not prepared.

The solution: Document like your future self depends on it.
→ Detailed notes on every finding — not just "what" but "why" and "how"
→ Code comments marking critical paths you traced
→ Screenshots of relevant state transitions
→ Save your local setup so you can re-run PoCs months later
→ Write escalation-ready summaries while the context is fresh

Your future self, three contests deep into a completely different protocol, will thank you when a judging dispute lands in your inbox at midnight.

The TL;DR

  1. Read for understanding, not for patterns. Live in the codebase. Use the dApp. The bugs find you when you understand the system deeply.
  2. Mine the test suite and git history. Developers document their blind spots without realizing it. Read everything.
  3. Verify sponsor claims against actual code. Intent and implementation are different things.
  4. Push through complex PoCs. Where others quit is where you differentiate. Use AI tools to remove friction.
  5. Master the escalation game. Patience, proof, and professionalism win disputes. Never respond with emotions.
  6. PoC everything. Severities shift. Cover your bases from day one.
  7. Adapt your timing to contest rules. Speed over stealth when live fixes are in play.
  8. Go deeper than checklists. Business logic bugs are your edge over the automation crowd.
  9. Stay grounded until payout. Confirmed doesn't mean final.
  10. Document obsessively. Context loss kills more findings than bad analysis.

The leaderboard rewards the ones who stay in the trenches longest — not the ones who run the fanciest tools. See you in the next contest. 🔥

Platform: Cantina
Rank: Top 50s
Top 3 Finishes: 6x
Key Contest: Velvet (the one that taught me the most)
Follow: @thepantherplus