3,000+ lines of vulnerability patterns for Sui & Aptos, plugged directly into your AI workflow.
2 weeks ago I open-sourced my generic audit prompts and workflow. Those were chain-agnostic: they worked for Solidity, Rust, Move, whatever. The feedback was great, but one thing kept nagging me: generic prompts produce generic results.
When I started doing more Move audits, both Sui and Aptos, I realized the attack surface is fundamentally different from EVM. Object ownership models, capability patterns, hot potatoes, witness abuse, PTB reentrancy, acquires annotations, resource account escalation. None of this exists in Solidity. My generic prompts weren't catching these.
So I built something specific.
It's a Claude Code skill, not a standalone tool, not a CLI binary. You install it once, and it turns Claude Code into a Move-specialized security auditor. It auto-activates when you open .move files.
That's it. No config files, no API keys, no setup beyond a cp.
My previous repo was a collection of prompts you copy-paste into your workflow. It worked. But there were problems:
A Claude Code skill solves all three. It's a structured, multi-phase audit workflow that loads chain-specific patterns on demand and produces a consistent report format every time.
The skill doesn't just dump a checklist. It runs a structured 5-phase process:
Phase 2 is the one I think matters most. Asking the AI to think from three perspectives (attacker, designer, integrator) catches things a single-pass review misses. An attacker thinks about what they can call. A designer thinks about what invariants should hold. An integrator thinks about what happens when this module composes with others.
The skill ships with 3,000+ lines of security checks across four reference files. Here's the breakdown:
These apply to all Move code: Sui, Aptos, or any future Move chain:
Sui's object-centric model creates an attack surface that doesn't exist anywhere else:
Object ownership confusion (SUI-01): a shared object treated as owned, or vice versa. Breaks access control silently.
Witness pattern abuse (SUI-03): a one-time witness with the copy ability. Supposed to be used once, now usable forever.
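In Sui terms, the difference is a single ability on the witness struct. A minimal sketch, with hypothetical module names (not taken from the repo):

```move
// VULNERABLE: `copy` lets any holder duplicate the witness, so the
// "one-time" proof can be replayed indefinitely.
module example::registry {
    struct REGISTRY has copy, drop {}
}

// SAFE: drop-only. Sui's one-time-witness rules then guarantee the
// runtime creates exactly one instance, at module publish.
module example::registry_fixed {
    struct REGISTRY_FIXED has drop {}
}
```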
PTB state inconsistency (SUI-02): Programmable Transaction Blocks let you compose calls in a single transaction. Shared objects can be mutated between calls within the same PTB.
Dynamic field injection (SUI-06): attacker adds unexpected fields to objects they interact with.
Hot potato misuse (SUI-09): the hot potato pattern enforces "must be consumed in same transaction." If the consumption check is weak, the enforced flow is bypassable.
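The pattern hinges on the potato struct having no abilities at all, which forces the caller back through a consumption function. A hypothetical flash-loan sketch (simplified to bare amounts) of the weak check versus the fix:

```move
module example::flash_loan {
    // No key/store/copy/drop: the Receipt cannot be stored or dropped,
    // so the transaction cannot end until a function destructures it.
    struct Receipt { amount: u64 }

    // VULNERABLE consumption: destructures the potato but never checks
    // the repayment, so the enforced flow is trivially bypassed.
    public fun repay_weak(receipt: Receipt, repaid: u64) {
        let Receipt { amount: _ } = receipt;
        let _ = repaid;
    }

    // FIXED: consuming the potato and enforcing the invariant are the
    // same step, so there is no path around the check.
    public fun repay(receipt: Receipt, repaid: u64) {
        let Receipt { amount } = receipt;
        assert!(repaid >= amount, 0);
    }
}
```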
Aptos's global storage model has its own class of bugs:
Missing acquires (APT-01): forgetting the annotation means the function can't actually access the resource. Sounds harmless until it silently skips a critical check.
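For reference, the annotation sits on the function signature, and any `borrow_global` on a resource type must be covered by it. A sketch with hypothetical names:

```move
module example::vault {
    struct Config has key { paused: bool }

    // `acquires Config` declares that this function touches the Config
    // resource in global storage. Without it, the borrow_global call
    // below does not compile, and a hasty "fix" that deletes the borrow
    // instead of adding the annotation silently drops the pause check.
    public fun assert_not_paused(owner: address) acquires Config {
        assert!(!borrow_global<Config>(owner).paused, 1);
    }
}
```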
Resource account escalation (APT-02): the signer capability for a resource account gets leaked or stored with copy.
Coin type confusion (APT-03): generic type parameters let attackers pass a worthless coin type where a valuable one is expected.
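A sketch of the shape this takes on Aptos; the pool module is hypothetical, the framework types are real:

```move
module example::pool {
    use aptos_framework::aptos_coin::AptosCoin;
    use aptos_framework::coin::Coin;

    // VULNERABLE: any CoinType satisfies this signature, including an
    // attacker's worthless token, yet the pool credits it as if it
    // were the real asset.
    public fun deposit<CoinType>(payment: Coin<CoinType>) {
        // ... credit coin::value(&payment), store payment ...
        abort 0
    }

    // SAFE: pin the expected asset in the signature (or assert the
    // concrete type via type_info) so only the real asset is credited.
    public fun deposit_apt(payment: Coin<AptosCoin>) {
        // ... credit and store as above ...
        abort 0
    }
}
```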
ConstructorRef leaks (APT-17): if a ConstructorRef is stored or returned, anyone who obtains it can mint unlimited tokens.
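On the object side, the safe shape is to derive what you need inside the creating function and let the ref die in scope. The framework calls below are from aptos_framework's object module; the surrounding module is hypothetical:

```move
module example::minter {
    use aptos_framework::object::{Self, ConstructorRef};

    // VULNERABLE: handing the ConstructorRef to the caller lets them
    // derive TransferRef, ExtendRef, and friends for this object at
    // any later time.
    public fun create_bad(creator: &signer): ConstructorRef {
        object::create_object_from_account(creator)
    }

    // SAFE: use the ref inside this function and return only the
    // address. ConstructorRef has `drop` and cannot be stored, so it
    // dies at the end of this scope.
    public fun create_good(creator: &signer): address {
        let ctor = object::create_object_from_account(creator);
        object::address_from_constructor_ref(&ctor)
    }
}
```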
Move 2.2 reentrancy (APT-21): new dispatch patterns in Move 2.2 reintroduce reentrancy vectors that didn't exist before.
For any Move protocol touching tokens, swaps, lending, or oracles:
This is a rule I set for the repo: no check gets added without a concrete code example showing both the vulnerable pattern and the fix. Abstract descriptions are useless when you're reviewing actual code.
Every check follows this pattern. You see the bug, you see the fix. No ambiguity.
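To give a flavor of the format, here is an illustrative pair (not lifted from the repo) for a missing capability check on a shared Sui object:

```move
// VULNERABLE: the shared Config object is mutable by anyone; nothing
// gates the call.
public fun set_fee(config: &mut Config, fee_bps: u64) {
    config.fee_bps = fee_bps;
}

// FIXED: require an AdminCap, which only the deployer holds, so only
// its owner can reach the mutation.
public fun set_fee(_admin: &AdminCap, config: &mut Config, fee_bps: u64) {
    config.fee_bps = fee_bps;
}
```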
Each finding in the output follows a consistent structure:
Severity table at the top, PoC scenario for every finding, verified clean checks at the bottom. Consistent every time.
Here's what's planned:
On benchmarking: Right now, I know the skill finds things that generic prompts miss. But I can't prove it with numbers. The plan is to benchmark move-auditor against three baselines: (1) raw Claude with no skill, (2) Claude with my old generic prompts, and (3) manual review alone. Run all three against the same set of known-vulnerable Move contracts and measure: what gets caught, what gets missed, how many false positives.
If a tool can't beat "just reading the code carefully," it shouldn't exist. I want to know exactly where the skill adds value and where it doesn't. That data will also drive which checks get improved and which new ones get prioritized.
This is the item on the roadmap I'm most interested in. Open-sourcing code is easy. Proving it works is harder. The benchmarking results will be published alongside the skill so anyone can verify the claims.
AI-assisted audit output must be manually verified. This skill accelerates your workflow; it does not replace deep manual review and PoC testing. Every finding the skill generates needs human confirmation before it goes into a report. The skill is a force multiplier, not an autopilot.
If you treat AI output as final, you will ship false positives and miss real bugs. The skill is designed to surface candidates faster โ you still need to verify, test, and think.
The repo is MIT licensed and contributions are welcome:
- common-move.md
- sui-patterns.md: numbered SUI-XX
- aptos-patterns.md: numbered APT-XX
- defi-vectors.md: numbered DEFI-XX

Every new check must include a code example showing both the vulnerable and safe pattern. No exceptions.
The skill is at github.com/pantheraudits/move-auditor. If you audit Move contracts, try it on your next review and tell me what it catches that you wouldn't have found otherwise. That's the only metric that matters.
Panther · GitHub · Telegram
Disclaimer: The Move language and the Sui/Aptos ecosystems evolve fast. APIs change, new patterns emerge, and security considerations shift with each upgrade. If you find anything in this post that is outdated or inaccurate, please reach out to me so I can update it.