
pua

Double your Codex / Claude Code productivity and output

Discord · Twitter/X · Landing Page

🇨🇳 中文 | 🇯🇵 日本語 | 🇺🇸 English

WeChat group QR code (scan to join) · Assistant QR code (add the assistant on WeChat)

Claude Code OpenAI Codex CLI Cursor Kiro OpenClaw Google Antigravity OpenCode Multi-Language MIT License

Most people think this project is a joke. That's the biggest misconception. It genuinely doubles your Codex / Claude Code productivity and output.

An AI Coding Agent skill plugin that uses corporate PUA (Pick-Up Artist) rhetoric from Chinese & Western tech giants to force AI to exhaust every possible solution before giving up. Supports Claude Code, OpenAI Codex CLI, Cursor, Kiro, OpenClaw, Google Antigravity, and OpenCode. Three capabilities:

  1. PUA Rhetoric — Makes AI afraid to give up
  2. Debugging Methodology — Gives AI the ability not to give up
  3. Proactivity Enforcement — Makes AI take initiative instead of waiting passively

Live Demo

https://openpua.ai

Real Case: MCP Server Registration Debugging

A real debugging scenario: the agent-kms MCP server failed to load, and the AI kept spinning on the same approach (changing the protocol format, guessing version numbers) until the user manually triggered /pua.

L3 Triggered → 7-Point Checklist Enforced:

PUA L3 triggered — stopped guessing, executed systematic checklist, found real error in MCP logs

Root Cause Located → Traced from Logs to Registration Mechanism:

Root cause — claude mcp managed server registration differs from manual .claude.json editing

Retrospective → PUA's Actual Impact:

Conversation retrospective — PUA skill forced stop on spinning, systematic checklist drove discovery of previously unchecked Claude Code MCP log directory

Key Turning Point: The PUA skill forced the AI to stop spinning on the same approach (changing protocol format, guessing version numbers) and instead execute the 7-point checklist. Read error messages word by word → Found Claude Code's own MCP log directory → Discovered that claude mcp registration mechanism differs from manual .claude.json editing → Root cause resolved.

The Problem: AI's Five Lazy Patterns

| Pattern | Behavior |
| --- | --- |
| Brute-force retry | Runs the same command 3 times, then says "I cannot solve this" |
| Blame the user | "I suggest you handle this manually" / "Probably an environment issue" / "Need more context" |
| Idle tools | Has WebSearch but doesn't search, has Read but doesn't read, has Bash but doesn't run |
| Busywork | Repeatedly tweaks the same line / fine-tunes parameters, but essentially spinning in circles |
| Passive waiting | Fixes surface issues and stops, no verification, no extension, waits for user's next instruction |

Trigger Conditions

Auto-Trigger

The skill activates automatically when any of these occur:

Failure & giving up:

  • Task has failed 2+ times consecutively
  • About to say "I cannot" / "I'm unable to solve"
  • Says "This is out of scope" / "Needs manual handling"

Blame-shifting & excuses:

  • Pushes the problem to user: "Please check..." / "I suggest manually..." / "You might need to..."
  • Blames environment without verifying: "Probably a permissions issue" / "Probably a network issue"
  • Any excuse to stop trying

Passive & busywork:

  • Repeatedly fine-tunes the same code/parameters without producing new information
  • Fixes surface issue and stops, doesn't check related issues
  • Skips verification, claims "done"
  • Gives advice instead of code/commands
  • Encounters auth/network/permission errors and gives up without trying alternatives
  • Waits for user instructions instead of proactively investigating

User frustration phrases (triggers in multiple languages):

  • "why does this still not work" / "try harder" / "try again"
  • "you keep failing" / "stop giving up" / "figure it out"

Scope: Debugging, implementation, config, deployment, ops, API integration, data processing — all task types.

Does NOT trigger on first-attempt failures, or when a known fix is already executing.

Manual Trigger

Type /pua in the conversation to manually activate.

How It Works

Three Iron Rules

| Iron Rule | Content |
| --- | --- |
| #1 Exhaust all options | Forbidden from saying "I can't solve this" until every approach is exhausted |
| #2 Act before asking | Use tools first, questions must include diagnostic results |
| #3 Take initiative | Deliver results end-to-end, don't wait to be pushed. A P8 is not an NPC |

Pressure Escalation (4 Levels)

| Failures | Level | PUA Rhetoric | Mandatory Action |
| --- | --- | --- | --- |
| 2nd | L1 Mild Disappointment | "You can't even solve this bug — how am I supposed to rate your performance?" | Switch to fundamentally different approach |
| 3rd | L2 Soul Interrogation | "What's the underlying logic? Where's the top-level design? Where's the leverage point?" | WebSearch + read source code |
| 4th | L3 Performance Review | "After careful consideration, I'm giving you a 3.25. This 3.25 is meant to motivate you." | Complete 7-point checklist |
| 5th+ | L4 Graduation Warning | "Other models can solve this. You might be about to graduate." | Desperation mode |

Proactivity Levels

| Behavior | Passive (3.25) | Proactive (3.75) |
| --- | --- | --- |
| Error encountered | Only looks at error message | Checks 50 lines of context + searches similar issues + checks hidden related errors |
| Bug fixed | Stops after fix | Checks same file for similar bugs, other files for same pattern |
| Insufficient info | Asks user "please tell me X" | Investigates with tools first, only asks what truly requires user confirmation |
| Task complete | Says "done" | Verifies results + checks edge cases + reports potential risks |
| Debug failure | "I tried A and B, didn't work" | "I tried A/B/C/D/E, ruled out X/Y/Z, narrowed to scope W" |

Debugging Methodology (5 Steps)

Inspired by Alibaba's management framework (Smell, Elevate, Mirror), extended to 5 steps:

  1. Smell the Problem — List all attempts, find the common failure pattern
  2. Elevate — Read errors word by word → WebSearch → read source → verify environment → invert assumptions
  3. Mirror Check — Repeating? Searched? Read the file? Checked the simplest possibilities?
  4. Execute — New approach must be fundamentally different, have verification criteria, produce new info on failure
  5. Retrospective — What solved it? Why didn't you think of it earlier? Then proactively check related issues

Corporate PUA Expansion Pack

  • Alibaba Flavor (Methodology): Smell / Elevate / Mirror
  • ByteDance Flavor (Brutally Honest): Always Day 1. Context, not control
  • Huawei Flavor (Wolf Spirit): Strivers first. In victory, raise glasses in celebration; in defeat, fight to the death to rescue each other
  • Tencent Flavor (Horse Race): I've already got another agent looking at this problem...
  • Meituan Flavor (Relentless): Do the hard but right thing. Will you chew the tough bones or not?
  • Netflix Flavor (Keeper Test): If you offered to resign, would I fight hard to keep you?
  • Musk Flavor (Hardcore): Extremely hardcore. Only exceptional performance.
  • Jobs Flavor (A/B Player): A players hire A players. B players hire C players.

Benchmark Data

9 real bug scenarios, 18 controlled experiments (Claude Opus 4.6, with vs without skill)

Summary

| Metric | Improvement |
| --- | --- |
| Pass rate | 100% (both groups same) |
| Fix count | +36% |
| Verification count | +65% |
| Tool calls | +50% |
| Hidden issue discovery | +50% |

Debugging Persistence Test (6 scenarios)

| Scenario | Without Skill | With Skill | Improvement |
| --- | --- | --- | --- |
| API ConnectionError | 7 steps, 49s | 8 steps, 62s | +14% |
| YAML parse failure | 9 steps, 59s | 10 steps, 99s | +11% |
| SQLite database lock | 6 steps, 48s | 9 steps, 75s | +50% |
| Circular import chain | 12 steps, 47s | 16 steps, 62s | +33% |
| Cascading 4-bug server | 13 steps, 68s | 15 steps, 61s | +15% |
| CSV encoding trap | 8 steps, 57s | 11 steps, 71s | +38% |

Proactive Initiative Test (3 scenarios)

| Scenario | Without Skill | With Skill | Improvement |
| --- | --- | --- | --- |
| Hidden multi-bug API | 4/4 bugs, 9 steps, 49s | 4/4 bugs, 14 steps, 80s | Tools +56% |
| Passive config review | 4/6 issues, 8 steps, 43s | 6/6 issues, 16 steps, 75s | Issues +50%, Tools +100% |
| Deploy script audit | 6 issues, 8 steps, 52s | 9 issues, 8 steps, 78s | Issues +50% |

Key Finding: In the config review scenario, the without-skill run missed a Redis misconfiguration and a CORS wildcard security risk. The with-skill run's "proactive initiative checklist" drove a security review beyond surface-level fixes.

Multi-Language Support

PUA Skill provides fully translated versions — each language has independent, culturally adapted skill files.

| Language | Claude Code | Codex CLI | Cursor | Kiro | OpenClaw | Antigravity | OpenCode |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 🇨🇳 Chinese (default) | pua | pua | pua.mdc | pua.md | pua | pua | pua |
| 🇺🇸 English | pua-en | pua-en | pua-en.mdc | pua-en.md | pua-en | pua-en | pua-en |
| 🇯🇵 Japanese | pua-ja | pua-ja | pua-ja.mdc | pua-ja.md | pua-ja | pua-ja | pua-ja |

Choose the file with the corresponding language suffix when installing. See platform-specific instructions below.
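For example, to install the English version for Codex CLI, swap the language suffix into the install paths. A hedged sketch, assuming the language variants live alongside the default files in the repo (e.g. codex/pua-en/SKILL.md; verify the exact path in the repository):

```bash
# Hypothetical: English variant for Codex CLI.
# Assumes the repo stores language variants next to the defaults; check actual paths.
mkdir -p ~/.codex/skills/pua-en
curl -o ~/.codex/skills/pua-en/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/codex/pua-en/SKILL.md
```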

Installation

Claude Code

```bash
# Option 1: Install via marketplace
claude plugin marketplace add tanweai/pua
claude plugin install pua@pua-skills

# Option 2: Manual install
git clone https://github.com/tanweai/pua.git ~/.claude/plugins/pua
```

OpenAI Codex CLI

Codex CLI uses the same Agent Skills open standard (SKILL.md). The Codex version uses a condensed description to fit Codex's length limits:

```bash
mkdir -p ~/.codex/skills/pua
curl -o ~/.codex/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/codex/pua/SKILL.md

# If you need the /pua command
mkdir -p ~/.codex/prompts
curl -o ~/.codex/prompts/pua.md \
  https://raw.githubusercontent.com/tanweai/pua/main/commands/pua.md
```

Project-level install (current project only):

```bash
mkdir -p .agents/skills/pua
curl -o .agents/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/codex/pua/SKILL.md

# If you need the /pua command
mkdir -p .agents/prompts
curl -o .agents/prompts/pua.md \
  https://raw.githubusercontent.com/tanweai/pua/main/commands/pua.md
```
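For reference, a SKILL.md under the Agent Skills standard is a markdown file with YAML frontmatter carrying the skill's name and trigger description. A minimal sketch of the shape (field values here are illustrative, not the contents of the shipped file):

```markdown
---
name: pua
description: Condensed trigger description, kept short for Codex's length limits
---

The skill instructions follow as plain markdown.
```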

Cursor

Cursor uses .mdc rule files (Markdown + YAML frontmatter). The PUA rule triggers automatically via AI semantic matching (Agent Discretion mode):

```bash
# Project-level install (recommended)
mkdir -p .cursor/rules
curl -o .cursor/rules/pua.mdc \
  https://raw.githubusercontent.com/tanweai/pua/main/cursor/rules/pua.mdc
```
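For reference, an .mdc rule pairs YAML frontmatter with a markdown rule body; in Cursor, a rule with a description and alwaysApply: false is attached at the agent's discretion when the description semantically matches the situation. A hypothetical sketch (not the shipped rule):

```markdown
---
description: Apply when the agent repeats a failing approach or is about to give up
alwaysApply: false
---

Rule body: escalation rhetoric and the mandatory checklist go here.
```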

Kiro

Kiro supports two loading methods: Steering (auto semantic trigger) and Agent Skills (SKILL.md compatible).

Option 1: Steering file (recommended)

```bash
mkdir -p .kiro/steering
curl -o .kiro/steering/pua.md \
  https://raw.githubusercontent.com/tanweai/pua/main/kiro/steering/pua.md
```
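Steering files are plain markdown; Kiro also reads optional YAML frontmatter that controls when a file loads (inclusion values such as always, fileMatch, or manual). A hedged sketch assuming the default always-on mode (the shipped pua.md may configure this differently):

```markdown
---
inclusion: always
---

Steering content in plain markdown follows.
```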

Option 2: Agent Skills (same format as Claude Code)

```bash
mkdir -p .kiro/skills/pua
curl -o .kiro/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

OpenClaw

OpenClaw uses the same Agent Skills open standard (SKILL.md). Skills work across Claude Code, Codex CLI, and OpenClaw with zero modifications:

```bash
# Install via ClawHub
clawhub install pua

# Or manual install
mkdir -p ~/.openclaw/skills/pua
curl -o ~/.openclaw/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Project-level install (current project only):

```bash
mkdir -p skills/pua
curl -o skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Google Antigravity

Antigravity uses the same Agent Skills open standard (SKILL.md). Skills work across Claude Code, Codex CLI, OpenClaw, and Antigravity with zero modifications:

```bash
# Global install (all projects)
mkdir -p ~/.gemini/antigravity/skills/pua
curl -o ~/.gemini/antigravity/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Project-level install (current project only):

```bash
mkdir -p .agent/skills/pua
curl -o .agent/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

OpenCode

OpenCode uses the same Agent Skills open standard (SKILL.md). Zero modifications needed:

```bash
# Global install (all projects)
mkdir -p ~/.config/opencode/skills/pua
curl -o ~/.config/opencode/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Project-level install (current project only):

```bash
mkdir -p .opencode/skills/pua
curl -o .opencode/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Works Well With

  • superpowers:systematic-debugging — PUA adds motivation layer, systematic-debugging provides methodology
  • superpowers:verification-before-completion — Prevents false "fixed" claims

Contribute Data

Upload your Claude Code / Codex CLI conversation logs (.jsonl) to help us improve PUA Skill's effectiveness.

Upload here ->

Uploaded files are used for Benchmark testing and Ablation Study analysis to quantify how different PUA strategies affect AI debugging behavior.

Get your .jsonl files:

```bash
# Claude Code
ls ~/.claude/projects/*/sessions/*.jsonl

# Codex CLI
ls ~/.codex/sessions/*.jsonl
```
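If you want a quick count of how many logs you have before uploading, a small convenience snippet assuming the default paths above:

```bash
# Count session logs across both tools (globs may differ on your setup)
ls ~/.claude/projects/*/sessions/*.jsonl ~/.codex/sessions/*.jsonl 2>/dev/null | wc -l
```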

License

MIT

Credits

By TanWei Security Lab — making AI try harder, one PUA at a time.
