
Test-Driven Development (NeoLab)


TDD workflow plugin for Claude Code and compatible agents

By NeoLab 55,700 stars v1.0 Updated 2026-03-15

About This Skill

# Fix Tests

User Arguments

The user can provide arguments to focus on specific tests or modules:

```
$ARGUMENTS
```

If nothing is provided, focus on all tests.
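As a sketch of how the arguments might narrow the run (assuming a pytest-style runner; the real command comes from the test-infrastructure discovery step of the workflow):

```python
import shlex

def build_test_command(arguments: str, base_command: str = "pytest") -> list:
    """Build the test command, scoped to specific tests/modules if given.

    `base_command` is a placeholder default; the actual runner should be
    discovered from the project config (package.json or equivalent).
    """
    cmd = shlex.split(base_command)
    if arguments.strip():
        # The user narrowed the scope: pass their targets through verbatim.
        cmd.extend(shlex.split(arguments))
    # With no arguments, the command runs the full suite.
    return cmd
```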

Context

After business logic changes, refactoring, or dependency updates, tests may fail because they no longer match the current behavior or implementation. This command orchestrates automated fixing of all failing tests using specialized agents.

Goal

Fix all failing tests to match current business logic and implementation.

Important Constraints

  • Focus on fixing tests - avoid changing business logic unless absolutely necessary
  • Preserve test intent - ensure tests still validate the expected behavior
  • Analyse the complexity of the changes:
      - If two or more files changed, or one file contains complex logic, do not write tests yourself - only orchestrate agents!
      - If only one file changed and the change is simple, you may write the tests yourself.
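The complexity check above can be expressed as a small decision helper. This is a sketch, not part of the skill: the changed-file list would typically come from `git diff --name-only`, and `complex_files` is a hypothetical input for files the orchestrator judges to contain complex logic.

```python
def should_orchestrate(changed_files, complex_files=None):
    """Decide whether to delegate test fixing to agents.

    changed_files: paths reported as changed (e.g. by `git diff --name-only`).
    complex_files: hypothetical set of files flagged as having complex logic;
    the skill leaves that judgment to the orchestrator.
    """
    complex_files = complex_files or set()
    if len(changed_files) >= 2:
        return True  # two or more changed files: orchestrate agents only
    # A single changed file: orchestrate only if its change is complex.
    return any(f in complex_files for f in changed_files)
```

With one simple changed file this returns False, meaning the orchestrator may write the tests itself.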

Workflow Steps

Preparation

  1. Read sadd skill if available
      - If available, read the sadd skill to understand best practices for managing agents
  2. Discover test infrastructure
      - Read @README.md and package.json (or equivalent project config)
      - Identify commands to run tests and coverage reports
      - Understand project structure and testing conventions
  3. Run all tests
      - Execute the full test suite to establish a baseline
  4. Identify all failing test files
      - Parse the test output to get the list of failing test files
      - Group by file for parallel agent execution
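The parse-and-group step might look like the following sketch. It assumes pytest-style `FAILED path::test - error` summary lines; real parsing depends on whichever runner the discovery step identified.

```python
import re

def failing_test_files(test_output: str) -> list:
    """Extract failing test files from pytest-style summary lines such as
    'FAILED tests/test_auth.py::test_login - AssertionError'.

    Returns each file once, in first-seen order, so that exactly one
    agent can be launched per failing file.
    """
    seen = {}
    for match in re.finditer(r"^FAILED\s+(\S+?)::", test_output, re.MULTILINE):
        seen.setdefault(match.group(1), None)  # dict preserves insertion order
    return list(seen)
```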

Analysis

  5. Verify single test execution
      - Choose any test file
      - Launch a haiku agent with instructions to find the proper command to run only that test file
      - Ask it to iterate until individual tests can be run reliably
      - After it completes, try running a specific test file yourself
      - This ensures agents can run tests in isolation
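Once the haiku agent proposes a per-file command, it can be sanity-checked mechanically. A subprocess-based sketch (the command itself comes from the agent; the exit-code convention assumed here is pytest's, where 0 means passed and 1 means tests failed but ran):

```python
import subprocess

def can_run_single_test(command, timeout=300):
    """Return True if the proposed per-file test command actually runs.

    Exit code 0 means the file passed; exit code 1 (pytest's 'tests
    failed') still proves the command can target a single file. Any
    other outcome (missing binary, usage error, hang) means the agent
    should keep iterating on the command.
    """
    try:
        result = subprocess.run(command, capture_output=True, timeout=timeout)
    except (OSError, subprocess.TimeoutExpired):
        return False  # command not found or timed out
    return result.returncode in (0, 1)
```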

Test Fixing

  6. Launch `developer` agents (parallel)
      - Launch one agent per failing test file
      - Provide each agent with clear instructions:
          * Context: Why this test needs fixing (business logic changed)
          * Target: Which specific file to fix
          * Guidance: Read the TDD skill (if available) for best practices on writing tests
          * Resources: Read the README and relevant documentation
          * Command: How to run this specific test file
          * Goal: Iterate until the test passes
          * Constraint: Fix the test, not the business logic (unless it is clearly broken)
  7. Verify all fixes
      - After all agents complete, run the full test suite again
      - Verify all tests pass
  8. Iterate if needed
      - If any tests still fail, return to step 6
      - Launch new agents only for the remaining failures
      - Continue until 100% pass rate
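The launch-verify-iterate loop can be sketched as follows. Both callables are hypothetical stand-ins: `run_suite` returns the currently failing files, and `launch_agent` dispatches one developer agent for one file (the real dispatch mechanism is platform-specific). The `max_rounds` cap is an added safety assumption, not part of the skill.

```python
from concurrent.futures import ThreadPoolExecutor

def fix_until_green(run_suite, launch_agent, max_rounds=5):
    """Launch one agent per failing file, in parallel, until the suite is green.

    run_suite: callable returning the list of currently failing test files.
    launch_agent: callable taking one file path; blocks until that agent is done.
    """
    for _ in range(max_rounds):
        failing = run_suite()
        if not failing:
            return True  # 100% pass rate reached
        # One agent per failing file; later rounds only cover remaining failures.
        with ThreadPoolExecutor(max_workers=len(failing)) as pool:
            list(pool.map(launch_agent, failing))
    return not run_suite()
```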

Success Criteria

  • All tests pass ✅
  • Test coverage maintained
  • Test intent preserved
  • Business logic unchanged (unless bugs found)

Agent Instructions Template

When launching agents, use this template:

```
The business logic has changed and test file {FILE_PATH} is now failing.

Your task:
1. Read the test file and understand what it's testing
2. Read the TDD skill (if available) for best practices on writing tests
3. Read @README.md for project context
4. Run the test: {TEST_COMMAND}
5. Analyze the failure - is it:
   - Test expectations outdated? → Fix test assertions
   - Test setup broken? → Fix test setup/mocks
   - Business logic bug? → Fix logic (rare case)
6. Fix the test and verify it passes
7. Iterate until the test passes
```
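Filling the `{FILE_PATH}` and `{TEST_COMMAND}` placeholders maps directly onto Python string formatting. A sketch with the template text abbreviated:

```python
# Abbreviated version of the agent instruction template above.
AGENT_PROMPT = (
    "The business logic has changed and test file {FILE_PATH} is now failing.\n"
    "Run the test: {TEST_COMMAND}\n"
    "Fix the test and iterate until it passes."
)

def render_agent_prompt(file_path: str, test_command: str) -> str:
    """Fill the per-agent instruction template for one failing test file."""
    return AGENT_PROMPT.format(FILE_PATH=file_path, TEST_COMMAND=test_command)
```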

Use Cases

  • Implement Test-Driven Development workflows with AI guidance
  • Write failing tests first and iterate toward passing implementations
  • Build comprehensive test suites following TDD red-green-refactor cycle
  • Apply TDD principles to complex features and edge case coverage
  • Generate test cases from requirements before writing implementation code

Pros & Cons

Pros

  • +Compatible with multiple platforms including claude-code, codex, gemini, cursor
  • +Well-documented with detailed usage instructions and examples
  • +Automation-first design reduces manual intervention

Cons

  • -No built-in analytics or usage metrics dashboard
  • -Configuration may require familiarity with developer tools concepts

FAQ

What does Test-Driven Development (NeoLab) do?
TDD workflow plugin for Claude Code and compatible agents
What platforms support Test-Driven Development (NeoLab)?
Test-Driven Development (NeoLab) is available on Claude Code, OpenAI Codex CLI, Gemini CLI, Cursor.
What are the use cases for Test-Driven Development (NeoLab)?
Implement Test-Driven Development workflows with AI guidance. Write failing tests first and iterate toward passing implementations. Build comprehensive test suites following TDD red-green-refactor cycle.
