API Data Fetcher
Collects data from REST and GraphQL APIs with pagination, rate limiting, error handling, authentication, and output normalization into structured formats.
Install
Claude Code
Copy the SKILL.md file to your project's .claude/skills/ directory.
About This Skill
API Data Fetcher builds robust data collection scripts for REST and GraphQL APIs. It handles the complex realities of API integration: pagination, rate limiting, authentication, error recovery, and data normalization. The skill produces production-quality code with proper retry logic and checkpoint-based resumption.
How It Works
- API analysis — Reads API documentation or example responses to understand endpoints, auth, and pagination patterns
- Script generation — Produces Python or Node.js collection scripts with proper HTTP client configuration
- Pagination handling — Implements cursor, offset, page-number, or link-header pagination automatically
- Rate limiting — Adds configurable delays, respects Retry-After headers, and implements exponential backoff
- Output normalization — Flattens nested responses, handles missing fields, and writes to CSV, JSON, or database
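The pagination and rate-limiting steps above can be sketched as a single fetch loop. This is a minimal illustration, not the skill's actual output: `fetch_page`, `RateLimited`, and the page shape (`{"items": ..., "next_cursor": ...}`) are assumed names for whatever transport and response format a given API uses.

```python
import time

class RateLimited(Exception):
    """Raised by fetch_page when the API answers HTTP 429 (hypothetical)."""
    def __init__(self, retry_after=None):
        self.retry_after = retry_after

def fetch_all(fetch_page, page_size=100, max_retries=5, sleep=time.sleep):
    """Drain a cursor-paginated endpoint.

    fetch_page(cursor, limit) is assumed to return a dict like
    {"items": [...], "next_cursor": "abc" or None} and to raise
    RateLimited when the server throttles the client.
    """
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page = fetch_page(cursor, page_size)
                break
            except RateLimited as exc:
                # Respect a Retry-After hint if present, else back off exponentially
                sleep(exc.retry_after or 2 ** attempt)
        else:
            raise RuntimeError("max retries exceeded")
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:  # absent/None cursor signals the last page
            return items
```

Injecting `fetch_page` (and `sleep`) keeps the loop testable without a live API; a real script would wrap an HTTP client in that callable.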
Best For
- Collecting data from third-party APIs for analysis or migration
- Building data feeds that run on a schedule with incremental updates
- Integrating multiple APIs into a unified dataset
- Prototyping data collection before committing to a full ETL pipeline
Reliability Features
Checkpoint files allow resuming interrupted collections. Exponential backoff with jitter prevents thundering-herd retries. Dead-letter queues capture failed requests for manual retry, and all responses are logged with timestamps for audit trails.
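A minimal sketch of two of these features, checkpoint-based resumption and full-jitter backoff. The `fetch_checkpoint.json` filename and its `{"cursor", "fetched"}` layout are assumptions for illustration only:

```python
import json
import os
import random

CHECKPOINT = "fetch_checkpoint.json"  # hypothetical checkpoint path

def load_checkpoint():
    """Resume from the last saved cursor, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"cursor": None, "fetched": 0}

def save_checkpoint(cursor, fetched):
    # Write to a temp file, then rename: os.replace is atomic, so a crash
    # mid-write cannot leave a corrupt checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"cursor": cursor, "fetched": fetched}, f)
    os.replace(tmp, CHECKPOINT)

def backoff_with_jitter(attempt, base=1.0, cap=60.0):
    """Full-jitter backoff: a random delay in [0, min(cap, base * 2^attempt)]."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

Randomizing over the whole backoff window, rather than sleeping a fixed `2^attempt`, is what spreads out a crowd of clients that were all throttled at the same moment.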
Use Cases
- Fetch paginated data from REST APIs with cursor or offset pagination
- Collect data from GraphQL APIs with nested query optimization
- Build resilient fetchers with retry logic and exponential backoff
- Normalize heterogeneous API responses into flat CSV or structured JSON
- Schedule incremental data collection with checkpoint resumption
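For the normalization use case, flattening nested responses into flat rows might look like the following dotted-key helper. It is an illustrative sketch under one common convention (`user.org.id`-style keys), not the skill's actual implementation:

```python
def flatten(record, prefix="", sep="."):
    """Flatten nested dicts into dotted keys; fields missing from a
    record simply stay absent, so CSV writers can fill them later."""
    flat = {}
    for key, value in record.items():
        full = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, full, sep))
        else:
            flat[full] = value
    return flat
```

Downstream, a `csv.DictWriter` built from the union of all flattened keys (with `restval=""`) turns heterogeneous records into a rectangular CSV.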
Pros & Cons
Pros
- Handles pagination, auth, and rate limiting automatically
- Checkpoint-based resumption for interrupted collections
- Normalizes heterogeneous API responses into clean formats
- Supports both REST and GraphQL APIs
Cons
- Requires API credentials and proper authorization
- Cannot handle APIs requiring browser-based OAuth flows interactively
- Aggressive fetching without proper rate limiting can cause IP bans
Related Skills
Data Pipeline
Designs and implements ETL/ELT data pipelines using Python, SQL, and orchestration tools like Airflow, dbt, and Prefect for batch and streaming workflows.
CSV Transformer
Transforms, cleans, and converts data between CSV, JSON, Excel, and other tabular formats with column mapping, type casting, and validation.