
Agent Skills for LangChain: Making AI Assistants Context-Aware

An open-source collection of production-ready skills for building LangGraph agents with Claude Code, Codex and other AI assistants.

Simon Budziak
CTO

We've shipped multiple LangGraph agents, including production deployments. Every new project still hits the same roadblocks: How do I initialize LangGraph correctly? Which multi-agent pattern should I use? How should I handle state, retries, and deployment?

These are recurring friction points, not one-time questions. Teams keep searching old projects, adapting examples, and debugging configuration details. The patterns exist, but they are rarely available at the exact moment you need them.

Recently, AI coding assistants like Claude Code and Codex have become much more popular, and agent skills are growing alongside them as a practical way to make assistants domain-aware.

This post introduces langchain-agent-skills - an open-source repository that makes AI coding assistants contextually aware of LangChain patterns. Instead of searching docs, your assistant loads the right pattern when needed, with production-tested workflows, executable scripts, and complete examples.

Open Source: This is a community project. If you're building with LangChain, LangGraph, or LangSmith, contributions are welcome via GitHub. Issues, PRs, and feedback all appreciated.

TL;DR

  • langchain-agent-skills packages production LangGraph patterns into modular, reusable skills.
  • Skills use progressive disclosure: metadata always loaded, SKILL.md when triggered, references/scripts on-demand.
  • The result is faster delivery with less repeated debugging across setup, architecture, reliability, testing, and deployment.

Why Documentation Isn't Enough

LangChain, LangGraph, and LangSmith have excellent documentation: comprehensive guides, detailed API references, working examples, and MCP servers you can connect to your assistant. But documentation has a fundamental limitation: it's not contextual.

When you're implementing a supervisor pattern with three specialized agents, you don't want to search "LangGraph supervisor pattern," read through examples, identify the relevant parts, and adapt them to your use case. You want your AI assistant to already know the pattern and apply it directly.

| Traditional docs workflow | Skills workflow |
| --- | --- |
| Search docs for the right page | Assistant detects the task context |
| Read examples and find relevant parts | Loads the right `langgraph-<skill-name>` skill |
| Adapt snippets to your schema | Provides production-ready pattern for your use case |
| Debug configuration issues manually | Includes scripts, checks, and implementation guidance |
| Repeat for the next problem | Reuses the same workflow across projects |

The gap isn't information but accessibility. LangChain patterns are well-documented, but not available when you need them in your coding workflow.

Real friction points that slow down every project:

  • langgraph.json configuration: What's the schema? How do I configure environment variables? What dependencies are required?
  • State reducer patterns: When do I use operator.add vs custom reducers? How do I implement MessagesState patterns?
  • Retry policy setup: What backoff strategy should I use? How do I handle LLM recovery loops?
  • Deployment config: Should I use Cloud, Hybrid, or Standalone? How do I set up monitoring?
  • Observability and evaluation: What should I trace in LangSmith? How do I design eval datasets and regression checks?
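
Take the first of those as an example. A minimal langgraph.json uses the CLI's documented top-level fields (dependencies, graphs, env); the graph path and module name below are placeholders for your own project:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent/graph.py:graph"
  },
  "env": ".env"
}
```

The `graphs` entry maps a deployment name to a `file:variable` path pointing at a compiled graph; `env` points at the file holding your provider keys.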

AI coding assistants can help, but they lack specialized domain knowledge for LangChain. They know general programming patterns, but not the specific patterns that make LangGraph agents production-ready.

The solution: package production patterns into modular "skills" that assistants load automatically when relevant. Each skill provides workflows, scripts, and references for a specific domain (project setup, state management, error handling, deployment). Your assistant becomes a LangChain domain expert without bloating the context window.

What Are Agent Skills? Technical Architecture

Agent skills use a progressive disclosure pattern to manage context efficiently. Instead of loading thousands of lines of documentation upfront, skills load content in three levels based on what's actually needed.

Progressive Disclosure Pattern

Level 1: Metadata (always loaded)

  • Skill name and comprehensive description (~100 words)
  • Acts as semantic search key for triggering
  • Always in context, minimal token cost

Level 2: SKILL.md body (loaded when triggered)

  • Workflow guidance, decision trees, quick-start patterns
  • Kept under 500 lines for efficient context usage
  • Loaded only when assistant detects relevance

Level 3: Bundled resources (loaded on-demand)

  • Detailed reference documentation
  • Executable scripts for automation
  • Templates and working examples
  • Loaded as needed by specific tasks

Why this matters:

AI assistants lose quality when too much irrelevant context is loaded. Progressive disclosure keeps the signal-to-noise ratio high: lightweight metadata for discovery, deeper content only when needed.

Scalability: this model supports 50+ skills without a major performance drop, because most skills stay at the metadata level until triggered.
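
The three-level loading above can be sketched in a few lines of plain Python. This is a conceptual illustration, not the actual loader used by Claude Code; the class and function names are invented:

```python
# Conceptual sketch of three-level progressive disclosure.
from dataclasses import dataclass, field


@dataclass
class Skill:
    name: str
    description: str          # Level 1: always in context
    skill_md: str             # Level 2: loaded when triggered
    references: dict = field(default_factory=dict)  # Level 3: on-demand

    def metadata(self) -> str:
        return f"{self.name}: {self.description}"


def build_context(skills, triggered, needed_refs):
    """Assemble context: metadata for every skill, bodies and references
    only for what the current task actually triggered."""
    context = [s.metadata() for s in skills]                 # Level 1
    for s in skills:
        if s.name in triggered:
            context.append(s.skill_md)                       # Level 2
            context += [v for k, v in s.references.items()
                        if k in needed_refs]                 # Level 3
    return context


skills = [
    Skill("langgraph-error-handling", "Retry strategies and escalation",
          "## Workflow...", {"backoff.md": "Detailed backoff reference"}),
    Skill("langgraph-project-setup", "Project initialization", "## Setup..."),
]
ctx = build_context(skills, triggered={"langgraph-error-handling"},
                    needed_refs={"backoff.md"})
print(len(ctx))  # 2 metadata lines + 1 body + 1 reference = 4
```

Only the triggered skill contributes its body and reference to the context; the other skill costs one metadata line.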

Skill Anatomy

Each skill contains three components:

1. SKILL.md - Core workflow guidance with YAML frontmatter. The description field is the primary triggering mechanism. AI assistants use semantic matching against user requests to automatically load relevant skills.

2. Scripts - Executable automation for fragile operations. Skills aren't just documentation; they include Python/JavaScript scripts that scaffold projects, validate configs, and automate repetitive tasks.

3. References - Detailed documentation loaded on-demand. When you need deep information, skills link to reference files, keeping the core SKILL.md concise.

The architecture principle: match information depth to task requirements. Quick-start patterns for common cases, detailed references for edge cases, executable scripts for fragile operations.
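
Concretely, a skill's on-disk layout follows this shape (directory and file names other than SKILL.md are illustrative):

```
langgraph-error-handling/
├── SKILL.md          # YAML frontmatter (name, description) + workflow guidance
├── scripts/          # executable automation, run with `uv run`
│   └── validate_config.py
└── references/       # deep documentation, loaded on demand
    └── retry_policies.md
```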

The 7 Production Skills

The repository currently includes seven production-ready skills that cover the full LangGraph lifecycle.

A. langgraph-project-setup

What it does: Initializes LangGraph projects with proper structure and configuration

When to use: Starting a new agent project from scratch

Key capabilities:

  • Generates langgraph.json with proper schema and deployment configuration
  • Creates .env templates for LLM providers (OpenAI, Anthropic, Google, AWS)
  • Sets up project structure for Python or JavaScript/TypeScript

Example: "Scaffold a new LangGraph project with Anthropic provider and PostgreSQL checkpointer"

B. langgraph-agent-patterns

What it does: Multi-agent coordination patterns (supervisor, router, orchestrator, handoffs)

When to use: Building complex workflows with multiple specialized agents

Key capabilities:

  • Decision tree for pattern selection based on requirements
  • Complete working examples in both Python and JavaScript
  • Anti-pattern identification (when NOT to use each pattern)

Example: "Implement a supervisor pattern with 3 specialized sub-agents for research, analysis, and writing"

Pattern selection guide:

| Pattern | Use When | Avoid When |
| --- | --- | --- |
| Supervisor | Central coordinator needed | Agents must run in parallel |
| Router | Route to exactly one agent | Multiple agents need to contribute |
| Orchestrator-Worker | Parallel execution required | Sequential dependencies exist |
| Handoffs | Dynamic agent switching needed | Flow is deterministic |
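
The supervisor row in that table boils down to one control-flow idea: a coordinator picks the next worker until the task is done. Here is a library-free sketch with invented agent names and a hard-coded routing rule; in LangGraph this would be a StateGraph whose supervisor node returns Command(goto=...):

```python
# Minimal supervisor loop: coordinator picks the next worker until done.

def research(state):
    state["notes"] = "raw findings"
    return state

def analyze(state):
    state["analysis"] = f"insights from {state['notes']}"
    return state

def write(state):
    state["report"] = f"report based on {state['analysis']}"
    return state

WORKERS = {"research": research, "analyze": analyze, "write": write}

def supervisor(state):
    """Decide which worker runs next; return None when finished."""
    if "notes" not in state:
        return "research"
    if "analysis" not in state:
        return "analyze"
    if "report" not in state:
        return "write"
    return None  # all outputs present: stop

state = {}
while (nxt := supervisor(state)) is not None:
    state = WORKERS[nxt](state)

print(state["report"])  # report based on insights from raw findings
```

In a real deployment the routing decision would typically come from an LLM call rather than key checks, which is exactly the part the skill's working examples cover.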

C. langgraph-state-management

What it does: State schemas, reducers, persistence, and checkpoint management

When to use: Designing stateful workflows with complex data flow

Key capabilities:

  • Schema patterns for chat, research, workflow, tool-calling, RAG use cases
  • Reducer implementation (operator.add, custom, add_messages)
  • Persistence backend comparison (InMemory, SQLite, PostgreSQL, CosmosDB)

Example: "Design a research agent state schema with custom reducers for document aggregation"
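
The reducer idea itself is small: each state key can carry a function that merges an update into the existing value, while unannotated keys are simply overwritten. A stdlib-only sketch (the apply_update helper is invented for illustration; LangGraph performs this merge internally based on Annotated types in your state schema):

```python
import operator
from typing import Annotated, TypedDict, get_args, get_origin, get_type_hints


class ResearchState(TypedDict):
    # Annotated metadata names the reducer used to merge updates.
    documents: Annotated[list, operator.add]   # append-style merge
    summary: str                               # plain overwrite


def apply_update(schema, state, update):
    """Merge a node's partial update into state, honoring per-key reducers.
    Illustrative helper only; LangGraph does this internally."""
    hints = get_type_hints(schema, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        hint = hints.get(key)
        if hint is not None and get_origin(hint) is Annotated:
            reducer = get_args(hint)[1]        # e.g. operator.add
            merged[key] = reducer(merged.get(key, []), value)
        else:
            merged[key] = value                # default: last write wins
    return merged


state = {"documents": ["doc-a"], "summary": "v1"}
state = apply_update(ResearchState, state,
                     {"documents": ["doc-b"], "summary": "v2"})
print(state)  # {'documents': ['doc-a', 'doc-b'], 'summary': 'v2'}
```

The documents list accumulates across node updates while summary is replaced, which is the core distinction the skill's schema patterns build on.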

D. langgraph-error-handling

What it does: Retry policies, LLM recovery loops, human-in-the-loop escalation

When to use: Making agents production-ready with proper error handling

Key capabilities:

  • Error classification (transient vs recoverable vs user-fixable)
  • RetryPolicy configuration with backoff strategies
  • LLM-based recovery routing using Command pattern
  • Human approval workflows with interrupt()/resume()

Example: "Add retry logic with exponential backoff and human escalation for CRM write failures"

Strategy selection: Transient errors (429, timeout, 5xx) use RetryPolicy; Recoverable errors (bad tool args) use LLM recovery loop with Command; User-fixable errors (missing info) use interrupt() + resume().
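
The transient branch of that strategy reduces to a small exponential-backoff helper. A hedged sketch (the TransientError class and the limits are placeholders; in LangGraph you would declare this via RetryPolicy instead of writing the loop yourself):

```python
import time


class TransientError(Exception):
    """Placeholder for 429 / timeout / 5xx failures worth retrying."""


def with_retry(fn, max_attempts=4, base_delay=0.01, sleep=time.sleep):
    """Retry fn with exponential backoff; re-raise after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # budget exhausted: surface the error
            sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...


calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("rate limited")
    return "ok"

print(with_retry(flaky))  # "ok" after two transient failures
```

Recoverable and user-fixable errors deliberately bypass this loop: retrying a badly formed tool call or missing user input just burns attempts, which is why they route to LLM recovery or interrupt() instead.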

E. langgraph-testing-evaluation

What it does: Testing and evaluation workflows for LangGraph agents

When to use: Validating agent quality before deployment

Key capabilities:

  • Unit testing for individual nodes
  • Trajectory evaluation for full workflows
  • LangSmith dataset integration
  • A/B testing patterns

Example: "Create evaluation suite for chatbot covering routing accuracy, field extraction, and grounding"
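
Trajectory evaluation compares the sequence of steps an agent actually took against an expected sequence. A minimal stdlib sketch (the metric and the step names are invented for illustration; LangSmith's evaluators are considerably richer):

```python
def trajectory_match(expected, actual):
    """Fraction of expected steps found in order in the actual trace.
    Missing steps are skipped; real evaluators score more dimensions."""
    hits, idx = 0, 0
    for step in expected:
        try:
            idx = actual.index(step, idx) + 1  # must appear after prior hit
            hits += 1
        except ValueError:
            continue
    return hits / len(expected) if expected else 1.0


# Hypothetical expected tool-call order: routing, extraction, grounding.
expected = ["route_intent", "extract_fields", "ground_answer"]

perfect = trajectory_match(expected,
                           ["route_intent", "extract_fields", "ground_answer"])
partial = trajectory_match(expected,
                           ["route_intent", "ground_answer"])

print(perfect, partial)  # 1.0 0.6666666666666666
```

Run against a LangSmith dataset, a score like this becomes a regression check: a drop below a threshold on known inputs fails the build before deployment.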

F. langsmith-trace-analyzer

What it does: Fetch, organize, and analyze LangSmith traces

When to use: Debugging production failures or analyzing agent behavior

Key capabilities:

  • Trace download with filtering (project, metadata, time window)
  • Automatic organization by outcome (passed/failed/error)
  • Pattern analysis (token usage, tool calls, anomalies)

Example: "Download last 100 traces from production, analyze failure patterns, identify most common errors"
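
Organizing traces by outcome is essentially a group-by over run records. A sketch with mocked trace dicts (the "status" and "error" field names are placeholders; the actual skill pulls runs via the LangSmith SDK):

```python
from collections import Counter, defaultdict

# Mocked trace records; real runs come from the LangSmith SDK.
traces = [
    {"id": "t1", "status": "success", "error": None},
    {"id": "t2", "status": "error", "error": "RateLimitError"},
    {"id": "t3", "status": "error", "error": "RateLimitError"},
    {"id": "t4", "status": "error", "error": "ToolTimeout"},
]

# Organize by outcome (passed/failed/error buckets).
by_outcome = defaultdict(list)
for trace in traces:
    by_outcome[trace["status"]].append(trace["id"])

# Rank failure modes for pattern analysis.
failures = Counter(t["error"] for t in traces if t["status"] == "error")

print(dict(by_outcome))
print(failures.most_common(1))  # [('RateLimitError', 2)]
```

The same shape extends naturally to token-usage and tool-call aggregation once the per-run fields are available.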

G. langsmith-deployment

What it does: Deploy, monitor, and manage LangGraph agents in production

When to use: Moving from development to production environments

Key capabilities:

  • Deployment model selection (Cloud vs Hybrid vs Standalone)
  • langgraph.json validation
  • CI/CD automation templates
  • Monitoring, alerts, webhook setup

Example: "Deploy LangGraph agent to LangSmith Cloud with GitHub Actions CI/CD and monitoring"

Deployment model selection:

| Model | Use When | Infrastructure |
| --- | --- | --- |
| Cloud | Want managed infrastructure | LangSmith-hosted |
| Hybrid | Need control over compute | Your infrastructure + LangSmith API |
| Standalone | Full control required | Self-hosted |

Together, these skills cover initialization, architecture, development, reliability, QA, debugging, and production operations.

Coming Next: In our next blog post, we'll share a real-world case study showing how we used these skills to build a production agent in LangGraph from scratch (from initial setup to deployment) in a fraction of the usual time.

How Skills Work with Claude Code

Installation:

Add the marketplace and install skills:

```bash
# Add marketplace
/plugin marketplace add Lubu-Labs/langchain-agent-skills

# Install specific skill
/plugin install langgraph-error-handling@lubu-labs-langchain-agent-skills

# Or use interactive menu
/plugin menu
```

Automatic triggering:

Once installed, skills trigger automatically based on context:

  • User: "I need to add retry logic to my LangGraph agent"
  • Claude Code: Automatically loads langgraph-error-handling skill
  • Provides: RetryPolicy configuration, error classification, code examples

The description field in SKILL.md frontmatter drives triggering:

```yaml
description: "Implement retry strategies, LLM-based recovery loops, and human-in-the-loop escalation for LangGraph agents. Use this skill when you need to handle transient errors, model failures, or require human approval in agent workflows."
```

Claude Code uses semantic matching to detect relevance. Comprehensive descriptions improve triggering accuracy.

Progressive loading:

  1. Metadata (name + description) always loaded
  2. Full SKILL.md loaded when triggered
  3. References/scripts loaded on-demand as needed

This keeps context efficient - you're not loading thousands of lines of documentation unless they're actually needed.

Cross-platform support:

Skills work beyond Claude Code:

Cursor:

```bash
# Add as a remote rule via the UI:
#   Cursor Settings → Rules → Remote Rule (GitHub)
#   URL: https://github.com/Lubu-Labs/langchain-agent-skills.git

# Or clone locally
git clone https://github.com/Lubu-Labs/langchain-agent-skills.git .cursor/skills/
```

OpenAI Codex:

```bash
$skill-installer install langgraph-agent-patterns from Lubu-Labs/langchain-agent-skills
```

Any assistant:

Read skills/langsmith-deployment/SKILL.md for production deployment guidance

The architecture is platform-agnostic - skills are just structured markdown with scripts. Any AI assistant can use them.

Open Source & Community Contribution

langchain-agent-skills is a community project with seven production-ready skills today and more planned.

How to contribute (README + AGENTS.md):

  1. Open an issue for a bug, feature request, or skill idea.
  2. Read the repo contribution guidance in README.md and AGENTS.md.
  3. Fork the repository, create a focused branch, and implement the change.
  4. Use uv run for Python scripts and keep skill structure clean (SKILL.md concise, deep details in references/).
  5. Open a PR with a clear summary and validation notes.

Repository: github.com/Lubu-Labs/langchain-agent-skills

Takeaway

If you're building production LangGraph agents, context-aware AI assistants beat documentation searches. The langchain-agent-skills repository makes your assistant an expert in LangChain patterns - from project setup to production deployment.

Key principles:

  • Modular knowledge. Skills package patterns, scripts, and examples that load only when needed.
  • Progressive disclosure. Metadata stays light; detailed guidance and references load on demand.
  • Production-first and community-driven. These are battle-tested workflows with executable automation, improved through open-source contributions.

Next steps:

  1. Try the skills: Install the marketplace in Claude Code and use skills on your next LangGraph project
  2. Contribute patterns: If you've solved a common LangChain problem, package it as a skill and open a PR

The repository covers the full LangGraph lifecycle, with seven production-ready skills today and more coming.


