Claude Code Skills Just Changed Everything About AI Assistants
ChatGPT's growth is stalling, Anthropic for Science, Agent Skills and the great Claude Acceleration.
Good Morning,
Today is the second edition of our series on using Anthropic’s Claude. The first appeared in our “Guides” section (related to our vibe coding section) of the AI Supremacy Newsletter. The purpose of this post is to teach you some new hands-on things about Claude. Please click on the links; it tells me someone is actually reading this.
Today’s guide will be by Michael Jovanovich (you might know him as Typhren). His Newsletter is dedicated to Claude Code:
Claude Code: Response Awareness Methodology 🌊
I asked him for his insights on implementing Agent Skills, which Anthropic hopes will accelerate real-world applications. It’s quite a deep dive, so take your time with this one.
Michael (who formerly went by Typhren) is a self-taught programmer who specializes in AI-assisted development frameworks. He’s spent hundreds of hours exploring orchestration patterns and extensibility features in Claude Code.
Claude’s Models have a Rising Coding and Scientific Utility 💡
It has to be said that Anthropic’s announcement of Claude Agent Skills is sort of a big deal. According to my analysis, Anthropic is on pace to move ahead of OpenAI in ARR by 2027 or 2028 at the latest. They are also dedicating themselves to building the best AI coding models and agents for science.
Anthropic will Grow Faster than OpenAI in Revenue in 2026 and Catch OpenAI in 2027 in Annual Recurring Revenue (ARR) per my analysis
AI Reports I’m reading 📚
The Geopolitics of AI: Decoding the New Global Operating System (JPMorganChase)
Efficient Compute: Why Optimization Is Now Inevitable.
How People Around the World View AI (PewResearch, Oct 15th, 2025). We cover some slides about this in the appendix.
Models to Look out For 🔮
As impressive as Agent Skills are, don’t forget we still have models like Gemini 3, Grok 5, Claude Opus 4.5 and DeepSeek-R2 to look forward to in the coming weeks.
The U.S. is Relying too Heavily on Gas, Coal and Fracking for AI Infrastructure
Is this an ecological and environmental disaster waiting to happen? The United States is pushing dirty power.
The Great Energy Uncertainty ⚡
AI companies are regularly announcing large data center deals that are fueling a rally in the utility sector. However, the utilities on the front line of the AI boom are struggling to figure out how much of that demand will actually turn into projects that get built in their regions.
FERC Chairman David Rosner warned in September that the difference of a few percentage points in electricity load forecasts “can impact billions of dollars in investments and customer bills.” I’ll be covering this more in my next piece on AI Infrastructure and the great datacenter boom.
ChatGPT Growth Slowing ✨
Okay, this shocked me a little.
OpenAI has been caught in some deceptive marketing and PR practices. Essentially, lying. Sam Altman has committed to spending more than $1 trillion to build out AI infrastructure, totaling 26 gigawatts of compute capacity from tech companies including Nvidia, AMD, and Oracle, with money they obviously don’t have. The problem?
ChatGPT’s growth stalled in mid-to-late 2025, likely beginning around May 2025.
ChatGPT’s mobile app is seeing slowing download growth and daily use, analysis shows.
ChatGPT’s mobile app growth may have hit its peak, according to a new analysis of download trends and daily active users provided by the third-party app intelligence firm Apptopia, TechCrunch reported. Its estimates indicate that new user growth, measured by percentage changes in new global downloads, slowed after April. Meanwhile, open resistance to OpenAI and generative AI in society is on the rise.
It’s a big deal: Although October is only half over, the firm says it’s on pace to be down 8.1% in terms of a month-over-month percentage change in global downloads.
Anthropic’s Claude Haiku is the most Nimble Lightweight Model Yet in 2025
Claude Haiku 4.5, released about one week ago on October 15th, 2025, really is a marvel to behold. It even surpasses Claude Sonnet 4 at certain tasks, like using computers. Compared to Claude Sonnet 4, Claude Haiku 4.5 gives you similar levels of coding performance at one-third the cost and more than twice the speed.

Anthropic for Science ⚗️
As OpenAI has faltered with GPT-5 (promising to add erotica, an AGI-promise meltdown, and losing key talent to Meta and Anthropic), Anthropic itself is accelerating its product offerings.
Anthropic is winning with developers and growing its share of the API market. Coding and science, for real. Yesterday they announced Claude for Life Sciences: Claude will now offer AI integration for researchers to advance scientific discovery. While Claude for Life Sciences is built around Anthropic’s existing AI models, it now supports connections with other scientific tools including Benchling, PubMed, and 10x Genomics.
Can Anthropic top OpenAI in Coding and Science applications with LLMs?
Anthropic, one of the companies at the center of the AI boom, is now the most likely to grow fastest in AI enterprise spend and in coding and science integrations globally in the 2026 to 2030 period. Note also that both Google and Amazon have significant stakes in Anthropic, each nearing the size of what Microsoft has invested in OpenAI, but for far less equity. This means Anthropic’s investor portfolio is more balanced and sustainable. The fact is, with such high stock dilution and disarray in product focus across apps, hardware, and doing too much, most of OpenAI’s talent has moved on to build their own things or to work at competitors.
OpenAI’s ChatGPT DAU activity has Dropped 22.5% since July, 2025 🤯
ChatGPT’s slowdown is more serious than the media is reporting (I don’t even think they are reporting this story). According to the Apptopia data, average time spent per DAU (daily active user) in the U.S., specifically, has dropped 22.5% since July, and average sessions per DAU in the U.S. are also down by 20.7%. This means people are likely leaving ChatGPT for competitors like Gemini and Claude.
This indicates that U.S. users are spending less time in ChatGPT’s app and are opening it fewer times per day.
This also makes their circular partner and vendor financing deals and announcements all the more absurd! They have been trying to “fake it till you make it” while losing momentum. ChatGPT distribution and AI infrastructure were supposed to be their moat.
Backlash to GPT-5’s launch and many Sam Altman mess-ups, like promising erotica, are taking their toll, as are the exaggerated deceptions around AGI and data center deals like the Oracle, AMD, and Nvidia ones.
Anthropic’s Agent Skills
Claude Agent Skills:
↳ They turn your knowledge into automatic workflows.
↳ You capture your process just once.
↳ Claude applies it automatically every time.
↳ It’s available now on all paid plans.
What I did for financial modeling:
↳ Pulled historical SEC data (KPIs, financials).
↳ Built projections from historicals + your assumptions.
↳ Created a perfectly integrated model.
↳ No hardcodes.
How to use my prompt:
↳ Go to Settings and enable “Skill Creator.”
↳ Copy & paste my prompt to build the Skill.
↳ Add the result as a new Skill.
↳ Ask Claude to model any public company. - Source.
I asked several guest contributors for their take on how to apply Agent skills with Claude, so stay tuned.
While most of OpenAI’s features are copies and clones of others’ ideas, it doesn’t matter if they try to copy Anthropic’s agentic offering: Claude is just better. In fact, 2025 has been an incredible year for both Google Gemini and Anthropic to really differentiate themselves and catch up to ChatGPT’s tremendous popularity. It turns out, of course, that having the most users isn’t always the best sign or the best kind of pressure for an AI startup. In 2026, I expect the likes of xAI, Qwen, and DeepSeek to make more tangible progress.
I humbly requested that the author known as Typhren make a LinkedIn profile, so please pay it a visit. For an AI tinkerer, I think he’s really talented, although this guide might appeal most to the more technical, developer-savvy readers among you. We’ll have more beginner guides on the Agent Skills topic soon.
Claude Code Skills Just Changed Everything About AI Assistants
By Michael Jovanovich
Why Skills transforms AI from helpful autocomplete into a programmable development partner
Agent Skills announcement.
About Michael Jovanovich
Michael is a self-taught programmer who specializes in AI-assisted development frameworks. He’s spent hundreds of hours exploring orchestration patterns and extensibility features in Claude Code, focusing on techniques that scale beyond simple tasks into genuinely complex and autonomous workflows. He writes Claude Code: Response Awareness Methodology teaching developers to build sophisticated systems using AI tools and orchestration patterns.
Top Writing
LLMs as Interpreters: The Probabilistic Runtime for English Programs
The Science of AI Internal State Awareness: Two Papers That Validate Response-Awareness Methodology
Anthropic released Skills for Claude Code
I’d been writing the same prompts over and over. “Remember our API conventions.” “Use our standard testing pattern.” “Don’t forget the error handling per our standards.” Every coding session meant repeating context that should have been obvious by now for any human.
Skills changed that. Developers are fundamentally rethinking how they work with AI coding tools—from autocomplete to IDE chat to programmable, agentic systems. Skills make that shift practical: your conventions become executable code that the AI loads on demand.
Anthropic is the AI company behind Claude, a leading AI assistant competing with OpenAI’s ChatGPT and Google’s Gemini. Claude Code is their command-line tool that integrates directly into your working environment—giving Claude access to your files and projects for extended work sessions. While it excels at software development, it’s equally powerful for writing, crafting reports, research, data analysis, and any complex multi-step work.
Skills change the game. You can now write automatically executable prompt templates that Claude loads on demand. Your conventions become code. Your patterns become callable. And once you have that foundation, you can build systems where skills invoke other skills based on what the task requires.
What Are Skills
Skills are executable prompt templates that live in your project directory. The basic structure looks like this:
Claude Skills: Specialized capabilities you can customize
You can even have Claude make you any custom skill file you want, but if you’d rather make it yourself, the format is simple:
```
---
name: status-report
description: Generate weekly status reports following team format
---

<your instructions>
```
You drop this file in .claude/skills/ and suddenly you have a callable unit of specialized behavior. When working on a relevant task, Claude automatically discovers and invokes the appropriate skills, loading that prompt and operating under those specific constraints.
The file format is simple. A few lines of metadata at the top (name and description), then the actual prompt instructions below. That prompt can be as detailed as you need.
What makes Skills different from CLAUDE.md files or project files is that they’re dynamic. Claude reads CLAUDE.md every time it works in the related folder, which uses tokens even when it doesn’t need them. Skills only activate when needed, so they have the potential to be a lot more economical for specific information.
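To see why that matters, here is some back-of-the-envelope arithmetic. The token counts below are assumed for illustration, not measured:

```python
# Illustrative arithmetic only: the token sizes here are assumptions.
CLAUDE_MD_TOKENS = 2_000      # always loaded, every session in the folder
SKILL_META_TOKENS = 50        # name + description, the cheap metadata
SKILL_BODY_TOKENS = 2_000     # full skill body, loaded only when relevant

def context_cost(sessions: int, relevant: int, use_skill: bool) -> int:
    """Tokens spent on this knowledge across `sessions`, of which
    only `relevant` sessions actually needed it."""
    if not use_skill:
        return sessions * CLAUDE_MD_TOKENS
    return sessions * SKILL_META_TOKENS + relevant * SKILL_BODY_TOKENS

# 100 sessions, but the knowledge matters in only 10 of them:
print(context_cost(100, 10, use_skill=False))  # 200000 tokens via CLAUDE.md
print(context_cost(100, 10, use_skill=True))   # 25000 tokens via a skill
```

Under these assumptions the always-loaded file burns eight times the tokens of the on-demand skill; the exact ratio depends entirely on how often the knowledge is actually needed.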
What Skills Enable
Skills work through progressive disclosure—a two-level system that keeps context efficient. The skill’s metadata (name and description) is the first level: it provides just enough information for Claude to know when each skill should be used without loading the full content. If Claude thinks the skill is relevant to the current task, it reads the complete SKILL.md file—the second level of detail. Think of it like a table of contents that lets you find the right chapter before reading the whole book.
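Mechanically, this is a cheap filter followed by an expensive load. A minimal Python sketch, assuming a skill file is a few `key: value` metadata lines, a blank line, then the prompt body; the keyword-matching heuristic here is purely illustrative (Claude’s actual relevance judgment is the model’s own):

```python
def parse_skill(text: str) -> dict:
    """Level 1: split the cheap metadata from the full prompt body."""
    head, _, body = text.partition("\n\n")
    meta = dict(line.split(":", 1) for line in head.splitlines())
    return {"name": meta["name"].strip(),
            "description": meta["description"].strip(),
            "body": body}

def load_relevant(skills: list[dict], task: str) -> list[str]:
    """Level 2: read the full body only for skills whose metadata matches."""
    words = task.lower().split()
    return [s["body"] for s in skills
            if any(w in words for w in s["description"].lower().split())]

skill = parse_skill(
    "name: status-report\n"
    "description: Generate weekly status reports following team format\n"
    "\n"
    "When creating status reports, start with an executive summary."
)
loaded = load_relevant([skill], "write my weekly status report")
print(len(loaded))  # 1: the body loads only because the metadata matched
```

The point of the two levels is that dozens of skills cost almost nothing to carry around; only the matching one’s full body ever enters context.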
Anthropic ships pre-built skills for common workflows:
PowerPoint (pptx): Create presentations, edit slides, analyze presentation content
Excel (xlsx): Create spreadsheets, analyze data, generate reports with charts
Word (docx): Create documents, edit content, format text
PDF (pdf): Generate formatted PDF documents and reports
The skill packages instructions about how these formats work so Claude doesn’t have to guess.
Document creation is just the starting point. Skills handle enterprise workflows like applying brand guidelines (official colors, typography, logo usage) to artifacts. Internal communications follow your company’s tone and format automatically. Financial analysis skills generate dashboards, portfolio reports, investment summaries with the charts and data presentation your team expects.
The pattern becomes clear quickly. Anywhere you have domain-specific knowledge that Claude needs to apply repeatedly, a skill makes that knowledge executable. Your conventions become code that the AI can invoke.
Five categories where skills deliver immediate value:
Document workflows: PowerPoint, Excel, Word, PDF creation following format conventions. No more “how do we structure quarterly reports again?” Every output matches your template.
Brand consistency: Typography, colors, logo usage, voice and tone. The skill enforces standards so every artifact looks like it came from your team.
Communications: Status reports, newsletters, FAQs, internal announcements. The format and structure stay consistent, you focus on the actual content.
Analysis and reporting: Financial dashboards, portfolio analysis, data visualization. Skills handle chart types, data presentation, and report structure.
Cross-format workflows: Pull data from CSV, build Excel analysis with pivot tables, generate PowerPoint summary, export to PDF. Each format transition follows the conventions for that medium.
This extends beyond business documents. Creative applications (art direction, music production, design systems), technical tasks (web app testing, API documentation), developer workflows (code scaffolding, release notes, standardized reviews). The common thread is packaged expertise that Claude loads on demand.
Try It: Your First Skill
Here’s an example from Anthropic of how to make your own Skill in Claude.ai
Creating custom Skills with Claude
Now let’s make our own. Pick something you repeat constantly. For example:
Create a file at .claude/skills/status-report.md or ask Claude to:
```
---
name: status-report
description: Generate weekly status reports following team format
---

When creating status reports:

1. Start with executive summary (3 bullet points max)
2. Progress section: what shipped this week
3. Blockers section: what needs attention or decisions
4. Next week section: planned work and milestones
5. Metrics: key numbers (PRs merged, issues closed, test coverage)
6. Tone: factual and concise, highlight risks early

Format as markdown with clear section headers.
```
Now when you ask Claude to write your weekly update, it discovers the skill automatically and loads that format. No need to manually invoke it. Claude sees the skill is relevant and applies your exact structure.
You’ve just programmed your programming assistant. The AI now has reusable expectations that it can apply consistently and automatically when needed.
Once you have one skill working, you’ll start seeing opportunities everywhere. Every time you think “I need to remind Claude about X again,” that’s a skill waiting to be written.
Skills for Code: Integration Awareness
Skills become powerful for coding when they capture knowledge that wouldn’t naturally be in context. Integration points are a perfect example.
Say you have a payment processing function that multiple parts of your application depend on. When Claude works on that function, it needs to know about all the surfaces that code touches, even if those files aren’t currently open.
Create .claude/skills/payment-integration-map.md:
```
---
name: payment-integration-map
description: Integration surfaces for payment processing code
---

When modifying payment processing functions:

Direct dependencies:
- src/api/checkout.ts - Calls processPayment() during checkout flow
- src/api/subscriptions.ts - Calls processPayment() for recurring billing
- src/webhooks/stripe.ts - Calls refundPayment() on dispute events

Data contracts:
- Payment methods must include: id, type, last4, expiry
- Response must include: transactionId, status, timestamp
- Error codes: INSUFFICIENT_FUNDS, CARD_DECLINED, NETWORK_ERROR

Side effects:
- Updates user.paymentHistory in database
- Triggers email via EmailService.sendReceipt()
- Logs to audit system via AuditLogger.recordTransaction()

Testing requirements:
- Update integration tests in tests/payment-flows.test.ts
- Mock Stripe API calls in tests
- Verify all three caller paths still work

Before modifying the payment code, verify changes won’t break these integration points.
```
Now when Claude works on payment processing, it knows about the checkout flow calling it, the subscription system depending on it, the webhook handlers that trigger refunds. Those files might not be in the immediate context, but the skill surfaces the integration map automatically.
This pattern scales to any critical code with multiple dependents. Authentication systems, database connection pools, API client wrappers, event emitters. Anywhere a change could ripple through parts of the codebase that aren’t currently visible.
Beyond the Basics: Skills for Workflow Routing
I’d been working with slash commands and orchestration patterns for months before Skills arrived.
Slash commands are custom workflows you define in .claude/commands/ that Claude can invoke, like creating your own “/fix-bug” or “/add-feature” commands with specific instructions.
Orchestration is when the main Claude instance coordinates work by deploying sub-agents. Think of sub-agents as fresh copies of Claude, each with their own separate context window (the working memory where Claude holds information during a conversation). Instead of one Claude trying to remember everything, you deploy multiple specialized Claude instances, each focused on one piece of the puzzle.
The approach worked: for complex multi-domain tasks, you deploy sub-agents to handle focused implementation, while the main agent maintains coordination and ensures your original instructions do not suffer context rot from implementation tokens.
What is context rot? It is a limitation of the attention mechanism in modern large language models. As the number of tokens in the context window grows, prior tokens lose influence over the model’s output. This is the number one reason models fail to follow instructions on long-running tasks. What you asked the model to do is typically in the first tokens of context, which means they are the first to rot.
The trade-off is important to consider though. Orchestration means multiple subagents with separate context windows can end up reading the same files for the same context. As a result token usage goes up. For genuinely complex work, that cost pays off because you avoid the exponentially more expensive debugging and rework that comes from context rot of your instructions on long running tasks.
But orchestration overhead on simple tasks is waste. You ask Claude to fix a problem with the application crashing. Claude deploys networks of sub-agents to identify and plan fixes, only to discover a single syntax error. Analysis agents, planning verification steps, coordinated integration, all for zero reason. The systematic rigor that saves complex projects kills efficiency on trivial work.
Even for experienced developers, how deep an issue runs isn’t always clear at the start. I wanted Claude to handle anything autonomously, from fixing a syntax error to managing systemic integration failure.
I needed routing. Simple tasks execute directly. Complex tasks trigger orchestration. All in one workflow. The challenge was helping Claude make that decision reliably and completely autonomously.
Before Skills, I tried putting routing logic in slash commands. A scouting agent would assess what a task involved and score its complexity. Then, in the slash command instructions, I provided different file paths Claude should reference for how much to orchestrate based on that complexity score.
It worked sometimes. Sometimes Claude would read the correct next instruction file. Other times the main agent that should be deploying teams of subagents for complex problems would just start trying to fix it directly.
Skills solved this.
Automatic Complexity Routing
As mentioned above, I had built a complexity scoring system with four dimensions:
File Scope: How many files does this task affect? Single file scores 0, multiple related files scores 1, different modules scores 2, different system domains scores 3.
Requirement Clarity: How clear are the requirements? Crystal clear scores 0, minor ambiguities score 1, significant interpretation needed scores 2, vague or contradictory scores 3.
Integration Risk: How many integration points? Isolated change scores 0, touches existing APIs scores 1, cross-module integration scores 2, system-wide impact scores 3.
Change Type: What kind of change? Documentation or config scores 0, logic changes in existing patterns score 1, new features score 2, architectural changes score 3.
Total the scores. 0-1 gets lightweight direct execution. 2-4 gets moderate coordination. 5-7 gets full planning and synthesis. 8+ gets maximum orchestration with progressive context loading.
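The rubric and tier cut-offs can be sketched in a few lines of Python. The tier names and thresholds come from the text above; the function itself is my illustration, not the actual skill file:

```python
def route(file_scope: int, clarity: int, integration: int, change_type: int) -> str:
    """Each dimension scores 0-3; the total selects the workflow tier."""
    total = file_scope + clarity + integration + change_type
    if total <= 1:
        return "light"   # direct execution
    if total <= 4:
        return "medium"  # moderate coordination
    if total <= 7:
        return "heavy"   # full planning and synthesis
    return "full"        # 8+: maximum orchestration

print(route(0, 0, 0, 0))  # light: e.g. "fix the login button color"
print(route(3, 1, 2, 2))  # full: e.g. "add user authentication"
```

The two example calls match the worked scorings later in this section: a cosmetic single-file fix totals 0, a cross-layer feature totals 8.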
Before Skills, I just provided file paths based on the total score. It worked, but only sometimes. Skills changed everything: they made it reliable. Here’s an example of me applying the routing with Claude Skills to a character-skill system in a video game I am working on.
The router skill doesn’t do the work itself. It’s a traffic controller. Based on the complexity score, it invokes one of four specialized workflow skills:
Light tier for scores 0-1: Direct implementation, minimal overhead, handles simple bug fixes and cosmetic changes.
Medium tier for scores 2-4: Light planning phase, coordinated implementation, verification. Handles standard feature additions and moderate refactoring.
Heavy tier for scores 5-7: Planning with multiple approach exploration, synthesis of the best path, coordinated implementation across domains, systematic verification.
Full tier for scores 8+: Multi-domain architecture changes. Progressive context loading of the instructions so the router never tries to hold all the complexity at once. Survey phase to understand the codebase, parallel planning across domains each exploring multiple options for that domain, synthesis of integration points between separate plans and picking the best of the multiple options explored, phased implementation, comprehensive verification.
For example:
When I ask Claude to “fix the login button color,” the router scores it: file scope 0 (single CSS file), clarity 0 (completely clear), integration 0 (isolated), change type 0 (cosmetic). Total score: 0. Routes to light tier, direct implementation.
When I ask Claude to “add user authentication,” the router scores it: file scope 3 (database, API, frontend, tests across all layers), clarity 1 (mostly clear but implementation details open), integration 2 (cross-module), change type 2 (new feature). Total score: 8. Routes to full tier, orchestrated workflow.
The router makes this decision before any work begins. Task complexity determines cognitive architecture.
I created skill files for orchestration at four different complexity levels, from “Claude fixes it directly” to “deploy five different teams of sub-agents.” The complexity assessment triggers the appropriate skill file automatically. No manual routing, no hoping Claude remembers to read the correct file path buried in context.
It works consistently. So far, 100% success rate at routing to the correct workflow in my work. That reliability is what makes the entire system practical and scalable in complexity instead of theoretical.
The effect is that complexity now gets the right treatment. Simple tasks stay simple. Complex tasks get the structure they need to succeed. As for the cost, scouting the task to set up the routing is fast and cheap with Anthropic’s new Claude Haiku 4.5 model, which runs at a third the cost of Sonnet while being more than twice as fast.
What’s more, there’s no limit to the skills that can be invoked. During full-tier orchestration, a domain skill for my game engine can activate for the sub-agent working in that part of the codebase, and likewise for every sub-agent and whatever they are working on.
Try It: Basic Router
You can build a simplified version of this routing system yourself.
Start with just two tiers: simple and complex.
Create .claude/skills/task-router.md:
```
---
name: task-router
description: Assess task complexity and route to appropriate skill
---

Assess this task’s complexity:

File Scope: How many files affected?
- Single file = 0 points
- Multiple files = 1 point

Clarity: How clear are requirements?
- Very clear = 0 points
- Some ambiguity = 1 point

Integration: How many integration points?
- Isolated change = 0 points
- Touches other systems = 1 point

Total score:
- 0-1 points: Use “direct-implementation” skill
- 2-3 points: Use “coordinated-implementation” skill

Based on your assessment, invoke the appropriate skill.
```
Then create the two implementation skills it routes to:
.claude/skills/direct-implementation.md for simple tasks:
```
---
name: direct-implementation
description: Direct execution for simple tasks
---

This is a straightforward task. Implement it directly:

1. Make the change
2. Test it works
3. Report completion

No coordination overhead needed.
```
.claude/skills/coordinated-implementation.md for complex tasks:
```
---
name: coordinated-implementation
description: Coordinated execution for complex tasks
---

This task requires coordination:

1. Identify all affected components
2. Plan integration points
3. Implement in logical order
4. Verify integrations work
5. Report completion with architectural notes

Maintain awareness of how pieces connect.
```
Now when you give Claude a task, it evaluates the task’s complexity. Based on the score, the appropriate tier skill automatically loads through context matching. The direct-implementation or coordinated-implementation skill instructions become active without you manually selecting them. You’ve just built adaptive task handling.
This scales with your needs. Start with two tiers. As you identify more patterns, add more specialized skills. The router grows more sophisticated, one step at a time.
Results and Trade-offs
I’ve been testing routing patterns for a month. Skills took routing reliability to 100%: every single task routes correctly. That consistency changes everything for token economics.
Here’s the critical insight: orchestration uses more tokens upfront but prevents expensive failures. Anthropic charges $3 per million input tokens (reading) and $15 per million output tokens (writing code), 5x more expensive. Orchestration spends extra on cheap reads to avoid catastrophic failures on expensive writes.
Does the trade-off pay off? Let me show you the math.
Simple task reality: “Fix the login button color”
Light tier (direct implementation):
- Input: ~8K tokens (reads CSS file, makes change)
- Output: ~2K tokens (color change + test)
- Cost: ~$0.05
- Result: Done in one pass
If this had routed to full tier by mistake:
- Input: ~40K tokens (survey agent, planning agents, file reads)
- Output: ~15K tokens (plans, synthesis, implementation reports)
- Cost: ~$0.35
- Result: Massive overkill
Routing saves $0.30 by not deploying orchestration on trivial work. Multiply that by dozens of simple tasks per week.
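The arithmetic behind those figures is easy to check, using the $3/$15 per-million-token pricing quoted above (the token counts are the estimates from the text):

```python
IN_RATE, OUT_RATE = 3.00, 15.00  # dollars per million tokens, as quoted above

def cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a session at the quoted input/output rates."""
    return (input_tokens * IN_RATE + output_tokens * OUT_RATE) / 1_000_000

light = cost(8_000, 2_000)    # direct implementation
full = cost(40_000, 15_000)   # accidental full-tier orchestration
print(light, full, full - light)  # roughly 0.05, 0.35, and 0.30
```

Note how the output rate dominates: the mistaken full-tier run spends more on its 15K tokens of written plans and reports than on all 40K tokens of reading.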
Complex task: “Add user authentication”
Single agent approach (no orchestration):
First attempt: 90K tokens implementing database, API, frontend, and tests in one context. Around file 8 of 15, context degradation hits—Claude’s working memory is so full of implementation details that it loses track of the original plan. Integration issues appear.
The debugging spiral:
Explore integration bugs: 60K tokens
First refactor: 120K tokens (doesn’t fix database-API mismatch)
Second refactor: 150K tokens (introduces new frontend bugs)
Debug the new bugs: 80K tokens
Desperation debugging: 100K tokens
Total: ~500-600K tokens, ~150K input + ~450K output, $7.20, code doesn’t work
You Git reset. The entire session was wasted. Attempt #2 will likely cost another $7.20+ if it works at all.
Full tier orchestration (Skills routing):
Survey agent maps auth patterns: 30K input, 5K output
Three planning agents explore approaches: 120K input, 30K output
Synthesis agent resolves integration contracts: 25K input, 8K output
Four implementation agents (database, API, frontend, tests): 180K input, 45K output
Verification agent tests integration: 40K input, 5K output
Light debugging and adjustments: 50K input, 15K output
Total: ~445K input + 108K output, $3.00, working code
The verdict: Orchestration costs $3.00 and works. Single agent costs $14.40+ and likely still fails.
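The same per-token arithmetic reproduces the complex-task comparison. Token counts are again the article’s estimates, and the $14.40 figure assumes the single agent needs a second full attempt:

```python
IN_RATE, OUT_RATE = 3.00, 15.00  # dollars per million tokens, as quoted earlier

def cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a session at the quoted input/output rates."""
    return (input_tokens * IN_RATE + output_tokens * OUT_RATE) / 1_000_000

orchestrated = cost(445_000, 108_000)   # full-tier orchestration: working code
single_attempt = cost(150_000, 450_000) # one failed single-agent session
print(orchestrated)        # 2.955 -> the ~$3.00 quoted above
print(single_attempt)      # 7.2  -> the $7.20 failed session
print(2 * single_attempt)  # 14.4 -> the $14.40+ if attempt #2 is needed
```

The asymmetry is the whole argument: orchestration is input-heavy (445K read vs 108K written), while the failing single agent is output-heavy, and output tokens cost five times as much.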
The extra input tokens, plus the few output tokens of sub-agent reports, are insurance against catastrophic failure.
The single agent’s 600K token failure spiral happens because context degradation accumulates within one session. The debugging and refactoring keep adding to the same degraded context, making the problem worse. Orchestration prevents this by maintaining clean separation between coordination and implementation.
The extra input token cost is insurance. For complex multi-domain work, orchestration saves money because it avoids the catastrophic failure mode where context rot forces you to abandon work and revert to earlier checkpoints.
Where routing pays off:
The inflection point sits around “moderate refactoring” complexity. Changing a function signature used in five places? Medium tier catches all call sites, updates them systematically, and verifies nothing broke. Simple implementation would likely have caused issues, and complex orchestration would have been overkill.
What This Enables
Skills transform AI coding assistants from helpful suggestion tools into programmable development environments. You’re no longer just prompting—you’re building reusable collaboration patterns that encode your expertise and expectations so they are readily and automatically accessible to Claude.
The combination of slash commands, sub-agent orchestration, and skills creates something fundamentally different: a natural language program that makes programs. You write the logic for how things should work (skills). You define when those logic pieces activate (commands). And you break big problems into smaller parallel tasks (orchestration). Claude becomes the interpreter that executes this system.
Your domain knowledge becomes executable. Your workflows become composable. The system adapts to task complexity automatically.
This is the shift from AI assistance to AI infrastructure. Raw prompting works for simple tasks. Complex software development requires coordination, verification, and context management—Skills provide that scaffolding. The result: 100% reliable routing, big savings on complex work, and autonomous handling of everything from syntax errors to multi-domain architecture changes.
What next?
Start simple. Create one skill for something you repeat constantly. Then another. Once you see the pattern, you’ll start encoding your entire development workflow. That’s when it clicks: you’re not just using an AI tool anymore. You’re programming your programming partner.
Thank you for reading this article! The ideas and concepts you need to hone your own skills are free on my Substack. But I offer paid subscribers access to a private GitHub Response Awareness Repository with my actual workflow files: slash commands, Skills, Subagents, etc. that I continuously update and expand.
Special Discount
If you liked my work and you’re interested in sophisticated Claude Code Workflows, I’m offering 20% off for up to 1 year for readers of this article until 10/28/2025.
Redeem now at: https://responseawareness.substack.com/b3e3aee2
About the Author:
Michael is a self-taught programmer who specializes in AI-assisted development frameworks. He’s spent hundreds of hours exploring orchestration patterns and extensibility features in Claude Code, focusing on techniques that scale beyond simple tasks into genuine architectural work. He writes Claude Code: Response Awareness Methodology teaching developers to build sophisticated systems using AI tools and agent orchestration.
Top Writing
LLMs as Interpreters: The Probabilistic Runtime for English Programs
The Science of AI Internal State Awareness: Two Papers That Validate Response-Awareness Methodology
Addendum Editor’s Notes
As the demand for AI compute rises dramatically in the second half of the 2020s, will energy and datacenter infrastructure become more efficient?
At the start of 2026, Lambda, Crusoe, and Nscale are all expected to go public in a rush to catch the likes of CoreWeave and Nebius in GPU renting, along with a dozen or so Bitcoin mining companies frantically pivoting to AI datacenters. Many of them run on clean energy such as hydro and wind. More on this group in an upcoming post.
The U.S. is far behind China in both energy infrastructure and renewables, and the Trump Administration is leaving it even further behind. Much of the clean hydro power is actually up in Canada. Others are considering building datacenters in colder regions like Norway, in the ocean, or even in space. Few realize that the energy bottleneck the U.S. will face in the compute-demand era is actually a push toward space-tech acceleration.
How People Around the World View AI
I’ve explored this topic twice before, and the Pew Research Center does another great job here.
Countries Most Concerned about AI:
Concerned as opposed to excited.
United States 🇺🇸
Italy 🇮🇹
Australia 🇦🇺
Brazil 🇧🇷
Greece 🇬🇷
Canada 🇨🇦
United Kingdom 🇬🇧
Young People are being Bombarded with AI
Especially in countries like:
Japan
France
Germany
Greece
Most Countries Don’t Trust the U.S. or China to Regulate AI
Most people say they trust their own country or the EU to regulate AI more than they trust the U.S. or China.
Wealthier Countries Have Heard of (Generative) AI More than Others
Who Trusts the U.S. the Least to Regulate Generative AI?
The countries with the lowest trust that the United States will properly regulate Generative AI products are:
France
Australia
Turkey
Mexico
Netherlands
Canada
About those Agent Skills? 🤔
“Skills aren’t just a Claude feature. They’re the formalization of context engineering as the primary competitive advantage in AI.”
Is that true?
“Claude can now do work for you in your exact style and using your exact process. You can “teach” Claude to run a pre-designated workflow on any task - making agents MUCH more useful (time saved on manual formatting alone is huge).”
Claude Agent Skills help organizations and enterprises automate procedural knowledge: they specialize agents and workflows using simple files and folders that carry organizational context. Skills extend Claude’s capabilities by packaging local, personalized expertise into composable resources, transforming a general-purpose agent into a specialist that fits specific case-by-case needs. So is this going to be a big deal? And how long will it take to get good?
Further reading on Agent Skills
Claude Skills are awesome, maybe a bigger deal than MCP
Claude Skills Are Taking the AI Community by Storm
Hacker News trending: Andrej Karpathy – It will take a decade to work through the issues with agents (over 1,000 comments).
Watch it on YouTube (Dwarkesh)
Claude Skills Might Be More Important Than MCP!
Claude (Agent) Skills: 50+ Power Tips & Tricks Guide
The Genius of Anthropic’s Agent Skills
Everyone should be using Claude Code more
Stay tuned for more examples and takes on Anthropic’s Agent Skills. You will find them in our vibe coding section.
We should also mention that Claude Code for web is now rolling out to subscribers to Anthropic’s $20-per-month Pro plan, as well as its $100- and $200-per-month Max plans.
To read the first article in our series on Claude Code, see here: