aplthrowaway67 3 hours ago [-]
I will never understand why someone would go through all the trouble of developing this cool idea, without bothering to link a demo or include sample output. I see this every day on HN.
So the only way I can see what this skill actually looks like is to download and run it myself? No thank you.
gbro3n 12 minutes ago [-]
I'm still finding skill use to be far less reliable than clear instructions in AGENTS.md. I appreciate the idea is to give the agent the option of not loading the skill when it isn't relevant, to avoid context bloat, but there's no way (without an explicit instruction in AGENTS.md) to ensure that the agent will use the skill, and at that point it might as well be any markdown file referenced at any location.
While building https://www.agentkanban.io (a GitHub Copilot integrated task board), I experimented a lot with instruction placement. A single degree of separation from AGENTS.md works really well (I needed a robust means of having the agent pick up task-specific IDs, and so settled on a file called INSTRUCTION.md in a directory managed by the tool, which avoids polluting AGENTS.md as much as possible). I experimented with skills, but they were skipped too often for the tool to work as reliably as it now does.
adastra22 5 minutes ago [-]
Claude auto-injects skill descriptions into the context, and is pretty good about using them. I don’t know about the other harnesses.
giwook 2 hours ago [-]
The SKILL.md is right there, you can just read it to see what it does.
testycool 51 minutes ago [-]
A sample output will give the user an idea of whether the project is worth their time.
neuralkoi 8 hours ago [-]
I'm not familiar with Skills, but looking at the repo I find the amount of decorative code/text to be overkill for what amounts to just the following prompt in a bash script (yikes) that executes after a commit:
{"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"[learning-opportunities-auto] The user just committed code. Per the learning-opportunities skill, consider whether this is a good moment to offer a learning exercise. If the committed work involved new files, schema changes, architectural decisions, refactors, or unfamiliar patterns, ask the user (one short sentence) if they'd like a 10-15 minute exercise. Do not start the exercise until they confirm. If they decline, note it — no more offers this session."}}
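To make that point concrete: stripped of the decoration, such a hook is a few lines of shell. A hypothetical sketch follows (written as a function taking the tool-call JSON as an argument so it can be exercised directly; the actual Claude Code hook protocol reads that JSON from stdin, and JSON printed to stdout is merged back into the agent's context — the nudge text here is abbreviated, not the repo's exact wording):

```shell
# Hypothetical sketch of the whole hook, reduced to its essence.
emit_learning_nudge() {
  local input="$1"   # the tool-call JSON the harness hands to the hook
  case "$input" in
    *"git commit"*)  # fire only when the executed command was a commit
      printf '%s\n' '{"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"[learning-opportunities-auto] The user just committed code. Consider offering a short learning exercise."}}'
      ;;
  esac
}
```

Everything else in the skill is prompt text carried inside that additionalContext string.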
alexhans 8 hours ago [-]
Skills are just a good standard for describing repeatable workflows: they save context through progressive disclosure, enable prompt sharing, and (a very underused feature) bound the non-deterministic parts with determinism (which could be scripts).
Conceptually, you should treat them as incremental software instead of magic you grab from others. [1]
The killer feature is that coding harnesses tend to have SkillBuilder agent skills, so creating them becomes very easy and you can evolve them.
I recommend you build your own for your particular pain points.
A very simple example [2] shows what another user mentioned around "evals", so that you can really achieve good-enough correctness for your automation.
- [1] https://alexhans.github.io/posts/series/evals/building-agent...
- [2] https://alexhans.github.io/posts/series/evals/sketch-to-text...
After reading your first article, I'm not sure I would agree. Skills are certainly transferable in the sense that a sufficiently narrowly tailored skill can be applicable to others with no modification, similar to how we grab libraries that encapsulate certain abstractions for us.
alexhans 1 hour ago [-]
Sorry. I'm not sure about the specific part you don't agree with. You prefer people to just use skills instead of building them?
That's fair, but I think this is similar to power tools like vim, Obsidian, or others. There's the path of grabbing other people's workflows and not being able to modify them to really tailor the tool to your needs, and there's the minimal incremental path that empowers you and gives you control all the way through. It gets you to understand the tools, and you'll be able to think of possibilities that match your exact problems.
I'm not dogmatic about it but I do really recommend it. You can see the transformative shift once people start "skill building" instead of "skill consuming".
Edit: The approach I mention works with non-engineers/developers, so there's no different technical bar.
saidnooneever 7 hours ago [-]
Most stuff in these tools is just another .md file that gets spliced into the prompt somehow; that's how LLMs work, and it's normal. It's also why I'd recommend people use Claude to build a similar tool for themselves. You'll spend some tokens on it, and afterwards save something like 90% of token costs using your own tool. It's really crazy how many fewer tokens and calls are needed to do meaningful work.
You can also secure and lock down tool calls better, make the agent's tasks retryable, give it failure modes, etc. Otherwise, if your laptop dies during agent work, only God and the agent know what happened to your code. Oh no wait: the agent just needs to spend 100k tokens to remember where it was (a great way to spend your money).
rglover 2 hours ago [-]
For those who haven't gone down this rabbit hole like me yet: skills are just structured markdown files that describe how to handle a narrow-band task.
So, if I write my API endpoints a certain way, the skill would describe that specific process. Later, an agent can "see" this skill, load it when it's relevant to current chat context, and then do whatever is instructed.
Similar to "tool calls," but instead of being a function you can call, it's just instructions for how to perform that "skill."
At least for the agent I use (Cline), you can define skills either globally or locally (project level).
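As a concrete sketch of the API-endpoint example above (the YAML frontmatter with name and description follows the common SKILL.md convention; the skill name and the rules themselves are made up for illustration):

```markdown
---
name: api-endpoint-style
description: House style for writing API endpoints in this project. Use when adding or changing endpoints.
---

# API endpoint style

1. Validate the request body first; return 400 with a JSON error body on failure.
2. Keep handlers thin; business logic lives in a service module.
3. Register every new endpoint in docs/api.md.
```

The description line is what the agent sees up front; the body is only loaded when the skill is judged relevant to the current context (the progressive disclosure other commenters mention).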
Juvination 1 hour ago [-]
This is a great idea; I've been experimenting with it this morning. I've really been feeling the brain drain from using AI too much, and while this isn't the whole fix, I think a few exercises a day can really help.
aledevv 8 hours ago [-]
What exactly is the "adaptive dynamic textbook approach"?
Examples?
> Generation effect: Accepting generated code and decreasing generating one's own code can skip the active processing that builds understanding.
Holy truth.
ruguo 51 minutes ago [-]
Just tried this skill, pretty interesting. The Q&A at the end actually went surprisingly deep.
zihotki 8 hours ago [-]
No benchmarks or evals are present, so how do you know it produces better results than /create-skill? Naive testing doesn't provide any confidence.
schnitzelstoat 8 hours ago [-]
I think it means human skill development. It offers learning opportunities to the user.
> When you complete architectural work (new files, schema changes, refactors), Claude offers optional 10-15 minute learning exercises grounded in evidence-based learning science. The exercises use techniques like prediction, generation, retrieval practice, and spaced repetition to provide you with semi-worked examples from across your own project work.
Confusing name though.
https://github.com/DrCatHicks/learning-opportunities/blob/ma...
wiseowise 1 hour ago [-]
When your brain is so cooked on LLMs that mentioning any related terminology triggers Pavlovian response.
alexhans 7 hours ago [-]
Hey, it's awesome that you mention evals. May I ask what you currently use, or look for? Do you roll your own or use an existing framework?
areoform 7 hours ago [-]
I really love the idea, I've had Claude make textbooks for me on the fly using open source textbooks and documentation. Is it possible to extend this skill to more generalized areas of learning / application? Or, is it domain specific to code?
romanoonhn 10 hours ago [-]
Looks interesting! I know it's easy to set up and test, but I'm on mobile currently, so I think it'd be great if there were a full-interaction example to better understand how it works.
As I understand it, this skill is intended to help you understand AI-generated code and potentially reduce skill atrophy. So it asks the agent to pause after important milestones (e.g. created a file, changed the DB schema) and ask the user questions about how they would do it.
itsafarqueue 3 hours ago [-]
Hey bro I heard you like skills so I put a skill in your oh whatever
https://github.com/SimHacker/moollm/blob/main/skills/skill/S...
I want to learn Java Spring, and will probably let AI help me / quiz me. I will take a look at the skills for inspiration.
tomaytotomato 3 hours ago [-]
I am a Java dev and Spring user for about 10 years now.
If you want to learn how the Spring Framework and Spring Boot work, the best thing to do is build your own library and then learn to add it to a new Spring Boot service.
https://www.baeldung.com/spring-boot-custom-starter
Depending on which AI tool you are using, you can also get it to debrief on what it is doing and which layer of the Spring architecture it is using (lifecycle, bean scope, whether it is using auth/messaging/data middleware, etc.).
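For scale: in Spring Boot 3+, the registration behind such a custom starter is a single resource file, src/main/resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports, listing your auto-configuration class (the class name here is hypothetical):

```
com.example.mylib.MyLibAutoConfiguration
```

That class is annotated with @AutoConfiguration and contributes beans, conventionally guarded with @ConditionalOnMissingBean so the consuming application can override them.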
Also, here is a service I have built with Claude Code, along with a sample Spring Boot service:
https://github.com/tomaytotomato/spring-data-solr-lazarus
It is a demo to show that I could get Apache Solr working in the latest versions of Spring Framework 7 and Spring Boot 4. There is a sample bookstore application in there that you can play around with.
Mashimo 17 minutes ago [-]
Thanks mate. Will check it out later.
Current plan is to use an existing Vue/TypeScript browser game as the frontend, send high scores and similar via WebSockets, and do ~something~ with Redpanda to dip my toes into the Kafka world.
ramon156 8 hours ago [-]
Is there a reason why making a Spring app and learning hands-on is not feasible?
I know I sometimes get demotivated mid-way, but that also tells me it might not be worth the investment
Mashimo 4 hours ago [-]
It's feasible, but I want to try to learn something new with an AI tutor and see how that goes.
I want to make a Spring app, but instead of looking everything up on Google, I can ask the AI with context, and it can maybe give me a learning plan that fits my needs.
imtringued 7 hours ago [-]
Spring is reasonably easy to learn. The hard part is knowing where beans are defined, because Spring doesn't make that easy at all. Anyone and anything can define new beans in any library you pull in.
I still don't see why AI would be mandatory. It's helpful, yes, but not mandatory.
satao 5 hours ago [-]
Is that why navigating a Spring codebase is so confusing? I'm jumping through implementations and definitions and whatever without ever reaching the actual business logic most of the time.
WASDx 5 hours ago [-]
I've had mostly problem-free experiences with IntelliJ (an Ultimate-only feature, I think). One click finds declarations both in business code and buried deep in libraries.
ffsm8 4 hours ago [-]
Following the code via the IDE is indeed easy in Java-land, but if you don't have a breadcrumb yet, a Spring Boot app you didn't architect yourself is indeed annoying to navigate.
Everything can be an entry point, and it's often non-obvious how things are structured.
More opinionated frameworks, which enforce routes and consumers to be centrally managed, are generally easier to figure out from the filesystem.
But if you've got an IDE like IntelliJ, you get the entry-point tool, which lists all endpoints. Consumers are more annoying...