Ox comes with a guide on how to best use the library with LLM coding assistants, trying to solve the problem of how to let agents know how and when to use Ox.
Is that guide useful? Should we replicate it to our other OSS projects? What's missing?
How to best provide a library's documentation to LLMs, so that an AI coding assistant / agent is fully informed during development?
I've gathered a couple of options, and did a PoC on one of the libraries that I maintain.
Here are the results:
https://virtuslab.com/blog/backend/providing-library-documentation/
Ox, the safe direct-style streaming, concurrency and resiliency library for #Scala 3 on the JVM, is nearing a 1.0.0 release, with 1.0.0-RC1 now available!
Should be finalized just in time for ScalaDays - in a month! :)
Please test! Is anything missing? Or worse - broken?
Might seem like a tiny #Scala Metals feature, but it really makes a difference: the status bar tells you that the code in the module you're currently in won't compile (hence - no new highlighting), because there are errors in an upstream module.
Here, `core-test` won't compile because `core` has errors, but this of course generalises to multi-module projects with long dependency chains, not only tests.
Comparing #Java Streams & Jox Flows: similar APIs on the surface, so why bother with yet another streaming library for Java?
One is pull-based, the other push-based; one excels at transforming collections, the other at managing async event flows & concurrency.
https://softwaremill.com/comparing-java-streams-with-jox-flows/
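To make the pull-vs-push distinction concrete, here's a plain-JDK sketch (no Jox involved; the method names are mine, for illustration only) of the same pipeline written in both styles:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;
import java.util.stream.IntStream;

public class PullVsPush {
    // Pull-based: the consumer drives, demanding elements one by one;
    // this is how java.util.stream pipelines work.
    static List<Integer> pullBased(int[] xs) {
        return IntStream.of(xs)
                .map(x -> x * 2)
                .filter(y -> y % 3 == 0)
                .boxed()
                .toList();
    }

    // Push-based: the producer drives, emitting elements to a callback;
    // this is the shape async event flows naturally take.
    static void pushBased(int[] xs, IntConsumer emit) {
        for (int x : xs) {
            int y = x * 2;
            if (y % 3 == 0) emit.accept(y);
        }
    }

    public static void main(String[] args) {
        System.out.println(pullBased(new int[]{1, 2, 3, 4, 5, 6})); // [6, 12]
        List<Integer> out = new ArrayList<>();
        pushBased(new int[]{1, 2, 3, 4, 5, 6}, out::add);
        System.out.println(out); // [6, 12]
    }
}
```

Both produce the same result on a finite input; the difference only really shows once elements arrive asynchronously and concurrency enters the picture.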
@hktpe You tell me! :)
We did a couple of RAG experiments, but they usually failed to retrieve the appropriate fragment.
Since then, context windows have grown considerably, and I think that providing a lot of relevant context might work better.
The time has come even for JavaScript frameworks to mature; Bartek Butrym created a walk-through of a modern React & Next.js-based application architecture, everything from the folder structure, through tooling (linters), to validating forms!
https://softwaremill.com/modern-full-stack-application-architecture-using-next-js-15/
More MCP examples written using Chimp!
1. proxying to a service available via HTTP: https://github.com/softwaremill/chimp/blob/master/examples/src/main/scala/chimp/weatherMcp.scala
2. exposing multiple tools: https://github.com/softwaremill/chimp/blob/master/examples/src/main/scala/chimp/twoToolsMcp.scala
Will a custom rule saying "when using library XYZ, consult documentation at http://xyz.docs" be enough?
Supposedly LLMs work best with markdown. Should I then somehow point them to an AGENTS.md file which has the entire documentation (it might be long)?
What's the current "best practice"?
Is there some standard / adopted (as in: used by Cursor / Anthropic's Claude / coding agents) way to provide documentation for libraries?
My use-case: I've got a library which works best when using a certain approach to concurrency, error handling etc. So an MCP which has access to the API won't suffice. It's the general usage guidelines, not specifics, that I'd like to include in the context.
@NicolasRinaudo LambdaLeaks!
@NicolasRinaudo 🙈 I re-read the email but nothing is written there about not sharing, and I assumed that's what is not forbidden, is allowed ;)
So ... what IS functional programming?
The video of my talk @LambdaDays is up! :) https://www.youtube.com/watch?v=pnZSff01FYQ
Jox (#Java) & Ox (#Scala) - virtual thread-based safe concurrency & streaming - updates:
* `selectWithin` which guarantees that no elements are received from channels on timeout
* more `Flow` operators: `split` & `splitOn`
Find out more:
https://jox.softwaremill.com/latest/
https://ox.softwaremill.com/latest/
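For intuition on the new splitting operators, here's a plain-JDK sketch of splitting a finite sequence the way a flow might be split - assuming (my assumption, not the libraries' actual semantics or code) that elements matching a delimiter predicate start a new chunk and are dropped:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class SplitSketch {
    // Split `xs` into chunks on elements matching `isDelimiter`,
    // dropping the delimiter elements themselves.
    static <T> List<List<T>> split(List<T> xs, Predicate<T> isDelimiter) {
        List<List<T>> chunks = new ArrayList<>();
        List<T> current = new ArrayList<>();
        for (T x : xs) {
            if (isDelimiter.test(x)) {
                chunks.add(current);
                current = new ArrayList<>();
            } else {
                current.add(x);
            }
        }
        chunks.add(current);
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(split(List.of(1, 2, 0, 3, 0, 4), x -> x == 0));
        // [[1, 2], [3], [4]]
    }
}
```

Edge cases (leading/trailing delimiters, empty chunks) are where implementations differ - consult the Jox/Ox docs linked above for the exact behaviour of `split` and `splitOn`.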
Since commit messages are now computed from the source code (via LLMs), I'm catching myself not paying much attention to them.
Which reinforces a good practice: source code should capture all relevant information. If necessary, in a comment. Commit msgs should be just summaries.
Thank you Devoxx & LambdaDays! It's been a very intense three days, with great hallway/functional/Java/AI tracks :) Hope to see you next year!
The slides for both "Virtual Threads, 2 years later" and "What is Functional Programming" are available on my website: https://warski.org/talks/
Star the project, experiment, give us feedback & bug/feature requests!
The tight feedback loop provided by the compiler helps a lot - Cursor nicely iterates on the generated code, fixing any problems.
You do need to have quite a good idea how you'd like the code to look, to guide the generation, though. There are still limits to the magic ;).
Fun fact: most of Chimp's code is auto-generated using Cursor with the help of Scala Metals MCP server.
Tasks such as generating the MCP JSON-RPC model based on the TypeScript schema, writing tests, or "filling in" missing code given a non-compiling API sketch turn out to be a great fit for LLMs.
Chimp supports any Scala stack, as it generates Tapir endpoints, which can be used with any Tapir server interpreter (including ones based on cats-effect, ZIO, Pekko, or synchronous servers).
The input schema is automatically derived using the same mechanism as in Tapir.