Another update on my #llm / #ai journey.
While writing my first #luanti mod tonight, I pulled up my local #ollama instance in a browser tab through my fork of `django-ollama`.
I did this because I think my #vim integration is still misconfigured. More on that later.
Anyway, llama3 batted about .500 on being useful for the evening's requests.
First I asked it to prototype the mod in #lua, based on my description.
(If you're curious about the mod specifics, there's an [overview on my site](https://edward.delaporte.us/blog/luanti/garden/).)
I think this kind of request **could** go well later, but llama3 was either hallucinating wildly or simply writing code for a very outdated version of Luanti.
In fairness, Luanti had a big API update in the last year or so. I will try this again in a few months.
So I read the Luanti core API docs manually to get what I needed.
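For flavor, here's roughly the shape of what registration code looks like in the current API once you've read the docs. This is a hedged sketch, not my actual mod code; the node name, texture, and groups are made up for illustration:

```lua
-- A minimal node registration in current Luanti (illustrative names, not my real mod).
-- Current docs favor the `core` namespace; a lot of older code uses the legacy
-- `minetest` alias, which still works but dates the code.
core.register_node("garden:raised_bed", {
    description = "Raised Garden Bed",
    tiles = {"default_dirt.png"},      -- texture name borrowed from the default game
    groups = {crumbly = 3, soil = 1},  -- diggable with a shovel; behaves as soil
})
```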
Later, I asked llama3 really basic questions about Lua syntax. [I know a number of languages](https://edward.delaporte.us/code/), but Lua is not one of them.
This was a great experience. Ollama with llama3 reliably answered my syntax questions, and for me it was probably faster than searching the Lua docs, since I am unfamiliar with Lua's doc pages.
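To give a sense of the level I was working at, these were table-and-loop basics. A quick sketch of plain Lua (nothing Luanti-specific, and the values are just examples):

```lua
-- The flavor of syntax questions I was asking: tables, 1-based indexing,
-- iteration, and string concatenation.
local fruits = {"apple", "plum", "fig"}   -- tables are Lua's one data structure
fruits[#fruits + 1] = "pear"              -- arrays are 1-indexed; #t is the length

for i, name in ipairs(fruits) do          -- ipairs walks the array part in order
    print(i .. ": " .. name)              -- .. concatenates strings
end

local bed = {width = 4, depth = 2}        -- tables double as records/maps
print("area: " .. bed.width * bed.depth)  -- bed.width is sugar for bed["width"]
```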
Overall, I can now rescind my complaint from earlier today that I'd seen no return on my time investment.
My hours of time invested have now saved me a few minutes, and I am well on my way to enjoying the [Automation Curve](https://xkcd.com/1319/).
Joking aside, this really is cool stuff.

