Day 1/10 coding with generative A.I.
If you missed it: last week I was complaining that my boss came up with a green-field project to evaluate "how helpful A.I. is for daily coding and if, when, where and how it can speed us up".
So today I sat down with a heavy heart and wrote my first ever A.I. prompt. The project is pretty stupid (IMO). It's about setting up a database and filling it with structured data from JSON exports. At least that's the part I'm building; my two colleagues will build a backend API and a frontend to visualize the data.
I'm using Anthropic's Claude Code (no endorsement) and I used it to generate the whole thing in Python. I haven't looked at everything yet, but it looks reasonable and, with just a few small adjustments, I could make it work. Out of curiosity, I threw the error messages at Claude without any suggestion other than "fix it" and... it did. It even explained correctly what it changed and why.
I now have working Python code to read the JSON input, transform it slightly and dump it into a PostgreSQL DB. I also have (insanely verbose) project documentation, tests, and a Jenkins pipeline that I hooked up and got working within minutes.
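In case you're curious what that part actually looks like: the core of it boils down to something like this. (A simplified sketch from memory, not the generated code itself; the table, column names and connection details here are made up.)

```python
import json

import psycopg2  # assuming the usual PostgreSQL driver


def load_export(path: str, conn) -> int:
    """Read one JSON export, reshape each record slightly, and insert it."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)

    # The "transform it slightly" part: pick out a few fields and keep the
    # rest as a JSON blob. Field names are invented for this sketch.
    rows = [
        (rec["id"], rec["name"], json.dumps(rec.get("attributes", {})))
        for rec in records
    ]

    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO exports (id, name, attributes) VALUES (%s, %s, %s)",
            rows,
        )
    conn.commit()
    return len(rows)


if __name__ == "__main__":
    # Placeholder connection string; the real one comes from config.
    with psycopg2.connect("dbname=demo user=demo") as conn:
        print(load_export("export.json", conn), "rows inserted")
```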
.
.
.
And I'm sooooo conflicted. If you've been following me for a while, you probably know my stance on generative A.I.
I absolutely hate it. I rarely hate things, but I do hate A.I. I have various reasons that I don't want to get into right now.
And I was kinda hoping that it would be... bad. I was expecting a terrible experience: borderline unreadable spaghetti code, flaky tests, security issues left and right, and a CLI that would just crash no matter what you do.
I got my crashes alright, but the rest of it seems pretty okay. 😢 And the crashes were rather easy to fix; the model even found the errors itself after I prompted it again.
It feels like the only thing left to do is make the documentation less verbose and more readable for an actual human. 😩
I recognize that this is a very small green-field project in a popular programming language with a clearly structured, machine-readable input to start from.
And nothing, absolutely nothing, that I got was something I couldn't have written myself.
But I cannot possibly write code this fast. I haven't even managed to read everything yet.
And despite all my expectations, I ended up learning new things. I saw some pretty cool pieces of code that I hadn't even thought of.
I am disappointed, because I kinda found myself enjoying the experience. And that goes strongly against my belief that, ethically, A.I. is one of the worst things to have happened to humanity in recent history.
We'll see what the next few days bring, but I honestly don't think that there's much more work to be done on my end.
I'm not proud of the code and I don't fully understand it, but it looks like something I could maintain long-term, and I'm shocked at how quickly I was able to nail this together.
At least I got some "real" coding in when I solved today's AoC puzzle earlier. I wouldn't cheat myself out of solving a puzzle, if that's what you thought.
9 more days...
Sorry if this just trails off... My head is still reeling and I can't quite put my feelings in order yet.