**Some thoughts on my AI/LLM usage**
Read it on my blog; it has a nicer image/text layout.
This time I will skip my usual (energy, water, ethical) rants and criticism of AI/LLMs (large language models such as ChatGPT) and write about how and when I use them.
It would be hypocritical of me not to admit that I sometimes use them too. I use them in the privacy of my office with a lot of shame, and I deeply despise public posts like 'Look what I made with AI'. No, you didn't make anything, OK?
Stories? Naaa.
When ChatGPT was released, I played with it and was mildly amused by the stories it could spit out. After a few tries, my interest in generating stories and poems with it faded. I would rather write stories myself, because I enjoy it. I really don't need something that pretends it can replace my creative activities.
Learning content? No.
Then, I experimented with it for creating learning content. I could re-use some story-like narratives, but the professional/expert parts … weren't usable. They sounded real and authoritative, but they were wrong. Usually in some details and nuances, which spoiled the meaning of the text (for those who know what they're reading). So I ditched it for this purpose (writing learning content).
Project proposal? Not a chance.
Then, I thought I could use it to help me write project proposals. I ditched it because it generated boring, over-chewed ideas. No, I don't need help generating ideas; I have enough of them. The problem isn't new ideas, the problem is everything else (finding good consortium members, balancing the budget and activities, etc.). Moreover, I sometimes act as a proposal evaluator. If I detect that something was generated with LLMs, I will be more critical than usual and reject it.
Improving text/language?
I tried using it to improve the language of my texts, because English is not my first language. But then everything sounded the same. I ditched it. I no longer mind if my English sounds Slavic or strange.
Exploring what is out there? Maybe…
I found a potential use for LLMs. In my spare time I tinker with my homelab a lot. There is a never-ending stream of technical challenges. For example: which filesystem should I choose (ext4 or ZFS)? How do I stop my monitor from flickering?
After I exhaust various forums, I ask ChatGPT or Mistral. It usually gives me ten wrong answers and maybe one with potential.
For example: I was testing Ubuntu with Firefox, and when I scrolled with my mouse, it didn't detect the first tick after I reversed the scrolling direction. Quite a niche issue. ChatGPT gave me at least 10 ways to resolve it. For 7 of them, I knew at first glance that they were nonsense. For 4 of them, I wasn't sure. One of them was: check whether your desktop runs X11 or Wayland. It was X11. Then it gave me instructions on how to switch to Wayland. It worked.
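For anyone with a similar issue, the check-and-switch step can be sketched roughly like this. This is a minimal sketch, assuming Ubuntu with GDM as the login manager; your setup may differ.

```shell
# The login manager exports XDG_SESSION_TYPE as "x11" or "wayland",
# so checking the current display server is a one-liner.
session_type() {
  case "${XDG_SESSION_TYPE:-unknown}" in
    x11)     echo "X11" ;;
    wayland) echo "Wayland" ;;
    *)       echo "unknown" ;;
  esac
}

session_type

# Switching: on Ubuntu with GDM, Wayland is available when the line
# "WaylandEnable=false" in /etc/gdm3/custom.conf is commented out.
# After editing, restart the login screen (this ends the current session):
#   sudo systemctl restart gdm3
```

Then pick the Wayland session from the gear menu on the login screen.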
If I had tried all of the proposed solutions, I would probably have f**ked up some configuration and been unable to roll it back (yeah, that has happened).
I don't even know how to explain this type of usage. Brainstorming? No. Exploring possible options (wrong ones included)? Not sure. A very vague and broad search? Maybe. Sensing what kind of information exists?
Local models and home assistant?
The next thing I'm planning to use LLMs for is HomeAssistant: interacting with it, recognizing objects on my webcams, finding patterns in energy usage, and similar. I haven't started yet (except for studying some projects).
Prompt engineering?
Don't get me started on this. Chatting with a bot, typing questions, and adding context to them is not and will never be 'engineering'. Let's skip this phrase, pretty please.
To ask smart questions, a person doesn't need prompting skills, but a good background/expertise and years of study and experience in a real profession. In Computer Science we call it (for example) Requirements Engineering. Knowing how to ask questions about a specific field is only the skin on the milk.
I definitely cannot ask good questions about … marine biology, for example, because I know nothing about it. Whatever an LLM generates about it, I could take as a good answer, because it sounds authoritative. But I know it's not, because I'm aware of my limitations and its limitations.
Do I sound like an old grumpy guy yelling at clouds?
Probably. I can fully understand the attraction of LLMs, which can help not-so-bright people sound more competent than they are.
But still, I can hardly wait for the 'AI' hype to pass and crash.
I can hardly wait for better energy/water consumption regulation, better ethical/authorship regulations, usage transparency, source data disclosures, and possibly compensation for authors. Don't @ me with false dichotomies like 'scraping content is like reading' or 'scraping content is like search engine indexing'.
I can hardly wait for a change in the direction of 'AI' development. I wouldn't mind if all 'AI' development efforts went into cancer or other disease research. But generating funny images, authoritative but wrong texts, and videos that look like cheap spam? No, I don't need that.
Until then, I will:
- disable scraping of my content by AI bots (I already did that on my reverse proxy)
- limit my use of big corporations' LLMs for answer generation and focus on local models that could help me pick up my dog's shit or do the dishes
- treat it as an illusion, or hopium if you want, because that is how it is being sold. It gives people hope that something is different (better?) than it is.
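The first point above can be sketched as a reverse-proxy rule. This is only an illustration, assuming nginx as the reverse proxy; the user-agent list here is a small example, not an exhaustive or verified list of crawlers, and it only stops bots that identify themselves honestly.

```nginx
# Inside a server { } block: refuse requests whose User-Agent
# matches known AI crawlers (case-insensitive match).
if ($http_user_agent ~* "(GPTBot|ClaudeBot|CCBot|Google-Extended|Bytespider)") {
    return 403;
}
```

A robots.txt with matching `User-agent` / `Disallow` entries covers the polite crawlers; the proxy rule covers the ones that ignore robots.txt but still send an honest user agent.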
Image source (Open Clipart Vectors)
Tags: #AI #LLM #ethics
https://blog.rozman.info/some-thoughts-on-my-ai-llm-usage/