I have come to appreciate that our expectations for GenAI models (language models and similar) are often wrong.
The technology is extremely powerful and impressive, but it is not like anything we have experienced or imagined before, so we have to adjust our expectations.
These models are trained on REPRESENTATIONS of things (language, photos, communications, etc.), not on the ACTUAL things themselves (ideas, emotions, logic, intent, and more). GenAI can infer emotions, but it doesn't actually feel them; it can communicate about logic, but it isn't inherently logical (without external logic modules, system prompts, etc. - the sketch just below makes this concrete).
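One hedged illustration of what an 'external logic module' can look like: instead of trusting the model to do arithmetic, have it translate a question into an expression and let a deterministic evaluator compute the answer. This is a toy Python sketch of the pattern, not any specific product's implementation, and the model output shown is hypothetical.

```python
import ast
import operator

# Toy tool-use pattern: the model proposes an arithmetic expression;
# a deterministic evaluator (not the model) computes the result.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    """Safely evaluate a simple arithmetic expression (+, -, *, /)."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# The model's job is to *translate* the question into an expression;
# the logic itself runs outside the model.
model_output = "1234 * 5678"   # hypothetical: imagine this came from an LLM
print(evaluate(model_output))  # 7006652 - deterministic, every time
```

The division of labor is the point: language in, language out for the model; logic delegated to a component that is actually built for it.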
This mismatch trips up our expectations because GenAI isn't like anything we have known before:
▪️ We cannot expect GenAI to be rigorously logical or deterministic, because the human communication it is trained on isn't logical (see the sampling sketch after this list)
▪️ We cannot expect GenAI to have an emotional core/identity/soul with overriding moral imperatives, because it is trained on our _expression_ of emotion, identity, etc. and doesn't actually _feel_ emotions
▪️ We cannot expect GenAI to act like what we have seen in Science Fiction (Commander Data from Star Trek, 'I, Robot', and many others), because most of those stories feature a purely logical machine (that can talk) struggling to learn human social and emotional processing (morals, personal identity, jokes, etc.). Real GenAI is closer to the opposite: fluent in the social surface, shaky on the logic.
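To see the 'not deterministic' point for yourself, here is a minimal sketch assuming the Hugging Face transformers library, with small GPT-2 as a stand-in for larger LLMs: sampling the same prompt twice typically yields different text, because the model draws from a probability distribution over next tokens rather than applying rules.

```python
from transformers import pipeline

# Minimal sketch: GPT-2 via the transformers text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of France is"
for i in range(2):
    # do_sample=True turns on stochastic sampling from the model's
    # next-token distribution; temperature controls how "adventurous" it is.
    out = generator(prompt, max_new_tokens=20, do_sample=True, temperature=1.0)
    print(f"Run {i + 1}: {out[0]['generated_text']}")
# The two runs will usually differ, even though the input is identical.
```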
As we integrate these models, we need to recognize _what they actually are_ and what they do well and poorly.
I think of GenAI as a genuinely 'new' entity that remixes a bunch of 'old' things we already know in a very different way, and I try to keep in mind how these models process information (see diagram) and their:
🔹 Machine-like execution - LLMs and similar tools are like any other machine (they execute at scale and speed whether the job is done well or badly), but their output is always grounded in the data they are trained on: human communications and publications (or, increasingly, AI mimicry of them).
🔹 Overconfidence - They are trained on our confident final results, not on the internal thought processes that led to those conclusions (whether correct, wrong, or a mix).
🔹 Movie-magic dynamic - They can mimic many aspects of human experience, but it's like a movie or TV show: it looks real and is often useful, but it isn't actually 'real'.
This is a framing I have been tinkering with to try and capture all of this simply - I would love your thoughts and feedback.