William Whitlow

Computer Scientist who through the workings of life and discernment ended up as a philosopher.

2026-01-27

Can someone clarify, in academia and industry are LLM hallucinations the result of overfitting, or simply a false positive?

I'm beginning to think that hallucinations are evidence of overfitting. It is surprising how few attempts there are to articulate the underlying cause of hallucinations. Also, if the issue is overfitting, then increasing training time and dataset size may not be an appropriate solution to the problem.
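To make the worry concrete, here is a deliberately toy sketch (not a claim about transformer internals, and all names and data are hypothetical): a "model" that has simply memorized its training pairs will still answer confidently on unseen prompts, pulled toward the nearest memorized association, with no notion of "I don't know".

```python
def similarity(a: str, b: str) -> float:
    """Crude word-overlap similarity between two prompts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# Hypothetical "training set" the model has overfit to.
TRAINING_DATA = {
    "capital of france": "Paris",
    "capital of japan": "Tokyo",
}

def answer(prompt: str) -> str:
    # Exact recall on training data: looks impressive.
    if prompt in TRAINING_DATA:
        return TRAINING_DATA[prompt]
    # Unseen prompt: still answers, pulled toward the nearest
    # memorized association -- a loose analogy for hallucination.
    best = max(TRAINING_DATA, key=lambda p: similarity(prompt, p))
    return TRAINING_DATA[best]

print(answer("capital of france"))    # memorized recall
print(answer("capital of atlantis"))  # confident fabrication
```

If this analogy holds, more data just enlarges the memorized surface without adding the missing refusal behavior.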

#AI #ML #llm

2026-01-26

@avandeursen
One of the greatest challenges is defining the intelligence goal. The replication of human intelligence certainly seems beyond LLMs, but is that the goal? The vast accumulation of data and statistical associations has been a tremendous success for LLMs. While it makes me fear overfitting, it can sometimes show near-superintelligent recall from association alone.

Who determines intelligence? Developers? Psychologists? Philosophers?

#AI #philosophy #psychology

2026-01-26

@chris_e_simpson

Is there a particular article you have in mind?

I ask because the terminological ambiguity on this topic has really begun to fascinate me. As the topic has gone more and more mainstream, it has been difficult to sit back and listen to people talk about #AI with no idea what the underlying algorithms are doing. There seems to be a gap between how general users understand #AI or #ML and how developers understand them. My fear is that this gap is what is causing so many issues right now.

2026-01-22

@abucci

I like that analogy.

Unfortunately, I never got into functional programming, so I had to look up hylomorphism as a CS concept. I have found that computational and philosophical terms often have a high degree of similarity, but not so much here.

Aristotelian hylomorphism supposes that substances are composed of matter and form, wherein form is the principle of intelligibility and provides the end. The intellect extracts the form from material beings; hence language alone cannot suffice.

2026-01-22

@abucci

I had failed to challenge the base assumptions. All the talk of #GenAI had led me to adopt that very language while trying to critique it. There is the reality that the prompt from the user is necessary, but all of the creativity is explicitly encoded into the dataset. I suppose, then, that the generative aspect derives more from the trade secrets of dataset creation.

Thank you. This is where I am trying to think out loud and work on refining my own ideas and how to communicate them.

2026-01-22

@abucci

I have not read Hans yet, but I should probably add his work to my list.

I consider it fortunate to find myself indebted to the Aristotelian metaphysics of hylomorphism, which, while it does not address all questions, certainly helps to ground contemporary philosophy in a realist tradition. In that regard, one of the shortcomings of LLMs is the belief that associations of words without experience constitute knowledge, rather than language deriving from experience.

2026-01-22

@abucci

I suppose to a certain extent this is the epistemological question of LLMs. Does data yield true understanding, or merely statistical associations? This insight seems to be growing in popularity, but now has to climb the mountain of sunk cost from investments. As such, many of these conversations are on the fringe of being impossible to even have.

The greatest irony is believing we have moved beyond these challenges, whereas the reality is we have merely stopped engaging with them.

2026-01-22

@SteveThompson

Continuing to think on this, as I'm sure you are aware as well, the next few months could be very pivotal. I remember early on an airline's chatbot hallucinated coupons that the airline was mandated to accept. Now multiple wrongful death cases have been filed recently. These have tremendous potential to define AI liability moving forward. Although it makes me wonder how much Tesla might have already undermined this with its self-driving-related suits.

2026-01-22

@SteveThompson

Yes, it is scary to acknowledge that professionalism in software engineering only remotely begins to creep in after incidents like the 737 Max 8 and the Therac-25. Watching software sprint forward with little to no concern over ethics simultaneously makes conversation about it feel futile and of the utmost importance. Since in the absence of conversation the failures are all but inevitable.

2026-01-22

@SteveThompson

I wonder what the necessary steps to balance this might be? It seems to be an example of how the design parameters that have been chosen in training are actually not the best overall for the end user. However, concern over this is often focused on impact more than on reconsideration of design principles. A sad and frustrating reality as it means more people are going to end up hurt by these tools that we are sprinting to make ubiquitous.
#ai #ethics #philosophy

2026-01-22

Open Question:
What is the end of AI systems?

Like the end for a car is transportation. Therefore, we continue to iterate on the design to improve power, efficiency, or style.

Can a similar focus on end be applied to AI? Some seem clear, like AlphaGo, self-driving cars, medical imaging, etc. Really the challenge seems to be with LLMs. Is the lack of a clear end contributing to the misuse and harm caused by LLMs?

#AI #chatbots #LLM #philosophy #openquestion #discussion

2026-01-21

6/
Summary:
Data clustering - Full dataset remains constantly available (association-based learning/prediction)

Functional creativity - No foreknowledge; acquires a policy through experimentation. Evaluation is based on performance that can be verified externally

Process generative - Information from the dataset is encoded into the model. Outputs for new inputs are based on associations learned in training. One-time training for all use cases.
#AI #philosophy

2026-01-20

5/ Process generative:
These are models like ChatGPT and DALL-E. Their goal is to generate new responses to text or image inputs from the user. As such, they are trying to encode the entirety of language or visual representation. The recent success in this context has enabled the explosion of #AI services. The challenge of encoding all this information helps explain many of the shortcomings. These models operate based on relations learned from a large dataset; everything develops from this dataset.
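The "everything develops from the dataset" point can be shown at a toy scale: a bigram model whose only generative capacity is replaying word associations learned from its (tiny, made-up) training corpus. Real LLMs are vastly more sophisticated, but the dependence on learned associations is the same in shape.

```python
import random
from collections import defaultdict

# Hypothetical miniature "training corpus".
corpus = "the cat sat on the mat the cat saw the mat".split()

# Learn the associations: which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Generate text by sampling only from learned associations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:   # no learned association: the model is stuck
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 6))
```

Every word the model emits, and every transition between words, existed in the corpus; nothing outside the dataset can appear.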

2026-01-20

4/ Functional creativity:
This is a more novel way to describe reinforcement learning. It acknowledges how models like AlphaGo and AlphaFold are able to develop revolutionary advancements through policy adaptation. The important aspects are a well-defined scope and a definitive evaluative framework. The game Go and protein folding fit these qualifications well. The challenge with these models is the process of creating/defining the environment for policy development. #AI #philosophy
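A minimal sketch of "a policy acquired through experimentation": tabular Q-learning on a hypothetical 5-cell corridor. Nothing like AlphaGo's scale, but the same shape -- act, observe a verifiable reward, update a policy. The environment, reward, and hyperparameters here are all invented for illustration.

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]   # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: mostly exploit, sometimes experiment.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0   # externally verifiable reward
            # Q-learning update toward observed reward + estimated future value.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)   # learned policy: step right in every state
```

The well-defined scope (five states) and definitive evaluation (reward at the goal) are exactly what makes the policy learnable; removing either breaks the setup.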

2026-01-20

3/ Data clustering:
From a popular perspective, this is the least interesting of the distinctions. These models categorize new data points based on readily available historical data. Taking a large number of arbitrary data points, these models are able to organize them according to a degree of similarity. Then when new data points are added they are similarly categorized. These models find their true use in data science. Hence their lack of popular appeal. #AI #philosophy
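The clustering idea fits in a few lines: group arbitrary points by similarity, then categorize new points by their nearest learned group. Below is a tiny 1-D k-means sketch with made-up data, intended only to illustrate the distinction, not as a production recipe.

```python
def kmeans_1d(points, k=2, iters=20):
    """Cluster 1-D points into k groups by iterative refinement."""
    centers = points[:k]                      # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assign to nearest center
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def assign(p, centers):
    """Categorize a new data point by its nearest cluster center."""
    return min(range(len(centers)), key=lambda i: abs(p - centers[i]))

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers = kmeans_1d(data)
print(sorted(centers))       # -> [1.0, 10.0]
print(assign(9.5, centers))  # new point joins the ~10.0 cluster
```

Note that the full dataset is in hand the whole time, and new points are simply slotted into the structure it already revealed -- the "constantly available" property from the summary.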

2026-01-20

It seems to me that when we discuss #AI we are often discussing an application like ChatGPT. However, this is only one type of AI model. I would like to propose a three-fold distinction for discussing AI from a philosophical perspective. Each distinction is a different way of processing data, with time often being the varying factor. The three distinctions can be called data clustering, functional creativity, and process generative. #philosophy 2/

2026-01-20

I’m curious to know how often the distinction between the various types of algorithms has been made. There has been a lot of discussion about #ai. Yet this is often used as a very generic term to describe everything from ChatGPT to AlphaFold, without much distinction that the underlying algorithms are often very different. This difference seems like something that should be obvious, and yet it is not a common distinction in articles. Is this well known, or a distinction worth exploring? #philosophy

2026-01-19

I am eager for discussion opportunities that focus on the intersection of these two topics. Especially as clarity regarding the rapid evolution of technology is difficult to come by at this time. It is my hope that this format becomes an interesting place for discussions. Discussion being the important process through which ideas are refined, and truth is better known. 3/ #introduction #ai #philosophy

2026-01-19

With that in mind, my main areas of focus will be computing and philosophy. My undergraduate studies were a well-refined focus on computer networking and machine learning algorithms, whereas in graduate school I studied philosophy. The result, I hope, has been an interesting perspective from increasingly watching AI researchers and philosophers adopt each other's terms. While much of it has been good, sometimes there is a need for further discussion. 2/ #introduction #ai #philosophy

2026-01-19

Intro Post: I've been away from serious social media use for the past several years. Frustrations over algorithms and data use really pushed me away. That's where excitement over Mastodon is beginning to grow. Control over data and lack of algorithms are two aspects that I'm hoping will offer the opportunity to muse through various thoughts. Along with the public discourse to provide feedback and help refine them. Thoughts posted here are developing, not finalized. 1/ #Introduction
