https://allismotion.org/seti-is-archaeology-signal-science-across-spacetime/
I participated in the quantum experiment described in this paper by Satoru Watanabe.
The paper reports experimental evidence of nonlocal correlations between EEG signals and quantum states, offering a new empirical approach to the hard problem of consciousness.
What struck me most was how it rethinks the role of the observer and subjectivity in physical measurement.
I’d be glad to exchange thoughts with anyone interested.
If usefulness isn’t a guide to what’s real, what is?
Seems like I’ve been writing a lot about quantum mechanics lately. Apparently so have a lot of other people. One thing that keeps coming up is the reality or non-reality of the quantum wave function. Raoni Arroyo and Jonas R. Becker Arenhart argue for non-reality in “Quantum mechanics works, but it doesn’t describe reality: Predictive power is not a guide to reality.” (Warning: likely paywall.)
Along similar lines, in an article about what he says are quantum myths, Ethan Siegel argues that superpositions are not fundamental to quantum physics:
Superpositions are incredibly useful as intermediate calculational steps to determine what your possible outcomes (and their probabilities) will be, but we can never measure them directly.
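To make that concrete, here’s a minimal textbook-style sketch (my notation, nothing specific to Siegel’s article): a qubit prepared in a superposition only ever shows up as one outcome, with the amplitudes fixing the probabilities.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
P(0) = |\alpha|^2, \qquad P(1) = |\beta|^2
No single measurement ever reveals \alpha or \beta directly; they only show up in the outcome statistics, which is the sense in which superpositions function as intermediate calculational steps.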
Arroyo and Arenhart take a similar line. They argue that it would be more intellectually honest for wave function realists to call their position wave function pragmatism. As they note in the title of their piece, they don’t see predictive success as a guide to reality.
The question I want to ask these people is, if predictive power, if usefulness, isn’t your guide to what is real, then what is?
It’s worth thinking about why we care whether something is real or not. Is the sound I’m hearing from outside rain? Is the rain real? To say it is, is to say I need to take an umbrella with me when I go outside, or be prepared to get wet. To say it isn’t, is to say I can walk outside without worrying about getting wet. We get similar considerations when trying to decide whether a stock rally is real or illusory, or, from an evolutionary perspective, whether the sound in the bushes is a real predator or just a figment of the imagination. Reality is that which makes a difference, something there’s a possible cost to ignoring.
Admittedly, this is a strange point to make when talking about quantum states. It might seem like whether they’re real has little to no bearing in our daily lives. But they do seem to make a difference for experimenters and quantum computing engineers. They have to take the dynamics implied in these mathematical tools seriously. In the case of quantum computing, it’s the very dynamics that seem to enable what they’re trying to do. Failure to treat them as real has consequences.
Now, I’m a structural realist. I think what we can count on being real in successful scientific theories are the structures they describe, at least to some level of approximation. That doesn’t mean we can count on them being fundamental, or that we know what they may be structures of. This is particularly important to remember with quantum theory, where the structures are all we currently have.
Does that mean that, rather than being structures of objective reality prior to a measurement, they could actually be structures of subjective expectations as the QBists argue? Or of the way the experimental equipment has been set up, as other antirealists argue? I suppose so. But that seems to imply the possibilities are completely set by these expectations or preparations, that if scientists really wanted to, they could get any result they wanted.
In practice, something seems to constrain the possible results. Of course, if I put on the epistemic hat, I could argue that those constraints are constraints on the scientists’ thoughts (QBism) or practical equipment limitations (other epistemic views), not anything in the quantum realm. But taken literally, that seems to imply that quantum physics is a big illusion, a side effect of the way scientists think or construct experiments. If so, how could anyone be sure that any scientific measurements beyond human senses are to be trusted?
All of that is before remembering that if we think anything objective at all is happening in the physics prior to a measurement, there are mathematical theorems which kick in and demonstrate that quantum states must describe something real. Epistemic interpretations of quantum mechanics, such as Copenhagen, QBism, and RQM, avoid this by saying there is no such objective physics prior to measurement (or interaction). Which, to me, makes calling them “epistemic” misleading. QBists in particular argue for a “participatory reality,” a notion they inherited from John Wheeler’s “it from bit” idea.
This selective application of antirealism has always felt like gerrymandering to me. Most of the proponents want to resist the idealism label, but they seem to want to take from metaphysical antirealism just what they need to avoid quantum state realism. It all feels forced.
Interestingly enough, that doesn’t appear to have been Niels Bohr’s take. Historians often argue that he was more of a neo-Kantian than either an instrumentalist or idealist. His take seemed to be that the quantum realm was real, but inaccessible, the noumena always beyond the phenomena. Of course, this predates the theorems I mentioned above, which is what forces stronger stances from contemporary epistemic proponents.
But my issue with the Kantian view is it pushes reality into something utterly and forever unknowable. Reportedly, Kant’s motivations for doing this were to preserve space for God, the soul, free will, and morality in response to the “Crisis of the Enlightenment,” which seemed to call all of those things into question. I suspect neo-Kantians are trying to preserve different things, but that kind of preservation likely remains part of their motivation.
But the cost of doing so is to remove the practical aspects I noted above when deciding what’s real or not. In my view, it removes any utility from the concept of reality, except for talking in terms of theology or overall metaphysics.
Which may be why Arroyo and Arenhart want to use the word “pragmatic” instead. I think a better strategy is to retain our grounded everyday meaning for “real,” but admit that we never know whether we’ve reached ultimate reality. But this is coming from someone who doesn’t share the Kantian or neo-Kantian concerns.
Overall, my theory of reality is pragmatic. But I continue to wonder, for the people arguing against that take, what standard are they using?
What do you think? Are there issues with a pragmatic take on reality I’m overlooking? If so, what would be a better standard?
#antirealism #Philosophy #PhilosophyOfScience #Physics #QuantumMechanics #realism #structuralRealism
I’ve shared a paper on SSRN proposing a meta-level framework for the conditions of existence, situated between philosophy of science and ontology.
The work explores how coherence, interaction, and complexity function as necessary conditions for any system to exist at all.
Thoughtful critique is welcome.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5847503
#philosophy #philosophyofscience #ontology
Why I’m a reductionist
The SEP article on scientific reductionism notes that the etymology of the word “reduction” is “to bring back” something to something else. So in a methodological sense, reduction is bringing one theory or ontology back to a simpler or more fundamental theory or ontology. The Wikipedia entry on reductionism identifies different kinds: ontological, methodological, and theory reductionism. I think the ontological one is the most interesting here, the proposition that all of reality consists of a small number of building blocks.
Most reductions aren’t particularly controversial, at least not in science. There aren’t many arguments that chemistry doesn’t reduce to physics, or geology to both those sciences. Today it’s not controversial that biology reduces to them as well, although this is a relatively recent development.
As late as the early 1900s, there were people arguing that life was somehow different, that it was distinguished by a vital force, an ancient idea. Few talk about vital forces today. Biologists learned about evolution through natural selection, genetic inheritance, proteins, DNA, RNA, and overall organic chemistry. Life is now seen as largely a molecular chemical enterprise, albeit a hideously complex one.
This raises an important point. Most reductions are conservative, retaining the reduced concept, but not all. Sometimes it’s eliminative, as in the case of a vital force, or other things like phlogiston or a luminiferous ether. It seems to depend on whether the reduced concept remains useful.
Today there remain at least two areas where people tend to resist reductionist accounts: consciousness and quantum measurement.
The consciousness one goes back to René Descartes’ famous distinction between mental and physical substances. Descartes saw no issue with a mechanistic understanding of reality, except for the mind, which he could not conceive of being reducible to mechanisms. He was far from alone. Gottfried Leibniz presented his mill thought experiment: if the mind were a mill we could walk into, we wouldn’t find anything inside that explains perception. The mind, he agreed with Descartes, had to be a different kind of thing entirely.
Although a lot of what these guys saw as irreducible has since been reduced. Today, psychological concepts like memory and cognition are understood to be neural processes, albeit with many still-unanswered questions. But contemporary philosophy of mind often draws a new line at perceived characteristics, typically called qualities or qualia. Because these characteristics are introspectively opaque, they seem irreducible. And since studying some of them has proven hard, many assume they’re fundamentally inaccessible to anyone but the subject.
The question is whether the notion of fundamental qualia really explains anything. Does it convey meaningful information? Certainly qualities understood as just perceived characteristics seem useful enough. But regarding them as fundamental seems to obscure rather than convey information.
As a reductionist, I think of qualities as categorizing conclusions. (If that seems radical, consider that the Latin root “qualis” means “of what kind.”) Our nervous system qualifies a stimulus for a category when a particular range of neural firing patterns triggers a galaxy of associations, some innate, but many learned, which collectively add to the richness of the experience of that perceived characteristic (redness, sweetness, pain, etc.).
Am I completely confident this is the answer? No, but as an explanation, it seems like a more fruitful place to explore. I suspect future scientific studies will validate some aspects of it, but not others. But even if it’s completely wrong, these kinds of theories seem to spur more experimental work than simply assuming qualities are fundamental and inaccessible.
In the case of quantum mechanics, it’s observation that’s often taken to be fundamental. In its strongest forms, this ends up pairing with the idea of consciousness being fundamental, although the more cautious variants see just measurement (or interaction) as fundamental. This can be the idea that quantum states don’t really exist, that measurement itself creates reality, or that quantum states do exist but physically collapse in a measurement, a fundamental change in reality.
In the early years of quantum theory, something like these views seemed inescapable, and most of the physics community closed ranks around them. But there were holdouts, including Albert Einstein and Erwin Schrödinger, who kept digging, discovering the phenomenon of entanglement, which would later be used by David Bohm and Hugh Everett to posit mechanistic explanations for the disappearance of quantum effects. But it was the work of H. Dieter Zeh and Wojciech H. Zurek in the 1970s and 80s that really fleshed out the detailed explanation we now call decoherence.
Today, few question whether entanglement and decoherence happen, although many do continue to argue that they’re only useful mathematical tools. Even if they are real physical processes, whether they serve as a full explanation of what’s happening in measurement depends on your preferred interpretation of quantum mechanics. But the key thing is it’s an explanation that wasn’t found by those who were satisfied with measurement being fundamental.
Which gets to why I’m a reductionist. I can’t prove that ontological reductionism is true. Maybe there are unique aspects of reality that aren’t built on a few common building blocks. But there seems to be a lot of history showing that assuming it’s true is far more fruitful than assuming complex concepts are fundamental. From Thales positing that water was the fundamental substance to later Greeks assuming there were four fundamental elements, the history of assuming anything is fundamental seems cautionary at best.
Which is why when I hear “X is fundamental,” I’m reflexively skeptical. We can’t even confidently say that about “elementary” particles, quantum fields, space, or time. We only seem able to talk in terms of something being more fundamental or less fundamental. Scientific theories are always provisional, subject to change on new data. Absolute fundamentality seems like an assumption we can never justify. Calling something fundamental seems to say, “There’s nothing left to explain here. Stop digging.” A lot of progress seems to come from people who ignore these prescriptions.
What do I mean by “progress”? None of this is to argue that higher level concepts aren’t useful; thermodynamics, for instance, didn’t cease being useful once it was reduced to the statistical mechanics of particles. Or that holistic takes on phenomena can’t be beneficial. Or that in art or daily life, we can’t appreciate things without reducing them.
But reduction aids in acquiring more structurally or causally complete explanations, while assuming something is fundamental often seems to paper over structural or causal gaps. Closing these gaps, when achievable, provides more reliable knowledge, knowledge which gives us new abilities, abilities such as medical scanners, drugs, computers, and many other things. Yes, that does include nuclear weapons and other ills. It doesn’t seem like we can have the good without the bad, although usually the bad can be managed with more reliable knowledge.
At least that’s my view today.
What do you think? Are there benefits to non-reductive approaches I’m overlooking? Or drawbacks to reductionism I’m missing? If you think an alternative approach is better, what are the benefits of that alternative?
#Philosophy #PhilosophyOfMind #PhilosophyOfScience #reductionism #Science
A new paper argues that Gödel’s incompleteness theorems and related results imply a purely algorithmic “Theory of Everything” is impossible 🌌. Undecidable truths in physics point to a “Meta-Theory of Everything” grounded in non‑algorithmic understanding 🧠. This also suggests the universe cannot be a simulation 🚫💻. Read more: https://jhap.du.ac.ir/article_488.html #Physics #QuantumGravity #TheoryOfEverything #PhilosophyOfScience
tl;dr Gödel killed the #SimulationTheory merry #xmas 🤣🤣🤣
Everything is a quantum wave?
In the last post, I discussed Amanda Gefter’s critique of Vlatko Vedral’s view that observers have no special role in reality. Conveniently, Vedral published an article at IAI discussing his view: Everything in the universe is a quantum wave. (Warning: possible paywall.) Vedral puts his view forward as a radical new interpretation of quantum mechanics.
As a quick reminder, the central mystery of quantum mechanics is that quantum particles seem to act like waves, with different portions of the wave interfering with each other, but when measured, they behave like tiny localized balls. This is known as the measurement problem.
There are numerous interpretations of what’s happening here. But they seem to take one of three broad strategies. The first simply rejects that the waves are real, instead insisting that they are only probabilities, albeit probabilities which evolve deterministically and interfere with each other. In other words, it’s all happening in our mind. In its stronger incarnations, this has idealist or semi-idealist aspects, claiming that observation or interaction creates reality. These are the approaches in the epistemic versions of the Copenhagen Interpretation and its descendants, like QBism and RQM (relational quantum mechanics).
The second strategy is to add new structure to wave mechanics. Due to Bell’s theorem, these additions must be non-local in nature, that is, they must involve “spooky” action at a distance. The ontic version of Copenhagen takes this approach when it adds a physical collapse, as do its variations and descendants like consciousness-causing-the-collapse and other objective collapse theories. Another version of the second strategy is what are historically called “hidden variable” approaches, like Bohmian Mechanics (pilot-wave theory), where there is both a wave and a particle the entire time, with the wave guiding the particle.
The third strategy is to accept the mathematical structure of quantum theory as a full account, or one only requiring a few ancillary assumptions. This became easier with the development of decoherence theory in the 1970s, an extrapolation of quantum wave mechanics, in essence quantum entanglement en masse, that explains why quantum interference disappears at larger scales. It’s the approach Hugh Everett proposed, which eventually became known as the many-worlds interpretation.
And it’s the strategy Vedral uses for his interpretation, which he characterizes as “many-worlds on steroids.” Although he dislikes talking in terms of other worlds, noting that the classical worlds are only a small slice of the possibilities. He prefers to talk in terms of one world but with quantum mechanics being universal, applying at all scales.
Vedral makes a point I made in the last post, that under this universal quantum waves approach, an observation is just two quantum systems becoming entangled, that is, becoming correlated in certain ways. A reminder: entanglement is when each of the superposed states of one quantum system becomes correlated with a state of the other system. In other words, for each state in the first system, there is a correlated state in the second. The two systems are now part of the same wave function.
Vedral notes this could be characterized as the quantum particle observing the measuring device as much as the device is observing it. In this view, entanglement is what the apparent collapse looks like from the outside, and collapse is what entanglement looks like from the inside. So contra Gefter’s stance, there’s no special role for observers, at least unless by “observer” we mean everything.
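To put that inside/outside picture in symbols, here’s a minimal sketch in standard notation (mine, not Vedral’s): a system S in superposition interacts with an apparatus A, the two become entangled, and tracing out the apparatus leaves interference-free statistics for the system, assuming the two apparatus records are effectively orthogonal.
\left(\alpha\,|0\rangle_S + \beta\,|1\rangle_S\right)|\text{ready}\rangle_A \;\to\; |\Psi\rangle = \alpha\,|0\rangle_S\,|\text{saw 0}\rangle_A + \beta\,|1\rangle_S\,|\text{saw 1}\rangle_A
\rho_S = \mathrm{Tr}_A\,|\Psi\rangle\langle\Psi| = |\alpha|^2\,|0\rangle\langle 0| + |\beta|^2\,|1\rangle\langle 1|
Viewed from outside, |\Psi\rangle is still one big superposition (entanglement); viewed from inside either branch, all that’s accessible are the collapsed-looking statistics of \rho_S.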
As I noted in the last post, I like Vedral’s approach here of focusing on the physics rather than getting into multiverse language, which as I’ve noted before, often ends up being a distraction. But it’s hard for me to see how his view is radically different from the standard Everettian one. It’s worth noting that Everett’s original proposal was a theory of the universal wave function, essentially the “everything is a quantum wave” view Vedral is advocating. Everett didn’t talk in terms of a multiverse. It was Bryce DeWitt in the 1970s who characterized it that way, although Everett saw it as just an alternate way of describing his view.
One difference from contemporary many-worlds views, which Vedral shares with Everett, is that the quantum nature of macroscopic objects is not beyond testability. Everett reportedly maintained that the quantum states of macroscopic objects were in principle detectable. I haven’t read Vedral’s book, but it sounds like a large part of it is finding ways to test his view.
This seems resonant with the progress being made in experimental research, where tiny macroscopic objects can now be held in a quantum superposition, which is putting increasing pressure on ontic collapse theories. And Vedral mentions the ongoing efforts in quantum computing, which is stress-testing quantum theory in ways scientists of earlier decades could only dream of. In the end, we need data, and these efforts are providing more of it.
As a minimalist Everettian myself, I find a lot in Vedral’s discussion compelling. But as he notes in his article, the various interpretation camps are like entrenched armies in World War I, unlikely to be moved except by the strongest experimental results. Even then, I suspect Max Planck’s observation that science moves forward “one funeral at a time” will likely be true here as it always has.
What do you think of Vedral’s views? Does the idea of everything being a quantum wave make sense? Or are there difficulties both he and I are overlooking with this approach?
#InterpretationsOfQuantumMechanics #ManyWorldInterpretation #MWI #Philosophy #PhilosophyOfScience #Physics #QuantumMechanics #Science
An article argues that realism is required to interpret science meaningfully. Any philosophy that dispenses with realism in science renders science meaningless.
https://open.substack.com/pub/plainlyphilosophical/p/realism-as-a-condition-of-meaningful
🐾 Marvin and the 4 Paws of Boundary Mechanics 🐾
Marvin has finally placed all 4 paws onto the foundations of life, the universe, and everything!
The Hybrid42 Quartet Principle:
In every interaction and transformation, every spark of emergence, there are not two surfaces in contact but four:
the upper surface of each entity
and the sub-surface boundary beneath it
A hidden quartet architecture — a pattern appearing across the universe!
Marvin, naturally, treats this as obvious.
🐾 4 paws, 4 boundaries 🐾
What we once treated as a simple interface now reveals a deeper symmetry:
contact is always a duet of duets — surface meeting surface, with the two inner boundaries reshaping, dissolving, or reflecting the interaction.
Every boundary has an upper and lower face — every interaction engages all four!
#HybridMind42 #BoundaryMechanics #QuartetPrinciple #MarvinTheCosmicCat #ComplexSystems #PhilosophyOfScience #SciComm
New (short) blog post on why I think the distinction between a metaphysical and semantic component of scientific realism is kind of confused (and how to classify the varieties of non-realist positions).
https://pragmatictheories.blogspot.com/2025/12/how-to-distinguish-semantic-and.html
#philsci #philsky #philosophyofscience
I wouldn't say "too mean", though "disaster" is a bit dramatic :), and rather than "too many open parameters" I would write something like "purely phenomenological parameters".
If/when we get a physically justified, observationally supported, broad-consensus explanation for dark matter or dark energy, it will certainly be a big step in @cosmology.
It’s funny — Bell’s inequality didn’t break physics so much as it broke our confidence in knowing what physics is.
Maybe that’s the real frontier: realizing that certainty itself is the variable. ⚛️🧠💫✨
#QuantumTheory
#BellTest
#QuantumFoundations
#Epistemology
#ComplexSystems
#PhilosophyOfScience
#STEM
#Science
*The 500-Million-Year-Old Refutation of Artificial Intelligence*
A brain structure, the basal ganglia, has persisted for 500 million years across fish, amphibians, reptiles, birds, and mammals. That persistence should tell us something fundamental about intelligence itself. Yet when viewed through the computational lens, it makes no sense at all.
https://www.ocrampal.com/the-500-million-year-old-refutation-of-artificial-intelligence/
The problem of old evidence revisited #philosophy #philosophyOfScience #confirmation https://xkcd.com/3154/
What is science, really? 🤔 My latest post dives into Samir Okasha's 'Philosophy of Science' to explore this. It's more than just method; it's about falsifiability and inference. https://www.ctnet.co.uk/unpacking-science-key-philosophical-takeaways-from-okashas-philosophy-of-science/ #PhilosophyOfScience #Science #Epistemology
"younger colleagues ...look down on those who do not deposit research data.... 'if you say ‘data is available upon request,’ they take it as a #middleFinger. ...it would be so easy to share the data—it takes just as much energy to write that statement.'”
https://ir.library.illinoisstate.edu/fpml/268
#openScience #ethics #publishing #science #philosophyOfScience #metascience
Jan-Willem Romeijn has updated his SEP entry on the philosophy of statistics,
https://plato.stanford.edu/entries/statistics/
#tilastotiede #tiede #tieteenfilosofia #filosofia #statistics #philosophyofScience #epistemology #probability #induction #confirmation #evidence #learning #reasoning #paattely #bayesianism #models
My new essay Objectivity Is Illusion: An Operating Model of Social and Moral Reasoning is now archived with a DOI.
I argue that what passes as “objectivity” is consensus disguised as granite. Truth is rhetorical, morality prescriptive, and our real obligation is care: tending the scaffolding we walk on.
Read the full essay (open access):
👉 https://doi.org/10.5281/zenodo.17195641
#Philosophy #MetaEthics #SocialEpistemology #Pragmatism #PhilosophyOfScience #Truth #Morality #psychology #society #social
We’ve never lived on granite foundations. What we call “objectivity” in social and moral life is scaffolding: provisional, rhetorical, maintained through care, reciprocity, and revolt.
In my new essay, I map an operating model for reasoning without granite illusions — and argue for an ethic of repair.
Read the full post:
👉 https://philosophics.blog/2025/09/24/stop-pretending-we-live-in-marble-halls/
#Philosophy #Ethics #MetaEthics #PhilosophyOfScience #Epistemology #MetaPhilosophy #SocialEpistemology #CriticalTheory #Subjectivism #Relativism