#expectedvalue

2025-04-02

In a magazine article [1] on problems and progress in quantum field theory, Wood writes of Feynman path integrals, “No known mathematical procedure can meaningfully average an infinite number of objects covering an infinite expanse of space in general. The path integral is more of a physics philosophy than an exact mathematical recipe.”

This article [2] provides a method for averaging an arbitrary collection of objects; however, the average can be any number in an extension of the range of those objects. (Note that an arbitrary collection of such objects is a function.)

Question: Supposing that anything meaningful has applications in quantum field theory, is there a way to meaningfully choose a unique, finite average of a function whose graph matches the description in Wood's quote?

For more info, see this post [3].

[1]: quantamagazine.org/mathematici

[2]: arxiv.org/pdf/2004.09103

[3]: math.stackexchange.com/q/50520

#PathIntegral #quantum #FeynmanPathIntegral #mean #average #expectedvalue #quantumfieldtheory

2025-03-13

I finally know what I want.

Let \(n\in\mathbb{N}\) and suppose \(f:A\subseteq\mathbb{R}^{n}\to\mathbb{R}\) is a function, where \(A\) and \(f\) are Borel. Let \(\text{dim}_{\text{H}}(\cdot)\) denote the Hausdorff dimension and \(\mathcal{H}^{\text{dim}_{\text{H}}(\cdot)}(\cdot)\) the Hausdorff measure in its dimension on the Borel \(\sigma\)-algebra.

§1. Motivation

Suppose we define an everywhere surjective \(f\):

Let \(\mathrm{T}\) be the standard (subspace) topology on \(A\). A function \(f:A\subseteq\mathbb{R}^{n}\to\mathbb{R}\) is everywhere surjective from \(A\) to \(\mathbb{R}\) if \(f[V]=\mathbb{R}\) for every non-empty \(V\in\mathrm{T}\).

If \(f\) is everywhere surjective and its graph has zero Hausdorff measure in its dimension (e.g., [1]), we want a unique, satisfying [2] average of \(f\) that takes finite values only. However, the expected value of \(f\):

\[\mathbb{E}[f]=\frac{1}{{\mathcal{H}}^{\text{dim}_{\text{H}}(A)}(A)}\int_{A}f\, d{\mathcal{H}}^{\text{dim}_{\text{H}}(A)}\]

is undefined, since the integral of \(f\) is undefined: i.e., the graph of \(f\) has Hausdorff dimension \(n+1\) with zero \((n+1)\)-dimensional Hausdorff measure. Thus, w.r.t. a reference point \(C\in\mathbb{R}^{n+1}\), choose any sequence of bounded functions converging to \(f\) [2, §2.1] with the same satisfying [2, §4] and finite expected value [2, §2.2].
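As a toy illustration of the bounded-sequence idea (my own sketch, not the construction in [2]): truncate the function at heights \(\pm r\) and average the truncations. The stand-in \(f(x)=1/\sqrt{x}\) on \((0,1)\) is an assumption for the demo, since an everywhere surjective \(f\) cannot be sampled; for a tame integrable \(f\) like this one, every truncation scheme gives the same finite limit, while for pathological \(f\) different sequences can disagree, which is exactly why a "satisfying" choice is needed.

```python
# Hypothetical sketch: average f via a sequence of bounded truncations
# f_r(x) = max(min(f(x), r), -r). The choice f(x) = 1/sqrt(x) is a tame
# stand-in with expected value 2 on (0, 1); truncated means tend to 2.
import random

def f(x):
    return x ** -0.5  # integrable on (0, 1), with expected value 2

def truncated_mean(r, samples=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = rng.random() or 1e-12          # uniform sample in (0, 1)
        total += max(min(f(x), r), -r)     # bounded truncation f_r
    return total / samples

for r in (10, 100, 1000):
    print(r, truncated_mean(r))            # approaches E[f] = 2 as r grows
```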

[1]: mathoverflow.net/questions/476

[2]: researchgate.net/publication/3

#HausdorffMeasure #HausdorffDimension
#EverywhereSurjectiveFunction
#ExpectedValue
#Average
#research

@noahshachtman
The easiest #ExpectedValue problem ever (America's #IntelligenceTest)...

2024-07-31

Suppose \(A\subseteq\mathbb{R}^{2}\) is Borel and \(B\) is a rectangle in \(\mathbb{R}^2\) with positive measure. In addition, let \(\lambda(\cdot)\) be the Lebesgue measure on the Borel \(\sigma\)-algebra:

Question: How do we define an explicit \(A\), such that:
1. \(\lambda(A\cap B)>0\) for all \(B\)
2. \(\lambda(A\cap B)\neq\lambda(B)\) for all \(B\)?

For a potential answer, see this Reddit post [1]. (The answer seems correct; however, I wonder if there is a simpler version that is less annoying to prove.)
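As a numerical sanity check (my own sketch, not the construction from [1]): assuming a 1-D candidate \(A_1\) with \(0<\lambda(A_1\cap I)<\lambda(I)\) for every interval \(I\), the product \(A=A_1\times\mathbb{R}\) satisfies both conditions, since \(\lambda(A\cap(I\times J))=\lambda(A_1\cap I)\,\lambda(J)\). The Python below Monte-Carlo-estimates \(\lambda(A\cap B)/\lambda(B)\) for random rectangles; the membership test `in_A1` is only a finite-depth stand-in for a genuine construction.

```python
# Hypothetical sketch: Monte Carlo check of 0 < lambda(A ∩ B) < lambda(B).
# in_A1 is a finite-depth stand-in: a fat (Smith-Volterra-Cantor) set with
# copies recursively inserted into its removed gaps. NB: a real candidate
# must shrink the inserted copies so the complement keeps positive measure
# in every interval; see [1]. This version only supports experimentation.
import random

def in_A1(x, a=0.0, b=1.0, levels=3, depth=10):
    """Approximate membership in a fat-Cantor-at-all-scales subset of [a, b]."""
    x = x % 1.0                       # tile the unit-interval construction over R
    length = b - a
    for n in range(1, depth + 1):
        gap = length / 4 ** n         # SVC-style gap lengths (total measure 1/2)
        mid = (a + b) / 2
        lo, hi = mid - gap / 2, mid + gap / 2
        if lo < x < hi:               # x fell into a removed gap:
            if levels == 0:
                return False          # treat the gap as empty at the last level
            return in_A1(x, lo, hi, levels - 1, depth)  # recurse into a copy
        a, b = (a, lo) if x <= lo else (hi, b)
    return True

def intersection_fraction(x0, x1, samples=20_000, seed=0):
    """Estimate lambda(A ∩ B) / lambda(B) for A = A1 x R, B = [x0,x1] x J."""
    rng = random.Random(seed)
    hits = sum(in_A1(rng.uniform(x0, x1)) for _ in range(samples))
    return hits / samples             # the y-coordinate is irrelevant for A1 x R

random.seed(1)
for _ in range(5):
    x0 = random.uniform(-2, 2)
    x1 = x0 + random.uniform(0.01, 1)
    frac = intersection_fraction(x0, x1)
    print(f"[{x0:.3f}, {x1:.3f}]: fraction ≈ {frac:.3f}")  # want 0 < frac < 1
```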

Moreover, we can meaningfully average \(A\) with the following approach:

Approach: We want a unique, satisfying extension of the expected value of \(A\), w.r.t. the Hausdorff measure in its dimension, from bounded sets to \(A\), which takes finite values only.

Question 2: How do we define "satisfying" in this approach?

(Optional: See sections 3.2 and 6 of this paper [2].)
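A minimal sketch of the bounded-sets-first idea, assuming the expected value of a positive-measure \(A\subseteq\mathbb{R}^2\) means its mean point (here the Hausdorff measure in dimension 2 is just \(\lambda\)): average \(A\cap[-r,r]^2\) and watch \(r\to\infty\). The names `in_A` and `mean_point` are mine, and the membership test is a placeholder; when the sequence of means fails to settle, one needs exactly the "satisfying" selection criterion this question asks about.

```python
# Hypothetical sketch: extend E[A] via bounded truncations A ∩ [-r, r]^2.
import random

def in_A(x, y):
    """Placeholder membership test for A (swap in a real candidate, e.g. [1])."""
    return (x - y) % 2 < 1            # a diagonal striped set, as a stand-in

def mean_point(r, samples=200_000, seed=0):
    """Monte Carlo mean point of A ∩ [-r, r]^2 w.r.t. Lebesgue measure."""
    rng = random.Random(seed)
    sx = sy = hits = 0.0
    for _ in range(samples):
        x, y = rng.uniform(-r, r), rng.uniform(-r, r)
        if in_A(x, y):
            sx += x; sy += y; hits += 1
    return (sx / hits, sy / hits) if hits else None

for r in (1, 10, 100):
    print(r, mean_point(r))           # does the sequence of means settle down?
```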

[1]: reddit.com/r/mathematics/comme

[2]: researchgate.net/publication/3

#UnboundedSets #Sets #LebesgueMeasure #MeasureTheory #Measure #ExpectedValue #Expectancy #Mean #Integration #HausdorffMeasure #HausdorffDimension

2024-07-31

Suppose \(f:\mathbb{R}\to\mathbb{R}\) is Borel. Let \(\text{dim}_{\text{H}}(\cdot)\) be the Hausdorff dimension and \(\mathcal{H}^{\text{dim}_{\text{H}}(\cdot)}(\cdot)\) be the Hausdorff measure in its dimension on the Borel \(\sigma\)-algebra.

Question: If \(G\) is the graph of \(f\), is there an explicit \(f\) such that:
1. The function \(f\) is everywhere surjective (i.e., \(f[(a,b)]=\mathbb{R}\) for every non-empty open interval \((a,b)\))
2. \(\mathcal{H}^{\text{dim}_{\text{H}}(G)}(G)=0\)

If such an \(f\) exists, we denote this special case of \(f\) by \(F\).

Note, not all everywhere surjective \(f\) satisfy criterion 2 of the question. For example, consider the Conway base-13 function [1]. Since it is zero almost everywhere, its graph contains a full-measure subset of the \(x\)-axis, so \(\text{dim}_{\text{H}}(G)=1\) and \(\mathcal{H}^{\text{dim}_{\text{H}}(G)}(G)=+\infty\).
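For intuition, here is a small decoder (my sketch) for the tail rule that defines the base-13 function; the true function reads an infinite base-13 expansion, so a finite decoder can only illustrate the rule. Digits 10, 11, 12 play the roles of "+", "-", and the decimal point: a tail of the form sign, digits, point, digits decodes to that signed decimal, and every other expansion maps to 0.

```python
# Hypothetical sketch: decode a finite base-13 digit tail by Conway's rule.
# Digits 0-9 are ordinary; 10 = '+', 11 = '-', 12 = decimal point.
def conway_decode(tail):
    if not tail or tail[0] not in (10, 11):
        return 0.0                                  # no sign symbol: f = 0
    sign = 1.0 if tail[0] == 10 else -1.0
    rest = tail[1:]
    if rest.count(12) != 1:
        return 0.0                                  # need exactly one point
    p = rest.index(12)
    int_part, frac_part = rest[:p], rest[p + 1:]
    if not frac_part or any(d > 9 for d in int_part + frac_part):
        return 0.0                                  # only ordinary digits allowed
    value = 0.0
    for d in int_part:
        value = value * 10 + d
    for i, d in enumerate(frac_part, start=1):
        value += d * 10.0 ** -i
    return sign * value

print(conway_decode([10, 3, 12, 1, 4, 1]))          # tail "+3.141" -> 3.141
print(conway_decode([11, 12, 5]))                   # tail "-.5"    -> -0.5
print(conway_decode([7, 3, 12, 1]))                 # no sign       -> 0.0
```

Everywhere surjectivity follows because a tail encoding any target real can be appended to a base-13 expansion inside any interval.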

Question 2: For any real \(\mathbf{A},\mathbf{B}\), is the expected value of \(\left.f\right|_{[\mathbf{A},\mathbf{B}]}\), w.r.t. the Hausdorff measure in its dimension, defined and finite?

If not, see this paper [2] for a partial solution.

Optional: Are there other interesting properties of \(F\)?

[1]: en.wikipedia.org/wiki/Conway_b

[2]: researchgate.net/publication/3

#PathologicalFunctions #EverywhereSurjectiveFunctions #Mean #ExpectedValue #MeasureTheory #Measure #HausdorffMeasure #HausdorffDimension

2024-07-09

I've decided to focus on finding research papers that solve my ideas. I want to find a paper that averages everywhere surjective functions using an extension of the expected value w.r.t. the Hausdorff measure. For more info, see this link on desktop:
1drv.ms/w/s!Aqi8qivarhO_nEp_SZ
and this link on a smartphone:
docs.google.com/document/d/1E3
#EverywhereSurjectiveFunctions, #ExpectedValue, #Average, #ExtremelyDiscontinuousFunctions, #MostGeneralizedExpectedValue
#question, #help

Doc Edward Morbius ⭕ (dredmorbius@toot.cat)
2022-02-06

3Blue1Brown on Wordle, probability, information theory, and expected value.

Or: Shannon, von Neumann, and Pascal walk into a word game...

Whether or not you've been sucked into this word game fad, this is a really good explanation of the relationships between probability and expected information value (I = -log2(p)), as well as how to optimally find one element within a known search space.

It explains the logic behind selecting starting words for Wordle, as well as how and why optimisers choose their own guesses, and what an optimal solver's ultimate limits would be.
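To make the expected-information idea concrete, a minimal Python sketch (mine, not 3Blue1Brown's code): score a guess by the entropy of its feedback-pattern distribution over the remaining candidate words. The five-word list is a toy assumption; real solvers use the full ~13k-word Wordle list.

```python
# Sketch: expected information E[I] = sum over patterns of p * (-log2 p).
from collections import Counter
from math import log2

def feedback(guess, answer):
    """Wordle feedback: 2 = green, 1 = yellow, 0 = gray, handling duplicates."""
    res = [0] * len(guess)
    counts = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            res[i] = 2                 # greens first...
        else:
            counts[a] += 1             # ...unmatched answer letters stay available
    for i, g in enumerate(guess):
        if res[i] == 0 and counts[g] > 0:
            res[i] = 1                 # yellows consume remaining letters
            counts[g] -= 1
    return tuple(res)

def expected_info(guess, candidates):
    """Entropy (in bits) of the feedback-pattern distribution for a guess."""
    dist = Counter(feedback(guess, w) for w in candidates)
    n = len(candidates)
    return sum((c / n) * -log2(c / n) for c in dist.values())

words = ["crane", "slate", "adieu", "pious", "lymph"]   # toy candidate list
for g in words:
    print(g, round(expected_info(g, words), 3))          # higher = better guess
```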

yewtu.be/watch?v=v68zYyaEmEA

HN discussion: news.ycombinator.com/item?id=3

#Wordle #3Blue1Brown #Video #InformationTheory #Probability #ClaudeShannon #JohnVonNeumann #BlaisePascal #ExpectedValue
