#ApproximationTheory

2024-09-11

I recently read two interesting survey articles by my academic brother Ben Adcock at Simon Fraser University about theoretical aspects of sampling: how to approximate a function 𝑓 given noisy random point samples 𝑓(𝑥ᵢ). This is a fundamental problem in Machine Learning.

The first paper, "Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks" (by Ben and co-authors), is about how fast the approximation error can decrease as you take more samples. The curse of dimensionality can be overcome if the function becomes increasingly smooth as the dimension grows. URL: arxiv.org/abs/2404.03761

The second paper, "Optimal sampling for least-squares approximation", is about choosing where to sample in order to get as close to the unknown function (in the least-squares sense) as possible. arxiv.org/abs/2409.02342
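To make the setting concrete, here is a minimal sketch of weighted least-squares approximation from random samples — my own toy version, not the paper's algorithm. Points are drawn from the Chebyshev (arcsine) density, which is known to be a good sampling measure for polynomial least squares, and each row of the system is weighted by the reciprocal of that density. The target function exp(x) and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp                      # the "unknown" function we can sample
n, m = 10, 200                  # polynomial degree, number of samples

# Draw samples from the Chebyshev (arcsine) density on [-1, 1].
x = np.cos(np.pi * rng.uniform(0.0, 1.0, m))

# Weighted least squares: the weight 1/density compensates for the
# sampling measure; each row is scaled by the square root of the weight.
sw = np.sqrt(np.pi * np.sqrt(1.0 - x**2))

V = np.polynomial.chebyshev.chebvander(x, n)   # Chebyshev design matrix
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

# Measure the error of the computed approximation on a fine grid.
xx = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(f(xx) - np.polynomial.chebyshev.chebval(xx, coef)))
print(f"max error of degree-{n} fit from {m} samples: {err:.2e}")
```

Since exp is entire, a degree-10 fit from 200 samples already reaches near machine-level accuracy; the interesting regime in the paper is when samples are scarce and expensive.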

#MachineLearning #ApproximationTheory #NumericalAnalysis

2023-05-25

This chocolate reminded me of Bernstein ellipses, which govern how fast the Chebyshev approximation converges to a function.
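A sketch I find helpful here (my own, with illustrative choices, not from the post): for Runge's function 1/(1 + 25x²), the poles at ±i/5 determine the largest Bernstein ellipse avoiding a singularity, with parameter ρ = (1 + √26)/5 ≈ 1.22, and Chebyshev interpolation errors should shrink by roughly a factor ρ per added degree.

```python
import numpy as np

# Runge's function has poles at x = ±i/5, so the Bernstein ellipse
# parameter is rho = |z + sqrt(z^2 - 1)| evaluated at z = i/5.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
z = 0.2j
rho = abs(z + np.sqrt(z**2 - 1.0))        # ≈ 1.22

def cheb_interp_error(n):
    # Interpolate in the n+1 Chebyshev extreme points, then measure
    # the maximum error on a fine grid.
    xk = np.cos(np.pi * np.arange(n + 1) / n)
    coef = np.polynomial.chebyshev.chebfit(xk, f(xk), n)
    xx = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(f(xx) - np.polynomial.chebyshev.chebval(xx, coef)))

# Errors should decay like C * rho^(-n); the empirical per-degree
# ratio over 20 degrees should therefore come out close to rho.
e20, e40 = cheb_interp_error(20), cheb_interp_error(40)
observed = (e20 / e40) ** (1.0 / 20.0)
print(f"rho = {rho:.4f}, observed per-degree ratio = {observed:.4f}")
```

Taking the ratio of two errors cancels the unknown constant C, which is why the observed rate matches ρ rather than just bounding it.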

#MathsIsEverywhere #OccupationalHazard #ApproximationTheory

Plot of Bernstein ellipses: about 10 nested ellipses becoming narrower and narrower and approaching the interval [-1, 1] in the complex plane

2023-03-14

My thoughts keep turning back to the OWNA (One World Numerical Analysis) talk by Daan Huybrechs a few weeks ago. Most of numerical analysis is built on approximating functions in finite-dimensional spaces: \[ f(x) \approx \sum_i a_i \varphi_i(x), \] where 𝑓 is the function we want to approximate and the φᵢ are easy functions like polynomials. In the standard setting, the φᵢ form a basis. The talk explained why you sometimes want to add more "basis" functions, which destroys the linear independence of the φᵢ, so that they no longer form a basis. The main topic was the theory behind this.

As motivation, consider the square root function on [0, 1]. It is not analytic at x = 0, so approximation by polynomials converges only slowly. However, you can get fast convergence (root-exponential, IIRC) if you use rational functions. More generally, the solution of Laplace's equation on a domain with re-entrant corners has singularities at the corners. The lightning method uses an overcomplete "basis" of polynomials and rational functions, which converges fast.
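A small numerical sketch of that effect (my own toy version, not the lightning method itself): fit √x on [0, 1] once with a degree-20 polynomial, and once with an overcomplete "basis" of a few polynomials plus simple poles exponentially clustered at the singularity. The pole locations, the grid, and the sizes are all illustrative choices.

```python
import numpy as np

f = np.sqrt
n = 20

# Fit/test grid on [0, 1], refined logarithmically near the singularity.
x = np.unique(np.concatenate([np.linspace(0.0, 1.0, 1000),
                              np.logspace(-10, 0.0, 1000)]))

def ls_max_err(V, y):
    # Column-scale the basis (the pole columns vary wildly in size),
    # solve the least-squares problem, and return the max residual.
    s = np.linalg.norm(V, axis=0)
    c, *_ = np.linalg.lstsq(V / s, y, rcond=None)
    return np.max(np.abs((V / s) @ c - y))

# Degree-n polynomial fit: sqrt is not analytic at 0, so this
# converges only algebraically and the error stalls around 1e-2.
Vp = np.polynomial.chebyshev.chebvander(2.0 * x - 1.0, n)
err_poly = ls_max_err(Vp, f(x))

# Overcomplete fit: a low-degree polynomial part plus simple poles
# exponentially clustered at x = 0 on the negative real axis.
poles = -np.exp(-np.arange(1, n + 1))          # e^-1, ..., e^-20
Vr = np.hstack([Vp[:, :4], 1.0 / (x[:, None] - poles)])
err_rat = ls_max_err(Vr, f(x))

print(f"polynomial: {err_poly:.1e}, poly + clustered poles: {err_rat:.1e}")
```

The combined set of columns is badly conditioned precisely because it is no longer linearly independent in any comfortable sense, yet the least-squares fit with column scaling still finds an approximation orders of magnitude better than the polynomial one — which, as I understand it, is the phenomenon the talk's theory addresses.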

It's one of those talks that I wish I understood fully, but that would take me a month or more of sustained effort. Hopefully I will find an excuse to immerse myself in the topic.

#ApproximationTheory #NumericalAnalysis
