'Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces', by Jonathan W. Siegel.
http://jmlr.org/papers/v24/23-0025.html
#sobolev #besov #sparse
The Hilbert transform can be extended to certain spaces of distributions (Pandey 1996, Chapter 3). Since the Hilbert transform commutes with differentiation and is a bounded operator on L^p, H restricts to a continuous transform on the inverse limit of #Sobolev spaces.
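The classical transform pair H[cos] = sin makes the boundedness concrete. A small numerical sketch using SciPy's FFT-based discrete Hilbert transform (which is exact for periodic signals sampled over a full period):

```python
import numpy as np
from scipy.signal import hilbert

# Sample one full period of cos(2*pi*t); the FFT-based discrete
# Hilbert transform is exact for signals periodic on the grid.
N = 256
t = np.arange(N) / N
x = np.cos(2 * np.pi * t)

# scipy.signal.hilbert returns the analytic signal x + i*H[x];
# the Hilbert transform is its imaginary part.
Hx = np.imag(hilbert(x))

# Known pair: H[cos] = sin
print(np.allclose(Hx, np.sin(2 * np.pi * t), atol=1e-8))  # True
```

This uses SciPy's sign convention, under which the analytic signal of cos(ωt) is e^{iωt}.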
#Sobolev space $X = H_0^1([-1,+1]; \mathbb{R})$ and the functional $f : X \to \mathbb{R}$ given by
$f(u) = \int_{-1}^{+1} u'(x)^2 \,\mathrm{d}x.$
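This is the Dirichlet energy on $H_0^1([-1,+1])$. A quick numerical sketch (finite-difference derivative plus trapezoidal quadrature), checked against a member of the space with a known energy: $u(x) = 1 - x^2$ vanishes at $x = \pm 1$, and $\int_{-1}^{+1} (-2x)^2\,\mathrm{d}x = 8/3$.

```python
import numpy as np

def dirichlet_energy(u, a=-1.0, b=1.0, n=10_001):
    """Approximate f(u) = integral of u'(x)^2 over [a, b]
    via finite differences and the trapezoidal rule."""
    x = np.linspace(a, b, n)
    du = np.gradient(u(x), x)      # second-order finite-difference u'
    h = x[1] - x[0]
    integrand = du**2
    # trapezoidal rule: interior points weighted 1, endpoints 1/2
    return h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# u(x) = 1 - x^2 lies in H_0^1([-1,+1]); its energy is 8/3.
val = dirichlet_energy(lambda x: 1 - x**2)
print(val)
```

The function name and discretization parameters are illustrative choices, not from the original post.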
That moment when you're TeXing and think, "ugh, this is going to be a nasty battle of indices", then search around for packages a bit and find one built for exactly this case 😍 https://www.ctan.org/pkg/sobolev
#FiniteElements #FEM #Numerik #LaTeX #Sobolev
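I haven't checked the `sobolev` package's actual macro names, so here is a minimal hand-rolled sketch of the same idea (custom macros for Sobolev-space notation), should you prefer to define it yourself:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
% Hand-rolled notation -- NOT the CTAN `sobolev' package's API:
\newcommand{\Sob}[3]{W^{#1,#2}(#3)}    % W^{k,p}(Omega)
\newcommand{\SobH}[2]{H^{#1}(#2)}      % H^k(Omega) = W^{k,2}(Omega)
\newcommand{\SobHz}[2]{H_{0}^{#1}(#2)} % closure of C_c^infty in H^k
\begin{document}
Let $u \in \SobHz{1}{[-1,+1]}$ with energy
$f(u) = \int_{-1}^{+1} u'(x)^{2}\,\mathrm{d}x$.
\end{document}
```

One advantage of macros like these is that a change of notation (say $W^{k,p}$ to $H^{k,p}$) is a one-line edit.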