package: https://www.kernel-operations.io/rkeops/
useR! video: www.youtube.com/watch?v=5DDd...
#rstats #kernels #gpu #autodiff
Kernel Operations on GPU or CP...
The worst part about having a permanent position is that you see all these interesting other jobs popping up yet can't apply - this one in #autodiff: https://jobs.inria.fr/public/classic/en/offres/2024-07696
Got control flow working, now I can use the fractals from the distance estimator compendium:
Beyond Backpropagation - Higher Order, Forward and Reverse-mode Automatic Differentiation for Tensorken
https://open.substack.com/pub/getcode/p/beyond-backpropagation-higher-order
I am personally fascinated by "adjoint optimization in CFD", e.g. people using autodiff on entire physical simulations to improve shapes of objects.
At the same time, it seems to be used comparatively rarely in industry; most design processes only perform forward simulation.
Does anyone here have an idea *why* that is? Boost for reach please...
Recently I've gotten really excited about #Enzyme as the future of #autodiff in #JuliaLang, in particular because it supports more language features than #Zygote (e.g. mutation, fast control flow, and preserving structural sparsity). I've started getting acquainted with its rules system, and I have some first impressions by comparison to #ChainRules. 🧵
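For readers who haven't written rules before, this is roughly what the ChainRules side of that comparison looks like: a hand-written reverse rule (`rrule`) for a made-up scalar function (the function and names below are my own illustration, not from the thread).

```julia
using ChainRulesCore

# Made-up function we want a hand-written reverse-mode rule for.
mysquare(x) = x^2

function ChainRulesCore.rrule(::typeof(mysquare), x::Real)
    y = mysquare(x)
    # The pullback maps the output cotangent ȳ to cotangents of (function, x).
    mysquare_pullback(ȳ) = (NoTangent(), 2x * ȳ)
    return y, mysquare_pullback
end
```

Enzyme's rule interface (EnzymeRules) covers similar ground but additionally has to express argument activity and mutation, which is presumably where the interesting differences show up.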
On Thursday I'll be at #NeurIPS2022 presenting a paper on our new system for #autodiff of implicit functions. A 🧵on the paper (https://arxiv.org/abs/2105.15183)
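The system in the paper is built for JAX; purely as a toy illustration of the underlying idea (differentiate through the solution via the implicit function theorem instead of unrolling the solver), here is a scalar sketch in Julia, with the residual and the Newton solver being my own stand-ins.

```julia
using ForwardDiff

# Toy residual whose root x*(θ) we want to differentiate (my own example).
F(x, θ) = x^3 + x - θ

# Black-box forward solver: a crude Newton iteration for F(x, θ) = 0.
function solve_root(θ; iters = 50)
    x = 0.0
    for _ in 1:iters
        x -= F(x, θ) / ForwardDiff.derivative(x -> F(x, θ), x)
    end
    return x
end

θ = 2.0
xstar = solve_root(θ)

# Implicit function theorem: dx*/dθ = -(∂F/∂x)⁻¹ ∂F/∂θ at the solution,
# with no differentiation through the solver's loop.
∂F∂x = ForwardDiff.derivative(x -> F(x, θ), xstar)
∂F∂θ = ForwardDiff.derivative(t -> F(xstar, t), θ)
dx_dθ = -∂F∂θ / ∂F∂x   # here 1 / (3xstar^2 + 1)
```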
What happens when you use #autodiff and let your nonsmooth iterative algorithm run to convergence?
With J. Bolte & E. Pauwels, we show that under a contraction assumption, the derivatives of the algorithm converge linearly!
Preprint: https://arxiv.org/abs/2206.00457
I will present this work this week at #NeurIPS2022
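The paper deals with nonsmooth algorithms; as a smooth toy version of the phenomenon (differentiate the unrolled iterates and watch their derivatives approach the derivative of the fixed point), here is a sketch, with the contraction chosen by me.

```julia
using ForwardDiff

# Toy contraction: Babylonian iteration, fixed point sqrt(θ).
iterate_map(θ, k) = foldl((x, _) -> 0.5 * (x + θ / x), 1:k; init = one(θ))

θ = 2.0
exact = 1 / (2 * sqrt(θ))   # derivative of the fixed point sqrt(θ) w.r.t. θ
for k in (1, 3, 5, 10)
    d = ForwardDiff.derivative(t -> iterate_map(t, k), θ)
    println("k = $k: derivative of the k-th iterate = $d  (limit = $exact)")
end
```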
Next week I'll be at #NeurIPS2022 presenting a couple of papers. The first one is on #autodiff through #optimization (aka #unrolling) and its bizarre convergence properties. A 🧵 on the paper (https://arxiv.org/pdf/2209.13271.pdf) (1/9)
I know it is "old", but I think I just fell in love with automatic differentiation #autodiff
Huge potential (and power) to speed up stuff in #compchem. Also: it is the cute math that stole my ❤️ (as always)
source:
https://twitter.com/Michielstock/status/1593582416328892416?s=20&t=OQcTmRfNrgiT-4WBUNiiHg
(in Dutch, #birdsite)
Need to dig in further, but #chemtwoops insights/thoughts/pointers are strongly appreciated!
Another #JuliaLang #autodiff banger
Technical Q. Anyone know how to do recursive binary checkpointing ("treeverse") over a number of steps that isn't determined until runtime? E.g. for an adaptive ODE solver.
Classically, the number of steps is assumed to be known in advance, I think.
#autodiff
#machinelearning
#honestly_I_have_no_idea_what_hashtag_to_use_for_obscure_technical_questions
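Not an answer to the runtime-length part, but for anyone following along, the classical fixed-length scheme the post refers to looks roughly like this (a sketch under my own naming: `step` advances the state by one step and `pullback_step` reverses one step given the state before it).

```julia
# Recursive binary checkpointing ("treeverse"-style) over steps lo:hi-1,
# assuming the total step count is known up front.
function checkpointed_adjoint(step, pullback_step, state, x̄, lo, hi)
    if hi == lo + 1
        return pullback_step(state, x̄, lo)
    end
    mid = (lo + hi) ÷ 2
    # Recompute forward from the checkpoint at `lo` and keep a checkpoint at `mid`...
    mid_state = state
    for i in lo:(mid - 1)
        mid_state = step(mid_state, i)
    end
    # ...reverse the second half from the midpoint checkpoint...
    x̄ = checkpointed_adjoint(step, pullback_step, mid_state, x̄, mid, hi)
    # ...then reverse the first half from the original checkpoint at `lo`.
    return checkpointed_adjoint(step, pullback_step, state, x̄, lo, mid)
end
```

Live memory stays at O(log n) checkpoints on the recursion stack at the cost of O(n log n) recomputation; the catch in the question is exactly that `hi` has to be fixed before the recursion starts.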
With the core techniques for deriving #AutoDiff rules in that page, we can work out rules for complex functions like matrix factorizations. See for example this blog post on deriving rules for the LU decomposition: https://sethaxen.com/blog/2021/02/differentiating-the-lu-decomposition/
In my opinion*, this page from the ChainRules docs is the best intro to working out automatic differentiation rules: https://juliadiff.org/ChainRulesCore.jl/stable/maths/arrays.html
* disclaimer: I wrote it with lots of community input
#AutoDiff #JuliaLang #calculus #gradient
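A small taste of the technique on that page, for a matrix function simpler than the LU decomposition: push the differential through the defining identity (for Y = X⁻¹, dY = -Y dX Y) and read off the pullback X̄ = -Yᵀ Ȳ Yᵀ. A sketch of the resulting rule, written for a hypothetical `myinv` wrapper so it doesn't collide with the rule ChainRules already ships for `inv`:

```julia
using ChainRulesCore
using LinearAlgebra

# Hypothetical wrapper; ChainRules already has a rule for `inv` itself.
myinv(X::AbstractMatrix) = inv(X)

function ChainRulesCore.rrule(::typeof(myinv), X::AbstractMatrix)
    Y = myinv(X)
    # From dY = -Y dX Y and ⟨Ȳ, dY⟩ = ⟨X̄, dX⟩ we get X̄ = -Yᵀ Ȳ Yᵀ (adjoints for complex X).
    myinv_pullback(Ȳ) = (NoTangent(), -Y' * unthunk(Ȳ) * Y')
    return Y, myinv_pullback
end
```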
I just migrated from @sethaxen@mastodon.social to this new account at fosstodon.org, so time for a reintroduction!
I'm a #MachineLearning engineer with a focus on probabilistic programming (#probprog) at @unituebingen, where I help scientists use ML for their research. In the office and out, one of my main passions is #FOSS, and I work on a number of #opensource packages, mostly in #JuliaLang :julia: with a focus on #probprog, #manifolds, and #autodiff.
@johnryan Yeah, I do #probprog in #JuliaLang, and it's great that we can use arbitrary Julia code within our models. That works because most of the language can be differentiated with #autodiff and Julia code composes well, which is not the case for most PPLs.
For #deeplearning research, Julia could come in handy for writing and transforming custom kernels without fussing with CUDA, as some posts in that thread note, but I have no experience with this.