Assistant Professor at #princeton
👨‍💻 osqp.org developer
Interested in #realtime #decisionmaking, #optimization, #optimalcontrol #orms
From 🇮🇹 in 🇺🇸
Proud to celebrate the graduation of my PhD student Vinit Ranjan, who defended his thesis this month: "Beyond the Worst Case: Verification of First-Order Methods for Parametric Optimization Problems" 🎉 Congratulations Dr. Ranjan!
Wishing everyone happy holidays! 🎄 Feeling lucky to work with such a fantastic group of students and postdocs. Here's to good research, great company, and Neapolitan pizza 🍕
New preprint! We combine PEP with Wasserstein DRO to get data-driven convergence guarantees for first-order methods.
Use observed algorithm trajectories to derive tighter probabilistic rates that reflect how your solver actually behaves. We recover known average-case rates (O(K⁻¹·⁵) for GD, O(K⁻³ log K) for FGM) without knowing the underlying distribution. 🎯
📄 arxiv.org/abs/2511.17834
💻 github.com/stellatogrp/dro_pep
Joint work with Jisun Park and Vinit Ranjan. #optimization #dro
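A minimal sketch of the data-driven flavor, not the paper's PEP-based construction: sample problem instances from some distribution, run gradient descent, and compare an empirical high-probability bound with the worst-case one. The quadratic family, sample sizes, and step-size choice below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, n_samples = 20, 50, 200
mu, L = 0.1, 1.0
step = 2.0 / (mu + L)  # classical step size for mu-strongly convex, L-smooth f

gaps = []
for _ in range(n_samples):
    # Random quadratic f(x) = 0.5 x' diag(eigs) x with minimizer x* = 0.
    # The eigenvalue distribution stands in for the unknown problem distribution.
    eigs = rng.uniform(mu, L, size=n)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)  # normalize initial distance to x*
    for _ in range(K):
        x = x - step * eigs * x  # gradient step on the diagonal quadratic
    gaps.append(0.5 * np.sum(eigs * x**2))  # f(x_K) - f*

# Empirical 95th-percentile bound vs. the worst-case contraction bound.
worst_case = 0.5 * L * ((L - mu) / (L + mu)) ** (2 * K)
print(f"empirical 95% bound: {np.quantile(gaps, 0.95):.2e}")
print(f"worst-case bound:    {worst_case:.2e}")
```

The empirical quantile is typically far below the worst-case certificate, which is the gap the DRO-PEP machinery exploits with actual guarantees attached.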
AlgoTune: Can Language Models Speed Up General-Purpose Numerical Programs?
Ori Press, Brandon Amos, Haoyu Zhao, Yikai Wu, Samuel K. Ainsworth, Dominik Krupke, Patrick Kidger, Touqir Sajed, Bartolomeo Stellato, Jisun Park, Nathanael Bosch, Eli Meril, Albert Steppi, Arman Zharmagambetov, Fangzhao Zhang, David Perez-Pineiro, Alberto Mercurio, Ni Zhan, Talor Abramovich, Kilian Lieret, Hanlin Zhang, Shirley Huang, Matthias Bethge, Ofir Press
https://arxiv.org/abs/2507.15887 https://arxiv.org/pdf/2507.15887 https://arxiv.org/html/2507.15887
arXiv:2507.15887v1 Announce Type: new
Abstract: Despite progress in language model (LM) capabilities, evaluations have thus far focused on models' performance on tasks that humans have previously solved, including in programming (Jimenez et al., 2024) and mathematics (Glazer et al., 2024). We therefore propose testing models' ability to design and implement algorithms in an open-ended benchmark: We task LMs with writing code that efficiently solves computationally challenging problems in computer science, physics, and mathematics. Our AlgoTune benchmark consists of 155 coding tasks collected from domain experts and a framework for validating and timing LM-synthesized solution code, which is compared to reference implementations from popular open-source packages. In addition, we develop a baseline LM agent, AlgoTuner, and evaluate its performance across a suite of frontier models. AlgoTuner achieves an average 1.72x speedup against our reference solvers, which use libraries such as SciPy, sk-learn and CVXPY. However, we find that current models fail to discover algorithmic innovations, instead preferring surface-level optimizations. We hope that AlgoTune catalyzes the development of LM agents exhibiting creative problem solving beyond state-of-the-art human performance.
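Not AlgoTune's actual harness, but a minimal sketch of the validate-and-time loop the abstract describes: a candidate implementation is only credited with a speedup if its outputs match the reference on every instance. The task, function names, and tolerance choice below are illustrative assumptions.

```python
import time
import numpy as np
from scipy.linalg import solve_triangular

def time_and_validate(candidate, reference, instances, is_close=np.allclose):
    """Compare a synthesized solver against a reference implementation.

    Hypothetical harness in the spirit of the benchmark: correctness is
    checked on every instance before any speedup is reported.
    """
    t_ref = t_cand = 0.0
    for inst in instances:
        t0 = time.perf_counter(); y_ref = reference(inst); t_ref += time.perf_counter() - t0
        t0 = time.perf_counter(); y = candidate(inst); t_cand += time.perf_counter() - t0
        if not is_close(y, y_ref):
            return None  # invalid solution: no speedup credited
    return t_ref / t_cand  # > 1 means the candidate is faster

# Toy task: solve A x = b, where the "candidate" exploits that A is triangular.
instances = []
for seed in range(5):
    rng = np.random.default_rng(seed)
    A = np.triu(rng.standard_normal((300, 300))) + 300 * np.eye(300)
    instances.append((A, rng.standard_normal(300)))

speedup = time_and_validate(
    candidate=lambda p: solve_triangular(p[0], p[1]),
    reference=lambda p: np.linalg.solve(p[0], p[1]),
    instances=instances,
)
print(f"validated speedup: {speedup:.2f}x")
```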
Our work is featured in Quanta Magazine! Check it out: https://www.quantamagazine.org/researchers-discover-the-optimal-way-to-optimize-20251013/
📢 New in the Journal of Machine Learning Research (w/ Rajiv Sambharya: rajivsambharya.github.io/)! 🎉
We construct data-driven performance guarantees for classical & learned optimizers via sample convergence bounds and PAC-Bayes theory.
Our results are often much tighter than worst-case bounds. ✅
Examples in signal processing, control, and meta-learning.
📄 Paper: https://jmlr.org/papers/v26/24-0755.html
💻 Code: https://github.com/stellatogrp/data_driven_optimizer_guarantees
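A minimal sketch of one basic ingredient, a plain sample convergence bound: estimate the probability that the solver reaches a given tolerance within K steps, then subtract a Hoeffding-style confidence correction. The `sample_guarantee` helper and the numbers below are hypothetical; the paper's PAC-Bayes bounds are considerably sharper, especially for learned optimizers.

```python
import numpy as np

def sample_guarantee(successes, n, delta=0.05):
    """One-sided Hoeffding lower bound on P(solver reaches tol within K steps).

    With probability >= 1 - delta over the n sampled problem instances, the
    true success probability is at least the returned value. Follows from
    P(p_hat - p >= t) <= exp(-2 n t^2) with t = sqrt(log(1/delta) / (2n)).
    """
    p_hat = successes / n
    return max(0.0, p_hat - np.sqrt(np.log(1.0 / delta) / (2.0 * n)))

# e.g. the solver met the tolerance on 970 of 1000 sampled instances:
print(f"P(success) >= {sample_guarantee(970, 1000):.3f} with 95% confidence")
```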
📢 Our paper "Verification of First-Order Methods for Parametric Quadratic Optimization" with my student Vinit Ranjan (https://vinitranjan1.github.io/) has been accepted to Mathematical Programming! 🎉
We present an optimization-based framework to verify finite-step convergence of first-order methods, directly capturing the structure of parametric linear and quadratic problems.
Slides on these ideas (INRIA/ENS talk): https://stellato.io/assets/downloads/presentations/2025/ens_fom.pdf
📄 https://doi.org/10.1007/s10107-025-02261-w
💻 https://github.com/stellatogrp/sdp_algo_verify
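To make "verification over a parameter set" concrete, here is a brute-force grid version of the question: bound the worst-case fixed-point residual of K gradient-descent steps as the linear term of a parametric QP ranges over a box. The paper certifies such bounds exactly via an optimization formulation; this sampling sketch only gives intuition (and a lower estimate), and all problem data below are made up.

```python
import itertools
import numpy as np

# Parametric QP: minimize 0.5 x'Px + q'x with q in a box. We probe K steps
# of gradient descent by evaluating the final fixed-point residual
# ||x_K - x_{K-1}|| over a grid of parameters q.
P = np.array([[2.0, 0.5], [0.5, 1.0]])
step = 1.0 / np.linalg.eigvalsh(P).max()
K = 30

def residual(q):
    x_prev = x = np.zeros(2)
    for _ in range(K):
        x_prev, x = x, x - step * (P @ x + q)  # gradient of 0.5 x'Px + q'x
    return np.linalg.norm(x - x_prev)

grid = np.linspace(-1.0, 1.0, 21)  # q ranges over the box [-1, 1]^2
worst = max(residual(np.array(q)) for q in itertools.product(grid, grid))
print(f"worst observed residual after {K} steps: {worst:.2e}")
```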
Data Compression for Fast Online Stochastic Optimization
Irina Wang, Marta Fochesato, Bartolomeo Stellato
https://arxiv.org/abs/2504.08097 https://arxiv.org/pdf/2504.08097 https://arxiv.org/html/2504.08097
arXiv:2504.08097v1 Announce Type: new
Abstract: We propose an online data compression approach for efficiently solving distributionally robust optimization (DRO) problems with streaming data while maintaining out-of-sample performance guarantees. Our method dynamically constructs ambiguity sets using online clustering, allowing the clustered configuration to evolve over time for an accurate representation of the underlying distribution. We establish theoretical conditions for clustering algorithms to ensure robustness, and show that the performance gap between our online solution and the nominal DRO solution is controlled by the Wasserstein distance between the true and compressed distributions, which is approximated using empirical measures. We provide a regret analysis, proving that the upper bound on this performance gap converges sublinearly to a fixed clustering-dependent distance, even when nominal DRO has access, in hindsight, to the subsequent realization of the uncertainty. Numerical experiments in mixed-integer portfolio optimization demonstrate significant computational savings, with minimal loss in solution quality.
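A rough sketch of the compression idea under simplifying assumptions: maintain an online k-means summary of the stream and treat the weighted centroids as a compressed empirical distribution for the downstream DRO problem. The `OnlineCompressor` class is a hypothetical stand-in; the paper's clustering conditions and guarantees are much more careful.

```python
import numpy as np

class OnlineCompressor:
    """Streaming k-means summary: centroids + counts as a compressed measure."""

    def __init__(self, k, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centroids = rng.standard_normal((k, dim))
        self.counts = np.zeros(k)

    def update(self, sample):
        # Assign the new sample to its nearest centroid.
        j = np.argmin(np.linalg.norm(self.centroids - sample, axis=1))
        self.counts[j] += 1
        # Incremental mean keeps each centroid at the running average of
        # the samples assigned to it so far.
        self.centroids[j] += (sample - self.centroids[j]) / self.counts[j]

    def compressed_measure(self):
        w = self.counts / max(self.counts.sum(), 1)
        return self.centroids, w  # support points and weights

rng = np.random.default_rng(1)
comp = OnlineCompressor(k=5, dim=2)
for _ in range(1000):
    comp.update(rng.standard_normal(2) + np.array([2.0, -1.0]))
support, weights = comp.compressed_measure()
print(support.round(2), weights.round(3))
```

The weighted centroids then replace the full sample when building the Wasserstein ambiguity set, which is where the computational savings come from.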
🎤 Gave a talk at the EURO OSS Seminar Series on "Data-Driven Algorithm Design and Verification for Parametric Convex Optimization"!
🎥 Recording: https://euroorml.euro-online.org/
Big thanks to Dolores Romero Morales for the invitation! 🙏 #MachineLearning #Optimization #ORMS
A good motivational reading assignment for an OR or combinatorial optimization class: https://www.wsj.com/articles/southwest-airlines-melting-down-flights-cancelled-11672257523 It pins a lot of the Southwest meltdown on a failure of its software to intelligently solve routing problems.
I am a scientist at Meta AI in NYC and study machine learning and optimization, recently involving reinforcement learning, control, optimal transport, and geometry. On social media, I enjoy finding and boosting interesting content from the original authors on these topics.
I made this small animation with my recent project on optimal transport, which connects continuous structures in the world. The source code to reproduce this and other examples is online at https://github.com/facebookresearch/w2ot
Happy 75th birthday to the transistor, the invention that shaped the modern world!
https://spectrum.ieee.org/invention-of-the-transistor
Next week I'll be at #NeurIPS2022 presenting a couple of papers. The first one is on #autodiff through #optimization (aka #unrolling) and its bizarre convergence properties. A 🧵 on the paper (https://arxiv.org/pdf/2209.13271.pdf) (1/9)
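The setup in one numpy sketch, not the paper's code: unroll K gradient-descent steps on a quadratic and differentiate the final iterate with respect to the step size by a forward-mode recursion, then watch how that unrolled derivative behaves as K grows. The quadratic family and step size below are illustrative assumptions.

```python
import numpy as np

# Unrolled iteration: x_{k+1} = x_k - a * H x_k. Differentiating w.r.t. the
# step size a gives the forward-mode recursion for s_k = dx_k/da:
#   s_{k+1} = s_k - H x_k - a * H s_k,  with s_0 = 0.
rng = np.random.default_rng(0)
n = 10
Q = rng.standard_normal((n, n))
H = Q @ Q.T / n + 0.1 * np.eye(n)  # PSD Hessian of f(x) = 0.5 x'Hx
x0 = rng.standard_normal(n)

def unrolled_grad_norm(a, K):
    x, s = x0.copy(), np.zeros(n)
    for _ in range(K):
        x, s = x - a * (H @ x), s - H @ x - a * (H @ s)
    return np.linalg.norm(s)  # size of d x_K / d(step size)

for K in [10, 100, 1000]:
    print(K, unrolled_grad_norm(0.2, K))
```

Comparing how fast the iterates and their derivatives converge is exactly the kind of question the thread gets into.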
I'm a professor & #ISYE dept chair at #uwmadison, #INFORMS President-Elect, and the author of the blog #PunkRockOR.
I'm into
- public sector operations research (#ORMS)
- data analytics and engineering for social good
- public engagement with science
- contributing to society and future generations through research and higher ed
my Mastodon #intro/#introduction
Hi, I'm Pietro. I work in the Discrete Optimization and Operations Research Group at the Politecnico di Milano. My research interests are in nonconvex optimization (mixed integer and nonlinear), multiobjective optimization, and optimization under uncertainty.
I am the developer of Couenne, an open-source MINLP solver. I also worked on the dev team of FICO Xpress.
Glad to switch to a social network with the same name as a heavy metal band 🤘
I don't want to just be posting about numbers like a clock, but the 2 million mark of monthly active users across the network is a pretty big deal. These are big numbers! Shout out to all the server operators who are absorbing this wave.
AIROYoung is the young chapter of AIRO (Italian Association of Operations Research), a community for #youngresearchers made by young researchers. Check it out: https://airoyoung.airo.org
We aim to foster collaboration among students and early-career researchers interested in the field of #operationsresearch, provide new opportunities to advance their careers and expand their networks, and strive to connect supply and demand in the OR job market, in both #orms #academia and #industry.
If there is one thing the deep learning revolution has taught us, it's that neural nets will outperform hand-designed heuristics, given enough compute and data.
But we still use hand-designed heuristics to train our models. Let's replace our optimizers with trained neural nets!
If you are training models with < 5e8 parameters, for < 2e5 training steps, then with high probability this LEARNED OPTIMIZER will beat or match the tuned optimizer you are currently using, out of the box, with no hyperparameter tuning (!).
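A heavily hypothetical sketch of the interface swap: instead of a hand-designed rule like Adam, the update comes from a small neural net applied per-parameter to gradient features. The `TinyLearnedOptimizer` class, its features, and its random weights are all stand-ins; a real learned optimizer meta-trains those weights across thousands of tasks.

```python
import numpy as np

class TinyLearnedOptimizer:
    """Per-parameter MLP update rule (hypothetical stand-in).

    features -> 2-layer MLP -> parameter update. Random weights here only
    demonstrate the interface that replaces a hand-designed rule; without
    meta-training, this will not actually optimize anything.
    """

    def __init__(self, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W1 = rng.standard_normal((8, 3)) * 0.1
        self.W2 = rng.standard_normal((1, 8)) * 0.1
        self.m = 0.0  # momentum-style statistic, updated each step

    def step(self, params, grads):
        self.m = 0.9 * self.m + 0.1 * grads
        # Per-parameter features: raw gradient, momentum, parameter magnitude.
        feats = np.stack([grads, self.m, np.abs(params)], axis=-1)  # (n, 3)
        hidden = np.tanh(feats @ self.W1.T)                          # (n, 8)
        return params + 1e-2 * (hidden @ self.W2.T)[:, 0]           # update

# Drop-in usage on a toy quadratic loss ||x||^2:
opt, x = TinyLearnedOptimizer(), np.ones(5)
for _ in range(100):
    x = opt.step(x, grads=2.0 * x)
```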