Bartolomeo Stellato

Assistant Professor at Princeton University

👨‍💻 osqp.org developer
📖 Interested in optimization, machine learning, and control
🌍 From 🇮🇹 in 🇺🇸

Bartolomeo Stellato @bstellato
2025-12-31

Proud to celebrate the graduation of my PhD student Vinit Ranjan, who defended his thesis this month: "Beyond the Worst Case: Verification of First-Order Methods for Parametric Optimization Problems" 🎉 Congratulations Dr. Ranjan!

Bartolomeo Stellato @bstellato
2025-12-24

Wishing everyone happy holidays! 🎄 Feeling lucky to work with such a fantastic group of students and postdocs. Here's to good research, great company, and Neapolitan pizza 🍕

Bartolomeo Stellato @bstellato
2025-12-18

New preprint! 📄 We combine performance estimation problems (PEP) with Wasserstein distributionally robust optimization (DRO) to get data-driven convergence guarantees for first-order methods.

Use observed algorithm trajectories to derive tighter probabilistic rates that reflect how your solver actually behaves. We recover known average-case rates (O(K⁻¹·⁵) for GD, O(K⁻³ log K) for FGM) without knowing the underlying distribution. 🎯
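
Schematically, the quantity being certified looks like this (illustrative notation, not necessarily the paper's exact formulation):

```latex
% \hat{P}_N : empirical distribution of N observed problem instances
% \varepsilon : Wasserstein ambiguity radius, x^K(z) : K-th iterate on instance z
\[
  \mathrm{risk}_K
  \;=\; \sup_{Q \,:\, W(Q,\,\hat{P}_N) \le \varepsilon}
  \mathbb{E}_{z \sim Q}\!\left[ f_z\!\left(x^K(z)\right) - f_z^\star \right]
\]
% A bound on risk_K holds for every distribution within \varepsilon of the
% observed data, rather than for the single worst instance of classical PEP.
```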

📎 arxiv.org/abs/2511.17834
💻 github.com/stellatogrp/dro_pep

Joint work with Jisun Park and Vinit Ranjan.

Bartolomeo Stellato boosted:
2025-11-26

AlgoTune: Can Language Models Speed Up General-Purpose Numerical Programs?

Ori Press, Brandon Amos, Haoyu Zhao, Yikai Wu, Samuel K. Ainsworth, Dominik Krupke, Patrick Kidger, Touqir Sajed, Bartolomeo Stellato, Jisun Park, Nathanael Bosch, Eli Meril, Albert Steppi, Arman Zharmagambetov, Fangzhao Zhang, David Perez-Pineiro, Alberto Mercurio, Ni Zhan, Talor Abramovich, Kilian Lieret, Hanlin Zhang, Shirley Huang, Matthias Bethge, Ofir Press
arxiv.org/abs/2507.15887 arxiv.org/pdf/2507.15887 arxiv.org/html/2507.15887

arXiv:2507.15887v1 Announce Type: new
Abstract: Despite progress in language model (LM) capabilities, evaluations have thus far focused on models' performance on tasks that humans have previously solved, including in programming (Jimenez et al., 2024) and mathematics (Glazer et al., 2024). We therefore propose testing models' ability to design and implement algorithms in an open-ended benchmark: We task LMs with writing code that efficiently solves computationally challenging problems in computer science, physics, and mathematics. Our AlgoTune benchmark consists of 155 coding tasks collected from domain experts and a framework for validating and timing LM-synthesized solution code, which is compared to reference implementations from popular open-source packages. In addition, we develop a baseline LM agent, AlgoTuner, and evaluate its performance across a suite of frontier models. AlgoTuner achieves an average 1.72x speedup against our reference solvers, which use libraries such as SciPy, sk-learn and CVXPY. However, we find that current models fail to discover algorithmic innovations, instead preferring surface-level optimizations. We hope that AlgoTune catalyzes the development of LM agents exhibiting creative problem solving beyond state-of-the-art human performance.
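
The scoring loop behind the benchmark is easy to picture; here is a minimal sketch of the validate-then-time idea (all names hypothetical, not the actual AlgoTune harness):

```python
import time

def timed(fn, problem, reps=5):
    """Best-of-`reps` wall-clock time of fn(problem), in seconds."""
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        fn(problem)
        best = min(best, time.perf_counter() - start)
    return best

def score(candidate, reference, problems, is_valid):
    """Mean speedup of `candidate` over `reference` across `problems`.

    The candidate gets credit only if its output passes the task-specific
    validity check against the reference output on every problem.
    """
    speedups = []
    for p in problems:
        if not is_valid(candidate(p), reference(p)):
            return 0.0                      # invalid solution: no credit
        speedups.append(timed(reference, p) / timed(candidate, p))
    return sum(speedups) / len(speedups)
```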

Bartolomeo Stellato boosted:
2025-10-13
Bartolomeo Stellato @bstellato
2025-09-08

📢 New in the Journal of Machine Learning Research (w/ Rajiv Sambharya: rajivsambharya.github.io/)! 🎉

We construct data-driven performance guarantees for classical & learned optimizers via sample convergence bounds and PAC-Bayes theory.

Our results are often much tighter than worst-case bounds. ✅

Examples in signal processing, control, and meta-learning. 🎛️🛰️📚
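
The simplest ingredient behind such guarantees is a concentration bound on empirical performance: for a metric r_K ∈ [0, B] measured on N i.i.d. sampled problem instances, Hoeffding's inequality already gives a bound of this shape (the paper's PAC-Bayes bounds are sharper and also cover learned optimizers):

```latex
\[
  \mathbb{E}[r_K] \;\le\; \frac{1}{N}\sum_{i=1}^{N} r_K^{(i)}
  \;+\; B\,\sqrt{\frac{\log(1/\delta)}{2N}}
  \qquad \text{with probability at least } 1-\delta .
\]
```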

📄 Paper: jmlr.org/papers/v26/24-0755.ht
💻 Code: github.com/stellatogrp/data_dr

Bartolomeo Stellato @bstellato
2025-08-08

📢 Our paper "Verification of First-Order Methods for Parametric Quadratic Optimization" with my student Vinit Ranjan (vinitranjan1.github.io/) has been accepted to Mathematical Programming! 🎉

We present an optimization-based framework to ✅ verify finite-step convergence of first-order methods, directly capturing the structure of parametric linear and quadratic problems.
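
In spirit (illustrative notation; see the paper for the exact semidefinite formulation), verification solves a worst-case problem over the whole parameter set:

```latex
% \Theta : parameter set, T_\theta : one step of the first-order method
% (e.g. a projected gradient step for the parametric QP),
% x^\star(\theta) : the optimal solution for parameter \theta
\[
  \max_{\theta \in \Theta}\; \bigl\| x^K(\theta) - x^\star(\theta) \bigr\|^2
  \quad \text{s.t.} \quad
  x^{k+1}(\theta) = T_\theta\bigl(x^k(\theta)\bigr), \quad k = 0, \dots, K-1
\]
% A small optimal value certifies convergence after K steps for every
% parameter in \Theta; relaxing the nonconvex products yields a tractable SDP.
```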

Slides on these ideas (INRIA ENS): stellato.io/assets/downloads/p
🔗 doi.org/10.1007/s10107-025-022
💻 github.com/stellatogrp/sdp_alg

Bartolomeo Stellato boosted:
2025-04-16

Data Compression for Fast Online Stochastic Optimization

Irina Wang, Marta Fochesato, Bartolomeo Stellato
arxiv.org/abs/2504.08097 arxiv.org/pdf/2504.08097 arxiv.org/html/2504.08097

arXiv:2504.08097v1 Announce Type: new
Abstract: We propose an online data compression approach for efficiently solving distributionally robust optimization (DRO) problems with streaming data while maintaining out-of-sample performance guarantees. Our method dynamically constructs ambiguity sets using online clustering, allowing the clustered configuration to evolve over time for an accurate representation of the underlying distribution. We establish theoretical conditions for clustering algorithms to ensure robustness, and show that the performance gap between our online solution and the nominal DRO solution is controlled by the Wasserstein distance between the true and compressed distributions, which is approximated using empirical measures. We provide a regret analysis, proving that the upper bound on this performance gap converges sublinearly to a fixed clustering-dependent distance, even when nominal DRO has access, in hindsight, to the subsequent realization of the uncertainty. Numerical experiments in mixed-integer portfolio optimization demonstrate significant computational savings, with minimal loss in solution quality.
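
In schematic form, the guarantee described in the abstract has this flavor (illustrative notation):

```latex
% x_t : online solution built from the clustered distribution \tilde{P}_t
% x_t^\star : nominal DRO solution, J : out-of-sample cost, L : Lipschitz constant
\[
  J(x_t) - J(x_t^\star) \;\le\; L \, W\bigl(P, \tilde{P}_t\bigr)
\]
% The cost of compressing the data stream is controlled by how well the
% evolving clusters approximate the true distribution P in Wasserstein distance.
```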

Bartolomeo Stellatobstellato
2025-02-26

🚀 Gave a talk at the EURO OSS Seminar Series on "Data-Driven Algorithm Design and Verification for Parametric Convex Optimization"!

🎥 Recording: euroorml.euro-online.org/

Big thanks to Dolores Romero Morales for the invitation! 🙌

Bartolomeo Stellato boosted:
2022-12-29

A good motivational reading assignment for an OR or combinatorial optimization class: wsj.com/articles/southwest-air It pins a lot of the Southwest meltdown on a failure of its software to intelligently solve routing problems.

Bartolomeo Stellato boosted:
2022-12-20

#introduction

I am a scientist at Meta AI in NYC and study machine learning and optimization, recently involving reinforcement learning, control, optimal transport, and geometry. On social media, I enjoy finding and boosting interesting content from the original authors on these topics.

I made this small animation with my recent project on optimal transport that connects continuous structures in the world. The source code to reproduce this and other examples is online at github.com/facebookresearch/w2

Bartolomeo Stellato boosted:
2022-12-18

Happy 75th birthday to the transistor, the invention that shaped the modern world!
spectrum.ieee.org/invention-of

Bartolomeo Stellato boosted:
Fabian Pedregosa @fabian@sigmoid.social
2022-12-03

Next week I'll be at #NeurIPS2022 presenting a couple of papers. The first one is on #autodiff through #optimization (aka #unrolling) and its bizarre convergence properties. A 🧵 on the paper (arxiv.org/pdf/2209.13271.pdf) (1/9)
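
The setup in one toy example (mine, not the paper's): unroll K steps of gradient descent on f(x) = c·x²/2 and track the derivative of the iterate with respect to the step size, which is exactly what autodiff computes when you backprop through the loop:

```python
def unrolled_gd_sensitivity(x0, alpha, c, K):
    """K steps of GD on f(x) = c*x**2/2, propagating s = dx/d(alpha).

    Each step is x_{k+1} = (1 - alpha*c) * x_k, so the chain rule gives
    s_{k+1} = (1 - alpha*c) * s_k - c * x_k, with s_0 = 0.
    """
    x, s = x0, 0.0
    for _ in range(K):
        x, s = (1 - alpha * c) * x, (1 - alpha * c) * s - c * x
    return x, s  # final iterate and its sensitivity to the step size
```

In closed form s_K = -cK(1 - αc)^(K-1) x₀, so for 0 < αc < 1 the unrolled gradient first grows with K before decaying, even though the iterates themselves shrink monotonically: a small taste of how unrolled gradients can behave unlike the loss they differentiate.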

Bartolomeo Stellato boosted:
2022-11-22

I'm a professor & #ISYE dept chair at #uwmadison, #INFORMS President-Elect, and the author of the blog #PunkRockOR.

I'm into
- public sector operations research (#ORMS)
- data analytics and engineering for social good
- public engagement with science
- contributing to society and future generations through research and higher ed

my Mastodon #intro/#introduction

Bartolomeo Stellato boosted:
Pietro Belotti @pietrobelotti
2022-11-20

Hi, I'm Pietro. I work in the Discrete Optimization and Operations Research Group at the Politecnico di Milano. My research interests are in nonconvex optimization (mixed integer and nonlinear), multiobjective optimization, and optimization under uncertainty.

I am the developer of Couenne, an open-source MINLP solver. I also worked on the development team of FICO Xpress.

Glad to switch to a social network with the same name as a heavy metal band 😛

Bartolomeo Stellato boosted:
Eugen Rochko @Gargron
2022-11-19

I don't want to just be posting about numbers like a clock, but the 2 million mark of monthly active users across the network is a pretty big deal. These are big numbers! Shout out to all the server operators who are absorbing this wave.

Bartolomeo Stellato boosted:
2022-11-19

#Introduction

AIROYoung is the young chapter of AIRO (Italian Association of Operations Research), a community for #youngresearchers made by young researchers. Check it out: airoyoung.airo.org

We aim to foster collaboration among students and early-career researchers interested in the field of #operationsresearch, provide new opportunities to advance their careers and expand their networks, and strive to connect supply and demand in the OR job market, in both #orms #academia and #industry.

Bartolomeo Stellato boosted:
2022-11-19

EDIT: DO NOT BOOST!

This post is nearly a year old, and boosting will simply spread misinformation. Mastodon now has substantially more funding, and has enough scale to handle its many users.

I would delete this post but my server is not letting me so I'm editing this to reflect that I wish for you to NOT BOOST!

---
Right now Mastodon is only receiving appr. $21,000/month through Patreon.

This is not enough to handle the 1 million new accounts that will be made this week.

Currently, only 4,720 patrons are donating to Mastodon.

However, if everyone chips in $2/month, this will ensure the continued survival of Mastodon!

Be a hero! Donate now! https://www.patreon.com/mastodon

Bartolomeo Stellato boosted:
Jascha Sohl-Dickstein @jascha@sigmoid.social
2022-11-18

If there is one thing the deep learning revolution has taught us, it's that neural nets will outperform hand-designed heuristics, given enough compute and data.

But we still use hand-designed heuristics to train our models. Let's replace our optimizers with trained neural nets!
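
A caricature of the idea (hypothetical, and far simpler than the real thing): replace the hand-designed rule w ← w − lr·g with a small learned function of per-parameter features, whose weights θ are meta-trained across many tasks to minimize the final training loss.

```python
import numpy as np

def learned_update(w, g, m, theta, beta=0.9, scale=0.01):
    """One step of a toy learned optimizer.

    w, g, m : parameters, gradients, momentum buffer (same shape)
    theta   : shape-(3,) learned weights of the update rule,
              meta-trained over a distribution of training tasks
    """
    m = beta * m + (1 - beta) * g                 # momentum feature
    feats = np.stack([g, m, np.sign(g)])          # per-parameter features
    step = np.tanh(np.tensordot(theta, feats, axes=1))  # tiny learned model
    return w - scale * step, m
```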

Bartolomeo Stellato boosted:
Jascha Sohl-Dickstein @jascha@sigmoid.social
2022-11-18

If you are training models with < 5e8 parameters, for < 2e5 training steps, then with high probability this LEARNED OPTIMIZER will beat or match the tuned optimizer you are currently using, out of the box, with no hyperparameter tuning (!).

velo-code.github.io
arxiv.org/abs/2211.09760
