04/12/2025 - 15:00 - CORE C.035
Francesca Demelas
(University of Pisa)
Machine Learning for Lagrangian Relaxation
Abstract:
I present two machine-learning approaches to enhance Lagrangian relaxation methods. Lagrangian relaxation dualizes a subset of constraints using penalization weights, called Lagrangian multipliers, yielding bounds on the optimal value; the multipliers are typically optimized via iterative schemes such as the Bundle method.
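For concreteness (notation mine, not taken from the talk), for a problem $\min\{c^\top x : Ax \le b,\ x \in X\}$ in which the constraints $Ax \le b$ are dualized with multipliers $\lambda \ge 0$, the Lagrangian bound reads

\[
L(\lambda) \;=\; \min_{x \in X}\; c^\top x + \lambda^\top (Ax - b), \qquad \lambda \ge 0,
\]

and weak duality gives $L(\lambda) \le \min\{c^\top x : Ax \le b,\ x \in X\}$ for every $\lambda \ge 0$. The dual problem $\max_{\lambda \ge 0} L(\lambda)$ is the concave, nonsmooth maximization that iterative schemes such as the Bundle method address.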
The first approach is an amortized optimization model that directly predicts Lagrangian multipliers. A probabilistic graph neural network encodes each instance, represented as a bipartite graph, and a deterministic decoder outputs a multiplier per constraint. The model is trained in an unsupervised way by maximizing the resulting Lagrangian bound, and its predictions can effectively initialize, or even replace, the Bundle method.
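A minimal NumPy sketch of the idea, on a toy binary problem: one round of bipartite message passing between variable and constraint nodes, a decoder that outputs one nonnegative multiplier per dualized constraint, and the Lagrangian bound that an unsupervised loss would maximize. All shapes, features, and the toy instance are my assumptions, not the architecture from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: min c^T x  s.t.  A x <= b,  x in {0,1}^n; dualize A x <= b.
n, m = 5, 2
c = rng.normal(size=n)
A = rng.uniform(0, 1, size=(m, n))
b = A.sum(axis=1) / 2  # nonnegative right-hand sides, so x = 0 is feasible

def message_passing(A, var_feat, con_feat, W_v, W_c):
    """One round of bipartite message passing (variables <-> constraints)."""
    con_emb = np.tanh((A @ var_feat) @ W_v + con_feat)   # vars -> constraints
    var_emb = np.tanh((A.T @ con_emb) @ W_c + var_feat)  # constraints -> vars
    return var_emb, con_emb

def predict_multipliers(A, c, b, W_v, W_c, w_out):
    var_feat = c[:, None]  # variable features: objective coefficients
    con_feat = b[:, None]  # constraint features: right-hand sides
    _, con_emb = message_passing(A, var_feat, con_feat, W_v, W_c)
    return np.log1p(np.exp(con_emb @ w_out)).ravel()  # softplus => lambda >= 0

def lagrangian_bound(lmbda, c, A, b):
    """L(lambda) = min over x in {0,1}^n of c^T x + lambda^T (A x - b)."""
    reduced = c + A.T @ lmbda        # reduced costs after dualization
    x = (reduced < 0).astype(float)  # set x_j = 1 iff it lowers the value
    return reduced @ x - lmbda @ b

d = 1  # embedding dimension in this toy sketch
W_v, W_c = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_out = rng.normal(size=(d,))
lmbda = predict_multipliers(A, c, b, W_v, W_c, w_out)
bound = lagrangian_bound(lmbda, c, A, b)
# Unsupervised training would maximize `bound` w.r.t. W_v, W_c, w_out;
# the predicted lmbda can then warm-start (or replace) the Bundle method.
```

By weak duality, any nonnegative prediction already yields a valid bound; training only sharpens it, which is what makes the unsupervised objective well-posed.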
The second approach learns to improve the Bundle method itself. At each iteration, it adapts the algorithm’s regularization parameter and replaces the quadratic subproblem with a learned surrogate based on a recurrent attention architecture that produces a convex combination of stored subgradients. Smoothing the stabilization-point updates makes the full pipeline differentiable and trainable end-to-end. This yields a trainable optimization algorithm that extends classical Lagrangian relaxation and enables meta-learning and broader learning-to-optimize applications.
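The core of the learned surrogate can be sketched in a few lines: an attention step scores the stored subgradients and a softmax turns the scores into convex-combination weights, standing in for the weights that the Bundle method's quadratic subproblem would compute. The query/key parametrization below is my assumption; in the talk the weights come from a recurrent attention architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: nonnegative weights summing to one."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attended_direction(query, keys, subgradients):
    """Score the bundle, then return a convex combination of subgradients."""
    scores = keys @ query
    alpha = softmax(scores)          # alpha >= 0, sum(alpha) = 1
    return alpha, alpha @ subgradients

rng = np.random.default_rng(1)
k, dim = 4, 3
subgrads = rng.normal(size=(k, dim))  # bundle of stored subgradients g_1..g_k
keys = rng.normal(size=(k, dim))      # learned keys (assumption)
query = rng.normal(size=(dim,))       # learned query from the iterate state
alpha, g_hat = attended_direction(query, keys, subgrads)
# g_hat lies in the convex hull of the stored subgradients by construction;
# since softmax is smooth, gradients flow through alpha, which (together with
# smoothed stabilization-point updates) is what makes the pipeline end-to-end
# trainable.
```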