Regularization is a technique widely used to improve the stability of solutions to statistical problems. We propose a new regularization concept, performance-based regularization (PBR), for data-driven stochastic optimization. The goal is to improve upon Sample Average Approximation (SAA) in terms of finite-sample performance. We apply PBR to mean-CVaR portfolio optimization, where we penalize portfolios whose constraint and objective estimates have large variability; this effectively constrains the probability that these estimates deviate from their true values.
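To make the idea concrete, the sketch below sets up a sample-average mean-CVaR problem using the Rockafellar–Uryasev CVaR estimator and adds a sample-variance penalty on the portfolio return as a simple convex stand-in for a performance-based penalty. The simulated data, penalty weight, return target, and use of cvxpy are illustrative assumptions, not the exact formulation studied in the talk.

```python
import numpy as np
import cvxpy as cp

# Hypothetical data: n historical return observations of p assets (assumption).
np.random.seed(0)
n, p = 250, 10
X = 0.001 + 0.02 * np.random.randn(n, p)

beta = 0.95   # CVaR confidence level (assumption)
R = 0.0005    # target mean return (assumption)
lam = 1.0     # penalty weight on estimation variability (assumption)

w = cp.Variable(p)      # portfolio weights
alpha = cp.Variable()   # VaR auxiliary variable (Rockafellar-Uryasev)

# SAA estimate of CVaR at level beta for portfolio losses -X w.
losses = -X @ w
cvar_hat = alpha + cp.sum(cp.pos(losses - alpha)) / (n * (1 - beta))

# Sample variance of the portfolio return, used here as a convex proxy
# for penalizing variability of the objective/constraint estimates.
ret = X @ w
var_hat = cp.sum_squares(ret - cp.sum(ret) / n) / n

objective = cp.Minimize(cvar_hat + lam * var_hat)
constraints = [cp.sum(w) == 1, X.mean(axis=0) @ w >= R]

prob = cp.Problem(objective, constraints)
prob.solve()
print("penalized SAA CVaR estimate:", cvar_hat.value)
print("weights:", np.round(w.value, 3))
```

Setting lam = 0 recovers the plain SAA mean-CVaR problem, so the penalty weight controls the trade-off between in-sample optimality and estimation stability in this sketch.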
This results in a combinatorial optimization problem, but we prove that its convex relaxation is tight. We show via simulations that PBR substantially improves upon SAA in finite-sample performance for three different population models of stock returns. We also prove that PBR is asymptotically optimal when the penalty parameters decay at an appropriate rate, and we further derive its first-order behaviour by extending the asymptotic analysis of M-estimators.