\documentclass[9pt,usenames,dvipsnames]{beamer}

\usetheme[progressbar=frametitle,numbering=fraction]{metropolis}
\usepackage{appendixnumberbeamer}
\usepackage{booktabs}
\usepackage[scale=2]{ccicons}
\usepackage{pgfplots}
\usepgfplotslibrary{dateplot}
\usepackage{xspace}
\usepackage{algorithm2e}

%\usepackage[utf8]{inputenc}
%\usepackage{amsfonts} % For AMS Beautification
%\usepackage{fullpage} % For More Realistic Page Usage
%\usepackage{algpseudocode} % For algorithm layout
%\usepackage[boxed,ruled,vlined]{algorithm2e}
%\usepackage{bm}
%\usepackage{color}
%\usepackage{commath}
\usepackage{amssymb, amsmath, amsfonts, mathrsfs, amsthm}
\usepackage{mathabx} % \widecheck - inverse Fourier transform
\usepackage{mathtools}
\usepackage{graphicx} % \includegraphics[scale=#]{filename}
\usepackage{float}
\usepackage{relsize}
%\usepackage{pdfsync}
%\usepackage{hyperref}
%\usepackage{upgreek}
%\usepackage[round]{natbib}
% \usepackage[backend=biber,style=numeric,citestyle=ieee]{biblatex}
\usepackage[style=apa]{biblatex}
\addbibresource{reference.bib} % Imports the bibliography file

\usepackage[most]{tcolorbox}
\newtcolorbox{titlelessblock}{
  enhanced,
  boxsep=0.25ex,
  arc=1.25ex,
  opacityframe=.6,
  opacityback=.6,
  colback=white,
  %colframe=black!25!white,
  colframe=black!40!white,
  boxrule=0pt
}

\newcommand{\sdag}[1]{{#1}^{\dag}}
\newcommand{\Zee}{\mathbb{Z}}
\newcommand{\zee}{\mathfrak{z}}

\let\oldcite=\cite
\renewcommand\cite[1]{\hyperlink{#1}{\textcolor{blue}{{\small (\oldcite{#1})}}}}


\renewcommand{\footnotesize}{\scriptsize}

\newcommand{\themename}{\textbf{\textsc{metropolis}}\xspace}

\title{Agnostic MPI-SPPY and Consensus ADMM Under Uncertainty}
% \subtitle{A Bootstrap Approach}
\author{David L. Woodruff \inst{1}\\
(with coauthors noted as we go along)}


\institute{\inst{1} Graduate School of Management, \\ University of California, Davis}
% \titlegraphic{\hfill\includegraphics[height=1.5cm]{logo.pdf}}
\date{INFORMS 2024}

\begin{document}

\maketitle


\metroset{sectionpage=none}

\begin{frame}{Optimization Under Uncertainty}

Today we will work with abstract problems such as:
\alert{
\Large
$$
\min_x h(x,\boldsymbol{\Xi})
$$}
\begin{itemize}
\item $\boldsymbol{\Xi}$ is a random variable.
\item The function $h$ captures constraints as well as any data modeled as known with certainty.
\end{itemize}
But the random variable necessitates some additional specification, such as
requiring a form of robustness, or perhaps...
\pause
\alert{
\Large
\begin{equation}
  \min_x E_{\xi\sim F}\, h(x,\xi) \label{eq:TheProblem}
\end{equation}
}
where the distribution $F$ is unknown and, of course, unknowable (exactly).
\end{frame}
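
\begin{frame}{A toy instance, just to fix ideas}
Purely illustrative (the unit cost $c$, price $p$, and random demand $\Xi$ are invented for this slide): a newsvendor orders $x$ units before demand is known and then sells $\min(x,\Xi)$ of them, so
$$
h(x,\Xi) = c\,x - p\,\min(x,\Xi), \qquad x \ge 0 .
$$
The data known with certainty ($c$ and $p$) and the feasible set ($x \ge 0$) are baked into $h$; the demand is the random part.
\end{frame}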


\section{MPI-sppy}
\begin{frame}{MPI-sppy: The paper and the software}
\begin{itemize}
\item B. Knueven, D. Mildebrath, C. Muir, J. D. Siirola, J.-P. Watson, D. L. Woodruff, ``A parallel hub-and-spoke system for large-scale scenario-based optimization under uncertainty,'' {\em MPC}
\item Find $\hat{x}$ with bounds and/or confidence intervals on the objective function for a scenario-based $T$-stage expected value problem with scenario set $\Xi$.
\end{itemize}

\end{frame}

\begin{frame}{MPI-sppy: The paper and the software}
\begin{itemize}
\item B. Knueven, D. Mildebrath, C. Muir, J. D. Siirola, J.-P. Watson, D. L. Woodruff, ``A parallel hub-and-spoke system for large-scale scenario-based optimization under uncertainty,'' {\em MPC}
\item Find $\hat{x}$ with bounds on the objective function (and confidence intervals) for the problem solved to obtain it; that problem is generally scenario-based, e.g.,
  $$
  \min_x \frac{1}{N} \sum_{i=1}^N h(x, D_i)
  $$
  for some sample $D$ of size $N$.
\item I want to talk mainly about the architecture, but first a few words about the software (a rough usage sketch follows on the next slide).
  \begin{itemize}
  \item Available at \url{https://github.com/Pyomo/mpi-sppy}
  \item It is a library, but we also have a generic program (coming back toward PySP).
  \item It is designed for HPC, but does run on a laptop.
  \end{itemize}
\end{itemize}
\end{frame}
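
\begin{frame}[fragile]{A rough usage sketch of the library}
Only a sketch, patterned on the farmer example in the mpi-sppy repository; the option keys and call signatures below are assumptions that can differ across versions, so check the examples that ship with the code.
{\scriptsize
\begin{verbatim}
# Sketch only: names follow mpi-sppy's farmer example; versions differ.
import pyomo.environ as pyo
from mpisppy.utils import sputils
from mpisppy.opt.ph import PH

demand = {"low": 60.0, "medium": 100.0, "high": 140.0}
cost, price = 1.0, 1.5

def scenario_creator(sname):
    m = pyo.ConcreteModel()
    m.x = pyo.Var(within=pyo.NonNegativeReals)      # first-stage order
    m.sales = pyo.Var(within=pyo.NonNegativeReals)  # second-stage sales
    m.c1 = pyo.Constraint(expr=m.sales <= demand[sname])
    m.c2 = pyo.Constraint(expr=m.sales <= m.x)
    m.FirstStageCost = pyo.Expression(expr=cost * m.x)
    m.obj = pyo.Objective(expr=m.FirstStageCost - price * m.sales)
    # nonanticipative variables for the root node
    sputils.attach_root_node(m, m.FirstStageCost, [m.x])
    m._mpisppy_probability = 1.0 / len(demand)
    return m

options = {"solver_name": "cbc", "PHIterLimit": 50, "defaultPHrho": 1.0,
           "convthresh": 1e-4, "verbose": False, "display_progress": False,
           "display_timing": False, "iter0_solver_options": None,
           "iterk_solver_options": None}
ph = PH(options, list(demand), scenario_creator)   # Progressive Hedging hub
ph.ph_main()
\end{verbatim}
}
\end{frame}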

\begin{frame}{The Architecture}
  \framesubtitle{MPI: Message Passing Interface}

\includegraphics[width=1.0\linewidth]{hubspoke.pdf}

But things are changing.
\end{frame}

\begin{frame}{Agnostic to the AML}
  For scenario-based decomposition...
  \begin{itemize}
  \item Loose:
    \begin{itemize}
    \item You code your AML to write an MPS (or maybe LP) file for each scenario, along with a JSON file for each scenario that lists the nonanticipative variables for each node in the scenario tree traversed by that scenario.
    \item You do this once and let mpi-sppy take over.
    \item No Python programming required (unless, of course, that's how you interact with your AML).
    \end{itemize}
  \item[]
  \item Tight: If your ``AML'' is callable in the sense that an outside caller
    can modify the objective (sketched on the next slide), then
    \begin{itemize}
    \item You hope we have already added support for your ``AML'' (we now have support for AMPL, GAMS, and GurobiPy).
    \item You need to write a thin wrapper in Python for your model.
    \end{itemize}
  \end{itemize}
\end{frame}
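
\begin{frame}{What ``modify the objective'' means}
A sketch of what the tight coupling has to allow, using standard Progressive Hedging terms ($w$, $\rho$, and $\bar{x}$ are data the algorithm updates each iteration): the subproblem for scenario $\xi$ is re-solved with its objective augmented to
$$
h(x,\xi) \;+\; \sum_{i} \Bigl( w_i x_i + \frac{\rho_i}{2}\bigl(x_i - \bar{x}_i\bigr)^2 \Bigr),
$$
where the sum runs over the nonanticipative variables. So the guest ``AML'' just needs to let an outside caller add these linear and quadratic terms and solve again.
\end{frame}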


\section{Consensus ADMM Under Uncertainty}

\begin{frame}{Consensus ADMM Under Uncertainty}
  \begin{itemize}
  \item There is a paper with Aymeric Legros on OOL, but write to me for a somewhat better version.
  \item Today, I will give a brief overview, with almost no notation.
    \begin{itemize}
    \item You might want scenario decomposition for the stochastics, and you might want consensus ADMM decomposition because you have a huge problem (or you might be decomposing just for parallel speed-up or for security reasons).
    \item We combine the two. Under the hood, the trick is the tree.
    \item But the interface is that you tell the software about your scenario tree for the stochastics and about your consensus variables and subproblems for ADMM, using wrappers for your model.
    \end{itemize}
  \item The software is available on GitHub as part of MPI-SPPY.
  \end{itemize}
\end{frame}
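
\begin{frame}{Consensus ADMM, generically}
For orientation only (the notation here is invented for this slide, not taken from the paper): consensus ADMM splits a deterministic problem into subproblems $a \in A$, each with its own copy $x_a$ of the variables it touches, and asks the copies to agree on the shared ones:
$$
\min_{x,z} \; \sum_{a \in A} f_a(x_a)
\quad \mbox{s.t.} \quad (x_a)_j = z_j \mbox{ for every shared index } j .
$$
The iterations solve the subproblems separately and update multipliers that push the copies toward the common value $z$.
\end{frame}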

\begin{frame}{A Simple Example}
  \begin{itemize}
  \item Consider a batch production/distribution problem with uncertain production yields.
  \item Batch sizes must be nonanticipative, while shipping quantities, inventory, etc., can depend on realized yields.
  \item Suppose the ADMM subproblems are regions, with a few arcs between them.
  \item So there must be consensus on the flow along any arc that joins two regions.
  \end{itemize}
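  In symbols (notation invented just for this slide): if arc $(u,v)$ joins regions $r$ and $r'$, the two regional copies of its flow must agree in every scenario $\xi$,
  $$
  f^{\,r}_{uv}(\xi) = f^{\,r'}_{uv}(\xi) \qquad \mbox{(consensus)},
  $$
  while each batch size must take a single value across all scenarios that share a scenario tree node (nonanticipativity).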


\end{frame}

\begin{frame}{Progressive Hedging and the like}
\centering
  \includegraphics[width=1.1\linewidth]{tree1.pdf}
\end{frame}

\section{Extended Scenarios and Tree}
\begin{frame}{Under the hood: Extended Scenarios and Tree}
  \begin{itemize}
\item The collection of ADMM subproblems, $A$, is considered to emanate from a scenario tree node that is replicated and attached to the original scenario tree at every original leaf node.
\item So now we have a tree with $T+1$ stages and $|\Xi|\,|A|$ {\em extended scenarios}.
\item There's going to need to be some funny business with nonanticipative variables and with probabilities if we are going to use standard stochastic scenario
  decomposition algorithms.
\end{itemize}
\end{frame}
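
\begin{frame}{The ``funny business,'' illustrated}
One natural bookkeeping choice, shown here only to illustrate the issue (the paper gives the actual construction): give each ADMM block conditional probability $1/|A|$ at the new node and scale each block's objective contribution $c_a$ by $|A|$, so that for an original scenario $\xi$ with probability $p_\xi$,
$$
\sum_{a \in A} \frac{p_\xi}{|A|} \, \bigl( |A| \, c_a(\xi) \bigr) \;=\; p_\xi \sum_{a \in A} c_a(\xi),
$$
and the original expected value is preserved. The consensus variables can then be treated as nonanticipative variables at the added stage.
\end{frame}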

\begin{frame}{Combined ``Scenario'' Tree}
\centering
  \includegraphics[width=1.1\linewidth]{tree2.pdf}
\end{frame}

\begin{frame}{Conclusions about Stochastic Consensus ADMM}
\begin{itemize}
\item Our paper describes methods and software for using a stochastic programming decomposition algorithm for stochastic consensus ADMM.
\item You could use similar thinking to adapt an ADMM algorithm for stochastic ADMM.
\item Aside: decomposition seems to be needed for only a fraction of ``pure'' stochastic problems, so ADMM problems seem like a good place to hawk our wares.
\end{itemize}
\end{frame}

\end{document}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
