Three of the attacks presented (EAD, CW2, and BLB) are unbounded attacks: rather than finding the “worst-case” (i.e., highest-loss) example within some distortion bound, they search for the closest input that is misclassified. Given enough distortion, an unbounded attack will always reach 100% “success”, if only by literally transforming an image of one class into an image of another; the correct and meaningful metric to report for unbounded attacks is therefore the distortion required.
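To make the recommended metric concrete, the following is a minimal sketch of how the required distortion is typically summarized: the median L2 distance between each original input and its adversarial counterpart. The function name and the toy arrays are illustrative, not from any of the attacks above.

```python
import numpy as np

def median_l2_distortion(originals, adversarials):
    """Median L2 distance between original inputs and their adversarial
    counterparts -- the summary statistic to report for unbounded attacks."""
    # Flatten each example so the norm is taken over all of its features.
    diffs = (adversarials.reshape(len(adversarials), -1)
             - originals.reshape(len(originals), -1))
    return float(np.median(np.linalg.norm(diffs, axis=1)))

# Toy example: three "images" of four pixels each, perturbed uniformly
# by 0.1, 0.2, and 0.3 per pixel, giving L2 norms 0.2, 0.4, and 0.6.
x = np.zeros((3, 4))
x_adv = x + np.array([[0.1], [0.2], [0.3]])
print(median_l2_distortion(x, x_adv))  # -> 0.4
```

The median is preferred over the mean because a single outlier (an input the attack struggles with) can dominate the mean distortion.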