features for structured proofs #14
Replies: 2 comments 4 replies
-
I think either this or the text explaining it is backwards, no? Or else I'm misunderstanding something. I do like the notion of making extensionality more explicit. It's something that new Dafny users always stumble on, and it involves some pretty complex explanations. Making it explicit would mean it would be documented and be something users could look up. On the named preconditions, as a minor question, in …
-
Middle-of-loop invariants

This would allow an "in-body" loop invariant, rather than restricting invariants to the top of the loop, thereby letting the programmer pick the "most natural" place in the loop to position each invariant and avoiding hand-written forward/backward computation. Indeed, one could even mix and match positions for different parts of the overall loop invariant, placing each part at its most natural position within the loop. As a toy example of how invariants currently work in most tools:

```
while cond {
  invariant (bar(foo(x + 3)) <= x^2)
  invariant (...)
  let z = foo(x + 3)
  let y = bar(z)
  ...
  x = ...
}
```

Notice that here we are computing ahead some part of the loop body in order to write out an invariant about it. With in-body invariants, the same invariant could instead be stated where the value is computed:

```
while cond {
  invariant (...)
  let z = foo(x + 3)
  let y = bar(z)
  invariant (z <= x^2)
  ...
  x = ...
}
```

I've brought this idea up in multiple contexts over the past few years, such as with AES-GCM in Vale, but a more common use case that comes to mind at the moment is "loop and a half" constructs. These generalized loop invariants work because any loop invariant can (via a weakest-precondition computation) be moved to any other location in the loop while keeping the same meaning. This means the feature is no more expressive than before; on the other hand, allowing invariants anywhere within the loop reduces the syntactic and cognitive burden of writing them, without requiring any new syntax (except possibly for disambiguating which loop an in-body invariant applies to in the case of nested loops). Note that most loops would likely still state their invariants at the head, since that is the most natural point for many invariants. This feature would simply make that location a choice, rather than a requirement.
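As a sketch of the "loop and a half" use case (illustrative pseudocode; `read_next`, `done`, and `valid` are made-up names): when the real loop test sits in the middle of the body, an in-body invariant can be stated right where the value it constrains comes into scope:

```
while true {
  let item = read_next()
  if done(item) { break }
  // An invariant about `item` has no natural home at the loop head,
  // since `item` is not in scope there; in-body placement fixes that.
  invariant valid(item)
  process(item)
}
```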
-
This is to collect a list of not-very-deep proof feature suggestions that might aid in writing cleaner, more structured proofs.
Feel free to add your own. I feel like `calc` statements ought to have room for improvement, but I don't have any concrete thoughts right now.

Loop invariants
`on_exit` and `on_loop`
Right now the only way to specify a loop invariant is with an `invariant` clause, but there are a handful of patterns that show up.

One pattern is that we often have invariants that we want to hold at the beginning of each iteration, but not on loop exit, or vice versa. Currently, this would be done like:
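A hedged reconstruction of the current idiom (the code block here did not survive extraction; `X` and `Y` are placeholder predicates, chosen to match the `!cond ==> Y` clause discussed below): both facts are stated as ordinary invariants, guarded on the loop condition:

```
while cond
  invariant cond ==> X   // wanted at the start of each iteration
  invariant !cond ==> Y  // wanted only on loop exit
{
  ...
}
```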
Instead we could have a dedicated way to specify such invariants:
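A sketch of what such dedicated clauses could look like (the keyword spellings follow the `on_exit`/`on_loop` heading above; the exact syntax is open):

```
while cond
  on_loop X   // must hold at the top of every iteration
  on_exit Y   // must hold at every loop exit
{
  ...
}
```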
Besides documenting the intent better, these would generate better VCs in the case that the loop has a `break` statement inside it. Note that in the former example, `!cond ==> Y` is useless on loop exit if you can't actually show that `!cond` holds (as after a `break`). Having an `on_exit` invariant would let us do the right thing.

per-iteration invariants
An extremely common pattern, when looping `i` from `0` to `n`, is to have stuff like:

It would be great to have a shortcut:
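To make this concrete, here is a hedged sketch (`f`, `g`, and the `on_iteration` spelling are illustrative, not settled syntax). The current idiom quantifies over completed iterations:

```
while i < n
  invariant forall j :: 0 <= j < i ==> f(j)   // established by past iterations
  invariant i < n ==> g(i)                     // known at the start of iteration i
{
  ...
  i := i + 1;
}
```

and the shortcut would desugar to exactly those quantified invariants:

```
while i < n
  on_iteration(i) assumes g(i) proves f(i)
{
  ...
  i := i + 1;
}
```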
This is basically read as: "on iteration `i`, we start the loop knowing `g(i)` and must prove `f(i)` at the end of the loop". These are a lot easier to read, but they would basically desugar to the invariants given above. They are almost simple enough that they look like they could be implemented with macros, but they would need access to the `0..n` range.
Extensionality
A common idiom is to prove two collections (sequences, maps, etc.) are equal by showing all their elements are equal. It would be nice to have a dedicated (and documented) syntax for it, something like:
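For instance (purely illustrative syntax; `a` and `b` are placeholder sequences):

```
assert a == b by extensionality {
  // remaining obligations: equal lengths, and pointwise equality
  assert |a| == |b|;
  forall i | 0 <= i < |a| {
    assert a[i] == b[i];
  }
}
```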
I'm singling out extensionality here because it seems to me it's an example of a useful idiom that one "just has to know about", and which works because of hidden prelude axioms.
This is simple enough it could be a macro. (But see the next section.)
Named preconditions
The above could be a special case of a more general feature:
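A hedged sketch of named preconditions (the label syntax and all names here are invented for illustration): each `requires` clause gets a label that call sites could refer to individually:

```
lemma SortedIndexing(s: seq<int>, i: int)
  requires NonEmpty: |s| > 0
  requires InBounds: 0 <= i < |s|
  ensures ...
```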
It would be nice to call lemmas and prove the preconditions in "assert-by" style.
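For example (hypothetical syntax; `MyLemma`, `P`, and `Q` stand in for a real lemma and its labeled preconditions), each precondition could be discharged in its own sub-proof at the call site:

```
MyLemma(x) by {
  assert P: p(x) by {
    // proof of the first precondition
  }
  assert Q: q(x) by {
    // proof of the second precondition
  }
}
```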