Alphabet generalisation #28
base: master
Conversation
Because of weird conflicts with the theory of words when running the tests, I had to rename all a's in this file to aa and all b's to bb.
Feel free to rename them to something else if you don't like this.
This PR does some great work. Here are some of the things I like about it:
- It introduces some nice infrastructure for dealing with functional patterns
- It proves some theorems that will generalize nicely outside of the theory of words, such as to contextual implication over functional contexts.
These are issues I see:
- There are a lot of generated theorems. This means that we are doing proof generation in three different languages (LISP, Maude and Python). I think that this will quickly become hard to maintain unless done in a principled manner.
- Since our goal here is to build up a library of matching logic theorems, we should focus on modularity and re-usability, so that other people can easily re-use our library of theorems for their use cases.
- der_l1_l2_phi, der_l1_l2_phi_and and regex_eq_der_diff_l look like pretty much the same theorem?
Here is my proposal to get this functionality merged.
- Theorems relating to constructors/functional patterns:
  - In an mm0 file, define notation for constructors/functional patterns.
  - Generalize the relevant theorems here. Assuming functional contexts, examples are (their rough shape is sketched after this list):
    - cong_of_equiv_der ==> cong_of_equiv_ctximp
    - regex_eq_der_exists ==> propag_exists_ctximp
    - regex_eq_der_bot ==> propag_bot_ctximp
    - regex_eq_der_choice ==> propag_or_ctximp
    - regex_eq_der_conj ==> propag_and_ctximp
    - regex_eq_der_neg ==> propag_neg_ctximp
  - I think a lot of these are usable in the theory of separation logic as well.
  - In an
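For orientation, the generalised lemmas listed above would presumably take roughly the following shape, writing $C[\cdot]$ for a context. This is only a sketch of the standard matching-logic propagation properties, not the exact statements in this repo; the conjunction and negation variants would additionally rely on the functional-context assumption.

$$ C[\bot] \leftrightarrow \bot \qquad C[\varphi \lor \psi] \leftrightarrow C[\varphi] \lor C[\psi] \qquad C[\exists x.\,\varphi] \leftrightarrow \exists x.\,C[\varphi] \quad (x \text{ not free in } C) $$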
I feel that the most readable and accessible way to define languages over arbitrary alphabets is as a series of nesting dolls: the theory for n letters is defined as an extension of the one for n-1 letters. Each alphabet, alphabet N, is a sort, with larger alphabets being supersorts of smaller ones.
In fact, we can define this theory in metamath without any python generation; the only caveat is that the domain-words axiom cannot be easily defined.
In any case, it may make sense to move towards the traditional sort infrastructure and introduce sorted negation to deal with this. I propose using the attached formalization or something similar.
countable-words.txt
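Spelled out, the nesting-doll picture above amounts to a chain of sorts (using $\llbracket \cdot \rrbracket$ for sort inhabitants; this notation is an assumption on my part, and the attached file may phrase it differently):

$$ \llbracket \mathrm{alphabet}\ 1 \rrbracket \subseteq \llbracket \mathrm{alphabet}\ 2 \rrbracket \subseteq \llbracket \mathrm{alphabet}\ 3 \rrbracket \subseteq \cdots $$

where the theory for alphabet n adds one fresh letter symbol on top of the theory for alphabet n-1, so theorems proved over a smaller alphabet remain usable over any larger one.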
term a_symbol : Symbol ;
def a : Pattern = $sym a_symbol$ ;
term b_symbol : Symbol ;
def b : Pattern = $sym b_symbol$ ;
I'd like to keep a copy of the theory we used for the paper submission separate from the generated theorems for reference.
I think that may be easier to understand than the generated files.
We can also have a 15-constructors.mm1 file with various lemmas regarding functional patterns etc. that are shared between them.
We can have that in a separate branch, like submission.
Also, I feel the generated files are super easy to understand; maybe I should commit them to the repo, given that the maude code doesn't support arbitrary alphabets. Also, what lemmas do you mean?
23-words-theorems.mm1 (Outdated)
theorem der_l1_l2_phi_and (l1 l2 phi: Pattern)
(l1_func: $ is_func l1 $)
(l2_func: $ is_func l2 $)
(h1: $ |^ l1 /\ top_letter ^| $):
Why not simplify the theorem to $ (derivative l1 (l2 . phi)) <-> (l1 == l2) /\ phi $ by adding a second hypothesis for l2? Also, h1 could probably be h1: $ l1 -> top_letter $, which is a more standard way of saying "l1 is of sort letter". The theorem then becomes:
theorem der_l1_l2_phi_and (l1 l2 phi: Pattern)
(l1_func: $ is_func l1 $)
(l2_func: $ is_func l2 $)
(h1: $ l1 -> top_letter $)
(h2: $ l2 -> top_letter $):
$ (derivative l1 (l2 . phi)) <-> (l1 == l2) /\ phi $
which is much easier to understand.
That's the signature of der_l1_l2_phi (see discussion above).
Also, I changed most of the occurrences to use that notation instead.
Also, I like the proposal to generalise our theorems to arbitrary contextual implications. I'm not atm convinced that they can all be generalised, but it would be nice. However, idk if it should necessarily be part of this PR specifically. That's another pretty large piece of work, and it would be useful to have this merged in.
Generalised the theorems in the Theory of Words so that they don't only work for the original binary "a" and "b" alphabet.
They now work for arbitrary alphabets described by the pattern top_letter.
Also wrote a python script generating theory files for simple alphabets made up of named letters.
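For context, here is a minimal sketch of what such a generator could look like. The file name, function name and exact output below are assumptions based on the declarations shown in the diff above, not the actual script in this PR; a real version would presumably also emit the top_letter definition built from the generated letters.

#!/usr/bin/env python3
"""Sketch: emit mm0-style declarations for an alphabet of named letters.

This mirrors only the per-letter declarations visible in the diff above
(term a_symbol : Symbol ; / def a : Pattern = $sym a_symbol$ ;); the real
generator in this PR may differ in naming and output format.
"""
import sys


def emit_alphabet(letters):
    """Return mm0-style symbol and Pattern declarations, one pair per letter name."""
    lines = []
    for letter in letters:
        # One symbol constant and one Pattern definition per letter.
        lines.append(f"term {letter}_symbol : Symbol ;")
        lines.append(f"def {letter} : Pattern = $sym {letter}_symbol$ ;")
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # Example: python gen_alphabet.py a b c > 05-alphabet.mm0
    sys.stdout.write(emit_alphabet(sys.argv[1:] or ["a", "b"]))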