Leakage-aware GST #410


Open: wants to merge 141 commits into base branch develop.

Conversation

rileyjmurray (Contributor) commented Mar 19, 2024

TODOs

  • Rewrite this PR description into an acceptable summary. (Marking as ready for review now!)
  • Justify or revert the change in bc7cb8b. (Justified.)
  • Write explanations of report metrics in pygsti/report/reportables.py.
  • Move various helper functions from my notebooks (i.e., the ones I made static methods of an Infrastructure class) into appropriate places within pyGSTi.
  • Make a leakage-friendly built-in basis.
  • An example notebook.

Leakage analysis that is important but out of scope for this PR:

  • Systems with more than one qubit.
  • Systems with more than one leakage level.
  • CPTP kite models.

@rileyjmurray rileyjmurray changed the title WIP: leakage-aware gauge optimization WIP: leakage-aware gauge optimization -- bad commit history Sep 24, 2024
@rileyjmurray rileyjmurray changed the base branch from master to develop September 24, 2024 21:15
@rileyjmurray rileyjmurray changed the title WIP: leakage-aware gauge optimization -- bad commit history WIP: leakage-aware gauge optimization Sep 24, 2024
@rileyjmurray rileyjmurray changed the title WIP: leakage-aware gauge optimization WIP: leakage-aware GST Mar 25, 2025
rileyjmurray (Contributor, Author) commented Mar 25, 2025

EDIT: we now have an open issue about handling this sort of thing: #633.


Text from an email Piper sent (not showing plots since they contained real data):

I was looking at the model violation tab, trying to work out why the smaller L models weren’t fitting the data as well as might be expected, as quantified by Nsigma. I think this may be a function of how we calculate Nsigma = (2 delta logl - k)/sqrt(2k). The number of non-gauge model parameters N_p is reported as 351 in the leakage-enhanced GST report (I don’t have any particular sense of whether this is correct). The number of data points N_s is 193 for L=1. This means that k (the degrees of freedom) is negative if you just calculate it as N_s - N_p. In cases like this, pyGSTi chooses k=1. k remains small until L is fairly large, which means two delta logl must be quite small for Nsigma to be small. This explains why we don’t see any red boxes in the per-sequence detail for the L=1 iteration (and Nsigma is large enough that I would expect to see some, or at least more dark gray squares than can be seen).

If you directly compare two delta logl between the two fits, you can see that the two delta logl is in fact smaller in the leakage-enhanced report than in the standard report.

I think the solution here is just to make sure that the first iteration of leakage-aware GST uses L=2 instead of L=1. (Or, more generally, we use the smallest L where N_s > N_p.)
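The k-clipping behavior described in the email can be sketched in a few lines (hypothetical helper, not pyGSTi's actual implementation; the formula is as quoted above, with k = N_s - N_p clipped to 1):

```python
import math

def nsigma(two_delta_logl, n_s, n_p):
    # Hypothetical helper, not pyGSTi's actual implementation.
    # Degrees of freedom: k = N_s - N_p, clipped to 1 when non-positive.
    k = max(n_s - n_p, 1)
    return (two_delta_logl - k) / math.sqrt(2 * k)

# The L=1 case from the email: N_s = 193 data points, N_p = 351 non-gauge
# parameters, so k clips to 1 and even a modest 2*delta_logl looks severe.
severe = nsigma(50.0, 193, 351)
# With enough data that N_s > N_p, the same 2*delta_logl is unremarkable.
mild = nsigma(50.0, 500, 351)
```

This is why starting at the smallest L with N_s > N_p helps: it keeps k out of the clipped regime.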

rileyjmurray (Contributor, Author) left a comment:

Leaving file-level comments so @coreyostrove and I can figure out what to split out from this PR.

…inearOperator (not just plain numpy ndarrays)
…frobenius norm computation. Bugfix in test_optools.py.
…lots of in-line comments explaining the logic.
…anch that omitted jac as a keyword argument to scipy.minimize, even if jac was None.
…nt fidelity from the calculation of that metric itself. This makes it easier to reuse the setup code in other metrics. While making this change, also make the setup more efficient by using TensorProduct bases from the beginning.
…imizers. Right now only entanglement fidelity and jtracedist have implementations that can consider leakage. (So there are no leakage-aware metrics for SPAM.)
…dded annotations to all gauge optimization objectives used in the non-LS optimizer, indicating if that particular objective has support for leakage-aware metrics.
… while. The tests only consider when no leakage is modeled. New tests will be needed for when there's a leakage dimension.
coreyostrove (Contributor) commented:
(Quoting Piper’s email and the proposed L=2 fix from the comment above.)

This isn’t a uniquely leakage-related shortcoming; I’ve encountered it in other settings too (e.g., with really aggressive FPR schemes when not forcing inclusion of LGST circuits). I suggest we open a separate GitHub issue for this and give some additional thought to how we should approach the statistics in this case.

coreyostrove (Contributor) left a comment:

Just finished my first pass through. This is a ton of effort and these contributions are sincerely appreciated. Thank you as always for the excellent work, @rileyjmurray!

Of the comments I've left, the only one I am deeply concerned about is the potential bug I've pointed out in the gauge-optimization cost function as it pertains to instruments. The git history is pretty clear that this is present on develop, so in the event there is a bug here, you very likely had nothing to do with introducing it; but since you're already knee-deep in this code, I'd ask that you take a look and, if necessary, clean it up on our behalf.

The rest of my comments are either minor clarifying questions or otherwise nice-to-have but non-blocking requests. I've also added my concurrence with a couple of comments/suggestions from @nkoskelo.

Comment on lines 610 to 616
assert gates_metric != "frobeniustt"
assert spam_metric != "frobeniustt"
# assert spam_metric == gates_metric
# ^ Erik and Corey said these are rarely used. I've removed support for
# them in this codepath (non-LS optimizer) in order to make it easier to
# read my updated code for leakage-aware metrics. It wouldn't be hard to
# add support back, but I just want to keep things simple. -- Riley
Contributor:

Please add an error message for the AssertionError indicating that these metrics are not supported when doing leakage-aware gauge optimization (i.e. n_leak>0).
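A minimal sketch of the requested change (the message wording and surrounding variables are illustrative only, not the actual pyGSTi code):

```python
n_leak = 1  # illustrative: leakage-aware gauge optimization is in play
gates_metric = "frobenius"
spam_metric = "frobenius"

# Attach messages to the assertions so an AssertionError explains itself.
assert gates_metric != "frobeniustt", (
    "The 'frobeniustt' gates metric is not supported in this code path "
    f"(leakage-aware gauge optimization, n_leak={n_leak} > 0)."
)
assert spam_metric != "frobeniustt", (
    "The 'frobeniustt' SPAM metric is not supported in this code path "
    f"(leakage-aware gauge optimization, n_leak={n_leak} > 0)."
)
```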

Leakage-aware metric supported, per implementation in mdl.frobeniusdist.
Refer to how mdl.frobeniusdist handles the case when transform_mx_arg
is a tuple in order to understand how the leakage-aware metric is defined.
"""
Contributor:

I think there might be a (likely preexisting) bug here related to instruments. For the fidelity and trace-distance metrics, I'm not seeing the instrument elements being accessed (except in passing in the SPAM metric computation, where they are just used to filter the item weights). For Frobenius this is harder to see, but I took a look at the implementation of the frobeniusdist method for ExplicitOpModels and I'm not seeing instruments there either.

Comment on lines +980 to +983
elif self._static and x._static and self._hash != x._hash:
return False
else:
if self._static and x._static:
return self._hash == x._hash
else:
return self.tup == x.tup
return self.tup == x.tup
Contributor:

These look functionally equivalent to me. I’ll take your word for it, but for my own edification what about the original version was incorrect?
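One reading of the difference, illustrated with a toy class (this is not pyGSTi's Circuit, and the inference about the intent is mine): the new version uses cached hashes only to prove *inequality*, whereas treating equal hashes as proof of equality would misfire on a hash collision.

```python
class ToyCircuit:
    # Toy stand-in for a circuit that caches its hash when static.
    def __init__(self, tup, static=False):
        self.tup = tup
        self._static = static
        self._hash = hash(tup) if static else None

    def __eq__(self, x):
        if self._static and x._static and self._hash != x._hash:
            return False  # different hashes guarantee the tuples differ
        # Equal hashes do NOT guarantee equal tuples (collisions happen),
        # so fall through to the authoritative comparison.
        return self.tup == x.tup
```

Under this reading the two versions agree except on a hash collision between distinct tuples, where the old `return self._hash == x._hash` path would wrongly report equality.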

opLabels = set()
empty_label = _Label(())
for circuit in self:
for layer in circuit.layertup:
Contributor:

This is a good suggestion and I think it would be an independently useful primitive for circuits to return their unique gate labels, probably with some flags to control matching behavior (e.g. just label names versus full labels with sslbls).
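A sketch of what such a primitive might look like (the function name, flag, and label representation here are hypothetical, not pyGSTi API):

```python
def unique_op_labels(circuits, names_only=False):
    # Hypothetical: collect the unique gate labels appearing across a
    # collection of circuits. A "circuit" here is an iterable of layers,
    # and a layer is a tuple of (name, sslbls) labels. With
    # names_only=True, match on label names alone, ignoring sslbls.
    labels = set()
    for circuit in circuits:
        for layer in circuit:
            for lbl in layer:
                labels.add(lbl[0] if names_only else lbl)
    return labels

circuits = [
    [(("Gx", 0),), (("Gy", 1),)],
    [(("Gx", 0),), (("Gx", 1),)],
]
full = unique_op_labels(circuits)
names = unique_op_labels(circuits, names_only=True)
```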

sym_err_rel = sym_err_abs / _la.norm(hessian)
assert(sym_err_rel < TOL)
hessian += hessian.T
hessian /= 2
Contributor:

This is just using the following factoid about quadratic forms, right? https://en.wikipedia.org/wiki/Quadratic_form#General_case
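That appears to be the idea: the antisymmetric part of a matrix contributes nothing to x^T A x, so replacing A with (A + A^T)/2 leaves every quadratic form unchanged. A small self-contained check of that factoid (plain lists, no pyGSTi types):

```python
def quad_form(A, x):
    # x^T A x for a square matrix A given as nested lists.
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

def symmetrize(A):
    # (A + A^T) / 2 -- the same step as `hessian += hessian.T; hessian /= 2`.
    n = len(A)
    return [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]

A = [[1.0, 2.0], [0.0, 3.0]]  # deliberately asymmetric
x = [0.5, -1.5]
# The quadratic form is unchanged by symmetrization.
assert abs(quad_form(A, x) - quad_form(symmetrize(A), x)) < 1e-12
```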

Contributor:

It looks like most/all of these reportable functions hard-code n_leak = 1. I don't think there is anything to flag a user specifying something different in construct_standard_report, though. That is fine for now, but maybe add an assertion for this to avoid unexpected behavior (i.e., a user being confused about why this was quietly changed under the hood)?

rileyjmurray (Contributor, Author):

Good point. I'll update the construct_standard_report docstring and raise an exception if the user passes in n_leak other than 0 or 1.

Contributor:

Similar comment to the one I left on the leakage model regarding docstrings and API-level.

rileyjmurray (Contributor, Author):

I'm going to say that everything here is in the private API for now.

rileyjmurray (Contributor, Author) commented:
@coreyostrove I think I've addressed your comments. Just pushed what should be the last round of changes.

coreyostrove (Contributor) commented:

> @coreyostrove I think I've addressed your comments. Just pushed what should be the last round of changes.

I think the only major outstanding concern here is the bug I alluded to with gauge optimization for instruments. Since this is a pre-existing bug, I am all right with tabling it for the purposes of this PR, but we should open a new GitHub issue to track it. If you have time today, can you open that issue? If not, I can do so myself later this evening.

coreyostrove (Contributor) left a comment:

Fantastic work, @rileyjmurray! Thank you for the significant effort this work represents and your contributions advancing our leakage modeling capabilities. Merge at will.

One last request: as you merge things in, can you flesh out the summary in the PR description? That'd be helpful as I prepare release notes (and for posterity).

4 participants