[WIP] Compress sparse observable #14073
Conversation
- use combine in test
- fix comment
Cryoris left a comment:
The approach LGTM, I left some comments below. It would be nice to see some tests on e.g. things like `II + IZ + ZI + ZZ --> |0><0| ⊗ |0><0|` 🙂 (also then reno and tests)
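As a quick sanity check of the suggested test case, one can verify the identity (I + Z) ⊗ (I + Z) = 4 · |0><0| ⊗ |0><0| numerically. This is a standalone sketch with plain real matrices, not code from the PR:

```rust
/// Kronecker product of two 2x2 real matrices, giving a 4x4 matrix.
fn kron2(a: [[f64; 2]; 2], b: [[f64; 2]; 2]) -> [[f64; 4]; 4] {
    let mut out = [[0.0; 4]; 4];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                for l in 0..2 {
                    out[2 * i + k][2 * j + l] = a[i][j] * b[k][l];
                }
            }
        }
    }
    out
}

/// Check entrywise that II + IZ + ZI + ZZ equals 4 * |0><0| ⊗ |0><0|.
pub fn compresses_to_projector() -> bool {
    let id = [[1.0, 0.0], [0.0, 1.0]];
    let z = [[1.0, 0.0], [0.0, -1.0]];
    let p0 = [[1.0, 0.0], [0.0, 0.0]]; // |0><0|
    let terms = [kron2(id, id), kron2(id, z), kron2(z, id), kron2(z, z)];
    let expected = kron2(p0, p0);
    (0..4).all(|i| {
        (0..4).all(|j| {
            let sum: f64 = terms.iter().map(|m| m[i][j]).sum();
            (sum - 4.0 * expected[i][j]).abs() < 1e-12
        })
    })
}
```

This follows from (I + Z)/2 = |0><0| applied independently to each factor.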
```rust
    (term1, term2)
};

if t1.num_qubits != t2.num_qubits {
```
Minor, but this and the `same_sign` check below could be moved before the sorting by number of bit terms, since these might be fast exit paths where we don't have to order the terms.
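The suggested ordering could look roughly like the following standalone sketch. The types and signatures here (`Term`, `try_combine`, an `f64` coefficient instead of `Complex64`) are illustrative assumptions, not the PR's actual code:

```rust
#[derive(Clone)]
pub struct Term {
    pub num_qubits: u32,
    pub coeff: f64, // Complex64 in the real code; f64 keeps the sketch dependency-free
    pub bit_terms: Vec<u8>,
    pub indices: Vec<u32>,
}

/// Reject incompatible pairs as early as possible; only sort the terms
/// once the cheap checks have passed.
pub fn try_combine(t1: &Term, t2: &Term, tol: f64) -> Option<Term> {
    // fast exit 1: different qubit counts can never combine
    if t1.num_qubits != t2.num_qubits {
        return None;
    }
    // fast exit 2: coefficients must be equal or opposite within tolerance
    let same_sign = (t1.coeff - t2.coeff).abs() <= tol;
    let opposite_sign = (t1.coeff + t2.coeff).abs() <= tol;
    if !same_sign && !opposite_sign {
        return None;
    }
    // only now do the more expensive work: order both terms by qubit index
    // so their bit terms can be compared position by position
    let ordered = |t: &Term| {
        let mut pairs: Vec<(u32, u8)> = t
            .indices
            .iter()
            .copied()
            .zip(t.bit_terms.iter().copied())
            .collect();
        pairs.sort_by_key(|&(idx, _)| idx);
        pairs
    };
    // the real combining rules are elided; this sketch only merges
    // terms whose ordered bit terms are identical
    if same_sign && ordered(t1) == ordered(t2) {
        let mut out = t1.clone();
        out.coeff = t1.coeff + t2.coeff;
        return Some(out);
    }
    None
}

/// Exercise both a successful merge and a fast-exit rejection.
pub fn demo_try_combine() -> bool {
    let t = Term { num_qubits: 2, coeff: 1.0, bit_terms: vec![1, 2], indices: vec![0, 1] };
    let wider = Term { num_qubits: 3, ..t.clone() };
    try_combine(&t, &t, 1e-10).map(|m| m.coeff) == Some(2.0)
        && try_combine(&t, &wider, 1e-10).is_none()
}
```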
I have rewritten the code, making sure to take fast exit paths into account.
```rust
}

// check that the coefficients are equal or negatives of each other (within the specified tolerance)
let same_sign = if (t1.coeff - t2.coeff).norm_sqr() <= tol * tol {
```
Is there a reason not just to use `abs < tol`?
This is how it was already used in `pub fn canonicalize` and I wanted to keep this consistent, though I agree this should not make a difference.
```rust
/// Keeps the original ordering of terms as much as possible.
pub fn compress(&self, tol: f64) -> SparseObservable {
    let mut terms: Vec<SparseTerm> = self.iter().map(|t| t.to_term()).collect();
    let dummy_term =
```
Do we need this dummy term or could we just create an empty vector and push new elements as we iterate?
I rewrote the main loop based on this and other suggestions, see 9520ccb. Now instead of keeping a `Vec<BitTerm>` and modifying it in-place, we create a new observable for each iteration.
```rust
out.coeffs.push(term.coeff);
out.bit_terms.extend_from_slice(&term.bit_terms);
out.indices.extend_from_slice(&term.indices);
out.boundaries.push(out.indices.len());
```
You could just use

```rust
out.add_term(&term.view());
```

instead of pushing to `coeffs`, `bit_terms`, `indices` and `boundaries` manually, I think 🙂
Here also this is how it was already done in the existing code (`pub fn canonicalize`), but this is way nicer indeed.
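For context, the reason a single helper is nicer: the observable lives in four parallel, CSR-like vectors that must stay in sync, where term `i` owns the slice `boundaries[i]..boundaries[i+1]` of `bit_terms` and `indices`. A standalone toy sketch (field names mirror the PR, but these types are hypothetical simplifications, with `f64` in place of `Complex64`):

```rust
#[derive(Default)]
pub struct Obs {
    pub coeffs: Vec<f64>,       // Complex64 in the real code
    pub bit_terms: Vec<u8>,     // one byte per single-qubit term
    pub indices: Vec<u32>,      // qubit index of each bit term
    pub boundaries: Vec<usize>, // term i owns boundaries[i]..boundaries[i+1]
}

impl Obs {
    pub fn new() -> Self {
        Obs { boundaries: vec![0], ..Default::default() }
    }

    /// One call keeps all four parallel vectors consistent, instead of
    /// four manual pushes repeated at every call site.
    pub fn add_term(&mut self, coeff: f64, bit_terms: &[u8], indices: &[u32]) {
        self.coeffs.push(coeff);
        self.bit_terms.extend_from_slice(bit_terms);
        self.indices.extend_from_slice(indices);
        self.boundaries.push(self.indices.len());
    }
}

/// Push two terms and return the resulting boundaries vector.
pub fn demo_boundaries() -> Vec<usize> {
    let mut obs = Obs::new();
    obs.add_term(1.5, &[1, 3], &[0, 2]); // a two-bit-term entry
    obs.add_term(-1.5, &[2], &[1]);      // a single-bit-term entry
    obs.boundaries
}
```

After the two pushes the boundaries read `[0, 2, 3]`: term 0 spans positions 0..2 and term 1 spans 2..3.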
```rust
///
/// Keeps the original ordering of terms as much as possible.
pub fn compress(&self, tol: f64) -> SparseObservable {
    let mut terms: Vec<SparseTerm> = self.iter().map(|t| t.to_term()).collect();
```
I assume there were some lifetime issues when using `SparseTermView` instead of `SparseTerm`? If not, it might be nicer to pass `&SparseTermView`s to avoid cloning the underlying data.
```rust
let mut new_bits = t1.bit_terms.to_vec();
new_bits[mismatch_pos] = BitTerm::X;
let new_indices = t1.indices.to_vec();
let new_coeff = t1.coeff * Complex64::new(0.5, 0.0);
(new_bits, new_indices, new_coeff)
```
It would be nice to put this logic into a closure that can be called to avoid the duplication, something like

```rust
let proj_to_pauli = |pauli: BitTerm, negative_sign: bool, pos: usize, term: &SparseTerm| {
    let mut new_bits = term.bit_terms.to_vec();
    new_bits[pos] = pauli;
    let new_indices = term.indices.to_vec();
    let new_coeff = term.coeff * Complex64::new(if negative_sign { 0.5 } else { -0.5 }, 0.0);
    (new_bits, new_indices, new_coeff)
};
```
I have completely rewritten this bit of code.
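A standalone version of such a helper, written as a free function with toy types (real coefficients instead of `Complex64`; the sign convention and names are my assumptions, not the PR's), might look like:

```rust
// Toy enum: the real code uses qiskit's BitTerm alphabet.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum BitTerm { I, X, Y, Z, Plus, Minus }

/// Rewrite one projector slot as a Pauli, scaling the coefficient by
/// +/- 0.5, based on identities such as |+><+| = (I + X) / 2 and
/// |-><-| = (I - X) / 2.
pub fn proj_to_pauli(
    pauli: BitTerm,
    negative_sign: bool,
    pos: usize,
    bit_terms: &[BitTerm],
    coeff: f64, // Complex64 in the real code
) -> (Vec<BitTerm>, f64) {
    let mut new_bits = bit_terms.to_vec();
    new_bits[pos] = pauli;
    // each projector contributes half of the corresponding Pauli,
    // with a sign chosen by the caller
    let new_coeff = coeff * if negative_sign { -0.5 } else { 0.5 };
    (new_bits, new_coeff)
}
```

For example, rewriting the `Plus` slot of `[X, Plus, I, Z]` as `X` with coefficient 1.5 yields `[X, X, I, Z]` with coefficient 0.75.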
```rust
    Ok(simplified.into())
}

/// Greedily combine the terms in the observable.
```
Maybe we should say something about runtime here. Did you find this to be slow in practice?
I added a comment about the worst-case runtime complexity.
Summary
This is a preliminary implementation of the following idea (which came up in a discussion with @Cryoris, @jakelishman and others): in certain cases we can "compress" (i.e. reduce the number of terms in) a `SparseObservable` by combining terms. For example, the term `1.5 * "X+IZ"` can be combined with the term `-1.5 * "X+ZZ"` to produce `3.0 * "X+1Z"`. For another example, the term `1.5 * "X+IZ"` can be combined with the term `-1.5 * "X-IZ"` to produce `1.5 * "XXIZ"`. While finding the optimal way to combine terms is intractable, a simple greedy strategy is definitely possible.

This can be used, for example, when synthesizing a Pauli evolution circuit for a `SparseObservable`: in some cases the number of two-qubit gates for the compressed observable may be smaller than for the original one. Surprisingly, this shows value on the HamLib benchmark set.
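To make the greedy strategy concrete, here is a heavily simplified standalone sketch (plain `char`s instead of `BitTerm`s, real `f64` coefficients, and only the I ± Z → |0><0| / |1><1| rule), not the PR's actual implementation:

```rust
// A toy term: (coefficient, bit-term string), one char per qubit slot.
type Term = (f64, Vec<char>);

/// Try to combine two equal-length terms that differ in at most one
/// position, using I + Z = 2|0><0| ("0") and I - Z = 2|1><1| ("1").
fn combine(t1: &Term, t2: &Term, tol: f64) -> Option<Term> {
    if t1.1.len() != t2.1.len() {
        return None; // fast exit: different lengths never combine
    }
    let same = (t1.0 - t2.0).abs() <= tol;
    let opposite = (t1.0 + t2.0).abs() <= tol;
    if !same && !opposite {
        return None; // fast exit: coefficients neither equal nor opposite
    }
    let diffs: Vec<usize> =
        (0..t1.1.len()).filter(|&i| t1.1[i] != t2.1[i]).collect();
    match diffs.as_slice() {
        // identical strings with equal coefficients simply add up
        [] if same => Some((t1.0 + t2.0, t1.1.clone())),
        // exactly one mismatching I/Z position
        &[pos] => {
            let (a, b) = (t1.1[pos], t2.1[pos]);
            if !matches!((a, b), ('I', 'Z') | ('Z', 'I')) {
                return None;
            }
            // equal coefficients give I + Z -> "0"; opposite give I - Z -> "1";
            // the new coefficient is twice that of the term holding the I
            let new_bit = if same { '0' } else { '1' };
            let c = if a == 'I' { t1.0 } else { t2.0 };
            let mut bits = t1.1.clone();
            bits[pos] = new_bit;
            Some((2.0 * c, bits))
        }
        _ => None,
    }
}

/// Greedy pass: keep merging the first combinable pair until nothing merges.
pub fn compress(mut terms: Vec<Term>, tol: f64) -> Vec<Term> {
    'outer: loop {
        for i in 0..terms.len() {
            for j in (i + 1)..terms.len() {
                if let Some(merged) = combine(&terms[i], &terms[j], tol) {
                    terms.remove(j); // remove j first since j > i
                    terms[i] = merged;
                    continue 'outer; // restart the scan after each merge
                }
            }
        }
        return terms;
    }
}
```

With this rule, `II + IZ + ZI + ZZ` (all with coefficient 1.0) collapses greedily to a single term `4.0 * "00"`, matching the projector example from the review discussion.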
Details and comments
This PR is based on top of #14067 (which improves the default synthesis of `SparseObservable`s coming from HamLib benchmarks). Update: #14067 is now merged.

Currently this contains the following reductions:
Any other suggestions are highly welcome!
Experiments on HamLib benchmarks
The following Excel sheet contains the experiments for the 100 HamLib benchmarks that we use in BenchPress, transpiled to `["cx", "u"]` and considering `optimization_level=0` and `optimization_level=2`: hamlib_compressed.xlsx
For 39 out of the 100 benchmarks, at least one reduction is possible. If we run `transpile` with `optimization_level=0`, then compressing the `SparseObservable` is always beneficial: in 29 out of 39 cases the number of CX-gates is reduced (and in no case is it increased).

However, compressing terms in a `SparseObservable` may negatively affect the cancellations possible with `optimization_level=2`, so unfortunately compressing the `SparseObservable` is not always beneficial. Yet in 16 out of 39 cases the "compression" does help with `optimization_level=2` as well, with at least 3 cases where the reduction is substantial: