api: fix staggered evaluation for add/mul #2782
Conversation
Codecov Report

@@ Coverage Diff @@
## main #2782 +/- ##
==========================================
- Coverage 83.03% 83.01% -0.03%
==========================================
Files 248 248
Lines 50898 50957 +59
Branches 4485 4489 +4
==========================================
+ Hits 42265 42303 +38
- Misses 7865 7883 +18
- Partials 768 771 +3
JDBetteridge left a comment
A lot of the norms in the tests and notebooks have changed significantly; how do we know these are now the "right" values? And is a test really valuable if we do not know what the true value is?
One could suggest that these are in some sense "regression tests", but then we definitely should not be changing the values!
The one norm that changed a lot in test_mpi.py was indeed due to a bug that had been introduced, so it made me find it, thanks. The rest are small changes due to a few extra interpolations, but within tolerance.
```diff
@@ -952,6 +952,11 @@ def _evaluate(self, **kwargs):
 class DiffDerivative(IndexDerivative, DifferentiableOp):
     pass

+    def _eval_at(self, func):
```
isn't it the case of IndexDerivative too?
```
kwargs.pop('is_commutative', None)
return self.func(*args, **kwargs)

def _eval_at(self, func):
```
nitpicking: perhaps this could be moved into a mixin class from which all *Derivative classes inherit
Seconded: EvaluatedMixin perhaps?
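A minimal sketch of the suggested refactor, assuming an `EvaluatedMixin` that hosts the shared `_eval_at`. All class names and the method body here are illustrative stand-ins, not devito's actual implementation:

```python
# Hypothetical sketch of the suggested mixin. `EvaluatedMixin`, the stub
# `IndexDerivative`, and the `_eval_at` body are illustrative only.

class EvaluatedMixin:
    """Shared `_eval_at` for all *Derivative classes (sketch)."""

    def _eval_at(self, func):
        # Rebuild this node with its arguments evaluated at `func`'s
        # location; the real logic would remap staggered indices here
        args = [a._eval_at(func) if hasattr(a, '_eval_at') else a
                for a in self.args]
        return type(self)(*args)


class IndexDerivative:
    # Stub standing in for the real base class
    def __init__(self, *args):
        self.args = args


class DiffDerivative(EvaluatedMixin, IndexDerivative):
    pass
```

Each `*Derivative` class would then inherit the behaviour instead of re-defining `_eval_at`.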
```
# Two indices are aligned if they differ by an Integer*spacing.
v = (i - j)/d.spacing
if not i.has(d):
    # Maybe a subdimension
```
nitpicking: SubDimension
```
v = (i - j)/d.spacing
if not i.has(d):
    # Maybe a subdimension
    dims = {sd for sd in i.free_symbols if getattr(sd, 'is_Dimension', False)
```
avoids a circular import (hence the local import) by using getattr
it's just a hack to avoid a hack in that case?
anyway, you just reminded me that unless i is a Number or a Dimension w/o an offset, it will be of type AffineIndexAccessFunction https://github.com/devitocodes/devito/blob/main/devito/types/dimension.py#L1822
so all you have to do here is

```python
try:
    sds, = i.sds
except (AttributeError, ValueError):
    # Expected exactly one StencilDimension
    continue
```

btw should we not raise a NotImplementedError if len(sds) > 1 instead?
Don't think it's better. You would have to check all three of i.sds, i.d, o.ofs and see if one is a SubDimension; it doesn't make it cleaner or simpler.

> btw should we not raise a NotImplementedError if len(sds) > 1 instead?

No, if there are multiple dimensions it is assumed that the user decided to specify their own indices, and that those are valid. Raising an error here would lead to a lot of breaking cases.
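The unpacking pattern under discussion can be sketched in isolation. Here `Idx` is a stub standing in for devito's `AffineIndexAccessFunction` (whose `.sds` holds the StencilDimensions), and the helper returns `None` where the real loop would `continue`:

```python
# Sketch of the `sds, = i.sds` unpacking pattern. `Idx` and
# `stencil_dim` are illustrative stand-ins, not devito APIs.

class Idx:
    def __init__(self, sds):
        self.sds = sds  # tuple of StencilDimensions in devito


def stencil_dim(i):
    try:
        # Exactly one StencilDimension expected: a bare number has no
        # `.sds` (AttributeError); several raise ValueError on unpack
        sd, = i.sds
    except (AttributeError, ValueError):
        return None  # the real code would `continue` here
    return sd
```

The tuple-unpacking assignment makes "exactly one" a structural check rather than an explicit `len(...)` test.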
Could you use is_Sub?
No because it could be something other than a Dimension
```
    for the compiler.
    """
    mapper = self._grid_map
    subs = mapper.pop('subs', {})
```
since _grid_map is (must be) a cached_property, use .get() instead of .pop()
In fact, now that you make me think twice about it, _grid_map should return a frozendict, to be pedantic
- It should not be cached, no; that would break all of the FD, since `f` and `f._subs(x, x+.5)` would end up with the same `_grid_map` when the second needs interpolation
- Using pop so that we can just iterate mapper below
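The hazard being discussed, that `.pop()` on a cached mapping mutates the cache itself, so a second call observes different contents, can be shown with a small self-contained example (names here are illustrative, not devito's):

```python
# Why `.pop()` on a cached mapping is dangerous: popping mutates the
# dict stored in the cache, so later accesses see the entry gone.
# `WithCache`/`grid_map` are illustrative stand-ins.

from functools import cached_property


class WithCache:
    @cached_property
    def grid_map(self):
        # Built once, then the same dict object is returned forever
        return {'subs': {'x': 'x + h/2'}, 'y': 'y'}


obj = WithCache()
first = obj.grid_map.pop('subs', {})   # mutates the cached dict
second = obj.grid_map.pop('subs', {})  # the 'subs' entry is already gone
```

This is why the thread settles on either not caching the property or returning an immutable frozendict.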
oh that's because we still have that crazy hack that makes every instance of f(x,y,z) reference the same __dict__ as the "origin" Function?
No, this is because it is based on indices. Every f.subs is the same f with different indices (and has to be, or it would get really nasty with data/...), and this checks whether those specific indices are on the grid given the reference index (the staggered "dim")
```
# Evaluate. Since we used `self.function` it will be on the grid when
# evaluate is called again within FD
retval = retval._evaluate(**kwargs)
if subs:
```
I don't think this `if` is needed; `.subs` should take O(1) when `subs` is empty by returning immediately
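The point can be illustrated with a toy `subs`-style method (a stand-in, not devito's implementation): when the method short-circuits on an empty mapping, the caller's `if subs:` guard buys nothing.

```python
# Toy illustration of why the `if subs:` guard is redundant when
# `subs` itself short-circuits on an empty mapping. `Expr` is a stub.

class Expr:
    def __init__(self, value):
        self.value = value

    def subs(self, mapper):
        if not mapper:
            return self  # O(1): nothing to substitute, no rebuild
        return Expr(mapper.get(self.value, self.value))


e = Expr('x')
guarded = e.subs({'x': 'y'}) if {'x': 'y'} else e   # caller-side guard
unguarded = e.subs({})                              # guard-free, still cheap
```

Dropping the guard keeps the call site shorter with no measurable cost.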
```
except KeyError:
    pass

if mapper:
```
nitpicking, but same as before: we can reduce verbosity by avoiding these `if`s, since practically speaking they won't improve performance
```
if mapper:
    return self.subs(mapper)
return self
```
nitpicking: perhaps a blank line
```
eqne = eqn.evaluate.rhs
assert simplify(eqne - (p._subs(y, yp).evaluate * f).dx(x0=xp).evaluate) == 0

def test_param_stagg_add(self):
```
Quite a few changes have been made, and it seems weird that only one new test was required? Were they all fixing the exact same (set of) thing(s), which is entirely captured by this test? If so, OK
This is the only "change" that is tested. Everything else is adjustments from removing is_Parameter and using subs in eval_for_fd instead of explicit derivatives
JDBetteridge left a comment
Just flagging a couple of other big changes that might need investigation (or given they are just in the notebooks, maybe re-running after your bug fix)
```diff
 "source": [
 "from devito import norm\n",
-"assert np.isclose(norm(u), norm(U[0]), rtol=1e-5, atol=0)"
+"assert np.isclose(norm(u), norm(U[0]), rtol=1e-2, atol=0)"
```
This is 3 orders of magnitude change in the difference between two values that are notionally the same
The numpy implementation is technically wrong: it does staggering but does not interpolate u/v where needed, so there is some difference now. Forcing the devito equation to match would be tricky, since it would be "incorrect"
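The distinction being made, placing a field on a staggered grid without interpolating it versus properly averaging to the half-nodes, can be sketched in plain numpy. This is an illustration of the issue described, not the notebook's actual code:

```python
# Illustrative sketch (not the notebook's code): "staggering without
# interpolation" vs. linear interpolation to the half-nodes x_{i+1/2}.

import numpy as np

u = np.array([0.0, 1.0, 4.0, 9.0])   # field values at nodes x_i

# Naive: reuse the node value at the staggered point x_{i+1/2}
u_stag_naive = u[:-1]

# Proper: second-order linear interpolation to x_{i+1/2}
u_stag_interp = 0.5 * (u[:-1] + u[1:])
```

The two arrays differ wherever `u` is not locally linear, which is why norms computed against such a reference shift once interpolation is applied consistently.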
```diff
 "outputs": [],
 "source": [
-"assert np.isclose(np.linalg.norm(rec.data), 4263.511, atol=0, rtol=1e-4)"
+"assert np.isclose(np.linalg.norm(rec.data), 3640.584, atol=0, rtol=1e-4)"
```
This jump is also pretty big!
Likely some parameter interpolations that were not being processed due to the SubDimensions. Will double check.

Yes, basically it comes from parameters (damp, ro, ...) being properly interpolated now.
```
def key(i):
    try:
        return i.indices[d]
    except (KeyError, AttributeError):
```
why? and what triggers this?
```
edims = set(retrieve_dimensions(tkn, deep=True))
return dim._defines & edims and edims.issubset(prefix.dimensions)

mapper = {k: v for k, v in mapper.items() if key(v)}
```
what triggered this change?
The current version is too simplistic and misses things like combinations of implicit dims. These changes are preliminary to the topography MultiSubDomain work where, for example, taking the intersection with an abox you end up with multiple implicit dims in the tkn.
```
except (AttributeError, ValueError):
    dtype = np.float32
eps = np.finfo(dtype).resolution**2
b = printer()._print(Cast('b', dtype=dtype))
```
can use a blank line before this line

also, instantiating the printer at this point is quite weird, even more so considering that a new instance will be created each time this function is entered. Why don't we create the printer once and for all at the beginning of lowering, or wherever possible (as soon as possible)?
That's how the printer is used throughout the codebase. It's instantiated when needed.
```
retval = self.subs({i.subs(subs): self.indices_ref[d]
                    for d, i in mapper.items()})
if 'harmonic' in self._avg_mode:
    retval = retval.safe_inv(retval, safe='safe' in self._avg_mode)
```
safe='safe' is a bit clumsy
It's a boolean, `'safe' in self._avg_mode`, not `safe='safe'`
```diff
-if self._avg_mode == 'harmonic':
-    from devito.finite_differences.differentiable import SafeInv
-    retval = SafeInv(retval, self.function)
+if 'harmonic' in self._avg_mode:
```
nitpicking: _avg_mode should be an instance of something rather than a string matched with `in`, so that you may do e.g. self._avg_mode.is_harmonic or self._avg_mode.is_harmonic_safe
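A minimal sketch of what the suggestion could look like. `AvgMode` is a hypothetical class, not something in devito; it just replaces substring matching with named predicates:

```python
# Hypothetical `AvgMode` wrapper, sketching the reviewer's suggestion.
# Not a devito class; names are illustrative.

class AvgMode:
    _VALID = ('arithmetic', 'harmonic', 'safe_harmonic')

    def __init__(self, mode):
        if mode not in self._VALID:
            raise ValueError(f"Invalid averaging mode {mode}, accepted "
                             f"values are {', '.join(self._VALID)}")
        self.mode = mode

    @property
    def is_harmonic(self):
        # True for both plain and safe harmonic averaging
        return 'harmonic' in self.mode

    @property
    def is_harmonic_safe(self):
        return self.mode == 'safe_harmonic'
```

Call sites would then read `self._avg_mode.is_harmonic` instead of `'harmonic' in self._avg_mode`.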
```
    return self.subs(mapper)
return self

mapper = {}
```
aka

```python
m0 = self.indices_ref
m1 = func.indices_ref
mapper = {m0[d]: m1[d] for d in self.dimensions
          if m0.get(d) is not m1.get(d)}
```

?
No, that would still throw a KeyError when d is in m0 but not in m1: m0.get(d) is not m1.get(d) would be True, but m1[d] doesn't exist
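The failure mode and a guarded alternative can be sketched with plain dicts standing in for the `indices_ref` mappings (`build_mapper` and `_MISSING` are illustrative names, not devito's):

```python
# Sketch of the KeyError discussed above and a sentinel-guarded fix.
# Plain dicts stand in for `indices_ref`; names are illustrative.

_MISSING = object()


def build_mapper(m0, m1, dimensions):
    mapper = {}
    for d in dimensions:
        i0 = m0.get(d, _MISSING)
        i1 = m1.get(d, _MISSING)
        # Only map dimensions present in both with differing indices;
        # the one-liner comprehension would do m1[d] and raise KeyError
        # for a `d` that is in m0 but missing from m1
        if i0 is not i1 and i0 is not _MISSING and i1 is not _MISSING:
            mapper[i0] = i1
    return mapper
```

The extra guards are exactly why the reply argues the one-liner is not actually simpler.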
```
retval = super()._evaluate(**kwargs)
if not self._time_buffering and not retval.is_Function:
    # Saved TimeFunction might need streaming, expand interpolations
    # for easier processing.
```
(ultra nitpick) no full stop at the end of comments
```
def _evaluate(self, **kwargs):
    retval = super()._evaluate(**kwargs)
    if not self._time_buffering and not retval.is_Function:
        # Saved TimeFunction might need streaming, expand interpolations
```
why the retval.is_Function part? what else can it be?
A Derivative, when there is an interp. But if it's a Function then you get infinite recursion on .evaluate
```
lib = ctypes.CDLL(libname)

cudaDevAttrMaxSharedMemoryPerBlockOptin = 97
# get current device
```
Ultra nitpick: Comment doesn't start with a capital
```
    return 64 * 1024  # 64 KB default
lib = ctypes.CDLL(libname)

cudaDevAttrMaxSharedMemoryPerBlockOptin = 97
```
Maybe abbreviate this name and add a comment?
That's the actual CUDA name
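Indeed, `cudaDevAttrMaxSharedMemoryPerBlockOptin` (value 97) is the literal name from the CUDA runtime's `cudaDeviceAttr` enum. A self-contained sketch of the query, assumed to mirror the intent of the surrounding code (the helper name and the fallback path are illustrative), falling back to the 64 KB default when no CUDA runtime is available:

```python
# Sketch of querying the opt-in shared-memory limit via ctypes.
# `max_shmem_per_block_optin` is an illustrative helper name; the enum
# name/value come from the CUDA runtime API.

import ctypes
import ctypes.util

cudaDevAttrMaxSharedMemoryPerBlockOptin = 97  # cudaDeviceAttr enum value


def max_shmem_per_block_optin():
    libname = ctypes.util.find_library('cudart')
    if libname is None:
        return 64 * 1024  # 64 KB default when no CUDA runtime found
    lib = ctypes.CDLL(libname)
    # Get current device
    device = ctypes.c_int()
    if lib.cudaGetDevice(ctypes.byref(device)) != 0:
        return 64 * 1024
    # Query the attribute on that device
    value = ctypes.c_int()
    err = lib.cudaDeviceGetAttribute(
        ctypes.byref(value), cudaDevAttrMaxSharedMemoryPerBlockOptin,
        device.value)
    return value.value if err == 0 else 64 * 1024
```

Keeping the full enum name, as the author argues, makes the constant greppable against the CUDA headers.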
```diff
-if self._avg_mode not in ['arithmetic', 'harmonic']:
+if self._avg_mode not in ['arithmetic', 'harmonic', 'safe_harmonic']:
     raise ValueError("Invalid averaging mode_mode %s, accepted values are"
                      " arithmetic or harmonic" % self._avg_mode)
```
Does this error need updating?