Added expAv function #1587
Conversation
Tests included. Made a new folder to put miscellaneous algorithms in.
This new function uses scaling and squaring. Testing still needs to be added, so I will cancel the test runs associated with this.
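For readers skimming the thread, the scaling-and-squaring idea can be sketched outside nimble. This is a minimal NumPy illustration with hypothetical names, not the PR's implementation: scale A down by a power of two so a short Taylor series is accurate, then apply the scaled exponential to v repeatedly to undo the scaling.

```python
import numpy as np

def expm_av_sketch(A, v, m=10):
    """Illustrative exp(A) @ v via scaling and squaring for vectors:
    exp(A) v = (exp(A / 2**s))**(2**s) v, with exp(A / 2**s) applied
    through an m-term Taylor series. Not the PR's nimble code."""
    A = np.asarray(A, dtype=float)
    v = np.asarray(v, dtype=float)
    # choose s so the 1-norm of A / 2**s is at most 1
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    B = A / 2.0**s
    for _ in range(2**s):
        term = v.copy()
        out = v.copy()
        for k in range(1, m + 1):
            term = B @ term / k   # k-th Taylor term: B^k v / k!
            out = out + term
        v = out
    return v
```

With the 1-norm scaled to at most 1, the truncation error of the m-term series is bounded by the tail of the exponential series, so a small m suffices.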
Documented, so hopefully it will pass the tests now...
@paciorek when you get time, have a look at miscAlgorithms.R. The funky thing I did was hack sparsity, which means that I check the matrix entries for zero, construct a sparse matrix if requested, and then loop only through those values for matrix-vector products. I think it's worth the speed-up. See, I made …
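The sparsity hack described here can be sketched hypothetically in NumPy rather than nimble: scan the dense matrix once for nonzero entries, store (row, col, value) triples, and compute matrix-vector products over only those triples.

```python
import numpy as np

def find_nonzeros(A):
    """Scan a dense matrix once and record (row, col, value) triples."""
    rows, cols = np.nonzero(A)
    return rows, cols, A[rows, cols]

def sparse_matvec(rows, cols, vals, v):
    """y = A @ v, touching only the stored nonzero entries."""
    y = np.zeros(len(v))
    for r, c, a in zip(rows, cols, vals):
        y[r] += a * v[c]
    return y
```

The one-time scan costs O(n^2), but each subsequent matrix-vector product in the iteration costs only O(nnz), which is where the speed-up comes from when exp(A)v needs many products.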
@paul-vdb if you have a chance, can you fix the roxygen: …
A few things I'm noticing:
Also, @paul-vdb, I am renaming to …
Ok, I've looked at the sparsity hack. Some comments:
I guess I'm somewhat inclined to make the simple switch of reversing the ordering of the two for loops, but to leave your use of row and column indices rather than using CSC.
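For reference, CSC (compressed sparse column) groups the nonzeros column by column, which is what makes the column-outer loop ordering the natural one. A hypothetical NumPy sketch, not the PR's code:

```python
import numpy as np

def dense_to_csc(A):
    """Compressed sparse column storage: values and row indices listed
    column by column; col_ptr[j]:col_ptr[j+1] delimits column j."""
    vals, row_idx, col_ptr = [], [], [0]
    for j in range(A.shape[1]):
        nz = np.nonzero(A[:, j])[0]
        vals.extend(A[nz, j])
        row_idx.extend(nz)
        col_ptr.append(len(vals))
    return np.array(vals), np.array(row_idx), np.array(col_ptr)

def csc_matvec(vals, row_idx, col_ptr, v):
    """y = A @ v for square A: outer loop over columns, inner loop
    over that column's stored nonzeros."""
    y = np.zeros(len(v))
    for j in range(len(v)):
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += vals[k] * v[j]
    return y
```

Compared with raw (row, col, value) triples, CSC drops one index array in favor of column pointers and guarantees the entries are visited in column order, so each v[j] is loaded once per column.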
Thanks @paciorek. I'll make those updates soon. Good points; I'm happy to change the order of the columns vs. rows, and I'll look at what CSC does and see if I can make the hack good enough to just make it the default.
One other note. This might be over-engineering, but we might only want to do sparsity if A is "big enough". That also depends on how many iterations one is likely to do in the while loop.
@paciorek I've made a lot of the changes you suggested, including the compressed-column sparsity. I think it might be overthinking it for small matrices: if they are small, the sparse bit won't save much time, but I don't think it will hurt, especially with the new CCS changes. I've been trying to get rid of the while loop and instead estimate the number of iterations needed; I'm hoping to have that done today and will push a new version soon. Thanks!!! I also generalized it to more than just non-positive diagonals, but am explicit that the matrix must be square.
Updated both expmAv and expmA to work for any square matrix, not just non-positive diagonals. Changed the roxygen for the updates. Made both able to have derivatives, although this is untested as yet. Included more tests for larger matrices. Tested against expm::expm and found comparable speed.
@paciorek I made all the requested changes minus a check for length(tol) > 1. I feel a bit dense here, but all the algorithms from Sherlock assumed a Markov jump process (non-positive diagonals on A). When this isn't the case you don't want to scale the matrix, but you still need to figure out how many iterations to run. I followed the RTMB logic, which was more general and used the bound on the spectrum to compute the number of iterations. I also had issues with gradients, and it might be best to just turn them off for now and come back around on that in the next release...
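As background on choosing the iteration count in the Markov-jump case (non-positive diagonals), uniformization writes A = rho(P − I) with rho = max|A_ii|, turning exp(A)v into a Poisson-weighted series that can be truncated at a Poisson quantile — the role qpois plays in R. A hypothetical NumPy sketch, not the PR's implementation:

```python
import numpy as np

def poisson_quantile(q, rho):
    """Smallest n with P(Poisson(rho) <= n) >= q (the job R's qpois does)."""
    cdf = pmf = np.exp(-rho)
    n = 0
    while cdf < q:
        n += 1
        pmf *= rho / n
        cdf += pmf
    return n

def uniformized_expm_av(A, v, tol=1e-12):
    """exp(A) @ v for a generator A with non-positive diagonal (rho > 0),
    via uniformization: A = rho*(P - I), so
    exp(A) v = e^{-rho} * sum_k (rho^k / k!) * P^k v."""
    rho = float(np.max(-np.diag(A)))
    P = A / rho + np.eye(A.shape[0])
    N = poisson_quantile(1.0 - tol, rho)   # number of terms to keep
    out = np.zeros_like(v, dtype=float)
    term = v.astype(float)                 # P^k v, starting at k = 0
    w = np.exp(-rho)                       # Poisson weight e^{-rho} rho^k / k!
    for k in range(N + 1):
        out = out + w * term
        term = P @ term
        w *= rho / (k + 1)
    return out
```

The neglected tail has total Poisson weight at most tol, which is what makes the truncation point a quantile rather than a guess.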
Note I removed …
Just responding in terms of …
@paul-vdb I'm happy to merge this in if you're happy with the implementation and testing. I haven't dug into the algorithmic details. Could you let me know your thoughts about function and argument naming? (See my post from a few posts ago in this discussion.)
@paciorek I missed the naming bit going through that list... Pushing those changes now. I commented out the …
Name changes; added a slightly more extreme test; switched from the getNPrec algorithm back to the simpler qpois.
@paciorek @perrydv No need to merge this any time soon, but I thought I'd clean up this function and get any feedback when you have a chance. I'm using a sparsity hack for large-matrix computation that is optional. Checking with pracma::expm, this is actually much more accurate even for a 2x2 matrix; I had to use expm::expm for comparisons instead. In the future we can remove that hack if we have native sparsity. It is set up for derivatives. Not a priority, but I spent a bunch of time learning this, so I figured I'd push a tidy version before I fully moved on.
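One package-free way to mimic the accuracy comparison above is to test an exp(A)v routine against a matrix whose exponential has a closed form. A hedged NumPy sketch with a hypothetical reference implementation:

```python
import numpy as np

def taylor_expm_av(A, v, m=30):
    """Reference exp(A) @ v via a plain truncated Taylor series
    (illustrative stand-in for the routine under test)."""
    term = out = np.asarray(v, dtype=float)
    for k in range(1, m + 1):
        term = A @ term / k
        out = out + term
    return out

# For A = [[0, t], [-t, 0]], exp(A) is a rotation by t radians,
# so exp(A) @ [1, 0] = [cos t, -sin t] exactly.
t = 0.7
A = np.array([[0.0, t], [-t, 0.0]])
approx = taylor_expm_av(A, np.array([1.0, 0.0]))
exact = np.array([np.cos(t), -np.sin(t)])
max_err = float(np.max(np.abs(approx - exact)))
```

Checking against an analytically known exp(A) sidesteps the question of which reference package (pracma::expm vs. expm::expm) is itself accurate enough to compare against.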