TensorConstant error in PyMC sampling with full_ddm #868
I'm trying to extend the tutorial "Using the low-level API from HSSM directly with PyMC" to use the "full_ddm" model. The sampling call is erroring out, seemingly because of how the likelihood function is trying to get the length of the data (script and traceback not reproduced here).
Does anyone have advice on how to resolve this? FWIW, I checked that the version in the tutorial does run as expected. Thanks!
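For context, here is a minimal sketch of the pattern that triggers the error (simplified; my actual script follows the tutorial more closely): logp_full_ddm is passed straight into make_distribution as the likelihood.

from hssm.distribution_utils import make_distribution
from hssm.likelihoods import logp_full_ddm

# Passing the black-box likelihood directly as loglik is what leads to the
# TensorConstant error once pm.sample() is called (see the reply below).
FULL_DDM = make_distribution(
    rv="full_ddm",
    loglik=logp_full_ddm,
    list_params=["v", "a", "z", "t", "sv", "sz", "st"],
)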
Hi @johnmdusel,

This is because logp_full_ddm is a black-box likelihood function that's constructed differently from logp_ddm or logp_ddm_sdv. To use it with PyMC, you need to wrap it in a PyTensor Op first. I tested and the following code works:

import arviz as az
import matplotlib.pyplot as plt
import pymc as pm
import pytensor
import hssm
from hssm.distribution_utils import make_blackbox_op, make_distribution # import the make_blackbox_op here
from hssm.likelihoods import logp_full_ddm
hssm.set_floatX("float32")
pytensor.config.floatX = "float32"
pytensor.config.blas__ldflags = "-llapack -lblas -lcblas"
if __name__ == "__main__":
    v_true, a_true, z_true, t_true, sv_true, sz_true, st_true = (
        0.5,
        1.5,
        0.5,
        0.1,
        0.1,
        0.1,
        0.1,
    )

    dataset = hssm.simulate_data(
        model="full_ddm",
        theta=[v_true, a_true, z_true, t_true, sv_true, sz_true, st_true],
        size=1000,
    )

    logp_full_ddm_op = make_blackbox_op(logp_full_ddm)  # wrap the likelihood in an op

    FULL_DDM = make_distribution(
        rv="full_ddm",
        loglik=logp_full_ddm_op,  # use the op instead of the likelihood function itself
        list_params=["v", "a", "z", "t", "sv", "sz", "st"],
    )

    with pm.Model() as full_ddm_model:
        v = pm.Uniform("v", lower=-10.0, upper=10.0)
        a = pm.HalfNormal("a", sigma=2.0)
        z = pm.Uniform("z", lower=0.01, upper=0.99)
        t = pm.Uniform("t", lower=0.0, upper=0.6, initval=0.1)
        sv = pm.HalfNormal("sv", sigma=2.0)
        sz = pm.HalfNormal("sz", sigma=2.0)
        st = pm.HalfNormal("st", sigma=2.0)

        rt_resp = FULL_DDM(
            "rt_resp", v=v, a=a, z=z, t=t, sv=sv, sz=sz, st=st, observed=dataset.values
        )

        rt_resp_trace = pm.sample(
            mp_ctx="spawn",
            nuts_sampler="pymc",
            chains=4,
            draws=100,
            tune=10,
            random_seed=42,
        )  # this is the call that errored without the Op wrapper

    az.plot_trace(rt_resp_trace)
    plt.tight_layout()

Sorry there was no documentation on this, and the naming of the function is confusing. We'll fix it in a future release.
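As a quick follow-up, one way to sanity-check the result is to summarize the posterior with ArviZ and compare the estimates against the generating values (standard ArviZ usage, nothing HSSM-specific; keep in mind tune=10 and draws=100 above are far too short for a real run):

# Posterior means should land near the *_true values passed to simulate_data.
print(az.summary(rt_resp_trace, var_names=["v", "a", "z", "t", "sv", "sz", "st"]))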