
removing saturation artefact from recordings #1905

@Kayv-cmb

Hi,

I have huge saturation artefacts in my recordings due to movement, which I tried to remove with saturation blanking. The blanking itself works, but the Kilosort3 results are still highly contaminated and unusable. So I thought about trying DeepInterpolation, since I am using Neuropixels recordings. I have a question about that and an issue.
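
For context, this is roughly the preprocessing chain I have in mind. The function names and parameters below (`read_openephys`, `blank_staturation`, `deepinterpolate`, the threshold, the model path) are how I understand the spikeinterface preprocessing module and are placeholders rather than exactly what I ran, so please correct me if any of them are off:

```python
import spikeinterface.extractors as se
import spikeinterface.preprocessing as spre

# Load the Neuropixels recording (Open Ephys in my case; the path is a placeholder)
rec = se.read_openephys("/path/to/recording")

# Blank the saturated samples; the absolute threshold is a placeholder I would tune by eye
rec_blanked = spre.blank_staturation(rec, abs_threshold=3000)

# Denoise with DeepInterpolation, using the pre-trained Neuropixels model from the paper
rec_di = spre.deepinterpolate(
    rec_blanked,
    model_path="/path/to/pretrained_neuropixels_model.h5",
)
```

More specifically: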

  • Right now I am using the pre-trained Neuropixels model from the paper, but is there a way to train a new model from spikeinterface?
  • There seems to be an issue with DeepInterpolation, or more precisely with TensorFlow and cuDNN; maybe I did something wrong? The log and full traceback are below, and I have sketched a small GPU/cuDNN check right after the traceback.
2023-08-03 12:31:25.730405: W tensorflow/c/c_api.cc:304] Operation '{name:'conv2d_10/BiasAdd' id:273 op device:{requested: '', assigned: ''} def:{{{node conv2d_10/BiasAdd}} = BiasAdd[T=DT_FLOAT, _has_manual_control_dependencies=true, data_format="NHWC"](conv2d_10/Conv2D, conv2d_10/BiasAdd/ReadVariableOp)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
2023-08-03 12:31:25.797866: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:437] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2023-08-03 12:31:25.798016: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:441] Memory usage: 16299524096 bytes free, 16908615680 bytes total.
2023-08-03 12:31:25.798167: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:451] Possibly insufficient driver version: 470.57.2
2023-08-03 12:31:25.798197: W tensorflow/core/framework/op_kernel.cc:1828] OP_REQUIRES failed at conv_ops_fused_impl.h:625 : UNIMPLEMENTED: DNN library is not found.
2023-08-03 12:31:25.798261: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous recv item cancelled. Key hash: 17003553290036108529

---------------------------------------------------------------------------
UnimplementedError                        Traceback (most recent call last)
Cell In[5], line 4
      1 get_ipython().run_line_magic('matplotlib', 'widget')
      2 #rec =si.BinaryFolderRecording('/nemo/lab/schaefera/scratch/combadk/NP_spikesorting/2023-07-12_17-04-33/preprocess')
----> 4 w = sw.plot_timeseries(rec, backend="ipywidgets")

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/spikeinterface/widgets/base.py:119, in define_widget_function_from_class.<locals>.widget_func(*args, **kwargs)
    117 @copy_signature(widget_class)
    118 def widget_func(*args, **kwargs):
--> 119     W = widget_class(*args, **kwargs)
    120     W.do_plot(W.backend, **W.backend_kwargs)
    121     return W.plotter

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/spikeinterface/widgets/timeseries.py:130, in TimeseriesWidget.__init__(self, recording, segment_index, channel_ids, order_channel_by_depth, time_range, mode, return_scaled, cmap, show_channel_ids, color_groups, color, clim, tile_size, seconds_per_row, with_colorbar, add_legend, backend, **backend_kwargs)
    127 mode = mode
    128 cmap = cmap
--> 130 times, list_traces, frame_range, channel_ids = _get_trace_list(
    131     recordings, channel_ids, time_range, segment_index, order, return_scaled
    132 )
    134 # stat for auto scaling done on the first layer
    135 traces0 = list_traces[0]

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/spikeinterface/widgets/timeseries.py:236, in _get_trace_list(recordings, channel_ids, time_range, segment_index, order, return_scaled)
    234 list_traces = []
    235 for rec_name, rec in recordings.items():
--> 236     traces = rec.get_traces(
    237         segment_index=segment_index,
    238         channel_ids=channel_ids,
    239         start_frame=frame_range[0],
    240         end_frame=frame_range[1],
    241         return_scaled=return_scaled,
    242     )
    244     if order is not None:
    245         traces = traces[:, order]

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/spikeinterface/core/baserecording.py:278, in BaseRecording.get_traces(self, segment_index, start_frame, end_frame, channel_ids, order, return_scaled, cast_unsigned)
    276 channel_indices = self.ids_to_indices(channel_ids, prefer_slice=True)
    277 rs = self._recording_segments[segment_index]
--> 278 traces = rs.get_traces(start_frame=start_frame, end_frame=end_frame, channel_indices=channel_indices)
    279 if order is not None:
    280     assert order in ["C", "F"]

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/spikeinterface/preprocessing/deepinterpolation/deepinterpolation.py:381, in DeepInterpolatedRecordingSegment.get_traces(self, start_frame, end_frame, channel_indices)
    369 # instantiate an input generator that can be passed directly to model.predict
    370 input_generator = self.DeepInterpolationInputGenerator(
    371     recording=self.parent_recording_segment,
    372     start_frame=true_start_frame,
   (...)
    379     batch_size=self.batch_size,
    380 )
--> 381 di_output = self.model.predict(input_generator, verbose=2)
    383 out_traces = self.reshape_backward(di_output)
    385 if true_start_frame != start_frame:

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/keras/src/engine/training_v1.py:1059, in Model.predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1056 self._check_call_args("predict")
   1058 func = self._select_training_loop(x)
-> 1059 return func.predict(
   1060     self,
   1061     x=x,
   1062     batch_size=batch_size,
   1063     verbose=verbose,
   1064     steps=steps,
   1065     callbacks=callbacks,
   1066     max_queue_size=max_queue_size,
   1067     workers=workers,
   1068     use_multiprocessing=use_multiprocessing,
   1069 )

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/keras/src/engine/training_generator_v1.py:706, in GeneratorOrSequenceTrainingLoop.predict(self, model, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
    693 def predict(
    694     self,
    695     model,
   (...)
    703     use_multiprocessing=False,
    704 ):
    705     model._validate_or_infer_batch_size(batch_size, steps, x)
--> 706     return predict_generator(
    707         model,
    708         x,
    709         steps=steps,
    710         verbose=verbose,
    711         callbacks=callbacks,
    712         max_queue_size=max_queue_size,
    713         workers=workers,
    714         use_multiprocessing=use_multiprocessing,
    715     )

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/keras/src/engine/training_generator_v1.py:282, in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
    279 callbacks._call_batch_hook(mode, "begin", step, batch_logs)
    281 is_deferred = not model._is_compiled
--> 282 batch_outs = batch_function(*batch_data)
    283 if not isinstance(batch_outs, list):
    284     batch_outs = [batch_outs]

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/keras/src/engine/training_generator_v1.py:591, in _make_execution_function.<locals>.predict_on_batch(x, y, sample_weights)
    590 def predict_on_batch(x, y=None, sample_weights=None):
--> 591     return model.predict_on_batch(x)

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/keras/src/engine/training_v1.py:1321, in Model.predict_on_batch(self, x)
   1318     return self(inputs)
   1320 self._make_predict_function()
-> 1321 outputs = self.predict_function(inputs)
   1323 if len(outputs) == 1:
   1324     return outputs[0]

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/keras/src/backend.py:4609, in GraphExecutionFunction.__call__(self, inputs)
   4599 if (
   4600     self._callable_fn is None
   4601     or feed_arrays != self._feed_arrays
   (...)
   4605     or session != self._session
   4606 ):
   4607     self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
-> 4609 fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
   4610 self._call_fetch_callbacks(fetched[-len(self._fetches) :])
   4611 output_structure = tf.nest.pack_sequence_as(
   4612     self._outputs_structure,
   4613     fetched[: len(self.outputs)],
   4614     expand_composites=True,
   4615 )

File ~/.conda/envs/CHIME_kayvan/lib/python3.9/site-packages/tensorflow/python/client/session.py:1482, in BaseSession._Callable.__call__(self, *args, **kwargs)
   1480 try:
   1481   run_metadata_ptr = tf_session.TF_NewBuffer() if run_metadata else None
-> 1482   ret = tf_session.TF_SessionRunCallable(self._session._session,
   1483                                          self._handle, args,
   1484                                          run_metadata_ptr)
   1485   if run_metadata:
   1486     proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

UnimplementedError: 2 root error(s) found.
  (0) UNIMPLEMENTED: DNN library is not found.
	 [[{{node conv2d_1/Relu}}]]
	 [[conv2d_10/BiasAdd/_299]]
  (1) UNIMPLEMENTED: DNN library is not found.
	 [[{{node conv2d_1/Relu}}]]
0 successful operations.
0 derived errors ignored.
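
In case it is relevant: would something like the check below be the right way to confirm that TensorFlow can actually see the GPU and initialize cuDNN? I have read that CUDNN_STATUS_NOT_INITIALIZED can happen when TensorFlow tries to grab all of the GPU memory up front, so maybe enabling memory growth is worth a try. These are standard TensorFlow calls, nothing spikeinterface-specific, and just my guess at a diagnosis:

```python
import tensorflow as tf

# Which CUDA / cuDNN versions this TensorFlow build expects
build = tf.sysconfig.get_build_info()
print("CUDA:", build.get("cuda_version"), "cuDNN:", build.get("cudnn_version"))

# Does TensorFlow see the GPU at all?
gpus = tf.config.list_physical_devices("GPU")
print("GPUs:", gpus)

# Allocate GPU memory on demand instead of all at once
# (this has to run before the first model/session touches the GPU)
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```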

Do you have any idea how to solve this? Also, if you think there is a better way to deal with my saturation artefacts, I will gladly take it; one idea I was wondering about is sketched below.
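
In case DeepInterpolation stays problematic, the idea I was wondering about is to detect the saturated stretches myself and zero them out, instead of only clipping the amplitudes. A rough sketch of what I mean is below; the threshold and the fraction-of-channels criterion are numbers I made up, and I am assuming that `silence_periods` from the preprocessing module takes a list of (start_frame, end_frame) periods per segment:

```python
import numpy as np
import spikeinterface.preprocessing as spre

# 'rec' is the raw recording from above; this loads one whole segment, which is
# fine as a sketch but would need chunking for a long Neuropixels recording
traces = rec.get_traces(segment_index=0, return_scaled=False)

# Call a frame "saturated" when more than half of the channels exceed a made-up threshold
saturated = (np.abs(traces) > 3000).mean(axis=1) > 0.5

# Turn the boolean mask into (start_frame, end_frame) periods
padded = np.concatenate(([False], saturated, [False]))
edges = np.flatnonzero(np.diff(padded.astype(np.int8)))
periods = list(zip(edges[::2], edges[1::2]))

# Replace those stretches with zeros so they do not contaminate the sorting
rec_silenced = spre.silence_periods(rec, list_periods=[periods], mode="zeros")
```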

Thank you !!!
