GPU computing? #310
Hi, I saw you beautifully implemented the new Facebook segmentation algorithm. However, I found the segmentation step a bit slow. All the computing seems to be done by the CPU, and since I have a decent GPU (GeForce RTX 3080) I think it would run way faster on it (at least that's my experience with other machine learning applications). But I haven't been able to figure out how to set that up in the GUI. Maybe it's somewhere in the code? I hope you can help, since this seems to be an invaluable tool for making analysis much more efficient.
Hi @DavidMF123, I'm happy to hear you liked the new addition 😁! It's certainly possible to run the segmentation on the GPU and you are right, it will be much faster, but you might run into memory issues because the model is pretty large. For example, I couldn't run segmentation of a 600x600 image on a GPU with 4 GB of memory, but I could on a 16 GB GPU. Anyway, it's absolutely worth a try. You need to uninstall the CPU version of PyTorch and install the GPU version. To do that, make sure you close Cell-ACDC first, then open a terminal and run the following commands one by one:
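(The command list below is partly reconstructed: the environment name is a placeholder, and the final install command is truncated in the source, so take the exact GPU install command for your CUDA version from pytorch.org.)

```sh
# 1. activate the Cell-ACDC environment first (placeholder name, use your own,
#    e.g. `conda activate acdc`)
pip uninstall torch         # 2. remove the CPU-only build
pip uninstall torchvision   # 3.
pip uninstall torchaudio    # 4. may report it isn't installed; safe to ignore
pip3 install torch torchvision …   # 5. truncated in the source; use the CUDA-enabled
                                   #    install command from pytorch.org
```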
Commands 2-4 uninstall the CPU version of PyTorch; they ask you to confirm in the terminal. I'm not sure whether you have torchaudio installed, so command 4 might report that it isn't; in that case you can ignore it. After installing everything, when you get to the model's parameters window, one of the parameters lets you run the model on the GPU. If the data doesn't fit on the GPU you will get an error, and unless you buy a GPU with more memory, you can try to reduce the "Points per side" parameter. Let me know if it works, thanks! Cheers
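To verify that the reinstall actually picked up the CUDA build (plain PyTorch calls, nothing Cell-ACDC-specific), something like this works:

```python
import torch

# True only if the CUDA-enabled PyTorch build is installed and a GPU is visible
print(torch.cuda.is_available())

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA GeForce RTX 3080"
    # total VRAM in GiB, a rough guide to how large an image will fit
    print(torch.cuda.get_device_properties(0).total_memory / 1024**3)
```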
OK, fine. I had to crop the images down to 512x512 (originally 2048x2048) and reduce the points per side from 32 to 16 to make everything fit on my GPU. This way, 4 files, each a 9-slice z-stack of 512x512 images, took just over 1 min on the GPU, while the original 2048x2048 images took about 1 h 40 min on the CPU (with the default points per side). So even if I divide each original file into 16 tiles to cover the whole area and process them sequentially, it would take only about 20 min on the GPU, which is a huge improvement (I didn't let the CPU finish on the cropped 512x512 images, but I left it running for more than 3 min and it still didn't look like it had done anything). Regarding the quality of the segmentation, it looked equally good with the GPU at 16 points per side as with the default setup on the CPU. So yeah, that works! Thanks
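A minimal sketch of the tiling idea above, assuming plain NumPy arrays (a hypothetical helper, not part of Cell-ACDC):

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int = 512) -> list[np.ndarray]:
    """Split the last two (y, x) axes into non-overlapping tile x tile crops.

    Works on a single 2048x2048 frame or on a z-stack shaped (z, 2048, 2048).
    """
    h, w = img.shape[-2], img.shape[-1]
    return [
        img[..., y:y + tile, x:x + tile]
        for y in range(0, h, tile)
        for x in range(0, w, tile)
    ]

# e.g. one 9-slice z-stack -> 16 tiles of shape (9, 512, 512)
stack = np.zeros((9, 2048, 2048), dtype=np.uint16)
tiles = tile_image(stack)
assert len(tiles) == 16
```

One caveat: cells straddling a tile border get cut in half, the same problem as cells at the edge of the FOV discussed below.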
I guess I'd just dismiss them. Anyway, I was just testing it with some random files I had from a failed project. I think the smarter move is to adjust the acquisition setup to an image size my GPU can handle, if possible (some scanning confocals can do that for sure), but no matter what, there are always cells at the limit of the FOV that you will lose for sure. Also, depending on the application (i.e. if you have cells mechanically trapped), you can use the traps to define a much smaller area around them that always contains your cells centered in the images.

Just for fun, I compared the running times of the different configurations on a set of 4 files, each a 9-slice z-stack of 512x512 images:

- GPU, 16 points per side: 1 min 18 s
- CPU, 16 points per side: 27 min 20 s
- GPU, 32 points per side: ran out of GPU memory
- CPU, 32 points per side: 1 h 0 min 15 s

Regarding the quality of the segmentation, it's true that the 32 points-per-side option was slightly more accurate, but definitely not worth the extra running time. I haven't tried intermediate values (maybe 24 would still fit on my GPU and work a bit better), but so far I'm happy with the result.

Just out of curiosity: do the colors of the segmentation masks mean something, or are they just different so that neighboring cells can be told apart? That's been bugging me since I started playing with the GUI 🤣
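For context on what "Points per side" controls: the parameter presumably maps onto the automatic mask generator of Meta's segment-anything package, which prompts the model on an N x N grid, so 16 instead of 32 means 256 prompts instead of 1024. A sketch using the segment-anything API directly (the checkpoint path is a placeholder; Cell-ACDC handles all of this internally):

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Placeholder checkpoint path; weights are downloadable from the
# facebookresearch/segment-anything repository
sam = sam_model_registry["vit_h"](checkpoint="path/to/sam_vit_h.pth")
sam.to("cuda")  # this step is what requires the CUDA build of PyTorch

# A 16x16 prompt grid instead of the default 32x32: 4x fewer prompts,
# hence the large savings in GPU memory and runtime reported above
mask_generator = SamAutomaticMaskGenerator(sam, points_per_side=16)

image = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # HxWx3 uint8
masks = mask_generator.generate(image)
```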
OK cool, if they are just random then it makes sense. I was overthinking it 🤣 I still have a long way to go until I fully learn how this program works, but it looks really interesting. I just started in a new lab and we are still figuring out what to make a project of, so in the meantime I have time to poke around with these things. But surely this will be useful once real data acquisition is on the line (and I may come back to you then, either with new ideas or asking for help). Thanks
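A minimal sketch of what random label colors mean in practice, assuming the common convention of indexing a random lookup table with the integer cell IDs (illustrative only, not Cell-ACDC's actual drawing code):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Toy labeled mask: 0 = background, 1..N = individual cells
labels = np.zeros((64, 64), dtype=int)
labels[8:28, 8:30] = 1
labels[26:56, 32:60] = 2

# One arbitrary RGB color per label; its only job is telling neighbors apart
lut = np.vstack([[0.0, 0.0, 0.0],  # background stays black
                 rng.uniform(0.2, 1.0, size=(labels.max(), 3))])
plt.imshow(lut[labels])  # index the lookup table with the label image
plt.axis("off")
plt.show()
```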