Hello,
I’ve been trying to run patchwork on a test dataset, but the job consistently aborts due to excessive memory usage.
For a dataset of ~113–119 million reads (trimmed and filtered, not assembled), the job uses up to ~450 GB of RAM (12 CPUs) before being aborted, which exceeds the available resources on our system. Is there a workaround for this problem?
singularity run \
--bind /media/inter/abart/Projekte/Isopoda/GenomeSkimming/output/Patchwork:/media/inter/abart/Projekte/Isopoda/GenomeSkimming/output/Patchwork \
/opt/bioinformatics/containers/patchwork.sif \
--contigs "/media/inter/abart/Projekte/Isopoda/GenomeSkimming/output/Patchwork/26613_1.fq.gz" \
"/media/inter/abart/Projekte/Isopoda/GenomeSkimming/output/Patchwork/26613_2.fq.gz" \
--reference "/media/inter/abart/Projekte/Isopoda/GenomeSkimming/output/Patchwork/Cylisticus_convexus-29744.longest_orfs_fixed_header.pep" \
--output-dir /media/inter/abart/Projekte/Isopoda/GenomeSkimming/output/Patchwork/26613 \
--threads 12
Thank you for your help!
Best, Anna