I generated some fake scribbles on our dataset using the ground-truth labels, but the pretrained model cannot predict anything meaningful from them, even though generated clicks and bounding boxes work fine. I am wondering how the scribbles are supposed to be generated, and how my generated scribbles can shift the input distribution so much that they completely confuse the network.
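For reference, here is a minimal sketch of one common way to simulate scribbles from a ground-truth mask: keep only the eroded interior of the binary mask so the stroke stays away from object borders. The function names and the erosion depth are my own illustration, not taken from any particular codebase, so the real generation procedure may differ.

```python
import numpy as np

def erode(mask, iters=1):
    """Binary erosion with a 3x3 cross structuring element (pure NumPy)."""
    m = mask.astype(bool)
    for _ in range(iters):
        p = np.pad(m, 1, constant_values=False)
        # A pixel survives only if it and its 4-neighbors are all foreground.
        m = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:])
    return m

def fake_scribble(mask, iters=2):
    """Approximate a foreground scribble as the eroded interior of the
    ground-truth mask; fall back to the full mask if erosion empties it."""
    interior = erode(mask, iters)
    return interior if interior.any() else mask.astype(bool)

# Toy example: an 8x8 square object inside a 16x16 image.
mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True
scribble = fake_scribble(mask, iters=2)  # 4x4 interior block remains
```

If the real generator instead draws thin skeleton-like strokes (e.g. via morphological thinning), the resulting input statistics (stroke width, coverage, distance to boundary) can differ substantially from what the model saw during training, which may explain the distribution shift.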