
Conversation

@mitchchessnoob

No description provided.

vincentcartillier and others added 30 commits April 6, 2022 11:02
Adds a quickstart notebook that sets up the conda env, pre-processes annotations/clips/frames, runs an eval on a subset of VQ, and does a small e2e training run.

Note: Meta Google accounts don't have access to Colab; use a regular Google account to view the notebook instead.
adding camera pose results for the val set
* edge-case handling for clip_uid as None (see the clip_uid sketch after this list)
* checking for corrupt clips + recomputing
* updated TQDM description
* downscale clips to 700p before saving
* pre-extract detections for each query
* facilitates faster evaluation (see the caching sketch after this list)
* replace test-challenge inference with general inference
* use "WorkerWithDevice" strategy to avoid GPU overload in MP
* add cached evaluation to utilize pre-computed BBoxes
* add "torch.cuda.set_device(device)" in KYSTracker to assign GPU
* add "visualize" flag in perform_retrieval* to save RAM
* use the deepspeed library to measure average detector FLOPs (see the profiling sketch after this list)
* add annot_key
* use separate dir per sample for visualization
* set data version based on annot file
* read frames from video_reader lazily (see the reader sketch after this list)
* sets a similarity threshold to remove noisy peaks (see the peak-filtering sketch after this list)
* option to set lost_thresh for KYS
* remove end-to-end evaluate_vq2d.py
* add evaluate_vq.py to use precomputed results
* account for errors where extracted clip is shorter than expected
* add missing reader.close()
* add option to extract specific clips
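
To make the clip_uid edge case concrete, here is a minimal sketch of the kind of guard involved; the annotation schema and field names are assumptions, not necessarily the repo's actual types:

```python
def filter_annotations(annotations):
    """Drop annotation records whose clip_uid is missing (None).

    Some exported annotations reference no clip, and downstream code that
    builds paths like f"{clip_uid}.mp4" would crash on them.
    """
    kept, skipped = [], 0
    for ann in annotations:
        if ann.get("clip_uid") is None:  # the edge case handled above
            skipped += 1
            continue
        kept.append(ann)
    if skipped:
        print(f"Skipped {skipped} annotations with clip_uid=None")
    return kept
```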
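
Pre-extracting detections per query is essentially a disk cache around the detector, which is what lets the cached evaluation skip GPU work on later runs. A rough sketch; `detector`, the cache layout, and the pickle format are all assumptions:

```python
import os
import pickle

def get_detections(query_uid, frames, detector, cache_dir="detections_cache"):
    """Run the detector once per query and reuse the result on later eval runs."""
    path = os.path.join(cache_dir, f"{query_uid}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # cached-evaluation path: no GPU work needed
    dets = [detector(frame) for frame in frames]  # hypothetical detector API
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(dets, f)
    return dets
```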
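
The "WorkerWithDevice" idea, combined with the torch.cuda.set_device call, can be sketched as a callable that pins each task to an assigned GPU before doing any CUDA work. A minimal sketch, not the repo's actual class; the workload inside __call__ is a placeholder:

```python
import torch
import torch.multiprocessing as mp

class WorkerWithDevice:
    """Callable that pins its process to an assigned GPU before any CUDA work,
    so parallel per-clip jobs don't all pile onto cuda:0."""

    def __init__(self, device: str):
        self.device = device

    def __call__(self, clip_path: str):
        torch.cuda.set_device(self.device)  # mirrors the KYSTracker fix above
        # Placeholder workload; real code would run tracking/detection here.
        x = torch.randn(8, 3, 224, 224, device=self.device)
        return clip_path, float(x.mean())

def run_all(clip_paths):
    num_gpus = torch.cuda.device_count()
    workers = [WorkerWithDevice(f"cuda:{i}") for i in range(num_gpus)]
    ctx = mp.get_context("spawn")  # CUDA requires the spawn start method
    with ctx.Pool(num_gpus) as pool:
        jobs = [
            pool.apply_async(workers[i % num_gpus], (p,))
            for i, p in enumerate(clip_paths)
        ]
        return [job.get() for job in jobs]
```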
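
For the FLOPs measurement, DeepSpeed ships a flops profiler. A sketch assuming a recent DeepSpeed version, with a stand-in torchvision model where the real code would pass its detector:

```python
import torch
import torchvision
from deepspeed.profiling.flops_profiler import get_model_profile

# Stand-in model; the PR profiles its detector, which is not shown here.
model = torchvision.models.resnet50()

with torch.no_grad():
    flops, macs, params = get_model_profile(
        model=model,
        input_shape=(1, 3, 800, 800),  # assumed input resolution
        print_profile=False,
        as_string=False,
    )
print(f"FLOPs/frame: {flops / 1e9:.2f} G, params: {params / 1e6:.1f} M")
```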
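
The lazy frame reading, the short-clip check, and the missing reader.close() all fit one pattern: a generator that decodes frames on demand and always releases the handle. A sketch using PyAV; the codebase's actual video_reader may differ:

```python
import av  # PyAV; stand-in for the codebase's video_reader

def iter_frames(clip_path, expected_num_frames=None):
    """Yield decoded RGB frames one at a time instead of loading the clip into RAM."""
    reader = av.open(clip_path)
    try:
        n = 0
        for frame in reader.decode(video=0):
            yield frame.to_ndarray(format="rgb24")
            n += 1
        # Extracted clips are sometimes shorter than expected; surface that
        # here instead of letting downstream indexing fail.
        if expected_num_frames is not None and n < expected_num_frames:
            raise RuntimeError(
                f"{clip_path}: decoded {n} frames, expected {expected_num_frames}"
            )
    finally:
        reader.close()  # the missing close() added in this PR
```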
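
Finally, the similarity threshold for noisy peaks can be illustrated with scipy's peak finder; the default threshold and spacing values below are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def find_response_peaks(scores, sim_thresh=0.5, min_distance=5):
    """Return peak frame indices whose similarity score clears sim_thresh.

    Peaks below the threshold are treated as noise and dropped, which is
    the behavior the commit above describes.
    """
    scores = np.asarray(scores, dtype=np.float64)
    peaks, props = find_peaks(scores, height=sim_thresh, distance=min_distance)
    return peaks, props["peak_heights"]
```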