Start from a **Python>=3.8** environment with **PyTorch>=1.7** installed. To install PyTorch see [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/). To install YOLOv5 dependencies:
```bash
pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt  # install dependencies
```
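After installing the dependencies, a quick interpreter check can catch environment problems early. This is a minimal stdlib-only sketch (the `check_python` helper is illustrative, not part of YOLOv5); it verifies only the Python requirement, not the PyTorch install itself:

```python
import sys

def check_python(min_version=(3, 8)):
    """Return True if the running interpreter meets the Python>=3.8 requirement."""
    return sys.version_info >= min_version

# Fail fast with a clear message rather than hitting obscure syntax errors later
if not check_python():
    raise RuntimeError(f"Python>=3.8 required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```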
## Model Description
<img width="800" alt="YOLOv5 Model Comparison" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/model_comparison.png">
[YOLOv5](https://ultralytics.com/yolov5) 🚀 is a family of compound-scaled object detection models trained on the COCO dataset, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite.
* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results denote val2017 accuracy.
* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
This example loads a pretrained **YOLOv5s** model and passes an image for inference. YOLOv5 accepts **URL**, **Filename**, **PIL**, **OpenCV**, **Numpy** and **PyTorch** inputs, and returns detections in **torch**, **pandas**, and **JSON** output formats. See our [YOLOv5 PyTorch Hub Tutorial](https://github.com/ultralytics/yolov5/issues/36) for details.
```python
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
# Images
imgs = ['https://ultralytics.com/images/zidane.jpg']  # batch of images

# Inference
results = model(imgs)
results.print()  # print results to console; use .show() or .save() for annotated images
```
**Issues should be raised directly in https://github.com/ultralytics/yolov5.** For business inquiries or professional support requests please visit [https://ultralytics.com](https://ultralytics.com) or email Glenn Jocher at [[email protected]](mailto:[email protected]).