Week 2 Assignment #18

Open

wants to merge 2 commits into base: main
190 changes: 190 additions & 0 deletions 1_파이썬과_기계학습_기초_실습.ipynb
@@ -0,0 +1,190 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/poroblem/22-hub-kr-practice/blob/main/1_%ED%8C%8C%EC%9D%B4%EC%8D%AC%EA%B3%BC_%EA%B8%B0%EA%B3%84%ED%95%99%EC%8A%B5_%EA%B8%B0%EC%B4%88_%EC%8B%A4%EC%8A%B5.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NCnlTCUlgQLb"
},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd"
]
},
{
"cell_type": "code",
"source": [
"from sklearn.datasets import fetch_california_housing\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.linear_model import LinearRegression\n",
"from sklearn.metrics import r2_score"
],
"metadata": {
"id": "GoR_tS6_g9sQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"dataset = fetch_california_housing()\n",
"\n",
"print(dataset.DESCR)"
],
"metadata": {
"id": "YkBBt3WVg_8_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(dataset)"
],
"metadata": {
"id": "VUaZDpz1yEEs"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"x_data = pd.DataFrame(dataset.data, columns=dataset.feature_names)\n",
"y_data = pd.DataFrame(dataset.target)\n",
"\n",
"print(x_data)\n",
"print(y_data)"
],
"metadata": {
"id": "fhSBqGZ9iplE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(x_data.shape)"
],
"metadata": {
"id": "e2b9w7ddkns2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(x_data.describe())"
],
"metadata": {
"id": "kg_Pyot3kyAE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.3)\n",
"\n",
"print(x_train)\n",
"print(y_train)"
],
"metadata": {
"id": "KmU9_rezjTv8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(x_test)\n",
"print(y_test)"
],
"metadata": {
"id": "5akAx1xtveHQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"estimator = LinearRegression()"
],
"metadata": {
"id": "IJ66-fbWj_OX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"estimator.fit(x_train, y_train)"
],
"metadata": {
"id": "TZH1Daf6kXps"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"y_predict = estimator.predict(x_train)\n",
"score = r2_score(y_train, y_predict)\n",
"print(score) #1.0"
],
"metadata": {
"id": "A38LYJB_kY4d"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"y_predict = estimator.predict(x_test)\n",
"score = r2_score(y_test, y_predict)\n",
"print(score) #1.0"
],
"metadata": {
"id": "B2-GKCfMkZyL"
},
"execution_count": null,
"outputs": []
}
]
}
21 changes: 20 additions & 1 deletion pytorch_vision_googlenet.md
@@ -27,11 +27,19 @@ All pre-trained models expect input images normalized in the same way,
i.e. mini-batches of 3-channel RGB images of shape `(3 x H x W)`, where `H` and `W` are expected to be at least `224`.
The images have to be loaded in to a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]`
and `std = [0.229, 0.224, 0.225]`.
All pre-trained models expect the input images to be normalized in the same way:
mini-batches of 3-channel RGB images of shape `(3 x H x W)`, where `H` and `W` are expected to be at least `224`.
The images have to be loaded into a range of `[0, 1]` and then normalized using `mean = [0.485, 0.456, 0.406]` and `std = [0.229, 0.224, 0.225]`.
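
For illustration only, here is a minimal sketch of that normalization step with `torchvision` (assumed to be installed; the mean and std are the values quoted above):

```python
from torchvision import transforms

# map a [0, 1] image tensor to the normalized range the pre-trained models expect
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
```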
@@


Here's a sample execution.
Here's a sample execution.


```python
# Download an example image from the pytorch website
# Download an example image from the PyTorch website
import urllib
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
try: urllib.URLopener().retrieve(url, filename)
@@ -40,6 +48,7 @@ except: urllib.request.urlretrieve(url, filename)

```python
# sample execution (requires torchvision)
# sample execution (requires torchvision)
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
@@ -51,17 +60,20 @@ preprocess = transforms.Compose([
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model

# create a mini-batch as expected by the model
# move the input and model to GPU for speed if available
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')

with torch.no_grad():
output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
print(probabilities)
```
@@ -86,6 +98,10 @@ for i in range(top5_prob.size(0)):

GoogLeNet was based on a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The 1-crop error rates on the ImageNet dataset with a pretrained model are listed below.

### Model Description

GoogLeNet was based on a deep convolutional neural network architecture codenamed "Inception", which set the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014).

| Model structure | Top-1 error | Top-5 error |
| --------------- | ----------- | ----------- |
| googlenet | 30.22 | 10.47 |
@@ -95,3 +111,6 @@ GoogLeNet was based on a deep convolutional neural network architecture codename
### References

- [Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)