
Commit f35b146

update docs. change example interface.
1 parent 69f5101 commit f35b146

40 files changed: +568, -264 lines

docs/assets/TensorBoard-nn.png (92.3 KB)
docs/assets/mnist.png (29.7 KB)
docs/assets/nn-result.png (34 KB)
docs/assets/nn.png (77.6 KB)

docs/source/Constant.md

Lines changed: 1 addition & 12 deletions
@@ -7,21 +7,12 @@ In TensorFlow, a constant is a special Tensor that cannot be modified while the
* shape: dimensions;
* name: constant's name;

-In TensorFlow, a constant is a special Tensor that cannot be modified while the computation graph is running. For example, in the linear model $\tilde{y_i}=\boldsymbol{w}x_i+b$, the constant $b$ can be represented by a Constant. Since a constant is a kind of Tensor, it also has all of a Tensor's data properties, which include:
-
-* value: a constant value, or a list of constant values, of a data type defined in TensorFlow;
-* dtype: data type;
-* shape: the constant's shape;
-* name: the constant's name;
-


##### How to create a Constant

TensorFlow provides a handy function to create a Constant. In TF.NET you can use the same function name, `tf.constant`, to create it. TF.NET keeps its API names as close as possible to the Python binding. Although this will feel uncomfortable to developers used to C# naming conventions, after careful consideration I decided to give up the C# naming convention.

-TensorFlow provides a very handy function to create a Constant; in TF.NET it can be created with the same function name, `tf.constant`. TF.NET names its APIs as closely as possible to the Python binding. Although this makes developers who are used to C# naming conventions uncomfortable, after careful consideration I still decided to give up the C# naming convention.
-
Initialize a scalar constant:

```csharp
@@ -45,9 +36,7 @@ var tensor = tf.constant(nd);

##### Dive in Constant

-Now let's explore how constant works.
-
-Now let me explore how `tf.constant` works.
+Now let's explore how `constant` works.

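For context on the `tf.constant` calls shown in this file, here is a minimal sketch of creating constants with TF.NET. It is not part of the commit; the static binding import and the example values are assumptions, while `tf.constant(nd)` and `np.array(...)` appear in the diffs above.

```csharp
using NumSharp;                    // np.array, as used in the LinearRegression example
using Tensorflow;
using static Tensorflow.Binding;   // assumption: static class that exposes the tf entry point

// Scalar constant: value, dtype and shape are inferred from the literal.
var scalar = tf.constant(3.0f);

// Constant built from a NumSharp array; the tensor takes the array's shape.
var nd = np.array(1.0f, 2.0f, 3.0f, 4.0f);
var tensor = tf.constant(nd);
```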
docs/source/Foreword.md

Lines changed: 0 additions & 6 deletions
@@ -2,16 +2,10 @@

One of the most nerve-wracking periods after releasing the first version of an open source project is when the [gitter](https://gitter.im/sci-sharp/community) community is created. You are all alone, eagerly hoping and wishing for the first user to come along. I still vividly remember those days.

-The most nerve-wracking moment was when I released the first version of my own open source project and opened a chat community on gitter: you are the only person in it, eagerly waiting for the first user to enter the chat room. I still clearly remember that period.
-


TensorFlow.NET is my third open source project; BotSharp and NumSharp are the first two. The response has been pretty good, and they earned quite a few stars on GitHub. Although the first two projects were very difficult, I have to admit that TensorFlow.NET is much more difficult than the previous two, and it is an area I have never worked in before, mainly involving GPU parallel computing, distributed computing and neural network models. When I started writing this project, I was also sorting out my thinking about the coding process. TensorFlow is a huge and complicated project, and it is easy to go beyond the scope of one person's ability, so I want to record my thoughts at the time as much as possible; the process of recording and organizing them clears the way of thinking.

-TensorFlow.NET is the third open source project I have written; BotSharp and NumSharp are the first two. The response was quite good, and they earned quite a few stars on GitHub. Although the first two projects were very difficult, I have to admit that TensorFlow.NET is much harder than both and is an area I have never worked in before, mainly involving GPU parallel computing, distributed computing and neural network models. As I started writing this project I was also organizing my thoughts on the coding process. TensorFlow is a huge and complicated project, and it is easy to go beyond the scope of one person's ability, so I want to record my thinking at the time as much as possible and use the process of recording and organizing to clear my thoughts.
-


All the examples in this book can be found in the github repository of TensorFlow.NET. When the source code and the code in the book are inconsistent, please refer to the source code. The sample code is typically located in the Example or UnitTest project.
-
-All the examples in this book can be found in the TensorFlow.NET github repository. When the source code and the code in the book are inconsistent, the source code prevails. The sample code is usually located in the Example or UnitTest project.

docs/source/HelloWorld.md

Lines changed: 3 additions & 7 deletions
@@ -2,19 +2,17 @@

I would describe TensorFlow as an open source machine learning framework developed by Google which can be used to build neural networks and perform a variety of machine learning tasks. It works on a data-flow graph where the nodes are mathematical operations and the edges are data in the form of tensors, hence the name Tensor-Flow.

-As I understand it, TensorFlow is an open source machine learning framework developed by Google that can be used to build neural network models as well as other traditional machine learning models. It adopts a computational graph model in which the nodes and edges represent operations and data inputs or outputs respectively; data flows through the graph in a single direction, which is how the process got the vivid name TensorFlow.

-Let's run a classic HelloWorld program first and see if TensorFlow is running on .NET. I can't think of a simpler way than a HelloWorld.

-Let's first run a classic HelloWorld program to see how TensorFlow runs on .NET; I can't think of a simpler way than a HelloWorld.
+Let's run a classic HelloWorld program first and see if TensorFlow is running on .NET. I can't think of a simpler way than a HelloWorld.



### Install the TensorFlow.NET SDK

TensorFlow.NET targets .NET Standard 2.0, so your new project's Target Framework can be .NET Framework or .NET Core. All the examples in this book use .NET Core 2.2 and Microsoft Visual Studio Community 2017. To start building a TensorFlow program you just need to download and install the .NET SDK (Software Development Kit). Download the latest .NET Core SDK from the official website: https://dotnet.microsoft.com/download.

-TensorFlow.NET targets .NET Standard 2.0, so your new project can be based on .NET Framework or .NET Core. All the examples in this article use .NET Core 2.2, with Microsoft Visual Studio Community 2017 as the IDE. To compile and run a TensorFlow project, you need to download the latest .NET Core SDK from: https://dotnet.microsoft.com/download.
+

1. Create a new project

@@ -34,7 +32,7 @@ PM> Install-Package TensorFlow.NET

After installing the TensorFlow.NET package, you can use `using Tensorflow` to import the TensorFlow library.

-After installing the TensorFlow.NET package, you can use `using Tensorflow` to import the TensorFlow library.
+

```csharp
using System;
@@ -76,5 +74,3 @@ Press any key to continue . . .

This sample code can be found [here](https://github.com/SciSharp/TensorFlow.NET/blob/master/test/TensorFlowNET.Examples/HelloWorld.cs).

-This sample code can be found [here](https://github.com/SciSharp/TensorFlow.NET/blob/master/test/TensorFlowNET.Examples/HelloWorld.cs).
-

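To make the walkthrough above concrete, here is a hedged sketch of the kind of HelloWorld program the page builds. It is not taken from the commit; it assumes the graph-mode `tf.Session` API of the TF.NET package from this period, and the static binding import is an assumption (how `tf` is brought into scope has varied between package versions).

```csharp
using System;
using Tensorflow;
using static Tensorflow.Binding;   // assumption: static class that exposes the tf entry point

// Define a string constant in the graph.
var hello = tf.constant("Hello, TensorFlow.NET!");

// Evaluate it in a session and print the result.
using (var sess = tf.Session())
{
    var result = sess.run(hello);
    Console.WriteLine(result.ToString());
}
```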
docs/source/LinearRegression.md

Lines changed: 10 additions & 15 deletions
@@ -1,5 +1,7 @@
# Chapter. Linear Regression

+
+

### What is linear regression?

Linear regression is a linear approach to modelling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables).
@@ -8,9 +10,7 @@ Consider the case of a single variable of interest y and a single predictor vari

We have some data $D=\{x_i, y_i\}$ and we assume a simple linear model of this dataset with Gaussian noise:

-Linear regression is a linear modelling approach that describes the relationship between a dependent variable and one or more independent variables. Consider the case of a single dependent variable y and a single independent variable; the independent variable is also called a covariate, input or feature, and the dependent variable is usually called the response variable, output or outcome.
-Suppose we have data $D=\{x_i, y_i\}$ and assume that this dataset follows a linear model with Gaussian noise:
+

```csharp
// Prepare training Data
var train_X = np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f, 9.779f, 6.182f, 7.59f, 2.167f, 7.042f, 10.791f, 5.313f, 7.997f, 5.654f, 9.27f, 3.1f);
@@ -21,24 +21,21 @@ var n_samples = train_X.shape[0];

Based on the given data points, we try to plot a line that models the points the best. The red line can be modelled based on the linear equation $y = wx + b$. The goal of the linear regression algorithm is to find the best values for $w$ and $b$. Before moving on to the algorithm, let's have a look at two important concepts you must know to better understand linear regression.

-Based on the data points described above, we draw a line between them that best models their distribution. The red line can be described by the linear equation $y = wx + b$. The goal of the linear regression algorithm is to find the best parameters $w$ and $b$ for this line. Before introducing the algorithm, let's first look at two important concepts that will help you understand it.
+

### Cost Function

The cost function helps us to figure out the best possible values for $w$ and $b$ which would provide the best-fit line for the data points. Since we want the best values for $w$ and $b$, we convert this search problem into a minimization problem where we would like to minimize the error between the predicted value and the actual value.

-The cost function helps us estimate the optimal parameters $w$ and $b$, the values that best fit the distribution of the data points. Since we want the optimal $w$ and $b$, we turn this into the problem of minimizing the difference between the predicted values and the actual values.
+

![minimize-square-cost](_static/minimize-square-cost.png)

We choose the above function to minimize. The difference between the predicted values and the ground truth measures the error. We square the error, sum over all data points, and divide that value by the total number of data points. This gives the average squared error over all the data points; therefore, this cost function is also known as the Mean Squared Error (MSE) function. Now, using this MSE function, we are going to change the values of $w$ and $b$ such that the MSE value settles at the minimum.

-We choose to minimize the function above. The difference between the predicted value and the true value measures the prediction error. We take the squared errors of all points, sum them, and divide by the number of points to represent the average error over all points. Therefore the cost function is also called the Mean Squared Error (MSE). Now we can adjust the parameters $w$ and $b$ so that the MSE reaches its minimum.
+

```csharp
// tf Graph Input
@@ -56,13 +53,13 @@ var pred = tf.add(tf.multiply(X, W), b);

var cost = tf.reduce_sum(tf.pow(pred - Y, 2.0f)) / (2.0f * n_samples);
```

+
+

### Gradient Descent
-### Gradient Descent

The other important concept we need to understand is gradient descent. Gradient descent is a method of updating $w$ and $b$ to minimize the cost function. The idea is that we start with some random values for $w$ and $b$ and then change these values iteratively to reduce the cost. Gradient descent tells us how to update the values, that is, which direction to go next. Gradient descent is also known as **steepest descent**.

-Another important concept to understand is gradient descent. Gradient descent minimizes the cost function by updating the parameters $w$ and $b$. The idea is to start computing the cost with arbitrary values of $w$ and $b$ and then change them iteratively to reduce the loss. Gradient descent tells us how to update the parameters, or in which direction to take the next step. Gradient descent is also called "steepest descent".
+


![gradient-descent](_static/gradient-descent.png)
@@ -72,9 +69,7 @@ of steps to reach the bottom. If you decide to take one step at a time you would
reach sooner, but there is a chance that you could overshoot the bottom of the pit and not land exactly at the bottom. In the gradient descent algorithm, the size of the steps you take is the learning rate. This decides how fast the algorithm converges to the minimum.

-Here is an analogy: imagine you are standing at the top of a U-shaped pit and your goal is to reach its lowest point, with the catch that you do not know how many steps it takes to get there. If you walk down one small step at a time it may take a very long time; if you take big strides each time you may reach the bottom quickly, but you may also overshoot the lowest point. In the gradient descent algorithm, the size of the step you take is the learning rate, which determines how fast the loss function reaches its minimum.
+


```csharp

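The Cost Function and Gradient Descent passages above correspond to a short piece of graph code. The sketch below is assembled from the snippets visible in this diff (`pred`, `cost`, `train_X`); the placeholders, initial values, learning rate and the `GradientDescentOptimizer` call are assumptions rather than part of the commit. The `cost` line implements $\frac{1}{2n}\sum_{i}(w x_i + b - y_i)^2$, that is, half the mean squared error.

```csharp
using Tensorflow;
using static Tensorflow.Binding;   // assumption: static class that exposes the tf entry point

// tf Graph Input: placeholders for one feature value and one label value.
var X = tf.placeholder(tf.float32);
var Y = tf.placeholder(tf.float32);

// Trainable parameters w and b, starting from assumed initial values.
var W = tf.Variable(0.1f, name: "weight");
var b = tf.Variable(0.0f, name: "bias");

// Model and cost as in the diff: pred = w*x + b,
// cost = sum((pred - y)^2) / (2 * n_samples).
var n_samples = 17;                // number of points in train_X above
var pred = tf.add(tf.multiply(X, W), b);
var cost = tf.reduce_sum(tf.pow(pred - Y, 2.0f)) / (2.0f * n_samples);

// Assumed gradient-descent step: each run of this op moves w and b
// in the direction that reduces the cost, scaled by the learning rate.
var learning_rate = 0.01f;
var optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost);
```

Repeatedly running `optimizer` in a `tf.Session`, feeding `train_X` and the matching labels through `X` and `Y`, is what drives the cost toward the minimum described in the Gradient Descent section.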
docs/source/LogisticRegression.md

Lines changed: 2 additions & 2 deletions
@@ -4,11 +4,11 @@

Logistic regression is a statistical analysis method used to predict a data value based on prior observations of a data set. A logistic regression model predicts a dependent data variable by analyzing its relationship to one or more existing independent variables.

-Logistic regression is a statistical analysis method used to predict unknown data from existing observations. A logistic regression model predicts a dependent data variable by analyzing its relationship to one or more existing independent variables.
+

The dependent variable of logistic regression can be binary or multi-class, but the binary case is more common and easier to explain, so binary logistic regression is the most common use in practice. The example used by TensorFlow.NET is hand-written digit recognition, which is a multi-class problem.

-The dependent variable of logistic regression can be binary or multi-class, but the binary case is more common and easier to explain. The example TensorFlow.NET uses is hand-written digit recognition, which is a multi-class problem.
+

Softmax regression allows us to handle ![1557035393445](_static\logistic-regression\1557035393445.png) where K is the number of classes.

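The inline image above presumably shows the softmax formula; for a K-class problem it maps logits $z$ to $\mathrm{softmax}(z)_k = e^{z_k} / \sum_{j=1}^{K} e^{z_j}$. Below is a hedged sketch of evaluating a softmax in TF.NET; it is not part of the commit, and the static binding import and the example logits are assumptions.

```csharp
using System;
using NumSharp;
using Tensorflow;
using static Tensorflow.Binding;   // assumption: static class that exposes the tf entry point

// Hypothetical logits for a single example over K = 3 classes.
var logits = tf.constant(np.array(2.0f, 1.0f, 0.1f));

// softmax(z)_k = exp(z_k) / sum_j exp(z_j): turns the K logits into class probabilities.
var probs = tf.nn.softmax(logits);

using (var sess = tf.Session())
{
    var result = sess.run(probs);  // roughly [0.66, 0.24, 0.10]
    Console.WriteLine(result.ToString());
}
```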