Commits:
* Updated Readme
* Fixing possible memory leaks and improving code generation for different data types
* Properly pass build target to cmake
* Changed the interface to accept multiple inputs
* Thread-safe pointer map, cosmetic renames
README.md: 47 additions & 5 deletions
# Pytorch Fortran bindings
The goal of this code is to provide Fortran HPC codes with a simple way to use the Pytorch deep learning framework.
We want Fortran developers to take advantage of the rich and optimized Torch ecosystem from within their existing codes.
The code is very much a work in progress right now, and any feedback or bug reports are welcome.
## Features
* Define the model conveniently in Python, save it, and open it in Fortran
* Pass Fortran arrays into the model, run inference, and get the output as a native Fortran array
* Train the model from inside Fortran and save it
* Run the model on the CPU or the GPU with the data also coming from the CPU or GPU
* Use OpenACC to achieve zero-copy data transfer for the GPU models
* Focus on achieving negligible performance overhead
## Building
To assist with the build, we provide a Docker and [HPCCM](https://github.com/NVIDIA/hpc-container-maker) recipe for a container with all the necessary dependencies installed; see [container](container/).
You'll need to mount a folder with the cloned repository into the container, cd into this folder from the running container and execute `./make_nvhpc.sh`, `./make_gcc.sh` or `./make_intel.sh` depending on the compiler you want to use.
We are working on documenting the full API. Please refer to the examples for more details.
The bindings are provided through the following Fortran classes:
### Class `torch_tensor`
This class is a lightweight Pytorch representation of a Fortran array. It does not own the data and only keeps a pointer to it.
Arrays of rank up to 7 with datatypes `real32`, `real64`, `int32`, and `int64` are supported.
Members:
* `from_array(Fortran array or pointer :: array)` : create the tensor representation of a Fortran array.
* `to_array(pointer :: array)` : create a Fortran pointer from the tensor. Use this API to convert the data returned by a Pytorch model into a Fortran array.
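As an illustrative sketch, wrapping an array and recovering a pointer to it might look like the following (the module name `torch_ftn` and the exact declarations are assumptions; consult the repository examples for authoritative usage):

```fortran
program tensor_sketch
   use iso_fortran_env, only: real32
   use torch_ftn   ! assumed name of the binding module
   implicit none

   type(torch_tensor)    :: tensor
   real(real32), target  :: input(8, 16)
   real(real32), pointer :: output(:, :)

   input = 1.0
   call tensor%from_array(input)   ! lightweight view, no data is copied or owned
   call tensor%to_array(output)    ! Fortran pointer to the tensor's data
end program tensor_sketch
```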
### Class `torch_tensor_wrap`
This class wraps several tensors or scalars so they can be passed as inputs into Pytorch models.
Arrays and scalars must be of types `real32`, `real64`, `int32` or `int64`.
Members:
* `add_scalar(scalar)` : add a scalar value to the wrapper.
* `add_tensor(torch_tensor :: tensor)` : add a tensor to the wrapper.
* `add_array(Fortran array or pointer :: array)` : create the tensor representation of a Fortran array and add it to the wrapper.
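A hypothetical sketch of assembling model inputs from the members above (declarations abbreviated; variable names are placeholders):

```fortran
type(torch_tensor_wrap) :: inputs
type(torch_tensor)      :: tensor
real(real32), target    :: field(32, 32)
real(real32)            :: dt

call tensor%from_array(field)
call inputs%add_tensor(tensor)   ! add a pre-built tensor
call inputs%add_array(field)     ! or wrap an array directly
dt = 0.5
call inputs%add_scalar(dt)       ! scalars are supported too
```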
### Class `torch_module`
This class represents a traced Pytorch model, typically the result of a `torch.jit.trace` or `torch.jit.script` call in your Python script. This class is **not thread-safe**. For multi-threaded inference, either create a threaded Pytorch model or use one `torch_module` instance per thread (the latter could be less efficient).
Members:
* `load(character(*) :: filename, integer :: flags)` : load the module from a file. `flags` can be set to `module_use_device` to enable GPU processing.
* `forward(torch_tensor_wrap :: inputs, torch_tensor :: output, integer :: flags)` : run inference with Pytorch. The tensors and scalars from `inputs` are passed to Pytorch, and `output` will contain the result. `flags` is currently unused.
* `create_optimizer_sgd(real :: learning_rate)` : create an SGD optimizer to use in subsequent training.
* `train(torch_tensor_wrap :: inputs, torch_tensor :: target, real :: loss)` : perform a single training step, where `target` is the target result and `loss` is the squared L2 loss returned by the optimizer.
* `save(character(*) :: filename)` : save the trained model.
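Putting the members together, a hedged sketch of inference followed by a single training step (file names and the `0` flags value are placeholders, and the declarations of `inputs` and `target_t` are assumed to be filled in as shown for `torch_tensor_wrap` above):

```fortran
type(torch_module)      :: model
type(torch_tensor_wrap) :: inputs
type(torch_tensor)      :: output, target_t
real(real32), pointer   :: result(:, :)
real(real32)            :: loss

! Inference: load a traced model and run it on the wrapped inputs
call model%load('traced_model.pt', module_use_device)
call model%forward(inputs, output, 0)
call output%to_array(result)

! Training: one SGD step against a target tensor
call model%create_optimizer_sgd(1.0e-3)
call model%train(inputs, target_t, loss)
call model%save('trained_model.pt')
```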
### Class `torch_pymodule`
This class represents a Pytorch Python script and requires the Python interpreter to be invoked. Only one `torch_pymodule` can be open at a time due to a Python interpreter limitation. The overhead of calling this class is higher than with `torch_module`, but unlike `torch_module%train`, one can train the Pytorch model with any optimizer, dropouts, etc. The intended usage of this class is to run online training with a complex pipeline that cannot be expressed as TorchScript.
Members:
* `load(character(*) :: filename)` : load the module from a Python script.
* `forward(torch_tensor_wrap :: inputs, torch_tensor :: output)` : execute the `ftn_pytorch_forward` function from the Python script. The function is expected to accept tensors and scalars and return one tensor. The tensors and scalars from `inputs` are passed as arguments, and `output` will contain the result.
* `train(torch_tensor_wrap :: inputs, torch_tensor :: target, real :: loss)` : execute the `ftn_pytorch_train` function from the Python script. The function is expected to accept tensors and scalars (with the last argument required to be the target tensor) and return a tuple of a bool `is_completed` and a float `loss`. `is_completed` is returned as the result of the `train` function, and `loss` is set according to the Python output. `is_completed` signifies that training has finished due to some stopping criterion.
* `save(character(*) :: filename)` : save the trained model.
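A sketch of driving a Python training script from Fortran, under the member descriptions above (the script name is a placeholder, and `train` is assumed to be a logical-returning function as described):

```fortran
type(torch_pymodule)    :: pymodel
type(torch_tensor_wrap) :: inputs
type(torch_tensor)      :: output, target_t
real(real32)            :: loss
logical                 :: is_completed

call pymodel%load('train_script.py')                   ! invokes the Python interpreter
call pymodel%forward(inputs, output)                   ! calls ftn_pytorch_forward
is_completed = pymodel%train(inputs, target_t, loss)   ! calls ftn_pytorch_train
if (is_completed) call pymodel%save('trained_model.pt')
```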
## Changelog
### v0.3
* Changed interface: the `forward` and `train` routines now accept `torch_tensor_wrap` instead of just `torch_tensor`. This allows a user to pass multiple inputs consisting of tensors of different sizes and scalar values
* Fixed possible small memory leaks due to tensor handles
* Fixed build targets in the scripts; they now properly build Release versions by default