Development
- Add a class declaration for your layer to the appropriate one of `common_layers.hpp`, `data_layers.hpp`, `loss_layers.hpp`, `neuron_layers.hpp`, or `vision_layers.hpp`. Include an inline implementation of `type` and the `*Blobs()` methods to specify blob number requirements. Omit the `*_gpu` declarations if you'll only be implementing CPU code. (A minimal sketch of such a declaration appears after this list.)
- Implement your layer in `layers/your_layer.cpp` (see the implementation sketch after this list):
  - (optional) `LayerSetUp` for one-time initialization: reading parameters, fixed-size allocations, etc.
  - `Reshape` for computing the sizes of top blobs, allocating buffers, and any other work that depends on the shapes of bottom blobs
  - `Forward_cpu` for the function your layer computes
  - (optional) `Backward_cpu` for its gradient — a layer can be forward-only
- (Optional) Implement the GPU versions `Forward_gpu` and `Backward_gpu` in `layers/your_layer.cu`.
- Add your layer to `proto/caffe.proto`, updating the next available ID. Also declare parameters, if needed, in this file.
- Register your layer in your cpp file with the macro provided in `layer_factory.hpp`. Assuming that you have a new layer `MyAwesomeLayer` and the layer type in the proto is `AWESOME`, you can register it with the following command:

  ```
  REGISTER_LAYER_CLASS(AWESOME, MyAwesomeLayer);
  ```

  Optionally, you can also register a `Creator` if your layer has multiple engines. For an example of how to define a creator function and register it, see `GetConvolutionLayer` in `caffe/layer_factory.cpp`.
- Write tests in `test/test_your_layer.cpp`. Use `test/test_gradient_check_util.hpp` to check that your Forward and Backward implementations are in numerical agreement (see the test sketch below).
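To make the first step concrete, here is a minimal sketch of a header declaration. It reuses the `MyAwesomeLayer`/`AWESOME` names from the registration step and assumes, purely for illustration, a single-bottom, single-top elementwise layer declared in `common_layers.hpp`:

```cpp
// In common_layers.hpp (the assumed category for this hypothetical layer).
template <typename Dtype>
class MyAwesomeLayer : public Layer<Dtype> {
 public:
  explicit MyAwesomeLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      vector<Blob<Dtype>*>* top);

  // Inline implementation of type: AWESOME is assumed to have been added
  // to the LayerType enum in proto/caffe.proto with the next available ID.
  virtual inline LayerParameter_LayerType type() const {
    return LayerParameter_LayerType_AWESOME;
  }
  // Blob number requirements: exactly one bottom and one top blob.
  virtual inline int ExactNumBottomBlobs() const { return 1; }
  virtual inline int ExactNumTopBlobs() const { return 1; }

 protected:
  // CPU implementations only; the *_gpu declarations are omitted here.
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      vector<Blob<Dtype>*>* top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, vector<Blob<Dtype>*>* bottom);
};
```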
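A matching `layers/your_layer.cpp` might then look like the following sketch. The identity computation is a placeholder; the point is the blob-handling pattern (`cpu_data`/`mutable_cpu_data`, the `propagate_down` guard, instantiation, and registration):

```cpp
#include "caffe/common_layers.hpp"  // wherever MyAwesomeLayer is declared

namespace caffe {

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
    vector<Blob<Dtype>*>* top) {
  // Elementwise layer: the top blob matches the bottom blob's shape.
  (*top)[0]->Reshape(bottom[0]->num(), bottom[0]->channels(),
      bottom[0]->height(), bottom[0]->width());
}

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    vector<Blob<Dtype>*>* top) {
  const Dtype* bottom_data = bottom[0]->cpu_data();
  Dtype* top_data = (*top)[0]->mutable_cpu_data();
  for (int i = 0; i < bottom[0]->count(); ++i) {
    top_data[i] = bottom_data[i];  // placeholder computation: identity
  }
}

template <typename Dtype>
void MyAwesomeLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, vector<Blob<Dtype>*>* bottom) {
  if (!propagate_down[0]) { return; }
  const Dtype* top_diff = top[0]->cpu_diff();
  Dtype* bottom_diff = (*bottom)[0]->mutable_cpu_diff();
  for (int i = 0; i < (*bottom)[0]->count(); ++i) {
    bottom_diff[i] = top_diff[i];  // gradient of the identity
  }
}

INSTANTIATE_CLASS(MyAwesomeLayer);
REGISTER_LAYER_CLASS(AWESOME, MyAwesomeLayer);

}  // namespace caffe
```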
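And for the final step, a sketch of a gradient-check test. The fixture setup (a hypothetical `MyAwesomeLayerTest` that fills `blob_bottom_vec_` and `blob_top_vec_` with suitably shaped blobs) follows the pattern of the existing layer tests and is assumed here:

```cpp
#include "test/test_gradient_check_util.hpp"

// Inside a typed test fixture patterned on the existing layer tests,
// where blob_bottom_vec_ and blob_top_vec_ hold filled blobs.
TYPED_TEST(MyAwesomeLayerTest, TestGradient) {
  typedef typename TypeParam::Dtype Dtype;
  LayerParameter layer_param;
  MyAwesomeLayer<Dtype> layer(layer_param);
  // Finite-difference stepsize and agreement threshold.
  GradientChecker<Dtype> checker(1e-2, 1e-3);
  // Checks that the analytic gradients from Backward agree with
  // numerical gradients of Forward for every bottom blob element.
  checker.CheckGradientExhaustive(&layer, &(this->blob_bottom_vec_),
      &(this->blob_top_vec_));
}
```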
If you want to write a layer that you will only ever include in a test net, you do not have to code the backward pass. For example, you might want a layer that measures performance metrics at test time that haven't already been implemented.
Doing this is very simple. You can write an inline implementation of `Backward_cpu` (or `Backward_gpu`) together with the definition of your layer in `include/your_layertype_layers.hpp` that looks like:
```cpp
virtual void Backward_cpu(const vector<Blob<Dtype>*>& top, const vector<bool>& propagate_down, vector<Blob<Dtype>*>* bottom) {
  NOT_IMPLEMENTED;
}
```
The `NOT_IMPLEMENTED` macro (defined in `common.hpp`) logs a fatal "Not Implemented Yet" error if the stub is ever called. For examples, look at the accuracy layer (`loss_layers.hpp`) and threshold layer (`neuron_layers.hpp`) definitions.