.. _embarc_mli_example_har_smartphone:

LSTM Based Human Activity Recognition (HAR) Example
####################################################

Overview
********
This example shows how to work with the recurrent primitives (LSTM and basic RNN) implemented in the embARC MLI Library. It is based on the open-source `GitHub project <https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition>`_ by Guillaume Chevalier. The chosen approach, the complexity of the model, and the `dataset <https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones>`_ are relevant to the IoT domain. The model is intended to differentiate human activity among 6 classes based on input from the embedded inertial sensors of a waist-mounted smartphone. The classes are:

* 0: WALKING
* 1: WALKING_UPSTAIRS
* 2: WALKING_DOWNSTAIRS
* 3: SITTING
* 4: STANDING
* 5: LAYING
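
The class indices above correspond directly to the model output. As an illustrative sketch only (the example's own sources define their own symbols), the mapping could be expressed in application code as:

.. code-block:: c

   /* Illustrative only; the example's sources define their own symbols. */
   typedef enum {
       HAR_WALKING = 0,
       HAR_WALKING_UPSTAIRS = 1,
       HAR_WALKING_DOWNSTAIRS = 2,
       HAR_SITTING = 3,
       HAR_STANDING = 4,
       HAR_LAYING = 5
   } har_class_t;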

Quick Start
--------------

The example supports building with the `Zephyr Software Development Kit (SDK) <https://docs.zephyrproject.org/latest/getting_started/installation_linux.html#zephyr-sdk>`_ and running with the MetaWare Debugger on the `nSim simulator <https://www.synopsys.com/dw/ipdir.php?ds=sim_nSIM>`_.

Add the embarc_mli module to Zephyr
------------------------------------

1. Open a command line and change the working directory to './zephyrproject/zephyr'.

2. Download embarc_mli version 2.0:

   .. code-block:: console

      west update

Build with the Zephyr SDK toolchain
------------------------------------

Build requirements:

- Zephyr SDK toolchain version 0.13.2 or higher
- gmake

1. Open a command line and change the working directory to './zephyrproject/zephyr/samples/modules/embarc_mli/example_har_smartphone'.

2. Build the example:

   .. code-block:: console

      west build -b nsim_em samples/modules/embarc_mli/example_har_smartphone

Run example
--------------

1. Run the example:

   .. code-block:: console

      west flash

   The reported result quality should be "S/N=3579.6 (71.1 db)".
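
The two numbers in that quality line are consistent with the usual decibel conversion (20 * log10 of the ratio). A minimal stand-alone check, assuming that relationship:

.. code-block:: c

   /* Assumes the "(db)" value is 20*log10 of the reported S/N ratio. */
   #include <math.h>
   #include <stdio.h>

   int main(void)
   {
       double ratio = 3579.6;            /* S/N ratio printed by the example */
       double db = 20.0 * log10(ratio);  /* evaluates to ~71.1 */

       printf("S/N=%.1f (%.1f db)\n", ratio, db);
       return 0;
   }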

More options
--------------

You can change the mode in ml_api_har_smartphone_main.c to 1, 2, or 3 (see the sketch after this list):

* mode=1:

  Built-in input processing. Uses only a hard-coded vector for a single model inference.

* mode=2:

  Currently unavailable due to a hostlink error. External test-set processing: reads vectors from an input IDX file, passes them to the model, and writes its output to another IDX file (if the input is *tests.idx*, the output will be *tests.idx_out*).

* mode=3:

  Accuracy measurement for a test set: reads vectors from an input IDX file, passes them to the model, and accumulates the number of successful classifications according to a labels IDX file. If hostlink is unavailable, add the _C_ARRAY_ definition.
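
As a purely hypothetical illustration of switching modes (the actual symbol name and form in ml_api_har_smartphone_main.c may differ), selecting accuracy measurement could look like:

.. code-block:: c

   /* Hypothetical sketch only; check ml_api_har_smartphone_main.c for the
    * real symbol. 1: built-in input, 2: external test set, 3: accuracy. */
   static const int mode = 3;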

You can add different definitions to zephyr_compile_definitions() in 'zephyr/samples/modules/embarc_mli/example_har_smartphone/CMakeLists.txt' to build different model configurations (see the sketch after this list):

* 16-bit depth of coefficients and data (default):

  MODEL_BIT_DEPTH=16

* 8-bit depth of coefficients and data:

  MODEL_BIT_DEPTH=8

* 8x16: 8-bit depth of coefficients and 16-bit depth of data:

  MODEL_BIT_DEPTH=816

* If hostlink is not available, make the application read vectors from a built-in C array, pass them to the model, and accumulate the number of successful classifications according to the labels array:

  _C_ARRAY_
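
For example, selecting the 8-bit model together with the built-in C-array input source might look like the following in that CMakeLists.txt (a sketch; keep the file's existing content and adjust the definitions to your needs):

.. code-block:: cmake

   # Sketch: 8-bit coefficients/data and built-in C-array test vectors
   # instead of hostlink IDX files.
   zephyr_compile_definitions(MODEL_BIT_DEPTH=8 _C_ARRAY_)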

Example Structure
--------------------
The structure of the example application can be logically divided into three parts:

* **Application.** Implements the input/output data flow and its processing by the other modules. The application includes:

  * ml_api_har_smartphone_main.c
  * ../auxiliary/examples_aux.h(.c)

* **Inference Module.** Uses the embARC MLI Library to process the input according to a pre-defined graph. All model-related constants are pre-defined, and the model coefficients are declared in a separate compilation unit (a sketch of this pattern follows the list):

  * har_smartphone_model.h
  * har_smartphone_model.c
  * har_smartphone_constants.h
  * har_smartphone_coefficients.c

* **Auxiliary code.** Various helper functions for measurements, IDX file I/O, etc.:

  * ../auxiliary/tensor_transform.h(.c)
  * ../auxiliary/tests_aux.h(.c)
  * ../auxiliary/idx_file.h(.c)
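
A hypothetical sketch of the "coefficients in a separate compilation unit" pattern mentioned above; the real declarations in har_smartphone_constants.h and har_smartphone_coefficients.c differ:

.. code-block:: c

   /* har_smartphone_constants.h (sketch): the model code only sees extern
    * declarations and pre-defined sizes. Names and values are illustrative. */
   #define HAR_OUT_CLASSES 6
   extern const short har_lstm_weights[];

   /* har_smartphone_coefficients.c (sketch): the pre-trained, quantized
    * coefficients live in their own compilation unit. */
   const short har_lstm_weights[] = { 0 /* ... pre-trained values ... */ };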

References
----------------------------
The GitHub project that served as the starting point for this example:

    Guillaume Chevalier, *LSTMs for Human Activity Recognition*, 2016, https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

Human Activity Recognition Using Smartphones `dataset <https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones>`_:

    Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. *"A Public Domain Dataset for Human Activity Recognition Using Smartphones."* 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium, 24-26 April 2013.

The IDX file format was originally used for the `MNIST database <http://yann.lecun.com/exdb/mnist/>`_. There is a Python `package <https://pypi.org/project/idx2numpy/>`_ for working with it through transformation to/from numpy arrays. *auxiliary/idx_file.c(.h)* is used by the test application for working with IDX files:

    Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. *"Gradient-based learning applied to document recognition."* Proceedings of the IEEE, 86(11):2278-2324, November 1998. [on-line version]