This repository contains a minimal example for running an Edge Impulse machine learning model on an Android device using Android NDK and TensorFlow Lite.
Fully tested on vision models (FOMO-AD and Object Detection) and on sensor data (keyword spotting, and accelerometer data on Wear OS).
See the Android Documentation.
- Ensure you have followed the Android guide and have a trained model.
- Export your model as a C++ library from Edge Impulse Studio.
- Follow one of the beginner tutorial guides here.
- Export the C++ Visual Anomaly (GMM) library, or download the prebuilt Cracks Demo C++ export from the workshop here.
- Follow the rest of this README.
- Knowledgeable Android developers can then change the Kotlin application logic to build their own app around the runInference function, e.g. count instances of detections, add thresholding to report only detections above 70% confidence, or change the UI.
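As a sketch of the thresholding and counting idea above (the `Detection` data class and its fields are hypothetical; the actual result types returned by runInference depend on your exported model):

```kotlin
// Hypothetical result type; the real structure depends on your model export.
data class Detection(val label: String, val confidence: Float)

// Keep only detections above a confidence threshold and count them per label.
fun filterAndCount(
    detections: List<Detection>,
    threshold: Float = 0.7f
): Map<String, Int> =
    detections
        .filter { it.confidence > threshold }
        .groupingBy { it.label }
        .eachCount()

fun main() {
    val raw = listOf(
        Detection("crack", 0.92f),
        Detection("crack", 0.55f),    // below threshold, dropped
        Detection("no crack", 0.81f)
    )
    println(filterAndCount(raw)) // {crack=1, no crack=1}
}
```

The same pattern extends to any post-processing you want to layer on top of the raw model output, e.g. debouncing repeated detections before updating the UI.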
- Install Android Studio.
- Install Android NDK and CMake via the Android Studio SDK Manager. The example is tested to work with Android Studio Ladybug Feature Drop | 2024.2.2, Android API 35, Android SDK Build-Tools 35.0.1, NDK 27.0.12077973, CMake 3.22.1.
We created an example repository that contains an Android Studio project with C++ support. Clone or download this repository:
git clone https://github.com/edgeimpulse/example-android-inferencing.git
cd example-android-inferencing/example_static_buffer/app/src/main/cpp/tflite
sh download_tflite_libs.bat # use download_tflite_libs.sh on macOS and Linux

Choose the project to import:
- Open Android Studio.
- Select Open an existing Android Studio project.
- Navigate to the cloned repository and select it.
- Go to Edge Impulse Studio.
- Export your trained model as a C++ library.
- Download the exported model.
- Extract the downloaded C++ library.
- Copy the extracted files into the example-android-inferencing/example_static_buffer/app/src/main/cpp directory; do not copy the CMakeLists.txt file.
- Obtain a test feature set from the Model testing tab in Edge Impulse Studio.
- Paste the test feature set into the raw_features array in native_lib.cpp, located in the cpp directory.
std::vector<float> raw_features = {
// Copy raw features here (e.g. from the 'Model testing' page)
};

- In Android Studio, click on Build > Make Project.
- Once the build is successful, run the project on an Android device or emulator.
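The copy step above can be sketched in the shell. This is a minimal sketch, not the repo's own tooling: the dummy directory tree stands in for a real unzipped C++ export, and it assumes rsync is available (its --exclude flag keeps the export's CMakeLists.txt from overwriting the project's own):

```shell
# Dummy export tree standing in for the real unzipped C++ export.
mkdir -p ei-export/edge-impulse-sdk app/src/main/cpp
touch ei-export/CMakeLists.txt ei-export/model-parameters.h

# Copy everything except CMakeLists.txt into the app's cpp directory.
rsync -a --exclude 'CMakeLists.txt' ei-export/ app/src/main/cpp/

ls app/src/main/cpp
```

With a real export you would replace the dummy tree with the unzipped download and point the destination at example-android-inferencing/example_static_buffer/app/src/main/cpp/.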
If you want to integrate additional sensors, such as a Gyroscope or Heart Rate Sensor, follow these steps:
- Enable the Sensor in the Code: In MainActivity.kt, locate the sensor initialization section and uncomment the corresponding lines:
// Uncomment to add Gyroscope support
private var gyroscope: Sensor? = null
// Uncomment to add Heart Rate sensor support
private var heartRateSensor: Sensor? = null

- Initialize the Sensor in onCreate: Inside onCreate(), uncomment and initialize the sensor:
gyroscope = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)
// heartRateSensor = sensorManager.getDefaultSensor(Sensor.TYPE_HEART_RATE)

- Register the Sensor in onResume: To start collecting sensor data when the app is active, uncomment the registration logic:
gyroscope?.also {
sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
}
// heartRateSensor?.also {
// sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
// }

- Handle Sensor Data in onSensorChanged: Modify the onSensorChanged() function to collect the new sensor data:
// Gyroscope data
Sensor.TYPE_GYROSCOPE -> {
ringBuffer[ringBufferIndex++] = event.values[0] // X rotation
ringBuffer[ringBufferIndex++] = event.values[1] // Y rotation
ringBuffer[ringBufferIndex++] = event.values[2] // Z rotation
}
// Heart Rate data
// Sensor.TYPE_HEART_RATE -> {
// ringBuffer[ringBufferIndex++] = event.values[0] // Heart rate BPM
// }

- Unregister the Sensor in onPause: To save battery and improve performance, ensure sensors stop when the app is paused:
sensorManager.unregisterListener(this)

- Run the App and Verify: Build and deploy the app on a Wear OS device, check the logs for the new sensor data, and ensure inference runs correctly with the additional inputs.
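The ring-buffer pattern used in onSensorChanged above can be sketched in plain Kotlin, independent of Android. The buffer sizes and the window-full callback here are illustrative assumptions; in the real app the window size comes from the exported model's parameters and a full window is handed to runInference:

```kotlin
// Illustrative sizes; the real values come from the exported model's parameters.
const val AXES = 3
const val WINDOW_SIZE = 4 // samples per inference window (toy value)

val ringBuffer = FloatArray(AXES * WINDOW_SIZE)
var ringBufferIndex = 0

// Called once per sensor event with one [x, y, z] reading.
fun onSample(values: FloatArray, onWindowFull: (FloatArray) -> Unit) {
    ringBuffer[ringBufferIndex++] = values[0]
    ringBuffer[ringBufferIndex++] = values[1]
    ringBuffer[ringBufferIndex++] = values[2]
    if (ringBufferIndex >= ringBuffer.size) {
        onWindowFull(ringBuffer.copyOf()) // hand a full window to inference
        ringBufferIndex = 0               // start refilling the buffer
    }
}

fun main() {
    var windows = 0
    repeat(WINDOW_SIZE) {
        onSample(floatArrayOf(0.1f, 0.2f, 0.3f)) { windows++ }
    }
    println(windows) // prints 1: exactly one full window was collected
}
```

Each extra sensor you enable writes into the same buffer, so make sure the number of values written per event matches the axis count your impulse was trained on.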
Testing on devices without ready access to a camera, such as devices in Device Cloud or VR headsets that don't allow you to use the passthrough camera:
override fun onResume() {
super.onResume()
    // Read the test image from assets
    val bmp = assets.open("test.jpg").use { BitmapFactory.decodeStream(it) }
    // Resize to the model's input size (the helper does this for you)
    val resized = EIImageHelper.resizeBitmap(bmp)
    // Run inference (synchronous, one-shot)
    val result = EIClassifierImage.run(resized)
    // Show scores in the existing TextView
    runOnUiThread { resultText.text = result.format() }
}