The Edge AI Tuning Kit is a comprehensive solution for creating, tailoring, and deploying AI models on edge platforms. It incorporates AI training and inference frameworks as well as data management tools, giving businesses a convenient, economical, and rapid way to integrate AI on the Intel platform.
- [2025/06] Added support for Intel® Arc™ B580 Graphics and introduced a new user interface for an improved experience.
- [2025/06] Initial release of Edge AI Tuning Kit v2025.1.0.
| Hardware requirements | Minimum | Recommended |
|---|---|---|
| CPU | 13th Gen Intel® Core™ CPU and above | 4th Gen Intel® Xeon® Scalable Processor and above |
| GPU | Single Intel® A-Series or B-Series Graphics | Multiple Intel® A-Series or B-Series Graphics |
| RAM (GB) | 64 and above | 128 and above |
| Disk (GB) | 500 (around 4 projects with 1 training task each) | 1000 (around 8 projects with 1 training task each) |
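A rough way to compare a machine against these minimums (output formats vary by system, so treat this as a sketch):

```bash
# CPU model
lscpu | grep "Model name"

# Total RAM in GB
free -g | awk '/Mem:/ {print $2 " GB RAM"}'

# Free disk space on the root filesystem
df -h /
```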
- Ubuntu 22.04 LTS / Ubuntu 24.04 LTS
- Docker with a non-root user
- Intel GPU drivers
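A quick way to sanity-check these prerequisites on the target machine (a rough check only; the exact driver packages and device names depend on your hardware):

```bash
# OS release (expect Ubuntu 22.04 LTS or 24.04 LTS)
lsb_release -ds

# Docker is installed and usable without sudo
docker info > /dev/null && echo "Docker OK"

# Intel GPU render nodes exposed by the driver stack
ls -l /dev/dri/
```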
Comprehensive documentation on supported models and additional technical specifications is available in the project documentation.
1. Create a Hugging Face account and generate an access token. For more information, refer to the Hugging Face documentation on access tokens.
2. Log in to your Hugging Face account, browse to mistralai/Mistral-7B-Instruct-v0.3, and click the Agree and access repository button.
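Once you have a token, it typically needs to be available to the tooling that downloads the gated model. A minimal sketch using the standard Hugging Face CLI (how the Tuning Kit itself consumes the token may differ; check the project documentation):

```bash
# Log in with the access token generated above
pip install -U huggingface_hub
huggingface-cli login --token <your_hf_token>

# Alternatively, export it as an environment variable recognized by Hugging Face tooling
export HF_TOKEN=<your_hf_token>
```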
Install Docker by following the official installation guide in the link.
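If you prefer a quick start over the full guide, Docker's convenience script is one option (a sketch only; the linked installation guide remains the authoritative reference):

```bash
# Download and run Docker's convenience installation script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```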
Run the following command to add your current user to the Docker group. After running the command, log out and log back in for the changes to take effect.
sudo usermod -aG docker $USER
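After logging back in, you can confirm that Docker works without sudo (a quick sanity check, not part of the official steps):

```bash
# Should list a "docker" group and run a test container without sudo
groups
docker run --rm hello-world
```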
Run the setup using the command below.
./setup.sh -b
Once the setup completes, start the application:
./setup.sh -r
Browse to http://localhost after the application has started successfully.
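If the page does not load, checking that the containers are up is a reasonable first step:

```bash
# List running containers; the Tuning Kit services should be among them
docker ps
```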
To change the network interface the application listens on, edit the HOST value in the .env file located in the application directory. For example, to listen on all available interfaces, set:
HOST=0.0.0.0
By default, the application listens only on localhost:
HOST=127.0.0.1
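A changed HOST value is normally only picked up on a restart (an assumption based on typical .env handling); using the setup.sh flags shown in this guide, that would look like:

```bash
# Restart the application so the new HOST value takes effect
./setup.sh -s
./setup.sh -r
```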
Run the command below to stop the application.
./setup.sh -s
If you want to remove the database and application cache files, run the following commands:
# Remove the data cache, database, and task cache volumes
docker volume rm edge-ai-tuning-kit-data-cache
docker volume rm edge-ai-tuning-kit-database
docker volume rm edge-ai-tuning-kit-task-cache
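Removing a Docker volume that is still in use will fail, so stop the application first; listing the volumes beforehand is also a quick way to confirm their names (a sketch using commands already shown in this guide):

```bash
# Stop the application, then confirm the volume names before removing them
./setup.sh -s
docker volume ls | grep edge-ai-tuning-kit
```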
If you see this warning from the Redis container when running docker compose up, you will need to enable memory overcommit on the host.
# WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
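The fix is spelled out in the warning itself: enable memory overcommit on the host.

```bash
# Apply immediately
sudo sysctl vm.overcommit_memory=1

# Persist the setting across reboots
echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf
```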
- Only instruction-based models or tokenizers with a chat template are supported.
The software provided is designed to run exclusively in a trusted environment on a single machine and is not intended for deployment on production servers. These scripts have been validated and tested for use in controlled, secure settings. Running the software in any other environment, especially on production systems, is not supported and may result in unexpected behavior, security risks, or performance issues.