Projects/Projects/AI-Defined-Vehicles-ADAS-IVI-on-Arm.md (6 changes: 3 additions & 3 deletions)
@@ -52,9 +52,9 @@ The IVI application should combine:
- Multimedia services such as navigation, vehicle telemetry, video/audio streaming, internet radio, or gaming
- System-level integration, including boot sequencing, background services, and inter-process communication
- ADAS-inspired AI/ML-driven functionality that actively shapes system behavior, such as:
- Driver monitoring (e.g. attention, drowsiness, or distraction detection)
- In-cabin perception (e.g. sound classification, occupant presence)
- Context-aware automation (e.g. adaptive UI, media muting, or alerts based on inferred events)
1. Driver monitoring (e.g. attention, drowsiness, or distraction detection)
2. In-cabin perception (e.g. sound classification, occupant presence)
3. Context-aware automation (e.g. adaptive UI, media muting, or alerts based on inferred events)

For example, an AI service might analyze camera or audio input and dynamically modify the UI, pause media, or trigger a **simulated** safety alert—illustrating how AI models become first-class components in vehicle software architecture.
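
To make that pattern concrete, here is a minimal Python sketch of the flow (not part of the PR itself): a hypothetical driver-monitoring service publishes inference events and the IVI layer reacts to them. An in-process queue stands in for real inter-process communication, and all names (`InferenceEvent`, `ai_service`, the confidence threshold) are illustrative assumptions.

```python
import queue
import threading
import time
from dataclasses import dataclass


@dataclass
class InferenceEvent:
    """Hypothetical message from an AI service; field names are illustrative."""
    label: str        # e.g. "driver_drowsy" or "siren_detected"
    confidence: float


def ai_service(events: queue.Queue) -> None:
    # Stand-in for a camera/audio inference loop (e.g. an ExecuTorch or
    # ONNX Runtime model); a detection is faked here to keep the sketch runnable.
    time.sleep(0.5)
    events.put(InferenceEvent(label="driver_drowsy", confidence=0.91))


def ivi_main(events: queue.Queue) -> None:
    # IVI side: consume inference events and adapt system behavior.
    event = events.get(timeout=5)
    if event.label == "driver_drowsy" and event.confidence > 0.8:
        print("IVI: pausing media playback")
        print("IVI: raising SIMULATED 'take a break' alert")


if __name__ == "__main__":
    q: queue.Queue = queue.Queue()
    threading.Thread(target=ai_service, args=(q,), daemon=True).start()
    ivi_main(q)
```

In a real system the queue would be replaced by the platform's IPC mechanism (e.g. D-Bus or a message broker), but the shape of the loop is the same: models emit events, and the IVI treats them as first-class inputs.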

Projects/Projects/Humanoid-&-Quadruped-Robotics-&-AI.md (9 changes: 6 additions & 3 deletions)
@@ -47,12 +47,15 @@ This combination makes them ideal testbeds for exploring physical AI under real-
By leveraging efficient Arm-native ML frameworks (e.g. PyTorch + ExecuTorch, LiteRT, ONNX Runtime, or accelerated ROS 2 pipelines), developers can study how modern AI models behave when tightly coupled to physical embodiment, whether quadrupedal or humanoid. Arm-based systems enable tight perception–action loops, reducing the latency from sensing a photon or audio wave to taking a meaningful physical action.
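
As a rough sketch of such a perception–action loop (assumed, not taken from the project brief), the snippet below runs an ONNX image classifier on incoming frames and maps the top class to an action. The model file `perception_model.onnx`, the input shape, and the `act()` hook are placeholder assumptions.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime


def act(class_id: int) -> None:
    # Hypothetical actuator hook: map a perceived class to a motor command.
    print(f"actuate: behavior for class {class_id}")


# Placeholder model: any small image classifier exported to ONNX would do.
session = ort.InferenceSession("perception_model.onnx")
input_name = session.get_inputs()[0].name

for _ in range(10):
    # Stand-in for a camera frame; a real loop would read from the sensor.
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
    logits = session.run(None, {input_name: frame})[0]
    act(int(np.argmax(logits)))  # sense -> infer -> act, kept as tight as possible
```

The same loop structure applies whether inference runs through ExecuTorch, LiteRT, or a ROS 2 node; what changes is how frames arrive and how `act()` is wired to the robot's controllers.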

**Example Platforms**
Quadruped

**Quadruped**
- [PuppyPi - Raspberry Pi 4/5](https://www.hiwonder.com/products/puppypi?variant=40213129003095&srsltid=AfmBOoonusv8sBF3hP6LMvyvBAXPVJh7eW0V60FU7IDGaPfYcz7cseXH)
- [ROSPug - Jetson Nano](https://www.hiwonder.com/products/rospug?srsltid=AfmBOop83H4FhIvFPY9-fYUyQ2a0xrh9-_gfr4aHVuy15X8owRgYV2PL)
Humanoid / Biped

**Humanoid / Biped**
- [TonyPi - Raspberry Pi 4/5](https://www.hiwonder.com/products/tonypi?variant=31753114681431&srsltid=AfmBOoosUcEQONClryEw_jPzqkrezui8d5BkTunVcWUTKhbD_xikG_10)
Boards

**Boards**
- [Raspberry Pi 4, 5, CM4](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000)
- [Rockchip RK3566 or 3588](https://www.waveshare.com/core3566.htm?srsltid=AfmBOorJELpnCGbrB3pV489O_RIOvnExIjj8q84sPlp-N4W3b4_wsfRj)
- [Nvidia Jetson Nano - Heterogeneous SoC with Arm-powered CPU for control and orchestration](https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-nano/product-development/)

@@ -63,8 +63,8 @@ Build and evaluate a comprehensive AI-augmented audio/video capture and provenan
- captures streamed media with a camera or microphone,
- runs AI models on-device (e.g. face/object/keyword/sentiment detection, upscaling/filters/enhancements),
- generates C2PA Content Credentials that transparently disclose:
- which models were run,
- their effect/impact on the image or video
1. which models were run,
2. their effect/impact on the image or video

and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines.
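
As a sketch of what that disclosure could look like, the snippet below assembles a `c2pa.actions`-style assertion as a plain Python dict. The general shape follows the C2PA specification's actions assertion, but the model identifiers and parameter strings are invented, and a real pipeline would sign and embed the manifest with a C2PA SDK (e.g. the open-source c2pa-rs bindings) rather than just printing JSON.

```python
import json

# Invented model names; the label and action fields follow the C2PA
# "c2pa.actions" convention, but this is an unsigned illustration only.
actions_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {
                "action": "c2pa.filtered",
                "softwareAgent": "on-device-upscaler v0.1 (hypothetical)",
                "parameters": {"description": "2x super-resolution on full frame"},
            },
            {
                "action": "c2pa.edited",
                "softwareAgent": "face-detector v0.3 (hypothetical)",
                "parameters": {"description": "boxes drawn on 2 detected faces"},
            },
        ]
    },
}

print(json.dumps(actions_assertion, indent=2))
```

Each on-device model run then leaves an auditable trace of both which model touched the media and what it changed, which is the disclosure the project calls for.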

docs/_posts/2026-02-06-AI-Defined-Vehicles-ADAS-IVI-on-Arm.md (12 changes: 6 additions & 6 deletions)
@@ -53,9 +53,9 @@ full_description: |-
- Multimedia services such as navigation, vehicle telemetry, video/audio streaming, internet radio, or gaming
- System-level integration, including boot sequencing, background services, and inter-process communication
- ADAS-inspired AI/ML-driven functionality that actively shapes system behavior, such as:
- Driver monitoring (e.g. attention, drowsiness, or distraction detection)
- In-cabin perception (e.g. sound classification, occupant presence)
- Context-aware automation (e.g. adaptive UI, media muting, or alerts based on inferred events)
1. Driver monitoring (e.g. attention, drowsiness, or distraction detection)
2. In-cabin perception (e.g. sound classification, occupant presence)
3. Context-aware automation (e.g. adaptive UI, media muting, or alerts based on inferred events)

For example, an AI service might analyze camera or audio input and dynamically modify the UI, pause media, or trigger a **simulated** safety alert—illustrating how AI models become first-class components in vehicle software architecture.

@@ -129,9 +129,9 @@ The IVI application should combine:
- Multimedia services such as navigation, vehicle telemetry, video/audio streaming, internet radio, or gaming
- System-level integration, including boot sequencing, background services, and inter-process communication
- ADAS-inspired AI/ML-driven functionality that actively shapes system behavior, such as:
- Driver monitoring (e.g. attention, drowsiness, or distraction detection)
- In-cabin perception (e.g. sound classification, occupant presence)
- Context-aware automation (e.g. adaptive UI, media muting, or alerts based on inferred events)
1. Driver monitoring (e.g. attention, drowsiness, or distraction detection)
2. In-cabin perception (e.g. sound classification, occupant presence)
3. Context-aware automation (e.g. adaptive UI, media muting, or alerts based on inferred events)

For example, an AI service might analyze camera or audio input and dynamically modify the UI, pause media, or trigger a **simulated** safety alert—illustrating how AI models become first-class components in vehicle software architecture.

docs/_posts/2026-02-06-Humanoid-&-Quadruped-Robotics-&-AI.md (18 changes: 12 additions & 6 deletions)
@@ -48,12 +48,15 @@ full_description: |-
By leveraging efficient Arm-native ML frameworks (e.g. PyTorch + ExecuTorch, LiteRT, ONNX Runtime, or accelerated ROS 2 pipelines), developers can study how modern AI models behave when tightly coupled to physical embodiment, whether quadrupedal or humanoid. Arm-based systems enable tight perception–action loops, reducing the latency from sensing a photon or audio wave to taking a meaningful physical action.

**Example Platforms**
Quadruped

**Quadruped**
- [PuppyPi - Raspberry Pi 4/5](https://www.hiwonder.com/products/puppypi?variant=40213129003095&srsltid=AfmBOoonusv8sBF3hP6LMvyvBAXPVJh7eW0V60FU7IDGaPfYcz7cseXH)
- [ROSPug - Jetson Nano](https://www.hiwonder.com/products/rospug?srsltid=AfmBOop83H4FhIvFPY9-fYUyQ2a0xrh9-_gfr4aHVuy15X8owRgYV2PL)
Humanoid / Biped

**Humanoid / Biped**
- [TonyPi - Raspberry Pi 4/5](https://www.hiwonder.com/products/tonypi?variant=31753114681431&srsltid=AfmBOoosUcEQONClryEw_jPzqkrezui8d5BkTunVcWUTKhbD_xikG_10)
Boards

**Boards**
- [Raspberry Pi 4, 5, CM4](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000)
- [Rockchip RK3566 or 3588](https://www.waveshare.com/core3566.htm?srsltid=AfmBOorJELpnCGbrB3pV489O_RIOvnExIjj8q84sPlp-N4W3b4_wsfRj)
- [Nvidia Jetson Nano - Heterogeneous SoC with Arm-powered CPU for control and orchestration](https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-nano/product-development/)

@@ -135,12 +138,15 @@ This combination makes them ideal testbeds for exploring physical AI under real-
By leveraging efficient Arm-native ML frameworks (e.g. PyTorch + ExecuTorch, LiteRT, ONNX Runtime, or accelerated ROS 2 pipelines), developers can study how modern AI models behave when tightly coupled to physical embodiment, whether quadrupedal or humanoid. Arm-based systems enable tight perception–action loops, reducing the latency from sensing a photon or audio wave to taking a meaningful physical action.

**Example Platforms**
Quadruped

**Quadruped**
- [PuppyPi - Raspberry Pi 4/5](https://www.hiwonder.com/products/puppypi?variant=40213129003095&srsltid=AfmBOoonusv8sBF3hP6LMvyvBAXPVJh7eW0V60FU7IDGaPfYcz7cseXH)
- [ROSPug - Jetson Nano](https://www.hiwonder.com/products/rospug?srsltid=AfmBOop83H4FhIvFPY9-fYUyQ2a0xrh9-_gfr4aHVuy15X8owRgYV2PL)
Humanoid / Biped

**Humanoid / Biped**
- [TonyPi - Raspberry Pi 4/5](https://www.hiwonder.com/products/tonypi?variant=31753114681431&srsltid=AfmBOoosUcEQONClryEw_jPzqkrezui8d5BkTunVcWUTKhbD_xikG_10)
Boards

**Boards**
- [Raspberry Pi 4, 5, CM4](https://www.raspberrypi.com/products/compute-module-4/?variant=raspberry-pi-cm4001000)
- [Rockchip RK3566 or 3588](https://www.waveshare.com/core3566.htm?srsltid=AfmBOorJELpnCGbrB3pV489O_RIOvnExIjj8q84sPlp-N4W3b4_wsfRj)
- [Nvidia Jetson Nano - Heterogeneous SoC with Arm-powered CPU for control and orchestration](https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-nano/product-development/)

@@ -64,8 +64,8 @@ full_description: |-
- captures streamed media with a camera or microphone,
- runs AI models on-device (e.g. face/object/keyword/sentiment detection, upscaling/filters/enhancements),
- generates C2PA Content Credentials that transparently disclose:
- which models were run,
- their effect/impact on the image or video
1. which models were run,
2. their effect/impact on the image or video

and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines.

@@ -137,8 +137,8 @@ Build and evaluate a comprehensive AI-augmented audio/video capture and provenan
- captures streamed media with a camera or microphone,
- runs AI models on-device (e.g. face/object/keyword/sentiment detection, upscaling/filters/enhancements),
- generates C2PA Content Credentials that transparently disclose:
- which models were run,
- their effect/impact on the image or video
1. which models were run,
2. their effect/impact on the image or video

and demonstrates how this provenance enables trust and auditability in real-world use cases such as content integrity validation and responsible media pipelines.
