# CodeProject.AI Home Assistant Object Detection custom component

## This version has been updated to fix the deprecation of the `ANTIALIAS` constant in Pillow >= 10, used by the latest Home Assistant (as of 06/08/2023)

This component is a direct port of the [HASS-Deepstack-object](https://github.com/robmarkcole/HASS-Deepstack-object) component by [Robin Cole](https://github.com/robmarkcole). This component provides AI-based Object Detection capabilities using [CodeProject.AI Server](https://codeproject.com/ai).

[CodeProject.AI Server](https://codeproject.com/ai) is a service which runs either in a Docker container or as a Windows Service, and exposes an API for many AI inference operations via REST. The Object Detection capabilities use the [YOLO](https://arxiv.org/pdf/1506.02640.pdf) algorithm as implemented by Ultralytics and others. It can identify 80 different kinds of objects by default, but custom models are also available that focus on specific objects such as animals, license plates or objects typically encountered by home webcams. CodeProject.AI Server is free, locally installed, can run without an external internet connection, and is compatible with Windows, Linux and macOS. It can run on Raspberry Pi, and supports CUDA and embedded Intel GPUs.

On the machine on which you are running CodeProject.AI Server, either ensure the service is running, or if using Docker, [start a Docker container](https://www.codeproject.com/ai/docs/why/running_in_docker.html#launching-a-container).

### A note on Ports
CodeProject.AI Server typically runs on port 32168, so you will need to ensure the machine hosting the server has this port open. If you need to change ports (e.g. switch to port 80) then for Docker use the `-p` flag:
```
docker run --name CodeProject.AI-Server -d -p 80:32168 ^
    --mount type=bind,source=C:\ProgramData\CodeProject\AI\docker\data,target=/etc/codeproject/ai ^
    --mount type=bind,source=C:\ProgramData\CodeProject\AI\docker\modules,target=/app/modules ^
    codeproject/ai-server
```
For a Windows server you will need to either set an environment variable `CPAI_PORT` with value 80 (on the host running CodeProject.AI Server), or edit the appsettings.json file in the `C:\Program Files\CodeProject\AI` folder and change the value of the `CPAI_PORT` environment variable in the file.

## Usage of this component

Thanks again to Robin for the original write-up of his component.

The `codeproject_ai_object` component adds an `image_processing` entity where the state of the entity is the total count of target objects that are above a `confidence` threshold, which has a default value of 80%. You can have a single target object class, or multiple. The time of the last detection of any target object is in the `last target detection` attribute. The type and number of objects (of any confidence) is listed in the `summary` attributes. Optionally a region of interest (ROI) can be configured, and only objects with their center (represented by an `x`) within the ROI will be included in the state count. The ROI will be displayed as a green box, and objects with their center in the ROI have a red box.

Also optionally the processed image can be saved to disk, with bounding boxes showing the location of detected objects. If `save_file_folder` is configured, an image with a filename of the format `codeproject_ai_object_{source name}_latest.jpg` is overwritten on each new detection of a target. Optionally this image can also be saved with a timestamp in the filename, if `save_timestamped_file` is configured as `True`. An event `codeproject_ai.object_detected` is fired for each object detected that is in the targets list and meets the confidence and ROI criteria. If you are a power user with advanced needs such as zoning detections, or you want to track multiple object types, you will need to use the `codeproject_ai.object_detected` events.

**Note** that by default the component will **not** automatically scan images, but requires you to call the `image_processing.scan` service, e.g. using an automation triggered by motion.
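
For example, a minimal automation triggering a scan on motion might look like the sketch below. The entity names (`binary_sensor.driveway_motion` and the `image_processing` entity id) are placeholders; substitute your own.

```yaml
automation:
  - alias: Scan camera on motion
    trigger:
      - platform: state
        entity_id: binary_sensor.driveway_motion  # hypothetical motion sensor
        to: "on"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.codeproject_ai_object_local_file
```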

## Home Assistant setup

Place the `custom_components` folder in your configuration directory (or add its contents to an existing `custom_components` folder). Then configure object detection. **Important:** It is necessary to configure only a single camera per `codeproject_ai_object` entity. If you want to process multiple cameras, you will therefore need multiple `codeproject_ai_object` `image_processing` entities.

The component can optionally save snapshots of the processed images. If you would like to use this option, you need to create a folder where the snapshots will be stored. The folder should be in the same directory as your `configuration.yaml` file. In the example below, we have named the folder `snapshots`.

Add to your Home Assistant config:

```yaml
image_processing:
  - platform: codeproject_ai_object
    ip_address: localhost
    port: 32168
    # custom_model: mask
    # confidence: 80
    save_file_folder: /config/snapshots/
    save_file_format: png
    save_timestamped_file: True
    always_save_latest_file: True
    scale: 0.75
    # roi_x_min: 0.35
    roi_x_max: 0.8
    # roi_y_min: 0.4
    roi_y_max: 0.8
    crop_to_roi: True
    targets:
      - target: person
      - target: vehicle
        confidence: 60
      - target: car
        confidence: 40
    source:
      - entity_id: camera.local_file
```

Configuration variables:
- **ip_address**: the IP address of your CodeProject.AI Server instance.
- **port**: the port of your CodeProject.AI Server instance.
- **timeout**: (Optional, default 10 seconds) The timeout for requests to CodeProject.AI Server.
- **custom_model**: (Optional) The name of a custom model, if you are using one. Don't forget to add the targets from the custom model below.
- **confidence**: (Optional) The confidence (in %) above which detected targets are counted in the sensor state. Default value: 80.
- **save_file_folder**: (Optional) The folder to save processed images to. Note that the folder path should be added to [whitelist_external_dirs](https://www.home-assistant.io/docs/configuration/basic/).
- **save_file_format**: (Optional, default `jpg`, alternatively `png`) The file format to save images as. `png` generally results in easier-to-read annotations.
- **save_timestamped_file**: (Optional, default `False`, requires `save_file_folder` to be configured) Save the processed image with the time of detection in the filename.
- **always_save_latest_file**: (Optional, default `False`, requires `save_file_folder` to be configured) Always save the last processed image, even if there were no detections.
- **scale**: (Optional, default 1.0), range 0.1-1.0, applies a scaling factor to the images that are saved. This reduces the disk space used by saved images, and is especially beneficial when using high-resolution cameras.
- **show_boxes**: (Optional, default `True`), if `False` bounding boxes are not shown on saved images.
- **roi_x_min**: (Optional, default 0), range 0-1, must be less than roi_x_max.
- **roi_x_max**: (Optional, default 1), range 0-1, must be more than roi_x_min.
- **roi_y_min**: (Optional, default 0), range 0-1, must be less than roi_y_max.
- **roi_y_max**: (Optional, default 1), range 0-1, must be more than roi_y_min.
- **crop_to_roi**: (Optional, default `False`), crops the image to the specified ROI. May improve object detection accuracy when a region of interest is applied.
- **source**: Must be a camera.
- **targets**: The list of target object names and/or `object_type`, default `person`. Optionally a `confidence` can be set for each target; if not, the default confidence is used. Note the minimum possible confidence is 10%.

For the ROI, the (x=0,y=0) position is the top left pixel of the image, and the (x=1,y=1) position is the bottom right pixel of the image. It might seem a bit odd to have y running from top to bottom of the image, but that is the coordinate system used by Pillow.
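
As an illustrative sketch (not the component's actual implementation), the ROI check described above amounts to testing whether an object's centroid falls inside the configured box:

```python
def centroid_in_roi(centroid, roi_x_min=0.0, roi_y_min=0.0,
                    roi_x_max=1.0, roi_y_max=1.0):
    """Return True if an (x, y) centroid (relative coordinates,
    origin at the top-left of the image) lies inside the ROI."""
    x, y = centroid
    return roi_x_min <= x <= roi_x_max and roi_y_min <= y <= roi_y_max

# With the example ROI from the config above (roi_x_max: 0.8, roi_y_max: 0.8):
print(centroid_in_roi((0.5, 0.6), roi_x_max=0.8, roi_y_max=0.8))  # True
print(centroid_in_roi((0.9, 0.5), roi_x_max=0.8, roi_y_max=0.8))  # False
```

Only objects passing this test are counted in the entity state when an ROI is configured.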
85+
86+
#### Event `codeproject_ai.object_detected`
87+
An event `codeproject_ai.object_detected` is fired for each object detected above the configured `confidence` threshold. This is the recommended way to check the confidence of detections, and to keep track of objects that are not configured as the `target` (use `Developer tools -> EVENTS -> :Listen to events`, to monitor these events).
88+
89+
An example use case for event is to get an alert when some rarely appearing object is detected, or to increment a [counter](https://www.home-assistant.io/components/counter/). The `codeproject_ai.object_detected` event payload includes:
90+
91+
- `entity_id` : the entity id responsible for the event
92+
- `name` : the name of the object detected
93+
- `object_type` : the type of the object, from `person`, `vehicle`, `animal` or `other`
94+
- `confidence` : the confidence in detection in the range 0 - 100%
95+
- `box` : the bounding box of the object
96+
- `centroid` : the centre point of the object
97+
- `saved_file` : the path to the saved annotated image, which is the timestamped file if `save_timestamped_file` is True, or the default saved image if False
98+
99+
An example automation using the `codeproject_ai.object_detected` event is given below:

```yaml
- action:
    - data_template:
        caption: "New person detection with confidence {{ trigger.event.data.confidence }}"
        file: "{{ trigger.event.data.saved_file }}"
      service: telegram_bot.send_photo
  alias: Object detection automation
  condition: []
  id: "1120092824622"
  trigger:
    - platform: event
      event_type: codeproject_ai.object_detected
      event_data:
        name: person
```
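
As another sketch, the same event can drive the counter use case mentioned earlier. Here `counter.person_detections` is a hypothetical counter helper you would define yourself:

```yaml
- alias: Count person detections
  trigger:
    - platform: event
      event_type: codeproject_ai.object_detected
      event_data:
        name: person
  condition: []
  action:
    - service: counter.increment
      target:
        entity_id: counter.person_detections  # hypothetical counter helper
```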

## Displaying the CodeProject.AI latest jpg file

It is easy to display the `codeproject_ai_object_{source name}_latest.jpg` image with a [local_file](https://www.home-assistant.io/components/local_file/) camera. An example configuration is:
```yaml
camera:
  - platform: local_file
    file_path: /config/snapshots/codeproject_ai_object_local_file_latest.jpg
    name: codeproject_ai_latest_person
```

## Info on box

The `box` coordinates and the box center (`centroid`) can be used to determine whether an object falls within a defined region of interest (ROI). This can be useful to include/exclude objects by their location in the image.

* The `box` is defined by the tuple `(y_min, x_min, y_max, x_max)` (equivalent to image top, left, bottom, right) where the coordinates are floats in the range `[0.0, 1.0]` and relative to the width and height of the image.
* The centroid is in `(x,y)` coordinates where `(0,0)` is the top left hand corner of the image and `(1,1)` is the bottom right corner of the image.
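
To relate these relative coordinates to actual pixels, multiply by the image dimensions. A small sketch (again, illustrative only, not the component's code):

```python
def box_to_pixels(box, img_width, img_height):
    """Convert a relative (y_min, x_min, y_max, x_max) box to pixel
    coordinates (left, top, right, bottom)."""
    y_min, x_min, y_max, x_max = box
    return (round(x_min * img_width), round(y_min * img_height),
            round(x_max * img_width), round(y_max * img_height))

def box_centroid(box):
    """Centre of a relative box as (x, y), origin at the top-left."""
    y_min, x_min, y_max, x_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

print(box_to_pixels((0.25, 0.25, 0.75, 0.75), 1920, 1080))  # (480, 270, 1440, 810)
print(box_centroid((0.25, 0.25, 0.75, 0.75)))               # (0.5, 0.5)
```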

## Browsing saved images in HA

I highly recommend using the Home Assistant Media Player Browser to browse and preview processed images. Add to your config something like:
```yaml
homeassistant:
  .
  .
  whitelist_external_dirs:
    - /config/images/
  media_dirs:
    local: /config/images/

media_source:
```
Configure this component to use the above directory for `save_file_folder`, and saved images can then be browsed from the HA front end like below:

<p align="center">
<img src="https://github.com/robmarkcole/HASS-Deepstack-object/blob/master/docs/media_player.png" width="750">
<small>(Image courtesy of Robin Cole, and uses his original Deepstack implementation)</small>
</p>

## Face recognition

For face recognition with CodeProject.AI Server use https://github.com/codeproject/CodeProject.AI-HomeAssist-FaceDetect

### Support

- For code related issues such as suspected bugs **with this integration**, please open an issue on this repo.
- For CodeProject.AI Server setup questions, please see the [CodeProject.AI Server docs](https://www.codeproject.com/AI/docs/)
- For bugs and suggestions related to CodeProject.AI Server, please use the [CodeProject.AI forum](https://www.codeproject.com/Feature/CodeProjectAI-Discussions.aspx).
- For general chat or to discuss Home Assistant specific issues related to configuration or use cases, please [use the Home Assistant forums](https://community.home-assistant.io/).

### Docker tips

Please view the [CodeProject.AI Server docs](https://www.codeproject.com/AI/docs/why/running_in_docker.html).
Add the `-d` flag to run the container in the background.

### FAQ

Q1: I get the following warning, is this normal?
```
2019-01-15 06:37:52 WARNING (MainThread) [homeassistant.loader] You are using a custom component for image_processing.codeproject_ai_face which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you do experience issues with Home Assistant.
```
A1: Yes, this is normal.

------

Q6: I am getting an error from Home Assistant: `Platform error: image_processing - Integration codeproject_ai_object not found`

A6: This can happen when you are running in Docker/Hassio, and indicates that one of the dependencies isn't installed. It is necessary to reboot your Hassio device, or rebuild your Docker container. Note that just restarting Home Assistant will not resolve this.

------
181+
182+
## Objects
183+
The following lists all valid target object names:
184+
```
185+
person, bicycle, car, motorcycle, airplane,
186+
bus, train, truck, boat, traffic light, fire hydrant, stop_sign,
187+
parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant,
188+
bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase,
189+
frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove,
190+
skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork,
191+
knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot,
192+
hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table,
193+
toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave,
194+
oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear,
195+
hair dryer, toothbrush.
196+
```
197+
Objects are grouped by the following `object_type`:
198+
- **person**: person
199+
- **animal**: bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe
200+
- **vehicle**: bicycle, car, motorcycle, airplane, bus, train, truck
201+
- **other**: any object that is not in `person`, `animal` or `vehicle`
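
The grouping above can be expressed as a simple lookup; a sketch for illustration (not the component's actual code):

```python
ANIMALS = {"bird", "cat", "dog", "horse", "sheep", "cow",
           "elephant", "bear", "zebra", "giraffe"}
VEHICLES = {"bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck"}

def object_type(name: str) -> str:
    """Map a detected object name to its object_type group."""
    if name == "person":
        return "person"
    if name in ANIMALS:
        return "animal"
    if name in VEHICLES:
        return "vehicle"
    return "other"

print(object_type("dog"))     # animal
print(object_type("car"))     # vehicle
print(object_type("bottle"))  # other
```

This is the grouping used when a `target` is specified by `object_type` rather than by an individual object name.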

## Development

Currently only the helper functions are tested, using pytest.
* `python3 -m venv venv`
* `source venv/bin/activate`
* `pip install -r requirements-dev.txt`
* `venv/bin/py.test custom_components/codeproject_ai_object/tests.py -vv -p no:warnings`

## Videos of usage

Robin Cole has a series of videos using Deepstack with Home Assistant which may provide some assistance.

Check out this excellent video of usage from [Everything Smart Home](https://www.youtube.com/channel/UCrVLgIniVg6jW38uVqDRIiQ)

[![](http://img.youtube.com/vi/vMdpLiAB9dI/0.jpg)](http://www.youtube.com/watch?v=vMdpLiAB9dI "")

Also see the video of a presentation I did to the [IceVision](https://airctic.com/) community on deploying Deepstack on a Jetson nano.

[![](http://img.youtube.com/vi/1O0mCaA22fE/0.jpg)](http://www.youtube.com/watch?v=1O0mCaA22fE "")
