
Commit 658a488 — Update dependencies (#41)

1 parent: 3249dc4

10 files changed: +940 −680 lines

`.pre-commit-config.yaml` — 2 additions, 2 deletions

```diff
@@ -45,13 +45,13 @@ repos:
       - id: no-commit-to-branch
         args: [--branch, dev, --branch, int, --branch, main]
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.11.11
+    rev: v0.12.1
     hooks:
       - id: ruff
         args: [--fix, --exit-non-zero-on-fix]
       - id: ruff-format
   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.15.0
+    rev: v1.16.1
    hooks:
      - id: mypy
        args: [--no-warn-unused-ignores]
```

`.pyproject_generation/pyproject_custom.toml` — 4 additions, 4 deletions

```diff
@@ -1,13 +1,13 @@
 [project]
 name = "ars"
-version = "5.1.0"
+version = "5.1.1"
 description = "Access Request Service"
 dependencies = [
-    "ghga-event-schemas ~= 9.1.0",
+    "ghga-event-schemas ~= 9.2.0",
     "ghga-service-commons[api,auth] >= 4.1.0",
-    "hexkit[mongodb,akafka] >= 5.1.1",
+    "hexkit[mongodb,akafka] >= 5.3.0",
     "httpx >= 0.28",
-    "typer >= 0.15",
+    "typer >= 0.16",
 ]
 
 [project.urls]
```

`README.md` — 40 additions, 4 deletions

````diff
@@ -16,21 +16,21 @@ We recommend using the provided Docker container.
 
 A pre-built version is available at [docker hub](https://hub.docker.com/repository/docker/ghga/access-request-service):
 ```bash
-docker pull ghga/access-request-service:5.1.0
+docker pull ghga/access-request-service:5.1.1
 ```
 
 Or you can build the container yourself from the [`./Dockerfile`](./Dockerfile):
 ```bash
 # Execute in the repo's root dir:
-docker build -t ghga/access-request-service:5.1.0 .
+docker build -t ghga/access-request-service:5.1.1 .
 ```
 
 For production-ready deployment, we recommend using Kubernetes, however,
 for simple use cases, you could execute the service using docker
 on a single server:
 ```bash
 # The entrypoint is preconfigured:
-docker run -p 8080:8080 ghga/access-request-service:5.1.0 --help
+docker run -p 8080:8080 ghga/access-request-service:5.1.1 --help
 ```
 
 If you prefer not to use containers, you may install the service from source:
@@ -156,7 +156,7 @@ The service requires the following configuration parameters:
 ```
 
 
-- <a id="properties/kafka_max_message_size"></a>**`kafka_max_message_size`** *(integer)*: The largest message size that can be transmitted, in bytes. Only services that have a need to send/receive larger messages should set this. Exclusive minimum: `0`. Default: `1048576`.
+- <a id="properties/kafka_max_message_size"></a>**`kafka_max_message_size`** *(integer)*: The largest message size that can be transmitted, in bytes, before compression. Only services that have a need to send/receive larger messages should set this. When used alongside compression, this value can be set to something greater than the broker's `message.max.bytes` field, which effectively concerns the compressed message size. Exclusive minimum: `0`. Default: `1048576`.
 
 
 Examples:
@@ -171,6 +171,42 @@ The service requires the following configuration parameters:
 ```
 
 
+- <a id="properties/kafka_compression_type"></a>**`kafka_compression_type`**: The compression type used for messages. Valid values are: None, gzip, snappy, lz4, and zstd. If None, no compression is applied. This setting is only relevant for the producer and has no effect on the consumer. If set to a value, the producer will compress messages before sending them to the Kafka broker. If unsure, zstd provides a good balance between speed and compression ratio. Default: `null`.
+
+  - **Any of**
+
+    - <a id="properties/kafka_compression_type/anyOf/0"></a>*string*: Must be one of: `["gzip", "snappy", "lz4", "zstd"]`.
+
+    - <a id="properties/kafka_compression_type/anyOf/1"></a>*null*
+
+
+Examples:
+
+```json
+null
+```
+
+
+```json
+"gzip"
+```
+
+
+```json
+"snappy"
+```
+
+
+```json
+"lz4"
+```
+
+
+```json
+"zstd"
+```
+
+
 - <a id="properties/kafka_max_retries"></a>**`kafka_max_retries`** *(integer)*: The maximum number of times to immediately retry consuming an event upon failure. Works independently of the dead letter queue. Minimum: `0`. Default: `0`.
 
 
````
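Putting the two documented settings together, a configuration that enables compression might look like the following. This is a hypothetical excerpt for illustration only; the key names come from this commit's config documentation, while the chosen values are assumptions, not defaults shipped by the service.

```yaml
# Hypothetical config excerpt (illustrative values, not service defaults).
# zstd is suggested in the docs as a good speed/ratio trade-off;
# gzip, snappy, lz4, or null (no compression) are also valid.
kafka_compression_type: zstd
# Per the updated docs, this limit applies to the message size before
# compression, so with compression enabled it may exceed the broker's
# `message.max.bytes`, which concerns the compressed size.
kafka_max_message_size: 16777216
```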

`config_schema.json` — 27 additions, 1 deletion

```diff
@@ -145,7 +145,7 @@
     },
     "kafka_max_message_size": {
       "default": 1048576,
-      "description": "The largest message size that can be transmitted, in bytes. Only services that have a need to send/receive larger messages should set this.",
+      "description": "The largest message size that can be transmitted, in bytes, before compression. Only services that have a need to send/receive larger messages should set this. When used alongside compression, this value can be set to something greater than the broker's `message.max.bytes` field, which effectively concerns the compressed message size.",
       "examples": [
         1048576,
         16777216
@@ -154,6 +154,32 @@
       "title": "Kafka Max Message Size",
       "type": "integer"
     },
+    "kafka_compression_type": {
+      "anyOf": [
+        {
+          "enum": [
+            "gzip",
+            "snappy",
+            "lz4",
+            "zstd"
+          ],
+          "type": "string"
+        },
+        {
+          "type": "null"
+        }
+      ],
+      "default": null,
+      "description": "The compression type used for messages. Valid values are: None, gzip, snappy, lz4, and zstd. If None, no compression is applied. This setting is only relevant for the producer and has no effect on the consumer. If set to a value, the producer will compress messages before sending them to the Kafka broker. If unsure, zstd provides a good balance between speed and compression ratio.",
+      "examples": [
+        null,
+        "gzip",
+        "snappy",
+        "lz4",
+        "zstd"
+      ],
+      "title": "Kafka Compression Type"
+    },
     "kafka_max_retries": {
       "default": 0,
       "description": "The maximum number of times to immediately retry consuming an event upon failure. Works independently of the dead letter queue.",
```

`example_config.yaml` — 1 addition, 0 deletions

```diff
@@ -28,6 +28,7 @@ docs_url: /docs
 download_access_url: http://127.0.0.1:8080/download-access
 generate_correlation_id: true
 host: 127.0.0.1
+kafka_compression_type: null
 kafka_dlq_topic: dlq
 kafka_enable_dlq: true
 kafka_max_message_size: 1048576
```

`lock/requirements-dev-template.in` — 2 additions, 2 deletions

```diff
@@ -19,8 +19,8 @@ typer>=0.15
 httpx>=0.28
 pytest-httpx>=0.35
 
-urllib3>=2.4
-requests>=2.32
+urllib3>=2.5
+requests>=2.32.4
 
 casefy>=1.1
 jsonschema2md>=1.5
```
