Commit de75ec8

CyberViser and Copilot committed
release: v0.3.0 — multi-language SDK, /v1/code, Qwen Coder 32B
- requirements.txt: openai, flask, python-dotenv now explicit dependencies
- pyproject.toml: hancock-client installable Python package (pip install -e .)
- clients/python/__init__.py: package init, exports HancockClient + MODELS
- Makefile: client-python + client-node targets; updated help text
- CHANGELOG.md: full v0.3.0 entry (15 new additions)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
1 parent: 9ba4986

File tree

5 files changed: +90 −1 lines

CHANGELOG.md
Lines changed: 27 additions & 0 deletions

@@ -7,6 +7,33 @@ Versioning: [Semantic Versioning](https://semver.org/)
 
 ---
 
+## [0.3.0] — 2026-02-21
+
+### Added
+- **Qwen 2.5 Coder 32B integration** — `MODELS` dict with aliases (`mistral-7b`, `qwen-coder`, `llama-8b`, `mixtral-8x7b`)
+- **`/v1/code` REST endpoint** — security code generation: YARA/Sigma rules, KQL/SPL queries, exploit PoCs, CTF scripts
+- **`/mode code` CLI command** — auto-switches to Qwen Coder model on entry
+- **`CODE_SYSTEM` prompt** — security code specialist persona for Python, Bash, PowerShell, Go, KQL, SPL, YARA, Sigma
+- **Python SDK** (`clients/python/`) — `HancockClient` class with `ask/code/triage/hunt/respond/chat` methods
+- **Python CLI** (`clients/python/hancock_cli.py`) — interactive + one-shot, `/mode`, `/model` commands, multi-turn history
+- **Node.js SDK** (`clients/nodejs/`) — streaming CLI backed by NVIDIA NIM, ES module, same model aliases
+- **`pyproject.toml`** — Python SDK installable as `hancock-client` package via `pip install -e .`
+- **`__init__.py`** for Python SDK package — exports `HancockClient`, `MODELS`, `__version__`
+- **GPU training page** (`docs/train.html`) — 4 free GPU options (Modal ⭐, Kaggle, Colab, NVIDIA NIM)
+- **Modal.com GPU runner** (`train_modal.py`) — full LoRA pipeline: data → train → GGUF export, free $30/mo
+- **Kaggle fine-tune notebook** (`Hancock_Kaggle_Finetune.ipynb`) — 30h/week free T4
+- **Manual finetune workflow** (`.github/workflows/finetune.yml`) — GPU choice dropdown (T4/A10G/A100)
+- **Makefile `client-python` + `client-node` targets** — one-command SDK launch
+- **1,375 training samples** (`data/hancock_v2.jsonl`) — 691 MITRE ATT&CK + 600 CVEs + 75 pentest/SOC KB + 9 Sigma
+
+### Changed
+- `requirements.txt` — added `openai>=1.0.0`, `flask>=3.0.0`, `python-dotenv>=1.0.0`
+- `docs/api.html` — added `/v1/code` endpoint, Python SDK + Node.js SDK sections, updated Modes table with `code` mode
+- `/health` endpoint — now exposes `modes_available`, `models_available`, and all 6 endpoints
+- `.env.example` — documents `HANCOCK_CODER_MODEL=qwen/qwen2.5-coder-32b-instruct`
+
+---
 
 ## [0.2.0] — 2026-02-21
 
 ### Added
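The alias scheme from the 0.3.0 entry can be sketched in a few lines. The four alias names come from the changelog and the `qwen-coder` target ID from `.env.example`; the other three NIM model IDs and the `resolve_model` helper are illustrative assumptions, not the actual contents of `clients/python/hancock_client.py`:

```python
# Hypothetical sketch of the MODELS alias mapping. Only the qwen-coder
# ID is confirmed by .env.example; the other IDs are assumed NVIDIA NIM
# catalog names and may differ in the real SDK.
MODELS = {
    "mistral-7b": "mistralai/mistral-7b-instruct-v0.3",
    "qwen-coder": "qwen/qwen2.5-coder-32b-instruct",
    "llama-8b": "meta/llama-3.1-8b-instruct",
    "mixtral-8x7b": "mistralai/mixtral-8x7b-instruct-v0.1",
}

def resolve_model(name: str) -> str:
    """Accept either a short alias or a full model ID."""
    if name in MODELS:
        return MODELS[name]
    if name in MODELS.values():
        return name
    raise ValueError(f"unknown model: {name!r}")
```

This kind of two-way lookup lets a `/model` CLI command take either `qwen-coder` or the full `qwen/qwen2.5-coder-32b-instruct` string.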

Makefile
Lines changed: 14 additions & 1 deletion

@@ -3,7 +3,7 @@
 PYTHON := .venv/bin/python
 PIP := .venv/bin/pip
 
-.PHONY: help setup install dev-install run server pipeline finetune lint clean docker docker-up
+.PHONY: help setup install dev-install run server pipeline finetune lint clean docker docker-up client-python client-node
 
 help:
 	@echo ""
@@ -28,6 +28,10 @@ help:
 	@echo "  pipeline       Run data collection pipeline"
 	@echo "  finetune       Run LoRA fine-tuning on Mistral 7B"
 	@echo ""
+	@echo "  Clients:"
+	@echo "  client-python  Run Python SDK CLI (interactive)"
+	@echo "  client-node    Run Node.js SDK CLI (interactive)"
+	@echo ""
 	@echo "  Dev:"
 	@echo "  lint           Run flake8 linter"
 	@echo "  clean          Remove build artifacts and cache"
@@ -83,3 +87,12 @@ docker:
 
 docker-up:
 	docker-compose up -d
+
+# ─── Clients ─────────────────────────────────────────────────
+client-python:
+	@$(PIP) install openai python-dotenv -q
+	$(PYTHON) clients/python/hancock_cli.py
+
+client-node:
+	@cd clients/nodejs && npm install --silent
+	node clients/nodejs/hancock.js

clients/python/__init__.py
Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+"""Hancock Python SDK — CyberViser."""
+from .hancock_client import HancockClient, MODELS  # noqa: F401
+
+__version__ = "0.3.0"
+__all__ = ["HancockClient", "MODELS"]

pyproject.toml
Lines changed: 41 additions & 0 deletions

@@ -0,0 +1,41 @@
+[build-system]
+requires = ["setuptools>=68", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "hancock-client"
+version = "0.3.0"
+description = "Hancock AI Cybersecurity Agent — Python SDK (NVIDIA NIM backend)"
+readme = "clients/python/README.md"
+license = { file = "LICENSE" }
+requires-python = ">=3.10"
+authors = [{ name = "CyberViser", email = "cyberviser@proton.me" }]
+keywords = ["cybersecurity", "ai", "pentest", "soc", "llm", "nvidia", "security"]
+classifiers = [
+    "Development Status :: 4 - Beta",
+    "Intended Audience :: Information Technology",
+    "Topic :: Security",
+    "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
+    "License :: Other/Proprietary License",
+]
+dependencies = [
+    "openai>=1.0.0",
+    "python-dotenv>=1.0.0",
+]
+
+[project.urls]
+Homepage = "https://cyberviser.netlify.app"
+Repository = "https://github.com/cyberviser/Hancock"
+Documentation = "https://cyberviser.netlify.app/api"
+
+[project.scripts]
+hancock = "hancock_cli:main"
+
+[tool.setuptools.packages.find]
+where = ["clients/python"]
+
+[tool.setuptools.package-dir]
+"" = "clients/python"

requirements.txt
Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+openai>=1.0.0
+flask>=3.0.0
+python-dotenv>=1.0.0
 stix2>=3.0.0
 libtaxii>=1.1.119
 requests>=2.32.0
