Releases: openai/openai-guardrails-js
v0.1.4
What's Changed
- Fix custom and off topic to use llm_base by @steven10a in #31
- More robust handling of checking guardrailLlm client type by @steven10a in #32
- Remove unsupported models from docs by @steven10a in #34
- Bump version to v0.1.4 by @gabor-openai in #33
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's Changed
- Convo history examples by @steven10a in #20
- Add NSFW docs by @steven10a in #22
- Updating prompt injection sys prompt and evals by @steven10a in #24
- Support using prompt param with GuardrailAgent by @steven10a in #23
- Adding Korean RRN PII detection by @steven10a in #25
- Updating safety_identifier usage by @steven10a in #27
- Fix: Correctly use context model for moderation by @steven10a in #26
- Bump version to v0.1.3 by @gabor-openai in #28
Full Changelog: v0.1.2...v0.1.3
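The Korean RRN entry in v0.1.3 adds PII detection for Resident Registration Numbers. As a rough standalone sketch only (not the detector shipped in this library), an RRN has the shape `YYMMDD-SBBBBBB`, and a format-level check can be written as:

```typescript
// Hypothetical, illustrative format check for Korean Resident Registration
// Numbers (RRN); this is NOT the library's implementation. An RRN is six
// birthdate digits, a hyphen, then seven digits whose first digit (1-8)
// encodes century and gender. This checks shape only, not the checksum.
const RRN_PATTERN = /\b\d{6}-[1-8]\d{6}\b/;

function containsRrn(text: string): boolean {
  return RRN_PATTERN.test(text);
}

// Flag text that appears to contain an RRN so a guardrail could block it.
console.log(containsRrn("My RRN is 900101-1234567")); // true
console.log(containsRrn("call 010-1234-5678")); // false
```

A production detector would also validate the embedded birthdate and the final check digit to cut false positives.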
v0.1.2
What's Changed
- Fixing typing warnings and errors and enabling npm run lint by @cosmiccrisp in #17
- Cleaner support of conversation history by @steven10a in #18
- Add a GH Actions job for trusted publishing by @seratch in #14
- version bump v0.1.2 by @steven10a in #19
Full Changelog: v0.1.1...v0.1.2
v0.1.1
What's Changed
- Update PI sys prompt and new eval by @steven10a in #16
- Add safety header to LLM calls by @steven10a in #15