refactor: consolidate utilities and improve type safety #20
razvanazamfirei wants to merge 2 commits into main from
Conversation
- Centralize `is_missing_scalar()` in `types.py` and `clean_text()` in `utils.py`
- Add named confidence constants in `hybrid.py` (replaces hardcoded 0.8/0.85/1.0)
- Add `LRU_CACHE_SIZE` constant to avoid magic numbers
- Add `py.typed` marker for PEP 561 compliance
- Remove unnecessary `pass` statements from exception classes
- Improve processor chunk configuration via instance parameters
- Enhance logging error messages with context
Codecov Report ❌ Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main      #20      +/-   ##
==========================================
- Coverage   82.78%   78.85%   -3.93%
==========================================
  Files          31       31
  Lines        2219     2346    +127
  Branches      325      347     +22
==========================================
+ Hits         1837     1850     +13
- Misses        311      426    +115
+ Partials       71       70      -1
```

☔ View full report in Codecov by Sentry.
Pull request overview
This PR refactors case_parser to centralize common utilities (missing-value checks, text normalization, cache sizing), tighten type annotations across the processor/ML pipeline, and improve configurability of ML model loading.
Changes:
- Consolidates "missing scalar" checks and text normalization into shared helpers (`is_missing_scalar`, `clean_text`) and standardizes `lru_cache` sizing via `LRU_CACHE_SIZE`.
- Improves type annotations throughout `CaseProcessor` and extraction/ML helpers, and adds a `py.typed` marker for PEP 561 typing support.
- Adds `CASE_PARSER_MODEL_PATH` environment variable support for selecting the ML model file at runtime.
Reviewed changes
Copilot reviewed 16 out of 16 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/test_enhanced_processor.py | Updates log expectations for process-chunk fallback behavior. |
| src/case_parser/utils.py | Introduces shared LRU_CACHE_SIZE and clean_text() helper. |
| src/case_parser/types.py | Adds is_missing_scalar() helper and related imports. |
| src/case_parser/py.typed | Marks package as typed (PEP 561). |
| src/case_parser/processor.py | Refactors missing checks/types, adds configurable process-pool thresholds, and refines fallback logging. |
| src/case_parser/patterns/categorization.py | Reuses shared missing-scalar helper and centralized cache size constant. |
| src/case_parser/patterns/block_site_patterns.py | Replaces local text cleaning with shared clean_text(). |
| src/case_parser/patterns/approach_patterns.py | Uses shared LRU_CACHE_SIZE for cached detectors. |
| src/case_parser/ml/loader.py | Adds env-var based ML model path resolution. |
| src/case_parser/ml/inputs.py | Switches to shared text cleaning and adjusts feature input normalization. |
| src/case_parser/ml/hybrid.py | Replaces magic numbers with named constants and improves Protocol docstrings. |
| src/case_parser/ml/features.py | Uses shared LRU_CACHE_SIZE for cached feature extraction. |
| src/case_parser/logging_config.py | Replaces getattr lookup with explicit log-level map. |
| src/case_parser/extractors.py | Reuses shared missing-scalar logic and tightens input types. |
| src/case_parser/exceptions.py | Simplifies exception definitions (removes redundant pass). |
| AGENTS.md | Adds contributor guidance (commands, architecture, typing/testing conventions). |
Comments suppressed due to low confidence (1)
src/case_parser/processor.py:813
- These row/value annotations assume every DataFrame cell is `Scalar`, but `row.get(self.column_map.date)` commonly yields `pd.Timestamp`/`datetime` values (and the implementation explicitly handles them via `pd.to_datetime`/`_normalize_timestamp_to_utc`). This mismatch undermines the PR's type-safety goal. Either widen the row/value types here (e.g., to `object` or a broader alias) or expand `Scalar` so the annotations reflect the actual inputs.
```python
def _prepare_rows(
    self, rows: Sequence[Mapping[Hashable, Scalar]]
) -> list[_PreparedRow]:
    """Precompute per-row metadata for a dataframe batch.

    This batches date parsing and hybrid categorization so downstream
    row-processing can reuse normalized timestamps, services, categories,
    and warning lists.
    """
    date_preparations = self._prepare_dates([
        row.get(self.column_map.date) for row in rows
    ])
```
```python
# Scalar values from pandas DataFrames, Excel cells, or dict lookups.
# pd.NA/pd.NaT are handled via pd.isna() at runtime.
type Scalar = str | int | float | bool | None
```
Scalar is defined as only primitive types, but downstream code (e.g., date parsing in CaseProcessor) explicitly supports pd.Timestamp / datetime values from DataFrames. Keeping Scalar = str | int | float | bool | None makes the new type annotations misleading and will force type: ignore / incorrect narrowing elsewhere. Consider broadening Scalar (or introducing a separate alias like CellValue) to include datetime and pd.Timestamp (and possibly other common pandas scalar types) so annotations match actual runtime inputs.
```diff
 def build_feature_inputs(
     procedure_texts: Sequence[Scalar],
     services_list: Sequence[Scalar | list[Scalar]] | None = None,
     rule_categories: Sequence[Scalar] | None = None,
     rule_warning_counts: Sequence[Scalar] | None = None,
 ) -> list[FeatureInput]:
     """Build batched FeatureInput objects from parallel procedure metadata."""
     expected_length = len(procedure_texts)
     services_seq = _normalize_parallel_values(
         name="services_list",
         values=services_list,
         default_value="",
         expected_length=expected_length,
     )
     categories_seq = _normalize_parallel_values(
         name="rule_categories",
         values=rule_categories,
         default_value="",
         expected_length=expected_length,
     )
     warnings_seq = _normalize_parallel_values(
         name="rule_warning_counts",
         values=rule_warning_counts,
         default_value=0,
         expected_length=expected_length,
     )

     return [
         FeatureInput(
-            procedure_text=coerce_text(procedure_text),
+            procedure_text=clean_text(procedure_text),
             service_text=coerce_service_text(service_text),
-            rule_category=coerce_text(rule_category),
-            rule_warning_count=parse_int(rule_warning_count),
+            rule_category=clean_text(rule_category),  # type: ignore[arg-type]
+            rule_warning_count=parse_int(rule_warning_count),  # type: ignore[arg-type]
         )
```
build_feature_inputs() currently needs # type: ignore[arg-type] because _normalize_parallel_values() returns Sequence[Scalar | list[Scalar]] for all parallel sequences, even when a given input (like rule_categories / rule_warning_counts) can never contain lists. This is avoidable and weakens the type-safety refactor. Consider making _normalize_parallel_values() generic (preserve the element type of values/default_value) or splitting into separate helpers so rule_category and rule_warning_count have precise scalar types and the ignores can be removed.
```python
from typing import TYPE_CHECKING, Literal

if TYPE_CHECKING:
    pass
```
TYPE_CHECKING is imported but the guarded block is just pass, so it adds noise without providing any typing-only imports. Suggest removing the TYPE_CHECKING import and the empty guard block (or add the intended type-only imports) to keep this module minimal.
```diff
-from typing import TYPE_CHECKING, Literal
-
-if TYPE_CHECKING:
-    pass
+from typing import Literal
```
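For contrast, a `TYPE_CHECKING` guard earns its keep only when it holds real typing-only imports. A minimal sketch (the `row_count` function is hypothetical, not from this PR):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Typing-only import: never executed at runtime, so pandas is not a
    # hard import cost for this module.
    from pandas import DataFrame


def row_count(df: "DataFrame") -> int:
    """Count rows via the index, which every DataFrame exposes."""
    return len(df.index)
```

An empty guard like the one removed above gives none of this benefit, which is why deleting it (or filling it with the intended imports) is the right call.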
Walkthrough

Centralizes utilities for text cleaning, missing-value detection, and LRU cache sizing; adds a PEP 561 type marker and AGENTS.md docs; makes the ML model path configurable via environment variable; introduces typed changes and configurable process-pool chunking in the processor; removes duplicated helpers and tightens type signatures and logging behavior.

Changes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/case_parser/patterns/categorization.py (1)
375-379: ⚠️ Potential issue | 🟠 Major

Replace remaining `pd.isna` null check with `is_missing_scalar`.

Line 377 still uses `pd.isna`, which leaves null-check behavior inconsistent within this module after the shared-helper migration. As per coding guidelines, "Use `is_missing_scalar()` function from `types.py` for null checks instead of manual `pd.isna()` calls".

✅ Proposed fix

```diff
 def _normalize_procedure_text(procedure: str | None) -> str:
     """Normalize optional procedure text to uppercase for caching."""
-    if pd.isna(procedure):
+    if is_missing_scalar(procedure):
         return ""
     return str(procedure).upper()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/case_parser/patterns/categorization.py` around lines 375-379: in `_normalize_procedure_text`, replace the `pd.isna(procedure)` null check with the shared helper `is_missing_scalar` from `types.py`; call `is_missing_scalar(procedure)` and return `""` for missing values. Also add the appropriate import for `is_missing_scalar` so the function compiles and behavior remains consistent with other null checks in this module.

src/case_parser/processor.py (1)
357-359: 🧹 Nitpick | 🔵 Trivial

Inconsistent null check pattern.

Line 357 uses `pd.isna(value)` while the coding guidelines specify using `is_missing_scalar()` for null checks. For consistency with lines 241 and 286, consider using `is_missing_scalar(value)` here as well.

♻️ Proposed fix

```diff
 @staticmethod
 def normalize_emergent_flag(value: Scalar) -> bool:
-    if pd.isna(value):
+    if is_missing_scalar(value):
         return False
     return str(value).strip().upper() in {"E", "Y", "YES", "TRUE", "1"}
```

As per coding guidelines: "Use `is_missing_scalar()` function from `types.py` for null checks instead of manual `pd.isna()` calls"
is_missing_scalar()function fromtypes.pyfor null checks instead of manualpd.isna()calls"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/case_parser/processor.py` around lines 357-359: the null check uses `pd.isna(value)`; replace it with the standard helper `is_missing_scalar(value)` to match the pattern used on lines 241 and 286 (and to follow coding guidelines). Update the conditional in the function that checks boolean-like strings (the block that returns False for missing values and then checks `str(value).strip().upper()`) to call `is_missing_scalar(value)` instead of `pd.isna(value)`, and ensure `is_missing_scalar` is imported from `types.py` if not already.

src/case_parser/ml/hybrid.py (1)
303-313: 🛠️ Refactor suggestion | 🟠 Major

Inconsistent confidence value usage.

Lines 307, 362, and 400 still use hardcoded `0.8 if rule_warnings else 1.0` instead of the new `RULES_CONFIDENCE_WITH_WARNINGS`/`RULES_CONFIDENCE_WITHOUT_WARNINGS` constants. This creates inconsistency with lines 219-221 and 283-285, which correctly use the constants.

♻️ Proposed fix to use constants consistently

```diff
 # If ML returns invalid category, fall back to rules
 return ClassificationResult(
     category=rule_category,
     method="rules",
-    confidence=0.8 if rule_warnings else 1.0,
+    confidence=RULES_CONFIDENCE_WITH_WARNINGS
+    if rule_warnings
+    else RULES_CONFIDENCE_WITHOUT_WARNINGS,
     alternative=None,
     warnings=[
         *rule_warnings,
         f"ML returned invalid category: {ml_category_str}",
     ],
 )
```

```diff
 return ClassificationResult(
     category=rule_category,
     method="rules",
-    confidence=0.8 if rule_warnings else 1.0,
+    confidence=RULES_CONFIDENCE_WITH_WARNINGS
+    if rule_warnings
+    else RULES_CONFIDENCE_WITHOUT_WARNINGS,
     alternative=ml_category,
     warnings=warnings,
 )
```

```diff
 # Low confidence ML - use rules only
 return ClassificationResult(
     category=rule_category,
     method="rules",
-    confidence=0.8 if rule_warnings else 1.0,
+    confidence=RULES_CONFIDENCE_WITH_WARNINGS
+    if rule_warnings
+    else RULES_CONFIDENCE_WITHOUT_WARNINGS,
     alternative=None,
     warnings=rule_warnings,
 )
```

Also applies to: 359-365, 396-403
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/case_parser/ml/hybrid.py` around lines 303 - 313, Replace the hardcoded confidence expression "0.8 if rule_warnings else 1.0" used when returning a ClassificationResult with the project constants RULES_CONFIDENCE_WITH_WARNINGS / RULES_CONFIDENCE_WITHOUT_WARNINGS; specifically update the return sites that build ClassificationResult (the branches that fall back to rules when ML is invalid or similar) to use "RULES_CONFIDENCE_WITH_WARNINGS if rule_warnings else RULES_CONFIDENCE_WITHOUT_WARNINGS" so confidence is consistent with the other usages; ensure you update all occurrences where ClassificationResult is created with the hardcoded ternary (references: ClassificationResult, rule_warnings, ml_category_str).
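The repeated ternary above is itself a hint that the confidence choice belongs in one helper. A minimal sketch, assuming the constant values 0.8/1.0 from the review diff (the helper `rules_confidence` is hypothetical, not part of the PR):

```python
# Values mirror the hardcoded ones the review flags (0.8 / 1.0); the PR's
# actual constants live in hybrid.py.
RULES_CONFIDENCE_WITH_WARNINGS = 0.8
RULES_CONFIDENCE_WITHOUT_WARNINGS = 1.0


def rules_confidence(rule_warnings: list[str]) -> float:
    """Resolve the rules-path confidence once, instead of at three call sites."""
    if rule_warnings:
        return RULES_CONFIDENCE_WITH_WARNINGS
    return RULES_CONFIDENCE_WITHOUT_WARNINGS
```

Each `ClassificationResult(...)` return site could then pass `confidence=rules_confidence(rule_warnings)`, making a future tuning change a one-line edit.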
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/case_parser/logging_config.py`:
- Around line 9-10: Remove the dead if TYPE_CHECKING: pass block from
logging_config.py; delete the block and also remove the now-unused TYPE_CHECKING
import (if present) so there is no empty conditional or unused symbol in the
module; ensure no other code relies on TYPE_CHECKING before committing the
change.
In `@src/case_parser/ml/inputs.py`:
- Around line 125-126: The type ignores on the calls to
clean_text(rule_category) and parse_int(rule_warning_count) arise because
_normalize_parallel_values can return Sequence[Scalar | list[Scalar]] rather
than a single Scalar; update _normalize_parallel_values to narrow its return
type to Scalar where appropriate (or add an explicit runtime assertion/cast
after calling _normalize_parallel_values) so callers like clean_text and
parse_int receive a confirmed Scalar without needing "# type: ignore[arg-type]";
specifically, adjust the implementation of _normalize_parallel_values (or wrap
its result before passing into clean_text and parse_int) to assert/convert to
the expected Scalar type for rule_category and rule_warning_count and remove the
type ignores.
In `@src/case_parser/utils.py`:
- Around line 52-53: The sentinel check using "if text in
_MISSING_TEXT_SENTINELS" is case-sensitive; update the logic (in the function
containing that check in src/case_parser/utils.py) to perform a case-insensitive
comparison by normalizing the input and sentinels (e.g., compute key =
text.strip().strip('<>').casefold() and compare against a precomputed set of
casefolded sentinels like {s.casefold().strip('<>') for s in
_MISSING_TEXT_SENTINELS}); replace the original membership test with "if key in
normalized_missing_sentinels: return ''" so values like "NAN", "None", or "<na>"
are recognized.
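The case-insensitive sentinel check described in that prompt can be sketched as below. The helper name `is_missing_text` and the sentinel set contents are assumptions taken from the review text, not verified against the PR source:

```python
_MISSING_TEXT_SENTINELS = {"", "<NA>", "nan", "NaN", "None"}
# Precompute casefolded, bracket-stripped sentinels once so each
# membership test is a cheap set lookup.
_NORMALIZED_SENTINELS = {
    s.strip("<>").casefold() for s in _MISSING_TEXT_SENTINELS
}


def is_missing_text(text: str) -> bool:
    """Case-insensitively match missing-value sentinels like 'NAN' or '<na>'."""
    return text.strip().strip("<>").casefold() in _NORMALIZED_SENTINELS
```

`str.casefold()` is preferred over `lower()` here because it handles a few Unicode edge cases more aggressively, though for these ASCII sentinels either would work.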
---
Outside diff comments:
In `@src/case_parser/ml/hybrid.py`:
- Around line 303-313: Replace the hardcoded confidence expression "0.8 if
rule_warnings else 1.0" used when returning a ClassificationResult with the
project constants RULES_CONFIDENCE_WITH_WARNINGS /
RULES_CONFIDENCE_WITHOUT_WARNINGS; specifically update the return sites that
build ClassificationResult (the branches that fall back to rules when ML is
invalid or similar) to use "RULES_CONFIDENCE_WITH_WARNINGS if rule_warnings else
RULES_CONFIDENCE_WITHOUT_WARNINGS" so confidence is consistent with the other
usages; ensure you update all occurrences where ClassificationResult is created
with the hardcoded ternary (references: ClassificationResult, rule_warnings,
ml_category_str).
In `@src/case_parser/patterns/categorization.py`:
- Around line 375-379: In _normalize_procedure_text, replace the
pd.isna(procedure) null check with the shared helper is_missing_scalar from
types.py: call is_missing_scalar(procedure) and return "" for missing values;
also add the appropriate import for is_missing_scalar so the function compiles
and behavior remains consistent with other null checks in this module.
In `@src/case_parser/processor.py`:
- Around line 357-359: The null check uses pd.isna(value); replace it with the
standard helper is_missing_scalar(value) to match the pattern used on lines 241
and 286 (and to follow coding guidelines). Update the conditional in the
function that checks boolean-like strings (the block that returns False for
missing values and then checks str(value).strip().upper()) to call
is_missing_scalar(value) instead of pd.isna(value), and ensure is_missing_scalar
is imported from types.py if not already.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8186381a-93c3-4795-969f-513d9c626383
📒 Files selected for processing (16)
- AGENTS.md
- src/case_parser/exceptions.py
- src/case_parser/extractors.py
- src/case_parser/logging_config.py
- src/case_parser/ml/features.py
- src/case_parser/ml/hybrid.py
- src/case_parser/ml/inputs.py
- src/case_parser/ml/loader.py
- src/case_parser/patterns/approach_patterns.py
- src/case_parser/patterns/block_site_patterns.py
- src/case_parser/patterns/categorization.py
- src/case_parser/processor.py
- src/case_parser/py.typed
- src/case_parser/types.py
- src/case_parser/utils.py
- tests/test_enhanced_processor.py
💤 Files with no reviewable changes (1)
- src/case_parser/exceptions.py
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/case_parser/processor.py (1)
691-698: 🧹 Nitpick | 🔵 Trivial

Consider using `is_missing_scalar` for consistency.

Line 691 uses `pd.notna(metadata.procedure_text)` while the rest of the file has migrated to `is_missing_scalar()`. For consistency, consider:

```python
if not is_missing_scalar(metadata.procedure_text) and str(metadata.procedure_text).strip():
```

However, since `metadata.procedure_text` is already typed as `str | None` (not a raw scalar from a DataFrame), and the `str(...).strip()` check handles `None` safely, the current approach is functionally correct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/case_parser/processor.py` around lines 691 - 698, The code uses pd.notna(metadata.procedure_text) at the start of the procedure block which is inconsistent with the rest of the file's use of is_missing_scalar; replace the pd.notna check with a guard using is_missing_scalar (i.e., if not is_missing_scalar(metadata.procedure_text) and str(metadata.procedure_text).strip():) to match conventions around scalar/missing checks for metadata.procedure_text while leaving the subsequent calls to extract_monitoring, the loop that appends into monitoring, and the call to self._extend_findings(all_findings, confidence_scores) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/case_parser/ml/inputs.py`:
- Around line 17-18: Remove the now-unused TypeVar declaration T = TypeVar("T",
bound=Scalar | list[Scalar]) since _normalize_parallel_values uses the PEP 695
inline type parameter syntax; delete that line and tidy up any related unused
imports (e.g., TypeVar) in the module to avoid lints and unused-symbol warnings.
In `@src/case_parser/utils.py`:
- Around line 41-55: clean_text currently returns the string "NaT" for pandas
NaT because "NaT" isn't in the sentinel set; update the sentinel normalization
so pandas' NaT is treated as missing by adding the lowercased "nat" (or "NaT")
to the sentinel collection referenced by clean_text (e.g., _NORMALIZED_SENTINELS
/ _MISSING_TEXT_SENTINELS) or alternatively detect pandas.NaT before
stringifying in clean_text; ensure the function uses the same sentinel constant
(_NORMALIZED_SENTINELS) and includes "nat" (case-insensitive) so "NaT" is
normalized to an empty string.
---
Outside diff comments:
In `@src/case_parser/processor.py`:
- Around line 691-698: The code uses pd.notna(metadata.procedure_text) at the
start of the procedure block which is inconsistent with the rest of the file's
use of is_missing_scalar; replace the pd.notna check with a guard using
is_missing_scalar (i.e., if not is_missing_scalar(metadata.procedure_text) and
str(metadata.procedure_text).strip():) to match conventions around
scalar/missing checks for metadata.procedure_text while leaving the subsequent
calls to extract_monitoring, the loop that appends into monitoring, and the call
to self._extend_findings(all_findings, confidence_scores) unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d6c650aa-d80c-4786-b6b5-1493173d17d1
📒 Files selected for processing (7)
- src/case_parser/logging_config.py
- src/case_parser/ml/hybrid.py
- src/case_parser/ml/inputs.py
- src/case_parser/patterns/categorization.py
- src/case_parser/processor.py
- src/case_parser/types.py
- src/case_parser/utils.py
```python
T = TypeVar("T", bound=Scalar | list[Scalar])
```
🧹 Nitpick | 🔵 Trivial
Unused TypeVar definition after PEP 695 migration.
The T = TypeVar("T", bound=Scalar | list[Scalar]) on line 17 appears to be dead code. The function _normalize_parallel_values at line 157 now uses PEP 695 inline type parameter syntax [T: Scalar | list[Scalar]], making this explicit TypeVar definition unnecessary.
🧹 Proposed cleanup
```diff
-from typing import TYPE_CHECKING, TypeVar
+from typing import TYPE_CHECKING
 from ..types import Scalar
 from ..utils import clean_text
 from .config import SERVICE_COLUMN_CANDIDATES

 if TYPE_CHECKING:
     from pandas import DataFrame
-
-
-T = TypeVar("T", bound=Scalar | list[Scalar])
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
from typing import TYPE_CHECKING

from ..types import Scalar
from ..utils import clean_text
from .config import SERVICE_COLUMN_CANDIDATES

if TYPE_CHECKING:
    from pandas import DataFrame
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/case_parser/ml/inputs.py` around lines 17 - 18, Remove the now-unused
TypeVar declaration T = TypeVar("T", bound=Scalar | list[Scalar]) since
_normalize_parallel_values uses the PEP 695 inline type parameter syntax; delete
that line and tidy up any related unused imports (e.g., TypeVar) in the module
to avoid lints and unused-symbol warnings.
```python
def clean_text(value: Scalar | None) -> str:
    """Normalize text values to a plain string or empty string.

    Args:
        value: Any scalar value or None.

    Returns:
        Stripped string with missing-value sentinels normalized to empty string.
    """
    if value is None:
        return ""
    text = str(value).strip()
    if text.strip("<>").casefold() in _NORMALIZED_SENTINELS:
        return ""
    return text
```
Missing sentinel for pd.NaT string representation.
When pd.NaT is passed to clean_text(), it stringifies to "NaT", which is not in _MISSING_TEXT_SENTINELS. This could cause "NaT" to be returned as valid text instead of being normalized to "".
💡 Proposed fix
```diff
-_MISSING_TEXT_SENTINELS = {"", "<NA>", "nan", "NaN", "None"}
+_MISSING_TEXT_SENTINELS = {"", "<NA>", "nan", "NaN", "None", "NaT"}
```

🤖 Prompt for AI Agents
+_MISSING_TEXT_SENTINELS = {"", "<NA>", "nan", "NaN", "None", "NaT"}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/case_parser/utils.py` around lines 41 - 55, clean_text currently returns
the string "NaT" for pandas NaT because "NaT" isn't in the sentinel set; update
the sentinel normalization so pandas' NaT is treated as missing by adding the
lowercased "nat" (or "NaT") to the sentinel collection referenced by clean_text
(e.g., _NORMALIZED_SENTINELS / _MISSING_TEXT_SENTINELS) or alternatively detect
pandas.NaT before stringifying in clean_text; ensure the function uses the same
sentinel constant (_NORMALIZED_SENTINELS) and includes "nat" (case-insensitive)
so "NaT" is normalized to an empty string.
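Putting the NaT fix together, a self-contained sketch of the corrected behavior follows. The sentinel contents come from the review, not from the PR source, so treat them as assumptions:

```python
import pandas as pd

# Casefolded, bracket-stripped sentinels, with "nat" added so that
# str(pd.NaT) == "NaT" is also treated as missing.
_NORMALIZED_SENTINELS = {"", "na", "nan", "none", "nat"}


def clean_text(value: object) -> str:
    """Normalize a scalar to text, mapping missing-value markers to ''."""
    if value is None:
        return ""
    text = str(value).strip()
    if text.strip("<>").casefold() in _NORMALIZED_SENTINELS:
        return ""
    return text
```

One caveat the review hints at: matching on the string `"nat"` also blanks a literal cell containing the word "nat", so detecting `pd.NaT` with `value is pd.NaT` before stringifying is the stricter alternative.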
Not up to standards ⛔

🔴 Issues
| Category | Results |
|---|---|
| BestPractice | 1 medium |
| ErrorProne | 1 medium 2 high |
🟢 Metrics: 534 complexity · 4 duplication
TIP This summary will be updated as you push new changes.
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests