
Add calculated_score field and basic scoring logic #4330

Open
Tanishq-mellu wants to merge 4 commits into OWASP:main from Tanishq-mellu:add-calculated-score

Conversation

@Tanishq-mellu

Fixes #4200
This PR introduces an initial implementation of a calculated scoring system for OWASP Nest members.

Changes:

  • Added calculated_score field to User model
  • Introduced a basic scoring function using GitHub contribution count
  • Score is recalculated during GitHub data updates

This is a minimal starting point and can be extended further based on feedback.
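For concreteness, the basic scoring described in this PR can be sketched as a plain function. This is an illustrative sketch only; in the PR the logic is a method on the Django User model, and the function name here only mirrors it.

```python
# Hypothetical sketch of the basic scoring logic described in the PR:
# the score is simply the GitHub contribution count as a float.
def calculate_score(contributions_count):
    """Return a float score derived from a GitHub contribution count."""
    # Treat missing counts (None) as zero contributions.
    return float(contributions_count or 0)
```

A user with 42 recorded contributions would score 42.0, and a user with no contribution data would score 0.0.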

@coderabbitai
Contributor

coderabbitai bot commented Mar 21, 2026

Summary by CodeRabbit

  • New Features
    • Profiles now show an automatically calculated contribution score that combines contribution volume, consistency, and recent activity into a single value.
    • Scores are computed and updated whenever a profile is refreshed or synced; they default to zero when contribution data is absent or minimal.

Walkthrough

Added a persisted calculated_score FloatField to User, implemented scoring helpers and calculate_score() (sums contribution, consistency, recency), added an unused _leadership_score() method, and set calculated_score inside from_github(); reformatted contributions_count declaration.

Changes

Cohort / File(s): User model & scoring (backend/apps/github/models/user.py)
Summary: Added User.calculated_score as a models.FloatField (default=0, with help text). Added methods: calculate_score(), _contribution_score() (uses (contributions_count or 0) * 1.0), _consistency_score() (thresholds: >50 → 25, >10 → 10, else 0), _recency_score() (5 if contributions_count > 0, else 0), and _leadership_score() (returns 30 when is_owasp_staff is true); note that _leadership_score() is not used by calculate_score(). Updated from_github() to assign self.calculated_score = self.calculate_score(). Minor reformat of the contributions_count field declaration.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 28.57%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Linked Issues check (❓ Inconclusive): The PR partially addresses issue #4200 by adding the calculated_score field and recalculating it during GitHub sync, but the scoring logic only implements contribution count, consistency thresholds, and recency; leadership roles, breadth of contribution, and flexible weighting are not yet implemented as outlined in the issue. Resolution: clarify whether this PR is intended as a complete implementation of #4200 or a partial/iterative one. If partial, consider opening a follow-up issue for the remaining requirements, such as leadership scoring integration and breadth metrics.
✅ Passed checks (3 passed)
  • Title check (✅ Passed): The title clearly and concisely describes the main changes: adding a calculated_score field and implementing basic scoring logic for the User model.
  • Description check (✅ Passed): The description is directly related to the changeset, referencing issue #4200 and explaining the addition of the calculated_score field, the scoring function, and score recalculation during GitHub updates.
  • Out of Scope Changes check (✅ Passed): All changes are scoped to implementing a basic scoring system for the User model. The _leadership_score() method is defined but unused, which appears intentional for future extension and aligns with the iterative approach described in the PR comments.



cubic-dev-ai[bot]
cubic-dev-ai bot previously approved these changes Mar 21, 2026
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 1 file

Confidence score: 5/5

  • Automated review surfaced no issues in the provided summaries.
  • No files require special attention.

@Tanishq-mellu
Author

Hi @arkid15r
I’ve added an initial implementation of the calculated score based on GitHub contributions as a starting point.

I kept the logic simple intentionally to align with existing data and make it easy to extend.
I’d love to iterate on this further, especially around incorporating leadership roles, contribution quality, and recency-based weighting.

Looking forward to your feedback.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
backend/apps/github/models/user.py (1)

62-65: Consider adding verbose_name and a database index for consistency and future sorting.

The new field lacks verbose_name, which other fields in this model include. Additionally, since the PR objectives mention using this score for sorting members in API responses, a database index would improve query performance.

♻️ Suggested enhancement
     calculated_score = models.FloatField(
         default=0,
+        verbose_name="Calculated score",
         help_text="Computed score based on contribution signals"
     )

If sorting by this field is planned, consider adding an index in the Meta.indexes list:

models.Index(fields=["-calculated_score"], name="github_user_calc_score_desc"),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 62 - 65, Add a verbose_name
to the calculated_score field and add a DB index for sorting: update the
calculated_score=models.FloatField(...) declaration to include
verbose_name="Calculated score" (or consistent wording used across the model)
and then add a models.Index entry for "-calculated_score" (e.g.
name="github_user_calc_score_desc") to the model's Meta.indexes list; after
changing the field and Meta, run makemigrations/migrate to create the index.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 8403af1d-d862-4b07-8ea6-ac127f6295e7

📥 Commits

Reviewing files that changed from the base of the PR and between 7634c51 and e9c3f6f.

📒 Files selected for processing (1)
  • backend/apps/github/models/user.py

Comment on lines +94 to +98
def calculate_score(self):
score = 0.0
# Base contribution score
score += float( self.contributions_count or 0 )
return score
Contributor


⚠️ Potential issue | 🟡 Minor

Add docstring and fix formatting.

The method is missing a docstring (flagged by static analysis D102) and has non-standard spacing inside the parentheses on Line 97. Also consider adding a return type annotation for clarity.

📝 Suggested fix
     def calculate_score(self):
+        """Calculate a score based on contribution signals.
+
+        Returns:
+            float: The computed score for this user.
+
+        """
         score = 0.0
         # Base contribution score
-        score += float( self.contributions_count or 0 )
+        score += float(self.contributions_count or 0)
         return score
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 94-94: Missing docstring in public method

(D102)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 94 - 98, The calculate_score
method lacks a docstring, has non-standard spacing inside the float() call, and
should include a return type; update the method signature to def
calculate_score(self) -> float:, add a concise docstring explaining it computes
a base contribution score from self.contributions_count, and fix the formatting
to use float(self.contributions_count or 0) (no space after '(') before
returning the value.

setattr(self, model_field, value)

self.is_bot = gh_user.type == "Bot"
self.calculated_score = self.calculate_score()
Contributor


⚠️ Potential issue | 🟠 Major

Score becomes stale when contributions_count is updated via management command.

The github_update_users.py management command (lines 27-59) updates contributions_count via bulk_save() without calling from_github(). This means calculated_score will not be recalculated when contributions are aggregated through that path, causing the two fields to be out of sync.

Since the management command appears to be the primary mechanism for aggregating contribution counts from RepositoryContributor, this will leave calculated_score stale for most users.

Consider one of these approaches:

  1. Update the management command to also recalculate scores:
# In github_update_users.py, after setting contributions_count:
user.contributions_count = user_contributions.get(user.id, 0)
user.calculated_score = user.calculate_score()
users.append(user)

# And update bulk_save to include the new field:
User.bulk_save(users, fields=("contributions_count", "calculated_score"))
  1. Use a database-level computed/generated field if using PostgreSQL and Django 5+, so the score is always derived from stored values.

  2. Override the model's save() method to always recalculate the score when contributions_count changes (though this won't help with bulk_update).
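A runnable stand-in for the first option (recomputing scores in the batch path) can be sketched as follows. SimpleUser is a hypothetical plain-Python substitute for the Django User model; the real change would live in github_update_users.py and persist the results via User.bulk_save().

```python
# Stand-in sketch of option 1: recompute calculated_score alongside
# contributions_count in the batch update loop so the two fields never
# drift apart. SimpleUser is a hypothetical substitute for the model.
class SimpleUser:
    def __init__(self, user_id, contributions_count=0):
        self.id = user_id
        self.contributions_count = contributions_count
        self.calculated_score = 0.0

    def calculate_score(self):
        # Mirrors the PR's base scoring: contribution count as a float.
        return float(self.contributions_count or 0)

def refresh_scores(users, user_contributions):
    """Update counts and recompute scores so both fields stay in sync."""
    updated = []
    for user in users:
        user.contributions_count = user_contributions.get(user.id, 0)
        user.calculated_score = user.calculate_score()
        updated.append(user)
    # In the real command this would be followed by:
    # User.bulk_save(updated,
    #                fields=("contributions_count", "calculated_score"))
    return updated
```

Users missing from the aggregation mapping fall back to a count of 0 and therefore a score of 0.0, matching the field's default.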

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` at line 185, The calculated_score is set
only in the model via self.calculated_score = self.calculate_score(), so updates
to contributions_count performed by the github_update_users.py management
command using bulk_save() leave calculated_score stale; fix by updating the
management command to recalculate scores before bulk_save (for each User set
user.contributions_count = ... and user.calculated_score =
user.calculate_score(), then call User.bulk_save(users,
fields=("contributions_count","calculated_score"))), or alternatively implement
a DB computed/generated column or override User.save() to recompute
calculated_score when contributions_count changes (note override won’t affect
bulk_update paths).

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="backend/apps/github/models/user.py">

<violation number="1" location="backend/apps/github/models/user.py:31">
P1: Duplicate `calculate_score` method definition causes the first implementation to be overridden by the second. The class defines `calculate_score` twice - the first (lines 27-33) with complex scoring logic using `_consistency_score` and `_recency_score`, and the second (lines 101-104) with simple scoring. The second definition overrides the first, so the complex scoring is never executed and helper methods become dead code.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
backend/apps/github/models/user.py (1)

112-116: ⚠️ Potential issue | 🟠 Major

Implementation doesn't match PR objectives; also see prior feedback on docstring/formatting.

The current implementation only considers contributions_count, but the linked issue #4200 objectives include:

  • Community leadership roles (chapters, projects, committees)
  • Contribution recency and consistency
  • Breadth across repositories and OWASP entities
  • Different contribution type weights

The modular implementation at lines 31-47 (currently dead code due to this redefinition) was closer to these objectives. Consider removing this simpler definition and keeping the modular one, or consolidating into a single well-documented method.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 112 - 116, The
calculate_score method currently returns only contributions_count and overrides
the earlier modular implementation (lines ~31-47) that handled leadership roles,
recency, breadth, and weighted contribution types; restore or consolidate by
removing this simplistic redefinition and re-enable the modular scoring logic
(or merge its components) inside calculate_score, ensure the method has a clear
docstring describing inputs/weights and keep helper functions (the previously
defined modular helpers) invoked rather than shadowed, and fix formatting to
match project style.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 52c45bc6-3399-4552-9b2a-7b91e5c8617e

📥 Commits

Reviewing files that changed from the base of the PR and between e9c3f6f and c72c273.

📒 Files selected for processing (1)
  • backend/apps/github/models/user.py

cubic-dev-ai[bot]
cubic-dev-ai bot previously approved these changes Mar 22, 2026
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


0 issues found across 1 file (changes from recent commits).

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
backend/apps/github/models/user.py (1)

199-200: ⚠️ Potential issue | 🟠 Major

This still misses the batch contribution sync path.

These lines keep calculated_score in sync only when from_github() runs. The aggregation command in backend/apps/github/management/commands/github_update_users.py:38-59 updates contributions_count and bulk-saves only that field, so users processed there will keep a stale score.

One way to keep the batch update path aligned
user.contributions_count = user_contributions.get(user.id, 0)
user.calculated_score = user.calculate_score()
users.append(user)

if not len(users) % BATCH_SIZE:
    User.bulk_save(users, fields=("contributions_count", "calculated_score"))

User.bulk_save(users, fields=("contributions_count", "calculated_score"))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 199 - 200, The batch update
path leaves calculated_score stale because calculated_score is only set in
from_github(); update the management command that bulk-updates
contributions_count (github_update_users.py) to also recompute and persist
calculated_score by calling user.calculate_score() for each user before batching
and using User.bulk_save(...,
fields=("contributions_count","calculated_score")); ensure the same symbol names
are used (contributions_count, calculated_score, calculate_score,
User.bulk_save) so both individual (from_github) and batch update paths keep
scores in sync.
🧹 Nitpick comments (1)
backend/apps/github/models/user.py (1)

39-49: Promote the scoring knobs to named constants.

The thresholds and bonuses here (50, 10, 25, 10, 5) are the parts most likely to change next. Pulling them into class or module constants will make the formula easier to tune without editing control flow in multiple helpers.
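The suggested refactor might look like this. The constant names are illustrative, not taken from the PR, and only the helpers whose knobs the comment mentions are sketched:

```python
# Illustrative refactor: scoring knobs promoted to module-level
# constants so thresholds and bonuses can be tuned in one place
# without touching control flow in the helpers.
CONSISTENCY_HIGH_THRESHOLD = 50
CONSISTENCY_MED_THRESHOLD = 10
CONSISTENCY_HIGH_BONUS = 25
CONSISTENCY_MED_BONUS = 10
RECENCY_BONUS = 5

def consistency_score(contributions_count):
    count = contributions_count or 0
    if count > CONSISTENCY_HIGH_THRESHOLD:
        return CONSISTENCY_HIGH_BONUS
    if count > CONSISTENCY_MED_THRESHOLD:
        return CONSISTENCY_MED_BONUS
    return 0

def recency_score(contributions_count):
    return RECENCY_BONUS if (contributions_count or 0) > 0 else 0
```

Retuning the formula then becomes a one-line constant change rather than edits spread across several helper bodies.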

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 39 - 49, Extract the magic
numbers in _contribution_score, _consistency_score, and _recency_score into
descriptive constants (e.g., CONSISTENCY_HIGH_THRESHOLD,
CONSISTENCY_MED_THRESHOLD, CONSISTENCY_HIGH_BONUS, CONSISTENCY_MED_BONUS,
RECENCY_BONUS) at the class or module level and replace the raw literals (50,
10, 25, 10, 5) with those constants; update the methods _contribution_score,
_consistency_score, and _recency_score to reference the new constants so
thresholds and bonus values can be tuned without changing control flow.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 91dc5f3c-37db-43bb-8e10-210df0500ee0

📥 Commits

Reviewing files that changed from the base of the PR and between c72c273 and ccb63a6.

📒 Files selected for processing (1)
  • backend/apps/github/models/user.py

Comment on lines +82 to 85
calculated_score = models.FloatField(
default=0,
help_text="Computed score based on contribution signals"
)
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search the GitHub app for model and migration references to `calculated_score`.
rg -n -C2 "calculated_score|AddField\(|AlterField\(" backend/apps/github

Repository: OWASP/Nest

Length of output: 39202


Create a migration for the calculated_score field.

The calculated_score field is defined in the model but lacks a corresponding migration. When the ORM attempts to persist data to this field, the column won't exist in the database, causing failures. Add a migration using python manage.py makemigrations to add this column.
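The generated migration would look roughly like this. It is an illustrative sketch only: the actual file name, app label, and dependency entry come from makemigrations and the app's existing migration history.

```python
# Illustrative sketch of the migration makemigrations would generate
# for the new field. The "0001_initial" dependency is a placeholder;
# the real value is the app's latest migration.
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("github", "0001_initial"),  # placeholder dependency
    ]

    operations = [
        migrations.AddField(
            model_name="user",
            name="calculated_score",
            field=models.FloatField(
                default=0,
                help_text="Computed score based on contribution signals",
            ),
        ),
    ]
```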

🧰 Tools
🪛 Ruff (0.15.6)

[warning] 82-85: Class field calculated_score is defined multiple times

Remove duplicate field definition for calculated_score

(PIE794)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 82 - 85, The model field
calculated_score (models.FloatField) was added to the User model but no
migration exists; run Django's migration flow to create and apply the schema
change: run python manage.py makemigrations to generate a migration that adds
the calculated_score column (keeping the default=0 and help_text), inspect and
commit the produced migration file, then run python manage.py migrate to apply
it to the database so the column exists at runtime.

@Tanishq-mellu
Author

Thanks for the feedback!
I have cleaned up the duplicate definitions and consolidated the scoring logic into a modular structure.

@sonarqubecloud
Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


0 issues found across 1 file (changes from recent commits).

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
backend/apps/github/models/user.py (1)

32-36: Either use _leadership_score() or drop it for now.

calculate_score() never calls this helper, so it has no effect on the value persisted on Line 206. Leaving an inactive scoring branch here makes it unclear which signals actually affect member ordering.

➕ If leadership should count in this PR
     def calculate_score(self):
         return (
                 self._contribution_score()
                 + self._consistency_score()
                 + self._recency_score()
+                + self._leadership_score()
         )

Also applies to: 51-56

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/apps/github/models/user.py` around lines 32 - 36, calculate_score
currently sums _contribution_score, _consistency_score, and _recency_score but
never uses _leadership_score, leaving an inactive signal; either include
_leadership_score() in calculate_score's returned sum (so calculate_score
returns self._contribution_score() + self._consistency_score() +
self._recency_score() + self._leadership_score()) or remove the
_leadership_score helper entirely (and any related tests/docs) to avoid
confusion—also update any other scoring-related helpers referenced around the
same area (the other scoring block at lines 51-56) to be consistent with your
choice and ensure the persisted member score (the value set when saving) is
derived from the updated calculate_score implementation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d4d14f52-172a-4bd5-b0c3-139f758965eb

📥 Commits

Reviewing files that changed from the base of the PR and between ccb63a6 and 5351332.

📒 Files selected for processing (1)
  • backend/apps/github/models/user.py

@Tanishq-mellu
Author

Added a basic leadership scoring component using the OWASP staff signal.
This is a simple initial step and can be extended further with more detailed role-based scoring.


Development

Successfully merging this pull request may close these issues.

Improve OWASP Nest member ranking algorithm with calculated scores
