docs/user_manual/customize_metric.rst (3 additions, 3 deletions)
@@ -187,13 +187,13 @@ For example, if you are implementing a metric that compares two models, you shou
If you are implementing an alignment metric comparing model's output with the input, you should use the ``x_gt`` or ``gt_x`` call type. Examples from |pruna| include ``clip_score``.
-If you are implementing a metric that compares the model's output with the ground truth, you should use the ``y_gt`` or ``gt_y`` call type. Examples from |pruna| include ``fid``, ``cmmd``, ``accuracy``, ``recall``, ``precision``.
+If you are implementing a metric that compares the model's output with the ground truth, you should use the ``y_gt`` or ``gt_y`` call type. Examples from |pruna| include ``fid``, ``kid``, ``cmmd``, ``accuracy``, ``recall``, ``precision``.
If you are wrapping an Image Quality Assessment (IQA) metric, that has an internal dataset, you should use the ``y`` call type. Examples from |pruna| include ``arniqa``.
-You may want to switch the mode of the metric despite your default ``call_type``. For instance you may want to use ``fid`` in pairwise mode to get a single comparison score for two models.
+You may want to switch the mode of the metric despite your default ``call_type``. For instance you may want to use ``fid`` or ``kid`` in pairwise mode to get a single comparison score for two models.
-In this case, you can pass ``pairwise`` to the ``call_type`` parameter of the ``StatefulMetric`` constructor.
+In this case, you can pass ``pairwise`` to the ``call_type`` parameter of the ``StatefulMetric`` constructor`
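To make the ``call_type`` switch concrete, here is a minimal, self-contained sketch. ``StatefulMetric`` below is only a stand-in for |pruna|'s base class of the same name; the real constructor signature and update/compute hooks may differ, so treat the names and arguments as illustrative assumptions rather than the library's actual API.

.. code-block:: python

    # Self-contained sketch: this StatefulMetric is a stand-in for Pruna's base
    # class, not the real implementation; only the call_type idea is the point.
    class StatefulMetric:
        def __init__(self, call_type: str) -> None:
            # e.g. "x_gt", "gt_x", "y_gt", "gt_y", "y", or "pairwise"
            self.call_type = call_type


    class ToyPairwiseMetric(StatefulMetric):
        """Toy metric that reduces two models' outputs to one comparison score."""

        def __init__(self) -> None:
            # Override the default call type: request pairwise mode so the metric
            # compares a base model's outputs directly against a second model's.
            super().__init__(call_type="pairwise")
            self._diffs: list[float] = []

        def update(self, outputs_a: list[float], outputs_b: list[float]) -> None:
            # Accumulate a per-batch difference between the two models' outputs.
            self._diffs.append(abs(sum(outputs_a) - sum(outputs_b)))

        def compute(self) -> float:
            # Collapse the accumulated state into a single pairwise score.
            return sum(self._diffs) / max(len(self._diffs), 1)


    metric = ToyPairwiseMetric()
    metric.update([1.0, 0.5], [0.75, 0.5])
    print(metric.call_type, metric.compute())  # pairwise 0.25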
docs/user_manual/evaluate.rst (3 additions, 3 deletions)
@@ -10,7 +10,7 @@ Evaluation helps you understand how compression affects your models across diffe
This knowledge is essential for making informed decisions about which compression techniques work best for your specific needs using two types of metrics:
- **Efficiency Metrics:** Measure speed (total time, latency, throughput), memory (disk, inference, training), and energy usage (consumption, CO2 emissions).
-- **Quality Metrics:** Assess fidelity (FID, CMMD), alignment (Clip Score), diversity (PSNR, SSIM), accuracy (accuracy, precision, perplexity), and more. Custom metrics are supported.
+- **Quality Metrics:** Assess fidelity (FID, KID, CMMD), alignment (Clip Score), diversity (PSNR, SSIM), accuracy (accuracy, precision, perplexity), and more. Custom metrics are supported.
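As a rough, self-contained illustration of the two metric families listed above, one could hand-roll an efficiency metric (latency) and a quality metric (PSNR) as below; these helpers are stand-ins for the concepts, not |pruna|'s built-in evaluation metrics or their APIs.

.. code-block:: python

    # Sketch of the two metric families: an efficiency metric (latency) and a
    # quality metric (PSNR). Hand-rolled stand-ins, not Pruna's implementations.
    import time

    import numpy as np


    def latency_ms(model_fn, x, runs: int = 10) -> float:
        """Efficiency metric: average wall-clock time per call, in milliseconds."""
        start = time.perf_counter()
        for _ in range(runs):
            model_fn(x)
        return (time.perf_counter() - start) / runs * 1e3


    def psnr(reference: np.ndarray, output: np.ndarray, max_val: float = 1.0) -> float:
        """Quality metric: peak signal-to-noise ratio between two images, in dB."""
        mse = float(np.mean((reference - output) ** 2))
        return float("inf") if mse == 0 else 10.0 * float(np.log10(max_val**2 / mse))


    # Toy usage: treat a blur as the "compressed model" and compare its output
    # against the original image.
    def blur(img: np.ndarray) -> np.ndarray:
        return (img + np.roll(img, 1, axis=0)) / 2.0


    image = np.random.rand(64, 64).astype(np.float32)
    print(f"latency: {latency_ms(blur, image):.3f} ms, psnr: {psnr(image, blur(image)):.2f} dB")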