Benchmark Class¶
class OpenDenoising.Benchmark(name='evaluation', output_dir='./results')[source]¶

Bases: object

Benchmark class.
The purpose of this class is to evaluate models on given datasets. Metric functions are registered through "register_function". After registering the desired metrics, you can run numeric evaluations through "numeric_evaluation". To further visualize the results, you may also register visualization functions (which generate figures) through "register_visualization". Once the metrics have been computed, you can call "graphic_evaluation" to generate the registered plots.
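The register-then-evaluate pattern described above can be illustrated with a minimal self-contained sketch. This is a mock, not the real OpenDenoising API: the MiniBenchmark class and the mse function below are hypothetical stand-ins that only mirror the documented workflow and the metric-dictionary fields.

```python
import statistics


class MiniBenchmark:
    """Toy stand-in mirroring the register/evaluate pattern (not the real API)."""

    def __init__(self, name="evaluation"):
        self.name = name
        self.metrics = []

    def register_function(self, func, name):
        # Each metric is stored as a dict with the documented fields.
        self.metrics.append({"Name": name, "Func": func,
                             "Value": [], "Mean": None, "Variance": None})

    def numeric_evaluation(self, pairs):
        # pairs: iterable of (ground_truth, restored) image stand-ins.
        for metric in self.metrics:
            metric["Value"] = [metric["Func"](gt, res) for gt, res in pairs]
            metric["Mean"] = statistics.mean(metric["Value"])
            metric["Variance"] = statistics.pvariance(metric["Value"])


def mse(gt, res):
    # Mean squared error over flat pixel lists (illustrative metric).
    return sum((a - b) ** 2 for a, b in zip(gt, res)) / len(gt)


bench = MiniBenchmark()
bench.register_function(mse, "MSE")
bench.numeric_evaluation([([0.0, 1.0], [0.0, 1.0]),
                          ([0.0, 1.0], [1.0, 0.0])])
print(bench.metrics[0]["Mean"])  # -> 0.5
```

The real class additionally tracks models and datasets and writes its results to disk, but the per-metric bookkeeping follows this shape.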
models¶

List of model.AbstractDenoiser objects.

Type: list
metrics¶

List of dictionaries, each holding the following fields:

- Name (str): metric name.
- Func (function): function object implementing the metric.
- Value (list): list of values, one for each file in the dataset.
- Mean (float): mean of the "Value" field.
- Variance (float): variance of the "Value" field.

Type: list
datasets¶

List of data.AbstractDatasetGenerator objects.

Type: list
partial¶

DataFrame holding per-file denoising results.

Type: pandas.DataFrame
general¶

DataFrame holding aggregates of the denoising results.

Type: pandas.DataFrame
__init__(self, name='evaluation', output_dir='./results')[source]¶

Initialize self. See help(type(self)) for accurate signature.
evaluate(self)[source]¶

Perform the entire evaluation on datasets and models.

For each pair (model, dataset), runs inference on each dataset image using the model. The results are stored in a pandas DataFrame and later written to two csv files:

- (EVALUATION_NAME)/General_Results: contains aggregates (mean and variance of each metric) for the tests that were run.
- (EVALUATION_NAME)/Partial_Results: contains the performance of each model on each dataset image.

These tables are stored in 'output_dir'. Moreover, the visual restoration results are stored in the 'output_dir/EVALUATION_NAME/RestoredImages' folder. If you have visualizations registered in your evaluator, the plots are saved in the 'output_dir/EVALUATION_NAME/Figures' folder.
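The relationship between the two tables can be sketched in plain Python. The row layout below is hypothetical (one per-file row per model/dataset/metric triple); the real class builds pandas DataFrames, but the grouping logic is the same: Partial_Results-style rows are reduced to General_Results-style aggregates.

```python
import statistics
from collections import defaultdict

# Hypothetical per-file rows, as they might appear in Partial_Results.
partial = [
    {"Model": "dncnn", "Dataset": "BSDS", "Metric": "PSNR", "Value": 28.1},
    {"Model": "dncnn", "Dataset": "BSDS", "Metric": "PSNR", "Value": 30.3},
    {"Model": "dncnn", "Dataset": "BSDS", "Metric": "PSNR", "Value": 29.0},
]

# Reduce to General_Results-style rows: mean and variance per group.
groups = defaultdict(list)
for row in partial:
    groups[(row["Model"], row["Dataset"], row["Metric"])].append(row["Value"])

general = [
    {"Model": m, "Dataset": d, "Metric": k,
     "Mean": statistics.mean(vals), "Variance": statistics.pvariance(vals)}
    for (m, d, k), vals in groups.items()
]
print(general[0])
```

With pandas, the same reduction is a groupby over the partial table followed by mean and variance aggregations.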
evaluate_model_on_dataset(self, denoiser, test_generator)[source]¶

Evaluates denoiser on the dataset represented by test_generator.

Parameters:
- denoiser (model.AbstractDeepLearningModel): denoiser object.
- test_generator (data.AbstractDatasetGenerator): dataset generator object. It generates data to evaluate the denoiser.

Returns: List of evaluated metrics.

Return type: list
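A minimal sketch of what such a per-dataset evaluation loop might look like, assuming the generator yields (noisy, clean) pairs. The denoiser, generator, and metric below are mocks, not the real OpenDenoising classes.

```python
import statistics


def evaluate_model_on_dataset(denoiser, test_generator, metric_funcs):
    """Run denoiser on every (noisy, clean) pair and collect metric stats."""
    values = {name: [] for name in metric_funcs}
    for noisy, clean in test_generator:
        restored = denoiser(noisy)
        for name, func in metric_funcs.items():
            values[name].append(func(clean, restored))
    # One dict per metric, mirroring the documented metric fields.
    return [{"Name": n, "Value": v,
             "Mean": statistics.mean(v), "Variance": statistics.pvariance(v)}
            for n, v in values.items()]


def identity(img):
    # Mock "denoiser": returns its input unchanged.
    return img


def mae(gt, res):
    # Mean absolute error over flat pixel lists (illustrative metric).
    return sum(abs(a - b) for a, b in zip(gt, res)) / len(gt)


data = [([1.0, 2.0], [1.0, 2.0]), ([3.0, 4.0], [3.0, 5.0])]
results = evaluate_model_on_dataset(identity, iter(data), {"MAE": mae})
print(results[0]["Mean"])  # -> 0.25
```

The documented method differs in that the metric functions come from the benchmark's registered metrics rather than an explicit argument, and the restored images are also written to disk.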