Evaluate a model
Documentation for `compressai_trainer.run.eval_model`.
Evaluate a model.
Evaluates a model on the configured `dataset.infer` dataset (i.e. the test set).
Saves bitstreams and reconstructed images to `paths.output_dir`.
Computes metrics and saves per-file results and averaged results to
JSON and TSV files.
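The exact layout of `paths.output_dir` depends on the configuration. As a rough sketch, the results can be inspected with standard shell tools (the file names below are hypothetical placeholders, not guaranteed by the tool):

```bash
# Hypothetical output layout; check the configured paths.output_dir for the actual file names.
ls outputs/
python -m json.tool outputs/<run>/results.json   # pretty-print the averaged metrics
```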
To evaluate models trained using CompressAI Trainer:
```bash
compressai-eval \
    --config-path="$HOME/data/runs/e4e6d4d5e5c59c69f3bd7be2/configs" \
    --config-path="$HOME/data/runs/d4d5e5c5e4e6bd7be29c69f3/configs" \
    ...
```
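If many runs need to be evaluated, the repeated `--config-path` arguments can be generated with a small shell loop. This is only a sketch: it assumes every run directory under `$HOME/data/runs` contains a `configs` subdirectory, as in the example above.

```bash
# Sketch: collect one --config-path per run directory (assumes a configs/ subdir per run).
args=()
for run in "$HOME/data/runs"/*/; do
    args+=(--config-path="${run}configs")
done
compressai-eval "${args[@]}"
```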
To evaluate models from the CompressAI zoo:
```bash
compressai-eval \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=1 ++criterion.lmbda=0.0018 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=2 ++criterion.lmbda=0.0035 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=3 ++criterion.lmbda=0.0067 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=4 ++criterion.lmbda=0.0130 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=5 ++criterion.lmbda=0.0250 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=6 ++criterion.lmbda=0.0483 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=7 ++criterion.lmbda=0.0932 \
    --config-name="eval_zoo" ++model.name="bmshj2018-factorized" ++model.quality=8 ++criterion.lmbda=0.1800
```
The above can be written more compactly by prepending a “default” override `++model.name="bmshj2018-factorized"` that applies to all of the configs:
```bash
compressai-eval \
    ++model.name="bmshj2018-factorized" \
    --config-name="eval_zoo" ++model.quality=1 ++criterion.lmbda=0.0018 \
    --config-name="eval_zoo" ++model.quality=2 ++criterion.lmbda=0.0035 \
    --config-name="eval_zoo" ++model.quality=3 ++criterion.lmbda=0.0067 \
    --config-name="eval_zoo" ++model.quality=4 ++criterion.lmbda=0.0130 \
    --config-name="eval_zoo" ++model.quality=5 ++criterion.lmbda=0.0250 \
    --config-name="eval_zoo" ++model.quality=6 ++criterion.lmbda=0.0483 \
    --config-name="eval_zoo" ++model.quality=7 ++criterion.lmbda=0.0932 \
    --config-name="eval_zoo" ++model.quality=8 ++criterion.lmbda=0.1800
```
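The same list of (quality, lambda) pairs can also be generated programmatically. The sketch below is equivalent to the command above and uses only the overrides already shown:

```bash
# Sketch: build the per-quality overrides from the lambda values listed above.
lmbdas=(0.0018 0.0035 0.0067 0.0130 0.0250 0.0483 0.0932 0.1800)
args=(++model.name="bmshj2018-factorized")
for i in "${!lmbdas[@]}"; do
    args+=(--config-name="eval_zoo" ++model.quality=$((i + 1)) ++criterion.lmbda="${lmbdas[$i]}")
done
compressai-eval "${args[@]}"
```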
By default, the following options are used if not explicitly specified:
--config-path="conf"
--config-name="config"
++model.source="config"
# if model.source == "config":
++paths.output_dir="outputs/${model.source}-${env.aim.run_hash}-${model.name}"
++paths.model_checkpoint='${paths.checkpoints}/runner.last.pth'
# if model.source == "from_state_dict":
++paths.output_dir="outputs/${model.name}-${Path(paths.model_checkpoint).stem}"
# if model.source == "zoo":
++paths.output_dir="outputs/${model.source}-${model.name}-${model.metric}-${model.quality}"
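For example, evaluating a checkpoint saved as a plain state dict might look as follows. This is a sketch that uses only the overrides listed above; the checkpoint path and output directory are illustrative, not defaults of the tool:

```bash
# Sketch: evaluate a plain state-dict checkpoint (paths are illustrative).
compressai-eval \
    ++model.source="from_state_dict" \
    ++model.name="bmshj2018-factorized" \
    ++paths.model_checkpoint="$HOME/checkpoints/bmshj2018-factorized.pth" \
    ++paths.output_dir="outputs/bmshj2018-factorized-state-dict"
```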
The model is evaluated on `dataset.infer`, which may be configured as follows:
```yaml
dataset:
  infer:
    type: "ImageFolder"
    config:
      root: "/path/to/directory/containing/images"
      split: ""
    loader:
      shuffle: False
      batch_size: 1
      num_workers: 2
    settings:
    transforms:
      - "ToTensor": {}
    meta:
      name: "Custom dataset"
      identifier: "image/custom"
      num_samples: 0  # ignored during eval
```
To evaluate a model on a custom directory of samples, use the above config and override `dataset.infer.config.root`.
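The override can also be passed directly on the command line, for example (a sketch; the run config path and image directory are illustrative):

```bash
# Sketch: evaluate a trained run on a custom directory of images.
compressai-eval \
    --config-path="$HOME/data/runs/e4e6d4d5e5c59c69f3bd7be2/configs" \
    ++dataset.infer.config.root="/path/to/my/images"
```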