
Multiple Target Prediction Plotting Bug #1317


Open: wants to merge 4 commits into base: main
6 changes: 3 additions & 3 deletions pytorch_forecasting/data/encoders.py
@@ -414,7 +414,7 @@ def __init__(

* None (default): No transformation of values
* log: Estimate in log-space leading to a multiplicative model
-    * logp1: Estimate in log-space but add 1 to values before transforming for stability
+    * log1p: Estimate in log-space but add 1 to values before transforming for stability
(e.g. if many small values <<1 are present).
Note, that inverse transform is still only `torch.exp()` and not `torch.expm1()`.
* logit: Apply logit transformation on values that are between 0 and 1
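The behavior the docstring warns about can be illustrated with Python's `math` module. This is a standalone sketch of the transformation semantics, not the library's normalizer code; the function names are illustrative:

```python
import math

def log1p_transform(x: float) -> float:
    # Forward transform: log(1 + x), numerically stable when many
    # small values x << 1 are present.
    return math.log1p(x)

def naive_inverse(y: float) -> float:
    # Plain exp(), as the docstring notes: round-tripping a value x
    # therefore yields x + 1, not x.
    return math.exp(y)

def exact_inverse(y: float) -> float:
    # expm1() would undo log1p() exactly.
    return math.expm1(y)
```

For example, `naive_inverse(log1p_transform(0.5))` gives 1.5 rather than 0.5, which is exactly the `torch.exp()` versus `torch.expm1()` caveat in the docstring.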
@@ -646,7 +646,7 @@ def __init__(

* None (default): No transformation of values
* log: Estimate in log-space leading to a multiplicative model
-    * logp1: Estimate in log-space but add 1 to values before transforming for stability
+    * log1p: Estimate in log-space but add 1 to values before transforming for stability
(e.g. if many small values <<1 are present).
Note, that inverse transform is still only `torch.exp()` and not `torch.expm1()`.
* logit: Apply logit transformation on values that are between 0 and 1
@@ -753,7 +753,7 @@ def __init__(

* None (default): No transformation of values
* log: Estimate in log-space leading to a multiplicative model
-    * logp1: Estimate in log-space but add 1 to values before transforming for stability
+    * log1p: Estimate in log-space but add 1 to values before transforming for stability
(e.g. if many small values <<1 are present).
Note, that inverse transform is still only `torch.exp()` and not `torch.expm1()`.
* logit: Apply logit transformation on values that are between 0 and 1
24 changes: 14 additions & 10 deletions pytorch_forecasting/models/base_model.py
@@ -422,7 +422,7 @@ def __init__(
loss (Metric, optional): metric to optimize, can also be list of metrics. Defaults to SMAPE().
logging_metrics (nn.ModuleList[MultiHorizonMetric]): list of metrics that are logged during training.
Defaults to [].
-    reduce_on_plateau_patience (int): patience after which learning rate is reduced by a factor of 10. Defaults
+    reduce_on_plateau_patience (int): patience (in steps) after which learning rate is reduced by a factor of 2. Defaults
to 1000
reduce_on_plateau_reduction (float): reduction in learning rate when encountering plateau. Defaults to 2.0.
reduce_on_plateau_min_lr (float): minimum learning rate for reduce on plateua learning rate scheduler.
@@ -993,6 +993,7 @@ def plot_prediction(

# for each target, plot
figs = []
+    ax_provided = ax is not None
for y_raw, y_hat, y_quantile, encoder_target, decoder_target in zip(
y_raws, y_hats, y_quantiles, encoder_targets, decoder_targets
):
@@ -1012,7 +1013,7 @@
# move to cpu
y = y.detach().cpu()
# create figure
-    if ax is None:
+    if (ax is None) or (not ax_provided):
Collaborator comment:
This is confusing. I would say: `ax_not_provided = ax is None` before the loop and then `if ax_not_provided` inside the loop.

fig, ax = plt.subplots()
else:
fig = ax.get_figure()
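The intent of the `ax_provided` flag can be sketched without matplotlib. The strings below are hypothetical stand-ins for figure and axes objects; the control flow mirrors the fix: `ax` is reassigned on the first loop iteration, so checking `ax is None` alone would make every later target reuse the first target's axes.

```python
def plot_each_target(targets, ax=None):
    # Record whether the caller passed an axes BEFORE the loop starts.
    ax_provided = ax is not None
    figs = []
    for target in targets:
        if not ax_provided:
            # stand-in for `fig, ax = plt.subplots()`: one fresh
            # figure/axes pair per target
            fig, ax = f"fig_{target}", f"ax_{target}"
        else:
            # stand-in for `fig = ax.get_figure()`: reuse the caller's axes
            fig = "caller_fig"
        figs.append(fig)
    return figs
```

Without the flag, the second target would draw onto `ax_t1` instead of getting `fig_t2`/`ax_t2`, which is the multiple-target plotting bug this PR addresses.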
@@ -1048,13 +1049,16 @@
ax.fill_between(x_pred, y_quantile[:, i], y_quantile[:, -i - 1], alpha=0.15, fc=pred_color)
else:
quantiles = torch.tensor([[y_quantile[0, i]], [y_quantile[0, -i - 1]]])
-    ax.errorbar(
-        x_pred,
-        y[[-n_pred]],
-        yerr=quantiles - y[-n_pred],
-        c=pred_color,
-        capsize=1.0,
-    )
+    try:
+        ax.errorbar(
+            x_pred,
+            y[[-n_pred]],
+            yerr=quantiles - y[-n_pred],
+            c=pred_color,
+            capsize=1.0,
+        )
+    except ValueError:
+        print(f"Warning: could not plot error bars. Quantiles: {quantiles}, y: {y}, yerr: {quantiles - y[-n_pred]}")
Collaborator comment: this would have to be a `logger.warning()` instead of `print()`.

Collaborator comment: why does the error actually happen? It seems like this solves a different problem.
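A sketch of what the first reviewer's suggestion could look like. `safe_errorbar` is a hypothetical wrapper, not part of the PR; the underlying `ValueError` plausibly comes from matplotlib rejecting negative `yerr` entries when a quantile crosses the observed value, though the PR does not confirm the root cause:

```python
import logging

logger = logging.getLogger(__name__)

def safe_errorbar(errorbar_fn, *args, **kwargs):
    # Run the plotting call; on failure, emit a proper log record
    # instead of print(), as the reviewer suggests, and return None.
    try:
        return errorbar_fn(*args, **kwargs)
    except ValueError as err:
        logger.warning("could not plot error bars: %s", err)
        return None
```

Unlike `print()`, `logging.warning()` respects the application's log level and handlers, so library users can silence or redirect the message.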


if add_loss_to_title is not False:
if isinstance(add_loss_to_title, bool):
@@ -1213,7 +1217,7 @@ def configure_optimizers(self):
min_lr=self.hparams.reduce_on_plateau_min_lr,
),
"monitor": "val_loss", # Default: val_loss
"interval": "epoch",
"interval": "step",
Collaborator comment: I don't think this makes sense; reducing on every step is very aggressive.

"frequency": 1,
"strict": False,
}
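For context, the dict being edited follows PyTorch Lightning's lr-scheduler configuration format. A minimal sketch, with the scheduler object left as a placeholder parameter:

```python
def lr_scheduler_config(scheduler, interval="epoch"):
    # PyTorch Lightning-style scheduler dict: with "epoch", val_loss is
    # checked once per validation epoch; "step" (the change in this PR)
    # would check after every optimizer step, which the reviewer
    # considers too aggressive for a ReduceLROnPlateau scheduler.
    return {
        "scheduler": scheduler,
        "monitor": "val_loss",
        "interval": interval,
        "frequency": 1,
        "strict": False,
    }
```

Note that `"monitor": "val_loss"` only updates once per validation run anyway, so checking the plateau condition every training step mostly re-evaluates a stale metric.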