cntk.logging.progress_print module
class ProgressPrinter(freq=None, first=0, tag='', log_to_file=None, rank=None, gen_heartbeat=False, num_epochs=None, test_freq=None, test_first=0, metric_is_pct=True, distributed_freq=None, distributed_first=0)
Bases: cntk.cntk_py.ProgressWriter
Allows printing various statistics (e.g. loss and metric) as training/evaluation progresses.
Parameters:
- freq (int or None, default None) – determines how often training progress is printed. A value of 0 means a geometric schedule (1, 2, 4, ...). A value > 0 means an arithmetic schedule (print at minibatch numbers freq, 2 * freq, 3 * freq, ...). A value of None means no per-minibatch log.
- first (int, default 0) – only start printing once the training minibatch number is greater than or equal to first.
- tag (string, default '') – prepend minibatch log lines with your own string.
- log_to_file (string or None, default None) – if None, log data is written to stdout. If a string is passed, it is the path to a file for log data.
- rank (int or None, default None) – set this to distributed.rank if you are using distributed parallelism, so that each rank's log goes to a separate file.
- gen_heartbeat (bool, default False) – if True, output a progress message to stdout every 10 seconds or so.
- num_epochs (int or None, default None) – the total number of epochs to be trained; used for some metadata. This parameter is optional.
- test_freq (int or None, default None) – similar to freq, but applies to printing intermediate test results.
- test_first (int, default 0) – similar to first, but applies to printing intermediate test results.
- metric_is_pct (bool, default True) – treat the metric as a percentage for output purposes.
- distributed_freq (int or None, default None) – similar to freq, but applies to printing distributed-training worker synchronization info.
- distributed_first (int, default 0) – similar to first, but applies to printing distributed-training worker synchronization info.
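The geometric versus arithmetic behaviour of freq can be sketched in plain Python. This is an illustration of the schedule described above (the function name is hypothetical), not CNTK's actual implementation:

```python
def logged_minibatches(freq, total):
    """Return the minibatch numbers at which a log line would be printed.

    freq == 0   -> geometric schedule: 1, 2, 4, 8, ...
    freq > 0    -> arithmetic schedule: freq, 2*freq, 3*freq, ...
    freq is None -> no per-minibatch log.
    """
    if freq is None:
        return []
    if freq == 0:
        out, n = [], 1
        while n <= total:
            out.append(n)
            n *= 2  # geometric: double the interval each time
        return out
    return list(range(freq, total + 1, freq))

print(logged_minibatches(0, 20))   # geometric: [1, 2, 4, 8, 16]
print(logged_minibatches(5, 20))   # arithmetic: [5, 10, 15, 20]
```

With freq=0 the log becomes progressively sparser as training proceeds, which keeps early iterations visible without flooding the log later on.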
avg_loss_since_start()
DEPRECATED.
Returns: the average loss since the start of accumulation.
avg_metric_since_start()
DEPRECATED.
Returns: the average metric since the start of accumulation.
end_progress_print(msg='')
Prints the given message, signifying the end of training.
Parameters: msg (string, default '') – message to print.
epoch_summary(with_metric=False)
DEPRECATED.
On an arithmetic schedule, prints an epoch summary using the 'start' accumulators; on a geometric schedule, does nothing.
Parameters: with_metric (bool) – if False, print only the loss; otherwise print both the loss and the metric.
log(message)
Prints any message the user wishes to place in the log.
Parameters: message (string) – message to print.
on_write_training_summary(samples, updates, summaries, aggregate_loss, aggregate_metric, elapsed_milliseconds)
reset_last()
DEPRECATED.
Resets the 'last' accumulators.
Returns: tuple of (average loss since last, average metric since last, samples since last).
reset_start()
DEPRECATED.
Resets the 'start' accumulators.
Returns: tuple of (average loss since start, average metric since start, samples since start).
update(loss, minibatch_size, metric=None)
DEPRECATED.
Updates the accumulators using the loss, the minibatch_size, and the optional metric.
Parameters:
- loss (float) – the value with which to update the loss accumulators.
- minibatch_size (int) – the value with which to update the samples accumulator.
- metric (float or None) – if None, do not update the metric accumulators; otherwise update with the given value.
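The accumulators behind update() and the avg_*_since_start() methods behave like sample-weighted running averages. A minimal sketch of that behaviour (the Accumulator class is a hypothetical illustration, not CNTK source):

```python
class Accumulator:
    """Sketch of the loss/metric accumulation described for update()."""

    def __init__(self):
        self.loss_sum = 0.0
        self.metric_sum = 0.0
        self.samples = 0

    def update(self, loss, minibatch_size, metric=None):
        # loss/metric are per-sample averages for the minibatch, so
        # weight them by the minibatch size before accumulating.
        self.loss_sum += loss * minibatch_size
        if metric is not None:
            self.metric_sum += metric * minibatch_size
        self.samples += minibatch_size

    def avg_loss_since_start(self):
        return self.loss_sum / self.samples

acc = Accumulator()
acc.update(0.5, 10)
acc.update(0.3, 30)
print(acc.avg_loss_since_start())  # (0.5*10 + 0.3*30) / 40, i.e. about 0.35
```

Weighting by minibatch size is what makes the running average correct when minibatches have different sizes.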
update_with_trainer(trainer, with_metric=False)
DEPRECATED.
Updates the current loss, the minibatch size, and optionally the metric using information from the trainer.
Parameters:
- trainer (cntk.train.trainer.Trainer) – trainer from which information is gathered.
- with_metric (bool) – whether to update the metric accumulators.
class TensorBoardProgressWriter(freq=None, log_dir='.', rank=None, model=None)
Bases: cntk.cntk_py.ProgressWriter
Allows writing various statistics (e.g. loss and metric) to TensorBoard event files during training/evaluation. The generated files can be opened in TensorBoard to visualize the progress.
Parameters:
- freq (int or None, default None) – frequency at which training progress is written. None indicates that progress is logged only at the end of training. Must be a positive integer otherwise.
- log_dir (string, default '.') – directory in which to create a TensorBoard event file.
- rank (int or None, default None) – rank of a worker when using distributed training, or None if training locally. If not None, event files will be created only by rank 0.
- model (cntk.ops.functions.Function or None, default None) – model graph to plot.
close()
Makes sure that any outstanding records are immediately persisted, then closes any open files. Any subsequent attempt to use the object will raise a RuntimeError.
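The rank and close() semantics above can be sketched in plain Python (a hypothetical stand-in class, not CNTK's implementation, with an in-memory list standing in for the event file):

```python
class EventWriterSketch:
    """Sketch of TensorBoardProgressWriter's rank/close semantics."""

    def __init__(self, rank=None):
        # Only rank 0, or local (non-distributed) training, writes records.
        self.writes_records = rank is None or rank == 0
        self.closed = False
        self.records = []  # stand-in for the TensorBoard event file

    def write(self, step, value):
        if self.closed:
            # Mirrors "any subsequent attempt to use the object will
            # raise a RuntimeError" after close().
            raise RuntimeError("writer has been closed")
        if self.writes_records:
            self.records.append((step, value))

    def close(self):
        self.closed = True

rank0 = EventWriterSketch(rank=0)
rank1 = EventWriterSketch(rank=1)
rank0.write(1, 0.5)
rank1.write(1, 0.5)
print(rank0.records)  # [(1, 0.5)]
print(rank1.records)  # [] -- non-zero ranks write nothing
```

Restricting writes to rank 0 avoids every distributed worker producing its own, conflicting set of event files.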
class TrainingSummaryProgressCallback(epoch_size, callback)
Bases: cntk.cntk_py.ProgressWriter
Helper to pass a callback function, to be called after each training epoch, to Trainer, Evaluator, and TrainingSession, as well as to cntk.ops.functions.Function.train() and cntk.ops.functions.Function.test(). This allows the user to add additional logging after each training epoch.
Parameters:
- epoch_size (int) – periodically call the callback after processing this many samples.
- callback (function) – function(epoch_index, epoch_loss, epoch_metric, epoch_samples).
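The epoch_size mechanism can be sketched as follows. This is a hypothetical helper, not CNTK code, and for brevity the callback here takes only the epoch index rather than the full (epoch_index, epoch_loss, epoch_metric, epoch_samples) signature documented above:

```python
def run_with_epoch_callback(sample_counts, epoch_size, callback):
    """Invoke `callback` each time `epoch_size` samples have been processed.

    sample_counts: number of samples in each successive minibatch.
    Returns the number of completed epochs.
    """
    epoch_index = 0
    seen = 0
    for n in sample_counts:       # n = samples in one minibatch
        seen += n
        while seen >= epoch_size:  # a minibatch may complete several epochs
            callback(epoch_index)
            epoch_index += 1
            seen -= epoch_size
    return epoch_index

epochs = []
run_with_epoch_callback([30, 30, 50], epoch_size=50, callback=epochs.append)
print(epochs)  # callback fired after 50 and 100 samples: [0, 1]
```

Counting in samples rather than minibatches means the callback cadence stays stable even when minibatch sizes vary.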