While syncing TensorBoard, you can still log additional custom metrics with `wandb.log` in your code, for example `wandb.log({"custom": 0.8})`. Setting the `step` argument of `wandb.log` is disabled when syncing TensorBoard. If you'd like to use a different step count, log the metrics with a step metric, as in `wandb.log({"custom": 0.8, "global_step": global_step})`.
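As a sketch, you can attach the step metric to each payload before logging; the helper name here is illustrative, not part of the wandb API:

```python
# Hypothetical helper: attach the step as a metric, since the `step`
# argument to wandb.log is disabled while syncing TensorBoard.
def make_payload(metrics, global_step):
    return {**metrics, "global_step": global_step}

payload = make_payload({"custom": 0.8}, 42)
# wandb.log(payload)  # assumes an active run started with sync_tensorboard=True
```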
If you want more control over how TensorBoard is patched, you can call `wandb.tensorboard.patch` instead of passing `sync_tensorboard=True` to `wandb.init`. Pass `tensorboard_x=False` to this method to ensure vanilla TensorBoard is patched; if you're using TensorBoard > 1.14 with PyTorch, pass `pytorch=True` to ensure it's patched. Both options have smart defaults that depend on which versions of these libraries have been imported.

By default, the `tfevents` files and any `.pbtxt` files are also saved. This lets W&B launch a TensorBoard instance on your behalf, shown in a TensorBoard tab on the run page. You can disable this behavior by passing `save=False` to `wandb.tensorboard.patch`.
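Putting those options together, a manual patching setup might look like the following sketch (assumes wandb is installed; the project name is a placeholder):

```python
# Sketch: patch TensorBoard manually instead of passing sync_tensorboard=True.
import wandb

wandb.tensorboard.patch(
    pytorch=True,         # patch torch.utils.tensorboard (TensorBoard > 1.14 with PyTorch)
    tensorboard_x=False,  # patch vanilla TensorBoard rather than tensorboardX
    save=False,           # don't save tfevents/.pbtxt files or launch a hosted TensorBoard
)
wandb.init(project="my-project")  # "my-project" is a placeholder
```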
Make sure to call `wandb.init` or `wandb.tensorboard.patch` before calling `tf.summary.create_file_writer` or constructing a `SummaryWriter` via `torch.utils.tensorboard`; otherwise the writers are created before they can be patched.
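The required ordering can be sketched as follows (assumes wandb and PyTorch are installed; the log directory is a placeholder):

```python
# Sketch of the required call order: init first, then create the writer.
import wandb
from torch.utils.tensorboard import SummaryWriter

wandb.init(sync_tensorboard=True)    # must come before the writer is constructed
writer = SummaryWriter("runs/demo")  # "runs/demo" is a placeholder path
writer.add_scalar("loss", 0.5, 1)
writer.close()
wandb.finish()
```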
If you have existing `tfevents` files stored locally and would like to import them into W&B, run `wandb sync log_dir`, where `log_dir` is a local directory containing the `tfevents` files.

If you're running your code in a Jupyter or Google Colab notebook, make sure to call `wandb.finish()` at the end of your training. This finishes the wandb run and uploads the TensorBoard logs to W&B so they can be visualized. This isn't necessary when running a `.py` script, since wandb finishes automatically when the script exits. To run shell commands in a notebook, prefix them with `!`, as in `!wandb sync directoryname`.
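Before running `wandb sync` on a directory, you can verify that it actually contains `tfevents` files; a minimal hypothetical helper (the function name is not part of wandb):

```python
# Hypothetical pre-check before `wandb sync <log_dir>`: does the
# directory (or any subdirectory) contain tfevents files?
from pathlib import Path

def has_tfevents(log_dir):
    return any(Path(log_dir).rglob("*tfevents*"))
```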