Once you have explored your exported data, you can record findings by creating a new analysis run and logging results, for example:

```python
wandb.init(job_type="analysis")
```
Authenticate your machine with your API key in one of two ways:

1. Run `wandb login` on the command line and paste in your API key.
2. Set the `WANDB_API_KEY` environment variable to your API key.

To use the Public API you will often need the run path, which has the form `<entity>/<project>/<run_id>`. In the app UI, open a run page and click the Overview tab to get the run path.

`run.config`: A dictionary of the run's configuration information, such as hyperparameters. Think of the config as the run's "inputs".
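As a minimal sketch of reading a run through the Public API (the run path below is a placeholder):

```python
import wandb

api = wandb.Api()

# Placeholder run path; copy yours from the run's Overview tab.
run = api.run("<entity>/<project>/<run_id>")
print(run.config)  # hyperparameters and other "inputs" for this run
```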
`run.history()`: A list of dictionaries meant to store values that change while the model is training, such as loss. The command `wandb.log()` appends to this object.

`run.summary`: A dictionary of information that summarizes the run's results. This can be scalars like accuracy and loss, or large files. By default, `wandb.log()` sets the summary to the final value of a logged timeseries. The contents of the summary can also be set directly. Think of the summary as the run's "outputs".

You can also modify or update the data of past runs. By default, a single instance of an `api` object caches all network requests. If your use case requires real-time information in a running script, call `api.flush()` to get updated values.

The sketch below shows how `run.config`, `run.history()`, and `run.summary` correspond to what a script logs.
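A minimal illustrative run (the project name is a placeholder):

```python
import random
import wandb

# config supplies the run's "inputs".
run = wandb.init(project="my-project", config={"n_epochs": 5})

# Each log() call appends a row to the run's history.
for _ in range(run.config["n_epochs"]):
    run.log({"loss": random.random()})

run.finish()
```

Fetched back through the Public API, `run.config` returns `{"n_epochs": 5}`, `run.history()` returns the logged `loss` series, and `run.summary` holds the final `loss` value.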
The default history method samples the metrics to a fixed number of points (the default is 500; you can change this with the `samples` argument). If you want to export all of the data on a large run, you can use the `run.scan_history()` method. For more details, see the API Reference.

To query multiple runs, use `api.runs`. It returns a `Runs` object that is iterable and acts like a list. By default the object loads 50 runs at a time in sequence as required, but you can change the number loaded per page with the `per_page` keyword argument.

`api.runs` also accepts an `order` keyword argument. The default order is `-created_at`; specify `+created_at` to get results in ascending order. You can also sort by config or summary values, e.g. `summary.val_acc` or `config.experiment_name`.
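A short sketch combining these options (the project path, `per_page` value, and ordering are illustrative choices):

```python
import wandb

api = wandb.Api()

# Placeholder project path; oldest runs first, 100 runs per page.
runs = api.runs("my_entity/my_project", order="+created_at", per_page=100)

for run in runs:
    print(run.name, run.state)
```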
If errors occur while talking to W&B servers, a `wandb.CommError` will be raised. The original exception can be introspected via the `exc` attribute.
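For example, one way to handle this (the run path is a placeholder):

```python
import wandb

api = wandb.Api()
try:
    run = api.run("<entity>/<project>/<run_id>")
except wandb.CommError as err:
    # The underlying exception is exposed on the `exc` attribute.
    print("Could not reach W&B:", err.exc)
```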
When a run is created, W&B saves metadata, including the latest git commit, to `wandb-metadata.json`. Using the Public API, you can get the git hash with `run.commit`.
After calling `wandb.init()` you can access the random run ID or the human-readable run name from your script like this: `wandb.run.id` is the unique run ID (an 8-character hash), and `wandb.run.name` is the random, human-readable run name.
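For instance (the project name and printed values are illustrative):

```python
import wandb

wandb.init(project="my-project")  # placeholder project name
print(wandb.run.id)    # unique run ID, e.g. an 8-character hash
print(wandb.run.name)  # random, human-readable run name
wandb.finish()
```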
To attach a quick description to a run, set notes at creation time with `wandb.init(notes="your notes here")`.
wandb.log({"accuracy": acc})
for a run saved to "<entity>/<project>/<run_id>"
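A sketch of that export, assuming `accuracy` was logged over time (the run path is a placeholder):

```python
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

# history() returns sampled metrics as a pandas DataFrame by default.
for _, row in run.history().iterrows():
    print(row["_timestamp"], row["accuracy"])
```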
To pull specific metrics from a run, use the `keys` argument. The default number of samples when using `run.history()` is 500. Logged steps that do not include a specific metric will appear in the output dataframe as `NaN`. The `keys` argument will cause the API to sample steps that include the listed metric keys more frequently.
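For example (the run path and metric name are placeholders):

```python
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

# Prefer steps that actually logged "accuracy"; other steps would be NaN.
history = run.history(keys=["accuracy"])
print(history.head())
```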
The next example outputs the config parameters that differ between `run1` and `run2`.
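A sketch using pandas to diff the two configs (both run paths are placeholders):

```python
import pandas as pd
import wandb

api = wandb.Api()
run1 = api.run("<entity>/<project>/<run_id_1>")
run2 = api.run("<entity>/<project>/<run_id_2>")

df = pd.DataFrame([run1.config, run2.config]).transpose()
df.columns = [run1.name, run2.name]

# Keep only the parameters where the two runs disagree.
print(df[df[run1.name] != df[run2.name]])
```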
You can also update metrics after a run has finished. This example sets the accuracy of a previous run to `0.9`. It also modifies the accuracy histogram of a previous run to be the histogram of `numpy_array`.
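A sketch (the run path and `numpy_array` are placeholders):

```python
import numpy as np
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

numpy_array = np.random.rand(100)  # stand-in for your own data

run.summary["accuracy"] = 0.9
run.summary["accuracy_histogram"] = wandb.Histogram(numpy_array)
run.summary.update()  # push the changes back to the W&B servers
```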
When you pull data from history, it is sampled to 500 points by default. Get all the logged data points using `run.scan_history()`. Here's an example downloading all the `loss` data points logged in history:
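A minimal sketch (the run path is a placeholder):

```python
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

# scan_history() iterates over every logged row, with no sampling.
history = run.scan_history()
losses = [row["loss"] for row in history]
```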
If metrics are being fetched slowly or API requests are timing out, try lowering the page size in `scan_history` so that individual requests don't time out. The default page size is 500, so you can experiment with different sizes to see what works best:
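For instance (the page size and run path are illustrative):

```python
import wandb

api = wandb.Api()
run = api.run("<entity>/<project>/<run_id>")  # placeholder run path

# Smaller pages mean more, but faster, individual requests.
for row in run.scan_history(page_size=100):
    print(row)
```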
To get the best run from a sweep, call the sweep's `best_run()` method; `best_run` is the run with the best metric as defined by the `metric` parameter in the sweep config.
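A sketch (the sweep path is a placeholder):

```python
import wandb

api = wandb.Api()
sweep = api.sweep("<entity>/<project>/<sweep_id>")  # placeholder sweep path

# Best according to the `metric` defined in the sweep config.
best_run = sweep.best_run()
print(best_run.name, best_run.summary)
```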
You can also download the best model file from a sweep. The snippet below downloads the model file with the highest validation accuracy, assuming the sweep's runs saved model files to `model.h5`.
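A sketch under those assumptions (`val_acc` is a hypothetical summary metric name; the sweep path is a placeholder):

```python
import wandb

api = wandb.Api()
sweep = api.sweep("<entity>/<project>/<sweep_id>")  # placeholder sweep path

# "val_acc" is an assumed summary metric name; adjust to your sweep.
runs = sorted(
    sweep.runs,
    key=lambda run: run.summary.get("val_acc", 0),
    reverse=True,
)
best = runs[0]
print("Best run:", best.name, "val_acc:", best.summary.get("val_acc"))

# Download the saved model file from the best run.
best.file("model.h5").download(replace=True)
```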