Runs
An iterable collection of runs associated with a project and optional filter.
Runs(
client: "RetryingClient",
entity: str,
project: str,
filters: Optional[Dict[str, Any]] = None,
order: Optional[str] = None,
per_page: int = 50,
include_sweeps: bool = True
)
This is generally used indirectly via the `Api.runs` method.
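As a sketch of how a `Runs` collection is usually obtained, filters are passed as a MongoDB-style query over run fields such as `state` and `config.*`. The entity and project names below are placeholders:

```python
# Placeholder entity/project; running the query itself requires the
# `wandb` package and a logged-in session:
#   import wandb
#   api = wandb.Api()
#   runs = api.runs("my-entity/my-project", filters=filters, per_page=50)

# MongoDB-style filter: finished runs whose logged config has lr < 0.01.
filters = {
    "state": "finished",           # match a top-level run field
    "config.lr": {"$lt": 0.01},    # comparison operator on a config value
}
```

The resulting `runs` object is lazily paginated, so iterating over it fetches pages of `per_page` runs on demand.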
Attributes | |
---|---|
`cursor` | The start cursor to use for the next fetched page. |
`more` | Whether there are more pages to be fetched. |
Methods
convert_objects
convert_objects()
Convert the last fetched response data into the iterated objects.
histories
histories(
samples: int = 500,
keys: Optional[List[str]] = None,
x_axis: str = "_step",
format: Literal['default', 'pandas', 'polars'] = "default",
stream: Literal['default', 'system'] = "default"
)
Return sampled history metrics for all runs that match the filter conditions.
Args | |
---|---|
`samples` | (int, optional) The number of samples to return per run. |
`keys` | (list[str], optional) Only return metrics for the specified keys. |
`x_axis` | (str, optional) The metric to use as the x-axis. Defaults to `_step`. |
`format` | (Literal, optional) Format to return data in; options are `"default"`, `"pandas"`, and `"polars"`. |
`stream` | (Literal, optional) `"default"` for logged metrics, `"system"` for machine metrics. |
Returns | |
---|---|
`pandas.DataFrame` | If `format="pandas"`, returns a `pandas.DataFrame` of history metrics. |
`polars.DataFrame` | If `format="polars"`, returns a `polars.DataFrame` of history metrics. |
list of dicts | If `format="default"`, returns a list of dicts containing history metrics with a `run_id` key. |
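To illustrate the `"default"` return shape, here is a hedged sketch that groups a synthetic stand-in for the flat list of dicts by its `run_id` key; the metric names and values are invented for the example:

```python
from collections import defaultdict

# Synthetic stand-in for histories(format="default") output:
# a flat list of samples, each dict carrying a run_id key.
history = [
    {"run_id": "run-a", "_step": 0,  "loss": 0.9},
    {"run_id": "run-a", "_step": 10, "loss": 0.5},
    {"run_id": "run-b", "_step": 0,  "loss": 1.1},
]

# Group the flat list into per-run sample lists.
by_run = defaultdict(list)
for row in history:
    by_run[row["run_id"]].append(row)

# Pick the last sampled loss for each run.
final_loss = {run_id: rows[-1]["loss"] for run_id, rows in by_run.items()}
```

The same grouping is unnecessary with `format="pandas"` or `format="polars"`, where a DataFrame `groupby` on `run_id` does the equivalent work.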
next
next() -> T
Return the next item from the iterator. Raises `StopIteration` when the iterator is exhausted.
update_variables
update_variables() -> None
Update the query variables for the next page fetch.
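The `cursor`, `more`, `convert_objects`, and `update_variables` pieces suggest a lazy pagination loop. The toy class below is a minimal sketch of that pattern against an in-memory "backend" list, not the real wandb implementation:

```python
from typing import Any, Dict, Iterator, List

class PagedRuns:
    """Toy sketch of lazy pagination: fetch a raw page, convert it to
    objects, then advance the cursor until no more pages remain."""

    def __init__(self, backend: List[Dict[str, Any]], per_page: int = 2):
        self._backend = backend   # stand-in for the server-side data
        self._per_page = per_page
        self.cursor = 0           # start cursor for the next fetched page
        self.more = True          # whether more pages are left to fetch

    def convert_objects(self) -> List[Dict[str, Any]]:
        # Convert the last fetched response data into iterated objects.
        return self._backend[self.cursor:self.cursor + self._per_page]

    def update_variables(self) -> None:
        # Update the query variables for the next page fetch.
        self.cursor += self._per_page
        self.more = self.cursor < len(self._backend)

    def __iter__(self) -> Iterator[Dict[str, Any]]:
        while self.more:
            for obj in self.convert_objects():
                yield obj
            self.update_variables()

runs = PagedRuns([{"name": f"run-{i}"} for i in range(5)], per_page=2)
names = [r["name"] for r in runs]
```

Because iteration drives the fetching, only as many pages are requested as the caller actually consumes.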
__getitem__
__getitem__(
index: int | slice
) -> T | list[T]
__iter__
__iter__() -> Iterator[T]
__len__
__len__() -> int
Class Variables | |
---|---|
`QUERY` | |