View reasoning information
How to return and view reasoning in your W&B Inference responses.
Reasoning models, like OpenAI’s GPT OSS 20B, include information about their reasoning steps in the output they return, in addition to the final answer. This happens automatically; no additional input parameters are needed.
You can determine whether a model supports reasoning by checking the Supported Features section of its catalog page in the UI.
You can find reasoning information in the reasoning_content field of responses. This field is not present in the outputs of models that don't support reasoning. For example, using the OpenAI Python SDK:
import openai

client = openai.OpenAI(
    base_url="https://api.inference.wandb.ai/v1",
    api_key="<your-api-key>",  # Available from https://wandb.ai/authorize
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[
        {"role": "user", "content": "3.11 and 3.8, which is greater?"}
    ],
)

print(response.choices[0].message.reasoning_content)
print("--------------------------------")
print(response.choices[0].message.content)
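
The same request with curl: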
curl https://api.inference.wandb.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -d '{
    "model": "openai/gpt-oss-20b",
    "messages": [
      { "role": "user", "content": "3.11 and 3.8, which is greater?" }
    ]
  }'
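
Because reasoning_content is only returned by reasoning models, it can help to read it defensively if your code switches between models. The sketch below assumes the same client setup and prompt as the Python example above and uses getattr to fall back gracefully when the field is absent:

import openai

client = openai.OpenAI(
    base_url="https://api.inference.wandb.ai/v1",
    api_key="<your-api-key>",  # Available from https://wandb.ai/authorize
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # swap in any model from the catalog
    messages=[
        {"role": "user", "content": "3.11 and 3.8, which is greater?"}
    ],
)

message = response.choices[0].message
# reasoning_content is only set for reasoning models, so read it with a default.
reasoning = getattr(message, "reasoning_content", None)
if reasoning:
    print(reasoning)
    print("--------------------------------")
print(message.content)

This keeps the same output as the earlier example for reasoning models and simply skips the reasoning section for models that don't return it.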