Hello Jieyu Yao,
Thank you for reaching out on the Microsoft Q&A.
For GPT-5 deployments in Azure AI Foundry, the portal shows a Responses API example by default, but you can still call the model through the Chat Completions API. The key is to use the chat/completions route and include your deployment name (e.g., gpt-5) in the URL path:
import requests

# Replace with your values
endpoint = "https://{your-resource-name}.openai.azure.com/"
deployment_name = "{your-deployment-name}"  # e.g., gpt-5
api_key = "{your-api-key}"
api_version = "2024-08-01-preview"

# Chat Completions route: the deployment name goes in the URL path
url = f"{endpoint}openai/deployments/{deployment_name}/chat/completions?api-version={api_version}"

headers = {
    "Content-Type": "application/json",
    "api-key": api_key
}

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
}

# Send the request and print the raw JSON response
response = requests.post(url, headers=headers, json=payload)
print(response.json())
This sends a chat completion request to your GPT-5 deployment. Just replace the placeholders with your resource name, deployment name, API key, and API version, and you'll be able to call GPT-5 through the Chat Completions API. The assistant's reply is returned in the response JSON under choices[0].message.content.
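If you'd rather not build the URL by hand, the same call can also go through the official openai Python package. This is a minimal sketch, assuming openai version 1.x is installed; the AzureOpenAI client and parameter names below come from that SDK, and the placeholder values are the same ones used above:

from openai import AzureOpenAI

# Same placeholder values as in the requests example
client = AzureOpenAI(
    azure_endpoint="https://{your-resource-name}.openai.azure.com/",
    api_key="{your-api-key}",
    api_version="2024-08-01-preview"
)

# "model" takes the deployment name, e.g. gpt-5
completion = client.chat.completions.create(
    model="{your-deployment-name}",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

# Print just the assistant's reply text
print(completion.choices[0].message.content)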
I hope this clears things up! If this guidance resolves your concern, please consider accepting the answer so other community members calling GPT-5 through the Chat Completions API can benefit from it as well.
Thank you!