Deployed GPT-5 through Azure AI Foundry but no Chat Completions REST API option

Jieyu Yao 20 Reputation points
2025-09-05T22:14:04.22+00:00

I recently deployed the GPT-5 model through Azure AI Foundry, and the REST API I got is the Responses API endpoint. How can I try GPT-5 with the Chat Completions REST API?

GPT-5-chat, GPT-5-nano, and GPT-5-mini do not have the same problem; they all expose the Chat Completions API, with the deployment name embedded in the URL.

Can someone help?

Azure AI Language

Answer accepted by question author
  1. Anshika Varshney 1,910 Reputation points Microsoft External Staff Moderator
    2025-09-08T10:51:19.4633333+00:00

    Hello Jieyu Yao,

    Thank you for reaching out on the Microsoft Q&A.

For GPT-5 deployments in Azure AI Foundry, the portal shows a Responses API example by default, but you can still call the model through the Chat Completions API. The trick is to use the chat/completions route and include your deployment name (e.g., gpt-5) in the URL path.

    import requests
    # Replace with your values
    endpoint = "https://{your-resource-name}.openai.azure.com/"
    deployment_name = "{your-deployment-name}"  # e.g., gpt-5
    api_key = "{your-api-key}"
    api_version = "2024-08-01-preview"
    url = f"{endpoint}openai/deployments/{deployment_name}/chat/completions?api-version={api_version}"
    headers = {
        "Content-Type": "application/json",
        "api-key": api_key
    }
    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello, how are you?"}
        ]
    }
    response = requests.post(url, headers=headers, json=payload)
    print(response.json())
    
    

    This sends a chat completion request to your GPT-5 deployment. Just replace the placeholders with your resource name, deployment name, API key, and API version, and you'll be able to call GPT-5 through the Chat Completions API.
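    Once the call succeeds, the assistant's text lives under choices[0].message.content in the response body. A minimal helper for pulling it out, shown here against a mocked response dict (illustrative only, since a real call needs live credentials):

    ```python
    # Hypothetical response body shaped like a Chat Completions reply
    # (illustrative only -- a real call needs live credentials).
    sample = {
        "choices": [
            {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
        ]
    }

    def extract_reply(body: dict) -> str:
        """Pull the assistant's text out of a Chat Completions response body."""
        return body["choices"][0]["message"]["content"]

    print(extract_reply(sample))  # -> Hello! How can I help?
    ```

    With the code above, you would call `extract_reply(response.json())` instead of printing the whole JSON.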

    I hope this clears things up! If this guidance resolves your concern, please consider accepting the answer so other community members working with GPT-5 deployments can benefit from it as well.

    Thank you!


1 additional answer

Sort by: Most helpful
  1. Divyesh Govaerdhanan 9,355 Reputation points
    2025-09-07T17:55:27.8666667+00:00

    Hello,

    Welcome to Microsoft Q&A,

    GPT-5 (base) supports the Chat Completions API. The portal often shows a Responses API sample by default, but you can call Chat Completions directly on the Foundry Models inference endpoint.

    curl -X POST "https://<your-resource>.services.ai.azure.com/api/models/chat/completions" \
      -H "Content-Type: application/json" \
      -H "api-key: <AZURE_INFERENCE_CREDENTIAL>" \
      -d '{
        "model": "<your-deployment-name>",
        "messages": [
          {"role":"system","content":"You are a helpful assistant."},
          {"role":"user","content":"Say hi in one short sentence."}
        ]
      }'
    
    
    1. Use your deployment name in the "model" field.
    2. The docs page "Use chat completions (Foundry Models)" on Microsoft Learn shows the endpoint patterns and SDK examples.
    3. The GPT-5 model page confirms Chat Completions support (GPT-5 base requires registration; GPT-5-chat/mini/nano typically don’t)
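    If you prefer Python over curl, the same request can be sketched with `requests`. The resource and deployment names below are placeholders, and the actual POST is left commented out because it needs a real key:

    ```python
    def build_foundry_request(resource: str, deployment: str, messages: list):
        """Build the Foundry Models inference URL and Chat Completions payload."""
        url = f"https://{resource}.services.ai.azure.com/api/models/chat/completions"
        payload = {"model": deployment, "messages": messages}
        return url, payload

    url, payload = build_foundry_request(
        "my-resource",  # placeholder resource name
        "gpt-5",        # placeholder deployment name
        [{"role": "user", "content": "Say hi in one short sentence."}],
    )

    # To actually send it (requires a real key):
    # import requests
    # r = requests.post(url, headers={"Content-Type": "application/json",
    #                                 "api-key": "<AZURE_INFERENCE_CREDENTIAL>"},
    #                   json=payload)
    # print(r.json())
    print(url)
    ```

    Note that on this endpoint the deployment name goes in the "model" field of the body, not in the URL path as with the openai.azure.com route.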

    https://free.blessedness.top/en-us/azure/ai-foundry/foundry-models/how-to/use-chat-completions?pivots=programming-language-pythonPlease Upvote and accept the answer if it helps!!

