Azure OpenAI in Azure AI Foundry Models API lifecycle

This article helps you understand the support lifecycle for Azure OpenAI APIs.

Note

New objects may be added to the API response at any time. We recommend that you parse only the response objects you require.

API evolution

Previously, Azure OpenAI received monthly updates of new API versions. Taking advantage of new features required constantly updating code and environment variables with each new API release. Azure OpenAI also required the extra step of using Azure-specific clients, which created overhead when migrating code between OpenAI and Azure OpenAI.

Starting in August 2025, you can now opt in to our next generation v1 Azure OpenAI APIs which add support for:

  • Ongoing access to the latest features with no need to specify new api-version values each month.
  • Faster API release cycle with new features launching more frequently.
  • OpenAI client support with minimal code changes to swap between OpenAI and Azure OpenAI when using key-based authentication.
  • OpenAI client support for token-based authentication and automatic token refresh without the need to take a dependency on a separate Azure OpenAI client.
  • Chat completions calls with models from other providers, like DeepSeek and Grok, that support the v1 chat completions syntax.

Access to new API calls that are still in preview is controlled by passing feature-specific preview headers, allowing you to opt in to the features you want without having to swap API versions. Alternatively, some features indicate preview status through their API path and don't require an additional header.

Examples:

  • /openai/v1/evals is in preview and requires passing an "aoai-evals":"preview" header.
  • /openai/v1/fine_tuning/alpha/graders/ is in preview and requires no custom header due to the presence of alpha in the API path.
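As a minimal sketch of the header-based opt-in, the preview header can be attached when constructing the client so that it's sent on every request (the resource name below is a placeholder):

import os
from openai import OpenAI

# Sketch: opt in to the preview evals feature by sending its
# feature-specific preview header with every request.
client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    default_headers={"aoai-evals": "preview"},
)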

For the initial v1 Generally Available (GA) API launch, we're only supporting a subset of the inference and authoring API capabilities. All GA features are supported for use in production. We'll rapidly add support for more capabilities.

Code changes

v1 API

Python v1 examples

API Key:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/"
)

response = client.responses.create(   
  model="gpt-4.1-nano", # Replace with your model deployment name 
  input="This is a test.",
)

print(response.model_dump_json(indent=2))

  • OpenAI() client is used instead of AzureOpenAI().
  • base_url passes the Azure OpenAI endpoint and /openai/v1 is appended to the endpoint address.
  • api-version is no longer a required parameter with the v1 GA API.

API Key with environment variables set for OPENAI_BASE_URL and OPENAI_API_KEY:

client = OpenAI()
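A minimal sketch of what the client picks up, assuming both variables are set (typically you export them in your shell; they're set in code here only for illustration, with placeholder values):

import os
from openai import OpenAI

# Sketch: with no arguments, OpenAI() reads OPENAI_API_KEY and OPENAI_BASE_URL
# from the environment.
os.environ["OPENAI_BASE_URL"] = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/"
os.environ["OPENAI_API_KEY"] = "YOUR-API-KEY"

client = OpenAI()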

Microsoft Entra ID:

Important

Automatic token refresh was previously handled through the AzureOpenAI() client. The v1 API removes this dependency by adding automatic token refresh support to the OpenAI() client.

from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider
)

response = client.responses.create(
    model="gpt-4.1-nano",
    input="This is a test"
)

print(response.model_dump_json(indent=2))

  • base_url passes the Azure OpenAI endpoint and /openai/v1 is appended to the endpoint address.
  • api_key parameter is set to token_provider, enabling automatic retrieval and refresh of an authentication token instead of using a static API key.

Model support

For Azure OpenAI models, we recommend using the Responses API. However, the v1 API also allows you to make chat completions calls with models from other providers, like DeepSeek and Grok, that support the OpenAI v1 chat completions syntax.

base_url will accept both https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/ and https://YOUR-RESOURCE-NAME.services.ai.azure.com/openai/v1/ formats.

from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = OpenAI(
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
    api_key=token_provider,
)
completion = client.chat.completions.create(
  model="grok-3-mini", # Replace with your model deployment name.
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the attention is all you need paper"}
  ]
)

#print(completion.choices[0].message)
print(completion.model_dump_json(indent=2))

v1 API support

Status

Generally Available features are supported for use in production.

| API Path | Status |
|---|---|
| /openai/v1/chat/completions | Generally Available |
| /openai/v1/embeddings | Generally Available |
| /openai/v1/evals | Preview |
| /openai/v1/files | Generally Available |
| /openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy | Preview |
| /openai/v1/fine_tuning/alpha/graders/ | Preview |
| /openai/v1/fine_tuning/ | Generally Available |
| /openai/v1/models | Generally Available |
| /openai/v1/responses | Generally Available |
| /openai/v1/vector_stores | Generally Available |

Preview headers

| API Path | Header |
|---|---|
| /openai/v1/evals | "aoai-evals": "preview" |
| /openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy | "aoai-copy-ft-checkpoints": "preview" |

Changes between v1 preview release and 2025-04-01-preview

  • v1 preview API
  • Video generation support
  • NEW Responses API features:
    • Remote Model Context Protocol (MCP) servers tool integration
    • Support for asynchronous background tasks
    • Encrypted reasoning items
    • Image generation
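As one illustration of the background task support, here's a minimal sketch that starts a response asynchronously and polls until it finishes (resource and deployment names are placeholders; parameter availability depends on the preview surface you're using):

import os
import time
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/",
)

# Sketch: start a long-running response in the background, then poll its status.
response = client.responses.create(
    model="gpt-4.1-nano",  # Replace with your model deployment name
    input="Write a detailed summary of the attention is all you need paper.",
    background=True,
)

while response.status in ("queued", "in_progress"):
    time.sleep(2)
    response = client.responses.retrieve(response.id)

print(response.status)
print(response.output_text)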

Changes between 2025-04-01-preview and 2025-03-01-preview

Changes between 2025-03-01-preview and 2025-02-01-preview

Changes between 2025-02-01-preview and 2025-01-01-preview

Changes between 2025-01-01-preview and 2024-12-01-preview

Changes between 2024-12-01-preview and 2024-10-01-preview

Changes between 2024-09-01-preview and 2024-08-01-preview

  • max_completion_tokens added to support o1-preview and o1-mini models. max_tokens doesn't work with the o1 series models.
  • parallel_tool_calls added.
  • completion_tokens_details & reasoning_tokens added.
  • stream_options & include_usage added.
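A minimal sketch combining two of these parameters, max_completion_tokens and stream_options with include_usage (resource and deployment names are placeholders):

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    api_version="2024-09-01-preview",
)

# Sketch: stream a chat completion, cap the output with max_completion_tokens,
# and request a usage block on the final streamed chunk.
stream = client.chat.completions.create(
    model="gpt-4o",  # Replace with your model deployment name
    messages=[{"role": "user", "content": "Summarize the attention is all you need paper."}],
    max_completion_tokens=500,
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
    if chunk.usage:
        print("\n", chunk.usage)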

Changes between 2024-07-01-preview and 2024-08-01-preview API specification

Changes between 2024-05-01-preview and 2024-07-01-preview API specification

Changes between 2024-04-01-preview and 2024-05-01-preview API specification

Changes between 2024-03-01-preview and 2024-04-01-preview API specification

Latest GA API release

Azure OpenAI API version 2024-10-21 is currently the latest GA API release. This API version is the replacement for the previous 2024-06-01 GA API release.
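If you're pinning to this GA release rather than the v1 surface, here's a minimal sketch using the classic AzureOpenAI() client (resource and deployment names are placeholders):

import os
from openai import AzureOpenAI

# Sketch: the classic Azure client still requires an explicit api-version.
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="gpt-4o",  # Replace with your model deployment name
    messages=[{"role": "user", "content": "This is a test."}],
)
print(response.choices[0].message.content)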

Known issues

  • The 2025-04-01-preview Azure OpenAI spec uses OpenAPI 3.1. It's a known issue that OpenAPI 3.1 isn't currently fully supported by Azure API Management.

Next steps