
Quickstart: Generate a video with Sora (preview)

In this quickstart, you generate video clips using the Azure OpenAI service. The example uses the Sora model, a video generation model that creates realistic and imaginative video scenes from text instructions, optionally combined with image or video inputs. This guide shows you how to create a video generation job, poll for its status, and retrieve the generated video.

For more information on video generation, see Video generation concepts.

Prerequisites

Go to Azure AI Foundry portal

Browse to the Azure AI Foundry portal and sign in with the credentials associated with your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.

From the Azure AI Foundry landing page, create a new project or select an existing one. Navigate to the Models + endpoints page in the left navigation pane. Select Deploy model, and then choose the Sora video generation model from the list. Complete the deployment process.

On the model's page, select Open in playground.

Try out video generation

Start exploring Sora video generation with a no-code approach through the Video playground. Enter your prompt into the text box and select Generate. When the AI-generated video is ready, it appears on the page.

Note

The content generation APIs come with a content moderation filter. If Azure OpenAI recognizes your prompt as harmful content, it doesn't return a generated video. For more information, see Content filtering.

In the Video playground, you can also view Python and cURL code samples, which are prefilled according to your settings. Select the code button at the top of your video playback pane. You can use this code to write an application that completes the same task.

Prerequisites

Microsoft Entra ID prerequisites

For the recommended keyless authentication with Microsoft Entra ID, you need to:

  • Install the Azure CLI used for keyless authentication with Microsoft Entra ID.
  • Assign the Cognitive Services User role to your user account. You can assign roles in the Azure portal under Access control (IAM) > Add role assignment.
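
After you complete the Set up steps later in this quickstart, you can confirm that keyless authentication works before running the full script. The following is a minimal sketch, assuming you've signed in with az login and installed the azure-identity package; it acquires a token for the same scope that the quickstart script uses later.

    from azure.identity import DefaultAzureCredential

    # Acquire a token for the Azure AI services scope used by Azure OpenAI.
    # This succeeds only if you're signed in (for example, with `az login`)
    # and your account has the required role assignment.
    credential = DefaultAzureCredential()
    token = credential.get_token("https://cognitiveservices.azure.com/.default")
    print("Token acquired; expires at (Unix time):", token.expires_on)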

Set up

  1. Create a new folder video-generation-quickstart and go to the quickstart folder with the following command:

    mkdir video-generation-quickstart && cd video-generation-quickstart
    
  2. Create a virtual environment. If you already have Python 3.10 or higher installed, you can create a virtual environment using the following commands:

    py -3 -m venv .venv
    .venv\scripts\activate
    

    Activating the Python environment means that when you run python or pip from the command line, you use the Python interpreter contained in the .venv folder of your application. You can use the deactivate command to exit the Python virtual environment and reactivate it later when needed.

    Tip

    We recommend that you create and activate a new Python environment to use to install the packages you need for this tutorial. Don't install packages into your global Python installation. Always use a virtual or conda environment when installing Python packages; otherwise, you can break your global installation of Python.

  3. For the recommended keyless authentication with Microsoft Entra ID, install the azure-identity package. The quickstart script also uses the requests library to call the REST API, so install both:

    pip install requests azure-identity
    

Retrieve resource information

You need to retrieve the following information to authenticate your application with your Azure OpenAI resource:

  • AZURE_OPENAI_ENDPOINT: You can find this value in the Keys and Endpoint section when examining your resource from the Azure portal.
  • AZURE_OPENAI_DEPLOYMENT_NAME: This value corresponds to the custom name you chose for your deployment when you deployed a model. You can find it under Resource Management > Model Deployments in the Azure portal.
  • OPENAI_API_VERSION: The API version to use. Learn more about API Versions.

You can change the version in code or use an environment variable.

Learn more about keyless authentication and setting environment variables.
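
For example, a script can read these values from environment variables before making any calls. The following is a minimal sketch; the fallback values for the deployment name ("sora") and API version ("preview") are assumptions that match the values used later in this quickstart.

    import os

    # Read resource information from environment variables.
    endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # for example, https://<your-resource-name>.openai.azure.com/
    deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "sora")  # the custom name of your deployment
    api_version = os.getenv("OPENAI_API_VERSION", "preview")  # can also be set directly in code

    print(f"Endpoint: {endpoint}")
    print(f"Deployment: {deployment_name}")
    print(f"API version: {api_version}")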

Generate video with Sora

You can generate a video with the Sora model by creating a video generation job, polling for its status, and retrieving the generated video. The following code shows how to do this via the REST API using Python.

  1. Create the sora-quickstart.py file and add the following code to authenticate your resource:

    import requests
    import time  # used to pause between status polls
    import os
    from azure.identity import DefaultAzureCredential
    
    # Set environment variables or edit the corresponding values here.
    endpoint = os.environ['AZURE_OPENAI_ENDPOINT']
    
    # Keyless authentication
    credential = DefaultAzureCredential()
    token = credential.get_token("https://cognitiveservices.azure.com/.default")
    
    api_version = 'preview'
    headers = {"Authorization": f"Bearer {token.token}", "Content-Type": "application/json"}
    
  2. Create the video generation job. You can create it from a text prompt only, or from an input image and text prompt.

    # 1. Create a video generation job
    create_url = f"{endpoint}/openai/v1/video/generations/jobs?api-version={api_version}"
    body = {
        "prompt": "A cat playing piano in a jazz bar.",
        "width": 480,
        "height": 480,
        "n_seconds": 5,
        "model": "sora"
    }
    response = requests.post(create_url, headers=headers, json=body)
    response.raise_for_status()
    print("Full response JSON:", response.json())
    job_id = response.json()["id"]
    print(f"Job created: {job_id}")
    
    # 2. Poll for job status
    status_url = f"{endpoint}/openai/v1/video/generations/jobs/{job_id}?api-version={api_version}"
    status = None
    while status not in ("succeeded", "failed", "cancelled"):
        time.sleep(5)  # Wait before polling again
        status_response = requests.get(status_url, headers=headers).json()
        status = status_response.get("status")
        print(f"Job status: {status}")
    
    # 3. Retrieve generated video 
    if status == "succeeded":
        generations = status_response.get("generations", [])
        if generations:
            print("✅ Video generation succeeded.")
            generation_id = generations[0].get("id")
            video_url = f"{endpoint}/openai/v1/video/generations/{generation_id}/content/video?api-version={api_version}"
            video_response = requests.get(video_url, headers=headers)
            if video_response.ok:
                output_filename = "output.mp4"
                with open(output_filename, "wb") as file:
                    file.write(video_response.content)
                    print(f'Generated video saved as "{output_filename}"')
        else:
            raise Exception("No generations found in job result.")
    else:
        raise Exception(f"Job didn't succeed. Status: {status}")
    
  3. Run the Python file.

    python sora-quickstart.py
    

    Wait a few moments to get the response. A variant of the polling loop that adds a timeout is sketched after these steps.
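
The polling loop in step 2 waits indefinitely until the job reaches a terminal state. If you want to bound the wait, the following sketch adds a timeout; it assumes the status_url, headers, and job_id variables from step 2 and the imports at the top of sora-quickstart.py, and the 10-minute limit is an arbitrary value chosen for illustration.

    # Variant of the step 2 polling loop with a timeout.
    max_wait_seconds = 600  # assumption: give up after 10 minutes
    deadline = time.time() + max_wait_seconds

    status = None
    while status not in ("succeeded", "failed", "cancelled"):
        if time.time() > deadline:
            raise TimeoutError(f"Job {job_id} didn't finish within {max_wait_seconds} seconds.")
        time.sleep(5)  # Wait before polling again
        status_response = requests.get(status_url, headers=headers).json()
        status = status_response.get("status")
        print(f"Job status: {status}")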

Output

The output will show the full response JSON from the video generation job creation request, including the job ID and status.

{
    "object": "video.generation.job",
    "id": "task_01jwcet0eje35tc5jy54yjax5q",
    "status": "queued",
    "created_at": 1748469875,
    "finished_at": null,
    "expires_at": null,
    "generations": [],
    "prompt": "A cat playing piano in a jazz bar.",
    "model": "sora",
    "n_variants": 1,
    "n_seconds": 5,
    "height": 480,
    "width": 480,
    "failure_reason": null
}

The generated video will be saved as output.mp4 in the current directory.

Job created: task_01jwcet0eje35tc5jy54yjax5q
Job status: preprocessing
Job status: running
Job status: processing
Job status: succeeded
✅ Video generation succeeded.
Generated video saved as "output.mp4"
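
If the job ends with a failed status instead, the job object includes a failure_reason field (shown as null in the JSON above) that can help explain what went wrong. The following is a minimal sketch of surfacing it, assuming the status and status_response variables from step 2:

    if status == "failed":
        # failure_reason appears on the job object when a job doesn't complete successfully.
        reason = status_response.get("failure_reason")
        raise Exception(f"Video generation failed. Reason: {reason}")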

Clean up resources

If you want to clean up and remove an Azure OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.