I was previously approved to use models like gpt-5-codex and sora-2, and everything worked fine through the Responses API.
After reading that OpenAI’s Codex could be accessed through Azure as a provider, I attempted to configure that setup but couldn’t get it to work. I then discovered that I could use the Azure AI Foundry extension for VS Code to run gpt-5-codex through GitHub Copilot, so I set that up next.
During configuration, I ran into a hub-related error. Thinking it might be easier to start fresh, I deleted my existing hub and created a new resource group and hub. I suspect this is where things started to break.
Now, whenever I try to deploy any research models (released models still work fine), two things happen:
- A new Foundry resource is automatically created, and
- The model is deployed only to Sweden Central; it shows no quota in any other region.
This feels like it might be a quota or resource-linking issue, but here’s the odd part: if I go into the Chat Playground in eastus2, click Create New Deployment, and select gpt-5-pro, the interface hangs indefinitely on “Loading resources and quota”.
From my understanding, everything should be tied to the subscription ID, so deleting the hub shouldn’t have broken the linkage, yet something important seems to have become disconnected.
Has anyone run into this before or know what the correct recovery steps might be to get the research model deployments working again?