Server error occurs in Azure AI Foundry when generating graphs using code_interpreter
We are encountering an issue when executing Python code with code_interpreter in Azure AI Foundry.
When running a graph generation task (e.g., using matplotlib), one of the following occurs:
- A server error is returned:
  Run failed: {'code': 'server_error', 'message': 'Sorry, something went wrong.'}
- Sometimes no explicit error appears, but the graph is not displayed. Occasionally, messages like "Resource usage restricted" or "Request limit exceeded" are returned.
Even when the error occurs, the Run data includes an image ID, though no corresponding file exists in the file list.
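For reference, the graph code itself is trivial. A minimal example of the kind of script we ask code_interpreter to execute (the data, title, and filename here are placeholders, not our actual workload):

```python
# Minimal graph-generation script of the kind code_interpreter runs.
# Any matplotlib savefig call like this triggers the behavior described above.
import matplotlib
matplotlib.use("Agg")  # headless backend, as in the sandbox environment
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 4, 1])
ax.set_title("sample chart")
fig.savefig("chart.png")  # the Run reports an image ID for this output
```

The script completes and produces the PNG, which is why it looks like generation succeeds and the failure happens afterward, when the file should be uploaded and referenced.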
Important note
Existing projects created earlier continue to work normally, but the issue occurs consistently in every newly created project, regardless of configuration or region.
It seems that the image generation itself succeeds, but something fails afterward — possibly during the backend storage upload or image reference process.
Has anyone else experienced this issue recently? Could this be a known limitation or temporary backend issue in Azure AI Foundry? Any advice or workaround would be appreciated.
Environment:
- Model: GPT-4o
- Feature: Agent + code_interpreter
- Reproducibility: 100% (in newly created projects only)
- Tested regions: Japan East, East US, East US 2