Real-time inference endpoint deployment for Azure ML Designer pipeline fails.

Gao, Kevin 20 Reputation points
2025-09-11T04:52:08.34+00:00

Workspace: mis-101

Subscription ID: Azure subscription 1

Region: East (may be West)

Issue: Real-time inference endpoint deployment for Azure ML Designer pipeline fails.

Endpoint name: <endpoint-deploye-mis101-1>

Deployment state: Transitioning → Failed

REST endpoint shows: null (never becomes available)

What I did:

  1. Trained model in Designer, created Real-time inference pipeline.
  2. Minimal graph: Web Service Input → Score Model (MD-FAST-... as Trained model) → Web Service Output.
  3. Ran pipeline on compute; then Deploy to ACI.
  4. Deployment failed. Expect endpoint to be Healthy and testable.

Error evidence:

Under notification, I received this: Deployment "endpoint-deploye-mis101-1" creation failed

When I click on Details, the log spinner runs for a very long time but never displays anything.

What I tried:

  • Removed Dataset/Evaluate nodes; only 4 blocks remain.
  • When I clicked Validate, this error was displayed: "Please add another input to the component's input port if it is connected with WebServiceInput component." I could not get past this error. I tried inserting a "Select Columns in Dataset" module between "Web Service Input" and "Score Model", but that did not resolve the issue.

Please advise on two things:

  1. Why does the inference endpoint deployment fail, and how can I resolve it?
  2. How do I resolve the validation error "Please add another input to the component's input port if it is connected with WebServiceInput component."?

Thanks,

Kevin

Azure Machine Learning

Answer accepted by question author
  1. Manas Mohanty 11,700 Reputation points Microsoft External Staff Moderator
    2025-10-16T14:31:05.8166667+00:00

    Hi Gao, Kevin

    Glad to see that you were able to get a healthy ACI- and AKS-based endpoint on the previous call.

    (The issue as reported is now resolved.)

    I suggested using Canada East or another region, since you had some difficulties in West US.

    Your next requirement was a UI for testing input on the endpoint itself, which is not available for ACI/AKS-based endpoints.

    We can locate the registered model in the model registry, take the scoring script from the Outputs + logs tab, and modify it (please create a separate thread for that).

    If you are looking to test input in the AML Studio UI only, I would suggest using AutoML.

    Thank you for providing inputs and for your patience so far.
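    As a rough illustration of that approach, the registered model can be located and downloaded with the v1 Python SDK (azureml-core). The model name "MD-FAST-model" and the placeholder IDs below are assumptions for this sketch, not values confirmed in the thread:

    ```python
    from azureml.core import Workspace, Model

    # Connect to the workspace (placeholders; fill in your own IDs)
    ws = Workspace.get(
        name="mis-101",
        subscription_id="<subscription-id>",
        resource_group="<resource-group>",
    )

    # Locate the registered model by name ("MD-FAST-model" is hypothetical)
    model = Model(ws, name="MD-FAST-model")
    print(model.name, model.version)

    # Download it locally so the scoring script can be inspected and modified
    model.download(target_dir="./model", exist_ok=True)
    ```

    This requires a live workspace and credentials, so it is a sketch of the workflow rather than something runnable as-is.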


1 additional answer

  1. Alex Burlachenko 18,390 Reputation points Volunteer Moderator
    2025-09-11T07:38:48.71+00:00

    Kevin, this is a classic Designer headache. That error message is so confusing, right ))

    The problem is almost certainly your graph structure. The Designer is very picky about how the real-time inference pipeline is built. The error "please add another input..." is its way of saying the connection between "Web Service Input" and "Score Model" isn't quite right.

    The "Score Model" module expects two inputs: the trained model itself and the new data to score. Your "Web Service Input" only sends one thing, the new data. The trained model is already connected, but the pipeline needs to be wired a specific way for the endpoint to understand it.

    You don't need to add another box. Try this instead:

    1. Make sure your trained model is connected to the leftmost port on the "Score Model" module. This is the model input.

    2. Connect your "Web Service Input" to the rightmost port on the "Score Model" module. This is the data input.

    It's a subtle thing, but the port order matters a lot to the Designer. If they are swapped, it gets confused and throws that vague error.

    Also, it's worth looking into the compute target for the endpoint. ACI is fine for testing, but make sure it's sized correctly; deployment sometimes fails because the container doesn't have enough memory to start.
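    If you end up deploying from the SDK instead of the Designer UI, the container size can be set explicitly. A minimal sketch with the v1 SDK (azureml-core); the sizes and description below are assumptions, not values from this thread:

    ```python
    from azureml.core.webservice import AciWebservice

    # Give the container enough room to start; 1 core / 2 GB is a
    # reasonable floor for a Designer-trained model (sizes are assumptions)
    aci_config = AciWebservice.deploy_configuration(
        cpu_cores=1,
        memory_gb=2,
        description="Designer real-time inference endpoint (example)",
    )
    # Pass aci_config as deployment_config when calling Model.deploy(...)
    ```

    This is a configuration fragment only; the actual deployment call needs a workspace, model, and inference config.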

    If the validation error persists, try a classic trick: delete the "Score Model" module and add a fresh one. Then carefully reconnect the trained model to the left port and "Web Service Input" to the right port, and finally connect "Web Service Output" to the output. This often resets any odd internal state.

    Getting those ports right will help in your other Designer pipelines too.

    For the deployment logs, check Azure Machine Learning studio: go to Endpoints, select your endpoint, and look at the Deployment logs tab. The notification spinner often fails, but the logs there may show the real error about why the container failed to build or start.

    Hope this gets your endpoint healthy! Those Designer quirks can be tricky, but you will get it.
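    When the portal's log viewer hangs like that, the same container logs can usually be pulled with the v1 SDK. A sketch, assuming the endpoint name from the question and a local config.json for the workspace:

    ```python
    from azureml.core import Workspace
    from azureml.core.webservice import Webservice

    # Assumes a config.json downloaded from the workspace's overview page
    ws = Workspace.from_config()

    # Attach to the failed deployment by name and dump its container logs
    service = Webservice(ws, name="endpoint-deploye-mis101-1")
    print(service.state)       # e.g. "Failed"
    print(service.get_logs())  # build/startup output with the real error
    ```

    This needs live credentials for the workspace, so treat it as a recipe rather than something runnable offline.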

    Best regards,

    Alex

    And a "yes" if you would follow me on Q&A, personally, thanks.
    P.S. If my answer helped you, please Accept it.
    

    https://ctrlaltdel.blog/

