Can Voice Assistant UMD be extended to Sensor Class Extension to reuse the presence monitoring service path?

jianbin zhang 20 Reputation points
2025-04-29T05:21:15.8366667+00:00

Hi MSFT team,

I want to ask: can a Voice Assistant UMD act as a sensor built on the Sensor Class Extension, so it can reuse the presence monitoring service path to let the device enter/exit modern standby mode?

Azure IoT SDK
An Azure software development kit that facilitates building applications that connect to Azure IoT services.

1 answer

Sort by: Most helpful
  1. Amira Bedhiafi 39,341 Reputation points Volunteer Moderator
    2025-09-11T16:38:08.36+00:00

    Hello Jianbin !

    Thank you for posting on Microsoft Learn Q&A.

    A voice assistant UMDF driver and a human presence sensor are two different Windows stacks, with different device interfaces and services behind them. Windows won't treat a voice assistant UMD as a presence sensor, so you can't extend it to reuse the presence-monitoring pipeline for Modern Standby.

    https://free.blessedness.top/en-us/windows-hardware/drivers/audio/voice-activation

    Presence sensing in Windows 11 is implemented via the sensors platform and the human presence sensor type. The Windows sensor service consumes that sensor both in normal operation and during modern standby, using it to drive experiences like lock on leave and wake on approach. That pipeline is independent of audio or voice activation. https://free.blessedness.top/en-us/windows-hardware/drivers/ddi/sensorsclassextension/nn-sensorsclassextension-isensorclassextension
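
    To illustrate the separation, a presence sensor driver is installed under the Sensor device class and loads the sensors class extension as a UMDF extension, which is what connects it to the sensor service. Here is a minimal INF sketch in the spirit of the public Microsoft sensor driver samples; the service name, file name, and section names are placeholders I made up, not values from the question:

    ```inf
    ; Hypothetical INF fragment for a UMDF v2 human-presence sensor driver.
    ; Driver/service names and paths are placeholders for illustration only.

    [Version]
    Signature = "$WINDOWS NT$"
    Class     = Sensor
    ClassGuid = {5175D334-C371-4806-B3BA-71FD53C9258D}

    [HpdDevice_Install.NT.Wdf]
    UmdfService      = HpdSensorDriver, HpdDriver_Install
    UmdfServiceOrder = HpdSensorDriver
    ; Loading the Sensors class extension is what plugs the driver into
    ; the Windows sensor service (the presence-monitoring path).
    UmdfExtensions   = SensorsCx0102

    [HpdDriver_Install]
    UmdfLibraryVersion = 2.15
    ServiceBinary      = %13%\HpdSensorDriver.dll
    ```

    Note the `UmdfExtensions` line: a voice assistant UMD has no equivalent hook into SensorsCx, which is why the two stacks cannot be merged.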

    Voice activation, by contrast, lives in the audio driver stack and the keyword spotting path. It is designed to arm a wake word so that, when the DSP or hardware detects it, the system wakes or audio is routed, and again this happens via the audio subsystem, not SensorsCx. https://free.blessedness.top/en-us/windows-hardware/drivers/audio/voice-activation

    I am not an expert in this subject, but from what I understood after reading the documentation, you can keep your voice assistant UMD in the audio/voice-activation stack for wake word scenarios.

    Then expose a separate human presence device that implements the sensors class extension (UMDF or KMDF) and reports human presence state to the sensor service. If your hardware is a single module, you need to create two child devnodes (one audio, one sensor).
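
    One common way to get those two devnodes from a single module is to describe two child devices in ACPI, so each function enumerates separately and binds to its own driver stack. A hypothetical ASL sketch (all device names and `_HID` values below are placeholders, not real hardware IDs):

    ```asl
    // Hypothetical ACPI (ASL) sketch: one physical module exposed as two
    // child devices so the audio and sensor stacks can bind independently.
    Device (MODL)                    // parent: the combined voice/presence module
    {
        Name (_HID, "VEND0000")

        Device (AUDI)                // child 1: voice-assistant / audio function
        {
            Name (_HID, "VEND0001")  // matched by the audio/voice-activation driver
        }

        Device (HPD0)                // child 2: human presence sensor function
        {
            Name (_HID, "VEND0002")  // matched by the SensorsCx presence driver
        }
    }
    ```

    With this layout, the voice-activation driver and the presence sensor driver each own their devnode, and the sensor service consumes only the presence child for Modern Standby entry/exit.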

