CoherenceEvaluator Class
Evaluates the coherence score for a given query and response, or for a multi-turn conversation, including reasoning for the score.
The coherence measure assesses the ability of the language model to generate text that reads naturally, flows smoothly, and resembles human-like language in its responses. Use it when assessing the readability and user-friendliness of a model's generated responses in real-world applications.
Note
To align with our support of a diverse set of models, an output key without the gpt_ prefix has been added.
To maintain backwards compatibility, the old key with the gpt_ prefix is still present in the output;
however, it is recommended to use the new key moving forward, as the old key will be deprecated in the future.
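As an illustrative sketch, a result dictionary may therefore carry both keys during the deprecation period (the exact fields and values below are assumptions; consult the output of your SDK version):

```python
# Hypothetical result shape: both the new key and the legacy gpt_-prefixed
# key are present during the deprecation period (values are illustrative).
coherence_result = {
    "coherence": 4.0,            # new, model-agnostic key (preferred)
    "gpt_coherence": 4.0,        # legacy key, kept for backwards compatibility
    "coherence_reason": "...",   # reasoning behind the score
    "coherence_result": "pass",  # pass/fail relative to the threshold
    "coherence_threshold": 3,
}
```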
Constructor
CoherenceEvaluator(model_config, *, threshold=3, credential=None)
Parameters
| Name | Description |
|---|---|
| model_config | Required. Configuration for the Azure OpenAI model. |
| threshold | The threshold for the coherence evaluator. Optional; default is 3. |
Keyword-Only Parameters
| Name | Description |
|---|---|
| threshold | Default value: 3 |
| credential | Default value: None |
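When an API key is not available, the credential parameter can be supplied instead. A minimal sketch, assuming keyless Microsoft Entra ID authentication via DefaultAzureCredential (whether your endpoint accepts token-based authentication is an assumption here):

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import CoherenceEvaluator

# Model configuration without an api_key; authentication is delegated to
# the credential passed to the evaluator (assumes keyless auth is enabled).
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
}

evaluator = CoherenceEvaluator(
    model_config=model_config,
    credential=DefaultAzureCredential(),
)
```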
Examples
Initialize a CoherenceEvaluator with a custom threshold and call it with a query and response.
```python
import os
from azure.ai.evaluation import CoherenceEvaluator

# Configure the Azure OpenAI model used to grade coherence.
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.environ.get("AZURE_OPENAI_KEY"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
}

# Initialize with a custom threshold of 2 (the default is 3).
coherence_evaluator = CoherenceEvaluator(model_config=model_config, threshold=2)
coherence_result = coherence_evaluator(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(
    f"Coherence Score: {coherence_result['coherence']}, "
    f"Result: {coherence_result['coherence_result']}, "
    f"Threshold: {coherence_result['coherence_threshold']}"
)
```
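As described above, the evaluator also accepts a multi-turn conversation. A minimal sketch, assuming a conversation input format of a dict with a messages list of role/content entries (the exact schema and the per-turn aggregation behavior are assumptions to verify against your SDK version):

```python
# Sketch: evaluating a multi-turn conversation instead of a single
# query/response pair (the conversation schema here is an assumption).
conversation = {
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris is the capital of France."},
        {"role": "user", "content": "What language is spoken there?"},
        {"role": "assistant", "content": "French is the official language of France."},
    ]
}
conversation_result = coherence_evaluator(conversation=conversation)
print(conversation_result)
```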
Attributes
id
Evaluator identifier. Experimental; intended to be used only with evaluation in the cloud.
id = 'azureai://built-in/evaluators/coherence'