SweepJob Class 
Sweep job for hyperparameter tuning.
Note
For sweep jobs, inputs, outputs, and parameters are accessible as environment variables using the prefix
AZUREML_SWEEP_. For example, if you have a parameter named "learning_rate", you can access it as
AZUREML_SWEEP_learning_rate.
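For illustration, a minimal sketch of how a trial script could read such a value (assuming a search-space parameter named "learning_rate", as in the note above):

   import os

   # Swept values are exposed to the trial process as environment variables
   # prefixed with AZUREML_SWEEP_; fall back to a default for local runs.
   learning_rate = float(os.environ.get("AZUREML_SWEEP_learning_rate", "0.01"))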
Constructor
SweepJob(*, name: str | None = None, description: str | None = None, tags: Dict | None = None, display_name: str | None = None, experiment_name: str | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict | None = None, compute: str | None = None, limits: SweepJobLimits | None = None, sampling_algorithm: str | SamplingAlgorithm | None = None, search_space: Dict[str, Choice | LogNormal | LogUniform | Normal | QLogNormal | QLogUniform | QNormal | QUniform | Randint | Uniform] | None = None, objective: Objective | None = None, trial: CommandJob | CommandComponent | None = None, early_termination: EarlyTerminationPolicy | BanditPolicy | MedianStoppingPolicy | TruncationSelectionPolicy | None = None, queue_settings: QueueSettings | None = None, resources: dict | JobResourceConfiguration | None = None, **kwargs: Any)
Keyword-Only Parameters
| Name | Description |
|---|---|
| name | Name of the job. Default value: None |
| display_name | Display name of the job. Default value: None |
| description | Description of the job. Default value: None |
| tags | Tag dictionary. Tags can be added, removed, and updated. Default value: None |
| properties | The asset property dictionary. |
| experiment_name | Name of the experiment the job will be created under. If None is provided, the job will be created under the experiment 'Default'. Default value: None |
| identity | Identity that the training job will use while running on compute. One of ManagedIdentityConfiguration, AmlTokenConfiguration, or UserIdentityConfiguration. Default value: None |
| inputs | Inputs to the command. Default value: None |
| outputs | Mapping of output data bindings used in the job. Default value: None |
| sampling_algorithm | The hyperparameter sampling algorithm to use over the search_space. Defaults to "random". Default value: None |
| search_space | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. Default value: None |
| objective | Metric to optimize for. Default value: None |
| compute | The compute target the job runs on. Default value: None |
| trial | The job configuration for each trial. Each trial will be provided with a different combination of hyperparameter values that the system samples from the search_space. Default value: None |
| early_termination | The early termination policy to use. A trial job is canceled when the criteria of the specified policy are met. If omitted, no early termination policy will be applied. Default value: None |
| limits | Limits for the sweep job (SweepJobLimits). Default value: None |
| queue_settings | Queue settings for the job. Default value: None |
| resources | Compute resource configuration for the job. Default value: None |
Examples
Creating a SweepJob
   from azure.ai.ml.entities import CommandJob
   from azure.ai.ml.sweep import BayesianSamplingAlgorithm, Choice, Objective, SweepJob, SweepJobLimits

   # cpu_cluster and job_env are assumed to be an existing compute target and
   # Environment created earlier in the workspace.
   command_job = CommandJob(
       inputs=dict(kernel="linear", penalty=1.0),
       compute=cpu_cluster,
       environment=f"{job_env.name}:{job_env.version}",
       code="./scripts",
       command="python scripts/train.py --kernel $kernel --penalty $penalty",
       experiment_name="sklearn-iris-flowers",
   )

   sweep = SweepJob(
       sampling_algorithm=BayesianSamplingAlgorithm(),
       trial=command_job,
       search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
       inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},  # type: ignore
       compute="top_level",
       limits=SweepJobLimits(trial_timeout=600),
       objective=Objective(goal="maximize", primary_metric="accuracy"),
   )
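The snippet above only builds the job definition. A minimal sketch of submitting it, assuming hypothetical workspace details for the MLClient:

   from azure.ai.ml import MLClient
   from azure.identity import DefaultAzureCredential

   # Placeholder workspace details; replace with your own.
   ml_client = MLClient(
       DefaultAzureCredential(),
       subscription_id="<subscription-id>",
       resource_group_name="<resource-group>",
       workspace_name="<workspace-name>",
   )
   returned_sweep_job = ml_client.jobs.create_or_update(sweep)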
Methods
| Name | Description |
|---|---|
| dump | Dumps the job content into a file in YAML format. |
| set_limits | Set limits for the sweep node. Leave parameters as None if you don't want to update the corresponding values. |
| set_objective | Set the sweep objective. Leave parameters as None if you don't want to update the corresponding values. |
| set_resources | Set resources for the sweep job. |
dump
Dumps the job content into a file in YAML format.
dump(dest: str | PathLike | IO, **kwargs: Any) -> None
Parameters
| Name | Description |
|---|---|
| dest (Required) | The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly. |
Exceptions
| Type | Description |
|---|---|
| | Raised if dest is a file path and the file already exists. |
| | Raised if dest is an open file and the file is not writable. |
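For example, a minimal usage sketch (the output path is arbitrary):

   # Serialize the sweep job definition to a local YAML file.
   sweep.dump("./sweep_job.yml")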
			
set_limits
Set limits for the sweep node. Leave parameters as None if you don't want to update the corresponding values.
set_limits(*, max_concurrent_trials: int | None = None, max_total_trials: int | None = None, timeout: int | None = None, trial_timeout: int | None = None) -> None
Keyword-Only Parameters
| Name | Description |
|---|---|
| max_concurrent_trials | Maximum number of concurrent trials. Default value: None |
| max_total_trials | Maximum total number of trials. Default value: None |
| timeout | Total timeout in seconds for the sweep node. Default value: None |
| trial_timeout | Timeout in seconds for each trial. Default value: None |
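For example, continuing with the sweep object from the example above (the limit values are illustrative):

   # Only the limits passed here are updated; omitted limits keep their current values.
   sweep.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=7200)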
set_objective
Set the sweep objective. Leave parameters as None if you don't want to update the corresponding values.
set_objective(*, goal: str | None = None, primary_metric: str | None = None) -> None
Keyword-Only Parameters
| Name | Description |
|---|---|
| goal | Defines the supported metric goals for hyperparameter tuning. Acceptable values are: "minimize" and "maximize". Default value: None |
| primary_metric | Name of the metric to optimize. Default value: None |
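For example, to maximize a metric named "accuracy" logged by the trial script:

   # The metric name must match what the trial script logs.
   sweep.set_objective(goal="maximize", primary_metric="accuracy")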
set_resources
Set resources for the sweep job.
set_resources(*, instance_type: str | List[str] | None = None, instance_count: int | None = None, locations: List[str] | None = None, properties: Dict | None = None, docker_args: str | None = None, shm_size: str | None = None) -> None
Keyword-Only Parameters
| Name | Description |
|---|---|
| instance_type | The instance type to use for the job. Default value: None |
| instance_count | The number of instances to use for the job. Default value: None |
| locations | The locations to use for the job. Default value: None |
| properties | The properties for the job. Default value: None |
| docker_args | The Docker arguments for the job. Default value: None |
| shm_size | The shared memory size for the job. Default value: None |
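For example (the VM size shown is illustrative):

   # Run trials on two instances of the given VM size.
   sweep.set_resources(instance_type="STANDARD_DS3_V2", instance_count=2)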
Attributes
base_path
creation_context
The creation context of the resource.
Returns
| Type | Description |
|---|---|
| | The creation metadata for the resource. |
early_termination
Early termination policy for sweep job.
Returns
| Type | Description |
|---|---|
| EarlyTerminationPolicy | Early termination policy for sweep job. |
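A policy can also be attached after construction; a minimal sketch using MedianStoppingPolicy (the interval values are illustrative):

   from azure.ai.ml.sweep import MedianStoppingPolicy

   # Stop poorly performing trials based on a running median of the primary metric.
   sweep.early_termination = MedianStoppingPolicy(delay_evaluation=5, evaluation_interval=2)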
		
id
inputs
limits
log_files
outputs
resources
sampling_algorithm
Sampling algorithm for sweep job.
Returns
| Type | Description |
|---|---|
| | Sampling algorithm for sweep job. |
status
The status of the job.
Common values returned include "Running", "Completed", and "Failed". All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully canceled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
Returns
| Type | Description |
|---|---|
| | Status of the job. |