# jobs
Creates, updates, deletes, gets, or lists a `jobs` resource.
## Overview
| Name | jobs |
|---|---|
| Type | Resource |
| Id | databricks_workspace.jobs.jobs |
## Fields
The following fields are returned by SELECT queries:
### get
| Name | Datatype | Description |
|---|---|---|
effective_budget_policy_id | string | The id of the budget policy used by this job for cost attribution purposes. This may be set through (in order of precedence): 1. Budget admins through the account or workspace console 2. Jobs UI in the job details page and Jobs API using `budget_policy_id` 3. Inferred default based on accessible budget policies of the run_as identity on job creation or modification. |
effective_usage_policy_id | string | The id of the usage policy used by this job for cost attribution purposes. |
job_id | integer | The canonical identifier for this job. |
creator_user_name | string | The creator user name. This field won’t be included in the response if the user has already been deleted. |
run_as_user_name | string | The email of an active workspace user or the application ID of a service principal that the job runs as. This value can be changed by setting the `run_as` field when creating or updating a job. By default, `run_as_user_name` is based on the current job settings and is set to the creator of the job if job access control is disabled or to the user with the `is_owner` permission if job access control is enabled. |
created_time | integer | The time at which this job was created in epoch milliseconds (milliseconds since 1/1/1970 UTC). |
has_more | boolean | Indicates if the job has more array properties (`tasks`, `job_clusters`) that are not shown. They can be accessed via the `jobs/get` endpoint. It is only relevant for API 2.2 `jobs/list` requests with `expand_tasks=true`. |
next_page_token | string | A token that can be used to list the next page of array properties. |
settings | object | Settings for this job and all of its runs. These settings can be updated using the `resetJob` method. |
trigger_state | object | State of the trigger associated with the job. |
### list
| Name | Datatype | Description |
|---|---|---|
effective_budget_policy_id | string | The id of the budget policy used by this job for cost attribution purposes. This may be set through (in order of precedence): 1. Budget admins through the account or workspace console 2. Jobs UI in the job details page and Jobs API using `budget_policy_id` 3. Inferred default based on accessible budget policies of the run_as identity on job creation or modification. |
effective_usage_policy_id | string | The id of the usage policy used by this job for cost attribution purposes. |
job_id | integer | The canonical identifier for this job. |
creator_user_name | string | The creator user name. This field won’t be included in the response if the user has already been deleted. |
created_time | integer | The time at which this job was created in epoch milliseconds (milliseconds since 1/1/1970 UTC). |
has_more | boolean | Indicates if the job has more array properties (`tasks`, `job_clusters`) that are not shown. They can be accessed via the `jobs/get` endpoint. It is only relevant for API 2.2 `jobs/list` requests with `expand_tasks=true`. |
settings | object | Settings for this job and all of its runs. These settings can be updated using the `resetJob` method. |
trigger_state | object | State of the trigger associated with the job. |
## Methods
The following methods are available for this resource:
| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
get | select | job_id, deployment_name | page_token | Retrieves the details for a single job. |
list | select | deployment_name | expand_tasks, limit, name, offset, page_token | Retrieves a list of jobs. |
create | insert | deployment_name |  | Create a new job. |
update | update | deployment_name, job_id |  | Add, update, or remove specific settings of an existing job. Use the Reset endpoint to overwrite all job settings. |
reset | replace | deployment_name, job_id, new_settings |  | Overwrite all settings for the given job. Use the Update endpoint to update job settings partially. |
delete | delete | deployment_name |  | Deletes a job. |
run_now | exec | deployment_name, job_id |  | Run a job and return the run_id of the triggered run. |
## Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
| Name | Datatype | Description |
|---|---|---|
deployment_name | string | The Databricks Workspace Deployment Name (default: dbc-abcd0123-a1bc) |
job_id | integer | The canonical identifier of the job to retrieve information about. This field is required. |
expand_tasks | boolean | Whether to include task and cluster details in the response. Note that only the first 100 elements will be shown. Use the `jobs/get` endpoint to paginate through all tasks and clusters. |
limit | integer | The number of jobs to return. This value must be greater than 0 and less than or equal to 100. The default value is 20. |
name | string | A filter on the list based on the exact (case insensitive) job name. |
offset | integer | The offset of the first job to return, relative to the most recently created job. Deprecated since June 2023. Use page_token to iterate through the pages instead. |
page_token | string | Use next_page_token or prev_page_token returned from the previous request to list the next or previous page of jobs respectively. |
## SELECT examples
### get
Retrieves the details for a single job.
SELECT
effective_budget_policy_id,
effective_usage_policy_id,
job_id,
creator_user_name,
run_as_user_name,
created_time,
has_more,
next_page_token,
settings,
trigger_state
FROM databricks_workspace.jobs.jobs
WHERE job_id = '{{ job_id }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
AND page_token = '{{ page_token }}'
;
### list
Retrieves a list of jobs.
SELECT
effective_budget_policy_id,
effective_usage_policy_id,
job_id,
creator_user_name,
created_time,
has_more,
settings,
trigger_state
FROM databricks_workspace.jobs.jobs
WHERE deployment_name = '{{ deployment_name }}' -- required
AND expand_tasks = '{{ expand_tasks }}'
AND limit = '{{ limit }}'
AND name = '{{ name }}'
AND offset = '{{ offset }}'
AND page_token = '{{ page_token }}'
;
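When a result set spans multiple pages, the `next_page_token` returned by one query can be fed back through the `page_token` parameter on the next. A minimal sketch, assuming a placeholder deployment name (`limit` and the token value are illustrative):

```sql
-- First page: fetch up to 100 jobs and capture next_page_token
SELECT job_id, next_page_token
FROM databricks_workspace.jobs.jobs
WHERE deployment_name = 'dbc-abcd0123-a1bc'
AND limit = '100';

-- Subsequent page: pass the token returned by the previous query
SELECT job_id, next_page_token
FROM databricks_workspace.jobs.jobs
WHERE deployment_name = 'dbc-abcd0123-a1bc'
AND page_token = '<token from previous response>';
```

Repeat until the response no longer includes a `next_page_token`.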
## INSERT examples
### create
Create a new job.
INSERT INTO databricks_workspace.jobs.jobs (
access_control_list,
budget_policy_id,
continuous,
deployment,
description,
edit_mode,
email_notifications,
environments,
format,
git_source,
health,
job_clusters,
max_concurrent_runs,
name,
notification_settings,
parameters,
performance_target,
queue,
run_as,
schedule,
tags,
tasks,
timeout_seconds,
trigger,
usage_policy_id,
webhook_notifications,
deployment_name
)
SELECT
'{{ access_control_list }}',
'{{ budget_policy_id }}',
'{{ continuous }}',
'{{ deployment }}',
'{{ description }}',
'{{ edit_mode }}',
'{{ email_notifications }}',
'{{ environments }}',
'{{ format }}',
'{{ git_source }}',
'{{ health }}',
'{{ job_clusters }}',
{{ max_concurrent_runs }},
'{{ name }}',
'{{ notification_settings }}',
'{{ parameters }}',
'{{ performance_target }}',
'{{ queue }}',
'{{ run_as }}',
'{{ schedule }}',
'{{ tags }}',
'{{ tasks }}',
{{ timeout_seconds }},
'{{ trigger }}',
'{{ usage_policy_id }}',
'{{ webhook_notifications }}',
'{{ deployment_name }}'
RETURNING
job_id
;
### Manifest
# Description fields are for documentation purposes
- name: jobs
props:
- name: deployment_name
value: "{{ deployment_name }}"
description: Required parameter for the jobs resource.
- name: access_control_list
description: |
List of permissions to set on the job.
value:
- group_name: "{{ group_name }}"
permission_level: "{{ permission_level }}"
service_principal_name: "{{ service_principal_name }}"
user_name: "{{ user_name }}"
- name: budget_policy_id
value: "{{ budget_policy_id }}"
description: |
The id of the user specified budget policy to use for this job. If not specified, a default budget policy may be applied when creating or modifying the job. See `effective_budget_policy_id` for the budget policy used by this workload.
- name: continuous
description: |
An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.
value:
pause_status: "{{ pause_status }}"
task_retry_mode: "{{ task_retry_mode }}"
- name: deployment
description: |
Deployment information for jobs managed by external sources.
value:
kind: "{{ kind }}"
metadata_file_path: "{{ metadata_file_path }}"
- name: description
value: "{{ description }}"
description: |
An optional description for the job. The maximum length is 27700 characters in UTF-8 encoding.
- name: edit_mode
value: "{{ edit_mode }}"
description: |
Edit mode of the job. * `UI_LOCKED`: The job is in a locked UI state and cannot be modified. * `EDITABLE`: The job is in an editable state and can be modified.
- name: email_notifications
description: |
An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted.
value:
no_alert_for_skipped_runs: {{ no_alert_for_skipped_runs }}
on_duration_warning_threshold_exceeded:
- "{{ on_duration_warning_threshold_exceeded }}"
on_failure:
- "{{ on_failure }}"
on_start:
- "{{ on_start }}"
on_streaming_backlog_exceeded:
- "{{ on_streaming_backlog_exceeded }}"
on_success:
- "{{ on_success }}"
- name: environments
description: |
A list of task execution environment specifications that can be referenced by serverless tasks of this job. For serverless notebook tasks, if the environment_key is not specified, the notebook environment will be used if present. If a jobs environment is specified, it will override the notebook environment. For other serverless tasks, the task environment is required to be specified using environment_key in the task settings.
value:
- environment_key: "{{ environment_key }}"
spec: "{{ spec }}"
- name: format
value: "{{ format }}"
description: |
Used to tell what is the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1 this value is always set to `"MULTI_TASK"`.
- name: git_source
description: |
An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks. If `git_source` is set, these tasks retrieve the file from the remote repository by default. However, this behavior can be overridden by setting `source` to `WORKSPACE` on the task. Note: dbt and SQL File tasks support only version-controlled sources. If dbt or SQL File tasks are used, `git_source` must be defined on the job.
value:
git_url: "{{ git_url }}"
git_provider: "{{ git_provider }}"
git_branch: "{{ git_branch }}"
git_commit: "{{ git_commit }}"
git_snapshot:
used_commit: "{{ used_commit }}"
git_tag: "{{ git_tag }}"
job_source:
job_config_path: "{{ job_config_path }}"
import_from_git_branch: "{{ import_from_git_branch }}"
dirty_state: "{{ dirty_state }}"
- name: health
description: |
An optional set of health rules that can be defined for this job.
value:
rules:
- metric: "{{ metric }}"
op: "{{ op }}"
value: {{ value }}
- name: job_clusters
description: |
A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings.
value:
- job_cluster_key: "{{ job_cluster_key }}"
new_cluster: "{{ new_cluster }}"
- name: max_concurrent_runs
value: {{ max_concurrent_runs }}
description: |
An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job’s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won’t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to `0` causes all new runs to be skipped.
- name: name
value: "{{ name }}"
description: |
An optional name for the job. The maximum length is 4096 bytes in UTF-8 encoding.
- name: notification_settings
description: |
Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this job.
value:
no_alert_for_canceled_runs: {{ no_alert_for_canceled_runs }}
no_alert_for_skipped_runs: {{ no_alert_for_skipped_runs }}
- name: parameters
description: |
Job-level parameter definitions
value:
- name: "{{ name }}"
default: "{{ default }}"
- name: performance_target
value: "{{ performance_target }}"
description: |
The performance mode on a serverless job. This field determines the level of compute performance or cost-efficiency for the run. The performance target does not apply to tasks that run on Serverless GPU compute. * `STANDARD`: Enables cost-efficient execution of serverless workloads. * `PERFORMANCE_OPTIMIZED`: Prioritizes fast startup and execution times through rapid scaling and optimized cluster performance.
- name: queue
description: |
The queue settings of the job.
value:
enabled: {{ enabled }}
- name: run_as
description: |
The user or service principal that the job runs as, if specified in the request. This field indicates the explicit configuration of `run_as` for the job. To find the value in all cases, explicit or implicit, use `run_as_user_name`.
value:
group_name: "{{ group_name }}"
service_principal_name: "{{ service_principal_name }}"
user_name: "{{ user_name }}"
- name: schedule
description: |
An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
value:
quartz_cron_expression: "{{ quartz_cron_expression }}"
timezone_id: "{{ timezone_id }}"
pause_status: "{{ pause_status }}"
- name: tags
value: "{{ tags }}"
description: |
A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job.
- name: tasks
description: |
A list of task specifications to be executed by this job. It supports up to 1000 elements in write endpoints (`jobs/create`, `jobs/reset`, `jobs/update`, `jobs/submit`). Read endpoints return only 100 tasks. If more than 100 tasks are available, you can paginate through them using `jobs/get`. Use the `next_page_token` field at the object root to determine if more results are available.
value:
- task_key: "{{ task_key }}"
clean_rooms_notebook_task:
clean_room_name: "{{ clean_room_name }}"
notebook_name: "{{ notebook_name }}"
etag: "{{ etag }}"
notebook_base_parameters: "{{ notebook_base_parameters }}"
compute:
hardware_accelerator: "{{ hardware_accelerator }}"
condition_task:
op: "{{ op }}"
left: "{{ left }}"
right: "{{ right }}"
dashboard_task:
dashboard_id: "{{ dashboard_id }}"
filters: "{{ filters }}"
subscription:
custom_subject: "{{ custom_subject }}"
paused: {{ paused }}
subscribers:
- destination_id: "{{ destination_id }}"
user_name: "{{ user_name }}"
warehouse_id: "{{ warehouse_id }}"
dbt_cloud_task:
connection_resource_name: "{{ connection_resource_name }}"
dbt_cloud_job_id: {{ dbt_cloud_job_id }}
dbt_platform_task:
connection_resource_name: "{{ connection_resource_name }}"
dbt_platform_job_id: "{{ dbt_platform_job_id }}"
dbt_task:
commands:
- "{{ commands }}"
catalog: "{{ catalog }}"
profiles_directory: "{{ profiles_directory }}"
project_directory: "{{ project_directory }}"
schema: "{{ schema }}"
source: "{{ source }}"
warehouse_id: "{{ warehouse_id }}"
depends_on: "{{ depends_on }}"
description: "{{ description }}"
disable_auto_optimization: {{ disable_auto_optimization }}
disabled: {{ disabled }}
email_notifications:
no_alert_for_skipped_runs: {{ no_alert_for_skipped_runs }}
on_duration_warning_threshold_exceeded:
- "{{ on_duration_warning_threshold_exceeded }}"
on_failure:
- "{{ on_failure }}"
on_start:
- "{{ on_start }}"
on_streaming_backlog_exceeded:
- "{{ on_streaming_backlog_exceeded }}"
on_success:
- "{{ on_success }}"
environment_key: "{{ environment_key }}"
existing_cluster_id: "{{ existing_cluster_id }}"
for_each_task:
inputs: "{{ inputs }}"
task:
task_key: "{{ task_key }}"
clean_rooms_notebook_task:
clean_room_name: "{{ clean_room_name }}"
notebook_name: "{{ notebook_name }}"
etag: "{{ etag }}"
notebook_base_parameters: "{{ notebook_base_parameters }}"
compute:
hardware_accelerator: "{{ hardware_accelerator }}"
condition_task:
op: "{{ op }}"
left: "{{ left }}"
right: "{{ right }}"
dashboard_task:
dashboard_id: "{{ dashboard_id }}"
filters: "{{ filters }}"
subscription:
custom_subject: "{{ custom_subject }}"
paused: {{ paused }}
subscribers: "{{ subscribers }}"
warehouse_id: "{{ warehouse_id }}"
dbt_cloud_task:
connection_resource_name: "{{ connection_resource_name }}"
dbt_cloud_job_id: {{ dbt_cloud_job_id }}
dbt_platform_task:
connection_resource_name: "{{ connection_resource_name }}"
dbt_platform_job_id: "{{ dbt_platform_job_id }}"
dbt_task:
commands:
- "{{ commands }}"
catalog: "{{ catalog }}"
profiles_directory: "{{ profiles_directory }}"
project_directory: "{{ project_directory }}"
schema: "{{ schema }}"
source: "{{ source }}"
warehouse_id: "{{ warehouse_id }}"
depends_on:
- task_key: "{{ task_key }}"
outcome: "{{ outcome }}"
description: "{{ description }}"
disable_auto_optimization: {{ disable_auto_optimization }}
disabled: {{ disabled }}
email_notifications:
no_alert_for_skipped_runs: {{ no_alert_for_skipped_runs }}
on_duration_warning_threshold_exceeded:
- "{{ on_duration_warning_threshold_exceeded }}"
on_failure:
- "{{ on_failure }}"
on_start:
- "{{ on_start }}"
on_streaming_backlog_exceeded:
- "{{ on_streaming_backlog_exceeded }}"
on_success:
- "{{ on_success }}"
environment_key: "{{ environment_key }}"
existing_cluster_id: "{{ existing_cluster_id }}"
for_each_task:
inputs: "{{ inputs }}"
task:
task_key: "{{ task_key }}"
clean_rooms_notebook_task: "{{ clean_rooms_notebook_task }}"
compute: "{{ compute }}"
condition_task: "{{ condition_task }}"
dashboard_task: "{{ dashboard_task }}"
dbt_cloud_task: "{{ dbt_cloud_task }}"
dbt_platform_task: "{{ dbt_platform_task }}"
dbt_task: "{{ dbt_task }}"
depends_on: "{{ depends_on }}"
description: "{{ description }}"
disable_auto_optimization: {{ disable_auto_optimization }}
disabled: {{ disabled }}
email_notifications: "{{ email_notifications }}"
environment_key: "{{ environment_key }}"
existing_cluster_id: "{{ existing_cluster_id }}"
for_each_task: "{{ for_each_task }}"
gen_ai_compute_task: "{{ gen_ai_compute_task }}"
health: "{{ health }}"
job_cluster_key: "{{ job_cluster_key }}"
libraries: "{{ libraries }}"
max_retries: {{ max_retries }}
min_retry_interval_millis: {{ min_retry_interval_millis }}
new_cluster: "{{ new_cluster }}"
notebook_task: "{{ notebook_task }}"
notification_settings: "{{ notification_settings }}"
pipeline_task: "{{ pipeline_task }}"
power_bi_task: "{{ power_bi_task }}"
python_wheel_task: "{{ python_wheel_task }}"
retry_on_timeout: {{ retry_on_timeout }}
run_if: "{{ run_if }}"
run_job_task: "{{ run_job_task }}"
spark_jar_task: "{{ spark_jar_task }}"
spark_python_task: "{{ spark_python_task }}"
spark_submit_task: "{{ spark_submit_task }}"
sql_task: "{{ sql_task }}"
timeout_seconds: {{ timeout_seconds }}
webhook_notifications: "{{ webhook_notifications }}"
concurrency: {{ concurrency }}
gen_ai_compute_task:
dl_runtime_image: "{{ dl_runtime_image }}"
command: "{{ command }}"
compute:
num_gpus: {{ num_gpus }}
gpu_node_pool_id: "{{ gpu_node_pool_id }}"
gpu_type: "{{ gpu_type }}"
mlflow_experiment_name: "{{ mlflow_experiment_name }}"
source: "{{ source }}"
training_script_path: "{{ training_script_path }}"
yaml_parameters: "{{ yaml_parameters }}"
yaml_parameters_file_path: "{{ yaml_parameters_file_path }}"
health:
rules:
- metric: "{{ metric }}"
op: "{{ op }}"
value: {{ value }}
job_cluster_key: "{{ job_cluster_key }}"
libraries: "{{ libraries }}"
max_retries: {{ max_retries }}
min_retry_interval_millis: {{ min_retry_interval_millis }}
new_cluster: "{{ new_cluster }}"
notebook_task:
notebook_path: "{{ notebook_path }}"
base_parameters: "{{ base_parameters }}"
source: "{{ source }}"
warehouse_id: "{{ warehouse_id }}"
notification_settings:
alert_on_last_attempt: {{ alert_on_last_attempt }}
no_alert_for_canceled_runs: {{ no_alert_for_canceled_runs }}
no_alert_for_skipped_runs: {{ no_alert_for_skipped_runs }}
pipeline_task:
pipeline_id: "{{ pipeline_id }}"
full_refresh: {{ full_refresh }}
power_bi_task:
connection_resource_name: "{{ connection_resource_name }}"
power_bi_model:
authentication_method: "{{ authentication_method }}"
model_name: "{{ model_name }}"
overwrite_existing: {{ overwrite_existing }}
storage_mode: "{{ storage_mode }}"
workspace_name: "{{ workspace_name }}"
refresh_after_update: {{ refresh_after_update }}
tables:
- catalog: "{{ catalog }}"
name: "{{ name }}"
schema: "{{ schema }}"
storage_mode: "{{ storage_mode }}"
warehouse_id: "{{ warehouse_id }}"
python_wheel_task:
package_name: "{{ package_name }}"
entry_point: "{{ entry_point }}"
named_parameters: "{{ named_parameters }}"
parameters:
- "{{ parameters }}"
retry_on_timeout: {{ retry_on_timeout }}
run_if: "{{ run_if }}"
run_job_task:
job_id: {{ job_id }}
dbt_commands:
- "{{ dbt_commands }}"
jar_params:
- "{{ jar_params }}"
job_parameters: "{{ job_parameters }}"
notebook_params: "{{ notebook_params }}"
pipeline_params:
full_refresh: {{ full_refresh }}
python_named_params: "{{ python_named_params }}"
python_params:
- "{{ python_params }}"
spark_submit_params:
- "{{ spark_submit_params }}"
sql_params: "{{ sql_params }}"
spark_jar_task:
jar_uri: "{{ jar_uri }}"
main_class_name: "{{ main_class_name }}"
parameters:
- "{{ parameters }}"
run_as_repl: {{ run_as_repl }}
spark_python_task:
python_file: "{{ python_file }}"
parameters:
- "{{ parameters }}"
source: "{{ source }}"
spark_submit_task:
parameters:
- "{{ parameters }}"
sql_task:
warehouse_id: "{{ warehouse_id }}"
alert:
alert_id: "{{ alert_id }}"
pause_subscriptions: {{ pause_subscriptions }}
subscriptions: "{{ subscriptions }}"
dashboard:
dashboard_id: "{{ dashboard_id }}"
custom_subject: "{{ custom_subject }}"
pause_subscriptions: {{ pause_subscriptions }}
subscriptions: "{{ subscriptions }}"
file:
path: "{{ path }}"
source: "{{ source }}"
parameters: "{{ parameters }}"
query:
query_id: "{{ query_id }}"
timeout_seconds: {{ timeout_seconds }}
webhook_notifications:
on_duration_warning_threshold_exceeded:
- id: "{{ id }}"
on_failure:
- id: "{{ id }}"
on_start:
- id: "{{ id }}"
on_streaming_backlog_exceeded:
- id: "{{ id }}"
on_success:
- id: "{{ id }}"
concurrency: {{ concurrency }}
gen_ai_compute_task:
dl_runtime_image: "{{ dl_runtime_image }}"
command: "{{ command }}"
compute:
num_gpus: {{ num_gpus }}
gpu_node_pool_id: "{{ gpu_node_pool_id }}"
gpu_type: "{{ gpu_type }}"
mlflow_experiment_name: "{{ mlflow_experiment_name }}"
source: "{{ source }}"
training_script_path: "{{ training_script_path }}"
yaml_parameters: "{{ yaml_parameters }}"
yaml_parameters_file_path: "{{ yaml_parameters_file_path }}"
health:
rules:
- metric: "{{ metric }}"
op: "{{ op }}"
value: {{ value }}
job_cluster_key: "{{ job_cluster_key }}"
libraries: "{{ libraries }}"
max_retries: {{ max_retries }}
min_retry_interval_millis: {{ min_retry_interval_millis }}
new_cluster: "{{ new_cluster }}"
notebook_task:
notebook_path: "{{ notebook_path }}"
base_parameters: "{{ base_parameters }}"
source: "{{ source }}"
warehouse_id: "{{ warehouse_id }}"
notification_settings:
alert_on_last_attempt: {{ alert_on_last_attempt }}
no_alert_for_canceled_runs: {{ no_alert_for_canceled_runs }}
no_alert_for_skipped_runs: {{ no_alert_for_skipped_runs }}
pipeline_task:
pipeline_id: "{{ pipeline_id }}"
full_refresh: {{ full_refresh }}
power_bi_task:
connection_resource_name: "{{ connection_resource_name }}"
power_bi_model:
authentication_method: "{{ authentication_method }}"
model_name: "{{ model_name }}"
overwrite_existing: {{ overwrite_existing }}
storage_mode: "{{ storage_mode }}"
workspace_name: "{{ workspace_name }}"
refresh_after_update: {{ refresh_after_update }}
tables:
- catalog: "{{ catalog }}"
name: "{{ name }}"
schema: "{{ schema }}"
storage_mode: "{{ storage_mode }}"
warehouse_id: "{{ warehouse_id }}"
python_wheel_task:
package_name: "{{ package_name }}"
entry_point: "{{ entry_point }}"
named_parameters: "{{ named_parameters }}"
parameters:
- "{{ parameters }}"
retry_on_timeout: {{ retry_on_timeout }}
run_if: "{{ run_if }}"
run_job_task:
job_id: {{ job_id }}
dbt_commands:
- "{{ dbt_commands }}"
jar_params:
- "{{ jar_params }}"
job_parameters: "{{ job_parameters }}"
notebook_params: "{{ notebook_params }}"
pipeline_params:
full_refresh: {{ full_refresh }}
python_named_params: "{{ python_named_params }}"
python_params:
- "{{ python_params }}"
spark_submit_params:
- "{{ spark_submit_params }}"
sql_params: "{{ sql_params }}"
spark_jar_task:
jar_uri: "{{ jar_uri }}"
main_class_name: "{{ main_class_name }}"
parameters:
- "{{ parameters }}"
run_as_repl: {{ run_as_repl }}
spark_python_task:
python_file: "{{ python_file }}"
parameters:
- "{{ parameters }}"
source: "{{ source }}"
spark_submit_task:
parameters:
- "{{ parameters }}"
sql_task:
warehouse_id: "{{ warehouse_id }}"
alert:
alert_id: "{{ alert_id }}"
pause_subscriptions: {{ pause_subscriptions }}
subscriptions:
- destination_id: "{{ destination_id }}"
user_name: "{{ user_name }}"
dashboard:
dashboard_id: "{{ dashboard_id }}"
custom_subject: "{{ custom_subject }}"
pause_subscriptions: {{ pause_subscriptions }}
subscriptions:
- destination_id: "{{ destination_id }}"
user_name: "{{ user_name }}"
file:
path: "{{ path }}"
source: "{{ source }}"
parameters: "{{ parameters }}"
query:
query_id: "{{ query_id }}"
timeout_seconds: {{ timeout_seconds }}
webhook_notifications:
on_duration_warning_threshold_exceeded:
- id: "{{ id }}"
on_failure:
- id: "{{ id }}"
on_start:
- id: "{{ id }}"
on_streaming_backlog_exceeded:
- id: "{{ id }}"
on_success:
- id: "{{ id }}"
- name: timeout_seconds
value: {{ timeout_seconds }}
description: |
An optional timeout applied to each run of this job. A value of `0` means no timeout.
- name: trigger
description: |
A configuration to trigger a run when certain conditions are met. The default behavior is that the job runs only when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
value:
file_arrival:
url: "{{ url }}"
min_time_between_triggers_seconds: {{ min_time_between_triggers_seconds }}
wait_after_last_change_seconds: {{ wait_after_last_change_seconds }}
model:
condition: "{{ condition }}"
aliases:
- "{{ aliases }}"
min_time_between_triggers_seconds: {{ min_time_between_triggers_seconds }}
securable_name: "{{ securable_name }}"
wait_after_last_change_seconds: {{ wait_after_last_change_seconds }}
pause_status: "{{ pause_status }}"
periodic:
interval: {{ interval }}
unit: "{{ unit }}"
table_update:
table_names:
- "{{ table_names }}"
condition: "{{ condition }}"
min_time_between_triggers_seconds: {{ min_time_between_triggers_seconds }}
wait_after_last_change_seconds: {{ wait_after_last_change_seconds }}
- name: usage_policy_id
value: "{{ usage_policy_id }}"
description: |
The id of the user specified usage policy to use for this job. If not specified, a default usage policy may be applied when creating or modifying the job. See `effective_usage_policy_id` for the usage policy used by this workload.
- name: webhook_notifications
description: |
A collection of system notification IDs to notify when runs of this job begin or complete.
value:
on_duration_warning_threshold_exceeded:
- id: "{{ id }}"
on_failure:
- id: "{{ id }}"
on_start:
- id: "{{ id }}"
on_streaming_backlog_exceeded:
- id: "{{ id }}"
on_success:
- id: "{{ id }}"
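The full template above lists every writable field, but a working job needs only a few of them. The sketch below creates a job with a single notebook task; the job name, notebook path, cluster id, and deployment name are all hypothetical placeholders, and `tasks` is passed as a JSON string:

```sql
INSERT INTO databricks_workspace.jobs.jobs (
name,
max_concurrent_runs,
tasks,
deployment_name
)
SELECT
'nightly-etl',
1,
'[{"task_key": "main", "notebook_task": {"notebook_path": "/Workspace/etl/run"}, "existing_cluster_id": "1234-567890-abcde123"}]',
'dbc-abcd0123-a1bc'
RETURNING
job_id
;
```

The `RETURNING job_id` clause surfaces the canonical identifier of the newly created job for use in later `get`, `update`, or `run_now` calls.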
## UPDATE examples
### update
Add, update, or remove specific settings of an existing job. Use the Reset endpoint to overwrite all job settings.
UPDATE databricks_workspace.jobs.jobs
SET
job_id = {{ job_id }},
fields_to_remove = '{{ fields_to_remove }}',
new_settings = '{{ new_settings }}'
WHERE
deployment_name = '{{ deployment_name }}' --required
AND job_id = '{{ job_id }}' --required
;
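Because `update` only touches the settings present in `new_settings`, it can be used for targeted changes. For example, a sketch that pauses a job's schedule and leaves everything else intact (the job id, cron expression, and deployment name are illustrative):

```sql
UPDATE databricks_workspace.jobs.jobs
SET
new_settings = '{"schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC", "pause_status": "PAUSED"}}'
WHERE
deployment_name = 'dbc-abcd0123-a1bc' --required
AND job_id = '1234' --required
;
```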
## REPLACE examples
### reset
Overwrite all settings for the given job. Use the Update endpoint to update job settings partially.
REPLACE databricks_workspace.jobs.jobs
SET
job_id = {{ job_id }},
new_settings = '{{ new_settings }}'
WHERE
deployment_name = '{{ deployment_name }}' --required
AND job_id = '{{ job_id }}' --required
AND new_settings = '{{ new_settings }}' --required
;
## DELETE examples
### delete
Deletes a job.
DELETE FROM databricks_workspace.jobs.jobs
WHERE deployment_name = '{{ deployment_name }}' --required
;
## Lifecycle Methods
### run_now
Run a job and return the run_id of the triggered run.
EXEC databricks_workspace.jobs.jobs.run_now
@deployment_name='{{ deployment_name }}' --required
@@json=
'{
"job_id": {{ job_id }},
"dbt_commands": "{{ dbt_commands }}",
"idempotency_token": "{{ idempotency_token }}",
"jar_params": "{{ jar_params }}",
"job_parameters": "{{ job_parameters }}",
"notebook_params": "{{ notebook_params }}",
"only": "{{ only }}",
"performance_target": "{{ performance_target }}",
"pipeline_params": "{{ pipeline_params }}",
"python_named_params": "{{ python_named_params }}",
"python_params": "{{ python_params }}",
"queue": "{{ queue }}",
"spark_submit_params": "{{ spark_submit_params }}",
"sql_params": "{{ sql_params }}"
}'
;
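Fields that are unused in the template above can simply be omitted from the JSON payload. A concrete sketch that triggers a run with job-level parameters and an idempotency token (the job id, parameter names, and token value are placeholders):

```sql
EXEC databricks_workspace.jobs.jobs.run_now
@deployment_name='dbc-abcd0123-a1bc' --required
@@json=
'{
"job_id": 1234,
"job_parameters": {"run_date": "2024-01-01"},
"idempotency_token": "run-2024-01-01"
}'
;
```

Reusing the same `idempotency_token` guarantees that retried requests do not launch duplicate runs.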