jobs

Creates, updates, deletes, gets or lists a jobs resource.

Overview

Name: jobs
Type: Resource
Id: databricks_workspace.jobs.jobs

Fields

The following fields are returned by SELECT queries:

Name | Datatype | Description
effective_budget_policy_id | string | The id of the budget policy used by this job for cost attribution purposes. This may be set through (in order of precedence): 1. Budget admins through the account or workspace console 2. Jobs UI in the job details page and Jobs API using `budget_policy_id` 3. Inferred default based on accessible budget policies of the run_as identity on job creation or modification.
effective_usage_policy_id | string | The id of the usage policy used by this job for cost attribution purposes.
job_id | integer | The canonical identifier for this job.
creator_user_name | string | The creator user name. This field won't be included in the response if the user has already been deleted.
run_as_user_name | string | The email of an active workspace user or the application ID of a service principal that the job runs as. This value can be changed by setting the `run_as` field when creating or updating a job. By default, `run_as_user_name` is based on the current job settings and is set to the creator of the job if job access control is disabled, or to the user with the `is_owner` permission if job access control is enabled.
created_time | integer | The time at which this job was created in epoch milliseconds (milliseconds since 1/1/1970 UTC).
has_more | boolean | Indicates if the job has more array properties (`tasks`, `job_clusters`) that are not shown. They can be accessed via the :method:jobs/get endpoint. It is only relevant for API 2.2 :method:jobs/list requests with `expand_tasks=true`.
next_page_token | string | A token that can be used to list the next page of array properties.
settings | object | Settings for this job and all of its runs. These settings can be updated using the `resetJob` method.
trigger_state | object | State of the trigger associated with the job.

Methods

The following methods are available for this resource:

Name | Accessible by | Required Params | Optional Params | Description
get | select | job_id, deployment_name | page_token | Retrieves the details for a single job.
list | select | deployment_name | expand_tasks, limit, name, offset, page_token | Retrieves a list of jobs.
create | insert | deployment_name | | Create a new job.
update | update | deployment_name, job_id | | Add, update, or remove specific settings of an existing job. Use the reset endpoint to overwrite all job settings.
reset | replace | deployment_name, job_id, new_settings | | Overwrite all settings for the given job. Use the update endpoint to update job settings partially.
delete | delete | deployment_name | | Deletes a job.
run_now | exec | deployment_name, job_id | | Run a job and return the run_id of the triggered run.

Parameters

Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

Name | Datatype | Description
deployment_name | string | The Databricks Workspace Deployment Name (default: dbc-abcd0123-a1bc)
job_id | integer | The canonical identifier of the job to retrieve information about. This field is required.
expand_tasks | boolean | Whether to include task and cluster details in the response. Note that only the first 100 elements will be shown. Use :method:jobs/get to paginate through all tasks and clusters.
limit | integer | The number of jobs to return. This value must be greater than 0 and less than or equal to 100. The default value is 20.
name | string | A filter on the list based on the exact (case-insensitive) job name.
offset | integer | The offset of the first job to return, relative to the most recently created job. Deprecated since June 2023. Use page_token to iterate through the pages instead.
page_token | string | Use next_page_token or prev_page_token returned from the previous request to list the next or previous page of jobs respectively.

SELECT examples

Retrieves the details for a single job.

SELECT
  effective_budget_policy_id,
  effective_usage_policy_id,
  job_id,
  creator_user_name,
  run_as_user_name,
  created_time,
  has_more,
  next_page_token,
  settings,
  trigger_state
FROM databricks_workspace.jobs.jobs
WHERE job_id = '{{ job_id }}' -- required
  AND deployment_name = '{{ deployment_name }}' -- required
  AND page_token = '{{ page_token }}'
;
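No example is shown above for the list method. Assuming it follows the same WHERE-clause convention as get (optional parameters quoted the same way as in the examples on this page), a paginated listing might look like this; the filter values are illustrative placeholders:

SELECT
  job_id,
  creator_user_name,
  settings,
  has_more,
  next_page_token
FROM databricks_workspace.jobs.jobs
WHERE deployment_name = '{{ deployment_name }}' -- required
  AND expand_tasks = 'true'
  AND limit = '25'
  AND page_token = '{{ page_token }}'
;

Each page returns next_page_token; pass it back as page_token on the next query until it comes back empty.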

INSERT examples

Create a new job.

INSERT INTO databricks_workspace.jobs.jobs (
  access_control_list,
  budget_policy_id,
  continuous,
  deployment,
  description,
  edit_mode,
  email_notifications,
  environments,
  format,
  git_source,
  health,
  job_clusters,
  max_concurrent_runs,
  name,
  notification_settings,
  parameters,
  performance_target,
  queue,
  run_as,
  schedule,
  tags,
  tasks,
  timeout_seconds,
  trigger,
  usage_policy_id,
  webhook_notifications,
  deployment_name
)
SELECT
  '{{ access_control_list }}',
  '{{ budget_policy_id }}',
  '{{ continuous }}',
  '{{ deployment }}',
  '{{ description }}',
  '{{ edit_mode }}',
  '{{ email_notifications }}',
  '{{ environments }}',
  '{{ format }}',
  '{{ git_source }}',
  '{{ health }}',
  '{{ job_clusters }}',
  {{ max_concurrent_runs }},
  '{{ name }}',
  '{{ notification_settings }}',
  '{{ parameters }}',
  '{{ performance_target }}',
  '{{ queue }}',
  '{{ run_as }}',
  '{{ schedule }}',
  '{{ tags }}',
  '{{ tasks }}',
  {{ timeout_seconds }},
  '{{ trigger }}',
  '{{ usage_policy_id }}',
  '{{ webhook_notifications }}',
  '{{ deployment_name }}'
RETURNING
  job_id
;
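All columns in the full example above except deployment_name are job settings, and most are optional. Assuming only a name and a tasks array are supplied (placeholders throughout), a minimal create might be:

INSERT INTO databricks_workspace.jobs.jobs (
  name,
  tasks,
  deployment_name
)
SELECT
  '{{ name }}',
  '{{ tasks }}',
  '{{ deployment_name }}'
RETURNING
  job_id
;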

UPDATE examples

Add, update, or remove specific settings of an existing job. Use the reset endpoint to overwrite all job settings.

UPDATE databricks_workspace.jobs.jobs
SET
  job_id = {{ job_id }},
  fields_to_remove = '{{ fields_to_remove }}',
  new_settings = '{{ new_settings }}'
WHERE deployment_name = '{{ deployment_name }}' -- required
  AND job_id = '{{ job_id }}' -- required
;
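Here new_settings is a JSON-encoded partial settings object and fields_to_remove is a JSON array of top-level settings fields to unset. A hypothetical call that renames a job, caps its concurrency, and removes its schedule might look like this; the JSON keys mirror the settings object returned by SELECT and are illustrative only:

UPDATE databricks_workspace.jobs.jobs
SET
  new_settings = '{"name": "nightly-etl", "max_concurrent_runs": 2}',
  fields_to_remove = '["schedule"]'
WHERE deployment_name = '{{ deployment_name }}' -- required
  AND job_id = '{{ job_id }}' -- required
;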

REPLACE examples

Overwrite all settings for the given job. Use the update endpoint to update job settings partially.

REPLACE databricks_workspace.jobs.jobs
SET
  job_id = {{ job_id }},
  new_settings = '{{ new_settings }}'
WHERE deployment_name = '{{ deployment_name }}' -- required
  AND job_id = '{{ job_id }}' -- required
  AND new_settings = '{{ new_settings }}' -- required
;

DELETE examples

Deletes a job.

DELETE FROM databricks_workspace.jobs.jobs
WHERE deployment_name = '{{ deployment_name }}' --required
;

Lifecycle Methods

Run a job and return the run_id of the triggered run.

EXEC databricks_workspace.jobs.jobs.run_now
  @deployment_name = '{{ deployment_name }}' -- required
  @@json =
  '{
    "job_id": {{ job_id }},
    "dbt_commands": "{{ dbt_commands }}",
    "idempotency_token": "{{ idempotency_token }}",
    "jar_params": "{{ jar_params }}",
    "job_parameters": "{{ job_parameters }}",
    "notebook_params": "{{ notebook_params }}",
    "only": "{{ only }}",
    "performance_target": "{{ performance_target }}",
    "pipeline_params": "{{ pipeline_params }}",
    "python_named_params": "{{ python_named_params }}",
    "python_params": "{{ python_params }}",
    "queue": "{{ queue }}",
    "spark_submit_params": "{{ spark_submit_params }}",
    "sql_params": "{{ sql_params }}"
  }'
;