
job_runs

Creates, updates, deletes, gets or lists a job_runs resource.

Overview

| Property | Value |
|---|---|
| Name | job_runs |
| Type | Resource |
| Id | `databricks_workspace.jobs.job_runs` |

Fields

The following fields are returned by SELECT queries:

| Name | Datatype | Description |
|---|---|---|
| effective_usage_policy_id | string | The ID of the usage policy used by this run for cost attribution purposes. |
| job_id | integer | The canonical identifier of the job that contains this run. |
| job_run_id | integer | ID of the job run that this run belongs to. For legacy and single-task job runs the field is populated with the job run ID. For task runs, the field is populated with the ID of the job run that the task run belongs to. |
| original_attempt_run_id | integer | If this run is a retry of a prior run attempt, this field contains the run_id of the original attempt; otherwise, it is the same as the run_id. |
| run_id | integer | The canonical identifier of the run. This ID is unique across all runs of all jobs. |
| creator_user_name | string | The creator user name. This field won't be included in the response if the user has already been deleted. |
| run_name | string | An optional name for the run. The maximum length is 4096 bytes in UTF-8 encoding. |
| attempt_number | integer | The sequence number of this run attempt for a triggered job run. The initial attempt of a run has an attempt_number of 0. If the initial run attempt fails, and the job has a retry policy (`max_retries` > 0), subsequent runs are created with an `original_attempt_run_id` of the original attempt's ID and an incrementing `attempt_number`. Runs are retried only until they succeed, and the maximum `attempt_number` is the same as the `max_retries` value for the job. |
| cleanup_duration | integer | The time in milliseconds it took to terminate the cluster and clean up any associated artifacts. The duration of a task run is the sum of the `setup_duration`, `execution_duration`, and the `cleanup_duration`. The `cleanup_duration` field is set to 0 for multitask job runs. The total duration of a multitask job run is the value of the `run_duration` field. |
| cluster_instance | object | The cluster used for this run. If the run is specified to use a new cluster, this field is set once the Jobs service has requested a cluster for the run. |
| cluster_spec | object | A snapshot of the job's cluster specification when this run was created. |
| description | string | Description of the run. |
| effective_performance_target | string | The actual performance target used by the serverless run during execution. This can differ from the client-set performance target on the request depending on whether the performance mode is supported by the job type. * `STANDARD`: Enables cost-efficient execution of serverless workloads. * `PERFORMANCE_OPTIMIZED`: Prioritizes fast startup and execution times through rapid scaling and optimized cluster performance. (PERFORMANCE_OPTIMIZED, STANDARD) |
| end_time | integer | The time at which this run ended in epoch milliseconds (milliseconds since 1/1/1970 UTC). This field is set to 0 if the job is still running. |
| execution_duration | integer | The time in milliseconds it took to execute the commands in the JAR or notebook until they completed, failed, timed out, were cancelled, or encountered an unexpected error. The duration of a task run is the sum of the `setup_duration`, `execution_duration`, and the `cleanup_duration`. The `execution_duration` field is set to 0 for multitask job runs. The total duration of a multitask job run is the value of the `run_duration` field. |
| git_source | object | An optional specification for a remote Git repository containing the source code used by tasks. Version-controlled source code is supported by notebook, dbt, Python script, and SQL File tasks. If `git_source` is set, these tasks retrieve the file from the remote repository by default. However, this behavior can be overridden by setting `source` to `WORKSPACE` on the task. Note: dbt and SQL File tasks support only version-controlled sources. If dbt or SQL File tasks are used, `git_source` must be defined on the job. |
| has_more | boolean | Indicates if the run has more array properties (`tasks`, `job_clusters`) that are not shown. They can be accessed via the :method:jobs/getrun endpoint. It is only relevant for API 2.2 :method:jobs/listruns requests with `expand_tasks=true`. |
| iterations | array | Only populated by for-each iterations. The parent for-each task is located in the `tasks` array. |
| job_clusters | array | A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. If more than 100 job clusters are available, you can paginate through them using :method:jobs/getrun. |
| job_parameters | array | Job-level parameters used in the run. |
| next_page_token | string | A token that can be used to list the next page of array properties. |
| number_in_job | integer | A unique identifier for this job run. This is set to the same value as `run_id`. |
| overriding_parameters | object | The parameters used for this run. |
| queue_duration | integer | The time in milliseconds that the run has spent in the queue. |
| repair_history | array | The repair history of the run. |
| run_duration | integer | The time in milliseconds it took the job run and all of its repairs to finish. |
| run_page_url | string | The URL to the detail page of the run. |
| run_type | string | The type of a run. * `JOB_RUN`: Normal job run. A run created with :method:jobs/runNow. * `WORKFLOW_RUN`: Workflow run. A run created with [dbutils.notebook.run](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-workflow). * `SUBMIT_RUN`: Submit run. A run created with :method:jobs/submit. (JOB_RUN, SUBMIT_RUN, WORKFLOW_RUN) |
| schedule | object | The cron schedule that triggered this run if it was triggered by the periodic scheduler. |
| setup_duration | integer | The time in milliseconds it took to set up the cluster. For runs that run on new clusters this is the cluster creation time; for runs that run on existing clusters this time should be very short. The duration of a task run is the sum of the `setup_duration`, `execution_duration`, and the `cleanup_duration`. The `setup_duration` field is set to 0 for multitask job runs. The total duration of a multitask job run is the value of the `run_duration` field. |
| start_time | integer | The time at which this run was started in epoch milliseconds (milliseconds since 1/1/1970 UTC). This may not be the time when the job task starts executing; for example, if the job is scheduled to run on a new cluster, this is the time the cluster creation call is issued. |
| state | object | Deprecated. Please use the `status` field instead. |
| status | object | The current status of the run. |
| tasks | array | The list of tasks performed by the run. Each task has its own `run_id` which you can use to call `JobsGetOutput` to retrieve the run results. If more than 100 tasks are available, you can paginate through them using :method:jobs/getrun. Use the `next_page_token` field at the object root to determine if more results are available. |
| trigger | string | The type of trigger that fired this run. * `PERIODIC`: Schedules that periodically trigger runs, such as a cron scheduler. * `ONE_TIME`: One-time triggers that fire a single run. This occurs when you trigger a single run on demand through the UI or the API. * `RETRY`: Indicates a run that is triggered as a retry of a previously failed run. This occurs when you request to re-run the job in case of failures. * `RUN_JOB_TASK`: Indicates a run that is triggered using a Run Job task. * `FILE_ARRIVAL`: Indicates a run that is triggered by a file arrival. * `CONTINUOUS`: Indicates a run that is triggered by a continuous job. * `TABLE`: Indicates a run that is triggered by a table update. * `CONTINUOUS_RESTART`: Indicates a run created by a user to manually restart a continuous job run. * `MODEL`: Indicates a run that is triggered by a model update. (CONTINUOUS, CONTINUOUS_RESTART, FILE_ARRIVAL, ONE_TIME, PERIODIC, RETRY, RUN_JOB_TASK, TABLE) |
| trigger_info | object | Additional details about what triggered the run. |

Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
| get | select | run_id, deployment_name | include_history, include_resolved_values, page_token | Retrieves the metadata of a run. |
| list | select | deployment_name | active_only, completed_only, expand_tasks, job_id, limit, offset, page_token, run_type, start_time_from, start_time_to | Lists runs in descending order by start time. |
| submit | insert | deployment_name | | Submits a one-time run. This endpoint allows you to submit a workload directly without creating a job. |
| delete | delete | deployment_name | | Deletes a non-active run. Returns an error if the run is active. |
| cancel_all | exec | deployment_name | | Cancels all active runs of a job. The runs are canceled asynchronously, so this does not prevent new runs from being started. |
| cancel | exec | deployment_name, run_id | | Cancels a job run or a task run. The run is canceled asynchronously, so it may still be running when the cancellation request completes. |
| export | exec | run_id, deployment_name | views_to_export | Exports and retrieves the job run task. |
| repair | exec | deployment_name, run_id | | Re-runs one or more tasks. Tasks are re-run as part of the original job run and use the current job and task settings. |

Parameters

Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

| Name | Datatype | Description |
|---|---|---|
| deployment_name | string | The Databricks Workspace Deployment Name (default: dbc-abcd0123-a1bc) |
| run_id | integer | The canonical identifier for the run. This field is required. |
| active_only | boolean | If active_only is true, only active runs are included in the results; otherwise, lists both active and completed runs. An active run is a run in the QUEUED, PENDING, RUNNING, or TERMINATING state. This field cannot be true when completed_only is true. |
| completed_only | boolean | If completed_only is true, only completed runs are included in the results; otherwise, lists both active and completed runs. This field cannot be true when active_only is true. |
| expand_tasks | boolean | Whether to include task and cluster details in the response. Note that only the first 100 elements will be shown. Use :method:jobs/getrun to paginate through all tasks and clusters. |
| include_history | boolean | Whether to include the repair history in the response. |
| include_resolved_values | boolean | Whether to include resolved parameter values in the response. |
| job_id | integer | The job for which to list runs. If omitted, the Jobs service lists runs from all jobs. |
| limit | integer | The number of runs to return. This value must be greater than 0 and less than 25. The default value is 20. If a request specifies a limit of 0, the service instead uses the maximum limit. |
| offset | integer | The offset of the first run to return, relative to the most recent run. Deprecated since June 2023. Use page_token to iterate through the pages instead. |
| page_token | string | Use next_page_token or prev_page_token returned from the previous request to list the next or previous page of runs, respectively. |
| run_type | string | The type of runs to return. For a description of run types, see :method:jobs/getRun. |
| start_time_from | integer | Show runs that started at or after this value. The value must be a UTC timestamp in milliseconds. Can be combined with start_time_to to filter by a time range. |
| start_time_to | integer | Show runs that started at or before this value. The value must be a UTC timestamp in milliseconds. Can be combined with start_time_from to filter by a time range. |
| views_to_export | string | Which views to export (CODE, DASHBOARDS, or ALL). Defaults to CODE. |

SELECT examples

Retrieves the metadata of a run.

```sql
SELECT
effective_usage_policy_id,
job_id,
job_run_id,
original_attempt_run_id,
run_id,
creator_user_name,
run_name,
attempt_number,
cleanup_duration,
cluster_instance,
cluster_spec,
description,
effective_performance_target,
end_time,
execution_duration,
git_source,
has_more,
iterations,
job_clusters,
job_parameters,
next_page_token,
number_in_job,
overriding_parameters,
queue_duration,
repair_history,
run_duration,
run_page_url,
run_type,
schedule,
setup_duration,
start_time,
state,
status,
tasks,
trigger,
trigger_info
FROM databricks_workspace.jobs.job_runs
WHERE run_id = '{{ run_id }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
AND include_history = '{{ include_history }}'
AND include_resolved_values = '{{ include_resolved_values }}'
AND page_token = '{{ page_token }}'
;
```
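The `list` method can be queried in the same way. The following sketch is illustrative (the selected fields and filter values are placeholders); it combines the optional `job_id`, `active_only`, and time-range parameters documented above to list recent runs of a single job:

```sql
SELECT
run_id,
job_id,
run_name,
start_time,
status
FROM databricks_workspace.jobs.job_runs
WHERE deployment_name = '{{ deployment_name }}' -- required
AND job_id = '{{ job_id }}'
AND active_only = '{{ active_only }}'
AND start_time_from = '{{ start_time_from }}'
AND start_time_to = '{{ start_time_to }}'
;
```

Because `list` returns at most `limit` runs per call, use the `next_page_token` value from one response as the `page_token` of the next request to walk through the full result set.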

INSERT examples

Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job.

```sql
INSERT INTO databricks_workspace.jobs.job_runs (
access_control_list,
budget_policy_id,
email_notifications,
environments,
git_source,
health,
idempotency_token,
notification_settings,
queue,
run_as,
run_name,
tasks,
timeout_seconds,
usage_policy_id,
webhook_notifications,
deployment_name
)
SELECT
'{{ access_control_list }}',
'{{ budget_policy_id }}',
'{{ email_notifications }}',
'{{ environments }}',
'{{ git_source }}',
'{{ health }}',
'{{ idempotency_token }}',
'{{ notification_settings }}',
'{{ queue }}',
'{{ run_as }}',
'{{ run_name }}',
'{{ tasks }}',
{{ timeout_seconds }},
'{{ usage_policy_id }}',
'{{ webhook_notifications }}',
'{{ deployment_name }}'
RETURNING
effective_usage_policy_id,
job_id,
job_run_id,
original_attempt_run_id,
run_id,
creator_user_name,
run_name,
attempt_number,
cleanup_duration,
cluster_instance,
cluster_spec,
description,
effective_performance_target,
end_time,
execution_duration,
git_source,
has_more,
iterations,
job_clusters,
job_parameters,
next_page_token,
number_in_job,
overriding_parameters,
queue_duration,
repair_history,
run_duration,
run_page_url,
run_type,
schedule,
setup_duration,
start_time,
state,
status,
tasks,
trigger,
trigger_info
;
```

DELETE examples

Deletes a non-active run. Returns an error if the run is active.

```sql
DELETE FROM databricks_workspace.jobs.job_runs
WHERE deployment_name = '{{ deployment_name }}' -- required
;
```

Lifecycle Methods

Cancels all active runs of a job. The runs are canceled asynchronously, so this does not prevent new runs from being started.

```sql
EXEC databricks_workspace.jobs.job_runs.cancel_all
@deployment_name='{{ deployment_name }}' --required
@@json=
'{
"all_queued_runs": {{ all_queued_runs }},
"job_id": {{ job_id }}
}'
;
```
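The other exec methods follow the same pattern. As a sketch (mirroring the `cancel_all` syntax above, with placeholder values), a single run can be canceled by passing its required `run_id` parameter:

```sql
EXEC databricks_workspace.jobs.job_runs.cancel
@deployment_name='{{ deployment_name }}' --required
@run_id='{{ run_id }}' --required
;
```

Cancellation is asynchronous, so the run may still be in a terminating state when this call returns; query the `status` field of the run to confirm it has stopped.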