# job_run_output

Creates, updates, deletes, gets or lists a `job_run_output` resource.

## Overview

| Name | job_run_output |
|---|---|
| Type | Resource |
| Id | databricks_workspace.jobs.job_run_output |
## Fields

The following fields are returned by `SELECT` queries:

- get_output

| Name | Datatype | Description |
|---|---|---|
| clean_rooms_notebook_output | object | The output of a clean rooms notebook task, if available |
| dashboard_output | object | The output of a dashboard task, if available |
| dbt_cloud_output | object | Deprecated in favor of the new `dbt_platform_output` |
| dbt_output | object | The output of a dbt task, if available. |
| dbt_platform_output | object | |
| error | string | An error message indicating why a task failed or why output is not available. The message is unstructured, and its exact format is subject to change. |
| error_trace | string | If there was an error executing the run, this field contains any available stack traces. |
| info | string | |
| logs | string | The output from tasks that write to standard streams (stdout/stderr), such as `spark_jar_task`, `spark_python_task` and `python_wheel_task`. It is not supported for `notebook_task`, `pipeline_task` or `spark_submit_task`. Databricks restricts this API to return the last 5 MB of these logs. |
| logs_truncated | boolean | Whether the logs are truncated. |
| metadata | object | The metadata of the run that produced this output. |
| notebook_output | object | The output of a notebook task, if available. A notebook task that terminates (either successfully or with a failure) without calling `dbutils.notebook.exit()` is considered to have an empty output. This field is set but its result value is empty. Databricks restricts this API to return the first 5 MB of the output. To return a larger result, use the [ClusterLogConf] field to configure log storage for the job cluster. [ClusterLogConf]: https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterlogconf |
| run_job_output | object | The output of a run job task, if available |
| sql_output | object | The output of a SQL task, if available. |
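
Because `logs` is capped at the last 5 MB, a query can project `logs_truncated` alongside `logs` to detect whether the full logs must instead be retrieved from cluster log storage. A minimal sketch — the literal `run_id` and `deployment_name` values below are placeholders, not values from this document:

```sql
-- Fetch task logs together with the truncation flag for a single run.
-- 12345 and 'dbc-abcd0123-a1bc' are placeholder values.
SELECT logs,
       logs_truncated
FROM databricks_workspace.jobs.job_run_output
WHERE run_id = 12345
AND deployment_name = 'dbc-abcd0123-a1bc';
```

If `logs_truncated` is `true`, only the tail of the output was returned and the configured cluster log destination holds the complete logs.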
## Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
| get_output | select | run_id, deployment_name | - | Retrieve the output and metadata of a single task run. When a notebook task returns a value through `dbutils.notebook.exit()`, this endpoint can be used to retrieve that value. |
## Parameters

Parameters can be passed in the `WHERE` clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

| Name | Datatype | Description |
|---|---|---|
| deployment_name | string | The Databricks Workspace Deployment Name (default: dbc-abcd0123-a1bc) |
| run_id | integer | The canonical identifier for the run. |
## SELECT examples

- get_output

Retrieve the output and metadata of a single task run. When a notebook task returns a value through `dbutils.notebook.exit()`, this endpoint can be used to retrieve that value.
```sql
SELECT
  clean_rooms_notebook_output,
  dashboard_output,
  dbt_cloud_output,
  dbt_output,
  dbt_platform_output,
  error,
  error_trace,
  info,
  logs,
  logs_truncated,
  metadata,
  notebook_output,
  run_job_output,
  sql_output
FROM databricks_workspace.jobs.job_run_output
WHERE run_id = '{{ run_id }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
;
```
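
Since `notebook_output` is returned as an object, a follow-up query can drill into its fields. The sketch below assumes the query backend exposes SQLite-style JSON functions such as `json_extract`, and that the object follows the Databricks Jobs API shape, where `notebook_output` carries a `result` value — both are assumptions about the environment, not guarantees from this page:

```sql
-- Extract the value passed to dbutils.notebook.exit(), plus any error.
-- json_extract availability and the '$.result' path are assumptions.
SELECT json_extract(notebook_output, '$.result') AS notebook_result,
       error
FROM databricks_workspace.jobs.job_run_output
WHERE run_id = '{{ run_id }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
;
```

Projecting `error` alongside the extracted result makes it easy to distinguish a run that failed from one that simply exited without calling `dbutils.notebook.exit()`.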