# feature_kafka_configs

Creates, updates, deletes, gets or lists a `feature_kafka_configs` resource.
## Overview

| | |
|---|---|
| Name | feature_kafka_configs |
| Type | Resource |
| Id | databricks_workspace.ml.feature_kafka_configs |
## Fields

The following fields are returned by `SELECT` queries:

**get**

| Name | Datatype | Description |
|---|---|---|
| name | string | |
| auth_config | object | Authentication configuration for connecting to topics. |
| backfill_source | object | A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward-filling data. The schema of this source must exactly match the key and value schemas specified for this Kafka config. |
| bootstrap_servers | string | A comma-separated list of host/port pairs pointing to the Kafka cluster. |
| extra_options | object | Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (`kafka.*`). |
| key_schema | object | Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided. |
| subscription_mode | object | Options to configure which Kafka topics to pull data from. |
| value_schema | object | Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided. |

**list**

| Name | Datatype | Description |
|---|---|---|
| name | string | |
| auth_config | object | Authentication configuration for connecting to topics. |
| backfill_source | object | A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward-filling data. The schema of this source must exactly match the key and value schemas specified for this Kafka config. |
| bootstrap_servers | string | A comma-separated list of host/port pairs pointing to the Kafka cluster. |
| extra_options | object | Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (`kafka.*`). |
| key_schema | object | Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided. |
| subscription_mode | object | Options to configure which Kafka topics to pull data from. |
| value_schema | object | Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided. |
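
The `extra_options` field above is a free-form map: keys with the `kafka.` prefix are passed through as Kafka consumer options, while unprefixed keys are treated as source options. A hypothetical fragment (the specific option names are illustrative examples of common Kafka/streaming-source options, not settings confirmed by this API's documentation):

```yaml
extra_options:
  kafka.security.protocol: "SASL_SSL"   # Kafka consumer option (kafka.* prefix); illustrative
  startingOffsets: "earliest"           # unprefixed source option; illustrative
```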
## Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
| get | select | name, deployment_name | | Get a Kafka config. During PrPr, Kafka configs can be read and used when creating features under the |
| list | select | deployment_name | page_size, page_token | List Kafka configs. During PrPr, Kafka configs can be read and used when creating features under the |
| create | insert | deployment_name, kafka_config | | Create a Kafka config. During PrPr, Kafka configs can be read and used when creating features under |
| update | update | name, update_mask, deployment_name, kafka_config | | Update a Kafka config. During PrPr, Kafka configs can be read and used when creating features under |
| delete | delete | name, deployment_name | | Delete a Kafka config. During PrPr, Kafka configs can be read and used when creating features under |
## Parameters

Parameters can be passed in the `WHERE` clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

| Name | Datatype | Description |
|---|---|---|
| deployment_name | string | The Databricks workspace deployment name (default: dbc-abcd0123-a1bc). |
| name | string | Name of the Kafka config. |
| update_mask | object | The list of fields to update. |
| page_size | integer | The maximum number of results to return. |
| page_token | string | Pagination token used to fetch the next page of a previous query. |
## SELECT examples

**get**

Get a Kafka config. During PrPr, Kafka configs can be read and used when creating features under the

```sql
SELECT
name,
auth_config,
backfill_source,
bootstrap_servers,
extra_options,
key_schema,
subscription_mode,
value_schema
FROM databricks_workspace.ml.feature_kafka_configs
WHERE name = '{{ name }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
;
```

**list**

List Kafka configs. During PrPr, Kafka configs can be read and used when creating features under the

```sql
SELECT
name,
auth_config,
backfill_source,
bootstrap_servers,
extra_options,
key_schema,
subscription_mode,
value_schema
FROM databricks_workspace.ml.feature_kafka_configs
WHERE deployment_name = '{{ deployment_name }}' -- required
AND page_size = '{{ page_size }}'
AND page_token = '{{ page_token }}'
;
```
## INSERT examples

**create**

Create a Kafka config. During PrPr, Kafka configs can be read and used when creating features under

```sql
INSERT INTO databricks_workspace.ml.feature_kafka_configs (
kafka_config,
deployment_name
)
SELECT
'{{ kafka_config }}' /* required */,
'{{ deployment_name }}'
RETURNING
name,
auth_config,
backfill_source,
bootstrap_servers,
extra_options,
key_schema,
subscription_mode,
value_schema
;
```

**Manifest**

```yaml
# Description fields are for documentation purposes
- name: feature_kafka_configs
  props:
    - name: deployment_name
      value: "{{ deployment_name }}"
      description: Required parameter for the feature_kafka_configs resource.
    - name: kafka_config
      description: |
        :returns: :class:`KafkaConfig`
      value:
        name: "{{ name }}"
        bootstrap_servers: "{{ bootstrap_servers }}"
        subscription_mode:
          assign: "{{ assign }}"
          subscribe: "{{ subscribe }}"
          subscribe_pattern: "{{ subscribe_pattern }}"
        auth_config:
          uc_service_credential_name: "{{ uc_service_credential_name }}"
        backfill_source:
          delta_table_source:
            full_name: "{{ full_name }}"
            entity_columns:
              - "{{ entity_columns }}"
            timeseries_column: "{{ timeseries_column }}"
        extra_options: "{{ extra_options }}"
        key_schema:
          json_schema: "{{ json_schema }}"
        value_schema:
          json_schema: "{{ json_schema }}"
```
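
The manifest lists all three `subscription_mode` keys (`assign`, `subscribe`, `subscribe_pattern`) for completeness. In conventional Kafka source configuration these are mutually exclusive ways of choosing topics, so a real config would typically set only one of them; this follows standard Kafka source semantics and is an assumption, not something this page states. A minimal sketch subscribing to two named topics:

```yaml
subscription_mode:
  subscribe: "orders,clicks"   # comma-separated topic list; assign and subscribe_pattern omitted
```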
## UPDATE examples

**update**

Update a Kafka config. During PrPr, Kafka configs can be read and used when creating features under

```sql
UPDATE databricks_workspace.ml.feature_kafka_configs
SET
kafka_config = '{{ kafka_config }}'
WHERE
name = '{{ name }}' -- required
AND update_mask = '{{ update_mask }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
AND kafka_config = '{{ kafka_config }}' -- required
RETURNING
name,
auth_config,
backfill_source,
bootstrap_servers,
extra_options,
key_schema,
subscription_mode,
value_schema;
```
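
`update_mask` determines which fields of the submitted `kafka_config` are actually applied. A hedged sketch that touches only the bootstrap servers, leaving the other fields of the config unchanged (the field-path form `bootstrap_servers` follows common protobuf field-mask conventions; it is an assumption that this API accepts it spelled this way):

```sql
UPDATE databricks_workspace.ml.feature_kafka_configs
SET
kafka_config = '{"bootstrap_servers": "broker1:9092,broker2:9092"}'
WHERE
name = '{{ name }}' -- required
AND update_mask = 'bootstrap_servers' -- only this field is applied
AND deployment_name = '{{ deployment_name }}' -- required
AND kafka_config = '{"bootstrap_servers": "broker1:9092,broker2:9092"}' -- required
;
```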
## DELETE examples

**delete**

Delete a Kafka config. During PrPr, Kafka configs can be read and used when creating features under

```sql
DELETE FROM databricks_workspace.ml.feature_kafka_configs
WHERE name = '{{ name }}' -- required
AND deployment_name = '{{ deployment_name }}' -- required
;
```