
Databricks Workspace

Connect to your Databricks Workspace to manage clusters, jobs, and notebooks.


Authentication

This connector uses Token-based authentication.

Info: Set up your connection in the Abstra Console before using it in your workflows.

How to use

Using the Smart Chat

Execute the action "CHOOSE_ONE_ACTION_BELOW" from my connector "YOUR_CONNECTOR_NAME" using the params "PARAMS_HERE".

Using the Web Editor

from abstra.connectors import run_connection_action

result = run_connection_action(
    connection_name="your_connection_name",
    action_name="your_action_name",
    params={
        "param1": "value1",
        "param2": "value2",
    },
)

Available Actions

This connector provides 724 actions:

post_api_1_2_commands_cancel
  Cancels a currently running command within an execution context. The command ID is obtained from a prior successful call to execute.
  Parameters: data (object, required): { clusterId (string), commandId (string), contextId (string) }

post_api_1_2_commands_execute
  Runs a cluster command in the given execution context, using the provided language. If successful, it returns an ID for tracking the status of the command's execution.
  Parameters: data (object, required): { clusterId (string), command (string), contextId (string), language }

get_api_1_2_commands_status
  Gets the status of, and if available the results from, a currently executing command. The command ID is obtained from a prior successful call to execute.
  Parameters: clusterId (string, required), contextId (string, required), commandId (string, required)

post_api_1_2_contexts_create
  Creates an execution context for running cluster commands. If successful, this method returns the ID of the new execution context.
  Parameters: data (object, required): { clusterId (string), language }

post_api_1_2_contexts_destroy
  Deletes an execution context.
  Parameters: data (object, required): { clusterId (string), contextId (string) }

get_api_1_2_contexts_status
  Gets the status for an execution context.
  Parameters: clusterId (string, required), contextId (string, required)
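The command actions above compose into a standard lifecycle: create an execution context, execute a command in it, poll its status, then destroy the context. The sketch below only builds the parameter payloads for each step; the cluster ID and the angle-bracket IDs are placeholders (assumptions, not real identifiers), and each payload would be passed to run_connection_action as shown in the Web Editor snippet.

```python
# Hypothetical payloads for the context/command lifecycle.
# cluster_id and the "<...>" values are placeholders you would
# fill in from your workspace and from earlier responses.
cluster_id = "my-cluster-id"

create_context = {"data": {"clusterId": cluster_id, "language": "python"}}

execute_command = {"data": {
    "clusterId": cluster_id,
    "contextId": "<id from post_api_1_2_contexts_create>",
    "language": "python",
    "command": "print(1 + 1)",
}}

poll_status = {
    "clusterId": cluster_id,
    "contextId": "<context id>",
    "commandId": "<id from post_api_1_2_commands_execute>",
}

destroy_context = {"data": {"clusterId": cluster_id, "contextId": "<context id>"}}

# With a configured connection, each step would look like:
# run_connection_action(connection_name="your_connection_name",
#                       action_name="post_api_1_2_contexts_create",
#                       params=create_context)
```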
get_api_2_0_accounts_service_principals_by_service_principal_id_credentials_secrets
  List all secrets associated with the given service principal. This operation only returns information about the secrets themselves and does not include the secret values.
  Parameters: service_principal_id (string), page_token (string), page_size (integer)

post_api_2_0_accounts_service_principals_by_service_principal_id_credentials_secrets
  Create a secret for the given service principal.
  Parameters: service_principal_id (string); data (object, required): { lifetime (string) }

delete_api_2_0_accounts_service_principals_by_service_principal_id_credentials_secrets_by_secret_id
  Delete a secret from the given service principal.
  Parameters: service_principal_id (string), secret_id (string)

get_api_2_0_alerts
  Gets a list of alerts accessible to the user, ordered by creation time.
  Parameters: page_token (string), page_size (integer)

post_api_2_0_alerts
  Create an alert.
  Parameters: data (object, required): { create_time (string), custom_description (string), custom_summary (string), display_name (string), effective_run_as, evaluation, id (string), lifecycle_state, owner_user_name (string), parent_path (string), query_text (string), run_as, run_as_user_name (string), schedule, update_time (string), warehouse_id (string) }

get_api_2_0_alerts_by_id
  Gets an alert.
  Parameters: id (string)

patch_api_2_0_alerts_by_id
  Update an alert.
  Parameters: id (string), update_mask (string, required); data (object, required): { create_time (string), custom_description (string), custom_summary (string), display_name (string), effective_run_as, evaluation, id (string), lifecycle_state, owner_user_name (string), parent_path (string), query_text (string), run_as, run_as_user_name (string), schedule, update_time (string), warehouse_id (string) }

delete_api_2_0_alerts_by_id
  Moves an alert to the trash. Trashed alerts immediately disappear from list views and can no longer trigger. You can restore a trashed alert through the UI. A trashed alert is permanently deleted after 30 days.
  Parameters: id (string)

get_api_2_0_apps
  Lists all apps in the workspace.
  Parameters: page_token (string), page_size (integer)
post_api_2_0_apps
  Creates a new app.
  Parameters: no_compute (boolean); data (object, required): { active_deployment, app_status, budget_policy_id (string), compute_status, create_time (string), creator (string), default_source_code_path (string), description (string), effective_budget_policy_id (string), effective_user_api_scopes (array), id (string), name (string), oauth2_app_client_id (string), oauth2_app_integration_id (string), pending_deployment, resources (array), service_principal_client_id (string), service_principal_id (integer), service_principal_name (string), update_time (string), updater (string), url (string), user_api_scopes (array) }

get_api_2_0_apps_by_app_name_deployments
  Lists all app deployments for the app with the supplied name.
  Parameters: app_name (string), page_token (string), page_size (integer)

post_api_2_0_apps_by_app_name_deployments
  Creates an app deployment for the app with the supplied name.
  Parameters: app_name (string); data (object, required): { create_time (string), creator (string), deployment_artifacts, deployment_id (string), mode, source_code_path (string), status, update_time (string) }

get_api_2_0_apps_by_app_name_deployments_by_deployment_id
  Retrieves information for the app deployment with the supplied name and deployment ID.
  Parameters: app_name (string), deployment_id (string)

get_api_2_0_apps_by_name
  Retrieves information for the app with the supplied name.
  Parameters: name (string)

patch_api_2_0_apps_by_name
  Updates the app with the supplied name.
  Parameters: name (string); data (object, required): { active_deployment, app_status, budget_policy_id (string), compute_status, create_time (string), creator (string), default_source_code_path (string), description (string), effective_budget_policy_id (string), effective_user_api_scopes (array), id (string), name (string), oauth2_app_client_id (string), oauth2_app_integration_id (string), pending_deployment, resources (array), service_principal_client_id (string), service_principal_id (integer), service_principal_name (string), update_time (string), updater (string), url (string), user_api_scopes (array) }

delete_api_2_0_apps_by_name
  Deletes an app.
  Parameters: name (string)

post_api_2_0_apps_by_name_start
  Starts the last active deployment of the app in the workspace.
  Parameters: name (string); data (object, required)

post_api_2_0_apps_by_name_stop
  Stops the active deployment of the app in the workspace.
  Parameters: name (string); data (object, required)
get_api_2_0_clean_rooms
  Get a list of all clean rooms of the metastore. Only clean rooms the caller has access to are returned.
  Parameters: page_size (integer), page_token (string)

post_api_2_0_clean_rooms
  Create a new clean room with the specified collaborators. This method is asynchronous; the returned name field inside the clean_room field can be used to poll the clean room status, using the :method:cleanrooms/get method. When this method returns, the clean room will be in a PROVISIONING state, with only name, owner, comment, created_at, and status populated. The clean room will be usable once it enters an ACTIVE state. The caller must be a metastore admin or have the CREATE_CLEAN_ROOM privilege.
  Parameters: data (object, required): { access_restricted, comment (string), created_at (integer), local_collaborator_alias (string), name (string), output_catalog, owner (string), remote_detailed_info, status, updated_at (integer) }

get_api_2_0_clean_rooms_by_clean_room_name_assets
  List assets.
  Parameters: clean_room_name (string), page_token (string)

post_api_2_0_clean_rooms_by_clean_room_name_assets
  Create a clean room asset: share an asset such as a notebook or table into the clean room. For each UC asset that is added through this method, the clean room owner must also have enough privilege on the asset to consume it. The privilege must be maintained indefinitely for the clean room to be able to access the asset. Typically, you should use a group as the clean room owner.
  Parameters: clean_room_name (string); data (object, required): { added_at (integer), asset_type, clean_room_name (string), foreign_table, foreign_table_local_details, name (string), notebook, owner_collaborator_alias (string), status, table, table_local_details, view, view_local_details, volume_local_details }

get_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name
  Get the details of a clean room asset by its type and full name.
  Parameters: clean_room_name (string), asset_type (string), name (string)

patch_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name
  Update a clean room asset, for example updating the content of a notebook or changing the shared partitions of a table.
  Parameters: clean_room_name (string), asset_type (string), name (string); data (object, required): { added_at (integer), asset_type, clean_room_name (string), foreign_table, foreign_table_local_details, name (string), notebook, owner_collaborator_alias (string), status, table, table_local_details, view, view_local_details, volume_local_details }

delete_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name
  Delete a clean room asset: unshare/remove the asset from the clean room.
  Parameters: clean_room_name (string), asset_type (string), name (string)

post_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name_reviews
  Submit an asset review.
  Parameters: clean_room_name (string), asset_type (string), name (string); data (object, required): { notebook_review }

get_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name_revisions
  List revisions for an asset.
  Parameters: clean_room_name (string), asset_type (string), name (string), page_size (integer), page_token (string)

get_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name_revisions_by_etag
  Get a specific revision of an asset.
  Parameters: clean_room_name (string), asset_type (string), name (string), etag (string)

get_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules
  List all auto-approval rules for the caller.
  Parameters: clean_room_name (string), page_size (integer), page_token (string)

post_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules
  Create an auto-approval rule.
  Parameters: clean_room_name (string); data (object, required): { auto_approval_rule }

get_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules_by_rule_id
  Get an auto-approval rule by rule ID.
  Parameters: clean_room_name (string), rule_id (string)

patch_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules_by_rule_id
  Update an auto-approval rule by rule ID.
  Parameters: clean_room_name (string), rule_id (string); data (object, required): { author_collaborator_alias (string), author_scope, clean_room_name (string), created_at (integer), rule_id (string), rule_owner_collaborator_alias (string), runner_collaborator_alias (string) }

delete_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules_by_rule_id
  Delete an auto-approval rule by rule ID.
  Parameters: clean_room_name (string), rule_id (string)

post_api_2_0_clean_rooms_by_clean_room_name_output_catalogs
  Create the output catalog of the clean room.
  Parameters: clean_room_name (string); data (object, required): { catalog_name (string), status }

get_api_2_0_clean_rooms_by_clean_room_name_runs
  List all the historical notebook task runs in a clean room.
  Parameters: clean_room_name (string), notebook_name (string), page_size (integer), page_token (string)

get_api_2_0_clean_rooms_by_name
  Get the details of a clean room given its name.
  Parameters: name (string)

patch_api_2_0_clean_rooms_by_name
  Update a clean room. The caller must be the owner of the clean room, have the MODIFY_CLEAN_ROOM privilege, or be a metastore admin. When the caller is a metastore admin, only the owner field can be updated.
  Parameters: name (string); data (object, required): { clean_room }

delete_api_2_0_clean_rooms_by_name
  Delete a clean room. After deletion, the clean room will be removed from the metastore. If the other collaborators have not deleted the clean room, they will still have the clean room in their metastore, but it will be in a DELETED state and no operations other than deletion can be performed on it.
  Parameters: name (string)
post_api_2_0_database_catalogs
  Create a Database Catalog.
  Parameters: data (object, required): { create_database_if_not_exists (boolean), database_instance_name (string), database_name (string), name (string), uid (string) }

get_api_2_0_database_catalogs_by_name
  Get a Database Catalog.
  Parameters: name (string)

delete_api_2_0_database_catalogs_by_name
  Delete a Database Catalog.
  Parameters: name (string)

post_api_2_0_database_credentials
  Generates a credential that can be used to access database instances.
  Parameters: data (object, required): { claims (array), instance_names (array), request_id (string) }

get_api_2_0_database_instances
  List Database Instances.
  Parameters: page_token (string), page_size (integer)

post_api_2_0_database_instances
  Create a Database Instance.
  Parameters: data (object, required): { capacity (string), child_instance_refs (array), creation_time (string), creator (string), effective_enable_pg_native_login (boolean), effective_enable_readable_secondaries (boolean), effective_node_count (integer), effective_retention_window_in_days (integer), effective_stopped (boolean), enable_pg_native_login (boolean), enable_readable_secondaries (boolean), name (string), node_count (integer), parent_instance_ref, pg_version (string), read_only_dns (string), read_write_dns (string), retention_window_in_days (integer), state, stopped (boolean), uid (string) }

get_api_2_0_database_instances_by_name
  Get a Database Instance.
  Parameters: name (string)

patch_api_2_0_database_instances_by_name
  Update a Database Instance.
  Parameters: name (string), update_mask (string, required); data (object, required): { capacity (string), child_instance_refs (array), creation_time (string), creator (string), effective_enable_pg_native_login (boolean), effective_enable_readable_secondaries (boolean), effective_node_count (integer), effective_retention_window_in_days (integer), effective_stopped (boolean), enable_pg_native_login (boolean), enable_readable_secondaries (boolean), name (string), node_count (integer), parent_instance_ref, pg_version (string), read_only_dns (string), read_write_dns (string), retention_window_in_days (integer), state, stopped (boolean), uid (string) }

delete_api_2_0_database_instances_by_name
  Delete a Database Instance.
  Parameters: name (string), force (boolean), purge (boolean)

get_api_2_0_database_instances_find_by_uid
  Find a Database Instance by uid.
  Parameters: uid (string)

post_api_2_0_database_synced_tables
  Create a Synced Database Table.
  Parameters: data (object, required): { data_synchronization_status, database_instance_name (string), effective_database_instance_name (string), effective_logical_database_name (string), logical_database_name (string), name (string), spec, unity_catalog_provisioning_state }

get_api_2_0_database_synced_tables_by_name
  Get a Synced Database Table.
  Parameters: name (string)

delete_api_2_0_database_synced_tables_by_name
  Delete a Synced Database Table.
  Parameters: name (string)

post_api_2_0_database_tables
  Create a Database Table. Useful for registering pre-existing PG tables in UC. See CreateSyncedDatabaseTable for creating synced tables in PG from a source table in UC.
  Parameters: data (object, required): { database_instance_name (string), logical_database_name (string), name (string) }

get_api_2_0_database_tables_by_name
  Get a Database Table.
  Parameters: name (string)

delete_api_2_0_database_tables_by_name
  Delete a Database Table.
  Parameters: name (string)
post_api_2_0_dbfs_add_block
  Appends a block of data to the stream specified by the input handle. If the handle does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. If the block of data exceeds 1 MB, this call throws an exception with MAX_BLOCK_SIZE_EXCEEDED.
  Parameters: data (object, required): { data (string), handle (integer) }

post_api_2_0_dbfs_close
  Closes the stream specified by the input handle. If the handle does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST.
  Parameters: data (object, required): { handle (integer) }

post_api_2_0_dbfs_create
  Opens a stream to write to a file and returns a handle to this stream. There is a 10 minute idle timeout on this handle. If a file or directory already exists on the given path and overwrite is set to false, this call throws an exception with RESOURCE_ALREADY_EXISTS. A typical workflow for file upload would be: 1. Issue a create call and get a handle. 2. Issue one or more add-block calls with the handle you have. 3. Issue a close call with the handle you have.
  Parameters: data (object, required): { overwrite (boolean), path (string) }

post_api_2_0_dbfs_delete
  Delete the file or directory and, optionally, recursively delete all files in the directory. This call throws an exception with IO_ERROR if the path is a non-empty directory and recursive is set to false, or on other similar errors. When you delete a large number of files, the delete operation is done in increments. The call returns a response after approximately 45 seconds with an error message 503 Service Unavailable asking you to re-invoke the delete operation until the directory structure is fully deleted.
  Parameters: data (object, required): { path (string), recursive (boolean) }

get_api_2_0_dbfs_get_status
  Gets the file information for a file or directory. If the file or directory does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST.
  Parameters: path (string, required)

get_api_2_0_dbfs_list
  List the contents of a directory, or details of the file. If the file or directory does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. When calling list on a large directory, the list operation will time out after approximately 60 seconds. We strongly recommend using list only on directories containing less than 10K files and discourage using the DBFS REST API for operations that list more than 10K files. Instead, we recommend that you perform such operations in the context of a cluster.
  Parameters: path (string, required)

post_api_2_0_dbfs_mkdirs
  Creates the given directory and necessary parent directories if they do not exist. If a file (not a directory) exists at any prefix of the input path, this call throws an exception with RESOURCE_ALREADY_EXISTS. Note: if this operation fails, it might have succeeded in creating some of the necessary parent directories.
  Parameters: data (object, required): { path (string) }

post_api_2_0_dbfs_move
  Moves a file from one location to another location within DBFS. If the source file does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. If a file already exists in the destination path, this call throws an exception with RESOURCE_ALREADY_EXISTS. If the given source path is a directory, this call always recursively moves all files.
  Parameters: data (object, required): { destination_path (string), source_path (string) }

post_api_2_0_dbfs_put
  Uploads a file through the use of a multipart form post. It is mainly used for streaming uploads, but can also be used as a convenient single call for data upload. Alternatively, you can pass contents as a base64 string. The amount of data that can be passed when not streaming (using the contents parameter) is limited to 1 MB; MAX_BLOCK_SIZE_EXCEEDED will be thrown if this limit is exceeded. If you want to upload large files, use the streaming upload. For details, see :method:dbfs/create.
  Parameters: data (object, required): { contents (string), overwrite (boolean), path (string) }

get_api_2_0_dbfs_read
  Returns the contents of a file. If the file does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. If the path is a directory, the read length is negative, or the offset is negative, this call throws an exception with INVALID_PARAMETER_VALUE. If the read length exceeds 1 MB, this call throws an exception with MAX_READ_SIZE_EXCEEDED. If offset + length exceeds the number of bytes in a file, it reads the contents until the end of file.
  Parameters: path (string, required), offset (integer), length (integer)
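The dbfs_create description above spells out the streaming-upload workflow (create, then one or more add-block calls, then close). A minimal sketch of the payloads, assuming the block data is base64-encoded and each block stays under the documented 1 MB limit; the path and the handle placeholder are hypothetical:

```python
import base64

# Sketch of the DBFS streaming-upload flow: chunk the bytes, base64-encode
# each chunk, and build one add_block payload per chunk.
contents = b"hello, dbfs" * 1000
CHUNK = 1_000_000  # stay under the 1 MB per-block limit
chunks = [contents[i:i + CHUNK] for i in range(0, len(contents), CHUNK)]

create = {"data": {"path": "/tmp/example.txt", "overwrite": True}}
add_blocks = [
    {"data": {"handle": "<handle from post_api_2_0_dbfs_create>",
              "data": base64.b64encode(chunk).decode()}}
    for chunk in chunks
]
close = {"data": {"handle": "<handle from post_api_2_0_dbfs_create>"}}

# Each payload would be passed to run_connection_action with the actions
# post_api_2_0_dbfs_create, post_api_2_0_dbfs_add_block, post_api_2_0_dbfs_close.
```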
get_api_2_0_fs_directories_by_directory_path
  Returns the contents of a directory. If there is no directory at the specified path, the API returns an HTTP 404 error.
  Parameters: directory_path (string), page_size (integer), page_token (string)

head_api_2_0_fs_directories_by_directory_path
  Get the metadata of a directory. The response HTTP headers contain the metadata; there is no response body. This method is useful to check whether a directory exists and the caller has access to it. If you wish to ensure the directory exists, you can instead use PUT, which will create the directory if it does not exist and is idempotent: it will succeed if the directory already exists.
  Parameters: directory_path (string)

put_api_2_0_fs_directories_by_directory_path
  Creates an empty directory. If necessary, also creates any parent directories of the new, empty directory, like the shell command mkdir -p. If called on an existing directory, returns a success response; this method is idempotent: it will succeed if the directory already exists.
  Parameters: directory_path (string)

delete_api_2_0_fs_directories_by_directory_path
  Deletes an empty directory. To delete a non-empty directory, first delete all of its contents. This can be done by listing the directory contents and deleting each file and subdirectory recursively.
  Parameters: directory_path (string)

get_api_2_0_fs_files_by_file_path
  Downloads a file. The file contents are the response body. This is a standard HTTP file download, not a JSON RPC. It supports the Range and If-Unmodified-Since HTTP headers.
  Parameters: file_path (string), Range (string), If-Unmodified-Since (string)

head_api_2_0_fs_files_by_file_path
  Get the metadata of a file. The response HTTP headers contain the metadata; there is no response body.
  Parameters: file_path (string), Range (string), If-Unmodified-Since (string)

put_api_2_0_fs_files_by_file_path
  Uploads a file of up to 5 GiB. The file contents should be sent in the request body as raw bytes (an octet stream); do not encode or otherwise modify the bytes before sending. The contents of the resulting file will be exactly the bytes sent in the request body. If the request is successful, there is no response body.
  Parameters: file_path (string), overwrite (boolean)

delete_api_2_0_fs_files_by_file_path
  Deletes a file. If the request is successful, there is no response body.
  Parameters: file_path (string)
get_api_2_0_genie_spaces
  Get a list of Genie Spaces.
  Parameters: page_size (integer), page_token (string)

get_api_2_0_genie_spaces_by_space_id
  Get details of a Genie Space.
  Parameters: space_id (string)

delete_api_2_0_genie_spaces_by_space_id
  Move a Genie Space to the trash.
  Parameters: space_id (string)

get_api_2_0_genie_spaces_by_space_id_conversations
  Get a list of conversations in a Genie Space.
  Parameters: space_id (string), page_size (integer), page_token (string)

delete_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id
  Delete a conversation.
  Parameters: space_id (string), conversation_id (string)

post_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages
  Create a new message in a conversation (see :method:genie/startconversation). The AI response uses all previously created messages in the conversation to respond.
  Parameters: space_id (string), conversation_id (string); data (object, required): { content (string) }

get_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages_by_message_id
  Get a message from a conversation.
  Parameters: space_id (string), conversation_id (string), message_id (string)

post_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages_by_message_id_attachments_by_attachment_id_execute_query
  Execute the SQL for a message query attachment. Use this API when the query attachment has expired and needs to be re-executed.
  Parameters: space_id (string), conversation_id (string), message_id (string), attachment_id (string)

get_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages_by_message_id_attachments_by_attachment_id_query_result
  Get the result of the SQL query if the message has a query attachment. This is only available if the message has a query attachment and the message status is EXECUTING_QUERY or COMPLETED.
  Parameters: space_id (string), conversation_id (string), message_id (string), attachment_id (string)

post_api_2_0_genie_spaces_by_space_id_start_conversation
  Start a new conversation.
  Parameters: space_id (string); data (object, required): { content (string) }
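The Genie actions above pair naturally: start_conversation opens a conversation, and subsequent messages reference the returned conversation ID. A sketch of the two payloads; the space ID, conversation ID, and question text are placeholders, and each would be passed to run_connection_action with the corresponding action name:

```python
# Hypothetical Genie conversation payloads. "<...>" values come from
# your workspace and from the start_conversation response.
start_conversation = {
    "space_id": "<space id>",
    "data": {"content": "What were last week's sales by region?"},
}

follow_up_message = {
    "space_id": "<space id>",
    "conversation_id": "<conversation_id from start_conversation>",
    "data": {"content": "Now show only the top three regions."},
}
```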
get_api_2_0_git_credentials
  Lists the calling user's Git credentials. One credential per user is supported.
  Parameters: none

post_api_2_0_git_credentials
  Creates a Git credential entry for the user. Only one Git credential per user is supported, so any attempt to create credentials when an entry already exists will fail. Use the PATCH endpoint to update existing credentials, or the DELETE endpoint to delete existing credentials.
  Parameters: data (object, required): { git_provider (string), git_username (string), is_default_for_provider (boolean), name (string), personal_access_token (string) }

get_api_2_0_git_credentials_by_credential_id
  Gets the Git credential with the specified credential ID.
  Parameters: credential_id (integer)

patch_api_2_0_git_credentials_by_credential_id
  Updates the specified Git credential.
  Parameters: credential_id (integer); data (object, required): { git_provider (string), git_username (string), is_default_for_provider (boolean), name (string), personal_access_token (string) }

delete_api_2_0_git_credentials_by_credential_id
  Deletes the specified Git credential.
  Parameters: credential_id (integer)

get_api_2_0_global_init_scripts
  Get a list of all global init scripts for this workspace. This returns all properties for each script but not the script contents. To retrieve the contents of a script, use the :method:globalinitscripts/get operation.
  Parameters: none

post_api_2_0_global_init_scripts
  Creates a new global init script in this workspace.
  Parameters: data (object, required): { enabled (boolean), name (string), position (integer), script (string) }

get_api_2_0_global_init_scripts_by_script_id
  Gets all the details of a script, including its Base64-encoded contents.
  Parameters: script_id (string)

patch_api_2_0_global_init_scripts_by_script_id
  Updates a global init script, specifying only the fields to change. All fields are optional. Unspecified fields retain their current value.
  Parameters: script_id (string); data (object, required): { enabled (boolean), name (string), position (integer), script (string) }

delete_api_2_0_global_init_scripts_by_script_id
  Deletes a global init script.
  Parameters: script_id (string)

post_api_2_0_identity_groups_resolve_by_external_id
  Resolves a group with the given external ID from the customer's IdP. If the group does not exist, it will be created in the account. If the customer is not onboarded onto Automatic Identity Management (AIM), this returns an error.
  Parameters: data (object, required): { external_id (string) }

post_api_2_0_identity_service_principals_resolve_by_external_id
  Resolves a service principal with the given external ID from the customer's IdP. If the service principal does not exist, it will be created. If the customer is not onboarded onto Automatic Identity Management (AIM), this returns an error.
  Parameters: data (object, required): { external_id (string) }

post_api_2_0_identity_users_resolve_by_external_id
  Resolves a user with the given external ID from the customer's IdP. If the user does not exist, it will be created. If the customer is not onboarded onto Automatic Identity Management (AIM), this returns an error.
  Parameters: data (object, required): { external_id (string) }

get_api_2_0_identity_workspace_access_details_by_principal_id
  Returns the access details for a principal in the current workspace. Allows checking access details for any provisioned principal (user, service principal, or group) in the current workspace. A provisioned principal is one that has been synced into Databricks from the customer's IdP or added explicitly to Databricks via SCIM/UI. A 'view' parameter controls which fields are returned (BASIC by default, or FULL).
  Parameters: principal_id (integer), view (string)
post_api_2_0_instance_pools_create
  Creates a new instance pool using idle and ready-to-use cloud instances.
  Parameters: data (object, required): { aws_attributes, custom_tags (object), disk_spec, enable_elastic_disk (boolean), idle_instance_autotermination_minutes (integer), instance_pool_name (string), max_capacity (integer), min_idle_instances (integer), node_type_id (string), preloaded_docker_images (array), preloaded_spark_versions (array) }

post_api_2_0_instance_pools_delete
  Deletes the instance pool permanently. The idle instances in the pool are terminated asynchronously.
  Parameters: data (object, required): { instance_pool_id (string) }

post_api_2_0_instance_pools_edit
  Modifies the configuration of an existing instance pool.
  Parameters: data (object, required): { custom_tags (object), idle_instance_autotermination_minutes (integer), instance_pool_id (string), instance_pool_name (string), max_capacity (integer), min_idle_instances (integer), node_type_id (string) }

get_api_2_0_instance_pools_get
  Retrieves the information for an instance pool based on its identifier.
  Parameters: instance_pool_id (string, required)

get_api_2_0_instance_pools_list
  Gets a list of instance pools with their statistics.
  Parameters: none

post_api_2_0_instance_profiles_add
  Registers an instance profile in Databricks. In the UI, you can then give users the permission to use this instance profile when launching clusters. This API is only available to admin users.
  Parameters: data (object, required): { iam_role_arn (string), instance_profile_arn (string), is_meta_instance_profile (boolean), skip_validation (boolean) }

post_api_2_0_instance_profiles_edit
  The only supported field to change is the optional IAM role ARN associated with the instance profile. It is required to specify the IAM role ARN if both of the following are true: your role name and instance profile name do not match (the name is the part after the last slash in each ARN), and you want to use the instance profile with Databricks SQL Serverless (https://docs.databricks.com/sql/admin/serverless.html). To understand where these fields are in the AWS console, see Enable serverless SQL warehouses.
  Parameters: data (object, required): { iam_role_arn (string), instance_profile_arn (string), is_meta_instance_profile (boolean) }

get_api_2_0_instance_profiles_list
  List the instance profiles that the calling user can use to launch a cluster. This API is available to all users.
  Parameters: none

post_api_2_0_instance_profiles_remove
  Remove the instance profile with the provided ARN. Existing clusters with this instance profile will continue to function. This API is only accessible to admin users.
  Parameters: data (object, required): { instance_profile_arn (string) }
get_api_2_0_ip_access_lists
  Gets all IP access lists for the specified workspace.
  Parameters: none

post_api_2_0_ip_access_lists
  Creates an IP access list for this workspace. A list can be an allow list or a block list. See the top of this file for a description of how the server treats allow lists and block lists at runtime. When creating or updating an IP access list: for all allow lists and block lists combined, the API supports a maximum of 1000 IP/CIDR values, where one CIDR counts as a single value. Attempts to exceed that number return error 400 with error_code QUOTA_EXCEEDED. If the new list would block the calling user's current IP, error 400 is returned with error_code INVALID_STATE.
  Parameters: data (object, required): { ip_addresses (array), label (string), list_type }

get_api_2_0_ip_access_lists_by_ip_access_list_id
  Gets an IP access list, specified by its list ID.
  Parameters: ip_access_list_id (string)

put_api_2_0_ip_access_lists_by_ip_access_list_id
  Replaces an IP access list, specified by its ID. A list can include allow lists and block lists. See the top of this file for a description of how the server treats allow lists and block lists at run time. When replacing an IP access list: for all allow lists and block lists combined, the API supports a maximum of 1000 IP/CIDR values, where one CIDR counts as a single value. Attempts to exceed that number return error 400 with error_code QUOTA_EXCEEDED. If the resulting list would block the calling user's current IP, error 400 is returned with error_code INVALID_STATE.
  Parameters: ip_access_list_id (string); data (object, required): { enabled (boolean), ip_addresses (array), label (string), list_type }

patch_api_2_0_ip_access_lists_by_ip_access_list_id
  Updates an existing IP access list, specified by its ID. A list can include allow lists and block lists. See the top of this file for a description of how the server treats allow lists and block lists at run time. When updating an IP access list: for all allow lists and block lists combined, the API supports a maximum of 1000 IP/CIDR values, where one CIDR counts as a single value. Attempts to exceed that number return error 400 with error_code QUOTA_EXCEEDED. If the updated list would block the calling user's current IP, error 400 is returned with error_code INVALID_STATE.
  Parameters: ip_access_list_id (string); data (object, required): { enabled (boolean), ip_addresses (array), label (string), list_type }

delete_api_2_0_ip_access_lists_by_ip_access_list_id
  Deletes an IP access list, specified by its list ID.
  Parameters: ip_access_list_id (string)
get_api_2_0_lakeview_dashboardsList dashboards.page_size (integer)
page_token (string)
show_trashed (boolean)
view (string)
post_api_2_0_lakeview_dashboardsCreate a draft dashboard.data: {
. create_time (string)
. dashboard_id (string)
. display_name (string)
. etag (string)
. lifecycle_state
. parent_path (string)
. path (string)
. serialized_dashboard (string)
. update_time (string)
. warehouse_id (string)
} (object) required
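A minimal params sketch for post_api_2_0_lakeview_dashboards, assuming only a display name, parent path, and warehouse are supplied (all values are hypothetical placeholders):

```python
# Hypothetical payload for creating a draft Lakeview dashboard.
params = {
    "data": {
        "display_name": "Sales Overview",
        "parent_path": "/Workspace/Users/someone@example.com",
        "warehouse_id": "abc123def456",  # placeholder SQL warehouse id
    }
}
# Invocation follows the same run_connection_action pattern shown at the top
# of this page, with action_name="post_api_2_0_lakeview_dashboards".
```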
post_api_2_0_lakeview_dashboards_migrateMigrates a classic SQL dashboard to Lakeview.data: {
. display_name (string)
. parent_path (string)
. source_dashboard_id (string)
. update_parameter_syntax (boolean)
} (object) required
get_api_2_0_lakeview_dashboards_by_dashboard_idGet a draft dashboard.dashboard_id (string)
patch_api_2_0_lakeview_dashboards_by_dashboard_idUpdate a draft dashboard.dashboard_id (string)
data: {
. create_time (string)
. dashboard_id (string)
. display_name (string)
. etag (string)
. lifecycle_state
. parent_path (string)
. path (string)
. serialized_dashboard (string)
. update_time (string)
. warehouse_id (string)
} (object) required
delete_api_2_0_lakeview_dashboards_by_dashboard_idTrash a dashboard.dashboard_id (string)
get_api_2_0_lakeview_dashboards_by_dashboard_id_publishedGet the current published dashboard.dashboard_id (string)
post_api_2_0_lakeview_dashboards_by_dashboard_id_publishedPublish the current draft dashboard.dashboard_id (string)
data: {
. embed_credentials (boolean)
. warehouse_id (string)
} (object) required
delete_api_2_0_lakeview_dashboards_by_dashboard_id_publishedUnpublish the dashboard.dashboard_id (string)
get_api_2_0_lakeview_dashboards_by_dashboard_id_published_tokeninfoGet the authorization details and scopes of a published dashboard required to mint an OAuth token.dashboard_id (string)
external_value (string)
external_viewer_id (string)
get_api_2_0_lakeview_dashboards_by_dashboard_id_schedulesList dashboard schedules.dashboard_id (string)
page_size (integer)
page_token (string)
post_api_2_0_lakeview_dashboards_by_dashboard_id_schedulesCreate dashboard schedule.dashboard_id (string)
data: {
. create_time (string)
. cron_schedule
. dashboard_id (string)
. display_name (string)
. etag (string)
. pause_status
. schedule_id (string)
. update_time (string)
. warehouse_id (string)
} (object) required
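For the schedule actions, cron_schedule is a nested object. A hedged sketch for post_api_2_0_lakeview_dashboards_by_dashboard_id_schedules follows; the Quartz-style field names inside cron_schedule are an assumption, and all ids and values are placeholders:

```python
# Hypothetical payload for creating a dashboard schedule.
# The cron_schedule shape (quartz_cron_expression, timezone_id) is an assumption.
params = {
    "dashboard_id": "abc123def456",  # placeholder dashboard id
    "data": {
        "display_name": "Hourly refresh",
        "cron_schedule": {
            "quartz_cron_expression": "0 0 * * * ?",  # top of every hour
            "timezone_id": "UTC",
        },
        "pause_status": "UNPAUSED",
    }
}
```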
get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_idGet dashboard schedule.dashboard_id (string)
schedule_id (string)
put_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_idUpdate dashboard schedule.dashboard_id (string)
schedule_id (string)
data: {
. create_time (string)
. cron_schedule
. dashboard_id (string)
. display_name (string)
. etag (string)
. pause_status
. schedule_id (string)
. update_time (string)
. warehouse_id (string)
} (object) required
delete_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_idDelete dashboard schedule.dashboard_id (string)
schedule_id (string)
etag (string)
get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptionsList schedule subscriptions.dashboard_id (string)
schedule_id (string)
page_size (integer)
page_token (string)
post_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptionsCreate schedule subscription.dashboard_id (string)
schedule_id (string)
data: {
. create_time (string)
. created_by_user_id (integer)
. dashboard_id (string)
. etag (string)
. schedule_id (string)
. subscriber
. subscription_id (string)
. update_time (string)
} (object) required
get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptions_by_subscription_idGet schedule subscription.dashboard_id (string)
schedule_id (string)
subscription_id (string)
delete_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptions_by_subscription_idDelete schedule subscription.dashboard_id (string)
schedule_id (string)
subscription_id (string)
etag (string)
get_api_2_0_libraries_all_cluster_statusesGet the status of all libraries on all clusters. A status is returned for all libraries installed on this cluster via the API or the libraries UI.No parameters
get_api_2_0_libraries_cluster_statusGet the status of libraries on a cluster. A status is returned for all libraries installed on this cluster via the API or the libraries UI. The order of returned libraries is as follows: 1. Libraries set to be installed on this cluster, in the order that the libraries were added to the cluster, are returned first. 2. Libraries that were previously requested to be installed on this cluster but are now marked for removal, in no particular order, are returned last.cluster_id (string) required
post_api_2_0_libraries_installAdd libraries to install on a cluster. The installation is asynchronous; it happens in the background after the completion of this request.data: {
. cluster_id (string)
. libraries (array)
} (object) required
post_api_2_0_libraries_uninstallSet libraries to uninstall from a cluster. The libraries won't be uninstalled until the cluster is restarted. A request to uninstall a library that is not currently installed is ignored.data: {
. cluster_id (string)
. libraries (array)
} (object) required
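The libraries array for post_api_2_0_libraries_install holds one object per library. The PyPI/JAR shapes below follow the usual Databricks Libraries API convention, and the cluster id and paths are hypothetical:

```python
# Hypothetical payload for installing libraries on a cluster.
params = {
    "data": {
        "cluster_id": "0123-456789-abcdefgh",  # placeholder cluster id
        "libraries": [
            {"pypi": {"package": "simplejson==3.19.2"}},
            {"jar": "dbfs:/FileStore/jars/my_lib.jar"},  # placeholder path
        ],
    }
}
# Uninstalling uses the same shape with action_name="post_api_2_0_libraries_uninstall";
# as noted above, those libraries are only removed after the cluster restarts.
```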
get_api_2_0_lineage_tracking_external_lineageLists external lineage relationships of a Databricks object or external metadata given a supplied direction.object_info: {
. external_metadata
. model_version
. path
. table
} (object) required
lineage_direction (string) required
page_size (integer)
page_token (string)
post_api_2_0_lineage_tracking_external_lineageCreates an external lineage relationship between a Databricks or external metadata object and another external metadata object.data: {
. columns (array)
. id (string)
. properties (object)
. source
. target
} (object) required
patch_api_2_0_lineage_tracking_external_lineageUpdates an external lineage relationship between a Databricks or external metadata object and another external metadata object.update_mask (string) required
data: {
. columns (array)
. id (string)
. properties (object)
. source
. target
} (object) required
delete_api_2_0_lineage_tracking_external_lineageDeletes an external lineage relationship between a Databricks or external metadata object and another external metadata object.external_lineage_relationship: {
. id (string)
. source
. target
} (object) required
get_api_2_0_lineage_tracking_external_metadataGets an array of external metadata objects in the metastore. If the caller is the metastore admin, all external metadata objects will be retrieved. Otherwise, only external metadata objects that the caller has BROWSE on will be retrieved. There is no guarantee of a specific ordering of the elements in the array.page_size (integer)
page_token (string)
post_api_2_0_lineage_tracking_external_metadataCreates a new external metadata object in the parent metastore if the caller is a metastore admin or has the CREATE_EXTERNAL_METADATA privilege. Grants BROWSE to all account users upon creation by default.data: {
. columns (array)
. create_time (string)
. created_by (string)
. description (string)
. entity_type (string)
. id (string)
. metastore_id (string)
. name (string)
. owner (string)
. properties (object)
. system_type
. update_time (string)
. updated_by (string)
. url (string)
} (object) required
get_api_2_0_lineage_tracking_external_metadata_by_nameGets the specified external metadata object in a metastore. The caller must be a metastore admin, the owner of the external metadata object, or a user that has the BROWSE privilege.name (string)
patch_api_2_0_lineage_tracking_external_metadata_by_nameUpdates the external metadata object that matches the supplied name. The caller can only update either the owner or other metadata fields in one request. The caller must be a metastore admin, the owner of the external metadata object, or a user that has the MODIFY privilege. If the caller is updating the owner, they must also have the MANAGE privilege.name (string)
update_mask (string) required
data: {
. columns (array)
. create_time (string)
. created_by (string)
. description (string)
. entity_type (string)
. id (string)
. metastore_id (string)
. name (string)
. owner (string)
. properties (object)
. system_type
. update_time (string)
. updated_by (string)
. url (string)
} (object) required
delete_api_2_0_lineage_tracking_external_metadata_by_nameDeletes the external metadata object that matches the supplied name. The caller must be a metastore admin, the owner of the external metadata object, or a user that has the MANAGE privilege.name (string)
get_api_2_0_marketplace_exchange_exchangesList exchanges visible to providerpage_token (string)
page_size (integer)
post_api_2_0_marketplace_exchange_exchangesCreate an exchangedata: {
. exchange
} (object) required
get_api_2_0_marketplace_exchange_exchanges_for_listingList exchanges associated with a listinglisting_id (string) required
page_token (string)
page_size (integer)
post_api_2_0_marketplace_exchange_exchanges_for_listingAssociate an exchange with a listingdata: {
. exchange_id (string)
. listing_id (string)
} (object) required
delete_api_2_0_marketplace_exchange_exchanges_for_listing_by_idDisassociate an exchange from a listingid (string)
get_api_2_0_marketplace_exchange_exchanges_by_idGet an exchange.id (string)
put_api_2_0_marketplace_exchange_exchanges_by_idUpdate an exchangeid (string)
data: {
. exchange
} (object) required
delete_api_2_0_marketplace_exchange_exchanges_by_idDelete an exchange.id (string)
get_api_2_0_marketplace_exchange_filtersList exchange filtersexchange_id (string) required
page_token (string)
page_size (integer)
post_api_2_0_marketplace_exchange_filtersAdd an exchange filter.data: {
. filter
} (object) required
put_api_2_0_marketplace_exchange_filters_by_idUpdate an exchange filter.id (string)
data: {
. filter
} (object) required
delete_api_2_0_marketplace_exchange_filters_by_idDelete an exchange filterid (string)
get_api_2_0_marketplace_exchange_listings_for_exchangeList listings associated with an exchangeexchange_id (string) required
page_token (string)
page_size (integer)
get_api_2_0_marketplace_provider_analytics_dashboardGet provider analytics dashboard.No parameters
post_api_2_0_marketplace_provider_analytics_dashboardCreate provider analytics dashboard. Returns a Marketplace-specific id, not to be confused with the Lakeview dashboard id.No parameters
get_api_2_0_marketplace_provider_analytics_dashboard_latestGet latest version of provider analytics dashboard.No parameters
put_api_2_0_marketplace_provider_analytics_dashboard_by_idUpdate provider analytics dashboard.id (string)
data: {
. version (integer)
} (object) required
get_api_2_0_marketplace_provider_filesList files attached to a parent entity.file_parent: {
. file_parent_type
. parent_id (string)
} (object) required
page_token (string)
page_size (integer)
post_api_2_0_marketplace_provider_filesCreate a file. Currently, only provider icons and attached notebooks are supported.data: {
. display_name (string)
. file_parent
. marketplace_file_type
. mime_type (string)
} (object) required
get_api_2_0_marketplace_provider_files_by_file_idGet a filefile_id (string)
delete_api_2_0_marketplace_provider_files_by_file_idDelete a filefile_id (string)
post_api_2_0_marketplace_provider_listingCreate a new listingdata: {
. listing
} (object) required
get_api_2_0_marketplace_provider_listingsList listings owned by this providerpage_token (string)
page_size (integer)
get_api_2_0_marketplace_provider_listings_by_idGet a listingid (string)
put_api_2_0_marketplace_provider_listings_by_idUpdate a listingid (string)
data: {
. listing
} (object) required
delete_api_2_0_marketplace_provider_listings_by_idDelete a listingid (string)
put_api_2_0_marketplace_provider_listings_by_listing_id_personalization_requests_by_request_id_request_statusUpdate personalization request. This method only permits updating the status of the request.listing_id (string)
request_id (string)
data: {
. reason (string)
. share
. status
} (object) required
get_api_2_0_marketplace_provider_personalization_requestsList personalization requests to this provider. This will return all personalization requests, regardless of which listing they are for.page_token (string)
page_size (integer)
post_api_2_0_marketplace_provider_providerCreate a providerdata: {
. provider
} (object) required
get_api_2_0_marketplace_provider_providersList provider profiles for account.page_token (string)
page_size (integer)
get_api_2_0_marketplace_provider_providers_by_idGet provider profileid (string)
put_api_2_0_marketplace_provider_providers_by_idUpdate provider profileid (string)
data: {
. provider
} (object) required
delete_api_2_0_marketplace_provider_providers_by_idDelete providerid (string)
get_api_2_0_mlflow_artifacts_listList artifacts for a run. Takes an optional artifact_path prefix; if specified, the response contains only artifacts with that prefix. A maximum of 1000 artifacts will be retrieved for UC Volumes. To list artifacts in UC Volumes, call /api/2.0/fs/directories{directory_path} instead, which supports pagination. See List directory contents in the Files API (/api/workspace/files/listdirectorycontents).run_id (string)
run_uuid (string)
path (string)
page_token (string)
post_api_2_0_mlflow_comments_createPosts a comment on a model version. A comment can be submitted either by a user or programmatically to display relevant information about the model. For example, test results or deployment errors.data: {
. comment (string)
. name (string)
. version (string)
} (object) required
delete_api_2_0_mlflow_comments_deleteDeletes a comment on a model version.id (string) required
patch_api_2_0_mlflow_comments_updatePost an edit to a comment on a model version.data: {
. comment (string)
. id (string)
} (object) required
post_api_2_0_mlflow_databricks_model_versions_transition_stageTransition a model version's stage. This is a Databricks workspace version of the MLflow endpoint (https://www.mlflow.org/docs/latest/rest-api.html#transition-modelversion-stage) that also accepts a comment associated with the transition to be recorded.data: {
. archive_existing_versions (boolean)
. comment (string)
. name (string)
. stage (string)
. version (string)
} (object) required
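Putting the transition-stage parameters together, a hedged sketch for post_api_2_0_mlflow_databricks_model_versions_transition_stage (model name, version, stage, and comment are illustrative values):

```python
# Hypothetical payload for transitioning a model version's stage with a comment.
params = {
    "data": {
        "name": "churn-model",              # registered model name (example)
        "version": "3",
        "stage": "Production",              # target stage (example value)
        "archive_existing_versions": True,  # archive versions already in the target stage
        "comment": "Promoted after offline evaluation",
    }
}
```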
get_api_2_0_mlflow_databricks_registered_models_getGet the details of a model. This is a Databricks workspace version of the MLflow endpoint (https://www.mlflow.org/docs/latest/rest-api.html#get-registeredmodel) that also returns the model's Databricks workspace ID and the permission level of the requesting user on the model.name (string) required
post_api_2_0_mlflow_databricks_runs_delete_runsBulk delete runs in an experiment that were created prior to or at the specified timestamp. Deletes at most max_runs per request. To call this API from a Databricks Notebook in Python, you can use the client code snippet at https://docs.databricks.com/en/mlflow/runs.html#bulk-delete.data: {
. experiment_id (string)
. max_runs (integer)
. max_timestamp_millis (integer)
} (object) required
post_api_2_0_mlflow_databricks_runs_restore_runsBulk restore runs in an experiment that were deleted no earlier than the specified timestamp. Restores at most max_runs per request. To call this API from a Databricks Notebook in Python, you can use the client code snippet at https://docs.databricks.com/en/mlflow/runs.html#bulk-restore.data: {
. experiment_id (string)
. max_runs (integer)
. min_timestamp_millis (integer)
} (object) required
post_api_2_0_mlflow_experiments_createCreates an experiment with a name. Returns the ID of the newly created experiment. Validates that another experiment with the same name does not already exist; throws RESOURCE_ALREADY_EXISTS if an experiment with the given name already exists.data: {
. artifact_location (string)
. name (string)
. tags (array)
} (object) required
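A minimal sketch for post_api_2_0_mlflow_experiments_create; the workspace path and tag values are hypothetical:

```python
# Hypothetical payload for creating an MLflow experiment.
params = {
    "data": {
        "name": "/Users/someone@example.com/churn-experiments",  # example path
        "tags": [{"key": "team", "value": "growth"}],
    }
}
# Per the description above, this fails with RESOURCE_ALREADY_EXISTS if an
# experiment with the same name already exists.
```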
post_api_2_0_mlflow_experiments_deleteMarks an experiment and associated metadata, runs, metrics, params, and tags for deletion. If the experiment uses FileStore, artifacts associated with the experiment are also deleted.data: {
. experiment_id (string)
} (object) required
get_api_2_0_mlflow_experiments_getGets metadata for an experiment. This method works on deleted experiments.experiment_id (string) required
get_api_2_0_mlflow_experiments_get_by_nameGets metadata for an experiment. This endpoint will return deleted experiments, but prefers the active experiment if an active and deleted experiment share the same name. If multiple deleted experiments share the same name, the API will return one of them. Throws RESOURCE_DOES_NOT_EXIST if no experiment with the specified name exists.experiment_name (string) required
get_api_2_0_mlflow_experiments_listGets a list of all experiments.view_type (string)
max_results (integer)
page_token (string)
post_api_2_0_mlflow_experiments_restoreRestore an experiment marked for deletion. This also restores associated metadata, runs, metrics, params, and tags. If experiment uses FileStore, underlying artifacts associated with experiment are also restored. Throws RESOURCE_DOES_NOT_EXIST if experiment was never created or was permanently deleted.data: {
. experiment_id (string)
} (object) required
post_api_2_0_mlflow_experiments_searchSearches for experiments that satisfy specified search criteria.data: {
. filter (string)
. max_results (integer)
. order_by (array)
. page_token (string)
. view_type
} (object) required
post_api_2_0_mlflow_experiments_set_experiment_tagSets a tag on an experiment. Experiment tags are metadata that can be updated.data: {
. experiment_id (string)
. key (string)
. value (string)
} (object) required
post_api_2_0_mlflow_experiments_updateUpdates experiment metadata.data: {
. experiment_id (string)
. new_name (string)
} (object) required
post_api_2_0_mlflow_logged_modelsCreate a logged model.data: {
. experiment_id (string)
. model_type (string)
. name (string)
. params (array)
. source_run_id (string)
. tags (array)
} (object) required
post_api_2_0_mlflow_logged_models_searchSearch for Logged Models that satisfy specified search criteria.data: {
. datasets (array)
. experiment_ids (array)
. filter (string)
. max_results (integer)
. order_by (array)
. page_token (string)
} (object) required
get_api_2_0_mlflow_logged_models_by_model_idGet a logged model.model_id (string)
patch_api_2_0_mlflow_logged_models_by_model_idFinalize a logged model.model_id (string)
data: {
. status
} (object) required
delete_api_2_0_mlflow_logged_models_by_model_idDelete a logged model.model_id (string)
post_api_2_0_mlflow_logged_models_by_model_id_paramsLogs params for a logged model. A param is a key-value pair (string key, string value). Examples include hyperparameters used for ML model training. A param can be logged only once for a logged model, and attempting to overwrite an existing param with a different value will result in an error.model_id (string)
data: {
. params (array)
} (object) required
patch_api_2_0_mlflow_logged_models_by_model_id_tagsSet tags for a logged model.model_id (string)
data: {
. tags (array)
} (object) required
delete_api_2_0_mlflow_logged_models_by_model_id_tags_by_tag_keyDelete a tag on a logged model.model_id (string)
tag_key (string)
get_api_2_0_mlflow_metrics_get_historyGets a list of all values for the specified metric for a given run.run_id (string)
run_uuid (string)
metric_key (string) required
page_token (string)
max_results (integer)
post_api_2_0_mlflow_model_versions_createCreates a model version.data: {
. description (string)
. name (string)
. run_id (string)
. run_link (string)
. source (string)
. tags (array)
} (object) required
delete_api_2_0_mlflow_model_versions_deleteDeletes a model version.name (string) required
version (string) required
delete_api_2_0_mlflow_model_versions_delete_tagDeletes a model version tag.name (string) required
version (string) required
key (string) required
get_api_2_0_mlflow_model_versions_getGet a model version.name (string) required
version (string) required
get_api_2_0_mlflow_model_versions_get_download_uriGets a URI to download the model version.name (string) required
version (string) required
get_api_2_0_mlflow_model_versions_searchSearches for specific model versions based on the supplied filter.filter (string)
max_results (integer)
order_by (array)
page_token (string)
post_api_2_0_mlflow_model_versions_set_tagSets a model version tag.data: {
. key (string)
. name (string)
. value (string)
. version (string)
} (object) required
patch_api_2_0_mlflow_model_versions_updateUpdates the model version.data: {
. description (string)
. name (string)
. version (string)
} (object) required
post_api_2_0_mlflow_registered_models_createCreates a new registered model with the name specified in the request body. Throws RESOURCE_ALREADY_EXISTS if a registered model with the given name exists.data: {
. description (string)
. name (string)
. tags (array)
} (object) required
delete_api_2_0_mlflow_registered_models_deleteDeletes a registered model.name (string) required
delete_api_2_0_mlflow_registered_models_delete_tagDeletes the tag for a registered model.name (string) required
key (string) required
post_api_2_0_mlflow_registered_models_get_latest_versionsGets the latest version of a registered model.data: {
. name (string)
. stages (array)
} (object) required
get_api_2_0_mlflow_registered_models_listLists all available registered models, up to the limit specified in max_results.max_results (integer)
page_token (string)
post_api_2_0_mlflow_registered_models_renameRenames a registered model.data: {
. name (string)
. new_name (string)
} (object) required
get_api_2_0_mlflow_registered_models_searchSearch for registered models based on the specified filter.filter (string)
max_results (integer)
order_by (array)
page_token (string)
post_api_2_0_mlflow_registered_models_set_tagSets a tag on a registered model.data: {
. key (string)
. name (string)
. value (string)
} (object) required
patch_api_2_0_mlflow_registered_models_updateUpdates a registered model.data: {
. description (string)
. name (string)
} (object) required
post_api_2_0_mlflow_registry_webhooks_createNOTE: This endpoint is in Public Preview. Creates a registry webhook.data: {
. description (string)
. events (array)
. http_url_spec
. job_spec
. model_name (string)
. status
} (object) required
delete_api_2_0_mlflow_registry_webhooks_deleteNOTE: This endpoint is in Public Preview. Deletes a registry webhook.id (string) required
get_api_2_0_mlflow_registry_webhooks_listNOTE: This endpoint is in Public Preview. Lists all registry webhooks.model_name (string)
events (array)
page_token (string)
max_results (integer)
post_api_2_0_mlflow_registry_webhooks_testNOTE: This endpoint is in Public Preview. Tests a registry webhook.data: {
. event
. id (string)
} (object) required
patch_api_2_0_mlflow_registry_webhooks_updateNOTE: This endpoint is in Public Preview. Updates a registry webhook.data: {
. description (string)
. events (array)
. http_url_spec
. id (string)
. job_spec
. status
} (object) required
post_api_2_0_mlflow_runs_createCreates a new run within an experiment. A run is usually a single execution of a machine learning or data ETL pipeline. MLflow uses runs to track the mlflowParam, mlflowMetric, and mlflowRunTag associated with a single execution.data: {
. experiment_id (string)
. run_name (string)
. start_time (integer)
. tags (array)
. user_id (string)
} (object) required
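A hedged sketch for post_api_2_0_mlflow_runs_create; the experiment id, run name, and tag are placeholders, and start_time is expressed in epoch milliseconds as is conventional for MLflow:

```python
import time

# Hypothetical payload for creating an MLflow run; ids and names are placeholders.
params = {
    "data": {
        "experiment_id": "123456789",
        "run_name": "baseline-v1",
        "start_time": int(time.time() * 1000),  # epoch milliseconds (assumed unit)
        "tags": [{"key": "git_sha", "value": "abc1234"}],
    }
}
```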
post_api_2_0_mlflow_runs_deleteMarks a run for deletion.data: {
. run_id (string)
} (object) required
post_api_2_0_mlflow_runs_delete_tagDeletes a tag on a run. Tags are run metadata that can be updated during a run and after a run completes.data: {
. key (string)
. run_id (string)
} (object) required
get_api_2_0_mlflow_runs_getGets the metadata, metrics, params, and tags for a run. In the case where multiple metrics with the same key are logged for a run, return only the value with the latest timestamp. If there are multiple values with the latest timestamp, return the maximum of these values.run_id (string) required
run_uuid (string)
post_api_2_0_mlflow_runs_log_batchLogs a batch of metrics, params, and tags for a run. If any data failed to be persisted, the server responds with an error (non-200 status code). In case of an internal server error or an invalid request, partial data may be written. You can write metrics, params, and tags in an interleaved fashion, but entries within a given type are guaranteed to follow the order specified in the request body.data: {
. metrics (array)
. params (array)
. run_id (string)
. tags (array)
} (object) required
post_api_2_0_mlflow_runs_log_inputsLogs inputs, such as datasets and models, to an MLflow Run.data: {
. datasets (array)
. models (array)
. run_id (string)
} (object) required
post_api_2_0_mlflow_runs_log_metricLog a metric for a run. A metric is a key-value pair (string key, float value) with an associated timestamp. Examples include the various metrics that represent ML model accuracy. A metric can be logged multiple times.data: {
. dataset_digest (string)
. dataset_name (string)
. key (string)
. model_id (string)
. run_id (string)
. run_uuid (string)
. step (integer)
. timestamp (integer)
. value (number)
} (object) required
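A hedged sketch for post_api_2_0_mlflow_runs_log_metric; the run id is a placeholder and the metric values are illustrative:

```python
import time

# Hypothetical payload for logging a single metric value to a run.
params = {
    "data": {
        "run_id": "0123456789abcdef",          # placeholder run id
        "key": "rmse",
        "value": 0.731,                        # float metric value
        "timestamp": int(time.time() * 1000),  # epoch milliseconds (assumed unit)
        "step": 0,
    }
}
# As noted above, the same metric key can be logged multiple times,
# e.g. once per training step with an increasing "step".
```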
post_api_2_0_mlflow_runs_log_modelNote: the Create a logged model API (/api/workspace/experiments/createloggedmodel) replaces this endpoint. Log a model to an MLflow Run.data: {
. model_json (string)
. run_id (string)
} (object) required
post_api_2_0_mlflow_runs_log_parameterLogs a param used for a run. A param is a key-value pair (string key, string value). Examples include hyperparameters used for ML model training and constant dates and values used in an ETL pipeline. A param can be logged only once for a run.data: {
. key (string)
. run_id (string)
. run_uuid (string)
. value (string)
} (object) required
post_api_2_0_mlflow_runs_outputsLogs outputs, such as models, from an MLflow Run.data: {
. models (array)
. run_id (string)
} (object) required
post_api_2_0_mlflow_runs_restoreRestores a deleted run. This also restores associated metadata, runs, metrics, params, and tags. Throws RESOURCE_DOES_NOT_EXIST if the run was never created or was permanently deleted.data: {
. run_id (string)
} (object) required
post_api_2_0_mlflow_runs_searchSearches for runs that satisfy expressions. Search expressions can use mlflowMetric and mlflowParam keys.data: {
. experiment_ids (array)
. filter (string)
. max_results (integer)
. order_by (array)
. page_token (string)
. run_view_type
} (object) required
post_api_2_0_mlflow_runs_set_tagSets a tag on a run. Tags are run metadata that can be updated during a run and after a run completes.data: {
. key (string)
. run_id (string)
. run_uuid (string)
. value (string)
} (object) required
post_api_2_0_mlflow_runs_updateUpdates run metadata.data: {
. end_time (integer)
. run_id (string)
. run_name (string)
. run_uuid (string)
. status
} (object) required
post_api_2_0_mlflow_transition_requests_approveApproves a model version stage transition request.data: {
. archive_existing_versions (boolean)
. comment (string)
. name (string)
. stage (string)
. version (string)
} (object) required
post_api_2_0_mlflow_transition_requests_createCreates a model version stage transition request.data: {
. comment (string)
. name (string)
. stage (string)
. version (string)
} (object) required
delete_api_2_0_mlflow_transition_requests_deleteCancels a model version stage transition request.name (string) required
version (string) required
stage (string) required
creator (string) required
comment (string)
get_api_2_0_mlflow_transition_requests_listGets a list of all open stage transition requests for the model version.name (string) required
version (string) required
post_api_2_0_mlflow_transition_requests_rejectRejects a model version stage transition request.data: {
. comment (string)
. name (string)
. stage (string)
. version (string)
} (object) required
get_api_2_0_notification_destinationsLists notification destinations.page_token (string)
page_size (integer)
post_api_2_0_notification_destinationsCreates a notification destination. Requires workspace admin permissions.data: {
. config
. display_name (string)
} (object) required
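A hedged sketch for post_api_2_0_notification_destinations; the Slack-webhook shape of the config object is an assumption (this page does not document it), and the URL is a placeholder:

```python
# Hypothetical payload for creating a Slack notification destination.
# The {"slack": {"url": ...}} config shape is assumed, not taken from this page.
params = {
    "data": {
        "display_name": "Data alerts",
        "config": {
            "slack": {"url": "https://hooks.slack.com/services/T000/B000/XXXX"},
        },
    }
}
# Remember this action requires workspace admin permissions, per the
# description above.
```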
get_api_2_0_notification_destinations_by_idGets a notification destination.id (string)
patch_api_2_0_notification_destinations_by_idUpdates a notification destination. Requires workspace admin permissions. At least one field is required in the request body.id (string)
data: {
. config
. display_name (string)
} (object) required
delete_api_2_0_notification_destinations_by_idDeletes a notification destination. Requires workspace admin permissions.id (string)
post_api_2_0_online_tablesCreate a new Online Table.data: {
. name (string)
. spec
. status
. table_serving_url (string)
. unity_catalog_provisioning_state
} (object) required
get_api_2_0_online_tables_by_nameGet information about an existing online table and its status.name (string)
delete_api_2_0_online_tables_by_nameDelete an online table. Warning: This will delete all the data in the online table. If the source Delta table was deleted or modified since this Online Table was created, this will lose the data forever!name (string)
get_api_2_0_permissions_apps_by_app_nameGets the permissions of an app. Apps can inherit permissions from their root object.app_name (string)
put_api_2_0_permissions_apps_by_app_nameSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.app_name (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_apps_by_app_nameUpdates the permissions on an app. Apps can inherit permissions from their root object.app_name (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_apps_by_app_name_permission_levelsGets the permission levels that a user can have on an object.app_name (string)
get_api_2_0_permissions_authorization_passwordsGets the permissions of all passwords. Passwords can inherit permissions from their root object.No parameters
put_api_2_0_permissions_authorization_passwordsSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_authorization_passwordsUpdates the permissions on all passwords. Passwords can inherit permissions from their root object.data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_authorization_passwords_permission_levelsGets the permission levels that a user can have on an object.No parameters
get_api_2_0_permissions_authorization_tokensGets the permissions of all tokens. Tokens can inherit permissions from their root object.No parameters
put_api_2_0_permissions_authorization_tokensSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_authorization_tokensUpdates the permissions on all tokens. Tokens can inherit permissions from their root object.data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_authorization_tokens_permission_levelsGets the permission levels that a user can have on an object.No parameters
get_api_2_0_permissions_cluster_policies_by_cluster_policy_idGets the permissions of a cluster policy. Cluster policies can inherit permissions from their root object.cluster_policy_id (string)
put_api_2_0_permissions_cluster_policies_by_cluster_policy_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.cluster_policy_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_cluster_policies_by_cluster_policy_idUpdates the permissions on a cluster policy. Cluster policies can inherit permissions from their root object.cluster_policy_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_cluster_policies_by_cluster_policy_id_permission_levelsGets the permission levels that a user can have on an object.cluster_policy_id (string)
get_api_2_0_permissions_clusters_by_cluster_idGets the permissions of a cluster. Clusters can inherit permissions from their root object.cluster_id (string)
put_api_2_0_permissions_clusters_by_cluster_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.cluster_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_clusters_by_cluster_idUpdates the permissions on a cluster. Clusters can inherit permissions from their root object.cluster_id (string)
data: {
. access_control_list (array)
} (object) required
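All of the permissions `put`/`patch` actions above accept an `access_control_list` following the Databricks permissions schema. As a rough sketch of calling one of them through `run_connection_action` (the connection name, user, group, and permission levels below are illustrative, not from this document):

```python
def patch_cluster_permissions(connection_name, cluster_id, acl):
    # Hypothetical helper; the import is deferred so the sketch stays self-contained.
    from abstra.connectors import run_connection_action
    return run_connection_action(
        connection_name=connection_name,
        action_name="patch_api_2_0_permissions_clusters_by_cluster_id",
        params={"cluster_id": cluster_id, "data": {"access_control_list": acl}},
    )

# Each ACL entry pairs a principal with a permission level.
acl = [
    {"user_name": "ada@example.com", "permission_level": "CAN_RESTART"},
    {"group_name": "data-eng", "permission_level": "CAN_MANAGE"},
]
```

Note that `patch` merges these entries into the existing ACL, while the corresponding `put` action replaces the ACL wholesale.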
get_api_2_0_permissions_clusters_by_cluster_id_permission_levelsGets the permission levels that a user can have on an object.cluster_id (string)
get_api_2_0_permissions_experiments_by_experiment_idGets the permissions of an experiment. Experiments can inherit permissions from their root object.experiment_id (string)
put_api_2_0_permissions_experiments_by_experiment_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.experiment_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_experiments_by_experiment_idUpdates the permissions on an experiment. Experiments can inherit permissions from their root object.experiment_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_experiments_by_experiment_id_permission_levelsGets the permission levels that a user can have on an object.experiment_id (string)
get_api_2_0_permissions_instance_pools_by_instance_pool_idGets the permissions of an instance pool. Instance pools can inherit permissions from their root object.instance_pool_id (string)
put_api_2_0_permissions_instance_pools_by_instance_pool_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.instance_pool_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_instance_pools_by_instance_pool_idUpdates the permissions on an instance pool. Instance pools can inherit permissions from their root object.instance_pool_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_instance_pools_by_instance_pool_id_permission_levelsGets the permission levels that a user can have on an object.instance_pool_id (string)
get_api_2_0_permissions_jobs_by_job_idGets the permissions of a job. Jobs can inherit permissions from their root object.job_id (string)
put_api_2_0_permissions_jobs_by_job_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.job_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_jobs_by_job_idUpdates the permissions on a job. Jobs can inherit permissions from their root object.job_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_jobs_by_job_id_permission_levelsGets the permission levels that a user can have on an object.job_id (string)
get_api_2_0_permissions_pipelines_by_pipeline_idGets the permissions of a pipeline. Pipelines can inherit permissions from their root object.pipeline_id (string)
put_api_2_0_permissions_pipelines_by_pipeline_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.pipeline_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_pipelines_by_pipeline_idUpdates the permissions on a pipeline. Pipelines can inherit permissions from their root object.pipeline_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_pipelines_by_pipeline_id_permission_levelsGets the permission levels that a user can have on an object.pipeline_id (string)
get_api_2_0_permissions_registered_models_by_registered_model_idGets the permissions of a registered model. Registered models can inherit permissions from their root object.registered_model_id (string)
put_api_2_0_permissions_registered_models_by_registered_model_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.registered_model_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_registered_models_by_registered_model_idUpdates the permissions on a registered model. Registered models can inherit permissions from their root object.registered_model_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_registered_models_by_registered_model_id_permission_levelsGets the permission levels that a user can have on an object.registered_model_id (string)
get_api_2_0_permissions_repos_by_repo_idGets the permissions of a repo. Repos can inherit permissions from their root object.repo_id (string)
put_api_2_0_permissions_repos_by_repo_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.repo_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_repos_by_repo_idUpdates the permissions on a repo. Repos can inherit permissions from their root object.repo_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_repos_by_repo_id_permission_levelsGets the permission levels that a user can have on an object.repo_id (string)
get_api_2_0_permissions_serving_endpoints_by_serving_endpoint_idGets the permissions of a serving endpoint. Serving endpoints can inherit permissions from their root object.serving_endpoint_id (string)
put_api_2_0_permissions_serving_endpoints_by_serving_endpoint_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.serving_endpoint_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_serving_endpoints_by_serving_endpoint_idUpdates the permissions on a serving endpoint. Serving endpoints can inherit permissions from their root object.serving_endpoint_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_serving_endpoints_by_serving_endpoint_id_permission_levelsGets the permission levels that a user can have on an object.serving_endpoint_id (string)
get_api_2_0_permissions_warehouses_by_warehouse_idGets the permissions of a SQL warehouse. SQL warehouses can inherit permissions from their root object.warehouse_id (string)
put_api_2_0_permissions_warehouses_by_warehouse_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.warehouse_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_warehouses_by_warehouse_idUpdates the permissions on a SQL warehouse. SQL warehouses can inherit permissions from their root object.warehouse_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_warehouses_by_warehouse_id_permission_levelsGets the permission levels that a user can have on an object.warehouse_id (string)
get_api_2_0_permissions_by_request_object_type_by_request_object_idGets the permissions of an object. Objects can inherit permissions from their parent objects or root object.request_object_type (string)
request_object_id (string)
put_api_2_0_permissions_by_request_object_type_by_request_object_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their parent objects or root object.request_object_type (string)
request_object_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_by_request_object_type_by_request_object_idUpdates the permissions on an object. Objects can inherit permissions from their parent objects or root object.request_object_type (string)
request_object_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_by_request_object_type_by_request_object_id_permission_levelsGets the permission levels that a user can have on an object.request_object_type (string)
request_object_id (string)
get_api_2_0_permissions_by_workspace_object_type_by_workspace_object_idGets the permissions of a workspace object. Workspace objects can inherit permissions from their parent objects or root object.workspace_object_type (string)
workspace_object_id (string)
put_api_2_0_permissions_by_workspace_object_type_by_workspace_object_idSets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their parent objects or root object.workspace_object_type (string)
workspace_object_id (string)
data: {
. access_control_list (array)
} (object) required
patch_api_2_0_permissions_by_workspace_object_type_by_workspace_object_idUpdates the permissions on a workspace object. Workspace objects can inherit permissions from their parent objects or root object.workspace_object_type (string)
workspace_object_id (string)
data: {
. access_control_list (array)
} (object) required
get_api_2_0_permissions_by_workspace_object_type_by_workspace_object_id_permission_levelsGets the permission levels that a user can have on an object.workspace_object_type (string)
workspace_object_id (string)
get_api_2_0_pipelinesLists pipelines defined in the Delta Live Tables system.page_token (string)
max_results (integer)
order_by (array)
filter (string)
post_api_2_0_pipelinesCreates a new data processing pipeline based on the requested configuration. If successful, this method returns the ID of the new pipeline.data: {
. allow_duplicate_names (boolean)
. catalog (string)
. channel (string)
. clusters (array)
. configuration (object)
. continuous (boolean)
. deployment
. development (boolean)
. dry_run (boolean)
. edition (string)
. environment
. event_log
. filters
. id (string)
. ingestion_definition
. libraries (array)
. name (string)
. notifications (array)
. photon (boolean)
. root_path (string)
. schema (string)
. serverless (boolean)
. storage (string)
. tags (object)
. target (string)
. trigger
} (object) required
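A pipeline-creation call only needs a handful of the fields listed above. A minimal sketch, assuming an existing connection and illustrative pipeline settings (the notebook path and names are made up):

```python
def create_pipeline(connection_name, spec):
    # Hypothetical helper around the connector call; on success the API
    # response includes the new pipeline's ID.
    from abstra.connectors import run_connection_action
    return run_connection_action(
        connection_name=connection_name,
        action_name="post_api_2_0_pipelines",
        params={"data": spec},
    )

# Illustrative spec: a serverless, triggered (non-continuous) pipeline.
spec = {
    "name": "daily-orders",
    "serverless": True,
    "continuous": False,
    "libraries": [{"notebook": {"path": "/Repos/etl/orders"}}],
    "target": "analytics",
}
```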
get_api_2_0_pipelines_by_pipeline_idGets a pipeline.pipeline_id (string)
put_api_2_0_pipelines_by_pipeline_idUpdates a pipeline with the supplied configuration.pipeline_id (string)
data: {
. allow_duplicate_names (boolean)
. catalog (string)
. channel (string)
. clusters (array)
. configuration (object)
. continuous (boolean)
. deployment
. development (boolean)
. edition (string)
. environment
. event_log
. expected_last_modified (integer)
. filters
. id (string)
. ingestion_definition
. libraries (array)
. name (string)
. notifications (array)
. photon (boolean)
. root_path (string)
. schema (string)
. serverless (boolean)
. storage (string)
. tags (object)
. target (string)
. trigger
} (object) required
delete_api_2_0_pipelines_by_pipeline_idDeletes a pipeline. Deleting a pipeline is a permanent action that stops and removes the pipeline and its tables. You cannot undo this action.pipeline_id (string)
get_api_2_0_pipelines_by_pipeline_id_eventsRetrieves events for a pipeline.pipeline_id (string)
page_token (string)
max_results (integer)
order_by (array)
filter (string)
post_api_2_0_pipelines_by_pipeline_id_stopStops the pipeline by canceling the active update. If there is no active update for the pipeline, this request is a no-op.pipeline_id (string)
get_api_2_0_pipelines_by_pipeline_id_updatesLists updates for an active pipeline.pipeline_id (string)
page_token (string)
max_results (integer)
until_update_id (string)
post_api_2_0_pipelines_by_pipeline_id_updatesStarts a new update for the pipeline. If there is already an active update for the pipeline, the request will fail and the active update will remain running.pipeline_id (string)
data: {
. cause
. full_refresh (boolean)
. full_refresh_selection (array)
. refresh_selection (array)
. validate_only (boolean)
} (object) required
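Starting an update is a small payload: set `full_refresh` to recompute everything, or pass `refresh_selection` to refresh only specific tables. A sketch of building the `params` for this action (the helper name, pipeline ID, and table names are illustrative):

```python
def update_params(pipeline_id, full_refresh=False, tables=None):
    # Builds the params payload for post_api_2_0_pipelines_by_pipeline_id_updates.
    # Pass `tables` to refresh a subset instead of the whole pipeline.
    data = {"full_refresh": full_refresh, "validate_only": False}
    if tables:
        data["refresh_selection"] = tables
    return {"pipeline_id": pipeline_id, "data": data}

params = update_params("1234-abcd", tables=["orders", "customers"])
```

Remember that the request fails if an update is already active for the pipeline.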
get_api_2_0_pipelines_by_pipeline_id_updates_by_update_idGets an update from an active pipeline.pipeline_id (string)
update_id (string)
post_api_2_0_policies_clusters_createCreates a new policy with prescribed settings.data: {
. definition (string)
. description (string)
. libraries (array)
. max_clusters_per_user (integer)
. name (string)
. policy_family_definition_overrides (string)
. policy_family_id (string)
} (object) required
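One detail worth noting when building the payload above: the `definition` field is a JSON *string* of policy rules, not a nested object, so it must be serialized before sending. A sketch (the policy name and rule values are illustrative):

```python
import json

def policy_create_params(name, rules, max_clusters_per_user=5):
    # The `definition` field must be a JSON string, not a nested object.
    return {
        "data": {
            "name": name,
            "definition": json.dumps(rules),
            "max_clusters_per_user": max_clusters_per_user,
        }
    }

# Illustrative rules: pin the Spark version and cap autoscaling.
params = policy_create_params(
    "small-jobs",
    {
        "spark_version": {"type": "fixed", "value": "13.3.x-scala2.12"},
        "autoscale.max_workers": {"type": "range", "maxValue": 4},
    },
)
```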
post_api_2_0_policies_clusters_deleteDeletes a cluster policy. Clusters governed by this policy can still run, but cannot be edited.data: {
. policy_id (string)
} (object) required
post_api_2_0_policies_clusters_editUpdates an existing cluster policy. This operation may invalidate some clusters governed by the previous policy.data: {
. definition (string)
. description (string)
. libraries (array)
. max_clusters_per_user (integer)
. name (string)
. policy_family_definition_overrides (string)
. policy_family_id (string)
. policy_id (string)
} (object) required
post_api_2_0_policies_clusters_enforce_complianceUpdates a cluster to be compliant with the current version of its policy. A cluster can be updated if it is in a RUNNING or TERMINATED state. If a cluster is updated while in a RUNNING state, it will be restarted so that the new attributes can take effect. If a cluster is updated while in a TERMINATED state, it will remain TERMINATED. The next time the cluster is started, the new attributes will take effect. Clusters created by the Databricks Jobs, DLT, or Models services cannot be enforced.data: {
. cluster_id (string)
. validate_only (boolean)
} (object) required
get_api_2_0_policies_clusters_getGets a cluster policy entity. Creation and editing is available to admins only.policy_id (string) required
get_api_2_0_policies_clusters_get_complianceReturns the policy compliance status of a cluster. Clusters could be out of compliance if their policy was updated after the cluster was last edited.cluster_id (string) required
get_api_2_0_policies_clusters_listReturns a list of policies accessible by the requesting user.sort_order (string)
sort_column (string)
get_api_2_0_policies_clusters_list_complianceReturns the policy compliance status of all clusters that use a given policy. Clusters could be out of compliance if their policy was updated after the cluster was last edited.policy_id (string) required
page_token (string)
page_size (integer)
post_api_2_0_policies_jobs_enforce_complianceUpdates a job so the job clusters that are created when running the job specified in new_cluster are compliant with the current versions of their respective cluster policies. All-purpose clusters used in the job will not be updated.data: {
. job_id (integer)
. validate_only (boolean)
} (object) required
get_api_2_0_policies_jobs_get_complianceReturns the policy compliance status of a job. Jobs could be out of compliance if a cluster policy they use was updated after the job was last edited and some of its job clusters no longer comply with their updated policies.job_id (integer) required
get_api_2_0_policies_jobs_list_complianceReturns the policy compliance status of all jobs that use a given policy. Jobs could be out of compliance if a cluster policy they use was updated after the job was last edited and its job clusters no longer comply with the updated policy.policy_id (string) required
page_token (string)
page_size (integer)
get_api_2_0_policy_familiesReturns the list of policy definition types available to use at their latest version. This API is paginated.max_results (integer)
page_token (string)
get_api_2_0_policy_families_by_policy_family_idRetrieves the information for a policy family based on its identifier and version.policy_family_id (string)
version (integer)
get_api_2_0_preview_accounts_access_control_assignable_rolesGets all the roles that can be granted on an account level resource. A role is grantable if the rule set on the resource can contain an access rule of the role.resource (string) required
get_api_2_0_preview_accounts_access_control_rule_setsGet a rule set by its name. A rule set is always attached to a resource and contains a list of access rules on the said resource. Currently only a default rule set for each resource is supported.name (string) required
etag (string) required
put_api_2_0_preview_accounts_access_control_rule_setsReplace the rules of a rule set. First, use get to read the current version of the rule set before modifying it. This pattern helps prevent conflicts between concurrent updates.data: {
. name (string)
. rule_set
} (object) required
get_api_2_0_preview_scim_v2_groupsGets all details of the groups associated with the Databricks workspace.filter (string)
attributes (string)
excludedAttributes (string)
startIndex (integer)
count (integer)
sortBy (string)
sortOrder (string)
post_api_2_0_preview_scim_v2_groupsCreates a group in the Databricks workspace with a unique name, using the supplied group details.data: {
. displayName (string)
. entitlements (array)
. externalId (string)
. groups (array)
. id (string)
. members (array)
. roles (array)
. schemas (array)
} (object) required
get_api_2_0_preview_scim_v2_groups_by_idGets the information for a specific group in the Databricks workspace.id (string)
put_api_2_0_preview_scim_v2_groups_by_idUpdates the details of a group by replacing the entire group entity.id (string)
data: {
. displayName (string)
. entitlements (array)
. externalId (string)
. groups (array)
. id (string)
. members (array)
. roles (array)
. schemas (array)
} (object) required
patch_api_2_0_preview_scim_v2_groups_by_idPartially updates the details of a group.id (string)
data: {
. Operations (array)
. schemas (array)
} (object) required
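The SCIM `patch` actions expect the standard SCIM PatchOp envelope (RFC 7644): a `schemas` URN plus a list of `Operations`. A sketch of building that body for the group patch above (the member ID is made up):

```python
def scim_patch_body(operations):
    # Standard SCIM 2.0 PatchOp envelope used by the patch_* SCIM actions.
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": operations,
    }

# Illustrative operation: add a member to the group.
body = scim_patch_body([
    {"op": "add", "path": "members", "value": [{"value": "123456"}]}
])
```

The same envelope shape applies to the service-principal and user patch actions further down.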
delete_api_2_0_preview_scim_v2_groups_by_idDeletes a group from the Databricks workspace.id (string)
get_api_2_0_preview_scim_v2_meGets details about the current method caller's identity.No parameters
get_api_2_0_preview_scim_v2_service_principalsGets the set of service principals associated with a Databricks workspace.attributes (string)
count (integer)
excludedAttributes (string)
filter (string)
sortBy (string)
sortOrder (string)
startIndex (integer)
post_api_2_0_preview_scim_v2_service_principalsCreates a new service principal in the Databricks workspace.data: {
. active (boolean)
. applicationId (string)
. displayName (string)
. entitlements (array)
. externalId (string)
. groups (array)
. id (string)
. roles (array)
. schemas (array)
} (object) required
get_api_2_0_preview_scim_v2_service_principals_by_idGets the details for a single service principal defined in the Databricks workspace.id (string)
put_api_2_0_preview_scim_v2_service_principals_by_idUpdates the details of a single service principal. This action replaces the existing service principal with the same name.id (string)
data: {
. active (boolean)
. applicationId (string)
. displayName (string)
. entitlements (array)
. externalId (string)
. groups (array)
. id (string)
. roles (array)
. schemas (array)
} (object) required
patch_api_2_0_preview_scim_v2_service_principals_by_idPartially updates the details of a single service principal in the Databricks workspace.id (string)
data: {
. Operations (array)
. schemas (array)
} (object) required
delete_api_2_0_preview_scim_v2_service_principals_by_idDeletes a single service principal in the Databricks workspace.id (string)
get_api_2_0_preview_scim_v2_usersGets details for all the users associated with a Databricks workspace.attributes (string)
count (integer)
excludedAttributes (string)
filter (string)
sortBy (string)
sortOrder (string)
startIndex (integer)
post_api_2_0_preview_scim_v2_usersCreates a new user in the Databricks workspace. This new user will also be added to the Databricks account.data: {
. active (boolean)
. displayName (string)
. emails (array)
. entitlements (array)
. externalId (string)
. groups (array)
. id (string)
. name
. roles (array)
. schemas (array)
. userName (string)
} (object) required
get_api_2_0_preview_scim_v2_users_by_idGets information for a specific user in the Databricks workspace.id (string)
attributes (string)
count (integer)
excludedAttributes (string)
filter (string)
sortBy (string)
sortOrder (string)
startIndex (integer)
put_api_2_0_preview_scim_v2_users_by_idReplaces a user's information with the data supplied in the request.id (string)
data: {
. active (boolean)
. displayName (string)
. emails (array)
. entitlements (array)
. externalId (string)
. groups (array)
. id (string)
. name
. roles (array)
. schemas (array)
. userName (string)
} (object) required
patch_api_2_0_preview_scim_v2_users_by_idPartially updates a user resource by applying the supplied operations on specific user attributes.id (string)
data: {
. Operations (array)
. schemas (array)
} (object) required
delete_api_2_0_preview_scim_v2_users_by_idDeletes a user. Deleting a user from a Databricks workspace also removes objects associated with the user.id (string)
get_api_2_0_preview_sql_alertsGets a list of alerts. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/list instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlNo parameters
post_api_2_0_preview_sql_alertsCreates an alert. An alert is a Databricks SQL object that periodically runs a query, evaluates a condition of its result, and notifies users or notification destinations if the condition was met. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/create instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmldata: {
. name (string)
. options
. parent (string)
. query_id (string)
. rearm (integer)
} (object) required
get_api_2_0_preview_sql_alerts_by_alert_idGets an alert. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/get instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlalert_id (string)
put_api_2_0_preview_sql_alerts_by_alert_idUpdates an alert. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/update instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlalert_id (string)
data: {
. name (string)
. options
. query_id (string)
. rearm (integer)
} (object) required
delete_api_2_0_preview_sql_alerts_by_alert_idDeletes an alert. Deleted alerts are no longer accessible and cannot be restored. Note: Unlike queries and dashboards, alerts cannot be moved to the trash. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/delete instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlalert_id (string)
get_api_2_0_preview_sql_dashboardsFetch a paginated list of dashboard objects. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban.order (string)
page (integer)
page_size (integer)
q (string)
post_api_2_0_preview_sql_dashboards_trash_by_dashboard_idRestores a dashboard from the trash. A restored dashboard appears in list views and searches and can be shared.dashboard_id (string)
get_api_2_0_preview_sql_dashboards_by_dashboard_idReturns a JSON representation of a dashboard object, including its visualization and query objects.dashboard_id (string)
post_api_2_0_preview_sql_dashboards_by_dashboard_idModifies this dashboard definition. This operation only affects attributes of the dashboard object. It does not add, modify, or remove widgets. Note: You cannot undo this operation.dashboard_id (string)
data: {
. name (string)
. run_as_role
. tags (array)
} (object) required
delete_api_2_0_preview_sql_dashboards_by_dashboard_idMoves a dashboard to the trash. Trashed dashboards do not appear in list views or searches, and cannot be shared.dashboard_id (string)
get_api_2_0_preview_sql_data_sourcesRetrieves a full list of SQL warehouses available in this workspace. All fields that appear in this API response are enumerated for clarity. However, you need only a SQL warehouse's id to create new queries against it. Note: A new version of the Databricks SQL API is now available. Please use :method:warehouses/list instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlNo parameters
get_api_2_0_preview_sql_permissions_by_object_type_by_object_idGets a JSON representation of the access control list (ACL) for a specified object. Note: A new version of the Databricks SQL API is now available. Please use :method:workspace/getpermissions instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlobjectType (string)
objectId (string)
post_api_2_0_preview_sql_permissions_by_object_type_by_object_idSets the access control list (ACL) for a specified object. This operation completely rewrites the ACL. Note: A new version of the Databricks SQL API is now available. Please use :method:workspace/setpermissions instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlobjectType (string)
objectId (string)
data: {
. access_control_list (array)
} (object) required
post_api_2_0_preview_sql_permissions_by_object_type_by_object_id_transferTransfers ownership of a dashboard, query, or alert to an active user. Requires an admin API key. Note: A new version of the Databricks SQL API is now available. For queries and alerts, please use :method:queries/update and :method:alerts/update respectively instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlobjectType (string)
objectId (string)
data: {
. new_owner (string)
} (object) required
get_api_2_0_preview_sql_queriesGets a list of queries. Optionally, this list can be filtered by a search term. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/list instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlorder (string)
page (integer)
page_size (integer)
q (string)
post_api_2_0_preview_sql_queriesCreates a new query definition. Queries created with this endpoint belong to the authenticated user making the request. The data_source_id field specifies the ID of the SQL warehouse to run this query against. You can use the Data Sources API to see a complete list of available SQL warehouses. Or you can copy the data_source_id from an existing query. Note: You cannot add a visualization until you create the query. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/create instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmldata: {
. data_source_id (string)
. description (string)
. name (string)
. options
. parent (string)
. query (string)
. run_as_role
. tags (array)
} (object) required
post_api_2_0_preview_sql_queries_trash_by_query_idRestores a query that has been moved to the trash. A restored query appears in list views and searches. You can use restored queries for alerts. Note: A new version of the Databricks SQL API is now available. Please see the latest version. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlquery_id (string)
get_api_2_0_preview_sql_queries_by_query_idRetrieves a query object definition along with contextual permissions information about the currently authenticated user. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/get instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlquery_id (string)
post_api_2_0_preview_sql_queries_by_query_idModifies this query definition. Note: You cannot undo this operation. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/update instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlquery_id (string)
data: {
. data_source_id (string)
. description (string)
. name (string)
. options
. query (string)
. run_as_role
. tags (array)
} (object) required
delete_api_2_0_preview_sql_queries_by_query_idMoves a query to the trash. Trashed queries immediately disappear from searches and list views, and they cannot be used for alerts. The trash is deleted after 30 days. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/delete instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.htmlquery_id (string)
get_api_2_0_quality_monitors(Unimplemented) Lists quality monitors.page_token (string)
page_size (integer)
post_api_2_0_quality_monitorsCreates a quality monitor on a Unity Catalog (UC) object.data: {
. anomaly_detection_config
. object_id (string)
. object_type (string)
} (object) required
get_api_2_0_quality_monitors_by_object_type_by_object_idReads a quality monitor on a UC object.object_type (string)
object_id (string)
put_api_2_0_quality_monitors_by_object_type_by_object_id(Unimplemented) Updates a quality monitor on a UC object.object_type (string)
object_id (string)
data: {
. anomaly_detection_config
. object_id (string)
. object_type (string)
} (object) required
delete_api_2_0_quality_monitors_by_object_type_by_object_idDeletes a quality monitor on a UC object.object_type (string)
object_id (string)
get_api_2_0_reposReturns repos that the calling user has Manage permissions on. Use next_page_token to iterate through additional pages.path_prefix (string)
next_page_token (string)
post_api_2_0_reposCreates a repo in the workspace and links it to the remote Git repo specified. Note that repos created programmatically must be linked to a remote Git repo, unlike repos created in the browser.data: {
. path (string)
. provider (string)
. sparse_checkout
. url (string)
} (object) required
get_api_2_0_repos_by_repo_idReturns the repo with the given repo ID.repo_id (integer)
patch_api_2_0_repos_by_repo_idUpdates the repo to a different branch or tag, or updates the repo to the latest commit on the same branch.repo_id (integer)
data: {
. branch (string)
. sparse_checkout
. tag (string)
} (object) required
delete_api_2_0_repos_by_repo_idDeletes the specified repo.repo_id (integer)
post_api_2_0_secrets_acls_deleteDeletes the given ACL on the given scope. Users must have the MANAGE permission to invoke this API. Example request: {"scope": "my-secret-scope", "principal": "data-scientists"}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope, principal, or ACL exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws INVALID_PARAMETER_VALUE if the permission or principal is invalid.data: {
. principal (string)
. scope (string)
} (object) required
get_api_2_0_secrets_acls_getDescribes the details about the given ACL, such as the group and permission. Users must have the MANAGE permission to invoke this API. Example response: {'principal': 'data-scientists', 'permission': 'READ'}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws INVALID_PARAMETER_VALUE if the permission or principal is invalid.scope (string) required
principal (string) required
get_api_2_0_secrets_acls_listLists the ACLs set on the given scope. Users must have the MANAGE permission to invoke this API. Example response: {'acls': [{'principal': 'admins', 'permission': 'MANAGE'}, {'principal': 'data-scientists', 'permission': 'READ'}]}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call.scope (string) required
post_api_2_0_secrets_acls_putCreates or overwrites the ACL associated with the given principal (user or group) on the specified scope point. In general, a user or group will use the most powerful permission available to them, and permissions are ordered as follows: MANAGE - Allowed to change ACLs, and read and write to this secret scope. WRITE - Allowed to read and write to this secret scope. READ - Allowed to read this secret scope and list what secrets are available.data: {
. permission
. principal (string)
. scope (string)
} (object) required
post_api_2_0_secrets_deleteDeletes the secret stored in this secret scope. You must have WRITE or MANAGE permission on the secret scope. Example request: {'scope': 'my-secret-scope', 'key': 'my-secret-key'}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope or secret exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws BAD_REQUEST if a system user attempts to delete an internal secret, or the request is made against an Azure KeyVault-backed scope.data: {
. key (string)
. scope (string)
} (object) required
get_api_2_0_secrets_getGets a secret for a given key and scope. This API can only be called from the DBUtils interface. Users need the READ permission to make this call. Example response: {'key': 'my-string-key', 'value': <bytes of the secret value>}. Note that the secret value returned is in bytes. The interpretation of the bytes is determined by the caller in DBUtils and the type the data is decoded into. Throws RESOURCE_DOES_NOT_EXIST if no such secret or secret scope exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call.scope (string) required
key (string) required
get_api_2_0_secrets_listLists the secret keys that are stored at this scope. This is a metadata-only operation; secret data cannot be retrieved using this API. Users need the READ permission to make this call. Example response: {'secrets': [{'key': 'my-string-key', 'last_updated_timestamp': '1520467595000'}, {'key': 'my-byte-key', 'last_updated_timestamp': '1520467595000'}]}. The lastUpdatedTimestamp returned is in milliseconds since epoch. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists.scope (string) required
post_api_2_0_secrets_putInserts a secret under the provided scope with the given name. If a secret already exists with the same name, this command overwrites the existing secret's value. The server encrypts the secret using the secret scope's encryption settings before storing it. You must have WRITE or MANAGE permission on the secret scope. The secret key must consist of alphanumeric characters, dashes, underscores, and periods, and cannot exceed 128 characters. The maximum allowed secret value size is 128 KB.data: {
. bytes_value (string)
. key (string)
. scope (string)
. string_value (string)
} (object) required
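Since post_api_2_0_secrets_put accepts either a string_value or a base64-encoded bytes_value, a minimal sketch of building its body (scope and key names hypothetical) looks like this:

```python
import base64
import re

# Binary secrets go in bytes_value, base64-encoded; plain text would use
# string_value instead.
secret = b"s3cr3t-bytes"
params = {
    "data": {
        "scope": "my-secret-scope",
        "key": "my-byte-key",
        "bytes_value": base64.b64encode(secret).decode("ascii"),
    }
}

# The key must be alphanumeric plus dashes/underscores/periods, max 128 chars.
assert re.fullmatch(r"[A-Za-z0-9._-]{1,128}", params["data"]["key"])
# Round-trip check: the encoded payload decodes back to the original bytes.
assert base64.b64decode(params["data"]["bytes_value"]) == secret
```
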
post_api_2_0_secrets_scopes_createCreates a new secret scope. The scope name must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters. Example request: {'scope': 'my-simple-databricks-scope', 'initial_manage_principal': 'users', 'scope_backend_type': 'databricks|azure_keyvault'}; backend_azure_keyvault (carrying the Key Vault resource_id) is required only if the scope type is azure_keyvault.data: {
. initial_manage_principal (string)
. scope (string)
. scope_backend_type
} (object) required
post_api_2_0_secrets_scopes_deleteDeletes a secret scope. Example request: {'scope': 'my-secret-scope'}. Throws RESOURCE_DOES_NOT_EXIST if the scope does not exist. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws BAD_REQUEST if a system user attempts to delete an internal secret scope.data: {
. scope (string)
} (object) required
get_api_2_0_secrets_scopes_listLists all secret scopes available in the workspace. Example response: {'scopes': [{'name': 'my-databricks-scope', 'backend_type': 'DATABRICKS'}, {'name': 'mount-points', 'backend_type': 'DATABRICKS'}]}. Throws PERMISSION_DENIED if the user does not have permission to make this API call.No parameters
get_api_2_0_serving_endpointsGet all serving endpoints.No parameters
post_api_2_0_serving_endpointsCreate a new serving endpoint.data: {
. ai_gateway
. budget_policy_id (string)
. config
. description (string)
. email_notifications
. name (string)
. rate_limits (array)
. route_optimized (boolean)
. tags (array)
} (object) required
post_api_2_0_serving_endpoints_ptCreate a new PT serving endpoint.data: {
. ai_gateway
. budget_policy_id (string)
. config
. email_notifications
. name (string)
. tags (array)
} (object) required
put_api_2_0_serving_endpoints_pt_by_name_configUpdates any combination of the PT endpoint's served entities, the compute configuration of those served entities, and the endpoint's traffic config. Updates are applied instantly.name (string)
data: {
. config
} (object) required
get_api_2_0_serving_endpoints_by_nameRetrieves the details for a single serving endpoint.name (string)
delete_api_2_0_serving_endpoints_by_nameDelete a serving endpoint.name (string)
put_api_2_0_serving_endpoints_by_name_ai_gatewayUsed to update the AI Gateway of a serving endpoint. NOTE: External model, provisioned throughput, and pay-per-token endpoints are fully supported; agent endpoints currently only support inference tables.name (string)
data: {
. fallback_config
. guardrails
. inference_table_config
. rate_limits (array)
. usage_tracking_config
} (object) required
put_api_2_0_serving_endpoints_by_name_configUpdates any combination of the serving endpoint's served entities, the compute configuration of those served entities, and the endpoint's traffic config. An endpoint that already has an update in progress cannot be updated until the current update completes or fails.name (string)
data: {
. auto_capture_config
. served_entities (array)
. served_models (array)
. traffic_config
} (object) required
get_api_2_0_serving_endpoints_by_name_metricsRetrieves the metrics associated with the provided serving endpoint in either Prometheus or OpenMetrics exposition format.name (string)
get_api_2_0_serving_endpoints_by_name_openapiGet the query schema of the serving endpoint in OpenAPI format. The schema contains information for the supported paths, input and output format and datatypes.name (string)
put_api_2_0_serving_endpoints_by_name_rate_limitsDeprecated: Please use AI Gateway to manage rate limits instead.name (string)
data: {
. rate_limits (array)
} (object) required
get_api_2_0_serving_endpoints_by_name_served_models_by_served_model_name_build_logsRetrieves the build logs associated with the provided served model.name (string)
served_model_name (string)
get_api_2_0_serving_endpoints_by_name_served_models_by_served_model_name_logsRetrieves the service logs associated with the provided served model.name (string)
served_model_name (string)
patch_api_2_0_serving_endpoints_by_name_tagsUsed to batch add and delete tags from a serving endpoint with a single API call.name (string)
data: {
. add_tags (array)
. delete_tags (array)
} (object) required
get_api_2_0_settings_types_aibi_dash_embed_ws_acc_policy_names_defaultRetrieves the AI/BI dashboard embedding access policy. The default setting is ALLOW_APPROVED_DOMAINS, permitting AI/BI dashboards to be embedded on approved domains.etag (string)
patch_api_2_0_settings_types_aibi_dash_embed_ws_acc_policy_names_defaultUpdates the AI/BI dashboard embedding access policy at the workspace level.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_aibi_dash_embed_ws_acc_policy_names_defaultDelete the AI/BI dashboard embedding access policy, reverting back to the default.etag (string)
get_api_2_0_settings_types_aibi_dash_embed_ws_apprvd_domains_names_defaultRetrieves the list of domains approved to host embedded AI/BI dashboards.etag (string)
patch_api_2_0_settings_types_aibi_dash_embed_ws_apprvd_domains_names_defaultUpdates the list of domains approved to host embedded AI/BI dashboards. This update will fail if the current workspace access policy is not ALLOW_APPROVED_DOMAINS.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_aibi_dash_embed_ws_apprvd_domains_names_defaultDelete the list of domains approved to host embedded AI/BI dashboards, reverting back to the default empty list.etag (string)
get_api_2_0_settings_types_automatic_cluster_update_names_defaultGets the automatic cluster update setting.etag (string)
patch_api_2_0_settings_types_automatic_cluster_update_names_defaultUpdates the automatic cluster update setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
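The settings actions above share an etag handshake: GET the current setting (which carries an etag), embed that etag in the PATCH body, and retry with a fresh etag on a 409 conflict. A minimal sketch, where `fetch_setting` and `patch_setting` are hypothetical stand-ins for `run_connection_action` calls:

```python
def patch_with_etag(fetch_setting, patch_setting, new_value, max_attempts=3):
    """Retry loop for the GET-then-PATCH etag protocol of the settings APIs."""
    for _ in range(max_attempts):
        current = fetch_setting()                 # GET returns the fresh etag
        body = {
            "allow_missing": True,
            "field_mask": "value",                # hypothetical field path
            "setting": {"etag": current["etag"], "value": new_value},
        }
        result = patch_setting(body)
        if result.get("status") != 409:           # 409 = concurrent update
            return result
    raise RuntimeError("setting updated concurrently too many times")

# Tiny in-memory fake to exercise the retry path.
state = {"etag": "v1", "calls": 0}
def fake_fetch():
    return {"etag": state["etag"]}
def fake_patch(body):
    state["calls"] += 1
    if state["calls"] == 1:                       # first attempt conflicts
        state["etag"] = "v2"
        return {"status": 409}
    assert body["setting"]["etag"] == state["etag"]
    return {"status": 200}

assert patch_with_etag(fake_fetch, fake_patch, 42)["status"] == 200
```
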
get_api_2_0_settings_types_dashboard_email_subscriptions_names_defaultGets the Dashboard Email Subscriptions setting.etag (string)
patch_api_2_0_settings_types_dashboard_email_subscriptions_names_defaultUpdates the Dashboard Email Subscriptions setting.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_dashboard_email_subscriptions_names_defaultReverts the Dashboard Email Subscriptions setting to its default value.etag (string)
get_api_2_0_settings_types_default_namespace_ws_names_defaultGets the default namespace setting.etag (string)
patch_api_2_0_settings_types_default_namespace_ws_names_defaultUpdates the default namespace setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. Note that if the setting does not exist, GET returns a NOT_FOUND error and the etag is present in the error response, which should be set in the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_default_namespace_ws_names_defaultDeletes the default namespace setting for the workspace. A fresh etag needs to be provided in DELETE requests as a query parameter. The etag can be retrieved by making a GET request before the DELETE request. If the setting is updated/deleted concurrently, DELETE fails with 409 and the request must be retried by using the fresh etag in the 409 response.etag (string)
get_api_2_0_settings_types_disable_legacy_access_names_defaultRetrieves the legacy access disablement status.etag (string)
patch_api_2_0_settings_types_disable_legacy_access_names_defaultUpdates legacy access disablement status.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_disable_legacy_access_names_defaultDeletes legacy access disablement status.etag (string)
get_api_2_0_settings_types_disable_legacy_dbfs_names_defaultGets the disable legacy DBFS setting.etag (string)
patch_api_2_0_settings_types_disable_legacy_dbfs_names_defaultUpdates the disable legacy DBFS setting for the workspace.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_disable_legacy_dbfs_names_defaultDeletes the disable legacy DBFS setting for a workspace, reverting back to the default.etag (string)
get_api_2_0_settings_types_enable_export_notebook_names_defaultGets the Notebook and File exporting setting.No parameters
patch_api_2_0_settings_types_enable_export_notebook_names_defaultUpdates the Notebook and File exporting setting. The model follows eventual consistency, which means the get after the update operation might receive stale values for some time.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
get_api_2_0_settings_types_enable_notebook_table_clipboard_names_defaultGets the Results Table Clipboard features setting.No parameters
patch_api_2_0_settings_types_enable_notebook_table_clipboard_names_defaultUpdates the Results Table Clipboard features setting. The model follows eventual consistency, which means the get after the update operation might receive stale values for some time.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
get_api_2_0_settings_types_enable_results_downloading_names_defaultGets the Notebook results download setting.No parameters
patch_api_2_0_settings_types_enable_results_downloading_names_defaultUpdates the Notebook results download setting. The model follows eventual consistency, which means the get after the update operation might receive stale values for some time.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
get_api_2_0_settings_types_restrict_workspace_admins_names_defaultGets the restrict workspace admins setting.etag (string)
patch_api_2_0_settings_types_restrict_workspace_admins_names_defaultUpdates the restrict workspace admins setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_restrict_workspace_admins_names_defaultReverts the restrict workspace admins setting status for the workspace. A fresh etag needs to be provided in DELETE requests as a query parameter. The etag can be retrieved by making a GET request before the DELETE request. If the setting is updated/deleted concurrently, DELETE fails with 409 and the request must be retried by using the fresh etag in the 409 response.etag (string)
get_api_2_0_settings_types_shield_csp_enablement_ws_db_names_defaultGets the compliance security profile setting.etag (string)
patch_api_2_0_settings_types_shield_csp_enablement_ws_db_names_defaultUpdates the compliance security profile setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
get_api_2_0_settings_types_shield_esm_enablement_ws_db_names_defaultGets the enhanced security monitoring setting.etag (string)
patch_api_2_0_settings_types_shield_esm_enablement_ws_db_names_defaultUpdates the enhanced security monitoring setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
get_api_2_0_settings_types_sql_results_download_names_defaultGets the SQL Results Download setting.etag (string)
patch_api_2_0_settings_types_sql_results_download_names_defaultUpdates the SQL Results Download setting.data: {
. allow_missing (boolean)
. field_mask (string)
. setting
} (object) required
delete_api_2_0_settings_types_sql_results_download_names_defaultReverts the SQL Results Download setting to its default value.etag (string)
get_api_2_0_sql_alertsGets a list of alerts accessible to the user, ordered by creation time. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban.page_token (string)
page_size (integer)
post_api_2_0_sql_alertsCreates an alert.data: {
. alert
. auto_resolve_display_name (boolean)
} (object) required
get_api_2_0_sql_alerts_by_idGets an alert.id (string)
patch_api_2_0_sql_alerts_by_idUpdates an alert.id (string)
data: {
. alert
. auto_resolve_display_name (boolean)
. update_mask (string)
} (object) required
delete_api_2_0_sql_alerts_by_idMoves an alert to the trash. Trashed alerts immediately disappear from searches and list views, and can no longer trigger. You can restore a trashed alert through the UI. A trashed alert is permanently deleted after 30 days.id (string)
get_api_2_0_sql_config_warehousesGets the workspace level configuration that is shared by all SQL warehouses in a workspace.No parameters
put_api_2_0_sql_config_warehousesSets the workspace level configuration that is shared by all SQL warehouses in a workspace.data: {
. channel
. config_param
. data_access_config (array)
. enabled_warehouse_types (array)
. global_param
. google_service_account (string)
. instance_profile_arn (string)
. security_policy
. sql_configuration_parameters
} (object) required
get_api_2_0_sql_history_queriesLists the history of queries through SQL warehouses and serverless compute. You can filter by user ID, warehouse ID, status, and time range. Most recently started queries are returned first, up to max_results per request. The pagination token returned in the response can be used to list subsequent query statuses.filter_by: {
. query_start_time_range
. statement_ids (array)
. statuses (array)
. user_ids (array)
. warehouse_ids (array)
} (object)
max_results (integer)
page_token (string)
include_metrics (boolean)
get_api_2_0_sql_queriesGets a list of queries accessible to the user, ordered by creation time. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban.page_token (string)
page_size (integer)
post_api_2_0_sql_queriesCreates a query.data: {
. auto_resolve_display_name (boolean)
. query
} (object) required
get_api_2_0_sql_queries_by_idGets a query.id (string)
patch_api_2_0_sql_queries_by_idUpdates a query.id (string)
data: {
. auto_resolve_display_name (boolean)
. query
. update_mask (string)
} (object) required
delete_api_2_0_sql_queries_by_idMoves a query to the trash. Trashed queries immediately disappear from searches and list views, and cannot be used for alerts. You can restore a trashed query through the UI. A trashed query is permanently deleted after 30 days.id (string)
post_api_2_0_sql_statementsExecute a SQL statement and optionally await its results for a specified time. Use case: small result sets with INLINE + JSON_ARRAY. For flows that generate small and predictable result sets (up to 25 MiB), INLINE responses of JSON_ARRAY result data are typically the simplest way to execute and fetch result data. Use case: large result sets with EXTERNAL_LINKS. Using EXTERNAL_LINKS to fetch result data allows you to fetch large result sets efficiently.data: {
. byte_limit (integer)
. catalog (string)
. disposition
. format
. on_wait_timeout
. parameters (array)
. row_limit (integer)
. schema (string)
. statement (string)
. wait_timeout (string)
. warehouse_id (string)
} (object) required
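A minimal sketch of the post_api_2_0_sql_statements body for a small parameterized query returned inline (the warehouse_id and statement are hypothetical):

```python
# Body for a small query: INLINE disposition, JSON_ARRAY format, and a
# named parameter bound server-side.
params = {
    "data": {
        "warehouse_id": "abc123def456",
        "statement": "SELECT * FROM range(:n)",
        "parameters": [{"name": "n", "value": "10", "type": "INT"}],
        "disposition": "INLINE",
        "format": "JSON_ARRAY",
        "wait_timeout": "30s",
        "on_wait_timeout": "CANCEL",
    }
}

# INLINE + JSON_ARRAY suits result sets up to 25 MiB; larger result sets
# should use the EXTERNAL_LINKS disposition instead.
assert params["data"]["disposition"] in {"INLINE", "EXTERNAL_LINKS"}
assert params["data"]["wait_timeout"].endswith("s")
```
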
get_api_2_0_sql_statements_by_statement_idThis request can be used to poll for the statement's status. When the status.state field is SUCCEEDED it will also return the result manifest and the first chunk of the result data. When the statement is in the terminal states CANCELED, CLOSED or FAILED, it returns HTTP 200 with the state set. After at least 12 hours in a terminal state, the statement is removed from the warehouse and further calls will receive an HTTP 404 response. NOTE: This call currently might take up to 5 seconds.statement_id (string)
post_api_2_0_sql_statements_by_statement_id_cancelRequests that an executing statement be canceled. Callers must poll for status to see the terminal state.statement_id (string)
get_api_2_0_sql_statements_by_statement_id_result_chunks_by_chunk_indexAfter the statement execution has SUCCEEDED, this request can be used to fetch any chunk by index. Whereas the first chunk with chunk_index=0 is typically fetched with :method:statementexecution/executeStatement or :method:statementexecution/getStatement, this request can be used to fetch subsequent chunks. The response structure is identical to the nested result element described in the :method:statementexecution/getStatement request, and similarly includes the next_chunk_index and next_chunk_internal_link fields.statement_id (string)
chunk_index (string)
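Paging through result chunks can be sketched as follows, where `get_chunk` is a hypothetical stand-in for a `run_connection_action` call to the result-chunks action above, and each response carries next_chunk_index until the final chunk:

```python
def collect_rows(get_chunk):
    """Follow next_chunk_index from chunk 0 until the last chunk."""
    rows, index = [], 0
    while index is not None:
        chunk = get_chunk(index)
        rows.extend(chunk["data_array"])
        index = chunk.get("next_chunk_index")   # absent on the final chunk
    return rows

# Fake two-chunk result to exercise the loop.
chunks = [
    {"data_array": [[1], [2]], "next_chunk_index": 1},
    {"data_array": [[3]]},
]
assert collect_rows(lambda i: chunks[i]) == [[1], [2], [3]]
```
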
get_api_2_0_sql_warehousesLists all SQL warehouses that a user has manager permissions on.run_as_user_id (integer)
post_api_2_0_sql_warehousesCreates a new SQL warehouse.data: {
. auto_stop_mins (integer)
. channel
. cluster_size (string)
. creator_name (string)
. enable_photon (boolean)
. enable_serverless_compute (boolean)
. instance_profile_arn (string)
. max_num_clusters (integer)
. min_num_clusters (integer)
. name (string)
. spot_instance_policy
. tags
. warehouse_type
} (object) required
get_api_2_0_sql_warehouses_by_idGets the information for a single SQL warehouse.id (string)
delete_api_2_0_sql_warehouses_by_idDeletes a SQL warehouse.id (string)
post_api_2_0_sql_warehouses_by_id_editUpdates the configuration for a SQL warehouse.id (string)
data: {
. auto_stop_mins (integer)
. channel
. cluster_size (string)
. creator_name (string)
. enable_photon (boolean)
. enable_serverless_compute (boolean)
. instance_profile_arn (string)
. max_num_clusters (integer)
. min_num_clusters (integer)
. name (string)
. spot_instance_policy
. tags
. warehouse_type
} (object) required
post_api_2_0_sql_warehouses_by_id_startStarts a SQL warehouse.id (string)
post_api_2_0_sql_warehouses_by_id_stopStops a SQL warehouse.id (string)
post_api_2_0_token_management_on_behalf_of_tokensCreates a token on behalf of a service principal.data: {
. application_id (string)
. comment (string)
. lifetime_seconds (integer)
} (object) required
get_api_2_0_token_management_tokensLists all tokens associated with the specified workspace or user.created_by_id (integer)
created_by_username (string)
get_api_2_0_token_management_tokens_by_token_idGets information about a token, specified by its ID.token_id (string)
delete_api_2_0_token_management_tokens_by_token_idDeletes a token, specified by its ID.token_id (string)
post_api_2_0_token_createCreates and returns a token for a user. If this call is made through token authentication, it creates a token with the same client ID as the authenticated token. If the user's token quota is exceeded, this call returns an error QUOTA_EXCEEDED.data: {
. comment (string)
. lifetime_seconds (integer)
} (object) required
post_api_2_0_token_deleteRevokes an access token. If a token with the specified ID is not valid, this call returns an error RESOURCE_DOES_NOT_EXIST.data: {
. token_id (string)
} (object) required
get_api_2_0_token_listLists all the valid tokens for a user-workspace pair.No parameters
post_api_2_0_unity_catalog_temporary_path_credentialsGet a short-lived credential for directly accessing cloud storage locations registered in Databricks. The Generate Temporary Path Credentials API is only supported for external storage paths, specifically external locations and external tables. Managed tables are not supported by this API. The metastore must have the external_access_enabled flag set to true (default false). The caller must have the EXTERNAL_USE_LOCATION privilege on the external location; this privilege can only be granted by external location owners.data: {
. dry_run (boolean)
. operation
. url (string)
} (object) required
post_api_2_0_unity_catalog_temporary_table_credentialsGet a short-lived credential for directly accessing the table data on cloud storage. The metastore must have the external_access_enabled flag set to true (default false). The caller must have the EXTERNAL_USE_SCHEMA privilege on the parent schema, and this privilege can only be granted by catalog owners.data: {
. operation
. table_id (string)
} (object) required
get_api_2_0_vector_search_endpointsList all vector search endpoints in the workspace.page_token (string)
post_api_2_0_vector_search_endpointsCreate a new endpoint.data: {
. endpoint_type
. name (string)
} (object) required
get_api_2_0_vector_search_endpoints_by_endpoint_nameGet details for a single vector search endpoint.endpoint_name (string)
delete_api_2_0_vector_search_endpoints_by_endpoint_nameDelete a vector search endpoint.endpoint_name (string)
patch_api_2_0_vector_search_endpoints_by_endpoint_name_budget_policyUpdate the budget policy of an endpointendpoint_name (string)
data: {
. budget_policy_id (string)
} (object) required
get_api_2_0_vector_search_indexesList all indexes in the given endpoint.endpoint_name (string) required
page_token (string)
post_api_2_0_vector_search_indexesCreate a new index.data: {
. delta_sync_index_spec
. direct_access_index_spec
. endpoint_name (string)
. index_type
. name (string)
. primary_key (string)
} (object) required
get_api_2_0_vector_search_indexes_by_index_nameGet an index.index_name (string)
delete_api_2_0_vector_search_indexes_by_index_nameDelete an index.index_name (string)
delete_api_2_0_vector_search_indexes_by_index_name_delete_dataHandles the deletion of data from a specified vector index.index_name (string)
primary_keys (array) required
post_api_2_0_vector_search_indexes_by_index_name_queryQuery the specified vector index.index_name (string)
data: {
. columns (array)
. filters_json (string)
. num_results (integer)
. query_text (string)
. query_type (string)
. query_vector (array)
. score_threshold (number)
} (object) required
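A sketch of the query body for post_api_2_0_vector_search_indexes_by_index_name_query; the index name, column names, and filter are hypothetical. Note that filters_json must be a JSON-encoded string, not a nested object:

```python
import json

params = {
    "index_name": "main.docs.chunks_index",
    "data": {
        "columns": ["id", "text"],               # columns to return
        "query_text": "how do I rotate a token?",
        "num_results": 5,
        "filters_json": '{"source": "handbook"}',  # JSON string, not a dict
    },
}

# filters_json must parse as JSON on its own.
assert isinstance(json.loads(params["data"]["filters_json"]), dict)
assert params["data"]["num_results"] > 0
```
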
post_api_2_0_vector_search_indexes_by_index_name_query_next_pageUse next_page_token returned from previous QueryVectorIndex or QueryVectorIndexNextPage request to fetch next page of results.index_name (string)
data: {
. endpoint_name (string)
. page_token (string)
} (object) required
post_api_2_0_vector_search_indexes_by_index_name_scanScan the specified vector index and return the first num_results entries after the exclusive primary_key.index_name (string)
data: {
. last_primary_key (string)
. num_results (integer)
} (object) required
post_api_2_0_vector_search_indexes_by_index_name_syncTriggers a synchronization process for a specified vector index.index_name (string)
post_api_2_0_vector_search_indexes_by_index_name_upsert_dataHandles the upserting of data into a specified vector index.index_name (string)
data: {
. inputs_json (string)
} (object) required
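Like the query filter, the upsert action takes its rows as a JSON-encoded string in inputs_json rather than as a nested object. A minimal sketch with hypothetical field names:

```python
import json

# Rows to upsert into the index; serialized into inputs_json as one string.
rows = [
    {"id": "doc-1", "text": "first chunk"},
    {"id": "doc-2", "text": "second chunk"},
]
params = {
    "index_name": "main.docs.chunks_index",
    "data": {"inputs_json": json.dumps(rows)},
}

# Round-trip check: the serialized payload decodes back to the row list.
assert json.loads(params["data"]["inputs_json"]) == rows
```
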
get_api_2_0_workspace_confGets the configuration status for a workspace.keys (string) required
patch_api_2_0_workspace_confSets the configuration status for a workspace, including enabling or disabling it.data (object) required
post_api_2_0_workspace_deleteDeletes an object or a directory and optionally recursively deletes all objects in the directory. If path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST. If path is a non-empty directory and recursive is set to false, this call returns an error DIRECTORY_NOT_EMPTY. Object deletion cannot be undone and deleting a directory recursively is not atomic.data: {
. path (string)
. recursive (boolean)
} (object) required
get_api_2_0_workspace_exportExports an object or the contents of an entire directory. If path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST. If the exported data would exceed size limit, this call returns MAX_NOTEBOOK_SIZE_EXCEEDED. Currently, this API does not support exporting a library.path (string) required
format (string)
direct_download (boolean)
get_api_2_0_workspace_get_statusGets the status of an object or a directory. If path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST.path (string) required
post_api_2_0_workspace_importImports a workspace object (for example, a notebook or file) or the contents of an entire directory. If path already exists and overwrite is set to false, this call returns an error RESOURCE_ALREADY_EXISTS. To import a directory, you can use either the DBC format or the SOURCE format with the language field unset. To import a single file as SOURCE, you must set the language field.data: {
. content (string)
. format
. language
. overwrite (boolean)
. path (string)
} (object) required
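Since post_api_2_0_workspace_import expects content base64-encoded, and a single file imported as SOURCE must carry a language, the body can be sketched like this (the path is hypothetical):

```python
import base64

# Import a single Python source file as a notebook; content is base64.
notebook_source = "print('hello from an imported notebook')\n"
params = {
    "data": {
        "path": "/Users/someone@example.com/imported_notebook",
        "format": "SOURCE",
        "language": "PYTHON",   # required when importing a single SOURCE file
        "overwrite": False,
        "content": base64.b64encode(notebook_source.encode()).decode("ascii"),
    }
}

# Round-trip check: decoding yields the original source.
assert base64.b64decode(params["data"]["content"]).decode() == notebook_source
```
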
get_api_2_0_workspace_listLists the contents of a directory, or the object if it is not a directory. If the input path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST.path (string) required
notebooks_modified_after (integer)
post_api_2_0_workspace_mkdirsCreates the specified directory and necessary parent directories if they do not exist. If there is an object (not a directory) at any prefix of the input path, this call returns an error RESOURCE_ALREADY_EXISTS. Note that if this operation fails it may have succeeded in creating some of the necessary parent directories.data: {
. path (string)
} (object) required
post_api_2_1_clusters_change_ownerChange the owner of the cluster. You must be an admin and the cluster must be terminated to perform this operation. The service principal application ID can be supplied as an argument to owner_username.data: {
. cluster_id (string)
. owner_username (string)
} (object) required
post_api_2_1_clusters_createCreates a new Spark cluster. This method will acquire new instances from the cloud provider if necessary. This method is asynchronous; the returned cluster_id can be used to poll the cluster status. When this method returns, the cluster will be in a PENDING state. The cluster will be usable once it enters a RUNNING state. Note: Databricks may not be able to acquire some of the requested nodes, due to cloud provider limitations (account limits, spot price, etc.) or transient network issues.data: {
. apply_policy_default_values (boolean)
. autoscale
. autotermination_minutes (integer)
. aws_attributes
. clone_from
. cluster_log_conf
. cluster_name (string)
. custom_tags (object)
. data_security_mode
. docker_image
. driver_instance_pool_id (string)
. driver_node_type_id (string)
. enable_elastic_disk (boolean)
. enable_local_disk_encryption (boolean)
. init_scripts (array)
. instance_pool_id (string)
. is_single_node (boolean)
. kind
. node_type_id (string)
. num_workers (integer)
. policy_id (string)
. runtime_engine
. single_user_name (string)
. spark_conf (object)
. spark_env_vars (object)
. spark_version (string)
. ssh_public_keys (array)
. use_ml_runtime (boolean)
. workload_type
} (object) required
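A minimal body sketch for post_api_2_1_clusters_create describing an autoscaling cluster; the spark_version and node_type_id values are hypothetical and must match what your workspace actually offers:

```python
# Minimal autoscaling cluster definition; most of the other fields in the
# schema above are optional.
params = {
    "data": {
        "cluster_name": "etl-cluster",
        "spark_version": "14.3.x-scala2.12",   # hypothetical runtime label
        "node_type_id": "i3.xlarge",           # hypothetical instance type
        "autoscale": {"min_workers": 1, "max_workers": 4},
        "autotermination_minutes": 30,
    }
}

# Autoscale bounds must be ordered; num_workers would replace autoscale
# for a fixed-size cluster.
auto = params["data"]["autoscale"]
assert auto["min_workers"] <= auto["max_workers"]
```
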
post_api_2_1_clusters_deleteTerminates the Spark cluster with the specified ID. The cluster is removed asynchronously. Once the termination has completed, the cluster will be in a TERMINATED state. If the cluster is already in a TERMINATING or TERMINATED state, nothing will happen.data: {
. cluster_id (string)
} (object) required
post_api_2_1_clusters_editUpdates the configuration of a cluster to match the provided attributes and size. A cluster can be updated if it is in a RUNNING or TERMINATED state. If a cluster is updated while in a RUNNING state, it will be restarted so that the new attributes can take effect. If a cluster is updated while in a TERMINATED state, it will remain TERMINATED. The next time it is started using the clusters/start API, the new attributes will take effect. Any attempt to update a cluster in any other state will be rejected with an INVALID_STATE error code.data: {
. apply_policy_default_values (boolean)
. autoscale
. autotermination_minutes (integer)
. aws_attributes
. cluster_id (string)
. cluster_log_conf
. cluster_name (string)
. custom_tags (object)
. data_security_mode
. docker_image
. driver_instance_pool_id (string)
. driver_node_type_id (string)
. enable_elastic_disk (boolean)
. enable_local_disk_encryption (boolean)
. init_scripts (array)
. instance_pool_id (string)
. is_single_node (boolean)
. kind
. node_type_id (string)
. num_workers (integer)
. policy_id (string)
. runtime_engine
. single_user_name (string)
. spark_conf (object)
. spark_env_vars (object)
. spark_version (string)
. ssh_public_keys (array)
. use_ml_runtime (boolean)
. workload_type
} (object) required
post_api_2_1_clusters_eventsRetrieves a list of events about the activity of a cluster. This API is paginated. If there are more events to read, the response includes all the parameters necessary to request the next page of events.data: {
. cluster_id (string)
. end_time (integer)
. event_types (array)
. limit (integer)
. offset (integer)
. order
. page_size (integer)
. page_token (string)
. start_time (integer)
} (object) required
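The events action is paginated, so retrieving a full history means following page tokens until none remain. A sketch of that loop, assuming a caller-supplied `fetch_page` callable (in practice a wrapper over `run_connection_action` with action post_api_2_1_clusters_events) and a response shape with `events` and `next_page_token` fields:

```python
def list_all_events(fetch_page, cluster_id, page_size=50):
    # fetch_page takes the request params and returns the parsed response,
    # which carries "events" plus "next_page_token" while more pages remain.
    events, token = [], None
    while True:
        body = {"cluster_id": cluster_id, "page_size": page_size}
        if token:
            body["page_token"] = token
        resp = fetch_page({"data": body})
        events.extend(resp.get("events", []))
        token = resp.get("next_page_token")
        if not token:
            return events
```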
get_api_2_1_clusters_getRetrieves the information for a cluster given its identifier. Clusters can be described while they are running, or up to 60 days after they are terminated.cluster_id (string) required
get_api_2_1_clusters_listReturns information about all pinned and active clusters, and all clusters terminated within the last 30 days. Clusters terminated prior to this period are not included.filter_by: {
. cluster_sources (array)
. cluster_states (array)
. is_pinned (boolean)
. policy_id (string)
} (object)
page_token (string)
page_size (integer)
sort_by: {
. direction
. field
} (object)
get_api_2_1_clusters_list_node_typesReturns a list of supported Spark node types. These node types can be used to launch a cluster.No parameters
get_api_2_1_clusters_list_zonesReturns a list of availability zones where clusters can be created (for example, us-west-2a). These zones can be used to launch a cluster.No parameters
post_api_2_1_clusters_permanent_deletePermanently deletes a Spark cluster. This cluster is terminated and resources are asynchronously removed. In addition, users will no longer see permanently deleted clusters in the cluster list, and API users can no longer perform any action on permanently deleted clusters.data: {
. cluster_id (string)
} (object) required
post_api_2_1_clusters_pinPinning a cluster ensures that the cluster will always be returned by the ListClusters API. Pinning a cluster that is already pinned will have no effect. This API can only be called by workspace admins.data: {
. cluster_id (string)
} (object) required
post_api_2_1_clusters_resizeResizes a cluster to have a desired number of workers. This will fail unless the cluster is in a RUNNING state.data: {
. autoscale
. cluster_id (string)
. num_workers (integer)
} (object) required
post_api_2_1_clusters_restartRestarts a Spark cluster with the supplied ID. If the cluster is not currently in a RUNNING state, nothing will happen.data: {
. cluster_id (string)
. restart_user (string)
} (object) required
get_api_2_1_clusters_spark_versionsReturns the list of available Spark versions. These versions can be used to launch a cluster.No parameters
post_api_2_1_clusters_startStarts a terminated Spark cluster with the supplied ID. This works similarly to createCluster, except: - The previous cluster id and attributes are preserved. - The cluster starts with the last specified cluster size. - If the previous cluster was an autoscaling cluster, the current cluster starts with the minimum number of nodes. - If the cluster is not currently in a TERMINATED state, nothing will happen. - Clusters launched to run a job cannot be started.data: {
. cluster_id (string)
} (object) required
post_api_2_1_clusters_unpinUnpinning a cluster will allow the cluster to eventually be removed from the ListClusters API. Unpinning a cluster that is not pinned will have no effect. This API can only be called by workspace admins.data: {
. cluster_id (string)
} (object) required
post_api_2_1_clusters_updateUpdates the configuration of a cluster to match the partial set of attributes and size. Denote which fields to update using the update_mask field in the request body. A cluster can be updated if it is in a RUNNING or TERMINATED state. If a cluster is updated while in a RUNNING state, it will be restarted so that the new attributes can take effect. If a cluster is updated while in a TERMINATED state, it will remain TERMINATED. The updated attributes will take effect the next time the cluster is started.data: {
. cluster
. cluster_id (string)
. update_mask (string)
} (object) required
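Unlike the edit action, clusters_update only touches the fields named in update_mask. One way to keep the mask and the changed fields in sync is to derive both from the same dict; a small sketch (the helper name is hypothetical, and the payload shape follows the parameter list above):

```python
def build_update_params(cluster_id, changes):
    # changes maps field names to new values, e.g.
    # {"autotermination_minutes": 30}; update_mask lists exactly the
    # fields being modified, so everything else is left untouched.
    return {"data": {
        "cluster_id": cluster_id,
        "cluster": dict(changes),
        "update_mask": ",".join(sorted(changes)),
    }}
```

The result is passed as `params` to `run_connection_action` with action "post_api_2_1_clusters_update".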
get_api_2_1_data_sharing_providers_by_provider_name_shares_by_share_nameGet arrays of assets associated with a specified provider's share. The caller is the recipient of the share.provider_name (string)
share_name (string)
table_max_results (integer)
function_max_results (integer)
volume_max_results (integer)
notebook_max_results (integer)
post_api_2_1_jobs_create⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/create) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Create a new job.data: {
. access_control_list (array)
. continuous
. deployment
. description (string)
. disable_auto_optimization (boolean)
. edit_mode
. email_notifications
. environments (array)
. format
. git_source
. health
. job_clusters (array)
. max_concurrent_runs (integer)
. max_retries (integer)
. min_retry_interval_millis (integer)
. name (string)
. notification_settings
. parameters (array)
. performance_target
. queue
. retry_on_timeout (boolean)
. run_as
. schedule
. tags (object)
. tasks (array)
. timeout_seconds (integer)
. trigger
. webhook_notifications
} (object) required
post_api_2_1_jobs_delete⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/delete) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Deletes a job.data: {
. job_id (integer)
} (object) required
get_api_2_1_jobs_get⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/get) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieves the details for a single job.job_id (integer) required
get_api_2_1_jobs_list⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/list) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieves a list of jobs.offset (integer)
limit (integer)
expand_tasks (boolean)
name (string)
page_token (string)
post_api_2_1_jobs_reset⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/reset) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Overwrite all settings for the given job. Use the Update endpoint (jobs/update) to update job settings partially.data: {
. job_id (integer)
. new_settings
} (object) required
post_api_2_1_jobs_run_now⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/runnow) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Run a job and return the run_id of the triggered run.data: {
. dbt_commands (array)
. idempotency_token (string)
. jar_params (array)
. job_id (integer)
. job_parameters (object)
. notebook_params (object)
. only (array)
. performance_target
. pipeline_params
. python_named_params (object)
. python_params (array)
. queue
. spark_submit_params (array)
. sql_params (object)
} (object) required
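A common flow is to trigger a run with jobs_run_now and then poll jobs_runs_get until the run reaches a terminal state. A sketch under stated assumptions: the helper names are hypothetical, `get_run` stands in for a wrapper over `run_connection_action`, and the `state.life_cycle_state`/`result_state` response fields follow the Jobs API run-state shape.

```python
import time

def build_run_now_params(job_id, notebook_params=None):
    # Request body for post_api_2_1_jobs_run_now; notebook_params are
    # only meaningful for jobs whose tasks read widget values.
    body = {"job_id": job_id}
    if notebook_params:
        body["notebook_params"] = dict(notebook_params)
    return {"data": body}

def wait_for_run(get_run, run_id, interval_s=10):
    # get_run wraps get_api_2_1_jobs_runs_get; TERMINATED, SKIPPED and
    # INTERNAL_ERROR are terminal life_cycle_state values, after which
    # result_state (e.g. SUCCESS or FAILED) is available.
    while True:
        run = get_run(run_id)
        state = run["state"]["life_cycle_state"]
        if state in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            return run["state"].get("result_state")
        time.sleep(interval_s)
```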
post_api_2_1_jobs_runs_cancel⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/cancelrun) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Cancels a job run or a task run. The run is canceled asynchronously, so it may still be running when this request completes.data: {
. run_id (integer)
} (object) required
post_api_2_1_jobs_runs_cancel_all⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/cancelallruns) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Cancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started.data: {
. all_queued_runs (boolean)
. job_id (integer)
} (object) required
post_api_2_1_jobs_runs_delete⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/deleterun) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Deletes a non-active run. Returns an error if the run is active.data: {
. run_id (integer)
} (object) required
get_api_2_1_jobs_runs_export⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/exportrun) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Export and retrieve the job run task.run_id (integer) required
views_to_export (string)
get_api_2_1_jobs_runs_get⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/getrun) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieve the metadata of a run.run_id (integer) required
include_history (boolean)
include_resolved_values (boolean)
get_api_2_1_jobs_runs_get_output⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/getrunoutput) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit call, you can use this endpoint to retrieve that value.run_id (integer) required
get_api_2_1_jobs_runs_list⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/listruns) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). List runs in descending order by start time.job_id (integer)
active_only (boolean)
completed_only (boolean)
offset (integer)
limit (integer)
run_type (string)
expand_tasks (boolean)
start_time_from (integer)
start_time_to (integer)
page_token (string)
post_api_2_1_jobs_runs_repair⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/repairrun) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Re-run one or more tasks. Tasks are re-run as part of the original job run. They use the current job and task settings, and can be viewed in the history of the original job run.data: {
. dbt_commands (array)
. jar_params (array)
. job_parameters (object)
. latest_repair_id (integer)
. notebook_params (object)
. performance_target
. pipeline_params
. python_named_params (object)
. python_params (array)
. rerun_all_failed_tasks (boolean)
. rerun_dependent_tasks (boolean)
. rerun_tasks (array)
. run_id (integer)
. spark_submit_params (array)
. sql_params (object)
} (object) required
post_api_2_1_jobs_runs_submit⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/submit) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Runs submitted using this endpoint don't display in the UI.data: {
. access_control_list (array)
. email_notifications
. environments (array)
. git_source
. health
. idempotency_token (string)
. notification_settings
. queue
. run_as
. run_name (string)
. tasks (array)
. timeout_seconds (integer)
. webhook_notifications
} (object) required
post_api_2_1_jobs_update⚠ Warning This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 (jobs/update) for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Add, update, or remove specific settings of an existing job. Use the Reset endpoint (jobs/reset) to overwrite all job settings.data: {
. fields_to_remove (array)
. job_id (integer)
. new_settings
} (object) required
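The update/reset distinction above (merge specific settings vs. overwrite everything) is easy to get wrong; a small payload builder makes the partial-update intent explicit. A sketch with a hypothetical helper name, following the parameter list above:

```python
def build_jobs_update_params(job_id, new_settings, fields_to_remove=()):
    # Unlike jobs/reset, which replaces all settings, jobs/update merges
    # new_settings into the existing job and drops any fields_to_remove.
    body = {"job_id": job_id, "new_settings": dict(new_settings)}
    if fields_to_remove:
        body["fields_to_remove"] = list(fields_to_remove)
    return {"data": body}
```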
get_api_2_1_marketplace_consumer_installationsList all installations across all listings.page_token (string)
page_size (integer)
get_api_2_1_marketplace_consumer_listingsList all published listings in the Databricks Marketplace that the consumer has access to.page_token (string)
page_size (integer)
assets (array)
categories (array)
tags (array)
is_free (boolean)
is_private_exchange (boolean)
is_staff_pick (boolean)
provider_ids (array)
get_api_2_1_marketplace_consumer_listings_by_idGet a published listing in the Databricks Marketplace that the consumer has access to.id (string)
get_api_2_1_marketplace_consumer_listings_by_listing_id_contentGet a high level preview of the metadata of listing installable content.listing_id (string)
page_token (string)
page_size (integer)
get_api_2_1_marketplace_consumer_listings_by_listing_id_fulfillmentsGet all listings fulfillments associated with a listing. A fulfillment is a potential installation. Standard installations contain metadata about the attached share or git repo. Only one of these fields will be present. Personalized installations contain metadata about the attached share or git repo, as well as the Delta Sharing recipient type.listing_id (string)
page_token (string)
page_size (integer)
get_api_2_1_marketplace_consumer_listings_by_listing_id_installationsList all installations for a particular listing.listing_id (string)
page_token (string)
page_size (integer)
post_api_2_1_marketplace_consumer_listings_by_listing_id_installationsInstall payload associated with a Databricks Marketplace listing.listing_id (string)
data: {
. accepted_consumer_terms
. catalog_name (string)
. recipient_type
. repo_detail
. share_name (string)
} (object) required
put_api_2_1_marketplace_consumer_listings_by_listing_id_installations_by_installation_idUpdates the fields defined in the installation table, and interacts with external services for fields not included in that table: 1. the token is rotated if the rotateToken flag is true; 2. the token is forcibly rotated if the rotateToken flag is true and the tokenInfo field is empty.listing_id (string)
installation_id (string)
data: {
. installation
. rotate_token (boolean)
} (object) required
delete_api_2_1_marketplace_consumer_listings_by_listing_id_installations_by_installation_idUninstall an installation associated with a Databricks Marketplace listing.listing_id (string)
installation_id (string)
get_api_2_1_marketplace_consumer_listings_by_listing_id_personalization_requestsGet the personalization request for a listing. Each consumer can make at most one personalization request for a listing.listing_id (string)
post_api_2_1_marketplace_consumer_listings_by_listing_id_personalization_requestsCreate a personalization request for a listing.listing_id (string)
data: {
. accepted_consumer_terms
. comment (string)
. company (string)
. first_name (string)
. intended_use (string)
. is_from_lighthouse (boolean)
. last_name (string)
. recipient_type
} (object) required
get_api_2_1_marketplace_consumer_listings_batch_getBatch get a published listing in the Databricks Marketplace that the consumer has access to.ids (array)
get_api_2_1_marketplace_consumer_personalization_requestsList personalization requests for a consumer across all listings.page_token (string)
page_size (integer)
get_api_2_1_marketplace_consumer_providersList all providers in the Databricks Marketplace with at least one visible listing.page_token (string)
page_size (integer)
is_featured (boolean)
get_api_2_1_marketplace_consumer_providers_by_idGet a provider in the Databricks Marketplace with at least one visible listing.id (string)
get_api_2_1_marketplace_consumer_providers_batch_getBatch get a provider in the Databricks Marketplace with at least one visible listing.ids (array)
get_api_2_1_marketplace_consumer_search_listingsSearch published listings in the Databricks Marketplace that the consumer has access to. This query supports a variety of different search parameters and performs fuzzy matching.query (string) required
is_free (boolean)
is_private_exchange (boolean)
provider_ids (array)
categories (array)
assets (array)
page_token (string)
page_size (integer)
get_api_2_1_tag_policiesLists the tag policies for all governed tags in the account.page_size (integer)
page_token (string)
post_api_2_1_tag_policiesCreates a new tag policy, making the associated tag key governed.data: {
. description (string)
. id (string)
. tag_key (string)
. values (array)
} (object) required
get_api_2_1_tag_policies_by_tag_keyGets a single tag policy by its associated governed tag's key.tag_key (string)
patch_api_2_1_tag_policies_by_tag_keyUpdates an existing tag policy for a single governed tag.tag_key (string)
update_mask (string) required
data: {
. description (string)
. id (string)
. tag_key (string)
. values (array)
} (object) required
delete_api_2_1_tag_policies_by_tag_keyDeletes a tag policy by its associated governed tag's key, leaving that tag key ungoverned.tag_key (string)
get_api_2_1_unity_catalog_artifact_allowlists_by_artifact_typeGet the artifact allowlist of a certain artifact type. The caller must be a metastore admin or have the MANAGE ALLOWLIST privilege on the metastore.artifact_type (string)
put_api_2_1_unity_catalog_artifact_allowlists_by_artifact_typeSet the artifact allowlist of a certain artifact type. The whole artifact allowlist is replaced with the new allowlist. The caller must be a metastore admin or have the MANAGE ALLOWLIST privilege on the metastore.artifact_type (string)
data: {
. artifact_matchers (array)
. created_at (integer)
. created_by (string)
. metastore_id (string)
} (object) required
get_api_2_1_unity_catalog_bindings_by_securable_type_by_securable_nameGets workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable.securable_type (string)
securable_name (string)
max_results (integer)
page_token (string)
patch_api_2_1_unity_catalog_bindings_by_securable_type_by_securable_nameUpdates workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable.securable_type (string)
securable_name (string)
data: {
. add (array)
. remove (array)
} (object) required
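The bindings patch takes parallel add and remove arrays. A sketch of building that payload from (workspace, binding type) pairs; the helper name is hypothetical, and the "workspace_id"/"binding_type" entry fields follow the Unity Catalog workspace-bindings payload, so they are worth double-checking against the API reference before use:

```python
def build_bindings_patch(add=(), remove=()):
    # Each entry pairs a workspace id with a binding type such as
    # "BINDING_TYPE_READ_ONLY" or "BINDING_TYPE_READ_WRITE".
    def entries(pairs):
        return [{"workspace_id": w, "binding_type": b} for w, b in pairs]
    return {"data": {"add": entries(add), "remove": entries(remove)}}
```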
get_api_2_1_unity_catalog_catalogsGets an array of catalogs in the metastore. If the caller is the metastore admin, all catalogs will be retrieved. Otherwise, only catalogs owned by the caller or for which the caller has the USE_CATALOG privilege will be retrieved. There is no guarantee of a specific ordering of the elements in the array.include_browse (boolean)
max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_catalogsCreates a new catalog instance in the parent metastore if the caller is a metastore admin or has the CREATE_CATALOG privilege.data: {
. comment (string)
. connection_name (string)
. name (string)
. options (object)
. properties (object)
. provider_name (string)
. share_name (string)
. storage_root (string)
} (object) required
get_api_2_1_unity_catalog_catalogs_by_nameGets the specified catalog in a metastore. The caller must be a metastore admin, the owner of the catalog, or a user that has the USE_CATALOG privilege set for their account.name (string)
include_browse (boolean)
patch_api_2_1_unity_catalog_catalogs_by_nameUpdates the catalog that matches the supplied name. The caller must be either the owner of the catalog, or a metastore admin when changing the owner field of the catalog.name (string)
data: {
. comment (string)
. enable_predictive_optimization
. isolation_mode
. new_name (string)
. options (object)
. owner (string)
. properties (object)
} (object) required
delete_api_2_1_unity_catalog_catalogs_by_nameDeletes the catalog that matches the supplied name. The caller must be a metastore admin or the owner of the catalog.name (string)
force (boolean)
get_api_2_1_unity_catalog_connectionsList all connections.max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_connectionsCreates a new connection to an external data source. It allows users to specify connection details and configurations for interaction with the external server. Supported data sources for connections are listed here: https://docs.databricks.com/aws/en/query-federation/supported-data-sources.data: {
. comment (string)
. connection_type
. name (string)
. options (object)
. properties (object)
. read_only (boolean)
} (object) required
get_api_2_1_unity_catalog_connections_by_nameGets a connection from its name.name (string)
patch_api_2_1_unity_catalog_connections_by_nameUpdates the connection that matches the supplied name.name (string)
data: {
. new_name (string)
. options (object)
. owner (string)
} (object) required
delete_api_2_1_unity_catalog_connections_by_nameDeletes the connection that matches the supplied name.name (string)
post_api_2_1_unity_catalog_constraintsCreates a new table constraint. For the table constraint creation to succeed, the user must satisfy both of these conditions: - the user must have the USE_CATALOG privilege on the table's parent catalog, the USE_SCHEMA privilege on the table's parent schema, and be the owner of the table. - if the new constraint is a ForeignKeyConstraint, the user must have the USE_CATALOG privilege on the referenced parent table's catalog, the USE_SCHEMA privilege on the referenced parent table's schema, and be the owner of the referenced parent table.data: {
. constraint
. full_name_arg (string)
} (object) required
delete_api_2_1_unity_catalog_constraints_by_full_nameDeletes a table constraint. For the table constraint deletion to succeed, the user must satisfy both of these conditions: - the user must have the USE_CATALOG privilege on the table's parent catalog, the USE_SCHEMA privilege on the table's parent schema, and be the owner of the table. - if the cascade argument is true, the user must have the following permissions on all of the child tables: the USE_CATALOG privilege on the table's catalog, the USE_SCHEMA privilege on the table's schema, and be the owner of the table.full_name (string)
constraint_name (string) required
cascade (boolean) required
get_api_2_1_unity_catalog_credentialsGets an array of credentials as CredentialInfo objects. The array is limited to only the credentials that the caller has permission to access. If the caller is a metastore admin, retrieval of credentials is unrestricted. There is no guarantee of a specific ordering of the elements in the array.max_results (integer)
page_token (string)
purpose (string)
post_api_2_1_unity_catalog_credentialsCreates a new credential. The type of credential to be created is determined by the purpose field, which should be either SERVICE or STORAGE. The caller must be a metastore admin or have the metastore privilege CREATE_STORAGE_CREDENTIAL for storage credentials, or CREATE_SERVICE_CREDENTIAL for service credentials. The request object must contain an AwsIamRole with the arn of the IAM role. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy.data: {
. aws_iam_role
. comment (string)
. name (string)
. purpose
. read_only (boolean)
. skip_validation (boolean)
} (object) required
get_api_2_1_unity_catalog_credentials_by_name_argGets a service or storage credential from the metastore. The caller must be a metastore admin, the owner of the credential, or have any permission on the credential.name_arg (string)
patch_api_2_1_unity_catalog_credentials_by_name_argUpdates a service or storage credential on the metastore. The caller must be the owner of the credential or a metastore admin or have the MANAGE permission. If the caller is a metastore admin, only the owner field can be changed. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy. To enable this credential, the external ID specified in the external_id field of the response must be set in the role's trust policy.name_arg (string)
data: {
. aws_iam_role
. comment (string)
. force (boolean)
. isolation_mode
. new_name (string)
. owner (string)
. read_only (boolean)
. skip_validation (boolean)
} (object) required
delete_api_2_1_unity_catalog_credentials_by_name_argDeletes a service or storage credential from the metastore. The caller must be an owner of the credential.name_arg (string)
force (boolean)
get_api_2_1_unity_catalog_current_metastore_assignmentGets the metastore assignment for the workspace being accessed.No parameters
get_api_2_1_unity_catalog_effective_permissions_by_securable_type_by_full_nameGets the effective permissions for a securable. Includes inherited permissions from any parent securables.securable_type (string)
full_name (string)
principal (string)
max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_entity_tag_assignmentsCreates a tag assignment for a Unity Catalog entity. To add tags to Unity Catalog entities, you must own the entity or have the following privileges: - APPLY TAG on the entity - USE SCHEMA on the entity's parent schema - USE CATALOG on the entity's parent catalog To add a governed tag to Unity Catalog entities, you must also have the ASSIGN or MANAGE permission on the tag policy. See Manage tag policy permissions (https://docs.databricks.com/aws/en/admin/tag-policies/manage-permissions).data: {
. entity_name (string)
. entity_type (string)
. tag_key (string)
. tag_value (string)
} (object) required
get_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tagsList tag assignments for a Unity Catalog entity.entity_type (string)
entity_name (string)
max_results (integer)
page_token (string)
get_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags_by_tag_keyGets a tag assignment for a Unity Catalog entity by tag key.entity_type (string)
entity_name (string)
tag_key (string)
patch_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags_by_tag_keyUpdates an existing tag assignment for a Unity Catalog entity. To update tags on Unity Catalog entities, you must own the entity or have the following privileges: - APPLY TAG on the entity - USE SCHEMA on the entity's parent schema - USE CATALOG on the entity's parent catalog To update a governed tag on Unity Catalog entities, you must also have the ASSIGN or MANAGE permission on the tag policy. See Manage tag policy permissions (https://docs.databricks.com/aws/en/admin/tag-policies/manage-permissions).entity_type (string)
entity_name (string)
tag_key (string)
update_mask (string) required
data: {
. entity_name (string)
. entity_type (string)
. tag_key (string)
. tag_value (string)
} (object) required
delete_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags_by_tag_keyDeletes a tag assignment for a Unity Catalog entity by its key. To delete tags from Unity Catalog entities, you must own the entity or have the following privileges: - APPLY TAG on the entity - USE_SCHEMA on the entity's parent schema - USE_CATALOG on the entity's parent catalog To delete a governed tag from Unity Catalog entities, you must also have the ASSIGN or MANAGE permission on the tag policy. See Manage tag policy permissions (https://docs.databricks.com/aws/en/admin/tag-policies/manage-permissions).entity_type (string)
entity_name (string)
tag_key (string)
get_api_2_1_unity_catalog_external_locationsGets an array of external locations (ExternalLocationInfo objects) from the metastore. The caller must be a metastore admin, the owner of the external location, or a user that has some privilege on the external location. There is no guarantee of a specific ordering of the elements in the array.include_browse (boolean)
max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_external_locationsCreates a new external location entry in the metastore. The caller must be a metastore admin or have the CREATE_EXTERNAL_LOCATION privilege on both the metastore and the associated storage credential.data: {
. comment (string)
. credential_name (string)
. enable_file_events (boolean)
. encryption_details
. fallback (boolean)
. file_event_queue
. name (string)
. read_only (boolean)
. skip_validation (boolean)
. url (string)
} (object) required
get_api_2_1_unity_catalog_external_locations_by_nameGets an external location from the metastore. The caller must be either a metastore admin, the owner of the external location, or a user that has some privilege on the external location.name (string)
include_browse (boolean)
patch_api_2_1_unity_catalog_external_locations_by_nameUpdates an external location in the metastore. The caller must be the owner of the external location, or be a metastore admin. In the second case, the admin can only update the name of the external location.name (string)
data: {
. comment (string)
. credential_name (string)
. enable_file_events (boolean)
. encryption_details
. fallback (boolean)
. file_event_queue
. force (boolean)
. isolation_mode
. new_name (string)
. owner (string)
. read_only (boolean)
. skip_validation (boolean)
. url (string)
} (object) required
delete_api_2_1_unity_catalog_external_locations_by_nameDeletes the specified external location from the metastore. The caller must be the owner of the external location.name (string)
force (boolean)
get_api_2_1_unity_catalog_functionsList functions within the specified parent catalog and schema. If the user is a metastore admin, all functions are returned in the output list. Otherwise, the user must have the USE_CATALOG privilege on the catalog and the USE_SCHEMA privilege on the schema, and the output list contains only functions for which either the user has the EXECUTE privilege or the user is the owner. There is no guarantee of a specific ordering of the elements in the array.catalog_name (string) required
schema_name (string) required
max_results (integer)
page_token (string)
include_browse (boolean)
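The list actions above share a common paging contract: required scope parameters plus optional `max_results` and `page_token`. A sketch of building paged params for `get_api_2_1_unity_catalog_functions`; the catalog, schema, and token values are illustrative:

```python
# Build paging params for get_api_2_1_unity_catalog_functions.
# catalog_name and schema_name are required; the rest control paging.
def build_page_params(page_token=None):
    params = {
        "catalog_name": "main",     # illustrative catalog
        "schema_name": "default",   # illustrative schema
        "max_results": 50,
    }
    if page_token:
        # Token taken from the previous response's next page token.
        params["page_token"] = page_token
    return params

first_page = build_page_params()
next_page = build_page_params(page_token="abc123")  # placeholder token

# from abstra.connectors import run_connection_action
# result = run_connection_action(
#     connection_name="your_connection_name",
#     action_name="get_api_2_1_unity_catalog_functions",
#     params=first_page,
# )
```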
post_api_2_1_unity_catalog_functionsWARNING: This API is experimental and will change in future versions. Creates a new function. The user must have the following permissions in order for the function to be created: - USE_CATALOG on the function's parent catalog - USE_SCHEMA and CREATE_FUNCTION on the function's parent schemadata: {
. function_info
} (object) required
get_api_2_1_unity_catalog_functions_by_nameGets a function from within a parent catalog and schema. For the fetch to succeed, the user must satisfy one of the following requirements: - Is a metastore admin - Is an owner of the function's parent catalog - Have the USE_CATALOG privilege on the function's parent catalog and be the owner of the function - Have the USE_CATALOG privilege on the function's parent catalog, the USE_SCHEMA privilege on the function's parent schema, and the EXECUTE privilege on the function itselfname (string)
include_browse (boolean)
patch_api_2_1_unity_catalog_functions_by_nameUpdates the function that matches the supplied name. Currently, only the owner of the function can be updated. If the user is not a metastore admin, the user must be a member of the group that is the new function owner. The caller must satisfy one of the following: - Is a metastore admin - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and has the USE_CATALOG privilege on its parent catalog - Is the owner of the function itself and has the USE_CATALOG privilege on its parent catalog as well as the USE_SCHEMA privilege on its parent schemaname (string)
data: {
. owner (string)
} (object) required
delete_api_2_1_unity_catalog_functions_by_nameDeletes the function that matches the supplied name. For the deletion to succeed, the user must satisfy one of the following conditions: - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and have the USE_CATALOG privilege on its parent catalog - Is the owner of the function itself and have both the USE_CATALOG privilege on its parent catalog and the USE_SCHEMA privilege on its parent schemaname (string)
force (boolean)
get_api_2_1_unity_catalog_metastore_summaryGets information about a metastore. This summary includes the storage credential, the cloud vendor, the cloud region, and the global metastore ID.No parameters
get_api_2_1_unity_catalog_metastoresGets an array of the available metastores as MetastoreInfo objects. The caller must be an admin to retrieve this info. There is no guarantee of a specific ordering of the elements in the array.max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_metastoresCreates a new metastore based on a provided name and optional storage root path. By default if the owner field is not set, the owner of the new metastore is the user calling the createMetastore API. If the owner field is set to the empty string '', the ownership is assigned to the System User instead.data: {
. name (string)
. region (string)
. storage_root (string)
} (object) required
get_api_2_1_unity_catalog_metastores_by_idGets a metastore that matches the supplied ID. The caller must be a metastore admin to retrieve this info.id (string)
patch_api_2_1_unity_catalog_metastores_by_idUpdates information for a specific metastore. The caller must be a metastore admin. If the owner field is set to the empty string '', the ownership is updated to the System User.id (string)
data: {
. delta_sharing_organization_name (string)
. delta_sharing_recipient_token_lifetime_in_seconds (integer)
. delta_sharing_scope
. new_name (string)
. owner (string)
. privilege_model_version (string)
. storage_root_credential_id (string)
} (object) required
delete_api_2_1_unity_catalog_metastores_by_idDeletes a metastore. The caller must be a metastore admin.id (string)
force (boolean)
get_api_2_1_unity_catalog_metastores_by_metastore_id_systemschemasGets an array of system schemas for a metastore. The caller must be an account admin or a metastore admin.metastore_id (string)
max_results (integer)
page_token (string)
put_api_2_1_unity_catalog_metastores_by_metastore_id_systemschemas_by_schema_nameEnables the system schema and adds it to the system catalog. The caller must be an account admin or a metastore admin.metastore_id (string)
schema_name (string)
data: {
. catalog_name (string)
} (object) required
delete_api_2_1_unity_catalog_metastores_by_metastore_id_systemschemas_by_schema_nameDisables the system schema and removes it from the system catalog. The caller must be an account admin or a metastore admin.metastore_id (string)
schema_name (string)
get_api_2_1_unity_catalog_modelsList registered models. You can list registered models under a particular schema, or list all registered models in the current metastore. The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the registered models. A regular user needs to be the owner or have the EXECUTE privilege on the registered model to receive the registered models in the response. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schemacatalog_name (string)
schema_name (string)
max_results (integer)
page_token (string)
include_browse (boolean)
post_api_2_1_unity_catalog_modelsCreates a new registered model in Unity Catalog. File storage for model versions in the registered model will be located in the default location which is specified by the parent schema, or the parent catalog, or the Metastore. For registered model creation to succeed, the user must satisfy the following conditions: - The caller must be a metastore admin, or be the owner of the parent catalog and schema, or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schemadata: {
. catalog_name (string)
. comment (string)
. name (string)
. schema_name (string)
. storage_location (string)
} (object) required
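Unity Catalog identifies a registered model by a three-level name (`catalog.schema.model`), which is what the `*_by_full_name` actions below expect. A sketch of the create payload with illustrative names, showing how the full name is derived:

```python
# Params for post_api_2_1_unity_catalog_models; names are illustrative.
params = {
    "data": {
        "catalog_name": "main",
        "schema_name": "ml_models",
        "name": "churn_classifier",
        "comment": "Customer churn model",
    }
}

# Three-level full name used by the *_by_full_name actions.
full_name = "{catalog_name}.{schema_name}.{name}".format(**params["data"])
```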
get_api_2_1_unity_catalog_models_by_full_nameGet a registered model. The caller must be a metastore admin or an owner of or have the EXECUTE privilege on the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
include_browse (boolean)
include_aliases (boolean)
patch_api_2_1_unity_catalog_models_by_full_nameUpdates the specified registered model. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. Currently only the name, the owner or the comment of the registered model can be updated.full_name (string)
data: {
. comment (string)
. new_name (string)
. owner (string)
} (object) required
delete_api_2_1_unity_catalog_models_by_full_nameDeletes a registered model and all its model versions from the specified parent catalog and schema. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
get_api_2_1_unity_catalog_models_by_full_name_aliases_by_aliasGet a model version by alias. The caller must be a metastore admin or an owner of or have the EXECUTE privilege on the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
alias (string)
include_aliases (boolean)
put_api_2_1_unity_catalog_models_by_full_name_aliases_by_aliasSet an alias on the specified registered model. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
alias (string)
data: {
. alias (string)
. full_name (string)
. version_num (integer)
} (object) required
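For the set-alias action above, the full name and alias appear both as path parameters and again inside the request body alongside `version_num`. A sketch with placeholder values:

```python
# Params for put_api_2_1_unity_catalog_models_by_full_name_aliases_by_alias.
# full_name, alias, and version_num are illustrative placeholders.
full_name = "main.ml_models.churn_classifier"
alias = "champion"

params = {
    "full_name": full_name,   # path parameter
    "alias": alias,           # path parameter
    "data": {
        "full_name": full_name,
        "alias": alias,
        "version_num": 3,     # model version the alias should point to
    },
}
```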
delete_api_2_1_unity_catalog_models_by_full_name_aliases_by_aliasDeletes a registered model alias. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
alias (string)
get_api_2_1_unity_catalog_models_by_full_name_versionsList model versions. You can list model versions under a particular schema, or list all model versions in the current metastore. The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the model versions. A regular user needs to be the owner or have the EXECUTE privilege on the parent registered model to receive the model versions in the response. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schemafull_name (string)
max_results (integer)
page_token (string)
include_browse (boolean)
get_api_2_1_unity_catalog_models_by_full_name_versions_by_versionGet a model version. The caller must be a metastore admin or an owner of or have the EXECUTE privilege on the parent registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
version (string)
include_browse (boolean)
include_aliases (boolean)
patch_api_2_1_unity_catalog_models_by_full_name_versions_by_versionUpdates the specified model version. The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. Currently only the comment of the model version can be updated.full_name (string)
version (string)
data: {
. comment (string)
} (object) required
delete_api_2_1_unity_catalog_models_by_full_name_versions_by_versionDeletes a model version from the specified registered model. Any aliases assigned to the model version will also be deleted. The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
version (string)
get_api_2_1_unity_catalog_permissions_by_securable_type_by_full_nameGets the permissions for a securable. Does not include inherited permissions.securable_type (string)
full_name (string)
principal (string)
max_results (integer)
page_token (string)
patch_api_2_1_unity_catalog_permissions_by_securable_type_by_full_nameUpdates the permissions for a securable.securable_type (string)
full_name (string)
data: {
. changes (array)
} (object) required
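For the permissions update above, each entry in `changes` is assumed to follow the Unity Catalog permissions-change shape: a principal plus lists of privileges to add and remove. A sketch with placeholder securable and principal names:

```python
# Params for patch_api_2_1_unity_catalog_permissions_by_securable_type_by_full_name.
# Securable, principal, and privilege values are illustrative placeholders.
params = {
    "securable_type": "table",
    "full_name": "main.default.orders",
    "data": {
        "changes": [
            {
                "principal": "data-analysts",  # hypothetical group name
                "add": ["SELECT"],             # privileges to grant
                "remove": [],                  # privileges to revoke
            }
        ]
    },
}
```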
post_api_2_1_unity_catalog_policiesCreates a new policy on a securable. The new policy applies to the securable and all its descendants.data: {
. column_mask
. comment (string)
. created_at (integer)
. created_by (string)
. except_principals (array)
. for_securable_type
. id (string)
. match_columns (array)
. name (string)
. on_securable_fullname (string)
. on_securable_type
. policy_type
. row_filter
. to_principals (array)
. updated_at (integer)
. updated_by (string)
. when_condition (string)
} (object) required
get_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullnameList all policies defined on a securable. Optionally, the list can include inherited policies defined on the securable's parent schema or catalog.on_securable_type (string)
on_securable_fullname (string)
include_inherited (boolean)
max_results (integer)
page_token (string)
get_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname_by_nameGet the policy definition on a securableon_securable_type (string)
on_securable_fullname (string)
name (string)
patch_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname_by_nameUpdate an ABAC policy on a securable.on_securable_type (string)
on_securable_fullname (string)
name (string)
update_mask (string)
data: {
. column_mask
. comment (string)
. created_at (integer)
. created_by (string)
. except_principals (array)
. for_securable_type
. id (string)
. match_columns (array)
. name (string)
. on_securable_fullname (string)
. on_securable_type
. policy_type
. row_filter
. to_principals (array)
. updated_at (integer)
. updated_by (string)
. when_condition (string)
} (object) required
delete_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname_by_nameDelete an ABAC policy defined on a securable.on_securable_type (string)
on_securable_fullname (string)
name (string)
get_api_2_1_unity_catalog_providersGets an array of available authentication providers. The caller must either be a metastore admin or the owner of the providers. Providers not owned by the caller are not included in the response. There is no guarantee of a specific ordering of the elements in the array.data_provider_global_metastore_id (string)
max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_providersCreates a new authentication provider minimally based on a name and authentication type. The caller must be an admin on the metastore.data: {
. authentication_type
. comment (string)
. name (string)
. recipient_profile_str (string)
} (object) required
get_api_2_1_unity_catalog_providers_by_nameGets a specific authentication provider. The caller must supply the name of the provider, and must either be a metastore admin or the owner of the provider.name (string)
patch_api_2_1_unity_catalog_providers_by_nameUpdates the information for an authentication provider, if the caller is a metastore admin or is the owner of the provider. If the update changes the provider name, the caller must be both a metastore admin and the owner of the provider.name (string)
data: {
. comment (string)
. new_name (string)
. owner (string)
. recipient_profile_str (string)
} (object) required
delete_api_2_1_unity_catalog_providers_by_nameDeletes an authentication provider, if the caller is a metastore admin or is the owner of the provider.name (string)
get_api_2_1_unity_catalog_providers_by_name_sharesGets an array of a specified provider's shares within the metastore where: the caller is a metastore admin, or the caller is the owner.name (string)
max_results (integer)
page_token (string)
get_api_2_1_unity_catalog_public_data_sharing_activation_by_activation_urlRetrieves an access token using an activation URL. This is a public API that requires no authentication.activation_url (string)
get_api_2_1_unity_catalog_public_data_sharing_activation_info_by_activation_urlGets an activation URL for a share.activation_url (string)
get_api_2_1_unity_catalog_recipientsGets an array of all share recipients within the current metastore where: the caller is a metastore admin, or the caller is the owner. There is no guarantee of a specific ordering of the elements in the array.data_recipient_global_metastore_id (string)
max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_recipientsCreates a new recipient with the delta sharing authentication type in the metastore. The caller must be a metastore admin or have the CREATE_RECIPIENT privilege on the metastore.data: {
. authentication_type
. comment (string)
. data_recipient_global_metastore_id (string)
. expiration_time (integer)
. ip_access_list
. name (string)
. owner (string)
. properties_kvpairs
. sharing_code (string)
} (object) required
get_api_2_1_unity_catalog_recipients_by_nameGets a share recipient from the metastore if the caller is the owner of the share recipient or a metastore adminname (string)
patch_api_2_1_unity_catalog_recipients_by_nameUpdates an existing recipient in the metastore. The caller must be a metastore admin or the owner of the recipient. If the recipient name will be updated, the user must be both a metastore admin and the owner of the recipient.name (string)
data: {
. comment (string)
. expiration_time (integer)
. ip_access_list
. new_name (string)
. owner (string)
. properties_kvpairs
} (object) required
delete_api_2_1_unity_catalog_recipients_by_nameDeletes the specified recipient from the metastore. The caller must be the owner of the recipient.name (string)
post_api_2_1_unity_catalog_recipients_by_name_rotate_tokenRefreshes the specified recipient's delta sharing authentication token with the provided token info. The caller must be the owner of the recipient.name (string)
data: {
. existing_token_expire_in_seconds (integer)
} (object) required
get_api_2_1_unity_catalog_recipients_by_name_share_permissionsGets the share permissions for the specified Recipient. The caller must be a metastore admin or the owner of the Recipient.name (string)
max_results (integer)
page_token (string)
get_api_2_1_unity_catalog_resource_quotas_all_resource_quotasListQuotas returns all quota values under the metastore. There are no SLAs on the freshness of the counts returned. This API does not trigger a refresh of quota counts.max_results (integer)
page_token (string)
get_api_2_1_unity_catalog_resource_quotas_by_parent_securable_type_by_parent_full_name_by_quota_nameThe GetQuota API returns usage information for a single resource quota, defined as a child-parent pair. This API also refreshes the quota count if it is out of date. Refreshes are triggered asynchronously. The updated count might not be returned in the first call.parent_securable_type (string)
parent_full_name (string)
quota_name (string)
get_api_2_1_unity_catalog_schemasGets an array of schemas for a catalog in the metastore. If the caller is the metastore admin or the owner of the parent catalog, all schemas for the catalog will be retrieved. Otherwise, only schemas owned by the caller or for which the caller has the USE_SCHEMA privilege will be retrieved. There is no guarantee of a specific ordering of the elements in the array.catalog_name (string) required
max_results (integer)
page_token (string)
include_browse (boolean)
post_api_2_1_unity_catalog_schemasCreates a new schema for a catalog in the metastore. The caller must be a metastore admin, or have the CREATE_SCHEMA privilege in the parent catalog.data: {
. catalog_name (string)
. comment (string)
. name (string)
. properties (object)
. storage_root (string)
} (object) required
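A minimal sketch of the create-schema payload, matching the fields listed above; the catalog name, schema name, and property values are illustrative placeholders:

```python
# Params for post_api_2_1_unity_catalog_schemas; values are placeholders.
params = {
    "data": {
        "catalog_name": "main",               # parent catalog
        "name": "bronze",                     # new schema name
        "comment": "Raw ingested data",
        "properties": {"team": "data-eng"},   # free-form key/value properties
    }
}

# from abstra.connectors import run_connection_action
# result = run_connection_action(
#     connection_name="your_connection_name",
#     action_name="post_api_2_1_unity_catalog_schemas",
#     params=params,
# )
```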
get_api_2_1_unity_catalog_schemas_by_full_nameGets the specified schema within the metastore. The caller must be a metastore admin, the owner of the schema, or a user that has the USE_SCHEMA privilege on the schema.full_name (string)
include_browse (boolean)
patch_api_2_1_unity_catalog_schemas_by_full_nameUpdates a schema for a catalog. The caller must be the owner of the schema or a metastore admin. If the caller is a metastore admin, only the owner field can be changed in the update. If the name field must be updated, the caller must be a metastore admin or have the CREATE_SCHEMA privilege on the parent catalog.full_name (string)
data: {
. comment (string)
. enable_predictive_optimization
. new_name (string)
. owner (string)
. properties (object)
} (object) required
delete_api_2_1_unity_catalog_schemas_by_full_nameDeletes the specified schema from the parent catalog. The caller must be the owner of the schema or an owner of the parent catalog.full_name (string)
force (boolean)
get_api_2_1_unity_catalog_sharesGets an array of data object shares from the metastore. The caller must be a metastore admin or the owner of the share. There is no guarantee of a specific ordering of the elements in the array.max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_sharesCreates a new share for data objects. Data objects can be added after creation with update. The caller must be a metastore admin or have the CREATE_SHARE privilege on the metastore.data: {
. comment (string)
. name (string)
. storage_root (string)
} (object) required
get_api_2_1_unity_catalog_shares_by_nameGets a data object share from the metastore. The caller must be a metastore admin or the owner of the share.name (string)
include_shared_data (boolean)
patch_api_2_1_unity_catalog_shares_by_nameUpdates the share with the changes and data objects in the request. The caller must be the owner of the share or a metastore admin. When the caller is a metastore admin, only the owner field can be updated. If the share name is changed, updateShare requires that the caller is the owner of the share and has the CREATE_SHARE privilege. If there are notebook files in the share, the storage_root field cannot be updated. For each table that is added through this method, the share owner must also have the SELECT privilege on the tablename (string)
data: {
. comment (string)
. new_name (string)
. owner (string)
. storage_root (string)
. updates (array)
} (object) required
delete_api_2_1_unity_catalog_shares_by_nameDeletes a data object share from the metastore. The caller must be an owner of the share.name (string)
get_api_2_1_unity_catalog_shares_by_name_permissionsGets the permissions for a data share from the metastore. The caller must be a metastore admin or the owner of the share.name (string)
max_results (integer)
page_token (string)
patch_api_2_1_unity_catalog_shares_by_name_permissionsUpdates the permissions for a data share in the metastore. The caller must be a metastore admin or an owner of the share. For new recipient grants, the user must also be the recipient owner or a metastore admin. Recipient revocations do not require additional privileges.name (string)
data: {
. changes (array)
. omit_permissions_list (boolean)
} (object) required
get_api_2_1_unity_catalog_storage_credentialsGets an array of storage credentials as StorageCredentialInfo objects. The array is limited to only those storage credentials the caller has permission to access. If the caller is a metastore admin, retrieval of credentials is unrestricted. There is no guarantee of a specific ordering of the elements in the array.max_results (integer)
page_token (string)
post_api_2_1_unity_catalog_storage_credentialsCreates a new storage credential. The caller must be a metastore admin or have the CREATE_STORAGE_CREDENTIAL privilege on the metastore. The request object must contain an AwsIamRole detailing the credentials of an IAM role. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy. To enable this credential, the external ID specified in the external_id field of the response object must be added to the IAM role's trust policydata: {
. aws_iam_role
. comment (string)
. name (string)
. read_only (boolean)
. skip_validation (boolean)
} (object) required
get_api_2_1_unity_catalog_storage_credentials_by_nameGets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have some permission on the storage credential.name (string)
patch_api_2_1_unity_catalog_storage_credentials_by_nameUpdates a storage credential on the metastore. The caller must be the owner of the storage credential or a metastore admin. If the caller is a metastore admin, only the owner field can be changed. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy. To enable this credential, the external ID specified in the external_id field of the response object must be added to the IAM role's trust policyname (string)
data: {
. aws_iam_role
. comment (string)
. force (boolean)
. isolation_mode
. new_name (string)
. owner (string)
. read_only (boolean)
. skip_validation (boolean)
} (object) required
delete_api_2_1_unity_catalog_storage_credentials_by_nameDeletes a storage credential from the metastore. The caller must be an owner of the storage credential.name (string)
force (boolean)
get_api_2_1_unity_catalog_table_summariesGets an array of summaries for tables for a schema and catalog within the metastore. The table summaries returned are either: summaries for tables within the current metastore and parent catalog and schema, when the user is a metastore admin, or: summaries for tables and schemas within the current metastore and parent catalog for which the user has ownership or the SELECT privilege on the table and ownership or USE_SCHEMA privilege on the schema, provided that the user also has ownership or the USE_CATALOG privilege on the parent catalogcatalog_name (string) required
schema_name_pattern (string)
table_name_pattern (string)
max_results (integer)
page_token (string)
include_manifest_capabilities (boolean)
get_api_2_1_unity_catalog_tablesGets an array of all tables for the current metastore under the parent catalog and schema. The caller must be a metastore admin or an owner of or have the SELECT privilege on the table. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. There is no guarantee of a specific ordering of the elements in the array.catalog_name (string) required
schema_name (string) required
max_results (integer)
page_token (string)
omit_columns (boolean)
omit_properties (boolean)
omit_username (boolean)
include_browse (boolean)
include_manifest_capabilities (boolean)
post_api_2_1_unity_catalog_tablesCreates a new table in the specified catalog and schema. To create an external delta table, the caller must have the EXTERNAL_USE_SCHEMA privilege on the parent schema and the EXTERNAL_USE_LOCATION privilege on the external location. These privileges must always be granted explicitly, and cannot be inherited through ownership or ALL_PRIVILEGES. Standard UC permissions needed to create tables still apply: USE_CATALOG on the parent catalog or ownership of the parent catalog, CREATE_TABLE and USE_SCHEMA on the parent schema or ownership of the parent schemadata: {
. catalog_name (string)
. columns (array)
. data_source_format
. name (string)
. properties (object)
. schema_name (string)
. storage_location (string)
. table_type
} (object) required
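A sketch of the create-table payload for an external table. The column entries below assume a ColumnInfo-like shape (name, type_name, position, nullable); consult the Unity Catalog API reference for the exact required column fields. All names and the storage path are illustrative placeholders:

```python
# Params for post_api_2_1_unity_catalog_tables; values are placeholders.
params = {
    "data": {
        "catalog_name": "main",
        "schema_name": "bronze",
        "name": "events_ext",
        "table_type": "EXTERNAL",
        "data_source_format": "DELTA",
        "storage_location": "s3://example-bucket/events",  # hypothetical path
        "columns": [
            # Assumed ColumnInfo-like shape; field set may differ per API version.
            {"name": "event_id", "type_name": "STRING", "position": 0, "nullable": False},
            {"name": "ts", "type_name": "TIMESTAMP", "position": 1, "nullable": True},
        ],
    }
}
```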
get_api_2_1_unity_catalog_tables_by_full_nameGets a table from the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: - Be a metastore admin - Be the owner of the parent catalog - Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog - Have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table.full_name (string)
include_delta_metadata (boolean)
include_browse (boolean)
include_manifest_capabilities (boolean)
delete_api_2_1_unity_catalog_tables_by_full_nameDeletes a table from the specified parent catalog and schema. The caller must be the owner of the parent catalog, have the USE_CATALOG privilege on the parent catalog and be the owner of the parent schema, or be the owner of the table and have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.full_name (string)
get_api_2_1_unity_catalog_tables_by_full_name_existsChecks whether a table exists in the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: - Be a metastore admin - Be the owner of the parent catalog - Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog - Have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table - Have the BROWSE privilege on the parent catalogfull_name (string)
get_api_2_1_unity_catalog_tables_by_table_name_monitorGets a monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema. 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - SELECT privilege on the table. The returned information includes configuration values, as well as information on assets created by the monitor. Some information, e.g., dashboard information, may be filtered out if the caller is in a different workspace than where the monitor was created.table_name (string)
post_api_2_1_unity_catalog_tables_by_table_name_monitorCreates a new monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog, have USE_SCHEMA on the table's parent schema, and have SELECT access on the table 2. have USE_CATALOG on the table's parent catalog, be an owner of the table's parent schema, and have SELECT access on the table. 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Workspace assets, such as the dashboard, will be created in the workspace where this call was made.table_name (string)
data: {
. assets_dir (string)
. baseline_table_name (string)
. custom_metrics (array)
. inference_log
. latest_monitor_failure_msg (string)
. notifications
. output_schema_name (string)
. schedule
. skip_builtin_dashboard (boolean)
. slicing_exprs (array)
. snapshot
. time_series
. warehouse_id (string)
} (object) required
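A minimal sketch of the create-monitor payload, using the snapshot profile (which analyzes the table as-is, with no time axis). The table name, assets directory, and output schema are illustrative placeholders:

```python
# Params for post_api_2_1_unity_catalog_tables_by_table_name_monitor.
# All values are illustrative placeholders.
params = {
    "table_name": "main.bronze.events_ext",
    "data": {
        # Hypothetical workspace directory for monitor assets.
        "assets_dir": "/Workspace/Users/someone@example.com/monitors",
        # Schema where metric tables are written.
        "output_schema_name": "main.monitoring",
        # Snapshot profile: profile the table contents directly.
        "snapshot": {},
    },
}
```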
put_api_2_1_unity_catalog_tables_by_table_name_monitorUpdates a monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Additionally, the call must be made from the workspace where the monitor was created, and the caller must be the original creator of the monitortable_name (string)
data: {
. baseline_table_name (string)
. custom_metrics (array)
. dashboard_id (string)
. inference_log
. latest_monitor_failure_msg (string)
. notifications
. output_schema_name (string)
. schedule
. slicing_exprs (array)
. snapshot
. time_series
} (object) required
delete_api_2_1_unity_catalog_tables_by_table_name_monitorDeletes a monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Additionally, the call must be made from the workspace where the monitor was created. Note that the metric tables and dashboard will not be deleted as part of this call; those assets must be manually cleaned up, if desiredtable_name (string)
get_api_2_1_unity_catalog_tables_by_table_name_monitor_refreshesGets an array containing the history of the most recent refreshes up to 25 for this table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - SELECT privilege on the table. Additionally, the call must be made from the workspace where the monitor was created.table_name (string)
post_api_2_1_unity_catalog_tables_by_table_name_monitor_refreshesQueues a metric refresh on the monitor for the specified table. The refresh will execute in the background. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Additionally, the call must be made from the workspace where the monitor was created.table_name (string)
get_api_2_1_unity_catalog_tables_by_table_name_monitor_refreshes_by_refresh_idGets info about a specific monitor refresh using the given refresh ID. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - SELECT privilege on the table. Additionally, the call must be made from the workspace where the monitor was created.table_name (string)
refresh_id (integer)
post_api_2_1_unity_catalog_temporary_service_credentialsReturns a set of temporary credentials generated using the specified service credential. The caller must be a metastore admin or have the metastore privilege ACCESS on the service credential. The temporary credentials consist of an access key ID, a secret access key, and a security token.data: {
. credential_name (string)
} (object) required
post_api_2_1_unity_catalog_validate_credentialsValidates a credential. For service credentials (purpose is SERVICE), either the credential_name or the cloud-specific credential must be provided. For storage credentials (purpose is STORAGE), at least one of external_location_name and url needs to be provided. If only one of them is provided, it will be used for validation. If both are provided, the url will be used for validation, and external_location_name will be ignored when checking overlapping urls.data: {
. aws_iam_role
. credential_name (string)
. external_location_name (string)
. purpose
. read_only (boolean)
. url (string)
} (object) required
post_api_2_1_unity_catalog_validate_storage_credentialsValidates a storage credential. At least one of external_location_name and url needs to be provided. If only one of them is provided, it will be used for validation. If both are provided, the url will be used for validation, and external_location_name will be ignored when checking overlapping urls. Either the storage_credential_name or the cloud-specific credential must be provided. The caller must be a metastore admin, the storage credential owner, or have the CREATE_EXTERNAL_LOCATION privilege on the metastore and the storage credential.data: {
. aws_iam_role
. external_location_name (string)
. read_only (boolean)
. storage_credential_name (string)
. url (string)
} (object) required
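A minimal sketch of a storage-credential validation payload for the action above; the credential name and bucket URL are placeholder assumptions:

```python
# Hypothetical params for post_api_2_1_unity_catalog_validate_storage_credentials.
params = {
    "data": {
        "storage_credential_name": "my_s3_credential",  # placeholder name
        "url": "s3://example-bucket/landing/",          # placeholder location
        "read_only": True,
    }
}
# Per the description above, if both url and external_location_name were set,
# url would be used for validation.
```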
get_api_2_1_unity_catalog_volumesGets an array of volumes for the current metastore under the parent catalog and schema. The returned volumes are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the volumes. A regular user needs to be the owner or have the READ VOLUME privilege on the volume to receive the volumes in the response. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.catalog_name (string) required
schema_name (string) required
max_results (integer)
page_token (string)
include_browse (boolean)
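For query-style actions like the volume listing above, the parameters appear to be passed flat rather than nested under data. A hedged sketch with placeholder catalog and schema names:

```python
# Hypothetical params for get_api_2_1_unity_catalog_volumes (flat query parameters).
params = {
    "catalog_name": "main",   # required
    "schema_name": "sales",   # required
    "max_results": 100,       # optional paging control
}
# A next-page token returned by the call can be passed back as page_token
# to fetch subsequent pages.
```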
post_api_2_1_unity_catalog_volumesCreates a new volume. The user could create either an external volume or a managed volume. An external volume will be created in the specified external location, while a managed volume will be located in the default location which is specified by the parent schema, or the parent catalog, or the Metastore. For the volume creation to succeed, the user must satisfy the following conditions: - The caller must be a metastore admin, or be the owner of the parent catalog and schema, or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.data: {
. catalog_name (string)
. comment (string)
. name (string)
. schema_name (string)
. storage_location (string)
. volume_type
} (object) required
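A hedged sketch of a managed-volume creation payload for the action above; the catalog, schema, and volume names are invented, and the accepted volume_type values are an assumption:

```python
# Hypothetical params for post_api_2_1_unity_catalog_volumes.
# volume_type is assumed to accept "MANAGED" or "EXTERNAL".
params = {
    "data": {
        "catalog_name": "main",
        "schema_name": "sales",
        "name": "raw_files",
        "volume_type": "MANAGED",  # managed volumes need no storage_location
        "comment": "Landing area for raw exports",
    }
}
```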
get_api_2_1_unity_catalog_volumes_by_nameGets a volume from the metastore for a specific catalog and schema. The caller must be a metastore admin or an owner of or have the READ VOLUME privilege on the volume. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.name (string)
include_browse (boolean)
patch_api_2_1_unity_catalog_volumes_by_nameUpdates the specified volume under the specified parent catalog and schema. The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. Currently, only the name, the owner, or the comment of the volume can be updated.name (string)
data: {
. comment (string)
. new_name (string)
. owner (string)
} (object) required
delete_api_2_1_unity_catalog_volumes_by_nameDeletes a volume from the specified parent catalog and schema. The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema.name (string)
get_api_2_1_unity_catalog_workspace_bindings_catalogs_by_nameGets workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog.name (string)
patch_api_2_1_unity_catalog_workspace_bindings_catalogs_by_nameUpdates workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog.name (string)
data: {
. assign_workspaces (array)
. unassign_workspaces (array)
} (object) required
put_api_2_1_unity_catalog_workspaces_by_workspace_id_metastoreCreates a new metastore assignment. If an assignment for the same workspace_id exists, it will be overwritten by the new metastore_id and default_catalog_name. The caller must be an account admin.workspace_id (integer)
data: {
. default_catalog_name (string)
. metastore_id (string)
} (object) required
patch_api_2_1_unity_catalog_workspaces_by_workspace_id_metastoreUpdates a metastore assignment. This operation can be used to update metastore_id or default_catalog_name for a specified Workspace, if the Workspace is already assigned a metastore. The caller must be an account admin to update metastore_id; otherwise, the caller can be a Workspace admin.workspace_id (integer)
data: {
. default_catalog_name (string)
. metastore_id (string)
} (object) required
delete_api_2_1_unity_catalog_workspaces_by_workspace_id_metastoreDeletes a metastore assignment. The caller must be an account administrator.workspace_id (integer)
metastore_id (string) required
post_api_2_2_jobs_createCreate a new job.data: {
. access_control_list (array)
. budget_policy_id (string)
. continuous
. deployment
. description (string)
. edit_mode
. email_notifications
. environments (array)
. format
. git_source
. health
. job_clusters (array)
. max_concurrent_runs (integer)
. name (string)
. notification_settings
. parameters (array)
. performance_target
. queue
. run_as
. schedule
. tags (object)
. tasks (array)
. timeout_seconds (integer)
. trigger
. webhook_notifications
} (object) required
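Most fields of the job-create payload above are optional. A hedged minimal sketch with a single notebook task — the task shape is assumed from the Jobs API, and all paths and names are invented:

```python
# Hypothetical params for post_api_2_2_jobs_create; values are illustrative.
params = {
    "data": {
        "name": "nightly-etl",
        "max_concurrent_runs": 1,
        "timeout_seconds": 3600,
        "tasks": [
            {
                "task_key": "ingest",  # task shape assumed, not confirmed here
                "notebook_task": {"notebook_path": "/Repos/etl/ingest"},
            }
        ],
    }
}
```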
post_api_2_2_jobs_deleteDeletes a job.data: {
. job_id (integer)
} (object) required
get_api_2_2_jobs_getRetrieves the details for a single job. Large arrays in the results will be paginated when they exceed 100 elements. A request for a single job will return all properties for that job, and the first 100 elements of array properties tasks, job_clusters, environments and parameters. Use the next_page_token field to check for more results and pass its value as the page_token in subsequent requests. If any array properties have more than 100 elements, additional results will be returned on subsequent pages.job_id (integer) required
page_token (string)
get_api_2_2_jobs_listRetrieves a list of jobs.limit (integer)
expand_tasks (boolean)
name (string)
page_token (string)
post_api_2_2_jobs_resetOverwrite all settings for the given job. Use the Update endpoint (jobs/update) to update job settings partially.data: {
. job_id (integer)
. new_settings
} (object) required
post_api_2_2_jobs_run_nowRun a job and return the run_id of the triggered run.data: {
. idempotency_token (string)
. job_id (integer)
. job_parameters (object)
. only (array)
. performance_target
. pipeline_params
. queue
} (object) required
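A hedged sketch of a run-now payload for the action above; the job ID, token, and parameter names are placeholders:

```python
# Hypothetical params for post_api_2_2_jobs_run_now.
params = {
    "data": {
        "job_id": 123,
        "idempotency_token": "nightly-2024-06-01",  # guards against duplicate triggers
        "job_parameters": {"env": "staging"},
    }
}
# Per the description above, the response carries the run_id of the triggered run.
```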
post_api_2_2_jobs_runs_cancelCancels a job run or a task run. The run is canceled asynchronously, so it may still be running when this request completes.data: {
. run_id (integer)
} (object) required
post_api_2_2_jobs_runs_cancel_allCancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started.data: {
. all_queued_runs (boolean)
. job_id (integer)
} (object) required
post_api_2_2_jobs_runs_deleteDeletes a non-active run. Returns an error if the run is active.data: {
. run_id (integer)
} (object) required
get_api_2_2_jobs_runs_exportExport and retrieve the job run task.run_id (integer) required
views_to_export (string)
get_api_2_2_jobs_runs_getRetrieves the metadata of a run. Large arrays in the results will be paginated when they exceed 100 elements. A request for a single run will return all properties for that run, and the first 100 elements of array properties tasks, job_clusters, job_parameters and repair_history. Use the next_page_token field to check for more results and pass its value as the page_token in subsequent requests. If any array properties have more than 100 elements, additional results will be returned on subsequent pages.run_id (integer) required
include_history (boolean)
include_resolved_values (boolean)
page_token (string)
get_api_2_2_jobs_runs_get_outputRetrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit call, you can use this endpoint to retrieve that value. Databricks restricts this API to returning the first 5 MB of the output. To return a larger result, you can store job results in a cloud storage service. This endpoint validates that the run_id parameter is valid and returns an HTTP status code 400 if the run_id parameter is invalid. Runs are automatically removed after 60 days.run_id (integer) required
get_api_2_2_jobs_runs_listList runs in descending order by start time.job_id (integer)
active_only (boolean)
completed_only (boolean)
limit (integer)
run_type (string)
expand_tasks (boolean)
start_time_from (integer)
start_time_to (integer)
page_token (string)
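The runs-list action above takes flat query parameters. A hedged sketch with placeholder values; treating the start-time bounds as epoch milliseconds is an assumption:

```python
# Hypothetical params for get_api_2_2_jobs_runs_list (flat query parameters).
params = {
    "job_id": 123,
    "active_only": True,
    "limit": 25,
    "start_time_from": 1717200000000,  # epoch millis (assumed unit)
}
```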
post_api_2_2_jobs_runs_repairRe-run one or more tasks. Tasks are re-run as part of the original job run. They use the current job and task settings, and can be viewed in the history for the original job run.data: {
. job_parameters (object)
. latest_repair_id (integer)
. performance_target
. pipeline_params
. rerun_all_failed_tasks (boolean)
. rerun_dependent_tasks (boolean)
. rerun_tasks (array)
. run_id (integer)
} (object) required
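A hedged sketch of a repair payload for the action above, re-running one failed task and its dependents within the original run; the run ID and task key are invented:

```python
# Hypothetical params for post_api_2_2_jobs_runs_repair.
params = {
    "data": {
        "run_id": 456,
        "rerun_tasks": ["ingest"],      # task keys are placeholders
        "rerun_dependent_tasks": True,  # also re-run downstream tasks
    }
}
```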
post_api_2_2_jobs_runs_submitSubmit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Runs submitted using this endpoint don’t display in the UI. Use the jobs/runs/get API to check the run state after the job is submitted.data: {
. access_control_list (array)
. budget_policy_id (string)
. email_notifications
. environments (array)
. git_source
. health
. idempotency_token (string)
. notification_settings
. queue
. run_as
. run_name (string)
. tasks (array)
. timeout_seconds (integer)
. webhook_notifications
} (object) required
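A hedged sketch of a one-time submit payload for the action above — no job is created, and the task shape is assumed from the Jobs API; all names are invented:

```python
# Hypothetical params for post_api_2_2_jobs_runs_submit.
params = {
    "data": {
        "run_name": "adhoc-backfill",
        "idempotency_token": "backfill-2024-06-01",
        "tasks": [
            {
                "task_key": "backfill",
                "notebook_task": {"notebook_path": "/Repos/etl/backfill"},
            }
        ],
    }
}
# Since submitted runs don't display in the UI, poll jobs/runs/get with the
# returned run_id to check state.
```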
post_api_2_2_jobs_updateAdd, update, or remove specific settings of an existing job. Use the Reset endpoint (jobs/reset) to overwrite all job settings.data: {
. fields_to_remove (array)
. job_id (integer)
. new_settings
} (object) required
post_api_3_0_mlflow_tracesCreate a new trace within an experiment. A trace is a collection of spans that each represent individual operations that a model performed while processing a request. This can be done in two ways: 1. Start a trace and then end it later. In this case do not set status and execution_duration. The trace will be set to status = IN_PROGRESS and can then be ended with a call to the 'End a trace' API. 2. Create the trace after it has already completed. In this case set trace.trace_info.status and execution_duration.data: {
. trace
} (object) required
post_api_3_0_mlflow_traces_delete_tracesDelete traces. There are two supported ways to do this: Case 1: max_timestamp_millis and max_traces may both be specified for time-based deletion. Traces are deleted from oldest to newest until all traces older than max_timestamp_millis have been deleted or max_traces traces have been deleted. Case 2: trace_ids may be specified to delete traces by their IDs.data: {
. experiment_id (string)
. max_timestamp_millis (integer)
. max_traces (integer)
. trace_ids (array)
} (object) required
post_api_3_0_mlflow_traces_searchSearch for traces with filter and order by criteria.data: {
. filter (string)
. locations (array)
. max_results (integer)
. order_by (array)
. page_token (string)
} (object) required
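A hedged sketch of a trace-search payload for the action above. The shape of the locations entries is an assumption, and the filter and order_by strings are invented examples:

```python
# Hypothetical params for post_api_3_0_mlflow_traces_search.
params = {
    "data": {
        "locations": [{"mlflow_experiment": {"experiment_id": "1234"}}],  # shape assumed
        "filter": "status = 'OK'",
        "order_by": ["timestamp_ms DESC"],
        "max_results": 50,
    }
}
```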
get_api_3_0_mlflow_traces_by_trace_idGet the information for a trace. A trace is a collection of spans that each represent individual operations that a model performed while processing a request.trace_id (string)
patch_api_3_0_mlflow_traces_by_trace_idEnd an in-progress trace.trace_id (string)
data: {
. trace
. update_mask (string)
} (object) required
post_api_3_0_mlflow_traces_by_trace_id_assessmentsCreate an assessment of a trace. An assessment records a human or machine (e.g. LLM judge) annotation used for training, evaluation, or monitoring of quality. An assessment is on an individual trace or span of that trace. The trace is the parent resource for an assessment.trace_id (string)
data: {
. assessment
} (object) required
get_api_3_0_mlflow_traces_by_trace_id_assessments_by_assessment_idGet an assessment of a trace. An assessment records a human or machine (e.g. LLM judge) annotation used for training, evaluation, or monitoring of quality. An assessment is on an individual trace or span of that trace. The trace is the parent resource for an assessment.trace_id (string)
assessment_id (string)
patch_api_3_0_mlflow_traces_by_trace_id_assessments_by_assessment_idUpdate an assessment of a trace. This API does not maintain version history of assessments. If you wish to maintain a version history, use the 'Create an assessment of a trace' API (/api/workspace/mlflowexperimenttrace/createassessmentv3) to create a new assessment with the updated information and set its overrides field to the existing assessment's ID.trace_id (string)
assessment_id (string)
data: {
. assessment
. update_mask (string)
} (object) required
delete_api_3_0_mlflow_traces_by_trace_id_assessments_by_assessment_idDelete an assessment of a trace.trace_id (string)
assessment_id (string)
get_api_3_0_mlflow_traces_by_trace_id_credentials_for_data_downloadGet credentials to download trace data.trace_id (string)
get_api_3_0_mlflow_traces_by_trace_id_credentials_for_data_uploadGet credentials to upload trace data.trace_id (string)
patch_api_3_0_mlflow_traces_by_trace_id_tagsSets a tag on a trace. Tags are mutable and can be updated as desired. Tag keys should not be prefixed with 'mlflow.' as this is a reserved namespace for system tags.trace_id (string)
data: {
. key (string)
. value (string)
} (object) required
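A hedged sketch of a tag payload for the action above; the trace ID and tag values are placeholders. As noted, the key must avoid the reserved mlflow. prefix:

```python
# Hypothetical params for patch_api_3_0_mlflow_traces_by_trace_id_tags.
params = {
    "trace_id": "tr-abc123",  # placeholder trace ID
    "data": {
        "key": "owner",       # must not start with the reserved 'mlflow.' prefix
        "value": "data-team",
    },
}
```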
delete_api_3_0_mlflow_traces_by_trace_id_tagsDelete a tag from a trace.trace_id (string)
key (string)
patch_api_3_0_rfa_destinationsUpdates the access request destinations for the given securable. The caller must be a metastore admin, the owner of the securable, or a user that has the MANAGE privilege on the securable in order to assign destinations. Destinations cannot be updated for securables underneath schemas (tables, volumes, functions, and models). For these securable types, destinations are inherited from the parent securable. A maximum of 5 emails and 5 external notification destinations (Slack, Microsoft Teams, and generic webhooks) can be assigned.update_mask (string) required
data: {
. are_any_destinations_hidden (boolean)
. destinations (array)
. securable
} (object) required
get_api_3_0_rfa_destinations_by_securable_type_by_full_nameGets an array of access request destinations for the specified securable. Any caller can see URL destinations or the destinations on the metastore. Otherwise, only those with BROWSE permissions on the securable can see destinations. The supported securable types are: 'metastore', 'catalog', 'schema', 'table', 'external_location', 'connection', 'credential', 'function', 'registered_model', and 'volume'.securable_type (string)
full_name (string)
post_api_3_0_rfa_requestsCreates access requests for Unity Catalog permissions for a specified principal on a securable object. This Batch API can take in multiple principals, securable objects, and permissions as the input and returns the access request destinations for each. Principals must be unique across the API call. The supported securable types are: 'metastore', 'catalog', 'schema', 'table', 'external_location', 'connection', 'credential', 'function', 'registered_model', and 'volume'.data: {
. requests (array)
} (object) required
post_serving_endpoints_by_name_invocationsQuery a serving endpoint.name (string)
data: {
. client_request_id (string)
. dataframe_records (array)
. dataframe_split
. extra_params (object)
. input
. inputs
. instances (array)
. max_tokens (integer)
. messages (array)
. n (integer)
. prompt
. stop (array)
. stream (boolean)
. temperature (number)
. usage_context (object)
} (object) required
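The invocation payload above varies with the model type behind the endpoint. A hedged sketch for a chat-style endpoint; the endpoint name and all field values are placeholders:

```python
# Hypothetical params for post_serving_endpoints_by_name_invocations,
# shown for a chat-style model.
params = {
    "name": "my-chat-endpoint",  # placeholder endpoint name
    "data": {
        "messages": [{"role": "user", "content": "Summarize yesterday's job failures."}],
        "max_tokens": 128,
        "temperature": 0.2,
    },
}
# Tabular models would instead use dataframe_records / dataframe_split,
# and completion models would use prompt.
```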