Databricks Workspace
Connect to your Databricks Workspace to manage clusters, jobs, and notebooks.
Authentication
This connector uses Token-based authentication.
Info: Set up your connection in the Abstra Console before using it in your workflows.
How to use
Using the Smart Chat
Execute the action "CHOOSE_ONE_ACTION_BELOW" from my connector "YOUR_CONNECTOR_NAME" using the params "PARAMS_HERE".
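For example, to check a DBFS path with the get_api_2_0_dbfs_get_status action from the table below (the connector name and path here are placeholders for your own values):
Execute the action "get_api_2_0_dbfs_get_status" from my connector "my_databricks_workspace" using the params "path: /FileStore/example.csv".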
Using the Web Editor
from abstra.connectors import run_connection_action

result = run_connection_action(
    connection_name="your_connection_name",
    action_name="your_action_name",
    params={
        "param1": "value1",
        "param2": "value2",
    },
)
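As a concrete sketch, here is what calling one of the actions listed below might look like. It assumes a connection named my_databricks_workspace (replace with your own connection name) and uses the get_api_1_2_commands_status action, whose required params (clusterId, contextId, commandId) come from the actions table; all values shown are placeholders.

from abstra.connectors import run_connection_action

# Check the status of a command previously submitted with
# post_api_1_2_commands_execute. All IDs below are placeholders.
status = run_connection_action(
    connection_name="my_databricks_workspace",
    action_name="get_api_1_2_commands_status",
    params={
        "clusterId": "1234-567890-abcde123",
        "contextId": "your_context_id",
        "commandId": "your_command_id",
    },
)
print(status)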
Available Actions
This connector provides 724 actions:
| Action | Purpose | Parameters |
|---|---|---|
| post_api_1_2_commands_cancel | Cancels a currently running command within an execution context. The command ID is obtained from a prior successful call to execute. | data: { . clusterId (string) . commandId (string) . contextId (string) } (object) required |
| post_api_1_2_commands_execute | Runs a cluster command in the given execution context, using the provided language. If successful, it returns an ID for tracking the status of the command's execution. | data: { . clusterId (string) . command (string) . contextId (string) . language } (object) required |
| get_api_1_2_commands_status | Gets the status of and, if available, the results from a currently executing command. The command ID is obtained from a prior successful call to execute. | clusterId (string) required contextId (string) required commandId (string) required |
| post_api_1_2_contexts_create | Creates an execution context for running cluster commands. If successful, this method returns the ID of the new execution context. | data: { . clusterId (string) . language } (object) required |
| post_api_1_2_contexts_destroy | Deletes an execution context. | data: { . clusterId (string) . contextId (string) } (object) required |
| get_api_1_2_contexts_status | Gets the status for an execution context. | clusterId (string) required contextId (string) required |
| get_api_2_0_accounts_service_principals_by_service_principal_id_credentials_secrets | List all secrets associated with the given service principal. This operation only returns information about the secrets themselves and does not include the secret values. | service_principal_id (string) page_token (string) page_size (integer) |
| post_api_2_0_accounts_service_principals_by_service_principal_id_credentials_secrets | Create a secret for the given service principal. | service_principal_id (string) data: { . lifetime (string) } (object) required |
| delete_api_2_0_accounts_service_principals_by_service_principal_id_credentials_secrets_by_secret_id | Delete a secret from the given service principal. | service_principal_id (string) secret_id (string) |
| get_api_2_0_alerts | Gets a list of alerts accessible to the user, ordered by creation time. | page_token (string) page_size (integer) |
| post_api_2_0_alerts | Creates an alert. | data: { . create_time (string) . custom_description (string) . custom_summary (string) . display_name (string) . effective_run_as . evaluation . id (string) . lifecycle_state . owner_user_name (string) . parent_path (string) . query_text (string) . run_as . run_as_user_name (string) . schedule . update_time (string) . warehouse_id (string) } (object) required |
| get_api_2_0_alerts_by_id | Gets an alert. | id (string) |
| patch_api_2_0_alerts_by_id | Updates an alert. | id (string) update_mask (string) required data: { . create_time (string) . custom_description (string) . custom_summary (string) . display_name (string) . effective_run_as . evaluation . id (string) . lifecycle_state . owner_user_name (string) . parent_path (string) . query_text (string) . run_as . run_as_user_name (string) . schedule . update_time (string) . warehouse_id (string) } (object) required |
| delete_api_2_0_alerts_by_id | Moves an alert to the trash. Trashed alerts immediately disappear from list views, and can no longer trigger. You can restore a trashed alert through the UI. A trashed alert is permanently deleted after 30 days. | id (string) |
| get_api_2_0_apps | Lists all apps in the workspace. | page_token (string) page_size (integer) |
| post_api_2_0_apps | Creates a new app. | no_compute (boolean) data: { . active_deployment . app_status . budget_policy_id (string) . compute_status . create_time (string) . creator (string) . default_source_code_path (string) . description (string) . effective_budget_policy_id (string) . effective_user_api_scopes (array) . id (string) . name (string) . oauth2_app_client_id (string) . oauth2_app_integration_id (string) . pending_deployment . resources (array) . service_principal_client_id (string) . service_principal_id (integer) . service_principal_name (string) . update_time (string) . updater (string) . url (string) . user_api_scopes (array) } (object) required |
| get_api_2_0_apps_by_app_name_deployments | Lists all app deployments for the app with the supplied name. | app_name (string) page_token (string) page_size (integer) |
| post_api_2_0_apps_by_app_name_deployments | Creates an app deployment for the app with the supplied name. | app_name (string) data: { . create_time (string) . creator (string) . deployment_artifacts . deployment_id (string) . mode . source_code_path (string) . status . update_time (string) } (object) required |
| get_api_2_0_apps_by_app_name_deployments_by_deployment_id | Retrieves information for the app deployment with the supplied name and deployment id. | app_name (string) deployment_id (string) |
| get_api_2_0_apps_by_name | Retrieves information for the app with the supplied name. | name (string) |
| patch_api_2_0_apps_by_name | Updates the app with the supplied name. | name (string) data: { . active_deployment . app_status . budget_policy_id (string) . compute_status . create_time (string) . creator (string) . default_source_code_path (string) . description (string) . effective_budget_policy_id (string) . effective_user_api_scopes (array) . id (string) . name (string) . oauth2_app_client_id (string) . oauth2_app_integration_id (string) . pending_deployment . resources (array) . service_principal_client_id (string) . service_principal_id (integer) . service_principal_name (string) . update_time (string) . updater (string) . url (string) . user_api_scopes (array) } (object) required |
| delete_api_2_0_apps_by_name | Deletes an app. | name (string) |
| post_api_2_0_apps_by_name_start | Start the last active deployment of the app in the workspace. | name (string) data (object) required |
| post_api_2_0_apps_by_name_stop | Stops the active deployment of the app in the workspace. | name (string) data (object) required |
| get_api_2_0_clean_rooms | Get a list of all clean rooms of the metastore. Only clean rooms the caller has access to are returned. | page_size (integer) page_token (string) |
| post_api_2_0_clean_rooms | Create a new clean room with the specified collaborators. This method is asynchronous; the returned name field inside the clean_room field can be used to poll the clean room status, using the :method:cleanrooms/get method. When this method returns, the clean room will be in a PROVISIONING state, with only name, owner, comment, created_at and status populated. The clean room will be usable once it enters an ACTIVE state. The caller must be a metastore admin or have the CREATE_CLEAN_ROOM privilege. | data: { . access_restricted . comment (string) . created_at (integer) . local_collaborator_alias (string) . name (string) . output_catalog . owner (string) . remote_detailed_info . status . updated_at (integer) } (object) required |
| get_api_2_0_clean_rooms_by_clean_room_name_assets | List assets. | clean_room_name (string) page_token (string) |
| post_api_2_0_clean_rooms_by_clean_room_name_assets | Create a clean room asset, i.e. share an asset like a notebook or table into the clean room. For each UC asset that is added through this method, the clean room owner must also have enough privilege on the asset to consume it. The privilege must be maintained indefinitely for the clean room to be able to access the asset. Typically, you should use a group as the clean room owner. | clean_room_name (string) data: { . added_at (integer) . asset_type . clean_room_name (string) . foreign_table . foreign_table_local_details . name (string) . notebook . owner_collaborator_alias (string) . status . table . table_local_details . view . view_local_details . volume_local_details } (object) required |
| get_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name | Get the details of a clean room asset by its type and full name. | clean_room_name (string) asset_type (string) name (string) |
| patch_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name | Update a clean room asset. For example, updating the content of a notebook; changing the shared partitions of a table; etc. | clean_room_name (string) asset_type (string) name (string) data: { . added_at (integer) . asset_type . clean_room_name (string) . foreign_table . foreign_table_local_details . name (string) . notebook . owner_collaborator_alias (string) . status . table . table_local_details . view . view_local_details . volume_local_details } (object) required |
| delete_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name | Delete a clean room asset - unshare/remove the asset from the clean room | clean_room_name (string) asset_type (string) name (string) |
| post_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name_reviews | Submit an asset review | clean_room_name (string) asset_type (string) name (string) data: { . notebook_review } (object) required |
| get_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name_revisions | List revisions for an asset | clean_room_name (string) asset_type (string) name (string) page_size (integer) page_token (string) |
| get_api_2_0_clean_rooms_by_clean_room_name_assets_by_asset_type_by_name_revisions_by_etag | Get a specific revision of an asset | clean_room_name (string) asset_type (string) name (string) etag (string) |
| get_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules | List all auto-approval rules for the caller | clean_room_name (string) page_size (integer) page_token (string) |
| post_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules | Create an auto-approval rule | clean_room_name (string) data: { . auto_approval_rule } (object) required |
| get_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules_by_rule_id | Get an auto-approval rule by rule ID | clean_room_name (string) rule_id (string) |
| patch_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules_by_rule_id | Update an auto-approval rule by rule ID | clean_room_name (string) rule_id (string) data: { . author_collaborator_alias (string) . author_scope . clean_room_name (string) . created_at (integer) . rule_id (string) . rule_owner_collaborator_alias (string) . runner_collaborator_alias (string) } (object) required |
| delete_api_2_0_clean_rooms_by_clean_room_name_auto_approval_rules_by_rule_id | Delete an auto-approval rule by rule ID | clean_room_name (string) rule_id (string) |
| post_api_2_0_clean_rooms_by_clean_room_name_output_catalogs | Create the output catalog of the clean room. | clean_room_name (string) data: { . catalog_name (string) . status } (object) required |
| get_api_2_0_clean_rooms_by_clean_room_name_runs | List all the historical notebook task runs in a clean room. | clean_room_name (string) notebook_name (string) page_size (integer) page_token (string) |
| get_api_2_0_clean_rooms_by_name | Get the details of a clean room given its name. | name (string) |
| patch_api_2_0_clean_rooms_by_name | Update a clean room. The caller must be the owner of the clean room, have MODIFY_CLEAN_ROOM privilege, or be a metastore admin. When the caller is a metastore admin, only the owner field can be updated. | name (string) data: { . clean_room } (object) required |
| delete_api_2_0_clean_rooms_by_name | Delete a clean room. After deletion, the clean room will be removed from the metastore. If the other collaborators have not deleted the clean room, they will still have the clean room in their metastore, but it will be in a DELETED state and no operations other than deletion can be performed on it. | name (string) |
| post_api_2_0_database_catalogs | Create a Database Catalog. | data: { . create_database_if_not_exists (boolean) . database_instance_name (string) . database_name (string) . name (string) . uid (string) } (object) required |
| get_api_2_0_database_catalogs_by_name | Get a Database Catalog. | name (string) |
| delete_api_2_0_database_catalogs_by_name | Delete a Database Catalog. | name (string) |
| post_api_2_0_database_credentials | Generates a credential that can be used to access database instances. | data: { . claims (array) . instance_names (array) . request_id (string) } (object) required |
| get_api_2_0_database_instances | List Database Instances. | page_token (string) page_size (integer) |
| post_api_2_0_database_instances | Create a Database Instance. | data: { . capacity (string) . child_instance_refs (array) . creation_time (string) . creator (string) . effective_enable_pg_native_login (boolean) . effective_enable_readable_secondaries (boolean) . effective_node_count (integer) . effective_retention_window_in_days (integer) . effective_stopped (boolean) . enable_pg_native_login (boolean) . enable_readable_secondaries (boolean) . name (string) . node_count (integer) . parent_instance_ref . pg_version (string) . read_only_dns (string) . read_write_dns (string) . retention_window_in_days (integer) . state . stopped (boolean) . uid (string) } (object) required |
| get_api_2_0_database_instances_by_name | Get a Database Instance. | name (string) |
| patch_api_2_0_database_instances_by_name | Update a Database Instance. | name (string) update_mask (string) required data: { . capacity (string) . child_instance_refs (array) . creation_time (string) . creator (string) . effective_enable_pg_native_login (boolean) . effective_enable_readable_secondaries (boolean) . effective_node_count (integer) . effective_retention_window_in_days (integer) . effective_stopped (boolean) . enable_pg_native_login (boolean) . enable_readable_secondaries (boolean) . name (string) . node_count (integer) . parent_instance_ref . pg_version (string) . read_only_dns (string) . read_write_dns (string) . retention_window_in_days (integer) . state . stopped (boolean) . uid (string) } (object) required |
| delete_api_2_0_database_instances_by_name | Delete a Database Instance. | name (string) force (boolean) purge (boolean) |
| get_api_2_0_database_instances_find_by_uid | Find a Database Instance by uid. | uid (string) |
| post_api_2_0_database_synced_tables | Create a Synced Database Table. | data: { . data_synchronization_status . database_instance_name (string) . effective_database_instance_name (string) . effective_logical_database_name (string) . logical_database_name (string) . name (string) . spec . unity_catalog_provisioning_state } (object) required |
| get_api_2_0_database_synced_tables_by_name | Get a Synced Database Table. | name (string) |
| delete_api_2_0_database_synced_tables_by_name | Delete a Synced Database Table. | name (string) |
| post_api_2_0_database_tables | Create a Database Table. Useful for registering pre-existing PG tables in UC. See CreateSyncedDatabaseTable for creating synced tables in PG from a source table in UC. | data: { . database_instance_name (string) . logical_database_name (string) . name (string) } (object) required |
| get_api_2_0_database_tables_by_name | Get a Database Table. | name (string) |
| delete_api_2_0_database_tables_by_name | Delete a Database Table. | name (string) |
| post_api_2_0_dbfs_add_block | Appends a block of data to the stream specified by the input handle. If the handle does not exist, this call will throw an exception with RESOURCE_DOES_NOT_EXIST. If the block of data exceeds 1 MB, this call will throw an exception with MAX_BLOCK_SIZE_EXCEEDED. | data: { . data (string) . handle (integer) } (object) required |
| post_api_2_0_dbfs_close | Closes the stream specified by the input handle. If the handle does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. | data: { . handle (integer) } (object) required |
| post_api_2_0_dbfs_create | Opens a stream to write to a file and returns a handle to this stream. There is a 10 minute idle timeout on this handle. If a file or directory already exists on the given path and overwrite is set to false, this call will throw an exception with RESOURCE_ALREADY_EXISTS. A typical workflow for file upload would be: 1. Issue a create call and get a handle. 2. Issue one or more add-block calls with the handle you have. 3. Issue a close call with the handle you have. | data: { . overwrite (boolean) . path (string) } (object) required |
| post_api_2_0_dbfs_delete | Delete the file or directory (optionally recursively deleting all files in the directory). This call throws an exception with IO_ERROR if the path is a non-empty directory and recursive is set to false, or on other similar errors. When you delete a large number of files, the delete operation is done in increments. The call returns a response after approximately 45 seconds with an error message (503 Service Unavailable) asking you to re-invoke the delete operation until the directory structure is fully deleted. | data: { . path (string) . recursive (boolean) } (object) required |
| get_api_2_0_dbfs_get_status | Gets the file information for a file or directory. If the file or directory does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. | path (string) required |
| get_api_2_0_dbfs_list | List the contents of a directory, or details of the file. If the file or directory does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. When calling list on a large directory, the list operation will time out after approximately 60 seconds. We strongly recommend using list only on directories containing less than 10K files and discourage using the DBFS REST API for operations that list more than 10K files. Instead, we recommend that you perform such operations in the context of a cluster. | path (string) required |
| post_api_2_0_dbfs_mkdirs | Creates the given directory and necessary parent directories if they do not exist. If a file (not a directory) exists at any prefix of the input path, this call throws an exception with RESOURCE_ALREADY_EXISTS. Note: If this operation fails, it might have succeeded in creating some of the necessary parent directories. | data: { . path (string) } (object) required |
| post_api_2_0_dbfs_move | Moves a file from one location to another location within DBFS. If the source file does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. If a file already exists in the destination path, this call throws an exception with RESOURCE_ALREADY_EXISTS. If the given source path is a directory, this call always recursively moves all files. | data: { . destination_path (string) . source_path (string) } (object) required |
| post_api_2_0_dbfs_put | Uploads a file through the use of multipart form post. It is mainly used for streaming uploads, but can also be used as a convenient single call for data upload. Alternatively you can pass contents as a base64 string. The amount of data that can be passed when not streaming using the contents parameter is limited to 1 MB. MAX_BLOCK_SIZE_EXCEEDED will be thrown if this limit is exceeded. If you want to upload large files, use the streaming upload. For details, see :method:dbfs/create. | data: { . contents (string) . overwrite (boolean) . path (string) } (object) required |
| get_api_2_0_dbfs_read | Returns the contents of a file. If the file does not exist, this call throws an exception with RESOURCE_DOES_NOT_EXIST. If the path is a directory, the read length is negative, or if the offset is negative, this call throws an exception with INVALID_PARAMETER_VALUE. If the read length exceeds 1 MB, this call throws an exception with MAX_READ_SIZE_EXCEEDED. If offset + length exceeds the number of bytes in a file, it reads the contents until the end of file. | path (string) required offset (integer) length (integer) |
| get_api_2_0_fs_directoriesby_directory_path | Returns the contents of a directory. If there is no directory at the specified path, the API returns a HTTP 404 error. | directory_path (string) page_size (integer) page_token (string) |
| head_api_2_0_fs_directoriesby_directory_path | Get the metadata of a directory. The response HTTP headers contain the metadata. There is no response body. This method is useful to check if a directory exists and the caller has access to it. If you wish to ensure the directory exists, you can instead use PUT, which will create the directory if it does not exist and is idempotent (it will succeed if the directory already exists). | directory_path (string) |
| put_api_2_0_fs_directoriesby_directory_path | Creates an empty directory. If necessary, also creates any parent directories of the new, empty directory, like the shell command mkdir -p. If called on an existing directory, returns a success response; this method is idempotent (it will succeed if the directory already exists). | directory_path (string) |
| delete_api_2_0_fs_directoriesby_directory_path | Deletes an empty directory. To delete a non-empty directory, first delete all of its contents. This can be done by listing the directory contents and deleting each file and subdirectory recursively. | directory_path (string) |
| get_api_2_0_fs_filesby_file_path | Downloads a file. The file contents are the response body. This is a standard HTTP file download, not a JSON RPC. It supports the Range and If-Unmodified-Since HTTP headers. | file_path (string) Range (string) If-Unmodified-Since (string) |
| head_api_2_0_fs_filesby_file_path | Get the metadata of a file. The response HTTP headers contain the metadata. There is no response body. | file_path (string) Range (string) If-Unmodified-Since (string) |
| put_api_2_0_fs_filesby_file_path | Uploads a file of up to 5 GiB. The file contents should be sent as the request body as raw bytes (an octet stream); do not encode or otherwise modify the bytes before sending. The contents of the resulting file will be exactly the bytes sent in the request body. If the request is successful, there is no response body. | file_path (string) overwrite (boolean) |
| delete_api_2_0_fs_filesby_file_path | Deletes a file. If the request is successful, there is no response body. | file_path (string) |
| get_api_2_0_genie_spaces | Get list of Genie Spaces. | page_size (integer) page_token (string) |
| get_api_2_0_genie_spaces_by_space_id | Get details of a Genie Space. | space_id (string) |
| delete_api_2_0_genie_spaces_by_space_id | Move a Genie Space to the trash. | space_id (string) |
| get_api_2_0_genie_spaces_by_space_id_conversations | Get a list of conversations in a Genie Space. | space_id (string) page_size (integer) page_token (string) |
| delete_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id | Delete a conversation. | space_id (string) conversation_id (string) |
| post_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages | Create a new message in a conversation (see :method:genie/startconversation). The AI response uses all previously created messages in the conversation to respond. | space_id (string) conversation_id (string) data: { . content (string) } (object) required |
| get_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages_by_message_id | Get message from conversation. | space_id (string) conversation_id (string) message_id (string) |
| post_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages_by_message_id_attachments_by_attachment_id_execute_query | Execute the SQL for a message query attachment. Use this API when the query attachment has expired and needs to be re-executed. | space_id (string) conversation_id (string) message_id (string) attachment_id (string) |
| get_api_2_0_genie_spaces_by_space_id_conversations_by_conversation_id_messages_by_message_id_attachments_by_attachment_id_query_result | Get the result of the SQL query if the message has a query attachment. This is only available if a message has a query attachment and the message status is EXECUTING_QUERY or COMPLETED. | space_id (string) conversation_id (string) message_id (string) attachment_id (string) |
| post_api_2_0_genie_spaces_by_space_id_start_conversation | Start a new conversation. | space_id (string) data: { . content (string) } (object) required |
| get_api_2_0_git_credentials | Lists the calling user's Git credentials. One credential per user is supported. | No parameters |
| post_api_2_0_git_credentials | Creates a Git credential entry for the user. Only one Git credential per user is supported, so any attempts to create credentials if an entry already exists will fail. Use the PATCH endpoint to update existing credentials, or the DELETE endpoint to delete existing credentials. | data: { . git_provider (string) . git_username (string) . is_default_for_provider (boolean) . name (string) . personal_access_token (string) } (object) required |
| get_api_2_0_git_credentials_by_credential_id | Gets the Git credential with the specified credential ID. | credential_id (integer) |
| patch_api_2_0_git_credentials_by_credential_id | Updates the specified Git credential. | credential_id (integer) data: { . git_provider (string) . git_username (string) . is_default_for_provider (boolean) . name (string) . personal_access_token (string) } (object) required |
| delete_api_2_0_git_credentials_by_credential_id | Deletes the specified Git credential. | credential_id (integer) |
| get_api_2_0_global_init_scripts | Get a list of all global init scripts for this workspace. This returns all properties for each script but not the script contents. To retrieve the contents of a script, use the get a global init script (:method:globalinitscripts/get) operation. | No parameters |
| post_api_2_0_global_init_scripts | Creates a new global init script in this workspace. | data: { . enabled (boolean) . name (string) . position (integer) . script (string) } (object) required |
| get_api_2_0_global_init_scripts_by_script_id | Gets all the details of a script, including its Base64-encoded contents. | script_id (string) |
| patch_api_2_0_global_init_scripts_by_script_id | Updates a global init script, specifying only the fields to change. All fields are optional. Unspecified fields retain their current value. | script_id (string) data: { . enabled (boolean) . name (string) . position (integer) . script (string) } (object) required |
| delete_api_2_0_global_init_scripts_by_script_id | Deletes a global init script. | script_id (string) |
| post_api_2_0_identity_groups_resolve_by_external_id | Resolves a group with the given external ID from the customer's IdP. If the group does not exist, it will be created in the account. If the customer is not onboarded onto Automatic Identity Management (AIM), this will return an error. | data: { . external_id (string) } (object) required |
| post_api_2_0_identity_service_principals_resolve_by_external_id | Resolves a service principal (SP) with the given external ID from the customer's IdP. If the SP does not exist, it will be created. If the customer is not onboarded onto Automatic Identity Management (AIM), this will return an error. | data: { . external_id (string) } (object) required |
| post_api_2_0_identity_users_resolve_by_external_id | Resolves a user with the given external ID from the customer's IdP. If the user does not exist, it will be created. If the customer is not onboarded onto Automatic Identity Management (AIM), this will return an error. | data: { . external_id (string) } (object) required |
| get_api_2_0_identity_workspace_access_details_by_principal_id | Returns the access details for a principal in the current workspace. Allows for checking access details for any provisioned principal (user, service principal, or group) in the current workspace. Provisioned principal here refers to one that has been synced into Databricks from the customer's IdP or added explicitly to Databricks via SCIM/UI. Allows for passing in a 'view' parameter to control what fields are returned (BASIC by default, or FULL). | principal_id (integer) view (string) |
| post_api_2_0_instance_pools_create | Creates a new instance pool using idle and ready-to-use cloud instances. | data: { . aws_attributes . custom_tags (object) . disk_spec . enable_elastic_disk (boolean) . idle_instance_autotermination_minutes (integer) . instance_pool_name (string) . max_capacity (integer) . min_idle_instances (integer) . node_type_id (string) . preloaded_docker_images (array) . preloaded_spark_versions (array) } (object) required |
| post_api_2_0_instance_pools_delete | Deletes the instance pool permanently. The idle instances in the pool are terminated asynchronously. | data: { . instance_pool_id (string) } (object) required |
| post_api_2_0_instance_pools_edit | Modifies the configuration of an existing instance pool. | data: { . custom_tags (object) . idle_instance_autotermination_minutes (integer) . instance_pool_id (string) . instance_pool_name (string) . max_capacity (integer) . min_idle_instances (integer) . node_type_id (string) } (object) required |
| get_api_2_0_instance_pools_get | Retrieve the information for an instance pool based on its identifier. | instance_pool_id (string) required |
| get_api_2_0_instance_pools_list | Gets a list of instance pools with their statistics. | No parameters |
| post_api_2_0_instance_profiles_add | Registers an instance profile in Databricks. In the UI, you can then give users the permission to use this instance profile when launching clusters. This API is only available to admin users. | data: { . iam_role_arn (string) . instance_profile_arn (string) . is_meta_instance_profile (boolean) . skip_validation (boolean) } (object) required |
| post_api_2_0_instance_profiles_edit | The only supported field to change is the optional IAM role ARN associated with the instance profile. It is required to specify the IAM role ARN if both of the following are true: your role name and instance profile name do not match (the name is the part after the last slash in each ARN), and you want to use the instance profile with Databricks SQL Serverless (https://docs.databricks.com/sql/admin/serverless.html). To understand where these fields are in the AWS console, see Enable serverless SQL warehouses. | data: { . iam_role_arn (string) . instance_profile_arn (string) . is_meta_instance_profile (boolean) } (object) required |
| get_api_2_0_instance_profiles_list | List the instance profiles that the calling user can use to launch a cluster. This API is available to all users. | No parameters |
| post_api_2_0_instance_profiles_remove | Remove the instance profile with the provided ARN. Existing clusters with this instance profile will continue to function. This API is only accessible to admin users. | data: { . instance_profile_arn (string) } (object) required |
| get_api_2_0_ip_access_lists | Gets all IP access lists for the specified workspace. | No parameters |
| post_api_2_0_ip_access_lists | Creates an IP access list for this workspace. A list can be an allow list or a block list. See the top of this file for a description of how the server treats allow lists and block lists at runtime. When creating or updating an IP access list: For all allow lists and block lists combined, the API supports a maximum of 1000 IP/CIDR values, where one CIDR counts as a single value. Attempts to exceed that number return error 400 with error_code value QUOTA_EXCEEDED. If the new list would block the calling user's current IP, error 400 is returned with error_code value INVALID_STATE. | data: { . ip_addresses (array) . label (string) . list_type } (object) required |
| get_api_2_0_ip_access_lists_by_ip_access_list_id | Gets an IP access list, specified by its list ID. | ip_access_list_id (string) |
| put_api_2_0_ip_access_lists_by_ip_access_list_id | Replaces an IP access list, specified by its ID. A list can include allow lists and block lists. See the top of this file for a description of how the server treats allow lists and block lists at run time. When replacing an IP access list: For all allow lists and block lists combined, the API supports a maximum of 1000 IP/CIDR values, where one CIDR counts as a single value. Attempts to exceed that number return error 400 with error_code value QUOTA_EXCEEDED. If the resulting list would block the calling user's current IP, error 400 is returned with error_code value INVALID_STATE. | ip_access_list_id (string) data: { . enabled (boolean) . ip_addresses (array) . label (string) . list_type } (object) required |
| patch_api_2_0_ip_access_lists_by_ip_access_list_id | Updates an existing IP access list, specified by its ID. A list can include allow lists and block lists. See the top of this file for a description of how the server treats allow lists and block lists at run time. When updating an IP access list: For all allow lists and block lists combined, the API supports a maximum of 1000 IP/CIDR values, where one CIDR counts as a single value. Attempts to exceed that number return error 400 with error_code value QUOTA_EXCEEDED. If the updated list would block the calling user's current IP, error 400 is returned with error_code value INVALID_STATE. | ip_access_list_id (string) data: { . enabled (boolean) . ip_addresses (array) . label (string) . list_type } (object) required |
| delete_api_2_0_ip_access_lists_by_ip_access_list_id | Deletes an IP access list, specified by its list ID. | ip_access_list_id (string) |
| get_api_2_0_lakeview_dashboards | List dashboards. | page_size (integer) page_token (string) show_trashed (boolean) view (string) |
| post_api_2_0_lakeview_dashboards | Create a draft dashboard. | data: { . create_time (string) . dashboard_id (string) . display_name (string) . etag (string) . lifecycle_state . parent_path (string) . path (string) . serialized_dashboard (string) . update_time (string) . warehouse_id (string) } (object) required |
| post_api_2_0_lakeview_dashboards_migrate | Migrates a classic SQL dashboard to Lakeview. | data: { . display_name (string) . parent_path (string) . source_dashboard_id (string) . update_parameter_syntax (boolean) } (object) required |
| get_api_2_0_lakeview_dashboards_by_dashboard_id | Get a draft dashboard. | dashboard_id (string) |
| patch_api_2_0_lakeview_dashboards_by_dashboard_id | Update a draft dashboard. | dashboard_id (string) data: { . create_time (string) . dashboard_id (string) . display_name (string) . etag (string) . lifecycle_state . parent_path (string) . path (string) . serialized_dashboard (string) . update_time (string) . warehouse_id (string) } (object) required |
| delete_api_2_0_lakeview_dashboards_by_dashboard_id | Trash a dashboard. | dashboard_id (string) |
| get_api_2_0_lakeview_dashboards_by_dashboard_id_published | Get the current published dashboard. | dashboard_id (string) |
| post_api_2_0_lakeview_dashboards_by_dashboard_id_published | Publish the current draft dashboard. | dashboard_id (string) data: { . embed_credentials (boolean) . warehouse_id (string) } (object) required |
| delete_api_2_0_lakeview_dashboards_by_dashboard_id_published | Unpublish the dashboard. | dashboard_id (string) |
| get_api_2_0_lakeview_dashboards_by_dashboard_id_published_tokeninfo | Get the authorization details and scopes of a published dashboard required to mint an OAuth token. | dashboard_id (string) external_value (string) external_viewer_id (string) |
| get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules | List dashboard schedules. | dashboard_id (string) page_size (integer) page_token (string) |
| post_api_2_0_lakeview_dashboards_by_dashboard_id_schedules | Create dashboard schedule. | dashboard_id (string) data: { . create_time (string) . cron_schedule . dashboard_id (string) . display_name (string) . etag (string) . pause_status . schedule_id (string) . update_time (string) . warehouse_id (string) } (object) required |
| get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id | Get dashboard schedule. | dashboard_id (string) schedule_id (string) |
| put_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id | Update dashboard schedule. | dashboard_id (string) schedule_id (string) data: { . create_time (string) . cron_schedule . dashboard_id (string) . display_name (string) . etag (string) . pause_status . schedule_id (string) . update_time (string) . warehouse_id (string) } (object) required |
| delete_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id | Delete dashboard schedule. | dashboard_id (string) schedule_id (string) etag (string) |
| get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptions | List schedule subscriptions. | dashboard_id (string) schedule_id (string) page_size (integer) page_token (string) |
| post_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptions | Create schedule subscription. | dashboard_id (string) schedule_id (string) data: { . create_time (string) . created_by_user_id (integer) . dashboard_id (string) . etag (string) . schedule_id (string) . subscriber . subscription_id (string) . update_time (string) } (object) required |
| get_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptions_by_subscription_id | Get schedule subscription. | dashboard_id (string) schedule_id (string) subscription_id (string) |
| delete_api_2_0_lakeview_dashboards_by_dashboard_id_schedules_by_schedule_id_subscriptions_by_subscription_id | Delete schedule subscription. | dashboard_id (string) schedule_id (string) subscription_id (string) etag (string) |
| get_api_2_0_libraries_all_cluster_statuses | Get the status of all libraries on all clusters. A status is returned for all libraries installed on this cluster via the API or the libraries UI. | No parameters |
| get_api_2_0_libraries_cluster_status | Get the status of libraries on a cluster. A status is returned for all libraries installed on this cluster via the API or the libraries UI. The order of returned libraries is as follows: 1. Libraries set to be installed on this cluster, in the order that the libraries were added to the cluster, are returned first. 2. Libraries that were previously requested to be installed on this cluster but are now marked for removal are returned last, in no particular order. | cluster_id (string) required |
| post_api_2_0_libraries_install | Add libraries to install on a cluster. The installation is asynchronous; it happens in the background after the completion of this request. | data: { . cluster_id (string) . libraries (array) } (object) required |
| post_api_2_0_libraries_uninstall | Set libraries to uninstall from a cluster. The libraries won't be uninstalled until the cluster is restarted. A request to uninstall a library that is not currently installed is ignored. | data: { . cluster_id (string) . libraries (array) } (object) required |
| get_api_2_0_lineage_tracking_external_lineage | Lists external lineage relationships of a Databricks object or external metadata given a supplied direction. | object_info: { . external_metadata . model_version . path . table } (object) required lineage_direction (string) required page_size (integer) page_token (string) |
| post_api_2_0_lineage_tracking_external_lineage | Creates an external lineage relationship between a Databricks or external metadata object and another external metadata object. | data: { . columns (array) . id (string) . properties (object) . source . target } (object) required |
| patch_api_2_0_lineage_tracking_external_lineage | Updates an external lineage relationship between a Databricks or external metadata object and another external metadata object. | update_mask (string) required data: { . columns (array) . id (string) . properties (object) . source . target } (object) required |
| delete_api_2_0_lineage_tracking_external_lineage | Deletes an external lineage relationship between a Databricks or external metadata object and another external metadata object. | external_lineage_relationship: { . id (string) . source . target } (object) required |
| get_api_2_0_lineage_tracking_external_metadata | Gets an array of external metadata objects in the metastore. If the caller is the metastore admin, all external metadata objects will be retrieved. Otherwise, only external metadata objects that the caller has BROWSE on will be retrieved. There is no guarantee of a specific ordering of the elements in the array. | page_size (integer) page_token (string) |
| post_api_2_0_lineage_tracking_external_metadata | Creates a new external metadata object in the parent metastore if the caller is a metastore admin or has the CREATE_EXTERNAL_METADATA privilege. Grants BROWSE to all account users upon creation by default. | data: { . columns (array) . create_time (string) . created_by (string) . description (string) . entity_type (string) . id (string) . metastore_id (string) . name (string) . owner (string) . properties (object) . system_type . update_time (string) . updated_by (string) . url (string) } (object) required |
| get_api_2_0_lineage_tracking_external_metadata_by_name | Gets the specified external metadata object in a metastore. The caller must be a metastore admin, the owner of the external metadata object, or a user that has the BROWSE privilege. | name (string) |
| patch_api_2_0_lineage_tracking_external_metadata_by_name | Updates the external metadata object that matches the supplied name. The caller can only update either the owner or other metadata fields in one request. The caller must be a metastore admin, the owner of the external metadata object, or a user that has the MODIFY privilege. If the caller is updating the owner, they must also have the MANAGE privilege. | name (string) update_mask (string) required data: { . columns (array) . create_time (string) . created_by (string) . description (string) . entity_type (string) . id (string) . metastore_id (string) . name (string) . owner (string) . properties (object) . system_type . update_time (string) . updated_by (string) . url (string) } (object) required |
| delete_api_2_0_lineage_tracking_external_metadata_by_name | Deletes the external metadata object that matches the supplied name. The caller must be a metastore admin, the owner of the external metadata object, or a user that has the MANAGE privilege. | name (string) |
| get_api_2_0_marketplace_exchange_exchanges | List exchanges visible to provider | page_token (string) page_size (integer) |
| post_api_2_0_marketplace_exchange_exchanges | Create an exchange | data: { . exchange } (object) required |
| get_api_2_0_marketplace_exchange_exchanges_for_listing | List exchanges associated with a listing | listing_id (string) required page_token (string) page_size (integer) |
| post_api_2_0_marketplace_exchange_exchanges_for_listing | Associate an exchange with a listing | data: { . exchange_id (string) . listing_id (string) } (object) required |
| delete_api_2_0_marketplace_exchange_exchanges_for_listing_by_id | Disassociate an exchange from a listing | id (string) |
| get_api_2_0_marketplace_exchange_exchanges_by_id | Get an exchange. | id (string) |
| put_api_2_0_marketplace_exchange_exchanges_by_id | Update an exchange | id (string) data: { . exchange } (object) required |
| delete_api_2_0_marketplace_exchange_exchanges_by_id | Delete an exchange. This removes a listing from the marketplace. | id (string) |
| get_api_2_0_marketplace_exchange_filters | List exchange filters | exchange_id (string) required page_token (string) page_size (integer) |
| post_api_2_0_marketplace_exchange_filters | Add an exchange filter. | data: { . filter } (object) required |
| put_api_2_0_marketplace_exchange_filters_by_id | Update an exchange filter. | id (string) data: { . filter } (object) required |
| delete_api_2_0_marketplace_exchange_filters_by_id | Delete an exchange filter | id (string) |
| get_api_2_0_marketplace_exchange_listings_for_exchange | List listings associated with an exchange | exchange_id (string) required page_token (string) page_size (integer) |
| get_api_2_0_marketplace_provider_analytics_dashboard | Get provider analytics dashboard. | No parameters |
| post_api_2_0_marketplace_provider_analytics_dashboard | Create provider analytics dashboard. Returns a Marketplace-specific id (not to be confused with the Lakeview dashboard id). | No parameters |
| get_api_2_0_marketplace_provider_analytics_dashboard_latest | Get latest version of provider analytics dashboard. | No parameters |
| put_api_2_0_marketplace_provider_analytics_dashboard_by_id | Update provider analytics dashboard. | id (string) data: { . version (integer) } (object) required |
| get_api_2_0_marketplace_provider_files | List files attached to a parent entity. | file_parent: { . file_parent_type . parent_id (string) } (object) required page_token (string) page_size (integer) |
| post_api_2_0_marketplace_provider_files | Create a file. Currently, only provider icons and attached notebooks are supported. | data: { . display_name (string) . file_parent . marketplace_file_type . mime_type (string) } (object) required |
| get_api_2_0_marketplace_provider_files_by_file_id | Get a file | file_id (string) |
| delete_api_2_0_marketplace_provider_files_by_file_id | Delete a file | file_id (string) |
| post_api_2_0_marketplace_provider_listing | Create a new listing | data: { . listing } (object) required |
| get_api_2_0_marketplace_provider_listings | List listings owned by this provider | page_token (string) page_size (integer) |
| get_api_2_0_marketplace_provider_listings_by_id | Get a listing | id (string) |
| put_api_2_0_marketplace_provider_listings_by_id | Update a listing | id (string) data: { . listing } (object) required |
| delete_api_2_0_marketplace_provider_listings_by_id | Delete a listing | id (string) |
| put_api_2_0_marketplace_provider_listings_by_listing_id_personalization_requests_by_request_id_request_status | Update personalization request. This method only permits updating the status of the request. | listing_id (string) request_id (string) data: { . reason (string) . share . status } (object) required |
| get_api_2_0_marketplace_provider_personalization_requests | List personalization requests to this provider. This will return all personalization requests, regardless of which listing they are for. | page_token (string) page_size (integer) |
| post_api_2_0_marketplace_provider_provider | Create a provider | data: { . provider } (object) required |
| get_api_2_0_marketplace_provider_providers | List provider profiles for account. | page_token (string) page_size (integer) |
| get_api_2_0_marketplace_provider_providers_by_id | Get provider profile | id (string) |
| put_api_2_0_marketplace_provider_providers_by_id | Update provider profile | id (string) data: { . provider } (object) required |
| delete_api_2_0_marketplace_provider_providers_by_id | Delete provider | id (string) |
| get_api_2_0_mlflow_artifacts_list | List artifacts for a run. Takes an optional artifact_path prefix; if specified, the response contains only artifacts with the specified prefix. A maximum of 1000 artifacts will be retrieved for UC Volumes. Please call /api/2.0/fs/directories{directory_path} for listing artifacts in UC Volumes, which supports pagination. See List directory contents (Files API, /api/workspace/files/listdirectorycontents). | run_id (string) run_uuid (string) path (string) page_token (string) |
| post_api_2_0_mlflow_comments_create | Posts a comment on a model version. A comment can be submitted either by a user or programmatically to display relevant information about the model. For example, test results or deployment errors. | data: { . comment (string) . name (string) . version (string) } (object) required |
| delete_api_2_0_mlflow_comments_delete | Deletes a comment on a model version. | id (string) required |
| patch_api_2_0_mlflow_comments_update | Post an edit to a comment on a model version. | data: { . comment (string) . id (string) } (object) required |
| post_api_2_0_mlflow_databricks_model_versions_transition_stage | Transition a model version's stage. This is a Databricks workspace version of the MLflow endpoint (https://www.mlflow.org/docs/latest/rest-api.html#transition-modelversion-stage) that also accepts a comment associated with the transition to be recorded. | data: { . archive_existing_versions (boolean) . comment (string) . name (string) . stage (string) . version (string) } (object) required |
| get_api_2_0_mlflow_databricks_registered_models_get | Get the details of a model. This is a Databricks workspace version of the MLflow endpoint (https://www.mlflow.org/docs/latest/rest-api.html#get-registeredmodel) that also returns the model's Databricks workspace ID and the permission level of the requesting user on the model. | name (string) required |
| post_api_2_0_mlflow_databricks_runs_delete_runs | Bulk delete runs in an experiment that were created prior to or at the specified timestamp. Deletes at most max_runs per request. To call this API from a Databricks Notebook in Python, you can use the client code snippet on https://docs.databricks.com/en/mlflow/runs.html#bulk-delete. | data: { . experiment_id (string) . max_runs (integer) . max_timestamp_millis (integer) } (object) required |
| post_api_2_0_mlflow_databricks_runs_restore_runs | Bulk restore runs in an experiment that were deleted no earlier than the specified timestamp. Restores at most max_runs per request. To call this API from a Databricks Notebook in Python, you can use the client code snippet on https://docs.databricks.com/en/mlflow/runs.html#bulk-restore. | data: { . experiment_id (string) . max_runs (integer) . min_timestamp_millis (integer) } (object) required |
| post_api_2_0_mlflow_experiments_create | Creates an experiment with a name. Returns the ID of the newly created experiment. Validates that another experiment with the same name does not already exist; throws RESOURCE_ALREADY_EXISTS if an experiment with the given name already exists. | data: { . artifact_location (string) . name (string) . tags (array) } (object) required |
| post_api_2_0_mlflow_experiments_delete | Marks an experiment and associated metadata, runs, metrics, params, and tags for deletion. If the experiment uses FileStore, artifacts associated with the experiment are also deleted. | data: { . experiment_id (string) } (object) required |
| get_api_2_0_mlflow_experiments_get | Gets metadata for an experiment. This method works on deleted experiments. | experiment_id (string) required |
| get_api_2_0_mlflow_experiments_get_by_name | Gets metadata for an experiment. This endpoint will return deleted experiments, but prefers the active experiment if an active and deleted experiment share the same name. If multiple deleted experiments share the same name, the API will return one of them. Throws RESOURCE_DOES_NOT_EXIST if no experiment with the specified name exists. | experiment_name (string) required |
| get_api_2_0_mlflow_experiments_list | Gets a list of all experiments. | view_type (string) max_results (integer) page_token (string) |
| post_api_2_0_mlflow_experiments_restore | Restore an experiment marked for deletion. This also restores associated metadata, runs, metrics, params, and tags. If the experiment uses FileStore, underlying artifacts associated with the experiment are also restored. Throws RESOURCE_DOES_NOT_EXIST if the experiment was never created or was permanently deleted. | data: { . experiment_id (string) } (object) required |
| post_api_2_0_mlflow_experiments_search | Searches for experiments that satisfy specified search criteria. | data: { . filter (string) . max_results (integer) . order_by (array) . page_token (string) . view_type } (object) required |
| post_api_2_0_mlflow_experiments_set_experiment_tag | Sets a tag on an experiment. Experiment tags are metadata that can be updated. | data: { . experiment_id (string) . key (string) . value (string) } (object) required |
| post_api_2_0_mlflow_experiments_update | Updates experiment metadata. | data: { . experiment_id (string) . new_name (string) } (object) required |
| post_api_2_0_mlflow_logged_models | Create a logged model. | data: { . experiment_id (string) . model_type (string) . name (string) . params (array) . source_run_id (string) . tags (array) } (object) required |
| post_api_2_0_mlflow_logged_models_search | Search for Logged Models that satisfy specified search criteria. | data: { . datasets (array) . experiment_ids (array) . filter (string) . max_results (integer) . order_by (array) . page_token (string) } (object) required |
| get_api_2_0_mlflow_logged_models_by_model_id | Get a logged model. | model_id (string) |
| patch_api_2_0_mlflow_logged_models_by_model_id | Finalize a logged model. | model_id (string) data: { . status } (object) required |
| delete_api_2_0_mlflow_logged_models_by_model_id | Delete a logged model. | model_id (string) |
| post_api_2_0_mlflow_logged_models_by_model_id_params | Logs params for a logged model. A param is a key-value pair (string key, string value). Examples include hyperparameters used for ML model training. A param can be logged only once for a logged model, and attempting to overwrite an existing param with a different value will result in an error. | model_id (string) data: { . params (array) } (object) required |
| patch_api_2_0_mlflow_logged_models_by_model_id_tags | Set tags for a logged model. | model_id (string) data: { . tags (array) } (object) required |
| delete_api_2_0_mlflow_logged_models_by_model_id_tags_by_tag_key | Delete a tag on a logged model. | model_id (string) tag_key (string) |
| get_api_2_0_mlflow_metrics_get_history | Gets a list of all values for the specified metric for a given run. | run_id (string) run_uuid (string) metric_key (string) required page_token (string) max_results (integer) |
| post_api_2_0_mlflow_model_versions_create | Creates a model version. | data: { . description (string) . name (string) . run_id (string) . run_link (string) . source (string) . tags (array) } (object) required |
| delete_api_2_0_mlflow_model_versions_delete | Deletes a model version. | name (string) required version (string) required |
| delete_api_2_0_mlflow_model_versions_delete_tag | Deletes a model version tag. | name (string) required version (string) required key (string) required |
| get_api_2_0_mlflow_model_versions_get | Get a model version. | name (string) required version (string) required |
| get_api_2_0_mlflow_model_versions_get_download_uri | Gets a URI to download the model version. | name (string) required version (string) required |
| get_api_2_0_mlflow_model_versions_search | Searches for specific model versions based on the supplied filter. | filter (string) max_results (integer) order_by (array) page_token (string) |
| post_api_2_0_mlflow_model_versions_set_tag | Sets a model version tag. | data: { . key (string) . name (string) . value (string) . version (string) } (object) required |
| patch_api_2_0_mlflow_model_versions_update | Updates the model version. | data: { . description (string) . name (string) . version (string) } (object) required |
| post_api_2_0_mlflow_registered_models_create | Creates a new registered model with the name specified in the request body. Throws RESOURCE_ALREADY_EXISTS if a registered model with the given name exists. | data: { . description (string) . name (string) . tags (array) } (object) required |
| delete_api_2_0_mlflow_registered_models_delete | Deletes a registered model. | name (string) required |
| delete_api_2_0_mlflow_registered_models_delete_tag | Deletes the tag for a registered model. | name (string) required key (string) required |
| post_api_2_0_mlflow_registered_models_get_latest_versions | Gets the latest version of a registered model. | data: { . name (string) . stages (array) } (object) required |
| get_api_2_0_mlflow_registered_models_list | Lists all available registered models, up to the limit specified in max_results. | max_results (integer) page_token (string) |
| post_api_2_0_mlflow_registered_models_rename | Renames a registered model. | data: { . name (string) . new_name (string) } (object) required |
| get_api_2_0_mlflow_registered_models_search | Search for registered models based on the specified filter. | filter (string) max_results (integer) order_by (array) page_token (string) |
| post_api_2_0_mlflow_registered_models_set_tag | Sets a tag on a registered model. | data: { . key (string) . name (string) . value (string) } (object) required |
| patch_api_2_0_mlflow_registered_models_update | Updates a registered model. | data: { . description (string) . name (string) } (object) required |
| post_api_2_0_mlflow_registry_webhooks_create | NOTE: This endpoint is in Public Preview. Creates a registry webhook. | data: { . description (string) . events (array) . http_url_spec . job_spec . model_name (string) . status } (object) required |
| delete_api_2_0_mlflow_registry_webhooks_delete | NOTE: This endpoint is in Public Preview. Deletes a registry webhook. | id (string) required |
| get_api_2_0_mlflow_registry_webhooks_list | NOTE: This endpoint is in Public Preview. Lists all registry webhooks. | model_name (string) events (array) page_token (string) max_results (integer) |
| post_api_2_0_mlflow_registry_webhooks_test | NOTE: This endpoint is in Public Preview. Tests a registry webhook. | data: { . event . id (string) } (object) required |
| patch_api_2_0_mlflow_registry_webhooks_update | NOTE: This endpoint is in Public Preview. Updates a registry webhook. | data: { . description (string) . events (array) . http_url_spec . id (string) . job_spec . status } (object) required |
| post_api_2_0_mlflow_runs_create | Creates a new run within an experiment. A run is usually a single execution of a machine learning or data ETL pipeline. MLflow uses runs to track the mlflowParam, mlflowMetric, and mlflowRunTag associated with a single execution. | data: { . experiment_id (string) . run_name (string) . start_time (integer) . tags (array) . user_id (string) } (object) required |
| post_api_2_0_mlflow_runs_delete | Marks a run for deletion. | data: { . run_id (string) } (object) required |
| post_api_2_0_mlflow_runs_delete_tag | Deletes a tag on a run. Tags are run metadata that can be updated during a run and after a run completes. | data: { . key (string) . run_id (string) } (object) required |
| get_api_2_0_mlflow_runs_get | Gets the metadata, metrics, params, and tags for a run. In the case where multiple metrics with the same key are logged for a run, return only the value with the latest timestamp. If there are multiple values with the latest timestamp, return the maximum of these values. | run_id (string) required run_uuid (string) |
| post_api_2_0_mlflow_runs_log_batch | Logs a batch of metrics, params, and tags for a run. If any data fails to be persisted, the server responds with an error (non-200 status code). In case of an internal server error or an invalid request, partial data may be written. You can write metrics, params, and tags in interleaving fashion, but entries within a given entity type are guaranteed to follow the order specified in the request body. | data: { . metrics (array) . params (array) . run_id (string) . tags (array) } (object) required |
| post_api_2_0_mlflow_runs_log_inputs | Logs inputs, such as datasets and models, to an MLflow Run. | data: { . datasets (array) . models (array) . run_id (string) } (object) required |
| post_api_2_0_mlflow_runs_log_metric | Log a metric for a run. A metric is a key-value pair string key, float value with an associated timestamp. Examples include the various metrics that represent ML model accuracy. A metric can be logged multiple times. | data: { . dataset_digest (string) . dataset_name (string) . key (string) . model_id (string) . run_id (string) . run_uuid (string) . step (integer) . timestamp (integer) . value (number) } (object) required |
| post_api_2_0_mlflow_runs_log_model | Note: the Create a logged model API (/api/workspace/experiments/createloggedmodel) replaces this endpoint. Logs a model to an MLflow Run. | data: { . model_json (string) . run_id (string) } (object) required |
| post_api_2_0_mlflow_runs_log_parameter | Logs a param used for a run. A param is a key-value pair string key, string value. Examples include hyperparameters used for ML model training and constant dates and values used in an ETL pipeline. A param can be logged only once for a run. | data: { . key (string) . run_id (string) . run_uuid (string) . value (string) } (object) required |
| post_api_2_0_mlflow_runs_outputs | Logs outputs, such as models, from an MLflow Run. | data: { . models (array) . run_id (string) } (object) required |
| post_api_2_0_mlflow_runs_restore | Restores a deleted run. This also restores associated metadata, runs, metrics, params, and tags. Throws RESOURCE_DOES_NOT_EXIST if the run was never created or was permanently deleted. | data: { . run_id (string) } (object) required |
| post_api_2_0_mlflow_runs_search | Searches for runs that satisfy expressions. Search expressions can use mlflowMetric and mlflowParam keys. | data: { . experiment_ids (array) . filter (string) . max_results (integer) . order_by (array) . page_token (string) . run_view_type } (object) required |
| post_api_2_0_mlflow_runs_set_tag | Sets a tag on a run. Tags are run metadata that can be updated during a run and after a run completes. | data: { . key (string) . run_id (string) . run_uuid (string) . value (string) } (object) required |
| post_api_2_0_mlflow_runs_update | Updates run metadata. | data: { . end_time (integer) . run_id (string) . run_name (string) . run_uuid (string) . status } (object) required |
| post_api_2_0_mlflow_transition_requests_approve | Approves a model version stage transition request. | data: { . archive_existing_versions (boolean) . comment (string) . name (string) . stage (string) . version (string) } (object) required |
| post_api_2_0_mlflow_transition_requests_create | Creates a model version stage transition request. | data: { . comment (string) . name (string) . stage (string) . version (string) } (object) required |
| delete_api_2_0_mlflow_transition_requests_delete | Cancels a model version stage transition request. | name (string) required version (string) required stage (string) required creator (string) required comment (string) |
| get_api_2_0_mlflow_transition_requests_list | Gets a list of all open stage transition requests for the model version. | name (string) required version (string) required |
| post_api_2_0_mlflow_transition_requests_reject | Rejects a model version stage transition request. | data: { . comment (string) . name (string) . stage (string) . version (string) } (object) required |
| get_api_2_0_notification_destinations | Lists notification destinations. | page_token (string) page_size (integer) |
| post_api_2_0_notification_destinations | Creates a notification destination. Requires workspace admin permissions. | data: { . config . display_name (string) } (object) required |
| get_api_2_0_notification_destinations_by_id | Gets a notification destination. | id (string) |
| patch_api_2_0_notification_destinations_by_id | Updates a notification destination. Requires workspace admin permissions. At least one field is required in the request body. | id (string) data: { . config . display_name (string) } (object) required |
| delete_api_2_0_notification_destinations_by_id | Deletes a notification destination. Requires workspace admin permissions. | id (string) |
| post_api_2_0_online_tables | Create a new Online Table. | data: { . name (string) . spec . status . table_serving_url (string) . unity_catalog_provisioning_state } (object) required |
| get_api_2_0_online_tables_by_name | Get information about an existing online table and its status. | name (string) |
| delete_api_2_0_online_tables_by_name | Deletes an online table. Warning: This deletes all the data in the online table. If the source Delta table was deleted or modified since this Online Table was created, the data is lost permanently. | name (string) |
| get_api_2_0_permissions_apps_by_app_name | Gets the permissions of an app. Apps can inherit permissions from their root object. | app_name (string) |
| put_api_2_0_permissions_apps_by_app_name | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | app_name (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_apps_by_app_name | Updates the permissions on an app. Apps can inherit permissions from their root object. | app_name (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_apps_by_app_name_permission_levels | Gets the permission levels that a user can have on an object. | app_name (string) |
| get_api_2_0_permissions_authorization_passwords | Gets the permissions of all passwords. Passwords can inherit permissions from their root object. | No parameters |
| put_api_2_0_permissions_authorization_passwords | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_authorization_passwords | Updates the permissions on all passwords. Passwords can inherit permissions from their root object. | data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_authorization_passwords_permission_levels | Gets the permission levels that a user can have on an object. | No parameters |
| get_api_2_0_permissions_authorization_tokens | Gets the permissions of all tokens. Tokens can inherit permissions from their root object. | No parameters |
| put_api_2_0_permissions_authorization_tokens | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_authorization_tokens | Updates the permissions on all tokens. Tokens can inherit permissions from their root object. | data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_authorization_tokens_permission_levels | Gets the permission levels that a user can have on an object. | No parameters |
| get_api_2_0_permissions_cluster_policies_by_cluster_policy_id | Gets the permissions of a cluster policy. Cluster policies can inherit permissions from their root object. | cluster_policy_id (string) |
| put_api_2_0_permissions_cluster_policies_by_cluster_policy_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | cluster_policy_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_cluster_policies_by_cluster_policy_id | Updates the permissions on a cluster policy. Cluster policies can inherit permissions from their root object. | cluster_policy_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_cluster_policies_by_cluster_policy_id_permission_levels | Gets the permission levels that a user can have on an object. | cluster_policy_id (string) |
| get_api_2_0_permissions_clusters_by_cluster_id | Gets the permissions of a cluster. Clusters can inherit permissions from their root object. | cluster_id (string) |
| put_api_2_0_permissions_clusters_by_cluster_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | cluster_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_clusters_by_cluster_id | Updates the permissions on a cluster. Clusters can inherit permissions from their root object. | cluster_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_clusters_by_cluster_id_permission_levels | Gets the permission levels that a user can have on an object. | cluster_id (string) |
| get_api_2_0_permissions_experiments_by_experiment_id | Gets the permissions of an experiment. Experiments can inherit permissions from their root object. | experiment_id (string) |
| put_api_2_0_permissions_experiments_by_experiment_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | experiment_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_experiments_by_experiment_id | Updates the permissions on an experiment. Experiments can inherit permissions from their root object. | experiment_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_experiments_by_experiment_id_permission_levels | Gets the permission levels that a user can have on an object. | experiment_id (string) |
| get_api_2_0_permissions_instance_pools_by_instance_pool_id | Gets the permissions of an instance pool. Instance pools can inherit permissions from their root object. | instance_pool_id (string) |
| put_api_2_0_permissions_instance_pools_by_instance_pool_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | instance_pool_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_instance_pools_by_instance_pool_id | Updates the permissions on an instance pool. Instance pools can inherit permissions from their root object. | instance_pool_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_instance_pools_by_instance_pool_id_permission_levels | Gets the permission levels that a user can have on an object. | instance_pool_id (string) |
| get_api_2_0_permissions_jobs_by_job_id | Gets the permissions of a job. Jobs can inherit permissions from their root object. | job_id (string) |
| put_api_2_0_permissions_jobs_by_job_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | job_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_jobs_by_job_id | Updates the permissions on a job. Jobs can inherit permissions from their root object. | job_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_jobs_by_job_id_permission_levels | Gets the permission levels that a user can have on an object. | job_id (string) |
| get_api_2_0_permissions_pipelines_by_pipeline_id | Gets the permissions of a pipeline. Pipelines can inherit permissions from their root object. | pipeline_id (string) |
| put_api_2_0_permissions_pipelines_by_pipeline_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | pipeline_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_pipelines_by_pipeline_id | Updates the permissions on a pipeline. Pipelines can inherit permissions from their root object. | pipeline_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_pipelines_by_pipeline_id_permission_levels | Gets the permission levels that a user can have on an object. | pipeline_id (string) |
| get_api_2_0_permissions_registered_models_by_registered_model_id | Gets the permissions of a registered model. Registered models can inherit permissions from their root object. | registered_model_id (string) |
| put_api_2_0_permissions_registered_models_by_registered_model_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | registered_model_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_registered_models_by_registered_model_id | Updates the permissions on a registered model. Registered models can inherit permissions from their root object. | registered_model_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_registered_models_by_registered_model_id_permission_levels | Gets the permission levels that a user can have on an object. | registered_model_id (string) |
| get_api_2_0_permissions_repos_by_repo_id | Gets the permissions of a repo. Repos can inherit permissions from their root object. | repo_id (string) |
| put_api_2_0_permissions_repos_by_repo_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | repo_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_repos_by_repo_id | Updates the permissions on a repo. Repos can inherit permissions from their root object. | repo_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_repos_by_repo_id_permission_levels | Gets the permission levels that a user can have on an object. | repo_id (string) |
| get_api_2_0_permissions_serving_endpoints_by_serving_endpoint_id | Gets the permissions of a serving endpoint. Serving endpoints can inherit permissions from their root object. | serving_endpoint_id (string) |
| put_api_2_0_permissions_serving_endpoints_by_serving_endpoint_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | serving_endpoint_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_serving_endpoints_by_serving_endpoint_id | Updates the permissions on a serving endpoint. Serving endpoints can inherit permissions from their root object. | serving_endpoint_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_serving_endpoints_by_serving_endpoint_id_permission_levels | Gets the permission levels that a user can have on an object. | serving_endpoint_id (string) |
| get_api_2_0_permissions_warehouses_by_warehouse_id | Gets the permissions of a SQL warehouse. SQL warehouses can inherit permissions from their root object. | warehouse_id (string) |
| put_api_2_0_permissions_warehouses_by_warehouse_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object. | warehouse_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_warehouses_by_warehouse_id | Updates the permissions on a SQL warehouse. SQL warehouses can inherit permissions from their root object. | warehouse_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_warehouses_by_warehouse_id_permission_levels | Gets the permission levels that a user can have on an object. | warehouse_id (string) |
| get_api_2_0_permissions_by_request_object_type_by_request_object_id | Gets the permissions of an object. Objects can inherit permissions from their parent objects or root object. | request_object_type (string) request_object_id (string) |
| put_api_2_0_permissions_by_request_object_type_by_request_object_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their parent objects or root object. | request_object_type (string) request_object_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_by_request_object_type_by_request_object_id | Updates the permissions on an object. Objects can inherit permissions from their parent objects or root object. | request_object_type (string) request_object_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_by_request_object_type_by_request_object_id_permission_levels | Gets the permission levels that a user can have on an object. | request_object_type (string) request_object_id (string) |
| get_api_2_0_permissions_by_workspace_object_type_by_workspace_object_id | Gets the permissions of a workspace object. Workspace objects can inherit permissions from their parent objects or root object. | workspace_object_type (string) workspace_object_id (string) |
| put_api_2_0_permissions_by_workspace_object_type_by_workspace_object_id | Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their parent objects or root object. | workspace_object_type (string) workspace_object_id (string) data: { . access_control_list (array) } (object) required |
| patch_api_2_0_permissions_by_workspace_object_type_by_workspace_object_id | Updates the permissions on a workspace object. Workspace objects can inherit permissions from their parent objects or root object. | workspace_object_type (string) workspace_object_id (string) data: { . access_control_list (array) } (object) required |
| get_api_2_0_permissions_by_workspace_object_type_by_workspace_object_id_permission_levels | Gets the permission levels that a user can have on an object. | workspace_object_type (string) workspace_object_id (string) |
| get_api_2_0_pipelines | Lists pipelines defined in the Delta Live Tables system. | page_token (string) max_results (integer) order_by (array) filter (string) |
| post_api_2_0_pipelines | Creates a new data processing pipeline based on the requested configuration. If successful, this method returns the ID of the new pipeline. | data: { . allow_duplicate_names (boolean) . catalog (string) . channel (string) . clusters (array) . configuration (object) . continuous (boolean) . deployment . development (boolean) . dry_run (boolean) . edition (string) . environment . event_log . filters . id (string) . ingestion_definition . libraries (array) . name (string) . notifications (array) . photon (boolean) . root_path (string) . schema (string) . serverless (boolean) . storage (string) . tags (object) . target (string) . trigger } (object) required |
| get_api_2_0_pipelines_by_pipeline_id | Get a pipeline. | pipeline_id (string) |
| put_api_2_0_pipelines_by_pipeline_id | Updates a pipeline with the supplied configuration. | pipeline_id (string) data: { . allow_duplicate_names (boolean) . catalog (string) . channel (string) . clusters (array) . configuration (object) . continuous (boolean) . deployment . development (boolean) . edition (string) . environment . event_log . expected_last_modified (integer) . filters . id (string) . ingestion_definition . libraries (array) . name (string) . notifications (array) . photon (boolean) . root_path (string) . schema (string) . serverless (boolean) . storage (string) . tags (object) . target (string) . trigger } (object) required |
| delete_api_2_0_pipelines_by_pipeline_id | Deletes a pipeline. Deleting a pipeline is a permanent action that stops and removes the pipeline and its tables. You cannot undo this action. | pipeline_id (string) |
| get_api_2_0_pipelines_by_pipeline_id_events | Retrieves events for a pipeline. | pipeline_id (string) page_token (string) max_results (integer) order_by (array) filter (string) |
| post_api_2_0_pipelines_by_pipeline_id_stop | Stops the pipeline by canceling the active update. If there is no active update for the pipeline, this request is a no-op. | pipeline_id (string) |
| get_api_2_0_pipelines_by_pipeline_id_updates | List updates for an active pipeline. | pipeline_id (string) page_token (string) max_results (integer) until_update_id (string) |
| post_api_2_0_pipelines_by_pipeline_id_updates | Starts a new update for the pipeline. If there is already an active update for the pipeline, the request will fail and the active update will remain running. | pipeline_id (string) data: { . cause . full_refresh (boolean) . full_refresh_selection (array) . refresh_selection (array) . validate_only (boolean) } (object) required |
| get_api_2_0_pipelines_by_pipeline_id_updates_by_update_id | Gets an update from an active pipeline. | pipeline_id (string) update_id (string) |
| post_api_2_0_policies_clusters_create | Creates a new policy with prescribed settings. | data: { . definition (string) . description (string) . libraries (array) . max_clusters_per_user (integer) . name (string) . policy_family_definition_overrides (string) . policy_family_id (string) } (object) required |
| post_api_2_0_policies_clusters_delete | Delete a policy for a cluster. Clusters governed by this policy can still run, but cannot be edited. | data: { . policy_id (string) } (object) required |
| post_api_2_0_policies_clusters_edit | Updates an existing policy for a cluster. This operation may make some clusters governed by the previous policy invalid. | data: { . definition (string) . description (string) . libraries (array) . max_clusters_per_user (integer) . name (string) . policy_family_definition_overrides (string) . policy_family_id (string) . policy_id (string) } (object) required |
| post_api_2_0_policies_clusters_enforce_compliance | Updates a cluster to be compliant with the current version of its policy. A cluster can be updated if it is in a RUNNING or TERMINATED state. If a cluster is updated while in a RUNNING state, it will be restarted so that the new attributes can take effect. If a cluster is updated while in a TERMINATED state, it will remain TERMINATED. The next time the cluster is started, the new attributes will take effect. Clusters created by the Databricks Jobs, DLT, or Models services cannot be enforced by this API. | data: { . cluster_id (string) . validate_only (boolean) } (object) required |
| get_api_2_0_policies_clusters_get | Get a cluster policy entity. Creation and editing is available to admins only. | policy_id (string) required |
| get_api_2_0_policies_clusters_get_compliance | Returns the policy compliance status of a cluster. Clusters could be out of compliance if their policy was updated after the cluster was last edited. | cluster_id (string) required |
| get_api_2_0_policies_clusters_list | Returns a list of policies accessible by the requesting user. | sort_order (string) sort_column (string) |
| get_api_2_0_policies_clusters_list_compliance | Returns the policy compliance status of all clusters that use a given policy. Clusters could be out of compliance if their policy was updated after the cluster was last edited. | policy_id (string) required page_token (string) page_size (integer) |
| post_api_2_0_policies_jobs_enforce_compliance | Updates a job so the job clusters that are created when running the job specified in new_cluster are compliant with the current versions of their respective cluster policies. All-purpose clusters used in the job will not be updated. | data: { . job_id (integer) . validate_only (boolean) } (object) required |
| get_api_2_0_policies_jobs_get_compliance | Returns the policy compliance status of a job. Jobs could be out of compliance if a cluster policy they use was updated after the job was last edited and some of its job clusters no longer comply with their updated policies. | job_id (integer) required |
| get_api_2_0_policies_jobs_list_compliance | Returns the policy compliance status of all jobs that use a given policy. Jobs could be out of compliance if a cluster policy they use was updated after the job was last edited and its job clusters no longer comply with the updated policy. | policy_id (string) required page_token (string) page_size (integer) |
| get_api_2_0_policy_families | Returns the list of policy definition types available to use at their latest version. This API is paginated. | max_results (integer) page_token (string) |
| get_api_2_0_policy_families_by_policy_family_id | Retrieves the information for a policy family based on its identifier and version. | policy_family_id (string) version (integer) |
| get_api_2_0_preview_accounts_access_control_assignable_roles | Gets all the roles that can be granted on an account level resource. A role is grantable if the rule set on the resource can contain an access rule of the role. | resource (string) required |
| get_api_2_0_preview_accounts_access_control_rule_sets | Gets a rule set by its name. A rule set is always attached to a resource and contains a list of access rules on that resource. Currently only a default rule set for each resource is supported. | name (string) required etag (string) required |
| put_api_2_0_preview_accounts_access_control_rule_sets | Replace the rules of a rule set. First, use get to read the current version of the rule set before modifying it. This pattern helps prevent conflicts between concurrent updates. | data: { . name (string) . rule_set } (object) required |
| get_api_2_0_preview_scim_v2_groups | Gets all details of the groups associated with the Databricks workspace. | filter (string) attributes (string) excludedAttributes (string) startIndex (integer) count (integer) sortBy (string) sortOrder (string) |
| post_api_2_0_preview_scim_v2_groups | Creates a group in the Databricks workspace with a unique name, using the supplied group details. | data: { . displayName (string) . entitlements (array) . externalId (string) . groups (array) . id (string) . members (array) . roles (array) . schemas (array) } (object) required |
| get_api_2_0_preview_scim_v2_groups_by_id | Gets the information for a specific group in the Databricks workspace. | id (string) |
| put_api_2_0_preview_scim_v2_groups_by_id | Updates the details of a group by replacing the entire group entity. | id (string) data: { . displayName (string) . entitlements (array) . externalId (string) . groups (array) . id (string) . members (array) . roles (array) . schemas (array) } (object) required |
| patch_api_2_0_preview_scim_v2_groups_by_id | Partially updates the details of a group. | id (string) data: { . Operations (array) . schemas (array) } (object) required |
| delete_api_2_0_preview_scim_v2_groups_by_id | Deletes a group from the Databricks workspace. | id (string) |
| get_api_2_0_preview_scim_v2_me | Get details about the current method caller's identity. | No parameters |
| get_api_2_0_preview_scim_v2_service_principals | Gets the set of service principals associated with a Databricks workspace. | attributes (string) count (integer) excludedAttributes (string) filter (string) sortBy (string) sortOrder (string) startIndex (integer) |
| post_api_2_0_preview_scim_v2_service_principals | Creates a new service principal in the Databricks workspace. | data: { . active (boolean) . applicationId (string) . displayName (string) . entitlements (array) . externalId (string) . groups (array) . id (string) . roles (array) . schemas (array) } (object) required |
| get_api_2_0_preview_scim_v2_service_principals_by_id | Gets the details for a single service principal defined in the Databricks workspace. | id (string) |
| put_api_2_0_preview_scim_v2_service_principals_by_id | Updates the details of a single service principal. This action replaces the existing service principal with the same name. | id (string) data: { . active (boolean) . applicationId (string) . displayName (string) . entitlements (array) . externalId (string) . groups (array) . id (string) . roles (array) . schemas (array) } (object) required |
| patch_api_2_0_preview_scim_v2_service_principals_by_id | Partially updates the details of a single service principal in the Databricks workspace. | id (string) data: { . Operations (array) . schemas (array) } (object) required |
| delete_api_2_0_preview_scim_v2_service_principals_by_id | Delete a single service principal in the Databricks workspace. | id (string) |
| get_api_2_0_preview_scim_v2_users | Gets details for all the users associated with a Databricks workspace. | attributes (string) count (integer) excludedAttributes (string) filter (string) sortBy (string) sortOrder (string) startIndex (integer) |
| post_api_2_0_preview_scim_v2_users | Creates a new user in the Databricks workspace. This new user will also be added to the Databricks account. | data: { . active (boolean) . displayName (string) . emails (array) . entitlements (array) . externalId (string) . groups (array) . id (string) . name . roles (array) . schemas (array) . userName (string) } (object) required |
| get_api_2_0_preview_scim_v2_users_by_id | Gets information for a specific user in the Databricks workspace. | id (string) attributes (string) count (integer) excludedAttributes (string) filter (string) sortBy (string) sortOrder (string) startIndex (integer) |
| put_api_2_0_preview_scim_v2_users_by_id | Replaces a user's information with the data supplied in request. | id (string) data: { . active (boolean) . displayName (string) . emails (array) . entitlements (array) . externalId (string) . groups (array) . id (string) . name . roles (array) . schemas (array) . userName (string) } (object) required |
| patch_api_2_0_preview_scim_v2_users_by_id | Partially updates a user resource by applying the supplied operations on specific user attributes. | id (string) data: { . Operations (array) . schemas (array) } (object) required |
| delete_api_2_0_preview_scim_v2_users_by_id | Deletes a user. Deleting a user from a Databricks workspace also removes objects associated with the user. | id (string) |
| get_api_2_0_preview_sql_alerts | Gets a list of alerts. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/list instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | No parameters |
| post_api_2_0_preview_sql_alerts | Creates an alert. An alert is a Databricks SQL object that periodically runs a query, evaluates a condition of its result, and notifies users or notification destinations if the condition was met. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/create instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | data: { . name (string) . options . parent (string) . query_id (string) . rearm (integer) } (object) required |
| get_api_2_0_preview_sql_alerts_by_alert_id | Gets an alert. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/get instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | alert_id (string) |
| put_api_2_0_preview_sql_alerts_by_alert_id | Updates an alert. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/update instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | alert_id (string) data: { . name (string) . options . query_id (string) . rearm (integer) } (object) required |
| delete_api_2_0_preview_sql_alerts_by_alert_id | Deletes an alert. Deleted alerts are no longer accessible and cannot be restored. Note: Unlike queries and dashboards, alerts cannot be moved to the trash. Note: A new version of the Databricks SQL API is now available. Please use :method:alerts/delete instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | alert_id (string) |
| get_api_2_0_preview_sql_dashboards | Fetch a paginated list of dashboard objects. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban. | order (string) page (integer) page_size (integer) q (string) |
| post_api_2_0_preview_sql_dashboards_trash_by_dashboard_id | Restores a dashboard from the trash. A restored dashboard appears in list views and searches and can be shared. | dashboard_id (string) |
| get_api_2_0_preview_sql_dashboards_by_dashboard_id | Returns a JSON representation of a dashboard object, including its visualization and query objects. | dashboard_id (string) |
| post_api_2_0_preview_sql_dashboards_by_dashboard_id | Modify this dashboard definition. This operation only affects attributes of the dashboard object. It does not add, modify, or remove widgets. Note: You cannot undo this operation. | dashboard_id (string) data: { . name (string) . run_as_role . tags (array) } (object) required |
| delete_api_2_0_preview_sql_dashboards_by_dashboard_id | Moves a dashboard to the trash. Trashed dashboards do not appear in list views or searches, and cannot be shared. | dashboard_id (string) |
| get_api_2_0_preview_sql_data_sources | Retrieves a full list of SQL warehouses available in this workspace. All fields that appear in this API response are enumerated for clarity. However, you need only a SQL warehouse's id to create new queries against it. Note: A new version of the Databricks SQL API is now available. Please use :method:warehouses/list instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | No parameters |
| get_api_2_0_preview_sql_permissions_by_object_type_by_object_id | Gets a JSON representation of the access control list (ACL) for a specified object. Note: A new version of the Databricks SQL API is now available. Please use :method:workspace/getpermissions instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | objectType (string) objectId (string) |
| post_api_2_0_preview_sql_permissions_by_object_type_by_object_id | Sets the access control list (ACL) for a specified object. This operation completely rewrites the ACL. Note: A new version of the Databricks SQL API is now available. Please use :method:workspace/setpermissions instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | objectType (string) objectId (string) data: { . access_control_list (array) } (object) required |
| post_api_2_0_preview_sql_permissions_by_object_type_by_object_id_transfer | Transfers ownership of a dashboard, query, or alert to an active user. Requires an admin API key. Note: A new version of the Databricks SQL API is now available. For queries and alerts, please use :method:queries/update and :method:alerts/update respectively instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | objectType (string) objectId: { . new_owner (string) } (object) data: { . new_owner (string) } (object) required |
| get_api_2_0_preview_sql_queries | Gets a list of queries. Optionally, this list can be filtered by a search term. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/list instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | order (string) page (integer) page_size (integer) q (string) |
| post_api_2_0_preview_sql_queries | Creates a new query definition. Queries created with this endpoint belong to the authenticated user making the request. The data_source_id field specifies the ID of the SQL warehouse to run this query against. You can use the Data Sources API to see a complete list of available SQL warehouses. Or you can copy the data_source_id from an existing query. Note: You cannot add a visualization until you create the query. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/create instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | data: { . data_source_id (string) . description (string) . name (string) . options . parent (string) . query (string) . run_as_role . tags (array) } (object) required |
| post_api_2_0_preview_sql_queries_trash_by_query_id | Restores a query that has been moved to the trash. A restored query appears in list views and searches. You can use restored queries for alerts. Note: A new version of the Databricks SQL API is now available. Please see the latest version. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | query_id (string) |
| get_api_2_0_preview_sql_queries_by_query_id | Retrieves a query object definition along with contextual permissions information about the currently authenticated user. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/get instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | query_id (string) |
| post_api_2_0_preview_sql_queries_by_query_id | Modifies this query definition. Note: You cannot undo this operation. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/update instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | query_id (string) data: { . data_source_id (string) . description (string) . name (string) . options . query (string) . run_as_role . tags (array) } (object) required |
| delete_api_2_0_preview_sql_queries_by_query_id | Moves a query to the trash. Trashed queries immediately disappear from searches and list views, and they cannot be used for alerts. The trash is deleted after 30 days. Note: A new version of the Databricks SQL API is now available. Please use :method:queries/delete instead. Learn more: https://docs.databricks.com/en/sql/dbsql-api-latest.html | query_id (string) |
| get_api_2_0_quality_monitors | (Unimplemented) Lists quality monitors. | page_token (string) page_size (integer) |
| post_api_2_0_quality_monitors | Creates a quality monitor on a UC object. | data: { . anomaly_detection_config . object_id (string) . object_type (string) } (object) required |
| get_api_2_0_quality_monitors_by_object_type_by_object_id | Reads a quality monitor on a UC object. | object_type (string) object_id (string) |
| put_api_2_0_quality_monitors_by_object_type_by_object_id | (Unimplemented) Updates a quality monitor on a UC object. | object_type (string) object_id (string) data: { . anomaly_detection_config . object_id (string) . object_type (string) } (object) required |
| delete_api_2_0_quality_monitors_by_object_type_by_object_id | Deletes a quality monitor on a UC object. | object_type (string) object_id (string) |
| get_api_2_0_repos | Returns repos that the calling user has Manage permissions on. Use next_page_token to iterate through additional pages. | path_prefix (string) next_page_token (string) |
| post_api_2_0_repos | Creates a repo in the workspace and links it to the remote Git repo specified. Note that repos created programmatically must be linked to a remote Git repo, unlike repos created in the browser. | data: { . path (string) . provider (string) . sparse_checkout . url (string) } (object) required |
| get_api_2_0_repos_by_repo_id | Returns the repo with the given repo ID. | repo_id (integer) |
| patch_api_2_0_repos_by_repo_id | Updates the repo to a different branch or tag, or updates the repo to the latest commit on the same branch. | repo_id (integer) data: { . branch (string) . sparse_checkout . tag (string) } (object) required |
| delete_api_2_0_repos_by_repo_id | Deletes the specified repo. | repo_id (integer) |
| post_api_2_0_secrets_acls_delete | Deletes the given ACL on the given scope. Users must have the MANAGE permission to invoke this API. Example request: {'scope': 'my-secret-scope', 'principal': 'data-scientists'}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope, principal, or ACL exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws INVALID_PARAMETER_VALUE if the permission or principal is invalid. | data: { . principal (string) . scope (string) } (object) required |
| get_api_2_0_secrets_acls_get | Describes the details about the given ACL, such as the group and permission. Users must have the MANAGE permission to invoke this API. Example response: {'principal': 'data-scientists', 'permission': 'READ'}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws INVALID_PARAMETER_VALUE if the permission or principal is invalid. | scope (string) required principal (string) required |
| get_api_2_0_secrets_acls_list | Lists the ACLs set on the given scope. Users must have the MANAGE permission to invoke this API. Example response: {'acls': [{'principal': 'admins', 'permission': 'MANAGE'}, {'principal': 'data-scientists', 'permission': 'READ'}]}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. | scope (string) required |
| post_api_2_0_secrets_acls_put | Creates or overwrites the ACL associated with the given principal (user or group) on the specified scope point. In general, a user or group will use the most powerful permission available to them, and permissions are ordered as follows: MANAGE - Allowed to change ACLs, and read and write to this secret scope. WRITE - Allowed to read and write to this secret scope. READ - Allowed to read this secret scope and list what secrets are available. Note that, in general, secret values can only be read within a command on a cluster (for example, through a notebook). | data: { . permission . principal (string) . scope (string) } (object) required |
| post_api_2_0_secrets_delete | Deletes the secret stored in this secret scope. You must have WRITE or MANAGE permission on the Secret Scope. Example request: {'scope': 'my-secret-scope', 'key': 'my-secret-key'}. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope or secret exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws BAD_REQUEST if a system user attempts to delete an internal secret, or the request is made against an Azure KeyVault-backed scope. | data: { . key (string) . scope (string) } (object) required |
| get_api_2_0_secrets_get | Gets a secret for a given key and scope. This API can only be called from the DBUtils interface. Users need the READ permission to make this call. Example response: {'key': 'my-string-key', 'value': <bytes of the secret value>}. Note that the secret value returned is in bytes. The interpretation of the bytes is determined by the caller in DBUtils and the type the data is decoded into. Throws RESOURCE_DOES_NOT_EXIST if no such secret or secret scope exists. Throws PERMISSION_DENIED if the user does not have permission to make this API call. | scope (string) required key (string) required |
| get_api_2_0_secrets_list | Lists the secret keys that are stored at this scope. This is a metadata-only operation; secret data cannot be retrieved using this API. Users need the READ permission to make this call. Example response: {'secrets': [{'key': 'my-string-key', 'last_updated_timestamp': '1520467595000'}, {'key': 'my-byte-key', 'last_updated_timestamp': '1520467595000'}]}. The lastUpdatedTimestamp returned is in milliseconds since epoch. Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. | scope (string) required |
| post_api_2_0_secrets_put | Inserts a secret under the provided scope with the given name. If a secret already exists with the same name, this command overwrites the existing secret's value. The server encrypts the secret using the secret scope's encryption settings before storing it. You must have WRITE or MANAGE permission on the secret scope. The secret key must consist of alphanumeric characters, dashes, underscores, and periods, and cannot exceed 128 characters. The maximum allowed secret value size is 128 KB. | data: { . bytes_value (string) . key (string) . scope (string) . string_value (string) } (object) required |
| post_api_2_0_secrets_scopes_create | Creates a new secret scope. The scope name must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters. The scope_backend_type can be databricks or azure_keyvault; a backend_azure_keyvault configuration is required only if the scope type is azure_keyvault. | data: { . initial_manage_principal (string) . scope (string) . scope_backend_type } (object) required |
| post_api_2_0_secrets_scopes_delete | Deletes a secret scope. Example request: {'scope': 'my-secret-scope'}. Throws RESOURCE_DOES_NOT_EXIST if the scope does not exist. Throws PERMISSION_DENIED if the user does not have permission to make this API call. Throws BAD_REQUEST if a system user attempts to delete an internal secret scope. | data: { . scope (string) } (object) required |
| get_api_2_0_secrets_scopes_list | Lists all secret scopes available in the workspace. Example response: {'scopes': [{'name': 'my-databricks-scope', 'backend_type': 'DATABRICKS'}, {'name': 'mount-points', 'backend_type': 'DATABRICKS'}]}. Throws PERMISSION_DENIED if the user does not have permission to make this API call. | No parameters |
| get_api_2_0_serving_endpoints | Get all serving endpoints. | No parameters |
| post_api_2_0_serving_endpoints | Create a new serving endpoint. | data: { . ai_gateway . budget_policy_id (string) . config . description (string) . email_notifications . name (string) . rate_limits (array) . route_optimized (boolean) . tags (array) } (object) required |
| post_api_2_0_serving_endpoints_pt | Create a new PT serving endpoint. | data: { . ai_gateway . budget_policy_id (string) . config . email_notifications . name (string) . tags (array) } (object) required |
| put_api_2_0_serving_endpoints_pt_by_name_config | Updates any combination of the PT endpoint's served entities, the compute configuration of those served entities, and the endpoint's traffic config. Updates are applied immediately. | name (string) data: { . config } (object) required |
| get_api_2_0_serving_endpoints_by_name | Retrieves the details for a single serving endpoint. | name (string) |
| delete_api_2_0_serving_endpoints_by_name | Delete a serving endpoint. | name (string) |
| put_api_2_0_serving_endpoints_by_name_ai_gateway | Used to update the AI Gateway of a serving endpoint. NOTE: External model, provisioned throughput, and pay-per-token endpoints are fully supported; agent endpoints currently only support inference tables. | name (string) data: { . fallback_config . guardrails . inference_table_config . rate_limits (array) . usage_tracking_config } (object) required |
| put_api_2_0_serving_endpoints_by_name_config | Updates any combination of the serving endpoint's served entities, the compute configuration of those served entities, and the endpoint's traffic config. An endpoint that already has an update in progress cannot be updated until the current update completes or fails. | name (string) data: { . auto_capture_config . served_entities (array) . served_models (array) . traffic_config } (object) required |
| get_api_2_0_serving_endpoints_by_name_metrics | Retrieves the metrics associated with the provided serving endpoint in either Prometheus or OpenMetrics exposition format. | name (string) |
| get_api_2_0_serving_endpoints_by_name_openapi | Get the query schema of the serving endpoint in OpenAPI format. The schema contains information for the supported paths, input and output format and datatypes. | name (string) |
| put_api_2_0_serving_endpoints_by_name_rate_limits | Deprecated: Please use AI Gateway to manage rate limits instead. | name (string) data: { . rate_limits (array) } (object) required |
| get_api_2_0_serving_endpoints_by_name_served_models_by_served_model_name_build_logs | Retrieves the build logs associated with the provided served model. | name (string) served_model_name (string) |
| get_api_2_0_serving_endpoints_by_name_served_models_by_served_model_name_logs | Retrieves the service logs associated with the provided served model. | name (string) served_model_name (string) |
| patch_api_2_0_serving_endpoints_by_name_tags | Used to batch add and delete tags from a serving endpoint with a single API call. | name (string) data: { . add_tags (array) . delete_tags (array) } (object) required |
| get_api_2_0_settings_types_aibi_dash_embed_ws_acc_policy_names_default | Retrieves the AI/BI dashboard embedding access policy. The default setting is ALLOW_APPROVED_DOMAINS, permitting AI/BI dashboards to be embedded on approved domains. | etag (string) |
| patch_api_2_0_settings_types_aibi_dash_embed_ws_acc_policy_names_default | Updates the AI/BI dashboard embedding access policy at the workspace level. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_aibi_dash_embed_ws_acc_policy_names_default | Delete the AI/BI dashboard embedding access policy, reverting back to the default. | etag (string) |
| get_api_2_0_settings_types_aibi_dash_embed_ws_apprvd_domains_names_default | Retrieves the list of domains approved to host embedded AI/BI dashboards. | etag (string) |
| patch_api_2_0_settings_types_aibi_dash_embed_ws_apprvd_domains_names_default | Updates the list of domains approved to host embedded AI/BI dashboards. This update will fail if the current workspace access policy is not ALLOW_APPROVED_DOMAINS. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_aibi_dash_embed_ws_apprvd_domains_names_default | Delete the list of domains approved to host embedded AI/BI dashboards, reverting back to the default empty list. | etag (string) |
| get_api_2_0_settings_types_automatic_cluster_update_names_default | Gets the automatic cluster update setting. | etag (string) |
| patch_api_2_0_settings_types_automatic_cluster_update_names_default | Updates the automatic cluster update setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| get_api_2_0_settings_types_dashboard_email_subscriptions_names_default | Gets the Dashboard Email Subscriptions setting. | etag (string) |
| patch_api_2_0_settings_types_dashboard_email_subscriptions_names_default | Updates the Dashboard Email Subscriptions setting. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_dashboard_email_subscriptions_names_default | Reverts the Dashboard Email Subscriptions setting to its default value. | etag (string) |
| get_api_2_0_settings_types_default_namespace_ws_names_default | Gets the default namespace setting. | etag (string) |
| patch_api_2_0_settings_types_default_namespace_ws_names_default | Updates the default namespace setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. Note that if the setting does not exist, GET returns a NOT_FOUND error and the etag is present in the error response, which should be set in the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_default_namespace_ws_names_default | Deletes the default namespace setting for the workspace. A fresh etag needs to be provided in DELETE requests as a query parameter. The etag can be retrieved by making a GET request before the DELETE request. If the setting is updated/deleted concurrently, DELETE fails with 409 and the request must be retried by using the fresh etag in the 409 response. | etag (string) |
| get_api_2_0_settings_types_disable_legacy_access_names_default | Retrieves the legacy access disablement status. | etag (string) |
| patch_api_2_0_settings_types_disable_legacy_access_names_default | Updates legacy access disablement status. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_disable_legacy_access_names_default | Deletes legacy access disablement status. | etag (string) |
| get_api_2_0_settings_types_disable_legacy_dbfs_names_default | Gets the disable legacy DBFS setting. | etag (string) |
| patch_api_2_0_settings_types_disable_legacy_dbfs_names_default | Updates the disable legacy DBFS setting for the workspace. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_disable_legacy_dbfs_names_default | Deletes the disable legacy DBFS setting for a workspace, reverting back to the default. | etag (string) |
| get_api_2_0_settings_types_enable_export_notebook_names_default | Gets the Notebook and File exporting setting. | No parameters |
| patch_api_2_0_settings_types_enable_export_notebook_names_default | Updates the Notebook and File exporting setting. The model follows eventual consistency, which means the get after the update operation might receive stale values for some time. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| get_api_2_0_settings_types_enable_notebook_table_clipboard_names_default | Gets the Results Table Clipboard features setting. | No parameters |
| patch_api_2_0_settings_types_enable_notebook_table_clipboard_names_default | Updates the Results Table Clipboard features setting. The model follows eventual consistency, which means the get after the update operation might receive stale values for some time. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| get_api_2_0_settings_types_enable_results_downloading_names_default | Gets the Notebook results download setting. | No parameters |
| patch_api_2_0_settings_types_enable_results_downloading_names_default | Updates the Notebook results download setting. The model follows eventual consistency, which means the get after the update operation might receive stale values for some time. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| get_api_2_0_settings_types_restrict_workspace_admins_names_default | Gets the restrict workspace admins setting. | etag (string) |
| patch_api_2_0_settings_types_restrict_workspace_admins_names_default | Updates the restrict workspace admins setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_restrict_workspace_admins_names_default | Reverts the restrict workspace admins setting status for the workspace. A fresh etag needs to be provided in DELETE requests as a query parameter. The etag can be retrieved by making a GET request before the DELETE request. If the setting is updated/deleted concurrently, DELETE fails with 409 and the request must be retried by using the fresh etag in the 409 response. | etag (string) |
| get_api_2_0_settings_types_shield_csp_enablement_ws_db_names_default | Gets the compliance security profile setting. | etag (string) |
| patch_api_2_0_settings_types_shield_csp_enablement_ws_db_names_default | Updates the compliance security profile setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| get_api_2_0_settings_types_shield_esm_enablement_ws_db_names_default | Gets the enhanced security monitoring setting. | etag (string) |
| patch_api_2_0_settings_types_shield_esm_enablement_ws_db_names_default | Updates the enhanced security monitoring setting for the workspace. A fresh etag needs to be provided in PATCH requests as part of the setting field. The etag can be retrieved by making a GET request before the PATCH request. If the setting is updated concurrently, PATCH fails with 409 and the request must be retried by using the fresh etag in the 409 response. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| get_api_2_0_settings_types_sql_results_download_names_default | Gets the SQL Results Download setting. | etag (string) |
| patch_api_2_0_settings_types_sql_results_download_names_default | Updates the SQL Results Download setting. | data: { . allow_missing (boolean) . field_mask (string) . setting } (object) required |
| delete_api_2_0_settings_types_sql_results_download_names_default | Reverts the SQL Results Download setting to its default value. | etag (string) |
| get_api_2_0_sql_alerts | Gets a list of alerts accessible to the user, ordered by creation time. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban. | page_token (string) page_size (integer) |
| post_api_2_0_sql_alerts | Creates an alert. | data: { . alert . auto_resolve_display_name (boolean) } (object) required |
| get_api_2_0_sql_alerts_by_id | Gets an alert. | id (string) |
| patch_api_2_0_sql_alerts_by_id | Updates an alert. | id (string) data: { . alert . auto_resolve_display_name (boolean) . update_mask (string) } (object) required |
| delete_api_2_0_sql_alerts_by_id | Moves an alert to the trash. Trashed alerts immediately disappear from searches and list views, and can no longer trigger. You can restore a trashed alert through the UI. A trashed alert is permanently deleted after 30 days. | id (string) |
| get_api_2_0_sql_config_warehouses | Gets the workspace level configuration that is shared by all SQL warehouses in a workspace. | No parameters |
| put_api_2_0_sql_config_warehouses | Sets the workspace level configuration that is shared by all SQL warehouses in a workspace. | data: { . channel . config_param . data_access_config (array) . enabled_warehouse_types (array) . global_param . google_service_account (string) . instance_profile_arn (string) . security_policy . sql_configuration_parameters } (object) required |
| get_api_2_0_sql_history_queries | List the history of queries through SQL warehouses, and serverless compute. You can filter by user ID, warehouse ID, status, and time range. Most recently started queries are returned first up to max_results in request. The pagination token returned in response can be used to list subsequent query statuses. | filter_by: { . query_start_time_range . statement_ids (array) . statuses (array) . user_ids (array) . warehouse_ids (array) } (object) max_results (integer) page_token (string) include_metrics (boolean) |
| get_api_2_0_sql_queries | Gets a list of queries accessible to the user, ordered by creation time. Warning: Calling this API concurrently 10 or more times could result in throttling, service degradation, or a temporary ban. | page_token (string) page_size (integer) |
| post_api_2_0_sql_queries | Creates a query. | data: { . auto_resolve_display_name (boolean) . query } (object) required |
| get_api_2_0_sql_queries_by_id | Gets a query. | id (string) |
| patch_api_2_0_sql_queries_by_id | Updates a query. | id (string) data: { . auto_resolve_display_name (boolean) . query . update_mask (string) } (object) required |
| delete_api_2_0_sql_queries_by_id | Moves a query to the trash. Trashed queries immediately disappear from searches and list views, and cannot be used for alerts. You can restore a trashed query through the UI. A trashed query is permanently deleted after 30 days. | id (string) |
| post_api_2_0_sql_statements | Execute a SQL statement and optionally await its results for a specified time. Use case: small result sets with INLINE + JSON_ARRAY. For flows that generate small and predictable result sets (<= 25 MiB), INLINE responses of JSON_ARRAY result data are typically the simplest way to execute and fetch result data. Use case: large result sets with EXTERNAL_LINKS. Using EXTERNAL_LINKS to fetch result data allows you to fetch large result sets efficiently. | data: { . byte_limit (integer) . catalog (string) . disposition . format . on_wait_timeout . parameters (array) . row_limit (integer) . schema (string) . statement (string) . wait_timeout (string) . warehouse_id (string) } (object) required |
| get_api_2_0_sql_statements_by_statement_id | This request can be used to poll for the statement's status. When the status.state field is SUCCEEDED it will also return the result manifest and the first chunk of the result data. When the statement is in the terminal states CANCELED, CLOSED or FAILED, it returns HTTP 200 with the state set. After at least 12 hours in a terminal state, the statement is removed from the warehouse and further calls will receive an HTTP 404 response. NOTE: This call currently might take up to 5 seconds to get the latest status and result. | statement_id (string) |
| post_api_2_0_sql_statements_by_statement_id_cancel | Requests that an executing statement be canceled. Callers must poll for status to see the terminal state. | statement_id (string) |
| get_api_2_0_sql_statements_by_statement_id_result_chunks_by_chunk_index | After the statement execution has SUCCEEDED, this request can be used to fetch any chunk by index. Whereas the first chunk with chunk_index=0 is typically fetched with :method:statementexecution/executeStatement or :method:statementexecution/getStatement, this request can be used to fetch subsequent chunks. The response structure is identical to the nested result element described in the :method:statementexecution/getStatement request, and similarly includes the next_chunk_index and next_chunk_internal_link fields. | statement_id (string) chunk_index (string) |
| get_api_2_0_sql_warehouses | Lists all SQL warehouses that a user has manager permissions on. | run_as_user_id (integer) |
| post_api_2_0_sql_warehouses | Creates a new SQL warehouse. | data: { . auto_stop_mins (integer) . channel . cluster_size (string) . creator_name (string) . enable_photon (boolean) . enable_serverless_compute (boolean) . instance_profile_arn (string) . max_num_clusters (integer) . min_num_clusters (integer) . name (string) . spot_instance_policy . tags . warehouse_type } (object) required |
| get_api_2_0_sql_warehouses_by_id | Gets the information for a single SQL warehouse. | id (string) |
| delete_api_2_0_sql_warehouses_by_id | Deletes a SQL warehouse. | id (string) |
| post_api_2_0_sql_warehouses_by_id_edit | Updates the configuration for a SQL warehouse. | id (string) data: { . auto_stop_mins (integer) . channel . cluster_size (string) . creator_name (string) . enable_photon (boolean) . enable_serverless_compute (boolean) . instance_profile_arn (string) . max_num_clusters (integer) . min_num_clusters (integer) . name (string) . spot_instance_policy . tags . warehouse_type } (object) required |
| post_api_2_0_sql_warehouses_by_id_start | Starts a SQL warehouse. | id (string) |
| post_api_2_0_sql_warehouses_by_id_stop | Stops a SQL warehouse. | id (string) |
| post_api_2_0_token_management_on_behalf_of_tokens | Creates a token on behalf of a service principal. | data: { . application_id (string) . comment (string) . lifetime_seconds (integer) } (object) required |
| get_api_2_0_token_management_tokens | Lists all tokens associated with the specified workspace or user. | created_by_id (integer) created_by_username (string) |
| get_api_2_0_token_management_tokens_by_token_id | Gets information about a token, specified by its ID. | token_id (string) |
| delete_api_2_0_token_management_tokens_by_token_id | Deletes a token, specified by its ID. | token_id (string) |
| post_api_2_0_token_create | Creates and returns a token for a user. If this call is made through token authentication, it creates a token with the same client ID as the authenticated token. If the user's token quota is exceeded, this call returns an error QUOTA_EXCEEDED. | data: { . comment (string) . lifetime_seconds (integer) } (object) required |
| post_api_2_0_token_delete | Revokes an access token. If a token with the specified ID is not valid, this call returns an error RESOURCE_DOES_NOT_EXIST. | data: { . token_id (string) } (object) required |
| get_api_2_0_token_list | Lists all the valid tokens for a user-workspace pair. | No parameters |
| post_api_2_0_unity_catalog_temporary_path_credentials | Get a short-lived credential for directly accessing cloud storage locations registered in Databricks. The Generate Temporary Path Credentials API is only supported for external storage paths, specifically external locations and external tables. Managed tables are not supported by this API. The metastore must have the external_access_enabled flag set to true (default is false). The caller must have the EXTERNAL_USE_LOCATION privilege on the external location; this privilege can only be granted by external location owners. | data: { . dry_run (boolean) . operation . url (string) } (object) required |
| post_api_2_0_unity_catalog_temporary_table_credentials | Get a short-lived credential for directly accessing the table data on cloud storage. The metastore must have the external_access_enabled flag set to true (default is false). The caller must have the EXTERNAL_USE_SCHEMA privilege on the parent schema and this privilege can only be granted by catalog owners. | data: { . operation . table_id (string) } (object) required |
| get_api_2_0_vector_search_endpoints | List all vector search endpoints in the workspace. | page_token (string) |
| post_api_2_0_vector_search_endpoints | Create a new endpoint. | data: { . endpoint_type . name (string) } (object) required |
| get_api_2_0_vector_search_endpoints_by_endpoint_name | Get details for a single vector search endpoint. | endpoint_name (string) |
| delete_api_2_0_vector_search_endpoints_by_endpoint_name | Delete a vector search endpoint. | endpoint_name (string) |
| patch_api_2_0_vector_search_endpoints_by_endpoint_name_budget_policy | Update the budget policy of an endpoint | endpoint_name (string) data: { . budget_policy_id (string) } (object) required |
| get_api_2_0_vector_search_indexes | List all indexes in the given endpoint. | endpoint_name (string) required page_token (string) |
| post_api_2_0_vector_search_indexes | Create a new index. | data: { . delta_sync_index_spec . direct_access_index_spec . endpoint_name (string) . index_type . name (string) . primary_key (string) } (object) required |
| get_api_2_0_vector_search_indexes_by_index_name | Get an index. | index_name (string) |
| delete_api_2_0_vector_search_indexes_by_index_name | Delete an index. | index_name (string) |
| delete_api_2_0_vector_search_indexes_by_index_name_delete_data | Handles the deletion of data from a specified vector index. | index_name (string) primary_keys (array) required |
| post_api_2_0_vector_search_indexes_by_index_name_query | Query the specified vector index. | index_name (string) data: { . columns (array) . filters_json (string) . num_results (integer) . query_text (string) . query_type (string) . query_vector (array) . score_threshold (number) } (object) required |
| post_api_2_0_vector_search_indexes_by_index_name_query_next_page | Use the next_page_token returned from a previous QueryVectorIndex or QueryVectorIndexNextPage request to fetch the next page of results. | index_name (string) data: { . endpoint_name (string) . page_token (string) } (object) required |
| post_api_2_0_vector_search_indexes_by_index_name_scan | Scan the specified vector index and return the first num_results entries after the exclusive primary_key. | index_name (string) data: { . last_primary_key (string) . num_results (integer) } (object) required |
| post_api_2_0_vector_search_indexes_by_index_name_sync | Triggers a synchronization process for a specified vector index. | index_name (string) |
| post_api_2_0_vector_search_indexes_by_index_name_upsert_data | Handles the upserting of data into a specified vector index. | index_name (string) data: { . inputs_json (string) } (object) required |
| get_api_2_0_workspace_conf | Gets the configuration status for a workspace. | keys (string) required |
| patch_api_2_0_workspace_conf | Sets the configuration status for a workspace, including enabling or disabling it. | data (object) required |
| post_api_2_0_workspace_delete | Deletes an object or a directory and optionally recursively deletes all objects in the directory. If path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST. If path is a non-empty directory and recursive is set to false, this call returns an error DIRECTORY_NOT_EMPTY. Object deletion cannot be undone and deleting a directory recursively is not atomic. | data: { . path (string) . recursive (boolean) } (object) required |
| get_api_2_0_workspace_export | Exports an object or the contents of an entire directory. If path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST. If the exported data would exceed size limit, this call returns MAX_NOTEBOOK_SIZE_EXCEEDED. Currently, this API does not support exporting a library. | path (string) required format (string) direct_download (boolean) |
| get_api_2_0_workspace_get_status | Gets the status of an object or a directory. If path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST. | path (string) required |
| post_api_2_0_workspace_import | Imports a workspace object (for example, a notebook or file) or the contents of an entire directory. If path already exists and overwrite is set to false, this call returns an error RESOURCE_ALREADY_EXISTS. To import a directory, you can use either the DBC format or the SOURCE format with the language field unset. To import a single file as SOURCE, you must set the language field. | data: { . content (string) . format . language . overwrite (boolean) . path (string) } (object) required |
| get_api_2_0_workspace_list | Lists the contents of a directory, or the object if it is not a directory. If the input path does not exist, this call returns an error RESOURCE_DOES_NOT_EXIST. | path (string) required notebooks_modified_after (integer) |
| post_api_2_0_workspace_mkdirs | Creates the specified directory and necessary parent directories if they do not exist. If there is an object (not a directory) at any prefix of the input path, this call returns an error RESOURCE_ALREADY_EXISTS. Note that if this operation fails it may have succeeded in creating some of the necessary parent directories. | data: { . path (string) } (object) required |
| post_api_2_1_clusters_change_owner | Change the owner of the cluster. You must be an admin and the cluster must be terminated to perform this operation. The service principal application ID can be supplied as an argument to owner_username. | data: { . cluster_id (string) . owner_username (string) } (object) required |
| post_api_2_1_clusters_create | Creates a new Spark cluster. This method will acquire new instances from the cloud provider if necessary. This method is asynchronous; the returned cluster_id can be used to poll the cluster status. When this method returns, the cluster will be in a PENDING state. The cluster will be usable once it enters a RUNNING state. Note: Databricks may not be able to acquire some of the requested nodes, due to cloud provider limitations (account limits, spot price, etc.) or transient network issues. | data: { . apply_policy_default_values (boolean) . autoscale . autotermination_minutes (integer) . aws_attributes . clone_from . cluster_log_conf . cluster_name (string) . custom_tags (object) . data_security_mode . docker_image . driver_instance_pool_id (string) . driver_node_type_id (string) . enable_elastic_disk (boolean) . enable_local_disk_encryption (boolean) . init_scripts (array) . instance_pool_id (string) . is_single_node (boolean) . kind . node_type_id (string) . num_workers (integer) . policy_id (string) . runtime_engine . single_user_name (string) . spark_conf (object) . spark_env_vars (object) . spark_version (string) . ssh_public_keys (array) . use_ml_runtime (boolean) . workload_type } (object) required |
| post_api_2_1_clusters_delete | Terminates the Spark cluster with the specified ID. The cluster is removed asynchronously. Once the termination has completed, the cluster will be in a TERMINATED state. If the cluster is already in a TERMINATING or TERMINATED state, nothing will happen. | data: { . cluster_id (string) } (object) required |
| post_api_2_1_clusters_edit | Updates the configuration of a cluster to match the provided attributes and size. A cluster can be updated if it is in a RUNNING or TERMINATED state. If a cluster is updated while in a RUNNING state, it will be restarted so that the new attributes can take effect. If a cluster is updated while in a TERMINATED state, it will remain TERMINATED. The next time it is started using the clusters/start API, the new attributes will take effect. Any attempt to update a cluster in any other state will be rejected. | data: { . apply_policy_default_values (boolean) . autoscale . autotermination_minutes (integer) . aws_attributes . cluster_id (string) . cluster_log_conf . cluster_name (string) . custom_tags (object) . data_security_mode . docker_image . driver_instance_pool_id (string) . driver_node_type_id (string) . enable_elastic_disk (boolean) . enable_local_disk_encryption (boolean) . init_scripts (array) . instance_pool_id (string) . is_single_node (boolean) . kind . node_type_id (string) . num_workers (integer) . policy_id (string) . runtime_engine . single_user_name (string) . spark_conf (object) . spark_env_vars (object) . spark_version (string) . ssh_public_keys (array) . use_ml_runtime (boolean) . workload_type } (object) required |
| post_api_2_1_clusters_events | Retrieves a list of events about the activity of a cluster. This API is paginated. If there are more events to read, the response includes all the parameters necessary to request the next page of events. | data: { . cluster_id (string) . end_time (integer) . event_types (array) . limit (integer) . offset (integer) . order . page_size (integer) . page_token (string) . start_time (integer) } (object) required |
| get_api_2_1_clusters_get | Retrieves the information for a cluster given its identifier. Clusters can be described while they are running, or up to 60 days after they are terminated. | cluster_id (string) required |
| get_api_2_1_clusters_list | Return information about all pinned and active clusters, and all clusters terminated within the last 30 days. Clusters terminated prior to this period are not included. | filter_by: { . cluster_sources (array) . cluster_states (array) . is_pinned (boolean) . policy_id (string) } (object) page_token (string) page_size (integer) sort_by: { . direction . field } (object) |
| get_api_2_1_clusters_list_node_types | Returns a list of supported Spark node types. These node types can be used to launch a cluster. | No parameters |
| get_api_2_1_clusters_list_zones | Returns a list of availability zones where clusters can be created (for example, us-west-2a). These zones can be used to launch a cluster. | No parameters |
| post_api_2_1_clusters_permanent_delete | Permanently deletes a Spark cluster. This cluster is terminated and resources are asynchronously removed. In addition, users will no longer see permanently deleted clusters in the cluster list, and API users can no longer perform any action on permanently deleted clusters. | data: { . cluster_id (string) } (object) required |
| post_api_2_1_clusters_pin | Pinning a cluster ensures that the cluster will always be returned by the ListClusters API. Pinning a cluster that is already pinned will have no effect. This API can only be called by workspace admins. | data: { . cluster_id (string) } (object) required |
| post_api_2_1_clusters_resize | Resizes a cluster to have a desired number of workers. This will fail unless the cluster is in a RUNNING state. | data: { . autoscale . cluster_id (string) . num_workers (integer) } (object) required |
| post_api_2_1_clusters_restart | Restarts a Spark cluster with the supplied ID. If the cluster is not currently in a RUNNING state, nothing will happen. | data: { . cluster_id (string) . restart_user (string) } (object) required |
| get_api_2_1_clusters_spark_versions | Returns the list of available Spark versions. These versions can be used to launch a cluster. | No parameters |
| post_api_2_1_clusters_start | Starts a terminated Spark cluster with the supplied ID. This works similarly to createCluster, except: - The previous cluster id and attributes are preserved. - The cluster starts with the last specified cluster size. - If the previous cluster was an autoscaling cluster, the current cluster starts with the minimum number of nodes. - If the cluster is not currently in a TERMINATED state, nothing will happen. - Clusters launched to run a job cannot be started. | data: { . cluster_id (string) } (object) required |
| post_api_2_1_clusters_unpin | Unpinning a cluster will allow the cluster to eventually be removed from the ListClusters API. Unpinning a cluster that is not pinned will have no effect. This API can only be called by workspace admins. | data: { . cluster_id (string) } (object) required |
| post_api_2_1_clusters_update | Updates the configuration of a cluster to match the partial set of attributes and size. Denote which fields to update using the update_mask field in the request body. A cluster can be updated if it is in a RUNNING or TERMINATED state. If a cluster is updated while in a RUNNING state, it will be restarted so that the new attributes can take effect. If a cluster is updated while in a TERMINATED state, it will remain TERMINATED. The updated attributes will take effect the next time the cluster is started. | data: { . cluster . cluster_id (string) . update_mask (string) } (object) required |
| get_api_2_1_data_sharing_providers_by_provider_name_shares_by_share_name | Get arrays of assets associated with a specified provider's share. The caller is the recipient of the share. | provider_name (string) share_name (string) table_max_results (integer) function_max_results (integer) volume_max_results (integer) notebook_max_results (integer) |
| post_api_2_1_jobs_create | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/create for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Create a new job. | data: { . access_control_list (array) . continuous . deployment . description (string) . disable_auto_optimization (boolean) . edit_mode . email_notifications . environments (array) . format . git_source . health . job_clusters (array) . max_concurrent_runs (integer) . max_retries (integer) . min_retry_interval_millis (integer) . name (string) . notification_settings . parameters (array) . performance_target . queue . retry_on_timeout (boolean) . run_as . schedule . tags (object) . tasks (array) . timeout_seconds (integer) . trigger . webhook_notifications } (object) required |
| post_api_2_1_jobs_delete | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/delete for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Deletes a job. | data: { . job_id (integer) } (object) required |
| get_api_2_1_jobs_get | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/get for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieves the details for a single job. | job_id (integer) required |
| get_api_2_1_jobs_list | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/list for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieves a list of jobs. | offset (integer) limit (integer) expand_tasks (boolean) name (string) page_token (string) |
| post_api_2_1_jobs_reset | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/reset for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Overwrite all settings for the given job. Use the Update endpoint (:method:jobs/update) to update job settings partially. | data: { . job_id (integer) . new_settings } (object) required |
| post_api_2_1_jobs_run_now | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/runnow for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Run a job and return the run_id of the triggered run. | data: { . dbt_commands (array) . idempotency_token (string) . jar_params (array) . job_id (integer) . job_parameters (object) . notebook_params (object) . only (array) . performance_target . pipeline_params . python_named_params (object) . python_params (array) . queue . spark_submit_params (array) . sql_params (object) } (object) required |
| post_api_2_1_jobs_runs_cancel | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/cancelrun for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Cancels a job run or a task run. The run is canceled asynchronously, so it may still be running when this request completes. | data: { . run_id (integer) } (object) required |
| post_api_2_1_jobs_runs_cancel_all | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/cancelallruns for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Cancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started. | data: { . all_queued_runs (boolean) . job_id (integer) } (object) required |
| post_api_2_1_jobs_runs_delete | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/deleterun for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Deletes a non-active run. Returns an error if the run is active. | data: { . run_id (integer) } (object) required |
| get_api_2_1_jobs_runs_export | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/exportrun for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Export and retrieve the job run task. | run_id (integer) required views_to_export (string) |
| get_api_2_1_jobs_runs_get | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/getrun for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieve the metadata of a run. | run_id (integer) required include_history (boolean) include_resolved_values (boolean) |
| get_api_2_1_jobs_runs_get_output | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/getrunoutput for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Retrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit call, you can use this endpoint to retrieve that value. | run_id (integer) required |
| get_api_2_1_jobs_runs_list | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/listruns for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). List runs in descending order by start time. | job_id (integer) active_only (boolean) completed_only (boolean) offset (integer) limit (integer) run_type (string) expand_tasks (boolean) start_time_from (integer) start_time_to (integer) page_token (string) |
| post_api_2_1_jobs_runs_repair | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/repairrun for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Re-run one or more tasks. Tasks are re-run as part of the original job run. They use the current job and task settings, and can be viewed in the history for the original job run. | data: { . dbt_commands (array) . jar_params (array) . job_parameters (object) . latest_repair_id (integer) . notebook_params (object) . performance_target . pipeline_params . python_named_params (object) . python_params (array) . rerun_all_failed_tasks (boolean) . rerun_dependent_tasks (boolean) . rerun_tasks (array) . run_id (integer) . spark_submit_params (array) . sql_params (object) } (object) required |
| post_api_2_1_jobs_runs_submit | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/submit for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Runs submitted using this endpoint don’t display in the UI. | data: { . access_control_list (array) . email_notifications . environments (array) . git_source . health . idempotency_token (string) . notification_settings . queue . run_as . run_name (string) . tasks (array) . timeout_seconds (integer) . webhook_notifications } (object) required |
| post_api_2_1_jobs_update | ⚠ Warning: This page describes a legacy 2.1 version of the endpoint. Databricks recommends that you use Jobs API 2.2 :method:jobs/update for new and existing clients and scripts. For details on the changes in the 2.2 version of the Jobs API, see Updating from Jobs API 2.1 to 2.2 (https://docs.databricks.com/en/reference/jobs-api-2-2-updates.html). Add, update, or remove specific settings of an existing job. Use the Reset endpoint (:method:jobs/reset) to overwrite all job settings. | data: { . fields_to_remove (array) . job_id (integer) . new_settings } (object) required |
| get_api_2_1_marketplace_consumer_installations | List all installations across all listings. | page_token (string) page_size (integer) |
| get_api_2_1_marketplace_consumer_listings | List all published listings in the Databricks Marketplace that the consumer has access to. | page_token (string) page_size (integer) assets (array) categories (array) tags (array) is_free (boolean) is_private_exchange (boolean) is_staff_pick (boolean) provider_ids (array) |
| get_api_2_1_marketplace_consumer_listings_by_id | Get a published listing in the Databricks Marketplace that the consumer has access to. | id (string) |
| get_api_2_1_marketplace_consumer_listings_by_listing_id_content | Get a high level preview of the metadata of listing installable content. | listing_id (string) page_token (string) page_size (integer) |
| get_api_2_1_marketplace_consumer_listings_by_listing_id_fulfillments | Get all listing fulfillments associated with a listing. A fulfillment is a potential installation. Standard installations contain metadata about the attached share or git repo. Only one of these fields will be present. Personalized installations contain metadata about the attached share or git repo, as well as the Delta Sharing recipient type. | listing_id (string) page_token (string) page_size (integer) |
| get_api_2_1_marketplace_consumer_listings_by_listing_id_installations | List all installations for a particular listing. | listing_id (string) page_token (string) page_size (integer) |
| post_api_2_1_marketplace_consumer_listings_by_listing_id_installations | Install payload associated with a Databricks Marketplace listing. | listing_id (string) data: { . accepted_consumer_terms . catalog_name (string) . recipient_type . repo_detail . share_name (string) } (object) required |
| put_api_2_1_marketplace_consumer_listings_by_listing_id_installations_by_installation_id | This is an update API that updates part of the fields defined in the installation table, and interacts with external services according to the fields not included in the installation table: 1. the token will be rotated if the rotateToken flag is true; 2. the token will be forcibly rotated if the rotateToken flag is true and the tokenInfo field is empty. | listing_id (string) installation_id (string) data: { . installation . rotate_token (boolean) } (object) required |
| delete_api_2_1_marketplace_consumer_listings_by_listing_id_installations_by_installation_id | Uninstall an installation associated with a Databricks Marketplace listing. | listing_id (string) installation_id (string) |
| get_api_2_1_marketplace_consumer_listings_by_listing_id_personalization_requests | Get the personalization request for a listing. Each consumer can make at most one personalization request for a listing. | listing_id (string) |
| post_api_2_1_marketplace_consumer_listings_by_listing_id_personalization_requests | Create a personalization request for a listing. | listing_id (string) data: { . accepted_consumer_terms . comment (string) . company (string) . first_name (string) . intended_use (string) . is_from_lighthouse (boolean) . last_name (string) . recipient_type } (object) required |
| get_api_2_1_marketplace_consumer_listings_batch_get | Batch get a published listing in the Databricks Marketplace that the consumer has access to. | ids (array) |
| get_api_2_1_marketplace_consumer_personalization_requests | List personalization requests for a consumer across all listings. | page_token (string) page_size (integer) |
| get_api_2_1_marketplace_consumer_providers | List all providers in the Databricks Marketplace with at least one visible listing. | page_token (string) page_size (integer) is_featured (boolean) |
| get_api_2_1_marketplace_consumer_providers_by_id | Get a provider in the Databricks Marketplace with at least one visible listing. | id (string) |
| get_api_2_1_marketplace_consumer_providers_batch_get | Batch get a provider in the Databricks Marketplace with at least one visible listing. | ids (array) |
| get_api_2_1_marketplace_consumer_search_listings | Search published listings in the Databricks Marketplace that the consumer has access to. This query supports a variety of different search parameters and performs fuzzy matching. | query (string) required is_free (boolean) is_private_exchange (boolean) provider_ids (array) categories (array) assets (array) page_token (string) page_size (integer) |
| get_api_2_1_tag_policies | Lists the tag policies for all governed tags in the account. | page_size (integer) page_token (string) |
| post_api_2_1_tag_policies | Creates a new tag policy, making the associated tag key governed. | data: { . description (string) . id (string) . tag_key (string) . values (array) } (object) required |
| get_api_2_1_tag_policies_by_tag_key | Gets a single tag policy by its associated governed tag's key. | tag_key (string) |
| patch_api_2_1_tag_policies_by_tag_key | Updates an existing tag policy for a single governed tag. | tag_key (string) update_mask (string) required data: { . description (string) . id (string) . tag_key (string) . values (array) } (object) required |
| delete_api_2_1_tag_policies_by_tag_key | Deletes a tag policy by its associated governed tag's key, leaving that tag key ungoverned. | tag_key (string) |
| get_api_2_1_unity_catalog_artifact_allowlists_by_artifact_type | Get the artifact allowlist of a certain artifact type. The caller must be a metastore admin or have the MANAGE ALLOWLIST privilege on the metastore. | artifact_type (string) |
| put_api_2_1_unity_catalog_artifact_allowlists_by_artifact_type | Set the artifact allowlist of a certain artifact type. The whole artifact allowlist is replaced with the new allowlist. The caller must be a metastore admin or have the MANAGE ALLOWLIST privilege on the metastore. | artifact_type (string) data: { . artifact_matchers (array) . created_at (integer) . created_by (string) . metastore_id (string) } (object) required |
| get_api_2_1_unity_catalog_bindings_by_securable_type_by_securable_name | Gets workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable. | securable_type (string) securable_name (string) max_results (integer) page_token (string) |
| patch_api_2_1_unity_catalog_bindings_by_securable_type_by_securable_name | Updates workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable. | securable_type (string) securable_name (string) data: { . add (array) . remove (array) } (object) required |
| get_api_2_1_unity_catalog_catalogs | Gets an array of catalogs in the metastore. If the caller is the metastore admin, all catalogs will be retrieved. Otherwise, only catalogs owned by the caller or for which the caller has the USE_CATALOG privilege will be retrieved. There is no guarantee of a specific ordering of the elements in the array. | include_browse (boolean) max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_catalogs | Creates a new catalog instance in the parent metastore if the caller is a metastore admin or has the CREATE_CATALOG privilege. | data: { . comment (string) . connection_name (string) . name (string) . options (object) . properties (object) . provider_name (string) . share_name (string) . storage_root (string) } (object) required |
| get_api_2_1_unity_catalog_catalogs_by_name | Gets the specified catalog in a metastore. The caller must be a metastore admin, the owner of the catalog, or a user that has the USE_CATALOG privilege set for their account. | name (string) include_browse (boolean) |
| patch_api_2_1_unity_catalog_catalogs_by_name | Updates the catalog that matches the supplied name. The caller must be either the owner of the catalog, or a metastore admin when changing the owner field of the catalog. | name (string) data: { . comment (string) . enable_predictive_optimization . isolation_mode . new_name (string) . options (object) . owner (string) . properties (object) } (object) required |
| delete_api_2_1_unity_catalog_catalogs_by_name | Deletes the catalog that matches the supplied name. The caller must be a metastore admin or the owner of the catalog. | name (string) force (boolean) |
| get_api_2_1_unity_catalog_connections | List all connections. | max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_connections | Creates a new connection to an external data source. It allows users to specify connection details and configurations for interaction with the external server. Supported data sources for connections are listed here (https://docs.databricks.com/aws/en/query-federation/supported-data-sources). | data: { . comment (string) . connection_type . name (string) . options (object) . properties (object) . read_only (boolean) } (object) required |
| get_api_2_1_unity_catalog_connections_by_name | Gets a connection by its name. | name (string) |
| patch_api_2_1_unity_catalog_connections_by_name | Updates the connection that matches the supplied name. | name (string) data: { . new_name (string) . options (object) . owner (string) } (object) required |
| delete_api_2_1_unity_catalog_connections_by_name | Deletes the connection that matches the supplied name. | name (string) |
| post_api_2_1_unity_catalog_constraints | Creates a new table constraint. For the table constraint creation to succeed, the user must satisfy both of these conditions: - the user must have the USE_CATALOG privilege on the table's parent catalog, the USE_SCHEMA privilege on the table's parent schema, and be the owner of the table. - if the new constraint is a ForeignKeyConstraint, the user must have the USE_CATALOG privilege on the referenced parent table's catalog and the USE_SCHEMA privilege on the referenced parent table's schema. | data: { . constraint . full_name_arg (string) } (object) required |
| delete_api_2_1_unity_catalog_constraints_by_full_name | Deletes a table constraint. For the table constraint deletion to succeed, the user must satisfy both of these conditions: - the user must have the USE_CATALOG privilege on the table's parent catalog, the USE_SCHEMA privilege on the table's parent schema, and be the owner of the table. - if the cascade argument is true, the user must have the following permissions on all of the child tables: the USE_CATALOG privilege on the table's catalog, the USE_SCHEMA privilege on the table's schema, and be the owner of the table. | full_name (string) constraint_name (string) required cascade (boolean) required |
| get_api_2_1_unity_catalog_credentials | Gets an array of credentials as CredentialInfo objects. The array is limited to only the credentials that the caller has permission to access. If the caller is a metastore admin, retrieval of credentials is unrestricted. There is no guarantee of a specific ordering of the elements in the array. | max_results (integer) page_token (string) purpose (string) |
| post_api_2_1_unity_catalog_credentials | Creates a new credential. The type of credential to be created is determined by the purpose field, which should be either SERVICE or STORAGE. The caller must be a metastore admin or have the metastore privilege CREATE_STORAGE_CREDENTIAL for storage credentials, or CREATE_SERVICE_CREDENTIAL for service credentials. The request object must contain an AwsIamRole with the arn of the IAM role. To prevent the confused deputy problem, see https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html. | data: { . aws_iam_role . comment (string) . name (string) . purpose . read_only (boolean) . skip_validation (boolean) } (object) required |
| get_api_2_1_unity_catalog_credentials_by_name_arg | Gets a service or storage credential from the metastore. The caller must be a metastore admin, the owner of the credential, or have any permission on the credential. | name_arg (string) |
| patch_api_2_1_unity_catalog_credentials_by_name_arg | Updates a service or storage credential on the metastore. The caller must be the owner of the credential or a metastore admin or have the MANAGE permission. If the caller is a metastore admin, only the owner field can be changed. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy. | name_arg (string) data: { . aws_iam_role . comment (string) . force (boolean) . isolation_mode . new_name (string) . owner (string) . read_only (boolean) . skip_validation (boolean) } (object) required |
| delete_api_2_1_unity_catalog_credentials_by_name_arg | Deletes a service or storage credential from the metastore. The caller must be an owner of the credential. | name_arg (string) force (boolean) |
| get_api_2_1_unity_catalog_current_metastore_assignment | Gets the metastore assignment for the workspace being accessed. | No parameters |
| get_api_2_1_unity_catalog_effective_permissions_by_securable_type_by_full_name | Gets the effective permissions for a securable. Includes inherited permissions from any parent securables. | securable_type (string) full_name (string) principal (string) max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_entity_tag_assignments | Creates a tag assignment for a Unity Catalog entity. To add tags to Unity Catalog entities, you must own the entity or have the following privileges: - APPLY TAG on the entity - USE SCHEMA on the entity's parent schema - USE CATALOG on the entity's parent catalog To add a governed tag to Unity Catalog entities, you must also have the ASSIGN or MANAGE permission on the tag policy. See Manage tag policy permissions (https://docs.databricks.com/aws/en/admin/tag-policies/manage-permissions). | data: { . entity_name (string) . entity_type (string) . tag_key (string) . tag_value (string) } (object) required |
| get_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags | List tag assignments for a Unity Catalog entity. | entity_type (string) entity_name (string) max_results (integer) page_token (string) |
| get_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags_by_tag_key | Gets a tag assignment for a Unity Catalog entity by tag key. | entity_type (string) entity_name (string) tag_key (string) |
| patch_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags_by_tag_key | Updates an existing tag assignment for a Unity Catalog entity. To update tags on Unity Catalog entities, you must own the entity or have the following privileges: - APPLY TAG on the entity - USE SCHEMA on the entity's parent schema - USE CATALOG on the entity's parent catalog To update a governed tag on Unity Catalog entities, you must also have the ASSIGN or MANAGE permission on the tag policy. See Manage tag policy permissions (https://docs.databricks.com/aws/en/admin/tag-policies/manage-permissions). | entity_type (string) entity_name (string) tag_key (string) update_mask (string) required data: { . entity_name (string) . entity_type (string) . tag_key (string) . tag_value (string) } (object) required |
| delete_api_2_1_unity_catalog_entity_tag_assignments_by_entity_type_by_entity_name_tags_by_tag_key | Deletes a tag assignment for a Unity Catalog entity by its key. To delete tags from Unity Catalog entities, you must own the entity or have the following privileges: - APPLY TAG on the entity - USE_SCHEMA on the entity's parent schema - USE_CATALOG on the entity's parent catalog To delete a governed tag from Unity Catalog entities, you must also have the ASSIGN or MANAGE permission on the tag policy. See Manage tag policy permissions (https://docs.databricks.com/aws/en/admin/tag-policies/manage-permissions). | entity_type (string) entity_name (string) tag_key (string) |
| get_api_2_1_unity_catalog_external_locations | Gets an array of external locations ExternalLocationInfo objects from the metastore. The caller must be a metastore admin, the owner of the external location, or a user that has some privilege on the external location. There is no guarantee of a specific ordering of the elements in the array. | include_browse (boolean) max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_external_locations | Creates a new external location entry in the metastore. The caller must be a metastore admin or have the CREATE_EXTERNAL_LOCATION privilege on both the metastore and the associated storage credential. | data: { . comment (string) . credential_name (string) . enable_file_events (boolean) . encryption_details . fallback (boolean) . file_event_queue . name (string) . read_only (boolean) . skip_validation (boolean) . url (string) } (object) required |
| get_api_2_1_unity_catalog_external_locations_by_name | Gets an external location from the metastore. The caller must be either a metastore admin, the owner of the external location, or a user that has some privilege on the external location. | name (string) include_browse (boolean) |
| patch_api_2_1_unity_catalog_external_locations_by_name | Updates an external location in the metastore. The caller must be the owner of the external location, or be a metastore admin. In the second case, the admin can only update the name of the external location. | name (string) data: { . comment (string) . credential_name (string) . enable_file_events (boolean) . encryption_details . fallback (boolean) . file_event_queue . force (boolean) . isolation_mode . new_name (string) . owner (string) . read_only (boolean) . skip_validation (boolean) . url (string) } (object) required |
| delete_api_2_1_unity_catalog_external_locations_by_name | Deletes the specified external location from the metastore. The caller must be the owner of the external location. | name (string) force (boolean) |
| get_api_2_1_unity_catalog_functions | List functions within the specified parent catalog and schema. If the user is a metastore admin, all functions are returned in the output list. Otherwise, the user must have the USE_CATALOG privilege on the catalog and the USE_SCHEMA privilege on the schema, and the output list contains only functions for which either the user has the EXECUTE privilege or the user is the owner. There is no guarantee of a specific ordering of the elements in the array. | catalog_name (string) required schema_name (string) required max_results (integer) page_token (string) include_browse (boolean) |
| post_api_2_1_unity_catalog_functions | WARNING: This API is experimental and will change in future versions. Creates a new function. The user must have the following permissions in order for the function to be created: - USE_CATALOG on the function's parent catalog - USE_SCHEMA and CREATE_FUNCTION on the function's parent schema | data: { . function_info } (object) required |
| get_api_2_1_unity_catalog_functions_by_name | Gets a function from within a parent catalog and schema. For the fetch to succeed, the user must satisfy one of the following requirements: - Is a metastore admin - Is an owner of the function's parent catalog - Have the USE_CATALOG privilege on the function's parent catalog and be the owner of the function - Have the USE_CATALOG privilege on the function's parent catalog, the USE_SCHEMA privilege on the function's parent schema, and the EXECUTE privilege on the function itself | name (string) include_browse (boolean) |
| patch_api_2_1_unity_catalog_functions_by_name | Updates the function that matches the supplied name. Only the owner of the function can be updated. If the user is not a metastore admin, the user must be a member of the group that is the new function owner. - Is a metastore admin - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and has the USE_CATALOG privilege on its parent catalog - Is the owner of the function itself and has the USE_CATALOG privilege on its parent catalog as well as the USE_SCHEMA privilege on its parent schema | name (string) data: { . owner (string) } (object) required |
| delete_api_2_1_unity_catalog_functions_by_name | Deletes the function that matches the supplied name. For the deletion to succeed, the user must satisfy one of the following conditions: - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and have the USE_CATALOG privilege on its parent catalog - Is the owner of the function itself and have both the USE_CATALOG privilege on its parent catalog and the USE_SCHEMA privilege on its parent schema | name (string) force (boolean) |
| get_api_2_1_unity_catalog_metastore_summary | Gets information about a metastore. This summary includes the storage credential, the cloud vendor, the cloud region, and the global metastore ID. | No parameters |
| get_api_2_1_unity_catalog_metastores | Gets an array of the available metastores as MetastoreInfo objects. The caller must be an admin to retrieve this info. There is no guarantee of a specific ordering of the elements in the array. | max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_metastores | Creates a new metastore based on a provided name and optional storage root path. By default if the owner field is not set, the owner of the new metastore is the user calling the createMetastore API. If the owner field is set to the empty string '', the ownership is assigned to the System User instead. | data: { . name (string) . region (string) . storage_root (string) } (object) required |
| get_api_2_1_unity_catalog_metastores_by_id | Gets a metastore that matches the supplied ID. The caller must be a metastore admin to retrieve this info. | id (string) |
| patch_api_2_1_unity_catalog_metastores_by_id | Updates information for a specific metastore. The caller must be a metastore admin. If the owner field is set to the empty string '', the ownership is updated to the System User. | id (string) data: { . delta_sharing_organization_name (string) . delta_sharing_recipient_token_lifetime_in_seconds (integer) . delta_sharing_scope . new_name (string) . owner (string) . privilege_model_version (string) . storage_root_credential_id (string) } (object) required |
| delete_api_2_1_unity_catalog_metastores_by_id | Deletes a metastore. The caller must be a metastore admin. | id (string) force (boolean) |
| get_api_2_1_unity_catalog_metastores_by_metastore_id_systemschemas | Gets an array of system schemas for a metastore. The caller must be an account admin or a metastore admin. | metastore_id (string) max_results (integer) page_token (string) |
| put_api_2_1_unity_catalog_metastores_by_metastore_id_systemschemas_by_schema_name | Enables the system schema and adds it to the system catalog. The caller must be an account admin or a metastore admin. | metastore_id (string) schema_name (string) data: { . catalog_name (string) } (object) required |
| delete_api_2_1_unity_catalog_metastores_by_metastore_id_systemschemas_by_schema_name | Disables the system schema and removes it from the system catalog. The caller must be an account admin or a metastore admin. | metastore_id (string) schema_name (string) |
| get_api_2_1_unity_catalog_models | List registered models. You can list registered models under a particular schema, or list all registered models in the current metastore. The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the registered models. A regular user needs to be the owner or have the EXECUTE privilege on the registered model to receive the registered models in the response. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | catalog_name (string) schema_name (string) max_results (integer) page_token (string) include_browse (boolean) |
| post_api_2_1_unity_catalog_models | Creates a new registered model in Unity Catalog. File storage for model versions in the registered model will be located in the default location which is specified by the parent schema, or the parent catalog, or the Metastore. For registered model creation to succeed, the user must satisfy the following conditions: - The caller must be a metastore admin, or be the owner of the parent catalog and schema, or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | data: { . catalog_name (string) . comment (string) . name (string) . schema_name (string) . storage_location (string) } (object) required |
| get_api_2_1_unity_catalog_models_by_full_name | Get a registered model. The caller must be a metastore admin or an owner of or have the EXECUTE privilege on the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) include_browse (boolean) include_aliases (boolean) |
| patch_api_2_1_unity_catalog_models_by_full_name | Updates the specified registered model. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. Currently only the name, the owner or the comment of the registered model can be updated. | full_name (string) data: { . comment (string) . new_name (string) . owner (string) } (object) required |
| delete_api_2_1_unity_catalog_models_by_full_name | Deletes a registered model and all its model versions from the specified parent catalog and schema. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) |
| get_api_2_1_unity_catalog_models_by_full_name_aliases_by_alias | Get a model version by alias. The caller must be a metastore admin or an owner of or have the EXECUTE privilege on the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) alias (string) include_aliases (boolean) |
| put_api_2_1_unity_catalog_models_by_full_name_aliases_by_alias | Set an alias on the specified registered model. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) alias (string) data: { . alias (string) . full_name (string) . version_num (integer) } (object) required |
| delete_api_2_1_unity_catalog_models_by_full_name_aliases_by_alias | Deletes a registered model alias. The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) alias (string) |
| get_api_2_1_unity_catalog_models_by_full_name_versions | List model versions. You can list model versions under a particular schema, or list all model versions in the current metastore. The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the model versions. A regular user needs to be the owner or have the EXECUTE privilege on the parent registered model to receive the model versions in the response. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) max_results (integer) page_token (string) include_browse (boolean) |
| get_api_2_1_unity_catalog_models_by_full_name_versions_by_version | Get a model version. The caller must be a metastore admin or an owner of or have the EXECUTE privilege on the parent registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) version (string) include_browse (boolean) include_aliases (boolean) |
| patch_api_2_1_unity_catalog_models_by_full_name_versions_by_version | Updates the specified model version. The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. Currently only the comment of the model version can be updated. | full_name (string) version (string) data: { . comment (string) } (object) required |
| delete_api_2_1_unity_catalog_models_by_full_name_versions_by_version | Deletes a model version from the specified registered model. Any aliases assigned to the model version will also be deleted. The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) version (string) |
| get_api_2_1_unity_catalog_permissions_by_securable_type_by_full_name | Gets the permissions for a securable. Does not include inherited permissions. | securable_type (string) full_name (string) principal (string) max_results (integer) page_token (string) |
| patch_api_2_1_unity_catalog_permissions_by_securable_type_by_full_name | Updates the permissions for a securable. | securable_type (string) full_name (string) data: { . changes (array) } (object) required |
| post_api_2_1_unity_catalog_policies | Creates a new policy on a securable. The new policy applies to the securable and all its descendants. | data: { . column_mask . comment (string) . created_at (integer) . created_by (string) . except_principals (array) . for_securable_type . id (string) . match_columns (array) . name (string) . on_securable_fullname (string) . on_securable_type . policy_type . row_filter . to_principals (array) . updated_at (integer) . updated_by (string) . when_condition (string) } (object) required |
| get_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname | List all policies defined on a securable. Optionally, the list can include inherited policies defined on the securable's parent schema or catalog. | on_securable_type (string) on_securable_fullname (string) include_inherited (boolean) max_results (integer) page_token (string) |
| get_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname_by_name | Get the policy definition on a securable | on_securable_type (string) on_securable_fullname (string) name (string) |
| patch_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname_by_name | Update an ABAC policy on a securable. | on_securable_type (string) on_securable_fullname (string) name (string) update_mask (string) data: { . column_mask . comment (string) . created_at (integer) . created_by (string) . except_principals (array) . for_securable_type . id (string) . match_columns (array) . name (string) . on_securable_fullname (string) . on_securable_type . policy_type . row_filter . to_principals (array) . updated_at (integer) . updated_by (string) . when_condition (string) } (object) required |
| delete_api_2_1_unity_catalog_policies_by_on_securable_type_by_on_securable_fullname_by_name | Delete an ABAC policy defined on a securable. | on_securable_type (string) on_securable_fullname (string) name (string) |
| get_api_2_1_unity_catalog_providers | Gets an array of available authentication providers. The caller must either be a metastore admin or the owner of the providers. Providers not owned by the caller are not included in the response. There is no guarantee of a specific ordering of the elements in the array. | data_provider_global_metastore_id (string) max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_providers | Creates a new authentication provider minimally based on a name and authentication type. The caller must be an admin on the metastore. | data: { . authentication_type . comment (string) . name (string) . recipient_profile_str (string) } (object) required |
| get_api_2_1_unity_catalog_providers_by_name | Gets a specific authentication provider. The caller must supply the name of the provider, and must either be a metastore admin or the owner of the provider. | name (string) |
| patch_api_2_1_unity_catalog_providers_by_name | Updates the information for an authentication provider, if the caller is a metastore admin or is the owner of the provider. If the update changes the provider name, the caller must be both a metastore admin and the owner of the provider. | name (string) data: { . comment (string) . new_name (string) . owner (string) . recipient_profile_str (string) } (object) required |
| delete_api_2_1_unity_catalog_providers_by_name | Deletes an authentication provider, if the caller is a metastore admin or is the owner of the provider. | name (string) |
| get_api_2_1_unity_catalog_providers_by_name_shares | Gets an array of a specified provider's shares within the metastore where: the caller is a metastore admin, or the caller is the owner. | name (string) max_results (integer) page_token (string) |
| get_api_2_1_unity_catalog_public_data_sharing_activation_by_activation_url | Retrieves an access token using an activation URL. This is a public API without any authentication. | activation_url (string) |
| get_api_2_1_unity_catalog_public_data_sharing_activation_info_by_activation_url | Gets an activation URL for a share. | activation_url (string) |
| get_api_2_1_unity_catalog_recipients | Gets an array of all share recipients within the current metastore where: the caller is a metastore admin, or the caller is the owner. There is no guarantee of a specific ordering of the elements in the array. | data_recipient_global_metastore_id (string) max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_recipients | Creates a new recipient with the delta sharing authentication type in the metastore. The caller must be a metastore admin or have the CREATE_RECIPIENT privilege on the metastore. | data: { . authentication_type . comment (string) . data_recipient_global_metastore_id (string) . expiration_time (integer) . ip_access_list . name (string) . owner (string) . properties_kvpairs . sharing_code (string) } (object) required |
| get_api_2_1_unity_catalog_recipients_by_name | Gets a share recipient from the metastore if the caller is the owner of the share recipient or is a metastore admin. | name (string) |
| patch_api_2_1_unity_catalog_recipients_by_name | Updates an existing recipient in the metastore. The caller must be a metastore admin or the owner of the recipient. If the recipient name will be updated, the user must be both a metastore admin and the owner of the recipient. | name (string) data: { . comment (string) . expiration_time (integer) . ip_access_list . new_name (string) . owner (string) . properties_kvpairs } (object) required |
| delete_api_2_1_unity_catalog_recipients_by_name | Deletes the specified recipient from the metastore. The caller must be the owner of the recipient. | name (string) |
| post_api_2_1_unity_catalog_recipients_by_name_rotate_token | Refreshes the specified recipient's delta sharing authentication token with the provided token info. The caller must be the owner of the recipient. | name (string) data: { . existing_token_expire_in_seconds (integer) } (object) required |
| get_api_2_1_unity_catalog_recipients_by_name_share_permissions | Gets the share permissions for the specified Recipient. The caller must be a metastore admin or the owner of the Recipient. | name (string) max_results (integer) page_token (string) |
| get_api_2_1_unity_catalog_resource_quotas_all_resource_quotas | ListQuotas returns all quota values under the metastore. There are no SLAs on the freshness of the counts returned. This API does not trigger a refresh of quota counts. | max_results (integer) page_token (string) |
| get_api_2_1_unity_catalog_resource_quotas_by_parent_securable_type_by_parent_full_name_by_quota_name | The GetQuota API returns usage information for a single resource quota, defined as a child-parent pair. This API also refreshes the quota count if it is out of date. Refreshes are triggered asynchronously. The updated count might not be returned in the first call. | parent_securable_type (string) parent_full_name (string) quota_name (string) |
| get_api_2_1_unity_catalog_schemas | Gets an array of schemas for a catalog in the metastore. If the caller is the metastore admin or the owner of the parent catalog, all schemas for the catalog will be retrieved. Otherwise, only schemas owned by the caller or for which the caller has the USE_SCHEMA privilege will be retrieved. There is no guarantee of a specific ordering of the elements in the array. | catalog_name (string) required max_results (integer) page_token (string) include_browse (boolean) |
| post_api_2_1_unity_catalog_schemas | Creates a new schema for catalog in the Metastore. The caller must be a metastore admin, or have the CREATE_SCHEMA privilege in the parent catalog. | data: { . catalog_name (string) . comment (string) . name (string) . properties (object) . storage_root (string) } (object) required |
| get_api_2_1_unity_catalog_schemas_by_full_name | Gets the specified schema within the metastore. The caller must be a metastore admin, the owner of the schema, or a user that has the USE_SCHEMA privilege on the schema. | full_name (string) include_browse (boolean) |
| patch_api_2_1_unity_catalog_schemas_by_full_name | Updates a schema for a catalog. The caller must be the owner of the schema or a metastore admin. If the caller is a metastore admin, only the owner field can be changed in the update. If the name field must be updated, the caller must be a metastore admin or have the CREATE_SCHEMA privilege on the parent catalog. | full_name (string) data: { . comment (string) . enable_predictive_optimization . new_name (string) . owner (string) . properties (object) } (object) required |
| delete_api_2_1_unity_catalog_schemas_by_full_name | Deletes the specified schema from the parent catalog. The caller must be the owner of the schema or an owner of the parent catalog. | full_name (string) force (boolean) |
| get_api_2_1_unity_catalog_shares | Gets an array of data object shares from the metastore. The caller must be a metastore admin or the owner of the share. There is no guarantee of a specific ordering of the elements in the array. | max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_shares | Creates a new share for data objects. Data objects can be added after creation with update. The caller must be a metastore admin or have the CREATE_SHARE privilege on the metastore. | data: { . comment (string) . name (string) . storage_root (string) } (object) required |
| get_api_2_1_unity_catalog_shares_by_name | Gets a data object share from the metastore. The caller must be a metastore admin or the owner of the share. | name (string) include_shared_data (boolean) |
| patch_api_2_1_unity_catalog_shares_by_name | Updates the share with the changes and data objects in the request. The caller must be the owner of the share or a metastore admin. When the caller is a metastore admin, only the owner field can be updated. In the case the share name is changed, updateShare requires that the caller is the owner of the share and has the CREATE_SHARE privilege. If there are notebook files in the share, the storage_root field cannot be updated. For each table that is added through this method, the share owner must also have the SELECT privilege on the table. | name (string) data: { . comment (string) . new_name (string) . owner (string) . storage_root (string) . updates (array) } (object) required |
| delete_api_2_1_unity_catalog_shares_by_name | Deletes a data object share from the metastore. The caller must be an owner of the share. | name (string) |
| get_api_2_1_unity_catalog_shares_by_name_permissions | Gets the permissions for a data share from the metastore. The caller must be a metastore admin or the owner of the share. | name (string) max_results (integer) page_token (string) |
| patch_api_2_1_unity_catalog_shares_by_name_permissions | Updates the permissions for a data share in the metastore. The caller must be a metastore admin or an owner of the share. For new recipient grants, the user must also be the recipient owner or metastore admin. recipient revocations do not require additional privileges. | name (string) data: { . changes (array) . omit_permissions_list (boolean) } (object) required |
| get_api_2_1_unity_catalog_storage_credentials | Gets an array of storage credentials as StorageCredentialInfo objects. The array is limited to only those storage credentials the caller has permission to access. If the caller is a metastore admin, retrieval of credentials is unrestricted. There is no guarantee of a specific ordering of the elements in the array. | max_results (integer) page_token (string) |
| post_api_2_1_unity_catalog_storage_credentials | Creates a new storage credential. The caller must be a metastore admin or have the CREATE_STORAGE_CREDENTIAL privilege on the metastore. The request object must contain an AwsIamRole detailing the credentials of an IAM role. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy. To enable this credential, the external ID specified in the external_id field of the response object must be added to the IAM role's trust policy. | data: { . aws_iam_role . comment (string) . name (string) . read_only (boolean) . skip_validation (boolean) } (object) required |
| get_api_2_1_unity_catalog_storage_credentials_by_name | Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have some permission on the storage credential. | name (string) |
| patch_api_2_1_unity_catalog_storage_credentials_by_name | Updates a storage credential on the metastore. The caller must be the owner of the storage credential or a metastore admin. If the caller is a metastore admin, only the owner field can be changed. To prevent the confused deputy problem (https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), this role must specify an external ID in its trust policy. To enable this credential, the external ID specified in the external_id field of the response object must be added to the IAM role's trust policy. | name (string) data: { . aws_iam_role . comment (string) . force (boolean) . isolation_mode . new_name (string) . owner (string) . read_only (boolean) . skip_validation (boolean) } (object) required |
| delete_api_2_1_unity_catalog_storage_credentials_by_name | Deletes a storage credential from the metastore. The caller must be an owner of the storage credential. | name (string) force (boolean) |
| get_api_2_1_unity_catalog_table_summaries | Gets an array of summaries for tables for a schema and catalog within the metastore. The table summaries returned are either: summaries for tables within the current metastore and parent catalog and schema, when the user is a metastore admin, or: summaries for tables and schemas within the current metastore and parent catalog for which the user has ownership or the SELECT privilege on the table and ownership or the USE_SCHEMA privilege on the schema, provided that the user also has ownership or the USE_CATALOG privilege on the parent catalog. | catalog_name (string) required schema_name_pattern (string) table_name_pattern (string) max_results (integer) page_token (string) include_manifest_capabilities (boolean) |
| get_api_2_1_unity_catalog_tables | Gets an array of all tables for the current metastore under the parent catalog and schema. The caller must be a metastore admin or an owner of or have the SELECT privilege on the table. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. There is no guarantee of a specific ordering of the elements in the array. | catalog_name (string) required schema_name (string) required max_results (integer) page_token (string) omit_columns (boolean) omit_properties (boolean) omit_username (boolean) include_browse (boolean) include_manifest_capabilities (boolean) |
| post_api_2_1_unity_catalog_tables | Creates a new table in the specified catalog and schema. To create an external delta table, the caller must have the EXTERNAL_USE_SCHEMA privilege on the parent schema and the EXTERNAL_USE_LOCATION privilege on the external location. These privileges must always be granted explicitly, and cannot be inherited through ownership or ALL_PRIVILEGES. Standard UC permissions needed to create tables still apply: USE_CATALOG on the parent catalog or ownership of the parent catalog, and CREATE_TABLE and USE_SCHEMA on the parent schema or ownership of the parent schema. | data: { . catalog_name (string) . columns (array) . data_source_format . name (string) . properties (object) . schema_name (string) . storage_location (string) . table_type } (object) required |
| get_api_2_1_unity_catalog_tables_by_full_name | Gets a table from the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: - Be a metastore admin - Be the owner of the parent catalog - Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog - Have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table. | full_name (string) include_delta_metadata (boolean) include_browse (boolean) include_manifest_capabilities (boolean) |
| delete_api_2_1_unity_catalog_tables_by_full_name | Deletes a table from the specified parent catalog and schema. The caller must be the owner of the parent catalog, have the USE_CATALOG privilege on the parent catalog and be the owner of the parent schema, or be the owner of the table and have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | full_name (string) |
| get_api_2_1_unity_catalog_tables_by_full_name_exists | Checks whether a table exists in the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: - Be a metastore admin - Be the owner of the parent catalog - Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog - Have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table - Have the BROWSE privilege on the parent catalog and the parent schema. | full_name (string) |
| get_api_2_1_unity_catalog_tables_by_table_name_monitor | Gets a monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema. 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - SELECT privilege on the table. The returned information includes configuration values, as well as information on assets created by the monitor. Some information, e.g., the dashboard, may be filtered out if the caller is in a different workspace than where the monitor was created. | table_name (string) |
| post_api_2_1_unity_catalog_tables_by_table_name_monitor | Creates a new monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog, have USE_SCHEMA on the table's parent schema, and have SELECT access on the table 2. have USE_CATALOG on the table's parent catalog, be an owner of the table's parent schema, and have SELECT access on the table. 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Workspace assets, such as the dashboard, will be created in the workspace where this call was made. | table_name (string) data: { . assets_dir (string) . baseline_table_name (string) . custom_metrics (array) . inference_log . latest_monitor_failure_msg (string) . notifications . output_schema_name (string) . schedule . skip_builtin_dashboard (boolean) . slicing_exprs (array) . snapshot . time_series . warehouse_id (string) } (object) required |
| put_api_2_1_unity_catalog_tables_by_table_name_monitor | Updates a monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Additionally, the call must be made from the workspace where the monitor was created, and the caller must be the original creator of the monitor. | table_name (string) data: { . baseline_table_name (string) . custom_metrics (array) . dashboard_id (string) . inference_log . latest_monitor_failure_msg (string) . notifications . output_schema_name (string) . schedule . slicing_exprs (array) . snapshot . time_series } (object) required |
| delete_api_2_1_unity_catalog_tables_by_table_name_monitor | Deletes a monitor for the specified table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table. Additionally, the call must be made from the workspace where the monitor was created. Note that the metric tables and dashboard will not be deleted as part of this call; those assets must be cleaned up manually if desired. | table_name (string) |
| get_api_2_1_unity_catalog_tables_by_table_name_monitor_refreshes | Gets an array containing the history of the most recent refreshes (up to 25) for this table. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - SELECT privilege on the table. Additionally, the call must be made from the workspace where the monitor was created. | table_name (string) |
| post_api_2_1_unity_catalog_tables_by_table_name_monitor_refreshes | Queues a metric refresh on the monitor for the specified table. The refresh will execute in the background. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - be an owner of the table Additionally, the call must be made from the workspace where the monitor was c | table_name (string) |
| get_api_2_1_unity_catalog_tables_by_table_name_monitor_refreshes_by_refresh_id | Gets info about a specific monitor refresh using the given refresh ID. The caller must either: 1. be an owner of the table's parent catalog 2. have USE_CATALOG on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - USE_CATALOG on the table's parent catalog - USE_SCHEMA on the table's parent schema - SELECT privilege on the table. Additionally, the call must be made from the workspace where the monitor was created. | table_name (string) refresh_id (integer) |
| post_api_2_1_unity_catalog_temporary_service_credentials | Returns a set of temporary credentials generated using the specified service credential. The caller must be a metastore admin or have the metastore privilege ACCESS on the service credential. The temporary credentials consist of an access key ID, a secret access key, and a security token. | data: { . credential_name (string) } (object) required |
| post_api_2_1_unity_catalog_validate_credentials | Validates a credential. For service credentials (purpose is SERVICE), either the credential_name or the cloud-specific credential must be provided. For storage credentials (purpose is STORAGE), at least one of external_location_name and url needs to be provided. If only one of them is provided, it will be used for validation. If both are provided, the url will be used for validation, and external_location_name will be ignored when checking overlapping URLs. | data: { . aws_iam_role . credential_name (string) . external_location_name (string) . purpose . read_only (boolean) . url (string) } (object) required |
| post_api_2_1_unity_catalog_validate_storage_credentials | Validates a storage credential. At least one of external_location_name and url needs to be provided. If only one of them is provided, it will be used for validation. If both are provided, the url will be used for validation, and external_location_name will be ignored when checking overlapping URLs. Either the storage_credential_name or the cloud-specific credential must be provided. The caller must be a metastore admin, the storage credential owner, or have the CREATE_EXTERNAL_LOCATION privilege on the metastore and the storage credential. | data: { . aws_iam_role . external_location_name (string) . read_only (boolean) . storage_credential_name (string) . url (string) } (object) required |
| get_api_2_1_unity_catalog_volumes | Gets an array of volumes for the current metastore under the parent catalog and schema. The returned volumes are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the volumes. A regular user needs to be the owner or have the READ VOLUME privilege on the volume to receive the volumes in the response. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | catalog_name (string) required schema_name (string) required max_results (integer) page_token (string) include_browse (boolean) |
| post_api_2_1_unity_catalog_volumes | Creates a new volume. The user can create either an external volume or a managed volume. An external volume will be created in the specified external location, while a managed volume will be located in the default location which is specified by the parent schema, or the parent catalog, or the Metastore. For the volume creation to succeed, the user must satisfy the following conditions: - The caller must be a metastore admin, or be the owner of the parent catalog and schema, or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | data: { . catalog_name (string) . comment (string) . name (string) . schema_name (string) . storage_location (string) . volume_type } (object) required |
| get_api_2_1_unity_catalog_volumes_by_name | Gets a volume from the metastore for a specific catalog and schema. The caller must be a metastore admin or an owner of or have the READ VOLUME privilege on the volume. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | name (string) include_browse (boolean) |
| patch_api_2_1_unity_catalog_volumes_by_name | Updates the specified volume under the specified parent catalog and schema. The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. Currently only the name, the owner or the comment of the volume can be updated. | name (string) data: { . comment (string) . new_name (string) . owner (string) } (object) required |
| delete_api_2_1_unity_catalog_volumes_by_name | Deletes a volume from the specified parent catalog and schema. The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the USE_CATALOG privilege on the parent catalog and the USE_SCHEMA privilege on the parent schema. | name (string) |
| get_api_2_1_unity_catalog_workspace_bindings_catalogs_by_name | Gets workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog. | name (string) |
| patch_api_2_1_unity_catalog_workspace_bindings_catalogs_by_name | Updates workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog. | name (string) data: { . assign_workspaces (array) . unassign_workspaces (array) } (object) required |
| put_api_2_1_unity_catalog_workspaces_by_workspace_id_metastore | Creates a new metastore assignment. If an assignment for the same workspace_id exists, it will be overwritten by the new metastore_id and default_catalog_name. The caller must be an account admin. | workspace_id (integer) data: { . default_catalog_name (string) . metastore_id (string) } (object) required |
| patch_api_2_1_unity_catalog_workspaces_by_workspace_id_metastore | Updates a metastore assignment. This operation can be used to update metastore_id or default_catalog_name for a specified Workspace, if the Workspace is already assigned a metastore. The caller must be an account admin to update metastore_id; otherwise, the caller can be a Workspace admin. | workspace_id (integer) data: { . default_catalog_name (string) . metastore_id (string) } (object) required |
| delete_api_2_1_unity_catalog_workspaces_by_workspace_id_metastore | Deletes a metastore assignment. The caller must be an account administrator. | workspace_id (integer) metastore_id (string) required |
| post_api_2_2_jobs_create | Create a new job. | data: { . access_control_list (array) . budget_policy_id (string) . continuous . deployment . description (string) . edit_mode . email_notifications . environments (array) . format . git_source . health . job_clusters (array) . max_concurrent_runs (integer) . name (string) . notification_settings . parameters (array) . performance_target . queue . run_as . schedule . tags (object) . tasks (array) . timeout_seconds (integer) . trigger . webhook_notifications } (object) required |
| post_api_2_2_jobs_delete | Deletes a job. | data: { . job_id (integer) } (object) required |
| get_api_2_2_jobs_get | Retrieves the details for a single job. Large arrays in the results will be paginated when they exceed 100 elements. A request for a single job will return all properties for that job, and the first 100 elements of array properties tasks, job_clusters, environments and parameters. Use the next_page_token field to check for more results and pass its value as the page_token in subsequent requests. If any array properties have more than 100 elements, additional results will be returned on subsequent requests. | job_id (integer) required page_token (string) |
| get_api_2_2_jobs_list | Retrieves a list of jobs. | limit (integer) expand_tasks (boolean) name (string) page_token (string) |
| post_api_2_2_jobs_reset | Overwrite all settings for the given job. Use the Update endpoint (jobs/update) to update job settings partially. | data: { . job_id (integer) . new_settings } (object) required |
| post_api_2_2_jobs_run_now | Run a job and return the run_id of the triggered run. | data: { . idempotency_token (string) . job_id (integer) . job_parameters (object) . only (array) . performance_target . pipeline_params . queue } (object) required |
| post_api_2_2_jobs_runs_cancel | Cancels a job run or a task run. The run is canceled asynchronously, so it may still be running when this request completes. | data: { . run_id (integer) } (object) required |
| post_api_2_2_jobs_runs_cancel_all | Cancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started. | data: { . all_queued_runs (boolean) . job_id (integer) } (object) required |
| post_api_2_2_jobs_runs_delete | Deletes a non-active run. Returns an error if the run is active. | data: { . run_id (integer) } (object) required |
| get_api_2_2_jobs_runs_export | Export and retrieve the job run task. | run_id (integer) required views_to_export (string) |
| get_api_2_2_jobs_runs_get | Retrieves the metadata of a run. Large arrays in the results will be paginated when they exceed 100 elements. A request for a single run will return all properties for that run, and the first 100 elements of array properties tasks, job_clusters, job_parameters and repair_history. Use the next_page_token field to check for more results and pass its value as the page_token in subsequent requests. If any array properties have more than 100 elements, additional results will be returned on subsequent requests. | run_id (integer) required include_history (boolean) include_resolved_values (boolean) page_token (string) |
| get_api_2_2_jobs_runs_get_output | Retrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit call, you can use this endpoint to retrieve that value. Databricks restricts this API to returning the first 5 MB of the output. To return a larger result, you can store job results in a cloud storage service. This endpoint validates that the run_id parameter is valid and returns an HTTP status code 400 if the run_id parameter is invalid. Runs are automatically removed after 60 days. | run_id (integer) required |
| get_api_2_2_jobs_runs_list | List runs in descending order by start time. | job_id (integer) active_only (boolean) completed_only (boolean) limit (integer) run_type (string) expand_tasks (boolean) start_time_from (integer) start_time_to (integer) page_token (string) |
| post_api_2_2_jobs_runs_repair | Re-run one or more tasks. Tasks are re-run as part of the original job run. They use the current job and task settings, and can be viewed in the history for the original job run. | data: { . job_parameters (object) . latest_repair_id (integer) . performance_target . pipeline_params . rerun_all_failed_tasks (boolean) . rerun_dependent_tasks (boolean) . rerun_tasks (array) . run_id (integer) } (object) required |
| post_api_2_2_jobs_runs_submit | Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Runs submitted using this endpoint don’t display in the UI. Use the jobs/runs/get API to check the run state after the job is submitted. | data: { . access_control_list (array) . budget_policy_id (string) . email_notifications . environments (array) . git_source . health . idempotency_token (string) . notification_settings . queue . run_as . run_name (string) . tasks (array) . timeout_seconds (integer) . webhook_notifications } (object) required |
| post_api_2_2_jobs_update | Add, update, or remove specific settings of an existing job. Use the Reset endpoint (jobs/reset) to overwrite all job settings. | data: { . fields_to_remove (array) . job_id (integer) . new_settings } (object) required |
| post_api_3_0_mlflow_traces | Create a new trace within an experiment. A trace is a collection of spans that each represent individual operations that a model performed while processing a request. This can be done in two ways: 1. Start a trace and then end it later. In this case do not set status and execution_duration. The trace will be set to status = IN_PROGRESS and can then be ended with a call to the 'End a trace' API. 2. Create the trace after it has already completed. In this case set trace.trace_info.status and execution_duration. | data: { . trace } (object) required |
| post_api_3_0_mlflow_traces_delete_traces | Delete traces. There are two supported ways to do this: Case 1: max_timestamp_millis and max_traces may both be specified for time-based deletion. Traces are deleted from oldest to newest until all traces older than max_timestamp_millis have been deleted or max_traces traces have been deleted. Case 2: trace_ids may be specified to delete traces by their IDs. | data: { . experiment_id (string) . max_timestamp_millis (integer) . max_traces (integer) . trace_ids (array) } (object) required |
| post_api_3_0_mlflow_traces_search | Search for traces with filter and order by criteria. | data: { . filter (string) . locations (array) . max_results (integer) . order_by (array) . page_token (string) } (object) required |
| get_api_3_0_mlflow_traces_by_trace_id | Get the information for a trace. A trace is a collection of spans that each represent individual operations that a model performed while processing a request. | trace_id (string) |
| patch_api_3_0_mlflow_traces_by_trace_id | End an in-progress trace. | trace_id (string) data: { . trace . update_mask (string) } (object) required |
| post_api_3_0_mlflow_traces_by_trace_id_assessments | Create an assessment of a trace. An assessment records a human or machine e.g. LLM judge annotation used for training, evaluation, or monitoring of quality. An assessment is on an individual trace or span of that trace. The trace is the parent resource for an assessment. | trace_id (string) data: { . assessment } (object) required |
| get_api_3_0_mlflow_traces_by_trace_id_assessments_by_assessment_id | Get an assessment of a trace. An assessment records a human or machine e.g. LLM judge annotation used for training, evaluation, or monitoring of quality. An assessment is on an individual trace or span of that trace. The trace is the parent resource for an assessment. | trace_id (string) assessment_id (string) |
| patch_api_3_0_mlflow_traces_by_trace_id_assessments_by_assessment_id | Update an assessment of a trace. This API does not maintain version history of assessments. If you wish to maintain a version history, please use the Create an assessment of a trace API to create a new assessment with the updated information and set its overrides field to the existing assessment's ID. | trace_id (string) assessment_id (string) data: { . assessment . update_mask (string) } (object) required |
| delete_api_3_0_mlflow_traces_by_trace_id_assessments_by_assessment_id | Delete an assessment of a trace. | trace_id (string) assessment_id (string) |
| get_api_3_0_mlflow_traces_by_trace_id_credentials_for_data_download | Get credentials to download trace data. | trace_id (string) |
| get_api_3_0_mlflow_traces_by_trace_id_credentials_for_data_upload | Get credentials to upload trace data. | trace_id (string) |
| patch_api_3_0_mlflow_traces_by_trace_id_tags | Sets a tag on a trace. Tags are mutable and can be updated as desired. Tag keys should not be prefixed with 'mlflow.' as this is a reserved namespace for system tags. | trace_id (string) data: { . key (string) . value (string) } (object) required |
| delete_api_3_0_mlflow_traces_by_trace_id_tags | Delete a tag from a trace. | trace_id (string) key (string) |
| patch_api_3_0_rfa_destinations | Updates the access request destinations for the given securable. The caller must be a metastore admin, the owner of the securable, or a user that has the MANAGE privilege on the securable in order to assign destinations. Destinations cannot be updated for securables underneath schemas (tables, volumes, functions, and models). For these securable types, destinations are inherited from the parent securable. A maximum of 5 emails and 5 external notification destinations (Slack, Microsoft Teams, and generic webhooks) can be assigned per securable. | update_mask (string) required data: { . are_any_destinations_hidden (boolean) . destinations (array) . securable } (object) required |
| get_api_3_0_rfa_destinations_by_securable_type_by_full_name | Gets an array of access request destinations for the specified securable. Any caller can see URL destinations or the destinations on the metastore. Otherwise, only those with BROWSE permissions on the securable can see destinations. The supported securable types are: 'metastore', 'catalog', 'schema', 'table', 'external_location', 'connection', 'credential', 'function', 'registered_model', and 'volume'. | securable_type (string) full_name (string) |
| post_api_3_0_rfa_requests | Creates access requests for Unity Catalog permissions for a specified principal on a securable object. This Batch API can take in multiple principals, securable objects, and permissions as the input and returns the access request destinations for each. Principals must be unique across the API call. The supported securable types are: 'metastore', 'catalog', 'schema', 'table', 'external_location', 'connection', 'credential', 'function', 'registered_model', and 'volume'. | data: { . requests (array) } (object) required |
| post_serving_endpoints_by_name_invocations | Query a serving endpoint | name (string) data: { . client_request_id (string) . dataframe_records (array) . dataframe_split . extra_params (object) . input . inputs . instances (array) . max_tokens (integer) . messages (array) . n (integer) . prompt . stop (array) . stream (boolean) . temperature (number) . usage_context (object) } (object) required |
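Usage examples

The sketch below shows one way the Unity Catalog actions get_api_2_1_unity_catalog_schemas and get_api_2_1_unity_catalog_tables_by_full_name could be called through run_connection_action. It is a minimal illustration, not a definitive implementation: the connection name, catalog, and table identifiers are placeholders, and it assumes each action returns the Databricks response body as a plain dict.

```python
from abstra.connectors import run_connection_action

CONNECTION = "your_connection_name"  # placeholder: the name configured in the Abstra Console

# List schemas in a catalog; catalog_name is required, and the catalog "main" is hypothetical.
schemas = run_connection_action(
    connection_name=CONNECTION,
    action_name="get_api_2_1_unity_catalog_schemas",
    params={"catalog_name": "main", "max_results": 50},
)
print(schemas)

# Fetch a single table by its three-level name (catalog.schema.table); the name is hypothetical.
table = run_connection_action(
    connection_name=CONNECTION,
    action_name="get_api_2_1_unity_catalog_tables_by_full_name",
    params={"full_name": "main.default.sales", "include_delta_metadata": True},
)
print(table)
```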
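For actions whose parameters are listed as a data object, this sketch assumes the request body is passed under a "data" key in params, mirroring the parameters column. It triggers a job run with post_api_2_2_jobs_run_now and then reads the run's metadata with get_api_2_2_jobs_runs_get; the job ID and job parameter names are hypothetical, and the assumption that the run_now response exposes run_id as a dict key should be verified against your connector's actual return value.

```python
from abstra.connectors import run_connection_action

CONNECTION = "your_connection_name"  # placeholder

# Trigger a run of an existing job (job_id 123 and the job parameter are hypothetical).
run = run_connection_action(
    connection_name=CONNECTION,
    action_name="post_api_2_2_jobs_run_now",
    params={
        "data": {
            "job_id": 123,
            "job_parameters": {"environment": "staging"},
        }
    },
)

# Assumption: the action returns the Databricks response body, which includes run_id.
run_id = run["run_id"]

# Retrieve the metadata of the triggered run to inspect its state.
run_details = run_connection_action(
    connection_name=CONNECTION,
    action_name="get_api_2_2_jobs_runs_get",
    params={"run_id": run_id},
)
print(run_details)
```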
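Likewise, a serving endpoint can be queried with post_serving_endpoints_by_name_invocations. The endpoint name, message content, and generation settings below are hypothetical, and the chat-style payload is only one of the input shapes the action accepts (dataframe_records, inputs, instances, and prompt are alternatives).

```python
from abstra.connectors import run_connection_action

CONNECTION = "your_connection_name"  # placeholder

# Query a model serving endpoint with a chat-style payload (endpoint name is hypothetical).
response = run_connection_action(
    connection_name=CONNECTION,
    action_name="post_serving_endpoints_by_name_invocations",
    params={
        "name": "my-chat-endpoint",
        "data": {
            "messages": [{"role": "user", "content": "Summarize yesterday's sales."}],
            "max_tokens": 256,
            "temperature": 0.2,
        },
    },
)
print(response)
```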
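Read-style actions that take only path or query parameters, such as get_api_3_0_mlflow_traces_by_trace_id and patch_api_3_0_mlflow_traces_by_trace_id_tags, follow the same pattern with a flat params dict. The trace ID, tag key, and tag value below are placeholders.

```python
from abstra.connectors import run_connection_action

CONNECTION = "your_connection_name"  # placeholder

TRACE_ID = "tr-1234567890abcdef"  # hypothetical trace ID

# Fetch the information for a trace.
trace = run_connection_action(
    connection_name=CONNECTION,
    action_name="get_api_3_0_mlflow_traces_by_trace_id",
    params={"trace_id": TRACE_ID},
)
print(trace)

# Set a mutable tag on the trace; keys must not use the reserved 'mlflow.' prefix.
run_connection_action(
    connection_name=CONNECTION,
    action_name="patch_api_3_0_mlflow_traces_by_trace_id_tags",
    params={"trace_id": TRACE_ID, "data": {"key": "reviewed_by", "value": "data-team"}},
)
```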