
Google BigQuery

Read and write data to Google BigQuery.


Authentication

This connector uses OAuth 2.0 authentication.

Info: Set up your connection in the Abstra Console before using it in your workflows.

How to use

Using the Smart Chat

Execute the action "CHOOSE_ONE_ACTION_BELOW" from my connector "YOUR_CONNECTOR_NAME" using the params "PARAMS_HERE".

Using the Web Editor

from abstra.connectors import run_connection_action

result = run_connection_action(
    connection_name="your_connection_name",
    action_name="your_action_name",
    params={
        "param1": "value1",
        "param2": "value2",
    },
)

Available Actions

This connector provides 47 actions:

Action: datasets_delete
Purpose: Deletes the dataset specified by the datasetId value. Before you can delete a dataset, you must delete all its tables, either manually or by specifying deleteContents. Immediately after deletion, you can create another dataset with the same name.
Parameters:
datasetId (string) required
deleteContents (boolean)
projectId (string) required
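As a sketch of how these parameters combine in a call, the example below builds params for datasets_delete. The connection name and IDs are placeholders, not values from this page; the actual invocation is shown commented out.

```python
# Hypothetical example: params for datasets_delete. "bigquery" and the IDs
# are placeholders; deleteContents is set because a dataset that still
# contains tables cannot otherwise be deleted.
params = {
    "projectId": "my-project",    # required
    "datasetId": "scratch_data",  # required
    "deleteContents": True,       # optional; also delete contained tables
}

# from abstra.connectors import run_connection_action
# run_connection_action(connection_name="bigquery",
#                       action_name="datasets_delete", params=params)
```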
Action: datasets_get
Purpose: Returns the dataset specified by datasetId.
Parameters:
accessPolicyVersion (integer)
datasetId (string) required
datasetView (string)
projectId (string) required
Action: datasets_insert
Purpose: Creates a new empty dataset.
Parameters:
accessPolicyVersion (integer)
projectId (string) required
access (array)
creationTime (string)
datasetReference: {
. datasetId (string)
. projectId (string)
} (object)
defaultCollation (string)
defaultEncryptionConfiguration: {
. kmsKeyName (string)
} (object)
defaultPartitionExpirationMs (string)
defaultRoundingMode (string)
defaultTableExpirationMs (string)
description (string)
etag (string)
externalCatalogDatasetOptions: {
. defaultStorageLocationUri (string)
. parameters (object)
} (object)
externalDatasetReference: {
. connection (string)
. externalSource (string)
} (object)
friendlyName (string)
id (string)
isCaseInsensitive (boolean)
kind (string)
labels (object)
lastModifiedTime (string)
linkedDatasetMetadata: {
. linkState (string)
} (object)
linkedDatasetSource: {
. sourceDataset (object)
} (object)
location (string)
maxTimeTravelHours (string)
resourceTags (object)
restrictions: {
. type (string)
} (object)
satisfiesPzi (boolean)
satisfiesPzs (boolean)
selfLink (string)
storageBillingModel (string)
tags (array)
type (string)
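A minimal sketch of datasets_insert params, assuming a connection is already configured: only projectId is required at the top level, but the nested datasetReference is what names the new dataset. All identifiers below are placeholders.

```python
# Hypothetical datasets_insert params. The datasetReference object names
# the dataset being created; location and description are optional.
params = {
    "projectId": "my-project",
    "datasetReference": {
        "projectId": "my-project",
        "datasetId": "analytics",
    },
    "location": "US",
    "description": "Example dataset created via the connector",
}
```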
Action: datasets_list
Purpose: Lists all datasets in the specified project to which the user has been granted the READER dataset role.
Parameters:
all (boolean)
filter (string)
maxResults (integer)
pageToken (string)
projectId (string) required
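Because listing actions return results in pages, callers typically loop on pageToken. The helper below is a sketch of that loop; fetch_page is a hypothetical stand-in for invoking the connector's datasets_list action with the given params.

```python
def collect_datasets(fetch_page, project_id, page_size=50):
    """Accumulate datasets across pages by following nextPageToken.

    fetch_page stands in for calling the datasets_list action, e.g.
    run_connection_action(..., action_name="datasets_list", params=params).
    """
    datasets, token = [], None
    while True:
        params = {"projectId": project_id, "maxResults": page_size}
        if token:
            params["pageToken"] = token
        page = fetch_page(params)
        datasets.extend(page.get("datasets", []))
        token = page.get("nextPageToken")
        if not token:
            return datasets
```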
Action: datasets_patch
Purpose: Updates information in an existing dataset. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource. This method supports RFC5789 patch semantics.
Parameters:
accessPolicyVersion (integer)
datasetId (string) required
projectId (string) required
updateMode (string)
access (array)
creationTime (string)
datasetReference: {
. datasetId (string)
. projectId (string)
} (object)
defaultCollation (string)
defaultEncryptionConfiguration: {
. kmsKeyName (string)
} (object)
defaultPartitionExpirationMs (string)
defaultRoundingMode (string)
defaultTableExpirationMs (string)
description (string)
etag (string)
externalCatalogDatasetOptions: {
. defaultStorageLocationUri (string)
. parameters (object)
} (object)
externalDatasetReference: {
. connection (string)
. externalSource (string)
} (object)
friendlyName (string)
id (string)
isCaseInsensitive (boolean)
kind (string)
labels (object)
lastModifiedTime (string)
linkedDatasetMetadata: {
. linkState (string)
} (object)
linkedDatasetSource: {
. sourceDataset (object)
} (object)
location (string)
maxTimeTravelHours (string)
resourceTags (object)
restrictions: {
. type (string)
} (object)
satisfiesPzi (boolean)
satisfiesPzs (boolean)
selfLink (string)
storageBillingModel (string)
tags (array)
type (string)
Action: datasets_undelete
Purpose: Undeletes a dataset that is within the time travel window, based on datasetId. If a time is specified, the dataset version deleted at that time is undeleted; otherwise the last live version is undeleted.
Parameters:
datasetId (string) required
projectId (string) required
deletionTime (string)
Action: datasets_update
Purpose: Updates information in an existing dataset. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource.
Parameters:
accessPolicyVersion (integer)
datasetId (string) required
projectId (string) required
updateMode (string)
access (array)
creationTime (string)
datasetReference: {
. datasetId (string)
. projectId (string)
} (object)
defaultCollation (string)
defaultEncryptionConfiguration: {
. kmsKeyName (string)
} (object)
defaultPartitionExpirationMs (string)
defaultRoundingMode (string)
defaultTableExpirationMs (string)
description (string)
etag (string)
externalCatalogDatasetOptions: {
. defaultStorageLocationUri (string)
. parameters (object)
} (object)
externalDatasetReference: {
. connection (string)
. externalSource (string)
} (object)
friendlyName (string)
id (string)
isCaseInsensitive (boolean)
kind (string)
labels (object)
lastModifiedTime (string)
linkedDatasetMetadata: {
. linkState (string)
} (object)
linkedDatasetSource: {
. sourceDataset (object)
} (object)
location (string)
maxTimeTravelHours (string)
resourceTags (object)
restrictions: {
. type (string)
} (object)
satisfiesPzi (boolean)
satisfiesPzs (boolean)
selfLink (string)
storageBillingModel (string)
tags (array)
type (string)
Action: jobs_cancel
Purpose: Requests that a job be cancelled. This call will return immediately, and the client will need to poll for the job status to see if the cancel completed successfully. Cancelled jobs may still incur costs.
Parameters:
jobId (string) required
location (string)
projectId (string) required
Action: jobs_delete
Purpose: Requests the deletion of the metadata of a job. This call returns when the job's metadata is deleted.
Parameters:
jobId (string) required
location (string)
projectId (string) required
Action: jobs_get
Purpose: Returns information about a specific job. Job information is available for a six month period after creation. Requires that you're the person who ran the job, or have the Is Owner project role.
Parameters:
jobId (string) required
location (string)
projectId (string) required
Action: jobs_get_query_results
Purpose: RPC to get the results of a query job.
Parameters:
formatOptions.timestampOutputFormat (string)
formatOptions.useInt64Timestamp (boolean)
jobId (string) required
location (string)
maxResults (integer)
pageToken (string)
projectId (string) required
startIndex (string)
timeoutMs (integer)
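Since a query job may still be running when results are requested, a common pattern is to poll jobs_get_query_results until the response reports completion. The helper below sketches that loop; get_results is a hypothetical stand-in for invoking the action, and jobComplete is the completion flag in BigQuery's response.

```python
def wait_for_query_results(get_results, project_id, job_id, max_polls=10):
    """Poll jobs_get_query_results until the job reports completion.

    get_results stands in for calling the connector action with params;
    each poll waits server-side up to timeoutMs before responding.
    """
    params = {"projectId": project_id, "jobId": job_id, "timeoutMs": 10_000}
    for _ in range(max_polls):
        response = get_results(params)
        if response.get("jobComplete"):
            return response
    raise TimeoutError(f"job {job_id} did not complete after {max_polls} polls")
```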
Action: jobs_insert
Purpose: Starts a new asynchronous job. This API has two different kinds of endpoint URIs, as this method supports a variety of use cases. The Metadata URI is used for most interactions, as it accepts the job configuration directly. The Upload URI is ONLY for the case when you're sending both a load job configuration and a data stream together. In this case, the Upload URI accepts the job configuration and the data as two distinct multipart MIME parts.
Parameters:
projectId (string) required
configuration: {
. copy (object)
. dryRun (boolean)
. extract (object)
. jobTimeoutMs (string)
. jobType (string)
. labels (object)
. load (object)
. query (object)
. reservation (string)
} (object)
etag (string)
id (string)
jobCreationReason: {
. code (string)
} (object)
jobReference: {
. jobId (string)
. location (string)
. projectId (string)
} (object)
kind (string)
principal_subject (string)
selfLink (string)
statistics: {
. completionRatio (number)
. copy (object)
. creationTime (string)
. dataMaskingStatistics (object)
. edition (string)
. endTime (string)
. extract (object)
. finalExecutionDurationMs (string)
. load (object)
. numChildJobs (string)
. parentJobId (string)
. query (object)
. quotaDeferments (array)
. reservationUsage (array)
. reservation_id (string)
. rowLevelSecurityStatistics (object)
. scriptStatistics (object)
. sessionInfo (object)
. startTime (string)
. totalBytesProcessed (string)
. totalSlotMs (string)
. transactionInfo (object)
} (object)
status: {
. errorResult (object)
. errors (array)
. state (string)
} (object)
user_email (string)
Action: jobs_list
Purpose: Lists all jobs that you started in the specified project. Job information is available for a six month period after creation. The job list is sorted in reverse chronological order, by job creation time. Requires the Can View project role, or the Is Owner project role if you set the allUsers property.
Parameters:
allUsers (boolean)
maxCreationTime (string)
maxResults (integer)
minCreationTime (string)
pageToken (string)
parentJobId (string)
projectId (string) required
projection (string)
stateFilter (string)
Action: jobs_query
Purpose: Runs a BigQuery SQL query synchronously and returns query results if the query completes within a specified timeout.
Parameters:
projectId (string) required
connectionProperties (array)
continuous (boolean)
createSession (boolean)
defaultDataset: {
. datasetId (string)
. projectId (string)
} (object)
destinationEncryptionConfiguration: {
. kmsKeyName (string)
} (object)
dryRun (boolean)
formatOptions: {
. timestampOutputFormat (string)
. useInt64Timestamp (boolean)
} (object)
jobCreationMode (string)
jobTimeoutMs (string)
kind (string)
labels (object)
location (string)
maxResults (integer)
maximumBytesBilled (string)
parameterMode (string)
preserveNulls (boolean)
query (string)
queryParameters (array)
requestId (string)
reservation (string)
timeoutMs (integer)
useLegacySql (boolean)
useQueryCache (boolean)
writeIncrementalResults (boolean)
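As a sketch of a parameterized synchronous query, the params below combine query, parameterMode, and queryParameters. The queryParameters shape follows BigQuery's QueryParameter structure; the project, table, and parameter names are placeholders.

```python
# Hypothetical jobs_query params: a named-parameter standard-SQL query.
params = {
    "projectId": "my-project",
    "query": "SELECT name FROM `my-project.analytics.users` WHERE age >= @min_age",
    "useLegacySql": False,   # named parameters require standard SQL
    "parameterMode": "NAMED",
    "queryParameters": [
        {
            "name": "min_age",
            "parameterType": {"type": "INT64"},
            "parameterValue": {"value": "18"},
        }
    ],
    "maxResults": 100,
}
```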
Action: models_delete
Purpose: Deletes the model specified by modelId from the dataset.
Parameters:
datasetId (string) required
modelId (string) required
projectId (string) required
Action: models_get
Purpose: Gets the specified model resource by model ID.
Parameters:
datasetId (string) required
modelId (string) required
projectId (string) required
Action: models_list
Purpose: Lists all models in the specified dataset. Requires the READER dataset role. After retrieving the list of models, you can get information about a particular model by calling the models.get method.
Parameters:
datasetId (string) required
maxResults (integer)
pageToken (string)
projectId (string) required
Action: models_patch
Purpose: Patches specific fields in the specified model.
Parameters:
datasetId (string) required
modelId (string) required
projectId (string) required
bestTrialId (string)
creationTime (string)
defaultTrialId (string)
description (string)
encryptionConfiguration: {
. kmsKeyName (string)
} (object)
etag (string)
expirationTime (string)
featureColumns (array)
friendlyName (string)
hparamSearchSpaces: {
. activationFn (object)
. batchSize (object)
. boosterType (object)
. colsampleBylevel (object)
. colsampleBynode (object)
. colsampleBytree (object)
. dartNormalizeType (object)
. dropout (object)
. hiddenUnits (object)
. l1Reg (object)
. l2Reg (object)
. learnRate (object)
. maxTreeDepth (object)
. minSplitLoss (object)
. minTreeChildWeight (object)
. numClusters (object)
. numFactors (object)
. numParallelTree (object)
. optimizer (object)
. subsample (object)
. treeMethod (object)
. walsAlpha (object)
} (object)
hparamTrials (array)
labelColumns (array)
labels (object)
lastModifiedTime (string)
location (string)
modelReference: {
. datasetId (string)
. modelId (string)
. projectId (string)
} (object)
modelType (string)
optimalTrialIds (array)
remoteModelInfo: {
. connection (string)
. endpoint (string)
. maxBatchingRows (string)
. remoteModelVersion (string)
. remoteServiceType (string)
. speechRecognizer (string)
} (object)
trainingRuns (array)
transformColumns (array)
Action: projects_get_service_account
Purpose: RPC to get the service account for a project used for interactions with Google Cloud KMS.
Parameters:
projectId (string) required
Action: projects_list
Purpose: RPC to list projects to which the user has been granted any project role. Users of this method are encouraged to consider the Resource Manager API (https://cloud.google.com/resource-manager/docs/), which provides the underlying data for this method and has more capabilities.
Parameters:
maxResults (integer)
pageToken (string)
Action: routines_delete
Purpose: Deletes the routine specified by routineId from the dataset.
Parameters:
datasetId (string) required
projectId (string) required
routineId (string) required
Action: routines_get
Purpose: Gets the specified routine resource by routine ID.
Parameters:
datasetId (string) required
projectId (string) required
readMask (string)
routineId (string) required
Action: routines_get_iam_policy
Purpose: Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.
Parameters:
resource (string) required
options: {
. requestedPolicyVersion (integer)
} (object)
Action: routines_insert
Purpose: Creates a new routine in the dataset.
Parameters:
datasetId (string) required
projectId (string) required
arguments (array)
creationTime (string)
dataGovernanceType (string)
definitionBody (string)
description (string)
determinismLevel (string)
etag (string)
externalRuntimeOptions: {
. containerCpu (number)
. containerMemory (string)
. maxBatchingRows (string)
. runtimeConnection (string)
. runtimeVersion (string)
} (object)
importedLibraries (array)
language (string)
lastModifiedTime (string)
pythonOptions: {
. entryPoint (string)
. packages (array)
} (object)
remoteFunctionOptions: {
. connection (string)
. endpoint (string)
. maxBatchingRows (string)
. userDefinedContext (object)
} (object)
returnTableType: {
. columns (array)
} (object)
returnType: {
. arrayElementType (object)
. rangeElementType (object)
. structType (object)
. typeKind (string)
} (object)
routineReference: {
. datasetId (string)
. projectId (string)
. routineId (string)
} (object)
routineType (string)
securityMode (string)
sparkOptions: {
. archiveUris (array)
. connection (string)
. containerImage (string)
. fileUris (array)
. jarUris (array)
. mainClass (string)
. mainFileUri (string)
. properties (object)
. pyFileUris (array)
. runtimeVersion (string)
} (object)
strictMode (boolean)
Action: routines_list
Purpose: Lists all routines in the specified dataset. Requires the READER dataset role.
Parameters:
datasetId (string) required
filter (string)
maxResults (integer)
pageToken (string)
projectId (string) required
readMask (string)
Action: routines_set_iam_policy
Purpose: Sets the access control policy on the specified resource. Replaces any existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors.
Parameters:
resource (string) required
policy: {
. auditConfigs (array)
. bindings (array)
. etag (string)
. version (integer)
} (object)
updateMask (string)
Action: routines_test_iam_permissions
Purpose: Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error. Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may 'fail open' without warning.
Parameters:
resource (string) required
permissions (array)
Action: routines_update
Purpose: Updates information in an existing routine. The update method replaces the entire Routine resource.
Parameters:
datasetId (string) required
projectId (string) required
routineId (string) required
arguments (array)
creationTime (string)
dataGovernanceType (string)
definitionBody (string)
description (string)
determinismLevel (string)
etag (string)
externalRuntimeOptions: {
. containerCpu (number)
. containerMemory (string)
. maxBatchingRows (string)
. runtimeConnection (string)
. runtimeVersion (string)
} (object)
importedLibraries (array)
language (string)
lastModifiedTime (string)
pythonOptions: {
. entryPoint (string)
. packages (array)
} (object)
remoteFunctionOptions: {
. connection (string)
. endpoint (string)
. maxBatchingRows (string)
. userDefinedContext (object)
} (object)
returnTableType: {
. columns (array)
} (object)
returnType: {
. arrayElementType (object)
. rangeElementType (object)
. structType (object)
. typeKind (string)
} (object)
routineReference: {
. datasetId (string)
. projectId (string)
. routineId (string)
} (object)
routineType (string)
securityMode (string)
sparkOptions: {
. archiveUris (array)
. connection (string)
. containerImage (string)
. fileUris (array)
. jarUris (array)
. mainClass (string)
. mainFileUri (string)
. properties (object)
. pyFileUris (array)
. runtimeVersion (string)
} (object)
strictMode (boolean)
Action: row_access_policies_batch_delete
Purpose: Deletes provided row access policies.
Parameters:
datasetId (string) required
projectId (string) required
tableId (string) required
force (boolean)
policyIds (array)
Action: row_access_policies_delete
Purpose: Deletes a row access policy.
Parameters:
datasetId (string) required
force (boolean)
policyId (string) required
projectId (string) required
tableId (string) required
Action: row_access_policies_get
Purpose: Gets the specified row access policy by policy ID.
Parameters:
datasetId (string) required
policyId (string) required
projectId (string) required
tableId (string) required
Action: row_access_policies_get_iam_policy
Purpose: Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.
Parameters:
resource (string) required
options: {
. requestedPolicyVersion (integer)
} (object)
Action: row_access_policies_insert
Purpose: Creates a row access policy.
Parameters:
datasetId (string) required
projectId (string) required
tableId (string) required
creationTime (string)
etag (string)
filterPredicate (string)
grantees (array)
lastModifiedTime (string)
rowAccessPolicyReference: {
. datasetId (string)
. policyId (string)
. projectId (string)
. tableId (string)
} (object)
Action: row_access_policies_list
Purpose: Lists all row access policies on the specified table.
Parameters:
datasetId (string) required
pageSize (integer)
pageToken (string)
projectId (string) required
tableId (string) required
Action: row_access_policies_test_iam_permissions
Purpose: Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error. Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may 'fail open' without warning.
Parameters:
resource (string) required
permissions (array)
Action: row_access_policies_update
Purpose: Updates a row access policy.
Parameters:
datasetId (string) required
policyId (string) required
projectId (string) required
tableId (string) required
creationTime (string)
etag (string)
filterPredicate (string)
grantees (array)
lastModifiedTime (string)
rowAccessPolicyReference: {
. datasetId (string)
. policyId (string)
. projectId (string)
. tableId (string)
} (object)
Action: tabledata_insert_all
Purpose: Streams data into BigQuery one record at a time without needing to run a load job.
Parameters:
datasetId (string) required
projectId (string) required
tableId (string) required
ignoreUnknownValues (boolean)
kind (string)
rows (array)
skipInvalidRows (boolean)
templateSuffix (string)
traceId (string)
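A sketch of the rows shape for tabledata_insert_all: each element wraps the record in a "json" field, with an optional "insertId" for best-effort deduplication. This follows BigQuery's insertAll request structure; the table and field names are placeholders.

```python
# Hypothetical tabledata_insert_all params: stream two records into a table.
params = {
    "projectId": "my-project",
    "datasetId": "analytics",
    "tableId": "events",
    "rows": [
        {"insertId": "evt-001", "json": {"user": "ada", "action": "login"}},
        {"insertId": "evt-002", "json": {"user": "alan", "action": "logout"}},
    ],
    "skipInvalidRows": True,  # insert valid rows even if some are malformed
}
```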
Action: tabledata_list
Purpose: Lists the content of a table in rows.
Parameters:
datasetId (string) required
formatOptions.timestampOutputFormat (string)
formatOptions.useInt64Timestamp (boolean)
maxResults (integer)
pageToken (string)
projectId (string) required
selectedFields (string)
startIndex (string)
tableId (string) required
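tabledata_list returns rows positionally rather than as name/value maps: each row is an object of the form {"f": [{"v": ...}, ...]}, ordered to match the table schema. The helper below, a sketch under that assumption, pairs positions with schema field names to recover ordinary dicts.

```python
def rows_to_dicts(schema_fields, rows):
    """Convert tabledata_list's positional row format into dicts.

    schema_fields is the "fields" list from the table schema; rows is the
    "rows" list from a tabledata_list response.
    """
    names = [field["name"] for field in schema_fields]
    return [
        {name: cell["v"] for name, cell in zip(names, row["f"])}
        for row in rows
    ]
```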
Action: tables_delete
Purpose: Deletes the table specified by tableId from the dataset. If the table contains data, all the data will be deleted.
Parameters:
datasetId (string) required
projectId (string) required
tableId (string) required
Action: tables_get
Purpose: Gets the specified table resource by table ID. This method does not return the data in the table; it only returns the table resource, which describes the structure of this table.
Parameters:
datasetId (string) required
projectId (string) required
selectedFields (string)
tableId (string) required
view (string)
Action: tables_get_iam_policy
Purpose: Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.
Parameters:
resource (string) required
options: {
. requestedPolicyVersion (integer)
} (object)
Action: tables_insert
Purpose: Creates a new, empty table in the dataset.
Parameters:
datasetId (string) required
projectId (string) required
biglakeConfiguration: {
. connectionId (string)
. fileFormat (string)
. storageUri (string)
. tableFormat (string)
} (object)
cloneDefinition: {
. baseTableReference (object)
. cloneTime (string)
} (object)
clustering: {
. fields (array)
} (object)
creationTime (string)
defaultCollation (string)
defaultRoundingMode (string)
description (string)
encryptionConfiguration: {
. kmsKeyName (string)
} (object)
etag (string)
expirationTime (string)
externalCatalogTableOptions: {
. connectionId (string)
. parameters (object)
. storageDescriptor (object)
} (object)
externalDataConfiguration: {
. autodetect (boolean)
. avroOptions (object)
. bigtableOptions (object)
. compression (string)
. connectionId (string)
. csvOptions (object)
. dateFormat (string)
. datetimeFormat (string)
. decimalTargetTypes (array)
. fileSetSpecType (string)
. googleSheetsOptions (object)
. hivePartitioningOptions (object)
. ignoreUnknownValues (boolean)
. jsonExtension (string)
. jsonOptions (object)
. maxBadRecords (integer)
. metadataCacheMode (string)
. objectMetadata (string)
. parquetOptions (object)
. referenceFileSchemaUri (string)
. schema (object)
. sourceFormat (string)
. sourceUris (array)
. timeFormat (string)
. timeZone (string)
. timestampFormat (string)
} (object)
friendlyName (string)
id (string)
kind (string)
labels (object)
lastModifiedTime (string)
location (string)
managedTableType (string)
materializedView: {
. allowNonIncrementalDefinition (boolean)
. enableRefresh (boolean)
. lastRefreshTime (string)
. maxStaleness (string)
. query (string)
. refreshIntervalMs (string)
} (object)
materializedViewStatus: {
. lastRefreshStatus (object)
. refreshWatermark (string)
} (object)
maxStaleness (string)
model: {
. modelOptions (object)
. trainingRuns (array)
} (object)
numActiveLogicalBytes (string)
numActivePhysicalBytes (string)
numBytes (string)
numCurrentPhysicalBytes (string)
numLongTermBytes (string)
numLongTermLogicalBytes (string)
numLongTermPhysicalBytes (string)
numPartitions (string)
numPhysicalBytes (string)
numRows (string)
numTimeTravelPhysicalBytes (string)
numTotalLogicalBytes (string)
numTotalPhysicalBytes (string)
partitionDefinition: {
. partitionedColumn (array)
} (object)
rangePartitioning: {
. field (string)
. range (object)
} (object)
replicas (array)
requirePartitionFilter (boolean)
resourceTags (object)
restrictions: {
. type (string)
} (object)
schema: {
. fields (array)
. foreignTypeInfo (object)
} (object)
selfLink (string)
snapshotDefinition: {
. baseTableReference (object)
. snapshotTime (string)
} (object)
streamingBuffer: {
. estimatedBytes (string)
. estimatedRows (string)
. oldestEntryTime (string)
} (object)
tableConstraints: {
. foreignKeys (array)
. primaryKey (object)
} (object)
tableReference: {
. datasetId (string)
. projectId (string)
. tableId (string)
} (object)
tableReplicationInfo: {
. replicatedSourceLastRefreshTime (string)
. replicationError (object)
. replicationIntervalMs (string)
. replicationStatus (string)
. sourceTable (object)
} (object)
timePartitioning: {
. expirationMs (string)
. field (string)
. requirePartitionFilter (boolean)
. type (string)
} (object)
type (string)
view: {
. foreignDefinitions (array)
. privacyPolicy (object)
. query (string)
. useExplicitColumnNames (boolean)
. useLegacySql (boolean)
. userDefinedFunctionResources (array)
} (object)
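A minimal sketch of tables_insert params, with placeholder identifiers: tableReference names the new table and schema.fields defines its columns, here with optional day-based time partitioning on a timestamp column.

```python
# Hypothetical tables_insert params: create a partitioned table.
params = {
    "projectId": "my-project",
    "datasetId": "analytics",
    "tableReference": {
        "projectId": "my-project",
        "datasetId": "analytics",
        "tableId": "events",
    },
    "schema": {
        "fields": [
            {"name": "user", "type": "STRING", "mode": "REQUIRED"},
            {"name": "ts", "type": "TIMESTAMP", "mode": "NULLABLE"},
        ]
    },
    "timePartitioning": {"type": "DAY", "field": "ts"},
}
```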
Action: tables_list
Purpose: Lists all tables in the specified dataset. Requires the READER dataset role.
Parameters:
datasetId (string) required
maxResults (integer)
pageToken (string)
projectId (string) required
Action: tables_patch
Purpose: Updates information in an existing table. The update method replaces the entire table resource, whereas the patch method only replaces fields that are provided in the submitted table resource. This method supports RFC5789 patch semantics.
Parameters:
autodetect_schema (boolean)
datasetId (string) required
projectId (string) required
tableId (string) required
biglakeConfiguration: {
. connectionId (string)
. fileFormat (string)
. storageUri (string)
. tableFormat (string)
} (object)
cloneDefinition: {
. baseTableReference (object)
. cloneTime (string)
} (object)
clustering: {
. fields (array)
} (object)
creationTime (string)
defaultCollation (string)
defaultRoundingMode (string)
description (string)
encryptionConfiguration: {
. kmsKeyName (string)
} (object)
etag (string)
expirationTime (string)
externalCatalogTableOptions: {
. connectionId (string)
. parameters (object)
. storageDescriptor (object)
} (object)
externalDataConfiguration: {
. autodetect (boolean)
. avroOptions (object)
. bigtableOptions (object)
. compression (string)
. connectionId (string)
. csvOptions (object)
. dateFormat (string)
. datetimeFormat (string)
. decimalTargetTypes (array)
. fileSetSpecType (string)
. googleSheetsOptions (object)
. hivePartitioningOptions (object)
. ignoreUnknownValues (boolean)
. jsonExtension (string)
. jsonOptions (object)
. maxBadRecords (integer)
. metadataCacheMode (string)
. objectMetadata (string)
. parquetOptions (object)
. referenceFileSchemaUri (string)
. schema (object)
. sourceFormat (string)
. sourceUris (array)
. timeFormat (string)
. timeZone (string)
. timestampFormat (string)
} (object)
friendlyName (string)
id (string)
kind (string)
labels (object)
lastModifiedTime (string)
location (string)
managedTableType (string)
materializedView: {
. allowNonIncrementalDefinition (boolean)
. enableRefresh (boolean)
. lastRefreshTime (string)
. maxStaleness (string)
. query (string)
. refreshIntervalMs (string)
} (object)
materializedViewStatus: {
. lastRefreshStatus (object)
. refreshWatermark (string)
} (object)
maxStaleness (string)
model: {
. modelOptions (object)
. trainingRuns (array)
} (object)
numActiveLogicalBytes (string)
numActivePhysicalBytes (string)
numBytes (string)
numCurrentPhysicalBytes (string)
numLongTermBytes (string)
numLongTermLogicalBytes (string)
numLongTermPhysicalBytes (string)
numPartitions (string)
numPhysicalBytes (string)
numRows (string)
numTimeTravelPhysicalBytes (string)
numTotalLogicalBytes (string)
numTotalPhysicalBytes (string)
partitionDefinition: {
. partitionedColumn (array)
} (object)
rangePartitioning: {
. field (string)
. range (object)
} (object)
replicas (array)
requirePartitionFilter (boolean)
resourceTags (object)
restrictions: {
. type (string)
} (object)
schema: {
. fields (array)
. foreignTypeInfo (object)
} (object)
selfLink (string)
snapshotDefinition: {
. baseTableReference (object)
. snapshotTime (string)
} (object)
streamingBuffer: {
. estimatedBytes (string)
. estimatedRows (string)
. oldestEntryTime (string)
} (object)
tableConstraints: {
. foreignKeys (array)
. primaryKey (object)
} (object)
tableReference: {
. datasetId (string)
. projectId (string)
. tableId (string)
} (object)
tableReplicationInfo: {
. replicatedSourceLastRefreshTime (string)
. replicationError (object)
. replicationIntervalMs (string)
. replicationStatus (string)
. sourceTable (object)
} (object)
timePartitioning: {
. expirationMs (string)
. field (string)
. requirePartitionFilter (boolean)
. type (string)
} (object)
type (string)
view: {
. foreignDefinitions (array)
. privacyPolicy (object)
. query (string)
. useExplicitColumnNames (boolean)
. useLegacySql (boolean)
. userDefinedFunctionResources (array)
} (object)
Action: tables_set_iam_policy
Purpose: Sets the access control policy on the specified resource. Replaces any existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED errors.
Parameters:
resource (string) required
policy: {
. auditConfigs (array)
. bindings (array)
. etag (string)
. version (integer)
} (object)
updateMask (string)
Action: tables_test_iam_permissions
Purpose: Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error. Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may 'fail open' without warning.
Parameters:
resource (string) required
permissions (array)
Action: tables_update
Purpose: Updates information in an existing table. The update method replaces the entire Table resource, whereas the patch method only replaces fields that are provided in the submitted Table resource.
Parameters:
autodetect_schema (boolean)
datasetId (string) required
projectId (string) required
tableId (string) required
biglakeConfiguration: {
. connectionId (string)
. fileFormat (string)
. storageUri (string)
. tableFormat (string)
} (object)
cloneDefinition: {
. baseTableReference (object)
. cloneTime (string)
} (object)
clustering: {
. fields (array)
} (object)
creationTime (string)
defaultCollation (string)
defaultRoundingMode (string)
description (string)
encryptionConfiguration: {
. kmsKeyName (string)
} (object)
etag (string)
expirationTime (string)
externalCatalogTableOptions: {
. connectionId (string)
. parameters (object)
. storageDescriptor (object)
} (object)
externalDataConfiguration: {
. autodetect (boolean)
. avroOptions (object)
. bigtableOptions (object)
. compression (string)
. connectionId (string)
. csvOptions (object)
. dateFormat (string)
. datetimeFormat (string)
. decimalTargetTypes (array)
. fileSetSpecType (string)
. googleSheetsOptions (object)
. hivePartitioningOptions (object)
. ignoreUnknownValues (boolean)
. jsonExtension (string)
. jsonOptions (object)
. maxBadRecords (integer)
. metadataCacheMode (string)
. objectMetadata (string)
. parquetOptions (object)
. referenceFileSchemaUri (string)
. schema (object)
. sourceFormat (string)
. sourceUris (array)
. timeFormat (string)
. timeZone (string)
. timestampFormat (string)
} (object)
friendlyName (string)
id (string)
kind (string)
labels (object)
lastModifiedTime (string)
location (string)
managedTableType (string)
materializedView: {
. allowNonIncrementalDefinition (boolean)
. enableRefresh (boolean)
. lastRefreshTime (string)
. maxStaleness (string)
. query (string)
. refreshIntervalMs (string)
} (object)
materializedViewStatus: {
. lastRefreshStatus (object)
. refreshWatermark (string)
} (object)
maxStaleness (string)
model: {
. modelOptions (object)
. trainingRuns (array)
} (object)
numActiveLogicalBytes (string)
numActivePhysicalBytes (string)
numBytes (string)
numCurrentPhysicalBytes (string)
numLongTermBytes (string)
numLongTermLogicalBytes (string)
numLongTermPhysicalBytes (string)
numPartitions (string)
numPhysicalBytes (string)
numRows (string)
numTimeTravelPhysicalBytes (string)
numTotalLogicalBytes (string)
numTotalPhysicalBytes (string)
partitionDefinition: {
. partitionedColumn (array)
} (object)
rangePartitioning: {
. field (string)
. range (object)
} (object)
replicas (array)
requirePartitionFilter (boolean)
resourceTags (object)
restrictions: {
. type (string)
} (object)
schema: {
. fields (array)
. foreignTypeInfo (object)
} (object)
selfLink (string)
snapshotDefinition: {
. baseTableReference (object)
. snapshotTime (string)
} (object)
streamingBuffer: {
. estimatedBytes (string)
. estimatedRows (string)
. oldestEntryTime (string)
} (object)
tableConstraints: {
. foreignKeys (array)
. primaryKey (object)
} (object)
tableReference: {
. datasetId (string)
. projectId (string)
. tableId (string)
} (object)
tableReplicationInfo: {
. replicatedSourceLastRefreshTime (string)
. replicationError (object)
. replicationIntervalMs (string)
. replicationStatus (string)
. sourceTable (object)
} (object)
timePartitioning: {
. expirationMs (string)
. field (string)
. requirePartitionFilter (boolean)
. type (string)
} (object)
type (string)
view: {
. foreignDefinitions (array)
. privacyPolicy (object)
. query (string)
. useExplicitColumnNames (boolean)
. useLegacySql (boolean)
. userDefinedFunctionResources (array)
} (object)