This section describes how Cloud Dataprep by TRIFACTA® INC. interacts with your BigQuery tables.
Uses of BigQuery
Cloud Dataprep by TRIFACTA INC. can use BigQuery for the following tasks:
- Create datasets by reading from BigQuery tables.
- Write data to BigQuery.
NOTE: If you are reading data from BigQuery and writing job results for it back to BigQuery, both locations must be in the same geographic region.
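As a sketch of the region requirement above, a pre-flight check might compare the locations of the source and destination datasets before a job is run. The helper below is hypothetical (Cloud Dataprep performs its own validation); it only illustrates that location identifiers such as "US", "EU", or "asia-northeast1" must refer to the same region:

```python
# Hypothetical pre-flight check, not part of Cloud Dataprep:
# a job that reads from BigQuery and writes results back to BigQuery
# must use datasets in the same geographic region.
def regions_match(source_location: str, destination_location: str) -> bool:
    """Return True if two BigQuery dataset locations refer to the same region.

    Location identifiers (e.g. "US", "EU", "asia-northeast1") are
    compared case-insensitively.
    """
    return source_location.strip().lower() == destination_location.strip().lower()

print(regions_match("US", "us"))   # same region
print(regions_match("US", "EU"))   # different regions: the job is invalid
```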
Before You Begin Using BigQuery
- Your BigQuery administrator must enable BigQuery for your Cloud Dataprep by TRIFACTA INC. project.
- Your BigQuery administrator should provide the locations where datasets can be stored within BigQuery, along with access to those locations.
- Users should know where shared data is located and where personal data can be saved without interfering with or confusing other users.
NOTE: Cloud Dataprep by TRIFACTA INC. does not modify source data in BigQuery. Datasets sourced from BigQuery are read without modification from their source locations.
For more information on how data types are converted to and from BigQuery sources, see BigQuery Data Type Conversions.
Support for CMEK
Customer-managed encryption keys (CMEK) are supported, and their use is transparent to the user. For more information, see https://cloud.google.com/kms/docs/cmek.
Reading from Tables in BigQuery
You can create a dataset from a table stored in BigQuery.
- Standard SQL
NOTE: Standard SQL syntax is supported. Legacy SQL syntax is not supported.
- Nested tables are supported.
- Partitioned tables are supported, but these must include a schema.
- Partitioning filters are not supported.
NOTE: Reading from external tables or from tables without a schema is not supported.
NOTE: Cloud Dataprep by TRIFACTA INC. only supports native BigQuery tables and views. Schema information is required. Cloud Dataprep by TRIFACTA INC. does not support BigQuery sources that reference data stored in Google Suite.
Creating datasets with custom SQL
This feature is available in the following product editions:
- Cloud Dataprep Standard by TRIFACTA INC.
- Cloud Dataprep Premium by TRIFACTA INC.
You can create your datasets by generating custom SQL SELECT statements on your BigQuery tables.
Tip: By pre-filtering your datasets using custom SQL, you can reduce the volume of data that is transferred from the database, which can significantly improve import performance.
For more information, see Create Dataset with SQL.
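As an illustration of the pre-filtering tip above, a custom SQL statement might select only the columns and rows needed. The project, dataset, table, and column names below are placeholders, not names from your environment:

```sql
-- Standard SQL only; legacy SQL syntax is not supported.
-- Projecting specific columns and filtering rows at the source
-- reduces the volume of data transferred on import.
SELECT
  order_id,
  customer_id,
  order_total
FROM `my-project.sales_dataset.orders`
WHERE order_date >= '2020-01-01';
```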
Reading from other projects
If you have read access to other projects, you can read from BigQuery tables that are associated with those projects. You must have read access to any table from which you are reading.
For more information, see BigQuery Browser.
Writing to BigQuery
You can write datasets to BigQuery as part of your job definition.
NOTE: Object and Array data types are written back to BigQuery as string values.
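One way to picture the note above is JSON serialization: nested Object and Array values arrive in BigQuery as plain strings. The sketch below uses Python's json module purely as an illustration; the exact string formatting Cloud Dataprep produces may differ:

```python
import json

# Illustration only: Object and Array values are written back to
# BigQuery as string values. Serializing nested values to JSON text
# approximates the effect.
row = {"order_id": 1001, "tags": ["rush", "gift"], "details": {"color": "red"}}

published = {
    key: json.dumps(value) if isinstance(value, (dict, list)) else value
    for key, value in row.items()
}

print(published)  # nested values are now plain strings; scalars are unchanged
```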
NOTE: BigQuery does not support destinations with a dot (.) in the name.
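The naming restriction above can be checked before configuring a publishing action. The validator below is a hypothetical helper, not part of Cloud Dataprep; it only encodes the documented rule that a destination name must not contain a dot:

```python
# Hypothetical helper: reject BigQuery destination names that contain
# a dot (.), which BigQuery does not support as a destination name.
def is_valid_destination_name(name: str) -> bool:
    return bool(name) and "." not in name

print(is_valid_destination_name("clean_orders"))   # acceptable
print(is_valid_destination_name("clean.orders"))   # rejected: contains a dot
```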
For more information on creating a publishing action to BigQuery, see Run Job Page.
Writing to other projects
If you have write access to other projects, you can write to BigQuery tables that are associated with those projects. You must have write access to any table to which you are writing.
You can specify the target table as part of the job specification. See Run Job Page.