As needed, you can insert custom SQL statements as part of the data import process. These custom SQL statements allow you to pre-filter the rows and columns of relational source data within the database, where performance is faster. This query method can also be used for wider operations on relational sources from within the application.
All queries are blindly executed. It is your responsibility to ensure that they are appropriate. Queries that modify or delete data, such as DROP or DELETE statements, are executed without warning. |
NOTE: Column names in custom SQL statements are case-sensitive. Case mismatches between your SQL statement and your datasource can cause jobs to fail. |
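For example, if a column is defined in lowercase in the database, referencing it in uppercase may cause failures on case-sensitive systems. Match the datasource exactly, as in this hypothetical query:
SELECT "id", "value" FROM "public"."my_table" |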
SQL statements must be valid for the syntax of the target relational system. Syntax examples are provided below.
Declared variables are not supported.
Function references such as:
UPPER(col) |
must be specified with an alias:
UPPER(col) as col_name |
When using custom SQL to read from a Hive view, the results of a nested function are saved to a temporary name, unless explicitly aliased.
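For example, the following hypothetical Hive query explicitly aliases a nested function result (the view and column names are illustrative):
SELECT UPPER(TRIM(`col`)) AS col_name FROM `mydb`.`my_view` |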
The following limitations apply to creating datasets from a single statement.
All single-statement SQL queries must begin with a SELECT keyword.
Selecting columns with the same name, even when using "*", is not supported and generates an ambiguous column name error.
Tip: You should use fully qualified column names or proper aliasing. See Column Aliasing below. |
Users are encouraged to provide the fully qualified path to the table being used. Example:
SELECT "id", "value" FROM "public"."my_table" |
These limitations apply to creating datasets using a sequence of multiple SQL statements.
NOTE: Use of multiple SQL statements must be enabled. See Enable Custom SQL Query. |
Repeatable: When using multiple statements, you must verify that the statements are repeatable without failure. These statements are run multiple times during validation, dataset creation, data preview, and opening the dataset in the Transformer page.
NOTE: To ensure repeatability, any creation or deletion of data in the database must occur before the final required SELECT statement. |
Line Termination: Each query must terminate with a semi-colon and a new line.
Validation: All statements are run immediately when validating or creating the dataset.
NOTE: No DROP or DELETE checking is done prior to statement execution. Statements are the responsibility of the user. |
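For example, the following hypothetical sequence is repeatable: the DELETE and INSERT statements refresh a staging table before the final SELECT, so the sequence can be run multiple times without failure (Oracle syntax; table names are illustrative):
DELETE FROM "SALES"."STAGING";
INSERT INTO "SALES"."STAGING" SELECT "ID", "AVAILABILITY" FROM "SALES"."TABLE_INVENTORY";
SELECT * FROM "SALES"."STAGING"; |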
Steps:
Locate the following setting:
Enable Custom SQL Query |
Setting | Description |
---|---|
enabled | Set to true to enable the ability to create datasets using customized SQL statements. By default, this feature is enabled. |
To use this feature, complete the following steps.
Steps:
Click the Preview icon to review the columns in the dataset.
Tip: You may wish to copy the database, table name, and column names to a text editor to facilitate generating your SQL statement. |
Click Create Dataset with SQL. Enter or paste your SQL statement.
Through the custom SQL interface, it is possible to enter SQL statements that can delete data, change table schemas, or otherwise corrupt the targeted database. Please use this feature with caution. |
Figure: Create Dataset with SQL dialog |
See the Examples section below.
To test the SQL, click Validate SQL. For details, see below.
To apply the SQL to the import process, click Create Dataset.
The customized source is added to the right panel. To re-edit, click Custom SQL.
Complete the other steps to define your imported dataset.
When the data is imported, it is altered or filtered based on your SQL statement.
If parameterization has been enabled, you can specify variables as part of your SQL statement. Suppose you had table names like the following:
publish_create_all_types_97912510
publish_create_all_types_97944183
publish_create_all_types_14202824 |
You can insert an inline variable as part of your custom SQL to capture all of these variations.
Figure: Insert variables in your custom SQL |
In the above, custom SQL has been added to match the first example table. When you highlight the varying value and click the icon, the highlighted value is specified as the default value for the variable. Provide a name for the variable, and click Save.
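For example, the following custom SQL matches the first table above; highlighting the numeric suffix and converting it to a variable makes 97912510 the default value:
SELECT * FROM "publish_create_all_types_97912510" |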
Through the Run Job page, you can specify overrides for the default value, so the same job definition can be used across all matching tables without much modification. For more information, see Run Job Page.
For more information on this feature, see Overview of Parameterization.
You can insert a timestamp parameter into your custom SQL. This parameter describes a timestamp format and matches timestamps relative to the start of the job at execution time.
NOTE: A SQL timestamp parameter only describes the formatting of a timestamp value. It cannot be used to describe actual values. For example, you cannot insert fixed values for the month to parameterize your input using this method. Instead, parameterize the input using multiple input variables, as described in the previous section. |
NOTE: Values for seconds in a SQL timestamp parameter are not supported. The finest supported granularity is at the minutes level. |
NOTE: When the dataset is created, the current date is used for comparison, instead of the job execution date. |
In the following example, the timestamp parameter has been specified as YYYY-MM-DD:
SELECT * FROM <YYYY-MM-DD> |
If the job executes on May 28th, 2019, then this parameter resolves as 2019-05-28, and the query gathers data from that table.
Figure: Insert timestamp parameter |
Steps:
Timestamp format: You can specify the format of the timestamp using supported characters.
Tip: The list and definition of available tokens is available in the help popover. |
Timestamp value: Choose whether the timestamp parameter is to match the exact start time or a time relative to the start of the job.
Tip: You can use relative timestamp parameters to collect data from the preceding week, for example. This relative timestamp allows you to execute weekly jobs for the preceding week's data. |
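For example, a weekly job might use custom SQL like the following, with the timestamp parameter configured to resolve to seven days before the job start (assuming the parameter token can be embedded in a table name; the table name is illustrative):
SELECT * FROM sales_<YYYY-MM-DD> |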
You cannot create a SQL-based dataset if any of your SQL statements do not pass validation. Errors must be corrected in the SQL or in the underlying database.
For single-statement queries, the SELECT statement is planned, which includes syntactical validation; the statement is not executed. Validation should be a matter of a few seconds.
For multi-line statements, all non-SELECT statements are planned and executed. The final SELECT statement is only planned.
NOTE: For multi-line SQL statements, validation may take longer to complete if the non-SELECT statements take significant time to execute. |
Here are some basic SQL examples to get started.
Your SQL statements must be valid for the syntax expected by the target relational system. In particular, object delimiters may vary between systems.
NOTE: The proper syntax depends on your database system. Please consult the documentation for your product for details. |
Tip: Although some relational systems do not require object delimiters around column names, it is recommended that you add them to all applicable objects. |
Tip: Avoid using column type identifiers in your SQL statements. |
NOTE: In the following sections, Oracle syntax is used in the examples. Please modify the examples for your target system. |
Object delimiter: double-quote
Example syntax:
Double quotes are required around database and table names and are not required around column names.
SELECT "column1","column2" FROM "databaseName"."tableName" |
Object delimiter: none
Example syntax:
SELECT "column1","column2" FROM "databaseName"."tableName" |
Object delimiter: double-quote
Example syntax:
Double quotes are required around database, table, and column names.
SELECT "column1","column2" FROM "databaseName"."tableName" |
Object delimiter: double-quote
Example syntax:
Double quotes are required around database and table names and are not required around column names.
SELECT "column1","column2" FROM "databaseName"."tableName" |
Object delimiter: backtick
Example syntax:
SELECT `column1`,`column2` FROM `databaseName`.`tableName` |
AWS Glue follows Hive syntax. See previous.
If your SELECT statement results in multiple columns with the same name, such as when selecting all columns in a JOIN, the query fails to validate or fails on execution. In these cases, columns must be properly aliased.
NOTE: This error will be caught either during validation or during dataset import. |
For example, in the following JOIN, the EMPLOYEE and DEPARTMENT tables both contain a column named department_id.
SELECT * FROM EMPLOYEE INNER JOIN DEPARTMENT ON (department_id = department_id) |
The above query generates an error. Columns must be properly aliased, as in the following:
SELECT e.id, e.department_id, e.first_name, e.last_name, d.department_name FROM EMPLOYEE e INNER JOIN DEPARTMENT d ON (e.department_id = d.department_id) |
SELECT * FROM "DB1"."table2" |
SELECT lastName,firstName FROM "DB1"."table2 |
SELECT lastName,firstName FROM "DB1"."table2" WHERE invoiceAmt > 10000 |
The following example uses a multi-line SQL sequence:
NOTE: Multi-line SQL support is considered an advanced use case. This feature must be enabled. |
The following example inserts values into the TABLE_INVENTORY table and then queries it. It uses Oracle syntax:
INSERT INTO "SALES"."TABLE_INVENTORY" ("ID", "AVAILABILITY") VALUES (1, 10); SELECT * FROM "SALES"."TABLE_INVENTORY" |
When you import a column from Snowflake that contains time zone information, you may see the following behavior: a source value such as 2020-10-11 14:13:14 CEST is imported as 2020-10-11 12:13:14, which has been auto-converted to UTC.
Solution:
For a timestamp with a time zone, you must wrap your reference to it like the following:
TO_TIMESTAMP(CONVERT_TIMEZONE('UTC', <timestamp_column_or_function>)) |
Suppose your query was the following:
SELECT *, CURRENT_TIMESTAMP() AS current_time FROM MY_TABLE; |
To address this issue, the query needs to be rewritten as follows:
SELECT *, TO_TIMESTAMP(CONVERT_TIMEZONE('UTC', CURRENT_TIMESTAMP())) AS current_time FROM MY_TABLE; |
When the above wrapper function is applied, the data is imported, validated, and published as expected.
If you run a job on a 0-row dataset that is sourced from Snowflake, the job execution fails.
Solution:
The solution is to union the empty dataset with a row of empty values. Example:
SELECT col1, col2 FROM empty_table
UNION ALL
SELECT '' AS col1, '' AS col2; |
The inserted row values prevent the job from failing.