
As needed, you can insert custom SQL statements as part of the data import process. These custom SQL statements allow you to pre-filter the rows and columns of relational source data within the database, where performance is faster. This query method can also be used for wider operations on relational sources from within the application.

Limitations

General

Warning

All queries are blindly executed. It is your responsibility to ensure that they are appropriate. Queries like DELETE and DROP can destroy data in the database. Please use caution.

Info

NOTE: Column names in custom SQL statements are case-sensitive. Case mismatches between your SQL statements and your datasource can cause jobs to fail.

 

  • SQL statements are stored as part of the query instance for the object. If the same query is used by multiple users over private connections, each user must enter the shared SQL individually.
  • SQL statements must be valid for the syntax of the target relational system. Syntax examples are provided below.

  • If you modify the custom SQL statement when reading from a source, all samples generated based on the previous SQL are invalidated.
  • Declared variables are not supported. 

  • When using custom SQL to read from a Hive view, the results of a nested function are saved to a temporary name, unless explicitly aliased. 

      • If aliases are not used, the temporary column names can cause jobs to fail, on Spark in particular.
      • For more information, see Using Hive.
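For illustration, aliasing the result of a nested function gives the output column a stable, predictable name. The sketch below uses SQLite from Python as a stand-in; the table and data are hypothetical, but the aliasing principle applies equally to Hive views.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (raw TEXT)")
conn.execute("INSERT INTO names VALUES ('  Alice  ')")

# Without an explicit alias, the result column is named after the whole
# expression, which downstream systems may replace with a temporary name.
# The AS clause gives it a stable name.
cur = conn.execute("SELECT upper(trim(raw)) AS clean_name FROM names")
col_name = cur.description[0][0]
value = cur.fetchone()[0]
print(col_name, value)  # clean_name ALICE
```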

Single Statement

The following limitations apply to creating datasets from a single statement. 

  1. Selecting columns with the same name, even with "*", is not supported and generates an ambiguous column name error. 

    Tip

    Tip: You should use fully qualified column names or proper aliasing. See Column Aliasing below.

  2. Users are encouraged to provide a fully qualified path to the table being used. Example:

    Code Block
    SELECT "id", "value" FROM "public"."my_table"
  3. You should use proper escaping in SQL.
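For example, standard SQL escapes a single quote inside a string literal by doubling it, and delimits identifiers with double quotes. A minimal sketch using SQLite from Python (table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "my_table" ("id" INTEGER, "value" TEXT)')
conn.execute('INSERT INTO "my_table" VALUES (1, ?)', ("O'Brien",))

# The single quote in O'Brien is escaped by doubling it ('') inside
# the string literal; identifiers are delimited with double quotes.
sql = """SELECT "id", "value" FROM "my_table" WHERE "value" = 'O''Brien'"""
row = conn.execute(sql).fetchone()
print(row)  # (1, "O'Brien")
```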

Multi-Statement 

These limitations apply to creating datasets using a sequence of multiple SQL statements.

Info

NOTE: Use of multiple SQL statements must be enabled. See Enable Custom SQL Query.

  1. Repeatable: When using multi-statements, you must verify that the statements are repeatable without failure. These statements are run multiple times: during validation, dataset creation, data preview, and when the dataset is opened in the Transformer page.

    Info

    NOTE: To ensure repeatability, any creation or deletion of data in the database must occur before the final required SELECT statement.

  2. Line Termination: Each query must terminate with a semi-colon and a new line.

  3. Validation: All statements are run immediately when the dataset is validated or created. 

    Info

    NOTE: No DROP or DELETE checking is done prior to statement execution. Statements are the responsibility of the user.

  4. SELECT requirement: In a multi-statement execution, the last statement must be a SELECT statement.
  5. Database transactions: All statements are run in a single transaction. In most dialects (vendors), DDL statements cannot be run within a transaction and may be automatically committed by the driver.
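The rules above can be sketched as follows, using SQLite from Python as a stand-in. The DROP/CREATE pattern keeps the sequence repeatable across the multiple runs described in rule 1, every statement ends with a semicolon, and all data changes precede the final SELECT. Table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A repeatable multi-statement sequence: DROP TABLE IF EXISTS makes
# re-running it produce the same result every time.
setup = """
DROP TABLE IF EXISTS staging;
CREATE TABLE staging (id INTEGER, amount INTEGER);
INSERT INTO staging VALUES (1, 500), (2, 15000);
"""
final_select = "SELECT id, amount FROM staging WHERE amount > 10000;"

for _ in range(2):  # validation and import may each run the sequence
    conn.executescript(setup)
    rows = conn.execute(final_select).fetchall()

print(rows)  # [(2, 15000)]
```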

Enable

Steps:

  1. D s config
  2. Locate the following setting:

    Code Block
    Enable custom SQL Query


    Setting: enabled

    Description: Set to true to enable the SQL pushdown feature. By default, this feature is enabled.

Use

To use custom SQL, complete the following steps.

Steps:

  1. In the Library page, click Import Data.
  2. In the Import Data page, select a connection. 
  3. Within your source, locate the table from which you wish to import. Do not select the table.
  4. Click the Preview icon to review the columns in the dataset.

    Tip

    Tip: You may wish to copy the database, table name, and column names to a text editor to facilitate generating your SQL statement.

  5. Click Create Dataset with SQL. Enter or paste your SQL statement.

    Warning

    Through the custom SQL interface, it is possible to enter SQL statements that can delete data, change table schemas, or otherwise corrupt the targeted database. Please use this feature with caution.



    Figure: Create Dataset with SQL dialog

     

    1. See Examples below.

    2. To test the SQL, click Validate SQL. For details, see below.

    3. To apply the SQL to the import process, click Create Dataset.

  6. The customized source is added to the right panel. To re-edit, click Custom SQL.

  7. Complete the other steps to define your imported dataset. 

  8. When the data is imported, it is altered or filtered based on your SQL statement. 

    1. After dataset creation, you can modify the SQL, if needed. See Dataset Details Page.

Create with Variables

If parameterization has been enabled, you can specify variables as part of your SQL statement. Suppose you had table names like the following:

Code Block
publish_create_all_types_97912510
publish_create_all_types_97944183
publish_create_all_types_14202824

You can insert an inline variable as part of your custom SQL to capture all of these variations. 

Figure: Insert variables in your custom SQL

In the above, custom SQL has been added to match the first example table. Highlight the value and click the icon to set the highlighted value as the variable's default. Provide a name for the variable, and click Save.

Through the Run Job page, you can specify overrides for the default value, so the same job definition can be used across all matching tables without much modification. For more information, see Run Job Page.
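The variable resolution described above can be sketched as plain string templating. This is only an illustration of how a default and an override resolve into the final SQL; the actual substitution is performed by the application, and the variable name here is hypothetical.

```python
# Hypothetical sketch of inline-variable resolution in a custom SQL
# statement. "suffix" is the variable: its default captures the first
# example table, and an override targets another matching table.
sql_template = 'SELECT * FROM "publish_create_all_types_{suffix}"'

default_value = "97912510"   # value highlighted when the variable was created
override_value = "14202824"  # override supplied on the Run Job page

default_sql = sql_template.format(suffix=default_value)
override_sql = sql_template.format(suffix=override_value)
print(default_sql)
print(override_sql)
```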

For more information on this feature, see Overview of Parameterization.

Create with timestamp parameter

You can insert a timestamp parameter into your custom SQL. These parameters are used to describe timestamp formats for matching timestamps relative to the start of the job at the time of execution. 


Info

NOTE: A SQL timestamp parameter only describes the formatting of a timestamp value. It cannot be used to describe actual values. For example, you cannot insert fixed values for the month to parameterize your input using this method. Instead, parameterize the input using multiple input variables, as described in the previous section.

Info

NOTE: Values for seconds in a SQL timestamp parameter are not supported. The finest supported granularity is at the minutes level.

Info

NOTE: When the dataset is created, the current date is used for comparison, instead of the job execution date.

In the following example, the timestamp parameter has been specified as YYYY-MM-DD:

Code Block
SELECT * FROM <YYYY-MM-DD> 

If the job executes on May 28th, 2019, then this parameter resolves as 2019-05-28 and gathers data from that table.
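As a sketch of the resolution logic (Python is used here only for illustration), an exact parameter formats the job start time directly, while a relative parameter applies an offset first. The seven-day offset below is a hypothetical example.

```python
from datetime import datetime, timedelta

# Assume the job starts on May 28th, 2019 (hypothetical job-start time).
job_start = datetime(2019, 5, 28)

# Exact: the parameter resolves to the job start time in YYYY-MM-DD form.
exact = job_start.strftime("%Y-%m-%d")

# Relative: e.g. seven days before the job start, for weekly jobs.
relative = (job_start - timedelta(days=7)).strftime("%Y-%m-%d")

print(exact)     # 2019-05-28
print(relative)  # 2019-05-21
```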

Figure: Insert timestamp parameter

Steps:

  1. Click the Clock icon in the custom SQL dialog.
  2. Timestamp format: You can specify the format of the timestamp using supported characters. 

    Tip

    Tip: The list and definition of available tokens is available in the help popover.

  3. Timestamp value: Choose whether the timestamp parameter is to match the exact start time or a time relative to the start of the job.

    Tip

    Tip: You can use relative timestamp parameters to collect data from the preceding week, for example. This relative timestamp allows you to execute weekly jobs for the preceding week's data.

  4. To indicate that the timestamps are from a timezone different from the system timezone, click Change.
  5. To save the specified timestamp parameter, click Save.

SQL Validation

You cannot create a SQL-based dataset if any of your SQL statements do not pass validation. Errors must be corrected in the SQL or in the underlying database.

  • All SELECT statements are planned, which includes syntactical validation. However, these statements are not executed. Validation should be a matter of a few seconds.

  • For multi-statement queries, all non-SELECT statements are planned and executed. The final SELECT statement is only planned.

    Info

    NOTE: For multi-statement SQL, validation may take longer to complete if the non-SELECT statements require significant time to execute.

Examples

Here are some basic SQL examples to get started.

Basic Syntax

Your SQL statements must be valid for the syntax expected by the target relational system. In particular, object delimiters may vary between systems. 

Info

NOTE: The proper syntax depends on your database system. Please consult the documentation for your product for details.


Tip

Tip: Although some relational systems do not require object delimiters around column names, it is recommended that you add them to all applicable objects.

Tip

Tip: Avoid using column type identifiers (e.g. int) and other SQL keywords as object names. Some systems may generate invalid SQL errors.
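For instance, a column literally named int can usually be referenced only when it is delimited, and some systems reject such names entirely. A minimal sketch using SQLite from Python (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "int" is a type keyword; quoting it as an identifier keeps the
# statement unambiguous. Avoid such names where possible.
conn.execute('CREATE TABLE "orders" ("int" INTEGER, "total" INTEGER)')
conn.execute('INSERT INTO "orders" VALUES (7, 100)')
row = conn.execute('SELECT "int", "total" FROM "orders"').fetchone()
print(row)  # (7, 100)
```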

Oracle syntax

Object delimiter: double-quote

Example syntax:

Double quotes are required around database and table names but not around column names.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"

SQL Server syntax

Object delimiter: none

Example syntax:

Code Block
SELECT column1,column2 FROM databaseName.tableName


PostgreSQL syntax

Object delimiter: double-quote

Example syntax:

Double quotes are required around database, table, and column names.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"


Teradata syntax

Object delimiter: double-quote

Example syntax:

Double quotes are required around database and table names but not around column names.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"

Hive syntax

Object delimiter: backtick

Example syntax:

Code Block
SELECT `column1`,`column2` FROM `databaseName`.`tableName`

AWS Glue syntax

AWS Glue follows Hive syntax. See previous.


Info

NOTE: In the following sections, Oracle syntax is used in the examples. Please modify the examples for your target system.


Column Aliasing

If your SELECT statement results in multiple columns with the same name, as when selecting all columns in a JOIN, the query fails to validate or fails on execution. In these cases, columns must be properly aliased.

Info

NOTE: This error is caught either during validation or during dataset import.

For example, in the following JOIN, the EMPLOYEE and DEPARTMENT tables each have a column named department_id:

Code Block
SELECT * FROM EMPLOYEE INNER JOIN DEPARTMENT ON (department_id = department_id)

The above query generates an error. Columns must be properly aliased, as in the following:

Code Block
SELECT e.id, e.department_id, e.first_name, e.last_name, d.department_name FROM EMPLOYEE e INNER JOIN DEPARTMENT d ON (e.department_id = d.department_id)
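The aliased query can be verified with a quick sketch, using SQLite from Python as a stand-in for the target system. Every output column has a unique, fully qualified name, so no ambiguity error can arise. The sample rows are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (id INTEGER, department_id INTEGER, "
             "first_name TEXT, last_name TEXT)")
conn.execute("CREATE TABLE DEPARTMENT (department_id INTEGER, "
             "department_name TEXT)")
conn.execute("INSERT INTO EMPLOYEE VALUES (1, 10, 'Ada', 'Lovelace')")
conn.execute("INSERT INTO DEPARTMENT VALUES (10, 'Engineering')")

# Qualified, aliased columns: the two department_id columns are
# disambiguated by the table aliases e and d.
sql = ("SELECT e.id, e.department_id, e.first_name, e.last_name, "
       "d.department_name FROM EMPLOYEE e INNER JOIN DEPARTMENT d "
       "ON (e.department_id = d.department_id)")
row = conn.execute(sql).fetchone()
print(row)  # (1, 10, 'Ada', 'Lovelace', 'Engineering')
```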

Collect Whole Table

Code Block
SELECT * FROM "DB1"."table2"

Filter Columns

Code Block
SELECT lastName,firstName FROM "DB1"."table2"

Filter Rows

Code Block
SELECT lastName,firstName FROM "DB1"."table2" WHERE invoiceAmt > 10000

Multi-Statement Example

The following example uses a multi-statement SQL sequence:

Info

NOTE: Multi-statement SQL support is considered an advanced use case. This feature must be enabled. See Enable Custom SQL Query.

The following example inserts values into the TABLE_INVENTORY table and then queries it, using Oracle syntax:

Code Block
INSERT INTO "SALES"."TABLE_INVENTORY" ("ID", "AVAILABILITY") VALUES (1, 10);
SELECT * FROM "SALES"."TABLE_INVENTORY"