NOTE: Column names in custom SQL statements are case-sensitive. A case mismatch between your SQL statement and your datasource can cause jobs to fail.
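For example, using a hypothetical table and column: if the source column was created as "OrderId", referencing it in a different case, such as "orderid", may cause the job to fail on case-sensitive systems. A matching reference looks like this:

Code Block
SELECT "OrderId" FROM "public"."orders"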


  • SQL statements are stored as part of the query instance for the object. If the same query is used by multiple users over private connections, the SQL cannot be shared automatically; each user must enter it individually.
  • SQL statements must be valid for the syntax of the target relational system. Syntax examples are provided below.
  • If you modify the custom SQL statement when reading from a source, all samples generated based on the previous SQL are invalidated.
  • Declared variables are not supported. 

  • When using custom SQL to read from a Hive view, the results of a nested function are saved under a temporary name unless explicitly aliased.
    • If aliases are not used, the temporary column names can cause jobs to fail, particularly on Spark.
    • For more information, see Using Hive.
  • If you are using custom SQL to query an AWS Glue metadata store, you cannot apply the LIMIT keyword. For more information, see Enable AWS Glue Access.
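The Hive aliasing guidance above can be illustrated as follows (the database, table, and column names here are hypothetical); the explicit alias replaces the auto-generated temporary column name:

Code Block
SELECT `id`, UPPER(TRIM(`name`)) AS `clean_name` FROM `salesDb`.`customers`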


The following limitations apply to creating datasets from a single statement. 

  1. All single-statement SQL queries must begin with a SELECT keyword.

  2. Selecting columns with the same name, even with "*", is not supported and generates an ambiguous column name error. 


    Tip: You should use fully qualified column names or proper aliasing. See Column Aliasing below.

  3. Users are encouraged to provide the fully qualified path to the table being used. Example:

    Code Block
    SELECT "id", "value" FROM "public"."my_table"

  4. You should use proper escaping in your SQL, including the delimiters around object names and quoting of string literals.
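The aliasing tip above can be sketched as follows (the tables and columns are hypothetical); fully qualified names with aliases prevent the ambiguous column name error when two selected columns share a name:

Code Block
SELECT "o"."id" AS "order_id", "c"."id" AS "customer_id" FROM "public"."orders" AS "o" JOIN "public"."customers" AS "c" ON "o"."customer_id" = "c"."id"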



NOTE: Use of multiple SQL statements must be enabled. See Enable Custom SQL Query.


  1. Repeatable: When using multiple statements, you must verify that the statements are repeatable without failure. These statements are run multiple times: during validation, dataset creation, data preview, and when the dataset is opened in the Transformer page.


    NOTE: To ensure repeatability, any creation or deletion of data in the database must occur before the final required SELECT statement.

  2. Line Termination: Each statement must terminate with a semicolon and a new line.

  3. Validation: All statements are run immediately when the dataset is validated or created. 


    NOTE: No DROP or DELETE checking is done prior to statement execution. Statements are the responsibility of the user.

  4. SELECT requirement: In a multi-statement execution, the last statement must be a SELECT statement.
  5. Database transactions: All statements are run in a transaction. DDL statements in most dialects (vendors) can't be run within a transaction and might be automatically committed by the driver.
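Under the rules above, a repeatable multi-statement sequence might look like the following sketch (the table and column names are hypothetical): any creation or deletion of data happens before the final SELECT, each statement ends with a semicolon and a new line, and the last statement is a SELECT:

Code Block
DROP TABLE IF EXISTS "tmp_recent_orders";
CREATE TABLE "tmp_recent_orders" AS
SELECT "id", "value" FROM "public"."orders" WHERE "order_date" >= '2020-01-01';
SELECT "id", "value" FROM "tmp_recent_orders";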


  1. In the Library page, click Import Data.
  2. In the Import Data page, select a relational connection or Hive connection. 
    1. Hive and relational connections must be enabled and created.
      1. See Enable Relational Connections.
      2. See Configure for Hive.
  3. Within your source, locate the table from which you wish to import. Do not select the table.
  4. Click the Preview icon to review the columns in the dataset.


    Tip: You may wish to copy the database, table name, and column names to a text editor to facilitate generating your SQL statement.

  5. Click Create Dataset with SQL. Enter or paste your SQL statement.


    Through the custom SQL interface, it is possible to enter SQL statements that can delete data, change table schemas, or otherwise corrupt the targeted database. Please use this feature with caution.

    Figure: Create Dataset with SQL dialog


    1. See Examples below.

    2. To test the SQL, click Validate SQL. For details, see below.

    3. To apply the SQL to the import process, click Create Dataset.

  6. The customized source is added to the right panel. To re-edit, click Custom SQL.

  7. Complete the other steps to define your imported dataset. 

  8. When the data is imported, it is altered or filtered based on your SQL statement. 

    1. After dataset creation, you can modify the SQL, if needed. See Dataset Details Page.


Relational System | Object Delimiter | Example Syntax

Hive
Backticks required around database, table, and column names.

Code Block
SELECT `column1`,`column2` FROM `databaseName`.`tableName`

AWS Glue
See Hive.

Double quotes required around database and table names; not required around column names.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"

SQL Server
None.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"

Double quotes required around database, table, and column names.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"

Double quotes required around database and table names; not required around column names.

Code Block
SELECT "column1","column2" FROM "databaseName"."tableName"