Prerequisites

  1. S3 base storage layer: Redshift access requires S3 as the base storage layer, which must be enabled. See Set Base Storage Layer.
  2. Same region: The Redshift cluster must be in the same region as the default S3 bucket.
  3. Integration: Your product instance must be connected to a running environment supported by your product edition. Supported Spark running environments include:
    • Cloudera: See Supported Deployment Scenarios for Cloudera.
    • Hortonworks: See Supported Deployment Scenarios for Hortonworks.
  4. Deployment: The platform must be deployed either on-premises or in EC2.


Limitations

  1. When publishing to Redshift through the Publishing dialog, output must be in Avro or JSON format. This limitation does not apply to direct writing to Redshift.
  2. You can publish any specific job to Redshift once through the export window. See Publishing Dialog.
  3. Management of nulls:
    1. Nulls are displayed as expected in the application.
    2. When Redshift jobs are run, the UNLOAD SQL command in Redshift converts all nulls to empty strings. Null values appear as empty strings in generated results, which can be confusing. This is a known issue with Redshift.
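Because UNLOAD flattens nulls to empty strings, downstream consumers may want to restore them when parsing results. A minimal sketch, assuming the results were unloaded as pipe-delimited text; the delimiter and sample data are illustrative:

```python
import csv
import io

def restore_nulls(rows):
    """Convert empty-string fields back to None, reversing the values that
    Redshift UNLOAD flattened to ''. Lossy if the source data legitimately
    contained empty strings, so apply only where that distinction is safe."""
    return [[field if field != "" else None for field in row] for row in rows]

# Example: parse a pipe-delimited UNLOAD result and restore nulls.
unloaded = "1|alice|\n2||30\n"
rows = list(csv.reader(io.StringIO(unloaded), delimiter="|"))
cleaned = restore_nulls(rows)
```

Here the trailing empty field in the first row and the middle empty field in the second row become None after post-processing.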

Create Connection

You can create Redshift connections through the following methods.
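Beyond the application UI, a connection of this kind is typically created by posting a payload to a REST endpoint. The sketch below is hypothetical throughout: the endpoint path, field names, and payload schema are illustrative assumptions, not the product's actual API:

```python
def build_redshift_connection(host, port, database, user):
    """Assemble a connection payload. All field names here are
    hypothetical illustrations, not the product's actual API schema."""
    return {
        "type": "redshift",
        "host": host,
        "port": port,
        "database": database,
        "credentials": {"user": user},
    }

def create_connection(api_base, payload):
    """POST the payload to a connections endpoint (hypothetical URL path).
    Requires the third-party 'requests' package and a reachable server."""
    import requests  # imported here so the pure builder above stays testable offline
    resp = requests.post(f"{api_base}/connections", json=payload)
    resp.raise_for_status()
    return resp.json()
```

Consult your product's API reference for the real endpoint and required fields before adapting this.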