
Release 8.2.1

August 13, 2021

What's New

EMR:

Databricks:

Trifacta node:

  • NodeJS upgraded to 14.16.0.


Changes in System Behavior

None.

Key Bug Fixes

TD-62689

Nginx returns a 400 Bad Request error due to duplicate entries in /etc/nginx/conf.d/trifacta.conf for:

Code Block
proxy_set_header Host $host;


Tip

Workaround: Delete the second entry in the file manually.
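As a sketch of that workaround (the temp-file copy and the awk one-liner are my own, not from the release notes), identical duplicate directives can be stripped from a copy of the config before you replace the original file:

```shell
# Hypothetical sketch: work on a copy of the config, never the live file.
# Assumes the duplicate lines are byte-for-byte identical.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
EOF

# awk keeps only the first occurrence of each identical line.
awk '!seen[$0]++' "$CONF" > "$CONF.fixed"

grep -c 'proxy_set_header Host' "$CONF.fixed"   # prints 1
```

Note that nginx configs can legitimately repeat a directive in different server or location blocks, so review the deduplicated file (and run `nginx -t` against it) before installing it.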


New Known Issues

None.

Release 8.2

June 11, 2021

What's New

Preferences:

  • Re-organized user account, preferences, and storage settings to streamline the setup process. See Preferences Page.

API:

Connectivity:

...


Improved accessibility of job results:


The Jobs tabs have been enhanced to display the latest and previous jobs that have been executed for the selected output.

...

You can monitor the status of all sample jobs that you have generated. Project administrators can access all sample jobs in the workspace. For more information, see Sample Jobs Page.

Users and groups:

The AD users and groups integration is now generally available. See Configure Users and Groups.

Install:

Support for Nginx 1.20.0 on the Trifacta node. See System Requirements.


Changes in System Behavior


Java service classpath changes:

...

These endpoints have little value for public use.


Key Fixes

TD-59854: Datetime column from a Parquet file is inferred to the wrong data type on import.
TD-59658: IAM roles passed through SAML do not update after a hotfix upgrade.
TD-59633: With the session tag feature enabled, requests fail with the error "The security token included in the request is invalid."
TD-59331: When the include quotes option is disabled on an output, Databricks still places quotes around empty values.
TD-59128: BOM characters at the beginning of a file cause multiple headers to appear in the Transformer page.
TD-58932: Cannot read file paths with colons from EMR Spark jobs.
TD-58694: Very large number of files generated during Spark job execution.
TD-58523: Cannot import a dataset with a filename in the Korean alphabet from HDFS.

New Known Issues

TD-60701: Most non-ASCII characters are incorrectly represented in the visual profile downloaded in PDF format.

Release 8.1

February 26, 2021

What's New


Tip

In-app messaging: Be sure to check out the new in-app messaging feature, which allows us to share new features and relevant content with Trifacta users in your workspace. The user messaging feature can be disabled by workspace administrators if necessary. See Workspace Settings Page.



Install:

  • Support for PostgreSQL 12.X for the Trifacta databases on all supported operating systems.

    Info

    NOTE: Beginning in this release, the latest stable release of PostgreSQL 12 can be installed with the Trifacta platform. Earlier versions of PostgreSQL 12.X can be installed manually.


    Info

    NOTE: Support for PostgreSQL 9.6 is deprecated for customer-managed Hadoop-based deployments and AWS deployments. PostgreSQL 9.6 is supported only for Azure deployments. When Azure supports PostgreSQL 12 or later, support for PostgreSQL 9.6 will be deprecated in the subsequent release of Trifacta.

...

  • Define permissions on individual objects when they are shared.

    Info

    NOTE: Fine-grained sharing permissions apply to flows and connections only.

    For more information, see Changes to User Management.

API:

  • Apply job-level overrides to AWS Databricks or Azure Databricks job executions via API. See API Workflow - Run Job.

...

Macro updates:


You can replace an existing macro definition with a macro that you have exported to your local desktop.

...

You can monitor the status of all sample jobs that you have generated. Project administrators can access all sample jobs in the workspace. For more information, see Sample Jobs Page.

Specify column headers during import


You can specify the column headers for your dataset during import. For more information, see Import Data Page.

...


Changes in System Behavior


Info

NOTE: CDH 6.1 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix.


Info

NOTE: HDP 2.6 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix.



Support for custom data types based on dictionary files to be deprecated:

...

Integrations between Trifacta and the Alation and Waterline services are now deprecated. For more information, see End of Life and Deprecated Features.


Key Bug Fixes

TD-56170: The Test Connection button for some relational connection types does not perform a test authentication of user credentials.
TD-54440

Header sizes at intermediate nodes for JDBC queries cannot be larger than 16K.

Previously, the column names for JDBC data sources were passed as part of a header in a GET request. For very wide datasets, these GET requests often exceeded 16K in size, which represented a security risk.

The solution is to turn these GET requests into ingestion jobs.

Info

NOTE: To mitigate this issue, JDBC ingestion and JDBC long loading must be enabled in your environment. For more information, see Configure JDBC Ingestion.



New Known Issues

TD-58818

Cannot run jobs on some builds of HDP 2.6.5 and later. There is a known incompatibility between HDP 2.6.5.307-2 and later and the Hadoop bundle JARs that are shipped with the Trifacta installer.

Tip

Solution: Use an earlier compatible version. For more information, see Configure for Hortonworks.


TD-58523

Cannot import dataset with filename in Korean alphabet from HDFS.

Tip

Workaround: You can upload files with Korean characters from your desktop. You can also append a character (such as 1) to the filename on HDFS, after which the file can be imported.


TD-55299

Imported datasets with encodings other than UTF-8 and line delimiters other than \n may generate empty outputs on the Spark or Dataflow running environments.

TD-51516

Input data containing BOM (byte order mark) characters may cause the Spark or Dataflow running environments to read data improperly and/or generate invalid results.

Release 8.0

January 26, 2021

What's New

APIs:

  • Individual workspace users can be permitted to create and use their own access tokens for use with the REST APIs. For more information, see Workspace Settings Page.

...

Import:


  • Improved method for conversion and ingestion of XLS/XLSX files. For more information, see Import Excel Data.

...

  • You can enable the Trifacta application to apply SQL filter pushdowns to your relational datasources to remove unused rows before their data is imported for a job execution. This optimization can significantly improve performance, as less data is transferred during the job run. For more information, see Flow Optimization Settings Dialog.
  • Optimizations that were applied during the job run now appear in the Job Details Page. See Job Details Page.


Changes in System Behavior

None.

Key Bug Fixes

TD-57354

Cannot import data from Azure Databricks. This issue is caused by an incompatibility between Java 8 and TLS v1.3, which was backported to Java 8.

TD-57180

AWS jobs run on Photon to publish to HYPER format fail during file conversion or writing.

New Known Issues

TD-56170

The Test Connection button for some relational connection types does not perform a test authentication of user credentials.

Tip

Workaround: Append the following to your Connect String Options:

Code Block
;ConnectOnOpen=true

This option forces the connection to validate user credentials as part of the connection. There may be a performance penalty when this option is used.



Release 7.10

December 21, 2020

What's New

Tip

Check out the new in-app tours, which walk you through the steps of wrangling your datasets into clean, actionable data.

...

Language:


...

  • NOTE: This feature can be changed or removed from the platform at any time without notice. Do not deploy it in a production environment.
  • For more information, see API Workflow - Wrangle Output to Python.


Changes in System Behavior


Rebuild custom UDF JARs for Databricks clusters

...

For more information, see Improvements to the Type System.


Key Bug Fixes

TD-54742: Access to S3 is disabled after upgrade.
TD-53527: When importing a dataset via API that is sourced from a BZIP file stored on S3, the columns may not be properly split when the platform is permitted to detect the structure.

New Known Issues

TD-57180

AWS jobs run on Photon to publish to HYPER format fail during file conversion or writing.

Tip

Workaround: Run the job on the Spark running environment instead.


TD-56830

Receive a "malformed_query: enter a filter criterion" error when importing a table from Salesforce.

Info

NOTE: Some Salesforce tables require mandatory filters when they are queried. Mandatory filters are not currently supported for Salesforce connections.


Release 7.9

November 16, 2020

What's New

Plan View:

  • Execute Plan using status rules: Starting in Release 7.9, you can execute tasks based on the previous task execution result. For more information, see Create a Plan.
  • Execute Parallel Plan tasks: In previous releases, plans were limited to a sequential order of task execution. Beginning in Release 7.9, you can create branches in the graph into separate parallel nodes, enabling the corresponding tasks to run in parallel. This feature enables you to have a greater level of control of your plans' workflows. For more information, see Create a Plan.
  • Zoom options: Zoom control options and keyboard shortcuts have been introduced in the plan canvas. For more information, see Plan View Page.
  • Filter Plan Runs: Filter your plan runs based on dates or plan types. For more information, see Plan Runs Page.

...

  • An All option has been added for selecting columns in the Transform Builder. For more information, see Changes to the Language page.

Changes in System Behavior

Manage Users section has been deprecated:

...

  • The Dependencies tab has been renamed to the Dependency graph tab.
  • The old flow view in the Dependency graph tab has been replaced with the new flow view. For more information, see Job Details Page.

Key Bug Fixes

TD-55125: Cannot copy a flow. However, exporting and importing the flow enables copying.
TD-53475: Missing associated artifact error when importing a flow.

New Known Issues

None.

Release 7.8

October 19, 2020

What's New


Plans:

...

  • Improved performance when publishing to Tableau Server.
  • Configure publishing chunk sizes as needed. For more information, see Configure Data Service.

Language:


  • Rename Columns now supports converting column names to uppercase or lowercase, or shortening column names to a specified character length from the left or right. For more information, see Changes to the Language.

...


Changes in System Behavior

JDBC connection pooling disabled:

...

Key Bug Fixes

None.

New Known Issues

TD-54030

When creating custom datasets from Snowflake, columns containing time zone data are rendered as null values in visual profiles, and publishing back to Snowflake fails.

Tip

Workaround: In your SELECT statement applied to a Snowflake database, references to time zone-based data must be wrapped in a function that converts them to the UTC time zone. For more information, see Create Dataset with SQL.
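As a sketch of what such a SELECT statement might look like (the table and column names below are placeholders, not from the release notes), Snowflake's CONVERT_TIMEZONE function can normalize a time zone-aware column to UTC before import:

```sql
-- Hypothetical example: orders and created_at are placeholder names.
-- CONVERT_TIMEZONE('UTC', <timestamp>) returns the timestamp in UTC.
SELECT
  order_id,
  CONVERT_TIMEZONE('UTC', created_at) AS created_at_utc
FROM orders;
```

Using the converted column in the custom dataset avoids the null values in visual profiles and the publishing failures described for TD-54030.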


Release 7.7

September 21, 2020

What's New

Flow View:

  • Automatically organize the nodes of your flow with a single click. See Flow View Page.

Changes in System Behavior

Deprecated Parameter History Panel Feature 

...

For more information, see Flow View Page.

Key Bug Fixes

TD-53318: Cannot publish results to relational targets when the flow name, output filename, or table name contains a hyphen (e.g., my - filename.csv).

New Known Issues

None.