
Release Notes 8.2

Release 8.2.2

March 25, 2022

What's New

Databricks:

  • Support for Databricks 7.x and 8.x.

    Note

    Databricks 7.3 and Databricks 8.3 are recommended.

  • Support for Databricks cluster creation via cluster policies.

  • Store a user-defined set of secret information, such as credentials, in Databricks Secrets. (A sketch of provisioning secrets follows this list.)
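For example, the secret scope and credential can be provisioned ahead of time with the Databricks CLI. A minimal sketch, assuming the legacy databricks CLI is installed and configured against your workspace; the scope and key names are illustrative only:

# Create a secret scope and store a credential in it (names are examples)
databricks secrets create-scope --scope trifacta-creds
databricks secrets put --scope trifacta-creds --key db-password --string-value 'example-password'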

Changes in System Behavior

Publishing:

Improvements to publishing of Alteryx Date values to Snowflake. For more information, see Improvements to the Type System.

Nginx:

  • Upgraded to Nginx 1.20.1.

Deprecated

None.

Key Bug Fixes

TD-69201: Vulnerability scan detected compromised versions of log4j in the Hadoop dependency JARs
TD-69052: Job fails on Spark when using parameterized files as input
TD-69004: Patch httpd to version 2.4.52
TD-68085: Designer Cloud Powered by Trifacta Enterprise Edition unavailable due to an update lock on plantasksnapshotruns
TD-67953: Remove log4j dependencies from Java projects
TD-67747: CVE-2021-44832: Apache Log4j2 vulnerable to RCE via the JDBC Appender when an attacker controls the configuration
TD-67677: EMR Spark job fails with error "org.apache.spark.sql.AnalysisException: Cannot resolve column name" if flow optimizations are enabled
TD-67640: Intermittent failure to publish to Tableau in Fileconverter
TD-67572: EMR Spark job fails with error "org.apache.spark.sql.AnalysisException: Cannot resolve column name"
TD-67558: CVE-2021-45105: Log4j vulnerability (denial of service)
TD-67531: Glue jobs not working after upgrade to Release 8.2
TD-67455: CVE-2021-45046: Log4j vulnerability
TD-67410: CVE-2021-23017: Nginx 1.20.0 security vulnerability
TD-67388: Nest function failing
TD-67372: Patch/update Log4j (RCE zero-day exploit found in log4j)
TD-67329: Publish failing with "java.io.IOException: No FileSystem for scheme: sftp"
TD-66779: Output home directory is not picked up correctly for job runs on WASB/ADLS Gen2
TD-66160: SSLHandshakeException when accessing a Databricks table
TD-66025: Glue connection not working after upgrade to Release 8.2
TD-65696: In Azure environments, changing the user output/upload directory persists only the path, not the container name/storage account
TD-65331: Writing to ADLS failing in SSL handshake to TLSv1.1
TD-65286: Alteryx jobs fail at the Transform stage with an Optimizer Service exception
TD-65058: Unable to upgrade due to migration failure
TD-64627: Designer Cloud Powered by Trifacta Enterprise Edition failing due to a concurrent DB transaction
TD-64528: Upgrade to Release 8.2 failed to load dictionaries
TD-64281: /change-password page fails to load
TD-64171: Cannot import parameterized datasets that include files with zero and non-zero byte sizes together
TD-63981: Start/stop scripts should not modify any config/database settings during startup
TD-63867: Jobs are not triggering for parameterized datasets with zero-byte file sizes
TD-63493: Unable to cancel a plan run
TD-60881: Incorrect path shown when using a parameterized output path
TD-59706: No vertical scroll when there are too many connections on the Import page
TD-58576: Cannot read property 'expandScriptLines' of undefined when a flow node's activeSampleId points to a failed (null) sample

New Known Issues

None.

Release 8.2.1

August 13, 2021

What's New

EMR:

Databricks:

Trifacta node:

  • NodeJS upgraded to 14.16.0.

Changes in System Behavior

None.

Key Bug Fixes

TD-62689: Nginx returns a "Bad Request Status: 400" error due to duplicate entries in /etc/nginx/conf.d/trifacta.conf for:

proxy_set_header Host $host;

Tip

The workaround is to delete the second entry in the file manually.
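A quick way to confirm whether the duplicate entry is present (assuming the configuration path above):

grep -n 'proxy_set_header Host' /etc/nginx/conf.d/trifacta.conf

If two matching lines are returned, remove the second one and reload Nginx.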

New Known Issues

None.

Release 8.2

June 11, 2021

What's New

Preferences:

  • Re-organized user account, preferences, and storage settings to streamline the setup process. See Preferences Page.

API:

Connectivity:

Databricks:

Support for Databricks 7.3, using Spark 3.0.1.

Note

Databricks 5.5 LTS is scheduled for end of life in July 2021. An upgrade to Databricks 7.3 is recommended.

Note

In this release, Spark 3.0.1 is supported for use with Databricks 7.3 only.

Plan metadata references:

Use metadata values from other tasks and from the plan itself in your HTTP task definitions.

Improved accessibility of job results:

The Jobs tab has been enhanced to display the latest and previous jobs that have been executed for the selected output.

For more information, see View for Outputs.

Sample Jobs Page:

You can monitor the status of all sample jobs that you have generated. Project administrators can access all sample jobs in the workspace. For more information, see Sample Jobs Page.

Install:

Support for Nginx 1.20.0 on the Trifacta node. See System Requirements.

Users and groups:

The AD users and groups integration is now generally available. See Configure Users and Groups.

Changes in System Behavior

Java service classpath changes:

Note

This required update applies only to customers who have modified their Java service classpaths to include /etc/hadoop/conf.

In deployments on a Hadoop edge node, the classpath values for Java-based services may have been modified to include the following:

/etc/hadoop/conf

As of this release, symlinks must be created to locations within the Alteryx install directory to replace the above path modifications.

Note

Before you perform the following update, please create a backup of /etc/hadoop/conf.

In the following example, a symlink is created in /opt/trifacta/conf/hadoop-site for each file in /etc/hadoop/conf, pointing to the original file.

for file in `ls /etc/hadoop/conf`; do ln -sf "/etc/hadoop/conf/$file" "/opt/trifacta/conf/hadoop-site/$file"; done
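To verify the result, list the target directory and confirm that each entry is a symlink pointing back into /etc/hadoop/conf:

ls -l /opt/trifacta/conf/hadoop-site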

Running Environment:

Cloudera 5.x, including Cloudera 5.16, is no longer supported. Please upgrade to a supported version of Cloudera 6.x.

Catalog integrations end of life:

The following catalog integrations are no longer available in the platform:

  • Alation

  • Waterline

  • Cloudera Navigator

For more information, see End of Life and Deprecated Features.

API:

The following API endpoints are scheduled for deprecation in a future release:

Note

Please avoid using the following endpoints.

/v4/connections/vendors
/v4/connections/credentialTypes
/v4/connections/:id/publish/info
/v4/connections/:id/import/info

These endpoints have little value for public use.

Key Bug Fixes

TD-59854: Datetime column from a Parquet file inferred to the wrong data type on import
TD-59658: IAM roles passed through SAML do not update after a hotfix upgrade
TD-59633: Enabled session tag feature but running into "The security token included in the request is invalid" error
TD-59331: When the include-quotes option is disabled on an output, Databricks still places quotes around empty values
TD-59128: BOM characters at the beginning of a file causing multiple headers to appear in the Transformer page
TD-58932: Cannot read file paths with colons from EMR Spark jobs
TD-58694: Very large number of files generated during Spark job execution
TD-58523: Cannot import a dataset with a filename in the Korean alphabet from HDFS

New Known Issues

TD-60701: Most non-ASCII characters incorrectly represented in visual profiles downloaded in PDF format

Release 8.1

February 26, 2021

What's New

Tip

Be sure to check out the new in-app messaging feature, which allows us to share new features and relevant content with Designer Cloud Powered by Trifacta Enterprise Edition users in your workspace. The user messaging feature can be disabled by workspace administrators if necessary. See Workspace Settings Page.

Install:

  • Support for PostgreSQL 12.X for Alteryx databases on all supported operating systems.

    Note

    Beginning in this release, the latest stable release of PostgreSQL 12 can be installed with the Designer Cloud Powered by Trifacta platform. Earlier versions of PostgreSQL 12.X can be installed manually.

    Note

    Support for PostgreSQL 9.6 is deprecated for customer-managed Hadoop-based deployments and AWS deployments. PostgreSQL 9.6 is supported only for Azure deployments. When Azure supports PostgreSQL 12 or later, support for PostgreSQL 9.6 will be deprecated in the subsequent release of Designer Cloud Powered by Trifacta Enterprise Edition.

Security:

Databases:

  • New databases:

    • The Secure Token Service database is used for managing the tokens used by the secure token service.

    • The Connector Configuration Service database stores the connection configuration information for a workspace's available connectors (connection types).

    • These databases are installed and managed in conjunction with the other Alteryx databases. See Install Databases.

Connectivity:

  • For AWS-based installations, you can create multiple read-only S3 connections through the Trifacta Application. These connections use key and secret pair combinations to access specific S3 buckets. For more information, see External S3 Connections.

  • You can enable logging of events from the CData driver underlying your supported relational connections. For more information, see Configure Connectivity.

Authorization:

Sharing:

  • Define permissions on individual objects when they are shared.

    Note

    Fine-grained sharing permissions apply to flows and connections only.

    For more information, see Changes to User Management.

API:

  • Apply job-level overrides to AWS Databricks or Azure Databricks job executions via API (a sketch follows this list). See API Task - Run Job.

  • Customize connection types (connectors) to ensure consistency across all connections of the same type and to meet your enterprise requirements. For more information, see Changes to the APIs.
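As a rough illustration, a job run with overrides might be submitted as follows. This is a minimal sketch only: the /v4/jobGroups endpoint and the shape of the overrides object are assumptions here, and the authoritative request format is documented in API Task - Run Job. The host, token, dataset id, and override values are placeholders.

curl -X POST "https://<trifacta-host>/v4/jobGroups" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"wrangledDataset": {"id": 123}, "overrides": {"execution": "databricksSpark"}}'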

Running environment:

Publishing:

Macro updates:

You can replace an existing macro definition with a macro that you have exported to your local desktop.

Note

Before you replace the existing macro, you must export a macro to your local desktop. For more information, see Export Macro.

For more information, see Macros Page.

Sample Jobs Page:

You can monitor the status of all sample jobs that you have generated. Project administrators can access all sample jobs in the workspace. For more information, see Sample Jobs Page.

Specify column headers during import:

You can specify the column headers for your dataset during import. For more information, see Import Data Page.

Services:

Changes in System Behavior

Note

CDH 6.1 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix.

Note

HDP 2.6 is no longer supported. Please upgrade to the latest supported version. For more information, see Product Support Matrix.

Support for custom data types based on dictionary files to be deprecated:

Note

The ability to upload dictionary files and use their contents to define custom data types is scheduled for deprecation in a future release. This feature is limited and inflexible. Until an improved feature can be released, please consider using workarounds. For more information, see Validate Your Data.

You can create custom data types using regular expressions.
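For example, a custom data type could be backed by a pattern such as the following hypothetical ZIP-code matcher (shown for illustration only):

^[0-9]{5}(-[0-9]{4})?$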

Strong consistency management now provided by AWS S3:

Prior to this release, S3 sometimes did not accurately report the files that had been written to it, which resulted in consistency issues between the files that were written to disk and the files that were reported back to the Trifacta Application.

As of this release, AWS has improved S3 with strong consistency checking, which removes the need for the product to maintain a manifest file containing the list of files that have been written to S3 during job execution.

Note

As of this release, the S3 job manifest file is no longer maintained. All configuration related to this feature has been removed from the product. No additional configuration is needed.

For more information, see https://aws.amazon.com/s3/consistency/.

For more information on integration with S3, see S3 Access.

Installation of database client is now required:

Before you install or upgrade the database or perform any required database cross-migrations, you must install the appropriate database client first.

Note

Use of the database client provided with each supported database distribution is now a required part of any installation or upgrade of the Designer Cloud Powered by Trifacta platform.


Job logs collected asynchronously for Databricks jobs:

In prior releases, the Trifacta Application reported that a job failed only after the job logs had been collected from the Databricks cluster. This log collection process could take a while to complete, and the job was reported as in progress when it had already failed.

Beginning in this release, collection of Databricks job logs for failed jobs happens asynchronously. Jobs are now reported in the Trifacta Application as soon as they are known to have failed. Log collection happens in the background afterward.

Catalog integrations now deprecated:

Integrations between Designer Cloud Powered by Trifacta Enterprise Edition and Alation and Waterline services are now deprecated. For more information, see End of Life and Deprecated Features.

Key Bug Fixes

TD-56170: The Test Connection button for some relational connection types does not perform a test authentication of user credentials.

TD-54440: Header sizes at intermediate nodes for JDBC queries cannot be larger than 16K.

Previously, the column names for JDBC data sources were passed as part of a header in a GET request. For very wide datasets, these GET requests often exceeded 16K in size, which represented a security risk. The solution is to turn these GET requests into ingestion jobs.

Note

To mitigate this issue, JDBC ingestion and JDBC long loading must be enabled in your environment. For more information, see Configure JDBC Ingestion.

New Known Issues

TD-58818: Cannot run jobs on some builds of HDP 2.6.5 and later. There is a known incompatibility between HDP 2.6.5.307-2 and later and the Hadoop bundle JARs that are shipped with the Alteryx installer.

Tip

The solution is to use an earlier, compatible version.

TD-58523: Cannot import a dataset with a filename in the Korean alphabet from HDFS.

Tip

You can upload files with Korean characters from your desktop. You can also append a character (for example, 1) to the filename on HDFS, after which the file can be imported.

TD-55299: Imported datasets with encodings other than UTF-8 and line delimiters other than \n may generate empty outputs on the Spark or Dataflow running environments.

TD-51516: Input data containing BOM (byte order mark) characters may cause the Spark or Dataflow running environments to read data improperly and/or generate invalid results.

Release 8.0

January 26, 2021

What's New

APIs:

  • Individual workspace users can be permitted to create and use their own access tokens for use with the REST APIs (a sketch follows below). For more information, see Workspace Settings Page.
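For illustration, an access token is typically passed as a bearer token on REST calls. A minimal sketch; the host and endpoint are placeholders rather than a documented example:

curl -H "Authorization: Bearer <your-access-token>" "https://<trifacta-host>/v4/<endpoint>"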

Connectivity:

  • Support for connections to SharePoint Lists. See SharePoint Connections.

  • Support for using OAuth2 authentication for Salesforce connections.

    Note

    Use of OAuth2 authentication requires additional configuration. For more information, see OAuth 2.0 for Salesforce.

    See Salesforce Connections.

  • Support for re-authenticating through connections that were first authenticated using OAuth2.

Import:

  • Improved method for conversion and ingestion of XLS/XLSX files. For more information, see Import Excel Data.

Recipe development:

  • The Flag for Review feature enables you to set review checkpoints in your recipes. You can flag recipe steps for review and approval by other collaborators. For more information, see Flag for Review.

Update Macros:

  • Replace / overwrite an existing macro's steps and inputs with a newly created macro.

  • Map new macro parameters to the existing parameters before replacing.

  • Edit macro input names and default values as needed.

Job execution:

  • You can enable the Trifacta Application to apply SQL filter pushdowns to your relational datasources to remove unused rows before their data is imported for a job execution. This optimization can significantly improve performance as less data is transferred during the job run. For more information, see Flow Optimization Settings Dialog.

  • Optimizations that were applied during the job run now appear in the Job Details Page. See Job Details Page.

Changes in System Behavior

None.

Key Bug Fixes

TD-57354: Cannot import data from Azure Databricks. This issue is caused by an incompatibility between Java 8 and TLS v1.3, which was backported to Java 8.

TD-57180: AWS jobs run on Photon to publish to HYPER format fail during file conversion or writing.

New Known Issues

TD-56170: The Test Connection button for some relational connection types does not perform a test authentication of user credentials.

Tip

Append the following to your Connect String Options:

;ConnectOnOpen=true

This option forces the connection to validate user credentials as part of the connection. There may be a performance penalty when this option is used.

Release 7.10

December 21, 2020

What's New

Tip

Check out the new in-app tours, which walk you through the steps of wrangling your datasets into clean, actionable data.

Import:

  • The maximum permitted size of a file uploaded through the Trifacta Application has been increased from 100 MB to 1 GB.

Plan View:

  • Import and Export Plans: You can import and export plans from one environment, workspace, or project to another. For more information, see Export Plan and Import Plan.

  • Share Plans: Share plans with one or more users to work together on the same plan. For more information, see Share a Plan.

  • Email notifications: Send email notifications to plan owners and collaborators based on the status of execution of plans. For more information, see Manage Plan Notifications Dialog.

Authentication:

Connectivity:

Language:

API:

  • Experimental feature: Export Python Pandas code to generate the transformation steps required to produce a defined output object. For more information, see API Task - Wrangle Output to Python.

    Note

    This feature can be changed or removed from the platform at any time without notice. Do not deploy it in a production environment.

Changes in System Behavior

Rebuild custom UDF JARs for Databricks clusters

Previously, UDF files were checked for consistency based upon the creation time of the JAR file. However, if the JAR file was passed between Databricks nodes in a high availability environment or between services in the platform, this timestamp could change, which could cause job failures due to checks on the created-at timestamps.

Beginning in this release, the platform now inserts a build-at timestamp into the custom UDF manifest file when the JAR is built. This value is fixed, regardless of the location of the copy of the JAR file.

Note

Custom UDF JARs that were created using earlier releases of the platform and deployed to a Databricks cluster must be rebuilt and redeployed as of this release. For more information on troubleshooting the error conditions, see Java UDFs.

Custom credential provider JAR no longer required for EMR access

In prior releases of Designer Cloud Powered by Trifacta Enterprise Edition, integration with EMR required the deployment of a custom credential provider JAR file provided by the customer as part of the initial bootstrap of the EMR cluster. As of this release, this JAR file is no longer required. Instead, it is provided by the Designer Cloud Powered by Trifacta platform directly.

Note

If your deployment of the Designer Cloud Powered by Trifacta platform integrates with AWS Glue, you must still provide and deploy a custom credentials JAR file. For more information, see AWS Glue Access.

For more information on integrating with EMR, see Configure for EMR.

Upgrade nodeJS

On the Trifacta node, the version of nodeJS has been upgraded to nodeJS 14.15.4 LTS. For more information, see System Requirements.

Data type and row split inference utilize more data

When a dataset is loaded, the Trifacta Application now reads in more data before the type inference system and row splitting transformations analyze the data to break it into rows and columns. This larger data size should result in better data inference in the system.

Note

Types and row splits on pre-existing datasets may be affected by this change.

For more information, see Improvements to the Type System.

Key Bug Fixes

TD-54742: Access to S3 is disabled after upgrade.

TD-53527: When importing a dataset via API that is sourced from a BZIP file stored on S3, the columns may not be properly split when the platform is permitted to detect the structure.

New Known Issues

TD-57180: AWS jobs run on Photon to publish to HYPER format fail during file conversion or writing.

Tip

Run the job on the Spark running environment instead.

TD-56830: Receive "malformed_query: enter a filter criterion" error when importing a table from Salesforce.

Note

Some Salesforce tables require mandatory filters when they are queried. Mandatory filters are not currently supported for Salesforce connections.

Release 7.9

November 16, 2020

What's New

Plan View:

  • Execute Plan using status rules: Starting in Release 7.9, you can execute tasks based on the previous task execution result. For more information, see Create a Plan.

  • Execute Parallel Plan tasks: In previous releases, plans were limited to a sequential order of task execution. Beginning in Release 7.9, you can create branches in the graph into separate parallel nodes, enabling the corresponding tasks to run in parallel. This feature enables you to have a greater level of control of your plans' tasks. For more information, see Create a Plan.

  • Zoom options: Zoom control options and keyboard shortcuts have been introduced in the plan canvas. For more information, see Plan View Page.

  • Filter Plan Runs: Filter your plan runs based on dates or plan types. For more information, see Plan Runs Page.

Transform Builder:

  • An All option has been added for selecting columns in the Transform Builder. For more information, see Changes to the Language page.

Changes in System Behavior

Manage Users section has been deprecated:

In previous releases, user management functions were available through the Manage Users section of the Admin Settings page. These functions have been migrated to the Workspace Settings page, where all of the previous functions are now available. The Manage Users section has been deprecated.

Better license management:

In prior releases, the Trifacta Application locked out all users if the number of active users exceeded the number permitted by the license. This situation could occur if users were being added via API, for example.

Beginning in this release, the Trifacta Application does not block access when the number of licensed users is exceeded.

Note

If you see the notification banner about license key violations, please adjust your users until the banner is removed. If you need to adjust the number of users associated with your license key, please contact Alteryx Support.

For more information, see License Key.

Trifacta Photon jobs now use ingestion for relational sources:

When a job is run on Trifacta Photon, any relational data sources are ingested into the backend datastore as a preliminary step during sampling or transformation execution. This change aligns Trifacta Photon job execution with future improvements to the overall job execution framework. No additional configuration is required.

Tip

Jobs that are executed on the Alteryx Server run in an embedded running environment called Trifacta Photon. Quick Scan samples are automatically executed in Trifacta Photon.

For more information on ingestion, see Configure JDBC Ingestion.

Job results page changes:

  • The Dependencies tab has been renamed to the Dependency graph tab.

  • The old flow view in the dependency graph tab is replaced with the new flow view. For more information, see Job Details Page.

Key Bug Fixes

TD-55125: Cannot copy a flow. However, export and import of the flow enables copying.

TD-53475: "Missing associated artifact" error when importing a flow.

New Known Issues

None.

Release 7.8

October 19, 2020

What's New

Plans:

  • The viewport position and zoom level are now preserved when returning to a given flow.

Publishing:

  • Improved performance when publishing to Tableau Server.

  • Configure publishing chunk sizes as needed. For more information, see Configure Data Service.

Language:

  • Rename Columns now supports converting column names to uppercase or lowercase, as well as shortening column names to a specified character length from the left or right. For more information, see Changes to the Language.

Connectivity:

Changes in System Behavior

JDBC connection pooling disabled:

Note

The ability to create connection pools for JDBC-based connections has been disabled. Although it can be re-enabled if necessary, it is likely to be removed in a future release. For more information, see Changes to Configuration.

TDE format has been deprecated:

Tableau Server has deprecated support for the TDE file format. As of this release, all outputs and publications to Tableau Server must be generated using HYPER, the replacement format for TDE.

  • Any flow that uses TDE format is automatically switched to use HYPER format during the upgrade process.

  • Any flow that is imported into the upgraded environment is automatically switched to using the HYPER format.

For more information, see Tableau Hyper Data Type Conversions.

Enhanced Flow and Flow View menu options:

The context menu options for Flow View and Flow have been renamed and reorganized for a better user experience.

Key Bug Fixes

None.

New Known Issues

TD-54030: When creating custom datasets from Snowflake, columns containing time zone data are rendered as null values in visual profiles, and publishing back to Snowflake fails.

Tip

In your SELECT statement applied to a Snowflake database, references to time zone-based data must be wrapped in a function that converts them to the UTC time zone. For more information, see Create Dataset with SQL.
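For example, Snowflake's CONVERT_TIMEZONE function can perform this conversion: wrapping the column as CONVERT_TIMEZONE('UTC', <column>) in your SELECT statement is one such approach (the column name is a placeholder).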

Release 7.7

September 21, 2020

What's New

Flow View:

  • Automatically organize the nodes of your flow with a single click. See Flow View Page.

Changes in System Behavior

Deprecated Parameter History Panel Feature

As part of the collaborative suggestions enhancement, the Parameter History panel has been deprecated from the software. For more information on the collaborative suggestions feature, see Overview of Predictive Transformation.

Classic Flow View no longer available

In Release 7.6, an improved version of Flow View was released. At the time of release, users could switch back to using the classic version.

Beginning in this release, the classic version of Flow View is no longer available.

Tip

The objects in your flows that were created in classic Flow View may be misaligned in the new version of Flow View. You can use auto-arrange to re-align your flow objects.

For more information, see Flow View Page.

Key Bug Fixes

TD-53318: Cannot publish results to relational targets when the flow name, output filename, or table name contains a hyphen (e.g., my - filename.csv).

New Known Issues

None.