Release Notes 8.7

Release 8.7.1

September 30, 2022

What's New

Connectivity:

Support for external tables in Azure Synapse Analytics (formerly Microsoft SQL DW):

  • For dedicated SQL pool or serverless SQL pool connections to Azure Synapse Analytics (formerly Microsoft SQL DW), you can now interact with data managed in external tables in a variety of formats. For more information, see Microsoft SQL Data Warehouse Connections.
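
    As an illustration, a Synapse external table is defined over files in external storage and can then be queried like an ordinary table. The T-SQL below is a minimal sketch; the table, data source, and file format names are hypothetical:

    CREATE EXTERNAL TABLE dbo.SalesExt (
        sale_id INT,
        amount DECIMAL(10, 2)
    )
    WITH (
        LOCATION = '/sales/',
        DATA_SOURCE = sales_source,
        FILE_FORMAT = parquet_format
    );

    SELECT TOP 10 * FROM dbo.SalesExt;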

Install:

  • Support for Dockerized installs for AWS and Azure deployments.

    Note

    You cannot upgrade from a Dockerized on-premises installation to a Dockerized AWS or Azure installation. You must perform a fresh install.

    For more information, see Install for Docker.

Changes in System Behavior

New Azure Key Vault secret permission:

If you are deploying the Designer Cloud Powered by Trifacta platform into Azure and are using an Azure Key Vault, the Recover secret permission is now strongly recommended. Soon, this secret permission will be required for all Azure key vaults. For more information, see Configure Azure Key Vault.
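
For example, if your key vault uses access policies, the Recover permission can be granted with the Azure CLI alongside your existing secret permissions. This is a minimal sketch; the vault name and service principal ID are hypothetical:

az keyvault set-policy --name my-trifacta-vault --spn <application-id> --secret-permissions get list set delete recover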

Flow collaborators can now edit custom SQL:

Collaborators on a flow who have the flow editor permission can now edit any custom SQL used in importing datasets into the flow.

Nginx:

  • Upgraded to Nginx 1.20.1.

Deprecated

Planned deprecation

In a subsequent release of Designer Cloud Powered by Trifacta Enterprise Edition:

  • Support for Java 8 will be deprecated. Customers must migrate to using Java 11. Additional instructions will be provided.

    Tip

    For Release 8.7.1, Java 11 is supported at runtime only.

  • Java 11 requires Spark 3.x. When Java 8 is deprecated, support for Spark 2.x will be deprecated. Customers must migrate to using Spark 3.x. Additional instructions will be provided.

  • These changes have the following implications:

    • Cloudera clusters do not support Spark 3.x. Customers using these running environments must migrate to Cloudera Data Platform.

    • Some deployments of EMR can migrate to using Spark 3.x in this release. For more information, see Configure for EMR.

    • Some deployments of Databricks can migrate to using Spark 3.x in this release. For more information, see the Databricks configuration documentation.

  • For more information on these changes, please contact Alteryx Support.

Key Bug Fixes

  • TD-72498: UserId is not inserted with queries in DB logs when configured to do so. For more information, see Configure Connectivity.

  • TD-70522: Cannot import converted files such as Excel, PDF, or JSON through SFTP connections.

  • TD-69652: Creating a parameterized version of a dataset from an External S3 connection fails with Access Denied (403).

  • TD-69201: Vulnerability scan detected compromised versions of log4j in the Trifacta Hadoop dependency JARs.

  • TD-69052: Job fails on Spark when using parameterized files as input.

  • TD-69004: Patched httpd to version 2.4.54.

  • TD-68085: Designer Cloud unavailable due to an update lock on plantasksnapshotruns.

  • TD-67953: Removed log4j dependencies from Java projects.

  • TD-67747: CVE-2021-44832: Apache Log4j2 vulnerable to RCE via JDBC Appender when attacker controls configuration.

  • TD-67558: CVE-2021-45105: Log4j vulnerability (denial of service).

  • TD-67531: Glue jobs not working after upgrade to Release 8.2.

  • TD-67455: CVE-2021-45046: Log4j vulnerability.

  • TD-67410: CVE-2021-23017: Nginx 1.20.0 security vulnerability.

  • TD-67372: Patched/updated Log4j (RCE 0-day exploit found in log4j).

  • TD-66779: Output home directory is not picked up correctly for job runs in WASB/ADLS Gen2.

  • TD-66160: SSLHandshakeException when accessing a Databricks table.

  • TD-65331: Writing to ADLS fails during SSL handshake over TLSv1.1.

New Known Issues

  • TD-74648: The Trifacta node filesystem fills up with Databricks executables and logs.

    Tip

    You should periodically archive the contents of /opt/trifacta/logs. Set your scheduled task to remove log files that are more than about a week old.
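
    For example, a daily cron entry on the Trifacta node could prune week-old logs. This is a minimal sketch; the path matches the tip above, but adjust the file pattern and retention period to your own archival policy:

    # Every day at 02:00, delete *.log files older than 7 days
    0 2 * * * find /opt/trifacta/logs -name "*.log" -mtime +7 -delete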

  • TD-68499: A MySQL driver JAR that is not part of the install package causes MySQL connections to fail.

    Tip

    This JAR file cannot be packaged with the install package. You can license and download it for free. For more information, see MySQL Connections.

Release 8.7

November 12, 2021

What's New

Databases:

  • Support for PostgreSQL 11 on Azure.

    Note

    PostgreSQL 12 is not supported in Azure at this time. Please install PostgreSQL 11. You can modify the installation commands for PostgreSQL 12 referenced in the database installation documentation to use PostgreSQL 11. For more information, see Install Databases.

  • Support for PostgreSQL 12 for all other deployments.

    Note

    Support for PostgreSQL 9.6 has been deprecated. Unless you are installing the Designer Cloud Powered by Trifacta platform in Azure, please install PostgreSQL 12. Azure deployments require PostgreSQL 11 for this release.

  • See Product Support Matrix.

Browsers:

  • Update to supported browsers:

    • Mozilla Firefox is generally supported.

    • Microsoft Edge is now supported.

      Note

      This feature is in Beta release.

    • Newer versions of supported browsers are also supported.

    • For more information, see Browser Requirements.

Plans:

  • Create plan tasks to deliver messages to a specified Slack channel.

    For more information, see Create Slack Task.

Single Sign-On:

  • If using SAML integration, you can now configure the security algorithm for the Trifacta Application to use. See Configure SSO for SAML.

Connectivity:

  • Access to S3 is now managed using the native AWS SDK.

  • Append Alteryx user identifiers to SQL queries to enable auditing through your database logs. For more information, see Configure Connectivity.
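
    For example, with this option enabled, a query issued on behalf of a user would carry that user's identifier where your database logs can capture it. The comment format shown is purely illustrative, not the platform's actual syntax:

    SELECT * FROM orders /* user: jdoe@example.com */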

Azure:

  • Support for installation on Azure Gov Cloud.

    Note

    The Azure environment must be set to US_GOV.

    For more information, see Configure for Azure.

Databricks:

  • Support for Databricks 7.x and 8.x.

    Note

    Databricks 7.3 and Databricks 8.3 are recommended.

  • Support for Databricks cluster creation via cluster policies.

  • Store a user-defined set of secret information such as credentials in Databricks Secrets.

Performance:

  • Improved performance when previewing or performing quick scan samples on Parquet files.

    Note

    This feature may require enabling in your environment.

    See Configure Photon Running Environment.

Trifacta node:

  • NodeJS upgraded to 14.17.5.

Changes in System Behavior

None.

Deprecated

HDInsight no longer supported:

  • HDInsight 3.5 and 3.6 are no longer supported.

HDP 3.0 deprecated:

  • Please upgrade to HDP 3.1.

    Note

    In a future release, support for Hortonworks Data Platform (HDP) will be deprecated. Please migrate to using a different supported running environment. For more information, see Product Support Matrix.

API:

  • The deprecated API endpoint for transferring assets between users has been removed from the platform. This endpoint was previously replaced by an improved method of transfer.

  • Some connection-related endpoints have been deprecated. These endpoints have little value for public use.

  • For more information, see Changes to the APIs.

Key Bug Fixes

  • TD-65753: Some platform services do not bind to localhost only.

  • TD-65502: Datasets from parameters are improperly permitted to be referenced in recipes, which causes an error during job execution.

  • TD-63953: HYPER format publishing jobs remain in a queued state on Azure.

  • TD-63564: Schedules created by a flow collaborator with editor access stop working if the collaborator is removed from the flow. Collaborators with viewer access cannot create schedules.

New Known Issues

  • TD-63974: In imported datasets sourced from CSV files, double quotes escaped with a backslash (\"backslash-escaped value\") can cause the remainder of the row to be compressed into a single cell.

  • TD-63517: Unpivoting a String column preserves null values in Spark but converts them to empty strings in Photon. Running the same job in the two running environments generates different results.

Tip

After the unpivot step, you can add an Edit with formula step. Set the columns to all of the columns in the unpivot and add the following formula, which converts all missing values to null values:

if(ismissing($col),NULL(),$col)

Release 8.6

August 2, 2021

What's New

Performance:

  • Conversion jobs are now processed asynchronously.

  • Better management of file locking and concurrency during job execution.

    This feature must be enabled in your environment and requires installation of a Redis server. See Configure for Redis.

Better Handling of JSON files

The Trifacta Application now supports regularly formatted JSON files during import. You can now import flat JSON records contained in a single array object; each record in the array is treated as a single line and imported as a new row. For more information, see Working with JSON v2.
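
For example, a file whose contents are a single array of flat records imports as three rows (sample data):

[
  {"id": 1, "name": "alpha"},
  {"id": 2, "name": "beta"},
  {"id": 3, "name": "gamma"}
]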

Changes in System Behavior

Compression scheme for imported files can now be inferred

Prior to this release, when a file was imported, the Trifacta Application inferred any compression applied to the file from its filename extension. For example, if the filename ended with .gz, the file was passed through the internal code for decompressing GZIP files.

Beginning in this release, the Trifacta Application can be configured to detect the applied compression by reading the first few bytes of the file. Based on this data signature, the application passes the file to the appropriate decompression code.

For more information on enabling this feature, see Miscellaneous Configuration.
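
As a sketch of the general technique (not the platform's internal implementation), the following Python detects common compression formats by their leading byte signatures rather than by extension:

# Sketch: infer compression from a file's magic bytes instead of its extension.
def detect_compression(path):
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic[:2] == b"\x1f\x8b":          # GZIP signature
        return "gzip"
    if magic[:3] == b"BZh":               # BZIP2 signature
        return "bzip2"
    if magic == b"\x28\xb5\x2f\xfd":      # Zstandard signature
        return "zstd"
    return "none"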

Deprecated

None.

Key Bug Fixes

None.

New Known Issues

  • TD-63564: Schedules created by a flow collaborator with editor access stop working if the collaborator is removed from the flow. Collaborators with viewer access cannot create schedules.

    Tip

    Flow owners can delete the schedule and create a new one. When this issue is fixed, the original schedule will continue to be executed under the flow owner's account.

Release 8.5

June 28, 2021

What's New

Parameterization:

  • Create environment parameters to ensure that all users of the project or workspace use consistent references.

    Note

    You must be a workspace administrator or project owner to create environment parameters.

    Note

    Environment parameters can be exported from one project or workspace and imported into another, so that these references are consistent across the enterprise.

  • Parameterize names of your storage buckets using environment parameters.

Job execution:

  • Define SQL scripts to execute before data ingestion or after publication for file-based or table-based jobs.

Connectivity:

  • Connect to your relational database systems hosted on Amazon RDS. In the Connections page, click the Amazon RDS card for your connection type.

    For more information, see Create Connection Window.

Developer:

Download and install the Python SDK, which enables you to leverage the visual tools of the Trifacta Application to transform data in your existing Python pipelines.

Note

This is an Alpha release. Do not deploy the Python SDK in production environments.

For more information, see Python SDK.

Job execution:

You can choose to ignore recipe errors before job execution and then review any errors in the recipe through the Job Details page.

Language:

  • The NUMVALUE function converts a String value formatted as a number into an Integer or Decimal value.

  • The NUMFORMAT function now supports configurable grouping and decimal separators for localizing numeric values. See the illustration after this list.

  • For more information, see Changes to the Language.
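
A minimal illustration of how these functions might be used in a formula step (the format string and results shown are assumptions; consult Changes to the Language for exact signatures and format tokens):

NUMVALUE('1,234.56')                 returns the Decimal value 1234.56
NUMFORMAT(1234567.891, '#,##0.00')   returns '1,234,567.89'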

Changes in System Behavior

None.

Key Bug Fixes

None.

New Known Issues

None.

Release 8.4

May 24, 2021

What's New

Collaboration:

  • You can receive email notifications whenever a plan or a flow is shared with you by the owner.

Changes in System Behavior

None.

Key Bug Fixes

  • TD-60881: Incorrect file path and missing file extension in the application for parameterized outputs.

  • TD-60382: Date format M/d/yy is handled differently by the PARSEDATE function on Trifacta Photon and Spark.

New Known Issues

None.

Release 8.3

April 26, 2021

What's New

Connectivity:

You can publish results to external S3 buckets using an access key and secret key. For more information, see External S3 Connections.

Job execution:

Introducing new filter pushdowns to optimize the performance of your flows during job execution. For more information, see Flow Optimization Settings Dialog.
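
Conceptually, a filter pushdown moves an early recipe filter into the source query so that less data is ingested before transformation. The SQL below is a hypothetical illustration with an invented table and filter:

-- Without pushdown: the full table is ingested and filtered in the recipe
SELECT * FROM sales;

-- With filter pushdown: the recipe's filter becomes part of the source query
SELECT * FROM sales WHERE region = 'EMEA';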

Job results:

You can now preview job results and download them from the Overview tab of the Job details page. For more information, see Job Details Page.

Tip

You can also preview job results in Flow View. See View for Outputs.

Changes in System Behavior

Improved method of JSON import

Beginning in this release, the Trifacta Application now uses the conversion service to ingest JSON files during import. This improved method of ingestion can save significant time wrangling JSON into records.

Note

The new method of JSON import is enabled by default but can be disabled as needed.

For more information, see Working with JSON v2.

Flows that use imported datasets created using the old method continue to work without modification.

Note

It is likely that support for the v1 version of JSON import will be deprecated in a future release. You should switch to the v2 version as soon as possible; future work on JSON support is targeted at the v2 version only. For more information on using the old version and migrating your flows and datasets to the new one, see Working with JSON v1.

Key Bug Fixes

  • TD-60701: Most non-ASCII characters are incorrectly represented in visual profiles downloaded in PDF format.

  • TD-60660: Azure SSO redirects to the Home page instead of the target page after login.

  • TD-59854: Datetime columns from Parquet files are inferred to the wrong data type on import.

New Known Issues

None.