Release 4.2.1

This release includes numerous bug fixes, support for new distributions, and new capabilities, such as the option to disable initial type inference on schematized sources.

What's New

Import:

  • Enable or disable initial type inference for schematized sources at global or individual connection level, or for individual dataset sources. See Configure Type Inference.

Publishing:

Install, Config & Admin:

Changes to System Behavior

None.

 

Key Bug Fixes

Ticket      Description
TD-27799    DATEDIF function does not work for inputs that are functions returning date values.
TD-27703    Spark job fails with scala.MatchError.
TD-24121    When publishing multi-part files, different permissions are written to the parent directory when the job was executed on Hadoop or Photon.

New Known Issues

TD-27950 (Transformer Page - Tools)

When you join with an imported dataset that is not in your flow and it takes longer than expected to collect its initial sample, you may encounter the following error: "Cannot join. Dataset is broken."

Workaround: Create a recipe from the imported dataset and then join to the recipe, which is the preferred method of joining. For more information, see Join Page.

TD-27784 (Installer/Upgrader/Utilities)

Ubuntu 16 install for Azure: supervisord complains about "missing" Python packages.

Workaround: These packages are present but lack the appropriate permissions. A workaround is documented as part of the installation and configuration process. For more information, see "Workaround for missing Python packages" in Configure for Azure.

Release 4.2

This release introduces deployment management, which enables separation of development and production flows and their related jobs. Develop your flows in a Dev environment and, when ready, push them to Prod, where they can be versioned and triggered for production execution. Additionally, you can create and manage all of your connections through the new Connections page. A revamped Flow View streamlines object interactions and now supports starting and stopping jobs without leaving the view.

  • Release 4.2 also supports installation of the platform on Amazon EC2 instances and integration with EMR as well as installation for Microsoft Azure.

Details are below. 

What's New

Deployment Management:

Workspace:

Transformer Page:

  • Perform cross joins between datasets. See Join Page.
  • Cut, copy, and paste columns and column values. See Column Browser Panel.
  • Rename multiple columns in a single transformation step. See Rename Columns.
  • In Column Details, you can select a phone number or date pattern to generate suggestions for standardizing the values in the column to a single format. See Column Details Panel.

Personalization:

Install/Admin/Config:

Integration:

Language:

  • New string comparison functions. 
  • New SUBSTITUTE function replaces string literals or patterns with a new literal or column value (see the sketch after this list).
  • See Changes to the Language.
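
As a rough illustration of SUBSTITUTE, the step below replaces a literal token in a column and writes the result to a new column. This is only a sketch: the recipe step form (derive ... value: ... as: ...) and the column name order_status are assumptions for illustration; see Changes to the Language for the authoritative syntax and argument list.

    derive value: SUBSTITUTE(order_status, 'N/A', '') as: 'order_status_clean'

Per the release note above, the second argument can also be a pattern rather than a string literal, and the replacement can reference another column.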

Import:

Performance:

  • Improved performance when initializing jobs and in Flow View for complex flows.

Changes to System Behavior

New session duration parameter and default value

For technical reasons, the name and default value of the following parameter have been changed in Release 4.2.

Affected Releases            Parameter Name                      Default Value (minutes)    Max Value (minutes)
Release 4.2 and later        webapp.session.DurationInMins       10080 (one week)           30000
Release 4.1.1 and earlier    webapp.session.DurationInMinutes    43200 (one month)          30000

NOTE: For upgrading customers, the new configuration setting is automatically set to the default of 10080 minutes (one week). Adjust the value as needed.

For more information on changing this parameter value, see Configure Application Limits.
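
As a minimal sketch of making this change, assuming your deployment exposes the setting through platform configuration (for example, trifacta-conf.json or the Admin Settings page; the exact location depends on your install), the entry would look something like the following. The flat key representation is illustrative only.

    "webapp.session.DurationInMins": 10080

Depending on your deployment, a platform restart may be required for the change to take effect. See Configure Application Limits for the supported procedure.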

/docs endpoint is removed

In Release 4.0, the /docs endpoint was deprecated. This endpoint displayed a documentation page containing information on the Wrangle language, the command line interface, and Alteryx patterns.

In Release 4.2, this endpoint has been removed from the platform. Its content has been superseded by other areas of the product documentation.

For more information on features that have been deprecated or removed, see End of Life and Deprecated Features.

s3n is no longer supported

If you are integrating with S3 sources, the platform now requires use of the s3a protocol. The s3n protocol is no longer supported.

No configuration changes in the Designer Cloud Powered by Trifacta platform are needed. See Enable S3 Access.
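
As an illustration (bucket and object names below are placeholders), any source or publishing location that was previously referenced with the s3n scheme should now be referenced with s3a:

    s3n://example-bucket/landing/orders.csv     (no longer supported)
    s3a://example-bucket/landing/orders.csv     (required)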

Key Bug Fixes

Ticket      Description
TD-27748    Direct publish to Hive fails on wide datasets due to Avro limitations.
TD-27368    SQL Server Database timing out with long load times.
TD-27197    Column histogram does not update after adding pluck parameter to unnest transform.
TD-27127    Send a Copy tab in Flow View sharing does not include all available users.
TD-27055    Job run on flow with complex recipes fails on Hadoop but succeeds on Photon.
TD-26837    Creating custom dictionaries fails on S3 backend datastore.
TD-26388    Orphaned bzip2 processes owned by the platform user accumulate on the node.
TD-26041    When editing a schedule that was set for 0 minutes after the hour, the schedule is displayed to execute at 15 minutes after the hour.
TD-25903    Overflow error when ROUND function is applied to large values.
TD-25733    Attempting a union of 12 datasets crashes UI.
TD-25709    Spark jobs fail if HDFS path includes commas.

New Known Issues

TD-27799 (Compilation/Execution)

DATEDIF function does not work for inputs that are functions returning date values.

Workaround: Write the function returning your date values to a new column. Then, apply the DATEDIF function using that column as a new input. A sketch of this workaround appears after this list.

TD-27703 (Compilation/Execution)

Spark job fails with scala.MatchError.

TD-26069 (Compilation/Execution)

Photon evaluates DATE(yr, month, 0) as the first date of the previous month. It should return a null value.

TD-24121 (Compilation/Execution)

When publishing multi-part files, different permissions are written to the parent directory when the job was executed on Hadoop or Photon.
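
To illustrate the TD-27799 workaround noted above: rather than passing function results directly into DATEDIF, first write the dates to columns, then run DATEDIF against those columns. The recipe step form (derive ... value: ... as: ...) and the column names are assumptions for illustration; adjust to your own recipe.

A step like this fails per TD-27799, because the DATEDIF inputs are functions returning dates:

    derive value: DATEDIF(DATE(2017, 1, 1), DATE(2017, 12, 31), day) as: 'days'

Writing the dates to columns first, then comparing the columns, avoids the issue:

    derive value: DATE(2017, 1, 1) as: 'start_date'
    derive value: DATE(2017, 12, 31) as: 'end_date'
    derive value: DATEDIF(start_date, end_date, day) as: 'days'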
