The following features have been deprecated or removed from Trifacta® Wrangler Enterprise in recent releases.

Legend:

Status | Description
deprecated | Feature or capability is no longer actively supported. It may still work, but future fixes or enhancements are unlikely.
end of life | Feature or capability has been removed from the product.

 

Release 5.0

Item | Status | Description
aggregate | end of life | The aggregate transform has been removed from the platform. Its functionality has been replaced by configuration in the pivot transform. See Pivot Transform.
CDH 5.11 | deprecated | Please upgrade to CDH 5.14.
EMR 5.7 | deprecated | If you are integrating with an EMR cluster, you must upgrade to an EMR 5.11 cluster.
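For recipes that must be migrated by hand, the change from aggregate to pivot can be sketched in Wrangle roughly as follows. The column names (sales, region) are illustrative, and the exact parameter names may vary by release; see Pivot Transform for the authoritative syntax.

```
aggregate value: SUM(sales) group: region

pivot group: region value: SUM(sales) limit: 1
```

The first step shows the removed aggregate form; the second shows a pivot step intended to produce the equivalent grouped aggregation.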

Release 4.2.1

Item | Status | Description
CDH 5.10 | deprecated | Please upgrade to CDH 5.13.

 

Release 4.2

Item | Status | Description
wrangled dataset | end of life | Wrangled datasets are no longer available in the platform. The actions formerly applied to wrangled datasets are now created through recipes and two new objects: references and outputs. For more information, see Changes to the Object Model.
/docs URL | end of life | This URL from the platform is no longer available.
s3n protocol | end of life | To connect to S3 sources, use of s3n is no longer supported in the platform. You must use s3a. See Enable S3 Access.
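In practice, moving from s3n to s3a means updating bucket URIs from s3n://&lt;bucket&gt;/&lt;path&gt; to s3a://&lt;bucket&gt;/&lt;path&gt; and supplying s3a credentials in the Hadoop configuration. A minimal core-site.xml sketch follows; the property values are placeholders, and your deployment may manage credentials differently, so follow Enable S3 Access for the supported configuration.

```xml
<configuration>
  <!-- S3A filesystem credentials; values below are placeholders -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```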

Release 4.1.1

Item | Status | Description
CDH 5.9 | deprecated | Please upgrade to CDH 5.12.
Single-file CLI publishing option | deprecated | Support for publishing to a single file as part of a CLI run_job action has been deprecated. Please use the external file method of specifying publishing targets. See Changes to the Command Line Interface.

 

Release 4.1

Item | Status | Description
MapR | deprecated | Support for integration with MapR Hadoop clusters has been deprecated. Please install a supported version of Cloudera or Hortonworks. See Install Reference.
CentOS 6.2.x, CentOS 6.3.x | deprecated | Please upgrade to the latest CentOS 6.x release.
CDH 5.8 | deprecated | Please upgrade to CDH 5.11.
HDP 2.4 | deprecated | Please upgrade to HDP 2.6.
Pig running environment | end of life | The Hadoop Pig running environment is no longer available. For running jobs on Hadoop, you must use the Spark running environment, which requires no additional configuration. See Configure Spark Running Environment.
Python UDFs | end of life | Python UDFs can no longer be used in the platform, due to the removal of the Hadoop Pig running environment. All UDFs must be migrated to or authored in Java. See Changes to the User-Defined Functions.
Transform Editor | end of life | The Transform Editor for entering raw text Wrangle steps has been removed. Please use the Transform Builder for creating transformation steps.
Standardize page | end of life | This feature-flag-only capability will be replaced by a better standardization capability in a future release.
standardize transform | end of life | This feature-flag-only capability will be replaced by a better standardization capability in a future release.

Release 4.0.1

Item | Status | Description
MapR | deprecated | Support for integration with MapR Hadoop clusters has been deprecated. Please install a supported version of Cloudera or Hortonworks. See Hadoop Distribution Support.
CDH 5.7 | deprecated | Please upgrade to CDH 5.10.

Release 4.0

Item | Status | Description
MapR | deprecated | Support for integration with MapR Hadoop clusters has been deprecated. Please install a supported version of Cloudera or Hortonworks. See Hadoop Distribution Support.
Hadoop Pig running environment | deprecated | This running environment has been superseded by the Scala-based running environment, which utilizes Spark's in-memory features for faster processing. The Scala-based Spark version is the default running environment for Hadoop environments. See Running Environment Options.
Python UDF | deprecated | Python UDFs are supported only with the Hadoop Pig running environment.
Javascript running environment and profiler | end of life | The original Javascript running environment and profiling engine have been superseded by the Photon running environment, which was introduced in Release 3.2. The Photon running environment is enabled by default for front-end processing. See Running Environment Options.
Hadoop Pig profiler | end of life | This profiling engine has been superseded by the Scala-based Spark profiler, which utilizes Spark's in-memory features for faster processing. The Scala version of the Spark profiler is the default profiling engine for Hadoop environments. See Profiling Options.
Python-based Spark profiler | end of life | In Release 2.7, a Python-based Spark profiler was released. That version of the Spark profiler required Spark to be installed on each node of the cluster, reducing efficiency. This profiling engine has been superseded by the Scala-based Spark profiler, which utilizes Spark's in-memory features for faster processing. The Scala version of the Spark profiler is the default profiling engine for Hadoop environments. See Profiling Options.
/docs URL | deprecated | This URL from the platform points to an in-app page of documentation. All content in this location has been replaced and superseded by product documentation content.
HDP 2.3.2 | deprecated | Please upgrade to HDP 2.4 or 2.5.

 

Release 3.2.1

Item | Status | Description
CDH 5.5/CDH 5.6 | deprecated | Please upgrade to CDH 5.8.

Release 3.2

Item | Status | Description
Aggregate Tool | end of life | The separate page for building aggregations has been replaced by selecting your aggregation parameters through the Transform Builder. See Transform Builder.
Add Sample Rows to Transformer | end of life | In the Job Results page, it was possible to add the displayed sample rows back to the Transformer page as a new sample. With the new object model, this is no longer possible.
MapReduce settings | end of life | Removed from the Admin Settings page, as they are no longer applicable. Support for MapReduce 1 was previously deprecated. See Admin Settings Page.
multisplit transform | end of life | This transform has been replaced by a more flexible version of the split transform. See Split Transform.
Batch Server service | end of life | This internal platform service has been replaced by the Batch Job Runner service.
Monitor service | end of life | This internal platform service has been replaced by the Batch Job Runner service.
Zookeeper | end of life | The Trifacta platform no longer utilizes the Zookeeper service for managing Hadoop Pig jobs. It has been replaced by the Batch Job Runner service. NOTE: You may notice references to Zookeeper in configuration blocks or in the interface. These references will be removed in a future release.
Support for CDH 5.3/5.4 | deprecated | Please upgrade to CDH 5.8.
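As a rough sketch of the multisplit-to-split migration (the column name, delimiter, and parameter names here are illustrative; see Split Transform for the exact syntax in your release), a step that split a column into several fields can typically be rewritten as a single split step with a limit:

```
split col: full_address on: ',' limit: 3
```

This produces up to four columns from full_address, one per delimiter occurrence up to the limit, which covers the common multisplit use case.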

Release 3.1.2

None.

Release 3.1.1

Item | Description
Support for HDP 2.2 | Please upgrade to HDP 2.3 or HDP 2.4.

Release 3.1

Item | Description
Support for HDP 2.1.1 | Please upgrade to HDP 2.3.
Support for CDH 5.2 | Please upgrade to CDH 5.3, 5.4, 5.5, or 5.6.
Support for MapR 4.1 | Please upgrade to MapR 5.1.

Replacements:

NOTE: These features may be deprecated in the future in favor of their replacements.

 

  • The new if function is designed to replace the ternary construct.
  • The new Spark Profiler on Scala is the default Spark profiler.
    • The old version, which required Spark, is still available for specific scenarios; we recommend using the new Spark Profiler on Scala.
    • See Configure Spark Profiler.
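To illustrate the ternary replacement noted above, here is a hedged Wrangle sketch; the column name, values, and derive parameters are illustrative rather than taken from product documentation. The first step uses the ternary construct, and the second uses the if function:

```
derive value: (state == 'CA') ? 'west' : 'other' as: 'coast'

derive value: if(state == 'CA', 'west', 'other') as: 'coast'
```

Both steps are intended to produce the same coast column; only the conditional syntax differs.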

Release 3.0

None.

Release 2.7

Item | Description
multiextract transform | Transform was removed in favor of the new multisplit transform.
Support for MapReduce 1 | MapReduce 1 has been superseded by MapReduce 2, also known as YARN.
WebHCat for publishing | For publishing to HCatalog, WebHCat has been replaced by publication through Hive.

For more information, please see Trifacta Support.

 
