Outdated release! Latest docs are Release 8.7: Past Releases
Release 6.4

What’s New

  • Transform by Example
  • Macros
  • Snowflake Connector
  • Output Parameterization
  • And much more!

Transform By Example

Transform by Example expands the native, guided step creation in Trifacta Self-Managed Enterprise Edition. For any existing column value, you can type out the desired output value, and Trifacta Self-Managed Enterprise Edition assembles a program in the background to get you there: 

Figure: Transform by Example

After entering the example on the first row, Trifacta Self-Managed Enterprise Edition infers the kind of transformation you're trying to do. It applies this transformation to your input column and provides a preview of what your data will look like after the step is saved. If you're not satisfied with the prediction, you can add more examples for different input records to fine-tune the transformation. You can toggle between the full column view and a pattern view that shows the output for each of the pattern groups present in that column. When satisfied with the results, you can add the transformation to your recipe, which can be executed at scale on your full dataset.

For more information, see Overview of TBE.

Macros

Macros provide a reusable way to accomplish repetitive or common tasks in Trifacta Self-Managed Enterprise Edition. In the example shown below, we use three steps to create a macro that removes outliers. Here are the steps bundled into the macro:

  1. Create a column of the standard deviations,
  2. Create a column of the mean, and
  3. Create a formula to flag outliers based on whether the value falls more than 3.5 standard deviations from the mean.
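The three steps above boil down to a simple computation. As an illustration only (this is a plain-Python sketch of the macro's logic, not Trifacta's Wrangle syntax), the combined effect looks like this:

```python
def flag_outliers(values, threshold=3.5):
    """Flag values that fall more than `threshold` standard deviations
    from the mean, mirroring the three macro steps: compute the standard
    deviation, compute the mean, then apply the flag formula."""
    n = len(values)
    mean = sum(values) / n                                   # step 2: mean
    stdev = (sum((v - mean) ** 2 for v in values) / n) ** 0.5  # step 1: stddev
    return [abs(v - mean) > threshold * stdev for v in values]  # step 3: flag
```

In the macro, the input column is the parameter; here it corresponds to the `values` argument, so the same logic can be reused against any numeric column.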

Below, we create a macro out of these three steps, with the original column as a parameter that can be changed from recipe to recipe. Rather than create these three steps from scratch, or manually apply them from a separate recipe, we can locate the macro in our library of macros directly from the Transformer page to reduce the busy work:

Figure: Create a Macro

As needed, you can inspect a macro to see the underlying steps and verify the correct behavior. You can also parameterize values in the macros, such as columns, numbers, strings, patterns, booleans, and more. If you need to tweak any step in a macro, you can convert the macro back to the original set of discrete steps and modify them.

Reusing a macro is easy; select it and enter the needed parameter values:

Figure: Apply a Macro

For more information, see Overview of Macros.

Snowflake Connector

This release includes a connector to Snowflake. Read data from Snowflake, wrangle in Trifacta Self-Managed Enterprise Edition, and publish the results back to Snowflake. For more information, see Enable Snowflake Connections.

Parameterized Output

You can now add parameters and variables to your output file paths. For example, the following appends the timestamp of the job execution time to the output filename:

Figure: Parameterized outputs
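Outside the product, the effect of a timestamped output parameter can be sketched in a few lines of Python (illustrative only; the function name and path layout are assumptions, not Trifacta syntax):

```python
from datetime import datetime

def timestamped_output_path(directory: str, name: str, ext: str) -> str:
    """Build an output path with the job execution timestamp appended
    to the filename, e.g. /output/transactions_20190401-120000.csv."""
    ts = datetime.now().strftime("%Y%m%d-%H%M%S")
    return f"{directory}/{name}_{ts}.{ext}"
```

Because the timestamp is resolved at job execution time, each run writes to a distinct file instead of overwriting the previous output.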

For more information, see Overview of Parameterization.

Release 6.0

What’s New

  • Cluster Clean

  • Selection Model

  • Job Monitoring

  • Metadata References

  • Relational Publishing

  • And Much More!

Cluster Clean

Standardizing values is a way of grouping similar values into a single, consistent format. With Cluster Clean, Trifacta® Self-Managed Enterprise Edition gives users access to multiple algorithms for grouping values and easy-to-use tools for standardizing to a single value.

The Cluster Clean menu presents two clustering options: by string similarity and by pronunciation. String similarity compares strings across all values in the column and clusters them using either the fingerprint or fingerprint n-gram algorithm. You can see this in the following example:


The pronunciation option uses the double metaphone algorithm to compare values across languages by how they sound. You can see this in action below. Which clustering algorithm to use depends on the scenario, and the Cluster Clean feature gives you the flexibility to choose based on the context you have.
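To make the string-similarity approach concrete, here is a minimal Python sketch of fingerprint keying, the general technique behind this kind of clustering (a simplified illustration, not Trifacta's exact implementation). Values that produce the same key land in the same cluster:

```python
import re
import unicodedata

def fingerprint(value: str) -> str:
    """Compute a fingerprint clustering key: normalize accents and case,
    strip punctuation, then sort and de-duplicate the tokens so that
    reorderings and formatting differences collapse to one key."""
    s = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
    s = re.sub(r"[^\w\s]", "", s.lower()).strip()
    tokens = sorted(set(s.split()))
    return " ".join(tokens)
```

For example, "New York" and "york, NEW" share the key "new york", so they cluster together; the n-gram variant applies the same idea to character n-grams instead of whole tokens, which also catches misspellings.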

Tip: You can mix-and-match algorithms. Some values may be standardized using spelling, while others are more sensibly standardized based on international pronunciation standards.

Below, some values are still highlighted from the string similarity example:


For more information, see Overview of Standardization.

Selection Model

The enhanced Selection Model makes for quicker and more intuitive interactions within the Transformation Grid. Selecting a column now gives users a more complete profile of the column. Additionally, users now have quicker access points to more detailed profiling information depending on the column’s data type. For instance, a date column will give users options to explore the distributions of values in terms of years, months, days of the week, etc. Excluding weekends, as an example, now only requires a few interactions with the profile:


Likewise, cleaning up issues in columns with multiple date formats can be quickly addressed by exploring and interacting with Patterns:


The enhanced Selection Model enables similar interactions as those in the Columns View. You can now copy and paste columns and column values:


You can also perform multi-column selection in the Transformer Grid, which updates suggestions based on the context and works with the Toolbar, allowing for quick and easy multi-column transformations:

For more information, see Selection Details Panel.

Job Monitoring

The 6.0 Enterprise release also includes enhancements to the Job Details page. This redesigned page now includes the following tabs:

  • Overview - A summary page of the job run

  • Output Destinations - Information on the output datasets, with download and publishing options

  • Profile - Overview of profiling information such as missing values, column distributions, etc.

  • Dependencies - An audit trail of the recipes and steps involved in the job run

  • Data sources - Information on the datasets used to create the job output

  • Parameters - An optional screen that lists any parameters used to create the data sources



For flows using parameters in the input, you will see the following information:



For more information, see Job Details Page.

Metadata References

With new metadata references, users can now reference the source file path and the source row number in Trifacta Self-Managed Enterprise Edition using the following references: $filepath and $sourcerownumber. This gives users access to lineage at both the source and record level in their data, improving governance and insight into changes made to the data. See Source Metadata References.
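The effect of these references can be sketched outside the product. The following Python illustration (the function name and column names are ours, not Trifacta syntax) attaches the source path and a 1-based source row number to each parsed record:

```python
import csv
import io

def add_source_metadata(csv_text: str, path: str):
    """Parse CSV text and attach source-lineage fields to each record,
    analogous to the $filepath and $sourcerownumber references."""
    records = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader, start=1):  # 1-based source row number
        row["$filepath"] = path
        row["$sourcerownumber"] = i
        records.append(row)
    return records
```

With these fields carried through downstream steps, any output record can be traced back to the exact file and row it came from.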

Relational Publishing

Publishing back to relational databases is now supported. Connections to Oracle, SQL Server, PostgreSQL, or Teradata automatically support the ability to write your results back to the database.

NOTE: You cannot disable relational publishing for platform-native connections: Oracle, SQL Server, PostgreSQL, or Teradata. Please verify that all user accounts accessing databases of these types have appropriate permissions.

The following connection types are natively supported for relational publishing: Oracle, SQL Server, PostgreSQL, and Teradata.

Relational publishing can be enabled for other relational connection types. See Connection Types.

These are just the highlights of this release. To see all of what’s new in the 6.0 Enterprise release, please see Release Notes 6.0.

