
Release 6.8

December 6, 2019

What's New

  • Date formatting options and locale
  • Improvements to RapidTarget
  • Create Table task
  • Beta support for Mozilla Firefox 68
  • Macro Import/Export
  • Webhooks

Date formatting options and locale

Release 6.8 expands the formatting options for date columns and now factors in user-defined locale settings.

When choosing a date format, the new drop-down menu simplifies navigation and search. Start typing your preferred formatting string, and matching suggestions are listed for you.

Figure: New Datetime format dialog

This list is ordered based on your locale settings. For example, European date formats favor dd-mm-yyyy, so if you have configured a European locale, those options surface higher in the list.

Locale settings also influence the suggestions presented to you by the predictive interaction service. Locale can be configured for all users in the workspace, and individual users can override these settings in their User Profile page. For example, a user in Europe may choose to override the default locale to select United States when working with a dataset sourced from the United States.
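As a rough illustration of how locale changes which format string applies, here is how the same date renders under a US-style and a European-style format string (the format strings below are examples, not the product's own settings):

```python
from datetime import date

d = date(2019, 12, 6)

# Illustrative format strings a US vs. a European locale might favor.
us_format = "%m/%d/%Y"   # mm/dd/yyyy
eu_format = "%d-%m-%Y"   # dd-mm-yyyy

print(d.strftime(us_format))  # 12/06/2019
print(d.strftime(eu_format))  # 06-12-2019
```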

Improvements to RapidTarget

This release delivers improvements to the matching performance and logic of RapidTarget, which enables quick matching of your dataset to the target schema of your choice. RapidTarget can now auto-match based on the data in a column using fuzzy matching logic, in addition to the column's name.

Figure: Global fuzzy matching
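As a toy sketch of the fuzzy-name side of this matching (the product's actual algorithm is not documented here, and it also matches on column data), a similarity ratio over column names might look like:

```python
from difflib import SequenceMatcher

def best_match(source_col, target_cols, threshold=0.6):
    # Score each target column name by string similarity to the source name
    # and return the best-scoring match above the threshold, if any.
    scored = [(SequenceMatcher(None, source_col.lower(), t.lower()).ratio(), t)
              for t in target_cols]
    score, name = max(scored)
    return name if score >= threshold else None

print(best_match("cust_name", ["customer_name", "order_id", "amount"]))
# customer_name
```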

Create Table task

The Create Table task allows you to specify the exact columns, and their order, to include in your dataset. In a single recipe step, you list, in order, the columns that you want to keep going forward; the remaining columns are dropped. This step works much like a SELECT statement in SQL to filter your data.

You can also insert literal values and functions as columns in Create Table.

Figure: Create Table replaces all columns in your dataset with only the ones you want
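The SELECT-like behavior can be sketched in plain code: project the listed columns in order, drop the rest, and optionally append literal-valued columns (the helper below is hypothetical, not the product's API):

```python
# Rows as dicts; keep only the listed columns, in order, and add a literal.
rows = [
    {"id": 1, "name": "Ada", "city": "Paris", "notes": "x"},
    {"id": 2, "name": "Lin", "city": "Oslo", "notes": "y"},
]

def create_table(rows, columns, literals=None):
    # Project the named columns in the given order and append any
    # literal-valued columns; all other columns are dropped.
    literals = literals or {}
    return [{**{c: r[c] for c in columns}, **literals} for r in rows]

out = create_table(rows, ["name", "id"], literals={"source": "crm"})
print(out[0])  # {'name': 'Ada', 'id': 1, 'source': 'crm'}
```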

Mozilla Firefox

The web application now works with Mozilla Firefox. This support is in beta.

To download and install version 68 of Mozilla Firefox, visit the Mozilla website.

Macro Import/Export

New in Release 6.8 is the ability to import and export macros. A macro is a sequence of recipe steps that you can reuse in other recipes, simplifying the creation of consistent steps across your recipes.

With the ability to import and export macros, you can maintain this consistency across all of your recipes, create backups and an audit trail for your macros, and build up a repository of commonly used macros, extending the flexibility of the platform's proprietary transformation language.

Figure: Import/Export of macros


Webhooks

Webhooks are a standard messaging method between web applications using HTTP REST protocols. Through the web application, you can now design messages to be delivered to external applications on the success or failure of your jobs. For example, you can design a Webhook message to post to a Slack channel when a scheduled job succeeds.

Figure: Configure a webhook message to a Slack channel

In the example above, a message has been crafted for delivery to a Slack channel that has been configured to receive the Webhook message. Whenever a job is executed from this flow, the contents of the Body textbox appear as a message in the designated Slack channel.
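A minimal sketch of such a Webhook call, assuming a Slack incoming-webhook URL (the payload shape is Slack's simple text message, not the product's configuration format, and the function names are hypothetical):

```python
import json
from urllib import request

def build_payload(job_name, status):
    # Slack's simple incoming-webhook payload: a single "text" field.
    return {"text": f"Job '{job_name}' finished with status: {status}"}

def notify_slack(webhook_url, job_name, status):
    # POST the JSON payload to the Slack incoming-webhook URL.
    req = request.Request(
        webhook_url,
        data=json.dumps(build_payload(job_name, status)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# notify_slack("https://hooks.slack.com/services/...", "daily_sales", "succeeded")
```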

Using Webhooks, you can notify stakeholders, within their favorite applications, that fresh data is ready for use, and broaden general awareness of the product within your organization.

These are just the highlights of this release. To see all of what's new in Release 6.8, please see Release Notes 6.8.

Release 6.4

What’s New

  • Transform by Example
  • Macros
  • Snowflake Connector
  • Output Parameterization
  • And much more!

Transform By Example

Transform by Example expands the native, guided step creation in the product. For any existing column value, you can type out the desired output value, and the application assembles a program in the background to get you there:

Figure: Transform by Example

After entering the example on the first row, the application infers the kind of transformation you're trying to do. It applies this transformation to your input column and provides a preview of what your data will look like after the step is saved. If you're not satisfied with what it predicts, you can add more examples for different input records to fine-tune the transformation. You can toggle between the full column view and a pattern view that shows you the output for each of the pattern groups present in that column. When satisfied with the results, you can add the transformation to your recipe, which can be executed at scale on your full dataset.
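The idea of inferring a transformation from an example can be sketched in miniature: try a set of candidate transforms and keep those consistent with the user's (input, output) pair. Real by-example synthesis searches a far richer program space; the candidates below are made up for illustration:

```python
# Toy inference-by-example: candidate transforms checked against one example.
CANDIDATES = {
    "upper": str.upper,
    "lower": str.lower,
    "first_word": lambda s: s.split()[0],
    "last_word": lambda s: s.split()[-1],
}

def infer(example_in, example_out):
    # Keep every candidate whose output matches the user's example.
    return [name for name, fn in CANDIDATES.items() if fn(example_in) == example_out]

print(infer("Ada Lovelace", "Ada"))           # ['first_word']
print(infer("Ada Lovelace", "ADA LOVELACE"))  # ['upper']
```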

For more information, see Overview of TBE.


Macros

Macros provide a repeatable way to accomplish repetitive or common tasks. In the example shown below, we use three steps to create a macro that removes outliers. Here are the steps bundled up into the macro:

  1. Create a column containing the standard deviation,
  2. Create a column containing the mean, and
  3. Create a formula that flags outliers based on whether the value falls more than 3.5 standard deviations from the mean.

Below, we create a macro out of these three steps, with the original column as a parameter that can be changed from recipe to recipe. Rather than create these three steps from scratch, or manually apply them from a separate recipe, we can locate the macro in our library of macros directly from the Transformer page, reducing busy work:

Figure: Create a Macro
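The three macro steps can be sketched as plain code: compute the standard deviation and mean, then flag values more than 3.5 standard deviations from the mean (the data values below are made up):

```python
from statistics import mean, stdev

def flag_outliers(values, k=3.5):
    # Steps 1-3 of the macro: standard deviation, mean, then flag values
    # farther than k standard deviations from the mean.
    m, s = mean(values), stdev(values)
    return [abs(v - m) > k * s for v in values]

data = [10] * 20 + [95]          # one obvious outlier
print(flag_outliers(data)[-1])   # True: only the last value is flagged
```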

As needed, you can inspect a macro to see the underlying steps and verify the correct behavior. You can also parameterize values in the macros, such as columns, numbers, strings, patterns, booleans, and more. If you need to tweak any step in a macro, you can convert the macro back to the original set of discrete steps and modify them.

Reusing a macro is easy; select it and enter the needed parameter values:

Figure: Apply a Macro

For more information, see Overview of Macros.

Snowflake Connector

This release includes a connector to Snowflake. Read data from Snowflake, wrangle it in the product, and publish the results back to Snowflake. For more information, see Enable Snowflake Connections.

Parameterized Output

You can now add parameters and variables to your output file paths. For example, the following appends the timestamp of the job execution time to the output filename:

Figure: Parameterized outputs
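The timestamp-suffix pattern can be sketched in plain code (the actual parameter syntax is the product's own; the helper below is hypothetical):

```python
from datetime import datetime

def timestamped_path(base_path):
    # Append a job-run timestamp to an output filename, e.g.
    # 'results.csv' -> 'results_20191206_090000.csv'.
    stem, dot, ext = base_path.rpartition(".")
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{stem}_{stamp}{dot}{ext}" if dot else f"{base_path}_{stamp}"

print(timestamped_path("results.csv"))
```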

For more information, see Overview of Parameterization.

Release 6.0

What’s New

  • Cluster Clean
  • Selection Model
  • Job Monitoring
  • Metadata References
  • Relational Publishing
  • And Much More!

Cluster Clean

Standardizing values is a way of grouping similar values into a single, consistent format. With Cluster Clean, the product gives you access to multiple algorithms for grouping values and easy-to-use tools for standardizing to a single value.

The two options presented in the Cluster Clean menu are clustering by string similarity and by pronunciation. String Similarity compares strings against a combination of all values and uses either fingerprint or fingerprint-ngram algorithms to cluster.

The Pronunciation option uses a double-metaphone algorithm to compare values across languages by pronunciation. Which clustering algorithm to use depends on the scenario, but Cluster Clean gives you the flexibility to choose based on the context you have.
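As a rough sketch of the fingerprint idea (lowercase, strip punctuation, sort the unique tokens so near-duplicates share a key; the product's exact algorithm may differ):

```python
import re
from collections import defaultdict

def fingerprint(value):
    # Fingerprint key: lowercase, drop punctuation, sort unique tokens.
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster(values):
    # Group values that share a fingerprint key; return multi-value groups.
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [g for g in groups.values() if len(g) > 1]

print(cluster(["New York", "new york.", "York, New", "Boston"]))
# [['New York', 'new york.', 'York, New']]
```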


Tip: You can mix-and-match algorithms. Some values may be standardized using spelling, while others are more sensibly standardized based on international pronunciation standards.


For more information, see Overview of Standardization.

Selection Model

The enhanced Selection Model makes for quicker and more intuitive interactions within the Transformation Grid. Selecting a column now gives users a more complete profile of the column. Additionally, users now have quicker access points to more detailed profiling information depending on the column’s data type. For instance, a date column will give users options to explore the distributions of values in terms of years, months, days of the week, etc. Excluding weekends, as an example, now only requires a few interactions with the profile:

Likewise, cleaning up issues in columns with multiple date formats can be quickly addressed by exploring and interacting with Patterns:

The enhanced Selection Model enables similar interactions as those in the Columns View. You can now copy and paste columns and column values:

You can also perform multi-column selection in the Transformer Grid, which updates suggestions based on the context and works with the Toolbar, allowing for quick and easy multi-column transformations:

For more information, see Selection Details Panel.

Job Monitoring

The 6.0 Enterprise release also includes enhancements to the Job Details page. This redesigned page now includes the following tabs:

  • Overview - A summary of the job run
  • Output Destinations - Information on the output datasets and the download and publishing page
  • Profile - Overview of profiling information such as missing values, column distributions, etc.
  • Dependencies - An audit trail of the recipes and steps involved in the job run
  • Data sources - Information on the datasets used to create the job output
  • Parameters - An optional screen that lists any parameters used to create the data sources

For flows using parameters in the input, you will see the following information:

For more information, see Job Details Page.

Metadata References

With new metadata references, you can now reference the source file path and the source row number in your data using the following functions: $filepath and $sourcerownumber. This gives you access to lineage at both the source and record level, improving governance and insight into changes made to your data. See Source Metadata References.
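A toy illustration of what $filepath and $sourcerownumber add, in plain code (the helper and its signature are hypothetical; the file content is passed in as a string for illustration):

```python
import csv
import io

def load_with_lineage(path, text):
    # Attach lineage columns analogous to $filepath and $sourcerownumber.
    reader = csv.DictReader(io.StringIO(text))
    return [{**row, "filepath": path, "sourcerownumber": i}
            for i, row in enumerate(reader, start=2)]  # row 1 is the header

rows = load_with_lineage("s3://bucket/sales.csv", "id,amount\n1,10\n2,20\n")
print(rows[0])
# {'id': '1', 'amount': '10', 'filepath': 's3://bucket/sales.csv', 'sourcerownumber': 2}
```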

Relational Publishing

Publishing back to relational databases is now supported. Connections to Oracle, SQL Server, PostgreSQL, or Teradata automatically support writing your results back to the database.


NOTE: You cannot disable relational publishing for platform-native connections: Oracle, SQL Server, PostgreSQL, or Teradata. Please verify that all user accounts accessing databases of these types have appropriate permissions.

The platform-native connection types listed above (Oracle, SQL Server, PostgreSQL, and Teradata) are natively supported for relational publishing. Relational publishing can also be enabled for other relational connection types. See Connection Types.

These are just the highlights of this release. To see all of what’s new in the 6.0 Enterprise release, please see Release Notes 6.0.