...
Support for Java 8 will be deprecated. Customers must migrate to Java 11. Additional instructions will be provided.
Tip: For Release 9.2, Java 11 is supported at runtime only.
- Java 11 requires Spark 3.x. When Java 8 is deprecated, support for Spark 2.x will also be deprecated. Customers must migrate to Spark 3.x. Additional instructions will be provided.
- These changes have the following implications:
- Cloudera clusters do not support Spark 3.x. Customers using these running environments must migrate to Cloudera Data Platform.
- Some deployments of EMR can migrate to Spark 3.x in this release. For more information, see Configure for EMR.
- Some deployments of Databricks can migrate to Spark 3.x in this release.
- For more information on these changes, please contact Support.
Key Bug Fixes
Ticket | Description |
---|---|
TD-70522 | Cannot import converted files such as Excel, PDF, or JSON through SFTP connections. |
TD-69279 | Test Connection button fails with a ValidationFailed error when editing a working connection configured with SSH tunneling. |
TD-69004 | Patch httpd to version 2.4.54. |
TD-66185 | Flatten transformation cannot handle multi-character delimiters. |
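TD-66185 above concerned the Flatten transformation's handling of delimiters longer than one character. As a rough illustration of the fixed behavior (a hypothetical helper for this note, not the product's own API), splitting a cell on a multi-character delimiter looks like:

```python
def flatten_cell(value: str, delimiter: str) -> list[str]:
    """Split a single cell value into one entry per output row,
    using a delimiter that may be more than one character long."""
    return value.split(delimiter)

# A multi-character delimiter such as "||" is now split correctly:
print(flatten_cell("red||green||blue", "||"))  # ['red', 'green', 'blue']
```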
New Known Issues
None.
Release 9.1
...
- For more information, see Configure for AWS Databricks.
- For more information, see Configure for Azure Databricks.
Job execution:
...
NOTE: Support for Databricks 5.5 LTS has been deprecated. For more information, see End of Life and Deprecated Features.
Key Bug Fixes
Ticket | Description |
---|---|
TD-60881 | For ADLS datasets, parameter indicators in Flow View are shifted by one character. |
New Known Issues
None.
Release 9.0
...
Deprecated
None.
Key Bug Fixes
Ticket | Description |
---|---|
TD-68162 | Flow parameters cannot be displayed or edited in the Transformer page and cannot be embedded in recipe steps. |
New Known Issues
None.
Release 8.11
...
Key Bug Fixes
None.
New Known Issues
Ticket | Description |
---|---|
TD-68162 | Flow parameters cannot be displayed or edited in the Transformer page and cannot be embedded in recipe steps. |
Release 8.10
November 22, 2021
...
Maximum permitted record length has been increased from 1 MB to 20 MB. For more information, see Working with JSON v2.
NOTE: This new default limit can be modified if you are experiencing performance issues. See Configure Application Limits.
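The 20 MB limit applies per record. As a minimal sketch (a hypothetical helper for this note; the file path and limit are illustrative assumptions, not product settings), you could pre-flight a JSON Lines file for records that would exceed it:

```python
# Hypothetical pre-flight check for oversized JSON Lines records;
# 20 MB mirrors the new default limit described above.
MAX_RECORD_BYTES = 20 * 1024 * 1024

def oversized_records(path: str, limit: int = MAX_RECORD_BYTES):
    """Yield (line_number, size_in_bytes) for each record over the limit."""
    with open(path, "rb") as f:
        for n, line in enumerate(f, start=1):
            if len(line) > limit:
                yield n, len(line)
```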
Publishing:
- Improvements to publishing of …
...
- You can view the list of collaborators and their corresponding avatars on shareable objects, such as Flows, Plans, and Connections pages.
- For more information, see Flows Page.
- For more information, see Connections Page.
- For more information, see Plans Page.
Sampling:
- Adjust the size of samples loaded in the browser for your current recipe to improve performance and address low-memory conditions. See Change Recipe Sample Size.
...
Example values in source CSV file:

```
"""My product""",In stock,"16,000",0.05
```

Note that the value "16,000" must be double-quoted, since the value contains a comma, which is the field delimiter.

Previously, these values appeared in the Transformer page in columns as the following:

c1 | c2 | c3 | c4 |
---|---|---|---|
"""My product""" | In stock | "16,000" | 0.05 |

As of this version, the application handles the values in a better manner when displaying them in the Transformer page:

c1 | c2 | c3 | c4 |
---|---|---|---|
"My product" | In stock | 16,000 | 0.05 |
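The new display corresponds to standard CSV unescaping (RFC 4180): surrounding quotes delimit a field, and a doubled quote inside a quoted field is one literal quote. Python's stdlib csv module, used here purely as an illustration of that convention, produces the same values:

```python
import csv
import io

source = '"""My product""",In stock,"16,000",0.05'
row = next(csv.reader(io.StringIO(source)))
# Field delimiters are removed; doubled quotes become one literal quote.
print(row)  # ['"My product"', 'In stock', '16,000', '0.05']
```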
- c1: Escaped values (triple double-quotes) in the source no longer render in the application as triple double-quotes; they are represented as quoted values.
- c3: Note that the double quotes in c3 have been stripped. Leading and trailing quotes are trimmed if the quotes are balanced within a cell.

NOTE: This change in behavior applies only to newly created imported datasets sourced from a CSV file. Existing imported datasets should not be affected. However, if a newly imported dataset is transformed by a previously existing recipe that compensated for the extra quotes in the Transformer page, the effects on output data could be unpredictable. These recipes and their steps should be reviewed.

This change does apply to any newly imported dataset sourced from CSV and may cause the data to change. For example, if you export an older flow and import it into a new workspace or project, this change in parsing behavior applies to the datasets that are newly created in the new environment. Recipes may require review upon import.

When results are generated in CSV, output files should continue to reflect the formatting of the source data before import. See above.

Tip: You can also choose the Include quotes option when creating a CSV output.

When profiling is enabled, values that appear in CSV as "" are now marked as missing.
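The Include quotes option mentioned in the tip above amounts to quoting every output field, with embedded quotes doubled per RFC 4180. A minimal sketch using Python's stdlib csv module (an illustration only, not the product's implementation):

```python
import csv
import io

buf = io.StringIO()
# QUOTE_ALL wraps every field in double quotes and doubles any
# embedded quote characters, per RFC 4180.
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(['"My product"', 'In stock', '16,000', '0.05'])
print(buf.getvalue().strip())
# """My product""","In stock","16,000","0.05"
```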
...
Key Bug Fixes
None.
New Known Issues
Ticket | Description |
---|---|
TD-63974 | In imported datasets sourced from CSV files, double quotes that are escaped with a backslash ( |