Release 6.0.2
This release contains several bug fixes.
What's New
- Support for Cloudera 6.2. For more information, see System Requirements.
Changes to System Behavior
NOTE: As of Release 6.0, all new and existing customers must license, download, and install the latest version of the Tableau SDK onto the Trifacta node.
Upload:
- In previous releases, files that were uploaded to the Trifacta platform with an unsupported filename extension received a warning before upload.
- Beginning in this release, files with unsupported extensions are blocked from upload.
- You can change the list of supported file extensions. For more information, see Miscellaneous Configuration.
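If the allow-list of extensions is maintained in the platform configuration file, a change might look like the following minimal sketch. This is an illustration only: the configuration path and the `webapp.uploadFileExtensions` key are assumptions, not documented names; Miscellaneous Configuration describes the actual setting.

```python
import json

# Assumed default install location of the platform configuration file.
CONF_PATH = "/opt/trifacta/conf/trifacta-conf.json"

with open(CONF_PATH) as f:
    conf = json.load(f)

# "webapp.uploadFileExtensions" is a hypothetical key used for illustration.
extensions = conf.setdefault("webapp", {}).setdefault("uploadFileExtensions", [])
if "tsv" not in extensions:
    extensions.append("tsv")

with open(CONF_PATH, "w") as f:
    json.dump(conf, f, indent=2)
# Restart platform services for the change to take effect.
```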
Documentation:
- The Release 6.0.x documentation mistakenly included documentation for the API JobGroups Get Status v4 endpoint. This endpoint does not exist. For more information on the v4 equivalent, see Changes to the APIs.
Key Bug Fixes
Ticket | Description
---|---
TD-40471 | SAML auth: Logout functionality not working.
TD-39318 | Spark job fails with parameterized datasets sourced from Parquet files.
TD-39213 | Publishing to Hive table fails.
New Known Issues
None.
Release 6.0.1
This release features support for several new Hadoop distributions and includes numerous bug fixes.
What's New
Connectivity:
- Support for integration with CDH 5.16.
- Support for integration with CDH 6.1. Version-specific configuration is required.
NOTE: If you have upgraded to Cloudera 6.0.0 or later and are using EC2 role-based authentication to access AWS resources, you must change two platform configuration properties. For more information, see Configure for EC2 Role-Based Authentication.
- Support for integration with HDP 3.1. Version-specific configuration is required. See Supported Deployment Scenarios for Hortonworks.
- Support for Hive 3.0 on HDP 3.0 or HDP 3.1. Version-specific configuration is required. See Configure for Hive.
- Support for Spark 2.4.0.
NOTE: Some running environment distributions do not support Spark 2.4.0. For more information, see Configure for Spark.
- Support for integration with high availability for Hive.
NOTE: High availability for Hive is supported on HDP 2.6 and HDP 3.0 with Hive 2.x enabled. Other configurations are not currently supported. For more information, see Create Hive Connections.
Publishing:
- Support for automatic publishing of job metadata to Cloudera Navigator.
NOTE: For this release, only Cloudera 5.16 is supported.
API:
- Create, edit, and assign AWS configurations for individual users through APIs. See API Workflow - Manage AWS Configurations.
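A rough sketch of that workflow against the v4 REST endpoints is shown below. The `/v4/awsConfigs` path, the payload fields, and the token are assumptions for illustration; API Workflow - Manage AWS Configurations is the authoritative reference.

```python
import requests

BASE = "https://wrangler.example.com"              # assumed platform URL
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder access token

# Create an AWS configuration and assign it to a user (assumed payload fields).
resp = requests.post(
    f"{BASE}/v4/awsConfigs",
    headers=HEADERS,
    json={
        "credentialProvider": "temporary",  # role-based (temporary) credentials
        "role": "arn:aws:iam::123456789012:role/example-role",
        "defaultBucket": "example-bucket",
        "personId": 2,  # the user to whom this configuration is assigned
    },
)
resp.raise_for_status()
print(resp.json())
```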
Changes to System Behavior
Photon
In the application and documentation, the following changes have been applied.
Reference | Description | Old Run Job Page term | New Run Job Page term | Doc
---|---|---|---|---
Hadoop | Supported running environment on the Hadoop cluster | Run on Hadoop | Spark | Configure for Spark
Photon running environment | Supported running environment on the Trifacta node | Trifacta Server | Photon | Configure Photon Running Environment
Photon in-browser client | In-browser web client | n/a | n/a | Configure Photon Client
Key Bug Fixes
Ticket | Description
---|---
TD-39779 | MySQL JARs must be downloaded by the user.
TD-39694 | Tricheck returns status code 200, but there is no response. It does not work through the Admin Settings page.
TD-39455 | HDI 3.6 is not compatible with Guava 26.
TD-39086 | Hive ingest job fails on Microsoft Azure.
New Known Issues
Ticket | Description
---|---
TD-40299 | Cloudera Navigator integration cannot locate the database name for JDBC sources on Hive.
TD-40348 | When loading a recipe in an imported flow that references an imported Excel dataset, the Transformer page displays an "Input validation failed: (Cannot read property 'filter' of undefined)" error, and the screen is blank.
TD-39969 | On import, some Parquet files cannot be previewed and result in a blank screen in the Transformer page.
Release 6.0
This release of Trifacta Wrangler Enterprise introduces a broad set of new features and enhancements, which are described below.
NOTE: This release also announces the deprecation of several features, versions, and supported extensions. Please be sure to review Changes to System Behavior below.
What's New
NOTE: The PNaCl client for Google Chrome has been replaced by the WebAssembly client. This new client is now the default in use by the platform and is deployed to all clients through the browser. Please verify that all users in your environment are on Google Chrome 68+. For more information, see Desktop Requirements.
Wrangling:
- In the data grid, you can select multiple columns before receiving suggestions and performing transformations on them. For more information, see Data Grid Panel.
- New Selection Details panel enables selection of values and groups of values within a selected column. See Selection Details Panel.
- Copy and paste columns and column values through the column menus. See Copy and Paste Columns.
- Support for importing files in Parquet format. See Supported File Formats.
- Specify ranges of key values in your joins. See Configure Range Join.
Jobs:
- Review details and monitor the status of in-progress jobs through the new Job Details page. See Job Details Page.
- Filter list of jobs by source of job execution or by date range. See Jobs Page.
Connectivity:
- Publishing (writeback) is now supported for relational connections. This feature is enabled by default.
NOTE: After a connection has been enabled for publishing, you cannot disable publishing for that connection. Before you enable, please verify that all user accounts accessing databases of these types have appropriate permissions.
- Several connection types are natively supported for publishing to relational systems.
- Import folders of Microsoft Excel workbooks. See Import Excel Data.
- Support for integration with CDH 6.0. Version-specific configuration is required. See Supported Deployment Scenarios for Cloudera.
NOTE: If you have upgraded to Cloudera 6.0.0 or later and are using EC2 role-based authentication to access AWS resources, you must change two platform configuration properties. For more information, see Configure for EC2 Role-Based Authentication.
- Support for integration with HDP 3.0. Version-specific configuration is required. See Supported Deployment Scenarios for Hortonworks.
- Support for Hive 3.0 on HDP 3.0 only. Version-specific configuration is required. See Configure for Hive.
- Hive integration is now available when the backend datastore is S3. See Configure for Hive.
- Read Hive tables from AWS Glue Data Catalog (Beta). See Enable AWS Glue Access.
- New functions. See Changes to the Language.
- Track file-based lineage using `$filepath` and `$sourcerownumber` references. See Source Metadata References.
- In addition to directly imported files, the `$sourcerownumber` reference now works for converted files (such as Microsoft Excel workbooks) and for datasets with parameters. See Source Metadata References.
Workspace:
- Organize your flows into folders. See Flows Page.
Publishing:
- Users can be permitted to append to Hive tables when they do not have CREATE or DROP permissions on the schema.
NOTE: This feature must be enabled. See Configure for Hive.
Administration:
- New Workspace Settings page centralizes many of the most common admin settings. See Changes to System Behavior below.
- Download system logs through the Trifacta application. See Admin Settings Page.
Supportability:
- High availability for the Trifacta node is now generally available. See Install for High Availability.
Authentication:
- Integrate SSO authentication with enterprise LDAP-AD using platform-native LDAP support (Beta).
NOTE: In previous releases, LDAP-AD SSO utilized an Apache reverse proxy. While this method is still supported, it is likely to be deprecated in a future release. Please migrate to the platform-native SSO method. See Configure SSO for AD-LDAP.
- Support for SAML SSO authentication. See Configure SSO for SAML.
- Support for per-user authentication for AWS resources. See Configure for AWS.
- Support for Azure Databricks SSO/OAuth.
NOTE: If you integrate the platform with an Azure Databricks cluster and enable SSO for Azure, Azure Databricks is managed through SSO seamlessly. For more information, see Configure SSO for Azure AD.
API:
- Manage user access to APIs using renewable access tokens. For more information, see Changes to the APIs.
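A minimal sketch of the token workflow follows, assuming a v4 token endpoint of the kind described in Changes to the APIs. The `/v4/apiAccessTokens` path, payload, and `tokenValue` response field are assumptions for illustration, not a documented contract.

```python
import requests

BASE = "https://wrangler.example.com"  # assumed platform URL

# Request a renewable access token (assumed endpoint, payload, response field).
resp = requests.post(
    f"{BASE}/v4/apiAccessTokens",
    auth=("user@example.com", "password"),  # basic credentials to bootstrap
    json={"lifetimeSeconds": 86400, "description": "ci-automation"},
)
resp.raise_for_status()
token = resp.json()["tokenValue"]

# Use the token as a Bearer credential on subsequent API calls.
jobs = requests.get(
    f"{BASE}/v4/jobGroups",
    headers={"Authorization": f"Bearer {token}"},
)
print(jobs.status_code)
```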
Changes to System Behavior
Configuration:
To simplify configuration of the most common feature enablement settings, some settings have been migrated to the new Workspace Settings page. For more information, see Workspace Settings Page.
NOTE: Over subsequent releases, more settings will be migrated to the Workspace Settings page from the Admin Settings page and from trifacta-conf.json. See Platform Configuration Methods. See Admin Settings Page.
API:
NOTE: In the next release, the v3 versions of the API endpoints are scheduled for deprecation. Please migrate to the v4 endpoints. For more information, see Changes to the APIs.
CLI:
NOTE: The command line interface (CLI) tools are deprecated. Please migrate to the equivalent v4 API endpoints. For more information, see Changes to the APIs.
Changes to release numbering system:
In Release 5.0 and earlier, each release of Trifacta Wrangler Enterprise was assigned its own release number.
In Release 5.1, Trifacta began delivering releases on a monthly cadence. Beginning in this release, each monthly milestone receives a separate release number. For this release, the milestones are Release 5.6, Release 5.7, and Release 5.8. Release 5.9 is the generally available release of Trifacta Wrangler Enterprise.
This change in numbering scheme does not affect the scope and frequency of product releases.
Errata:
In prior releases, the product and documentation stated that the platform implemented a version of regular expressions based on Javascript syntax. This is incorrect.
The documentation has been corrected to describe the regular expression syntax that the platform actually implements.
NOTE: This is not a change in behavior. Only the documentation has been changed.
Key Bug Fixes
Ticket | Description
---|---
TD-36332 | Data grid can display wrong results if a sample is collected and the dataset is unioned.
TD-36192 | Canceling a step in the recipe panel can result in column menus disappearing in the data grid.
TD-35916 | Cannot log out via SSO.
TD-35899 | A deployment user can see all deployments in the instance.
TD-35780 | Upgrade: Duplicate metadata in separate publications causes DB migration failure.
TD-35644 | Extractpatterns with "HTTP Query strings" option doesn't work.
TD-35504 | Cancel job throws a 405 status code error. Clicking Yes repeatedly pops up the Cancel Job dialog.
TD-35486 | Spark jobs fail on LCM function that uses negative numbers as inputs.
TD-35483 | Differences in how the WEEKNUM function is calculated in the Spark and Photon running environments. For more information, see Changes to the Language.
TD-35481 | Upgrade script is malformed due to SplitRows not having a Load parent transform.
TD-35177 | Login screen pops up repeatedly when access permission is denied for a connection.
TD-27933 | For multi-file imports lacking a newline in the final record of a file, this final record may be merged with the first record of the next file and then dropped.
New Known Issues
Ticket | Description
---|---
TD-39513 | Import of a folder of Excel files as a parameterized dataset imports only the first file, and sampling may fail.
TD-39455 | HDI 3.6 is not compatible with Guava 26.
TD-39092 | For more information on these references, see Source Metadata References.
TD-39086 | Hive ingest job fails on Microsoft Azure.
TD-39053 | Cannot read datasets from Parquet files generated by Spark containing nested values.
TD-39052 | Signout using the reverse proxy method of SSO is not working after upgrade.
TD-38869 | Upload of Parquet files does not support nested values, which appear as null values in the Transformer page.
TD-37683 | Send a copy does not create independent sets of recipes and datasets in the new flow. If imported datasets are removed in the source flow, they disappear from the sent version.
TD-36145 | Spark running environment recognizes numeric values preceded by
TD-35867 | v3 publishing API fails when publishing to alternate S3 buckets.