After installation of the Trifacta platform software and databases in your Microsoft Azure infrastructure, please complete these steps to perform the basic integration between the Trifacta node and Azure resources like the backend storage layer and running environment cluster. 

NOTE: This section includes only basic configuration for required platform functions and integrations with Azure. Please use the links in this section to access additional details on these key features.

Tip: When you save changes from within the Trifacta platform, your configuration is automatically validated, and the platform is automatically restarted.


Configure in Azure

These steps require admin access to your Azure deployment.

Create registered application

To create an Azure Active Directory (AAD) application, please complete the following steps in the Azure console.

Steps:

  1. Create registered application:

    1. In the Azure console, navigate to Azure Active Directory > App Registrations.

    2. Create a New App. Name it trifacta.

      NOTE: Retain the Application ID and Directory ID for configuration in the Trifacta platform.

  2. Create a client secret:
    1. Navigate to Certificates & secrets.
    2. Create a new Client secret.

      NOTE: Retain the value of the Client secret for configuration in the Trifacta platform.

  3. Add API permissions:
    1. Navigate to API Permissions.
    2. Add Azure Key Vault with the user_impersonation permission.
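If you prefer to script these steps, the registration and client secret can also be created with the Azure CLI. The following is a sketch only: it assumes an authenticated session (az login) with rights to manage app registrations, uses placeholder values in angle brackets, and leaves the Azure Key Vault API permission to be added in the portal as described above.

  # Sketch: create the registered application and a client secret from the Azure CLI.
  az ad app create --display-name trifacta            # note the returned appId (Application ID)
  az ad sp create --id <application_id>               # create the service principal for the application
  az ad app credential reset --id <application_id>    # returns a client secret; retain its value
  az account show --query tenantId -o tsv             # the Directory (tenant) ID to retain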

For additional details, see Configure for Azure.

Please complete the following steps in the Azure portal to create a Key Vault and to associate it with the Trifacta registered application.

NOTE: A Key Vault is required for use with the Trifacta platform.

Create Key Vault in Azure

Steps:

  1. Log into the Azure portal.
  2. Go to: https://portal.azure.com/#create/Microsoft.KeyVault
  3. Complete the form for creating a new Key Vault resource:
    1. Name: Provide a reasonable name for the resource. Example:

      <clusterName>-<applicationName>-<group/organizationName>

      Or, you can use trifacta.

    2. Location: Pick the location used by the HDI cluster.
    3. For other fields, add appropriate information based on your enterprise's preferences.
  4. To create the resource, click Create.

    NOTE: Retain the DNS Name value for later use.
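The Key Vault can also be created from the Azure CLI. A minimal sketch, assuming an existing resource group (names and location are placeholders):

  # Sketch: create the Key Vault and retrieve its DNS name (vaultUri).
  az keyvault create --name trifacta --resource-group <resource_group> --location <hdi_cluster_location>
  az keyvault show --name trifacta --query properties.vaultUri -o tsv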

Enable Key Vault access for the Trifacta platform

In the Azure portal, you must assign an access policy that allows the application principal of the Trifacta registered application to access the Key Vault.

Steps:

  1. In the Azure portal, select the Key Vault you created. Then, select Access Policies.
  2. In the Access Policies window, click Add Access Policy.
  3. Select the following secret permissions (at a minimum):
    1. Get
    2. Set
    3. Delete
  4. Select the Trifacta application principal as the principal for the policy.
  5. Save your changes to assign the policy to that principal.

For additional details, see Configure Azure Key Vault.
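The same policy can be assigned from the Azure CLI. A sketch, assuming the Key Vault is named trifacta and <application_id> is the Application ID of the Trifacta registered application:

  # Sketch: grant the Trifacta application principal Get, Set, and Delete on secrets.
  az keyvault set-policy --name trifacta --spn <application_id> --secret-permissions get set delete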

Create or modify Azure backend datastore

In the Azure console, you must create or modify the backend datastore for use with the Trifacta platform. Supported datastores:

NOTE: You should review the limitations for your selected datastore before configuring the platform to use it. After the base storage layer has been defined in the platform, it cannot be modified.


  • ADLS Gen2: Supported for use with Azure Databricks clusters only. See Enable ADLS Gen2 Access.
  • ADLS Gen1: See Enable ADLS Gen1 Access.
  • WASB: Only the WASBS protocol is supported. See Enable WASB Access.


Create or modify running environment cluster

In the Azure console, you must create or modify the running environment where jobs are executed by the Trifacta platform. Supported running environments:

NOTE: You should review the limitations for your selected running environment before configuring the platform to use it.


  • Azure Databricks: See Configure for Azure Databricks.
  • HDI: See Configure for HDInsight.

Configure the Platform

Please complete the following sections as soon as you can access the Trifacta application.

Change admin password

As part of the install process, an admin user account is created.

NOTE: Some platform functions cannot be executed without an admin account. Your deployment should always have an admin account.


After the Trifacta software has been installed, the administrator of the system should immediately change the password for the admin account through the Trifacta application. If you do not know the admin account credentials, please contact Trifacta Support.

Steps:

  1. Log in to the application using the admin account.
  2. In the menu bar, click User menu > Preferences.
  3. Click Profile.
  4. Enter a new password, and click Save.
  5. Log out and log in again using the new password.

Review self-registration

By default, any visitor to the Login page can create an account in the Trifacta platform.

If the Trifacta platform is available on the public Internet or is otherwise vulnerable to unauthorized access, unauthorized users can register and use the product. If this level of access is unsatisfactory, you should disable self-registration.

Disabling self-registration means that a Trifacta administrator must enable all users. For more information, see Configure User Self-Registration.

Configure shared secret

To manage cookie signing, the platform deploys a shared secret, which is used to secure data transfer between the web client and the platform.

At install time, the platform inserts a default shared secret. The default 64-character shared secret for the platform is the same for all instances of the platform of the same version. This secret should not be used across multiple deployments of the platform.

NOTE: If your instance of the platform is available on the public Internet or if you have deployed multiple instances of the same release of the platform, cookies can become insecure when the same secret is shared across instances. You should change this value for each installation of the platform.

Please complete the following steps to change the shared secret.

Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Locate the following parameter:

    "sharedSecret": <64_character_value>
  3. Modify the current value. The new value can be any 64-character string; see the example after these steps for one way to generate a suitable value.
  4. Save your changes.
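As referenced in step 3, one way to generate a suitable random 64-character value is from the command line on the Trifacta node. A sketch:

  # Sketch: print a random 64-character alphanumeric string for use as sharedSecret.
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64; echo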

Configure the Platform for Azure

Please complete the following steps to configure the Trifacta platform and to integrate it with Azure resources.

Base platform configuration

Please complete the following configuration steps in the Trifacta® platform.

NOTE: If you are integrating with Azure Databricks and are using Managed Identities for authentication, please skip this section. That configuration is covered in a later step.

NOTE: Except as noted, these configuration steps are required for all Azure installs. These values must be extracted from the Azure portal.

Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Azure registered application values:

    "azure.applicationId": "<azure_application_id>",
    "azure.directoryId": "<azure_directory_id>",
    "azure.secret": "<azure_secret>",
    • azure.applicationId: Application ID for the Trifacta registered application that you created in the Azure console.
    • azure.directoryId: The directory ID for the Trifacta registered application.
    • azure.secret: The Client secret value for the Trifacta registered application.

  3. Configure Key Vault:

    "azure.keyVaultUrl": "<url_of_key_vault>",
    • azure.keyVaultUrl: URL of the Azure Key Vault that you created in the Azure console.
  4. Save your changes and restart the platform.
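If you did not retain these values from the Azure console, they can be looked up with the Azure CLI. A sketch, assuming the application and Key Vault names used earlier (adjust to your deployment); note that the client secret itself cannot be retrieved after creation, only reset:

  # Sketch: look up the values referenced above.
  az ad app list --display-name trifacta --query "[].appId" -o tsv     # azure.applicationId
  az account show --query tenantId -o tsv                              # azure.directoryId
  az keyvault show --name trifacta --query properties.vaultUri -o tsv  # azure.keyVaultUrl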

For additional details, see Configure for Azure and Configure Azure Key Vault.

Set base storage layer

The Trifacta platform supports integration with the following backend datastores on Azure.

  • ADLS Gen2
  • ADLS Gen1
  • WASB

ADLS Gen2

Please complete the following configuration steps in the Trifacta® platform.

NOTE: Integration with ADLS Gen2 is supported only on Azure Databricks.


Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Enable ADLS Gen2 as the base storage layer:

    "webapp.storageProtocol": "abfss",
    "hdfs.enabled": false,
    "hdfs.protocolOverride": "",
    • webapp.storageProtocol: Sets the base storage layer for the platform. Set this value to abfss.

      NOTE: After this parameter has been saved, you cannot modify it. You must re-install the platform to change it.

    • hdfs.enabled: For ADLS Gen2 access, set this value to false.
    • hdfs.protocolOverride: For ADLS Gen2 access, this parameter should be empty. It is ignored when the storage protocol is set to abfss.
  3. Configure ADLS Gen2 access mode. The following parameter must be set to system.

    "azure.adlsgen2.mode": "system",
  4. Set the protocol whitelist and base URIs for ADLS Gen2:

    "fileStorage.whitelist": ["abfss"],
    "fileStorage.defaultBaseUris": ["abfss://filesystem@storageaccount.dfs.core.windows.net/"],
    • fileStorage.whitelist: A comma-separated list of protocols that are permitted to read and write with ADLS Gen2 storage.

      NOTE: The protocol identifier "abfss" must be included in this list.

    • fileStorage.defaultBaseUris: For each supported protocol, this parameter must contain a top-level path to the location where Trifacta platform files can be stored. These files include uploads, samples, and temporary storage used during job execution.

      NOTE: A separate base URI is required for each supported protocol. You may have only one base URI for each protocol.

  5. Save your changes.
  6. The Java VFS service must be enabled for ADLS Gen2 access. For more information, see Configure Java VFS Service in the Configuration Guide.
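To confirm that the storage account referenced in the abfss base URI is an ADLS Gen2 account, you can verify that its hierarchical namespace is enabled. A sketch using the Azure CLI with a placeholder account name:

  # Sketch: verify that hierarchical namespace (ADLS Gen2) is enabled on the storage account.
  az storage account show --name <storage_account> --query isHnsEnabled -o tsv    # expected: true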

For additional details, see Enable ADLS Gen2 Access.

ADLS Gen1

ADLS Gen1 access leverages HDFS protocol and storage, so additional configuration is required.

Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Enable ADLS Gen1 as the base storage layer:

    "webapp.storageProtocol": "adl",
    "hdfs.enabled": false,
    • webapp.storageProtocol: Sets the base storage layer for the platform. Set this value to adl.

      NOTE: After this parameter has been saved, you cannot modify it. You must re-install the platform to change it.

    • hdfs.enabled: For ADLS Gen1 storage, set this value to false.
  3. These parameters specify the base location and protocol for storage. Only one datastore can be specified:

    "fileStorage": {
        "defaultBaseUris": [
          "<baseURIOfYourLocation>"
        ],
        "whitelist": ["adl"]
    }
    • fileStorage.defaultBaseUris: Set this value to the base location for your ADLS Gen1 storage area. Example:

      adl://<YOUR_STORE_NAME>.azuredatalakestore.net

    • fileStorage.whitelist: This list must include adl.

  4. Save your changes.
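To confirm the endpoint to use in the adl:// base URI, you can query the Data Lake Store account with the Azure CLI. A sketch with a placeholder store name:

  # Sketch: confirm the ADLS Gen1 endpoint used in fileStorage.defaultBaseUris.
  az dls account show --account <YOUR_STORE_NAME> --query endpoint -o tsv
  # Expected output: <YOUR_STORE_NAME>.azuredatalakestore.net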

For additional details, see Enable ADLS Gen1 Access.

WASB

Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Enable WASB as the base storage layer:

    "webapp.storageProtocol": "wasbs",
    "hdfs.enabled": false,
    • webapp.storageProtocol: Sets the base storage layer for the platform. Set this value to wasbs.

      NOTE: After this parameter has been saved, you cannot modify it. You must re-install the platform to change it. The wasb protocol is not supported.

    • hdfs.enabled: For WASB blob storage, set this value to false.
  3. Save your changes.
  4. In the following sections, you configure how the platform acquires the SAS token used for WASB access, from one of the following sources:
    1. From platform configuration
    2. From the Azure key vault
Configure SAS token for WASB

When integrating with WASB, the platform must be configured to use a SAS token to gain access to WASB resources. This token can be made available in either of the following ways, each of which requires separate configuration.

Via Trifacta platform configuration:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Locate and specify the following parameter:

    "azure.wasb.fetchSasTokensFromKeyVault": false,
    • azure.wasb.fetchSasTokensFromKeyVault: For acquiring the SAS token from platform configuration, set this value to false.
  3. Save your changes and restart the platform.

Via Azure Key Vault:

To require the Trifacta platform to acquire the SAS token from the Azure key vault, please complete the following configuration steps.

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2. Locate and specify the following parameter:

    "azure.wasb.fetchSasTokensFromKeyVault": true,
    • azure.wasb.fetchSasTokensFromKeyVault: For acquiring the SAS token from the key vault, set this value to true.
Define WASB stores
  1. To apply this configuration change, login as an administrator to the Trifacta node. Then, edit trifacta-conf.json. Some of these settings may not be available through the Admin Settings Page. For more information, see Platform Configuration Methods.
  2. Locate the azure.wasb.stores configuration block.

  3. Apply the appropriate configuration as specified below.

    Tip: The default container must be specified as the first set of elements in the array. All containers listed after the first one are treated as extra stores.

    "azure.wasb.stores": 
        [
         {
          "sasToken": "<DEFAULT_VALUE1_HERE>",
          "keyVaultSasTokenSecretName": "<DEFAULT_VALUE1_HERE>",
          "container": "<DEFAULT_VALUE1_HERE>",
          "blobHost": "<DEFAULT_VALUE1_HERE>"
         },
         {
          "sasToken": "<VALUE2_HERE>",
          "keyVaultSasTokenSecretName": "<VALUE2_HERE>",
          "container": "<VALUE2_HERE>",
          "blobHost": "<VALUE2_HERE>"
         }
        ]
    },
    • sasToken:

      If the SAS token is acquired from platform configuration: Set this value to the SAS token to use, if applicable. Example value:

      ?sv=2019-02-02&ss=bfqt&srt=sco&sp=rwdlacup&se=2022-02-13T00:00:00Z&st=2020-02-13T00:00:00Z&spr=https&sig=<redacted>

      See below for the command to execute to generate a SAS token.

      If the SAS token is acquired from the Azure Key Vault: Set this value to an empty string.

      NOTE: Do not delete the entire line. Leave the value empty.

    • keyVaultSasTokenSecretName:

      If the SAS token is acquired from the Azure Key Vault: Set this value to the secret name of the SAS token in the Azure key vault to use for the specified blob host and container. If needed, you can generate and apply a per-container SAS token for use in this field for this specific store. Details are below.

      If the SAS token is acquired from platform configuration: Set this value to an empty string.

      NOTE: Do not delete the entire line. Leave the value empty.

    • container: Apply the name of the WASB container.

      NOTE: If you are specifying different blob host and container combinations for your extra stores, you must create a new Key Vault store. See above for details.

    • blobHost: Specify the blob host of the container. Example value:

      storage-account.blob.core.windows.net

      NOTE: If you are specifying different blob host and container combinations for your extra stores, you must create a new Key Vault store. See above for details.

  4. Save your changes and restart the platform.
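The SAS token referenced in the sasToken description above can be generated in several ways. One sketch uses the Azure CLI to create an account-level SAS similar to the example value shown (account name, key, and expiry are placeholders):

  # Sketch: generate an account-level SAS token for WASB access.
  az storage account generate-sas \
    --account-name <storage_account> \
    --account-key "<storage_account_key>" \
    --services bfqt --resource-types sco \
    --permissions rwdlacup \
    --expiry 2022-02-13T00:00Z \
    --https-only
  # The example sasToken value above begins with "?"; the CLI output omits that prefix.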

For additional details, see Enable WASB Access.

Checkpoint: At this point, you should be able to load data from your backend datastore, if data is available. You can try to run a small job on Photon, which is native to the Trifacta node. You cannot yet run jobs on an integrated cluster.


Integrate with running environment

The Trifacta platform can run jobs on the following running environments.

NOTE: You may integrate with only one of these environments.

Base configuration for Azure running environments

The following parameters should be configured for all Azure running environments.

Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
  2.  Parameters:

    "webapp.runInTrifactaServer": true,
    "webapp.runinEMR": false,
    "webapp.runInDataflow": false,
    • webapp.runInTrifactaServer: When set to true, the platform recommends and can run smaller jobs on the Trifacta node, which uses the embedded Photon running environment.

      Tip: Unless otherwise instructed, the Photon running environment should be enabled.

    • webapp.runinEMR: For Azure, set this value to false.
    • webapp.runInDataflow: For Azure, set this value to false.
  3. Save your changes.

Azure Databricks

The Trifacta platform can be configured to integrate with supported versions of Azure Databricks clusters to run jobs in Spark. 

NOTE: Before you attempt to integrate, you should review the limitations around this integration. For more information, see Configure for Azure Databricks.

Steps:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.

  2. Configure the following parameters to enable job execution on the specified Azure Databricks cluster:

    "webapp.runInDatabricks": true,
    "webapp.runWithSparkSubmit": false,
    • webapp.runInDatabricks: Defines if the platform runs jobs in Azure Databricks. Set this value to true.
    • webapp.runWithSparkSubmit: For all Azure Databricks deployments, this value should be set to false.
  3. Configure the following Azure Databricks-specific parameters:

    "databricks.serviceUrl": "<url_to_databricks_service>",
    • databricks.serviceUrl: URL to the Azure Databricks Service where Spark jobs will be run. Example: https://westus2.azuredatabricks.net

      NOTE: If you are using instance pooling on the cluster, additional configuration is required. See Configure for Azure Databricks.

  4. Save your changes and restart the platform.
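To confirm that the databricks.serviceUrl value is reachable from the Trifacta node, you can call the Databricks REST API directly. A sketch, assuming a Databricks personal access token in the DATABRICKS_TOKEN environment variable and the example URL above:

  # Sketch: verify connectivity to the Azure Databricks service from the Trifacta node.
  curl -s -H "Authorization: Bearer $DATABRICKS_TOKEN" https://westus2.azuredatabricks.net/api/2.0/clusters/list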

For additional details, see Configure for Azure Databricks.

HDInsight

The Trifacta platform can be configured to integrate with supported versions of HDInsight clusters to run jobs in Spark. 

NOTE: Before you attempt to integrate, you should review the limitations around this integration. For more information, see Configure for HDInsight.

Specify running environment options:

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.

  2. Configure the following parameters to enable job execution on the specified HDI cluster:

    "webapp.runInDatabricks": false,
    "webapp.runWithSparkSubmit": true,
    • webapp.runInDatabricks: Defines if the platform runs jobs in Azure Databricks. Set this value to false.
    • webapp.runWithSparkSubmit: For HDI deployments, this value should be set to true.

Specify Trifacta user:

Set the Hadoop username for the Trifacta platform to use for executing jobs [hadoop.user (default=trifacta)]:  

"hdfs.username": "[hadoop.user]",

Specify location of client distribution bundle JAR:

The Trifacta platform ships with client bundles supporting a number of major Hadoop distributions. You must configure the JAR file for the distribution to use. These distributions are stored in the following directory:

/trifacta/hadoop-deps

Configure the bundle distribution property (hadoopBundleJar):

  "hadoopBundleJar": "hadoop-deps/hdp-2.6/build/libs/hdp-2.6-bundle.jar"

Configure component settings:

For each of the following components, please explicitly set the following settings.

  1. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.

  2. Configure Batch Job Runner:

      "batch-job-runner": {
       "autoRestart": true,
        ...
        "classpath": "%(topOfTree)s/services/batch-job-runner/build/install/batch-job-runner/batch-job-runner.jar:%(topOfTree)s/services/batch-job-runner/build/install/batch-job-runner/lib/*:%(topOfTree)s/conf/hadoop-site:%(topOfTree)s/%(hadoopBundleJar)"
      },
  3. Configure the following environment variables:

    "env.PATH": "${HOME}/bin:$PATH:/usr/local/bin:/usr/lib/zookeeper/bin",
    "env.TRIFACTA_CONF": "/opt/trifacta/conf"
    "env.JAVA_HOME": "/usr/lib/jvm/java-1.8.0-openjdk-amd64",
  4. Configure the following properties for various Trifacta components:

      "ml-service": {
       "autoRestart": true
      },
      "monitor": {
       "autoRestart": true,
        ...
       "port": <your_cluster_monitor_port>
      },
      "proxy": {
       "autoRestart": true
      },
      "udf-service": {
       "autoRestart": true
      },
      "webapp": {
        "autoRestart": true
      },
  5. Disable S3 access:

    "aws.s3.enabled": false,
  6. Configure the following Spark Job Service properties:

    "spark-job-service.classpath": "%(topOfTree)s/services/spark-job-server/server/build/install/server/lib/*:%(topOfTree)s/conf/hadoop-site/:%(sparkBundleJar)s:%(topOfTree)s/%(hadoopBundleJar)s",
    "spark-job-service.env.SPARK_DIST_CLASSPATH": "/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-mapreduce-client/*",
  7. Save your changes.

For additional details, see Configure for HDInsight.

Checkpoint: At this point, you should be able to load data from your backend datastore and run jobs on an integrated cluster.

Configure platform authentication

The Trifacta platform supports the following methods of authentication when hosted in Azure.

Integrate with Azure AD SSO

The platform can be configured to integrate with your enterprise's Azure Active Directory provider. For more information, see Configure SSO for Azure AD.

Non-SSO authentication

If you are not applying your enterprise SSO authentication to the Trifacta platform, platform users must be created and managed through the application. 

Self-managed:

Users can be permitted to self-register their accounts and manage their password reset requests.

NOTE: Self-created accounts are permitted to import data, generate samples, run jobs, and generate and download results. Admin roles must be assigned manually through the application.

Admin-managed:

If users are not permitted to create their accounts, an admin must do so.

Checkpoint: Users who are authenticated or who have been provisioned user accounts should be able to log in to the Trifacta application and begin using the product.

Verify Operations

NOTE: You can try to verify operations using the Trifacta Photon running environment at this time.

 

Prepare Your Sample Dataset

To complete this test, you should locate or create a simple dataset. Your dataset should be created in the format that you wish to test.

Tip: The simplest way to test is to create a two-column CSV file with at least 25 non-empty rows of data. This data can be uploaded through the application.

Characteristics:

  • Two or more columns. 
  • If there are specific data types that you would like to test, please be sure to include them in the dataset.
  • A minimum of 25 rows is required for best results of type inference.
  • Ideally, your dataset is a single file or sheet. 
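If you need to create a test file, the following sketch generates a two-column CSV with 30 data rows (the filename is arbitrary):

  # Sketch: generate a small two-column CSV for verification.
  echo "id,value" > test-dataset.csv
  for i in $(seq 1 30); do echo "$i,row_$i" >> test-dataset.csv; done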


Store Your Dataset

If you are testing an integration, you should store your dataset in the datastore with which the product is integrated.

Tip: Uploading datasets is always available as a means of importing datasets.

 

  • You may need to create a connection between the platform and the datastore.
  • Read and write permissions must be enabled for the connecting user to the datastore.

Verification Steps

Steps:

  1. Log in to the application. See Login.

  2. In the application menu bar, click Library.
  3. Click Import Data. See Import Data Page.
    1. Select the connection where the dataset is stored. For datasets stored on your local desktop, click Upload.
    2. Select the dataset.
    3. In the right panel, click the Add Dataset to a Flow checkbox. Enter a name for the new flow.
    4. Click Import and Add to Flow.

  4. In the left menu bar, click the Flows icon. In the Flows page, open the flow you just created. See Flows Page.
  5. In the Flows page, click the dataset you just imported. Click Add new Recipe.
  6. Select the recipe. Click Edit Recipe.
  7. The initial sample of the dataset is opened in the Transformer page, where you can edit your recipe to transform the dataset.
    1. In the Transformer page, some steps are automatically added to the recipe for you. So, you can run the job immediately.
    2. You can add additional steps if desired. See Transformer Page.
  8. Click Run Job.
    1. If options are presented, select the defaults.

    2. To generate results in other formats or output locations, click Add Publishing Destination. Configure the output formats and locations. 
    3. To test dataset profiling, click the Profile Results checkbox. Note that profiling runs as a separate job and may take considerably longer. 
    4. See Run Job Page.

  9. When the job completes, you should see a success message under the Jobs tab in the Flow View page. 
    1. Troubleshooting: Either the Transform job or the Profiling job may fail. To isolate the problem, try re-running the job with the failed job type deselected, or run it on a different running environment (if available). You can also download the log files to try to identify the problem. See Job Details Page.
  10. Click View Results from the context menu for the job listing. In the Job Details page, you can see a visual profile of the generated results. See Job Details Page.
  11. In the Output Destinations tab, click a link to download the results to your local desktop. 
  12. Load these results into a local application to verify that the content looks ok.

Checkpoint: You have verified importing from the selected datastore and transforming a dataset. If your job was successfully executed, you have verified that the product is connected to the job running environment and can write results to the defined output location. Optionally, you may have tested profiling of job results. If all of the above tasks completed, the product is operational end-to-end.

Documentation

You can access complete product documentation online and in PDF format. From within the product, select Help menu > Documentation.

Next Steps

The following install and configuration topics were not covered in this workflow. If these features apply, please reference the following topics in the Configuration Guide for more information.

  • User Access: You can enable self-service user registration or create users through the admin account. See Required Platform Configuration.
  • Relational Connections: The platform can integrate with a variety of relational datastores. See Create Encryption Key File and Enable Relational Connections.
  • Compressed Clusters: The platform can integrate with some compressed running environments. See Enable Integration with Compressed Clusters.
  • High Availability: The platform can integrate with a highly available cluster; the Trifacta node can also be configured to use other nodes in case of a failure. See Enable Integration with Cluster High Availability and Configure for High Availability.
  • Features: Some features must be enabled and can be configured through platform configuration. See Configure Features and, for feature flags, Miscellaneous Configuration.
  • Services: Some platform services support additional configuration options. See Configure Services.


