If you are a project owner or workspace administrator logging in for the first time, use these steps to set up the environment and invite other users to join.

Test Product by Running a Job

When you first log in to your project or workspace, you should be able to run a job immediately to confirm that the product is working properly. The goal of these steps is simply to verify that you can run a job.

Tip: Any user invited to the project should be able to complete these steps, including uploading files from their local environment to begin wrangling immediately.

Steps:

  1. In the left nav bar, click the Flows icon at the top.
  2. In the Flows page, you should see an Example Flows folder. 
    1. If you do, click it to open it.
    2. If you do not, you can: 
      1. Click the Library icon. Then, click Import Data. In the Import Data page, you should be able to select an example dataset. For more information, see Example Datasets.
      2. Upload a file. For more information, see Upload a File.
      3. Then, continue the process of adding the file to a flow and running a job from there.
  3. Select one of the flows.
  4. The flow opens in Flow View. Click the Plus icon next to one of the recipe objects in the flow canvas. Select Create Output to run.
  5. An output object is created. This output defines how job results are published. In this case, the default output is a CSV file in the default location. In the right panel, click Run.
  6. In the Run Job page, you can review the options. For simplicity, accept the defaults. Click Run.
  7. The job is queued for execution. The Job Details page permits you to track progress.
  8. When the job completes, click the Output destinations tab to review your outputs.

If you have successfully completed the above steps, the product is working for end-to-end execution of importing, transforming, and outputting your data. 
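If you later automate this run-and-track pattern (steps 6 through 8) against the product's REST APIs, the core loop is polling the job status until it leaves the queued and running states. The sketch below is illustrative only: the status names and the `fetch_status` callable are placeholders for real API calls, which vary by product edition.

```python
def poll_until_done(fetch_status, max_polls=10):
    """Call fetch_status() until the job leaves the queued/running states.

    fetch_status is a placeholder for a real API call that returns the
    job's current status string; the status names here are hypothetical.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status not in ("Queued", "Running"):
            return status
    return "TimedOut"

# Simulated status responses standing in for real HTTP calls:
responses = iter(["Queued", "Running", "Complete"])
print(poll_until_done(lambda: next(responses)))  # Complete
```

In a real script, `fetch_status` would issue an authenticated request to the job-status endpoint and include a short sleep between polls.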

Integrate the Product

Depending on the edition of the product that you licensed, there may be specific steps required to integrate the product with your environment:

Tip: Please complete the steps listed below for your product edition by following the documentation link. You can then return to complete the remaining steps on this page.

The key tasks for each product edition are listed below, with links to the detailed steps.

Cloud Dataprep:

  1. Enable the product in your project.
  2. Set up your storage bucket.
  3. Review IAM roles and permissions.
  4. Whitelist the IP address range for the product.

For detailed steps, see https://docs.trifacta.com/display/DP/Getting+Started+with+Cloud+Dataprep

Trifacta SaaS:

  1. Specify the authentication method.
  2. Connect to S3 buckets (if needed).
  3. Whitelist the IP address range for the product.

For detailed steps, see https://docs.trifacta.com/display/AWS/Getting+Started+with+Trifacta+SaaS

Self-installed editions:

  1. Install the product on an edge node of your cluster.
  2. Configure the product to integrate with the clustered running environment.
  3. Apply any additional configuration settings that are required.

For more information, please see the Install and Configure guides provided with your product.

Review Environment Settings

After you have deployed the product, you should review the environment settings. 

Steps:

  1. Log in to the product as an administrator.
  2. Select User menu > Admin console.
  3. Select Project Settings or Workspace Settings.

These settings define features and behaviors in the project or workspace. Key categories and settings:

API

These settings define whether users are permitted to create and use API access tokens, which allow for access to the REST APIs.

Tip: API access is required for developers who wish to build on the platform or users who wish to automate aspects of their data pipelines.
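Once a user has generated an access token, API calls typically pass it as a Bearer token in the Authorization header. The sketch below shows that pattern only; the base URL, endpoint path, and token value are placeholders, not the product's actual API surface.

```python
import urllib.request

# Hypothetical values: the real API base URL and token come from your
# deployment and from a token generated under these API settings.
API_BASE = "https://example.com/api/v4"
ACCESS_TOKEN = "your-access-token"

def build_request(path):
    """Build an authenticated GET request using a Bearer access token."""
    req = urllib.request.Request(API_BASE + path)
    req.add_header("Authorization", "Bearer " + ACCESS_TOKEN)
    return req

req = build_request("/flows")
print(req.get_full_url())               # https://example.com/api/v4/flows
print(req.get_header("Authorization"))  # Bearer your-access-token
```

Consult your edition's API reference for the actual endpoints and token format.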

Connectivity

These features can enable access to datastores and conversion features, including the use of custom SQL to create imported datasets.

Tip: If significant volumes of your data are hosted in relational sources, you should review these settings.

Flows, recipes, and plans

These settings enable features related to the development of flows, recipes, and plans.

Tip: By default, users are permitted to import, export, and share flows and plans, as well as create webhooks to deliver messages outside of the product. If these features need to be disabled, please review these settings.
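A webhook delivers its message as an HTTP POST to a URL that you host. As a minimal sketch of the receiving side, the helper below decodes a JSON body; the payload field names are hypothetical, since the real schema depends on how the webhook is configured.

```python
import json

def parse_webhook(body):
    """Decode a webhook POST body.

    The "status" field is a hypothetical example; the actual payload
    schema depends on your webhook configuration.
    """
    payload = json.loads(body or b"{}")
    # A production receiver would also verify a shared secret header.
    return {"job_status": payload.get("status", "unknown")}

print(parse_webhook(b'{"status": "Complete"}'))  # {'job_status': 'Complete'}
```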

Job execution

These features define aspects of how jobs are executed, both on the product's default in-memory running environment and on the available clustered running environment.

Tip: Most of these settings are advanced tuning properties; some may require enablement in your environment.

Publishing

These settings can be modified to define the output formats that the product is permitted to generate. Most output formats are enabled.

Tip: The default storage environment setting defines the base storage for the product. You should set this value when you begin using the product and avoid changing it after the product has been in use. For more information, see the links at the bottom of the page.

Experimental features

These features are early access features that may be modified or even removed at any time.

Tip: To begin, you should avoid enabling Experimental features until you are familiar with the product.


Invite Users

You can now invite users to your project or workspace. See Invite New Users.

Modify Roles

Each invited user is automatically assigned the default role. If needed, you can modify or add other roles to the user account. 

NOTE: Without modification, the default role assigned to users permits sufficient access to import, transform, and export data. Access to admin functions and other advanced features may be restricted.

Tip: Roles are additive. A user's effective privileges are the union of the privileges granted by all assigned roles.
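The additive model means that assigning an extra role can only grant privileges, never remove them. Illustratively, with made-up role names and privilege sets:

```python
# Illustrative only: these role names and privilege sets are made up
# and do not reflect the product's actual role definitions.
ROLE_PRIVILEGES = {
    "default": {"import", "transform", "export"},
    "admin": {"import", "transform", "export", "manage_users"},
}

def effective_privileges(assigned_roles):
    """Roles are additive: the result is the union across all assigned roles."""
    privileges = set()
    for role in assigned_roles:
        privileges |= ROLE_PRIVILEGES.get(role, set())
    return privileges

print(sorted(effective_privileges(["default", "admin"])))
# ['export', 'import', 'manage_users', 'transform']
```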

Steps:

  1. In the Users page, find the user to modify. Click the More (...) menu and select Edit.
  2. Select roles from the Roles drop-down.
  3. Then, click Edit user.

As needed, you can modify the privileges of existing roles or define new roles. Please see the links below.