
As of Release 6.4, the Alteryx command line interface (CLI) tools have been deprecated, and you must migrate any usage of the CLI to the REST APIs. This content is valid as of Release 6.0.2 and is intended to assist in that migration. For more information, see CLI Migration to APIs.

Support for the Alteryx Command Line Interface (CLI) has been deprecated. This means:

  • A version of the CLI that is compatible with this release is no longer available.
  • Older versions of the CLI do not work with the new version of the platform.

Before you upgrade to the next  Designer Cloud Enterprise Edition release, you must migrate any scripts or other automation projects that currently use the CLI to use the v4 versions of the APIs. This section provides information on how to manage that migration. 

General Differences

API authentication

In CLI usage, you pass a username and password with each command.

In API usage, you must pass some form of authentication as a header in each request. 

Tip: The recommended method is to create an API access token for the user account that is to be accessing the APIs. This feature may need to be enabled in your instance of the platform. For more information, see Enable API Access Tokens.

For more information, see API Authentication.
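For illustration, here is a minimal sketch in Python using the requests library, assuming token-based authentication with an Authorization: Bearer header; the host, port, and token values are placeholders for your instance, and the exact header format should be confirmed in API Authentication.

# Sketch: pass an API access token as a header on every request.
import requests

BASE_URL = "http://example.com:3005"                 # placeholder platform host and port
TOKEN = "<api_access_token>"                          # token generated for the authenticating user

HEADERS = {
    "Authorization": "Bearer " + TOKEN,               # assumed token header format
    "Content-Type": "application/json",
}

# Example request: list users (see the people endpoint later in this section).
resp = requests.get(BASE_URL + "/v4/people", headers=HEADERS)
resp.raise_for_status()
print(resp.json())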

Terminology

Depending on the version you are using, please use the following mapping: 

CLI/UI term | API term | Notes
Connection | Connection |
Job | JobGroup | In the application, a job that you launch is composed of one or more sub-jobs, such as ingest, profiling, transformation, or sampling. In the APIs, a job that you launch is referenced as a jobGroup.
Script/Recipe | WrangledDataset | Depending on your previous version of the platform, the object may be referenced as a script or a recipe. In the APIs, the object is referenced by its internal platform name: wrangledDataset.
User | User | The API endpoint is people.

Use internal identifiers

Parameters passed to the CLI are often user-friendly text values. The CLI tool then queries the appropriate REST API endpoint and converts those values to internal identifiers.

When using the APIs, you must reference the internal identifiers directly.

Below is some information on how you can acquire the appropriate internal identifiers for each type of operation supported by the CLI.

Object Identifiers

Get primary object identifiers

For each CLI command, there is an associated object identifier, which is used to uniquely reference the object. To reference the object through the APIs, you must use the API unique id.

Tip: The JSON response from the listed APIs may contain multiple id values. You may find it easier to use the secondary IDs to locate each item.

NOTE: Each API endpoint returns only the objects to which the authenticating user has access. If other users have personal objects that they need to migrate, they must provide access to them to the authenticating user.

CLI object | CLI unique id | CLI secondary id | API endpoint | API secondary id | API unique id | Notes
Connection | conn_id | conn_name | API Connections Get List v4 | name | id |
Job | job_id | | API JobGroups Get List v4 | | id | This endpoint gets the list of jobs that have been launched.
script.cli | n/a. See below. | Open the recipe in the Transformer page to acquire the wrangledDataset id. See "Important notes on CLI packages" below. | API WrangledDatasets Get List v4 | | id | This endpoint gets the list of available wrangled datasets (recipes), which are required for launching a new job. The endpoint for launching a job is API JobGroups Create v4.
User | username | username | API People Get List v4 | email | id |

Example - Get User Id

The following example steps through the process of acquiring user ids so that you can use the APIs. 

CLI - Get list of usernames:

The CLI references users via their platform usernames.

  • If your CLI scripts contain references to individual users, search them for:

    --user_name <user_email_address>
  • If you want to acquire the list of all available usernames, it's easier to do that via the APIs.

API - Get list of users:

Use the following API endpoint to get the list of all users, including deleted and disabled users. 

Endpoint: http://example.com:3005/v4/people
Authentication: Required
Method: GET
Request Body: None.
Response Status Code: 200 - OK
Response Body: Contains JSON representation of each user in the system.

Parsing the JSON: 

In the JSON response body, here are the key values for each user entry:

Object | Description
email | This value maps to the username value in your CLI scripts.
id | Unique internal identifier that you can use in other people endpoints. Tip: You must map each email address to its corresponding id value.
isAdmin | This value is true if the user is an Alteryx administrator.
isDisabled | If true, the user is disabled and cannot use the platform.
state | If active, the user is a currently active user. Users who are suspended or deleted cannot access the platform.
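As a sketch of this mapping (reusing the placeholder host and token header from the authentication example above), the following Python snippet lists users and builds a dictionary from email address to internal id. The handling of a possible "data" wrapper in the response is an assumption; check the actual response shape for your release.

# Sketch: build an email -> id lookup from the /v4/people response.
import requests

BASE_URL = "http://example.com:3005"                          # placeholder host
HEADERS = {"Authorization": "Bearer <api_access_token>"}      # assumed token header

resp = requests.get(BASE_URL + "/v4/people", headers=HEADERS)
resp.raise_for_status()
people = resp.json()

# Depending on the release, the list may be returned directly or wrapped in a "data" key.
if isinstance(people, dict):
    people = people.get("data", [])

email_to_id = {person["email"]: person["id"] for person in people}
print(email_to_id)    # for example: {"joe@example.com": 4, ...}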

Important notes on CLI packages

Unlike connection, job, or user objects, a CLI script package does not contain any references to platform objects by design. These independent, self-contained objects can be used to run a script snapshot as a job at any time.

NOTE: When running jobs via the CLI, you are executing against a static recipe and other configuration files on your local desktop. When you run via the APIs, you are executing against the current state of the recipe object. So, if it is important that you execute your jobs against a read-only version of your recipe, you should create copies of your flows before you run the job.

After download, however, the script package is no longer aware of any changes that have occurred to the source objects on the platform, which has the following implications:

  1. If source objects, such as the source recipe, have changed, those changes are not present in the CLI package.
    1. The above does not apply to data sources. In the downloaded CLI package, sources are referenced by URL, which means that they should be using the latest data. 
    2. If the data source URL has changed, however, that is not reflected in any previously downloaded CLI packages. 
  2. There is no object identifier in the CLI that directly corresponds to a unique identifier in the platform. 

If you download the CLI package each time that you wish to run a job:

  • You can acquire the recipe identifier from the Flow View page or Transformer page where you download the CLI package. 
  • In the Flow View page, select the recipe icon. The URL is the following:

    http://example.com:3005/flows/11?recipe=39&tab=recipe

    The recipe id is 39.

  • In the Transformer page, the URL is the following:

    http://example.com:3005/data/11/39

    The recipe id is 39.

  • This value corresponds to the value to look for in the API WrangledDatasets Get List v4 output.
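As an illustration, the following sketch (placeholder host and token; the "data" wrapper handling is an assumption) lists the wrangled datasets and checks that the recipe id taken from the URL, 39 in this example, appears in the output:

# Sketch: confirm that the id from the Flow View or Transformer page URL (39 here)
# appears in the wrangled datasets list returned by the API.
import requests

BASE_URL = "http://example.com:3005"                          # placeholder host
HEADERS = {"Authorization": "Bearer <api_access_token>"}      # assumed token header
recipe_id = 39                                                # from ?recipe=39 or /data/11/39

resp = requests.get(BASE_URL + "/v4/wrangledDatasets", headers=HEADERS)
resp.raise_for_status()
datasets = resp.json()
if isinstance(datasets, dict):
    datasets = datasets.get("data", [])

match = next((d for d in datasets if d.get("id") == recipe_id), None)
print(match)    # the wrangledDataset entry to reference when launching a job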

If you wish to use the latest version of the recipe for running a job:

  • Use the Transformer page. See previous.

If you have downloaded and saved off a set of CLI packages for execution as recipe snapshots:

  • This use case may be problematic, as there may be differences between the platform version of the recipe and the version in the CLI package. Currently, the platform does not support importing individual recipes or CLI packages. 

Options:

  1. Make a copy of the closest approximation to your downloaded recipe. Modify the recipe steps in the application to change the copy to match your local CLI package. 
  2. Create a new flow and rebuild the individual steps in the recipe to match your local CLI package.

Reference - CLI for Jobs

In this section, you can review the commands available in the CLI, followed by their equivalent commands using the v4 APIs. 

CLI Docs: CLI for Jobs

Run Job

You can issue commands to the CLI to execute jobs using the local package downloaded from the Recipe panel.

NOTE: When you run a job using the CLI, you are executing against a snapshot of a recipe at the moment in time when the package was downloaded. Please be sure that you are aware of the Important notes on CLI packages in the previous section.

Below are the three files in the package and their API equivalents:

File | Description | API equivalent
script.cli | A CLI-only version of the recipe to execute. | The APIs reference the latest definition of the recipe through the wrangledDataset object. See API WrangledDatasets Get v4.
datasources.tsv | A CLI-only set of links to the data sources used to execute the recipe. | The APIs reference the latest saved version of any data source using the importedDataset object. When running a job, the data sources referenced in the wrangledDataset object are automatically pulled into job execution.
publishopts.json | A CLI-only set of JSON definitions of the outputs that are generated when a job is executed. | If these outputs are part of the output definitions for the recipe in Flow View, they are automatically generated as part of running the job (see Flow View Page). If these outputs are overrides to the Flow View definitions, you can insert them as writesettings objects in the request body when you launch the job. An example is provided below.

CLI example:

./trifacta_cli.py run_job --user_name <trifacta_user> --password <trifacta_password> --job_type spark 
--output_format json --data redshift-test/datasources.tsv --script redshift-test/script.cli 
--cli_output_path ./job_info.out --profiler on --output_path hdfs://localhost:8020/trifacta/queryResults/foo@trifacta.com/MyDataset/42/cleaned_table_1.json

API v4 example - REST client: 

NOTE: Inside the platform, the job identifier is a reference to the jobGroup, which is the collection of sub-jobs for a specified job. Sub-job types include sampling, ingestion, transformation, and profiling. Collectively, these appear under a single job identifier in the Designer Cloud application, and the same value is used as the jobGroup id in the APIs.

Default settings: After you have captured the wrangledDataset identifier, you can launch a new job using default settings:

Endpoint: http://localhost:3005/v4/jobGroups
Authentication: Required
Method: POST
Request Body:
{
  "wrangledDataset": {
    "id": <wrangledDatasetId>
  }
}
Response Status Code: 201 - Created
Response Body: Job group definition

NOTE: A job group is composed of one or more sub-jobs for sampling, ingestion, transformation, and profiling, where applicable. You can append ?embed=jobs to include sub-job information in the response.
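A minimal Python sketch of the same request (placeholder host, token, and wrangledDataset id; error handling omitted) might look like the following. The returned object includes the id of the new jobGroup, which you use later to check status or publish results.

# Sketch: launch a job for a wrangledDataset using default settings.
import requests

BASE_URL = "http://localhost:3005"
HEADERS = {"Authorization": "Bearer <api_access_token>",      # assumed token header
           "Content-Type": "application/json"}

payload = {"wrangledDataset": {"id": 39}}                     # wrangledDataset id acquired earlier

resp = requests.post(BASE_URL + "/v4/jobGroups", headers=HEADERS, json=payload)
resp.raise_for_status()                                        # expect 201 - Created
job_group = resp.json()
print(job_group["id"])                                         # jobGroup id for later status and publish calls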


Specify job overrides:
The above request contains only the wrangledDataset identifier. All default output settings are used. 

If needed, you can override these default settings by specifying values as part of the request body. In the following example, the relevant parameters from the CLI have been added as elements of the JSON body of the request.

Through the APIs, you can also override the default files, formats, and locations where you output results in the writesettings block. 

Endpoint: http://localhost:3005/v4/jobGroups
Authentication: Required
Method: POST
Request Body:
{
  "wrangledDataset": {
    "id": <wrangled_dataset_id>
  },
  "overrides": {
    "execution": "spark",
    "profiler": true,
    "writesettings": [
      {
        "path": "hdfs://hadoop:50070/trifacta/queryResults/foo@trifacta.com/MyDataset/42/cleaned_table_1.json",
        "action": "create",
        "format": "json",
        "compression": "none",
        "header": false,
        "asSingleFile": false
      }
    ]
  },
  "ranfrom": "cli"
}
Response Status Code: 201 - Created
Response Body: Job group definition

NOTE: A job group is composed of one or more sub-jobs for sampling, ingestion, transformation, and profiling, where applicable. You can append ?embed=jobs to include sub-job information in the response.


Reference Docs:

See API JobGroups Create v4.

File Publishing Options

You can specify publication options as part of your run_job command. In the following, a single CSV file with headers is written to a new file with each job execution.

CLI example (all one command):

./trifacta_cli.py run_job --user_name <trifacta_user> --password <trifacta_password> --job_type spark 
--output_format csv --data redshift-test/datasources.tsv --script redshift-test/script.cli 
--publish_action create --header true --single_file true
--cli_output_path ./job_info.out --profiler on --output_path hdfs://localhost:8020/trifacta/queryResults/foo@trifacta.com/MyDataset/43/cleaned_table_1.csv

API v4 example - REST client:

For more information, see API Workflow - Manage Outputs.
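As a sketch of that workflow, the CSV publishing options from the CLI example above (create action, header, single file) map to a writesettings entry in the jobGroups request body. The host, token, and wrangledDataset id below are placeholders:

# Sketch: API equivalent of --output_format csv --publish_action create
#         --header true --single_file true from the CLI example above.
import requests

BASE_URL = "http://localhost:3005"
HEADERS = {"Authorization": "Bearer <api_access_token>",
           "Content-Type": "application/json"}

payload = {
    "wrangledDataset": {"id": 39},                # wrangledDataset id for the recipe
    "overrides": {
        "execution": "spark",
        "profiler": True,
        "writesettings": [
            {
                "path": "hdfs://localhost:8020/trifacta/queryResults/foo@trifacta.com/MyDataset/43/cleaned_table_1.csv",
                "action": "create",               # --publish_action create
                "format": "csv",                  # --output_format csv
                "header": True,                   # --header true
                "asSingleFile": True              # --single_file true
            }
        ]
    }
}

resp = requests.post(BASE_URL + "/v4/jobGroups", headers=HEADERS, json=payload)
resp.raise_for_status()                            # expect 201 - Created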

Reference Docs:

See API WriteSettings Create v4.

Get Job Status

After you queue a job through the CLI, you can review the status of the job through the application or through the CLI. 

Tip: You can acquire the job ID through the application as needed. For example, at some point in the future, you might decide to publish to Hive the results from a job you executed two weeks ago. It might be easiest to retrieve this job identifier from the Dataset Details page. See Dataset Details Page.

CLI example:

./trifacta_cli.py get_job_status --user_name <trifacta_user> --password <trifacta_password> --job_id 42 
--cli_output_path ./job_info.out

API v4 example - REST client:

Using the jobGroup identifier value, you can query the status of any job that has been launched.

Endpoint: http://localhost:3005/v4/jobGroups/42/status
Authentication: Required
Method: GET
Request Body: None.
Response Status Code: 200 - OK
Response Body: The response includes a status field with the current status of the job. See the docs below for values.

Reference Docs:

See API JobGroups Get v4.
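For example, a sketch that polls the status endpoint until the job reaches a terminal state (placeholder host and token; the exact set and shape of status values may vary by release, so check API JobGroups Get v4):

# Sketch: poll the jobGroup status until the job finishes.
import time
import requests

BASE_URL = "http://localhost:3005"
HEADERS = {"Authorization": "Bearer <api_access_token>"}      # assumed token header
job_group_id = 42

while True:
    resp = requests.get(BASE_URL + "/v4/jobGroups/{}/status".format(job_group_id),
                        headers=HEADERS)
    resp.raise_for_status()
    body = resp.json()
    # The response includes the current status of the job; depending on the release,
    # it may be a bare value or an object with a status field.
    status = body.get("status") if isinstance(body, dict) else body
    print(status)
    if status in ("Complete", "Failed", "Canceled"):           # assumed terminal values
        break
    time.sleep(30)                                             # wait before polling again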

Publish

After a job has successfully completed, you can publish the results to another datastore with which the platform is integrated.

CLI example:


The following command publishes the results of jobId 42 through connectionId 1 to the dev database. Let's assume that this is a Hive database.

./trifacta_cli.py publish --user_name <trifacta_user> --password <trifacta_password> --job_id 42
--database dev --table table_job_42 --conn_name 1 --publish_format avro
--cli_output_path ./publish_info.out

API v4 example - REST client:

You can publish the results for a specified jobId to an output that is defined through properties in the request (see below).

Endpoint: http://localhost:3005/v4/jobGroups/42/publish
Authentication: Required
Method: PUT
Request Body:
{  "connection": {
    "id": 1
  },
  "path": ["dev"],
  "table": "table_job_42",
  "action": "create",
  "inputFormat": "avro",
  "flowNodeId": 27
}
Response Status Code: 200 - OK
Response Body:

{
  "jobgroupId": 42,
  "reason": "JobStarted",
  "sessionId": "f6c5f350-2102-11e9-bb80-9faf7b15f235"
}

Reference Docs:

See API JobGroups Put Publish v4.

Get Publications

You can retrieve a JSON list of all publications that have been executed for a specific job.

  • A publication is an object that corresponds to the delivery of a job's results to an external datastore.
  • In the Designer Cloud application, publications are executed through the Publishing Dialog, which is available through the Job Details page. See Publishing Dialog.

CLI example:

./trifacta_cli.py get_publications --user_name <trifacta_user> --password <trifacta_password> --job_id 42
--cli_output_path ./publications.out --publish_format avro

API v4 example - REST client:

You can use the job Id to retrieve the list of publications that have been executed for that job.

Endpoint: http://localhost:3005/v4/jobGroups/42/publications
Authentication: Required
Method: GET
Request Body: None.
Response Status Code: 200 - OK
Response Body: List of publication objects is included. See docs reference below.

Reference Docs:

See API JobGroups Get Publications v4.

Load Data into Table

You can load data into pre-existing Redshift tables.

  • Data is appended after any existing rows.
  • If the table does not exist, the job fails.

NOTE: When appending data into a Redshift table, the columns displayed in the Transformer page must match the order and data type of the columns in the target table.

CLI example:

In the following example, the results of jobId 42 are loaded into a Redshift table called table_42 using connectionId 2.

./trifacta_cli.py load_data --user_name <trifacta_user> --password <trifacta_password> --job_id 42 
--database dev --table table_42 --conn_id 2 --publish_format avro 
--cli_output_path ./load_info.out

API v4 example - REST client:

In the request body, note that the action parameter is set to load.

Endpoint: http://localhost:3005/v4/jobGroups/42/publish
Authentication: Required
Method: POST
Request Body:
{  "connection": {
    "id": 2
  },
  "path": ["dev"],
  "table": "table_42",
  "action": "load",
  "inputFormat": "avro",
  "flowNodeId": 27
}
Response Status Code: 200 - OK
Response Body:

{
  "jobgroupId": 42,
  "reason": "JobStarted",
  "sessionId": "f6c5f350-2102-11e9-bb80-9faf7b15g574"
}

Reference Docs:

API JobGroups Put Publish v4

Truncate and Load

You can truncate an existing table and load it with results from a job. If the table does not exist, a new table is created and populated.

CLI example:

./trifacta_cli.py truncate_and_load --user_name <trifacta_user> --password <trifacta_password> --job_id 10 
--database dev --table table_43 --conn_name aSQLServerConnection --publish_format avro 
--cli_output_path ./load_and_trunc_info.out

API v4 example - REST client:

In the request body, note that the action parameter is set to truncateAndLoad.

Endpoint: http://localhost:3005/v4/jobGroups/10/publish
Authentication: Required
Method: POST
Request Body:
{  "connection": {
    "id": 2
  },
  "path": ["dev"],
  "table": "table_43",
  "action": "truncateAndLoad",
  "inputFormat": "avro",
  "flowNodeId": 27
}
Response Status Code: 200 - OK
Response Body:

{
  "jobgroupId": 10,
  "reason": "JobStarted",
  "sessionId": "f6c5f350-2102-11e9-bb80-9faf7b15v291"
}

Reference Docs:

API JobGroups Put Publish v4

Reference - CLI for Connections

You can use the CLI for basic management of your connections.

CLI Docs: CLI for Connections

Create Connection

To create a connection, you specify the connection parameters as part of your command line command.

CLI example:

./trifacta_cli.py create_connection --user_name <trifacta_user> --password <trifacta_password>
--conn_type microsoft_sqlserver --conn_name aSQLServerConnection
--conn_description "This is my connection."
--conn_host example.com --conn_port 1234
--conn_credential_type basic
--conn_credential_location ~/.trifacta/config_conn.json
--conn_params_location ~/.trifacta/p.json
--cli_output_path ./conn_create.out

API v4 example - REST client:

Endpoint: http://localhost:3005/v4/connections
Authentication: Required
Method: POST
Request Body:
{
    "connectParams": {
        "vendor": "sqlserver",
        "vendorName": "sqlserver",
        "host": "example.com",
        "port": "1234"
    },
    "host": "example.com",
    "port": 1234,
    "vendor": "sqlserver",
    "params": {
        "connectStrOpts": ""
    },
    "ssl": false,
    "vendorName": "sqlserver",
    "name": "aSQLServerConnection",
    "description": "",
    "type": "jdbc",
    "isGlobal": false,
    "credentialType": "basic",
    "credentialsShared": true,
    "disableTypeInference": false,
    "credentials": [
        {
            "username": "<username>",
            "password": "<password>"
        }
    ]
}
Response Status Code: 201 - Created
Response Body:
{
    "connectParams": {
        "vendor": "sqlserver",
        "vendorName": "sqlserver",
        "host": "example.com",
        "port": "1234"
    },
    "id": 26,
    "host": "example.com",
    "port": 1234,
    "vendor": "sqlserver",
    "params": {
        "connectStrOpts": ""
    },
    "ssl": false,
    "vendorName": "sqlserver",
    "name": "aSQLServerConnection",
    "description": "",
    "type": "jdbc",
    "isGlobal": false,
    "credentialType": "basic",
    "credentialsShared": true,
    "uuid": "fa7e06c0-0143-11e8-8faf-27c0392328c5",
    "disableTypeInference": false,
    "createdAt": "2018-01-24T20:20:11.181Z",
    "updatedAt": "2018-01-24T20:20:11.181Z",
    "credentials": [
        {
            "username": "<username>"
        }
    ],
    "creator": {
        "id": 1
    },
    "updater": {
        "id": 1
    }
}

Reference Docs:

API Connections Create v4

Edit Connection

In the CLI, you use the edit_connection action to pass in modifications to a connection that is specified using the conn_name command line parameter. 

CLI example:

In the following example, the description, host, and port number are being changed for the aSQLServerConnection.

./trifacta_cli.py edit_connection --user_name <trifacta_user> --password <trifacta_password>
--conn_name aSQLServerConnection
--conn_description "This is my connection."
--conn_host mynewhost.com --conn_port 1234
--conn_credential_type basic --conn_credential_location ~/.trifacta/config_conn.json
--cli_output_path ./conn_edit.out

API v4 example - REST client:

When using the APIs, you reference the connection to modify by its internal identifier.

In the body of the request, you should include only the parameters that you are modifying for the connection. In this example, the connectionId is 8.

Endpoint: http://localhost:3005/v4/connections/8
Authentication: Required
Method: PATCH
Request Body:
{
  "description": "This is my connection.",
  "host": "mynewhost.com",
  "port": 1234
}
Response Status Code: 200 - OK
Response Body:
{
    "id": 8,
    "updater": {
        "id": 1
    },
    "updatedAt": "2019-01-25T23:19:27.648Z"
}

Reference Docs:

See API Connections Patch v4.

List Connections

The CLI command list_connections dumps the JSON objects for all connections to a local file.

CLI example:

./trifacta_cli.py list_connections --host dev.redshift.example.com
--user_name <trifacta_user> --password <trifacta_password>
--cli_output_path ./conn_list.out

API v4 example - REST client:

Use the following API endpoint to retrieve, in the body of the response, the JSON objects for all connections to which the authenticating user has access.

Tip: For any endpoint using a GET method, if you omit an object identifier, you retrieve all accessible objects of that type from the platform.

Endpoint: http://localhost:3005/v4/connections
Authentication: Required
Method: GET
Request Body: None.
Response Status Code: 200 - OK
Response Body: JSON objects for all accessible connections.

Reference Docs:

See API Connections Get List v4.

Delete Connection

For the CLI, you use the delete_connection command to remove connections that are specified by conn_name.

CLI example:

./trifacta_cli.py delete_connection --user_name <trifacta_user> --password <trifacta_password>
--conn_name aSQLServerConnection --cli_output_path ./conn_delete.out

API v4 example - REST client:

Use the internal identifier for the connection to delete it. In the following example, the connectionId is 4.

Endpoint: http://localhost:3005/v4/connections/4
Authentication: Required
Method: DELETE
Request Body: None.
Response Status Code: 204 - No Content
Response Body: None.

Reference Docs:

See API Connections Delete v4.

Reference - CLI for User Admin

You can use the CLI to handle some elements of user management.

NOTE: Some user account properties cannot be managed through the CLI. You must use the APIs or the application for some tasks.

CLI Docs: CLI for User Admin

Create User

CLI example: 

./trifacta_admin_cli.py --admin_username <trifacta_admin_username> 
--admin_password <trifacta_admin_password>  --verbose create_user 
--user_name joe@example.com --password user_pwd 
--name "<user_display_name>"

API v4 example - REST client:

The request body below contains the minimum set of required parameters to create a new user. 

  • The accept parameter must be set to accept for every new user.
Endpoint: http://www.example.com:3005/v4/people
Authentication: Required
Method: POST
Request Body:
{
  "accept": "accept",
  "password": "Hello2U",
  "password2": "Hello2U",
  "email": "joe@example.com",
  "name": "Joe"
}
Response Status Code: 201 - Created
Response Body:
{
    "isDisabled": false,
    "forcePasswordChange": false,
    "state": "active",
    "id": 9,
    "email": "joe@example.com",
    "name": "Joe",
    "ssoPrincipal": null,
    "hadoopPrincipal": null,
    "isAdmin": false,
    "updatedAt": "2019-01-09T20:23:31.560Z",
    "createdAt": "2019-01-09T20:23:31.560Z",
    "outputHomeDir": "/trifacta/queryResults/joe@example.com",
    "lastStateChange": null,
    "fileUploadPath": "/trifacta/uploads",
    "awsConfig": null
}

Reference Docs:

API People Create v4

Show User

Through the CLI, you can retrieve a specific user object by username.

CLI example:

./trifacta_admin_cli.py --admin_username <trifacta_admin_user> --admin_password <trifacta_admin_password> 
show_user --user_name joe@example.com 

API v4 example - REST client:

Through the APIs, you retrieve individual users by their internal userId. In the following example, the user corresponding to userId 4 is retrieved.

Endpoint: http://www.example.com:3005/v4/people/4
Authentication: Required
Method: GET
Request Body: None.
Response Status Code: 200 - OK
Response Body:
{
    "id": 4,
    "email": "joe2@example.com",
    "name": "Joe2",
    "ssoPrincipal": null,
    "hadoopPrincipal": null,
    "isAdmin": false,
    "isDisabled": false,
    "forcePasswordChange": false,
    "state": "active",
    "lastStateChange": null,
    "createdAt": "2019-02-20T20:05:49.882Z",
    "updatedAt": "2019-02-20T20:05:49.882Z",
    "outputHomeDir": "/trifacta/queryResults/joe@example.com",
    "fileUploadPath": "/trifacta/uploads",
    "awsConfig": null
}

Reference Docs:

API People Get v4.

Edit User

You can edit some properties through the CLI edit_user command.

CLI example:


In this example, the ssoPrincipal for the user is being changed.

./trifacta_admin_cli.py --admin_username <trifacta_admin_user> --admin_password <trifacta_admin_password> 
edit_user --user_name joe2@example.com --ssoPrincipal my_principal

API v4 example - REST client:

Using the APIs, you reference the user to modify by userId. In the following example, the userId is 4.

Include only the parameters in the request that are being modified.

Endpoint: http://www.example.com:3005/v4/people/4
Authentication: Required
Method: PUT
Request Body:
{
  "ssoPrincipal": "my_principal"
}
Response Status Code: 200 - OK
Response Body:

{
  "id": 4,
  "updatedAt": "2018-01-24T23:49:08.199Z"
}

Reference Docs:

API People Patch v4

Generate Password Reset URL

Through the CLI, admins can generate password reset emails to be sent to specific users.

CLI example:

./trifacta_admin_cli.py --admin_username <trifacta_admin_user> --admin_password <trifacta_admin_password> 
edit_user --user_name joe2@example.com --disable

API v4 example - REST client:

Use the following endpoint to generate a password reset code for the specified user (accountId). 

Endpoint: http://www.example.com:3005/v4/passwordresetrequest
Authentication: Required
Method: POST
Request Body:
{
  "accountId": 6,
  "email": "joe@example.com",
  "orginURL": "http://loginpage.example.com:3005/"
}
Response Status Code: 201 - Created
Response Body:
{
    "code": "<AccountResetCode>",
    "email": "joe@example.com"
}

The above must be built into a URL in the following format:

<http|https>://<host_name>:<port_number>/password-reset?email=<user_id>&code=<AccountResetCode>
  • <http|https>: HTTP protocol type. Example: http
  • <host_name>: Host of the Designer Cloud application. Example: loginpage.example.com
  • <port_number>: Port number used to log in to the Designer Cloud application. Example: 3005
  • <user_id>: User ID (email address) of the user whose password is to be reset. Example: joe@example.com
  • <AccountResetCode>: Password reset code. Example: CD44232791
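A short sketch of the whole sequence (placeholder hosts, token, and accountId), requesting a reset code and assembling the URL from the elements above:

# Sketch: request a password reset code and build the reset URL for the user.
from urllib.parse import quote
import requests

API_URL = "http://www.example.com:3005"             # platform API host, as in the example above
LOGIN_URL = "http://loginpage.example.com:3005"     # host and port users log in to
HEADERS = {"Authorization": "Bearer <api_access_token>",
           "Content-Type": "application/json"}

payload = {
    "accountId": 6,
    "email": "joe@example.com",
    "orginURL": LOGIN_URL + "/"                     # as shown in the request body above
}
resp = requests.post(API_URL + "/v4/passwordresetrequest", headers=HEADERS, json=payload)
resp.raise_for_status()                              # expect 201 - Created
body = resp.json()

reset_url = "{}/password-reset?email={}&code={}".format(
    LOGIN_URL, quote(body["email"]), body["code"])
print(reset_url)                                     # deliver this URL to the user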


Reference Docs:

API Password Reset Request Create v4

Disable User

Through the CLI, you can disable individual users by adding the disable flag as part of an edit_user directive.

CLI example:

./trifacta_admin_cli.py --admin_username <trifacta_admin_user> --admin_password <trifacta_admin_password> edit_user --user_name joe@example.com --disable

API v4 example - REST client:

In the APIs, you disable a specified user by patching the user object with the isDisabled flag set to true.

Endpoint: http://www.example.com:3005/v4/people/4
Authentication: Required
Method: PATCH
Request Body:
{
  "isDisabled": true
}
Response Status Code: 200 - OK
Response Body:

{
  "id": 4,
  "updatedAt": "2018-01-24T23:56:32.834Z"
}

Reference Docs:

API People Patch v4

Delete User

CLI example:

In the following example, the user is deleted by username, and the user's assets are transferred to another user.

NOTE: Transfer of assets is not required. However, if the assets are not transferred, they are no longer available.

./trifacta_admin_cli.py --admin_username <trifacta_admin_user> --admin_password <trifacta_admin_password> 
delete_user --user_name joe@example.com --transfer_assets_to jim@example.com

API v4 example - REST client:

Via the APIs, this transfer of assets and deletion of the user must be accomplished in two steps.

NOTE: You must verify that the transfer step occurs successfully before you execute the deletion. Deletion of a user cannot be undone.

NOTE: Transferring of assets does not check for access to the objects. It's possible that the receiving user may not be able to access connections or datasets that were created by the original user. You may wish to share those assets through the application before you perform the deletions.

Here is the mapping of example userIds between CLI and API:

CLI userId | API userId
joe@example.com | 4
jim@example.com | 7

Transfer of assets:

The following endpoint call transfers assets from userId 4 to userId 7.

 

Endpoint: http://www.example.com:3005/v4/people/7/assetTransfer/4
Authentication: Required
Method: PATCH
Request Body: None.
Response Status Code: 201 - Created
Response Body:
[
    [
        1,
        [
            0,
            [
                {
                    "connectionId": 7,
                    "personId": 7,
                    "role": "owner",
                    "createdAt": "2019-02-21T19:52:22.993Z",
                    "updatedAt": "2019-02-21T19:52:22.993Z"
                }
            ]
        ]
    ]
]

NOTE: Verify that you have received a response similar to the above before you delete the user. You should also verify that the receiving user can access the transferred assets in the application.

Delete user:

After assets have been transferred, the user can be deleted by userId (4 in this example).

Endpoint: http://www.example.com:3005/v4/people/4
Authentication: Required
Method: DELETE
Request Body: None.
Response Status Code: 204 - No Content
Response Body: None.
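Putting the two steps together, a cautious sketch (placeholder host, token, and user ids) verifies the transfer response before issuing the delete:

# Sketch: transfer assets from userId 4 to userId 7, then delete userId 4.
# Deletion cannot be undone, so the delete runs only if the transfer succeeded.
import requests

BASE_URL = "http://www.example.com:3005"
HEADERS = {"Authorization": "Bearer <api_access_token>"}      # assumed token header
old_user_id = 4                                               # user to delete
new_user_id = 7                                               # user receiving the assets

# Step 1: transfer assets (endpoint path as shown above: /people/<to>/assetTransfer/<from>).
transfer = requests.patch(
    "{}/v4/people/{}/assetTransfer/{}".format(BASE_URL, new_user_id, old_user_id),
    headers=HEADERS)
transfer.raise_for_status()                                   # expect 201 - Created

# Step 2: delete the user only after the transfer response has been verified.
delete = requests.delete("{}/v4/people/{}".format(BASE_URL, old_user_id), headers=HEADERS)
delete.raise_for_status()                                      # expect 204 - No Content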

Reference Docs:

API People Delete v4
