When you upgrade to a new version of Designer Cloud Enterprise Edition, you must download your CLI packages from the Transformer page after the upgrade is complete. You may be able to execute your packages from the previous release, but backward compatibility of exported CLI packages is not explicitly supported.
Changes for Release 5.1
None.
Changes for Release 5.0
CLI for Connections does not support Redshift and SQL DW connections
In Release 5.0, the management of Redshift and SQL DW connections through the CLI for Connections is not supported.
NOTE: Please create Redshift and SQL DW connections through the application. See Connections Page.
Changes for Release 4.2
All CLI scripts with relational connections must be re-downloaded
Each CLI script that references a dataset through a connection to run a job must be re-downloaded from the application in Release 4.2.
Scripts from Release 4.1 that utilize the `run_job` command do not work in Release 4.2.
Requirements for Release 4.2 and later:
- In the executing environment for the CLI script, the relational (JDBC) connection must exist and must be accessible to the user running the job.
- When the CLI script is downloaded from the application, the connection ID in the `datasources.tsv` file must be replaced by a corresponding connection ID from the new environment.
- Connection identifiers can be retrieved using the `list_connections` command from the CLI. See CLI for Connections.
After the above changes have been applied to the CLI script, it should work as expected in Release 4.2. For more information, see Run job in CLI for Jobs.
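As an illustration only, the ID replacement can be scripted. The connection IDs and the `sed` substitution below are assumptions, not documented behavior; verify that the replacement touches only the connection ID column of the file:

```bash
# Sketch: look up connection IDs in the upgraded environment, then update
# the downloaded datasources.tsv. IDs and paths below are illustrative.
./trifacta_cli.py list_connections --user_name <trifacta_user> \
  --password <trifacta_password> --cli_output_path ./conn_list.out

# Suppose the old environment used connection ID 7 and the new one uses 12:
sed -i 's/\b7\b/12/g' redshift-test/datasources.tsv
```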
Redshift credentials format has changed
In Release 4.1.1 and earlier, the credentials file used for Redshift connection was similar to the following:
{ "awsAccessKeyId": "<your_awsAccessKeyId>", "awsSecretAccessKey": "<your_awsSecretAccessKey>", "user": "<your_user>", "password": "<your_password>" }
In Release 4.2:
- The AWS key and secret, which were stored in `trifacta-conf.json`, do not need to be replicated in the Redshift credentials file.
- The Designer Cloud Powered by Trifacta platform now supports EC2 role-based instance authentication. This configuration can optionally be included in the credentials file.
The credentials file format looks like the following:
{ "user": "<your_user>", "password": "<your_password>", "iamRoleArn": "<your_IAM_role_ARN>" }
NOTE: For security purposes, you may wish to remove the AWS key/secret information from the Redshift credentials file.
NOTE: `iamRoleArn` is optional. For more information on using IAM roles, see Configure for EC2 Role-Based Authentication.
Changes for Release 4.1.1
Single-file CLI publishing method is deprecated
In Release 4.1 and earlier, the `run_job` action for the Command Line Interface supported specifying a single-file publishing action as part of the command itself.
In Release 4.1.1 and later, this method has been superseded by specifying single- and multi-file publishing actions through an external file. The single-file method may still work with the platform, but it is likely to be removed in a future release.
See CLI Publishing Options File.
CLI run job output has changed
NOTE: This change first appeared in Release 4.1 but was not surfaced in documentation until Release 4.1.1.
See below.
Changes for Release 4.1
CLI run job output has changed
In Release 4.0 and earlier, standard output for launching a job is similar to the following:
```
Job has been successfully launched:
You may monitor the progress of your job here: http://localhost:3005/jobs
Upon success, you may view the results of your job here: http://localhost:3005/jobs/34
```
In Release 4.1 and later, standard output has been changed to the following:
```
Job #34 has been successfully launched:
You may monitor the progress of your job here: http://localhost:3005/jobs
```
NOTE: If your CLI script processing relies on the standard output for gathering job identifiers, you must modify how you collect the jobId to match the new format. For an example of how to parse the standard output to gather jobId values, see CLI Example - Parameterize Job Runs.
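For example, a minimal shell sketch (not the documented example) that extracts the job identifier from the new output format might look like the following; the captured variable and the exact `run_job` parameters are assumptions reusing values from elsewhere on this page:

```bash
# Capture the CLI's standard output and parse the job ID from the
# "Job #<id> has been successfully launched" line (Release 4.1 format).
output=$(./trifacta_cli.py run_job --user_name <trifacta_user> \
  --password <trifacta_password> --job_type spark \
  --data redshift-test/datasources.tsv --script redshift-test/script.cli \
  --cli_output_path ./job_info.out)
job_id=$(printf '%s\n' "$output" | sed -n 's/^Job #\([0-9][0-9]*\) has been successfully launched.*/\1/p')
echo "Launched jobId: $job_id"
```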
For more information, see CLI for Jobs.
Specify multiple publication targets for a job through an external file
In Release 4.0 and earlier, a `run_job` command could generate a single publication.
In Release 4.1 and later, you can specify multiple publication targets to be executed when a job is run. These publication targets are specified in an external file.
NOTE: The legacy method of defining output options as part of individual parameters in the `run_job` command is still supported. However, it is likely to be deprecated in a future release. You should migrate your CLI scripts to use the new method that references external publication files.
Release 4.0 Example:
/trifacta_cli.py run_job --user_name <trifacta_user> --password <trifacta_password> --job_type spark --output_format csv --data redshift-test/datasources.tsv --script redshift-test/script.cli --publish_action create --header true --single_file true --cli_output_path ./job_info.out --profiler on --output_path hdfs://localhost:8020/trifacta/queryResults/foo@trifacta.com/MyDataset/43/cleaned_table_1.csv
Release 4.1 Example:
The following example references the `--publish_opt_file` parameter.
./trifacta_cli.py run_job --user_name <trifacta_user> --password <trifacta_password> --job_type spark --data redshift-test/datasources.tsv --script redshift-test/script.cli --cli_output_path ./job_info.out --profiler on --publish_opt_file /json/publish/file/publishopts.json
The following parameters have been migrated into settings in the publishing options file:
- `output_format`
- `publish_action`
- `header`
- `single_file`
- `output_path`
For more information, see CLI Publishing Options File.
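As a hedged illustration, a publishing options file carrying the migrated settings might resemble the following; the field names and structure here are assumptions mirroring the migrated parameters, not a documented schema — see CLI Publishing Options File for the authoritative format:

```bash
# Write an illustrative publishing options file. Field names below mirror
# the migrated CLI parameters and are assumptions, not a documented schema.
cat > /json/publish/file/publishopts.json <<'EOF'
[
  {
    "output_format": "csv",
    "publish_action": "create",
    "header": true,
    "single_file": true,
    "output_path": "hdfs://localhost:8020/trifacta/queryResults/foo@trifacta.com/MyDataset/43/cleaned_table_1.csv"
  }
]
EOF
```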
Create schema command type is no longer available
In Release 4.0 and earlier, you could use the `create_schema` command to create an empty schematized table in Hive or Redshift.
In Release 4.1 and later, this command is no longer available. With ad-hoc publication to Hive now available, you shouldn't need to use this command.
Pig job type is no longer available
In Release 4.0 and earlier, you could specify `--job_type pig` to execute a job on the Hadoop Pig running environment.
In Release 4.1, the Hadoop Pig running environment has been deprecated.
NOTE: CLI scripts that reference the `pig` job type must be updated.
Search your CLI scripts for references to the `pig` job type. Replace each reference with one of the following values, depending on your deployment (a search-and-replace sketch follows the steps below):
Job Type Value | Description |
---|---|
spark | Runs job in the Spark running environment. |
hadoop | Runs jobs in the default running environment for the Hadoop cluster. For this release, that environment is Spark. This setting future-proofs against subsequent changes to the default Hadoop running environment. |
photon | Runs job on the Alteryx Server. This setting is only recommended for smaller jobs. |
- Save your scripts.
- Run an example of each on your upgraded Release 4.1 instance.
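The search-and-replace itself can be scripted. This sketch assumes your CLI scripts live under `./cli-scripts` and that `spark` is the appropriate replacement for your deployment:

```bash
# Find CLI scripts that still reference the deprecated pig job type and
# replace it. Substitute hadoop or photon for spark as appropriate.
grep -rl -- '--job_type pig' ./cli-scripts/ | while read -r script; do
  sed -i 's/--job_type pig/--job_type spark/g' "$script"
done
```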
Default CLI job type is now Photon
In Release 4.0 and earlier, the default job type for CLI jobs was `JS`. If the execution engine was not specified, then the job was run on the Javascript-based execution engine.
In Release 4.1 and later, the Javascript execution engine has been deprecated, and the default job type for CLI jobs is now `Photon`, which is the default execution engine on the Alteryx node.
NOTE: If your CLI scripts do not specify a `job_type` parameter, the job is executed on the Photon running environment, which replaces the Javascript running environment. If this is acceptable, no action is required. Otherwise, you must review your scripts and manually specify a `job_type` parameter for execution.
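For example (a sketch reusing parameters from elsewhere on this page), pinning the engine explicitly looks like this:

```bash
# Explicitly specify the execution engine rather than relying on the default:
./trifacta_cli.py run_job --user_name <trifacta_user> \
  --password <trifacta_password> --job_type photon \
  --data redshift-test/datasources.tsv --script redshift-test/script.cli \
  --cli_output_path ./job_info.out
```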
For more information, see CLI for Jobs.
Changes for Release 4.0
JS job type is no longer supported
Beginning in Release 4.0, the Javascript running environment is no longer supported.
In Release 4.0, you may be able to continue to use this running environment to execute your jobs. However, there may be differences in terms of functionality between CLI jobs executed under this running environment and the results generated through the GUI.
After upgrade, you should review all of your scripts for their `--job_type` values. You should change these references to the following:
--job_type photon
For more information, see CLI for Jobs.
Pig job type is being replaced
Beginning in Release 4.0, the Pig running environment is being superseded by the new Spark running environment. The Spark running environment for execution is the default for Release 4.0 and later.
For CLI jobs, you should modify your script to use the following job type:
--job_type hadoop
The above option instructs the CLI to execute the job on the default running environment for Hadoop.
- For upgrading customers who have not enabled Spark, the job is executed on Pig, as in previous releases.
- In a future release, when the Pig running environment is deprecated, this setting will still apply to the default running environment (Spark), and your scripts will not need to be updated.
For more information, see CLI for Jobs.
Changes to the Hive Connection
NOTE: If you are upgrading from a previous version in which the Alteryx node is connected to Hive, you must recreate the Hive connection through the Command Line Interface.
In Release 3.2 and 3.2.1, in some environments, the Hive connection was created using a `trifacta-conf.json` value that was deprecated. The configuration looked like the following:
"deprecated": { "kerberos": { "principals": { ... "hive": "hive/_HOST@EXAMPLE.COM" ... }, }, }
In Release 4.0 and later, this issue has been fixed by moving this principal value into the connection string options for the parameters file for the connection:
"connectStrOpts": ";principal=<principal_value>",
For more information, see Configure for Hive.
Changes for Release 3.2.1
New file publishing actions and parameters
Beginning in Release 3.2.1, you can specify file publishing options through the command line interface:
Option | Description |
---|---|
create, append, or overwrite | In the publish_action parameter, you specify whether you want new executions of the command to create a new file, append the new results to the existing file, or overwrite any file found in the specified location. These options match the new file publishing options available through the application in Release 3.2.1. |
header | If the header flag is included, the column headers are added as the first row in any CSV output. |
single_file | If the single_file flag is included, the job results are written to a single output file rather than multiple files. |
For more information, see CLI for Jobs.
Change in behavior for the output_path parameter
Prior to Release 3.2.1:
The `output_path` parameter was optional for inclusion in a `run_job` command.
It was used to specify a path to a folder. This folder was populated with the output files from runs of the `run_job` command. The filename was generated by the CLI.
If this parameter was not specified, outputs were written to the default output location specified for the executing user.
Release 3.2.1 and later:
This parameter must be included in `run_job` commands.
In Release 3.2.1 and later, this parameter now specifies a fully qualified URI to the output file, such as the following examples:
```
hdfs://host:port/path/filename.csv
s3://bucketName/path/filename.csv
```
NOTE: If you created scripts that use the CLI for releases before Release 3.2.1, you must update any `run_job` commands to include the `output_path` parameter.
Depending on the `publish_action` parameter, the following behaviors are applied to the file on subsequent job executions:
Publish action | Description |
---|---|
create | A new file is created with each job: filename_1.csv, filename_2.csv, etc. |
append | Any existing file is appended with the results from the new job execution. |
overwrite | Any existing file is replaced with the results from the new job execution. |
For more information, see CLI for Jobs.
CLI commands now accept only a single output format
Prior to Release 3.2.1, CLI commands could deliver results to multiple formats in a single command.
In Release 3.2.1, a CLI command can now accept only a single output format, since this output is now associated with a single publishing action, publishing options, and `output_path`. Additionally, the `output_formats` parameter has been changed to `output_format`.
NOTE: Any CLI command from a release prior to Release 3.2.1 that specifies an `output_formats` parameter must be updated to use the new parameter name: `output_format`.
See CLI for Jobs.
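The rename can be applied mechanically. This sketch assumes a single script file and that each command specified only one format; commands that listed multiple formats must be split into separate commands first:

```bash
# Rename the parameter in a legacy CLI script. If a command passed multiple
# formats to --output_formats, split it into one command per format first.
sed -i 's/--output_formats/--output_format/g' my_cli_script.sh
```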
Changes for Release 3.2
CLI supports Kerberos-based credentials
Beginning in Release 3.2, the Alteryx command line interface supports the use of Kerberos-based credentials for interactions with the Alteryx platform and the Hadoop backend datastore.
Steps:
- To enable use of Kerberos, see Set up for a Kerberos-enabled Hadoop cluster.
- In particular, you must configure a Kerberos principal for the Alteryx platform to use.
- Then, you must enable use of the Kerberos keytab by the CLI.
See the above link for further instructions.
After you have performed the above configuration, you do not need to provide a password value for commands issued through the Alteryx CLI. See CLI for Jobs.
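For illustration (assuming the keytab configuration above is complete), a job command can then omit the password parameter:

```bash
# With Kerberos keytab authentication enabled for the CLI, no --password
# parameter is required (illustrative command):
./trifacta_cli.py run_job --user_name <trifacta_user> --job_type spark \
  --data redshift-test/datasources.tsv --script redshift-test/script.cli \
  --cli_output_path ./job_info.out
```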
Transfer assets of a deleted user to another user
When you delete a user through the CLI, you can now transfer that user's assets to a new user. Add the `--transfer_assets_to` parameter, followed by the userId of the user who becomes the new owner of those assets.
Example command:
./trifacta_admin_cli.py --admin_username <trifacta_admin_user> --admin_password <trifacta_admin_password> delete_user --user_name joe@example.com --transfer_assets_to jim@example.com
For more information, see CLI for User Admin.
Changes for Release 3.1.2
cli_output_path requires additional permissions
In any command that utilizes a `cli_output_path` parameter, the user who executes the command must also have execute permissions on all parent folders of the `cli_output_path` folder, as illustrated in the sketch below.
- This issue affects Release 3.1.1 and later.
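A hypothetical illustration, assuming the output folder is `/home/trifacta/cli-output`; how you grant the permission depends on folder ownership in your environment:

```bash
# The executing user needs execute (traverse) permission on every parent
# folder of the cli_output_path folder. Paths here are illustrative.
chmod o+x /home /home/trifacta /home/trifacta/cli-output
```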
Changes to command outputs and responses for connections
In Release 3.1.2, the outputs and responses for connection-related commands in the CLI have changed. Differences are listed below.
For more information, see CLI for Connections.
Create connection
Example command (same for both releases):
./trifacta_cli.py create_connection --user_name <trifacta_user> --password <trifacta_password> --conn_type microsoft_sqlserver --conn_name aSQLServerConnection --conn_description "This is my connection." --conn_host example.com --conn_port 1234 --conn_credential_type basic --conn_credential_location ~/.trifacta/config_conn.json --conn_params_location ~/.trifacta/p.json --cli_output_path ./conn_create.out
Release 3.1.1:
Output:
```
Creating connection aSQLServerConnection
Connection information for aSQLServerConnection
description: This is my connection.
host: example.com
credentials: ["{u'username': u'<trifacta_user>'}"]
port: 1234
is_global: False
name: aSQLServerConnection
id: 9
credential_type: basic
params:
  database: trifacta
type: microsoft_sqlserver
JSON results written to conn_create.out.
```
JSON Response:
{ "conn_credential_location": "~/.trifacta/config_conn.json", "conn_credential_type": "basic", "conn_host": "example.com", "conn_id": 9, "conn_name": "aSQLServerConnection", "conn_params_location": "~/.trifacta/p.json", "conn_port": "1234", "conn_type": "microsoft_sqlserver", "host": "http://example.com:3005", "status": "success", "user_name": "<trifacta_user>" }
Release 3.1.2:
Output:
```
Success: Connection aSQLServerConnection created
JSON results written to conn_create.out.
```
JSON Response:
{ "conn_credential_location": "~/.trifacta/config_conn.json", "conn_credential_type": "basic", "conn_host": "example.com", "conn_id": 9, "conn_name": "aSQLServerConnection", "conn_params_location": "~/.trifacta/p.json", "conn_port": "1234", "conn_type": "microsoft_sqlserver", "host": "http://example.com:3005", "results": { "createdAt": "2016-06-30T21:53:58.977Z", "createdBy": 3, "credential_type": "basic", "credentials": [ { "username": "<trifacta_user>" } ], "deleted_at": null, "description": null, "host": "example.com", "id": 9, "is_global": false, "name": "aSQLServerConnection", "port": 1234, "type": "microsoft_sqlserver", "updatedAt": "2016-06-30T21:53:58.977Z", "updatedBy": 3 }, "status": "success", "user_name": "<trifacta_user>" }
Edit connection
Example command (same for both releases):
./trifacta_cli.py edit_connection --user_name <trifacta_user> --password <trifacta_password> --conn_name aSQLServerConnection <--conn_type microsoft_sqlserver> <--conn_description "This is my connection."> <--conn_host mynewhost.com> <--conn_port 1234> <--conn_credential_type basic> <--conn_credential_location ~/.trifacta/config_conn.json> <--cli_output_path ./conn_edit.out>
Release 3.1.1:
Output:
```
Updating connection aSQLServerConnection
Connection information for aSQLServerConnection
description: This is my connection.
host: mynewhost.com
credentials: ["{u'username': u'<trifacta_user>'}"]
port: 1234
is_global: False
name: aSQLServerConnection
id: 9
credential_type: basic
params:
  database: trifacta
type: microsoft_sqlserver
JSON results written to conn_edit.out.
```
JSON Response:
{ "conn_description": "This is my connection.", "conn_id": 9, "conn_name": "aSQLServerConnection", "conn_params_location": "~/.trifacta/p.json", "host": "http://localhost:3005", "status": "success", "user_name": "<trifacta_user>" }
Release 3.1.2:
Output:
```
Success: Updated connection aSQLServerConnection
JSON results written to conn_edit.out.
```
JSON Response:
{ "conn_description": "This is my connection.", "conn_id": 9, "conn_name": "aSQLServerConnection", "conn_params_location": "~/.trifacta/p.json", "host": "http://nynewhost.com:3005", "results": { "createdAt": "2016-06-30T22:08:47.016Z", "createdBy": 3, "credential_type": "basic", "credentials": [ { "username": "<trifacta_user>" } ], "deleted_at": null, "description": "This is my connection.", "host": "mynewhost.com", "id": 9, "is_global": false, "name": "aSQLServerConnection", "port": 1234, "type": "microsoft_sqlserver", "updatedAt": "2016-06-30T22:09:03.670Z", "updatedBy": 3 }, "status": "success", "user_name": "<trifacta_user>" }
List connections
Example command (same for both releases):
./trifacta_cli.py list_connections --host dev.redshift.example.com --user_name <trifacta_user> --password <trifacta_password> --cli_output_path ./conn_list.out
Release 3.1.1:
Output:
```
Listing connections
Found 2 connections for params {'noLimit': 'true'}.
Redshift:
description: None
host: dev.redshift.example.com
credentials: ["{u'username': u'<trifacta_user>'}"]
port: 5439
is_global: True
name: Redshift
id: 2
credential_type: custom
params:
  extraLoadParams: BLANKSASNULL EMPTYASNULL TRIMBLANKS TRUNCATECOLUMNS
  defaultDatabase: dev
type: amazon_redshift
Hive:
description: None
host: dev.hive.example.com
credentials: ["{u'username': u'<trifacta_user>'}"]
port: 10000
is_global: True
name: Hive
id: 1
credential_type: conf
params:
  jdbc: hive2
  connectStrOpts:
  defaultDatabase: default
type: hadoop_hive
JSON results written to conn_list.out.
```
JSON Response:
{ "connections": [ { "conn_createdAt": "2016-06-01T21:12:59.383Z", "conn_createdBy": 2, "conn_credential_type": "custom", "conn_credentials": [ { "username": "<trifacta_user>" } ], "conn_deleted_at": null, "conn_description": null, "conn_host": "dev.redshift.example.com", "conn_id": 2, "conn_is_global": true, "conn_name": "Redshift", "conn_params": { "extraLoadParams": "BLANKSASNULL EMPTYASNULL TRIMBLANKS TRUNCATECOLUMNS", "defaultDatabase": "dev" }, "conn_port": 5439, "conn_type": "amazon_redshift", "conn_updatedAt": "2016-06-01T21:33:38.672Z", "conn_updatedBy": 2 }, { "conn_createdAt": "2016-06-01T21:11:41.222Z", "conn_createdBy": 2, "conn_credential_type": "conf", "conn_credentials": [ { "username": "<trifacta_user>" } ], "conn_deleted_at": null, "conn_description": null, "conn_host": "dev.hive.example.com", "conn_id": 2, "conn_is_global": true, "conn_name": "Hive", "conn_params": { "jdbc": "hive2", "connectStrOpts": "", "defaultDatabase": "default" }, "conn_port": 10000, "conn_type": "hadoop_hive", "conn_updatedAt": "2016-06-01T21:39:58.090Z", "conn_updatedBy": 2 } ], "host": "http://localhost:3005", "status": "success", "user_name": "<trifacta_user>" }
Release 3.1.2:
No changes.
Delete connection
Example command (same for both releases):
./trifacta_cli.py delete_connection --user_name <trifacta_user> --password <trifacta_password> --conn_name aSQLServerConnection --cli_output_path ./conn_delete.out
Release 3.1.1:
Output:
```
Deleting connection with id: 9
Delete successful.
JSON results written to conn_delete.out.
```
JSON Response:
{ "conn_name": "aSQLServerConnection", "host": "http://localhost:3005", "status": "success", "user_name": "<trifacta_user>" }
Release 3.1.2:
Output:
```
Success. Deleted connection with id: 9
JSON results written to conn_delete.out.
```
JSON Response:
Same as the previous release:
```json
{
  "conn_name": "aSQLServerConnection",
  "host": "http://localhost:3005",
  "status": "success",
  "user_name": "<trifacta_user>"
}
```
Changes for Release 3.1
With the introduction of the Connection object model for publication, some parameters required for the Alteryx® Command Line Interface (CLI) have been removed and replaced by other parameters.
NOTE: This section identifies the changes that must be applied to your scripts for the Alteryx Command Line Interface if you are upgrading a Release 3.0.1 or earlier instance of the platform to Release 3.1 or later. Scripts that were functional on earlier versions of the platform do not function in Release 3.1 or later without these modifications.
See CLI for Connections.
See CLI for Jobs.
See CLI for User Admin.
CLI Tools Available as Executables
The command-line interface tools are now deployed as executables. Commands should continue to reference the Python scripts that were available in previous versions. These targets now serve as symlinks to the executables stored elsewhere in the Alteryx deployment.
Download Upgraded Transform Scripts
NOTE: If you have upgraded from a previous version of the Designer Cloud Powered by Trifacta platform where you were using the CLI, you must download again from the platform any transform scripts that you use in your CLI commands.
During the upgrade process, your transform scripts have been changed in their internal form, and the old versions are unlikely to work as expected with the command line interface.
You can download individual scripts through the Transformer page. See Transformer Page.
NOTE: If you have a large number of transform scripts used by the CLI, please contact Alteryx Support.
Behavior Changes
Asynchronous publishing
Publishing jobs are now executed asynchronously.
When you execute a publish operation, the platform immediately returns a response containing the job ID to monitor.
You can monitor the progress of your publish operation using a `get_job_status` command.
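For example (a sketch; the parameters shown for `get_job_status` are assumptions based on the other commands on this page, and the job ID placeholder comes from the publish response):

```bash
# Launch the publish operation; the response returns a job ID immediately.
./trifacta_cli.py publish --user_name <trifacta_user> --password <trifacta_password> \
  --job_id 42 --database dev --table table_job_42 --connection_name 1 \
  --publish_format avro --cli_output_path ./publish_info.out

# Poll the returned job ID until the publish completes (ID illustrative):
./trifacta_cli.py get_job_status --user_name <trifacta_user> \
  --password <trifacta_password> --job_id <publish_jobId> --cli_output_path ./status.out
```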
Append and overwrite jobs for Hive
In Release 3.0.1 and earlier, you could publish to a new Hive table only by using the `publish` action. After publication, no further updating was available through the Designer Cloud Powered by Trifacta platform.
Beginning in Release 3.1, Hive supports the following two publication actions:
- `load_data` - Loads data into a database table to which a schema has already been applied. Use this action to append to an existing table.
- `truncate_and_load` - Overwrites the data in the specified table.
The `publish` action is still supported, but its parameters have changed. See below.
Parameter Changes
Removed | Replaced With | Notes |
---|---|---|
publication_target | connection_name | In the new connection object model, publication targets are referred to using the internal name. |
disable_server_certificate_verification | disable_ssl_certification | Shortened name |
Localhost is assumed
If the `--host` parameter is not specified, the following parameter value is assumed:
--host localhost:3005
Release 3.0 and earlier documentation included this parameter and value, which are unnecessary.
NOTE: If you are executing the CLI against a host other than `localhost`, the `host` parameter must be specified, including the port number. Port `3005` is the default value for the platform; even if you use that port, it must be included.
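For example (an illustrative command reusing `list_connections` from later in this section):

```bash
# Target a remote node explicitly; the port must be included even when it
# is the platform default of 3005.
./trifacta_cli.py list_connections --host example.com:3005 \
  --user_name <trifacta_user> --password <trifacta_password> \
  --cli_output_path ./conn_list.out
```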
Unchanged commands
The parameters for the following commands have not changed between Release 3.0 and Release 3.1:
- `run_job`
- `get_job_status`
Example Commands
The following are examples of each type of command from Release 3.0. The Release 3.1 equivalent is listed below each one.
NOTE: These commands apply to Release 3.1.0 only. They change again for Release 3.1.1 and later. For details, see Changes for Release 3.1.1 below.
For more information on your build number, select User menu > About Trifacta in the application.
create_schema action
Release 3.0 command (all one command):
./bin/trifacta_cli.py create_schema --user_name <trifacta_user> --password <trifacta_password> --job_id 42 --database dev --table table_42 --cli_output_path ./create_info.out
Release 3.1 command (all one command):
./bin/trifacta_cli.py create_schema --user_name <trifacta_user> --password <trifacta_password> --job_id 42 --database dev --table table_42 --connection_name aSQLServerConnection --cli_output_path ./create_info.out
publish action
Release 3.0 command (all one command):
./bin/trifacta_cli.py publish --user_name <trifacta_user> --password <trifacta_password> --job_id 42 --database dev --table table_job_42 --publish_format avro --publication_target redshift --cli_output_path ./publish_info.out
Release 3.1 command (all one command):
./bin/trifacta_cli.py publish --user_name <trifacta_user> --password <trifacta_password> --job_id 42 --database dev --table table_job_42 --connection_name 1 --publish_format avro --cli_output_path ./publish_info.out
load_data action
Release 3.0 command (all one command):
./bin/trifacta_cli.py load_data --user_name <trifacta_user> --password <trifacta_password> --job_id 42 --database dev --table table_42 --cli_output_path ./load_info.out --publish_format avro
Release 3.1 command (all one command):
./bin/trifacta_cli.py load_data --user_name <trifacta_user> --password <trifacta_password> --job_id 42 --database dev --table table_42 --connection_name aSQLServerConnection --publish_format avro --cli_output_path ./load_info.out
truncate_and_load action
Release 3.0 command:
- This action was not available in Release 3.0.
Release 3.1 command (all one command):
./bin/trifacta_cli.py truncate_and_load --user_name <trifacta_user> --password <trifacta_password> --job_id 10 --database dev --table table_43 --connection_name aSQLServerConnection --publish_format avro --cli_output_path ./load_and_trunc_info.out
Connections actions
Beginning in Release 3.1, publishing targets are accessed through connection objects. Through the CLI, you can add, edit, list, or delete connections.
This capability did not exist prior to Release 3.1.
For more information on using the CLI to create connections, see CLI for Connections.
User Admin actions
There were no updates to CLI user administration actions in Release 3.1.
See CLI for User Admin.
Changes for Release 3.1.1
The following changes to the Command Line Interface have been applied to Release 3.1.1.
NOTE: If you have upgraded from Release 3.0 or earlier, you should review first the changes that were made available in Release 3.1.0. See Changes for Release 3.1.
Improved error messaging
For Release 3.1.0 and earlier, some error messages were ambiguous or outright confusing. For example, multiple error states returned a message with `_dict_` as part of the information.
For Release 3.1.1, error messages have been improved to provide more specific information about the issue and the parameters that were applied as part of the command.
Update to Hive params file
NOTE: If you are upgrading from Release 3.1 or earlier and have been using the CLI to connect to Hive, this change is required.
In Release 3.0.1 and earlier, the Hive params file utilized a parameter called `connectStringOptions` for passing arguments to Hive:
"connectStringOptions": ";transportMode=http;httpPath=cliservice"
In Release 3.1.1, the name of this parameter has been changed to `connectStrOpts`, which is more consistent with the internal storage of the parameter:
"connectStrOpts": ";transportMode=http;httpPath=cliservice"
Changes to CLI for Connections
For Release 3.1.1, a number of improvements have been applied to the CLI for connections.
The CLI for connections was introduced in Release 3.1.0. If you are upgrading from Release 3.1.0, these changes must be applied to each of your CLI scripts. For more information, see CLI for Connections.
Updated standard output and JSON response for connection commands
For Release 3.1.1, the messages delivered back to standard output and in the JSON response from the Alteryx node have been made consistent with the parameter names entered at the command line.
Tip: The names referenced in the JSON response should match the parameter names used in the command line interface. As of this release, you can use the JSON response as input for your next CLI command.
General structure of connection parameter names
Connection parameters are now prefaced with `--conn` instead of `--connection`. Some examples:
Release 3.1.0 example | Release 3.1.1 example |
---|---|
--connection_type | --conn_type |
--connection_host | --conn_host |
This general change applies to all connection parameters.
NOTE: If you are upgrading from Release 3.1 and used the CLI for `publish` or `get_publications` commands, those scripts must be updated to use the new `conn_name` parameter. See CLI for Jobs.
Specific connection parameter name changes
Release 3.1.0 or earlier example | Release 3.1.1 example | Notes |
---|---|---|
--connection_params_file | --conn_params_location | Applies to upgrades from Release 3.1.0 only |
--connection_credential_file | --conn_credential_location | Applies to upgrades from Release 3.1.0 only |
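A sketch for updating existing scripts in bulk; note that the two file-to-location renames must run before the generic prefix rename, and the script filename is illustrative:

```bash
# Apply the specific renames first, then the general --connection_ prefix
# change, so that params_file and credential_file map to the new names.
sed -i \
  -e 's/--connection_params_file/--conn_params_location/g' \
  -e 's/--connection_credential_file/--conn_credential_location/g' \
  -e 's/--connection_/--conn_/g' my_cli_script.sh
```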
Changes to connection example commands
Below are updates to example commands published in the Release 3.1.0 documentation.
Changes to Create Connection example
NOTE: Applies to upgrades from Release 3.1.0 only.
Release 3.1.0 example command:
./trifacta_cli.py create_connection --user_name <trifacta_user> --password <trifacta_password> --connection_type microsoft_sqlserver --connection_name aSQLServerConnection --connection_description "This is my connection." --connection_host example.com --connection_port 1234 --connection_credential_type basic --connection_credential_file ~/.trifacta/config_conn.json --connection_params_file ~/.trifacta/p.json --cli_output_path ./connection_create.out
Release 3.1.1 example command:
./trifacta_cli.py create_connection --user_name <trifacta_user> --password <trifacta_password> --conn_type microsoft_sqlserver --conn_name aSQLServerConnection --conn_description "This is my connection." --conn_host example.com --conn_port 1234 --conn_credential_type basic --conn_credential_location ~/.trifacta/config_conn.json --conn_params_location ~/.trifacta/p.json --cli_output_path ./conn_create.out
Changes to Edit Connection example
NOTE: Applies to upgrades from Release 3.1.0 only.
Release 3.1.0 example command:
./trifacta_cli.py edit_connection --user_name <trifacta_user> --password <trifacta_password> --connection_name aSQLServerConnection <--connection_type microsoft_sqlserver> <--connection_description "This is my connection."> <--connection_host mynewhost.com> <--connection_port 1234> <--connection_credential_type basic> <--connection_credential_file ~/.trifacta/config_conn.json> <--cli_output_path ./connection_edit.out>
Release 3.1.1 example command:
./trifacta_cli.py edit_connection --user_name <trifacta_user> --password <trifacta_password> --conn_name aSQLServerConnection <--conn_type microsoft_sqlserver> <--conn_description "This is my connection."> <--conn_host mynewhost.com> <--conn_port 1234> <--conn_credential_type basic> <--conn_credential_location ~/.trifacta/config_conn.json> <--cli_output_path ./conn_edit.out>
Changes to List Connections example
NOTE: Applies to upgrades from Release 3.1.0 only.
There are no changes to the command to list all connections. Example:
./trifacta_cli.py list_connections --host dev.redshift.example.com --user_name <trifacta_user> --password <trifacta_password> --cli_output_path ./connection_list.out
If you have filtered any of your `list_connections` commands, please be sure to update your commands to the new parameter names. For example, if you are listing connections by name, you must change the parameter name from `--connection_name` to `--conn_name`.
Changes to Delete Connection example
NOTE: Applies to upgrades from Release 3.1.0 only.
Release 3.1.0 example command:
./trifacta_cli.py delete_connection --user_name <trifacta_user> --password <trifacta_password> --connection_name aSQLServerConnection --cli_output_path ./connection_delete.out
Release 3.1.1 example command:
./trifacta_cli.py delete_connection --user_name <trifacta_user> --password <trifacta_password> --conn_name aSQLServerConnection --cli_output_path ./conn_delete.out
Updates to connections documentation
In addition to the documentation changes for the above updates, the following items have been corrected in Release 3.1.1 documentation:
List connections by connection name
You can filter the list of connections by using the `conn_name` parameter. In Release 3.1.0 documentation, the `list_connections` command was not listed among the commands where `conn_name` applies.
Commands that can use connection identifiers
The `list_connections` and `delete_connection` commands can reference the connection by its internal connection identifier, which is defined when a connection is created. In Release 3.1.0, the `conn_id` parameter was not documented.