
Workspace Settings Page

The following settings can be customized for the user experience in your workspace. When you modify a setting, the change is immediately applied to the workspace. To access the page, select User menu > Admin console > Settings.

Note

Users may not see the changed setting until they refresh the application page or log out and log back in.

Enablement Options:

Note

Any values specified in the Workspace Settings page apply only to the current workspace and override any system-level defaults.

Option

Description

Default

The default value is applied. This value may be inherited from higher-level configuration.

Tip

You can review the default value as part of the help text.

Enabled

The setting is enabled.

Note

If the setting applies to a feature, the feature is enabled. Additional configuration may be required. See below.

Disabled

The setting is disabled.

Edit

Click Edit to enter a specific value for the setting.

Workspace Name

Note

This feature may not be available in all product editions. For more information on available features, see Compare Editions.

You can rename the workspace by clicking the Edit button. After you rename it, the Workspace Settings page refreshes to show the new name.

Note

Only Workspace administrators can edit the workspace name.

General

Filter Job History

Set the default number of days of jobs that are displayed in the Job History page. Default value is 180 days.

Tip

You can filter the dates of the jobs displayed in the Job History page.

For more information, see Job History Page.

Locale

Set the locale to use for inferring or validating data in the application, such as numeric values or dates. The default is United States.

Note

After saving changes to your locale, refresh your page. Subsequent executions of the data inference service use the new locale settings.

For more information, see Locale Settings.
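
As an illustration of why the locale matters, the same literal text parses to different numeric values under different conventions. The following sketch is illustrative only and is not the product's inference service:

```python
# Illustration only: how the same literal text is read under U.S. vs. German
# numeric conventions. The actual inference service is internal to the product.
def parse_number(text: str, locale: str) -> float:
    if locale == "en_US":       # "1,234.56": "," is thousands, "." is decimal
        return float(text.replace(",", ""))
    if locale == "de_DE":       # "1.234,56": "." is thousands, "," is decimal
        return float(text.replace(".", "").replace(",", "."))
    raise ValueError(f"unsupported locale: {locale}")

print(parse_number("1,234.56", "en_US"))   # 1234.56
print(parse_number("1.234,56", "de_DE"))   # 1234.56
```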

Session duration

Specify the length of time in minutes before a session expires. The default is 10080 minutes (60 × 24 × 7, or one week).

Storage directories

Allow members of the workspace to change paths to their upload and output results locations through their user profile.

For more information, see Storage Page.

Trifacta File Storage

Allow workspace members to access TFS, a storage service managed by Alteryx for uploading datasets and generating results. For more information, see Using TFS.

User messaging

When enabled, workspace users can explore content through the Trifacta Application.

API

API Access Token

When accessing the REST APIs, you can optionally use a token for simpler use and enhanced security.

Note

This feature may not be available in all environments.

For more information, see Access Tokens Page.
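
For example, once a token has been generated, it is typically passed as a bearer token on each REST call. The following is a minimal sketch; the base URL is a placeholder, and the endpoint path is an example that should be verified against the API documentation for your deployment:

```python
# Minimal sketch: calling the REST API with a personal access token.
# BASE_URL is a placeholder; the endpoint path is an example and should be
# verified against the API documentation.
import requests

TOKEN = "<your-access-token>"
BASE_URL = "https://example.cloud.trifacta.com"

resp = requests.get(
    f"{BASE_URL}/v4/flows",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())
```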

Allow users to generate access tokens

When enabled, individual workspace users can generate their own personal access tokens, which enable access to REST APIs. For more information, see Manage API Access Tokens.

Maximum lifetime for user generated access tokens (days)

Defines the maximum number of days that a user-generated access token remains valid for use in the product.

Tip

To permit generation of access tokens that never expire, set this value to -1.

For more information, see Manage API Access Tokens.

Connectivity

Custom SQL query

When enabled, users can create custom SQL queries to import datasets from relational tables.

Enable S3 connectivity

When enabled, base connectivity to S3 is enabled for workspace users.

Note

Additional configuration may be required. See Configure Storage Environment.

Enable conversion of standard JSON files via conversion service

When enabled, the Trifacta Application utilizes the conversion service to ingest JSON files and convert them to a tabular format that is easier to import into the application.

Note

This feature is enabled by default but can be disabled as needed. The conversion process performs cleanup and re-organization of the ingested data for display in tabular format.

When disabled, the Trifacta Application uses the old version of JSON import, which does not restructure the data and may require additional recipe steps to manually structure it into tabular format.

Note

The legacy version of JSON import is required if you are working with compressed JSON files or with Newline JSON files.

Note

Although imported datasets and recipes created under v1 of the JSON importer continue to work without interruption, the v1 version is likely to be deprecated in a future release. You should switch your old imported datasets and recipes to using the new version. Instructions to migrate are provided at the link below.
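
Conceptually, the conversion service flattens nested JSON records into rows and columns. The following pandas sketch illustrates the general idea only; it is not the conversion service itself:

```python
# Illustration only: flattening nested JSON records into a tabular layout,
# similar in spirit to what the conversion service produces.
import pandas as pd

records = [
    {"id": 1, "name": "ann", "address": {"city": "Austin", "zip": "78701"}},
    {"id": 2, "name": "bob", "address": {"city": "Boise", "zip": "83702"}},
]

df = pd.json_normalize(records)
print(df)
#    id name address.city address.zip
# 0   1  ann       Austin       78701
# 1   2  bob        Boise       83702
```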

Max endpoints per JDBC REST connection

For a REST API connection to a JDBC source, this parameter defines the maximum number of endpoints that can be defined to use the connection.

Avoid modifying this value unless you are experiencing timeouts or failures to connect.

For more information, see REST API Connections.

Flows, workflows, recipes and plans

Collaborative suggestions

If desired, you can enable the inclusion of suggestion cards that are generated from recent use of the Trifacta Application. As the application gathers more information about how you or members of your workspace apply transformations to your data, the suggestions become more meaningful for the data that you are processing.

Note

No data is shared with Alteryx or any system outside of the Alteryx Analytics Cloud.

These collaborative suggestion cards can be generated from individual usage or from workspace-level usage. These suggestions appear under the Recently Used heading in the side panel. When this feature is enabled, individual users can still choose to opt out of sharing their usage data with this feature. See User Profile Page.

Option

Description

disabled

Collaborative suggestions are not surfaced in the application.

personal

Collaborative suggestions are based on the individual user's previous transformations.

workspace

Collaborative suggestions are based on the transformations from all users in the workspace.

Default

The default setting for the workspace is applied.

Column from examples

When enabled, users can access a tool through the column menus that enables creation of new columns based on example mappings from the selected column.

Editor Scheduling

When enabled, flow editors are also permitted to create and edit schedules. For more information, see Flow View Page.

Note

The Scheduling feature may need to be enabled in your environment. When enabled, flow owners can always create and edit schedules.

When this feature is enabled, plan collaborators are also permitted to create and edit schedules. For more information, see Plan View Page.

Export

When enabled, workspace users are permitted to export their flows and plans. Exported flows can be imported into other workspaces or product editions.

Note

If plans have been enabled in your workspace, enabling this flag applies to flows and plans.

Import

When enabled, workspace users are permitted to import exported flows and plans.

Note

If plans have been enabled in your workspace, enabling this flag applies to flows and plans.

Maximum number of files to read in a directory for the initial sample

When the Trifacta Application is generating an initial sample of data for your dataset from a set of source files, you can define the maximum number of files in a directory from which the sample is generated. This limit is applied to reduce the overhead of reading in a new file, which improves performance in the Transformer page.

Tip

The initial sample type for files is generated by reading one file after another from the source. If the source is multiple files or a directory, this limit caps the maximum number of files that can be scanned for sampling purposes.

Note

If the files in the directory are small, the initial sample may contain the maximum number of files yet still be smaller than the maximum size permitted for a sample. You may see fewer rows than expected.

If the generated sample is unsatisfactory, you can generate a new sample using a different method. In that case, this limit no longer applies. For more information, see Overview of Sampling.
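
The interaction between the file-count limit and the sample-size limit behaves roughly as sketched below. The limit values used here are illustrative placeholders, not product defaults:

```python
# Rough sketch: files are read in order until either the file-count cap or the
# sample-size cap is reached, whichever comes first. MAX_FILES and
# MAX_SAMPLE_BYTES are illustrative placeholders, not product settings.
MAX_FILES = 5
MAX_SAMPLE_BYTES = 10 * 1024 * 1024  # 10 MB

def collect_initial_sample(file_paths):
    sample, total_bytes = [], 0
    for count, path in enumerate(file_paths):
        if count >= MAX_FILES or total_bytes >= MAX_SAMPLE_BYTES:
            break
        with open(path, "rb") as f:
            data = f.read()
        sample.append(data)
        total_bytes += len(data)
    return sample
```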

Plan feature

Note

This feature may not be available in all product editions. For more information on available features, see Compare Editions.

When enabled, users can create plans to execute sequences of recipes across one or more flows. For more information, see Plans Page.

For more information on plans and orchestration, see Overview of Operationalization.

Sample downloads

When enabled, members can download the contents of the Transformer page at any time. For an individual step, a member can download the current sample, as modified by the current recipe up to the point of the current step. For more information, see Recipe Panel.

Schematized output

When enabled, all output columns for all types of outputs are typecast to their annotated types. This feature is enabled by default.

For non-schematized outputs, the Alteryx Analytics Cloud enforces casting of all values to the annotated data type of the column by default. For example, if the output value is -3.4 and the data type for the output column is Integer, the platform enforces Integer type casting and writes a null value instead.

  • true: All output values must match the data type of the output columns, or a null value is written.

  • false: All output values are written in their output form, regardless of the column's data type.
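
The example above corresponds to logic like the following sketch: each value is cast to the column's annotated type, and a value that cannot be cast exactly is written as a null:

```python
# Illustration of schematized output: values that cannot be cast exactly to the
# column's annotated data type are written as nulls (None).
def cast_or_null(value, annotated_type):
    try:
        if annotated_type == "Integer":
            as_float = float(value)
            if as_float != int(as_float):   # e.g. -3.4 is not a valid Integer
                return None
            return int(as_float)
        if annotated_type == "Decimal":
            return float(value)
        return str(value)
    except (TypeError, ValueError):
        return None

print(cast_or_null("-3.4", "Integer"))   # None (matches the example above)
print(cast_or_null("42", "Integer"))     # 42
```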

UI for range join

When enabled, workspace users can specify join key matching across a range of values. For more information, see Configure Range Join.
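
A range join matches rows where a key falls between a lower and an upper bound rather than equaling a single value. The following pandas sketch illustrates the concept only; in the product, range joins are configured through the UI as described in Configure Range Join:

```python
# Illustration only: match each event to the band whose [low, high) interval
# contains the event's amount.
import pandas as pd

events = pd.DataFrame({"event_id": [1, 2, 3], "amount": [5, 42, 250]})
bands = pd.DataFrame({
    "band": ["small", "medium", "large"],
    "low":  [0, 10, 100],
    "high": [10, 100, 1000],
})

joined = events.merge(bands, how="cross")
joined = joined[(joined["amount"] >= joined["low"]) & (joined["amount"] < joined["high"])]
print(joined[["event_id", "amount", "band"]])
# event_id 1 -> small, 2 -> medium, 3 -> large
```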

Webhooks

Note

This feature may not be available in all product editions. For more information on available features, see Compare Editions.

When enabled, webhook notification tasks can be configured on a per-flow basis in Flow View page. Webhook notifications allow you to deliver messages to third-party applications based on the success or failure of your job executions. For more information, see Create Flow Webhook Task.
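
A webhook task delivers an HTTP request, typically a POST with a JSON body, to a URL that you control. The following minimal receiver sketch uses only the Python standard library; the payload fields are whatever you configure in the webhook task, so the ones printed here are assumptions:

```python
# Minimal webhook receiver sketch. The structure of the incoming payload is an
# assumption; it depends on how the webhook task is configured.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("job notification received:", payload)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```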

Job execution

Combine Spark Transform and Profile jobs into one

When enabled, the transform and profiling tasks of a job executed on the Spark running environment are combined. The profiling task is executed as a part of the transform task, which eliminates any time spent orchestrating the profiling task and accessing the profiler input file on storage.

Note

When these two tasks are combined, publishing actions are not undertaken if the profiling task fails.

In the Job Details page, combined jobs appear in a Transform with profile card. See Job Details Page.

Custom Spark Options Feature

When enabled, users can override Spark configuration options for output objects before running Spark jobs.

Tip

When enabled, a default set of Spark configuration options is available for users. Additional properties can be specified through the Spark Whitelist Properties setting.
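
For example, a user might override a few standard Spark properties on an output object. The property names below are common Spark settings shown for illustration; the properties actually available to users depend on the Spark Whitelist Properties setting:

```python
# Illustrative Spark property overrides for an output object. Only properties
# permitted by the workspace's whitelist can actually be applied.
spark_overrides = {
    "spark.executor.memory": "6g",
    "spark.executor.cores": "2",
    "spark.sql.shuffle.partitions": "200",
}
```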

Ignore publishing warnings for running jobs

When enabled, a user may execute a job if the previously saved location is not available for the current IAM permissions used to run the job. Default is Enabled.

Tip

Setting this value to Enabled is helpful for resolving changes in IAM permissions.

When disabled, the Run Job button is disabled if the previously saved location is not available through IAM permissions.

Tip

Setting this value to Disabled prevents execution of jobs that are going to fail at publication time, which can be expensive in terms of time and compute costs.

Logical and physical optimization of jobs

When enabled, the Trifacta Application attempts to optimize job execution through logical optimizations of your recipe and physical optimizations of your recipe's interactions with data.

This workspace setting can be overridden for individual flows. For more information, see Flow Optimization Settings Dialog.

SQL Scripts

When enabled, users may define SQL scripts to execute as part of a job's run. Scripts can be executed before data ingestion, after output publication, or both through any write-supported relational connection to which the user has access.

For more information, see Create Output SQL Scripts.

Schema validation feature

When enabled, the structure and ordering of columns in your imported datasets are checked by default for changes before data is ingested for job execution.

Tip

Schema validation can be overridden for individual jobs when the schema validation option is enabled in the job settings. See below.

Errors are immediately reported in the Job Details page. See Job Details Page.

For more information on schema validation, see Overview of Schema Management.
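
Conceptually, schema validation compares the columns recorded for the imported dataset against the columns found at execution time, including their order. A rough sketch of that comparison:

```python
# Rough sketch of a schema check: compare stored column names and order against
# the columns found at job execution time.
def schema_changed(stored_columns, current_columns):
    """Return a description of the change, or None if the schema is unchanged."""
    if stored_columns == current_columns:
        return None
    if sorted(stored_columns) == sorted(current_columns):
        return "column order changed"
    missing = [c for c in stored_columns if c not in current_columns]
    added = [c for c in current_columns if c not in stored_columns]
    return f"columns missing: {missing}, columns added: {added}"

print(schema_changed(["id", "name", "date"], ["id", "date", "name"]))
# column order changed
print(schema_changed(["id", "name"], ["id", "name", "email"]))
# columns missing: [], columns added: ['email']
```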

Schema validation option in job settings

When the schema validation feature and this setting are enabled, users can make choices on how individual jobs are managed when schema changes are detected. This setting is enabled by default.

For more information on schema validation, see Overview of Schema Management.

Schema validation option to fail job

When schema validation is enabled, this setting specifies the default behavior when schema changes are found.

  • When enabled, jobs are failed when schema changes are found, and error messages are surfaced in the Trifacta Application.

  • When disabled, jobs are permitted to continue.

    • Jobs may ultimately fail due to schema changes.

    • Jobs may result in bad data being written in outputs.

    • Job failures may be more challenging to debug.

      Tip

      Setting this value to Disabled matches the behavior of the Trifacta Application from before schema validation was possible.

Tip

This setting can be overridden for individual jobs, even if it is disabled.

Errors are immediately reported in the Job Details page. See Job Details Page.

For more information on schema validation, see Overview of Schema Management.

Skip write settings validation

When enabled, write settings objects are not validated as part of job execution. Write settings are used to define the outputs for file-based results. Default is Enabled.

Note

When this feature is enabled, no validation of write settings objects is performed for scheduled and API-based jobs. Issues with these objects may cause failures during the transformation and publishing stages of job execution.

Tip

Before running a job via schedule or API that produces file-based outputs, you should do a test manual execution of the job to verify the outputs.

Trifacta Photon execution

When enabled, users can choose to execute their jobs on Trifacta Photon, a proprietary running environment built for execution of small- to medium-sized jobs in memory on the Trifacta node.

Tip

When enabled, you can choose to run jobs on Trifacta Photon through the Run Job page. The default running environment is the one that is best for the size of your job.

When Trifacta Photon is disabled:

  • You cannot run jobs on the local running environment. All jobs must be executed on a clustered running environment.

  • By default, Trifacta Photon is used for Quick Scan sampling jobs. When it is disabled, the Trifacta Application attempts to run the Quick Scan job on another available running environment. If that job fails or no suitable running environment is available, the Quick Scan sampling job fails.

Publishing

Avro output format

When enabled, members can generate outputs in Avro format.

CSV output format

When enabled, members can generate outputs in CSV format.

Default storage environment

Choose the default storage environment for your workspace:

  • S3: Your enterprise S3 environment. Additional configuration is required.

  • TFS: A storage service managed by Alteryx. For more information, see Using TFS.

Hyper output format

Note

This feature may not be available in all product editions. For more information on available features, see Compare Editions.

When enabled, members can generate outputs in Hyper format for publication and use on Tableau Server.

JSON output format

When enabled, members can generate outputs in JSON format.

Parquet output format

When enabled, members can generate outputs in Parquet format.

Notifications

Email notification feature

When enabled, the Alteryx Analytics Cloud can send email notifications to users based on the success or failure of jobs. By default, this feature is Enabled.

Email notification trigger when flow jobs fail

When email notifications are enabled, you can configure the default setting for the types of failed jobs that generate an email to interested stakeholders. The value set here is the default value for each flow in the workspace.

Settings:

Setting

Description

Default (any jobs)

By default, email notifications are sent on failure of any job.

Never send

Email notifications are never sent for job failures.

Scheduled jobs

Notifications are sent only when scheduled jobs fail.

Manual jobs

Notifications are sent only when ad-hoc (manually executed) jobs fail.

Any

Notifications are sent for all job failures.

Individual users can opt out of receiving notifications or configure a different email address. See Email Notifications Page.

Emailed stakeholders are configured by individual flow. For more information, see Manage Flow Notifications Dialog.

Email notification trigger when flow jobs succeed

When email notifications are enabled, you can configure the default setting for the types of successful jobs that generate an email to interested stakeholders. The value set here is the default value for each flow in the workspace.

For more information on the settings, see the previous section. Default setting is Default (any jobs).

Individual users can opt out of receiving notifications or configure a different email address. See Email Notifications Page.

Emailed stakeholders are configured by individual flow. For more information, see Manage Flow Notifications Dialog.

Email notification trigger when plans run

You can configure the default trigger for email notifications when a plan runs. Default setting is Default (all runs).

Setting

Description

Default (all runs)

By default, email notifications are sent to users for all plan runs.

All runs

Emails are sent for all runs.

Failed runs

Emails are sent for failed runs only.

Success runs

Emails are sent for successful runs only.

Sharing email notifications

When email notifications are enabled, users automatically receive notifications whenever an owner shares a plan or flow with them.

Individual users can opt out of receiving notifications. For more information, see Preferences Page.

Experimental features

These experimental features are not supported.

Warning

Experimental features are in active development. Their functionality may change from release to release, and they may be removed from the product at any time. Do not use experimental features in a production environment.

These settings may or may not change application behavior.

Cache data in the Transformer intelligently

Note

This feature is in Beta release.

When enabled, this feature allows the Trifacta Application to cache data from the Transformer page periodically based on Trifacta Photon execution time. This feature enables users to move faster between recipe steps.

Default language

Select the default language to use in the Trifacta Application.

Edit recipes without loading sample

When enabled, you can perform edits in the Transformer page without loading a sample in the data grid.

Tip

This feature can be helpful when you know the edits that need to be performed and do not need sample data to perform the corrections. You can also use it to switch the active sample without loading it.

In Flow View, select Edit recipe without datagrid from the context menu on the right side when the recipe is selected.

Enable/Disable data grid from view options

When enabled, you can enable or disable live previewing in the data grid of the Transformer page. Disabling can improve performance. These options are available in the Show/hide data grid options drop-down in the status bar at the bottom of the Transformer page:

  • Edit with data grid

    • When the data grid is disabled, you may not be able to edit some recipe steps. For steps that you can edit, select Preview to see the effects of the step on the data. When you select Preview, the data grid is re-enabled.

  • Show column histogram

    • When the data grid is enabled, you can choose to disable the column histograms in the data grid, which can improve performance.

For more information, see Data Grid Panel.

Execution time threshold (in milliseconds) to control caching in the Transformer

Note

This feature is in Beta release.

When intelligent caching in the Transformer is enabled, you can set the threshold time in milliseconds for when Trifacta Photon updates the cache. At each threshold of execution time in Trifacta Photon, the outputs of the intermediate recipe (CDF) steps are cached in memory, which speeds up movement between recipe steps in the Trifacta Application.

Language localization

When enabled, the Trifacta Application is permitted to display text in the selected language.

Show user language preference

When enabled, individual users can select a preferred language in which to display text in the Trifacta Application.

Note

This experimental feature requires installation of a language resource file on the Trifacta node. For this release, only U.S. English (default) and Korean are supported. For more information, please contact Alteryx Support.

Users can make personal language selections through their preferences. See Account Settings Page.