Release 7.1.2



In the Run Job page, you can specify transformation and profiling jobs for the currently loaded dataset. Available options include output formats and output destinations. You can also configure the environment where the job is to be executed.

Tip: Jobs can be scheduled for periodic execution through the Flow View page. For more information, see Add Schedule Dialog.


Tip: Columns that have been hidden in the Transformer page still appear in the generated output. Before you run a job, verify that all currently hidden columns are acceptable to include in the output.





Figure: Run Job Page


Running Environment

Select the environment where you wish to execute the job. Some of the following environments may not be available to you. These options appear only if there are multiple accessible running environments.

NOTE: Running a job executes the transformations on the entire dataset and saves the transformed data to the specified location. Depending on the size of the dataset and available processing resources, this process can take a while.

Tip: The application attempts to identify the best running environment for you. You should choose the default option, which factors in the available environments and the size of your dataset to identify the most efficient processing environment.

Photon: Executes the job on the running environment hosted on the same server as the Designer Cloud Enterprise Edition.

Spark: Executes the job using the Spark running environment.

Databricks: Executes the job on the Azure Databricks cluster with which the platform is integrated.

NOTE: Use of Azure Databricks is not supported on Marketplace installs.

For more information, see Configure for Azure Databricks.

Options

Profile Results: Optionally, you can disable profiling of your output, which can improve the speed of overall job execution. When the profiling job finishes, details are available through the Job Details page, including links to download results.

NOTE: Percentages for valid, missing, or mismatched column values may not add up to 100% due to rounding. This issue applies to the Photon running environment.

See Job Details Page.

Publishing Actions


You can add, remove, or edit the outputs generated from this job. By default, a CSV output to your home directory on the selected datastore is included in the list of destinations; you can remove it if needed. You must include at least one output destination.

Columns:

  • Actions: Lists the action and the format for the output.
  • Location: The directory and filename or table information where the output is to be written.
  • Settings: Identifies the output format and any compression, if applicable, for the publication.

Actions:

  • To change format, location, and settings of an output, click the Edit icon.
  • To delete an output, click the X icon.

Add Publishing Action

From the available datastores in the left column, select the target for your publication. 

Figure: Add Publishing Action

NOTE: Do not create separate publishing actions that apply to the same file or database table.

New/Edit: You can create new connections or modify existing ones. By default, the displayed connections support publishing. See Create Connection Window.

 

Steps:

  1. Select the publishing target. Click an icon in the left column.
    1. If Hive publishing is enabled, you must select or specify a database table to which to publish.

      Depending on the running environment, results are generated in Avro or Parquet format. See below for details on specifying the action and the target table.

      If you are publishing a wide dataset to Hive, you should generate results using Parquet.

      For more information on how data is written to Hive, see Hive Data Type Conversions.

  2. Locate a publishing destination: Do one of the following.

    1. Explore: 

      NOTE: The publishing location must already exist before you can publish to it. The publishing user must have write permissions to the location.

      NOTE: If your HDFS environment is encrypted, the default output home directory for your user and the output directory where you choose to generate results must be in the same encryption zone. Otherwise, writing the job results fails with a Publish Job Failed error. For more information on your default output home directory, see Storage Config Page.

      1. To sort the listings in the current directory, click the carets next to any column name.
      2. For larger directories, browse using the paging controls.
      3. Use the breadcrumb trail to explore the target datastore. Navigate folders as needed.
    2. Search: Use the search bar to search for specific locations in the current folder only.
    3. Manual entry: Click the Edit icon to manually edit or paste in a destination.
  3. Choose an existing file or folder: When the location is found, select the file to overwrite or the folder into which to write the results.

    NOTE: You must have write permissions to the folder or file that you select.

    1. To write to a new file, click Create a new file. For more information, see File settings below.

  4. Create Folder: Depending on the storage destination, you can click Create Folder to create a new folder for the job inside the currently selected one. Do not include spaces in your folder name.
  5. As needed, you can parameterize the outputs that you are creating. Click Parameterize destination in the right panel. See Parameterize destination settings below.

  6. To save the publishing destination, click Add.

To update a publishing action, hover over its entry. Then, click Edit.

To delete a publishing action, select Delete from its context menu.

Variables

If any variable parameters have been specified for the datasets or outputs of the flow, you can apply overrides to their default values. Click the listed default value and insert a new value. A variable can have an empty value.

NOTE: Override values applied to a job are not validated. Invalid overrides may cause your job to fail.

NOTE: Unless this output is a scheduled destination, variable overrides apply only to this job. Subsequent jobs use the default variable values, unless specified again. No data validation is performed on entries for override values.

Tip: You can also specify overrides at the flow level. Override values are applied to parameters of all types whose names are a case-sensitive match. However, values that are specified at runtime override flow-level overrides. For more information, see Manage Parameters Dialog.

For more information on variables, see Overview of Parameterization.

File settings

When you generate file-based results, you can configure the filename, storage format, compression, number of files, and the updating actions in the right-hand panel.

Figure: Output File Settings

Configure the following settings.

  1. Create a new file: Enter the filename to create. A filename extension is automatically added for you, so you should omit the extension from the filename.
  2. Output directory: Read-only value for the current directory. To change it, navigate to the proper directory.

    NOTE: During job execution, a canary file is written for each set of results to validate the path. For datasets with parameters, if the path includes folder-level parameterization, a separate folder is created for each parameterized path. During cleanup, only the canary files and the original folder path are removed. The parameterized folders are not removed. This is a known issue.

  3. Data Storage Format: Select the output format you want to generate for the job.
    1. Avro: This format supports data serialization within a Hadoop environment.
    2. CSV and JSON: These formats are supported for all types of imported datasets and all running environments. 

    3. Parquet: This is a columnar storage format.
    4. HYPER: Choose HYPER to generate results that can be imported into Tableau.

      If you have created a Tableau Server connection, you can write results to Tableau Server or publish them after they have been generated in Hyper format.

      NOTE: If you encounter errors generating results in Hyper format, additional configuration may be required. See Supported File Formats.

    5. TDE: Choose TDE (Tableau Data Extract) to generate results that can be imported into Tableau.

      NOTE: Tableau TDE format has been superseded by Hyper format. TDE will be deprecated in a future release. Please switch to using Hyper format.

      If you have created a Tableau Server connection, you can write results to Tableau Server or publish them after they have been generated in TDE format.

      NOTE: If you encounter errors generating results in TDE format, additional configuration may be required. See Supported File Formats.

    6. For more information, see Supported File Formats.
  4. Publishing action: Select one of the following:

    NOTE: If multiple jobs are attempting to publish to the same filename, a numeric suffix (_N) is added to the end of subsequent filenames (e.g. filename_1.csv).

    1. Create new file every run: For each job run with the selected publishing destination, a new file is created with the same base name with the job number appended to it (e.g. myOutput_2.csv, myOutput_3.csv, and so on).
    2. Append to this file every run: For each job run with the selected publishing destination, the same file is appended, which means that the file grows until it is purged or trimmed.

      NOTE: When publishing single files to S3 or WASB, the append action is not supported.

      NOTE: When appending data into a Hive table, the columns displayed in the Transformer page must match the order and data type of the columns in the Hive table.

      NOTE: This option is not available for outputs in TDE format.

      NOTE: Compression of published files is not supported for an append action.

    3. Replace this file every run: For each job run with the selected publishing destination, the existing file is overwritten by the contents of the new results.
  5. More Options:

    1. Include headers as first row on creation: For CSV outputs, you can choose to include the column headers as the first row in the output. For other formats, these headers are included automatically.

      NOTE: Headers cannot be applied to compressed outputs.

    2. Include quotes: For CSV outputs, you can choose to include double quote marks around all values, including headers.

    3. Delimiter: For CSV outputs, you can enter the delimiter that is used to separate fields in the output. The default value is the global delimiter, which you can override on a per-job basis in this field. (These CSV settings are illustrated in the sketch after these steps.)

      Tip: If needed for your job, you can enter Unicode characters in the following format: \uXXXX.

      NOTE: The Spark running environment does not support use of multi-character delimiters for CSV outputs. You can switch your job to a different running environment or use single-character delimiters. For more information on this issue, see https://issues.apache.org/jira/browse/SPARK-24540.

    4. Single File: Output is written to a single file. This is the default setting for smaller, file-based jobs or for jobs executed on the Alteryx Server.

    5. Multiple Files: Output is written to multiple files. This is the default setting for larger, file-based jobs or for jobs executed in a remote, cluster-based running environment.

  6. Compression: For text-based outputs, compression can be applied to significantly reduce the size of the output. Select a preferred compression format for each format you want to compress.

    NOTE: If you encounter errors generating results using Snappy, additional configuration may be required. See Supported File Formats.

  7. To save the publishing action, click Add.
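
To illustrate the effect of the header, quote, and delimiter settings on a CSV output, the following sketch reproduces the behavior outside the product using Python's standard csv module. The filename, column names, and pipe delimiter are hypothetical examples; the platform writes the results itself, so this is only a conceptual illustration of the options above.

    # Illustrative only: mimics "Include headers as first row", "Include quotes",
    # and a custom single-character delimiter for a CSV output.
    import csv

    rows = [
        {"id": 1, "region": "west", "sales": 1200.5},
        {"id": 2, "region": "east", "sales": 980.0},
    ]

    with open("myOutput.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["id", "region", "sales"],  # headers written as the first row
            delimiter="|",                         # per-job override of the global delimiter
            quoting=csv.QUOTE_ALL,                 # double quotes around all values, including headers
        )
        writer.writeheader()
        writer.writerows(rows)

With these settings, each record is written as quoted values separated by the custom delimiter, for example "1"|"west"|"1200.5".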

Relational table settings

Some relational connections can be configured to support writing directly to the database. Please configure the following settings to specify the output table.

NOTE: You cannot write to multiple relational outputs from the same job.

Steps:

  1. Select location: Navigate the database browser to select the database and table to which to publish.
    1. To create a new table, click Create a new table.
  2. Select table options:
    1. Table name:
      1. New table: enter a name for it. You may use a pre-existing table name, and schema checks are performed against it.
      2. Existing table: you cannot modify the name.
    2. Output database: To change the database to which you are publishing, click the database icon in the sidebar. Select a different database.

    3. Publish actions: Select one of the following.
      1. Create new table every run: Each run generates a new table with a timestamp appended to the name.
      2. Append to this table every run: Each run adds any new results to the end of the table.

      3. Truncate the table every run: With each run, all data in the table is truncated and replaced with any new results.
      4. Drop the table every run: With each run, the table is dropped (deleted), and all data is deleted. A new table with the same name is created, and any new results are added to it.
  3. To save the publishing action, click Add.

Redshift table settings

If you are creating a publishing action for a Redshift database table, you must provide the following information.

NOTE: Some Alteryx data types may be exported to Redshift using different data types. For more information, see Redshift Data Type Conversions.

Steps:

  1. Select location: Navigate the Redshift browser to select the schema and table to which to publish.
    1. To create a new table, click Create a new table.
  2. Select table options:
    1. Table name:
      1. New table: enter a name for it. You may use a pre-existing table name, and schema checks are performed against it.
      2. Existing table: you cannot modify the name.
    2. Output database: To change the database to which you are publishing, click the Redshift icon in the sidebar. Select a different database.

    3. Publish actions: Select one of the following.
      1. Create new table every run: Each run generates a new table with a timestamp appended to the name.
      2. Append to this table every run: Each run adds any new results to the end of the table.
      3. Truncate the table every run: With each run, all data in the table is truncated and replaced with any new results.
      4. Drop the table every run: With each run, the table is dropped (deleted), and all data is deleted. A new table with the same name is created, and any new results are added to it.
  3. To save the publishing action, click Add.

Hive table settings

When publishing to Hive, please complete the following steps to configure the table and settings to apply to the publish action.

NOTE: Some Alteryx data types may be exported to Hive using different data types. For more information on how types are exported to Hive, see Hive Data Type Conversions.

Steps:

  1. Select location: Navigate the Hive browser to select the database and table to which to publish.
    1. To create a new table, click Create a new table.
  2. Select table options:
    1. Table name:
      1. New table: enter a name for it. You may use a pre-existing table name, and schema checks are performed against it.
      2. Existing table: you cannot modify the name.
    2. Output database: To change the database to which you are publishing, click the Hive icon in the sidebar. Select a different database.

      NOTE: You cannot publish to a Hive database that is empty. The database must contain at least one table.

    3. Publish actions: Select one of the following.

      NOTE: If you are writing to unmanaged tables in Hive, create and drop & load actions are not supported.



      1. Create new table every run: Each run generates a new table with a timestamp appended to the name.
      2. Append to this table every run: Each run adds any new results to the end of the table.

        Tip: Optionally, users can be permitted to publish to Hive staging schemas to which they do not have full create and drop permissions. This feature must be enabled. For more information, see Configure for Hive.

        When enabled, the name of the staging DB must be inserted into your user profile. See User Profile Page.

      3. Truncate the table every run: With each run, all data in the table is truncated and replaced with any new results.
      4. Drop the table every run: With each run, the table is dropped (deleted), and all data is deleted. A new table with the same name is created, and any new results are added to it.
  3. To save the publishing action, click Add.

Databricks Tables table settings

When you select a Databricks Tables database to store your job results, you can configure the following options for the generated table.

NOTE: Access to Databricks Tables requires integration with Azure Databricks, a Databricks Tables connection, and a Databricks personal access token. For more information, see Configure for Azure Databricks.


Figure: Databricks Tables table settings

Steps:

  1. Select location: Navigate the Databricks Tables browser to select the database and table to which to publish.
    1. To create a new table, click Create a new table.
  2. Select table options:
    1. Table name:
      1. New table: enter a name for it. You may use a pre-existing table name, and schema checks are performed against it.
      2. Existing table: you cannot modify the name.

        NOTE: Writing to partitioned tables is not supported.

    2. Output database: To change the database to which you are publishing, click the Databricks icon in the sidebar. Select a different database.

    3. Optional table types: Select one or more table types to publish as well:

      1. Use Delta table: Output is stored as a Parquet-based Delta table.

        NOTE: Versioning and rollback of Delta tables is not supported within the Designer Cloud Powered by Trifacta platform. The latest version is always used. You must use external tools to manage versioning and rollback.

      2. Publish as external table: Output is published as an external table to the specified location in your ADLS or WASB bucket.
    4. Publish actions: Depending on your selection or selections above, the following publishing actions on the table are supported:

      1. Create new table every run: Each run generates a new table with a timestamp appended to the name.
      2. Append to this table every run: Each run adds any new results to the end of the table.

      3. Truncate the table every run: With each run, all data in the table is truncated and replaced with any new results.

        NOTE: Truncating the table is not supported for external tables.

      4. Drop the table every run: With each run, the table is dropped (deleted), and all data is deleted. A new table with the same name is created, and any new results are added to it.

        NOTE: Dropping the table is not supported for external tables.

  3. To save the publishing action, click Add.

Tableau Server Datasource Settings

When publishing to Tableau Server, please complete the following steps to configure the datasource and settings to apply to the publish action.

Steps:

  1. Select location: Navigate the Tableau Server browser to select the project and datasource to use for your publication.
    1. For more information on projects, see https://onlinehelp.tableau.com/current/server/en-us/projects.htm.
    2. To create a new datasource, click Create a new datasource.
      1. For more information, see https://onlinehelp.tableau.com/current/server/en-us/datasource.htm.
  2. Datasource options:
    1. Datasource name:
      1. New datasource: enter a name for it. You may use a pre-existing datasource name.
      2. Existing datasource: you cannot modify the name.
    2. Output project: To change the project to which you are publishing, click the Tableau icon in the sidebar. Select a different project.

    3. Publish actions: Select one of the following.
      1. Create new datasource every run: Each run generates a new datasource with a timestamp appended to the name.
      2. Append to this datasource every run: Each run adds any new results to the end of the datasource.

      3. Drop the datasource every run: With each run, the datasource is dropped (deleted), and all data is deleted. A new datasource with the same name is created, and any new results are added to it.
  3. To save the publishing action, click Add.

Tip: If you generate a Tableau format file as part of your output, you can choose to download and later publish it to Tableau Server. For more information, see Publishing Dialog.

Parameterize destination settings

For file- or table-based publishing actions, you can parameterize elements of the output path. Whenever you execute a job, you can pass in parameter values through the Run Job page.

NOTE: Output parameters are independent of dataset parameters. However, two variables of different types with the same name should resolve to the same value.

NOTE: During job execution, a canary file is written for each set of results to validate the path. For datasets with parameters, if the path includes folder-level parameterization, a separate folder is created for each parameterized path. During cleanup, only the canary files and the original folder path are removed. The parameterized folders are not removed. This is a known issue.

Supported parameter types:

  • Timestamp
  • Variable

For more information, see Overview of Parameterization.


Figure: Define destination parameter

Steps:

  1. When you add or edit a publishing action, click Parameterize destination in the right panel.
  2. On the listed output path, highlight the part that you wish to parameterize. Then, choose the type of parameter.
  3. For Timestamp parameters:
    1. Timestamp format: Specify the format for the timestamp value.
    2. Timestamp value: You can choose to record the exact job start time or the time when the results are written relative to the job start time.
    3. Timezone: To change the timezone recorded in the timestamp, click Change.
  4. For Variable parameters:
    1. Name: Enter a display name for the variable.

      NOTE: Variable names do not have to be unique. Two variables with the same name should resolve to the same value.

    2. Default value: Enter a default value for the parameter.
  5. To save your output parameter, click Save.
  6. You can create multiple output parameters for the same output.
  7. To save all of your parameters for the output path, click Submit.
  8. The parameter or parameters that you have created are displayed at the bottom of the screen. You can change the value for each parameter whenever you run the job.
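
The following sketch illustrates, outside the product, how a parameterized output path resolves at run time. The path template, the variable name, and the strftime-style format string are hypothetical examples; the actual timestamp format tokens and timezone handling are configured in the dialog described above.

    # Illustrative only: resolving a hypothetical parameterized output path.
    from datetime import datetime, timezone

    path_template = "/outputs/sales_{region}/report_{timestamp}.csv"

    resolved = path_template.format(
        region="west",  # variable parameter, overridden at run time
        timestamp=datetime.now(timezone.utc).strftime("%Y%m%d-%H%M"),  # timestamp parameter (job start time)
    )
    print(resolved)  # e.g. /outputs/sales_west/report_20240115-0930.csv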

Tip: You can also specify overrides at the flow level. Override values are applied to parameters of all types whose names are a case-sensitive match. However, values that are specified at runtime override flow-level overrides. For more information, see Manage Parameters Dialog.

Run job

To execute the job as configured, click Run Job. The job is queued for execution. After a job has been queued, you can track its progress toward completion. See Jobs Page.

Automation

Run jobs via API

You can use the available REST APIs to execute jobs for known datasets. For more information, see API Reference.
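
As a rough illustration, a job for a known dataset can be triggered with a single authenticated POST request. The host, port, endpoint path, and payload fields below are placeholders rather than the documented API; consult the API Reference for the exact endpoint, request body, and authentication options.

    # Illustrative only: triggering a job run over REST with placeholder values.
    import requests

    BASE_URL = "https://example.platform.local:3005"  # placeholder host and port
    TOKEN = "<your-access-token>"                     # placeholder credential

    response = requests.post(
        f"{BASE_URL}/v4/jobGroups",                   # placeholder endpoint; see API Reference
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"wrangledDataset": {"id": 123}},        # placeholder identifier for the dataset to run
    )
    response.raise_for_status()
    print(response.json())  # returns identifiers you can use to poll job status

After submission, you can poll the corresponding job status endpoint (see API Reference) or track progress in the Jobs page.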
