In Designer Cloud Enterprise Edition, you use the Job Details page to explore details about successful or failed jobs, including outputs, dependencies, and other metadata. Download results to your local desktop or, if enabled, explore a visual profile of the data in the results for further iteration on your recipe.
- Publish results: Publish your results to an external system. For more information, see Publishing Dialog.
- Delete job: Delete the job and its results.
Deleting a job cannot be undone.
NOTE: This feature may not be enabled in your environment. For more information, see Miscellaneous Configuration.
- Download logs: Download the log files associated with this job.
Tip: When jobs fail, the downloaded package includes additional configuration files and service logs to assist in debugging job execution issues. For more information, see Support Bundle Contents.
- Download profile as JSON: If visual profiling was enabled for the job, you can download a JSON representation of the profile to your desktop.
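A downloaded profile JSON can be post-processed in scripts, for example to flag columns with a high share of problematic values. The following is a minimal sketch in Python; the field names (`columns`, `validCount`, `mismatchedCount`, `missingCount`) are hypothetical, not the actual schema, so inspect your downloaded file and adapt accordingly.

```python
import json

# Illustrative only: this sample mimics a downloaded profile. The real
# schema depends on your product version; the field names are assumptions.
sample = json.loads("""
{
  "columns": [
    {"name": "order_id", "validCount": 980, "mismatchedCount": 5, "missingCount": 15},
    {"name": "ship_date", "validCount": 900, "mismatchedCount": 60, "missingCount": 40}
  ]
}
""")

def problem_rate(col):
    """Fraction of values in a column that are mismatched or missing."""
    total = col["validCount"] + col["mismatchedCount"] + col["missingCount"]
    return (col["mismatchedCount"] + col["missingCount"]) / total

for col in sample["columns"]:
    print(f'{col["name"]}: {problem_rate(col):.1%} problematic')
```

A script like this can be used to gate downstream publishing on data quality, for example by failing a pipeline when any column exceeds a chosen threshold.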
In the Overview tab, you can review the job status, its sources, and the details of the job run.
NOTE: If your job failed, the error message may reference a job ID that differs from the one listed. That job ID refers to a sub-job of the job listed in the Job summary.
You can review a snapshot of the results of your job.
- To review the recipe and dependencies in your job, click View steps and dependencies. See the Dependencies tab below.
- If you chose to profile the results of your job, click View profile to review them. See Profile tab below.
- A visual profile provides a graphical snapshot of the results of a successful transformation job for the entire dataset and individual columns in the dataset.
- For more information on enabling a visual profile job, see Run Job Page.
- For more information, see Overview of Visual Profiling.
You can hover over the status of each stage of a job to review breakdowns for individual phases of each stage:
Connect: The Designer Cloud Powered by Trifacta platform attempts to connect to the datastore hosting the asset sources for the datasets.
Request: The Designer Cloud Powered by Trifacta platform requests the set of assets to deliver.
Prepare: (Publishing only) Depending on the destination, the Prepare phase includes the creation of temporary tables, generation of manifest files, and fetching of extra connections for parallel data transfer.
Transfer: Assets are transferred to the target, which can be the Designer Cloud Powered by Trifacta platform or the output datastore.
Process: Cleanup after data transfer, including the dropping of temporary tables or copying of data within the instance.
For more information, see Overview of Job Monitoring.
You can also review the outputs generated as a result of your job. To review and export any of the generated results, click View all. See Output Destinations tab below.
- Job ID: Unique identifier for the job
Tip: If you are using the REST APIs, this value can be used to retrieve and modify specifics related to this job. For more information, see API Reference.
- Job status: Current status of the job:
Queued: Job has been queued for execution.
Running: Job is in progress.
Completed: Job has successfully executed.
NOTE: Invalid steps in a recipe are skipped, and it's still possible for the job to be executed successfully.
Failed: Job failed to complete.
NOTE: You can re-run a failed job from the Transformer page. If you have since modified the recipe, those changes are applied during the second run. See Transformer Page.
Canceled: Job was canceled by the user.
- Flow: Name of the flow from which the job was executed. Click the link to open the flow. See Flow View Page.
- Output: Name of the output object that was used to define the generated results. Click the link to open the output. See Flow View Page.
- Job type: The method by which the job was executed:
Manual: Job was executed through the application interface.
Scheduled: Job was executed according to a predefined schedule. See Add Schedule Dialog.
- User: The user who launched the job
- Environment: Where applicable, the running environment where the job was executed is displayed.
- Start time: Timestamp for when processing began on the job. This value may not correspond to when the job was queued for execution.
- Finish time: Timestamp for when processing ended on the job, successful or not
- Last update: Timestamp for when the job was last updated
- Duration: Elapsed time of job execution
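As the tip above notes, the job ID can be used with the REST APIs to retrieve job specifics programmatically. Below is a minimal Python sketch; the base URL and token are placeholders, the `/v4/jobGroups/{id}` endpoint path and the `status` field are assumptions based on typical platform conventions, so confirm both in the API Reference for your version.

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # your Designer Cloud instance (placeholder)
TOKEN = "<api-token>"             # placeholder; generate one per the API Reference

def fetch_job_group(job_id: int) -> dict:
    """Retrieve metadata for a job by its ID.

    The endpoint path below is an assumption; verify it against the
    API Reference before use.
    """
    req = urllib.request.Request(
        f"{BASE_URL}/v4/jobGroups/{job_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_finished(job: dict) -> bool:
    # Terminal states mirror the Job status values listed on this page;
    # the exact strings returned by the API may differ.
    return job.get("status") in ("Completed", "Failed", "Canceled")
```

A polling loop can call `fetch_job_group` until `is_finished` returns True, then branch on whether the status is Completed or Failed.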
Output Destinations Tab
If the job has successfully completed, you can review the set of generated outputs and export results.
For each output, you can do the following:
View details: View details about the generated output in the side bar.
Tip: The View details panel contains breakdowns for each phase of a job. If the job fails, you can review error messages, which correspond to entries in the Data Service log file.
Download result: Download the generated output to your local desktop.
NOTE: Some file formats may not be downloadable to your desktop. See below.
Create imported dataset: Use the generated output to create a new imported dataset for use in your flows. See below.
NOTE: This option is not available for all file formats.
Direct file download
Click one of the provided links to download the file through your browser to your local desktop.
NOTE: If these options are not available, data download may have been disabled by an administrator.
TDE: You can download TDE formatted outputs to your desktop.
If you have generated output in TDE format and have configured a connection to Tableau Server, you can publish directly to the server. See Publishing Dialog.
Create imported dataset
Optionally, you can turn your generated results into new datasets for immediate use in Designer Cloud Enterprise Edition. For the generated output, select Create imported dataset from its context menu.
NOTE: If you generated results in Parquet format only, you cannot create a dataset from it, even if the Create button is present. This is a known issue.
NOTE: When you create a new dataset from your job results, the file or files that were written to the designated output location are used as the source. Depending on how your backend datastore permissions are configured, this location may not be accessible to other users.
After the new output has been written, you can create new recipes from it. See Build Sequence of Datasets.
If Designer Cloud Enterprise Edition is connected to an external storage system, you may publish your job results to it. Requirements:
- Your version of the product supports publishing.
- Your connection to the storage system includes write permissions.
- Your results are generated in a format that the target system supports for writing.
- All sub-jobs, including profiling, successfully completed.
For more information, see Publishing Dialog.
Review the visual profile of your generated results in the Profile tab. Visual profiling can assist in identifying issues in your dataset that require further attention, including outlier values.
NOTE: This tab appears only if you selected to profile results in your job definition. See Run Job Page.
In particular, you should pay attention to the mismatched values and missing values counts, which identify the approximate percentage of affected values across the entire dataset. For more information, see Overview of Visual Profiling.
NOTE: Generating exact visual profiling measurements on large datasets is computationally expensive and would severely impact performance. As a result, visual profiles across an entire dataset represent statistically significant approximations.
NOTE: Designer Cloud Enterprise Edition treats null values as missing values. Imported values that are null are generated as missing values in job results (represented in the gray bar). See Manage Null Values.
Tip: Mouse over the color bars to see counts of values in the category.
Tip: Use the horizontal scroll bar to see profiles of all columns in wide datasets.
In the lower section, you can explore details of the transformations of individual columns. Use this area to explore mismatched or missing data elements in individual columns.
Depending on the data type of the column, varying information is displayed. For more information, see Column Statistics Reference.
Tip: You should review the type information for each column, which is indicated by the icon to the left of the column.
In this tab, you can review a simplified representation of the flow from which the job was executed. This flow view displays only the recipes and datasets that contributed to the generated results.
Tip: To open the full flow, you can click its name in the upper-left corner.
Download recipe: Download the text of the recipe in Wrangle.
Display Wrangle/natural language: Toggle display of the recipe in raw language or in readable language.
Data Sources Tab
In the Data sources tab, you can review all of the sources of data for the executing recipe.
NOTE: If a flow has been unshared with you, you cannot see or access the data sources for any jobs that you have already run on the flow. You can still access the job results. This is a known issue.
If your flow references parameters, you can review the state of the parameters at the time of job execution.
NOTE: This tab appears only if the job is sourced from a flow that references parameters. For more information, see Overview of Parameterization.
When a webhook task has been triggered for this job, you can review the status of its delivery to the target system.
- Webhooks are defined on a per-flow basis. For more information, see Create Flow Webhook Task.
NOTE: Webhook notifications may need to be enabled in your environment. See Workspace Admin Page.
- Name: Display name for the webhook task.
- URL: Target URL where the webhook notification is delivered.
- Status: HTTP status code returned from the delivery of the message.
200: Message was delivered successfully.
- Delivered: Timestamp for when the webhook was delivered.