Terminology applicable to Designer Cloud Enterprise Edition.
NOTE: This list is not comprehensive.
Object Model Terms
These terms apply to the objects that you import, create, and generate in Designer Cloud Enterprise Edition.
connection
An integration between the product and a datastore, through which data is read from and optionally written to the store. A connection can be read-only or read-write, depending on the type. Some connections are provided by default.
Other connections are created through the Designer Cloud application.
- See Using Connections.
- See Connections Page.
- See Object Overview.
dataset with parameters
An imported dataset that has been created with parameterized references, typically used to collect multiple assets that are stored in similar locations or under similar filenames and that share an identical structure. For example, if you stored orders in individual files for each week in a single directory, you could create a dataset with parameters to capture all of those files in a single object, even if more files are added at a later time.
The path to the asset or assets is specified with one or more of the following types of parameters: Datetime, Alteryx pattern, regular expression pattern, wildcard, or variable.
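For example, parameterized paths for the weekly order files described above might look like the following. This is an illustrative sketch, not exact product syntax; see Overview of Parameterization for details.

```
/orders/week-*.csv          (wildcard: matches every weekly file)
/orders/week-{digit}+.csv   (Alteryx pattern: matches one or more digits)
```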
dataset with custom SQL
A dataset that is created by applying a custom SELECT statement to a relational datasource. You can use custom SQL statements to change the scope of your imported dataset, rather than importing a single, entire table.
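For example, instead of importing an entire transactions table, you might define the dataset with a statement like the following. This is an illustrative sketch; the table and column names are hypothetical.

```
SELECT order_id, customer_id, order_total
FROM transactions
WHERE order_date >= '2023-01-01'
```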
deployment
A mechanism for versioning the publication of your flows. In Deployment Manager, packages are imported as new releases assigned to a deployment. Within a deployment, you can choose which release is active. See Overview of Deployment Manager.
flow
A container for holding a set of related imported datasets, recipes, and output objects. Flows are managed in the Flow View page.
- See Object Overview.
- See Flow View Page.
flow parameters
A named reference that you can apply in your recipe steps. When applied, the flow parameter is replaced with its corresponding value, which may be the default value or an override value. See Overview of Parameterization.
imported dataset
A reference to an object that contains data to be wrangled in Designer Cloud Enterprise Edition. An imported dataset is created when you specify the file(s) or table(s) that you wish to read through a connection.
- See Object Overview.
- See Import Data Page.
job
A job is the sequence of processing steps that applies the steps of your recipe, in order, across the entire dataset to generate the desired set of results.
- See Object Overview.
- See Run Job Page.
macro
A macro is a sequence of one or more reusable recipe steps. Macros can be configured to accept parameterized inputs, so that their functionality can be tailored to the recipe in which they are referenced.
- See Object Overview.
- See Overview of Macros.
output
Associated with a recipe, an output is a user-defined set of files or tables, formats, and locations where results are written after a job run on the recipe has completed.
- See Object Overview.
- See Flow View Page.
output destinations
An output may contain one or more destinations, each of which defines a file type, filename, and location where the results of the output are written.
- See Run Job Page.
- See Flow View Page.
output parameters
You can create variable or timestamp parameters that can be applied to parts of the file or table paths of your outputs. Variable values can be specified at the time of job execution.
- See Overview of Parameterization.
- See Run Job Page.
package
A flow imported into a Production instance of the platform. A package contains a JSON-based definition of the flow in a ZIP file. On import, rules must be created to modify any mappings to connections or paths to datasets that may have changed from those of the platform instance from which the package was exported. See Overview of Deployment Manager.
plan
A plan is a sequence of triggers and tasks that can be applied across multiple flows. For example, you can schedule plans to execute sequences of flows at a specified frequency. For more information, see Overview of Operationalization.
publication
The delivery of a set of results generated by Designer Cloud Enterprise Edition to another system. See Publishing Dialog.
parameter override
A value that is applied instead of the default or inherited value for a parameter. A parameter override may be applied at the flow level or at the time of job execution.
- See Manage Parameters Dialog.
- See Run Job Page.
recipe
A sequence of steps that transforms one or more datasets into a desired output. Recipes are built in the Transformer page using a sample of the dataset or datasets. When a job is executed, the steps of the recipe are applied in the listed order to the imported dataset or datasets to generate the output.
- See Object Overview.
- See Recipe Panel.
reference
A pointer to the output of a recipe. A reference can be used in other flows, so that those flows get the latest version of the output from the referenced recipe.
- See Object Overview.
- See Flow View Page.
reference dataset
A reference that has been imported into another flow.
- See Object Overview.
- See Flow View Page.
release
A specific instance of a flow package that has been imported into the Deployment Manager. See Overview of Deployment Manager.
results
A set of generated files or tables containing the results of processing a selected recipe, its datasets, and all upstream dependencies. See Job Details Page.
results profile
Optionally, you can create a profile of your generated results. This profile is available through the Designer Cloud application and may assist in analyzing or troubleshooting issues with your dataset. See Overview of Visual Profiling.
sample
When you review and interact with your data in the data grid, you are seeing the current state of your recipe applied to a sample of the dataset. If the entire dataset is smaller than the defined limit, you are interacting with the entire dataset.
You can create new samples using one of several supported sampling techniques. See Overview of Sampling.
sample checkpointing
As you build more complex recipes and flows, it's a good idea to create new samples periodically at later recipe steps. All steps between the currently displayed sample and the currently displayed recipe step are executed in the browser, so checkpointing with samples reduces the number of steps that must be recomputed and can improve performance. For more information on best practices in sampling, see Overview of Sampling.
schedule
You can associate a single schedule with a flow. A schedule is a combination of one or more trigger times and one or more scheduled destinations, whose outputs are generated when a trigger fires. A schedule must have at least one trigger and at least one scheduled destination in order to work.
- See also trigger and scheduled destination.
- See Overview of Automator.
scheduled destination
When a schedule's trigger is fired, each recipe that has a scheduled destination associated with it is queued for execution. When the job completes, the outputs specified in the scheduled destination are generated. A recipe may have only one scheduled destination, and a scheduled destination may have multiple outputs (publishing actions) associated with it.
- See also schedule and trigger.
- See Overview of Automator.
snapshot
When a plan is triggered, a snapshot of all flows in the plan and their dependent objects is taken. The tasks of the plan are executed against this snapshot, so subsequent revisions to these objects do not impact the execution of the plan. For more information, see Overview of Operationalization.
target
A set of columns, their order, and their formats that represents the schema to which you are attempting to wrangle your dataset. You can assign a target to your recipe, and its schema can be superimposed on the columns in the data grid, allowing you to make simple selections to transform your dataset to match the column names, order, and formats of the target. See Overview of RapidTarget.
task
A task is an executable action that is part of a plan. For example, when a plan is triggered, the first task in the plan is queued for execution, which may be to execute all of the recipes and their dependencies in a flow. For more information, see Overview of Operationalization.
trigger
A trigger is a periodic time associated with a schedule. When a trigger's time occurs, all flows associated with the trigger are queued for execution.
- A schedule can have multiple triggers. See also schedule and scheduled destination.
- For more information on flow-based triggers, see Overview of Automator.
- For more information on plan-based triggers, see Overview of Operationalization.
variable (dataset)
A replacement for the parts of a file path to data that change with each refresh. A variable value can be overridden as needed at job runtime.
Application Terms
These terms apply to the Designer Cloud application, a web-based application for interacting with your datasets, flows, and recipes.
Add Schedule dialog
Create or modify scheduled executions of your flow.
- See Overview of Automator.
- See Add Schedule Dialog.
Automator
Feature that enables automated execution of flows according to user-defined schedules. See Overview of Automator.
Column Browser panel
Browse the columns of your dataset, and select and perform operations on one or more selected columns. See Column Browser Panel.
Column Details panel
Examine details and profile of the data in the selected column. See Column Details Panel.
Column menu
Perform transformation operations on the selected column from a list of menu options, including changing the column data type. See Column Menus.
Column Histogram
At the top of the column, review the counts of values in the column. Select one or more values in the column through the histogram. See Column Histograms.
Connections page
Create or edit connections to external storage. See Connections Page.
Data Grid
In the Transformer page, the data grid displays a sample of the dataset at the currently selected step in the recipe. Make selections in the dataset to prompt suggestions for transformations to add to your recipe. See Data Grid Panel.
Data Quality bars
Review color-coded counts of valid, missing, and mismatched values in your column based on the column's data type. Select a color bar to be prompted with suggestions for transformations on the relevant rows. See Data Quality Bars.
Data Type menu
Change the data type for the column from the icon to the left of the column header. See Column Menus.
Deployment Manager
Deploy Production versions of your flows through the Deployment Manager. See Overview of Deployment Manager.
Dataset Details page
Examine details about your dataset, including source of data and other information. See Dataset Details Page.
Flows page
Create, manage, and export your flows. See Flows Page.
Flow View page
Build your flow objects, including recipes, outputs, and references. See Flow View Page.
Home page
Landing page after login. See Home Page.
Import Data page
Import data from a valid connection as an imported dataset. See Import Data Page.
Library page
Manage your imported datasets and reference objects. See Library Page.
Jobs page
Review the list of jobs that you have launched. View status, explore job details, and export results. See Jobs Page.
Job Details page
Review the details of your job, including an optional profile of the resulting data. See Job Details Page.
Publishing Dialog
Publish results to an external system. See Publishing Dialog.
RapidTarget
Feature that enables matching of columns and data types of your dataset with a pre-defined target schema.
- See Overview of RapidTarget.
- See Flow View Page.
Recipe panel
Add, edit, and remove steps from your current recipe. Apply changes and see updates immediately in the data grid sample.
- See Transform Basics.
- See Recipe Panel.
Run Job page
Configure job, visual profiling, and job outputs before launching. See Run Job Page.
Samples panel
Review, create, and delete samples for the current recipe.
- See Overview of Sampling.
- See Samples Panel.
Search panel
Search for transformations to build as the next step in your recipe. See Search Panel.
Settings page
Review and modify settings. See Preferences Page.
Share Flow dialog
Share your flow or send a copy of it to other users.
- See Overview of Sharing.
- See Share Flow Dialog.
Selection Details panel
Based on selections you make in the data grid, you can review profiling information and a set of suggested transformations to add to your recipe. See Selection Details Panel.
Transformer toolbar
Select from common transformations in a toolbar across the top of the data grid. See Transformer Toolbar.
Transform Builder
Review and customize transformation steps. See Transform Builder.
Transformer page
Review sampled data, explore suggestions and previews, and build transformation steps. See Transformer Page.
User Profile page
Review and modify settings applicable to your user account. See User Profile Page.
Visible Columns panel
Review and toggle the visibility of the columns in your dataset. See Visible Columns Panel.
Recipe Development Terms
These terms pertain to building recipes in Wrangle in the Transformer page.
argument
An input to a function. See Wrangle Language.
binning
Several functions can be used to group values in a column into bins, which can assist in preparing your data for downstream use. See Prepare Data for Machine Processing.
data type
A data type is the set of constraints on expected values in a column. When you specify the data type for a column, you provide a means for the platform to identify the values in the column that do not match the selected type, which assists in wrangling the mismatched values. See Supported Data Types.
Data types can be selected from the column menus. See Column Menus.
dependency
An input to a recipe that is not the primary datasource for the recipe. For example, if your recipe includes a join step, the dataset that is joined into your recipe is an upstream dependency. Recipe steps and changes outside of the Designer Cloud application can create dependency errors, in which an upstream object can no longer be found and the reference to it cannot be resolved. These issues must be fixed prior to successful execution of a job. For more information, see Fix Dependency Issues.
dictionary
A dictionary is an external file that can be used to define the accepted values for a custom data type. You can create custom data types using an enumerated list of accepted values or by regular expression. See Create Custom Data Types.
file encoding
A file's encoding defines the set of characters that are in use in the file. There are many different encoding systems in use around the world. To represent English text, which uses a 26-character alphabet, a compact encoding such as ASCII or UTF-8 is sufficient. However, Asian character sets, which may contain thousands of characters, require a broader, multi-byte encoding. See Supported File Encoding Types.
When a file is imported, Designer Cloud Enterprise Edition assumes that the file is in the default encoding type. As needed, you can change the encoding type that is used to import the file. See Change File Encoding.
function
A function in Wrangle is an action that is applied to a set of values as part of a transformation step. A function can take 0 or more parameters as inputs, yielding a single output of a specific data type. For a list of supported functions, see Language Index.
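For example, the expression UPPER('december') takes a single String input and yields the String output 'DECEMBER'.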
initial structure
When a file-based dataset is imported, Designer Cloud Enterprise Edition attempts to detect the format and structure of the data and then to apply a set of initial parsing steps to transform the data for display in tabular form in the data grid. These steps may vary depending on the file format. See Initial Parsing Steps.
These steps do not appear in the recipe. As needed, you can disable the detection of structure on import. When disabled, these steps are added as the first steps of the recipe, where you can edit or remove them as needed. See Remove Initial Structure.
join
This database concept can be applied to datasets. In a join, two datasets are merged into one, based on a set of key columns. Values in these columns that match across the datasets are used to determine the values from each dataset to include in the joined dataset. See Join Types.
Joins are created as steps in your recipe. See Join Window.
lookup
A retrieval of a row of values from another dataset based on common values in columns in each dataset. A lookup is useful for bringing in reference information based on values in one of the columns of your dataset. See Lookup Wizard.
mismatched
Values in a column that do not conform to the range or format of expected values for the column's data type.
missing
Cell values in the dataset that are empty.
multi-dataset operation
A multi-dataset (MDS) operation refers to any step in your recipe that uses two or more datasets. Joins and unions are examples of multi-dataset operations.
nested expression
An expression that is inside another expression. Example:
POWER(ABS(colA),colB)
Designer Cloud Enterprise Edition supports the use of nested expressions in your recipe steps. See Wrangle Language.
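As a sketch, such a nested expression might appear in a single recipe step as follows; the output column name is hypothetical.

```
derive type: single value: POWER(ABS(colA), colB) as: 'powered'
```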
null
A value that does not exist in the dataset. See Manage Null Values.
operator
A single character that represents an arithmetic function or comparison. For example, the Plus sign (+) represents the add function.
| Operator Category | Description |
|---|---|
| Logical Operators | and, or, and not operators |
| Numeric Operators | Add, subtract, multiply, and divide |
| Comparison Operators | Compare two values with greater than, equals, not equals, and less than operators |
| Ternary Operators | Use ternary operators to create if/then/else logic in your transforms. |
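For illustration, the following recipe steps use numeric, comparison, and ternary operators. The column and output names are hypothetical.

```
derive type: single value: subtotal + tax as: 'total'
derive type: single value: (total > 100) ? 'large' : 'small' as: 'order_size'
```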
outliers
In statistics, an outlier refers to a value that lies unusually far above or below the mean. In Designer Cloud Enterprise Edition, an outlier is a value that is more than 4 standard deviations from the mean.
You can review outliers for column values. See Column Statistics Reference.
parameter (language)
An input to a transform in Wrangle. See Wrangle Language.
pattern
In Designer Cloud Enterprise Edition, a pattern is an object that describes a sub-string within a value. Patterns can be described using regular expressions, a common standard, or Alteryx patterns, a proprietary simplification of regular expressions. See Text Matching.
Patterns are widely used in the product for identifying and extracting values from data, validating data types, and supporting pattern-based suggestions.
- See Alteryx pattern.
- See regular expression pattern.
regular expression pattern
Regular expressions are a powerful yet complex method of describing patterns of values for matching purposes. See Text Matching.
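For example, to match a three-digit sequence such as a U.S. area code, the two pattern styles might look like the following (illustrative):

```
{digit}{3}    (Alteryx pattern)
\d{3}         (regular expression)
```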
source row number
The row number for a record as it appeared in the original dataset. Source row number information can be obtained through the SOURCEROWNUMBER function. This function may return a null value if multi-dataset operations, such as union and join, have been performed on the dataset. See SOURCEROWNUMBER Function.
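A minimal sketch of capturing the source row number in a new column; the column name is hypothetical:

```
derive type: single value: SOURCEROWNUMBER() as: 'source_row'
```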
source metadata reference
A source metadata reference is a programmatic reference to some aspect of the source file for your dataset. Using these programmatic references, you can write source information for your original datasource into your dataset for future reference. For more information, see Source Metadata References.
string collation
String collation refers to a method of comparison of strings based on a set of rules. Designer Cloud Enterprise Edition includes the following functions to perform string collation-based comparisons:
- See STRINGGREATERTHAN Function.
- See STRINGGREATERTHANEQUAL Function.
- See STRINGLESSTHAN Function.
- See STRINGLESSTHANEQUAL Function.
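As a sketch, assuming a two-argument signature and hypothetical column names, a collation-based comparison step might look like the following:

```
derive type: single value: STRINGLESSTHAN(last_name, 'M') as: 'first_half'
```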
transformation
A transformation is the unit of action in a recipe step. A transformation applies one or more actions on a set of rows or columns. Transformations are specified in the Transformer page through the Transform Builder. See Transform Builder.
For a list of available transformations, see Transformation Reference.
transform
A transform in Wrangle is an action that is applied to rows or columns of your dataset. A transform can take zero or more parameters as inputs. A parameter may contain a reference to a column, a literal value, or a function.
NOTE: Transforms are not available through the Designer Cloud application. Instead, you build transformations, which are more complex steps that reference transforms from the underlying language.
For a list of supported transforms, see Language Index.
Alteryx pattern
Alteryx patterns are custom selectors for patterns in your data, providing a simpler and more readable alternative to regular expressions. See Text Matching.
union
A union combines two or more datasets such that the rows of the second and later datasets are appended to the end of the first dataset. In a union operation, the columns must be matched up, or the results are a ragged dataset.
Unions are created as steps in your recipe. See Union Page.
wrangling
An informal term for the process of data preparation. The term data wrangling was coined by the co-founders of Alteryx Inc.
Admin Terms
These terms apply to administration of your workspace and the underlying platform.
Admin Settings page
A page in the Designer Cloud application where administrators can configure platform users, settings, and other configuration options. See Admin Settings Page.
Deployment Manager Page
A page in the Designer Cloud application where provisioned users can manage their deployments for a Production instance. Users must have the Deployment role in their account, or the entire instance must be configured as a Production instance. See Deployment Manager Page.
Platform Terms
These terms apply to the underlying Designer Cloud Powered by Trifacta platform.
Avro
A data serialization format for Hadoop. For more information, see Supported File Formats.
API
Short for Application Programming Interface, the platform APIs give developers programmatic access to platform actions from outside of the application interface. For more information, see API Reference.
batch job runner
A platform service for queuing and managing the execution of jobs through external running environments. For more information, see Configure Batch Job Runner.
BZIP
A file format for compression and decompression. For more information, see Supported File Formats.
Chrome
The Designer Cloud application can be served through a supported version of Google Chrome. For more information, see Desktop Requirements.
cron
Time-based job scheduling format. The Designer Cloud Powered by Trifacta platform supports a modified form of cron. For more information, see cron Schedule Syntax Reference.
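For illustration, standard cron uses five time fields: minute, hour, day of month, month, and day of week. The platform's modified syntax may differ; see cron Schedule Syntax Reference. A standard example:

```
0 6 * * 1    (every Monday at 06:00)
```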
Data Service
A platform service for managing connections and interactions with relational storage. For more information, see Configure Data Service.
Firefox
The Designer Cloud application can be served through a supported version of Mozilla Firefox.
For more information, see Desktop Requirements.
GZIP
A file format for compression and decompression. For more information, see Supported File Formats.
Hyper
A native format for the Tableau data visualization platform. The Designer Cloud Powered by Trifacta platform can generate results in Hyper format. For more information, see Supported File Formats.
ingestion
The process by which relational datasources can be retrieved from their origin and transferred to the backend datastore of the platform, which improves performance in sampling and job execution. For more information, see Configure JDBC Ingestion.
JSON
JavaScript Object Notation (JSON) is a human-readable format for transmitting data objects. For more information, see Supported File Formats.
Microsoft Excel
Microsoft Excel workbooks and worksheets can be used as imported datasets in the platform. For more information, see Import Excel Data.
MySQL
An open-source relational database management system. MySQL can host the Alteryx databases. For more information, see System Requirements.
machine learning
The process by which computer systems use data as inputs for algorithms and statistical models to make decisions and perform tasks.
operationalization
The process by which actions in the platform can be applied and scheduled in production environments.
Trifacta Photon client
An in-browser client for managing the sampling and transformation of data on the web client. For more information, see Configure Photon Client.
PostgreSQL
An open-source relational database management system. PostgreSQL can host the Alteryx databases. For more information, see System Requirements.
predictive transformation
Specific to the Designer Cloud Powered by Trifacta platform, predictive transformation serves as the foundation of design principles for how users interact with their data. For more information, see Overview of Predictive Transformation.
profile job
When a job is executed against a dataset, users can optionally choose to generate a visual profile of the results, which is processed as a separate job after the transformation job has completed. For more information, see Run Job Page.
running environment
One of several environments where transformation, profiling, and sampling jobs can be executed. The platform integrates with these environments and manages the queuing and monitoring of the jobs asynchronously, minimizing performance impacts on the Alteryx node. For more information, see Running Environment Options.
sharing
Users can optionally share flows and connections with other users. For more information, see Overview of Sharing.
SSO
Short for Single-Sign On, SSO enables users to access multiple systems within the enterprise domain through one set of credentials. The Designer Cloud Powered by Trifacta platform can integrate with multiple types of SSO. For more information, see Configure SSO for AD-LDAP.
Snappy
A fast compression and decompression format. For more information, see Supported File Formats.
TDE
A native format for the Tableau data visualization platform. The Designer Cloud Powered by Trifacta platform can generate results in TDE format.
NOTE: TDE has been superseded by the Hyper format. Please switch to using Hyper format. TDE will be deprecated in a future release.
For more information, see Supported File Formats.
transform job
The process by which a recipe is applied across the entire dataset to generate results at the specified output locations. For more information, see Run Job Page.
trifacta-conf.json
The primary configuration file of the Designer Cloud Powered by Trifacta platform. This file is stored in JSON format on the Alteryx node.
NOTE: Administrators should perform platform configuration operations through the Admin Settings page, where possible. See Admin Settings Page.
For more information, see Platform Configuration Methods.
UDF
Short for user-defined function, a UDF is an externally developed function that can be used in your recipes to apply custom transformation logic. Building UDFs requires developer skills. For more information, see User-Defined Functions.
visual profiler
A platform service that can be optionally invoked to generate visual profiles on generated results for display in the Designer Cloud application. For more information, see Overview of Visual Profiling.
Webhook
A Webhook is a message sent over HTTP via REST API request from one application to another. In the Designer Cloud Powered by Trifacta platform, you can configure Webhooks to be sent to a third-party application based on the success or failure of a job execution. For more information, see Create Flow Webhook Task.
Hadoop Terms
Here are a few terms that are specific to Hadoop and Hadoop-based clusters.
Cloudera
A Hadoop-based platform for storing large volumes of data and performing analytics on them. For more information, see Supported Deployment Scenarios for Cloudera.
cluster
With respect to the platform, a cluster is a remote collection of nodes for processing platform jobs and returning results. The platform supports integration with multiple types of clusters for job processing.
Hadoop
An open-source framework of utilities for managing analytics and data processing jobs across a network of many nodes in a cluster. Hadoop is scalable and extensible and well-suited for processing very large data volumes.
HDFS
Short for Hadoop Distributed File System, HDFS is a backend datastore for Hadoop-based clusters. Files are stored in large blocks distributed across many nodes of the cluster. Applications and users can interact with the files through a virtual file browser. For more information, see Using HDFS.
high availability
High availability refers to a general concept of automated redundancy and failover to backup servers when a primary server is down. The platform can integrate with high availability functions of a Hadoop-based cluster. For more information, see Enable Integration with Cluster High Availability.
Hortonworks
Hortonworks is the maker of the Hortonworks Data Platform (HDP), with which the platform can integrate for job execution and data storage. For more information, see Supported Deployment Scenarios for Hortonworks.
HttpFS
One of two supported communications protocols between the platform and HDFS, HttpFS utilizes HTTP protocol and is required in some deployments. For more information, see Enable HttpFS.
Kerberos
Kerberos provides secure protocols for authentication across a variety of platforms. For more information, see Configure for Kerberos Integration.
KMS
Short for Key Management System, KMS for Hadoop clusters is supported by the platform. For more information, see Configure for KMS.
Ranger
Authorization service for Hadoop clusters, an Apache product originally developed at Hortonworks. For more information, see Configure for KMS for Ranger.
Sentry
Authorization service for Hadoop clusters, an Apache product originally developed at Cloudera. For more information, see Configure for KMS for Sentry.
WebHDFS
WebHDFS is the default protocol for communicating between the platform and HDFS. For more information, see Prepare Hadoop for Integration with the Platform.
YARN
Short for Yet Another Resource Negotiator, YARN is the resource management and job scheduling layer for Hadoop clusters.
AWS Terms
These terms apply to Amazon Web Services, where the Designer Cloud Powered by Trifacta platform can be hosted.
AWS
Short for Amazon Web Services, AWS is a cloud-based platform for developing and deploying applications. For more information, see Configure for AWS.
EC2
Elastic Compute Cloud (Amazon EC2) is a web-based service for running applications in the Amazon Web Services (AWS) public cloud. The Designer Cloud Powered by Trifacta platform can be deployed from an EC2 instance.
EMR
Short for Elastic Map Reduce, EMR is a Hadoop-based platform purpose-built to manage large datasets on AWS. See Configure for EMR.
IAM
An identity and access management (IAM) role defines a set of permissions for making AWS requests. Roles are assumed by trusted entities, such as IAM users, applications, or AWS services. The Designer Cloud Powered by Trifacta platform can use IAM roles to enable access to EC2-based resources controlled by the enterprise. For more information, see Configure for EC2 Role-Based Authentication.
RDS
Amazon Relational Database Service (RDS) is a relational database management system available in the AWS cloud. The databases required by the Designer Cloud Powered by Trifacta platform can be installed on Amazon RDS. See Install Databases on Amazon RDS.
Redshift
A hosted data warehouse solution available through AWS. The Designer Cloud Powered by Trifacta platform can connect to Redshift databases. See Using Redshift.
S3
Simple Storage Service (S3) is an online storage service provided by AWS. The Designer Cloud Powered by Trifacta platform can use S3 as the backend storage system or can integrate with it as secondary storage. See Using S3.
Glue
Metastore for Hive datasets, which can be used as a source of imported datasets. See Enable AWS Glue Access.
Azure Terms
These terms apply to Microsoft Azure, where the Designer Cloud Powered by Trifacta platform can be hosted, and its available datastores and services.
ADLS
Azure Data Lake Store (ADLS) is a scalable repository for big data workloads that integrates with HDInsight. See Using ADLS.
Azure
Microsoft Azure is a cloud computing service for building, managing, and deploying applications. See Configure for Azure.
Azure Databricks
Spark-based analytics running environment built specifically for Microsoft Azure. See Configure for Azure Databricks.
HDInsight
An open-source Hadoop-based platform for storage and analytics in the Microsoft Azure platform. See Configure for HDInsight.
WASB
Windows Azure Storage Blob (WASB) is an HDFS-compatible abstraction layer over Azure Blob storage, enabling access to storage across multiple clusters. See Using WASB.
Miscellaneous Terms
Epoch/Unix time
Unix time (a.k.a. POSIX time or Epoch time) is a system for describing instants in time, defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970, not counting leap seconds.
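For example, 00:00:00 UTC on 1 January 2021 corresponds to a Unix time of 1609459200: the 51 years from 1970 through 2020 contain 18,628 days (51 × 365, plus 13 leap days), and 18,628 × 86,400 seconds per day equals 1,609,459,200 seconds.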