Before you deploy the Alteryx® software, you should complete the following configuration steps within your Hadoop environment.
- For a technical overview of how the Designer Cloud Powered by Trifacta platform interacts with Hadoop, see Platform Interactions with Hadoop.
NOTE: The Designer Cloud Powered by Trifacta platform requires access to a set of Hadoop components. See System Requirements.
Create Alteryx user account on Hadoop cluster
The Designer Cloud Powered by Trifacta platform interacts with Hadoop through a single system user account. A user for the platform must be added to the cluster.
NOTE: In a cluster without Kerberos or SSO user management, the [hadoop.user (default=trifacta)] user must be created on each node of the cluster.
If LDAP is enabled, the [hadoop.user] user should be created in the same realm as the cluster.
If Kerberos is enabled, the [hadoop.user] user must exist on every node where jobs run.
For POSIX-compliant Hadoop environments, the user IDs of the Alteryx user accessing the cluster and the Hadoop user must match exactly.
UserID:
If possible, please create the user ID as: trifacta
This user should belong to the group: trifactausers
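For illustration, the account can be created on a cluster node with standard Linux user-management commands. This is a minimal sketch; the useradd/groupadd tooling and the example UID (1234) are assumptions for your environment, and in POSIX-compliant environments the UID should match that of the Alteryx user on the Alteryx node:
```
# Example only: adjust for your OS and user-management tooling.
groupadd trifactausers
# Pin the UID (1234 is a placeholder) so it can match the Alteryx node's user ID.
useradd -u 1234 -g trifactausers -m trifacta
# Confirm the UID matches the one on the Alteryx node:
id -u trifacta
```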
User requirements:
- Access to HDFS
- Permission to run YARN jobs on the cluster.
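A quick way to spot-check these requirements from a node with the Hadoop client installed is sketched below; sudo access to the platform account is assumed:
```
# Assumes the hadoop.user is "trifacta".
sudo -u trifacta hdfs dfs -ls /            # basic HDFS access
sudo -u trifacta yarn application -list    # YARN ResourceManager is reachable for this user
```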
Verify that the following HDFS paths have been created and that their permissions enable access to the Alteryx user account:
NOTE: Depending on your Hadoop distribution, you may need to modify the following commands to use the Hadoop client installed on the Alteryx node.
Below, change the values for trifacta to match the [hadoop.user] user for your environment:
```
hdfs dfs -mkdir /trifacta
hdfs dfs -chown trifacta /trifacta
hdfs dfs -mkdir -p /user/trifacta
hdfs dfs -chown trifacta /user/trifacta
```
HDFS directories
The following directories must be available to the [hadoop.user] on HDFS. Below, you can review the minimum permissions set for basic and impersonated authentication for each default directory. Secure impersonation is described later.
NOTE: Except for the dictionaries directory, which is used to hold smaller reference files, each of these directories should be configured to permit storage of a user's largest datasets.
Directory | Minimum required permissions | Secure impersonation permissions |
---|---|---|
/trifacta/uploads | 700 | 770 (set to 730 to prevent users from browsing this directory) |
/trifacta/queryResults | 700 | 770 |
/trifacta/dictionaries | 700 | 770 |
/trifacta/tempfiles | 770 | 770 |
You can use the following commands to configure permissions on these directories. The following permissions scheme reflects the secure impersonation permissions in the table above:
```
$ hdfs dfs -mkdir -p /trifacta/uploads
$ hdfs dfs -mkdir -p /trifacta/queryResults
$ hdfs dfs -mkdir -p /trifacta/dictionaries
$ hdfs dfs -mkdir -p /trifacta/tempfiles
$ hdfs dfs -chown -R trifacta:trifacta /trifacta
$ hdfs dfs -chmod -R 770 /trifacta
$ hdfs dfs -chmod -R 730 /trifacta/uploads
```
If these standard locations cannot be used, you can configure the HDFS paths. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
```
"hdfs.pathsConfig.fileUpload": "/trifacta/uploads",
"hdfs.pathsConfig.batchResults": "/trifacta/queryResults",
"hdfs.pathsConfig.dictionaries": "/trifacta/dictionaries",
```
Kerberos authentication
The Designer Cloud Powered by Trifacta platform supports Kerberos authentication on Hadoop.
NOTE: If Kerberos is enabled for the Hadoop cluster, the keytab file must be made accessible to the Designer Cloud Powered by Trifacta platform. See Set up for a Kerberos-enabled Hadoop cluster.
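Once the keytab is available, you can verify it from the Alteryx node before configuring the platform. The keytab path and principal below are placeholders for your environment:
```
# Placeholder keytab path and principal; substitute your own values.
kinit -kt /path/to/trifacta.keytab trifacta@YOUR.REALM
klist                     # confirm a ticket was granted
hdfs dfs -ls /trifacta    # confirm HDFS access with the Kerberos ticket
```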
Hadoop component configuration
Acquire cluster configuration files
The Hadoop cluster configuration files must be made available to the Designer Cloud Powered by Trifacta platform. You can either copy the files over from the cluster or create a local symlink to them.
For more information, see Configure for Hadoop.
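As an illustration, on many distributions the client configuration lives under /etc/hadoop/conf on cluster nodes. Both the source path and the destination directory below are assumptions; use the locations that apply to your distribution and installation:
```
# Option 1: copy the cluster client configuration files to the Alteryx node (example paths).
scp cluster-edge-node:/etc/hadoop/conf/*-site.xml /opt/trifacta/conf/hadoop-site/

# Option 2: if the Alteryx node already has a local Hadoop client configuration, symlink it.
ln -s /etc/hadoop/conf/core-site.xml /opt/trifacta/conf/hadoop-site/core-site.xml
```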
YARN configuration overview
This section provides an overview of configuration recommendations to be applied to the Hadoop cluster from the Designer Cloud Powered by Trifacta platform.
NOTE: The recommendations in this section are optimized for use with the Designer Cloud Powered by Trifacta platform. These may or may not conform to requirements for other applications using the Hadoop cluster. Alteryx Inc assumes no responsibility for the configuration of the cluster.
YARN manages cluster resources (CPU and memory) by running all processes within allocated containers. A container restricts the resources available to its process(es). Processes are monitored and killed if they overrun the container allocation.
- Multiple containers can run on a cluster node (if available resources permit).
- A job can request and use multiple containers across the cluster.
- Container requests specify virtual CPU (cores) and memory (in MB).
YARN configuration specifies:
- Per Cluster Node: Available virtual CPUs and memory per cluster node
- Per Container: virtual CPUs and memory for each container
The following parameters are available in yarn-site.xml:
Parameter | Type | Description |
---|---|---|
yarn.nodemanager.resource.memory-mb | Per Cluster Node | Amount of physical memory, in MB, that can be allocated for containers |
yarn.nodemanager.resource.cpu-vcores | Per Cluster Node | Number of CPU cores that can be allocated for containers |
yarn.scheduler.minimum-allocation-mb | Per Container | Minimum container memory, in MB; requests lower than this will be increased to this value |
yarn.scheduler.maximum-allocation-mb | Per Container | Maximum container memory, in MB; requests higher than this will be capped to this value |
yarn.scheduler.increment-allocation-mb | Per Container | Granularity of container memory requests |
yarn.scheduler.minimum-allocation-vcores | Per Container | Minimum virtual CPU cores per container; requests lower than this will be increased to this value |
yarn.scheduler.maximum-allocation-vcores | Per Container | Maximum virtual CPU cores per container; requests higher than this will be capped to this value |
yarn.scheduler.increment-allocation-vcores | Per Container | Granularity of container virtual CPU requests |
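For reference, these parameters appear in yarn-site.xml as standard Hadoop property entries. The fragment below is a sketch using values from the 64 GB / 16 vCPU profile in the Recommendations table later on this page; treat them as a starting point rather than required settings:
```
<!-- Example yarn-site.xml fragment; values taken from the recommendations below. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>57344</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>13</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>57344</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>13</value>
</property>
```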
Spark configuration overview
Spark processes run multiple executors per job. Each executor must run within a YARN container. Therefore, resource requests must fit within YARN’s container limits.
Like YARN containers, multiple executors can run on a single node. More executors provide additional computational power and shorter runtimes.
Spark’s dynamic allocation adjusts the number of executors to launch based on the following:
- job size
- job complexity
- available resources
You can apply these settings through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
The per-executor resource request sizes can be specified by setting the following properties in the spark.props section:
Parameter | Description |
---|---|
spark.executor.memory | Amount of memory to use per executor process (in a specified unit) |
spark.executor.cores | Number of cores to use on each executor; limit to 5 cores per executor for best performance |
A single special process, the application driver, also runs in a container. Its resources are specified in the spark.props section:
Parameter | Description |
---|---|
spark.driver.memory | Amount of memory to use for the driver process (in a specified unit) |
spark.driver.cores | Number of cores to use for the driver process |
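For example, the corresponding entries in the spark.props section of trifacta-conf.json might look like the sketch below, again using the 64 GB / 16 vCPU profile from the Recommendations table; your values will differ:
```
"spark.props": {
  "spark.executor.memory": "16GB",
  "spark.executor.cores": "4",
  "spark.driver.memory": "4GB",
  "spark.driver.cores": "1"
}
```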
Recommendations
The following configuration settings can be applied through Designer Cloud Powered by Trifacta platform configuration based on the number of nodes in the Hadoop cluster.
NOTE: These recommendations should be modified based on the technical capabilities of your network, the nodes in the cluster, and other applications using the cluster.
 | 1 | 2 | 4 | 10 | 16 |
---|---|---|---|---|---|
Available memory (GB) | 16 | 32 | 64 | 160 | 256 |
Available vCPUs | 4 | 8 | 16 | 40 | 64 |
yarn.nodemanager.resource.memory-mb | 12288 | 24576 | 57344 | 147456 | 245760 |
yarn.nodemanager.resource.cpu-vcores | 3 | 6 | 13 | 32 | 52 |
yarn.scheduler.minimum-allocation-mb | 1024 | 1024 | 1024 | 1024 | 1024 |
yarn.scheduler.maximum-allocation-mb | 12288 | 24576 | 57344 | 147456 | 245760 |
yarn.scheduler.increment-allocation-mb | 512 | 512 | 512 | 512 | 512 |
yarn.scheduler.minimum-allocation-vcores | 1 | 1 | 1 | 1 | 1 |
yarn.scheduler.maximum-allocation-vcores | 3 | 6 | 13 | 32 | 52 |
yarn.scheduler.increment-allocation-vcores | 1 | 1 | 1 | 1 | 1 |
spark.executor.memory | 6GB | 6GB | 16GB | 20GB | 20GB |
spark.executor.cores | 2 | 2 | 4 | 5 | 5 |
spark.driver.memory | 4GB | 4GB | 4GB | 4GB | 4GB |
spark.driver.cores | 1 | 1 | 1 | 1 | 1 |
The specified configuration allows, at maximum, the following Spark configurations per node:
Cores x Node | Configuration Options |
---|---|
1x1 | (1 driver + 1 executor) or 1 executor |
2x1 | (1 driver + 2 executors) or 3 executors |
4x1 | (1 driver + 3 executors) or 3 executors |
10x1 | (1 driver + 6 executors) or 6 executors |
16x1 | (1 driver + 10 executors) or 10 executors |
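As a worked check of these limits, consider the 4x1 configuration: the driver takes 1 core and each executor takes spark.executor.cores = 4, so 1 driver plus 3 executors uses 1 + (3 x 4) = 13 vcores, which is exactly the yarn.nodemanager.resource.cpu-vcores value for that profile. Memory also fits: 3 executors x 16 GB plus a 4 GB driver is 52 GB, below the 57344 MB (56 GB) that YARN can allocate on the node (ignoring Spark's per-container memory overhead).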