Use the following section to set up your EMR cluster for use with the platform. You can use either of the following methods: - Via AWS EMR UI: This method is assumed in this documentation.
- Via AWS command line interface: For this method, it is assumed that you know the required steps to perform the basic configuration. For custom configuration steps, additional documentation is provided below.
Info |
---|
NOTE: It is recommended that you set up your cluster for exclusive use by the platform. |
In the Amazon EMR console, click Create Cluster. Click Go to advanced options. Complete the sections listed below. Info |
---|
NOTE: Please be sure to read all of the cluster options before setting up your EMR cluster. |
Info |
---|
NOTE: Please perform your configuration through the Advanced Options workflow. |
For more information on setting up your EMR cluster, see http://docs.aws.amazon.com/cli/latest/reference/emr/create-cluster.html. In the Advanced Options screen, please select the following: Info |
---|
NOTE: Please apply the sizing information for your EMR cluster that was recommended for you. If you have not done so, please contact your account representative. |
- Cluster name: Provide a descriptive name.
- Logging: Enable logging on the cluster.
- Debugging: Enable.
- Termination protection: Enable.
- Scale down behavior: Terminate at instance hour.
- Tags:
- Additional Options:
- EMRFS consistent view: You should enable this setting. Doing so may incur additional costs. For more information, see EMRFS consistent view is recommended below.
- Custom AMI ID: None.
- Bootstrap Actions:
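If you prefer to create the cluster via the AWS command line interface, the advanced options above map to create-cluster flags. The following is a minimal sketch only; the cluster name, release label, log bucket, and instance sizing are placeholder assumptions that you should replace with the values recommended for your deployment: Code Block |
---|
# Sketch: the advanced options from the list above, expressed as CLI flags.
aws emr create-cluster \
  --name "my-emr-cluster" \
  --release-label emr-5.13.0 \
  --log-uri "s3://<YOUR-LOG-BUCKET>/logs/" \
  --enable-debugging \
  --termination-protected \
  --scale-down-behavior TERMINATE_AT_INSTANCE_HOUR \
  --emrfs Consistent=true \
  --use-default-roles \
  --instance-type m4.xlarge \
  --instance-count 3 |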
If you performed all of the configuration, including the sections below, you can create the cluster. Info |
---|
NOTE: You must acquire your EMR cluster ID for use in configuration of the platform. |
The following cluster roles and their permissions are required. For more information on the specifics of these policies, see EMR cluster policies. - EMR Role:
- Read/write access to log bucket
- Read access to resource bucket
- EC2 instance profile:
- If using instance mode:
- EC2 profile should have read/write access for all users.
- EC2 profile should have same permissions as EC2 Edge node role.
- Read/write access to log bucket
- Read access to resource bucket
- Auto-scaling role:
- Read/write access to log bucket
- Read access to resource bucket
- Standard auto-scaling permissions
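If you manage these roles from the command line, the following sketch creates the AWS default EMR roles and attaches an inline S3 access policy to the EC2 instance profile role. The role name EMR_EC2_DefaultRole is the AWS default; the policy name and local file are assumptions, and the policy JSON is shown in EMR cluster policies below: Code Block |
---|
# Sketch: create the default EMR service role and EC2 instance profile role.
aws emr create-default-roles
# Attach bucket-access permissions (policy JSON saved locally beforehand).
aws iam put-role-policy \
  --role-name EMR_EC2_DefaultRole \
  --policy-name emr-bucket-access \
  --policy-document file://emr-bucket-access.json |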
You can use one of two methods for authenticating the EMR cluster: - Role-based IAM authentication (recommended): This method leverages your IAM roles on the EC2 instance.
- Custom credential provider JAR file: This method utilizes a JAR file provided with the platform. This JAR file must be deployed to all nodes on the EMR cluster through a bootstrap action script.
You can leverage your IAM roles to provide role-based authentication to the S3 buckets. Info |
---|
NOTE: The IAM role that is assigned to the EMR cluster and to the EC2 instances on the cluster must have access to the data of all users on S3. |
For more information, see Configure for EC2 Role-Based Authentication.
If you are not using IAM roles for access, you can manage access using either of the following: - AWS key and secret values specified in platform configuration
- AWS user mode
In either scenario, you must use the custom credential provider JAR provided in the installation. This JAR file must be available to all nodes of the EMR cluster. After you have installed the platform and configured the S3 buckets, please complete the following steps to deploy this JAR file. Info |
---|
NOTE: These steps must be completed before you create the EMR cluster. |
Info |
---|
NOTE: This section applies if you are using the default credential provider mechanism for AWS and are not using the IAM instance-based role authentication mechanism. |
Steps: From the platform installation, retrieve the following file: Code Block |
---|
[TRIFACTA_INSTALL_DIR]/aws/emr/build/libs/trifacta-aws-emr-credential-provider[TIMESTAMP].jar |
Info |
---|
NOTE: Do not remove the timestamp value from the filename. This information is useful for support purposes. |
Upload this JAR file to an S3 bucket location where the EMR cluster can access it: - Via AWS Console S3 UI: Upload the file through the S3 console.
- Via AWS command line (see http://docs.aws.amazon.com/cli/latest/reference/s3/index.html): Code Block |
---|
aws s3 cp trifacta-aws-emr-credential-provider[TIMESTAMP].jar s3://<YOUR-BUCKET>/ |
Create a bootstrap action script named configure_emrfs_lib.sh. The contents must be the following: Code Block |
---|
sudo aws s3 cp s3://<YOUR-BUCKET>/trifacta-aws-emr-credential-provider[TIMESTAMP].jar /usr/share/aws/emr/emrfs/auxlib/ |
- This script must be uploaded into S3 in a location that can be accessed from the EMR cluster. Retain the full path to this location.
- Add the bootstrap action to the EMR cluster configuration.
- Via AWS Console S3 UI: Create the bootstrap action to point to the script you uploaded to S3.
- Via AWS command line: Upload the configure_emrfs_lib.sh file to the accessible S3 bucket. In the command line cluster creation script, add a custom bootstrap action, such as the following: Code Block |
---|
--bootstrap-actions '[
{"Path":"s3://<YOUR-BUCKET>/configure_emrfs_lib.sh","Name":"Custom action"}
]' |
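For context, the following sketch shows how this bootstrap action might be attached during command line cluster creation; all other flags are placeholder assumptions: Code Block |
---|
# Sketch: attach the custom bootstrap action at cluster creation time.
aws emr create-cluster \
  --name "my-emr-cluster" \
  --release-label emr-5.13.0 \
  --use-default-roles \
  --instance-type m4.xlarge \
  --instance-count 3 \
  --bootstrap-actions '[{"Path":"s3://<YOUR-BUCKET>/configure_emrfs_lib.sh","Name":"Custom action"}]' |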
When the EMR cluster is launched with the above custom bootstrap action, the cluster does one of the following: - Interacts with S3 using the credentials specified in platform configuration.
- If aws.mode = user, then the credentials registered by the user are used.
For more information about AWSCredentialsProvider for EMRFS, see the AWS EMR documentation. Although it is not required, you should enable the consistent view feature for EMRFS on your cluster. During job execution on EMR, including profiling jobs, the platform writes files in rapid succession, and these files are quickly read back from storage for further processing. However, Amazon S3 does not guarantee a consistent file listing until a later time. To ensure that the platform does not begin reading back an incomplete set of files, you should enable EMRFS consistent view. Info |
---|
NOTE: If EMRFS consistent view is enabled, additional policies must be added for users and the EMR cluster. Details are below. |
Info |
---|
NOTE: If EMRFS consistent view is not enabled, profiling jobs may not get a consistent set of files at the time of execution. Jobs can fail or generate inconsistent results. |
For more information on EMRFS consistent view, see http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-consistent-view.html. Amazon's DynamoDB is automatically enabled to store metadata for EMRFS consistent view.
Info |
---|
NOTE: DynamoDB does not automatically purge metadata after a job completes. You should configure periodic purges of the database during off-peak hours. |
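Amazon provides the EMRFS CLI on the cluster master node for managing this metadata. As one possible sketch, a scheduled off-peak job could delete the consistent view metadata table, which EMRFS recreates on next use. EmrFSMetadata is the EMRFS default table name and may differ in your deployment: Code Block |
---|
# Sketch: run on the EMR master node during off-peak hours.
# EmrFSMetadata is the default EMRFS metadata table name; adjust if customized.
emrfs delete-metadata -m EmrFSMetadata |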
You must set up S3 buckets for read and write access. Info |
---|
NOTE: Within the platform, you must enable use of S3 as the default storage layer. This configuration is described later. |
For more information, see Enable S3 Access. On the EMR cluster, all users of the platform must have access to the following locations: Location | Description | Required Access |
---|
EMR Resources bucket and path | The S3 bucket and path where resources can be stored by the platform for execution of Spark jobs on the cluster. Info |
---|
NOTE: If server-side encryption is in use, only SSE-S3 encryption type is supported for the resources bucket. If you are using the same bucket for resources and data and SSE-KMS is in use, you may need to deploy a second bucket for EMR resources. For more information on server-side encryption, see Enable S3 Access. |
The locations are configured separately in the platform. | Read/Write |
EMR Logs bucket and path | The S3 bucket and path where logs are written for cluster job execution. | Read |
These locations are configured in the platform later. Users of the platform require the following policies to run jobs on the EMR cluster: Code Block |
---|
{
"Statement": [
{
"Effect": "Allow",
"Action": [
"elasticmapreduce:AddJobFlowSteps",
"elasticmapreduce:DescribeStep",
"elasticmapreduce:DescribeCluster",
"elasticmapreduce:ListInstanceGroups"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::__EMR_LOG_BUCKET__",
"arn:aws:s3:::__EMR_LOG_BUCKET__/*",
"arn:aws:s3:::__EMR_RESOURCE_BUCKET__",
"arn:aws:s3:::__EMR_RESOURCE_BUCKET__/*"
]
}
]
} |
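One way to apply the policy above is as an inline IAM user policy from the command line. This is a sketch only; the user name, policy name, and local file are placeholders, and the JSON above must be saved locally first: Code Block |
---|
# Sketch: attach the policy above (saved as emr-user-policy.json) to a user.
aws iam put-user-policy \
  --user-name <PLATFORM_USER> \
  --policy-name emr-job-access \
  --policy-document file://emr-user-policy.json |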
The following policies should be assigned to the EMR roles listed above for read/write access: Code Block |
---|
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::__EMR_LOG_BUCKET__",
"arn:aws:s3:::__EMR_LOG_BUCKET__/*",
"arn:aws:s3:::__EMR_RESOURCE_BUCKET__",
"arn:aws:s3:::__EMR_RESOURCE_BUCKET__/*"
]
} |
If EMRFS consistent view is enabled, the following policy must be added for users and the EMR cluster permissions: Code Block |
---|
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dynamodb:*"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
]
} | for EMRPlease complete the following sections to configure the to communicate with the EMR cluster.
As soon as you have installed the software, you should log in to the application and change the admin password. The initial admin password is the instanceId of the EC2 instance. For more information, see Change Password. EMR integration requires use of S3 as the base storage layer. Info |
---|
NOTE: The base storage layer must be set during initial installation and setup of the platform. |
See Set Base Storage Layer. To integrate with S3, additional configuration is required. See Enable S3 Access. After you have configured S3 to be the base storage layer, you must enable EMR integration. Steps: Search for the following setting: Code Block |
---|
"webapp.runInEMR": false, |
- Set the above value to true.
- Set the following value to false: Code Block |
---|
"webapp.runInHadoop": false, |
Verify the following property values: Code Block |
---|
"webapp.runInTrifactaServer": true,
"webapp.runInEMR": true,
"webapp.runInHadoop": false,
"webapp.runInDataflow": false,
"photon.enabled": true, |
The platform must be aware of the EMR cluster to which it connects. Steps: Under External Service Settings, enter your AWS EMR cluster ID. Click the Save button below the textbox.
For more information, see Admin Settings Page. If you have deployed your EMR cluster on a private subnet that is accessible outside of AWS, you must enable this property, which permits the extraction of the IP address of the master cluster node through DNS. Info |
---|
NOTE: This feature must be enabled if your EMR is accessible outside of AWS on a private network. |
Steps: Locate the following property and set it to true: Code Block |
---|
"emr.extractIPFromDNS": false, |
- Save your changes and restart the platform.
Depending on the authentication method you used, you must set the following properties. Authentication method | Properties and values |
---|
Use the default credential provider for all platform access, including EMR. Info |
---|
NOTE: This method requires the deployment of a custom credential provider JAR. |
|
Code Block |
---|
"aws.credentialProvider":"default",
"aws.emr.forceInstanceRole":false, |
| Use the default credential provider for all platform access. However, EC2 role-based IAM authentication is used for EMR. |
Code Block |
---|
"aws.credentialProvider":"default",
"aws.emr.forceInstanceRole":true, |
| EC2 role-based IAM authentication is used for all platform access. |
Code Block |
---|
"aws.credentialProvider":"instance", |
| For EMR, you can configure a set of Spark-related properties to manage the integration and its performance. Depending on the version of EMR with which you are integrating, the platform configuration must be modified to use the appropriate version of Spark to connect to EMR. For more information, see Configure for Spark. Through the Admin Settings page, you can specify the YARN queue to which to submit your Spark jobs. All Spark jobs from the platform are submitted to this queue. Steps: In platform configuration, locate the following: Code Block |
---|
"spark.props.spark.yarn.queue" |
- Specify the name of the queue.
- Save your changes.
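For example, assuming a hypothetical queue named spark_jobs, the resulting entry in platform configuration might look like the following: Code Block |
---|
"spark.props.spark.yarn.queue": "spark_jobs", |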
The following properties must be passed from the platform to Spark for proper execution on the EMR cluster. Info |
---|
NOTE: Do not modify these properties through the Admin Settings page. These properties must be added as extra properties through the Spark configuration block. Ignore any references elsewhere in the documentation to these properties and their settings. |
Code Block |
---|
"spark": {
...
"props": {
"spark.dynamicAllocation.enabled": "true",
"spark.shuffle.service.enabled": "true",
"spark.executor.instances": "0",
"spark.executor.memory": "2048M",
"spark.executor.cores": "2",
"spark.driver.maxResultSize": "0"
}
...
} |
Property | Description | Value |
---|
spark.dynamicAllocation.enabled | Enable dynamic allocation on the Spark cluster, which allows Spark to dynamically adjust the number of executors. | true |
spark.shuffle.service.enabled | Enable the Spark shuffle service, which manages the shuffle data for jobs instead of the executors. | true |
spark.executor.instances | Default count of executor instances. | See Sizing Guidelines. |
spark.executor.memory | Default memory allocation of executor instances. | See Sizing Guidelines. |
spark.executor.cores | Default count of executor cores. | See Sizing Guidelines. |
spark.driver.maxResultSize | Enable serialized results of unlimited size by setting this parameter to zero (0). | 0 |
For smaller datasets, the platform recommends using its own running environment. For larger datasets, if the size information is unavailable, the platform recommends by default that you run the job on the Hadoop cluster. For these jobs, the default publishing action for the job is specified to run on the Hadoop cluster, generating the output format defined by this parameter. Publishing actions, including output format, can always be changed as part of the job specification. As needed, you can change this default format. Code Block |
---|
"webapp.defaultHadoopFileFormat": "csv", |
Accepted values: csv, json, avro, pqt. For more information, see Run Job Page. You can set the following parameters as needed. Steps: Property | Required | Description |
---|
aws.emr.resource.bucket | Y | S3 bucket name where executables, libraries, and other resources can be stored that are required for Spark execution. Info |
---|
NOTE: If server-side encryption is in use, only SSE-S3 encryption type is supported for the resources bucket. If you are using the same bucket for resources and data and SSE-KMS is in use, you may need to deploy a second bucket for EMR resources. For more information on server-side encryption, see Enable S3 Access. |
| aws.emr.resource.path | Y | S3 path within the bucket where resources can be stored for job execution on the EMR cluster. Info |
---|
NOTE: Do not include leading or trailing slashes for the path value. |
| aws.emr.proxyUser | Y | This value defines the user for the platform to use when connecting to the cluster. Info |
---|
NOTE: Do not modify this value. |
| aws.emr.maxLogPollingRetries | N | Configure the maximum number of retries when polling for log files from EMR after job success or failure. Minimum value is 5. |
aws.emr.maxJobTimeoutMillis | N | Defines the timeout for EMR jobs in milliseconds. By default, this value is set to -1, which allows jobs to run for an infinite length of time. Info |
---|
NOTE: This setting should be modified only if you are experiencing problems with jobs hanging during execution on the EMR cluster. |
| aws.emr.tempfilesCleanupAge | N | Defines the number of days that temporary files in the /trifacta/tempfiles directory on EMR HDFS are permitted to age. By default, this value is set to 0, which means that cleanup is disabled. If needed, you can set this to a positive integer value. During each job run, the platform scans this directory for temp files older than the specified number of days and removes any that are found. This cleanup provides an additional level of system hygiene. Before enabling this secondary cleanup process, please execute the following command to clear the tempfiles directory: Code Block |
---|
hdfs dfs -rm -r -skipTrash /trifacta/tempfiles |
| For more information on configuring the platform to integrate with Redshift, see Create Redshift Connections. If needed, you can switch to a different EMR cluster through the application. For example, if the original cluster suffers a prolonged outage, you can switch clusters by entering the cluster ID of a new cluster. For more information, see Admin Settings Page. Batch Job Runner manages jobs executed on the EMR cluster. You can modify aspects of how jobs are executed and how logs are collected. For more information, see Configure Batch Job Runner. In environments where the EMR cluster is shared with other job-executing applications, you can review and specify the job tag prefix, which is prepended to job identifiers to avoid conflicts with other applications. Steps: Locate the following and modify if needed: Code Block |
---|
"aws.emr.jobTagPrefix": "TRIFACTA_JOB_", |
- Save your changes and restart the platform.
|