For individual users of the platform, an administrator may want to submit a custom set of properties to the Hadoop cluster for each job execution. For example, you may wish to change the available Java heap size for users who submit large jobs. This section describes how to define and deploy user-specific Java properties files for cluster jobs.
NOTE: User-specific custom properties override or append any other custom properties that are passed to Hadoop. Suppose the Java properties file contains a single property: if that property is also passed globally, the user-specific value overrides it for that user's jobs; if it is not, it is appended to the properties passed to Hadoop.
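For example, a user-specific file that contains only the following line (the property and value are illustrative) overrides any globally configured value of spark.executor.memory for that user's jobs, while all other global properties continue to apply:

spark.executor.memory=8g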
NOTE: You cannot specify user-specific properties for S3 jobs.
This feature is enabled by defining the Java properties directory for Spark in the platform configuration.
Steps:
Set the following property to the directory where the user-specific property files are stored. Example:
"spark.userPropsDirectory": "/opt/trifacta/conf/usr", |
For the above location, the platform requires appropriate access permissions on the directory specified by spark.userPropsDirectory.
For each user who passes in custom properties, a separate file must be created in the appropriate directory with the following filename pattern:

userEmail-user.properties

where:

userEmail is the email address for the user as registered with the platform. For example, for the user joe@example.com, the filename is joe@example.com-user.properties.
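Combining the directory configured above with this filename pattern, the file for this user would be stored at the following path, with contents such as the illustrative property below:

/opt/trifacta/conf/usr/joe@example.com-user.properties:
spark.yarn.queue=analytics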
File Format:
Each file must follow the Java properties file format. For example:
property.a=value.a
property.b=value.b
NOTE: Property names must use the full property name. For example, if you are modifying the Spark YARN queue for the user, the property name must be spark.yarn.queue.
For more information on this format, see https://docs.oracle.com/cd/E23095_01/Platform.93/ATGProgGuide/html/s0204propertiesfileformat01.html.
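If needed, you can verify that a user-specific file parses as expected by loading it with the standard java.util.Properties API. The following is a minimal sketch; the file path is illustrative and should point into your configured spark.userPropsDirectory:

import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class UserPropsCheck {
    public static void main(String[] args) throws IOException {
        // Illustrative path: configured directory + userEmail-user.properties
        String path = "/opt/trifacta/conf/usr/joe@example.com-user.properties";

        Properties props = new Properties();
        try (FileReader reader = new FileReader(path)) {
            props.load(reader); // parses standard Java properties format
        }

        // Print each property to confirm that full property names were used.
        props.forEach((key, value) -> System.out.println(key + "=" + value));
    }
}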