For individual users of the platform, an administrator may want to submit a custom set of properties to the Hadoop cluster for each job execution. For example, you may wish to change the available Java heap size for users who submit large jobs. This section describes how to define and deploy user-specific Java properties files for cluster jobs.

NOTE: User-specific custom properties override or are appended to any other custom properties that are passed to Hadoop. Suppose the user's Java properties file contains a single property:

  • If the property is not specified elsewhere in the job definition, it is appended to any other properties that are passed.
  • If the property is specified elsewhere in the job definition, the value from the Java properties file overrides the other custom property value.
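The override/append rules above can be modeled as a simple dictionary merge in which user-specific values take precedence. The following is an illustrative sketch only, not the platform's actual implementation; the property names and values are example assumptions:

```python
def merge_properties(job_props, user_props):
    """Model of the documented merge behavior: a user-specific property
    absent from the job definition is appended; one present in both
    overrides the job definition's value. Illustrative only."""
    merged = dict(job_props)
    merged.update(user_props)  # user-specific values win on conflict
    return merged

# The job definition sets executor memory; the user's properties file
# sets a YARN queue (appended) and a larger executor memory (overrides).
job = {"spark.props.spark.executor.memory": "2g"}
user = {"spark.props.spark.yarn.queue": "analytics",
        "spark.props.spark.executor.memory": "8g"}
merged = merge_properties(job, user)
```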

NOTE: You cannot specify user-specific properties for S3 jobs.


Enable Java properties directory

To enable this feature, define the Java properties directory for Spark in the platform configuration.



  1. Set the following property to the directory where the user-specific properties files are stored on the platform. Example:

    "spark.userPropsDirectory": "/opt/trifacta/conf/usr",

  2. Save your changes and restart the platform.

Required Permissions

For the above locations, the platform requires the following permissions:

Define user-specific properties files

For each user who passes in custom properties, a separate file must be created in the appropriate directory, using the following filename pattern:


userEmail is the email address that is registered for the user with the platform. For example, for userId, the filename is

File Format:

Each file must follow the Java properties file format: each non-comment line defines a single property as a name=value pair, and lines that begin with # or ! are treated as comments.


NOTE: Property names must be fully qualified. For example, if you are modifying the Spark YARN queue for the user, the property name must be spark.props.spark.yarn.queue. Setting spark.yarn.queue alone does not work.
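For instance, a user-specific properties file that moves the user's jobs to a dedicated YARN queue and raises the driver heap might look like the following. The queue name and memory value are illustrative assumptions; substitute values appropriate for your cluster:

```properties
# User-specific Spark properties (values are illustrative).
# Note the fully qualified names with the spark.props. prefix.
spark.props.spark.yarn.queue=analytics
spark.props.spark.driver.memory=4g
```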

For more information on this format, see