For individual users of the Trifacta platform, an administrator may want to submit a custom set of properties to the Hadoop cluster for each job execution. For example, you may want to increase the available Java heap size for users who submit large jobs. This section describes how to define and deploy user-specific Java properties files for cluster jobs.
NOTE: User-specific custom properties override or are appended to any other custom properties that are passed to Hadoop. Suppose the user's Java properties file contains a single property:
- If the property is not specified elsewhere in the job definition, it is appended to any other properties that are passed.
- If the property is specified elsewhere in the job definition, the Java properties file overrides the other custom property value.
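The append/override behavior described above can be sketched as a simple map merge, in which the user-specific file is applied last. This is an illustrative sketch only, not the platform's actual implementation, and the Spark property names below are placeholders:

```python
def merge_job_properties(job_props, user_props):
    """Apply user-specific properties on top of job-level properties.

    Keys not present in the job definition are appended; keys present
    in both are overridden by the user-specific value.
    (Illustrative sketch only; not the platform's actual code.)
    """
    merged = dict(job_props)
    merged.update(user_props)
    return merged


job = {"spark.driver.memory": "2g", "spark.executor.cores": "2"}
user = {
    "spark.driver.memory": "8g",    # present in both: user value overrides
    "spark.executor.memory": "6g",  # new key: appended to the job properties
}

print(merge_job_properties(job, user))
```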
NOTE: You cannot specify user-specific properties for S3 jobs.
Enable Java properties directory
To enable this feature, define the Java properties directory for Spark in the platform configuration.
Set the following properties to the directories where the user-specific property files can be stored on the Trifacta node. Example:
- Save your changes and restart the platform.
For the above locations, the Trifacta service requires the following permissions:
- Execute permission on the directories defined above.
- Read permission on the files within them.
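The permissions above can be granted with standard `chmod` commands. The sketch below uses a temporary directory and a placeholder filename so that it is runnable; in practice, substitute the directory you configured in the platform settings:

```shell
# PROPS_DIR is a stand-in for the configured properties directory;
# a temporary directory is used here so the commands can be tested.
PROPS_DIR=$(mktemp -d)
touch "$PROPS_DIR/user.properties"   # placeholder filename

chmod u+x "$PROPS_DIR"               # execute permission on the directory
chmod u+r "$PROPS_DIR"/*.properties  # read permission on the files within it

ls -ld "$PROPS_DIR"
```

Ensure that the account under which the Trifacta service runs is the one holding these permissions.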
Define user-specific properties files
For each user who passes in custom properties, a separate file must be created in the appropriate directory with the following filename pattern:
where userEmail is the email address that is registered for the user with the Trifacta platform. For example, for the userEmail
firstname.lastname@example.org, the filename is
Each file must follow the Java properties file format.
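In this format, each line defines one `key=value` pair, and lines beginning with `#` or `!` are comments. A minimal file might look like the following; the property names and values here are illustrative only:

```properties
# User-specific overrides for Spark jobs (example values only)
spark.driver.memory=8g
spark.executor.memory=6g
```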
For more information on this format, see https://docs.oracle.com/cd/E23095_01/Platform.93/ATGProgGuide/html/s0204propertiesfileformat01.html.