This running environment is available through the Trifacta Server.
NOTE: This running environment is enabled by default.
Required Configuration: See Configure Trifacta Server Photon Running Environment.
Supported Output Formats: CSV, JSON, Avro, Parquet
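The supported output formats are standard interchange formats. As an illustration (not part of the product), CSV and JSON outputs can be produced or consumed with the Python standard library; Avro and Parquet require third-party libraries such as fastavro or pyarrow:

```python
import csv
import io
import json

# Sample records such as a job might emit.
rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

# CSV: write records with a header row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text)

# JSON: one object per line (newline-delimited JSON).
json_text = "\n".join(json.dumps(r) for r in rows)
print(json_text)
```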
NOTE: When a recipe containing a user-defined function is applied to text data, any null characters cause records to be truncated during job execution.
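Because embedded null characters can truncate records, one defensive option is to strip them from text data before applying a user-defined function. A minimal illustrative sketch in Python (the helper name is an assumption, not a product API):

```python
def strip_null_chars(text: str) -> str:
    """Remove NUL (\\x00) characters, which can cause records to be
    truncated when a user-defined function is applied to text data."""
    return text.replace("\x00", "")

# Example: a record with an embedded null character.
raw = "first\x00second"
print(strip_null_chars(raw))  # -> "firstsecond"
```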
Spark Running Environment
The Spark running environment is the new default running environment. If you have deployed the Trifacta platform with a Hadoop cluster, jobs are executed on Spark by default.
Required Installation: None. This running environment deploys Spark libraries from the Trifacta platform, so no separate Spark installation is required.
- The Spark running environment requires a Hadoop cluster as the backend job execution environment.
- In the Run Job page, select Run on Hadoop.
The Trifacta Server (Photon) running environment executes on the Trifacta node and provides processing to the front-end client and at execution time.
In the Run Job page, select Run on Trifacta Server. The Trifacta Server (Photon) running environment is enabled by default. For more information on disabling it, see Configure Trifacta Server Photon Running Environment.
|Type|Running Environment|Configuration Parameters|Notes|
|Hadoop Backend|Spark| |The Spark running environment is the default configuration.|
|Client Front-end and non-Hadoop Backend|Trifacta Photon| | |
NOTE: If your environment has no cluster-based running environment, such as Spark on Hadoop, for running large-scale jobs, this parameter is not used. All jobs are run on the Trifacta Server.
|Running Environment|Default Condition|
|Trifacta Photon|Size of primary datasource is less than or equal to the above value in bytes.|
|Cluster-based running environment (Spark)|Size of primary datasource is greater than the above value in bytes.|
Setting this value too high forces more jobs onto the Trifacta Server.
Tip: To force the default setting to always be a Hadoop or bulk running environment, set this value to 0.
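The selection rule above can be sketched as a simple size comparison. The function below is illustrative only; the name and signature are assumptions, not an actual platform parameter or API:

```python
def choose_running_environment(datasource_size_bytes: int,
                               threshold_bytes: int) -> str:
    """Select the default running environment by comparing the size of the
    primary datasource against the configured byte threshold."""
    if datasource_size_bytes <= threshold_bytes:
        # At or below the threshold: run on the Trifacta Server.
        return "Trifacta Photon"
    # Above the threshold: run on the cluster-based environment.
    return "Spark"

# With a 1 GB threshold, a 10 MB datasource stays on the Trifacta Server:
print(choose_running_environment(10_000_000, 1_000_000_000))  # -> "Trifacta Photon"

# A threshold of 0 routes every job to the cluster:
print(choose_running_environment(10_000_000, 0))  # -> "Spark"
```

This also shows why setting the threshold too high pushes more jobs onto the Trifacta Server: every datasource at or below the value is handled there.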