...

Info

NOTE: Spark is the default running environment for Hadoop cluster-based job execution in Release 4.0 and later. Unless you are upgrading from a pre-Release 4.0 environment, no additional configuration is required.

...

Info

NOTE: When a recipe containing a user-defined function is applied to text data, any non-printing (control) characters cause records to be truncated by the Spark running environment during Hadoop job execution. In these cases, please execute the job on the Photon running environment.
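As an alternative pre-processing workaround, affected records can be sanitized before the job runs. The following is a minimal, hypothetical Python sketch (not part of the product) that strips non-printing control characters from a text field while preserving common whitespace:

```python
import re

# Match C0 and C1 control characters, excluding common whitespace
# (tab \x09, newline \x0a, carriage return \x0d).
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\x9f]")

def strip_control_chars(text: str) -> str:
    """Remove non-printing control characters from a text field."""
    return CONTROL_CHARS.sub("", text)

# Example: a record containing an embedded bell (\x07) and NUL (\x00).
record = "order\x0742\x00,complete"
print(strip_control_chars(record))  # order42,complete
```

Sanitizing the source data this way avoids the truncation behavior entirely, regardless of which running environment executes the job.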

 

  • You cannot publish through Cloudera Navigator for Spark jobs.

...

When Spark execution is enabled, Spark is available like any other running environment in the application. When executing a job, select the Run on Hadoop Spark option from the drop-down on the Run Job page. See Run Job Page.

...