- After you have upgraded to Release 4.0, you must re-enable Pig for job execution. See Running Environment Options.
- For each of your existing Python UDFs, author an equivalent UDF in Java:
    - The new Java UDF must have the same name and schema as the Python UDF it replaces.
- Deploy each new Java UDF to the .
- For more information on authoring and deployment, see Java UDFs.
- When all of your existing Python UDFs have been converted to Java and installed on the , enable the Spark running environment on the node. See Running Environment Options.
- At this point:
    - When you select Run on , the running environment uses the Python UDFs.
        - In later versions, this selection is Photon.
    - When you select Run in on Hadoop, the Spark running environment uses the Java UDFs.
        - In later versions, this selection is Spark.
- For each of your Python UDFs, verify the job results of executions in each running environment.
- Remove the Python UDFs from the .
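The conversion step above requires that each Java UDF keep the name and schema of the Python UDF it replaces. The sketch below is a hypothetical illustration of that requirement, not the platform's actual UDF interface: it assumes a one-string-in, one-string-out Python UDF named `normalizePhone` and shows a plain Java class that preserves that name and signature. The class name, method name, and logic are all illustrative.

```java
// Hypothetical example: suppose the original Python UDF was
//     def normalizePhone(s):
//         return re.sub(r"\D", "", s) if s is not None else None
// The Java replacement keeps the same name and the same
// string-to-string schema so existing recipes keep working.
public class NormalizePhone {

    // Same schema as the Python UDF: one String input, one String output.
    public String normalizePhone(String input) {
        if (input == null) {
            return null; // preserve the Python UDF's null handling
        }
        // Strip every non-digit character, matching re.sub(r"\D", "", s)
        return input.replaceAll("\\D", "");
    }

    public static void main(String[] args) {
        NormalizePhone udf = new NormalizePhone();
        System.out.println(udf.normalizePhone("(555) 123-4567")); // prints 5551234567
    }
}
```

Keeping the name and schema identical is what lets you swap running environments (and later remove the Python UDFs) without editing the recipes that call the function.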
For Release 4.1 upgrade: