...

  • Look to cut data volume. Some job failures occur because of high data volumes. For jobs that execute across a large dataset, re-examine your data to remove unneeded rows and columns. Use the deduplicate transform to remove duplicate rows; a rough equivalent for pre-trimming the source outside the application is sketched after this list. See Remove Data.
  • Gather a new sample. In some cases, jobs can fail when run at scale because the sample displayed in the Transformer page did not include problematic data. If you have modified the number of rows or columns in your dataset, you can generate a new sample, which might illuminate the problematic data. However, gathering a new sample may fail as well, which can indicate a broader problem. See Samples Panel.

  • Change the running environment. If the job failed on the Photon running environment, try executing it on Spark.

    Tip: The Photon running environment is not suitable for jobs on large datasets. You should accept the running environment recommended in the Run Job page.
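
If you prefer to pre-trim the source data outside the application before re-running the job, the following is a rough Python sketch of the same idea. The file name and column names are hypothetical, and pandas stands in here for the deduplicate and column-removal steps you would normally build into your recipe:

    import pandas as pd

    # Hypothetical pre-trim of a large CSV source before re-running the job.
    # File and column names are illustrative only.
    df = pd.read_csv("source.csv")

    # Drop columns that the recipe never uses.
    df = df.drop(columns=["debug_payload", "raw_headers"])

    # Remove duplicate rows, mirroring the deduplicate transform in the recipe.
    df = df.drop_duplicates()

    df.to_csv("source_trimmed.csv", index=False)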

...

Tip: Search this log file for error.

Trifacta node logs

NOTE: You must be an administrator to access these logs.

On the Trifacta node, these logs are located in the following directory:

...

  • batch-job-runner.log. This log contains vital information about the state of any launched jobs.
  • webapp.log. This log monitors interactions with the web application. Issues related to jobs running locally on the Photon running environment can appear here.

...
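
These logs are plain text, so the earlier tip about searching for error can be scripted. The following is a minimal Python sketch, assuming a file name such as batch-job-runner.log in your installation's log directory; the path is illustrative and depends on your environment:

    import re
    import sys

    # Minimal sketch: print lines in a log file that mention errors or exceptions.
    # The default file name is taken from the list above; the location depends on
    # your installation and may need to be adjusted.
    def scan_log(path):
        pattern = re.compile(r"error|exception", re.IGNORECASE)
        with open(path, errors="replace") as f:
            for line_number, line in enumerate(f, start=1):
                if pattern.search(line):
                    print(f"{path}:{line_number}: {line.rstrip()}")

    if __name__ == "__main__":
        scan_log(sys.argv[1] if len(sys.argv) > 1 else "batch-job-runner.log")

Running the same scan against webapp.log can also surface issues with jobs executed on the Photon running environment.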