...

For an expected number of peak concurrent (active) users, P, set the following parameters.

D s config

Parameter | Default Value | Recommended Value | Example
webapp.numProcesses | 2 | P / 15, rounded up to the nearest integer, minimum of 2 | for P = 40, set to 3
webapp.database.pool.maxConnections | 10 | 5 * webapp.numProcesses | for webapp.numProcesses = 3, set to 15
vfs-service.numProcesses | 2 | P / 50, rounded up to the nearest integer, minimum of 2 | for P = 225, set to 5
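The formulas above can be applied mechanically for any expected peak user count P. The following is an illustrative sketch only (the function name and output format are invented for this example, not part of the product):

# Illustrative sketch: compute the recommended values from the formulas above
# for a given expected number of peak concurrent users, P.
import math

def recommended_settings(peak_users: int) -> dict:
    # webapp.numProcesses: P / 15, rounded up, minimum of 2
    webapp_processes = max(2, math.ceil(peak_users / 15))
    return {
        "webapp.numProcesses": webapp_processes,
        "webapp.database.pool.maxConnections": 5 * webapp_processes,
        # vfs-service.numProcesses: P / 50, rounded up, minimum of 2
        "vfs-service.numProcesses": max(2, math.ceil(peak_users / 50)),
    }

# For P = 40: webapp.numProcesses = 3, maxConnections = 15, vfs-service.numProcesses = 2
print(recommended_settings(40))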

Other applicable parameters

...

Info

NOTE: Avoid modifying these parameters unless instructed by

D s support
.


Parameter | Default Value
batch-job-runner.executor.maxPoolSize | 50
batch-job-runner.db.maxPoolSize | 50
batch-job-runner.systemProperties.httpMaxConnectionsPerDestination | 50

Limit Application Memory Utilization

Several

D s node
 services can have their memory utilization limited by adjusting the -Xmx value in their JVM configuration. Modify the following parameters:

Parameter | Default Configuration
batch-job-runner.jvmOptions | -Xmx1024m
diagnostic-server.jvmOptions | -Xmx512m
data-service.jvmOptions | -Xmx128m
spark-job-service.jvmOptions | -Xmx128m
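When adjusting these limits, it helps to verify that the combined heap ceilings still fit in the node's physical memory alongside Photon processes and other services. The following is an illustrative calculation only; the parsing helper is invented for this sketch and is not part of the product:

# Illustrative sketch: sum the -Xmx ceilings from the table above and compare
# the total against the memory you intend to reserve for JVM services.
def xmx_to_mb(jvm_options: str) -> int:
    """Parse an -Xmx value such as '-Xmx1024m' into megabytes."""
    value = jvm_options.split("-Xmx", 1)[1]
    if value.endswith("g"):
        return int(value[:-1]) * 1024
    return int(value.rstrip("m"))

jvm_options = {
    "batch-job-runner.jvmOptions": "-Xmx1024m",
    "diagnostic-server.jvmOptions": "-Xmx512m",
    "data-service.jvmOptions": "-Xmx128m",
    "spark-job-service.jvmOptions": "-Xmx128m",
}

total_mb = sum(xmx_to_mb(v) for v in jvm_options.values())
print(f"Combined JVM heap ceiling: {total_mb} MB")  # 1792 MB with the defaults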

Other services have low memory requirements. 

...

Jobs run on the 

D s node
 and "Quick Scan" samples are executed by the Photon running environment embedded on the
D s node
, which runs alongside the application itself. Two main parameters can be used to tune the concurrency of job execution and the throughput of individual jobs:

Parameter | Description | Default
batchserver.workers.photon.max | Maximum number of simultaneous Photon processes; once exceeded, jobs are queued | 2
photon.numThreads | Number of threads used by each Photon process | 4

Increasing the number of concurrent processes allows more users' jobs to run at the same time. However, it also increases resource contention among the jobs and the application services.
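One way to reason about this tradeoff is to compare the worst-case Photon thread count against the cores available on the node. This comparison is a general sizing heuristic, not a product requirement; the sketch below only illustrates the arithmetic:

# Illustrative sketch: estimate the peak number of Photon threads and compare
# it against the cores available on the node.
import os

batchserver_workers_photon_max = 2   # batchserver.workers.photon.max
photon_num_threads = 4               # photon.numThreads

peak_photon_threads = batchserver_workers_photon_max * photon_num_threads
available_cores = os.cpu_count() or 1

print(f"Peak Photon threads: {peak_photon_threads}, available cores: {available_cores}")
if peak_photon_threads >= available_cores:
    print("Expect contention with application services when all workers are busy.")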

Photon's execution is purely in memory. It does not spill to disk when the total data size exceeds available memory. As a result, you should configure limits on Photon's memory utilization. If a job exceeds the configured memory threshold, it is killed by a parent process tasked with monitoring the job.

Parameter | Description | Default
photon.memoryMonitorEnabled | Set to true to enable the memory monitor; set to false to leave memory use limited only by the operating system | true
photon.memoryPercentageThreshold | Percentage of available system memory that each Photon process can use (when photon.memoryMonitorEnabled is true) | 50


Tip

A reasonable rule of thumb: the input data size should not exceed one tenth of the job's memory limit. This accounts for joins, pivots, and other operations that can inflate memory usage well beyond the input size. The threshold is intended as a safeguard, and it is unlikely that all running jobs will approach their memory limits at the same time, so you can "oversubscribe" and set this threshold slightly higher than 100 / batchserver.workers.photon.max.
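As a concrete illustration of the arithmetic in this tip (the system memory figure and the amount of oversubscription are examples only, not recommendations):

# Illustrative sketch of the sizing rule of thumb described above.
system_memory_gb = 64                       # total system memory on the node (example)
batchserver_workers_photon_max = 2          # batchserver.workers.photon.max

# Oversubscribed threshold: slightly more than an even split across workers.
even_split = 100 / batchserver_workers_photon_max           # 50
photon_memory_percentage_threshold = even_split + 10        # e.g. 60 (example choice)

per_job_limit_gb = system_memory_gb * photon_memory_percentage_threshold / 100
max_input_size_gb = per_job_limit_gb / 10   # rule of thumb: input <= 1/10 of the limit

print(f"Per-job memory limit: {per_job_limit_gb:.1f} GB")              # 38.4 GB
print(f"Suggested maximum input size per job: {max_input_size_gb:.1f} GB")  # 3.8 GB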

In addition to a memory threshold, the execution time of any Photon job can be limited via the following parameters:

Parameter | Description | Default
photon.timeoutEnabled | Set to true to enable the timeout; set to false for unlimited execution time | true
photon.timeoutMinutes | Time in minutes after which the Photon process is killed (when photon.timeoutEnabled is true) | 180

For more information, see Configure Photon Running Environment.