In Mesos coarse-grained mode, run $SPARK_HOME/sbin/start-mesos-shuffle-service.sh on all worker nodes with spark.shuffle.service.enabled set to true. For instance, you may do so through Marathon. In YARN mode, follow the instructions here. All other relevant configurations are optional and live under the spark.dynamicAllocation.* and spark.shuffle.service.* namespaces. For more detail, see the configurations page. (A configuration sketch appears at the end of this section.)

Resource Allocation Policy

At a high level, Spark should relinquish executors when they are no longer used and acquire executors when they are needed. Since there is no definitive way to predict whether an executor that is about to be removed will run a task in the near future, or whether a new executor that is about to be added will actually be idle, we need a set of heuristics to determine when to remove and request executors.

Request Policy

A Spark application with dynamic allocation enabled requests additional executors when it has pending tasks waiting to be scheduled. This condition necessarily implies that the existing set of executors is insufficient to simultaneously saturate all tasks that have been submitted but not yet finished.

Spark requests executors in rounds. The actual request is triggered when there have been pending tasks for spark.dynamicAllocation.schedulerBacklogTimeout seconds, and then triggered again every spark.dynamicAllocation.sustainedSchedulerBacklogTimeout seconds thereafter if the queue of pending tasks persists. Additionally, the number of executors requested in each round increases exponentially from the previous round. For instance, an application will add 1 executor in the first round, and then 2, 4, 8, and so on in subsequent rounds (see the ramp-up sketch at the end of this section).

The motivation for an exponential increase policy is twofold. First, an application should request executors cautiously in the beginning, in case only a few additional executors turn out to be sufficient. This echoes the justification for TCP slow start. Second, the application should be able to ramp up its resource usage in a timely manner in case many executors are actually needed.

Remove Policy

The policy for removing executors is much simpler. A Spark application removes an executor when it has been idle for more than spark.dynamicAllocation.executorIdleTimeout seconds. Note that, under most circumstances, this condition is mutually exclusive with the request condition, in that an executor should not be idle if there are still pending tasks to be scheduled.

Graceful Decommission of Executors

Before dynamic allocation, a Spark executor exits only when the associated application has also exited; at that point, all state associated with the executor is no longer needed and can be safely discarded. With dynamic allocation, however, the application is still running when an executor is explicitly removed. If the application then attempts to access state stored in or written by that executor, it will have to recompute that state. Spark therefore needs a mechanism to decommission an executor gracefully by preserving its state before removing it.
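
To make the configuration described above concrete, here is a minimal sketch of enabling dynamic allocation from application code, written in Scala against the standard SparkConf/SparkSession API. The specific values (timeouts, executor counts) are illustrative assumptions, not recommendations; the same keys can equally be set in spark-defaults.conf or passed with --conf to spark-submit.

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Illustrative values only; tune for your workload.
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      // Requires the external shuffle service to be running on each worker,
      // e.g. started via start-mesos-shuffle-service.sh as described above.
      .set("spark.shuffle.service.enabled", "true")
      .set("spark.dynamicAllocation.minExecutors", "1")
      .set("spark.dynamicAllocation.maxExecutors", "20")
      .set("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
      .set("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", "1s")
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")

    val spark = SparkSession.builder()
      .appName("dynamic-allocation-sketch")
      .config(conf)
      .getOrCreate()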
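
The exponential ramp-up of the request policy is easy to see with a small worked example. The helper below is a hypothetical illustration of the round-by-round totals only, not Spark's internal implementation: the real scheduler additionally caps each request at what the pending-task backlog needs, while this sketch models only the cap at spark.dynamicAllocation.maxExecutors.

    // Hypothetical sketch: each round adds twice as many executors as the
    // previous one (1, 2, 4, 8, ...); running totals are capped at
    // maxExecutors.
    def totalExecutorsPerRound(rounds: Int, maxExecutors: Int): Seq[Int] =
      Iterator.iterate(1)(_ * 2)   // executors added per round
        .take(rounds)
        .scanLeft(0)(_ + _)        // running total after each round
        .drop(1)
        .map(math.min(_, maxExecutors))
        .toSeq

    // With maxExecutors = 20, totals after each round: 1, 3, 7, 15, 20, 20.
    println(totalExecutorsPerRound(6, 20).mkString(", "))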
