public class Accumulator<T> extends Accumulable<T,T>
A simpler value of Accumulable where the result type being accumulated is the same
as the types of elements being merged, i.e. variables that are only "added" to through an
associative operation and can therefore be efficiently supported in parallel. They can be used
to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric
value types, and programmers can add support for new types.
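A sketch of what support for a new type can look like, assuming the AccumulatorParam API documented on this page (the object name and the accumulated Set[String] type are illustrative): a parameter object only needs to define zero and addInPlace.

```scala
import org.apache.spark.AccumulatorParam

// Illustrative only: accumulate sets of strings by defining what an "empty"
// value looks like (zero) and how two partial results merge (addInPlace).
object StringSetParam extends AccumulatorParam[Set[String]] {
  def zero(initialValue: Set[String]): Set[String] = Set.empty[String]
  def addInPlace(s1: Set[String], s2: Set[String]): Set[String] = s1 ++ s2
}
```

With such a parameter object, an accumulator of the new type can then be created with SparkContext.accumulator as described below, e.g. sc.accumulator(Set.empty[String])(StringSetParam).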
An accumulator is created from an initial value v by calling SparkContext.accumulator(T, org.apache.spark.AccumulatorParam<T>).
Tasks running on the cluster can then add to it using the += operator defined in Accumulable.
However, they cannot read its value. Only the driver program can read the accumulator's value,
using its value method.
The interpreter session below shows an accumulator being used to add up the elements of an array:
scala> val accum = sc.accumulator(0)
accum: spark.Accumulator[Int] = 0
scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
...
10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s
scala> accum.value
res2: Int = 10
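The same pattern also works in a compiled program. The sketch below additionally passes a name (the "counter" string is illustrative), which corresponds to the optional scala.Option<String> argument accepted by the constructors listed next; it assumes a local master purely for illustration.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AccumulatorExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("accumulator-example").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Named accumulator: the name is optional and only affects how it is reported.
    val accum = sc.accumulator(0, "counter")

    // Tasks only add to the accumulator; they never read it.
    sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)

    // Only the driver reads the merged value.
    println(accum.value) // 10

    sc.stop()
  }
}
```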
| Constructor and Description |
|---|
| Accumulator(T initialValue, AccumulatorParam<T> param) |
| Accumulator(T initialValue, AccumulatorParam<T> param, scala.Option<String> name) |
Methods inherited from class org.apache.spark.Accumulable: add, id, localValue, merge, name, setValue, toString, value, zero
public Accumulator(T initialValue, AccumulatorParam<T> param, scala.Option<String> name)
public Accumulator(T initialValue, AccumulatorParam<T> param)
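Accumulators are normally obtained through SparkContext.accumulator rather than constructed directly; the sketch below only illustrates the two constructor shapes, with a hand-written Int parameter object so it does not depend on where a given release keeps the built-in implicits.

```scala
import org.apache.spark.{Accumulator, AccumulatorParam}

// Hand-written parameter object for Int (illustrative).
object IntParam extends AccumulatorParam[Int] {
  def zero(initial: Int): Int = 0
  def addInPlace(a: Int, b: Int): Int = a + b
}

// Two-argument constructor: initial value plus the parameter object.
val unnamed = new Accumulator(0, IntParam)

// Three-argument constructor: the scala.Option[String] name is optional metadata.
val named = new Accumulator(0, IntParam, Some("requests"))
```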