Interface | Description |
---|---|
AccumulableParam<R,T> | Helper object defining how to accumulate values of a particular type. |
AccumulatorParam<T> | A simpler version of AccumulableParam where the only data type you can add in is the same type as the accumulated value. |
CleanerListener | Listener class used for testing when any item has been cleaned by the Cleaner class. |
CleanupTask | Classes that represent cleaning tasks. |
ExecutorAllocationClient | A client that communicates with the cluster manager to request or kill executors. |
FutureAction<T> | A future for the result of an action to support cancellation. |
Logging | :: DeveloperApi :: Utility trait for classes that want to log data. |
MapOutputTrackerMessage | |
Partition | An identifier for a partition in an RDD. |
SparkJobInfo | Exposes information about Spark Jobs. |
SparkStageInfo | Exposes information about Spark Stages. |
TaskEndReason | :: DeveloperApi :: Various possible reasons why a task ended. |
TaskFailedReason | :: DeveloperApi :: Various possible reasons why a task failed. |
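The accumulator interfaces above are used by supplying an AccumulatorParam to SparkContext.accumulator. Below is a minimal sketch, assuming a Spark 1.x driver program; the MaxParam object, app name, and local master URL are illustrative, not part of this listing:

```scala
import org.apache.spark.{AccumulatorParam, SparkConf, SparkContext}

// Hypothetical AccumulatorParam that tracks the maximum Double seen
// rather than a sum; "add" must be commutative and associative.
object MaxParam extends AccumulatorParam[Double] {
  def zero(initialValue: Double): Double = Double.NegativeInfinity
  def addInPlace(r1: Double, r2: Double): Double = math.max(r1, r2)
}

object AccumulatorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("acc-sketch").setMaster("local[2]"))
    val maxSeen = sc.accumulator(Double.NegativeInfinity)(MaxParam)
    // Tasks add values on the executors; only the driver may read .value.
    sc.parallelize(Seq(1.0, 7.0, 3.0)).foreach(x => maxSeen += x)
    println(maxSeen.value) // 7.0
    sc.stop()
  }
}
```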
Class | Description |
---|---|
Accumulable<R,T> | A data type that can be accumulated, i.e. one with a commutative and associative "add" operation, but where the result type, R, may be different from the element type being added, T. |
Accumulator<T> | A simpler value of Accumulable where the result type being accumulated is the same as the type of the elements being merged. |
AccumulatorParam.DoubleAccumulatorParam$ | |
AccumulatorParam.FloatAccumulatorParam$ | |
AccumulatorParam.IntAccumulatorParam$ | |
AccumulatorParam.LongAccumulatorParam$ | |
Accumulators | |
Aggregator<K,V,C> | :: DeveloperApi :: A set of functions used to aggregate data. |
CacheManager | Spark class responsible for passing RDD partition contents to the BlockManager and making sure a node doesn't load two copies of an RDD at once. |
CleanBroadcast | |
CleanRDD | |
CleanShuffle | |
CleanupTaskWeakReference | A WeakReference associated with a CleanupTask. |
ComplexFutureAction<T> | A FutureAction for actions that could trigger multiple Spark jobs. |
ContextCleaner | An asynchronous cleaner for RDD, shuffle, and broadcast state. |
Dependency<T> | :: DeveloperApi :: Base class for dependencies. |
ExceptionFailure | :: DeveloperApi :: Task failed due to a runtime exception. |
ExecutorAllocationManager | An agent that dynamically allocates and removes executors based on the workload. |
ExecutorLostFailure | :: DeveloperApi :: The task failed because the executor that it was running on was lost. |
FetchFailed | :: DeveloperApi :: Task failed to fetch shuffle data from a remote node. |
GetMapOutputStatuses | |
GrowableAccumulableParam<R,T> | |
HashPartitioner | A Partitioner that implements hash-based partitioning using Java's Object.hashCode. |
Heartbeat | A heartbeat from executors to the driver. |
HeartbeatReceiver | Lives in the driver to receive heartbeats from executors. |
HeartbeatResponse | |
HttpFileServer | |
HttpServer | An HTTP server for static content, used to allow worker nodes to access JARs added to the SparkContext as well as classes created by the interpreter when the user types in code. |
InterruptibleIterator<T> | :: DeveloperApi :: An iterator that wraps around an existing iterator to provide task-killing functionality. |
JavaFutureActionWrapper<S,T> | |
JavaSparkListener | Java clients should extend this class instead of implementing SparkListener directly. |
MapOutputTracker | Class that keeps track of the location of the map output of a stage. |
MapOutputTrackerMaster | MapOutputTracker for the driver. |
MapOutputTrackerMasterActor | Actor class for MapOutputTrackerMaster. |
MapOutputTrackerWorker | MapOutputTracker for the executors, which fetches map output information from the driver's MapOutputTrackerMaster. |
NarrowDependency<T> | :: DeveloperApi :: Base class for dependencies where each partition of the child RDD depends on a small number of partitions of the parent RDD. |
OneToOneDependency<T> | :: DeveloperApi :: Represents a one-to-one dependency between partitions of the parent and child RDDs. |
Partitioner | An object that defines how the elements in a key-value pair RDD are partitioned by key. |
RangeDependency<T> | :: DeveloperApi :: Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs. |
RangePartitioner<K,V> | A Partitioner that partitions sortable records by range into roughly equal ranges. |
Resubmitted | :: DeveloperApi :: A ShuffleMapTask that completed successfully earlier, but we lost the executor before the stage completed. |
SecurityManager | Spark class responsible for security. |
SerializableWritable<T extends org.apache.hadoop.io.Writable> | |
ShuffleDependency<K,V,C> | :: DeveloperApi :: Represents a dependency on the output of a shuffle stage. |
SimpleFutureAction<T> | A FutureAction holding the result of an action that triggers a single job. |
SparkConf | Configuration for a Spark application. |
SparkContext | Main entry point for Spark functionality. |
SparkContext.DoubleAccumulatorParam$ | |
SparkContext.FloatAccumulatorParam$ | |
SparkContext.IntAccumulatorParam$ | |
SparkContext.LongAccumulatorParam$ | |
SparkEnv | :: DeveloperApi :: Holds all the runtime environment objects for a running Spark instance (either master or worker), including the serializer, Akka actor system, block manager, map output tracker, etc. |
SparkFiles | Resolves paths to files added through SparkContext.addFile(). |
SparkFirehoseListener | Class that allows users to receive all SparkListener events. |
SparkHadoopWriter | Internal helper class that saves an RDD using a Hadoop OutputFormat. |
SparkJobInfoImpl | |
SparkStageInfoImpl | |
SparkStatusTracker | Low-level status reporting APIs for monitoring job and stage progress. |
SSLOptions | A common container for SSL configuration options. |
StopMapOutputTracker | |
Success | :: DeveloperApi :: Task succeeded. |
TaskCommitDenied | :: DeveloperApi :: Task requested the driver to commit, but was denied. |
TaskContext | Contextual information about a task which can be read or mutated during execution. |
TaskContextHelper | This class exists to restrict the visibility of TaskContext setters. |
TaskContextImpl | |
TaskKilled | :: DeveloperApi :: Task was killed intentionally and needs to be rescheduled. |
TaskResultLost | :: DeveloperApi :: The task finished successfully, but the result was lost from the executor's block manager before it was fetched. |
TaskState | |
TestUtils | Utilities for tests. |
UnknownReason | :: DeveloperApi :: We don't know why the task ended -- for example, because of a ClassNotFound exception when deserializing the task result. |
WritableConverter<T> | A class encapsulating how to convert a Writable back to some type T. |
WritableFactory<T> | A class encapsulating how to convert some type T to Writable. |
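Several of these classes come together in an ordinary driver program: a SparkConf configures a SparkContext, and a Partitioner such as HashPartitioner controls how a pair RDD is laid out across partitions. A minimal sketch follows; the app name, master URL, and sample data are illustrative assumptions:

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // pair-RDD implicits on older 1.x releases

object PartitionerSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("part-sketch").setMaster("local[2]"))

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    // Hash-partition by key so all records with the same key share a partition.
    val byKey = pairs.partitionBy(new HashPartitioner(4))

    println(byKey.partitioner)        // Some(HashPartitioner)
    println(byKey.partitions.length)  // 4
    sc.stop()
  }
}
```

Co-partitioning two RDDs with the same HashPartitioner is what lets later joins between them avoid a shuffle.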
Enum | Description |
---|---|
JobExecutionStatus | |
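JobExecutionStatus values (RUNNING, SUCCEEDED, FAILED, UNKNOWN) are what SparkStatusTracker reports for a job. A hedged sketch of polling an asynchronously launched job, assuming a Spark 1.x driver; the data size and master URL are illustrative:

```scala
import org.apache.spark.{JobExecutionStatus, SparkConf, SparkContext}
import scala.concurrent.Await
import scala.concurrent.duration.Duration

object StatusSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("status-sketch").setMaster("local[2]"))

    // countAsync returns a FutureAction, leaving this thread free to poll.
    val job = sc.parallelize(1 to 1000000).countAsync()

    for (jobId <- sc.statusTracker.getActiveJobIds();
         info  <- sc.statusTracker.getJobInfo(jobId)) {
      if (info.status() == JobExecutionStatus.RUNNING) {
        println(s"Job $jobId is still running")
      }
    }

    println("count = " + Await.result(job, Duration.Inf)) // block for the result
    sc.stop()
  }
}
```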
Exception | Description |
---|---|
ServerStateException | Exception type thrown by HttpServer when it is in the wrong state for an operation. |
SparkDriverExecutionException | Exception thrown when execution of some user code in the driver process fails. |
SparkException | |
TaskKilledException | :: DeveloperApi :: Exception thrown when a task is explicitly killed (i.e., task failure is expected). |
TaskNotSerializableException | Exception thrown when a task cannot be serialized. |
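Most of these exceptions surface on the driver: when a job's tasks repeatedly fail, the action that launched the job throws SparkException. A short sketch under that assumption; the deliberate failure and master URL are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkException}

object FailureSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("fail-sketch").setMaster("local[2]"))
    try {
      sc.parallelize(1 to 10).map { i =>
        if (i == 5) throw new IllegalStateException("boom") // deliberate task failure
        i
      }.count()
    } catch {
      case e: SparkException =>
        // The failing task's exception is wrapped in a SparkException on the driver.
        println("Job failed: " + e.getMessage)
    } finally {
      sc.stop()
    }
  }
}
```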
Some classes in this package, such as Accumulator and StorageLevel, are also used in Java, but the org.apache.spark.api.java package contains the main Java API.