- abortJob(JobContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Aborts a job after the writes fail.
- abortJob(JobContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
-
- abortTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Aborts a task after the writes have failed.
- abortTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
-
- abs(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the absolute value.
- abs() - Method in class org.apache.spark.sql.types.Decimal
-
- absent() - Static method in class org.apache.spark.api.java.Optional
-
- AbsoluteError - Class in org.apache.spark.mllib.tree.loss
-
:: DeveloperApi ::
Class for absolute error loss calculation (for regression).
- AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
-
- accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType
-
- accId() - Method in class org.apache.spark.CleanAccum
-
- Accumulable<R,T> - Class in org.apache.spark
-
- Accumulable(R, AccumulableParam<R, T>) - Constructor for class org.apache.spark.Accumulable
-
Deprecated.
- accumulable(T, AccumulableParam<T, R>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulable(T, String, AccumulableParam<T, R>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulable(R, AccumulableParam<R, T>) - Method in class org.apache.spark.SparkContext
-
- accumulable(R, String, AccumulableParam<R, T>) - Method in class org.apache.spark.SparkContext
-
- accumulableCollection(R, Function1<R, Growable<T>>, ClassTag<R>) - Method in class org.apache.spark.SparkContext
-
- AccumulableInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
Information about an Accumulable modified during a task or stage.
- AccumulableInfo - Class in org.apache.spark.status.api.v1
-
- accumulableInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- accumulableInfoToJson(AccumulableInfo) - Static method in class org.apache.spark.util.JsonProtocol
-
- AccumulableParam<R,T> - Interface in org.apache.spark
-
- accumulables() - Method in class org.apache.spark.scheduler.StageInfo
-
Terminal values of accumulables updated during this stage, including all the user-defined
accumulators.
- accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
-
Intermediate updates to accumulables during this task.
- accumulables() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- accumulablesToJson(Traversable<AccumulableInfo>) - Static method in class org.apache.spark.util.JsonProtocol
-
- Accumulator<T> - Class in org.apache.spark
-
- accumulator(int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulator(int, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulator(double) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulator(double, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulator(T, AccumulatorParam<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulator(T, String, AccumulatorParam<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- accumulator(T, AccumulatorParam<T>) - Method in class org.apache.spark.SparkContext
-
- accumulator(T, String, AccumulatorParam<T>) - Method in class org.apache.spark.SparkContext
-
- AccumulatorContext - Class in org.apache.spark.util
-
An internal class used to track accumulators by Spark itself.
- AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
-
- AccumulatorParam<T> - Interface in org.apache.spark
-
- AccumulatorParam.DoubleAccumulatorParam$ - Class in org.apache.spark
-
- AccumulatorParam.FloatAccumulatorParam$ - Class in org.apache.spark
-
- AccumulatorParam.IntAccumulatorParam$ - Class in org.apache.spark
-
- AccumulatorParam.LongAccumulatorParam$ - Class in org.apache.spark
-
- AccumulatorParam.StringAccumulatorParam$ - Class in org.apache.spark
-
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
-
- accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
-
- AccumulatorV2<IN,OUT> - Class in org.apache.spark.util
-
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
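For illustration, a minimal Scala sketch of the AccumulatorV2 contract using the built-in LongAccumulator (IN = OUT = Long), assuming an existing SparkContext named sc:

    // Sketch only: assumes an existing SparkContext `sc`.
    val acc = sc.longAccumulator("records")          // a built-in AccumulatorV2
    sc.parallelize(1 to 3).foreach(_ => acc.add(1L)) // add(IN) on executors
    // Read the result on the driver after the action finishes.
    println(acc.value)                               // 3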
- AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2
-
- accumUpdates() - Method in class org.apache.spark.ExceptionFailure
-
- accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns accuracy (the fraction of correctly classified instances out of the total number of instances).
- accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns accuracy.
- acos(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the cosine inverse of the given value; the returned angle is in the range
0.0 through pi.
- acos(String) - Static method in class org.apache.spark.sql.functions
-
Computes the cosine inverse of the given column; the returned angle is in the range
0.0 through pi.
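A minimal sketch of the two acos overloads above, assuming a SparkSession named spark is already in scope:

    import org.apache.spark.sql.functions.{acos, lit}

    spark.range(1).select(acos(lit(0.5)))                       // Column overload
    spark.range(1).withColumn("x", lit(0.5)).select(acos("x"))  // column-name overload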
- active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns a list of active queries associated with this SQLContext.
- active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- activeJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- activeStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- activeStorageStatusList() - Method in class org.apache.spark.ui.exec.ExecutorsListener
-
Deprecated.
- activeStorageStatusList() - Method in class org.apache.spark.ui.storage.StorageListener
-
Deprecated.
- activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- add(T) - Method in class org.apache.spark.Accumulable
-
Deprecated.
Add more data to this accumulator / accumulable.
- add(T) - Static method in class org.apache.spark.Accumulator
-
Deprecated.
- add(org.apache.spark.ml.feature.Instance) - Method in class org.apache.spark.ml.classification.LinearSVCAggregator
-
Add a new training instance to this LinearSVCAggregator, and update the loss and gradient
of the objective function.
- add(org.apache.spark.ml.feature.Instance) - Method in class org.apache.spark.ml.classification.LogisticAggregator
-
Add a new training instance to this LogisticAggregator, and update the loss and gradient
of the objective function.
- add(Vector) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
Add a new training instance to this ExpectationAggregator, update the weights,
means and covariances for each distribution, and update the log likelihood.
- add(AFTPoint) - Method in class org.apache.spark.ml.regression.AFTAggregator
-
Add a new training data point to this AFTAggregator, and update the loss and gradient
of the objective function.
- add(org.apache.spark.ml.feature.Instance) - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
-
Add a new training instance to this LeastSquaresAggregator, and update the loss and gradient
of the objective function.
- add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
-
- add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Adds a new document.
- add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Adds the given block matrix other to this block matrix: this + other.
- add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Add a new sample to this summarizer, and update the statistical summary.
- add(StructField) - Method in class org.apache.spark.sql.types.StructType
-
- add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata.
- add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata.
- add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata.
- add(String, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new nullable field with no metadata where the dataType is specified as a String.
- add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field with no metadata where the dataType is specified as a String.
- add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
- add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
-
Creates a new StructType by adding a new field and specifying metadata where the dataType is specified as a String.
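A minimal sketch chaining several of the add overloads above to build a schema incrementally:

    import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

    val schema = new StructType()
      .add("name", StringType)         // nullable field, no metadata
      .add("age", IntegerType, false)  // non-nullable field
      .add("score", "double")          // dataType specified as a String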
- add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
- add(IN) - Method in class org.apache.spark.util.AccumulatorV2
-
Takes the inputs and accumulates.
- add(T) - Method in class org.apache.spark.util.CollectionAccumulator
-
- add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(double) - Method in class org.apache.spark.util.DoubleAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(T) - Method in class org.apache.spark.util.LegacyAccumulatorWrapper
-
- add(Long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(long) - Method in class org.apache.spark.util.LongAccumulator
-
Adds v to the accumulator, i.e. increment sum by v and count by 1.
- add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
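A minimal sketch of the CountMinSketch add/estimate cycle; create(depth, width, seed) is the factory in org.apache.spark.util.sketch:

    import org.apache.spark.util.sketch.CountMinSketch

    val cms = CountMinSketch.create(10, 1000, 42)  // depth, width, seed
    cms.add("spark")                               // increment count by one
    cms.add("spark", 5L)                           // increment count by 5
    println(cms.estimateCount("spark"))            // at least 6 (may over-count)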
- add_months(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is numMonths after startDate.
- addAccumulator(R, T) - Method in interface org.apache.spark.AccumulableParam
-
Deprecated.
Add additional data to the accumulator value.
- addAccumulator(T, T) - Method in interface org.apache.spark.AccumulatorParam
-
Deprecated.
- addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
-
Adds command line arguments for the application.
- addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Adds a file to be submitted with the application.
- addFile(String) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
- addFile(String, boolean) - Method in class org.apache.spark.SparkContext
-
Add a file to be downloaded with this Spark job on every node.
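A minimal sketch of shipping a file with the job; sc is an assumed SparkContext and the path is hypothetical. SparkFiles.get resolves the local copy on any node:

    import org.apache.spark.SparkFiles

    sc.addFile("/tmp/lookup.txt")                 // hypothetical driver-side path
    val localPath = SparkFiles.get("lookup.txt")  // local copy on the current node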
- addFilters(Seq<ServletContextHandler>, SparkConf) - Static method in class org.apache.spark.ui.JettyUtils
-
Add filters, if any, to the given list of ServletContextHandlers.
- addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a param with multiple values (overwrites if the input param exists).
- addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a double param with multiple values.
- addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds an int param with multiple values.
- addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a float param with multiple values.
- addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a long param with multiple values.
- addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
-
Adds a boolean param with true and false.
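A minimal sketch of the addGrid overloads above, assuming a LogisticRegression estimator as the param owner:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.tuning.ParamGridBuilder

    val lr = new LogisticRegression()
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))  // double param with two values
      .addGrid(lr.fitIntercept)                // boolean param: true and false
      .build()                                 // 4 ParamMaps (2 x 2)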
- addInPlace(R, R) - Method in interface org.apache.spark.AccumulableParam
-
Deprecated.
Merge two accumulated values together.
- addInPlace(double, double) - Method in class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
-
Deprecated.
- addInPlace(float, float) - Method in class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
-
Deprecated.
- addInPlace(int, int) - Method in class org.apache.spark.AccumulatorParam.IntAccumulatorParam$
-
Deprecated.
- addInPlace(long, long) - Method in class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
-
Deprecated.
- addInPlace(String, String) - Method in class org.apache.spark.AccumulatorParam.StringAccumulatorParam$
-
Deprecated.
- addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Adds a jar file to be submitted with the application.
- addJar(String) - Method in class org.apache.spark.SparkContext
-
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- addJar(String) - Method in class org.apache.spark.sql.hive.HiveSessionResourceLoader
-
- addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Adds a listener to be notified of changes to the handle's information.
- addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
- addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
-
Add Hadoop configuration specific to a single partition and attempt.
- addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Adds a Python file / zip / egg to be submitted with the application.
- address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
-
- addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with default priority.
- addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Adds a shutdown hook with the given priority.
- addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Adds a no-value argument to the Spark invocation.
- addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Adds an argument with a value to the Spark invocation.
- addSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Register a listener to receive up-calls from events that happen during execution.
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
- addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
-
- addString(StringBuilder, String, String, String) - Static method in class org.apache.spark.sql.types.StructType
-
- addString(StringBuilder, String) - Static method in class org.apache.spark.sql.types.StructType
-
- addString(StringBuilder) - Static method in class org.apache.spark.sql.types.StructType
-
- addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by one.
- addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
Increments item's count by count.
- addSuppressed(Throwable) - Static method in exception org.apache.spark.sql.AnalysisException
-
- addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
-
Adds a (Java friendly) listener to be executed on task completion.
- addTaskCompletionListener(Function1<TaskContext, BoxedUnit>) - Method in class org.apache.spark.TaskContext
-
Adds a listener in the form of a Scala closure to be executed on task completion.
- addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure.
- addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
-
Adds a listener to be executed on task failure.
- AddWebUIFilter(String, Map<String, String>, String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
-
- AddWebUIFilter$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
-
- AFTAggregator - Class in org.apache.spark.ml.regression
-
AFTAggregator computes the gradient and loss for an AFT loss function,
as used in AFT survival regression, for samples in sparse or dense vectors in an online fashion.
- AFTAggregator(Broadcast<DenseVector<Object>>, boolean, Broadcast<double[]>) - Constructor for class org.apache.spark.ml.regression.AFTAggregator
-
- AFTCostFun - Class in org.apache.spark.ml.regression
-
AFTCostFun implements Breeze's DiffFunction[T] for AFT cost.
- AFTCostFun(RDD<AFTPoint>, boolean, Broadcast<double[]>, int) - Constructor for class org.apache.spark.ml.regression.AFTCostFun
-
- AFTSurvivalRegression - Class in org.apache.spark.ml.regression
-
- AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
-
- agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Aggregates on the entire Dataset without groups.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
-
(Java-specific) Aggregates on the entire Dataset without groups.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Aggregates on the entire Dataset without groups.
- agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregation, returning a Dataset of tuples for each unique key
and the result of computing this aggregation over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key
and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key
and the result of computing these aggregations over all elements in the group.
- agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Computes the given aggregations, returning a Dataset of tuples for each unique key
and the result of computing these aggregations over all elements in the group.
- agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
- agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying the column names and
aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Scala-specific) Compute aggregates by specifying a map from column name to
aggregate methods.
- agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
(Java-specific) Compute aggregates by specifying a map from column name to
aggregate methods.
- agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute aggregates by specifying a series of aggregate columns.
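A minimal sketch contrasting the agg variants above, assuming a DataFrame df with columns "dept" and "salary":

    import org.apache.spark.sql.functions.{avg, max}

    df.agg(max("salary"), avg("salary"))            // whole-Dataset, no groups
    df.groupBy("dept").agg(Map("salary" -> "avg"))  // per-group, map form
    df.groupBy("dept").agg(avg("salary"))           // per-group, Column form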
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregate the elements of each partition, and then the results for all the partitions, using
given combine functions and a neutral "zero value".
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Aggregate the elements of each partition, and then the results for all the partitions, using
given combine functions and a neutral "zero value".
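A minimal sketch of RDD.aggregate computing a sum and a count in one pass; sc is an assumed SparkContext and (0, 0) is the neutral zero value:

    val rdd = sc.parallelize(1 to 4)
    val (sum, count) = rdd.aggregate((0, 0))(
      (acc, x) => (acc._1 + x, acc._2 + 1),  // fold one element into a partition
      (a, b) => (a._1 + b._1, a._2 + b._2))  // combine two partition results
    // sum == 10, count == 4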
- aggregate(U, Function2<U, T, U>, Function2<U, U, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- aggregate(Function0<B>, Function2<B, A, B>, Function2<B, B, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Partitioner, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, int, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
- aggregateByKey(U, Function2<U, V, U>, Function2<U, U, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Aggregate the values of each key, using given combine functions and a neutral "zero value".
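A minimal sketch of aggregateByKey collecting per-key maxima; sc is an assumed SparkContext and Int.MinValue serves as the neutral zero value:

    val pairs = sc.parallelize(Seq(("a", 1), ("a", 3), ("b", 2)))
    val maxPerKey = pairs.aggregateByKey(Int.MinValue)(math.max, math.max)
    // ("a", 3), ("b", 2)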
- AggregatedDialect - Class in org.apache.spark.sql.jdbc
-
AggregatedDialect can unify multiple dialects into one virtual Dialect.
- AggregatedDialect(List<JdbcDialect>) - Constructor for class org.apache.spark.sql.jdbc.AggregatedDialect
-
- aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Method in class org.apache.spark.graphx.Graph
-
Aggregates values from the neighboring edges and vertices of each vertex.
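A minimal sketch of aggregateMessages computing in-degrees, assuming an existing graph: Graph[Double, Int]; TripletFields.None signals that no vertex or edge attributes are read:

    import org.apache.spark.graphx._

    val inDegrees: VertexRDD[Int] = graph.aggregateMessages[Int](
      ctx => ctx.sendToDst(1),  // send 1 along every edge to its destination
      _ + _,                    // merge messages per vertex by summing
      TripletFields.None)       // no vertex/edge attributes are needed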
- aggregateMessages(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, ClassTag<A>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
- aggregateMessages$default$3() - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
- aggregateMessagesWithActiveSet(Function1<EdgeContext<VD, ED, A>, BoxedUnit>, Function2<A, A, A>, TripletFields, Option<Tuple2<VertexRDD<?>, EdgeDirection>>, ClassTag<A>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- aggregateUsingIndex(RDD<Tuple2<Object, VD2>>, Function2<VD2, VD2, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Aggregates vertices in messages that have the same ids using reduceFunc, returning a VertexRDD co-indexed with this.
- AggregatingEdgeContext<VD,ED,A> - Class in org.apache.spark.graphx.impl
-
- AggregatingEdgeContext(Function2<A, A, A>, Object, BitSet) - Constructor for class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- aggregationDepth() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- aggregationDepth() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- aggregationDepth() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- aggregationDepth() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- aggregationDepth() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- aggregationDepth() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- aggregationDepth() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- aggregationDepth() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- Aggregator<K,V,C> - Class in org.apache.spark
-
:: DeveloperApi ::
A set of functions used to aggregate data.
- Aggregator(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Constructor for class org.apache.spark.Aggregator
-
- aggregator() - Method in class org.apache.spark.ShuffleDependency
-
- Aggregator<IN,BUF,OUT> - Class in org.apache.spark.sql.expressions
-
:: Experimental ::
A base class for user-defined aggregations, which can be used in Dataset operations to take all of the elements of a group and reduce them to a single value.
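A minimal sketch of a typed Aggregator (IN = BUF = OUT = Long) that sums its input; toColumn turns it into a TypedColumn usable with the agg methods above:

    import org.apache.spark.sql.{Encoder, Encoders}
    import org.apache.spark.sql.expressions.Aggregator

    object LongSum extends Aggregator[Long, Long, Long] {
      def zero: Long = 0L
      def reduce(buf: Long, in: Long): Long = buf + in
      def merge(b1: Long, b2: Long): Long = b1 + b2
      def finish(buf: Long): Long = buf
      def bufferEncoder: Encoder[Long] = Encoders.scalaLong
      def outputEncoder: Encoder[Long] = Encoders.scalaLong
    }
    // usage, for an assumed ds: Dataset[Long] -- ds.select(LongSum.toColumn)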
- Aggregator() - Constructor for class org.apache.spark.sql.expressions.Aggregator
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- aic(RDD<Tuple3<Object, Object, Object>>, double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
- aic() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Akaike Information Criterion (AIC) for the fitted model.
- Algo - Class in org.apache.spark.mllib.tree.configuration
-
Enum to select the algorithm for the decision tree.
- Algo() - Constructor for class org.apache.spark.mllib.tree.configuration.Algo
-
- algo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- algo() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- algo() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- algo() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- algorithm() - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
- alias(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- alias(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with an alias set.
- alias(Symbol) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
- All - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose all the fields (source, edge, and destination).
- allAttributes() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- allAttributes() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- allAttributes() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- AllJobsCancelled - Class in org.apache.spark.scheduler
-
- AllJobsCancelled() - Constructor for class org.apache.spark.scheduler.AllJobsCancelled
-
- AllReceiverIds - Class in org.apache.spark.streaming.scheduler
-
A message used by ReceiverTracker to ask for the ids of all receivers still stored in ReceiverTrackerEndpoint.
- AllReceiverIds() - Constructor for class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- allSources() - Static method in class org.apache.spark.metrics.source.StaticSources
-
The set of all static sources.
- alpha() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- alpha() - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- ALS - Class in org.apache.spark.ml.recommendation
-
Alternating Least Squares (ALS) matrix factorization.
- ALS(String) - Constructor for class org.apache.spark.ml.recommendation.ALS
-
- ALS() - Constructor for class org.apache.spark.ml.recommendation.ALS
-
- ALS - Class in org.apache.spark.mllib.recommendation
-
Alternating Least Squares matrix factorization.
- ALS() - Constructor for class org.apache.spark.mllib.recommendation.ALS
-
Constructs an ALS instance with default parameters: {numBlocks: -1, rank: 10, iterations: 10,
lambda: 0.01, implicitPrefs: false, alpha: 1.0}.
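A minimal sketch of the RDD-based ALS above; sc is an assumed SparkContext and the ratings are toy data:

    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    val ratings = sc.parallelize(Seq(Rating(1, 10, 4.0), Rating(1, 11, 3.0)))
    val model = ALS.train(ratings, 10, 10, 0.01)  // rank, iterations, lambda
    model.predict(1, 10)                          // predicted rating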
- ALS.InBlock$ - Class in org.apache.spark.ml.recommendation
-
- ALS.Rating<ID> - Class in org.apache.spark.ml.recommendation
-
:: DeveloperApi ::
Rating class for better code readability.
- ALS.Rating$ - Class in org.apache.spark.ml.recommendation
-
- ALS.RatingBlock$ - Class in org.apache.spark.ml.recommendation
-
- ALSModel - Class in org.apache.spark.ml.recommendation
-
Model fitted by ALS.
- am() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
-
- AnalysisException - Exception in org.apache.spark.sql
-
Thrown when a query fails to analyze, usually because the query itself is invalid.
- analyzed() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- analyzed() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- and(Column) - Method in class org.apache.spark.sql.Column
-
Boolean AND.
- And - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff both left and right evaluate to true.
- And(Filter, Filter) - Constructor for class org.apache.spark.sql.sources.And
-
- andThen(Function1<B, C>) - Static method in class org.apache.spark.sql.types.StructType
-
- antecedent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
- ANY() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- AnyDataType - Class in org.apache.spark.sql.types
-
An AbstractDataType that matches any concrete data types.
- AnyDataType() - Constructor for class org.apache.spark.sql.types.AnyDataType
-
- anyNull() - Method in interface org.apache.spark.sql.Row
-
Returns true if there are any NULL values in this row.
- appAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- Append() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which only the new rows in the streaming DataFrame/Dataset will be
written to the sink.
- appendBias(Vector) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Returns a new vector with 1.0 (bias) appended to the input vector.
- appendColumn(StructType, String, DataType, boolean) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- appendColumn(StructType, StructField) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Appends a new column to the input schema.
- appendReadColumns(Configuration, Seq<Integer>, Seq<String>) - Static method in class org.apache.spark.sql.hive.HiveShim
-
- appHistoryInfoToPublicAppInfo(ApplicationHistoryInfo) - Static method in class org.apache.spark.status.api.v1.ApplicationsListResource
-
- appId() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- APPLICATION_EXECUTOR_LIMIT() - Static method in class org.apache.spark.ui.ToolTips
-
- applicationAttemptId() - Method in class org.apache.spark.SparkContext
-
- ApplicationAttemptInfo - Class in org.apache.spark.status.api.v1
-
- applicationEndFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- applicationEndToJson(SparkListenerApplicationEnd) - Static method in class org.apache.spark.util.JsonProtocol
-
- ApplicationEnvironmentInfo - Class in org.apache.spark.status.api.v1
-
- applicationId() - Method in class org.apache.spark.SparkContext
-
A unique identifier for the Spark application.
- ApplicationInfo - Class in org.apache.spark.status.api.v1
-
- ApplicationsListResource - Class in org.apache.spark.status.api.v1
-
- ApplicationsListResource() - Constructor for class org.apache.spark.status.api.v1.ApplicationsListResource
-
- applicationStartFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- applicationStartToJson(SparkListenerApplicationStart) - Static method in class org.apache.spark.util.JsonProtocol
-
- ApplicationStatus - Enum in org.apache.spark.status.api.v1
-
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of vertices and
edges with attributes.
- apply(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from edges, setting referenced vertices to defaultVertexAttr.
- apply(RDD<Tuple2<Object, VD>>, RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from vertices and edges, setting missing vertices to defaultVertexAttr.
- apply(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from a VertexRDD and an EdgeRDD with arbitrary replicated vertices.
- apply(Graph<VD, ED>, A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<VD>, ClassTag<ED>, ClassTag<A>) - Static method in class org.apache.spark.graphx.Pregel
-
Execute a Pregel-like iterative vertex-parallel abstraction.
- apply(RDD<Tuple2<Object, VD>>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a standalone VertexRDD (one that is not set up for efficient joins with an EdgeRDD) from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(RDD<Tuple2<Object, VD>>, EdgeRDD<?>, VD, Function2<VD, VD, VD>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD from an RDD of vertex-attribute pairs.
- apply(DenseMatrix<Object>, DenseMatrix<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
-
- apply(DenseMatrix<Object>, DenseMatrix<Object>, DenseMatrix<Object>, Function2<Object, Object, Object>) - Static method in class org.apache.spark.ml.ann.ApplyInPlace
-
- apply(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its name.
- apply(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its index.
- apply(int, int) - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- apply(int) - Method in class org.apache.spark.ml.linalg.DenseVector
-
- apply(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- apply(int) - Static method in class org.apache.spark.ml.linalg.SparseVector
-
- apply(int) - Method in interface org.apache.spark.ml.linalg.Vector
-
Gets the value of the ith element.
- apply(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Gets the value of the input param or its default value if it does not exist.
- apply(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
Constructs the FamilyAndLink object from a parameter map.
- apply(Split) - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Precision
-
- apply(BinaryConfusionMatrix) - Static method in class org.apache.spark.mllib.evaluation.binary.Recall
-
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- apply(int) - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- apply(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Gets the (i, j)-th element.
- apply(int, int) - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- apply(int) - Static method in class org.apache.spark.mllib.linalg.SparseVector
-
- apply(int) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Gets the value of the ith element.
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- apply(int) - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- apply(int, Predict, double, boolean) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Construct a node with nodeIndex, predict, impurity and isLeaf parameters.
- apply(int) - Static method in class org.apache.spark.rdd.CheckpointState
-
- apply(long, String, Option<String>, String, boolean) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- apply(long, String, Option<String>, String) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- apply(long, String, String) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- apply(String, long, Enumeration.Value, ByteBuffer) - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
Alternate factory method that takes a ByteBuffer directly for the data field.
- apply(long, TaskMetrics) - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- apply(int) - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- apply(int) - Static method in class org.apache.spark.scheduler.TaskLocality
-
- apply(Object) - Method in class org.apache.spark.sql.Column
-
Extracts a value or values from a complex type.
- apply(String) - Method in class org.apache.spark.sql.Dataset
-
Selects a column based on the column name and returns it as a Column.
- apply(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Creates a Column for this UDAF using given Columns as input arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Creates a Column for this UDAF using given Columns as input arguments.
- apply(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
Returns an expression that invokes the UDF, using the given arguments.
- apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.DetermineTableStats
-
- apply(int) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- apply(ScriptInputOutputSchema) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- apply(int) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- apply(int) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- apply(LogicalPlan) - Static method in class org.apache.spark.sql.hive.HiveAnalysis
-
- apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.RelationConversions
-
- apply(LogicalPlan) - Method in class org.apache.spark.sql.hive.ResolveHiveSerdeTable
-
- apply(Dataset<Row>, Seq<Expression>, RelationalGroupedDataset.GroupType) - Static method in class org.apache.spark.sql.RelationalGroupedDataset
-
- apply(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- apply(String) - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
- apply(Duration) - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
- apply(DataType) - Static method in class org.apache.spark.sql.types.ArrayType
-
Construct an ArrayType object with the given element type.
- apply(double) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(long) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigInteger) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigInt) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(BigDecimal, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(String) - Static method in class org.apache.spark.sql.types.Decimal
-
- apply(DataType, DataType) - Static method in class org.apache.spark.sql.types.MapType
-
Construct a MapType object with the given key type and value type.
- apply(String) - Method in class org.apache.spark.sql.types.StructType
-
- apply(Set<String>) - Method in class org.apache.spark.sql.types.StructType
-
Returns a StructType containing StructFields of the given names, preserving the original order of fields.
- apply(int) - Method in class org.apache.spark.sql.types.StructType
-
- apply(String) - Static method in class org.apache.spark.storage.BlockId
-
- apply(String, String, int, Option<String>) - Static method in class org.apache.spark.storage.BlockManagerId
-
- apply(ObjectInput) - Static method in class org.apache.spark.storage.BlockManagerId
-
- apply(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object.
- apply(boolean, boolean, boolean, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object without setting useOffHeap.
- apply(int, int) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Create a new StorageLevel object from its integer representation.
- apply(ObjectInput) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Read StorageLevel object from ObjectInput stream.
- apply(String, int) - Static method in class org.apache.spark.streaming.kafka.Broker
-
- apply(Map<String, String>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster.SimpleConsumerConfig$
-
Make a consumer config without requiring group.id or zookeeper.connect,
since communicating with brokers also needs common settings such as timeout.
- apply(String, int, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
-
- apply(TopicAndPartition, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
-
- apply(long) - Static method in class org.apache.spark.streaming.Milliseconds
-
- apply(long) - Static method in class org.apache.spark.streaming.Minutes
-
- apply(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- apply(long) - Static method in class org.apache.spark.streaming.Seconds
-
- apply(int) - Static method in class org.apache.spark.TaskState
-
- apply(InputMetrics) - Method in class org.apache.spark.ui.jobs.UIData.InputMetricsUIData$
-
- apply(OutputMetrics) - Method in class org.apache.spark.ui.jobs.UIData.OutputMetricsUIData$
-
- apply(ShuffleReadMetrics) - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData$
-
- apply(ShuffleWriteMetrics) - Method in class org.apache.spark.ui.jobs.UIData.ShuffleWriteMetricsUIData$
-
- apply(TaskInfo) - Method in class org.apache.spark.ui.jobs.UIData.TaskUIData$
-
- apply(TraversableOnce<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values.
- apply(Seq<Object>) - Static method in class org.apache.spark.util.StatCounter
-
Build a StatCounter from a list of values passed as variable-length arguments.
- ApplyInPlace - Class in org.apache.spark.ml.ann
-
Implements in-place application of functions in the arrays.
- ApplyInPlace() - Constructor for class org.apache.spark.ml.ann.ApplyInPlace
-
- applyOrElse(A1, Function1<A1, B1>) - Static method in class org.apache.spark.sql.types.StructType
-
- applySchema(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- applySchema(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- appName() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- appName() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- appName() - Method in class org.apache.spark.SparkContext
-
- appName(String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a name for the application, which will be shown in the Spark web UI.
- approx_count_distinct(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
- approx_count_distinct(String, double) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the approximate number of distinct items in a group.
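A minimal sketch of approx_count_distinct, assuming a DataFrame df with a column "user_id"; the optional double is the maximum relative standard deviation of the estimate:

    import org.apache.spark.sql.functions.approx_count_distinct

    df.agg(approx_count_distinct("user_id"))        // default rsd (0.05)
    df.agg(approx_count_distinct("user_id", 0.01))  // tighter estimate, more memory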
- approxCountDistinct(Column) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(String) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(Column, double) - Static method in class org.apache.spark.sql.functions
-
- approxCountDistinct(String, double) - Static method in class org.apache.spark.sql.functions
-
- ApproxHist() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- approxNearestNeighbors(Dataset<?>, Vector, int, String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- approxNearestNeighbors(Dataset<?>, Vector, int) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- approxNearestNeighbors(Dataset<?>, Vector, int, String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- approxNearestNeighbors(Dataset<?>, Vector, int) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- approxQuantile(String, double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the approximate quantiles of a numerical column of a DataFrame.
- approxQuantile(String[], double[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the approximate quantiles of numerical columns of a DataFrame.
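A minimal sketch of approxQuantile, assuming a DataFrame df with a numerical column "value"; the last argument is the relative error target (0.0 forces exact quantiles):

    val Array(q25, median, q75) =
      df.stat.approxQuantile("value", Array(0.25, 0.5, 0.75), 0.01)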
- approxSimilarityJoin(Dataset<?>, Dataset<?>, double, String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- approxSimilarityJoin(Dataset<?>, Dataset<?>, double) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- approxSimilarityJoin(Dataset<?>, Dataset<?>, double, String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- approxSimilarityJoin(Dataset<?>, Dataset<?>, double) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- AreaUnderCurve - Class in org.apache.spark.mllib.evaluation
-
Computes the area under the curve (AUC) using the trapezoidal rule.
- AreaUnderCurve() - Constructor for class org.apache.spark.mllib.evaluation.AreaUnderCurve
-
- areaUnderPR() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the precision-recall curve.
- areaUnderROC() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
Computes the area under the receiver operating characteristic (ROC) curve.
- areaUnderROC() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Computes the area under the receiver operating characteristic (ROC) curve.
- argmax() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- argmax() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- argmax() - Method in interface org.apache.spark.ml.linalg.Vector
-
Find the index of a maximal element.
- argmax() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- argmax() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- argmax() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Find the index of a maximal element.
- argString() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- argString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- argString() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- array(DataType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type array.
- array(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
- array(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new array column.
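A minimal sketch of the array overloads above, assuming a DataFrame df with columns "x" and "y":

    import org.apache.spark.sql.functions.{array, col}

    df.select(array(col("x"), col("y")).as("xy"))  // Column overload
    df.select(array("x", "y"))                     // column-name overload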
- array_contains(Column, Object) - Static method in class org.apache.spark.sql.functions
-
Returns null if the array is null, true if the array contains value, and false otherwise.
- arrayLengthGt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check that the array length is greater than lowerBound.
- ArrayType - Class in org.apache.spark.sql.types
-
- ArrayType(DataType, boolean) - Constructor for class org.apache.spark.sql.types.ArrayType
-
- as(Encoder<U>) - Method in class org.apache.spark.sql.Column
-
Provides a type hint about the expected return value of this column.
- as(String) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(Seq<String>) - Method in class org.apache.spark.sql.Column
-
(Scala-specific) Assigns the given aliases to the results of a table generating function.
- as(String[]) - Method in class org.apache.spark.sql.Column
-
Assigns the given aliases to the results of a table generating function.
- as(Symbol) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias.
- as(String, Metadata) - Method in class org.apache.spark.sql.Column
-
Gives the column an alias with metadata.
- as(Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
Returns a new Dataset where each record has been mapped on to the specified type.
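A minimal sketch of as(Encoder), assuming a SparkSession named spark and a DataFrame df whose columns match the (hypothetical) case class:

    case class Person(name: String, age: Long)

    import spark.implicits._
    val people = df.as[Person]  // Dataset[Person]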
- as(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with an alias set.
- as(Symbol) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with an alias set.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts the instance to a breeze vector.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Converts to a breeze matrix.
- asBreeze() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the instance to a breeze vector.
- asc() - Method in class org.apache.spark.sql.Column
-
Returns an ascending ordering used in sorting.
- asc(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column.
- asc_nulls_first() - Method in class org.apache.spark.sql.Column
-
Returns an ascending ordering used in sorting, where null values appear before non-null values.
- asc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column,
and null values appear before non-null values.
- asc_nulls_last() - Method in class org.apache.spark.sql.Column
-
Returns an ascending ordering used in sorting, where null values appear after non-null values.
- asc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on ascending order of the column, where null values appear after non-null values.
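A short sketch contrasting the two null orderings (a tiny made-up DataFrame stands in for real data):

    import org.apache.spark.sql.functions.asc_nulls_last
    val df = Seq(Some(30L), None, Some(25L)).toDF("age")
    df.orderBy($"age".asc_nulls_first)    // the null row first, then 25, 30
    df.orderBy(asc_nulls_last("age"))     // 25, 30, then the null row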
- ascii(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the numeric value of the first character of the string column, and returns the
result as an int column.
- asCode() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- asCode() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- asCode() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- asin(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the inverse sine of the given value; the returned angle is in the range -pi/2 through pi/2.
- asin(String) - Static method in class org.apache.spark.sql.functions
-
Computes the inverse sine of the given column; the returned angle is in the range -pi/2 through pi/2.
- asIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator.
- asJavaPairRDD() - Method in class org.apache.spark.api.r.PairwiseRRDD
-
- asJavaRDD() - Method in class org.apache.spark.api.r.RRDD
-
- asJavaRDD() - Method in class org.apache.spark.api.r.StringRRDD
-
- asKeyValueIterator() - Method in class org.apache.spark.serializer.DeserializationStream
-
Read the elements of this stream through an iterator over key-value pairs.
- AskPermissionToCommitOutput - Class in org.apache.spark.scheduler
-
- AskPermissionToCommitOutput(int, int, int, int) - Constructor for class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- askRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the default Spark timeout to use for RPC ask operations.
- askSlaves() - Method in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
-
- askSlaves() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
-
- asML() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- asML() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- asML() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convert this matrix to the new mllib-local representation.
- asML() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- asML() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- asML() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Convert this vector to the new mllib-local representation.
- asNullable() - Method in class org.apache.spark.sql.types.ObjectType
-
- asRDDId() - Method in class org.apache.spark.storage.BlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.RDDBlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.StreamBlockId
-
- asRDDId() - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- assertNotSpilled(SparkContext, String, Function0<T>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs
did not spill.
- assertSpilled(SparkContext, String, Function0<T>) - Static method in class org.apache.spark.TestUtils
-
Run some code involving jobs submitted to the given context and assert that the jobs spilled.
- Assignment(long, int) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
-
- Assignment$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
-
- assignments() - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
-
- AssociationRules - Class in org.apache.spark.ml.fpm
-
- AssociationRules() - Constructor for class org.apache.spark.ml.fpm.AssociationRules
-
- associationRules() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
Get association rules fitted using the minConfidence.
- AssociationRules - Class in org.apache.spark.mllib.fpm
-
Generates association rules from an RDD[FreqItemset[Item]].
- AssociationRules() - Constructor for class org.apache.spark.mllib.fpm.AssociationRules
-
Constructs a default instance with default parameters {minConfidence = 0.8}.
- AssociationRules.Rule<Item> - Class in org.apache.spark.mllib.fpm
-
An association rule between sets of items.
- AsyncRDDActions<T> - Class in org.apache.spark.rdd
-
A set of asynchronous RDD actions available through an implicit conversion.
- AsyncRDDActions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.AsyncRDDActions
-
- atan(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the inverse tangent of the given value.
- atan(String) - Static method in class org.apache.spark.sql.functions
-
Computes the inverse tangent of the given column.
- atan2(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(Column, String) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(String, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(String, String) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(Column, double) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(String, double) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(double, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
- atan2(double, String) - Static method in class org.apache.spark.sql.functions
-
Returns the angle theta from the conversion of rectangular coordinates (x, y) to
polar coordinates (r, theta).
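All eight atan2 overloads above compute the same quantity; note that the y-coordinate comes first, matching java.lang.Math.atan2. A minimal sketch (spark.implicits._ assumed):

    import org.apache.spark.sql.functions.atan2
    val df = Seq((1.0, 1.0), (-1.0, 1.0)).toDF("y", "x")
    df.select(atan2($"y", $"x"))   // pi/4 and -pi/4
    df.select(atan2($"y", 1.0))    // mixed Column/double overload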
- attempt() - Method in class org.apache.spark.status.api.v1.TaskData
-
- attemptId() - Method in class org.apache.spark.scheduler.StageInfo
-
- attemptId() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- attemptId() - Method in class org.apache.spark.status.api.v1.StageData
-
- attemptNumber() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- attemptNumber() - Method in class org.apache.spark.scheduler.TaskInfo
-
- attemptNumber() - Method in class org.apache.spark.TaskCommitDenied
-
- attemptNumber() - Method in class org.apache.spark.TaskContext
-
How many times this task has been attempted.
- attempts() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- attr() - Method in class org.apache.spark.graphx.Edge
-
- attr() - Method in class org.apache.spark.graphx.EdgeContext
-
The attribute associated with the edge.
- attr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- Attribute - Class in org.apache.spark.ml.attribute
-
:: DeveloperApi ::
Abstract class for ML attributes.
- Attribute() - Constructor for class org.apache.spark.ml.attribute.Attribute
-
- attribute() - Method in class org.apache.spark.sql.sources.EqualNullSafe
-
- attribute() - Method in class org.apache.spark.sql.sources.EqualTo
-
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThan
-
- attribute() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- attribute() - Method in class org.apache.spark.sql.sources.In
-
- attribute() - Method in class org.apache.spark.sql.sources.IsNotNull
-
- attribute() - Method in class org.apache.spark.sql.sources.IsNull
-
- attribute() - Method in class org.apache.spark.sql.sources.LessThan
-
- attribute() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- attribute() - Method in class org.apache.spark.sql.sources.StringContains
-
- attribute() - Method in class org.apache.spark.sql.sources.StringEndsWith
-
- attribute() - Method in class org.apache.spark.sql.sources.StringStartsWith
-
- AttributeGroup - Class in org.apache.spark.ml.attribute
-
:: DeveloperApi ::
Attributes that describe a vector ML column.
- AttributeGroup(String) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group without attribute info.
- AttributeGroup(String, int) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group knowing only the number of attributes.
- AttributeGroup(String, Attribute[]) - Constructor for class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group with attributes.
- AttributeKeys - Class in org.apache.spark.ml.attribute
-
Keys used to store attributes.
- AttributeKeys() - Constructor for class org.apache.spark.ml.attribute.AttributeKeys
-
- attributes() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Optional array of attributes.
- ATTRIBUTES() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- AttributeType - Class in org.apache.spark.ml.attribute
-
:: DeveloperApi ::
An enum-like type for attribute types: AttributeType$.Numeric, AttributeType$.Nominal, and AttributeType$.Binary.
- AttributeType(String) - Constructor for class org.apache.spark.ml.attribute.AttributeType
-
- attrType() - Method in class org.apache.spark.ml.attribute.Attribute
-
Attribute type.
- attrType() - Method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- attrType() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
- attrType() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- attrType() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- available() - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- available() - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- available() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- Average() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- avg(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Average aggregate function.
- avg(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Average aggregate function.
- avg(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- avg(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
- avg(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the mean value for each numeric column for each group.
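For example, the per-group mean can be computed either through this method or through functions.avg (a minimal sketch; spark.implicits._ assumed):

    import org.apache.spark.sql.functions.avg
    val df = Seq(("eng", 10.0), ("eng", 20.0), ("ops", 30.0)).toDF("dept", "salary")
    df.groupBy($"dept").avg("salary")          // eng -> 15.0, ops -> 30.0
    df.groupBy($"dept").agg(avg($"salary"))    // equivalent formulation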
- avg() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the average of elements added to the accumulator.
- avg() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the average of elements added to the accumulator.
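A minimal sketch of avg() on a built-in accumulator (assuming a SparkContext named sc):

    val acc = sc.longAccumulator("values")
    sc.parallelize(1 to 100).foreach(n => acc.add(n))
    println(acc.avg)  // 50.5, i.e. sum 5050 over count 100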
- avgEventRate() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- avgInputRate() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgMetrics() - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- avgProcessingTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgSchedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- avgTotalDelay() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- awaitAnyTermination() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the
creation of the context, or since resetTerminated() was called.
- awaitAnyTermination(long) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Wait until any of the queries on the associated SQLContext has terminated since the
creation of the context, or since resetTerminated() was called.
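A sketch of blocking until some active query terminates (the rate source is used purely for illustration):

    val q = spark.readStream.format("rate").load()
      .writeStream.format("console").start()
    spark.streams.awaitAnyTermination()   // returns once any active query terminates
    spark.streams.resetTerminated()       // forget past terminations before waiting again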
- awaitReady(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.ready().
- awaitResult(Awaitable<T>, Duration) - Static method in class org.apache.spark.util.ThreadUtils
-
Preferred alternative to Await.result().
- awaitTermination() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTermination(long) - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Waits for the termination of this query, either by query.stop() or by an exception.
- awaitTermination() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Wait for the execution to stop.
- awaitTermination() - Method in class org.apache.spark.streaming.StreamingContext
-
Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Wait for the execution to stop.
- awaitTerminationOrTimeout(long) - Method in class org.apache.spark.streaming.StreamingContext
-
Wait for the execution to stop.
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y += a * x
- axpy(double, Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y += a * x
- cache() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Method in class org.apache.spark.api.java.JavaRDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Static method in class org.apache.spark.api.r.RRDD
-
- cache() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- cache() - Method in class org.apache.spark.graphx.Graph
-
Caches the vertices and edges associated with this graph at the previously-specified target storage levels, which default to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
Persists the edge partitions using targetStorageLevel, which defaults to MEMORY_ONLY.
- cache() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- cache() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
Persists the vertex partitions at targetStorageLevel, which defaults to MEMORY_ONLY.
- cache() - Static method in class org.apache.spark.graphx.VertexRDD
-
- cache() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Caches the underlying RDD.
- cache() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- cache() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- cache() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- cache() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- cache() - Method in class org.apache.spark.rdd.RDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- cache() - Static method in class org.apache.spark.rdd.UnionRDD
-
- cache() - Method in class org.apache.spark.sql.Dataset
-
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
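Note the differing defaults above: RDD.cache() persists at MEMORY_ONLY, while Dataset.cache() persists at MEMORY_AND_DISK. A minimal sketch (sc and spark assumed):

    val rdd = sc.parallelize(1 to 1000).cache()   // StorageLevel.MEMORY_ONLY
    val ds  = spark.range(1000).cache()           // StorageLevel.MEMORY_AND_DISK
    rdd.count(); ds.count()                       // the first action materializes each cache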
- cache() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- cache() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- cache() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- cache() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- cache() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- cache() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- cache() - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- cacheNodeIds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- cacheNodeIds() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- cacheSize() - Method in interface org.apache.spark.SparkExecutorInfo
-
- cacheSize() - Method in class org.apache.spark.SparkExecutorInfoImpl
-
- cacheSize() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the memory used by caching RDDs.
- cacheTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Caches the specified table in-memory.
- cacheTable(String) - Method in class org.apache.spark.sql.SQLContext
-
Caches the specified table in-memory.
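For example (the table name is hypothetical):

    spark.catalog.cacheTable("people")     // pin the table's data in memory
    spark.catalog.uncacheTable("people")   // release it again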
- calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.classification.LinearSVCCostFun
-
- calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.classification.LogisticCostFun
-
- calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.regression.AFTCostFun
-
- calculate(DenseVector<Object>) - Method in class org.apache.spark.ml.regression.LeastSquaresCostFun
-
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
:: DeveloperApi ::
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Entropy
-
:: DeveloperApi ::
variance calculation
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
:: DeveloperApi ::
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Gini
-
:: DeveloperApi ::
variance calculation
- calculate(double[], double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
-
:: DeveloperApi ::
information calculation for multiclass classification
- calculate(double, double, double) - Method in interface org.apache.spark.mllib.tree.impurity.Impurity
-
:: DeveloperApi ::
information calculation for regression
- calculate(double[], double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
:: DeveloperApi ::
information calculation for multiclass classification
- calculate(double, double, double) - Static method in class org.apache.spark.mllib.tree.impurity.Variance
-
:: DeveloperApi ::
variance calculation
- calculateNumberOfPartitions(long, int, int) - Method in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
-
Calculate the number of partitions to use in saving the model.
- CalendarIntervalType - Class in org.apache.spark.sql.types
-
The data type representing calendar time intervals.
- CalendarIntervalType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the CalendarIntervalType object.
- call(K, Iterator<V1>, Iterator<V2>) - Method in interface org.apache.spark.api.java.function.CoGroupFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.DoubleFlatMapFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.DoubleFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.FilterFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.FlatMapFunction
-
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.FlatMapFunction2
-
- call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsFunction
-
- call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.FlatMapGroupsWithStateFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.ForeachFunction
-
- call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.ForeachPartitionFunction
-
- call(T1) - Method in interface org.apache.spark.api.java.function.Function
-
- call() - Method in interface org.apache.spark.api.java.function.Function0
-
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.Function2
-
- call(T1, T2, T3) - Method in interface org.apache.spark.api.java.function.Function3
-
- call(T1, T2, T3, T4) - Method in interface org.apache.spark.api.java.function.Function4
-
- call(T) - Method in interface org.apache.spark.api.java.function.MapFunction
-
- call(K, Iterator<V>) - Method in interface org.apache.spark.api.java.function.MapGroupsFunction
-
- call(K, Iterator<V>, GroupState<S>) - Method in interface org.apache.spark.api.java.function.MapGroupsWithStateFunction
-
- call(Iterator<T>) - Method in interface org.apache.spark.api.java.function.MapPartitionsFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.PairFlatMapFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.PairFunction
-
- call(T, T) - Method in interface org.apache.spark.api.java.function.ReduceFunction
-
- call(T) - Method in interface org.apache.spark.api.java.function.VoidFunction
-
- call(T1, T2) - Method in interface org.apache.spark.api.java.function.VoidFunction2
-
- call(T1) - Method in interface org.apache.spark.sql.api.java.UDF1
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10) - Method in interface org.apache.spark.sql.api.java.UDF10
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11) - Method in interface org.apache.spark.sql.api.java.UDF11
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12) - Method in interface org.apache.spark.sql.api.java.UDF12
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13) - Method in interface org.apache.spark.sql.api.java.UDF13
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14) - Method in interface org.apache.spark.sql.api.java.UDF14
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15) - Method in interface org.apache.spark.sql.api.java.UDF15
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16) - Method in interface org.apache.spark.sql.api.java.UDF16
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17) - Method in interface org.apache.spark.sql.api.java.UDF17
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18) - Method in interface org.apache.spark.sql.api.java.UDF18
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19) - Method in interface org.apache.spark.sql.api.java.UDF19
-
- call(T1, T2) - Method in interface org.apache.spark.sql.api.java.UDF2
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20) - Method in interface org.apache.spark.sql.api.java.UDF20
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21) - Method in interface org.apache.spark.sql.api.java.UDF21
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, T12, T13, T14, T15, T16, T17, T18, T19, T20, T21, T22) - Method in interface org.apache.spark.sql.api.java.UDF22
-
- call(T1, T2, T3) - Method in interface org.apache.spark.sql.api.java.UDF3
-
- call(T1, T2, T3, T4) - Method in interface org.apache.spark.sql.api.java.UDF4
-
- call(T1, T2, T3, T4, T5) - Method in interface org.apache.spark.sql.api.java.UDF5
-
- call(T1, T2, T3, T4, T5, T6) - Method in interface org.apache.spark.sql.api.java.UDF6
-
- call(T1, T2, T3, T4, T5, T6, T7) - Method in interface org.apache.spark.sql.api.java.UDF7
-
- call(T1, T2, T3, T4, T5, T6, T7, T8) - Method in interface org.apache.spark.sql.api.java.UDF8
-
- call(T1, T2, T3, T4, T5, T6, T7, T8, T9) - Method in interface org.apache.spark.sql.api.java.UDF9
-
- callSite() - Method in class org.apache.spark.storage.RDDInfo
-
- callUDF(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
- callUDF(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Call a user-defined function.
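A minimal sketch pairing callUDF with a registered function (the UDF name and body are made up for illustration):

    import org.apache.spark.sql.functions.callUDF
    spark.udf.register("plusOne", (x: Int) => x + 1)
    val df = Seq(1, 2, 3).toDF("value")
    df.select(callUDF("plusOne", $"value")).show()  // 2, 3, 4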
- cancel() - Method in class org.apache.spark.ComplexFutureAction
-
- cancel() - Method in interface org.apache.spark.FutureAction
-
Cancels the execution of this action.
- cancel() - Method in class org.apache.spark.SimpleFutureAction
-
- cancelAllJobs() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel all jobs that have been scheduled or are running.
- cancelAllJobs() - Method in class org.apache.spark.SparkContext
-
Cancel all jobs that have been scheduled or are running.
- cancelJob(int, String) - Method in class org.apache.spark.SparkContext
-
Cancel a given job if it's scheduled or running.
- cancelJob(int) - Method in class org.apache.spark.SparkContext
-
Cancel a given job if it's scheduled or running.
- cancelJobGroup(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Cancel active jobs for the specified group.
- cancelJobGroup(String) - Method in class org.apache.spark.SparkContext
-
Cancel active jobs for the specified group.
- cancelStage(int, String) - Method in class org.apache.spark.SparkContext
-
Cancel a given stage and all jobs associated with it.
- cancelStage(int) - Method in class org.apache.spark.SparkContext
-
Cancel a given stage and all jobs associated with it.
- canEqual(Object) - Static method in class org.apache.spark.Aggregator
-
- canEqual(Object) - Static method in class org.apache.spark.CleanAccum
-
- canEqual(Object) - Static method in class org.apache.spark.CleanBroadcast
-
- canEqual(Object) - Static method in class org.apache.spark.CleanCheckpoint
-
- canEqual(Object) - Static method in class org.apache.spark.CleanRDD
-
- canEqual(Object) - Static method in class org.apache.spark.CleanShuffle
-
- canEqual(Object) - Static method in class org.apache.spark.ExceptionFailure
-
- canEqual(Object) - Static method in class org.apache.spark.ExecutorLostFailure
-
- canEqual(Object) - Static method in class org.apache.spark.ExecutorRegistered
-
- canEqual(Object) - Static method in class org.apache.spark.ExecutorRemoved
-
- canEqual(Object) - Static method in class org.apache.spark.ExpireDeadHosts
-
- canEqual(Object) - Static method in class org.apache.spark.FetchFailed
-
- canEqual(Object) - Static method in class org.apache.spark.graphx.Edge
-
- canEqual(Object) - Static method in class org.apache.spark.ml.feature.Dot
-
- canEqual(Object) - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- canEqual(Object) - Static method in class org.apache.spark.ml.param.ParamPair
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.linalg.QRDecomposition
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- canEqual(Object) - Static method in class org.apache.spark.mllib.tree.model.Split
-
- canEqual(Object) - Static method in class org.apache.spark.Resubmitted
-
- canEqual(Object) - Static method in class org.apache.spark.rpc.netty.OnStart
-
- canEqual(Object) - Static method in class org.apache.spark.rpc.netty.OnStop
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.BlacklistedExecutor
-
- canEqual(Object) - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.JobSucceeded
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.local.KillTask
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- canEqual(Object) - Static method in class org.apache.spark.scheduler.StopCoordinator
-
- canEqual(Object) - Static method in class org.apache.spark.sql.DatasetHolder
-
- canEqual(Object) - Static method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- canEqual(Object) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- canEqual(Object) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- canEqual(Object) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- canEqual(Object) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- canEqual(Object) - Static method in class org.apache.spark.sql.hive.RelationConversions
-
- canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- canEqual(Object) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.And
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.EqualTo
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.In
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.IsNull
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.LessThan
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.Not
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.Or
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.StringContains
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- canEqual(Object) - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- canEqual(Object) - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
Deprecated.
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ArrayType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.CharType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.DecimalType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.MapType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.ObjectType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.StructField
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.StructType
-
- canEqual(Object) - Static method in class org.apache.spark.sql.types.VarcharType
-
- canEqual(Object) - Static method in class org.apache.spark.StopMapOutputTracker
-
- canEqual(Object) - Static method in class org.apache.spark.storage.BlockStatus
-
- canEqual(Object) - Static method in class org.apache.spark.storage.BlockUpdatedInfo
-
- canEqual(Object) - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- canEqual(Object) - Static method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- canEqual(Object) - Static method in class org.apache.spark.storage.RDDBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.storage.StreamBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.Duration
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- canEqual(Object) - Static method in class org.apache.spark.streaming.Time
-
- canEqual(Object) - Static method in class org.apache.spark.Success
-
- canEqual(Object) - Static method in class org.apache.spark.TaskCommitDenied
-
- canEqual(Object) - Static method in class org.apache.spark.TaskKilled
-
- canEqual(Object) - Static method in class org.apache.spark.TaskResultLost
-
- canEqual(Object) - Static method in class org.apache.spark.TaskSchedulerIsSet
-
- canEqual(Object) - Static method in class org.apache.spark.UnknownReason
-
- canEqual(Object) - Static method in class org.apache.spark.util.MethodIdentifier
-
- canEqual(Object) - Method in class org.apache.spark.util.MutablePair
-
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- canHandle(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Check if this dialect instance can handle a certain JDBC URL.
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- canHandle(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- canonicalized() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- canonicalized() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- canonicalized() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- CanonicalRandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
-
- cartesian(JavaRDDLike<U, ?>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- cartesian(JavaRDDLike<U, ?>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- cartesian(JavaRDDLike<U, ?>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- cartesian(JavaRDDLike<U, ?>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of
elements (a, b) where a is in this and b is in other.
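For instance (a small local sketch; sc is an assumed SparkContext):

    val nums    = sc.parallelize(Seq(1, 2))
    val letters = sc.parallelize(Seq("x", "y"))
    nums.cartesian(letters).collect()
    // yields the four pairs (1,x), (1,y), (2,x), (2,y)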
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- cartesian(RDD<U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of
elements (a, b) where a is in this and b is in other.
- cartesian(RDD<U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- caseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
Whether to do a case-sensitive comparison over the stop words.
- cast(DataType) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type.
- cast(String) - Method in class org.apache.spark.sql.Column
-
Casts the column to a different data type, using the canonical string representation
of the type.
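Both overloads accept the same target types; a short sketch (spark.implicits._ assumed):

    import org.apache.spark.sql.types.IntegerType
    val df = Seq("1", "2").toDF("age")
    df.select($"age".cast(IntegerType))   // DataType overload
    df.select($"age".cast("int"))         // canonical string form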
- Catalog - Class in org.apache.spark.sql.catalog
-
Catalog interface for Spark.
- Catalog() - Constructor for class org.apache.spark.sql.catalog.Catalog
-
- catalog() - Method in class org.apache.spark.sql.SparkSession
-
Interface through which the user may create, drop, alter or query underlying
databases, tables, functions etc.
- catalogString() - Method in class org.apache.spark.sql.types.ArrayType
-
- catalogString() - Static method in class org.apache.spark.sql.types.BinaryType
-
- catalogString() - Static method in class org.apache.spark.sql.types.BooleanType
-
- catalogString() - Static method in class org.apache.spark.sql.types.ByteType
-
- catalogString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- catalogString() - Static method in class org.apache.spark.sql.types.CharType
-
- catalogString() - Method in class org.apache.spark.sql.types.DataType
-
String representation for the type saved in external catalogs.
- catalogString() - Static method in class org.apache.spark.sql.types.DateType
-
- catalogString() - Static method in class org.apache.spark.sql.types.DecimalType
-
- catalogString() - Static method in class org.apache.spark.sql.types.DoubleType
-
- catalogString() - Static method in class org.apache.spark.sql.types.FloatType
-
- catalogString() - Static method in class org.apache.spark.sql.types.HiveStringType
-
- catalogString() - Static method in class org.apache.spark.sql.types.IntegerType
-
- catalogString() - Static method in class org.apache.spark.sql.types.LongType
-
- catalogString() - Method in class org.apache.spark.sql.types.MapType
-
- catalogString() - Static method in class org.apache.spark.sql.types.NullType
-
- catalogString() - Static method in class org.apache.spark.sql.types.NumericType
-
- catalogString() - Static method in class org.apache.spark.sql.types.ObjectType
-
- catalogString() - Static method in class org.apache.spark.sql.types.ShortType
-
- catalogString() - Static method in class org.apache.spark.sql.types.StringType
-
- catalogString() - Method in class org.apache.spark.sql.types.StructType
-
- catalogString() - Static method in class org.apache.spark.sql.types.TimestampType
-
- catalogString() - Static method in class org.apache.spark.sql.types.VarcharType
-
- CatalystScan - Interface in org.apache.spark.sql.sources
-
::Experimental::
An interface for experimenting with a more direct connection to the query planner.
- Categorical() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- categoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- CategoricalSplit - Class in org.apache.spark.ml.tree
-
Split which tests a categorical feature.
- categories() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- categories() - Method in class org.apache.spark.mllib.tree.model.Split
-
- categoryMaps() - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- cause() - Method in exception org.apache.spark.sql.AnalysisException
-
- cause() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
- CausedBy - Class in org.apache.spark.util
-
Extractor Object for pulling out the root cause of an error.
- CausedBy() - Constructor for class org.apache.spark.util.CausedBy
-
- cbrt(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the cube-root of the given value.
- cbrt(String) - Static method in class org.apache.spark.sql.functions
-
Computes the cube-root of the given column.
- ceil(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given value.
- ceil(String) - Static method in class org.apache.spark.sql.functions
-
Computes the ceiling of the given column.
- ceil() - Method in class org.apache.spark.sql.types.Decimal
-
- censorCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- censorCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, T, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- chainl1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<U>>, Function0<Parsers.Parser<Function2<T, U, T>>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- chainr1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Function2<T, U, U>>>, Function2<T, U, U>, U) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- changePrecision(int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Update precision and scale while keeping our value the same, and return true if successful.
- channelRead0(ChannelHandlerContext, byte[]) - Method in class org.apache.spark.api.r.RBackendAuthHandler
-
- CharType - Class in org.apache.spark.sql.types
-
Hive char type.
- CharType(int) - Constructor for class org.apache.spark.sql.types.CharType
-
- checkColumnNameDuplication(Seq<String>, String, boolean) - Static method in class org.apache.spark.sql.util.SchemaUtils
-
Checks if input column names have duplicate identifiers.
- checkColumnType(StructType, String, DataType, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of the required data type.
- checkColumnTypes(StructType, String, Seq<DataType>, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of one of the required data types.
- checkDataColumns(RFormula, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
-
DataFrame column check.
- checkErrors(Either<ArrayBuffer<Throwable>, T>) - Static method in class org.apache.spark.streaming.kafka.KafkaCluster
-
If the result is a Right, return it; otherwise throw a SparkException.
- checkFileExists(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
Check if the file exists at the given path.
- checkHost(String, String) - Static method in class org.apache.spark.util.Utils
-
- checkHostPort(String, String) - Static method in class org.apache.spark.util.Utils
-
- checkNumericType(StructType, String, String) - Static method in class org.apache.spark.ml.util.SchemaUtils
-
Check whether the given schema contains a column of the numeric data type.
- checkpoint() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- checkpoint() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- checkpoint() - Static method in class org.apache.spark.api.java.JavaRDD
-
- checkpoint() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Mark this RDD for checkpointing.
- checkpoint() - Static method in class org.apache.spark.api.r.RRDD
-
- checkpoint() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- checkpoint() - Method in class org.apache.spark.graphx.Graph
-
Mark this Graph for checkpointing.
- checkpoint() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- checkpoint() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- checkpoint() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- checkpoint() - Static method in class org.apache.spark.graphx.VertexRDD
-
- checkpoint() - Method in class org.apache.spark.rdd.HadoopRDD
-
- checkpoint() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- checkpoint() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- checkpoint() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- checkpoint() - Method in class org.apache.spark.rdd.RDD
-
Mark this RDD for checkpointing.
- checkpoint() - Static method in class org.apache.spark.rdd.UnionRDD
-
- checkpoint() - Method in class org.apache.spark.sql.Dataset
-
Eagerly checkpoint a Dataset and return the new Dataset.
- checkpoint(boolean) - Method in class org.apache.spark.sql.Dataset
-
Returns a checkpointed version of this Dataset.
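A minimal sketch of checkpointing (the directory is hypothetical; both APIs require a checkpoint directory to be set first):

    spark.sparkContext.setCheckpointDir("/tmp/checkpoints")
    val rdd = spark.sparkContext.parallelize(1 to 10)
    rdd.checkpoint()                       // lazy: written at the next action
    rdd.count()
    val ds = spark.range(10).checkpoint()  // eager by default for Datasets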
- checkpoint(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- checkpoint(Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Enable periodic checkpointing of RDDs of this DStream.
- checkpoint(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- checkpoint(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- checkpoint(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- checkpoint(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- checkpoint(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- checkpoint(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Sets the context to periodically checkpoint the DStream operations for driver fault-tolerance.
- checkpoint(Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Enable periodic checkpointing of RDDs of this DStream
- checkpoint(String) - Method in class org.apache.spark.streaming.StreamingContext
-
Set the context to periodically checkpoint the DStream operations for driver
fault-tolerance.
- Checkpointed() - Static method in class org.apache.spark.rdd.CheckpointState
-
- CheckpointingInProgress() - Static method in class org.apache.spark.rdd.CheckpointState
-
- checkpointInterval() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- checkpointInterval() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- checkpointInterval() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- checkpointInterval() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.clustering.LDA
-
- checkpointInterval() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- checkpointInterval() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- checkpointInterval() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- checkpointInterval() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- checkpointInterval() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- checkpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- CheckpointReader - Class in org.apache.spark.streaming
-
- CheckpointReader() - Constructor for class org.apache.spark.streaming.CheckpointReader
-
- CheckpointState - Class in org.apache.spark.rdd
-
Enumeration to manage state transitions of an RDD through checkpointing.
- CheckpointState() - Constructor for class org.apache.spark.rdd.CheckpointState
-
- checkState(boolean, Function0<String>) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
- child() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- child() - Method in class org.apache.spark.sql.sources.Not
-
- CHILD_CONNECTION_TIMEOUT - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Maximum time (in ms) to wait for a child process to connect back to the launcher server when using start().
- CHILD_PROCESS_LOGGER_NAME - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Logger name to use when launching a child process.
- children() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- children() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- children() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- childrenResolved() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- childrenResolved() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- chiSqFunc() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.Method
-
- ChiSqSelector - Class in org.apache.spark.ml.feature
-
Chi-Squared feature selection, which selects categorical features to use for predicting a
categorical label.
- ChiSqSelector(String) - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
-
- ChiSqSelector() - Constructor for class org.apache.spark.ml.feature.ChiSqSelector
-
- ChiSqSelector - Class in org.apache.spark.mllib.feature
-
Creates a ChiSquared feature selector.
- ChiSqSelector() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
-
- ChiSqSelector(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelector
-
This is equivalent to calling this() followed by setNumTopFeatures(numTopFeatures).
- ChiSqSelectorModel - Class in org.apache.spark.ml.feature
-
- ChiSqSelectorModel - Class in org.apache.spark.mllib.feature
-
Chi Squared selector model.
- ChiSqSelectorModel(int[]) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
- ChiSqSelectorModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.feature
-
- ChiSqSelectorModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.feature
-
Model data for import/export
- chiSqTest(Vector, Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's chi-squared goodness of fit test of the observed data against the
expected distribution.
- chiSqTest(Vector) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's chi-squared goodness of fit test of the observed data against the uniform
distribution, with each category having an expected frequency of 1 / observed.size.
- chiSqTest(Matrix) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's independence test on the input contingency matrix, which cannot contain
negative entries or columns or rows that sum up to 0.
- chiSqTest(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Conduct Pearson's independence test for every feature against the label across the input RDD.
- chiSqTest(JavaRDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of chiSqTest()
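A short sketch of the two single-machine variants; the counts below are made up for illustration.

    import org.apache.spark.mllib.linalg.{Matrices, Vectors}
    import org.apache.spark.mllib.stat.Statistics

    // Goodness of fit against the uniform distribution.
    val observed = Vectors.dense(4.0, 6.0, 5.0)
    val gof = Statistics.chiSqTest(observed)

    // Independence test on a 3x2 contingency matrix (column-major data).
    val mat = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
    val indep = Statistics.chiSqTest(mat)
    println(gof.pValue + " " + indep.pValue)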
- ChiSqTest - Class in org.apache.spark.mllib.stat.test
-
Conduct the chi-squared test for the input RDDs using the specified method.
- ChiSqTest() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest
-
- ChiSqTest.Method - Class in org.apache.spark.mllib.stat.test
-
param: name String name for the method.
- ChiSqTest.Method$ - Class in org.apache.spark.mllib.stat.test
-
- ChiSqTest.NullHypothesis$ - Class in org.apache.spark.mllib.stat.test
-
- ChiSqTestResult - Class in org.apache.spark.mllib.stat.test
-
Object containing the test results for the chi-squared hypothesis test.
- chiSquared(Vector, Vector, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
- chiSquaredFeatures(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
Conduct Pearson's independence test for each feature against the label across the input RDD.
- chiSquaredMatrix(Matrix, String) - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
- ChiSquareTest - Class in org.apache.spark.ml.stat
-
:: Experimental ::
- ChiSquareTest() - Constructor for class org.apache.spark.ml.stat.ChiSquareTest
-
- chmod700(File) - Static method in class org.apache.spark.util.Utils
-
JDK equivalent of chmod 700 file.
- CholeskyDecomposition - Class in org.apache.spark.mllib.linalg
-
Compute Cholesky decomposition.
- CholeskyDecomposition() - Constructor for class org.apache.spark.mllib.linalg.CholeskyDecomposition
-
- classForName(String) - Static method in class org.apache.spark.util.Utils
-
Preferred alternative to Class.forName(className)
- Classification() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- ClassificationModel<FeaturesType,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
-
:: DeveloperApi ::
- ClassificationModel() - Constructor for class org.apache.spark.ml.classification.ClassificationModel
-
- ClassificationModel - Interface in org.apache.spark.mllib.classification
-
Represents a classification model that predicts to which of a set of categories an example
belongs.
- Classifier<FeaturesType,E extends Classifier<FeaturesType,E,M>,M extends ClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
-
:: DeveloperApi ::
- Classifier() - Constructor for class org.apache.spark.ml.classification.Classifier
-
- classifier() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- classifier() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- classIsLoadable(String) - Static method in class org.apache.spark.util.Utils
-
Determines whether the provided class is loadable in the current thread.
- className() - Method in class org.apache.spark.ExceptionFailure
-
- className() - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Unique class name for identifying JSON object encoded by this class.
- className() - Method in class org.apache.spark.sql.catalog.Function
-
- classpathEntries() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
-
- classpathEntries() - Method in class org.apache.spark.ui.env.EnvironmentListener
-
Deprecated.
- classTag() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
- classTag() - Method in class org.apache.spark.api.java.JavaPairRDD
-
- classTag() - Method in class org.apache.spark.api.java.JavaRDD
-
- classTag() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
- classTag() - Method in class org.apache.spark.sql.Dataset
-
- classTag() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- classTag() - Method in interface org.apache.spark.storage.memory.MemoryEntry
-
- classTag() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
- classTag() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- classTag() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- classTag() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- classTag() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- clean(long, boolean) - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Clean all the records that are older than the threshold time.
- clean(Object, boolean, boolean) - Static method in class org.apache.spark.util.ClosureCleaner
-
Clean the given closure in place.
- CleanAccum - Class in org.apache.spark
-
- CleanAccum(long) - Constructor for class org.apache.spark.CleanAccum
-
- CleanBroadcast - Class in org.apache.spark
-
- CleanBroadcast(long) - Constructor for class org.apache.spark.CleanBroadcast
-
- CleanCheckpoint - Class in org.apache.spark
-
- CleanCheckpoint(int) - Constructor for class org.apache.spark.CleanCheckpoint
-
- CleanRDD - Class in org.apache.spark
-
- CleanRDD(int) - Constructor for class org.apache.spark.CleanRDD
-
- CleanShuffle - Class in org.apache.spark
-
- CleanShuffle(int) - Constructor for class org.apache.spark.CleanShuffle
-
- CleanupTask - Interface in org.apache.spark
-
Classes that represent cleaning tasks.
- CleanupTaskWeakReference - Class in org.apache.spark
-
A WeakReference associated with a CleanupTask.
- CleanupTaskWeakReference(CleanupTask, Object, ReferenceQueue<Object>) - Constructor for class org.apache.spark.CleanupTaskWeakReference
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.LDA
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.DCT
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.IDF
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Imputer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Interaction
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.NGram
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.PCA
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.RFormula
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- clear(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Clears the user-supplied value for the input param.
- clear(Param<?>) - Static method in class org.apache.spark.ml.Pipeline
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.PipelineModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- clear(Param<?>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- clear() - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
- clear() - Static method in class org.apache.spark.util.AccumulatorContext
-
- clearActive() - Static method in class org.apache.spark.sql.SQLContext
-
- clearActiveSession() - Static method in class org.apache.spark.sql.SparkSession
-
Clears the active SparkSession for the current thread.
- clearCache() - Method in class org.apache.spark.sql.catalog.Catalog
-
Removes all cached tables from the in-memory cache.
- clearCache() - Method in class org.apache.spark.sql.SQLContext
-
Removes all cached tables from the in-memory cache.
- clearCallSite() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Pass-through to SparkContext.setCallSite.
- clearCallSite() - Method in class org.apache.spark.SparkContext
-
Clear the thread-local property for overriding the call sites
of actions and RDDs.
- clearDefaultSession() - Static method in class org.apache.spark.sql.SparkSession
-
Clears the default SparkSession that is returned by the builder.
- clearDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
-
- clearDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
-
- clearDependencies() - Method in class org.apache.spark.rdd.UnionRDD
-
- clearJobGroup() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Clear the current thread's job group ID and its description.
- clearJobGroup() - Method in class org.apache.spark.SparkContext
-
Clear the current thread's job group ID and its description.
- clearThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Clears the threshold so that predict will output raw prediction scores.
- clearThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
-
Clears the threshold so that predict will output raw prediction scores.
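A sketch assuming model is a trained mllib LogisticRegressionModel (or SVMModel) and test: RDD[LabeledPoint]:

    // After clearThreshold(), predict() returns raw scores rather than 0/1 labels,
    // which is what e.g. BinaryClassificationMetrics expects.
    model.clearThreshold()
    val scoreAndLabel = test.map(p => (model.predict(p.features), p.label))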
- CLogLog$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
-
- clone() - Method in class org.apache.spark.SparkConf
-
Copy this object
- clone() - Method in class org.apache.spark.sql.ExperimentalMethods
-
- clone() - Method in class org.apache.spark.sql.types.Decimal
-
- clone() - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
Get an identical copy of this listener manager.
- clone() - Method in class org.apache.spark.storage.StorageLevel
-
- clone() - Method in class org.apache.spark.util.random.BernoulliCellSampler
-
- clone() - Method in class org.apache.spark.util.random.BernoulliSampler
-
- clone() - Method in class org.apache.spark.util.random.PoissonSampler
-
- clone() - Method in interface org.apache.spark.util.random.RandomSampler
-
Return a copy of the RandomSampler object.
- clone(T, SerializerInstance, ClassTag<T>) - Static method in class org.apache.spark.util.Utils
-
Clone an object using a Spark serializer.
- cloneComplement() - Method in class org.apache.spark.util.random.BernoulliCellSampler
-
Return a sampler that is the complement of the current sampler's specified range.
- close() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- close() - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- close() - Method in class org.apache.spark.io.SnappyOutputStreamWrapper
-
- close() - Method in class org.apache.spark.serializer.DeserializationStream
-
- close() - Method in class org.apache.spark.serializer.SerializationStream
-
- close(Throwable) - Method in class org.apache.spark.sql.ForeachWriter
-
Called when the executor stops processing one partition of new data.
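A minimal ForeachWriter sketch showing the open/process/close lifecycle; the println sink is purely illustrative.

    import org.apache.spark.sql.ForeachWriter

    val writer = new ForeachWriter[String] {
      // Return true to process this partition; acquire connections here.
      def open(partitionId: Long, version: Long): Boolean = true
      def process(value: String): Unit = println(value)
      // errorOrNull is null when the partition completed successfully.
      def close(errorOrNull: Throwable): Unit = ()
    }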
- close() - Method in class org.apache.spark.sql.hive.execution.HiveOutputWriter
-
- close() - Method in class org.apache.spark.sql.SparkSession
-
Synonym for stop().
- close() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- close() - Method in class org.apache.spark.storage.CountingWritableChannel
-
- close() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
-
- close() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
- close() - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Close this log and release any resources.
- ClosureCleaner - Class in org.apache.spark.util
-
A cleaner that renders closures serializable, when it can be done safely.
- ClosureCleaner() - Constructor for class org.apache.spark.util.ClosureCleaner
-
- closureSerializer() - Method in class org.apache.spark.SparkEnv
-
- cls() - Method in class org.apache.spark.sql.types.ObjectType
-
- cls() - Method in class org.apache.spark.util.MethodIdentifier
-
- clsTag() - Method in interface org.apache.spark.sql.Encoder
-
A ClassTag that can be used to construct an Array to contain a collection of T.
- cluster() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
-
Cluster centers of the transformed data.
- cluster() - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment
-
- clusterCenters() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- clusterCenters() - Method in class org.apache.spark.ml.clustering.KMeansModel
-
- clusterCenters() - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Leaf cluster centers.
- clusterCenters() - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
- clusterCenters() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
-
- ClusteringSummary - Class in org.apache.spark.ml.clustering
-
:: Experimental ::
Summary of clustering algorithms.
- clusterSizes() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
-
Size of (number of data points in) each cluster.
- clusterWeights() - Method in class org.apache.spark.mllib.clustering.StreamingKMeansModel
-
- cn() - Method in class org.apache.spark.mllib.feature.VocabWord
-
- coalesce(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int, boolean) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- coalesce(int, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Runs the packing algorithm and returns an array of PartitionGroups that, if possible, are load-balanced and grouped by locality.
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- coalesce(int, RDD<?>) - Method in interface org.apache.spark.rdd.PartitionCoalescer
-
Coalesce the partitions of the given RDD.
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD that is reduced into numPartitions partitions.
- coalesce(int, boolean, Option<PartitionCoalescer>, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- coalesce(int) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset that has exactly numPartitions partitions, when fewer partitions are requested.
- coalesce(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the first column that is not null, or null if all inputs are null.
- coalesce(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the first column that is not null, or null if all inputs are null.
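The two coalesce entries above are unrelated; a sketch of each, assuming spark: SparkSession and a df with columns a and b:

    import org.apache.spark.sql.functions.{coalesce, col, lit}

    // Dataset.coalesce: shrink to 10 partitions without a full shuffle.
    val narrowed = spark.range(1000).coalesce(10)

    // functions.coalesce: per row, the first non-null of a, b, then 0.
    val firstNonNull = df.select(coalesce(col("a"), col("b"), lit(0)))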
- coalesce$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- coalesce$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- coalesce$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- coalesce$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.api.r.RRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- coalesce$default$3() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- coalesce$default$3() - Static method in class org.apache.spark.graphx.VertexRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- coalesce$default$3() - Static method in class org.apache.spark.rdd.UnionRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.api.r.RRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- coalesce$default$4(int, boolean, Option<PartitionCoalescer>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- CoarseGrainedClusterMessages - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages
-
- CoarseGrainedClusterMessages.AddWebUIFilter - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.AddWebUIFilter$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.GetExecutorLossReason - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.GetExecutorLossReason$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.KillExecutors - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.KillExecutors$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.KillExecutorsOnHost - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.KillExecutorsOnHost$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.KillTask - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.KillTask$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.LaunchTask - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.LaunchTask$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterClusterManager - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterClusterManager$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisteredExecutor$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterExecutor - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterExecutor$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterExecutorFailed - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterExecutorFailed$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RegisterExecutorResponse - Interface in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RemoveExecutor - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RemoveExecutor$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RequestExecutors - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RequestExecutors$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.RetrieveSparkAppConfig$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.ReviveOffers$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.SetupDriver - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.SetupDriver$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.Shutdown$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.SparkAppConfig - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.SparkAppConfig$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.StatusUpdate - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.StatusUpdate$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.StopDriver$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.StopExecutor$ - Class in org.apache.spark.scheduler.cluster
-
- CoarseGrainedClusterMessages.StopExecutors$ - Class in org.apache.spark.scheduler.cluster
-
- code() - Method in class org.apache.spark.mllib.feature.VocabWord
-
- CodegenMetrics - Class in org.apache.spark.metrics.source
-
:: Experimental ::
Metrics for code generation.
- CodegenMetrics() - Constructor for class org.apache.spark.metrics.source.CodegenMetrics
-
- codeLen() - Method in class org.apache.spark.mllib.feature.VocabWord
-
- coefficientMatrix() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- coefficients() - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
- coefficients() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
A vector of model coefficients for "binomial" logistic regression.
- coefficients() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- coefficients() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- coefficients() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
-
Standard error of estimated coefficients and intercept.
- coefficientStandardErrors() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Standard error of estimated coefficients and intercept.
- cogroup(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
- cogroup(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
- cogroup(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
- cogroup(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
- cogroup(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
- cogroup(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
- cogroup(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other, return a resulting RDD that contains a tuple with the list of values for that key in this as well as other.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
- cogroup(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
For each key k in this or other1 or other2 or other3, return a resulting RDD that contains a tuple with the list of values for that key in this, other1, other2 and other3.
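A small sketch of the two-RDD form, assuming sc: SparkContext:

    val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
    val right = sc.parallelize(Seq((1, "x"), (1, "y")))
    // For key 1: (Iterable(a), Iterable(x, y)); for key 2: (Iterable(b), Iterable()).
    val grouped = left.cogroup(right)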
- cogroup(KeyValueGroupedDataset<K, U>, Function3<K, Iterator<V>, Iterator<U>, TraversableOnce<R>>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Scala-specific)
Applies the given function to each cogrouped data.
- cogroup(KeyValueGroupedDataset<K, U>, CoGroupFunction<K, V, U, R>, Encoder<R>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Java-specific)
Applies the given function to each cogrouped data.
- cogroup(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
- cogroup(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
- cogroup(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
- cogroup(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- cogroup(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- cogroup(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- cogroup(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- cogroup(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- cogroup(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- cogroup(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
- cogroup(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
- cogroup(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
- CoGroupedRDD<K> - Class in org.apache.spark.rdd
-
:: DeveloperApi ::
An RDD that cogroups its parents.
- CoGroupedRDD(Seq<RDD<? extends Product2<K, ?>>>, Partitioner, ClassTag<K>) - Constructor for class org.apache.spark.rdd.CoGroupedRDD
-
- CoGroupFunction<K,V1,V2,R> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more output records from each grouping key and its values from 2
Datasets.
- col(String) - Method in class org.apache.spark.sql.Dataset
-
Selects a column based on the column name and returns it as a Column.
- col(String) - Static method in class org.apache.spark.sql.functions
-
Returns a Column based on the given column name.
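A sketch assuming a df with columns name and age; Dataset.col resolves against that specific Dataset, while functions.col stays unresolved until use.

    import org.apache.spark.sql.functions.col

    val adults = df.filter(col("age") > 21).select(col("name"))
    val same   = df.filter(df.col("age") > 21)  // resolved against df specifically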
- coldStartStrategy() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- coldStartStrategy() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- colIter() - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- colIter() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns an iterator of column vectors.
- colIter() - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- colIter() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- colIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Returns an iterator of column vectors.
- colIter() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- collect() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- collect() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- collect() - Static method in class org.apache.spark.api.java.JavaRDD
-
- collect() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an array that contains all of the elements in this RDD.
- collect() - Static method in class org.apache.spark.api.r.RRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- collect() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- collect() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- collect() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- collect() - Static method in class org.apache.spark.graphx.VertexRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- collect() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- collect() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- collect() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- collect() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- collect() - Method in class org.apache.spark.rdd.RDD
-
Return an array that contains all of the elements in this RDD.
- collect(PartialFunction<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD that contains all matching values by applying f.
- collect() - Static method in class org.apache.spark.rdd.UnionRDD
-
- collect(PartialFunction<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- collect() - Method in class org.apache.spark.sql.Dataset
-
Returns an array that contains all rows in this Dataset.
- collect(PartialFunction<BaseType, B>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- collect(PartialFunction<BaseType, B>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- collect(PartialFunction<BaseType, B>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- collect(PartialFunction<A, B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- collect_list(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a list of objects with duplicates.
- collect_list(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a list of objects with duplicates.
- collect_set(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a set of objects with duplicate elements eliminated.
- collect_set(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns a set of objects with duplicate elements eliminated.
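A sketch assuming a df with columns dept and name:

    import org.apache.spark.sql.functions.{collect_list, collect_set}

    // collect_list keeps duplicates; collect_set drops them.
    val byDept = df.groupBy("dept")
      .agg(collect_list("name"), collect_set("name"))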
- collectAsList() - Method in class org.apache.spark.sql.Dataset
-
Returns a Java list that contains all rows in this Dataset.
- collectAsMap() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return the key-value pairs in this RDD to the master as a Map.
- collectAsMap() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return the key-value pairs in this RDD to the master as a Map.
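A one-line sketch; this is only safe when the result fits in driver memory, and duplicate keys keep only one value.

    val m = sc.parallelize(Seq((1, "a"), (2, "b"))).collectAsMap()  // Map(1 -> "a", 2 -> "b")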
- collectAsync() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- collectAsync() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- collectAsync() - Static method in class org.apache.spark.api.java.JavaRDD
-
- collectAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of collect, which returns a future for retrieving an array containing all of the elements in this RDD.
- collectAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Returns a future for retrieving all elements of this RDD.
- collectEdges(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
-
Returns an RDD that contains for each vertex v its local edges,
i.e., the edges that are incident on v, in the user-specified direction.
- collectFirst(PartialFunction<BaseType, B>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- collectFirst(PartialFunction<BaseType, B>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- collectFirst(PartialFunction<BaseType, B>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- collectFirst(PartialFunction<A, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- collectionAccumulator() - Method in class org.apache.spark.SparkContext
-
Create and register a CollectionAccumulator, which starts with an empty list and accumulates inputs by adding them into the list.
- collectionAccumulator(String) - Method in class org.apache.spark.SparkContext
-
Create and register a CollectionAccumulator, which starts with an empty list and accumulates inputs by adding them into the list.
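A sketch assuming sc: SparkContext and lines: Seq[String]; the accumulator name is arbitrary.

    val bad = sc.collectionAccumulator[String]("badRecords")
    sc.parallelize(lines).foreach(l => if (l.isEmpty) bad.add(l))
    // bad.value is a java.util.List[String] readable on the driver.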
- CollectionAccumulator<T> - Class in org.apache.spark.util
-
- CollectionAccumulator() - Constructor for class org.apache.spark.util.CollectionAccumulator
-
- CollectionsUtils - Class in org.apache.spark.util
-
- CollectionsUtils() - Constructor for class org.apache.spark.util.CollectionsUtils
-
- collectLeaves() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- collectLeaves() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- collectLeaves() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- collectNeighborIds(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
-
Collect the neighbor vertex ids for each vertex.
- collectNeighbors(EdgeDirection) - Method in class org.apache.spark.graphx.GraphOps
-
Collect the neighbor vertex attributes for each vertex.
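A sketch assuming graph: Graph[VD, ED]:

    import org.apache.spark.graphx.EdgeDirection

    // One Array[VertexId] of neighbors per vertex, over edges in either direction.
    val neighborIds = graph.collectNeighborIds(EdgeDirection.Either)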
- collectPartitions(int[]) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- collectPartitions(int[]) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- collectPartitions(int[]) - Static method in class org.apache.spark.api.java.JavaRDD
-
- collectPartitions(int[]) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an array that contains all of the elements in a specific partition of this RDD.
- colPtrs() - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- colPtrs() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- colsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- colStats(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Computes column-wise summary statistics for the input RDD[Vector].
- Column - Class in org.apache.spark.sql.catalog
-
A column in Spark, as returned by
listColumns
method in
Catalog
.
- Column(String, String, String, boolean, boolean, boolean) - Constructor for class org.apache.spark.sql.catalog.Column
-
- Column - Class in org.apache.spark.sql
-
A column that will be computed based on the data in a DataFrame
.
- Column(Expression) - Constructor for class org.apache.spark.sql.Column
-
- Column(String) - Constructor for class org.apache.spark.sql.Column
-
- column(String) - Static method in class org.apache.spark.sql.functions
-
Returns a
Column
based on the given column name.
- ColumnName - Class in org.apache.spark.sql
-
A convenient class used for constructing schema.
- ColumnName(String) - Constructor for class org.apache.spark.sql.ColumnName
-
- ColumnPruner - Class in org.apache.spark.ml.feature
-
Utility transformer for removing temporary columns from a DataFrame.
- ColumnPruner(String, Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner
-
- ColumnPruner(Set<String>) - Constructor for class org.apache.spark.ml.feature.ColumnPruner
-
- columns() - Method in class org.apache.spark.sql.Dataset
-
Returns all column names as an array.
- columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Compute all cosine similarities between columns of this matrix using the brute-force
approach of computing normalized dot products.
- columnSimilarities() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Compute all cosine similarities between columns of this matrix using the brute-force
approach of computing normalized dot products.
- columnSimilarities(double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Compute similarities between columns of this matrix using a sampling approach.
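A sketch assuming rows: RDD[Vector] of equal-length vectors; the 0.1 threshold is an arbitrary illustrative value.

    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat = new RowMatrix(rows)
    val exact  = mat.columnSimilarities()     // brute-force cosine similarities
    val approx = mat.columnSimilarities(0.1)  // sampling approach with threshold 0.1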
- columnsToPrune() - Method in class org.apache.spark.ml.feature.ColumnPruner
-
- combinations(int) - Static method in class org.apache.spark.sql.types.StructType
-
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Generic function to combine the elements for each key using a custom set of aggregation
functions.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Generic function to combine the elements for each key using a custom set of aggregation
functions.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Simplified version of combineByKey that hash-partitions the output RDD and uses map-side
aggregation.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Simplified version of combineByKey that hash-partitions the resulting RDD using the existing
partitioner/parallelism level and using map-side aggregation.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Generic function to combine the elements for each key using a custom set of aggregation
functions.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the
existing partitioner/parallelism level.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Combine elements of each key in DStream's RDDs using custom functions.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Combine elements of each key in DStream's RDDs using custom functions.
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- combineByKey(Function<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- combineByKey(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, ClassTag<C>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Combine elements of each key in DStream's RDDs using custom functions.
- combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, Partitioner, boolean, Serializer, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
:: Experimental ::
Generic function to combine the elements for each key using a custom set of aggregation
functions.
- combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, int, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
:: Experimental ::
Simplified version of combineByKeyWithClassTag that hash-partitions the output RDD.
- combineByKeyWithClassTag(Function1<V, C>, Function2<C, V, C>, Function2<C, C, C>, ClassTag<C>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
:: Experimental ::
Simplified version of combineByKeyWithClassTag that hash-partitions the resulting RDD using the
existing partitioner/parallelism level.
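A per-key average, the classic combineByKey example, assuming sc: SparkContext:

    val pairs = sc.parallelize(Seq(("a", 1.0), ("a", 3.0), ("b", 2.0)))
    val avg = pairs.combineByKey(
      (v: Double) => (v, 1),                                             // createCombiner
      (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1),       // mergeValue
      (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2) // mergeCombiners
    ).mapValues { case (sum, n) => sum / n }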
- combineCombinersByKey(Iterator<? extends Product2<K, C>>, TaskContext) - Method in class org.apache.spark.Aggregator
-
- combineValuesByKey(Iterator<? extends Product2<K, V>>, TaskContext) - Method in class org.apache.spark.Aggregator
-
- commit(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- commitJob(JobContext, Seq<FileCommitProtocol.TaskCommitMessage>) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Commits a job after the writes succeed.
- commitJob(JobContext, Seq<FileCommitProtocol.TaskCommitMessage>) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
-
- commitTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Commits a task after the writes succeed.
- commitTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
-
- commitTask(OutputCommitter, TaskAttemptContext, int, int) - Static method in class org.apache.spark.mapred.SparkHadoopMapRedUtil
-
Commits a task output.
- commonHeaderNodes() - Static method in class org.apache.spark.ui.UIUtils
-
- companion() - Static method in class org.apache.spark.sql.types.StructType
-
- compare(PartitionGroup, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- compare(Option<PartitionGroup>, Option<PartitionGroup>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- compare(Decimal) - Method in class org.apache.spark.sql.types.Decimal
-
- compare(RDDInfo) - Method in class org.apache.spark.storage.RDDInfo
-
- compareTo(A) - Static method in class org.apache.spark.sql.types.Decimal
-
- compareTo(A) - Static method in class org.apache.spark.storage.RDDInfo
-
- compareTo(SparkShutdownHook) - Method in class org.apache.spark.util.SparkShutdownHook
-
- Complete() - Static method in class org.apache.spark.sql.streaming.OutputMode
-
OutputMode in which all the rows in the streaming DataFrame/Dataset will be written
to the sink every time there are some updates.
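A minimal Structured Streaming sketch using Complete mode, assuming a socket source on localhost:9999 (host and port are placeholders):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.OutputMode

    val spark = SparkSession.builder().master("local[*]").appName("complete-mode").getOrCreate()
    import spark.implicits._

    val lines = spark.readStream.format("socket")
      .option("host", "localhost").option("port", 9999).load()
    // Running word counts; Complete mode rewrites the full result table on every trigger
    val counts = lines.as[String].flatMap(_.split(" ")).groupBy("value").count()

    val query = counts.writeStream
      .outputMode(OutputMode.Complete())   // all rows, every time there are updates
      .format("console")
      .start()
    query.awaitTermination()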
- completed() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- completedIndices() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- completedJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- completedStageIndices() - Method in class org.apache.spark.ui.jobs.UIData.JobUIData
-
- completedStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- completedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- completionTime() - Method in class org.apache.spark.scheduler.StageInfo
-
Time when all tasks in the stage completed or when the stage was cancelled.
- completionTime() - Method in class org.apache.spark.status.api.v1.JobData
-
- completionTime() - Method in class org.apache.spark.status.api.v1.StageData
-
- completionTime() - Method in class org.apache.spark.ui.jobs.UIData.JobUIData
-
- ComplexFutureAction<T> - Class in org.apache.spark
-
A FutureAction for actions that could trigger multiple Spark jobs.
- ComplexFutureAction(Function1<JobSubmitter, Future<T>>) - Constructor for class org.apache.spark.ComplexFutureAction
-
- compose(Function1<A, T1>) - Static method in class org.apache.spark.sql.types.StructType
-
- compressed() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- compressed() - Static method in class org.apache.spark.ml.linalg.DenseVector
-
- compressed() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns a matrix in dense column major, dense row major, sparse row major, or sparse column
major format, whichever uses less storage.
- compressed() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- compressed() - Static method in class org.apache.spark.ml.linalg.SparseVector
-
- compressed() - Method in interface org.apache.spark.ml.linalg.Vector
-
Returns a vector in either dense or sparse format, whichever uses less storage.
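A quick sketch of compressed() picking the cheaper representation, using the spark.ml linalg API:

    import org.apache.spark.ml.linalg.Vectors

    // Mostly zeros: the sparse encoding is smaller, so compressed() returns a SparseVector
    val dense = Vectors.dense(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0)
    println(dense.compressed.getClass.getSimpleName)  // SparseVector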
- compressed() - Static method in class org.apache.spark.mllib.linalg.DenseVector
-
- compressed() - Static method in class org.apache.spark.mllib.linalg.SparseVector
-
- compressed() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Returns a vector in either dense or sparse format, whichever uses less storage.
- compressedColMajor() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- compressedColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns a matrix in dense or sparse column major format, whichever uses less storage.
- compressedColMajor() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- compressedInputStream(InputStream) - Method in interface org.apache.spark.io.CompressionCodec
-
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
-
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
-
- compressedInputStream(InputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
-
- compressedOutputStream(OutputStream) - Method in interface org.apache.spark.io.CompressionCodec
-
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZ4CompressionCodec
-
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.LZFCompressionCodec
-
- compressedOutputStream(OutputStream) - Method in class org.apache.spark.io.SnappyCompressionCodec
-
- compressedRowMajor() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- compressedRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns a matrix in dense or sparse row major format, whichever uses less storage.
- compressedRowMajor() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- CompressionCodec - Interface in org.apache.spark.io
-
:: DeveloperApi ::
CompressionCodec allows the customization of choosing different compression implementations
to be used in block storage.
- compute(Partition, TaskContext) - Method in class org.apache.spark.api.r.BaseRRDD
-
- compute(Partition, TaskContext) - Static method in class org.apache.spark.api.r.RRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.EdgeRDD
-
- compute(Partition, TaskContext) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- compute(Partition, TaskContext) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.graphx.VertexRDD
-
Provides the RDD[(VertexId, VD)] equivalent output.
- compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
-
Compute the gradient and loss given the features of a single data point.
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.Gradient
-
Compute the gradient and loss given the features of a single data point,
add the gradient to a provided vector to avoid creating new objects, and return loss.
- compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
-
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.HingeGradient
-
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.L1Updater
-
- compute(Vector, double, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
-
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LeastSquaresGradient
-
- compute(Vector, double, Vector, Vector) - Method in class org.apache.spark.mllib.optimization.LogisticGradient
-
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SimpleUpdater
-
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.SquaredL2Updater
-
- compute(Vector, Vector, double, int, double) - Method in class org.apache.spark.mllib.optimization.Updater
-
Compute an updated value for weights given the gradient, stepSize, iteration number and
regularization parameter.
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.CoGroupedRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.HadoopRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.JdbcRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.NewHadoopRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.PartitionPruningRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.RDD
-
:: DeveloperApi ::
Implemented by subclasses to compute a given partition.
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.ShuffledRDD
-
- compute(Partition, TaskContext) - Method in class org.apache.spark.rdd.UnionRDD
-
- compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Generate an RDD for the given duration.
- compute(Time) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- compute(Time) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Method that generates an RDD for the given Duration.
- compute(Time) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- compute(Time) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- compute(Time) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- compute(Time) - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
-
- compute(Time) - Method in class org.apache.spark.streaming.dstream.DStream
-
Method that generates an RDD for the given time.
- compute(Time) - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
-
- computeColumnSummaryStatistics() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes column-wise summary statistics.
- computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
Compute the Pearson correlation for two datasets.
- computeCorrelation(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
-
Compute Spearman's correlation for two datasets.
- computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
Compute the Pearson correlation matrix S for the input matrix, where S(i, j) is the correlation between columns i and j.
- computeCorrelationMatrix(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
-
Compute Spearman's correlation matrix S for the input matrix, where S(i, j) is the correlation between columns i and j.
- computeCorrelationMatrixFromCovariance(Matrix) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
Compute the Pearson correlation matrix from the covariance matrix.
- computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
- computeCorrelationWithMatrixImpl(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
-
- computeCost(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
Computes the sum of squared distances between the input points and their corresponding cluster
centers.
- computeCost(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeansModel
-
Return the K-means cost (sum of squared distances of points to their nearest center) for this
model on the given data.
- computeCost(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Computes the squared distance between the input point and the cluster center it belongs to.
- computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Computes the sum of squared distances between the input points and their corresponding cluster
centers.
- computeCost(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Java-friendly version of computeCost().
- computeCost(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Return the K-means cost (sum of squared distances of points to their nearest center) for this
model on the given data.
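A minimal sketch of the spark.ml KMeansModel.computeCost variant, assuming an existing SparkSession named spark; the toy points are illustrative:

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.linalg.Vectors

    val data = spark.createDataFrame(Seq(
      Tuple1(Vectors.dense(0.0, 0.0)), Tuple1(Vectors.dense(1.0, 1.0)),
      Tuple1(Vectors.dense(9.0, 8.0)), Tuple1(Vectors.dense(8.0, 9.0))
    )).toDF("features")

    val model = new KMeans().setK(2).setSeed(1L).fit(data)
    // Sum of squared distances from each point to its nearest cluster center
    println(model.computeCost(data))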
- computeCovariance() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the covariance matrix, treating each row as an observation.
- computeError(RDD<LabeledPoint>, DecisionTreeRegressionModel[], double[], Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to calculate error of the base learner for the gradient boosting calculation.
- computeError(org.apache.spark.mllib.tree.model.TreeEnsembleModel, RDD<LabeledPoint>) - Method in interface org.apache.spark.mllib.tree.loss.Loss
-
Method to calculate error of the base learner for the gradient boosting calculation.
- computeError(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
-
Method to calculate loss when the predictions are already known.
- computeFractionForSampleSize(int, long, boolean) - Static method in class org.apache.spark.util.random.SamplingUtils
-
Returns a sampling rate that guarantees a sample of size greater than or equal to
sampleSizeLowerBound 99.99% of the time.
- computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Computes the Gramian matrix A^T A.
- computeGramianMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the Gramian matrix A^T A.
- computeInitialPredictionAndError(RDD<LabeledPoint>, double, DecisionTreeRegressionModel, Loss) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Compute the initial predictions and errors for a dataset for the first
iteration of gradient boosting.
- computeInitialPredictionAndError(RDD<LabeledPoint>, double, DecisionTreeModel, Loss) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
:: DeveloperApi ::
Compute the initial predictions and errors for a dataset for the first
iteration of gradient boosting.
- computePreferredLocations(Seq<InputFormatInfo>) - Static method in class org.apache.spark.scheduler.InputFormatInfo
-
Computes the preferred locations based on input(s) and returns a location-to-block map.
- computePrincipalComponents(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the top k principal components only.
- computePrincipalComponentsAndExplainedVariance(int) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the top k principal components and a vector of proportions of
variance explained by each principal component.
- computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Computes the singular value decomposition of this IndexedRowMatrix.
- computeSVD(int, boolean, double) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Computes the singular value decomposition of this matrix.
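A short RowMatrix sketch covering computeSVD and computePrincipalComponents, assuming an existing SparkContext named sc:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val mat = new RowMatrix(sc.parallelize(Seq(
      Vectors.dense(1.0, 2.0, 3.0),
      Vectors.dense(4.0, 5.0, 6.0),
      Vectors.dense(7.0, 8.0, 9.0)
    )))

    val svd = mat.computeSVD(2, computeU = true)  // top-2 singular values/vectors
    println(svd.s)                                // singular values
    val pc = mat.computePrincipalComponents(2)    // top-2 principal components (local matrix)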
- computeThresholdByKey(Map<K, AcceptanceResult>, Map<K, Object>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Given the result returned by getCounts, determine the threshold for accepting items to
generate the exact sample size.
- concat(Column...) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input string columns together into a single string column.
- concat(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input string columns together into a single string column.
- concat_ws(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input string columns together into a single string column,
using the given separator.
- concat_ws(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Concatenates multiple input string columns together into a single string column,
using the given separator.
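A small sketch of concat and concat_ws, assuming an existing SparkSession named spark; the sample rows are illustrative:

    import org.apache.spark.sql.functions.{concat, concat_ws, lit}
    import spark.implicits._

    val df = Seq(("Ada", "Lovelace"), ("Alan", "Turing")).toDF("first", "last")
    df.select(
      concat($"first", lit(" "), $"last").as("full_name"),  // no separator handling
      concat_ws(", ", $"last", $"first").as("last_first")   // separator between inputs
    ).show()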
- Conf(int, int, double, double, double, double, double, double) - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- conf() - Method in class org.apache.spark.SparkEnv
-
- conf() - Method in class org.apache.spark.sql.hive.RelationConversions
-
- conf() - Method in class org.apache.spark.sql.SparkSession
-
Runtime configuration interface for Spark.
- confidence() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
Returns the confidence of the rule.
- confidence() - Method in class org.apache.spark.partial.BoundedDouble
-
- confidence() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
- config(String, String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(String, long) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(String, double) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(String, boolean) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a config option.
- config(SparkConf) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets a list of config options based on the given SparkConf.
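A minimal SparkSession.Builder sketch showing the string and long config overloads; the option values are illustrative:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("config-example")
      .config("spark.sql.shuffle.partitions", 4L)  // long overload
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")  // string overload
      .getOrCreate()

    // Options are visible through the runtime configuration interface
    println(spark.conf.get("spark.sql.shuffle.partitions"))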
- config() - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- ConfigEntryWithDefault<T> - Class in org.apache.spark.internal.config
-
- ConfigEntryWithDefault(String, T, Function1<String, T>, Function1<T, String>, String, boolean) - Constructor for class org.apache.spark.internal.config.ConfigEntryWithDefault
-
- ConfigEntryWithDefaultFunction<T> - Class in org.apache.spark.internal.config
-
- ConfigEntryWithDefaultFunction(String, Function0<T>, Function1<String, T>, Function1<T, String>, String, boolean) - Constructor for class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
-
- ConfigEntryWithDefaultString<T> - Class in org.apache.spark.internal.config
-
- ConfigEntryWithDefaultString(String, String, Function1<String, T>, Function1<T, String>, String, boolean) - Constructor for class org.apache.spark.internal.config.ConfigEntryWithDefaultString
-
- ConfigHelpers - Class in org.apache.spark.internal.config
-
- ConfigHelpers() - Constructor for class org.apache.spark.internal.config.ConfigHelpers
-
- configTestLog4j(String) - Static method in class org.apache.spark.util.Utils
-
Configures log4j properties for use by the test suite.
- configuration() - Method in class org.apache.spark.scheduler.InputFormatInfo
-
- CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.HadoopRDD
-
Configuration's constructor is not thread-safe (see SPARK-1097 and HADOOP-10456).
- CONFIGURATION_INSTANTIATION_LOCK() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
Configuration's constructor is not thread-safe (see SPARK-1097 and HADOOP-10456).
- configureJobPropertiesForStorageHandler(TableDesc, Configuration, boolean) - Static method in class org.apache.spark.sql.hive.HiveTableUtil
-
- confusionMatrix() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns the confusion matrix: predicted classes are in columns, ordered by ascending class label, as in "labels".
- connect(String, int) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- connectedComponents() - Method in class org.apache.spark.graphx.GraphOps
-
Compute the connected component membership of each vertex and return a graph with the vertex
value containing the lowest vertex id in the connected component containing that vertex.
- connectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
-
Compute the connected component membership of each vertex and return a graph with the vertex
value containing the lowest vertex id in the connected component containing that vertex.
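A minimal GraphX sketch, assuming an existing SparkContext named sc; the edge list builds two components, {1, 2, 3} and {4, 5}:

    import org.apache.spark.graphx.{Edge, Graph}

    val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(4L, 5L, 1)))
    val graph = Graph.fromEdges(edges, defaultValue = 0)

    // Each vertex ends up labeled with the lowest vertex id in its component
    graph.connectedComponents().vertices.collect().foreach(println)
    // (1,1), (2,1), (3,1), (4,4), (5,4)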
- ConnectedComponents - Class in org.apache.spark.graphx.lib
-
Connected components algorithm.
- ConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.ConnectedComponents
-
- connectLeader(String, int) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- consequent() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
- ConstantInputDStream<T> - Class in org.apache.spark.streaming.dstream
-
An input stream that always returns the same RDD on each time step.
- ConstantInputDStream(StreamingContext, RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ConstantInputDStream
-
- constraints() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- constraints() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- constraints() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- constructTree(org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData[]) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
Given a list of nodes from a tree, construct the tree.
- constructTrees(RDD<org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0.NodeData>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
- constructURIForAuthentication(URI, org.apache.spark.SecurityManager) - Static method in class org.apache.spark.util.Utils
-
Construct a URI containing information used for authentication.
- contains(Param<?>) - Method in class org.apache.spark.ml.param.ParamMap
-
Checks whether a parameter is explicitly specified.
- contains(String) - Method in class org.apache.spark.SparkConf
-
Does the configuration contain a given parameter?
- contains(Object) - Method in class org.apache.spark.sql.Column
-
Contains the other element.
- contains(String) - Method in class org.apache.spark.sql.types.Metadata
-
Tests whether this Metadata contains a binding for a key.
- contains(A1) - Static method in class org.apache.spark.sql.types.StructType
-
- containsBlock(BlockId) - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return whether the given block is stored in this block manager in O(1) time.
- containsChild() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- containsChild() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- containsChild() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- containsDelimiters() - Method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- containsNull() - Method in class org.apache.spark.sql.types.ArrayType
-
- containsSlice(GenSeq<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- contentType() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
-
- context() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- context() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- context() - Static method in class org.apache.spark.api.java.JavaRDD
-
- context() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
- context() - Static method in class org.apache.spark.api.r.RRDD
-
- context() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- context() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- context() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- context() - Static method in class org.apache.spark.graphx.VertexRDD
-
- context() - Method in class org.apache.spark.InterruptibleIterator
-
- context(SQLContext) - Static method in class org.apache.spark.ml.r.RWrappers
-
- context(SQLContext) - Method in class org.apache.spark.ml.util.MLReader
-
- context(SQLContext) - Method in class org.apache.spark.ml.util.MLWriter
-
- context() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- context() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- context() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- context() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- context() - Method in class org.apache.spark.rdd.RDD
-
- context() - Static method in class org.apache.spark.rdd.UnionRDD
-
- context() - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- context() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
- context() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- context() - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- context() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- context() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- context() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- context() - Method in class org.apache.spark.streaming.dstream.DStream
-
Return the StreamingContext associated with this DStream.
- Continuous() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- ContinuousSplit - Class in org.apache.spark.ml.tree
-
Split which tests a continuous feature.
- conv(Column, int, int) - Static method in class org.apache.spark.sql.functions
-
Convert a number in a string column from one base to another.
- CONVERT_METASTORE_ORC() - Static method in class org.apache.spark.sql.hive.HiveUtils
-
- CONVERT_METASTORE_PARQUET() - Static method in class org.apache.spark.sql.hive.HiveUtils
-
- CONVERT_METASTORE_PARQUET_WITH_SCHEMA_MERGING() - Static method in class org.apache.spark.sql.hive.HiveUtils
-
- convertMatrixColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts matrix columns in an input Dataset from the new Matrix type under the spark.ml package to the spark.mllib Matrix type.
- convertMatrixColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts matrix columns in an input Dataset from the new Matrix type under the spark.ml package to the spark.mllib Matrix type.
- convertMatrixColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts Matrix columns in an input Dataset from the spark.mllib Matrix type to the new Matrix type under the spark.ml package.
- convertMatrixColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts Matrix columns in an input Dataset from the spark.mllib Matrix type to the new Matrix type under the spark.ml package.
- convertToCanonicalEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.GraphOps
-
Convert bi-directional edges into uni-directional ones.
- convertToTimeUnit(long, TimeUnit) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
Convert milliseconds to the specified unit.
- convertVectorColumnsFromML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts vector columns in an input Dataset from the new Vector type under the spark.ml package to the spark.mllib Vector type.
- convertVectorColumnsFromML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts vector columns in an input Dataset from the new Vector type under the spark.ml package to the spark.mllib Vector type.
- convertVectorColumnsToML(Dataset<?>, String...) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts vector columns in an input Dataset from the spark.mllib Vector type to the new Vector type under the spark.ml package.
- convertVectorColumnsToML(Dataset<?>, Seq<String>) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Converts vector columns in an input Dataset from the spark.mllib Vector type to the new Vector type under the spark.ml package.
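A minimal sketch of one of these converters (convertVectorColumnsToML), assuming an existing SparkSession named spark; the column names and data are illustrative:

    import org.apache.spark.mllib.linalg.{Vectors => OldVectors}
    import org.apache.spark.mllib.util.MLUtils

    val df = spark.createDataFrame(Seq(
      (0, OldVectors.dense(1.0, 2.0)),
      (1, OldVectors.sparse(2, Array(0), Array(3.0)))
    )).toDF("id", "features")

    // Rewrites the named columns in place to the new spark.ml Vector type
    val converted = MLUtils.convertVectorColumnsToML(df, "features")
    converted.printSchema()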
- CoordinateMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a matrix in coordinate format.
- CoordinateMatrix(RDD<MatrixEntry>, long, long) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
- CoordinateMatrix(RDD<MatrixEntry>) - Constructor for class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Alternative constructor leaving matrix dimensions to be determined automatically.
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVC
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- copy(ParamMap) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeans
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.KMeansModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LDA
-
- copy(ParamMap) - Method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.Estimator
-
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.Evaluator
-
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- copy(ParamMap) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Binarizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ColumnPruner
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- copy(ParamMap) - Static method in class org.apache.spark.ml.feature.DCT
-
- copy(ParamMap) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.HashingTF
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDF
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.IDFModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Imputer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.ImputerModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.IndexToString
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Interaction
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSH
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- copy(ParamMap) - Static method in class org.apache.spark.ml.feature.NGram
-
- copy(ParamMap) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCA
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.PCAModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormula
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.RFormulaModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.SQLTransformer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Tokenizer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorAssembler
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorIndexer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- copy(ParamMap) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- copy(ParamMap) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- copy(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y = x
- copy() - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- copy() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- copy() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Get a deep copy of the matrix.
- copy() - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- copy() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- copy() - Method in interface org.apache.spark.ml.linalg.Vector
-
Makes a deep copy of this vector.
- copy(ParamMap) - Method in class org.apache.spark.ml.Model
-
- copy() - Method in class org.apache.spark.ml.param.ParamMap
-
Creates a copy of this param map.
- copy(ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Creates a copy of this instance with the same UID and some extra params.
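A quick sketch of copy(ParamMap) on an estimator, using spark.ml's LogisticRegression; the param values are illustrative:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.param.ParamMap

    val lr = new LogisticRegression().setMaxIter(10)
    val extra = ParamMap(lr.maxIter -> 100, lr.regParam -> 0.1)

    val lr2 = lr.copy(extra)    // same UID, extra params overlaid
    println(lr2.getMaxIter)     // 100
    println(lr.uid == lr2.uid)  // true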
- copy(ParamMap) - Method in class org.apache.spark.ml.Pipeline
-
- copy(ParamMap) - Method in class org.apache.spark.ml.PipelineModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.PipelineStage
-
- copy(ParamMap) - Method in class org.apache.spark.ml.Predictor
-
- copy(ParamMap) - Method in class org.apache.spark.ml.recommendation.ALS
-
- copy(ParamMap) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.LinearRegression
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- copy(ParamMap) - Method in class org.apache.spark.ml.Transformer
-
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- copy(ParamMap) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- copy(ParamMap) - Method in class org.apache.spark.ml.UnaryTransformer
-
- copy(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y = x
- copy() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- copy() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- copy() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Get a deep copy of the matrix.
- copy() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- copy() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- copy() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Makes a deep copy of this vector.
- copy() - Method in class org.apache.spark.mllib.random.ExponentialGenerator
-
- copy() - Method in class org.apache.spark.mllib.random.GammaGenerator
-
- copy() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
-
- copy() - Method in class org.apache.spark.mllib.random.PoissonGenerator
-
- copy() - Method in interface org.apache.spark.mllib.random.RandomDataGenerator
-
Returns a copy of the RandomDataGenerator with a new instance of the underlying RNG where
applicable, for non-locking concurrent usage.
- copy() - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
-
- copy() - Method in class org.apache.spark.mllib.random.UniformGenerator
-
- copy() - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- copy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Returns a shallow copy of this instance.
- copy(Kryo, T) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
-
- copy() - Method in interface org.apache.spark.sql.Row
-
Make a copy of the current Row object.
- copy() - Method in class org.apache.spark.util.AccumulatorV2
-
Creates a new copy of this accumulator.
- copy() - Method in class org.apache.spark.util.CollectionAccumulator
-
- copy() - Method in class org.apache.spark.util.DoubleAccumulator
-
- copy() - Method in class org.apache.spark.util.LegacyAccumulatorWrapper
-
- copy() - Method in class org.apache.spark.util.LongAccumulator
-
- copy() - Method in class org.apache.spark.util.StatCounter
-
Clone this StatCounter.
- copyAndReset() - Method in class org.apache.spark.util.AccumulatorV2
-
Creates a new copy of this accumulator, with its value reset to zero.
- copyAndReset() - Method in class org.apache.spark.util.CollectionAccumulator
-
- copyFileStreamNIO(FileChannel, FileChannel, long, long) - Static method in class org.apache.spark.util.Utils
-
- copyStream(InputStream, OutputStream, boolean, boolean) - Static method in class org.apache.spark.util.Utils
-
Copy all data from an InputStream to an OutputStream.
- copyToArray(Object, int) - Static method in class org.apache.spark.sql.types.StructType
-
- copyToArray(Object) - Static method in class org.apache.spark.sql.types.StructType
-
- copyToArray(Object, int, int) - Static method in class org.apache.spark.sql.types.StructType
-
- copyToBuffer(Buffer<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- copyValues(T, ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Copies param values from this instance to another instance for params shared by them.
- cores() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
-
- coresGranted() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- coresPerExecutor() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- corr(Dataset<?>, String, String) - Static method in class org.apache.spark.ml.stat.Correlation
-
:: Experimental ::
Compute the correlation matrix for the input Dataset of Vectors using the specified method.
- corr(Dataset<?>, String) - Static method in class org.apache.spark.ml.stat.Correlation
-
Compute the Pearson correlation matrix for the input Dataset of Vectors.
- corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations
-
- corr(RDD<Vector>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the Pearson correlation matrix for the input RDD of Vectors.
- corr(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the correlation matrix for the input RDD of Vectors using the specified method.
- corr(RDD<Object>, RDD<Object>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the Pearson correlation for the input RDDs.
- corr(JavaRDD<Double>, JavaRDD<Double>) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of corr().
- corr(RDD<Object>, RDD<Object>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Compute the correlation for the input RDDs using the specified method.
- corr(JavaRDD<Double>, JavaRDD<Double>, String) - Static method in class org.apache.spark.mllib.stat.Statistics
-
Java-friendly version of corr().
- corr(String, String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the correlation of two columns of a DataFrame.
- corr(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculates the Pearson Correlation Coefficient of two columns of a DataFrame.
- corr(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the Pearson Correlation Coefficient for two columns.
- corr(String, String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the Pearson Correlation Coefficient for two columns.
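A small sketch contrasting the DataFrameStatFunctions and aggregate-function forms of corr, assuming an existing SparkSession named spark; the sample data is illustrative:

    import org.apache.spark.sql.functions.corr
    import spark.implicits._

    val df = Seq((1.0, 2.0), (2.0, 4.1), (3.0, 6.2)).toDF("x", "y")

    println(df.stat.corr("x", "y"))                   // scalar Pearson correlation
    df.agg(corr($"x", $"y").as("pearson_xy")).show()  // usable inside agg/groupBy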
- Correlation - Class in org.apache.spark.ml.stat
-
API for correlation functions in MLlib, compatible with DataFrames and Datasets.
- Correlation() - Constructor for class org.apache.spark.ml.stat.Correlation
-
- CorrelationNames - Class in org.apache.spark.mllib.stat.correlation
-
Maintains supported and default correlation names.
- CorrelationNames() - Constructor for class org.apache.spark.mllib.stat.correlation.CorrelationNames
-
- Correlations - Class in org.apache.spark.mllib.stat.correlation
-
Delegates computation to the specific correlation object based on the input method name.
- Correlations() - Constructor for class org.apache.spark.mllib.stat.correlation.Correlations
-
- corresponds(GenSeq<B>, Function2<A, B, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- corrMatrix(RDD<Vector>, String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations
-
- cos(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the cosine of the given value.
- cos(String) - Static method in class org.apache.spark.sql.functions
-
Computes the cosine of the given column.
- cosh(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the hyperbolic cosine of the given value.
- cosh(String) - Static method in class org.apache.spark.sql.functions
-
Computes the hyperbolic cosine of the given column.
- count() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- count() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- count() - Static method in class org.apache.spark.api.java.JavaRDD
-
- count() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the number of elements in the RDD.
- count() - Static method in class org.apache.spark.api.r.RRDD
-
- count() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- count() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
The number of edges in the RDD.
- count() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
The number of vertices in the RDD.
- count() - Static method in class org.apache.spark.graphx.VertexRDD
-
- count() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
- count() - Method in class org.apache.spark.ml.regression.AFTAggregator
-
- count() - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
-
- count() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Sample size.
- count() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Sample size.
- count() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- count() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- count() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- count() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- count() - Method in class org.apache.spark.rdd.RDD
-
Return the number of elements in the RDD.
- count() - Static method in class org.apache.spark.rdd.UnionRDD
-
- count() - Method in class org.apache.spark.sql.Dataset
-
Returns the number of rows in the Dataset.
- count(MapFunction<T, Object>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Count aggregate function.
- count(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Count aggregate function.
- count(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of items in a group.
- count(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of items in a group.
- count() - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
Returns a Dataset that contains a tuple with each key and the number of items present for that key.
- count() - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Count the number of rows for each group.
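A short sketch contrasting Dataset.count with the grouped variant, assuming an existing SparkSession named spark:

    import spark.implicits._

    val df = Seq("a", "b", "a", "c", "a").toDF("letter")
    println(df.count())                  // total rows: 5
    df.groupBy("letter").count().show()  // one row per group, with a "count" column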
- count(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- count() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
-
- count() - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- count() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by counting each RDD
of this DStream.
- count() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- count() - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- count() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- count() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- count() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- count() - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by counting each RDD
of this DStream.
- count() - Method in class org.apache.spark.streaming.kafka.OffsetRange
-
Number of messages this OffsetRange refers to.
- count() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the number of elements added to the accumulator.
- count() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the number of elements added to the accumulator.
- count() - Method in class org.apache.spark.util.StatCounter
-
- countApprox(long, double) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countApprox(long) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countApprox(long) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.api.java.JavaRDD
-
- countApprox(long) - Static method in class org.apache.spark.api.java.JavaRDD
-
- countApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of count() that returns a potentially incomplete result
within a timeout, even if not all tasks have finished.
- countApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of count() that returns a potentially incomplete result
within a timeout, even if not all tasks have finished.
- countApprox(long, double) - Static method in class org.apache.spark.api.r.RRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countApprox(long, double) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countApprox(long, double) - Static method in class org.apache.spark.graphx.VertexRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countApprox(long, double) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countApprox(long, double) - Method in class org.apache.spark.rdd.RDD
-
Approximate version of count() that returns a potentially incomplete result
within a timeout, even if not all tasks have finished.
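A minimal countApprox sketch, assuming an existing SparkContext named sc; the timeout and data size are illustrative:

    val rdd = sc.parallelize(1 to 1000000, 100)

    // Wait at most 500 ms; the result is a BoundedDouble with mean and [low, high] bounds
    val approx = rdd.countApprox(timeout = 500, confidence = 0.95)
    println(approx.initialValue)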
- countApprox(long, double) - Static method in class org.apache.spark.rdd.UnionRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countApprox$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countApprox$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countApprox$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.api.java.JavaRDD
-
- countApproxDistinct(double) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return approximate number of distinct elements in the RDD.
- countApproxDistinct(int, int) - Static method in class org.apache.spark.api.r.RRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.api.r.RRDD
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countApproxDistinct(double) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countApproxDistinct(double) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.graphx.VertexRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.graphx.VertexRDD
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countApproxDistinct(int, int) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countApproxDistinct(int, int) - Method in class org.apache.spark.rdd.RDD
-
Return approximate number of distinct elements in the RDD.
- countApproxDistinct(double) - Method in class org.apache.spark.rdd.RDD
-
Return approximate number of distinct elements in the RDD.
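A quick countApproxDistinct sketch, assuming an existing SparkContext named sc:

    val rdd = sc.parallelize(Seq(1, 2, 2, 3, 3, 3, 4))

    // relativeSD trades accuracy for memory in the underlying HyperLogLog sketch
    println(rdd.countApproxDistinct(relativeSD = 0.05))  // approximately 4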
- countApproxDistinct(int, int) - Static method in class org.apache.spark.rdd.UnionRDD
-
- countApproxDistinct(double) - Static method in class org.apache.spark.rdd.UnionRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.api.r.RRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.graphx.VertexRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countApproxDistinct$default$1() - Static method in class org.apache.spark.rdd.UnionRDD
-
- countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(int, int, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countApproxDistinctByKey(double) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return approximate number of distinct values for each key in this RDD.
- countAsync() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countAsync() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countAsync() - Static method in class org.apache.spark.api.java.JavaRDD
-
- countAsync() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of count, which returns a future for counting the number of elements in this RDD.
- countAsync() - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Returns a future for counting the number of elements in the RDD.
- countByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Count the number of elements for each key, and return the result to the master as a Map.
- countByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Count the number of elements for each key, collecting the results to a local Map.
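A minimal countByKey sketch, assuming an existing SparkContext named sc; since the result is collected to the driver, the number of distinct keys should be small:

    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 1)))
    val counts: scala.collection.Map[String, Long] = pairs.countByKey()
    println(counts)  // Map(a -> 2, b -> 1)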
- countByKeyApprox(long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Approximate version of countByKey that can return a partial result if it does
not finish within a timeout.
- countByKeyApprox(long, double) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Approximate version of countByKey that can return a partial result if it does
not finish within a timeout.
- countByKeyApprox(long, double) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Approximate version of countByKey that can return a partial result if it does
not finish within a timeout.
- countByValue() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countByValue() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countByValue() - Static method in class org.apache.spark.api.java.JavaRDD
-
- countByValue() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the count of each unique value in this RDD as a map of (value, count) pairs.
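Analogously for whole values rather than keys; words is assumed to be an existing JavaRDD<String>:

    import java.util.Map;
    import org.apache.spark.api.java.JavaRDD;

    public class CountByValueExample {
      static Map<String, Long> valueCounts(JavaRDD<String> words) {
        // Local map from each distinct value to its number of occurrences.
        return words.countByValue();
      }
    }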
- countByValue(Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countByValue(Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countByValue(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return the count of each unique value in this RDD as a local map of (value, count) pairs.
- countByValue(Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- countByValue() - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- countByValue(int) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- countByValue() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the counts of each distinct value in
each RDD of this DStream.
- countByValue(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the counts of each distinct value in
each RDD of this DStream.
- countByValue() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- countByValue(int) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- countByValue() - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- countByValue(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- countByValue() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- countByValue(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- countByValue() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- countByValue(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- countByValue() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- countByValue(int) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- countByValue(int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD contains the counts of each distinct value in
each RDD of this DStream.
- countByValue$default$1() - Static method in class org.apache.spark.api.r.RRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countByValue$default$1() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countByValue$default$1() - Static method in class org.apache.spark.graphx.VertexRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countByValue$default$1() - Static method in class org.apache.spark.rdd.UnionRDD
-
- countByValueAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- countByValueAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- countByValueAndWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the count of distinct elements in
RDDs in a sliding window over this DStream.
- countByValueAndWindow(Duration, Duration, int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD contains the count of distinct elements in
RDDs in a sliding window over this DStream.
- countByValueAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- countByValueAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- countByValueAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- countByValueAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- countByValueAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- countByValueAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- countByValueAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- countByValueAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- countByValueAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- countByValueAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- countByValueAndWindow(Duration, Duration, int, Ordering<T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD contains the count of distinct elements in
RDDs in a sliding window over this DStream.
- countByValueApprox(long, double) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countByValueApprox(long) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- countByValueApprox(long, double) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countByValueApprox(long) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- countByValueApprox(long, double) - Static method in class org.apache.spark.api.java.JavaRDD
-
- countByValueApprox(long) - Static method in class org.apache.spark.api.java.JavaRDD
-
- countByValueApprox(long, double) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of countByValue().
- countByValueApprox(long) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Approximate version of countByValue().
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countByValueApprox(long, double, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Approximate version of countByValue().
- countByValueApprox(long, double, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countByValueApprox$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.api.r.RRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.graphx.VertexRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- countByValueApprox$default$3(long, double) - Static method in class org.apache.spark.rdd.UnionRDD
-
- countByWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- countByWindow(Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by counting the number
of elements in a window over this DStream.
- countByWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- countByWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- countByWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- countByWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- countByWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- countByWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by counting the number
of elements in a sliding window over this DStream.
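A sketch of windowed counting; ssc and lines are assumed to exist, and the 30-second window with a 10-second slide is illustrative (both must be multiples of the batch interval):

    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class CountByWindowExample {
      static void run(JavaStreamingContext ssc, JavaDStream<String> lines) {
        // Window operations that use an inverse reduce need checkpointing.
        ssc.checkpoint("/tmp/spark-checkpoint"); // path is illustrative
        JavaDStream<Long> counts =
            lines.countByWindow(Durations.seconds(30), Durations.seconds(10));
        counts.print();
      }
    }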
- countDistinct(Column, Column...) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- countDistinct(String, String...) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- countDistinct(Column, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
- countDistinct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the number of distinct items in a group.
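For example, counting distinct names per department; df and its column names are assumptions:

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.countDistinct;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class CountDistinctExample {
      static Dataset<Row> distinctNamesPerDept(Dataset<Row> df) {
        return df.groupBy(col("dept")).agg(countDistinct(col("name")));
      }
    }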
- CountingWritableChannel - Class in org.apache.spark.storage
-
- CountingWritableChannel(WritableByteChannel) - Constructor for class org.apache.spark.storage.CountingWritableChannel
-
- countMinSketch(String, int, int, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- countMinSketch(String, double, double, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- countMinSketch(Column, int, int, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- countMinSketch(Column, double, double, int) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Builds a Count-min Sketch over a specified column.
- CountMinSketch - Class in org.apache.spark.util.sketch
-
A Count-min sketch is a probabilistic data structure used for frequency estimation using sub-linear space.
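A minimal sketch of building and querying one directly (an equivalent sketch can also be built from a DataFrame column via DataFrameStatFunctions.countMinSketch); the eps, confidence, and seed values are illustrative:

    import org.apache.spark.util.sketch.CountMinSketch;

    public class CountMinSketchExample {
      public static void main(String[] args) {
        // relative error (eps), confidence, and a random seed
        CountMinSketch sketch = CountMinSketch.create(0.001, 0.99, 42);
        sketch.add("spark");
        sketch.add("spark");
        sketch.add("hadoop");
        // Estimates never under-count; over-counting is bounded by eps.
        System.out.println(sketch.estimateCount("spark")); // >= 2
      }
    }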
- CountMinSketch() - Constructor for class org.apache.spark.util.sketch.CountMinSketch
-
- CountMinSketch.Version - Enum in org.apache.spark.util.sketch
-
- countTowardsTaskFailures() - Static method in class org.apache.spark.ExceptionFailure
-
- countTowardsTaskFailures() - Method in class org.apache.spark.ExecutorLostFailure
-
- countTowardsTaskFailures() - Method in class org.apache.spark.FetchFailed
-
Fetch failures lead to a different failure handling path: (1) we don't abort the stage after
4 task failures; instead, we immediately go back to the stage which generated the map output
and regenerate the missing data.
- countTowardsTaskFailures() - Static method in class org.apache.spark.Resubmitted
-
- countTowardsTaskFailures() - Method in class org.apache.spark.TaskCommitDenied
-
If a task failed because its attempt to commit was denied, do not count this failure
towards failing the stage.
- countTowardsTaskFailures() - Method in interface org.apache.spark.TaskFailedReason
-
Whether this task failure should be counted towards the maximum number of times the task is
allowed to fail before the stage is aborted.
- countTowardsTaskFailures() - Method in class org.apache.spark.TaskKilled
-
- countTowardsTaskFailures() - Static method in class org.apache.spark.TaskResultLost
-
- countTowardsTaskFailures() - Static method in class org.apache.spark.UnknownReason
-
- CountVectorizer - Class in org.apache.spark.ml.feature
-
- CountVectorizer(String) - Constructor for class org.apache.spark.ml.feature.CountVectorizer
-
- CountVectorizer() - Constructor for class org.apache.spark.ml.feature.CountVectorizer
-
- CountVectorizerModel - Class in org.apache.spark.ml.feature
-
Converts a text document to a sparse vector of token counts.
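A hedged sketch of fitting the vectorizer; docs is assumed to be a Dataset<Row> with an array-of-strings column named "words", and the vocabulary cap is illustrative:

    import org.apache.spark.ml.feature.CountVectorizer;
    import org.apache.spark.ml.feature.CountVectorizerModel;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class CountVectorizerExample {
      static Dataset<Row> vectorize(Dataset<Row> docs) {
        CountVectorizerModel model = new CountVectorizer()
            .setInputCol("words")
            .setOutputCol("features")
            .setVocabSize(1000) // keep only the 1000 most frequent tokens
            .fit(docs);
        return model.transform(docs); // adds a sparse count-vector column
      }
    }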
- CountVectorizerModel(String, String[]) - Constructor for class org.apache.spark.ml.feature.CountVectorizerModel
-
- CountVectorizerModel(String[]) - Constructor for class org.apache.spark.ml.feature.CountVectorizerModel
-
- cov() - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
- cov(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Calculate the sample covariance of two numerical columns of a DataFrame.
- covar_pop(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population covariance for two columns.
- covar_pop(String, String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population covariance for two columns.
- covar_samp(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample covariance for two columns.
- covar_samp(String, String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample covariance for two columns.
- covs() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
- crc32(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the cyclic redundancy check value (CRC32) of a binary column and
returns the value as a bigint.
- CreatableRelationProvider - Interface in org.apache.spark.sql.sources
-
- create(boolean, boolean, boolean, boolean, int) - Static method in class org.apache.spark.api.java.StorageLevels
-
Create a new StorageLevel object.
- create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int, Function<ResultSet, T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
Create an RDD that executes a SQL query on a JDBC connection and reads results.
- create(JavaSparkContext, JdbcRDD.ConnectionFactory, String, long, long, int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
Create an RDD that executes a SQL query on a JDBC connection and reads results.
- create(RDD<T>, Function1<Object, Object>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
Create a PartitionPruningRDD.
- create(Object...) - Static method in class org.apache.spark.sql.RowFactory
-
Create a Row from the given arguments.
- create(String) - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
- create(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
- create(String, int) - Static method in class org.apache.spark.streaming.kafka.Broker
-
- create(String, int, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
-
- create(TopicAndPartition, long, long) - Static method in class org.apache.spark.streaming.kafka.OffsetRange
-
- create(long) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Creates a BloomFilter with the expected number of insertions and a default expected false positive probability of 3%.
- create(long, double) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Creates a BloomFilter with the expected number of insertions and expected false positive probability.
- create(long, long) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
Creates a BloomFilter with the given expectedNumItems and numBits; it will pick an optimal numHashFunctions that minimizes fpp for the Bloom filter.
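A minimal sketch; the expected one million insertions and 3% false-positive target are invented for the example:

    import org.apache.spark.util.sketch.BloomFilter;

    public class BloomFilterExample {
      public static void main(String[] args) {
        BloomFilter filter = BloomFilter.create(1000000, 0.03);
        filter.put("user-42");
        // No false negatives: a previously inserted item always matches.
        System.out.println(filter.mightContain("user-42")); // true
        // Unseen items match only at roughly the configured probability.
        System.out.println(filter.mightContain("user-99")); // almost surely false
      }
    }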
- create(int, int, int) - Static method in class org.apache.spark.util.sketch.CountMinSketch
-
- create(double, double, int) - Static method in class org.apache.spark.util.sketch.CountMinSketch
-
Creates a CountMinSketch with given relative error (eps), confidence, and random seed.
- createArrayType(DataType) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates an ArrayType by specifying the data type of elements (elementType).
- createArrayType(DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates an ArrayType by specifying the data type of elements (elementType) and whether the array contains null values (containsNull).
- createCombiner() - Method in class org.apache.spark.Aggregator
-
- createCompiledClass(String, File, TestUtils.JavaSourceFromString, Seq<URL>) - Static method in class org.apache.spark.TestUtils
-
Creates a compiled class with the source file.
- createCompiledClass(String, File, String, String, Seq<URL>) - Static method in class org.apache.spark.TestUtils
-
Creates a compiled class with the given name.
- createCryptoInputStream(InputStream, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Helper method to wrap InputStream with CryptoInputStream for decryption.
- createCryptoOutputStream(OutputStream, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Helper method to wrap OutputStream with CryptoOutputStream for encryption.
- createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a DataFrame from an RDD of Product (e.g.
- createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a DataFrame from a local Seq of Product.
- createDataFrame(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SparkSession
-
:: DeveloperApi ::
Creates a DataFrame from an RDD containing Rows using the given schema.
- createDataFrame(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SparkSession
-
:: DeveloperApi ::
Creates a DataFrame from a JavaRDD containing Rows using the given schema.
- createDataFrame(List<Row>, StructType) - Method in class org.apache.spark.sql.SparkSession
-
:: DeveloperApi ::
Creates a DataFrame from a java.util.List containing Rows using the given schema.
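Putting the List<Row> variant together with RowFactory and an explicit schema; all names and data below are invented:

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    public class CreateDataFrameExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[*]").appName("create-df").getOrCreate();
        List<Row> rows = Arrays.asList(
            RowFactory.create(1, "alice"),
            RowFactory.create(2, "bob"));
        StructType schema = DataTypes.createStructType(new StructField[] {
            DataTypes.createStructField("id", DataTypes.IntegerType, false),
            DataTypes.createStructField("name", DataTypes.StringType, true)});
        Dataset<Row> df = spark.createDataFrame(rows, schema);
        df.show();
        spark.stop();
      }
    }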
- createDataFrame(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SparkSession
-
Applies a schema to an RDD of Java Beans.
- createDataFrame(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SparkSession
-
Applies a schema to an RDD of Java Beans.
- createDataFrame(List<?>, Class<?>) - Method in class org.apache.spark.sql.SparkSession
-
Applies a schema to a List of Java Beans.
- createDataFrame(RDD<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(Seq<A>, TypeTags.TypeTag<A>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(RDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(JavaRDD<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(List<Row>, StructType) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(RDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(JavaRDD<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataFrame(List<?>, Class<?>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataset(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset from a local Seq of data of a given type.
- createDataset(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset from an RDD of a given type.
- createDataset(List<T>, Encoder<T>) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset from a java.util.List of a given type.
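A short sketch of the java.util.List variant; spark is assumed to be an existing SparkSession:

    import java.util.Arrays;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.SparkSession;

    public class CreateDatasetExample {
      static Dataset<Integer> smallDataset(SparkSession spark) {
        // The Encoder tells Spark how to (de)serialize the element type.
        return spark.createDataset(Arrays.asList(1, 2, 3), Encoders.INT());
      }
    }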
- createDataset(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataset(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLContext
-
- createDataset(List<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLContext
-
- createDecimalType(int, int) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a DecimalType by specifying the precision and scale.
- createDecimalType() - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a DecimalType with default precision and scale, which are 10 and 0.
- createDF(RDD<byte[]>, StructType, SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- createDirectory(String, String) - Static method in class org.apache.spark.util.Utils
-
Create a directory inside the given parent directory.
- createDirectStream(StreamingContext, Map<String, String>, Map<TopicAndPartition, Object>, Function1<MessageAndMetadata<K, V>, R>, ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>, ClassTag<R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that directly pulls messages from Kafka Brokers
without using any receiver.
- createDirectStream(StreamingContext, Map<String, String>, Set<String>, ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that directly pulls messages from Kafka Brokers
without using any receiver.
- createDirectStream(JavaStreamingContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Class<R>, Map<String, String>, Map<TopicAndPartition, Long>, Function<MessageAndMetadata<K, V>, R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that directly pulls messages from Kafka Brokers
without using any receiver.
- createDirectStream(JavaStreamingContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Map<String, String>, Set<String>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that directly pulls messages from Kafka Brokers
without using any receiver.
- createdTempDir() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- createExternalTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
- createExternalTable(String, String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
- createExternalTable(String, String) - Method in class org.apache.spark.sql.SQLContext
-
- createExternalTable(String, String, String) - Method in class org.apache.spark.sql.SQLContext
-
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- createExternalTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- createExternalTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- createFilter(StructType, Filter[]) - Static method in class org.apache.spark.sql.hive.orc.OrcFilters
-
- createGlobalTempView(String) - Method in class org.apache.spark.sql.Dataset
-
Creates a global temporary view using the given name.
- CreateHiveTableAsSelectCommand - Class in org.apache.spark.sql.hive.execution
-
Create table and insert the query result into it.
- CreateHiveTableAsSelectCommand(CatalogTable, LogicalPlan, SaveMode) - Constructor for class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- createJar(Seq<File>, File, Option<String>) - Static method in class org.apache.spark.TestUtils
-
Create a jar file that contains this set of files.
- createJarWithClasses(Seq<String>, String, Seq<Tuple2<String, String>>, Seq<URL>) - Static method in class org.apache.spark.TestUtils
-
Create a jar that defines classes with the given names.
- createJarWithFiles(Map<String, String>, File) - Static method in class org.apache.spark.TestUtils
-
Create a jar file containing multiple files.
- createJobID(Date, int) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils
-
- createJobTrackerID(Date) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils
-
- createKey(SparkConf) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Creates a new encryption key.
- createLogForDriver(SparkConf, String, Configuration) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
Create a WriteAheadLog for the driver.
- createLogForReceiver(SparkConf, String, Configuration) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
Create a WriteAheadLog for the receiver.
- createMapType(DataType, DataType) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a MapType by specifying the data type of keys (keyType) and values (valueType).
- createMapType(DataType, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a MapType by specifying the data type of keys (keyType), the data type of values (valueType), and whether values contain any null value (valueContainsNull).
- createOrReplaceGlobalTempView(String) - Method in class org.apache.spark.sql.Dataset
-
Creates or replaces a global temporary view using the given name.
- createOrReplaceTempView(String) - Method in class org.apache.spark.sql.Dataset
-
Creates a local temporary view using the given name.
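A sketch of registering a view and querying it with SQL; spark, people, and the column names are assumptions:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class TempViewExample {
      static Dataset<Row> adults(SparkSession spark, Dataset<Row> people) {
        // createOrReplaceTempView overwrites an existing view of the same
        // name, whereas createTempView fails if the name is already taken.
        people.createOrReplaceTempView("people");
        return spark.sql("SELECT * FROM people WHERE age >= 18");
      }
    }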
- createOutputOperationFailureForUI(String) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
- createPathFromString(String, JobConf) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils
-
- createPMMLModelExport(Object) - Static method in class org.apache.spark.mllib.pmml.export.PMMLModelExportFactory
-
Factory object that helps create the appropriate PMMLModelExport implementation for a given machine learning model (for example, KMeansModel).
- createPollingStream(StreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createPollingStream(StreamingContext, Seq<InetSocketAddress>, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createPollingStream(StreamingContext, Seq<InetSocketAddress>, StorageLevel, int, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createPollingStream(JavaStreamingContext, String, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createPollingStream(JavaStreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createPollingStream(JavaStreamingContext, InetSocketAddress[], StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createPollingStream(JavaStreamingContext, InetSocketAddress[], StorageLevel, int, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent.
- createProxyHandler(String, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a handler for proxying requests to Workers and Application Drivers
- createProxyLocationHeader(String, String, HttpServletRequest, URI) - Static method in class org.apache.spark.ui.JettyUtils
-
- createProxyURI(String, String, String, String) - Static method in class org.apache.spark.ui.JettyUtils
-
- createRDD(SparkContext, Map<String, String>, OffsetRange[], ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an RDD from Kafka using offset ranges for each topic and partition.
- createRDD(SparkContext, Map<String, String>, OffsetRange[], Map<TopicAndPartition, Broker>, Function1<MessageAndMetadata<K, V>, R>, ClassTag<K>, ClassTag<V>, ClassTag<KD>, ClassTag<VD>, ClassTag<R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an RDD from Kafka using offset ranges for each topic and partition.
- createRDD(JavaSparkContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Map<String, String>, OffsetRange[]) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an RDD from Kafka using offset ranges for each topic and partition.
- createRDD(JavaSparkContext, Class<K>, Class<V>, Class<KD>, Class<VD>, Class<R>, Map<String, String>, OffsetRange[], Map<TopicAndPartition, Broker>, Function<MessageAndMetadata<K, V>, R>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an RDD from Kafka using offset ranges for each topic and partition.
- createRDDFromArray(JavaSparkContext, byte[][]) - Static method in class org.apache.spark.api.r.RRDD
-
Create an RRDD given a sequence of byte arrays.
- createRDDFromFile(JavaSparkContext, String, int) - Static method in class org.apache.spark.api.r.RRDD
-
Create an RRDD given a temporary file name.
- createReadableChannel(ReadableByteChannel, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Wrap a ReadableByteChannel for decryption.
- createRedirectHandler(String, String, Function1<HttpServletRequest, BoxedUnit>, String, Set<String>) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a handler that always redirects the user to the given path
- createRelation(SQLContext, SaveMode, Map<String, String>, Dataset<Row>) - Method in interface org.apache.spark.sql.sources.CreatableRelationProvider
-
Saves a DataFrame to a destination (using data source-specific parameters)
- createRelation(SQLContext, Map<String, String>) - Method in interface org.apache.spark.sql.sources.RelationProvider
-
Returns a new base relation with the given parameters.
- createRelation(SQLContext, Map<String, String>, StructType) - Method in interface org.apache.spark.sql.sources.SchemaRelationProvider
-
Returns a new base relation with the given parameters and user defined schema.
- createSecret(SparkConf) - Static method in class org.apache.spark.util.Utils
-
- createServlet(JettyUtils.ServletParams<T>, org.apache.spark.SecurityManager, SparkConf, Function1<T, Object>) - Static method in class org.apache.spark.ui.JettyUtils
-
- createServletHandler(String, JettyUtils.ServletParams<T>, org.apache.spark.SecurityManager, SparkConf, String, Function1<T, Object>) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a context handler that responds to a request with the given path prefix
- createServletHandler(String, HttpServlet, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a context handler that responds to a request with the given path prefix
- createSink(SQLContext, Map<String, String>, Seq<String>, OutputMode) - Method in interface org.apache.spark.sql.sources.StreamSinkProvider
-
- createSource(SQLContext, String, Option<StructType>, String, Map<String, String>) - Method in interface org.apache.spark.sql.sources.StreamSourceProvider
-
- createSparkContext(String, String, String, String[], Map<Object, Object>, Map<Object, Object>) - Static method in class org.apache.spark.api.r.RRDD
-
- createStaticHandler(String, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Create a handler for serving files from a static directory
- createStream(StreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Create an input stream from a Flume source.
- createStream(StreamingContext, String, int, StorageLevel, boolean) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Create an input stream from a Flume source.
- createStream(JavaStreamingContext, String, int) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream from a Flume source.
- createStream(JavaStreamingContext, String, int, StorageLevel) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream from a Flume source.
- createStream(JavaStreamingContext, String, int, StorageLevel, boolean) - Static method in class org.apache.spark.streaming.flume.FlumeUtils
-
Creates an input stream from a Flume source.
- createStream(StreamingContext, String, String, Map<String, Object>, StorageLevel) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that pulls messages from Kafka Brokers.
- createStream(StreamingContext, Map<String, String>, Map<String, Object>, StorageLevel, ClassTag<K>, ClassTag<V>, ClassTag<U>, ClassTag<T>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that pulls messages from Kafka Brokers.
- createStream(JavaStreamingContext, String, String, Map<String, Integer>) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that pulls messages from Kafka Brokers.
- createStream(JavaStreamingContext, String, String, Map<String, Integer>, StorageLevel) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that pulls messages from Kafka Brokers.
- createStream(JavaStreamingContext, Class<K>, Class<V>, Class<U>, Class<T>, Map<String, String>, Map<String, Integer>, StorageLevel) - Static method in class org.apache.spark.streaming.kafka.KafkaUtils
-
Create an input stream that pulls messages from Kafka Brokers.
- createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, Function1<Record, T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, Function1<Record, T>, String, String, ClassTag<T>) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, Function1<Record, T>, String, String, String, String, String, ClassTag<T>) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(StreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, String, String) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, Function<Record, T>, Class<T>) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, Function<Record, T>, Class<T>, String, String) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, Function<Record, T>, Class<T>, String, String, String, String, String) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(JavaStreamingContext, String, String, String, String, InitialPositionInStream, Duration, StorageLevel, String, String) - Static method in class org.apache.spark.streaming.kinesis.KinesisUtils
-
- createStream(JavaStreamingContext, String, String, String, String, int, Duration, StorageLevel, String, String, String, String, String) - Method in class org.apache.spark.streaming.kinesis.KinesisUtilsPythonHelper
-
- createStructField(String, String, boolean) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- createStructField(String, DataType, boolean, Metadata) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructField by specifying the name (name), the data type (dataType), and whether values of this field can be null (nullable).
- createStructField(String, DataType, boolean) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructField with empty metadata.
- createStructType(Seq<StructField>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- createStructType(List<StructField>) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructType with the given list of StructFields (fields).
- createStructType(StructField[]) - Static method in class org.apache.spark.sql.types.DataTypes
-
Creates a StructType with the given StructField array (fields).
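A sketch combining several DataTypes factory methods into one schema; the field names are invented:

    import org.apache.spark.sql.types.ArrayType;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.DecimalType;
    import org.apache.spark.sql.types.MapType;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    public class DataTypesExample {
      public static void main(String[] args) {
        ArrayType tags = DataTypes.createArrayType(DataTypes.StringType, false);
        MapType scores = DataTypes.createMapType(
            DataTypes.StringType, DataTypes.IntegerType, true);
        DecimalType price = DataTypes.createDecimalType(10, 2); // precision 10, scale 2
        StructType schema = DataTypes.createStructType(new StructField[] {
            DataTypes.createStructField("tags", tags, true),
            DataTypes.createStructField("scores", scores, true),
            DataTypes.createStructField("price", price, true)});
        System.out.println(schema.treeString());
      }
    }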
- createTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
:: Experimental ::
Creates a table from the given path and returns the corresponding DataFrame.
- createTable(String, String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
:: Experimental ::
Creates a table from the given path based on a data source and returns the corresponding
DataFrame.
- createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
:: Experimental ::
Creates a table based on the dataset in a data source and a set of options.
- createTable(String, String, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
:: Experimental ::
(Scala-specific)
Creates a table based on the dataset in a data source and a set of options.
- createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
:: Experimental ::
Create a table based on the dataset in a data source, a schema and a set of options.
- createTable(String, String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.catalog.Catalog
-
:: Experimental ::
(Scala-specific)
Create a table based on the dataset in a data source, a schema and a set of options.
- createTempDir(String, String) - Static method in class org.apache.spark.util.Utils
-
Create a temporary directory inside the given parent directory.
- createTempView(String) - Method in class org.apache.spark.sql.Dataset
-
Creates a local temporary view using the given name.
- createUnsafe(long, int, int) - Static method in class org.apache.spark.sql.types.Decimal
-
Creates a decimal from unscaled, precision and scale without checking the bounds.
- createWorkspace(int) - Static method in class org.apache.spark.mllib.optimization.NNLS
-
- createWritableChannel(WritableByteChannel, SparkConf, byte[]) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
Wrap a WritableByteChannel for encryption.
- crossJoin(Dataset<?>) - Method in class org.apache.spark.sql.Dataset
-
Explicit Cartesian join with another DataFrame.
- crosstab(String, String) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Computes a pair-wise frequency table of the given columns.
- CrossValidator - Class in org.apache.spark.ml.tuning
-
K-fold cross validation performs model selection by splitting the dataset into a set of
non-overlapping, randomly partitioned folds, which are used as separate training and test
datasets; e.g., with k=3 folds, K-fold cross validation will generate 3 (training, test)
dataset pairs, each of which uses 2/3 of the data for training and 1/3 for testing.
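A hedged sketch of wiring up a cross-validated model search; the estimator, grid values, and fold count are illustrative, and training is assumed to have "features" and "label" columns:

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator;
    import org.apache.spark.ml.param.ParamMap;
    import org.apache.spark.ml.tuning.CrossValidator;
    import org.apache.spark.ml.tuning.CrossValidatorModel;
    import org.apache.spark.ml.tuning.ParamGridBuilder;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class CrossValidatorExample {
      static CrossValidatorModel tune(Dataset<Row> training) {
        LogisticRegression lr = new LogisticRegression();
        ParamMap[] grid = new ParamGridBuilder()
            .addGrid(lr.regParam(), new double[] {0.01, 0.1})
            .build();
        CrossValidator cv = new CrossValidator()
            .setEstimator(lr)
            .setEvaluator(new BinaryClassificationEvaluator())
            .setEstimatorParamMaps(grid)
            .setNumFolds(3); // k = 3
        return cv.fit(training); // retains the best model found
      }
    }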
- CrossValidator(String) - Constructor for class org.apache.spark.ml.tuning.CrossValidator
-
- CrossValidator() - Constructor for class org.apache.spark.ml.tuning.CrossValidator
-
- CrossValidatorModel - Class in org.apache.spark.ml.tuning
-
CrossValidatorModel contains the model with the highest average cross-validation
metric across folds and uses this model to transform input data.
- CryptoStreamUtils - Class in org.apache.spark.security
-
A util class for manipulating IO encryption and decryption streams.
- CryptoStreamUtils() - Constructor for class org.apache.spark.security.CryptoStreamUtils
-
- csv(String...) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads CSV files and returns the result as a DataFrame.
- csv(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads a CSV file and returns the result as a DataFrame.
- csv(Dataset<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads a Dataset[String] storing CSV rows and returns the result as a DataFrame.
- csv(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads CSV files and returns the result as a DataFrame.
- csv(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in CSV format at the specified path.
- csv(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads a CSV file stream and returns the result as a DataFrame.
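A round-trip sketch with the batch reader and writer; spark and the paths are assumptions:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class CsvExample {
      static void roundTrip(SparkSession spark) {
        Dataset<Row> df = spark.read()
            .option("header", "true")      // first line holds column names
            .option("inferSchema", "true") // sample the data to guess types
            .csv("/tmp/input.csv");
        df.write().option("header", "true").csv("/tmp/output-dir");
      }
    }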
- cube(Column...) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns,
so we can run aggregation on them.
- cube(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns,
so we can run aggregation on them.
- cube(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns,
so we can run aggregation on them.
- cube(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional cube for the current Dataset using the specified columns,
so we can run aggregation on them.
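For instance, subtotals over every combination of two grouping columns; sales and its column names are assumptions:

    import static org.apache.spark.sql.functions.sum;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class CubeExample {
      static Dataset<Row> salesCube(Dataset<Row> sales) {
        // Produces one row per (dept, city) combination plus subtotal rows,
        // where null in a grouping column stands for "all values".
        return sales.cube("dept", "city").agg(sum("amount"));
      }
    }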
- CubeType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.CubeType$
-
- cume_dist() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the cumulative distribution of values within a window partition,
i.e.
- current_date() - Static method in class org.apache.spark.sql.functions
-
Returns the current date as a date column.
- current_timestamp() - Static method in class org.apache.spark.sql.functions
-
Returns the current timestamp as a timestamp column.
- currentAttemptId() - Method in interface org.apache.spark.SparkStageInfo
-
- currentAttemptId() - Method in class org.apache.spark.SparkStageInfoImpl
-
- currentDatabase() - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns the current default database in this session.
- currentRow() - Static method in class org.apache.spark.sql.expressions.Window
-
Value representing the current row.
- currPrefLocs(Partition, RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- dapply(Dataset<Row>, byte[], byte[], Object[], StructType) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
The helper function for dapply() on R side.
- Data(Vector, double, Option<Object>) - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
-
- Data(double[], double[], double[][]) - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
-
- Data(double[], double[], double[][], String) - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- Data(int) - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
-
- Data(Vector, double) - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$.Data
-
- data() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
-
- data() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
-
- Database - Class in org.apache.spark.sql.catalog
-
A database in Spark, as returned by the listDatabases method defined in Catalog.
- Database(String, String, String) - Constructor for class org.apache.spark.sql.catalog.Database
-
- database() - Method in class org.apache.spark.sql.catalog.Function
-
- database() - Method in class org.apache.spark.sql.catalog.Table
-
- databaseExists(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Check if the database with the specified name exists.
- databaseTypeDefinition() - Method in class org.apache.spark.sql.jdbc.JdbcType
-
- dataDistribution() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
-
- DATAFRAME_DAPPLY() - Static method in class org.apache.spark.api.r.RRunnerModes
-
- DATAFRAME_GAPPLY() - Static method in class org.apache.spark.api.r.RRunnerModes
-
- DataFrameNaFunctions - Class in org.apache.spark.sql
-
Functionality for working with missing data in DataFrames.
- DataFrameReader - Class in org.apache.spark.sql
-
Interface used to load a Dataset from external storage systems (e.g.
- DataFrameStatFunctions - Class in org.apache.spark.sql
-
Statistic functions for DataFrames.
- DataFrameWriter<T> - Class in org.apache.spark.sql
-
Interface used to write a Dataset to external storage systems (e.g.
- Dataset<T> - Class in org.apache.spark.sql
-
A Dataset is a strongly typed collection of domain-specific objects that can be transformed
in parallel using functional or relational operations.
- Dataset(SparkSession, LogicalPlan, Encoder<T>) - Constructor for class org.apache.spark.sql.Dataset
-
- Dataset(SQLContext, LogicalPlan, Encoder<T>) - Constructor for class org.apache.spark.sql.Dataset
-
- DatasetHolder<T> - Class in org.apache.spark.sql
-
A container for a Dataset, used for implicit conversions in Scala.
- DataSourceRegister - Interface in org.apache.spark.sql.sources
-
Data sources should implement this trait so that they can register an alias to their data source.
- DataStreamReader - Class in org.apache.spark.sql.streaming
-
Interface used to load a streaming Dataset from external storage systems (e.g.
- DataStreamWriter<T> - Class in org.apache.spark.sql.streaming
-
Interface used to write a streaming Dataset to external storage systems (e.g.
- dataTablesHeaderNodes() - Static method in class org.apache.spark.ui.UIUtils
-
- dataType() - Method in class org.apache.spark.sql.catalog.Column
-
- dataType() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
- dataType() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- DataType - Class in org.apache.spark.sql.types
-
The base type of all Spark SQL data types.
- DataType() - Constructor for class org.apache.spark.sql.types.DataType
-
- dataType() - Method in class org.apache.spark.sql.types.StructField
-
- DataTypes - Class in org.apache.spark.sql.types
-
To get or create a specific data type, users should use the singleton objects and factory methods provided by this class.
- DataTypes() - Constructor for class org.apache.spark.sql.types.DataTypes
-
- DataValidators - Class in org.apache.spark.mllib.util
-
:: DeveloperApi ::
A collection of methods used to validate data before applying ML algorithms.
- DataValidators() - Constructor for class org.apache.spark.mllib.util.DataValidators
-
- date() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type date.
- DATE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable date type.
- date_add(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days after start.
- date_format(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts a date/timestamp/string to a value of string in the format specified by the date
format given by the second argument.
- date_sub(Column, int) - Static method in class org.apache.spark.sql.functions
-
Returns the date that is days days before start.
- datediff(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of days from start to end.
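A small sketch combining the date arithmetic functions above; df and its column names are assumptions:

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.date_add;
    import static org.apache.spark.sql.functions.datediff;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class DateFunctionsExample {
      static Dataset<Row> dateMath(Dataset<Row> df) {
        return df.select(
            date_add(col("start"), 7).as("week_later"),
            datediff(col("end"), col("start")).as("duration_days"));
      }
    }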
- DateType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the DateType object.
- DateType - Class in org.apache.spark.sql.types
-
A date type, supporting "0001-01-01" through "9999-12-31".
- dayofmonth(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the day of the month as an integer from a given date/timestamp/string.
- dayofyear(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the day of the year as an integer from a given date/timestamp/string.
- DB2Dialect - Class in org.apache.spark.sql.jdbc
-
- DB2Dialect() - Constructor for class org.apache.spark.sql.jdbc.DB2Dialect
-
- DCT - Class in org.apache.spark.ml.feature
-
A feature transformer that takes the 1D discrete cosine transform of a real vector.
- DCT(String) - Constructor for class org.apache.spark.ml.feature.DCT
-
- DCT() - Constructor for class org.apache.spark.ml.feature.DCT
-
- deadStorageStatusList() - Method in class org.apache.spark.storage.StorageStatusListener
-
Deprecated.
- deadStorageStatusList() - Method in class org.apache.spark.ui.exec.ExecutorsListener
-
Deprecated.
- deallocate() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
-
- decayFactor() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
- decimal() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type decimal.
- decimal(int, int) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type decimal.
- DECIMAL() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable decimal type.
- Decimal - Class in org.apache.spark.sql.types
-
A mutable implementation of BigDecimal that can hold a Long if values are small enough.
- Decimal() - Constructor for class org.apache.spark.sql.types.Decimal
-
- Decimal.DecimalAsIfIntegral$ - Class in org.apache.spark.sql.types
-
An Integral evidence parameter for Decimals.
- Decimal.DecimalIsFractional$ - Class in org.apache.spark.sql.types
-
A Fractional evidence parameter for Decimals.
- DecimalAsIfIntegral$() - Constructor for class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
-
- DecimalIsFractional$() - Constructor for class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
-
- DecimalType - Class in org.apache.spark.sql.types
-
The data type representing java.math.BigDecimal values.
- DecimalType(int, int) - Constructor for class org.apache.spark.sql.types.DecimalType
-
- DecimalType(int) - Constructor for class org.apache.spark.sql.types.DecimalType
-
- DecimalType() - Constructor for class org.apache.spark.sql.types.DecimalType
-
- DecimalType.Expression$ - Class in org.apache.spark.sql.types
-
- DecimalType.Fixed$ - Class in org.apache.spark.sql.types
-
- DecisionTree - Class in org.apache.spark.mllib.tree
-
A class which implements a decision tree learning algorithm for classification and regression.
- DecisionTree(Strategy) - Constructor for class org.apache.spark.mllib.tree.DecisionTree
-
- DecisionTreeClassificationModel - Class in org.apache.spark.ml.classification
-
Decision tree model (http://en.wikipedia.org/wiki/Decision_tree_learning) for classification.
- DecisionTreeClassifier - Class in org.apache.spark.ml.classification
-
Decision tree learning algorithm (http://en.wikipedia.org/wiki/Decision_tree_learning)
for classification.
- DecisionTreeClassifier(String) - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- DecisionTreeClassifier() - Constructor for class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- DecisionTreeModel - Class in org.apache.spark.mllib.tree.model
-
Decision tree model for classification or regression.
- DecisionTreeModel(Node, Enumeration.Value) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- DecisionTreeModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.tree.model
-
- DecisionTreeModel.SaveLoadV1_0$.NodeData - Class in org.apache.spark.mllib.tree.model
-
Model data for model import/export
- DecisionTreeModel.SaveLoadV1_0$.PredictData - Class in org.apache.spark.mllib.tree.model
-
- DecisionTreeModel.SaveLoadV1_0$.SplitData - Class in org.apache.spark.mllib.tree.model
-
- DecisionTreeModelReadWrite - Class in org.apache.spark.ml.tree
-
Helper classes for tree model persistence
- DecisionTreeModelReadWrite() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
-
- DecisionTreeModelReadWrite.NodeData - Class in org.apache.spark.ml.tree
-
- DecisionTreeModelReadWrite.NodeData$ - Class in org.apache.spark.ml.tree
-
- DecisionTreeModelReadWrite.SplitData - Class in org.apache.spark.ml.tree
-
- DecisionTreeModelReadWrite.SplitData$ - Class in org.apache.spark.ml.tree
-
- DecisionTreeRegressionModel - Class in org.apache.spark.ml.regression
-
- DecisionTreeRegressor - Class in org.apache.spark.ml.regression
-
- DecisionTreeRegressor(String) - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- DecisionTreeRegressor() - Constructor for class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- decode(Column, String) - Static method in class org.apache.spark.sql.functions
-
Decodes the first argument from a binary value into a string using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
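For illustration, a minimal Scala sketch round-tripping a string through a binary column with encode and decode (the local master and sample data are assumptions, not part of the API docs):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{decode, encode}

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    // encode produces a binary column; decode turns it back into a string.
    val df = Seq("café").toDF("s")
    df.select(decode(encode($"s", "UTF-8"), "UTF-8").as("roundTrip")).show()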
- decodeFileNameInURI(URI) - Static method in class org.apache.spark.util.Utils
-
Get the file name from uri's raw path and decode it.
- decodeLabel(Vector) - Static method in class org.apache.spark.ml.classification.LabelConverter
-
Converts a vector to a label.
- decodeURLParameter(String) - Static method in class org.apache.spark.ui.UIUtils
-
Decodes a URL parameter if the URL was encoded by YARN-WebAppProxyServlet.
- DEFAULT_CONNECTION_TIMEOUT() - Static method in class org.apache.spark.api.r.SparkRDefaults
-
- DEFAULT_DRIVER_MEM_MB() - Static method in class org.apache.spark.util.Utils
-
Defines a default value for driver memory here, since this value is referenced across the code base and nearly all files already use Utils.scala.
- DEFAULT_HEARTBEAT_INTERVAL() - Static method in class org.apache.spark.api.r.SparkRDefaults
-
- DEFAULT_MAX_FAILURES() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DEFAULT_MAX_TO_STRING_FIELDS() - Static method in class org.apache.spark.util.Utils
-
The performance overhead of creating and logging strings for wide schemas can be large.
- DEFAULT_NUM_RBACKEND_THREADS() - Static method in class org.apache.spark.api.r.SparkRDefaults
-
- DEFAULT_ROLLING_INTERVAL_SECS() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DEFAULT_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
-
- defaultAttr() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
The default binary attribute.
- defaultAttr() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
The default nominal attribute.
- defaultAttr() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
The default numeric attribute.
- defaultCopy(ParamMap) - Method in interface org.apache.spark.ml.param.Params
-
Default implementation of copy with extra params.
- defaultCorrName() - Static method in class org.apache.spark.mllib.stat.correlation.CorrelationNames
-
- DefaultCredentials - Class in org.apache.spark.streaming.kinesis
-
Returns DefaultAWSCredentialsProviderChain for authentication.
- DefaultCredentials() - Constructor for class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- defaultLink() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
- defaultMinPartitions() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Default min number of partitions for Hadoop RDDs when not given by user
- defaultMinPartitions() - Method in class org.apache.spark.SparkContext
-
Default min number of partitions for Hadoop RDDs when not given by user
Notice that we use math.min so the "defaultMinPartitions" cannot be higher than 2.
- defaultParallelism() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
- defaultParallelism() - Method in class org.apache.spark.SparkContext
-
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
- defaultParamMap() - Method in interface org.apache.spark.ml.param.Params
-
Internal param map for default values.
- defaultParams(String) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
Returns default configuration for the boosting algorithm
- defaultParams(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
Returns default configuration for the boosting algorithm
- DefaultParamsReadable<T> - Interface in org.apache.spark.ml.util
-
:: DeveloperApi ::
- DefaultParamsWritable - Interface in org.apache.spark.ml.util
-
:: DeveloperApi ::
- DefaultPartitionCoalescer - Class in org.apache.spark.rdd
-
Coalesce the partitions of a parent RDD (prev) into fewer partitions, so that each partition of this RDD computes one or more of the parent ones.
- DefaultPartitionCoalescer(double) - Constructor for class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- DefaultPartitionCoalescer.PartitionLocations - Class in org.apache.spark.rdd
-
- defaultPartitioner(RDD<?>, Seq<RDD<?>>) - Static method in class org.apache.spark.Partitioner
-
Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
- defaultSize() - Method in class org.apache.spark.sql.types.ArrayType
-
The default size of a value of the ArrayType is the default size of the element type.
- defaultSize() - Method in class org.apache.spark.sql.types.BinaryType
-
The default size of a value of the BinaryType is 100 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.BooleanType
-
The default size of a value of the BooleanType is 1 byte.
- defaultSize() - Method in class org.apache.spark.sql.types.ByteType
-
The default size of a value of the ByteType is 1 byte.
- defaultSize() - Method in class org.apache.spark.sql.types.CalendarIntervalType
-
- defaultSize() - Static method in class org.apache.spark.sql.types.CharType
-
- defaultSize() - Method in class org.apache.spark.sql.types.DataType
-
The default size of a value of this data type, used internally for size estimation.
- defaultSize() - Method in class org.apache.spark.sql.types.DateType
-
The default size of a value of the DateType is 4 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.DecimalType
-
The default size of a value of the DecimalType is 8 bytes when precision is at most 18,
and 16 bytes otherwise.
- defaultSize() - Method in class org.apache.spark.sql.types.DoubleType
-
The default size of a value of the DoubleType is 8 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.FloatType
-
The default size of a value of the FloatType is 4 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.HiveStringType
-
- defaultSize() - Method in class org.apache.spark.sql.types.IntegerType
-
The default size of a value of the IntegerType is 4 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.LongType
-
The default size of a value of the LongType is 8 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.MapType
-
The default size of a value of the MapType is
(the default size of the key type + the default size of the value type).
- defaultSize() - Method in class org.apache.spark.sql.types.NullType
-
- defaultSize() - Static method in class org.apache.spark.sql.types.NumericType
-
- defaultSize() - Method in class org.apache.spark.sql.types.ObjectType
-
- defaultSize() - Method in class org.apache.spark.sql.types.ShortType
-
The default size of a value of the ShortType is 2 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.StringType
-
The default size of a value of the StringType is 20 bytes.
- defaultSize() - Method in class org.apache.spark.sql.types.StructType
-
The default size of a value of the StructType is the total default sizes of all field types.
- defaultSize() - Method in class org.apache.spark.sql.types.TimestampType
-
The default size of a value of the TimestampType is 8 bytes.
- defaultSize() - Static method in class org.apache.spark.sql.types.VarcharType
-
- defaultStrategy(String) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- defaultStrategy(Enumeration.Value) - Static method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- DefaultTopologyMapper - Class in org.apache.spark.storage
-
A TopologyMapper that assumes all nodes are in the same rack
- DefaultTopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.DefaultTopologyMapper
-
- defaultValue() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefault
-
- defaultValue() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
-
- defaultValue() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultString
-
- defaultValueString() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefault
-
- defaultValueString() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
-
- defaultValueString() - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultString
-
- defaultValueString() - Method in class org.apache.spark.internal.config.FallbackConfigEntry
-
- degree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
-
The polynomial degree to expand, which should be greater than or equal to 1.
- degrees() - Method in class org.apache.spark.graphx.GraphOps
-
The degree of each vertex in the graph.
- degrees(Column) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
- degrees(String) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in radians to an approximately equivalent angle measured in degrees.
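A quick sketch of the radians-to-degrees conversion (the local master is an assumption):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{degrees, lit}

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    // Pi radians is 180 degrees (up to floating-point rounding).
    spark.range(1).select(degrees(lit(math.Pi))).show()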
- degreesOfFreedom() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Degrees of freedom.
- degreesOfFreedom() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Degrees of freedom
- degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
-
- degreesOfFreedom() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
-
- degreesOfFreedom() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
Returns the degree(s) of freedom of the hypothesis test.
- delegate() - Method in class org.apache.spark.InterruptibleIterator
-
- deleteCheckpointFiles() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
:: DeveloperApi ::
- deleteRecursively(File) - Static method in class org.apache.spark.util.Utils
-
Delete a file or directory and its contents recursively.
- deleteWithJob(FileSystem, Path, boolean) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Specifies that a file should be deleted with the commit of this job.
- delimiterOptions() - Static method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- delta() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
-
Constant used in initialization and deviance to avoid numerical issues.
- dense(int, int, double[]) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Creates a column-major dense matrix.
- dense(double, double...) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double, Seq<Object>) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double[]) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a dense vector from a double array.
- dense(int, int, double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Creates a column-major dense matrix.
- dense(double, double...) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double, Seq<Object>) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a dense vector from its values.
- dense(double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a dense vector from a double array.
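A short sketch of the dense factory methods, shown here for org.apache.spark.ml.linalg (the org.apache.spark.mllib.linalg versions behave the same); the values are illustrative:

    import org.apache.spark.ml.linalg.{Matrices, Vectors}

    // Dense vector from varargs and from a double array.
    val v1 = Vectors.dense(1.0, 2.0, 3.0)
    val v2 = Vectors.dense(Array(1.0, 2.0, 3.0))

    // 2 x 3 column-major dense matrix: values are laid out column by column,
    // so m(0,0) = 1.0, m(1,0) = 2.0, m(0,1) = 3.0, and so on.
    val m = Matrices.dense(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))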
- dense_rank() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the rank of rows within a window partition, without any gaps.
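A minimal window sketch showing that dense_rank assigns tied rows the same rank with no gaps (data and column names are made up for the example):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.dense_rank

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(("a", 10), ("a", 10), ("a", 20), ("b", 30)).toDF("key", "value")
    val w = Window.partitionBy("key").orderBy("value")
    // Within key "a" the ranks are 1, 1, 2: ties share a rank and no gap follows.
    df.withColumn("dr", dense_rank().over(w)).show()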
- DenseMatrix - Class in org.apache.spark.ml.linalg
-
Column-major dense matrix.
- DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.ml.linalg.DenseMatrix
-
- DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.ml.linalg.DenseMatrix
-
Column-major dense matrix.
- DenseMatrix - Class in org.apache.spark.mllib.linalg
-
Column-major dense matrix.
- DenseMatrix(int, int, double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
-
- DenseMatrix(int, int, double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseMatrix
-
Column-major dense matrix.
- DenseVector - Class in org.apache.spark.ml.linalg
-
A dense vector represented by a value array.
- DenseVector(double[]) - Constructor for class org.apache.spark.ml.linalg.DenseVector
-
- DenseVector - Class in org.apache.spark.mllib.linalg
-
A dense vector represented by a value array.
- DenseVector(double[]) - Constructor for class org.apache.spark.mllib.linalg.DenseVector
-
- dependencies() - Static method in class org.apache.spark.api.r.RRDD
-
- dependencies() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- dependencies() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- dependencies() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- dependencies() - Static method in class org.apache.spark.graphx.VertexRDD
-
- dependencies() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- dependencies() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- dependencies() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- dependencies() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- dependencies() - Method in class org.apache.spark.rdd.RDD
-
Get the list of dependencies of this RDD, taking into account whether the
RDD is checkpointed or not.
- dependencies() - Static method in class org.apache.spark.rdd.UnionRDD
-
- dependencies() - Method in class org.apache.spark.streaming.dstream.DStream
-
List of parent DStreams on which this DStream depends.
- dependencies() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
- Dependency<T> - Class in org.apache.spark
-
:: DeveloperApi ::
Base class for dependencies.
- Dependency() - Constructor for class org.apache.spark.Dependency
-
- DEPLOY_MODE - Static variable in class org.apache.spark.launcher.SparkLauncher
-
The Spark deploy mode.
- deployMode() - Method in class org.apache.spark.SparkContext
-
- depth() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- depth() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- depth() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Get depth of tree.
- depth() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
- DerbyDialect - Class in org.apache.spark.sql.jdbc
-
- DerbyDialect() - Constructor for class org.apache.spark.sql.jdbc.DerbyDialect
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
-
- deriv(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
-
- desc() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
-
- desc() - Method in class org.apache.spark.sql.Column
-
Returns an ordering used in sorting.
- desc(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on the descending order of the column.
- desc() - Method in class org.apache.spark.util.MethodIdentifier
-
- desc_nulls_first() - Method in class org.apache.spark.sql.Column
-
Returns a descending ordering used in sorting, where null values appear before non-null values.
- desc_nulls_first(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on the descending order of the column,
and null values appear before non-null values.
- desc_nulls_last() - Method in class org.apache.spark.sql.Column
-
Returns a descending ordering used in sorting, where null values appear after non-null values.
- desc_nulls_last(String) - Static method in class org.apache.spark.sql.functions
-
Returns a sort expression based on the descending order of the column,
and null values appear after non-null values.
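A small sketch contrasting the null-placement variants (the sample data is an assumption):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{desc_nulls_first, desc_nulls_last}

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(Some(1), None, Some(3)).toDF("value")
    df.orderBy(desc_nulls_first("value")).show() // null row sorts before 3 and 1
    df.orderBy(desc_nulls_last("value")).show()  // null row sorts after 3 and 1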
- describe(String...) - Method in class org.apache.spark.sql.Dataset
-
Computes statistics for numeric and string columns, including count, mean, stddev, min, and max.
- describe(Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Computes statistics for numeric and string columns, including count, mean, stddev, min, and max.
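For example (toy data; the local master is an assumption):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val nums = Seq(1.0, 2.0, 3.0, 4.0).toDF("value")
    // One summary row each for count, mean, stddev, min, and max.
    nums.describe("value").show()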
- describeTopics(int) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- describeTopics() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- describeTopics(int) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Return the topics described by their top-weighted terms.
- describeTopics() - Method in class org.apache.spark.ml.clustering.LDAModel
-
- describeTopics(int) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- describeTopics() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- describeTopics(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
- describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Return the topics described by weighted terms.
- describeTopics() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Return the topics described by weighted terms.
- describeTopics(int) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- description() - Method in class org.apache.spark.ExceptionFailure
-
- description() - Method in class org.apache.spark.sql.catalog.Column
-
- description() - Method in class org.apache.spark.sql.catalog.Database
-
- description() - Method in class org.apache.spark.sql.catalog.Function
-
- description() - Method in class org.apache.spark.sql.catalog.Table
-
- description() - Method in class org.apache.spark.sql.streaming.SinkProgress
-
- description() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
- description() - Method in class org.apache.spark.status.api.v1.JobData
-
- description() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
-
- description() - Method in class org.apache.spark.storage.StorageLevel
-
- description() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- description() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- DeserializationStream - Class in org.apache.spark.serializer
-
:: DeveloperApi ::
A stream for reading serialized objects.
- DeserializationStream() - Constructor for class org.apache.spark.serializer.DeserializationStream
-
- deserialize(Object) - Method in class org.apache.spark.mllib.linalg.VectorUDT
-
- deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
-
- deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
-
- deserialize(ByteBuffer, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
-
- deserialize(ByteBuffer, ClassLoader, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
-
- deserialize(byte[]) - Static method in class org.apache.spark.util.Utils
-
Deserialize an object using Java serialization
- deserialize(byte[], ClassLoader) - Static method in class org.apache.spark.util.Utils
-
Deserialize an object using Java serialization and the given ClassLoader
- deserialized() - Method in class org.apache.spark.storage.StorageLevel
-
- DeserializedMemoryEntry<T> - Class in org.apache.spark.storage.memory
-
- DeserializedMemoryEntry(Object, long, ClassTag<T>) - Constructor for class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- deserializeLongValue(byte[]) - Static method in class org.apache.spark.util.Utils
-
Deserialize a Long value (used for PythonPartitioner).
- deserializeStream(InputStream) - Method in class org.apache.spark.serializer.DummySerializerInstance
-
- deserializeStream(InputStream) - Method in class org.apache.spark.serializer.SerializerInstance
-
- deserializeViaNestedStream(InputStream, SerializerInstance, Function1<DeserializationStream, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Deserialize via nested stream using specific serializer
- destroy() - Method in class org.apache.spark.broadcast.Broadcast
-
Destroy all data and metadata related to this broadcast variable.
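A minimal lifecycle sketch for a broadcast variable (the lookup table and job are illustrative):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
    val total = sc.parallelize(Seq("a", "b", "a")).map(k => lookup.value(k)).sum()

    // Once no future job needs the broadcast, release its data and metadata.
    lookup.destroy()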
- details() - Method in class org.apache.spark.scheduler.StageInfo
-
- details() - Method in class org.apache.spark.status.api.v1.StageData
-
- determineBounds(ArrayBuffer<Tuple2<K, Object>>, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.RangePartitioner
-
Determines the bounds for range partitioning from candidates with weights indicating how many
items each represents.
- DetermineTableStats - Class in org.apache.spark.sql.hive
-
- DetermineTableStats(SparkSession) - Constructor for class org.apache.spark.sql.hive.DetermineTableStats
-
- deterministic() - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Returns true iff this function is deterministic, i.e. given the same input, it always returns the same output.
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- deviance(double, double, double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
- deviance() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
The deviance for the fitted model.
- devianceResiduals() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
The weighted residuals, the usual residuals rescaled by
the square root of the instance weights.
- dfToCols(Dataset<Row>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- dfToRowRDD(Dataset<Row>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- dgemm(double, DenseMatrix<Object>, DenseMatrix<Object>, double, DenseMatrix<Object>) - Static method in class org.apache.spark.ml.ann.BreezeUtil
-
DGEMM: C := alpha * A * B + beta * C
- dgemv(double, DenseMatrix<Object>, DenseVector<Object>, double, DenseVector<Object>) - Static method in class org.apache.spark.ml.ann.BreezeUtil
-
DGEMV: y := alpha * A * x + beta * y
- diag(Vector) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a diagonal matrix in DenseMatrix
format from the supplied values.
- diag(Vector) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a diagonal matrix in Matrix format from the supplied values.
- diag(Vector) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a diagonal matrix in DenseMatrix format from the supplied values.
- diag(Vector) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a diagonal matrix in Matrix format from the supplied values.
- diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- diff(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each vertex present in both this and other, diff returns only those vertices with differing values; for values that are different, keeps the values from other.
- diff(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each vertex present in both this and other, diff returns only those vertices with differing values; for values that are different, keeps the values from other.
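A sketch of the diff semantics described above (vertex ids and values are made up):

    import org.apache.spark.graphx.VertexRDD
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val setA: VertexRDD[Int] = VertexRDD(sc.parallelize(0L until 10L).map(id => (id, 1)))
    val setB: VertexRDD[Int] = setA.mapValues((id, v) => if (id < 3) 2 else v)

    // Only vertices 0, 1, 2 differ, and the values kept are setB's ("other").
    setA.diff(setB).collect() // Array((0,2), (1,2), (2,2)) in some order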
- diff(GenSeq<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- dir() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
-
- directory(File) - Method in class org.apache.spark.launcher.SparkLauncher
-
Sets the working directory of spark-submit.
- disableOutputSpecValidation() - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils
-
Allows the spark.hadoop.validateOutputSpecs checks to be disabled on a case-by-case basis; see SPARK-4835 for more details.
- disconnect() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Disconnects the handle from the application, without stopping it.
- DISK_BYTES_SPILLED() - Static method in class org.apache.spark.InternalAccumulator
-
- DISK_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels
-
- DISK_ONLY() - Static method in class org.apache.spark.storage.StorageLevel
-
- DISK_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels
-
- DISK_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel
-
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData
-
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
-
- diskBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics
-
- diskBytesSpilled() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- diskBytesSpilled() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- diskBytesSpilled() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- diskSize() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- diskSize() - Method in class org.apache.spark.storage.BlockStatus
-
- diskSize() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- diskSize() - Method in class org.apache.spark.storage.RDDInfo
-
- diskUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- diskUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
-
- diskUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
-
- diskUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
-
- diskUsed() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the disk space used by this block manager.
- diskUsedByRdd(int) - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the disk space used by the given RDD in this block manager in O(1) time.
- dispersion() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
The dispersion of the fitted model.
- dispose() - Method in class org.apache.spark.storage.EncryptedBlockData
-
- dispose(ByteBuffer) - Static method in class org.apache.spark.storage.StorageUtils
-
Attempt to clean up a ByteBuffer if it is direct or memory-mapped.
- distinct() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- distinct() - Static method in class org.apache.spark.api.r.RRDD
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- distinct() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- distinct() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- distinct() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- distinct() - Static method in class org.apache.spark.graphx.VertexRDD
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- distinct() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- distinct() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- distinct() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- distinct() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- distinct(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD containing the distinct elements in this RDD.
- distinct() - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD containing the distinct elements in this RDD.
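For example (toy data; the explicit partition count is just to show the overload):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val rdd = sc.parallelize(Seq(1, 2, 2, 3, 3, 3))
    rdd.distinct().collect()   // Array(1, 2, 3) in some order
    rdd.distinct(2).collect()  // same elements, shuffled into 2 partitions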
- distinct(int, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- distinct() - Static method in class org.apache.spark.rdd.UnionRDD
-
- distinct() - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset that contains only the unique rows from this Dataset.
- distinct(Column...) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Creates a Column for this UDAF using the distinct values of the given Columns as input arguments.
- distinct(Seq<Column>) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Creates a Column for this UDAF using the distinct values of the given Columns as input arguments.
- distinct() - Static method in class org.apache.spark.sql.types.StructType
-
- distinct$default$2(int) - Static method in class org.apache.spark.api.r.RRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- distinct$default$2(int) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- distinct$default$2(int) - Static method in class org.apache.spark.graphx.VertexRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- distinct$default$2(int) - Static method in class org.apache.spark.rdd.UnionRDD
-
- DistributedLDAModel - Class in org.apache.spark.ml.clustering
-
Distributed model fitted by LDA.
- DistributedLDAModel - Class in org.apache.spark.mllib.clustering
-
Distributed LDA model.
- DistributedMatrix - Interface in org.apache.spark.mllib.linalg.distributed
-
Represents a distributively stored matrix backed by one or more RDDs.
- div(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
-
- div(Duration) - Method in class org.apache.spark.streaming.Duration
-
- divide(Object) - Method in class org.apache.spark.sql.Column
-
Divides this expression by another expression.
- doc() - Static method in class org.apache.spark.ml.param.DoubleParam
-
- doc() - Static method in class org.apache.spark.ml.param.FloatParam
-
- doc() - Method in class org.apache.spark.ml.param.Param
-
- docConcentration() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- docConcentration() - Static method in class org.apache.spark.ml.clustering.LDA
-
- docConcentration() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- docConcentration() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
- docConcentration() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Concentration parameter (commonly named "alpha") for the prior placed on documents'
distributions over topics ("theta").
- docConcentration() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- DocumentFrequencyAggregator(int) - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
- DocumentFrequencyAggregator() - Constructor for class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
- doesDirectoryContainAnyNewFiles(File, long) - Static method in class org.apache.spark.util.Utils
-
Determines if a directory contains any files newer than the given cutoff, in seconds.
- Dot - Class in org.apache.spark.ml.feature
-
- Dot() - Constructor for class org.apache.spark.ml.feature.Dot
-
- dot(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
dot(x, y)
- dot(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
dot(x, y)
- doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
-
- doTest(DStream<Tuple2<StatCounter, StatCounter>>) - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
-
- DOUBLE() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable double type.
- doubleAccumulator(double) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- doubleAccumulator(double, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
- doubleAccumulator() - Method in class org.apache.spark.SparkContext
-
Create and register a double accumulator, which starts with 0 and accumulates inputs by add.
- doubleAccumulator(String) - Method in class org.apache.spark.SparkContext
-
Create and register a double accumulator, which starts with 0 and accumulates inputs by add.
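A minimal sketch (the accumulator name "sumOfValues" is an arbitrary example):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val acc = sc.doubleAccumulator("sumOfValues")
    sc.parallelize(Seq(1.5, 2.5, 3.0)).foreach(x => acc.add(x))
    println(acc.value) // 7.0; acc.count and acc.avg are also available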
- DoubleAccumulator - Class in org.apache.spark.util
-
An accumulator for computing sum, count, and average of double-precision floating-point numbers.
- DoubleAccumulator() - Constructor for class org.apache.spark.util.DoubleAccumulator
-
- DoubleAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
-
Deprecated.
- DoubleArrayParam - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
Specialized version of Param[Array[Double]] for Java.
- DoubleArrayParam(Params, String, String, Function1<double[], Object>) - Constructor for class org.apache.spark.ml.param.DoubleArrayParam
-
- DoubleArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.DoubleArrayParam
-
- DoubleFlatMapFunction<T> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more records of type Double from each input record.
- DoubleFunction<T> - Interface in org.apache.spark.api.java.function
-
A function that returns Doubles, and can be used to construct DoubleRDDs.
- DoubleParam - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
Specialized version of Param[Double] for Java.
- DoubleParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.DoubleParam
-
- DoubleParam(String, String, String) - Constructor for class org.apache.spark.ml.param.DoubleParam
-
- DoubleParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.DoubleParam
-
- DoubleParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.DoubleParam
-
- DoubleRDDFunctions - Class in org.apache.spark.rdd
-
Extra functions available on RDDs of Doubles through an implicit conversion.
- DoubleRDDFunctions(RDD<Object>) - Constructor for class org.apache.spark.rdd.DoubleRDDFunctions
-
- doubleRDDToDoubleRDDFunctions(RDD<Object>) - Static method in class org.apache.spark.rdd.RDD
-
- DoubleType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the DoubleType object.
- DoubleType - Class in org.apache.spark.sql.types
-
The data type representing Double values.
- driver() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
-
- DRIVER_EXTRA_CLASSPATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver class path.
- DRIVER_EXTRA_JAVA_OPTIONS - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver VM options.
- DRIVER_EXTRA_LIBRARY_PATH - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver native library path.
- DRIVER_MEMORY - Static variable in class org.apache.spark.launcher.SparkLauncher
-
Configuration key for the driver memory.
- DRIVER_WAL_BATCHING_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DRIVER_WAL_BATCHING_TIMEOUT_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DRIVER_WAL_CLASS_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DRIVER_WAL_CLOSE_AFTER_WRITE_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DRIVER_WAL_MAX_FAILURES_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- DRIVER_WAL_ROLLING_INTERVAL_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- driverLogs() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- drop() - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that drops rows containing any null or NaN values.
- drop(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that drops rows containing null or NaN values.
- drop(String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that drops rows containing any null or NaN values in the specified columns.
- drop(Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that drops rows containing any null or NaN values in the specified columns.
- drop(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that drops rows containing null or NaN values in the specified columns.
- drop(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that drops rows containing null or NaN values in the specified columns.
- drop(int) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that drops rows containing less than minNonNulls non-null and non-NaN values.
- drop(int, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that drops rows containing less than minNonNulls non-null and non-NaN values in the specified columns.
- drop(int, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that drops rows containing less than minNonNulls non-null and non-NaN values in the specified columns.
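A sketch of the main drop variants on DataFrameNaFunctions (column names and data are illustrative):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq((Some(1), Some("x")), (None, Some("y")), (Some(3), None)).toDF("a", "b")

    df.na.drop().show()                  // drop rows with a null in any column
    df.na.drop(Seq("a")).show()          // consider only column "a"
    df.na.drop(1, Seq("a", "b")).show()  // keep rows with at least 1 non-null among "a", "b"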
- drop(String...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with columns dropped.
- drop(String) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with a column dropped.
- drop(Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with columns dropped.
- drop(Column) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with a column dropped.
- drop(int) - Static method in class org.apache.spark.sql.types.StructType
-
- dropDuplicates(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
- dropDuplicates() - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset that contains only the unique rows from this Dataset.
- dropDuplicates(Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
(Scala-specific) Returns a new Dataset with duplicate rows removed, considering only
the subset of columns.
- dropDuplicates(String[]) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only
the subset of columns.
- dropDuplicates(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with duplicate rows removed, considering only the subset of columns.
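For example (toy data):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(("a", 1), ("a", 1), ("a", 2)).toDF("k", "v")
    df.dropDuplicates().show()    // exact duplicate rows removed
    df.dropDuplicates("k").show() // one arbitrary row kept per value of "k"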
- dropGlobalTempView(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Drops the global temporary view with the given view name in the catalog.
- dropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
Whether to drop the last category in the encoded vector (default: true)
- dropRight(int) - Static method in class org.apache.spark.sql.types.StructType
-
- dropTempTable(String) - Method in class org.apache.spark.sql.SQLContext
-
- dropTempView(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Drops the local temporary view with the given view name in the catalog.
- dropWhile(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- dspmv(int, double, DenseVector, DenseVector, double, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y := alpha*A*x + beta*y
- Dst - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose the destination and edge fields but not the source field.
- dstAttr() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex attribute of the edge's destination vertex.
- dstAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
-
The destination vertex attribute
- dstAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- dstId() - Method in class org.apache.spark.graphx.Edge
-
- dstId() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex id of the edge's destination vertex.
- dstId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- dstream() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
- dstream() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
- dstream() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- dstream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- dstream() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- dstream() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- dstream() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- DStream<T> - Class in org.apache.spark.streaming.dstream
-
A Discretized Stream (DStream), the basic abstraction in Spark Streaming, is a continuous
sequence of RDDs (of the same type) representing a continuous stream of data (see
org.apache.spark.rdd.RDD in the Spark core documentation for more details on RDDs).
- DStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.DStream
-
- dtypes() - Method in class org.apache.spark.sql.Dataset
-
Returns all column names and their data types as an array.
- DummySerializerInstance - Class in org.apache.spark.serializer
-
Unfortunately, we need a serializer instance in order to construct a DiskBlockObjectWriter.
- duration() - Method in class org.apache.spark.scheduler.TaskInfo
-
- duration() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- duration() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
-
- duration() - Method in class org.apache.spark.status.api.v1.TaskData
-
- Duration - Class in org.apache.spark.streaming
-
- Duration(long) - Constructor for class org.apache.spark.streaming.Duration
-
- duration() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
Return the duration of this output operation.
- durationMs() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- Durations - Class in org.apache.spark.streaming
-
- Durations() - Constructor for class org.apache.spark.streaming.Durations
-
- f() - Method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- f1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the document-based F1-measure, averaged over the number of documents.
- f1Measure(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns f1-measure for a given label (category)
- factorial(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the factorial of the given value.
- failed() - Method in class org.apache.spark.scheduler.TaskInfo
-
- FAILED() - Static method in class org.apache.spark.TaskState
-
- failedJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- failedStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- failedTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- failedTasks() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- failure(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- failureReason() - Method in class org.apache.spark.scheduler.StageInfo
-
If the stage failed, the reason why.
- failureReason() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
-
- failureReason() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- failureReasonCell(String, int, boolean) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
- FAIR() - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- FallbackConfigEntry<T> - Class in org.apache.spark.internal.config
-
A config entry whose default value is defined by another config entry.
- FallbackConfigEntry(String, String, boolean, ConfigEntry<T>) - Constructor for class org.apache.spark.internal.config.FallbackConfigEntry
-
- FalsePositiveRate - Class in org.apache.spark.mllib.evaluation.binary
-
False positive rate.
- FalsePositiveRate() - Constructor for class org.apache.spark.mllib.evaluation.binary.FalsePositiveRate
-
- falsePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns false positive rate for a given label (category)
- family() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- family() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- family() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- family() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- Family$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
-
- FamilyAndLink$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
- fastEquals(TreeNode<?>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- fastEquals(TreeNode<?>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- fastEquals(TreeNode<?>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- fdr() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- fdr() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- fdr() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- feature() - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$.Data
-
- feature() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- feature() - Method in class org.apache.spark.mllib.tree.model.Split
-
- featureImportances() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
Estimate of the importance of each feature.
- featureImportances() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
Estimate of the importance of each feature.
- featureImportances() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
Estimate of the importance of each feature.
- featureImportances() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
Estimate of the importance of each feature.
- featureImportances() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
Estimate of the importance of each feature.
- featureImportances() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
Estimate of the importance of each feature.
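A hedged end-to-end sketch of reading featureImportances (the toy training set is fabricated so that only the first feature carries signal; it is not from the API docs):

    import org.apache.spark.ml.classification.RandomForestClassifier
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    val train = Seq(
      (0.0, Vectors.dense(0.0, 5.0)),
      (1.0, Vectors.dense(1.0, 5.0)),
      (0.0, Vectors.dense(0.0, 7.0)),
      (1.0, Vectors.dense(1.0, 7.0))
    ).toDF("label", "features")

    val model = new RandomForestClassifier().setNumTrees(10).fit(train)
    // A Vector summing to 1; feature 0 should dominate on this data.
    println(model.featureImportances)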
- featureIndex() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- featureIndex() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- featureIndex() - Method in class org.apache.spark.ml.tree.CategoricalSplit
-
- featureIndex() - Method in class org.apache.spark.ml.tree.ContinuousSplit
-
- featureIndex() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
-
- featureIndex() - Method in interface org.apache.spark.ml.tree.Split
-
Index of the feature that this split tests.
- features() - Method in class org.apache.spark.ml.feature.LabeledPoint
-
- features() - Method in class org.apache.spark.mllib.regression.LabeledPoint
-
- featuresCol() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- featuresCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Field in "predictions" which gives the features of each instance as a vector.
- featuresCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- featuresCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- featuresCol() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.LDA
-
- featuresCol() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- featuresCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- featuresCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- featuresCol() - Static method in class org.apache.spark.ml.feature.RFormula
-
- featuresCol() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- featuresCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- featuresCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- featureSubsetStrategy() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- featureSubsetStrategy() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- featureSubsetStrategy() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- featureSubsetStrategy() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- FeatureType - Class in org.apache.spark.mllib.tree.configuration
-
Enum to describe whether a feature is "continuous" or "categorical"
- FeatureType() - Constructor for class org.apache.spark.mllib.tree.configuration.FeatureType
-
- featureType() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- featureType() - Method in class org.apache.spark.mllib.tree.model.Split
-
- FETCH_WAIT_TIME() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
-
- FetchFailed - Class in org.apache.spark
-
:: DeveloperApi ::
Task failed to fetch shuffle data from a remote node.
- FetchFailed(BlockManagerId, int, int, int, String) - Constructor for class org.apache.spark.FetchFailed
-
- fetchFile(String, File, SparkConf, org.apache.spark.SecurityManager, Configuration, long, boolean) - Static method in class org.apache.spark.util.Utils
-
Download a file or directory to target directory.
- fetchPct() - Method in class org.apache.spark.scheduler.RuntimePercentage
-
- fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- fetchWaitTime() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
-
- fetchWaitTime() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- field() - Method in class org.apache.spark.storage.BroadcastBlockId
-
- fieldIndex(String) - Method in interface org.apache.spark.sql.Row
-
Returns the index of a given field name.
- fieldIndex(String) - Method in class org.apache.spark.sql.types.StructType
-
Returns the index of a given field.
- fieldNames() - Method in class org.apache.spark.sql.types.StructType
-
Returns all field names in an array.
- fields() - Method in class org.apache.spark.sql.types.StructType
-
- FIFO() - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- FILE_FORMAT() - Static method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- FileBasedTopologyMapper - Class in org.apache.spark.storage
-
A simple file based topology mapper.
- FileBasedTopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.FileBasedTopologyMapper
-
- FileCommitProtocol - Class in org.apache.spark.internal.io
-
An interface to define how a single Spark job commits its outputs.
- FileCommitProtocol() - Constructor for class org.apache.spark.internal.io.FileCommitProtocol
-
- FileCommitProtocol.EmptyTaskCommitMessage$ - Class in org.apache.spark.internal.io
-
- FileCommitProtocol.TaskCommitMessage - Class in org.apache.spark.internal.io
-
- fileFormat() - Method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- files() - Method in class org.apache.spark.SparkContext
-
- fileStream(String, Class<K>, Class<V>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them using the given key-value types and input format.
- fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them using the given key-value types and input format.
- fileStream(String, Class<K>, Class<V>, Class<F>, Function<Path, Boolean>, boolean, Configuration) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them using the given key-value types and input format.
- fileStream(String, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them using the given key-value types and input format.
- fileStream(String, Function1<Path, Object>, boolean, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them using the given key-value types and input format.
- fileStream(String, Function1<Path, Object>, boolean, Configuration, ClassTag<K>, ClassTag<V>, ClassTag<F>) - Method in class org.apache.spark.streaming.StreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them using the given key-value types and input format.
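For illustration, a minimal Scala sketch (assuming an existing StreamingContext ssc; the input path is hypothetical):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

    // Watch a directory and turn each newly arriving text file into lines.
    val lines = ssc
      .fileStream[LongWritable, Text, TextInputFormat]("/data/incoming")
      .map { case (_, text) => text.toString }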
- fill(long) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in numeric columns with value.
- fill(double) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in numeric columns with value.
- fill(String) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values in string columns with value.
- fill(long, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(double, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(long, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(double, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null or NaN values in specified numeric columns.
- fill(String, String[]) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values in specified string columns.
- fill(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null values in specified string columns.
- fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Returns a new DataFrame that replaces null values.
- fill(Map<String, Object>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Returns a new DataFrame that replaces null values.
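For illustration, a minimal Scala sketch (assuming a DataFrame df with a numeric column "age" and a string column "name"; the column names are hypothetical):

    // Per-column defaults via the Scala-specific Map overload.
    val withDefaults = df.na.fill(Map("age" -> 0, "name" -> "unknown"))

    // Or target specific columns with the typed overloads.
    val cleaned = df.na.fill(0.0, Seq("age")).na.fill("unknown", Seq("name"))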
- fillInStackTrace() - Static method in exception org.apache.spark.sql.AnalysisException
-
- filter(Function<Double, Boolean>) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function<T, Boolean>) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function1<T, Object>) - Static method in class org.apache.spark.api.r.RRDD
-
- filter(Function1<T, Object>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- filter(Function1<Graph<VD, ED>, Graph<VD2, ED2>>, Function1<EdgeTriplet<VD2, ED2>, Object>, Function2<Object, VD2, Object>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.GraphOps
-
Filter the graph by computing some values to filter on, and applying the predicates.
- filter(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- filter(Function1<Tuple2<Object, VD>, Object>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- filter(Function1<Tuple2<Object, VD>, Object>) - Method in class org.apache.spark.graphx.VertexRDD
-
Restricts the vertex set to the set of vertices satisfying the given predicate.
- filter(Params) - Method in class org.apache.spark.ml.param.ParamMap
-
Filters this param map for the given parent.
- filter(Function1<T, Object>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- filter(Function1<T, Object>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- filter(Function1<T, Object>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- filter(Function1<T, Object>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- filter(Function1<T, Object>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD containing only the elements that satisfy a predicate.
- filter(Function1<T, Object>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- filter(Column) - Method in class org.apache.spark.sql.Dataset
-
Filters rows using the given condition.
- filter(String) - Method in class org.apache.spark.sql.Dataset
-
Filters rows using the given SQL expression.
- filter(Function1<T, Object>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Scala-specific)
Returns a new Dataset that only contains elements where func returns true.
- filter(FilterFunction<T>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Java-specific)
Returns a new Dataset that only contains elements where func returns true.
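For illustration, a minimal Scala sketch of the three Dataset.filter styles (assuming a Dataset[Person] people, where Person is a hypothetical case class with an age field):

    import spark.implicits._

    val adults1 = people.filter($"age" >= 18)       // Column condition
    val adults2 = people.filter("age >= 18")        // SQL expression string
    val adults3 = people.filter(p => p.age >= 18)   // typed Scala predicate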
- Filter - Class in org.apache.spark.sql.sources
-
A filter predicate for data sources.
- Filter() - Constructor for class org.apache.spark.sql.sources.Filter
-
- filter(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- filter() - Method in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
-
- filter(Function<T, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream containing only the elements that satisfy a predicate.
- filter(Function<T, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- filter(Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream containing only the elements that satisfy a predicate.
- filter(Function<Tuple2<K, V>, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- filter(Function<Tuple2<K, V>, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- filter(Function<T, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- filter(Function1<T, Object>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream containing only the elements that satisfy a predicate.
- filterByRange(K, K) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
-
Returns an RDD containing only the elements in the inclusive range lower to upper.
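For illustration, a minimal Scala sketch (assuming a SparkContext sc). If the RDD was sorted with a RangePartitioner, partitions entirely outside the range are skipped:

    val pairs = sc.parallelize(Seq(1 -> "a", 5 -> "b", 9 -> "c")).sortByKey()
    val middle = pairs.filterByRange(2, 8)   // keeps only (5, "b")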
- FilterFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's filter function.
- filterName() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
-
- filterNot(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- filterParams() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
-
- finalStorageLevel() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- find(Function1<BaseType, Object>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- find(Function1<BaseType, Object>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- find(Function1<BaseType, Object>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- find(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- findLeader(String, int) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- findLeaders(Set<TopicAndPartition>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- findSynonyms(String, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words closest in similarity to the given word, not including the word itself.
- findSynonyms(Vector, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words whose vector representation is most similar to the supplied vector.
- findSynonyms(String, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Find synonyms of a word; do not include the word itself in results.
- findSynonyms(Vector, int) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Find synonyms of the vector representation of a word, possibly including any words in the model vocabulary whose vector representation is the supplied vector.
- findSynonymsArray(Vector, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words whose vector representation is most similar to the supplied vector.
- findSynonymsArray(String, int) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Find "num" number of words closest in similarity to the given word, not including the word itself.
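For illustration, a minimal Scala sketch (assuming a fitted ml Word2VecModel model whose vocabulary contains the query word):

    val synonyms = model.findSynonyms("spark", 5)    // DataFrame of (word, similarity)
    synonyms.show()
    val local = model.findSynonymsArray("spark", 5)  // same result as a local Array[(String, Double)]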
- finish(BUF) - Method in class org.apache.spark.sql.expressions.Aggregator
-
Transform the output of the reduction.
- finished() - Method in class org.apache.spark.scheduler.TaskInfo
-
- FINISHED() - Static method in class org.apache.spark.TaskState
-
- finishTime() - Method in class org.apache.spark.scheduler.TaskInfo
-
The time when the task has completed successfully (including the time to remotely fetch
results, if necessary).
- first() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
- first() - Method in class org.apache.spark.api.java.JavaPairRDD
-
- first() - Static method in class org.apache.spark.api.java.JavaRDD
-
- first() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the first element in this RDD.
- first() - Static method in class org.apache.spark.api.r.RRDD
-
- first() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- first() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- first() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- first() - Static method in class org.apache.spark.graphx.VertexRDD
-
- first() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- first() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- first() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- first() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- first() - Method in class org.apache.spark.rdd.RDD
-
Return the first element in this RDD.
- first() - Static method in class org.apache.spark.rdd.UnionRDD
-
- first() - Method in class org.apache.spark.sql.Dataset
-
Returns the first row.
- first(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value in a group.
- first(String, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value of a column in a group.
- first(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value in a group.
- first(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the first value of a column in a group.
- firstFailureReason() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- firstTaskLaunchedTime() - Method in class org.apache.spark.status.api.v1.StageData
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeans
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDA
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Estimator
-
Fits a single model to the input data with optional parameters.
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Estimator
-
Fits a single model to the input data with optional parameters.
- fit(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.Estimator
-
Fits a single model to the input data with provided parameter map.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.Estimator
-
Fits a model to the input data.
- fit(Dataset<?>, ParamMap[]) - Method in class org.apache.spark.ml.Estimator
-
Fits multiple models to the input data with multiple sets of parameters.
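For illustration, a minimal Scala sketch of the fit overloads (assuming a training DataFrame training with "label" and "features" columns):

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.param.ParamMap

    val lr = new LogisticRegression()
    val m1 = lr.fit(training)                                        // default params
    val m2 = lr.fit(training, lr.maxIter -> 20, lr.regParam -> 0.1)  // inline param pairs
    val models = lr.fit(training, Array(ParamMap(lr.regParam -> 0.01),
                                        ParamMap(lr.regParam -> 0.1)))  // one model per map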
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.IDF
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.Imputer
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.PCA
-
Computes a PCAModel that contains the principal components of the input vectors.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.RFormula
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.StringIndexer
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorIndexer
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.Pipeline
-
Fits the pipeline to the input dataset with additional parameters.
- fit(Dataset<?>) - Method in class org.apache.spark.ml.Predictor
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.recommendation.ALS
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- fit(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- fit(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- fit(Dataset<?>, ParamMap[]) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- fit(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- fit(Dataset<?>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- fit(Dataset<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- fit(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
Returns a ChiSquared feature selector.
- fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
-
Computes the inverse document frequency.
- fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDF
-
Computes the inverse document frequency.
- fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
-
Computes a PCAModel that contains the principal components of the input vectors.
- fit(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.PCA
-
Java-friendly version of fit().
- fit(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.StandardScaler
-
Computes the mean and variance and stores as a model to be used for later scaling.
- fit(RDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Computes the vector representation of each word in vocabulary.
- fit(JavaRDD<S>) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Computes the vector representation of each word in vocabulary (Java version).
- fitIntercept() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- fitIntercept() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- fitIntercept() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- fitIntercept() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- fitIntercept() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- fitIntercept() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- fitIntercept() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- fitIntercept() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- fitIntercept() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- fitIntercept() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- Fixed$() - Constructor for class org.apache.spark.sql.types.DecimalType.Fixed$
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by first applying a function to all elements of this
RDD, and then flattening the results.
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by first applying a function to all elements of this
RDD, and then flattening the results.
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- flatMap(Function1<T, TraversableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Scala-specific)
Returns a new Dataset by first applying a function to all elements of this Dataset,
and then flattening the results.
- flatMap(FlatMapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Java-specific)
Returns a new Dataset by first applying a function to all elements of this Dataset,
and then flattening the results.
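For illustration, a minimal Scala sketch (assuming a Dataset[String] lines; spark.implicits._ supplies the result encoder):

    import spark.implicits._

    // One input line becomes zero or more output tokens.
    val words = lines.flatMap(_.split("\\s+"))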
- flatMap(Function1<BaseType, TraversableOnce<A>>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- flatMap(Function1<BaseType, TraversableOnce<A>>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- flatMap(Function1<BaseType, TraversableOnce<A>>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- flatMap(Function1<A, GenTraversableOnce<B>>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- flatMap(FlatMapFunction<T, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream,
and then flattening the results
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- flatMap(FlatMapFunction<T, U>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- flatMap(Function1<T, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream by applying a function to all elements of this DStream,
and then flattening the results
- FlatMapFunction<T,R> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more output records from each input record.
- FlatMapFunction2<T1,T2,R> - Interface in org.apache.spark.api.java.function
-
A function that takes two inputs and returns zero or more output records.
- flatMapGroups(Function2<K, Iterator<V>, TraversableOnce<U>>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Scala-specific)
Applies the given function to each group of data.
- flatMapGroups(FlatMapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Java-specific)
Applies the given function to each group of data.
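For illustration, a minimal Scala sketch (assuming a Dataset[(String, Int)] events of hypothetical (user, score) pairs):

    import spark.implicits._

    // Emit each user's top two scores; the function may return any number of rows per group.
    val topTwo = events
      .groupByKey { case (user, _) => user }
      .flatMapGroups { (user, rows) =>
        rows.map(_._2).toSeq.sorted(Ordering[Int].reverse).take(2).map(user -> _)
      }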
- FlatMapGroupsFunction<K,V,R> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more output records from each grouping key and its values.
- flatMapGroupsWithState(OutputMode, GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, Iterator<U>>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
::Experimental::
(Scala-specific)
Applies the given function to each group of data, while maintaining a user-defined per-group
state.
- flatMapGroupsWithState(FlatMapGroupsWithStateFunction<K, V, S, U>, OutputMode, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
::Experimental::
(Java-specific)
Applies the given function to each group of data, while maintaining a user-defined per-group
state.
- FlatMapGroupsWithStateFunction<K,V,S,R> - Interface in org.apache.spark.api.java.function
-
::Experimental::
Base interface for a map function used in org.apache.spark.sql.KeyValueGroupedDataset.flatMapGroupsWithState(FlatMapGroupsWithStateFunction, org.apache.spark.sql.streaming.OutputMode, org.apache.spark.sql.Encoder, org.apache.spark.sql.Encoder).
- flatMapToDouble(DoubleFlatMapFunction<T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- flatMapToDouble(DoubleFlatMapFunction<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- flatMapToDouble(DoubleFlatMapFunction<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- flatMapToDouble(DoubleFlatMapFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by first applying a function to all elements of this
RDD, and then flattening the results.
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by first applying a function to all elements of this
RDD, and then flattening the results.
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream,
and then flattening the results
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- flatMapToPair(PairFlatMapFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- flatMapValues(Function<V, Iterable<U>>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Pass each value in the key-value pair RDD through a flatMap function without changing the
keys; this also retains the original RDD's partitioning.
- flatMapValues(Function1<V, TraversableOnce<U>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Pass each value in the key-value pair RDD through a flatMap function without changing the
keys; this also retains the original RDD's partitioning.
- flatMapValues(Function<V, Iterable<U>>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying a flatmap function to the value of each key-value pair in
'this' DStream without changing the key.
- flatMapValues(Function<V, Iterable<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- flatMapValues(Function<V, Iterable<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- flatMapValues(Function1<V, TraversableOnce<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying a flatmap function to the value of each key-value pair in
'this' DStream without changing the key.
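For illustration, a minimal Scala sketch on a pair RDD (assuming a SparkContext sc); the same idea applies to the DStream variant:

    val pairs = sc.parallelize(Seq("a" -> "x,y", "b" -> "z"))
    val exploded = pairs.flatMapValues(_.split(","))   // (a,x), (a,y), (b,z); partitioning preserved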
- flatten(Function1<A, GenTraversableOnce<B>>) - Static method in class org.apache.spark.sql.types.StructType
-
- FLOAT() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable float type.
- FloatAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
-
Deprecated.
- FloatParam - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
Specialized version of Param[Float] for Java.
- FloatParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
-
- FloatParam(String, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
-
- FloatParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.FloatParam
-
- FloatParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.FloatParam
-
- FloatType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the FloatType object.
- FloatType - Class in org.apache.spark.sql.types
-
The data type representing Float values.
- floor(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the floor of the given value.
- floor(String) - Static method in class org.apache.spark.sql.functions
-
Computes the floor of the given column.
- floor() - Method in class org.apache.spark.sql.types.Decimal
-
- floor(Duration) - Method in class org.apache.spark.streaming.Time
-
- floor(Duration, Time) - Method in class org.apache.spark.streaming.Time
-
- FlumeUtils - Class in org.apache.spark.streaming.flume
-
- FlumeUtils() - Constructor for class org.apache.spark.streaming.flume.FlumeUtils
-
- flush() - Method in class org.apache.spark.io.SnappyOutputStreamWrapper
-
- flush() - Method in class org.apache.spark.serializer.SerializationStream
-
- flush() - Method in class org.apache.spark.storage.TimeTrackingOutputStream
-
- fMeasure(double, double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns f-measure for a given label (category)
- fMeasure(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns f1-measure for a given label (category)
- fMeasure() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
- fMeasureByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
Returns a DataFrame with two fields (threshold, F-Measure) representing the F-Measure curve with beta = 1.0.
- fMeasureByThreshold(double) - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, F-Measure) curve.
- fMeasureByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, F-Measure) curve with beta = 1.0.
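For illustration, a minimal Scala sketch (assuming an RDD[(Double, Double)] scoreAndLabels of (score, label) pairs):

    import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

    val metrics = new BinaryClassificationMetrics(scoreAndLabels)
    val f1 = metrics.fMeasureByThreshold()            // RDD of (threshold, F1)
    val f05 = metrics.fMeasureByThreshold(0.5)        // beta = 0.5
    val bestThreshold = f1.collect().maxBy(_._2)._1   // threshold maximizing F1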
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- fold(T, Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregate the elements of each partition, and then the results for all the partitions, using a
given associative function and a neutral "zero value".
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.api.r.RRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- fold(T, Function2<T, T, T>) - Method in class org.apache.spark.rdd.RDD
-
Aggregate the elements of each partition, and then the results for all the partitions, using a
given associative function and a neutral "zero value".
- fold(T, Function2<T, T, T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- fold(A1, Function2<A1, A1, A1>) - Static method in class org.apache.spark.sql.types.StructType
-
- foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative function and a neutral "zero value" which
may be added to the result an arbitrary number of times, and must not change the result
(e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative function and a neutral "zero value" which
may be added to the result an arbitrary number of times, and must not change the result
(e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative function and a neutral "zero value"
which may be added to the result an arbitrary number of times, and must not change the result
(e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative function and a neutral "zero value" which
may be added to the result an arbitrary number of times, and must not change the result
(e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, int, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative function and a neutral "zero value" which
may be added to the result an arbitrary number of times, and must not change the result
(e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
- foldByKey(V, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative function and a neutral "zero value" which
may be added to the result an arbitrary number of times, and must not change the result
(e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
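For illustration, a minimal Scala sketch (assuming a SparkContext sc); 0 is a valid zero value for addition because adding it any number of times does not change the result:

    val total = sc.parallelize(1 to 4).fold(0)(_ + _)   // 10

    val sums = sc.parallelize(Seq("a" -> 1, "a" -> 2, "b" -> 3))
      .foldByKey(0)(_ + _)
      .collect()                                        // (a,3), (b,3)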
- foldLeft(B, Function2<B, A, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- foldRight(B, Function2<A, B, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- forall(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- forceIndexLabel() - Method in class org.apache.spark.ml.feature.RFormula
-
Force the label to be indexed, whether it is numeric or string type.
- foreach(VoidFunction<T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- foreach(VoidFunction<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- foreach(VoidFunction<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- foreach(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Applies a function f to all elements of this RDD.
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.api.r.RRDD
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
-
Applies a function f to all elements of this RDD.
- foreach(Function1<T, BoxedUnit>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- foreach(Function1<T, BoxedUnit>) - Method in class org.apache.spark.sql.Dataset
-
Applies a function f to all rows.
- foreach(ForeachFunction<T>) - Method in class org.apache.spark.sql.Dataset
-
(Java-specific) Runs func on each element of this Dataset.
- foreach(Function1<BaseType, BoxedUnit>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- foreach(Function1<BaseType, BoxedUnit>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- foreach(Function1<BaseType, BoxedUnit>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- foreach(ForeachWriter<T>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Starts the execution of the streaming query, which will continually send results to the given
ForeachWriter as new data arrives.
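For illustration, a minimal Scala sketch of a ForeachWriter sink (assuming a streaming DataFrame streamingDF; a real writer would open and close a connection per partition and epoch instead of printing):

    import org.apache.spark.sql.{ForeachWriter, Row}

    val query = streamingDF.writeStream
      .foreach(new ForeachWriter[Row] {
        def open(partitionId: Long, version: Long): Boolean = true  // accept every partition
        def process(row: Row): Unit = println(row)                  // stand-in for a real sink
        def close(errorOrNull: Throwable): Unit = ()
      })
      .start()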
- foreach(Function1<A, U>) - Static method in class org.apache.spark.sql.types.StructType
-
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.DenseVector
-
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Applies a function f to all the active elements of dense and sparse matrices.
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.ml.linalg.SparseVector
-
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.ml.linalg.Vector
-
Applies a function f to all the active elements of dense and sparse vectors.
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- foreachActive(Function3<Object, Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Applies a function f to all the active elements of dense and sparse matrices.
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- foreachActive(Function2<Object, Object, BoxedUnit>) - Method in interface org.apache.spark.mllib.linalg.Vector
-
Applies a function f to all the active elements of dense and sparse vectors.
- foreachAsync(VoidFunction<T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- foreachAsync(VoidFunction<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- foreachAsync(VoidFunction<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- foreachAsync(VoidFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of the foreach action, which applies a function f to all the elements of this RDD.
- foreachAsync(Function1<T, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Applies a function f to all elements of this RDD.
- ForeachFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's foreach function.
- foreachPartition(VoidFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- foreachPartition(VoidFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- foreachPartition(VoidFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- foreachPartition(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Applies a function f to each partition of this RDD.
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.api.r.RRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.RDD
-
Applies a function f to each partition of this RDD.
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- foreachPartition(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.sql.Dataset
-
Applies a function f to each partition of this Dataset.
- foreachPartition(ForeachPartitionFunction<T>) - Method in class org.apache.spark.sql.Dataset
-
(Java-specific) Runs func on each partition of this Dataset.
- foreachPartitionAsync(VoidFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- foreachPartitionAsync(VoidFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- foreachPartitionAsync(VoidFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- foreachPartitionAsync(VoidFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of the foreachPartition action, which applies a function f to each partition of this RDD.
- foreachPartitionAsync(Function1<Iterator<T>, BoxedUnit>) - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Applies a function f to each partition of this RDD.
- ForeachPartitionFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's foreachPartition function.
- foreachRDD(VoidFunction<R>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- foreachRDD(VoidFunction2<R, Time>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- foreachRDD(VoidFunction<R>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Apply a function to each RDD in this DStream.
- foreachRDD(VoidFunction2<R, Time>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Apply a function to each RDD in this DStream.
- foreachRDD(VoidFunction<R>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- foreachRDD(VoidFunction2<R, Time>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- foreachRDD(VoidFunction<R>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- foreachRDD(VoidFunction2<R, Time>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- foreachRDD(VoidFunction<R>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- foreachRDD(VoidFunction2<R, Time>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- foreachRDD(VoidFunction<R>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- foreachRDD(VoidFunction2<R, Time>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- foreachRDD(VoidFunction<R>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- foreachRDD(VoidFunction2<R, Time>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- foreachRDD(Function1<RDD<T>, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Apply a function to each RDD in this DStream.
- foreachRDD(Function2<RDD<T>, Time, BoxedUnit>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Apply a function to each RDD in this DStream.
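For illustration, a minimal Scala sketch (assuming a DStream[String] stream); the function runs on the driver once per batch interval:

    stream.foreachRDD { (rdd, time) =>
      println(s"Batch at $time contained ${rdd.count()} records")
    }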
- foreachUp(Function1<BaseType, BoxedUnit>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- foreachUp(Function1<BaseType, BoxedUnit>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- foreachUp(Function1<BaseType, BoxedUnit>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- ForeachWriter<T> - Class in org.apache.spark.sql
-
A class to consume data generated by a StreamingQuery.
- ForeachWriter() - Constructor for class org.apache.spark.sql.ForeachWriter
-
- format(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Specifies the input data source format.
- format(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Specifies the underlying output data source.
- format(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Specifies the input data source format.
- format(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Specifies the underlying output data source.
- format_number(Column, int) - Static method in class org.apache.spark.sql.functions
-
Formats numeric column x to a format like '#,###,###.##', rounded to d decimal places
with HALF_EVEN round mode, and returns the result as a string column.
- format_string(String, Column...) - Static method in class org.apache.spark.sql.functions
-
Formats the arguments in printf-style and returns the result as a string column.
- format_string(String, Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Formats the arguments in printf-style and returns the result as a string column.
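For illustration, a minimal Scala sketch (assuming a DataFrame df with hypothetical "name" and "price" columns):

    import org.apache.spark.sql.functions.{col, format_number, format_string}

    df.select(
      format_string("%s costs %s", col("name"), col("price")).as("label"),
      format_number(col("price"), 2).as("price_2dp"))   // '#,###,###.##', HALF_EVEN rounding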
- formatBatchTime(long, long, boolean, TimeZone) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
If batchInterval is less than 1 second, format batchTime with milliseconds.
- formatDate(Date) - Static method in class org.apache.spark.ui.UIUtils
-
- formatDate(long) - Static method in class org.apache.spark.ui.UIUtils
-
- formatDuration(long) - Static method in class org.apache.spark.ui.UIUtils
-
- formatDurationVerbose(long) - Static method in class org.apache.spark.ui.UIUtils
-
Generate a verbose human-readable string representing a duration such as "5 second 35 ms"
- formatNumber(double) - Static method in class org.apache.spark.ui.UIUtils
-
Generate a human-readable string representing a number (e.g. 100 K).
- formatVersion() - Method in interface org.apache.spark.mllib.util.Saveable
-
Current version of model save/load format.
- formula() - Method in class org.apache.spark.ml.feature.RFormula
-
R formula parameter.
- FPGrowth - Class in org.apache.spark.ml.fpm
-
:: Experimental ::
A parallel FP-growth algorithm to mine frequent itemsets.
- FPGrowth(String) - Constructor for class org.apache.spark.ml.fpm.FPGrowth
-
- FPGrowth() - Constructor for class org.apache.spark.ml.fpm.FPGrowth
-
- FPGrowth - Class in org.apache.spark.mllib.fpm
-
A parallel FP-growth algorithm to mine frequent itemsets.
- FPGrowth() - Constructor for class org.apache.spark.mllib.fpm.FPGrowth
-
Constructs a default instance with default parameters {minSupport: 0.3, numPartitions: same as the input data}.
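A minimal spark.mllib FPGrowth run over a toy transaction RDD (the data is made up; sc is an existing SparkContext):

    import org.apache.spark.mllib.fpm.FPGrowth

    val transactions = sc.parallelize(Seq(
      Array("a", "b", "c"),
      Array("a", "b"),
      Array("b", "c")))

    val model = new FPGrowth()
      .setMinSupport(0.5)       // overrides the 0.3 default
      .setNumPartitions(2)
      .run(transactions)

    model.freqItemsets.collect().foreach { fi =>
      println(s"${fi.items.mkString("[", ",", "]")}: ${fi.freq}")
    }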
- FPGrowth.FreqItemset<Item> - Class in org.apache.spark.mllib.fpm
-
Frequent itemset.
- FPGrowthModel - Class in org.apache.spark.ml.fpm
-
:: Experimental ::
Model fitted by FPGrowth.
- FPGrowthModel<Item> - Class in org.apache.spark.mllib.fpm
-
Model trained by FPGrowth, which holds frequent itemsets.
- FPGrowthModel(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel
-
- FPGrowthModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.fpm
-
- fpr() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- fpr() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- fpr() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- freq() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
-
- freq() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
-
- freqItems(String[], double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Finding frequent items for columns, possibly with false positives.
- freqItems(String[]) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Finding frequent items for columns, possibly with false positives.
- freqItems(Seq<String>, double) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
(Scala-specific) Finding frequent items for columns, possibly with false positives.
- freqItems(Seq<String>) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
(Scala-specific) Finding frequent items for columns, possibly with false positives.
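For illustration, assuming a DataFrame df with a column payment_method (the column name is hypothetical):

    // Items with frequency above the given support threshold; may contain false positives.
    val freq = df.stat.freqItems(Seq("payment_method"), 0.3)
    freq.show()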
- FreqItemset(Object, long) - Constructor for class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
-
- freqItemsets() - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- freqItemsets() - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
-
- FreqSequence(Object[], long) - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
-
- freqSequences() - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel
-
- from_json(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Parses a column containing a JSON string into a StructType with the specified schema.
- from_json(Column, DataType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Parses a column containing a JSON string into a StructType or ArrayType of StructTypes with the specified schema.
- from_json(Column, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a JSON string into a StructType with the specified schema.
- from_json(Column, DataType, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Parses a column containing a JSON string into a StructType or ArrayType of StructTypes with the specified schema.
- from_json(Column, StructType) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing a JSON string into a StructType with the specified schema.
- from_json(Column, DataType) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing a JSON string into a StructType or ArrayType of StructTypes with the specified schema.
- from_json(Column, String, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
Parses a column containing a JSON string into a StructType or ArrayType of StructTypes with the specified schema.
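A sketch of the StructType variant, assuming a DataFrame events whose string column json holds records like {"id":1,"name":"a"}:

    import org.apache.spark.sql.functions.{col, from_json}
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val schema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("name", StringType)))

    val parsed = events.select(from_json(col("json"), schema).as("data"))
    parsed.select("data.id", "data.name").show()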
- from_unixtime(Column) - Static method in class org.apache.spark.sql.functions
-
Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string
representing the timestamp of that moment in the current system time zone in the given
format.
- from_unixtime(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string
representing the timestamp of that moment in the current system time zone in the given
format.
- from_utc_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Given a timestamp, which corresponds to a certain time of day in UTC, returns another timestamp
that corresponds to the same time of day in the given timezone.
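A combined sketch of the two conversions, assuming a DataFrame logs with a long column epoch and a timestamp column ts_utc:

    import org.apache.spark.sql.functions.{col, from_unixtime, from_utc_timestamp}

    logs.select(
      from_unixtime(col("epoch"), "yyyy-MM-dd HH:mm:ss"),  // seconds since epoch -> string
      from_utc_timestamp(col("ts_utc"), "PST")             // same instant, PST wall clock
    ).show()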
- fromAvroFlumeEvent(AvroFlumeEvent) - Static method in class org.apache.spark.streaming.flume.SparkFlumeEvent
-
- fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a SparseMatrix from Coordinate List (COO) format.
- fromCOO(int, int, Iterable<Tuple3<Object, Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a SparseMatrix from Coordinate List (COO) format.
- fromDDL(String) - Static method in class org.apache.spark.sql.types.StructType
-
Creates StructType for a given DDL-formatted string, which is a comma separated list of field
definitions, e.g., a INT, b STRING.
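For example:

    import org.apache.spark.sql.types.StructType

    val schema = StructType.fromDDL("a INT, b STRING")
    println(schema.fieldNames.mkString(", "))   // prints: a, b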
- fromDecimal(Object) - Static method in class org.apache.spark.sql.types.Decimal
-
- fromDStream(DStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- fromEdgePartitions(RDD<Tuple2<Object, EdgePartition<ED, VD>>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from EdgePartitions, setting referenced vertices to defaultVertexAttr.
- fromEdges(RDD<Edge<ED>>, ClassTag<ED>, ClassTag<VD>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
Creates an EdgeRDD from a set of edges.
- fromEdges(RDD<Edge<ED>>, VD, StorageLevel, StorageLevel, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of edges.
- fromEdges(EdgeRDD<?>, int, VD, ClassTag<VD>) - Static method in class org.apache.spark.graphx.VertexRDD
-
Constructs a VertexRDD containing all vertices referred to in edges.
- fromEdgeTuples(RDD<Tuple2<Object, Object>>, VD, Option<PartitionStrategy>, StorageLevel, StorageLevel, ClassTag<VD>) - Static method in class org.apache.spark.graphx.Graph
-
Construct a graph from a collection of edges encoded as vertex id pairs.
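A minimal GraphX sketch with made-up edges (sc is an existing SparkContext):

    import org.apache.spark.graphx.Graph

    val rawEdges = sc.parallelize(Seq((1L, 2L), (2L, 3L), (3L, 1L)))
    val graph = Graph.fromEdgeTuples(rawEdges, defaultValue = 1)  // every vertex gets attribute 1
    println(graph.numEdges)   // 3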
- fromExistingRDDs(VertexRDD<VD>, EdgeRDD<ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
Create a graph from a VertexRDD and an EdgeRDD with the same replicated vertex type as the
vertices.
- fromInputDStream(InputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- fromInputDStream(InputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- fromJavaDStream(JavaDStream<Tuple2<K, V>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- fromJavaRDD(JavaRDD<Tuple2<K, V>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
Convert a JavaRDD of key-value pairs to JavaPairRDD.
- fromJson(String) - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Parses the JSON representation of a Matrix into a Matrix.
- fromJson(String) - Static method in class org.apache.spark.ml.linalg.JsonVectorConverter
-
Parses the JSON representation of a vector into a Vector.
- fromJson(String) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Parses the JSON representation of a vector into a Vector.
- fromJson(String) - Static method in class org.apache.spark.sql.types.DataType
-
- fromJson(String) - Static method in class org.apache.spark.sql.types.Metadata
-
Creates a Metadata instance from JSON.
- fromML(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Convert new linalg type to spark.mllib type.
- fromML(DenseVector) - Static method in class org.apache.spark.mllib.linalg.DenseVector
-
Convert new linalg type to spark.mllib type.
- fromML(Matrix) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Convert new linalg type to spark.mllib type.
- fromML(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Convert new linalg type to spark.mllib type.
- fromML(SparseVector) - Static method in class org.apache.spark.mllib.linalg.SparseVector
-
Convert new linalg type to spark.mllib type.
- fromML(Vector) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Convert new linalg type to spark.mllib type.
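A round-trip sketch between the two linalg packages:

    import org.apache.spark.ml.linalg.{Vectors => MLVectors}
    import org.apache.spark.mllib.linalg.Vectors

    val mlVec = MLVectors.dense(1.0, 2.0, 3.0)   // new spark.ml type
    val mllibVec = Vectors.fromML(mlVec)         // spark.ml -> spark.mllib
    val roundTrip = mllibVec.asML                // and back again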
- fromName(String) - Static method in class org.apache.spark.ml.attribute.AttributeType
-
- fromNullable(T) - Static method in class org.apache.spark.api.java.Optional
-
- fromOffset() - Method in class org.apache.spark.streaming.kafka.OffsetRange
-
- fromOld(Node, Map<Object, Object>) - Static method in class org.apache.spark.ml.tree.Node
-
Create a new Node from the old Node format, recursively creating child nodes as needed.
- fromPairDStream(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- fromPairRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
-
Implicit conversion from a pair RDD to MLPairRDDFunctions.
- fromParams(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
-
Gets the Family object based on param family and variancePower.
- fromParams(GeneralizedLinearRegressionBase) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
-
Gets the Link object based on param family, link and linkPower.
- fromRDD(RDD<Object>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- fromRDD(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- fromRDD(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.mllib.rdd.RDDFunctions
-
Implicit conversion from an RDD to RDDFunctions.
- fromRdd(RDD<?>) - Static method in class org.apache.spark.storage.RDDInfo
-
- fromReceiverInputDStream(ReceiverInputDStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- fromReceiverInputDStream(ReceiverInputDStream<T>, ClassTag<T>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- fromSparkContext(SparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext
-
- fromStage(Stage, int, Option<Object>, TaskMetrics, Seq<Seq<TaskLocation>>) - Static method in class org.apache.spark.scheduler.StageInfo
-
Construct a StageInfo from a Stage.
- fromString(String) - Static method in enum org.apache.spark.JobExecutionStatus
-
- fromString(String) - Static method in class org.apache.spark.mllib.tree.impurity.Impurities
-
- fromString(String) - Static method in class org.apache.spark.mllib.tree.loss.Losses
-
- fromString(String) - Static method in enum org.apache.spark.status.api.v1.ApplicationStatus
-
- fromString(String) - Static method in enum org.apache.spark.status.api.v1.StageStatus
-
- fromString(String) - Static method in enum org.apache.spark.status.api.v1.streaming.BatchStatus
-
- fromString(String) - Static method in enum org.apache.spark.status.api.v1.TaskSorting
-
- fromString(String) - Static method in class org.apache.spark.storage.StorageLevel
-
:: DeveloperApi ::
Return the StorageLevel object with the specified name.
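Useful when the storage level comes from configuration rather than code; a sketch assuming an existing RDD rdd:

    import org.apache.spark.storage.StorageLevel

    val level = StorageLevel.fromString("MEMORY_AND_DISK_SER")
    rdd.persist(level)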
- fromStructField(StructField) - Static method in class org.apache.spark.ml.attribute.AttributeGroup
-
Creates an attribute group from a StructField instance.
- fromTaskMetrics(TaskMetrics) - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData$
-
- fullOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a full outer join of this and other.
- fullOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a full outer join of this and other.
- fullOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a full outer join of this and other.
- fullOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a full outer join of this and other.
- fullOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a full outer join of this and other.
- fullOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a full outer join of this and other.
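A sketch over two toy pair RDDs (sc is an existing SparkContext); keys missing on one side surface as None:

    val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
    val right = sc.parallelize(Seq(("b", 20), ("c", 30)))

    left.fullOuterJoin(right).collect()
    // ("a", (Some(1), None)), ("b", (Some(2), Some(20))), ("c", (None, Some(30)))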
- fullOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- fullOuterJoin(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- fullOuterJoin(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- fullOuterJoin(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- fullOuterJoin(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- fullOuterJoin(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- fullOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
- fullStackTrace() - Method in class org.apache.spark.ExceptionFailure
-
- Function<T1,R> - Interface in org.apache.spark.api.java.function
-
Base interface for functions whose return types do not create special RDDs.
- Function - Class in org.apache.spark.sql.catalog
-
A user-defined function in Spark, as returned by the listFunctions method in Catalog.
- Function(String, String, String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Function
-
- function(Function4<Time, KeyType, Option<ValueType>, State<StateType>, Option<MappedType>>) - Static method in class org.apache.spark.streaming.StateSpec
-
- function(Function3<KeyType, Option<ValueType>, State<StateType>, MappedType>) - Static method in class org.apache.spark.streaming.StateSpec
-
- function(Function4<Time, KeyType, Optional<ValueType>, State<StateType>, Optional<MappedType>>) - Static method in class org.apache.spark.streaming.StateSpec
-
- function(Function3<KeyType, Optional<ValueType>, State<StateType>, MappedType>) - Static method in class org.apache.spark.streaming.StateSpec
-
- Function0<R> - Interface in org.apache.spark.api.java.function
-
A zero-argument function that returns an R.
- Function2<T1,T2,R> - Interface in org.apache.spark.api.java.function
-
A two-argument function that takes arguments of type T1 and T2 and returns an R.
- Function3<T1,T2,T3,R> - Interface in org.apache.spark.api.java.function
-
A three-argument function that takes arguments of type T1, T2 and T3 and returns an R.
- Function4<T1,T2,T3,T4,R> - Interface in org.apache.spark.api.java.function
-
A four-argument function that takes arguments of type T1, T2, T3 and T4 and returns an R.
- functionExists(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Check if the function with the specified name exists.
- functionExists(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Check if the function with the specified name exists in the specified database.
- functions - Class in org.apache.spark.sql
-
Functions available for DataFrame operations.
- functions() - Constructor for class org.apache.spark.sql.functions
-
- FutureAction<T> - Interface in org.apache.spark
-
A future for the result of an action to support cancellation.
- futureExecutionContext() - Static method in class org.apache.spark.rdd.AsyncRDDActions
-
- fwe() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- fwe() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- fwe() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- gain() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
-
- gain() - Method in class org.apache.spark.ml.tree.InternalNode
-
- gain() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
-
- Gamma$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
- gamma1() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- gamma2() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- gamma6() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- gamma7() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- GammaGenerator - Class in org.apache.spark.mllib.random
-
:: DeveloperApi ::
Generates i.i.d. samples from the gamma distribution with the given shape and scale.
- GammaGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.GammaGenerator
-
- gammaJavaRDD(JavaSparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.gammaRDD.
- gammaJavaRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaRDD with the default seed.
- gammaJavaRDD(JavaSparkContext, double, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaRDD with the default number of partitions and the default seed.
- gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.gammaVectorRDD.
- gammaJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaVectorRDD with the default seed.
- gammaJavaVectorRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.gammaJavaVectorRDD with the default number of partitions and the default seed.
- gammaRDD(SparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d.
samples from the gamma distribution with the input
shape and scale.
- gammaVectorRDD(SparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d.
samples drawn from the
gamma distribution with the input shape and scale.
- gapply(RelationalGroupedDataset, byte[], byte[], Object[], StructType) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
The helper function for gapply() on R side.
- gaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Indicates whether regex splits on gaps (true) or matches tokens (false).
- Gaussian$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- GaussianMixture - Class in org.apache.spark.ml.clustering
-
Gaussian Mixture clustering.
- GaussianMixture(String) - Constructor for class org.apache.spark.ml.clustering.GaussianMixture
-
- GaussianMixture() - Constructor for class org.apache.spark.ml.clustering.GaussianMixture
-
- GaussianMixture - Class in org.apache.spark.mllib.clustering
-
This class performs expectation maximization for multivariate Gaussian
Mixture Models (GMMs).
- GaussianMixture() - Constructor for class org.apache.spark.mllib.clustering.GaussianMixture
-
Constructs a default instance.
- GaussianMixtureModel - Class in org.apache.spark.ml.clustering
-
Multivariate Gaussian Mixture Model (GMM) consisting of k Gaussians, where points
are drawn from each Gaussian i with probability weights(i).
- GaussianMixtureModel - Class in org.apache.spark.mllib.clustering
-
Multivariate Gaussian Mixture Model (GMM) consisting of k Gaussians, where points
are drawn from each Gaussian i=1..k with probability w(i); mu(i) and sigma(i) are
the respective mean and covariance for each Gaussian distribution i=1..k.
- GaussianMixtureModel(double[], MultivariateGaussian[]) - Constructor for class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
- GaussianMixtureSummary - Class in org.apache.spark.ml.clustering
-
:: Experimental ::
Summary of GaussianMixture.
- gaussians() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- gaussians() - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
- gaussiansDF() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
Retrieve Gaussian distributions as a DataFrame.
- GBTClassificationModel - Class in org.apache.spark.ml.classification
-
Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting)
model for classification.
- GBTClassificationModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.classification.GBTClassificationModel
-
Construct a GBTClassificationModel
- GBTClassifier - Class in org.apache.spark.ml.classification
-
Gradient-Boosted Trees (GBTs) (http://en.wikipedia.org/wiki/Gradient_boosting)
learning algorithm for classification.
- GBTClassifier(String) - Constructor for class org.apache.spark.ml.classification.GBTClassifier
-
- GBTClassifier() - Constructor for class org.apache.spark.ml.classification.GBTClassifier
-
- GBTRegressionModel - Class in org.apache.spark.ml.regression
-
- GBTRegressionModel(String, DecisionTreeRegressionModel[], double[]) - Constructor for class org.apache.spark.ml.regression.GBTRegressionModel
-
Construct a GBTRegressionModel
- GBTRegressor - Class in org.apache.spark.ml.regression
-
- GBTRegressor(String) - Constructor for class org.apache.spark.ml.regression.GBTRegressor
-
- GBTRegressor() - Constructor for class org.apache.spark.ml.regression.GBTRegressor
-
- GC_TIME() - Static method in class org.apache.spark.ui.ToolTips
-
- gemm(double, Matrix, DenseMatrix, double, DenseMatrix) - Static method in class org.apache.spark.ml.linalg.BLAS
-
C := alpha * A * B + beta * C
- gemm(double, Matrix, DenseMatrix, double, DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
C := alpha * A * B + beta * C
- gemv(double, Matrix, Vector, double, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
y := alpha * A * x + beta * y
- gemv(double, Matrix, Vector, double, DenseVector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
y := alpha * A * x + beta * y
- GeneralizedLinearAlgorithm<M extends GeneralizedLinearModel> - Class in org.apache.spark.mllib.regression
-
:: DeveloperApi ::
GeneralizedLinearAlgorithm implements methods to train a Generalized Linear Model (GLM).
- GeneralizedLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
- GeneralizedLinearModel - Class in org.apache.spark.mllib.regression
-
:: DeveloperApi ::
GeneralizedLinearModel (GLM) represents a model trained using
GeneralizedLinearAlgorithm.
- GeneralizedLinearModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
- GeneralizedLinearRegression - Class in org.apache.spark.ml.regression
-
:: Experimental ::
- GeneralizedLinearRegression(String) - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- GeneralizedLinearRegression() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- GeneralizedLinearRegression.Binomial$ - Class in org.apache.spark.ml.regression
-
Binomial exponential family distribution.
- GeneralizedLinearRegression.CLogLog$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Family$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.FamilyAndLink$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Gamma$ - Class in org.apache.spark.ml.regression
-
Gamma exponential family distribution.
- GeneralizedLinearRegression.Gaussian$ - Class in org.apache.spark.ml.regression
-
Gaussian exponential family distribution.
- GeneralizedLinearRegression.Identity$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Inverse$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Link$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Log$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Logit$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Poisson$ - Class in org.apache.spark.ml.regression
-
Poisson exponential family distribution.
- GeneralizedLinearRegression.Probit$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Sqrt$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegression.Tweedie$ - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegressionModel - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegressionSummary - Class in org.apache.spark.ml.regression
-
- GeneralizedLinearRegressionTrainingSummary - Class in org.apache.spark.ml.regression
-
- generateAssociationRules(double) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
-
Generates association rules for the Items in freqItemsets.
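Continuing the hypothetical mllib FPGrowth sketch shown earlier under FPGrowth():

    // Rules whose confidence is at least 0.8.
    model.generateAssociationRules(0.8).collect().foreach { rule =>
      println(s"${rule.antecedent.mkString(",")} => " +
        s"${rule.consequent.mkString(",")} @ ${rule.confidence}")
    }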
- generateKMeansRDD(SparkContext, int, int, int, double, int) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
-
Generate an RDD containing test data for KMeans.
- generateLinearInput(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
For compatibility, the generated data without specifying the mean and variance will have zero mean and a variance of (1.0/3.0), since the original output range is [-1, 1] with uniform distribution, and the variance of a uniform distribution is (b - a)^2 / 12, which here equals (1.0/3.0).
- generateLinearInput(double, double[], double[], double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
- generateLinearInput(double, double[], double[], double[], int, int, double, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
- generateLinearInputAsList(double, double[], int, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
Return a Java List of synthetic data randomly generated according to a multi
collinear model.
- generateLinearRDD(SparkContext, int, int, double, int, double) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
Generate an RDD containing sample data for Linear Regression models - including Ridge, Lasso,
and unregularized variants.
- generateLogisticRDD(SparkContext, int, int, double, int, double) - Static method in class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
-
Generate an RDD containing test data for LogisticRegression.
- generateRandomEdges(int, int, int, long) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
- generateTreeString(int, Seq<Object>, StringBuilder, boolean, String, boolean) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- generateTreeString(int, Seq<Object>, StringBuilder, boolean, String, boolean) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- generateTreeString(int, Seq<Object>, StringBuilder, boolean, String, boolean) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- generateTreeString$default$5() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- generateTreeString$default$5() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- generateTreeString$default$5() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- generateTreeString$default$6() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- generateTreeString$default$6() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- generateTreeString$default$6() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- genericBuilder() - Static method in class org.apache.spark.sql.types.StructType
-
- geq(Object) - Method in class org.apache.spark.sql.Column
-
Greater than or equal to an expression.
- get(Object) - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
-
- get() - Method in class org.apache.spark.api.java.Optional
-
- get() - Method in interface org.apache.spark.FutureAction
-
Blocks and returns the result of this job.
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.LDA
-
- get(Param<T>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- get(Param<T>) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- get(Param<T>) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.DCT
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.IDF
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Imputer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Interaction
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.NGram
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.PCA
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.RFormula
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- get(Param<T>) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- get(Param<T>) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- get(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Optionally returns the value associated with a param.
- get(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Optionally returns the user-supplied value of a param.
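A sketch of the Option-returning accessor on any Params instance, here a LogisticRegression:

    import org.apache.spark.ml.classification.LogisticRegression

    val lr = new LogisticRegression()
    lr.get(lr.maxIter)            // None: no user-supplied value yet
    lr.setMaxIter(10)
    lr.get(lr.maxIter)            // Some(10)
    lr.getOrDefault(lr.maxIter)   // 10; falls back to the default when unset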
- get(Param<T>) - Static method in class org.apache.spark.ml.Pipeline
-
- get(Param<T>) - Static method in class org.apache.spark.ml.PipelineModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- get(Param<T>) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- get(Param<T>) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- get(Param<T>) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- get(Param<T>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- get(Param<T>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- get(String) - Method in class org.apache.spark.SparkConf
-
Get a parameter; throws a NoSuchElementException if it's not set
- get(String, String) - Method in class org.apache.spark.SparkConf
-
Get a parameter, falling back to a default if not set
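The two overloads differ in how they handle a missing key; key names below are hypothetical:

    import org.apache.spark.SparkConf

    val conf = new SparkConf().set("spark.executor.memory", "2g")

    conf.get("spark.executor.memory")         // "2g"
    conf.get("spark.unset.key", "fallback")   // "fallback": the default is used
    // conf.get("spark.unset.key")            // would throw NoSuchElementException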
- get() - Static method in class org.apache.spark.SparkEnv
-
Returns the SparkEnv.
- get(String) - Static method in class org.apache.spark.SparkFiles
-
Get the absolute path of a file added through SparkContext.addFile().
- get(String) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
-
Fetch the JdbcDialect class corresponding to a given database url.
- get(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- get(String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns the value of Spark runtime configuration property for the given key.
- get(String, String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns the value of Spark runtime configuration property for the given key.
- get() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Get the state value if it exists, or throw NoSuchElementException.
- get(UUID) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns the query if there is an active query with the given id, or null.
- get(String) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Returns the query if there is an active query with the given id, or null.
- get() - Method in class org.apache.spark.streaming.State
-
Get the state if it exists; otherwise throws java.util.NoSuchElementException.
- get() - Static method in class org.apache.spark.TaskContext
-
Return the currently active TaskContext.
- get(long) - Static method in class org.apache.spark.util.AccumulatorContext
-
- get_json_object(Column, String) - Static method in class org.apache.spark.sql.functions
-
Extracts a JSON object from a JSON string based on the specified JSON path, and returns the JSON string of the extracted object.
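A sketch assuming a DataFrame df whose string column payload holds documents like {"user":{"id":7}}:

    import org.apache.spark.sql.functions.{col, get_json_object}

    df.select(get_json_object(col("payload"), "$.user.id").as("user_id")).show()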
- getAcceptanceResults(RDD<Tuple2<K, V>>, boolean, Map<K, Object>, Option<Map<K, Object>>, long) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Count the number of items instantly accepted and generate the waitlist for each stratum.
- getAcceptsNull() - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
-
- getActive() - Static method in class org.apache.spark.streaming.StreamingContext
-
:: Experimental ::
- getActiveJobIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns an array containing the ids of all active jobs.
- getActiveJobIds() - Method in class org.apache.spark.SparkStatusTracker
-
Returns an array containing the ids of all active jobs.
- getActiveOrCreate(Function0<StreamingContext>) - Static method in class org.apache.spark.streaming.StreamingContext
-
:: Experimental ::
- getActiveOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
-
:: Experimental ::
- getActiveSession() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the active SparkSession for the current thread, returned by the builder.
- getActiveStageIds() - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns an array containing the ids of all active stages.
- getActiveStageIds() - Method in class org.apache.spark.SparkStatusTracker
-
Returns an array containing the ids of all active stages.
- getAggregationDepth() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getAggregationDepth() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getAlgo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getAll() - Method in class org.apache.spark.SparkConf
-
Get all parameters as a list of pairs
- getAll() - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns all properties set in this conf.
- getAllConfs() - Method in class org.apache.spark.sql.SQLContext
-
Return all the configuration properties that have been set (i.e., not the default values).
- getAllPools() - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Return pools for fair scheduler
- getAllPrefLocs(RDD<?>) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
-
- GetAllReceiverInfo - Class in org.apache.spark.streaming.scheduler
-
- GetAllReceiverInfo() - Constructor for class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- getAllWithPrefix(String) - Method in class org.apache.spark.SparkConf
-
Get all parameters that start with prefix
- getAlpha() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for getDocConcentration
- getAnyValAs(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- getAppId() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Returns the application ID, or null
if not yet known.
- getAppId() - Method in class org.apache.spark.SparkConf
-
Returns the Spark application id, valid in the Driver after TaskScheduler registration and
from the start in the Executor.
- getAs(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i.
- getAs(String) - Method in interface org.apache.spark.sql.Row
-
Returns the value of a given fieldName.
- getAssociationRulesFromFP(Dataset<?>, String, String, double, ClassTag<T>) - Static method in class org.apache.spark.ml.fpm.AssociationRules
-
Computes the association rules with confidence above minConfidence.
- getAsymmetricAlpha() - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for getAsymmetricDocConcentration
- getAsymmetricDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "alpha") for the prior placed on documents'
distributions over topics ("theta").
- getAttr(String) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its name.
- getAttr(int) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Gets an attribute by its index.
- getAvroSchema() - Method in class org.apache.spark.SparkConf
-
Gets all the avro schemas in the configuration used in the generic Avro record serializer
- getBatchingTimeout(SparkConf) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
How long we will wait for the wrappedLog in the BatchedWriteAheadLog to write the records
before we fail the write attempt to unblock receivers.
- getBernoulliSamplingFunction(RDD<Tuple2<K, V>>, Map<K, Object>, boolean, long) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Return the per partition sampling function used for sampling without replacement.
- getBeta() - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for getTopicConcentration
- getBinary() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getBinary() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getBinary() - Method in class org.apache.spark.ml.feature.HashingTF
-
- getBlock(BlockId) - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the given block stored in this block manager in O(1) time.
- getBlockSize() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- GetBlockStatus(BlockId, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus
-
- GetBlockStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus$
-
- getBoolean(String, boolean) - Method in class org.apache.spark.SparkConf
-
Get a parameter as a boolean, falling back to a default if not set
- getBoolean(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive boolean.
- getBoolean(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Boolean.
- getBooleanArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Boolean array.
- getBucketLength() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getBucketLength() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getByte(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive byte.
- getCachedBlockManagerId(BlockManagerId) - Static method in class org.apache.spark.storage.BlockManagerId
-
- getCachedMetadata(String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
The three methods below are helpers for accessing the local map, a property of the SparkEnv of
the local process.
- getCacheNodeIds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getCacheNodeIds() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getCallSite(Function1<String, Object>) - Static method in class org.apache.spark.util.Utils
-
When called inside a class in the spark package, returns the name of the user code class
(outside the spark package) that called into Spark, as well as which Spark method they called.
- getCaseSensitive() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Get the custom datatype mapping for the given jdbc meta information.
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- getCatalystType(int, String, int, MetadataBuilder) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- getCategoricalFeatures(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Examine a schema to identify categorical (Binary and Nominal) features.
- getCategoricalFeaturesInfo() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getCause() - Static method in exception org.apache.spark.sql.AnalysisException
-
- getCensorCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getCensorCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getCheckpointDir() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- getCheckpointDir() - Method in class org.apache.spark.SparkContext
-
- getCheckpointFile() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.api.java.JavaRDD
-
- getCheckpointFile() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Gets the name of the file to which this RDD was checkpointed
- getCheckpointFile() - Static method in class org.apache.spark.api.r.RRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- getCheckpointFile() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- getCheckpointFile() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- getCheckpointFile() - Static method in class org.apache.spark.graphx.VertexRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- getCheckpointFile() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- getCheckpointFile() - Method in class org.apache.spark.rdd.RDD
-
Gets the name of the directory to which this RDD was checkpointed.
- getCheckpointFile() - Static method in class org.apache.spark.rdd.UnionRDD
-
- getCheckpointFiles() - Method in class org.apache.spark.graphx.Graph
-
Gets the names of the files to which this Graph was checkpointed.
- getCheckpointFiles() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- getCheckpointFiles() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
:: DeveloperApi ::
- getCheckpointInterval() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getCheckpointInterval() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getCheckpointInterval() - Method in class org.apache.spark.mllib.clustering.LDA
-
Period (in iterations) between checkpoints.
- getCheckpointInterval() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getClassifier() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getClassifier() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getColdStartStrategy() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getColdStartStrategy() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getCombOp() - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Returns the function used to combine results returned by seqOp from different partitions.
- getComment() - Method in class org.apache.spark.sql.types.StructField
-
Return the comment of this StructField.
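A short sketch of reading a comment back from a StructField (the field name and comment text are illustrative only, and withComment is assumed available in this version):
    // needs: org.apache.spark.sql.types.DataTypes, org.apache.spark.sql.types.StructField, scala.Option
    StructField f = DataTypes.createStructField("age", DataTypes.IntegerType, true)
        .withComment("age in years");
    Option<String> comment = f.getComment();             // Some("age in years"); None if never set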
- getConf() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Return a copy of this JavaSparkContext's configuration.
- getConf() - Method in class org.apache.spark.rdd.HadoopRDD
-
- getConf() - Method in class org.apache.spark.rdd.NewHadoopRDD
-
- getConf() - Method in class org.apache.spark.SparkContext
-
Return a copy of this SparkContext's configuration.
- getConf(String) - Method in class org.apache.spark.sql.SQLContext
-
Return the value of Spark SQL configuration property for the given key.
- getConf(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Return the value of Spark SQL configuration property for the given key.
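A hedged sketch of the lookups above (jsc and sqlContext are assumed to exist; the config keys are examples):
    SparkConf conf = jsc.getConf();                      // a copy; mutating it does not affect the running context
    String master = conf.get("spark.master");
    String codegen = sqlContext.getConf("spark.sql.codegen.wholeStage", "true");  // falls back to "true"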
- getConfiguration() - Method in class org.apache.spark.input.PortableDataStream
-
- getConfiguredLocalDirs(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the configured local directories where Spark can write files.
- getConnection() - Method in interface org.apache.spark.rdd.JdbcRDD.ConnectionFactory
-
- getConsumerOffsetMetadata(String, Set<TopicAndPartition>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
Requires Kafka 0.8.1.1 or later.
- getConsumerOffsetMetadata(String, Set<TopicAndPartition>, short) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getConsumerOffsets(String, Set<TopicAndPartition>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
Requires Kafka 0.8.1.1 or later.
- getConsumerOffsets(String, Set<TopicAndPartition>, short) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getContextOrSparkClassLoader() - Static method in class org.apache.spark.util.Utils
-
Get the Context ClassLoader on this thread or, if not present, the ClassLoader that
loaded Spark.
- getConvergenceTol() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the largest change in log-likelihood at which convergence is
considered to have occurred.
- getCorrelationFromName(String) - Static method in class org.apache.spark.mllib.stat.correlation.Correlations
-
- getCount() - Method in class org.apache.spark.storage.CountingWritableChannel
-
- getCurrentUserGroups(SparkConf, String) - Static method in class org.apache.spark.util.Utils
-
- getCurrentUserName() - Static method in class org.apache.spark.util.Utils
-
Returns the current user name.
- getDatabase(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Get the database with the specified name.
- getDate(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of date type as java.sql.Date.
- getDecimal(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of decimal type as java.math.BigDecimal.
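A minimal sketch of the typed Row accessors (assumes a Dataset<Row> df whose first two columns are a date and a decimal; positions are 0-based):
    Row row = df.first();
    java.sql.Date day = row.getDate(0);
    java.math.BigDecimal amount = row.getDecimal(1);
The getDouble/getFloat/getInt/getLong entries further below follow the same position-based pattern for primitive types.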
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.LDA
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.DCT
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.IDF
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Imputer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Interaction
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.NGram
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.PCA
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.RFormula
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Gets the default value of a parameter.
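A small sketch against one of the estimators above (LogisticRegression is just one example; getDefault returns a scala.Option that is empty when the Param has no default):
    // needs: org.apache.spark.ml.classification.LogisticRegression, scala.Option
    LogisticRegression lr = new LogisticRegression();
    Option<Object> maxIterDefault = lr.getDefault(lr.maxIter());
    if (maxIterDefault.isDefined()) {
        System.out.println("default maxIter = " + maxIterDefault.get());
    }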
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.Pipeline
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.PipelineModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- getDefaultPropertiesFile(Map<String, String>) - Static method in class org.apache.spark.util.Utils
-
Return the path of the default Spark properties file.
- getDefaultSession() - Static method in class org.apache.spark.sql.SparkSession
-
Returns the default SparkSession that is returned by the builder.
- getDegree() - Method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- getDenseSizeInBytes() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the size of the dense representation of this `Matrix`.
- getDependencies() - Method in class org.apache.spark.rdd.CoGroupedRDD
-
- getDependencies() - Method in class org.apache.spark.rdd.ShuffledRDD
-
- getDependencies() - Method in class org.apache.spark.rdd.UnionRDD
-
- getDeprecatedConfig(String, SparkConf) - Static method in class org.apache.spark.SparkConf
-
Looks for available deprecated keys for the given config option, and returns the first available value.
- getDocConcentration() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getDocConcentration() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getDocConcentration() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getDocConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "alpha") for the prior placed on documents'
distributions over topics ("theta").
- getDouble(String, double) - Method in class org.apache.spark.SparkConf
-
Get a parameter as a double, falling back to a default if not set
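A sketch of the fallback behaviour shared by SparkConf's typed getters (the keys and defaults here are illustrative):
    SparkConf conf = new SparkConf();
    double fraction = conf.getDouble("spark.memory.fraction", 0.6);  // 0.6 if the key is unset
    int retries = conf.getInt("spark.port.maxRetries", 16);
getInt and getLong, listed further below, follow the same pattern.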
- getDouble(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive double.
- getDouble(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Double.
- getDoubleArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Double array.
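A minimal sketch of writing and reading Metadata values (the key names are made up for illustration):
    // needs: org.apache.spark.sql.types.Metadata, org.apache.spark.sql.types.MetadataBuilder
    Metadata md = new MetadataBuilder()
        .putDouble("min", 0.0)
        .putDoubleArray("percentiles", new double[]{0.25, 0.5, 0.75})
        .build();
    double min = md.getDouble("min");
    double[] percentiles = md.getDoubleArray("percentiles");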
- getDropLast() - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- getDynamicAllocationInitialExecutors(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the initial number of executors for dynamic allocation.
- getEarliestLeaderOffsets(Set<TopicAndPartition>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getElasticNetParam() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getElasticNetParam() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getElasticNetParam() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getElasticNetParam() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getEndTimeEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- getEpsilon() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The distance threshold within which we consider centers to have converged.
- getEstimator() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getEstimator() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getEstimator() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getEstimator() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- getEstimatorParamMaps() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getEstimatorParamMaps() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getEstimatorParamMaps() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getEstimatorParamMaps() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- getEvaluator() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getEvaluator() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getEvaluator() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getEvaluator() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- GetExecutorEndpointRef(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef
-
- GetExecutorEndpointRef$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef$
-
- getExecutorEnv() - Method in class org.apache.spark.SparkConf
-
Get all executor environment variables set on this SparkConf
- getExecutorInfos() - Method in class org.apache.spark.SparkStatusTracker
-
Returns information about all known executors, including host, port, cacheSize, and numRunningTasks.
- GetExecutorLossReason(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason
-
- GetExecutorLossReason$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason$
-
- getExecutorMemoryStatus() - Method in class org.apache.spark.SparkContext
-
Return a map from each slave to the max memory available for caching and the remaining
memory available for caching.
- getExecutorStorageStatus() - Method in class org.apache.spark.SparkContext
-
- getExternalTmpPath(Path, org.apache.spark.sql.hive.client.HiveVersion, Configuration, String, String) - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- getExtTmpPathRelTo(Path, Configuration, String) - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- getFamily() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getFamily() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getFamily() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getFamily() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getFdr() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getFdr() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getFeatureIndex() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getFeatureIndex() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getFeatureIndicesFromNames(StructField, String[]) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Takes a Vector column and a list of feature names, and returns the corresponding list of
feature indices in the column, in order.
- getFeaturesAndLabels(RFormulaModel, Dataset<?>) - Static method in class org.apache.spark.ml.r.RWrapperUtils
-
Get the feature names and original labels from the schema of the DataFrame
transformed by RFormulaModel.
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.feature.RFormula
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getFeaturesCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getFeatureSubsetStrategy() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getFeatureSubsetStrategy() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getFeatureSubsetStrategy() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getFeatureSubsetStrategy() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getField(String) - Method in class org.apache.spark.sql.Column
-
An expression that gets a field by name in a StructType.
- getFileLength(File, SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the file length; if the file is compressed, return the uncompressed file length.
- getFilePath(File, String) - Static method in class org.apache.spark.util.Utils
-
Return the absolute path of a file in the given directory.
- getFileReader(String, Option<Configuration>) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator
-
Retrieves an ORC file reader from a given path.
- getFileSegmentLocations(String, long, long, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
Get the locations of the HDFS blocks containing the given file segment.
- getFileSystemForPath(Path, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
- getFinalStorageLevel() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getFinalValue() - Method in class org.apache.spark.partial.PartialResult
-
Blocking method to wait for and return the final value.
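A hedged sketch with an approximate action (rdd is assumed to be an existing JavaRDD; the timeout is in milliseconds):
    // needs: org.apache.spark.partial.PartialResult, org.apache.spark.partial.BoundedDouble
    PartialResult<BoundedDouble> approx = rdd.countApprox(2000L, 0.95);
    BoundedDouble count = approx.getFinalValue();        // blocks until the job finishes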
- getFitIntercept() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getFitIntercept() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getFitIntercept() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getFitIntercept() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getFitIntercept() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getFitIntercept() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getFitIntercept() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getFitIntercept() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getFitIntercept() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getFitIntercept() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getFloat(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive float.
- getForceIndexLabel() - Method in class org.apache.spark.ml.feature.RFormula
-
- getFormattedClassName(Object) - Static method in class org.apache.spark.util.Utils
-
Return the class name of the given object, removing all dollar signs
- getFormula() - Method in class org.apache.spark.ml.feature.RFormula
-
- getFpr() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getFpr() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getFunction(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Get the function with the specified name.
- getFunction(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Get the function with the specified name.
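A short sketch of the catalog lookups above (assumes a SparkSession spark; the function name is hypothetical, and both calls throw an AnalysisException if the object does not exist):
    // needs: org.apache.spark.sql.catalog.Database, org.apache.spark.sql.catalog.Function
    Database db = spark.catalog().getDatabase("default");
    Function udf = spark.catalog().getFunction("default", "my_udf");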
- getFwe() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getFwe() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getGaps() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getGroups(String) - Method in interface org.apache.spark.security.GroupMappingServiceProvider
-
Get the groups the user belongs to.
- getHadoopFileSystem(URI, Configuration) - Static method in class org.apache.spark.util.Utils
-
Return a Hadoop FileSystem with the scheme encoded in the given path.
- getHadoopFileSystem(String, Configuration) - Static method in class org.apache.spark.util.Utils
-
Return a Hadoop FileSystem with the scheme encoded in the given path.
- getHandleInvalid() - Method in class org.apache.spark.ml.feature.Bucketizer
-
- getHandleInvalid() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getHandleInvalid() - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- getHandleInvalid() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- getImplicitPrefs() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getImpurity() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getImpurity() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getImpurity() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getImpurity() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getImpurity() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getImpurity() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getImpurity() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getImpurity() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getImpurity() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getImpurity() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getImpurity() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getImpurity() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getImpurity() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getIndices() - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- getInitializationMode() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The initialization algorithm.
- getInitializationSteps() - Method in class org.apache.spark.mllib.clustering.KMeans
-
Number of steps for the k-means|| initialization mode
- getInitialModel() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the user-supplied initial GMM, if one was supplied.
- getInitialPositionInStream(int) - Method in class org.apache.spark.streaming.kinesis.KinesisUtilsPythonHelper
-
- getInitialWeights() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getInitMode() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getInitMode() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getInitSteps() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getInitSteps() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.Binarizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.DCT
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.HashingTF
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.IDF
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.IndexToString
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.NGram
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.Normalizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.PCA
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getInputCol() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getInputCols() - Static method in class org.apache.spark.ml.feature.Imputer
-
- getInputCols() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getInputCols() - Static method in class org.apache.spark.ml.feature.Interaction
-
- getInputCols() - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- getInputFilePath() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Returns the name of the file being read, or an empty string if it is unknown.
- getInputStream(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
- getInt(String, int) - Method in class org.apache.spark.SparkConf
-
Get a parameter as an integer, falling back to a default if not set
- getInt(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive int.
- getIntermediateStorageLevel() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getInverse() - Method in class org.apache.spark.ml.feature.DCT
-
- getIsotonic() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getIsotonic() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getItem(Object) - Method in class org.apache.spark.sql.Column
-
An expression that gets an item at position ordinal out of an array, or gets a value by key key in a MapType.
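A one-line sketch combining getItem with the getField entry above (assumes df has an array column "xs" and a struct column "s" with a field "name"):
    df.select(df.col("xs").getItem(0), df.col("s").getField("name")).show();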
- getItemCol() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getItemCol() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getItemsCol() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getItemsCol() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getIteratorSize(Iterator<T>) - Static method in class org.apache.spark.util.Utils
-
Counts the number of elements of an iterator using a while loop rather than calling TraversableOnce.size(), because it uses a for loop, which is slightly slower in the current version of Scala.
- getIteratorZipWithIndex(Iterator<T>, long) - Static method in class org.apache.spark.util.Utils
-
Generate a zipWithIndex iterator, avoiding the index-overflow problem in Scala's zipWithIndex.
- getJavaMap(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of map type as a java.util.Map.
- getJavaSparkContext(SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.AggregatedDialect
-
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- getJDBCType(DataType) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Retrieve the JDBC/SQL type for a given DataType.
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- getJDBCType(DataType) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- getJobIdsForGroup(String) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Return a list of all known jobs in a particular job group.
- getJobIdsForGroup(String) - Method in class org.apache.spark.SparkStatusTracker
-
Return a list of all known jobs in a particular job group.
- getJobInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns job information, or null if the job info could not be found or was garbage collected.
- getJobInfo(int) - Method in class org.apache.spark.SparkStatusTracker
-
Returns job information, or None if the job info could not be found or was garbage collected.
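A minimal sketch of both lookups (the job group name is hypothetical; this uses the Java API, so a missing job yields null rather than None):
    // needs: org.apache.spark.SparkJobInfo
    int[] jobs = jsc.statusTracker().getJobIdsForGroup("nightly-etl");
    if (jobs.length > 0) {
        SparkJobInfo info = jsc.statusTracker().getJobInfo(jobs[0]);  // null if unknown or GC'd
    }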
- getK() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getK() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getK() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getK() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getK() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getK() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getK() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getK() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getK() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getK() - Static method in class org.apache.spark.ml.feature.PCA
-
- getK() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- getK() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the desired number of leaf clusters.
- getK() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the number of Gaussians in the mixture model
- getK() - Method in class org.apache.spark.mllib.clustering.KMeans
-
Number of clusters to create (k).
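A small sketch of the mllib KMeans getters mirroring their setters (the values are arbitrary):
    // needs: org.apache.spark.mllib.clustering.KMeans
    KMeans km = new KMeans().setK(3).setMaxIterations(20);
    int k = km.getK();                                   // 3
    int iterations = km.getMaxIterations();              // 20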
- getK() - Method in class org.apache.spark.mllib.clustering.LDA
-
Number of topics to infer, i.e., the number of soft cluster centers.
- getKappa() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Learning rate: exponential decay rate
- getKeepLastCheckpoint() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getKeepLastCheckpoint() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getKeepLastCheckpoint() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getKeepLastCheckpoint() - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
-
If using checkpointing, this indicates whether to keep the last checkpoint (vs. cleaning it up).
- getLabelCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getLabelCol() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- getLabelCol() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- getLabelCol() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- getLabelCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getLabelCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.feature.RFormula
-
- getLabelCol() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getLabelCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getLabels() - Method in class org.apache.spark.ml.feature.IndexToString
-
- getLambda() - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Get the smoothing parameter.
- getLastUpdatedEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- getLatestLeaderOffsets(Set<TopicAndPartition>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getLayers() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getLDAModel(double[]) - Method in interface org.apache.spark.mllib.clustering.LDAOptimizer
-
- getLeaderOffsets(Set<TopicAndPartition>, long) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getLeaderOffsets(Set<TopicAndPartition>, long, int) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getLearningDecay() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getLearningDecay() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getLearningDecay() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getLearningOffset() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getLearningOffset() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getLearningOffset() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getLearningRate() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- getLeastGroupHash(String) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Sorts and gets the least element of the list associated with key in groupHash.
The returned PartitionGroup is the least loaded of all groups that represent the machine "key".
- getLength() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Returns the length of the block being read, or -1 if it is unknown.
- getLink() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getLink() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getLinkPower() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getLinkPower() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getLinkPredictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getLinkPredictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getList(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of array type as a java.util.List.
- getLocalDir(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Get the path of a temporary directory.
- getLocalizedMessage() - Static method in exception org.apache.spark.sql.AnalysisException
-
- getLocalProperty(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get a local property set in this thread, or null if it is missing.
- getLocalProperty(String) - Method in class org.apache.spark.SparkContext
-
Get a local property set in this thread, or null if it is missing.
- getLocalProperty(String) - Method in class org.apache.spark.TaskContext
-
Get a local property set upstream in the driver, or null if it is missing.
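A hedged sketch of the driver-to-task flow (the property key is made up; assumes jsc and an existing JavaRDD rdd):
    // needs: org.apache.spark.TaskContext
    jsc.setLocalProperty("job.tag", "ingest");           // set on the driver thread
    rdd.foreachPartition(it -> {
        String tag = TaskContext.get().getLocalProperty("job.tag");  // readable inside tasks
    });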
- getLocalUserJarsForShell(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the local jar files which will be added to the REPL's classpath.
- GetLocations(BlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocations
-
- GetLocations$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocations$
-
- GetLocationsMultipleBlockIds(BlockId[]) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds
-
- GetLocationsMultipleBlockIds$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds$
-
- getLong(String, long) - Method in class org.apache.spark.SparkConf
-
Get a parameter as a long, falling back to a default if not set
- getLong(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive long.
- getLong(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Long.
- getLongArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Long array.
- getLoss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- getLossType() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getLossType() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getLossType() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getLossType() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getLowerBound(double, long, double) - Static method in class org.apache.spark.util.random.BinomialBounds
-
Returns a threshold p such that if we conduct n Bernoulli trials with success rate = p, it is very unlikely to have more than fraction * n successes.
- getLowerBound(double) - Static method in class org.apache.spark.util.random.PoissonBounds
-
Returns a lambda such that Pr[X > s] is very small, where X ~ Pois(lambda).
- getLowerBoundsOnCoefficients() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getLowerBoundsOnCoefficients() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getLowerBoundsOnIntercepts() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getLowerBoundsOnIntercepts() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getMap(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of map type as a Scala Map.
- GetMatchingBlockIds(Function1<BlockId, Object>, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds
-
- GetMatchingBlockIds$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds$
-
- getMax() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getMax() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getMaxBins() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getMaxBins() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getMaxBins() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getMaxBins() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getMaxBins() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getMaxBins() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getMaxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getMaxCategories() - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- getMaxCategories() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getMaxDepth() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getMaxDepth() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getMaxDepth() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getMaxDepth() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getMaxDepth() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getMaxDepth() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getMaxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getMaxFailures(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getMaxIter() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getMaxIter() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getMaxIter() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the max number of k-means iterations to split clusters.
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the maximum number of iterations allowed
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.KMeans
-
Maximum number of iterations allowed.
- getMaxIterations() - Method in class org.apache.spark.mllib.clustering.LDA
-
Maximum number of iterations allowed.
- getMaxLocalProjDBSize() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Gets the maximum number of items allowed in a projected database before local processing.
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getMaxMemoryInMB() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getMaxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getMaxPatternLength() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Gets the maximal pattern length (i.e. the length of the longest sequential pattern considered).
- getMaxResultSize(SparkConf) - Static method in class org.apache.spark.util.Utils
-
- getMaxSentenceLength() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getMaxSentenceLength() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- GetMemoryStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus$
-
- getMessage() - Method in exception org.apache.spark.sql.AnalysisException
-
- getMetadata(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Metadata.
- getMetadataArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a Metadata array.
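As a quick illustration of getMetadata and getMetadataArray, a minimal Java sketch (the keys "unit", "stats", and "history" are invented for the example):

    import org.apache.spark.sql.types.Metadata;
    import org.apache.spark.sql.types.MetadataBuilder;

    public class MetadataExample {
      public static void main(String[] args) {
        // Build a nested Metadata value, then read it back by key.
        Metadata inner = new MetadataBuilder().putString("unit", "ms").build();
        Metadata meta = new MetadataBuilder()
            .putMetadata("stats", inner)
            .putMetadataArray("history", new Metadata[] { inner })
            .build();
        System.out.println(meta.getMetadata("stats").getString("unit")); // prints "ms"
        System.out.println(meta.getMetadataArray("history").length);     // prints 1
      }
    }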
- getMetricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- getMetricName() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- getMetricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- getMetricsSources(String) - Method in class org.apache.spark.TaskContext
-
::DeveloperApi::
Returns all metrics sources with the given name which are associated with the instance
which runs the task.
- getMin() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getMin() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getMinConfidence() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getMinConfidence() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getMinCount() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getMinCount() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getMinDF() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getMinDF() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getMinDivisibleClusterSize() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getMinDivisibleClusterSize() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getMinDivisibleClusterSize() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster.
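To make the 1.0 threshold concrete, a minimal sketch against the RDD-based API (k = 4 and the 0.1 proportion are arbitrary example values):

    import org.apache.spark.mllib.clustering.BisectingKMeans;

    public class MinDivisibleExample {
      public static void main(String[] args) {
        // Values >= 1.0 are an absolute point count; values < 1.0 are a proportion.
        BisectingKMeans bkm = new BisectingKMeans()
            .setK(4)
            .setMinDivisibleClusterSize(0.1); // a divisible cluster keeps >= 10% of points
        System.out.println(bkm.getMinDivisibleClusterSize()); // prints 0.1
      }
    }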
- getMinDocFreq() - Static method in class org.apache.spark.ml.feature.IDF
-
- getMinDocFreq() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- getMiniBatchFraction() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Mini-batch fraction, which sets the fraction of documents sampled and used in each iteration.
- getMinInfoGain() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getMinInfoGain() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getMinInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getMinInstancesPerNode() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getMinInstancesPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getMinSupport() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getMinSupport() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getMinSupport() - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Get the minimal support (i.e. the frequency of occurrence before a pattern is considered frequent).
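The PrefixSpan knobs indexed above (minSupport, maxPatternLength, maxLocalProjDBSize) can be set and read back as in this sketch; the numeric values are arbitrary:

    import org.apache.spark.mllib.fpm.PrefixSpan;

    public class PrefixSpanConfigExample {
      public static void main(String[] args) {
        PrefixSpan ps = new PrefixSpan()
            .setMinSupport(0.1)                // pattern must occur in >= 10% of sequences
            .setMaxPatternLength(5)            // longest sequential pattern considered
            .setMaxLocalProjDBSize(32000000L); // projected-database cap before local processing
        System.out.println(ps.getMinSupport());         // prints 0.1
        System.out.println(ps.getMaxPatternLength());   // prints 5
        System.out.println(ps.getMaxLocalProjDBSize()); // prints 32000000
      }
    }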
- getMinTF() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getMinTF() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getMinTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getMissingValue() - Static method in class org.apache.spark.ml.feature.Imputer
-
- getMissingValue() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getModelType() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getModelType() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getModelType() - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Get the model type.
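A minimal sketch of setting and reading the model type on the RDD-based NaiveBayes ("bernoulli" is one of the two supported values; "multinomial" is the default):

    import org.apache.spark.mllib.classification.NaiveBayes;

    public class ModelTypeExample {
      public static void main(String[] args) {
        NaiveBayes nb = new NaiveBayes().setModelType("bernoulli");
        System.out.println(nb.getModelType()); // prints "bernoulli"
      }
    }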
- getN() - Method in class org.apache.spark.ml.feature.NGram
-
- getNames() - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- getNode(int, Node) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Traces down from a root node to get the node with the given node index.
- getNonnegative() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getNumBuckets() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getNumClasses(StructField) - Static method in class org.apache.spark.ml.util.MetadataUtils
-
Examine a schema to identify the number of classes in a label column.
- getNumClasses() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getNumFeatures() - Method in class org.apache.spark.ml.feature.HashingTF
-
- getNumFeatures() - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
- getNumFeatures() - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
- getNumFeatures() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
The dimension of training features.
- getNumFeatures() - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
- getNumFeatures() - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
- getNumFeatures() - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
- getNumFolds() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getNumFolds() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getNumHashTables() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getNumHashTables() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getNumHashTables() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getNumHashTables() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- getNumItemBlocks() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getNumIterations() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- getNumObjFields() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
-
- getNumPartitions() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- getNumPartitions() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- getNumPartitions() - Static method in class org.apache.spark.api.java.JavaRDD
-
- getNumPartitions() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return the number of partitions in this RDD.
- getNumPartitions() - Static method in class org.apache.spark.api.r.RRDD
-
- getNumPartitions() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- getNumPartitions() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- getNumPartitions() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- getNumPartitions() - Static method in class org.apache.spark.graphx.VertexRDD
-
- getNumPartitions() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getNumPartitions() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getNumPartitions() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getNumPartitions() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getNumPartitions() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- getNumPartitions() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- getNumPartitions() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- getNumPartitions() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- getNumPartitions() - Method in class org.apache.spark.rdd.RDD
-
Returns the number of partitions of this RDD.
- getNumPartitions() - Static method in class org.apache.spark.rdd.UnionRDD
-
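For every RDD variant listed above, the call simply reports how the data is split; a local sketch (the local[2] master and the four-element sample are assumptions):

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class NumPartitionsExample {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("parts").setMaster("local[2]");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3, 4), 4);
        System.out.println(rdd.getNumPartitions()); // prints 4
        jsc.stop();
      }
    }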
- getNumTopFeatures() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getNumTopFeatures() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getNumTrees() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
Number of trees in the ensemble.
- getNumTrees() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getNumTrees() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getNumTrees() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
Number of trees in the ensemble.
- getNumTrees() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getNumTrees() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getNumUserBlocks() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getNumValues() - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Get the number of values, either from numValues or from values.
- getObjectInspector(String, Option<Configuration>) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator
-
- getObjFieldValues(Object, Object[]) - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
-
- getOptimizeDocConcentration() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getOptimizeDocConcentration() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getOptimizeDocConcentration() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getOptimizeDocConcentration() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Indicates whether docConcentration (the Dirichlet parameter for the document-topic distribution) will be optimized during training.
- getOptimizer() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getOptimizer() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getOptimizer() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getOptimizer() - Method in class org.apache.spark.mllib.clustering.LDA
-
:: DeveloperApi ::
LDAOptimizer used to perform the actual calculation.
- getOption(String) - Method in class org.apache.spark.SparkConf
-
Get a parameter as an Option.
- getOption(String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Returns the value of the Spark runtime configuration property for the given key.
- getOption() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Get the state value as a scala Option.
- getOption() - Method in class org.apache.spark.streaming.State
-
Get the state as a scala.Option.
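The SparkConf variant returns a scala.Option instead of throwing for a missing key; a sketch (the key names are arbitrary):

    import org.apache.spark.SparkConf;
    import scala.Option;

    public class GetOptionExample {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().set("spark.app.name", "demo");
        Option<String> name = conf.getOption("spark.app.name");   // Some(demo)
        Option<String> missing = conf.getOption("spark.not.set"); // None
        System.out.println(name.isDefined() ? name.get() : "unset"); // prints "demo"
        System.out.println(missing.isDefined());                     // prints false
      }
    }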
- getOrCreate(SparkConf) - Static method in class org.apache.spark.SparkContext
-
This function may be used to get or instantiate a SparkContext and register it as a
singleton object.
- getOrCreate() - Static method in class org.apache.spark.SparkContext
-
This function may be used to get or instantiate a SparkContext and register it as a
singleton object.
- getOrCreate() - Method in class org.apache.spark.sql.SparkSession.Builder
-
Gets an existing SparkSession or, if there is no existing one, creates a new one based on the options set in this builder.
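The builder is the usual entry point since Spark 2.0; a sketch showing that a second getOrCreate() returns the existing session (the app name and master are placeholders):

    import org.apache.spark.sql.SparkSession;

    public class GetOrCreateSessionExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("GetOrCreateExample")
            .master("local[*]")
            .getOrCreate();
        SparkSession same = SparkSession.builder().getOrCreate();
        System.out.println(spark == same); // prints true
        spark.stop();
      }
    }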
- getOrCreate(SparkContext) - Static method in class org.apache.spark.sql.SQLContext
-
- getOrCreate(String, Function0<JavaStreamingContext>) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(String, Function0<JavaStreamingContext>, Configuration) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(String, Function0<JavaStreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
- getOrCreate(String, Function0<StreamingContext>, Configuration, boolean) - Static method in class org.apache.spark.streaming.StreamingContext
-
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext.
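For the streaming variants, the factory function runs only when no checkpoint data exists; a sketch (the /tmp/checkpoint path and one-second batch interval are placeholders):

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class StreamingGetOrCreateExample {
      public static void main(String[] args) {
        final String checkpointDir = "/tmp/checkpoint"; // placeholder path
        JavaStreamingContext jssc = JavaStreamingContext.getOrCreate(checkpointDir, () -> {
          SparkConf conf = new SparkConf().setAppName("stream").setMaster("local[2]");
          JavaStreamingContext ctx = new JavaStreamingContext(conf, Durations.seconds(1));
          ctx.checkpoint(checkpointDir);
          return ctx; // invoked only when no checkpoint is found
        });
        // define DStream operations here, then jssc.start() / jssc.awaitTermination()
      }
    }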
- getOrCreateSparkSession(JavaSparkContext, Map<Object, Object>, boolean) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.LDA
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.DCT
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.IDF
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Imputer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Interaction
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.NGram
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.PCA
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.RFormula
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getOrDefault(Param<T>) - Method in interface org.apache.spark.ml.param.Params
-
Gets the value of a param in the embedded param map or its default value (a usage sketch follows the last getOrDefault entry below).
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.Pipeline
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.PipelineModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getOrDefault(Param<T>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
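All of the concrete classes above inherit getOrDefault from Params; a sketch using LogisticRegression (maxIter's documented default is 100):

    import org.apache.spark.ml.classification.LogisticRegression;

    public class GetOrDefaultExample {
      public static void main(String[] args) {
        LogisticRegression lr = new LogisticRegression();
        // Nothing set explicitly yet, so the param's default is returned.
        System.out.println(lr.getOrDefault(lr.maxIter())); // prints 100
        lr.setMaxIter(10);
        System.out.println(lr.getOrDefault(lr.maxIter())); // prints 10
      }
    }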
- getOrElse(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
-
Returns the value associated with a param or a default value.
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Binarizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.DCT
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.HashingTF
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.IDF
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.IndexToString
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Interaction
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.NGram
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Normalizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.PCA
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getOutputCol() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getOutputCols() - Static method in class org.apache.spark.ml.feature.Imputer
-
- getOutputCols() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getOutputStream(String, Configuration) - Static method in class org.apache.spark.streaming.util.HdfsUtils
-
- getP() - Method in class org.apache.spark.ml.feature.Normalizer
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getParam(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.LDA
-
- getParam(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getParam(String) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- getParam(String) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- getParam(String) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.DCT
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.IDF
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Imputer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Interaction
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.NGram
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.PCA
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.RFormula
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getParam(String) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getParam(String) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getParam(String) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getParam(String) - Method in interface org.apache.spark.ml.param.Params
-
Gets a param by its name (a usage sketch follows the last getParam entry below).
- getParam(String) - Static method in class org.apache.spark.ml.Pipeline
-
- getParam(String) - Static method in class org.apache.spark.ml.PipelineModel
-
- getParam(String) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getParam(String) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getParam(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getParam(String) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getParam(String) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getParam(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getParam(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
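getParam is likewise inherited from Params; a sketch that looks a param up by its string name:

    import org.apache.spark.ml.classification.LogisticRegression;
    import org.apache.spark.ml.param.Param;

    public class GetParamExample {
      public static void main(String[] args) {
        LogisticRegression lr = new LogisticRegression();
        Param<?> p = lr.getParam("regParam");
        System.out.println(p.name() + ": " + p.doc()); // prints the param's name and doc string
      }
    }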
- getParents(int) - Method in class org.apache.spark.NarrowDependency
-
Get the parent partitions for a child partition.
- getParents(int) - Method in class org.apache.spark.OneToOneDependency
-
- getParents(int) - Method in class org.apache.spark.RangeDependency
-
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
-
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
-
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
-
- getPartition(long, long, int) - Method in interface org.apache.spark.graphx.PartitionStrategy
-
Returns the partition number for a given edge.
- getPartition(long, long, int) - Method in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
-
- getPartition(Object) - Method in class org.apache.spark.HashPartitioner
-
- getPartition(Object) - Method in class org.apache.spark.Partitioner
-
- getPartition(Object) - Method in class org.apache.spark.RangePartitioner
-
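A sketch for the hash-based case (the key and partition count are arbitrary):

    import org.apache.spark.HashPartitioner;

    public class PartitionerExample {
      public static void main(String[] args) {
        HashPartitioner part = new HashPartitioner(8);
        // Maps any key to a partition id in [0, 8).
        System.out.println(part.getPartition("user-42"));
        System.out.println(part.numPartitions()); // prints 8
      }
    }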
- getPartitionId() - Static method in class org.apache.spark.TaskContext
-
Returns the partition id of the currently active TaskContext.
- getPartitionMetadata(Set<String>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getPartitions() - Method in class org.apache.spark.api.r.BaseRRDD
-
- getPartitions() - Static method in class org.apache.spark.api.r.RRDD
-
- getPartitions() - Method in class org.apache.spark.rdd.CoGroupedRDD
-
- getPartitions() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- getPartitions() - Method in class org.apache.spark.rdd.HadoopRDD
-
- getPartitions() - Method in class org.apache.spark.rdd.JdbcRDD
-
- getPartitions() - Method in class org.apache.spark.rdd.NewHadoopRDD
-
- getPartitions() - Method in class org.apache.spark.rdd.ShuffledRDD
-
- getPartitions() - Method in class org.apache.spark.rdd.UnionRDD
-
- getPartitions(Set<String>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- getPath() - Method in class org.apache.spark.input.PortableDataStream
-
- getPattern() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- GetPeers(BlockManagerId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetPeers
-
- GetPeers$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetPeers$
-
- getPercentile() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getPercentile() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getPersistentRDDs() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Returns a Java map of JavaRDDs that have marked themselves as persistent via a cache() call.
- getPersistentRDDs() - Method in class org.apache.spark.SparkContext
-
Returns an immutable map of RDDs that have marked themselves as persistent via a cache() call.
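A sketch of inspecting the cache through the Java API (the local master and sample data are assumptions):

    import java.util.Arrays;
    import java.util.Map;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class PersistentRddsExample {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("cached").setMaster("local[2]");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        JavaRDD<Integer> cached = jsc.parallelize(Arrays.asList(1, 2, 3)).cache();
        cached.count(); // force materialization
        Map<Integer, JavaRDD<?>> persistent = jsc.getPersistentRDDs();
        System.out.println(persistent.size()); // prints 1
        jsc.stop();
      }
    }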
- getPoissonSamplingFunction(RDD<Tuple2<K, V>>, Map<K, Object>, boolean, long, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Return the per-partition sampling function used for sampling with replacement.
- getPoolForName(String) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Return the pool associated with the given name, if one exists.
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getPredictionCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getPredictionCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getPredictionCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getPredictionCol() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- getPredictionCol() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- getPredictionCol() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- getPredictionCol() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getPredictionCol() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getPredictionCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.HadoopRDD
-
- getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.NewHadoopRDD
-
- getPreferredLocations(Partition) - Method in class org.apache.spark.rdd.UnionRDD
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getProbabilityCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getProcessName() - Static method in class org.apache.spark.util.Utils
-
Returns the name of this JVM process.
- getPropertiesFromFile(String) - Static method in class org.apache.spark.util.Utils
-
Load properties present in the given file.
- getQuantileCalculationStrategy() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getQuantileProbabilities() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getQuantileProbabilities() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getQuantilesCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getQuantilesCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getRandomSample(Seq<T>, int, Random) - Static method in class org.apache.spark.storage.BlockReplicationUtils
-
Get a random sample of size m from elems.
- getRank() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getRatingCol() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getRawPredictionCol() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- getRddBlockLocations(int, Seq<StorageStatus>) - Static method in class org.apache.spark.storage.StorageUtils
-
Return a mapping from block ID to its locations for each block that belongs to the given RDD.
- getRDDStorageInfo() - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Return information about which RDDs are cached, whether they are in memory or on disk, how much space they take, etc.
- getReceiver() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
-
Gets the receiver object that will be sent to the worker nodes
to receive data.
- getRegParam() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getRegParam() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getRegParam() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getRegParam() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getRegParam() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getRegParam() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getRegParam() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getRegParam() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getRegParam() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getRelativeError() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- getRollingIntervalSecs(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- getRootDirectory() - Static method in class org.apache.spark.SparkFiles
-
Get the root directory that contains files added through SparkContext.addFile().
- getRuns() - Method in class org.apache.spark.mllib.clustering.KMeans
-
- getScalingVec() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- getSchedulingMode() - Method in class org.apache.spark.SparkContext
-
Return the current scheduling mode.
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- getSchemaQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
The SQL query that should be used to discover the schema of a table.
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- getSchemaQuery(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
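Dialects can override the JdbcDialect default schema-discovery query ("SELECT * FROM <table> WHERE 1=0"); a hypothetical custom dialect sketch (the jdbc:mydb prefix and LIMIT 0 probe are invented):

    import org.apache.spark.sql.jdbc.JdbcDialect;
    import org.apache.spark.sql.jdbc.JdbcDialects;

    public class MyDialectExample {
      public static void main(String[] args) {
        JdbcDialect myDialect = new JdbcDialect() {
          @Override
          public boolean canHandle(String url) {
            return url.startsWith("jdbc:mydb"); // invented URL scheme
          }
          @Override
          public String getSchemaQuery(String table) {
            return "SELECT * FROM " + table + " LIMIT 0"; // schema probe returning no rows
          }
        };
        JdbcDialects.registerDialect(myDialect); // picked up by the JDBC data source
      }
    }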
- getSeed() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getSeed() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getSeed() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getSeed() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getSeed() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getSeed() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getSeed() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getSeed() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getSeed() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- getSeed() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- getSeed() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getSeed() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getSeed() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getSeed() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getSeed() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getSeed() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getSeed() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getSeed() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getSeed() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getSeed() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- getSeed() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- getSeed() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getSeed() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- getSeed() - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Gets the random seed.
- getSeed() - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Return the random seed.
- getSeed() - Method in class org.apache.spark.mllib.clustering.KMeans
-
The random seed for cluster initialization.
- getSeed() - Method in class org.apache.spark.mllib.clustering.LDA
-
Random seed for cluster initialization.
- getSelectorType() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- getSelectorType() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- getSeq(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of array type as a Scala Seq.
- getSeqOp(boolean, Map<K, Object>, StratifiedSamplingUtils.RandomDataGenerator, Option<Map<K, Object>>) - Static method in class org.apache.spark.util.random.StratifiedSamplingUtils
-
Returns the function used by aggregate to collect sampling statistics for each partition.
- getSessionConf(SparkSession) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- getShort(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a primitive short.
- getSimpleMessage() - Method in exception org.apache.spark.sql.AnalysisException
-
- getSizeAsBytes(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as bytes; throws a NoSuchElementException if it's not set.
- getSizeAsBytes(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as bytes, falling back to a default if not set.
- getSizeAsBytes(String, long) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as bytes, falling back to a default if not set.
- getSizeAsGb(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Gibibytes; throws a NoSuchElementException if it's not set.
- getSizeAsGb(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Gibibytes, falling back to a default if not set.
- getSizeAsKb(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Kibibytes; throws a NoSuchElementException if it's not set.
- getSizeAsKb(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Kibibytes, falling back to a default if not set.
- getSizeAsMb(String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Mebibytes; throws a NoSuchElementException if it's not set.
- getSizeAsMb(String, String) - Method in class org.apache.spark.SparkConf
-
Get a size parameter as Mebibytes, falling back to a default if not set.
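The getSizeAs* family parses the same underlying size string at different granularities; a minimal sketch (the config key and values are illustrative, and "spark.some.unset.key" is a hypothetical key used only to show the default fallback):

    import org.apache.spark.SparkConf

    val conf = new SparkConf().set("spark.driver.maxResultSize", "2g")
    conf.getSizeAsBytes("spark.driver.maxResultSize")  // 2147483648
    conf.getSizeAsMb("spark.driver.maxResultSize")     // 2048
    conf.getSizeAsGb("spark.driver.maxResultSize")     // 2
    conf.getSizeAsKb("spark.some.unset.key", "512k")   // 512, from the default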
- getSizeInBytes() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the current size in bytes of this `Matrix`.
- getSlotDescs() - Method in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods
-
- getSmoothing() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getSmoothing() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getSolver() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getSolver() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getSolver() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getSolver() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getSolver() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getSparkClassLoader() - Static method in class org.apache.spark.util.Utils
-
Get the ClassLoader which loaded Spark.
- getSparkHome() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get Spark's home location from either a value set through the constructor,
or the spark.home Java property, or the SPARK_HOME environment variable
(in that order of preference).
- getSparkOrYarnConfig(SparkConf, String, String) - Static method in class org.apache.spark.util.Utils
-
Return the value of a config either through the SparkConf or the Hadoop configuration
if this is Yarn mode.
- getSparseSizeInBytes(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Gets the size of the minimal sparse representation of this `Matrix`.
- getSplit() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
-
- getSplits() - Method in class org.apache.spark.ml.feature.Bucketizer
-
- getSQLDataType(String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- getStackTrace() - Static method in exception org.apache.spark.sql.AnalysisException
-
- getStageInfo(int) - Method in class org.apache.spark.api.java.JavaSparkStatusTracker
-
Returns stage information, or null if the stage info could not be found or was garbage collected.
- getStageInfo(int) - Method in class org.apache.spark.SparkStatusTracker
-
Returns stage information, or None if the stage info could not be found or was garbage collected.
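A minimal sketch of polling stage info from the Scala status tracker, assuming an active SparkContext sc:

    val tracker = sc.statusTracker
    for (stageId <- tracker.getActiveStageIds) {
      tracker.getStageInfo(stageId).foreach { info =>
        println(s"stage $stageId: ${info.numCompletedTasks}/${info.numTasks} tasks done")
      }
    }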
- getStagePath(String, int, int, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Get path for saving the given stage.
- getStages() - Method in class org.apache.spark.ml.Pipeline
-
- getStandardization() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getStandardization() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getStandardization() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getStandardization() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getStandardization() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getStandardization() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getStartOffset() - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Returns the starting offset of the block currently being read, or -1 if it is unknown.
- getStartTimeEpoch() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- getState() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Returns the current application state.
- getState() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
:: DeveloperApi ::
- getState() - Method in class org.apache.spark.streaming.StreamingContext
-
:: DeveloperApi ::
- getStatement() - Method in class org.apache.spark.ml.feature.SQLTransformer
-
- getStderr(Process, long) - Static method in class org.apache.spark.util.Utils
-
Return the stderr of a process after waiting for the process to terminate.
- getStepSize() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getStepSize() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getStepSize() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getStepSize() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getStepSize() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getStepSize() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getStepSize() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getStopWords() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- getStorageLevel() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- getStorageLevel() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- getStorageLevel() - Static method in class org.apache.spark.api.java.JavaRDD
-
- getStorageLevel() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
- getStorageLevel() - Static method in class org.apache.spark.api.r.RRDD
-
- getStorageLevel() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- getStorageLevel() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- getStorageLevel() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- getStorageLevel() - Static method in class org.apache.spark.graphx.VertexRDD
-
- getStorageLevel() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- getStorageLevel() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- getStorageLevel() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- getStorageLevel() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- getStorageLevel() - Method in class org.apache.spark.rdd.RDD
-
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
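For example, assuming an active SparkContext sc:

    import org.apache.spark.storage.StorageLevel

    val rdd = sc.parallelize(1 to 100)
    rdd.getStorageLevel                              // StorageLevel.NONE until persisted
    rdd.persist(StorageLevel.MEMORY_ONLY)
    rdd.getStorageLevel == StorageLevel.MEMORY_ONLY  // true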
- getStorageLevel() - Static method in class org.apache.spark.rdd.UnionRDD
-
- GetStorageStatus$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.GetStorageStatus$
-
- getStrategy() - Static method in class org.apache.spark.ml.feature.Imputer
-
- getStrategy() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- getString(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i as a String object.
- getString(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a String.
- getStringArray(String) - Method in class org.apache.spark.sql.types.Metadata
-
Gets a String array.
- getStruct(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of struct type as a Row object.
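A minimal sketch of the typed Row getters on a locally constructed Row (the values are illustrative):

    import org.apache.spark.sql.Row

    val row = Row("alice", Seq(1, 2, 3), Row("nested"))
    row.getString(0)    // "alice"
    row.getSeq[Int](1)  // Seq(1, 2, 3)
    row.getStruct(2)    // Row("nested")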
- getSubsamplingRate() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- getSubsamplingRate() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- getSubsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getSuppressed() - Static method in exception org.apache.spark.sql.AnalysisException
-
- getSystemProperties() - Static method in class org.apache.spark.util.Utils
-
Returns the system properties map that is thread-safe to iterate over.
- getTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Get the table or view with the specified name.
- getTable(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Get the table or view with the specified name in the specified database.
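A minimal sketch, assuming an active SparkSession spark and a database and table with the (illustrative) names below:

    val table = spark.catalog.getTable("mydb", "people")
    println(s"${table.name} in ${table.database}, temporary = ${table.isTemporary}")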
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.DB2Dialect
-
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.DerbyDialect
-
- getTableExistsQuery(String) - Method in class org.apache.spark.sql.jdbc.JdbcDialect
-
Get the SQL query that should be used to find if the given table exists.
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.NoopDialect
-
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- getTableExistsQuery(String) - Static method in class org.apache.spark.sql.jdbc.PostgresDialect
-
- getTableNames(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- getTables(SparkSession, String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- getTau0() - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
A (positive) learning parameter that downweights early iterations.
- getThreadDump() - Static method in class org.apache.spark.util.Utils
-
Return a thread dump of all threads' stacktraces.
- getThreadDumpForThread(long) - Static method in class org.apache.spark.util.Utils
-
- getThreshold() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getThreshold() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getThreshold() - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- getThreshold() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getThreshold() - Method in class org.apache.spark.ml.feature.Binarizer
-
- getThreshold() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
- getThreshold() - Method in class org.apache.spark.mllib.classification.SVMModel
-
Returns the threshold (if any) used for converting raw prediction scores into 0/1 predictions.
- getThresholds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- getThresholds() - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- getThresholds() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- getThresholds() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- getTimeAsMs(String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as milliseconds; throws a NoSuchElementException if it's not set.
- getTimeAsMs(String, String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as milliseconds, falling back to a default if not set.
- getTimeAsSeconds(String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as seconds; throws a NoSuchElementException if it's not set.
- getTimeAsSeconds(String, String) - Method in class org.apache.spark.SparkConf
-
Get a time parameter as seconds, falling back to a default if not set.
- getTimestamp(int) - Method in interface org.apache.spark.sql.Row
-
Returns the value at position i of timestamp type as java.sql.Timestamp.
- getTimeZoneOffset() - Static method in class org.apache.spark.ui.UIUtils
-
- GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- GETTING_RESULT_TIME() - Static method in class org.apache.spark.ui.ToolTips
-
- gettingResult() - Method in class org.apache.spark.scheduler.TaskInfo
-
- gettingResultTime() - Method in class org.apache.spark.scheduler.TaskInfo
-
The time when the task started remotely getting the result.
- getTol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getTol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getTol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getTol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getTol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- getTol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- getTol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- getTol() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- getTol() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- getTol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- getTol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- getTol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getTol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getTol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getTol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getToLowercase() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- getTopicConcentration() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getTopicConcentration() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getTopicConcentration() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getTopicConcentration() - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics'
distributions over terms.
- getTopicDistributionCol() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- getTopicDistributionCol() - Static method in class org.apache.spark.ml.clustering.LDA
-
- getTopicDistributionCol() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- getTopologyForHost(String) - Method in class org.apache.spark.storage.DefaultTopologyMapper
-
- getTopologyForHost(String) - Method in class org.apache.spark.storage.FileBasedTopologyMapper
-
- getTopologyForHost(String) - Method in class org.apache.spark.storage.TopologyMapper
-
Gets the topology information given the host name.
- getTrainRatio() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- getTrainRatio() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- getTreeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- getUDTFor(String) - Static method in class org.apache.spark.sql.types.UDTRegistration
-
Returns the Class of UserDefinedType for the name of a given user class.
- getUidMap(Params) - Static method in class org.apache.spark.ml.util.MetaAlgorithmReadWrite
-
Examine the given estimator (which may be a compound estimator) and extract a mapping from UIDs to corresponding Params instances.
- getUiRoot(ServletContext) - Static method in class org.apache.spark.status.api.v1.UIRootFromServletContext
-
- getUpperBound(double, long, double) - Static method in class org.apache.spark.util.random.BinomialBounds
-
Returns a threshold p such that if we conduct n Bernoulli trials with success rate = p, it is very unlikely to have fewer than fraction * n successes.
- getUpperBound(double) - Static method in class org.apache.spark.util.random.PoissonBounds
-
Returns a lambda such that Pr[X < s] is very small, where X ~ Pois(lambda).
- getUpperBoundsOnCoefficients() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getUpperBoundsOnCoefficients() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getUpperBoundsOnIntercepts() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getUpperBoundsOnIntercepts() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getUsedTimeMs(long) - Static method in class org.apache.spark.util.Utils
-
Return a string describing how much time has passed, in milliseconds, since the given start time.
- getUseNodeIdCache() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- getUserCol() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- getUserCol() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- getUserJars(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Return the jar files pointed to by the "spark.jars" property.
- getValidationTol() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- getValue(int) - Method in class org.apache.spark.ml.attribute.NominalAttribute
-
Gets a value given its index.
- getValuesMap(Seq<String>) - Method in interface org.apache.spark.sql.Row
-
Returns a Map consisting of names and values for the requested fieldNames. For primitive types, if the value is null it returns the 'zero value' specific to that primitive type (e.g. 0 for Int).
- getVarianceCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- getVarianceCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- getVariancePower() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getVariancePower() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getVectors() - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Returns a dataframe with two fields, "word" and "vector", with "word" being a String and "vector" the DenseVector that it is mapped to.
- getVectors() - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Returns a map of words to their vector representations.
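A minimal sketch of fitting ml's Word2Vec and inspecting the learned vectors, assuming a DataFrame docs with an array-of-strings column "text" (both names are illustrative):

    import org.apache.spark.ml.feature.Word2Vec

    val w2v = new Word2Vec()
      .setInputCol("text")
      .setOutputCol("features")
      .setVectorSize(50)
      .setSeed(42L)
    val model = w2v.fit(docs)
    model.getVectors.show(5)  // DataFrame with "word" and "vector" columns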
- getVectorSize() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getVectorSize() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getVocabSize() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- getVocabSize() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- getWeightCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- getWeightCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- getWeightCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- getWeightCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- getWeightCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- getWindowSize() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- getWindowSize() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- getWithMean() - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getWithMean() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- getWithStd() - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- getWithStd() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- Gini - Class in org.apache.spark.mllib.tree.impurity
-
Class for calculating the Gini impurity
(http://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity)
during multiclass classification.
- Gini() - Constructor for class org.apache.spark.mllib.tree.impurity.Gini
-
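For reference, the Gini impurity of a node with class proportions p(i) is 1 - sum_i p(i)^2; a tiny standalone illustration (not part of the Spark API):

    def gini(counts: Seq[Double]): Double = {
      val total = counts.sum
      1.0 - counts.map(c => math.pow(c / total, 2)).sum
    }
    gini(Seq(5.0, 5.0))   // 0.5 -- maximally mixed two-class node
    gini(Seq(10.0, 0.0))  // 0.0 -- pure node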
- GLMClassificationModel - Class in org.apache.spark.mllib.classification.impl
-
Helper class for import/export of GLM classification models.
- GLMClassificationModel() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel
-
- GLMClassificationModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.classification.impl
-
- GLMClassificationModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.classification.impl
-
Model data for import/export
- GLMRegressionModel - Class in org.apache.spark.mllib.regression.impl
-
Helper methods for import/export of GLM regression models.
- GLMRegressionModel() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel
-
- GLMRegressionModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.regression.impl
-
- GLMRegressionModel.SaveLoadV1_0$.Data - Class in org.apache.spark.mllib.regression.impl
-
Model data for import/export.
- glom() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- glom() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- glom() - Static method in class org.apache.spark.api.java.JavaRDD
-
- glom() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by coalescing all elements within each partition into an array.
- glom() - Static method in class org.apache.spark.api.r.RRDD
-
- glom() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- glom() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- glom() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- glom() - Static method in class org.apache.spark.graphx.VertexRDD
-
- glom() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- glom() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- glom() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- glom() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- glom() - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by coalescing all elements within each partition into an array.
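For example, assuming an active SparkContext sc:

    val rdd = sc.parallelize(1 to 6, numSlices = 3)
    rdd.glom().collect()
    // Array(Array(1, 2), Array(3, 4), Array(5, 6)) -- one array per partition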
- glom() - Static method in class org.apache.spark.rdd.UnionRDD
-
- glom() - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- glom() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying glom() to each RDD of
this DStream.
- glom() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- glom() - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- glom() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- glom() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- glom() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- glom() - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying glom() to each RDD of
this DStream.
- goodnessOfFit() - Method in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
-
- grad() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
-
- gradient() - Method in class org.apache.spark.ml.classification.LinearSVCAggregator
-
- gradient() - Method in class org.apache.spark.ml.classification.LogisticAggregator
-
- gradient() - Method in class org.apache.spark.ml.regression.AFTAggregator
-
- gradient() - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
-
- Gradient - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
Class used to compute the gradient for a loss function, given a single data point.
- Gradient() - Constructor for class org.apache.spark.mllib.optimization.Gradient
-
- gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.AbsoluteError
-
Method to calculate the gradients for the gradient boosting calculation for least
absolute error calculation.
- gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.LogLoss
-
Method to calculate the loss gradients for the gradient boosting calculation for binary classification. The gradient with respect to F(x) is: -4 y / (1 + exp(2 y F(x))).
- gradient(double, double) - Method in interface org.apache.spark.mllib.tree.loss.Loss
-
Method to calculate the gradients for the gradient boosting calculation.
- gradient(double, double) - Static method in class org.apache.spark.mllib.tree.loss.SquaredError
-
Method to calculate the gradients for the gradient boosting calculation for least
squares error calculation.
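For reference, the gradients these losses return with respect to the prediction F(x) for a label y are: SquaredError: -2 (y - F(x)); AbsoluteError: -sign(y - F(x)); LogLoss: -4 y / (1 + exp(2 y F(x))), matching the formula quoted above.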
- GradientBoostedTrees - Class in org.apache.spark.ml.tree.impl
-
- GradientBoostedTrees() - Constructor for class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
- GradientBoostedTrees - Class in org.apache.spark.mllib.tree
-
- GradientBoostedTrees(BoostingStrategy) - Constructor for class org.apache.spark.mllib.tree.GradientBoostedTrees
-
- GradientBoostedTreesModel - Class in org.apache.spark.mllib.tree.model
-
Represents a gradient boosted trees model.
- GradientBoostedTreesModel(Enumeration.Value, DecisionTreeModel[], double[]) - Constructor for class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- GradientDescent - Class in org.apache.spark.mllib.optimization
-
Class used to solve an optimization problem using Gradient Descent.
- Graph<VD,ED> - Class in org.apache.spark.graphx
-
The Graph abstractly represents a graph with arbitrary objects
associated with vertices and edges.
- GraphGenerators - Class in org.apache.spark.graphx.util
-
A collection of graph generating functions.
- GraphGenerators() - Constructor for class org.apache.spark.graphx.util.GraphGenerators
-
- GraphImpl<VD,ED> - Class in org.apache.spark.graphx.impl
-
An implementation of Graph to support computation on graphs.
- GraphLoader - Class in org.apache.spark.graphx
-
Provides utilities for loading Graphs from files.
- GraphLoader() - Constructor for class org.apache.spark.graphx.GraphLoader
-
- GraphOps<VD,ED> - Class in org.apache.spark.graphx
-
Contains additional functionality for Graph.
- GraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Constructor for class org.apache.spark.graphx.GraphOps
-
- graphToGraphOps(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.Graph
-
Implicitly extracts the GraphOps member from a graph.
- GraphXUtils - Class in org.apache.spark.graphx
-
- GraphXUtils() - Constructor for class org.apache.spark.graphx.GraphXUtils
-
- greater(Duration) - Method in class org.apache.spark.streaming.Duration
-
- greater(Time) - Method in class org.apache.spark.streaming.Time
-
- greaterEq(Duration) - Method in class org.apache.spark.streaming.Duration
-
- greaterEq(Time) - Method in class org.apache.spark.streaming.Time
-
- GreaterThan - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a value greater than value.
- GreaterThan(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThan
-
- GreaterThanOrEqual - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a value greater than or equal to value.
- GreaterThanOrEqual(String, Object) - Constructor for class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- greatest(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of values, skipping null values.
- greatest(String, String...) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of column names, skipping null values.
- greatest(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of values, skipping null values.
- greatest(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Returns the greatest value of the list of column names, skipping null values.
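A minimal sketch, assuming an active SparkSession spark:

    import spark.implicits._
    import org.apache.spark.sql.functions.{greatest, least}

    val df = Seq((1, 5, 3), (9, 2, 7)).toDF("a", "b", "c")
    df.select(greatest("a", "b", "c"), least("a", "b", "c")).show()
    // row (1, 5, 3) -> greatest 5, least 1; row (9, 2, 7) -> greatest 9, least 2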
- gridGraph(SparkContext, int, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
Create a rows by cols grid graph with each vertex connected to its row+1 and col+1 neighbors.
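For example, assuming an active SparkContext sc:

    import org.apache.spark.graphx.util.GraphGenerators

    val grid = GraphGenerators.gridGraph(sc, rows = 3, cols = 4)
    grid.vertices.count()  // 12
    grid.edges.count()     // 17 = 3*(4-1) + 4*(3-1) directed edges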
- groupArr() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- groupBy(Function<T, U>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- groupBy(Function<T, U>, int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- groupBy(Function<T, U>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- groupBy(Function<T, U>, int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- groupBy(Function<T, U>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- groupBy(Function<T, U>, int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- groupBy(Function<T, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD of grouped elements.
- groupBy(Function<T, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD of grouped elements.
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.api.r.RRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.api.r.RRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.api.r.RRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- groupBy(Function1<T, K>, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD of grouped items.
- groupBy(Function1<T, K>, int, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD of grouped elements.
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD of grouped items.
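For example, assuming an active SparkContext sc:

    val nums = sc.parallelize(1 to 10)
    nums.groupBy(n => n % 2).mapValues(_.toList).collect()
    // Array((0, List(2, 4, 6, 8, 10)), (1, List(1, 3, 5, 7, 9))) -- key order not guaranteed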
- groupBy(Function1<T, K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- groupBy(Function1<T, K>, int, ClassTag<K>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- groupBy(Function1<T, K>, Partitioner, ClassTag<K>, Ordering<K>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- groupBy(Column...) - Method in class org.apache.spark.sql.Dataset
-
Groups the Dataset using the specified columns, so we can run aggregation on them.
- groupBy(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Groups the Dataset using the specified columns, so that we can run aggregation on them.
- groupBy(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Groups the Dataset using the specified columns, so we can run aggregation on them.
- groupBy(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Groups the Dataset using the specified columns, so that we can run aggregation on them.
- groupBy(Function1<A, K>) - Static method in class org.apache.spark.sql.types.StructType
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.api.r.RRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.graphx.VertexRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- groupBy$default$4(Function1<T, K>, Partitioner) - Static method in class org.apache.spark.rdd.UnionRDD
-
- groupByKey(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Group the values for each key in the RDD into a single sequence.
- groupByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Group the values for each key in the RDD into a single sequence.
- groupByKey(int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Group the values for each key in the RDD into a single sequence.
- groupByKey() - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Group the values for each key in the RDD into a single sequence.
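For example, assuming an active SparkContext sc (note that when only a per-key aggregate is needed, reduceByKey or aggregateByKey is usually cheaper than materializing every value per key):

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    pairs.groupByKey().mapValues(_.toList).collect()
    // Array(("a", List(1, 3)), ("b", List(2))) -- key order not guaranteed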
- groupByKey(Function1<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental :: (Scala-specific) Returns a KeyValueGroupedDataset where the data is grouped by the given key func.
- groupByKey(MapFunction<T, K>, Encoder<K>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental :: (Java-specific) Returns a KeyValueGroupedDataset where the data is grouped by the given key func.
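A minimal sketch of the Scala-specific variant, assuming an active SparkSession spark:

    import spark.implicits._

    val ds = Seq("apple", "avocado", "banana").toDS()
    ds.groupByKey(s => s.substring(0, 1))  // group by first letter
      .count()                             // Dataset[(String, Long)]
      .show()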
- groupByKey() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey to each RDD.
- groupByKey(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey to each RDD.
- groupByKey(Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey on each RDD of this DStream.
- groupByKey() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKey(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKey(Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKey() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKey(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKey(Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKey() - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey to each RDD.
- groupByKey(int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey to each RDD.
- groupByKey(Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey on each RDD.
- groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey over a sliding window.
- groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey over a sliding window.
- groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey over a sliding window on this DStream.
- groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying groupByKey over a sliding window on this DStream.
- groupByKeyAndWindow(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKeyAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKeyAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKeyAndWindow(Duration, Duration, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- groupByKeyAndWindow(Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKeyAndWindow(Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKeyAndWindow(Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKeyAndWindow(Duration, Duration, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- groupByKeyAndWindow(Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey over a sliding window.
- groupByKeyAndWindow(Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey over a sliding window.
- groupByKeyAndWindow(Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey over a sliding window on this DStream.
- groupByKeyAndWindow(Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying groupByKey over a sliding window on this DStream.
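A minimal sketch, assuming a StreamingContext with an appropriate batch interval and a DStream of (word, count) pairs named wordPairs (an illustrative name):

    import org.apache.spark.streaming.Seconds

    val windowed = wordPairs.groupByKeyAndWindow(
      windowDuration = Seconds(60),  // group over the last 60 seconds of data
      slideDuration  = Seconds(20))  // recompute every 20 seconds
    windowed.print()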
- GroupByType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.GroupByType$
-
- grouped(int) - Static method in class org.apache.spark.sql.types.StructType
-
- groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.Graph
-
Merges multiple edges between two vertices into a single edge.
- groupEdges(Function2<ED, ED, ED>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- groupHash() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- grouping(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated
or not, returns 1 for aggregated or 0 for not aggregated in the result set.
- grouping(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: indicates whether a specified column in a GROUP BY list is aggregated
or not, returns 1 for aggregated or 0 for not aggregated in the result set.
- grouping_id(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the level of grouping, equal to (grouping(c1) << (n-1)) + (grouping(c2) << (n-2)) + ... + grouping(cn).
- grouping_id(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the level of grouping, equal to (grouping(c1) << (n-1)) + (grouping(c2) << (n-2)) + ... + grouping(cn).
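A minimal sketch, assuming an active SparkSession spark and a DataFrame sales with columns "city", "year", and "amount" (illustrative names):

    import org.apache.spark.sql.functions.{grouping, grouping_id, sum}

    sales.cube("city", "year")
      .agg(sum("amount"), grouping("city"), grouping_id())
      .show()
    // grouping("city") is 1 on rows where "city" was rolled up, 0 otherwise;
    // grouping_id() packs all grouping indicators into a single bit vector.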
- GroupMappingServiceProvider - Interface in org.apache.spark.security
-
This Spark trait is used for mapping a given userName to the set of groups to which it belongs.
- GroupState<S> - Interface in org.apache.spark.sql.streaming
-
:: Experimental ::
- GroupStateTimeout - Class in org.apache.spark.sql.streaming
-
Represents the type of timeouts possible for the Dataset operations
`mapGroupsWithState` and `flatMapGroupsWithState`.
- GroupStateTimeout() - Constructor for class org.apache.spark.sql.streaming.GroupStateTimeout
-
- groupWith(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Alias for cogroup.
- groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Alias for cogroup.
- groupWith(JavaPairRDD<K, W1>, JavaPairRDD<K, W2>, JavaPairRDD<K, W3>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Alias for cogroup.
- groupWith(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Alias for cogroup.
- groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Alias for cogroup.
- groupWith(RDD<Tuple2<K, W1>>, RDD<Tuple2<K, W2>>, RDD<Tuple2<K, W3>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Alias for cogroup.
- gt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if the value is greater than lowerBound.
- gt(Object) - Method in class org.apache.spark.sql.Column
-
Greater than.
- gtEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if the value is greater than or equal to lowerBound.
- guard(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- L1Updater - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
Updater for L1 regularized problems.
- L1Updater() - Constructor for class org.apache.spark.mllib.optimization.L1Updater
-
- label() - Method in class org.apache.spark.ml.feature.LabeledPoint
-
- label() - Method in class org.apache.spark.mllib.regression.LabeledPoint
-
- labelCol() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
- labelCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- labelCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- labelCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- labelCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- labelCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- labelCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Field in "predictions" which gives the true label of each instance (if available).
- labelCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- labelCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- labelCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- labelCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- labelCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- labelCol() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- labelCol() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- labelCol() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- labelCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- labelCol() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- labelCol() - Static method in class org.apache.spark.ml.feature.RFormula
-
- labelCol() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- labelCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- labelCol() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- labelCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- labelCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- labelCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- labelCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- labelCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
- labelCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- labelCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- LabelConverter - Class in org.apache.spark.ml.classification
-
Label to vector converter.
- LabelConverter() - Constructor for class org.apache.spark.ml.classification.LabelConverter
-
- LabeledPoint - Class in org.apache.spark.ml.feature
-
Class that represents the features and label of a data point.
- LabeledPoint(double, Vector) - Constructor for class org.apache.spark.ml.feature.LabeledPoint
-
- LabeledPoint - Class in org.apache.spark.mllib.regression
-
Class that represents the features and labels of a data point.
- LabeledPoint(double, Vector) - Constructor for class org.apache.spark.mllib.regression.LabeledPoint
-
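A minimal sketch of constructing a labeled point in the ml API (the values are illustrative):

    import org.apache.spark.ml.feature.LabeledPoint
    import org.apache.spark.ml.linalg.Vectors

    val point = LabeledPoint(1.0, Vectors.dense(0.5, 1.2))  // label, features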
- LabelPropagation - Class in org.apache.spark.graphx.lib
-
Label Propagation algorithm.
- LabelPropagation() - Constructor for class org.apache.spark.graphx.lib.LabelPropagation
-
- labels() - Method in class org.apache.spark.ml.feature.IndexToString
-
Optional param for array of labels specifying index-string mapping.
- labels() - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- labels() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- labels() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
-
- labels() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- labels() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns the sequence of labels in ascending order.
- labels() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the sequence of labels in ascending order.
- lag(Column, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and null if there are fewer than offset rows before the current row.
- lag(String, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and null if there are fewer than offset rows before the current row.
- lag(String, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
- lag(Column, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows before the current row, and defaultValue if there are fewer than offset rows before the current row.
- LassoModel - Class in org.apache.spark.mllib.regression
-
Regression model trained using Lasso.
- LassoModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.LassoModel
-
- LassoWithSGD - Class in org.apache.spark.mllib.regression
-
Train a regression model with L1-regularization using Stochastic Gradient Descent.
- LassoWithSGD() - Constructor for class org.apache.spark.mllib.regression.LassoWithSGD
-
- last(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value in a group.
- last(String, boolean) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value of the column in a group.
- last(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value in a group.
- last(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the last value of the column in a group.
- last() - Static method in class org.apache.spark.sql.types.StructType
-
- last_day(Column) - Static method in class org.apache.spark.sql.functions
-
Given a date column, returns the last day of the month to which the given date belongs.
- lastDir() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
-
- lastError() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- lastError() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- lastErrorMessage() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- lastErrorMessage() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- lastErrorTime() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- lastErrorTime() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- lastIndexOf(B) - Static method in class org.apache.spark.sql.types.StructType
-
- lastIndexOf(B, int) - Static method in class org.apache.spark.sql.types.StructType
-
- lastIndexOfSlice(GenSeq<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- lastIndexOfSlice(GenSeq<B>, int) - Static method in class org.apache.spark.sql.types.StructType
-
- lastIndexWhere(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- lastIndexWhere(Function1<A, Object>, int) - Static method in class org.apache.spark.sql.types.StructType
-
- lastOption() - Static method in class org.apache.spark.sql.types.StructType
-
- lastProgress() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
- lastUpdated() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- latestModel() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Return the latest model.
- latestModel() - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Return the latest model.
- launch() - Method in class org.apache.spark.launcher.SparkLauncher
-
Launches a sub-process that will start the configured Spark application.
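A minimal sketch that launches and then monitors an application via startApplication()/getState() (the paths and class name are illustrative):

    import org.apache.spark.launcher.SparkLauncher

    val handle = new SparkLauncher()
      .setAppResource("/path/to/app.jar")
      .setMainClass("com.example.MyApp")
      .setMaster("local[2]")
      .startApplication()   // returns a SparkAppHandle
    println(handle.getState)  // e.g. UNKNOWN until the app connects back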
- LAUNCHING() - Static method in class org.apache.spark.TaskState
-
- LaunchTask(org.apache.spark.util.SerializableBuffer) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask
-
- LaunchTask$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask$
-
- launchTime() - Method in class org.apache.spark.scheduler.TaskInfo
-
- launchTime() - Method in class org.apache.spark.status.api.v1.TaskData
-
- layers() - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- layers() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- LBFGS - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
Class used to solve an optimization problem using Limited-memory BFGS.
- LBFGS(Gradient, Updater) - Constructor for class org.apache.spark.mllib.optimization.LBFGS
-
- LDA - Class in org.apache.spark.ml.clustering
-
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
- LDA(String) - Constructor for class org.apache.spark.ml.clustering.LDA
-
- LDA() - Constructor for class org.apache.spark.ml.clustering.LDA
-
- LDA - Class in org.apache.spark.mllib.clustering
-
Latent Dirichlet Allocation (LDA), a topic model designed for text documents.
- LDA() - Constructor for class org.apache.spark.mllib.clustering.LDA
-
Constructs an LDA instance with default parameters.
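A minimal sketch of fitting the ml LDA estimator, assuming a DataFrame dataset with a features column:

    import org.apache.spark.ml.clustering.LDA
    val lda = new LDA().setK(10).setMaxIter(20)   // 10 topics, 20 iterations
    val model = lda.fit(dataset)                  // returns an LDAModel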
- LDAModel - Class in org.apache.spark.ml.clustering
-
- LDAModel - Class in org.apache.spark.mllib.clustering
-
Latent Dirichlet Allocation (LDA) model.
- LDAOptimizer - Interface in org.apache.spark.mllib.clustering
-
:: DeveloperApi ::
- LDAUtils - Class in org.apache.spark.mllib.clustering
-
Utility methods for LDA.
- LDAUtils() - Constructor for class org.apache.spark.mllib.clustering.LDAUtils
-
- lead(String, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows after the current row, and null if there are fewer than offset rows after the current row.
- lead(Column, int) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows after the current row, and null if there are fewer than offset rows after the current row.
- lead(String, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows after the current row, and defaultValue if there are fewer than offset rows after the current row.
- lead(Column, int, Object) - Static method in class org.apache.spark.sql.functions
-
Window function: returns the value that is offset rows after the current row, and defaultValue if there are fewer than offset rows after the current row.
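A hedged sketch of lag and lead over a window, assuming a DataFrame df with hypothetical columns grp, ts, and v:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, lag, lead}
    val w = Window.partitionBy("grp").orderBy("ts")
    df.select(
      col("v"),
      lag("v", 1).over(w).as("prev"),       // null when there is no previous row
      lead("v", 1, 0).over(w).as("next"))   // defaultValue 0 when there is no next row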
- LeaderOffset(String, int, long) - Constructor for class org.apache.spark.streaming.kafka.KafkaCluster.LeaderOffset
-
- LeaderOffset$() - Constructor for class org.apache.spark.streaming.kafka.KafkaCluster.LeaderOffset$
-
- LeafNode - Class in org.apache.spark.ml.tree
-
Decision tree leaf node.
- learningDecay() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- learningDecay() - Static method in class org.apache.spark.ml.clustering.LDA
-
- learningDecay() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- learningOffset() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- learningOffset() - Static method in class org.apache.spark.ml.clustering.LDA
-
- learningOffset() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- learningRate() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- least(Column...) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of values, skipping null values.
- least(String, String...) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of column names, skipping null values.
- least(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of values, skipping null values.
- least(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Returns the least value of the list of column names, skipping null values.
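For example (a sketch, assuming hypothetical columns a, b, c):

    import org.apache.spark.sql.functions.{col, least}
    df.select(least(col("a"), col("b"), col("c")).as("smallest"))   // null values are skipped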
- LeastSquaresAggregator - Class in org.apache.spark.ml.regression
-
LeastSquaresAggregator computes the gradient and loss for a least-squares loss function, as used in linear regression, for samples in sparse or dense vectors in an online fashion.
- LeastSquaresAggregator(Broadcast<Vector>, double, double, boolean, Broadcast<double[]>, Broadcast<double[]>) - Constructor for class org.apache.spark.ml.regression.LeastSquaresAggregator
-
- LeastSquaresCostFun - Class in org.apache.spark.ml.regression
-
LeastSquaresCostFun implements Breeze's DiffFunction[T] for Least Squares cost.
- LeastSquaresCostFun(RDD<org.apache.spark.ml.feature.Instance>, double, double, boolean, boolean, Broadcast<double[]>, Broadcast<double[]>, double, int) - Constructor for class org.apache.spark.ml.regression.LeastSquaresCostFun
-
- LeastSquaresGradient - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
Compute gradient and loss for a least-squares loss function, as used in linear regression.
- LeastSquaresGradient() - Constructor for class org.apache.spark.mllib.optimization.LeastSquaresGradient
-
- left() - Method in class org.apache.spark.sql.sources.And
-
- left() - Method in class org.apache.spark.sql.sources.Or
-
- leftCategories() - Method in class org.apache.spark.ml.tree.CategoricalSplit
-
Get the sorted categories that split to the left.
- leftCategoriesOrThreshold() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
-
- leftChild() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
-
- leftChild() - Method in class org.apache.spark.ml.tree.InternalNode
-
- leftChildIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the index of the left child of this node.
- leftImpurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
-
- leftJoin(RDD<Tuple2<Object, VD2>>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- leftJoin(RDD<Tuple2<Object, VD2>>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.VertexRDD
-
Left joins this VertexRDD with an RDD containing vertex attribute pairs.
- leftNode() - Method in class org.apache.spark.mllib.tree.model.Node
-
- leftNodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- leftOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a left outer join of this and other.
- leftOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a left outer join of this and other.
- leftOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a left outer join of this and other.
- leftOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a left outer join of this and other.
- leftOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a left outer join of this and other.
- leftOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a left outer join of this and other.
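A minimal sketch on pair RDDs, assuming an active SparkContext sc:

    val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
    val right = sc.parallelize(Seq((1, "x")))
    left.leftOuterJoin(right).collect()
    // Array((1, ("a", Some("x"))), (2, ("b", None)))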
- leftOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
- leftOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
- leftOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
- leftOuterJoin(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- leftOuterJoin(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- leftOuterJoin(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- leftOuterJoin(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- leftOuterJoin(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- leftOuterJoin(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- leftOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
- leftOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
- leftOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
- leftPredict() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
-
- leftZipJoin(VertexRDD<VD2>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- leftZipJoin(VertexRDD<VD2>, Function3<Object, VD, Option<VD2>, VD3>, ClassTag<VD2>, ClassTag<VD3>) - Method in class org.apache.spark.graphx.VertexRDD
-
Left joins this RDD with another VertexRDD with the same index.
- LegacyAccumulatorWrapper<R,T> - Class in org.apache.spark.util
-
- LegacyAccumulatorWrapper(R, AccumulableParam<R, T>) - Constructor for class org.apache.spark.util.LegacyAccumulatorWrapper
-
- length() - Method in class org.apache.spark.scheduler.SplitInfo
-
- length(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the length of a given string or binary column.
- length() - Method in interface org.apache.spark.sql.Row
-
Number of elements in the Row.
- length() - Method in class org.apache.spark.sql.types.CharType
-
- length() - Method in class org.apache.spark.sql.types.HiveStringType
-
- length() - Method in class org.apache.spark.sql.types.StructType
-
- length() - Method in class org.apache.spark.sql.types.VarcharType
-
- lengthCompare(int) - Static method in class org.apache.spark.sql.types.StructType
-
- leq(Object) - Method in class org.apache.spark.sql.Column
-
Less than or equal to.
- less(Duration) - Method in class org.apache.spark.streaming.Duration
-
- less(Time) - Method in class org.apache.spark.streaming.Time
-
- lessEq(Duration) - Method in class org.apache.spark.streaming.Duration
-
- lessEq(Time) - Method in class org.apache.spark.streaming.Time
-
- LessThan - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a value less than value.
- LessThan(String, Object) - Constructor for class org.apache.spark.sql.sources.LessThan
-
- LessThanOrEqual - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true iff the attribute evaluates to a value less than or equal to value.
- LessThanOrEqual(String, Object) - Constructor for class org.apache.spark.sql.sources.LessThanOrEqual
-
- levenshtein(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Computes the Levenshtein distance of the two given string columns.
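For example (a sketch with hypothetical columns l and r):

    import org.apache.spark.sql.functions.{col, levenshtein}
    df.select(levenshtein(col("l"), col("r")).as("dist"))   // edit distance per row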
- libraryPathEnvName() - Static method in class org.apache.spark.util.Utils
-
Return the current system LD_LIBRARY_PATH name.
- libraryPathEnvPrefix(Seq<String>) - Static method in class org.apache.spark.util.Utils
-
Return the prefix of a command that appends the given library paths to the
system-specific library path environment variable.
- LibSVMDataSource - Class in org.apache.spark.ml.source.libsvm
-
libsvm package implements the Spark SQL data source API for loading LIBSVM data as a DataFrame.
- lift() - Static method in class org.apache.spark.sql.types.StructType
-
- like(String) - Method in class org.apache.spark.sql.Column
-
SQL like expression.
- limit(int) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset by taking the first n rows.
- line() - Method in exception org.apache.spark.sql.AnalysisException
-
- LinearDataGenerator - Class in org.apache.spark.mllib.util
-
:: DeveloperApi ::
Generate sample data for linear regression.
- LinearDataGenerator() - Constructor for class org.apache.spark.mllib.util.LinearDataGenerator
-
- LinearRegression - Class in org.apache.spark.ml.regression
-
Linear regression.
- LinearRegression(String) - Constructor for class org.apache.spark.ml.regression.LinearRegression
-
- LinearRegression() - Constructor for class org.apache.spark.ml.regression.LinearRegression
-
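A hedged sketch of fitting this estimator, assuming a DataFrame training with label and features columns:

    import org.apache.spark.ml.regression.LinearRegression
    val lr = new LinearRegression()
      .setMaxIter(10)        // solver iterations
      .setRegParam(0.1)      // regularization strength
    val model = lr.fit(training)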
- LinearRegressionModel - Class in org.apache.spark.ml.regression
-
- LinearRegressionModel - Class in org.apache.spark.mllib.regression
-
Regression model trained using LinearRegression.
- LinearRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.LinearRegressionModel
-
- LinearRegressionSummary - Class in org.apache.spark.ml.regression
-
:: Experimental ::
Linear regression results evaluated on a dataset.
- LinearRegressionTrainingSummary - Class in org.apache.spark.ml.regression
-
:: Experimental ::
Linear regression training results.
- LinearRegressionWithSGD - Class in org.apache.spark.mllib.regression
-
Train a linear regression model with no regularization using Stochastic Gradient Descent.
- LinearRegressionWithSGD() - Constructor for class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
- LinearSVC - Class in org.apache.spark.ml.classification
-
:: Experimental ::
- LinearSVC(String) - Constructor for class org.apache.spark.ml.classification.LinearSVC
-
- LinearSVC() - Constructor for class org.apache.spark.ml.classification.LinearSVC
-
- LinearSVCAggregator - Class in org.apache.spark.ml.classification
-
LinearSVCAggregator computes the gradient and loss for the hinge loss function, as used in binary classification, for instances in sparse or dense vectors in an online fashion.
- LinearSVCAggregator(Broadcast<Vector>, Broadcast<double[]>, boolean) - Constructor for class org.apache.spark.ml.classification.LinearSVCAggregator
-
- LinearSVCCostFun - Class in org.apache.spark.ml.classification
-
LinearSVCCostFun implements Breeze's DiffFunction[T] for the hinge loss function.
- LinearSVCCostFun(RDD<org.apache.spark.ml.feature.Instance>, boolean, boolean, Broadcast<double[]>, double, int) - Constructor for class org.apache.spark.ml.classification.LinearSVCCostFun
-
- LinearSVCModel - Class in org.apache.spark.ml.classification
-
:: Experimental ::
Linear SVM Model trained by LinearSVC.
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
-
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
-
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
-
- link() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
-
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
-
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
-
- link(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
-
- link() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- Link$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
-
- linkPower() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- linkPower() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- linkPredictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- linkPredictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- listColumns(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of columns for the given table/view or temporary view.
- listColumns(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of columns for the given table/view in the specified database.
- listDatabases() - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of databases available across all sessions.
- listenerManager() - Method in class org.apache.spark.sql.SparkSession
-
- listenerManager() - Method in class org.apache.spark.sql.SQLContext
-
- listFiles() - Method in class org.apache.spark.SparkContext
-
Returns a list of file paths that are added to resources.
- listFunctions() - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of functions registered in the current database.
- listFunctions(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of functions registered in the specified database.
- listingTable(Seq<String>, Function1<T, Seq<Node>>, Iterable<T>, boolean, Option<String>, Seq<String>, boolean, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns an HTML table constructed by generating a row for each object in a sequence.
- listJars() - Method in class org.apache.spark.SparkContext
-
Returns a list of jar files that are added to resources.
- listOrcFiles(String, Configuration) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator
-
- listTables() - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of tables/views in the current database.
- listTables(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Returns a list of tables/views in the specified database.
- lit(Object) - Static method in class org.apache.spark.sql.functions
-
Creates a Column of literal value.
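For example (a sketch with a hypothetical column x):

    import org.apache.spark.sql.functions.{col, lit}
    df.select(col("x"), lit(1).as("one"), lit("tag").as("label"))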
- literal(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- load(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- load(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- load(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- load(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- load(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- load(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- load(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- load(String) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- load(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- load(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- load(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- load(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- load(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- load(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- load(String) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- load(String) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- load(String) - Static method in class org.apache.spark.ml.clustering.LDA
-
- load(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- load(String) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- load(String) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- load(String) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- load(String) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- load(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- load(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- load(String) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.DCT
-
- load(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- load(String) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- load(String) - Static method in class org.apache.spark.ml.feature.IDF
-
- load(String) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.Imputer
-
- load(String) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- load(String) - Static method in class org.apache.spark.ml.feature.Interaction
-
- load(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- load(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- load(String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- load(String) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.NGram
-
- load(String) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- load(String) - Static method in class org.apache.spark.ml.feature.PCA
-
- load(String) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- load(String) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.RFormula
-
- load(String) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- load(String) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- load(String) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- load(String) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- load(String) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- load(String) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- load(String) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- load(String) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- load(String) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- load(String) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- load(String) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- load(String) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- load(String) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- load(String) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- load(String) - Static method in class org.apache.spark.ml.Pipeline
-
- load(String, SparkContext, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
- load(String) - Static method in class org.apache.spark.ml.PipelineModel
-
- load(String) - Static method in class org.apache.spark.ml.r.RWrappers
-
- load(String) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- load(String) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- load(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- load(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- load(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- load(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- load(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- load(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- load(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- load(String) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- load(String) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- load(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- load(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- load(String) - Method in interface org.apache.spark.ml.util.MLReadable
-
Reads an ML instance from the input path, a shortcut of read.load(path).
- load(String) - Method in class org.apache.spark.ml.util.MLReader
-
Loads the ML component from the input path.
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
- load(SparkContext, String, int) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.KMeansModel
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.feature.Word2VecModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.fpm.FPGrowthModel
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.fpm.PrefixSpanModel
-
- load(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Load a model from the given path.
- load(SparkContext, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- load(SparkContext, String, String, int) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- load(SparkContext, String) - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- load(SparkContext, String) - Method in interface org.apache.spark.mllib.util.Loader
-
Load a model from the given path.
- load(String...) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads input in as a DataFrame, for data sources that support multiple paths.
- load() - Method in class org.apache.spark.sql.DataFrameReader
-
Loads input in as a DataFrame, for data sources that don't require a path (e.g.
- load(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads input in as a DataFrame, for data sources that require a path (e.g.
- load(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads input in as a DataFrame, for data sources that support multiple paths.
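A minimal sketch of the path-based variant, assuming a SparkSession spark and a hypothetical path:

    val df = spark.read
      .format("parquet")          // any registered data source name
      .load("/path/to/data")      // hypothetical path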
- load(String) - Method in class org.apache.spark.sql.SQLContext
-
- load(String, String) - Method in class org.apache.spark.sql.SQLContext
-
- load(String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- load(String, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- load(String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- load(String, StructType, Map<String, String>) - Method in class org.apache.spark.sql.SQLContext
-
- load() - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads input data stream in as a DataFrame, for data streams that don't require a path (e.g.
- load(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads input in as a DataFrame, for data streams that read from some path.
- loadData(SparkContext, String, String) - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
Helper method for loading GLM classification model data.
- loadData(SparkContext, String, String, int) - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
Helper method for loading GLM regression model data.
- loadDefaultSparkProperties(SparkConf, String) - Static method in class org.apache.spark.util.Utils
-
Load default Spark properties from the given file.
- loadDefaultStopWords(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
Loads the default stop words for the given language.
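For example:

    import org.apache.spark.ml.feature.StopWordsRemover
    val english = StopWordsRemover.loadDefaultStopWords("english")   // Array[String]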
- loadDF(SparkSession, String, Map<String, String>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- loadDF(SparkSession, String, StructType, Map<String, String>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- Loader<M extends Saveable> - Interface in org.apache.spark.mllib.util
-
:: DeveloperApi ::
- loadImpl(String, SparkSession, String, String) - Static method in class org.apache.spark.ml.tree.EnsembleModelReadWrite
-
Helper method for loading a tree ensemble from disk.
- loadImpl(Dataset<Row>, Item, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
-
- loadImpl(Dataset<Row>, Item, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
-
- loadLabeledPoints(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile.
- loadLabeledPoints(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled points saved using RDD[LabeledPoint].saveAsTextFile with the default number of partitions.
- loadLibSVMFile(SparkContext, String, int, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint].
- loadLibSVMFile(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads labeled data in the LIBSVM format into an RDD[LabeledPoint], with the default number of
partitions.
- loadLibSVMFile(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads binary labeled data in the LIBSVM format into an RDD[LabeledPoint], with the number of features determined automatically and the default number of partitions.
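A minimal sketch, assuming a SparkContext sc and a hypothetical LIBSVM file:

    import org.apache.spark.mllib.util.MLUtils
    val points = MLUtils.loadLibSVMFile(sc, "/path/to/data.libsvm")   // RDD[LabeledPoint]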
- loadTreeNodes(String, org.apache.spark.ml.util.DefaultParamsReader.Metadata, SparkSession) - Static method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite
-
Load a decision tree from a file.
- loadVectors(SparkContext, String, int) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads vectors saved using RDD[Vector].saveAsTextFile.
- loadVectors(SparkContext, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Loads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.
- LOCAL_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
-
- LOCAL_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
-
- LOCAL_CLUSTER_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
-
- LOCAL_N_FAILURES_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
-
- LOCAL_N_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
-
- localBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- localBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
-
- localBlocksFetched() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- localBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
-
- localBytesRead() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- localCheckpoint() - Static method in class org.apache.spark.api.r.RRDD
-
- localCheckpoint() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- localCheckpoint() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- localCheckpoint() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- localCheckpoint() - Static method in class org.apache.spark.graphx.VertexRDD
-
- localCheckpoint() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- localCheckpoint() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- localCheckpoint() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- localCheckpoint() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- localCheckpoint() - Method in class org.apache.spark.rdd.RDD
-
Mark this RDD for local checkpointing using Spark's existing caching layer.
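A sketch of local checkpointing, which truncates the lineage by trading fault tolerance for speed; it must be marked before any job runs on the RDD:

    val rdd = sc.parallelize(1 to 1000).map(_ * 2)
    rdd.localCheckpoint()   // mark before the first action on rdd
    rdd.count()             // materializes and locally checkpoints the data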
- localCheckpoint() - Static method in class org.apache.spark.rdd.UnionRDD
-
- localHostName() - Static method in class org.apache.spark.util.Utils
-
Get the local machine's hostname.
- localHostNameForURI() - Static method in class org.apache.spark.util.Utils
-
Get the local machine's URI.
- localityAwareTasks() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
-
- LocalKMeans - Class in org.apache.spark.mllib.clustering
-
A utility object to run K-means locally.
- LocalKMeans() - Constructor for class org.apache.spark.mllib.clustering.LocalKMeans
-
- LocalLDAModel - Class in org.apache.spark.ml.clustering
-
Local (non-distributed) model fitted by LDA.
- LocalLDAModel - Class in org.apache.spark.mllib.clustering
-
Local LDA model.
- localSeqToDatasetHolder(Seq<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLImplicits
-
Creates a Dataset from a local Seq.
- localSparkRPackagePath() - Static method in class org.apache.spark.api.r.RUtils
-
Get the SparkR package path in the local spark distribution.
- localValue() - Method in class org.apache.spark.Accumulable
-
Deprecated.
Get the current value of this accumulator from within a task.
- localValue() - Static method in class org.apache.spark.Accumulator
-
Deprecated.
- locate(String, Column) - Static method in class org.apache.spark.sql.functions
-
Locate the position of the first occurrence of substr.
- locate(String, Column, int) - Static method in class org.apache.spark.sql.functions
-
Locate the position of the first occurrence of substr in a string column, after position pos.
- location() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- locationUri() - Method in class org.apache.spark.sql.catalog.Database
-
- log() - Method in interface org.apache.spark.internal.Logging
-
- log(Function0<Parsers.Parser<T>>, String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- log(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given value.
- log(String) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given column.
- log(double, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the first argument-base logarithm of the second argument.
- log(double, String) - Static method in class org.apache.spark.sql.functions
-
Returns the first argument-base logarithm of the second argument.
- Log$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
-
- log10(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given value in base 10.
- log10(String) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given value in base 10.
- log1p(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given value plus one.
- log1p(String) - Static method in class org.apache.spark.sql.functions
-
Computes the natural logarithm of the given column plus one.
- log2(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given column in base 2.
- log2(String) - Static method in class org.apache.spark.sql.functions
-
Computes the logarithm of the given value in base 2.
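For example (a sketch with a hypothetical column x):

    import org.apache.spark.sql.functions.{col, log, log10, log1p, log2}
    df.select(
      log(col("x")),         // natural logarithm
      log(2.0, col("x")),    // base-2 logarithm via the two-argument form
      log1p(col("x")),       // ln(x + 1)
      log10(col("x")),
      log2(col("x")))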
- log_() - Method in interface org.apache.spark.internal.Logging
-
- logDebug(Function0<String>) - Method in interface org.apache.spark.internal.Logging
-
- logDebug(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
-
- logDeprecationWarning(String) - Static method in class org.apache.spark.SparkConf
-
Logs a warning message if the given config key is deprecated.
- logError(Function0<String>) - Method in interface org.apache.spark.internal.Logging
-
- logError(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
-
- logEvent() - Method in interface org.apache.spark.scheduler.SparkListenerEvent
-
- Logging - Interface in org.apache.spark.internal
-
Utility trait for classes that want to log data.
- logInfo(Function0<String>) - Method in interface org.apache.spark.internal.Logging
-
- logInfo(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
-
- LogisticAggregator - Class in org.apache.spark.ml.classification
-
LogisticAggregator computes the gradient and loss for the binary or multinomial logistic (softmax) loss function, as used in classification, for instances in sparse or dense vectors in an online fashion.
- LogisticAggregator(Broadcast<Vector>, Broadcast<double[]>, int, boolean, boolean) - Constructor for class org.apache.spark.ml.classification.LogisticAggregator
-
- LogisticCostFun - Class in org.apache.spark.ml.classification
-
LogisticCostFun implements Breeze's DiffFunction[T] for a multinomial (softmax) logistic loss
function, as used in multi-class classification (it is also used in binary logistic regression).
- LogisticCostFun(RDD<org.apache.spark.ml.feature.Instance>, int, boolean, boolean, Broadcast<double[]>, double, boolean, int) - Constructor for class org.apache.spark.ml.classification.LogisticCostFun
-
- LogisticGradient - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
Compute gradient and loss for a multinomial logistic loss function, as used
in multi-class classification (it is also used in binary logistic regression).
- LogisticGradient(int) - Constructor for class org.apache.spark.mllib.optimization.LogisticGradient
-
- LogisticGradient() - Constructor for class org.apache.spark.mllib.optimization.LogisticGradient
-
- LogisticRegression - Class in org.apache.spark.ml.classification
-
Logistic regression.
- LogisticRegression(String) - Constructor for class org.apache.spark.ml.classification.LogisticRegression
-
- LogisticRegression() - Constructor for class org.apache.spark.ml.classification.LogisticRegression
-
- LogisticRegressionDataGenerator - Class in org.apache.spark.mllib.util
-
:: DeveloperApi ::
Generate test data for LogisticRegression.
- LogisticRegressionDataGenerator() - Constructor for class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
-
- LogisticRegressionModel - Class in org.apache.spark.ml.classification
-
- LogisticRegressionModel - Class in org.apache.spark.mllib.classification
-
Classification model trained using Multinomial/Binary Logistic Regression.
- LogisticRegressionModel(Vector, double, int, int) - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- LogisticRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- LogisticRegressionSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for Logistic Regression Results for a given model.
- LogisticRegressionTrainingSummary - Interface in org.apache.spark.ml.classification
-
Abstraction for multinomial Logistic Regression Training results.
- LogisticRegressionWithLBFGS - Class in org.apache.spark.mllib.classification
-
Train a classification model for Multinomial/Binary Logistic Regression using
Limited-memory BFGS.
- LogisticRegressionWithLBFGS() - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
- LogisticRegressionWithSGD - Class in org.apache.spark.mllib.classification
-
Train a classification model for Binary Logistic Regression
using Stochastic Gradient Descent.
- LogisticRegressionWithSGD() - Constructor for class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
- Logit$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
-
- logLikelihood(Dataset<?>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- logLikelihood() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
- logLikelihood() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
-
- logLikelihood(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Calculates a lower bound on the log likelihood of the entire corpus.
- logLikelihood(Dataset<?>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- logLikelihood() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Log likelihood of the observed tokens in the training set,
given the current parameter estimates:
log P(docs | topics, topic distributions for docs, alpha, eta)
- logLikelihood() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
-
- logLikelihood(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Calculates a lower bound on the log likelihood of the entire corpus.
- logLikelihood(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Java-friendly version of logLikelihood
- LogLoss - Class in org.apache.spark.mllib.tree.loss
-
:: DeveloperApi ::
Class for log loss calculation (for classification).
- LogLoss() - Constructor for class org.apache.spark.mllib.tree.loss.LogLoss
-
- logName() - Method in interface org.apache.spark.internal.Logging
-
- LogNormalGenerator - Class in org.apache.spark.mllib.random
-
:: DeveloperApi ::
Generates i.i.d.
- LogNormalGenerator(double, double) - Constructor for class org.apache.spark.mllib.random.LogNormalGenerator
-
- logNormalGraph(SparkContext, int, int, double, double, long) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
Generate a graph whose vertex out degree distribution is log normal.
- logNormalJavaRDD(JavaSparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.logNormalRDD.
- logNormalJavaRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaRDD with the default seed.
- logNormalJavaRDD(JavaSparkContext, double, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaRDD with the default number of partitions and the default seed.
- logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.logNormalVectorRDD.
- logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaVectorRDD with the default seed.
- logNormalJavaVectorRDD(JavaSparkContext, double, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.logNormalJavaVectorRDD with the default number of partitions and the default seed.
- logNormalRDD(SparkContext, double, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d. samples from the log normal distribution with the input mean and standard deviation.
- logNormalVectorRDD(SparkContext, double, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from a log normal distribution.
- logpdf(Vector) - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
Returns the log-density of this multivariate Gaussian at the given point, x.
- logpdf(Vector) - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
Returns the log-density of this multivariate Gaussian at the given point, x.
- logPerplexity(Dataset<?>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- logPerplexity(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Calculate an upper bound on perplexity.
- logPerplexity(Dataset<?>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- logPerplexity(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Calculate an upper bound on perplexity.
- logPerplexity(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Java-friendly version of logPerplexity
- logPrior() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
Log probability of the current parameter estimate:
log P(topics, topic distributions for docs | Dirichlet hyperparameters)
- logPrior() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Log probability of the current parameter estimate:
log P(topics, topic distributions for docs | alpha, eta)
- logStartFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- logStartToJson(org.apache.spark.scheduler.SparkListenerLogStart) - Static method in class org.apache.spark.util.JsonProtocol
-
- logTrace(Function0<String>) - Method in interface org.apache.spark.internal.Logging
-
- logTrace(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
-
- logUncaughtExceptions(Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Execute the given block, logging and re-throwing any uncaught exception.
- logUrlMap() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
-
- logUrls() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
-
- logWarning(Function0<String>) - Method in interface org.apache.spark.internal.Logging
-
- logWarning(Function0<String>, Throwable) - Method in interface org.apache.spark.internal.Logging
-
- LONG() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable long type.
- longAccumulator() - Method in class org.apache.spark.SparkContext
-
Create and register a long accumulator, which starts with 0 and accumulates inputs by add.
- longAccumulator(String) - Method in class org.apache.spark.SparkContext
-
Create and register a long accumulator, which starts with 0 and accumulates inputs by add.
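A minimal sketch, assuming a SparkContext sc and an RDD rdd:

    val acc = sc.longAccumulator("records seen")
    rdd.foreach(_ => acc.add(1))
    acc.value   // total count, readable on the driver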
- LongAccumulator - Class in org.apache.spark.util
-
An accumulator for computing sum, count, and average of 64-bit integers.
- LongAccumulator() - Constructor for class org.apache.spark.util.LongAccumulator
-
- LongAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
-
Deprecated.
- longMetric(String) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- LongParam - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
Specialized version of Param[Long] for Java.
- LongParam(String, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.LongParam
-
- LongParam(String, String, String) - Constructor for class org.apache.spark.ml.param.LongParam
-
- LongParam(Identifiable, String, String, Function1<Object, Object>) - Constructor for class org.apache.spark.ml.param.LongParam
-
- LongParam(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.LongParam
-
- LongType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the LongType object.
- LongType - Class in org.apache.spark.sql.types
-
The data type representing Long values.
- lookup(K) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return the list of values in the RDD for key key.
- lookup(K) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return the list of values in the RDD for key key.
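For example (a sketch):

    val pairs = sc.parallelize(Seq((1, "a"), (1, "b"), (2, "c")))
    pairs.lookup(1)   // Seq("a", "b")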
- lookupRpcTimeout(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the default Spark timeout to use for RPC remote endpoint lookup.
- loss() - Method in class org.apache.spark.ml.classification.LinearSVCAggregator
-
- loss() - Method in class org.apache.spark.ml.classification.LogisticAggregator
-
- loss() - Method in class org.apache.spark.ml.regression.AFTAggregator
-
- loss() - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
-
- loss() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- Loss - Interface in org.apache.spark.mllib.tree.loss
-
:: DeveloperApi ::
Trait for adding "pluggable" loss functions for the gradient boosting algorithm.
- Losses - Class in org.apache.spark.mllib.tree.loss
-
- Losses() - Constructor for class org.apache.spark.mllib.tree.loss.Losses
-
- LossReasonPending - Class in org.apache.spark.scheduler
-
A loss reason that means we don't yet know why the executor exited.
- LossReasonPending() - Constructor for class org.apache.spark.scheduler.LossReasonPending
-
- lossType() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- lossType() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- lossType() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- lossType() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- LOST() - Static method in class org.apache.spark.TaskState
-
- low() - Method in class org.apache.spark.partial.BoundedDouble
-
- lower(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a string column to lower case.
- lowerBoundsOnCoefficients() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- lowerBoundsOnCoefficients() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- lowerBoundsOnIntercepts() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- lowerBoundsOnIntercepts() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- LowPrioritySQLImplicits - Interface in org.apache.spark.sql
-
Lower priority implicit methods for converting Scala objects into
Dataset
s.
- lpad(Column, int, String) - Static method in class org.apache.spark.sql.functions
-
Left-pad the string column with pad to a length of len.
- lt(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if value is less than upperBound.
- lt(Object) - Method in class org.apache.spark.sql.Column
-
Less than.
- ltEq(double) - Static method in class org.apache.spark.ml.param.ParamValidators
-
Check if value is less than or equal to upperBound.
- ltrim(Column) - Static method in class org.apache.spark.sql.functions
-
Trim the spaces from the left end of the specified string value.
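For example (a sketch with a hypothetical string column s):

    import org.apache.spark.sql.functions.{col, lower, lpad, ltrim}
    df.select(
      lower(col("s")),           // to lower case
      lpad(col("s"), 10, "*"),   // left-pad with '*' to length 10
      ltrim(col("s")))           // strip leading spaces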
- LZ4BlockInputStream - Class in org.apache.spark.io
-
InputStream
implementation to decode data written with
LZ4BlockOutputStream
.
- LZ4BlockInputStream(InputStream, LZ4FastDecompressor, Checksum) - Constructor for class org.apache.spark.io.LZ4BlockInputStream
-
Create a new InputStream.
- LZ4BlockInputStream(InputStream, LZ4FastDecompressor) - Constructor for class org.apache.spark.io.LZ4BlockInputStream
-
Create a new instance using XXHash32 for checksumming.
- LZ4BlockInputStream(InputStream) - Constructor for class org.apache.spark.io.LZ4BlockInputStream
-
Create a new instance which uses the fastest LZ4FastDecompressor available.
- LZ4CompressionCodec - Class in org.apache.spark.io
-
- LZ4CompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.LZ4CompressionCodec
-
- LZFCompressionCodec - Class in org.apache.spark.io
-
- LZFCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.LZFCompressionCodec
-
- main(String[]) - Static method in class org.apache.spark.ml.param.shared.SharedParamsCodeGen
-
- main(String[]) - Static method in class org.apache.spark.mllib.util.KMeansDataGenerator
-
- main(String[]) - Static method in class org.apache.spark.mllib.util.LinearDataGenerator
-
- main(String[]) - Static method in class org.apache.spark.mllib.util.LogisticRegressionDataGenerator
-
- main(String[]) - Static method in class org.apache.spark.mllib.util.MFDataGenerator
-
- main(String[]) - Static method in class org.apache.spark.mllib.util.SVMDataGenerator
-
- main(String[]) - Static method in class org.apache.spark.streaming.util.RawTextSender
-
- main(String[]) - Static method in class org.apache.spark.ui.UIWorkloadGenerator
-
- majorMinorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the (major version number, minor version number).
- majorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the major version number.
- makeBinarySearch(Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.util.CollectionsUtils
-
- makeCopy(Object[]) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- makeCopy(Object[]) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- makeCopy(Object[]) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- makeDescription(String, String, boolean) - Static method in class org.apache.spark.ui.UIUtils
-
Returns HTML rendering of a job or stage description.
- makeDriverRef(String, SparkConf, org.apache.spark.rpc.RpcEnv) - Static method in class org.apache.spark.util.RpcUtils
-
Retrieve an RpcEndpointRef which is located in the driver via its name.
- makeHref(boolean, String, String) - Static method in class org.apache.spark.ui.UIUtils
-
Return the correct href after checking whether the master is running in reverse proxy mode.
- makeProgressBar(int, int, int, int, Map<String, Object>, int) - Static method in class org.apache.spark.ui.UIUtils
-
- makeRDD(Seq<T>, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Distribute a local Scala collection to form an RDD.
- makeRDD(Seq<Tuple2<T, Seq<String>>>, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Distribute a local Scala collection to form an RDD, with one or more
location preferences (hostnames of Spark nodes) for each object.
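A short sketch of both overloads, assuming an existing SparkContext named sc; the host names are hypothetical:
    // Distribute a local collection into 2 partitions.
    val rdd = sc.makeRDD(Seq(1, 2, 3, 4), 2)

    // Same, but each element carries preferred locations (hostnames).
    val placed = sc.makeRDD(Seq(
      (1, Seq("host1")),   // "host1"/"host2" are illustrative node names
      (2, Seq("host2"))
    ))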
- map(Function<T, R>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- map(Function<T, R>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- map(Function<T, R>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- map(Function<T, R>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to all elements of this RDD.
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- map(Function1<Object, Object>) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Map the values of this matrix using a function.
- map(Function1<Object, Object>) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Map the values of this matrix using a function.
- map(Function1<R, T>) - Method in class org.apache.spark.partial.PartialResult
-
Transform this PartialResult into a PartialResult of type T.
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- map(Function1<T, U>, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying a function to all elements of this RDD.
- map(Function1<T, U>, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- map(DataType, DataType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type map.
- map(MapType) - Method in class org.apache.spark.sql.ColumnName
-
- map(Function1<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Scala-specific)
Returns a new Dataset that contains the result of applying func
to each element.
- map(MapFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Java-specific)
Returns a new Dataset that contains the result of applying func
to each element.
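A minimal Scala sketch of Dataset.map, assuming an existing SparkSession named spark:
    import spark.implicits._   // supplies the Encoder[Int]

    val ds = Seq(1, 2, 3).toDS()
    val doubled = ds.map(_ * 2)
    doubled.collect()          // Array(2, 4, 6)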
- map(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new map column.
- map(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new map column.
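A sketch of building a map column from interleaved key/value expressions (all keys must share one type and all values another), assuming spark.implicits._ is in scope:
    import org.apache.spark.sql.functions.{lit, map}

    val df = Seq(("1", "a")).toDF("k", "v")
    df.select(map(lit("id"), $"k", lit("name"), $"v").as("m"))
    // m: Map("id" -> "1", "name" -> "a")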
- map(Function1<BaseType, A>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- map(Function1<BaseType, A>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- map(Function1<BaseType, A>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- map(Function1<A, B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- map(Function<T, R>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- map(Function<T, R>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream.
- map(Function<T, R>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- map(Function<T, R>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- map(Function<T, R>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- map(Function<T, R>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- map(Function<T, R>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- map(Function1<T, U>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream by applying a function to all elements of this DStream.
- mapAsSerializableJavaMap(Map<A, B>) - Static method in class org.apache.spark.api.java.JavaUtils
-
- mapChildren(Function1<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- mapChildren(Function1<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- mapChildren(Function1<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- mapEdgePartitions(Function2<Object, EdgePartition<ED, VD>, EdgePartition<ED2, VD2>>, ClassTag<ED2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapEdges(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute in the graph using the map function.
- mapEdges(Function2<Object, Iterator<Edge<ED>>, Iterator<ED2>>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute using the map function, passing it a whole partition at a
time.
- mapEdges(Function2<Object, Iterator<Edge<ED>>, Iterator<ED2>>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- mapExpressions(Function1<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- mapExpressions(Function1<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- mapExpressions(Function1<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- mapFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
Util JSON deserialization methods.
- MapFunction<T,U> - Interface in org.apache.spark.api.java.function
-
Base interface for a map function used in Dataset's map function.
- mapGroups(Function2<K, Iterator<V>, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Scala-specific)
Applies the given function to each group of data.
- mapGroups(MapGroupsFunction<K, V, U>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Java-specific)
Applies the given function to each group of data.
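A small sketch of mapGroups, assuming spark.implicits._ is in scope; the whole group arrives as an iterator, so prefer reduceGroups or agg when a built-in aggregate suffices:
    val ds = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()
    val sizes = ds.groupByKey(_._1)
                  .mapGroups((key, values) => (key, values.size))
    sizes.collect()   // ("a", 2) and ("b", 1), in some order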
- MapGroupsFunction<K,V,R> - Interface in org.apache.spark.api.java.function
-
Base interface for a map function used in GroupedDataset's mapGroup function.
- mapGroupsWithState(Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
::Experimental::
(Scala-specific)
Applies the given function to each group of data, while maintaining a user-defined per-group
state.
- mapGroupsWithState(GroupStateTimeout, Function3<K, Iterator<V>, GroupState<S>, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
::Experimental::
(Scala-specific)
Applies the given function to each group of data, while maintaining a user-defined per-group
state.
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
::Experimental::
(Java-specific)
Applies the given function to each group of data, while maintaining a user-defined per-group
state.
- mapGroupsWithState(MapGroupsWithStateFunction<K, V, S, U>, Encoder<S>, Encoder<U>, GroupStateTimeout) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
::Experimental::
(Java-specific)
Applies the given function to each group of data, while maintaining a user-defined per-group
state.
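A sketch of a running count per key with mapGroupsWithState, assuming spark.implicits._ is in scope; the batch Dataset here is a stand-in for a streaming source:
    import org.apache.spark.sql.streaming.GroupState

    val events = Seq(("a", 1L), ("a", 1L), ("b", 1L)).toDS()

    def updateCount(key: String, values: Iterator[(String, Long)],
                    state: GroupState[Long]): (String, Long) = {
      val count = state.getOption.getOrElse(0L) + values.size
      state.update(count)   // per-group state survives across triggers
      (key, count)
    }

    val counts = events.groupByKey(_._1).mapGroupsWithState(updateCount _)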
- MapGroupsWithStateFunction<K,V,S,R> - Interface in org.apache.spark.api.java.function
-
- mapId() - Method in class org.apache.spark.FetchFailed
-
- mapId() - Method in class org.apache.spark.storage.ShuffleBlockId
-
- mapId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
-
- mapId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- mapOutputTracker() - Method in class org.apache.spark.SparkEnv
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>, boolean) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>, boolean) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>, boolean) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitions(FlatMapFunction<Iterator<T>, U>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying a function to each partition of this RDD.
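mapPartitions is the natural place to amortize per-partition setup; a sketch assuming an existing RDD[String] of dates named rdd:
    val parsed = rdd.mapPartitions { iter =>
      // Built once per partition instead of once per element.
      val fmt = new java.text.SimpleDateFormat("yyyy-MM-dd")
      iter.map(s => fmt.parse(s).getTime)
    }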
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Scala-specific)
Returns a new Dataset that contains the result of applying func
to each partition.
- mapPartitions(MapPartitionsFunction<T, U>, Encoder<U>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Java-specific)
Returns a new Dataset that contains the result of applying f
to each partition.
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD
of this DStream.
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- mapPartitions(FlatMapFunction<Iterator<T>, U>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- mapPartitions(Function1<Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD
of this DStream.
- mapPartitions$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapPartitions$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapPartitions$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- mapPartitions$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- MapPartitionsFunction<T,U> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's mapPartitions.
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- mapPartitionsInternal$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>, boolean) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>, boolean) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>, boolean) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToDouble(DoubleFlatMapFunction<Iterator<T>>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>, boolean) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>, boolean) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>, boolean) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying mapPartitions() to each RDD
of this DStream.
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- mapPartitionsToPair(PairFlatMapFunction<Iterator<T>, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- mapPartitionsWithIndex(Function2<Integer, Iterator<T>, Iterator<R>>, boolean) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitionsWithIndex(Function2<Integer, Iterator<T>, Iterator<R>>, boolean) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitionsWithIndex(Function2<Integer, Iterator<T>, Iterator<R>>, boolean) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitionsWithIndex(Function2<Integer, Iterator<T>, Iterator<R>>, boolean) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to each partition of this RDD, while tracking the index
of the original partition.
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD by applying a function to each partition of this RDD, while tracking the index
of the original partition.
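A sketch tagging every element with its partition index, assuming an existing RDD named rdd:
    val tagged = rdd.mapPartitionsWithIndex { (idx, iter) =>
      iter.map(elem => (idx, elem))   // (partition index, element)
    }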
- mapPartitionsWithIndex(Function2<Object, Iterator<T>, Iterator<U>>, boolean, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- mapPartitionsWithIndex$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- mapPartitionsWithIndexInternal$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<R>>, boolean) - Method in class org.apache.spark.api.java.JavaHadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<R>>, boolean) - Method in class org.apache.spark.api.java.JavaNewHadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.HadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapPartitionsWithInputSplit(Function2<InputSplit, Iterator<Tuple2<K, V>>, Iterator<U>>, boolean, ClassTag<U>) - Method in class org.apache.spark.rdd.NewHadoopRDD
-
Maps over a partition, providing the InputSplit that was used as the base of the partition.
- mapredInputFormat() - Method in class org.apache.spark.scheduler.InputFormatInfo
-
- mapreduceInputFormat() - Method in class org.apache.spark.scheduler.InputFormatInfo
-
- mapSideCombine() - Method in class org.apache.spark.ShuffleDependency
-
- mapToDouble(DoubleFunction<T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapToDouble(DoubleFunction<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapToDouble(DoubleFunction<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapToDouble(DoubleFunction<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to all elements of this RDD.
- mapToJson(Map<String, String>) - Static method in class org.apache.spark.util.JsonProtocol
-
Util JSON serialization methods.
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- mapToPair(PairFunction<T, K2, V2>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return a new RDD by applying a function to all elements of this RDD.
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- mapToPair(PairFunction<T, K2, V2>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream by applying a function to all elements of this DStream.
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- mapToPair(PairFunction<T, K2, V2>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- mapTriplets(Function1<EdgeTriplet<VD, ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute using the map function, passing it the adjacent vertex
attributes as well.
- mapTriplets(Function1<EdgeTriplet<VD, ED>, ED2>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute using the map function, passing it the adjacent vertex
attributes as well.
- mapTriplets(Function2<Object, Iterator<EdgeTriplet<VD, ED>>, Iterator<ED2>>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each edge attribute a partition at a time using the map function, passing it the
adjacent vertex attributes as well.
- mapTriplets(Function2<Object, Iterator<EdgeTriplet<VD, ED>>, Iterator<ED2>>, TripletFields, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
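A sketch contrasting the two transforms, assuming an existing Graph[Int, Int] named graph: mapEdges sees only the edge attribute, while mapTriplets also sees the two adjacent vertex attributes:
    val doubled  = graph.mapEdges(e => e.attr * 2)
    val weighted = graph.mapTriplets(t => t.srcAttr + t.dstAttr + t.attr)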
- MapType - Class in org.apache.spark.sql.types
-
The data type for Maps.
- MapType(DataType, DataType, boolean) - Constructor for class org.apache.spark.sql.types.MapType
-
- MapType() - Constructor for class org.apache.spark.sql.types.MapType
-
No-arg constructor for Kryo.
- mapValues(Function<V, U>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Pass each value in the key-value pair RDD through a map function without changing the keys;
this also retains the original RDD's partitioning.
- mapValues(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.EdgeRDD
-
Map the values in an edge partition, preserving the structure but changing the values.
- mapValues(Function1<Edge<ED>, ED2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- mapValues(Function1<VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapValues(Function2<Object, VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- mapValues(Function1<VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Maps each vertex attribute, preserving the index.
- mapValues(Function2<Object, VD, VD2>, ClassTag<VD2>) - Method in class org.apache.spark.graphx.VertexRDD
-
Maps each vertex attribute, additionally supplying the vertex ID.
- mapValues(Function1<V, U>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Pass each value in the key-value pair RDD through a map function without changing the keys;
this also retains the original RDD's partitioning.
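A one-line sketch, assuming a pair RDD pairs: RDD[(String, Int)]; because keys are untouched, the partitioner is preserved:
    val bumped = pairs.mapValues(_ + 1)   // keys and partitioning unchanged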
- mapValues(Function1<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
- mapValues(MapFunction<V, W>, Encoder<W>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
- mapValues(Function<V, U>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying a map function to the value of each key-value pair in
'this' DStream without changing the key.
- mapValues(Function<V, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- mapValues(Function<V, U>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- mapValues(Function1<V, U>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying a map function to the value of each key-value pair in
'this' DStream without changing the key.
- mapVertices(Function2<Object, VD, VD2>, ClassTag<VD2>, Predef.$eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.Graph
-
Transforms each vertex attribute in the graph using the map function.
- mapVertices(Function2<Object, VD, VD2>, ClassTag<VD2>, Predef.$eq$colon$eq<VD, VD2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- mapVertices$default$3(Function2<Object, VD, VD2>) - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
- mapWithState(StateSpec<K, V, StateType, MappedType>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
:: Experimental ::
Return a JavaMapWithStateDStream by applying a function to every key-value element of this
stream, while maintaining some state data for each unique key.
- mapWithState(StateSpec<K, V, StateType, MappedType>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- mapWithState(StateSpec<K, V, StateType, MappedType>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- mapWithState(StateSpec<K, V, StateType, MappedType>, ClassTag<StateType>, ClassTag<MappedType>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
:: Experimental ::
Return a MapWithStateDStream by applying a function to every key-value element of this
stream, while maintaining some state data for each unique key.
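A sketch of a running count per key, assuming a pair DStream named pairs of type DStream[(String, Int)]:
    import org.apache.spark.streaming.{State, StateSpec}

    val spec = StateSpec.function {
      (key: String, value: Option[Int], state: State[Int]) =>
        val sum = state.getOption.getOrElse(0) + value.getOrElse(0)
        state.update(sum)   // keep the running total for this key
        (key, sum)
    }
    val runningCounts = pairs.mapWithState(spec)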
- MapWithStateDStream<KeyType,ValueType,StateType,MappedType> - Class in org.apache.spark.streaming.dstream
-
:: Experimental ::
DStream representing the stream of data generated by the mapWithState operation on a
pair DStream.
- MapWithStateDStream(StreamingContext, ClassTag<MappedType>) - Constructor for class org.apache.spark.streaming.dstream.MapWithStateDStream
-
- mark(int) - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- mark(int) - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- markSupported() - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- markSupported() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- mask(Graph<VD2, ED2>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.Graph
-
Restricts the graph to only the vertices and edges that are also in other, but keeps the
attributes from this graph.
- mask(Graph<VD2, ED2>, ClassTag<VD2>, ClassTag<ED2>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- master() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- master() - Method in class org.apache.spark.SparkContext
-
- master(String) - Method in class org.apache.spark.sql.SparkSession.Builder
-
Sets the Spark master URL to connect to, such as "local" to run locally, "local[4]" to
run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
- Matrices - Class in org.apache.spark.ml.linalg
-
- Matrices() - Constructor for class org.apache.spark.ml.linalg.Matrices
-
- Matrices - Class in org.apache.spark.mllib.linalg
-
- Matrices() - Constructor for class org.apache.spark.mllib.linalg.Matrices
-
- Matrix - Interface in org.apache.spark.ml.linalg
-
Trait for a local matrix.
- Matrix - Interface in org.apache.spark.mllib.linalg
-
Trait for a local matrix.
- MatrixEntry - Class in org.apache.spark.mllib.linalg.distributed
-
Represents an entry in a distributed matrix.
- MatrixEntry(long, long, double) - Constructor for class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- MatrixFactorizationModel - Class in org.apache.spark.mllib.recommendation
-
Model representing the result of matrix factorization.
- MatrixFactorizationModel(int, RDD<Tuple2<Object, double[]>>, RDD<Tuple2<Object, double[]>>) - Constructor for class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
- MatrixFactorizationModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.recommendation
-
- MatrixImplicits - Class in org.apache.spark.mllib.linalg
-
Implicit methods available in Scala for converting org.apache.spark.mllib.linalg.Matrix to
org.apache.spark.ml.linalg.Matrix and vice versa.
- MatrixImplicits() - Constructor for class org.apache.spark.mllib.linalg.MatrixImplicits
-
- MatrixType() - Static method in class org.apache.spark.ml.linalg.SQLDataTypes
-
- max() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Returns the maximum element from this RDD as defined by the default comparator (natural order).
- max(Comparator<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- max(Comparator<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- max(Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the maximum element from this RDD as defined by the specified Comparator[T].
- max(Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- max(Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- max(Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- max(Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- max(Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- MAX() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- max() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- max() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- max() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- max() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Maximum value of each dimension.
- max() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Maximum value of each column.
- max(Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- max(Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- max(Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- max(Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- max(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the max of this RDD as defined by the implicit Ordering[T].
- max(Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- max(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the maximum value of the expression in a group.
- max(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the maximum value of the column in a group.
- max(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the max value for each numeric column for each group.
- max(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the max value for each numeric column for each group.
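A sketch of the grouped max, assuming spark.implicits._ is in scope:
    val df = Seq(("a", 1), ("a", 5), ("b", 2)).toDF("key", "value")
    df.groupBy("key").max("value")
    // ("a", max(value) = 5) and ("b", max(value) = 2)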
- max(Ordering<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- max(Duration) - Method in class org.apache.spark.streaming.Duration
-
- max(Time) - Method in class org.apache.spark.streaming.Time
-
- max(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
- max() - Method in class org.apache.spark.util.StatCounter
-
- MAX_FEATURES_FOR_NORMAL_SOLVER() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
When using LinearRegression.solver == "normal", the solver must limit the number of
features to at most this number.
- MAX_INT_DIGITS() - Static method in class org.apache.spark.sql.types.Decimal
-
Maximum number of decimal digits an Int can represent
- MAX_LONG_DIGITS() - Static method in class org.apache.spark.sql.types.Decimal
-
Maximum number of decimal digits a Long can represent
- MAX_PRECISION() - Static method in class org.apache.spark.sql.types.DecimalType
-
- MAX_SCALE() - Static method in class org.apache.spark.sql.types.DecimalType
-
- maxAbs() - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- MaxAbsScaler - Class in org.apache.spark.ml.feature
-
Rescale each feature individually to range [-1, 1] by dividing through the largest maximum
absolute value in each feature.
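A minimal fit/transform sketch, assuming an existing SparkSession named spark:
    import org.apache.spark.ml.feature.MaxAbsScaler
    import org.apache.spark.ml.linalg.Vectors

    val df = spark.createDataFrame(Seq(
      Tuple1(Vectors.dense(-8.0)),
      Tuple1(Vectors.dense(4.0))
    )).toDF("features")

    val model = new MaxAbsScaler()
      .setInputCol("features")
      .setOutputCol("scaled")
      .fit(df)
    model.transform(df)   // -8.0 -> -1.0 and 4.0 -> 0.5 (divided by max |x| = 8.0)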
- MaxAbsScaler(String) - Constructor for class org.apache.spark.ml.feature.MaxAbsScaler
-
- MaxAbsScaler() - Constructor for class org.apache.spark.ml.feature.MaxAbsScaler
-
- MaxAbsScalerModel - Class in org.apache.spark.ml.feature
-
- maxBins() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- maxBins() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- maxBins() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- maxBins() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- maxBins() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- maxBins() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- maxBins() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- maxBins() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- maxBins() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- maxBins() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- maxBins() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- maxBins() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- maxBins() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- maxBufferSizeMb() - Method in class org.apache.spark.serializer.KryoSerializer
-
- maxBy(Function1<A, B>, Ordering<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- maxCategories() - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- maxCategories() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- maxCores() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- maxDepth() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- maxDepth() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- maxDepth() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- maxDepth() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- maxDepth() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- maxDepth() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- maxDepth() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- maxDepth() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- maxDepth() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- maxDepth() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- maxDepth() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- maxDepth() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- maxDepth() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- maxId() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- maxId() - Static method in class org.apache.spark.rdd.CheckpointState
-
- maxId() - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- maxId() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- maxId() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- maxId() - Static method in class org.apache.spark.TaskState
-
- maxIter() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- maxIter() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- maxIter() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- maxIter() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- maxIter() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- maxIter() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- maxIter() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.LDA
-
- maxIter() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- maxIter() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- maxIter() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- maxIter() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- maxIter() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- maxIter() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- maxIter() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- maxIter() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- maxIter() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- maxIter() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- maxIter() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- maxIter() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- maxIters() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- maxMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- maxMem() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the maximum memory that can be used by this block manager.
- maxMemory() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- maxMemory() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
- maxMemoryInMB() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- maxMemoryInMB() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- maxMemoryInMB() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- maxMessageSizeBytes(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the configured max message size for messages in bytes.
- maxNodesInLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the maximum number of nodes which can be in the given level of the tree.
- maxOffHeapMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- maxOffHeapMem() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
- maxOffHeapMemSize() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
-
- maxOnHeapMem() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- maxOnHeapMem() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
- maxOnHeapMemSize() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
-
- maxReplicas() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
-
- maxRows() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- maxRows() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- maxSentenceLength() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- maxSentenceLength() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- maxTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- maxVal() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- maybeUpdateOutputMetrics(OutputMetrics, Function0<Object>, long) - Static method in class org.apache.spark.internal.io.SparkHadoopWriterUtils
-
- md5(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the MD5 digest of a binary column and returns the value
as a 32 character hex string.
- mean() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the mean of this RDD's elements.
- mean() - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- mean() - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
- mean() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
-
- mean() - Method in class org.apache.spark.mllib.random.ExponentialGenerator
-
- mean() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
-
- mean() - Method in class org.apache.spark.mllib.random.PoissonGenerator
-
- mean() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Sample mean of each dimension.
- mean() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Sample mean vector.
- mean() - Method in class org.apache.spark.partial.BoundedDouble
-
- mean() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the mean of this RDD's elements.
- mean(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- mean(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the average of the values in a group.
- mean(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the average value for each numeric column for each group.
- mean(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the average value for each numeric column for each group.
- mean() - Method in class org.apache.spark.util.StatCounter
-
- meanAbsoluteError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the mean absolute error, which is a risk function corresponding to the
expected value of the absolute error loss or l1-norm loss.
- meanAbsoluteError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the mean absolute error, which is a risk function corresponding to the
expected value of the absolute error loss or l1-norm loss.
- meanApprox(long, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return the approximate mean of the elements in this RDD.
- meanApprox(long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Approximate operation to return the mean within a timeout.
- meanApprox(long, double) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Approximate operation to return the mean within a timeout.
- meanAveragePrecision() - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Returns the mean average precision (MAP) of all the queries.
- means() - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
- means() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
-
- meanSquaredError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the mean squared error, which is a risk function corresponding to the
expected value of the squared error loss or quadratic loss.
- meanSquaredError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the mean squared error, which is a risk function corresponding to the
expected value of the squared error loss or quadratic loss.
- megabytesToString(long) - Static method in class org.apache.spark.util.Utils
-
Convert a quantity in megabytes to a human-readable string such as "4.0 MB".
- MEMORY_AND_DISK - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_AND_DISK() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_AND_DISK_2 - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_AND_DISK_2() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_AND_DISK_SER - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_AND_DISK_SER() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_AND_DISK_SER_2 - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_AND_DISK_SER_2() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_BYTES_SPILLED() - Static method in class org.apache.spark.InternalAccumulator
-
- MEMORY_ONLY - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_ONLY() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_ONLY_2 - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_ONLY_2() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_ONLY_SER - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_ONLY_SER() - Static method in class org.apache.spark.storage.StorageLevel
-
- MEMORY_ONLY_SER_2 - Static variable in class org.apache.spark.api.java.StorageLevels
-
- MEMORY_ONLY_SER_2() - Static method in class org.apache.spark.storage.StorageLevel
-
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.StageData
-
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
-
- memoryBytesSpilled() - Method in class org.apache.spark.status.api.v1.TaskMetrics
-
- memoryBytesSpilled() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- memoryBytesSpilled() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- memoryBytesSpilled() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- MemoryEntry<T> - Interface in org.apache.spark.storage.memory
-
- memoryManager() - Method in class org.apache.spark.SparkEnv
-
- memoryMetrics() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- MemoryMetrics - Class in org.apache.spark.status.api.v1
-
- memoryMode() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- memoryMode() - Method in interface org.apache.spark.storage.memory.MemoryEntry
-
- memoryMode() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- MemoryParam - Class in org.apache.spark.util
-
An extractor object for parsing JVM memory strings, such as "10g", into an Int representing
the number of megabytes.
- MemoryParam() - Constructor for class org.apache.spark.util.MemoryParam
-
- memoryPerExecutorMB() - Method in class org.apache.spark.status.api.v1.ApplicationInfo
-
- memoryRemaining() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
-
- memoryStringToMb(String) - Static method in class org.apache.spark.util.Utils
-
Convert a Java memory parameter passed to -Xmx (such as 300m or 1g) to a number of mebibytes.
- memoryUsed() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
-
- memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
-
- memoryUsed() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
-
- memRemaining() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the memory remaining in this block manager.
- memSize() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- memSize() - Method in class org.apache.spark.storage.BlockStatus
-
- memSize() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- memSize() - Method in class org.apache.spark.storage.RDDInfo
-
- memUsed() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the memory used by this block manager.
- memUsedByRdd(int) - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the memory used by the given RDD in this block manager in O(1) time.
- merge(R) - Method in class org.apache.spark.Accumulable
-
Deprecated.
Merge two accumulable objects together.
- merge(R) - Static method in class org.apache.spark.Accumulator
-
Deprecated.
- merge(LinearSVCAggregator) - Method in class org.apache.spark.ml.classification.LinearSVCAggregator
-
Merge another LinearSVCAggregator, and update the loss and gradient
of the objective function.
- merge(LogisticAggregator) - Method in class org.apache.spark.ml.classification.LogisticAggregator
-
Merge another LogisticAggregator, and update the loss and gradient
of the objective function.
- merge(ExpectationAggregator) - Method in class org.apache.spark.ml.clustering.ExpectationAggregator
-
Merge another ExpectationAggregator, update the weights, means and covariances
for each distribution, and update the log likelihood.
- merge(AFTAggregator) - Method in class org.apache.spark.ml.regression.AFTAggregator
-
Merge another AFTAggregator, and update the loss and gradient
of the objective function.
- merge(LeastSquaresAggregator) - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
-
Merge another LeastSquaresAggregator, and update the loss and gradient
of the objective function.
- merge(IDF.DocumentFrequencyAggregator) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
Merges another DocumentFrequencyAggregator into this one.
- merge(MultivariateOnlineSummarizer) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Merge another MultivariateOnlineSummarizer, and update the statistical summary.
- merge(BUF, BUF) - Method in class org.apache.spark.sql.expressions.Aggregator
-
Merge two intermediate values.
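For context, a minimal sketch of a typed Aggregator whose merge combines two partial buffers; the DoubleSum name is illustrative, not from this index:

    import org.apache.spark.sql.expressions.Aggregator
    import org.apache.spark.sql.{Encoder, Encoders}

    // Sums Double inputs; merge folds two partial sums from different partitions.
    object DoubleSum extends Aggregator[Double, Double, Double] {
      def zero: Double = 0.0
      def reduce(buf: Double, x: Double): Double = buf + x
      def merge(b1: Double, b2: Double): Double = b1 + b2  // the method indexed above
      def finish(buf: Double): Double = buf
      def bufferEncoder: Encoder[Double] = Encoders.scalaDouble
      def outputEncoder: Encoder[Double] = Encoders.scalaDouble
    }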
- merge(MutableAggregationBuffer, Row) - Method in class org.apache.spark.sql.expressions.UserDefinedAggregateFunction
-
Merges two aggregation buffers and stores the updated buffer values back to buffer1.
- merge(AccumulatorV2<IN, OUT>) - Method in class org.apache.spark.util.AccumulatorV2
-
Merges another same-type accumulator into this one and updates its state; the merge happens in place.
- merge(AccumulatorV2<T, List<T>>) - Method in class org.apache.spark.util.CollectionAccumulator
-
- merge(AccumulatorV2<Double, Double>) - Method in class org.apache.spark.util.DoubleAccumulator
-
- merge(AccumulatorV2<T, R>) - Method in class org.apache.spark.util.LegacyAccumulatorWrapper
-
- merge(AccumulatorV2<Long, Long>) - Method in class org.apache.spark.util.LongAccumulator
-
- merge(double) - Method in class org.apache.spark.util.StatCounter
-
Add a value into this StatCounter, updating the internal statistics.
- merge(TraversableOnce<Object>) - Method in class org.apache.spark.util.StatCounter
-
Add multiple values into this StatCounter, updating the internal statistics.
- merge(StatCounter) - Method in class org.apache.spark.util.StatCounter
-
Merge another StatCounter into this one, adding up the internal statistics.
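A minimal sketch of the three merge variants listed above; the sample values are assumptions:

    import org.apache.spark.util.StatCounter

    val a = StatCounter(Seq(1.0, 2.0, 3.0))
    a.merge(4.0)                    // a single value
    a.merge(Seq(5.0, 6.0))          // multiple values
    a.merge(StatCounter(Seq(7.0)))  // another StatCounter
    println(s"count=${a.count}, mean=${a.mean}, stdev=${a.stdev}")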
- mergeCombiners() - Method in class org.apache.spark.Aggregator
-
- mergeInPlace(BloomFilter) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Combines this Bloom filter with another Bloom filter by performing a bitwise OR of the
underlying data.
- mergeInPlace(CountMinSketch) - Method in class org.apache.spark.util.sketch.CountMinSketch
-
- mergeValue() - Method in class org.apache.spark.Aggregator
-
- message() - Method in class org.apache.spark.FetchFailed
-
- message() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
-
- message() - Static method in class org.apache.spark.scheduler.ExecutorKilled
-
- message() - Static method in class org.apache.spark.scheduler.LossReasonPending
-
- message() - Method in exception org.apache.spark.sql.AnalysisException
-
- message() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
- message() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
-
- MetaAlgorithmReadWrite - Class in org.apache.spark.ml.util
-
Default Meta-Algorithm read and write implementation.
- MetaAlgorithmReadWrite() - Constructor for class org.apache.spark.ml.util.MetaAlgorithmReadWrite
-
- metadata() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- Metadata - Class in org.apache.spark.sql.types
-
Metadata is a wrapper over Map[String, Any] that limits the value type to simple ones: Boolean,
Long, Double, String, Metadata, Array[Boolean], Array[Long], Array[Double], Array[String], and
Array[Metadata].
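A minimal sketch of building Metadata with MetadataBuilder and attaching it to a StructField; the key names and values are illustrative:

    import org.apache.spark.sql.types.{LongType, MetadataBuilder, StructField}

    val meta = new MetadataBuilder()
      .putString("description", "user id")
      .putLong("maxValue", 100L)
      .build()

    val field = StructField("id", LongType, nullable = false, metadata = meta)
    println(field.metadata.getLong("maxValue"))  // 100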
- metadata() - Method in class org.apache.spark.sql.types.StructField
-
- metadata() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- METADATA_KEY_DESCRIPTION() - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
The key for description in StreamInputInfo.metadata.
- MetadataBuilder - Class in org.apache.spark.sql.types
-
- MetadataBuilder() - Constructor for class org.apache.spark.sql.types.MetadataBuilder
-
- metadataDescription() - Method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- MetadataUtils - Class in org.apache.spark.ml.util
-
Helper utilities for algorithms using ML metadata.
- MetadataUtils() - Constructor for class org.apache.spark.ml.util.MetadataUtils
-
- Method(String, Function2<Object, Object, Object>) - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.Method
-
- method() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
-
- Method$() - Constructor for class org.apache.spark.mllib.stat.test.ChiSqTest.Method$
-
- MethodIdentifier<T> - Class in org.apache.spark.util
-
Helper class to identify a method.
- MethodIdentifier(Class<T>, String, String) - Constructor for class org.apache.spark.util.MethodIdentifier
-
- methodName() - Static method in class org.apache.spark.mllib.stat.test.StudentTTest
-
- methodName() - Static method in class org.apache.spark.mllib.stat.test.WelchTTest
-
- METRIC_COMPILATION_TIME() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the time it took to compile source code text (in milliseconds).
- METRIC_FILE_CACHE_HITS() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of files served from the file status cache instead of being
discovered from the filesystem.
- METRIC_FILES_DISCOVERED() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of files discovered off of the filesystem by InMemoryFileIndex.
- METRIC_GENERATED_CLASS_BYTECODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the bytecode size of each class generated by CodeGenerator.
- METRIC_GENERATED_METHOD_BYTECODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the bytecode size of each method in classes generated by CodeGenerator.
- METRIC_HIVE_CLIENT_CALLS() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of Hive client calls (e.g. to look up a table).
- METRIC_PARALLEL_LISTING_JOB_COUNT() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of Spark jobs launched for parallel file listing.
- METRIC_PARTITIONS_FETCHED() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Tracks the total number of partition metadata entries fetched via the client API.
- METRIC_SOURCE_CODE_SIZE() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
Histogram of the length of source code text compiled by CodeGenerator (in characters).
- metricName() - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
param for metric name in evaluation (supports "areaUnderROC"
(default), "areaUnderPR"
)
- metricName() - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
param for metric name in evaluation (supports "f1"
(default), "weightedPrecision"
,
"weightedRecall"
, "accuracy"
)
- metricName() - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
Param for metric name in evaluation.
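A minimal sketch of selecting a metric by name; predictions is an assumed DataFrame with "prediction" and "label" columns, and the supported names include "rmse" (default), "mse", "r2", and "mae":

    import org.apache.spark.ml.evaluation.RegressionEvaluator

    val evaluator = new RegressionEvaluator()
      .setMetricName("mse")
      .setLabelCol("label")
      .setPredictionCol("prediction")

    val mse = evaluator.evaluate(predictions)  // predictions: assumed DataFrame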
- metricRegistry() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
- metricRegistry() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
- metrics() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- metrics() - Method in class org.apache.spark.ui.jobs.UIData.TaskUIData
-
- METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
-
- metricsSystem() - Method in class org.apache.spark.SparkEnv
-
- MFDataGenerator - Class in org.apache.spark.mllib.util
-
:: DeveloperApi ::
Generate RDD(s) containing data for Matrix Factorization.
- MFDataGenerator() - Constructor for class org.apache.spark.mllib.util.MFDataGenerator
-
- microF1Measure() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the micro-averaged label-based f1-measure (equal to the micro-averaged
document-based f1-measure).
- microPrecision() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the micro-averaged label-based precision (equal to the micro-averaged
document-based precision).
- microRecall() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns the micro-averaged label-based recall (equal to the micro-averaged
document-based recall).
- mightContain(Object) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Returns true if the element might have been put in this Bloom filter, false if this is
definitely not the case.
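A minimal sketch of the probabilistic contract, together with the mergeInPlace entry above; the item values are assumptions:

    import org.apache.spark.util.sketch.BloomFilter

    val filter = BloomFilter.create(10000, 0.03)  // expected items, false-positive rate
    filter.putString("alice")

    filter.mightContainString("alice")  // always true: no false negatives
    filter.mightContainString("bob")    // usually false; true would be a false positive

    val other = BloomFilter.create(10000, 0.03)   // same parameters, so compatible
    other.putString("bob")
    filter.mergeInPlace(other)          // bitwise OR of the underlying bit arrays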
- mightContainBinary(byte[]) - Method in class org.apache.spark.util.sketch.BloomFilter
-
- mightContainLong(long) - Method in class org.apache.spark.util.sketch.BloomFilter
-
- mightContainString(String) - Method in class org.apache.spark.util.sketch.BloomFilter
-
- milliseconds() - Method in class org.apache.spark.streaming.Duration
-
- milliseconds(long) - Static method in class org.apache.spark.streaming.Durations
-
- Milliseconds - Class in org.apache.spark.streaming
-
Helper object that creates an instance of Duration representing
a given number of milliseconds.
- Milliseconds() - Constructor for class org.apache.spark.streaming.Milliseconds
-
- milliseconds() - Method in class org.apache.spark.streaming.Time
-
- millisToString(long) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
Reformat a time interval in milliseconds to a prettier format for output.
- min() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Returns the minimum element from this RDD as defined by
the default comparator (natural order).
- min(Comparator<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- min(Comparator<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- min(Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the minimum element from this RDD as defined by the specified
Comparator[T].
- min(Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- min(Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- min(Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- min(Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- min(Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- MIN() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- min() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- min() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- min() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- min() - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
Minimum value of each dimension.
- min() - Method in interface org.apache.spark.mllib.stat.MultivariateStatisticalSummary
-
Minimum value of each column.
- min(Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- min(Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- min(Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- min(Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- min(Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the min of this RDD as defined by the implicit Ordering[T].
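A minimal sketch of both forms; an existing SparkContext sc is assumed:

    val rdd = sc.parallelize(Seq(3, 1, 4, 1, 5))
    rdd.min()                         // 1, via the implicit Ordering[Int]
    rdd.min()(Ordering[Int].reverse)  // 5, with an explicit reversed Ordering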
- min(Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- min(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the minimum value of the expression in a group.
- min(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the minimum value of the column in a group.
- min(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the min value for each numeric column for each group.
- min(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the min value for each numeric column for each group.
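A minimal sketch of grouped min; df is an assumed DataFrame with a string "dept" column and numeric "salary" and "age" columns:

    df.groupBy("dept").min()                 // min of every numeric column per group
    df.groupBy("dept").min("salary", "age")  // min of the named columns only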
- min(Ordering<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- min(Duration) - Method in class org.apache.spark.streaming.Duration
-
- min(Time) - Method in class org.apache.spark.streaming.Time
-
- min() - Method in class org.apache.spark.util.StatCounter
-
- minBy(Function1<A, B>, Ordering<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- minConfidence() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- minConfidence() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- minCount() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- minCount() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- minDF() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- minDF() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- minDivisibleClusterSize() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- minDivisibleClusterSize() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- minDocFreq() - Static method in class org.apache.spark.ml.feature.IDF
-
- minDocFreq() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- minDocFreq() - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
-
- minDocFreq() - Method in class org.apache.spark.mllib.feature.IDF
-
- MinHashLSH - Class in org.apache.spark.ml.feature
-
:: Experimental ::
- MinHashLSH(String) - Constructor for class org.apache.spark.ml.feature.MinHashLSH
-
- MinHashLSH() - Constructor for class org.apache.spark.ml.feature.MinHashLSH
-
- MinHashLSHModel - Class in org.apache.spark.ml.feature
-
:: Experimental ::
- minInfoGain() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- minInfoGain() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- minInfoGain() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- minInfoGain() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- minInfoGain() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- minInfoGain() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- minInfoGain() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- minInfoGain() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- minInfoGain() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- minInfoGain() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- minInfoGain() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- minInfoGain() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- minInfoGain() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- minInstancesPerNode() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- minInstancesPerNode() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- MinMax() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- MinMaxScaler - Class in org.apache.spark.ml.feature
-
Rescale each feature individually to a common range [min, max] linearly using column summary
statistics, which is also known as min-max normalization or Rescaling.
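A minimal sketch of fitting and applying the scaler; dataset is an assumed DataFrame with a Vector column named "features":

    import org.apache.spark.ml.feature.MinMaxScaler

    val scaler = new MinMaxScaler()
      .setInputCol("features")
      .setOutputCol("scaledFeatures")
      .setMin(0.0)  // lower bound of the target range
      .setMax(1.0)  // upper bound of the target range

    val model = scaler.fit(dataset)       // learns per-column min/max statistics
    val scaled = model.transform(dataset)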
- MinMaxScaler(String) - Constructor for class org.apache.spark.ml.feature.MinMaxScaler
-
- MinMaxScaler() - Constructor for class org.apache.spark.ml.feature.MinMaxScaler
-
- MinMaxScalerModel - Class in org.apache.spark.ml.feature
-
- minorVersion(String) - Static method in class org.apache.spark.util.VersionUtils
-
Given a Spark version string, return the minor version number.
- minSamplingRate() - Static method in class org.apache.spark.util.random.BinomialBounds
-
- minSupport() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- minSupport() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- minTF() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- minTF() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- minTokenLength() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Minimum token length, greater than or equal to 0.
- minus(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- minus(VertexRDD<VD>) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- minus(RDD<Tuple2<Object, VD>>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each VertexId present in both this and other, minus will act as a set difference
operation returning only those unique VertexId's present in this.
- minus(VertexRDD<VD>) - Method in class org.apache.spark.graphx.VertexRDD
-
For each VertexId present in both this and other, minus will act as a set difference
operation returning only those unique VertexId's present in this.
- minus(Object) - Method in class org.apache.spark.sql.Column
-
Subtraction.
- minus(Duration) - Method in class org.apache.spark.streaming.Duration
-
- minus(Time) - Method in class org.apache.spark.streaming.Time
-
- minus(Duration) - Method in class org.apache.spark.streaming.Time
-
- minute(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the minutes as an integer from a given date/timestamp/string.
- minutes() - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- minutes(long) - Static method in class org.apache.spark.streaming.Durations
-
- Minutes - Class in org.apache.spark.streaming
-
Helper object that creates an instance of Duration representing
a given number of minutes.
- Minutes() - Constructor for class org.apache.spark.streaming.Minutes
-
- minVal() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- missingInput() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- missingInput() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- missingInput() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- missingValue() - Static method in class org.apache.spark.ml.feature.Imputer
-
- missingValue() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- mkList() - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- mkString() - Method in interface org.apache.spark.sql.Row
-
Displays all elements of this sequence in a string (without a separator).
- mkString(String) - Method in interface org.apache.spark.sql.Row
-
Displays all elements of this sequence in a string using a separator string.
- mkString(String, String, String) - Method in interface org.apache.spark.sql.Row
-
Displays all elements of this traversable or iterator in a string using
start, end, and separator strings.
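A minimal sketch of the three overloads; the Row contents are assumptions:

    import org.apache.spark.sql.Row

    val row = Row(1, "a", true)
    row.mkString                  // "1atrue"
    row.mkString(", ")            // "1, a, true"
    row.mkString("[", ", ", "]")  // "[1, a, true]"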
- mkString(String, String, String) - Static method in class org.apache.spark.sql.types.StructType
-
- mkString(String) - Static method in class org.apache.spark.sql.types.StructType
-
- mkString() - Static method in class org.apache.spark.sql.types.StructType
-
- ML_ATTR() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- mlDenseMatrixToMLlibDenseMatrix(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
-
- mlDenseVectorToMLlibDenseVector(DenseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
-
- mllibDenseMatrixToMLDenseMatrix(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
-
- mllibDenseVectorToMLDenseVector(DenseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
-
- mllibMatrixToMLMatrix(Matrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
-
- mllibSparseMatrixToMLSparseMatrix(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
-
- mllibSparseVectorToMLSparseVector(SparseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
-
- mllibVectorToMLVector(Vector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
-
- mlMatrixToMLlibMatrix(Matrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
-
- MLPairRDDFunctions<K,V> - Class in org.apache.spark.mllib.rdd
-
:: DeveloperApi ::
Machine learning specific Pair RDD functions.
- MLPairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>) - Constructor for class org.apache.spark.mllib.rdd.MLPairRDDFunctions
-
- MLReadable<T> - Interface in org.apache.spark.ml.util
-
Trait for objects that provide MLReader.
- MLReader<T> - Class in org.apache.spark.ml.util
-
Abstract class for utility classes that can load ML instances.
- MLReader() - Constructor for class org.apache.spark.ml.util.MLReader
-
- mlSparseMatrixToMLlibSparseMatrix(SparseMatrix) - Static method in class org.apache.spark.mllib.linalg.MatrixImplicits
-
- mlSparseVectorToMLlibSparseVector(SparseVector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
-
- MLUtils - Class in org.apache.spark.mllib.util
-
Helper methods to load, save and pre-process data used in MLlib.
- MLUtils() - Constructor for class org.apache.spark.mllib.util.MLUtils
-
- mlVectorToMLlibVector(Vector) - Static method in class org.apache.spark.mllib.linalg.VectorImplicits
-
- MLWritable - Interface in org.apache.spark.ml.util
-
Trait for classes that provide MLWriter.
- MLWriter - Class in org.apache.spark.ml.util
-
Abstract class for utility classes that can save ML instances.
- MLWriter() - Constructor for class org.apache.spark.ml.util.MLWriter
-
- mod(Object) - Method in class org.apache.spark.sql.Column
-
Modulo (a.k.a. remainder) expression.
- mode(SaveMode) - Method in class org.apache.spark.sql.DataFrameWriter
-
Specifies the behavior when data or table already exists.
- mode(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Specifies the behavior when data or table already exists.
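A minimal sketch of both overloads; df and the output path are assumptions:

    import org.apache.spark.sql.SaveMode

    df.write.mode(SaveMode.Overwrite).parquet("/tmp/out")  // enum form
    df.write.mode("append").parquet("/tmp/out")            // string form: "overwrite",
                                                           // "append", "ignore", "error"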
- mode() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- Model<M extends Model<M>> - Class in org.apache.spark.ml
-
- Model() - Constructor for class org.apache.spark.ml.Model
-
- models() - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- modelType() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- modelType() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- modelType() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- modelType() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- MODULE$ - Static variable in class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
-
Deprecated.
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
-
Deprecated.
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.AccumulatorParam.IntAccumulatorParam$
-
Deprecated.
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
-
Deprecated.
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.AccumulatorParam.StringAccumulatorParam$
-
Deprecated.
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.CanonicalRandomVertexCut$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.EdgePartition1D$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.EdgePartition2D$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.internal.io.FileCommitProtocol.EmptyTaskCommitMessage$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.input$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.output$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.shuffleRead$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.InternalAccumulator.shuffleWrite$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.feature.Word2VecModel.Word2VecModelWriter$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.InBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.Rating$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.recommendation.ALS.RatingBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.CLogLog$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Family$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.FamilyAndLink$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gamma$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Identity$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Inverse$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Link$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Log$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Logit$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.PowerIterationClustering.Assignment$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpan.Postfix$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.ChiSqTest.Method$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.ChiSqTest.NullHypothesis$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest.NullHypothesis$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.rdd.HadoopRDD.HadoopMapPartitionsWithSplitRDD$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.rdd.NewHadoopRDD.NewHadoopMapPartitionsWithSplitRDD$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.GetExecutorLossReason$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutors$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillExecutorsOnHost$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.LaunchTask$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.serializer.SerializationDebugger.ObjectStreamClassMethods$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.hive.HiveShim.HiveFunctionWrapper$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.CubeType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.GroupByType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.PivotType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.RelationalGroupedDataset.RollupType$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.Decimal.DecimalIsFractional$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.DecimalType.Expression$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.sql.types.DecimalType.Fixed$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.BlockManagerHeartbeat$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetBlockStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetExecutorEndpointRef$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocations$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetLocationsMultipleBlockIds$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetMatchingBlockIds$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetMemoryStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetPeers$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.GetStorageStatus$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.HasCachedBlocks$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveRdd$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.streaming.kafka.KafkaCluster.LeaderOffset$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.streaming.kafka.KafkaCluster.SimpleConsumerConfig$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.JettyUtils.ServletParams$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.InputMetricsUIData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.JobUIData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.OutputMetricsUIData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.ShuffleWriteMetricsUIData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData$
-
Static reference to the singleton instance of this Scala object.
- MODULE$ - Static variable in class org.apache.spark.ui.jobs.UIData.TaskUIData$
-
Static reference to the singleton instance of this Scala object.
- monotonically_increasing_id() - Static method in class org.apache.spark.sql.functions
-
A column expression that generates monotonically increasing 64-bit integers.
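A minimal sketch; df is an assumed DataFrame. The IDs are unique and monotonically increasing but not consecutive: the partition ID occupies the upper 31 bits and a per-partition counter the lower 33 bits:

    import org.apache.spark.sql.functions.monotonically_increasing_id

    val withId = df.withColumn("id", monotonically_increasing_id())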
- monotonicallyIncreasingId() - Static method in class org.apache.spark.sql.functions
-
- month(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the month as an integer from a given date/timestamp/string.
- months_between(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the number of months between dates date1 and date2.
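A minimal sketch using literal dates; a SparkSession spark is assumed:

    import org.apache.spark.sql.functions.{lit, months_between, to_date}

    spark.range(1)
      .select(months_between(to_date(lit("2017-03-31")), to_date(lit("2017-01-31"))))
      .show()  // 2.0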
- msDurationToString(long) - Static method in class org.apache.spark.util.Utils
-
Returns a human-readable string representing a duration such as "35ms".
- MsSqlServerDialect - Class in org.apache.spark.sql.jdbc
-
- MsSqlServerDialect() - Constructor for class org.apache.spark.sql.jdbc.MsSqlServerDialect
-
- mu() - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
- MulticlassClassificationEvaluator - Class in org.apache.spark.ml.evaluation
-
:: Experimental ::
Evaluator for multiclass classification, which expects two input columns: prediction and label.
- MulticlassClassificationEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- MulticlassClassificationEvaluator() - Constructor for class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- MulticlassMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for multiclass classification.
- MulticlassMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
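A minimal sketch; the (prediction, label) pairs and the SparkContext sc are assumptions:

    import org.apache.spark.mllib.evaluation.MulticlassMetrics

    val predictionAndLabels = sc.parallelize(Seq(
      (0.0, 0.0), (1.0, 1.0), (1.0, 0.0), (2.0, 2.0)))

    val metrics = new MulticlassMetrics(predictionAndLabels)
    println(metrics.accuracy)         // fraction of correct predictions
    println(metrics.confusionMatrix)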
- MultilabelMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for multilabel classification.
- MultilabelMetrics(RDD<Tuple2<double[], double[]>>) - Constructor for class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
- multiLabelValidator(int) - Static method in class org.apache.spark.mllib.util.DataValidators
-
Function to check if labels used for k-class multi-label classification are
in the range {0, 1, ..., k - 1}.
- MultilayerPerceptronClassificationModel - Class in org.apache.spark.ml.classification
-
Classification model based on the Multilayer Perceptron.
- MultilayerPerceptronClassifier - Class in org.apache.spark.ml.classification
-
Classifier trainer based on the Multilayer Perceptron.
- MultilayerPerceptronClassifier(String) - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- MultilayerPerceptronClassifier() - Constructor for class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- multiply(DenseMatrix) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- multiply(DenseVector) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- multiply(Vector) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- multiply(DenseMatrix) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Convenience method for Matrix-DenseMatrix multiplication.
- multiply(DenseVector) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Convenience method for Matrix-DenseVector multiplication.
- multiply(Vector) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Convenience method for Matrix-Vector multiplication.
- multiply(DenseMatrix) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- multiply(DenseVector) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- multiply(Vector) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- multiply(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- multiply(DenseVector) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- multiply(Vector) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- multiply(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- multiply(BlockMatrix, int) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- multiply(Matrix) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Multiply this matrix by a local matrix on the right.
- multiply(Matrix) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Multiply this matrix by a local matrix on the right.
- multiply(DenseMatrix) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convenience method for Matrix-DenseMatrix multiplication.
- multiply(DenseVector) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convenience method for Matrix-DenseVector multiplication.
- multiply(Vector) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Convenience method for Matrix-Vector multiplication.
- multiply(DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- multiply(DenseVector) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- multiply(Vector) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- multiply(Object) - Method in class org.apache.spark.sql.Column
-
Multiplication of this expression and another expression.
- MultivariateGaussian - Class in org.apache.spark.ml.stat.distribution
-
This class provides basic functionality for a Multivariate Gaussian (Normal) Distribution.
- MultivariateGaussian(Vector, Matrix) - Constructor for class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
- MultivariateGaussian - Class in org.apache.spark.mllib.stat.distribution
-
:: DeveloperApi ::
This class provides basic functionality for a Multivariate Gaussian (Normal) Distribution.
- MultivariateGaussian(Vector, Matrix) - Constructor for class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
- MultivariateOnlineSummarizer - Class in org.apache.spark.mllib.stat
-
:: DeveloperApi ::
MultivariateOnlineSummarizer implements
MultivariateStatisticalSummary
to compute the mean,
variance, minimum, maximum, counts, and nonzero counts for instances in sparse or dense vector
format in an online fashion.
- MultivariateOnlineSummarizer() - Constructor for class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
-
- MultivariateStatisticalSummary - Interface in org.apache.spark.mllib.stat
-
Trait for multivariate statistical summary of a data matrix.
- MutableAggregationBuffer - Class in org.apache.spark.sql.expressions
-
A Row representing a mutable aggregation buffer.
- MutableAggregationBuffer() - Constructor for class org.apache.spark.sql.expressions.MutableAggregationBuffer
-
- MutablePair<T1,T2> - Class in org.apache.spark.util
-
:: DeveloperApi ::
A tuple of 2 elements.
- MutablePair(T1, T2) - Constructor for class org.apache.spark.util.MutablePair
-
- MutablePair() - Constructor for class org.apache.spark.util.MutablePair
-
No-arg constructor for serialization.
- myName() - Method in class org.apache.spark.util.InnerClosureFinder
-
- MySQLDialect - Class in org.apache.spark.sql.jdbc
-
- MySQLDialect() - Constructor for class org.apache.spark.sql.jdbc.MySQLDialect
-
- p() - Method in class org.apache.spark.ml.feature.Normalizer
-
Normalization in L^p space.
- p(int) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- p(int) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- p(int) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- padTo(int, B, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- pageRank(double, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run a dynamic version of PageRank returning a graph with vertex attributes containing the
PageRank and edge attributes containing the normalized edge weight.
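A minimal sketch of this dynamic variant; the edge-list path and the SparkContext sc are assumptions, 0.0001 is the convergence tolerance, and the reset probability argument defaults to 0.15:

    import org.apache.spark.graphx.GraphLoader

    val graph = GraphLoader.edgeListFile(sc, "data/followers.txt")
    val ranks = graph.pageRank(0.0001).vertices  // (VertexId, rank) pairs
    ranks.sortBy(_._2, ascending = false).take(3).foreach(println)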
- PageRank - Class in org.apache.spark.graphx.lib
-
PageRank algorithm implementation.
- PageRank() - Constructor for class org.apache.spark.graphx.lib.PageRank
-
- PairDStreamFunctions<K,V> - Class in org.apache.spark.streaming.dstream
-
Extra functions available on DStream of (key, value) pairs through an implicit conversion.
- PairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Constructor for class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
- PairFlatMapFunction<T,K,V> - Interface in org.apache.spark.api.java.function
-
A function that returns zero or more key-value pair records from each input record.
- PairFunction<T,K,V> - Interface in org.apache.spark.api.java.function
-
A function that returns key-value pairs (Tuple2<K, V>), and can be used to
construct PairRDDs.
- PairRDDFunctions<K,V> - Class in org.apache.spark.rdd
-
Extra functions available on RDDs of (key, value) pairs through an implicit conversion.
- PairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Constructor for class org.apache.spark.rdd.PairRDDFunctions
-
- PairwiseRRDD<T> - Class in org.apache.spark.api.r
-
Form an RDD[(Int, Array[Byte])] from key-value pairs returned from R.
- PairwiseRRDD(RDD<T>, int, byte[], String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.PairwiseRRDD
-
- par() - Static method in class org.apache.spark.sql.types.StructType
-
- parallelize(List<T>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelize(List<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelize(Seq<T>, int, ClassTag<T>) - Method in class org.apache.spark.SparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizeDoubles(List<Double>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizeDoubles(List<Double>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizePairs(List<Tuple2<K, V>>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- parallelizePairs(List<Tuple2<K, V>>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Distribute a local Scala collection to form an RDD.
- Param<T> - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
A param with self-contained documentation and optionally a default value.
- Param(String, String, String, Function1<T, Object>) - Constructor for class org.apache.spark.ml.param.Param
-
- Param(Identifiable, String, String, Function1<T, Object>) - Constructor for class org.apache.spark.ml.param.Param
-
- Param(String, String, String) - Constructor for class org.apache.spark.ml.param.Param
-
- Param(Identifiable, String, String) - Constructor for class org.apache.spark.ml.param.Param
-
- param() - Method in class org.apache.spark.ml.param.ParamPair
-
- ParamGridBuilder - Class in org.apache.spark.ml.tuning
-
Builder for a param grid used in grid search-based model selection.
- ParamGridBuilder() - Constructor for class org.apache.spark.ml.tuning.ParamGridBuilder
-
- ParamMap - Class in org.apache.spark.ml.param
-
A param to value map.
- ParamMap() - Constructor for class org.apache.spark.ml.param.ParamMap
-
Creates an empty param map.
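A short sketch tying ParamGridBuilder and ParamMap together; the LogisticRegression estimator is just an example choice:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.tuning.ParamGridBuilder

    val lr = new LogisticRegression()
    // build() returns an Array[ParamMap], one map per parameter combination.
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .addGrid(lr.maxIter, Array(10, 50))
      .build()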
- paramMap() - Method in interface org.apache.spark.ml.param.Params
-
Internal param map for user-supplied values.
- ParamPair<T> - Class in org.apache.spark.ml.param
-
A param and its value.
- ParamPair(Param<T>, T) - Constructor for class org.apache.spark.ml.param.ParamPair
-
- params() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- params() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- params() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- params() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- params() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- params() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- params() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- params() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- params() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- params() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- params() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- params() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- params() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- params() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- params() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- params() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- params() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- params() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- params() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- params() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- params() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- params() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- params() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- params() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- params() - Static method in class org.apache.spark.ml.clustering.LDA
-
- params() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- params() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- params() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- params() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- params() - Static method in class org.apache.spark.ml.feature.Binarizer
-
- params() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- params() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- params() - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- params() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- params() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- params() - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- params() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- params() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- params() - Static method in class org.apache.spark.ml.feature.DCT
-
- params() - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- params() - Static method in class org.apache.spark.ml.feature.HashingTF
-
- params() - Static method in class org.apache.spark.ml.feature.IDF
-
- params() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- params() - Static method in class org.apache.spark.ml.feature.Imputer
-
- params() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- params() - Static method in class org.apache.spark.ml.feature.IndexToString
-
- params() - Static method in class org.apache.spark.ml.feature.Interaction
-
- params() - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- params() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- params() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- params() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- params() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- params() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- params() - Static method in class org.apache.spark.ml.feature.NGram
-
- params() - Static method in class org.apache.spark.ml.feature.Normalizer
-
- params() - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- params() - Static method in class org.apache.spark.ml.feature.PCA
-
- params() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- params() - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- params() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- params() - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- params() - Static method in class org.apache.spark.ml.feature.RFormula
-
- params() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- params() - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- params() - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- params() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- params() - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- params() - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- params() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- params() - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- params() - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- params() - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- params() - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- params() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- params() - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- params() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- params() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- params() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- params() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- Params - Interface in org.apache.spark.ml.param
-
:: DeveloperApi ::
Trait for components that take parameters.
- params() - Method in interface org.apache.spark.ml.param.Params
-
Returns all params sorted by their names.
- params() - Static method in class org.apache.spark.ml.Pipeline
-
- params() - Static method in class org.apache.spark.ml.PipelineModel
-
- params() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- params() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- params() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- params() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- params() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- params() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- params() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- params() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- params() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- params() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- params() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- params() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- params() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- params() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- ParamValidators - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
Factory methods for common validation functions for Param.isValid.
- ParamValidators() - Constructor for class org.apache.spark.ml.param.ParamValidators
-
- parent() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- parent() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- parent() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- parent() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- parent() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- parent() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- parent() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- parent() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- parent() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- parent() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- parent() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- parent() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- parent() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- parent() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- parent() - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- parent() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- parent() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- parent() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- parent() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- parent() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- parent() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- parent() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- parent() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- parent() - Method in class org.apache.spark.ml.Model
-
The parent estimator that produced this model.
- parent() - Static method in class org.apache.spark.ml.param.DoubleParam
-
- parent() - Static method in class org.apache.spark.ml.param.FloatParam
-
- parent() - Method in class org.apache.spark.ml.param.Param
-
- parent() - Static method in class org.apache.spark.ml.PipelineModel
-
- parent() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- parent() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- parent() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- parent() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.PipelineModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- parent_$eq(Estimator<M>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- parentIds() - Method in class org.apache.spark.scheduler.StageInfo
-
- parentIds() - Method in class org.apache.spark.storage.RDDInfo
-
- parentIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Get the parent index of the given node, or 0 if it is the root.
- parentState() - Static method in class org.apache.spark.sql.hive.HiveSessionStateBuilder
-
- parquet(String...) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads a Parquet file, returning the result as a DataFrame.
- parquet(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads a Parquet file, returning the result as a DataFrame.
- parquet(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads a Parquet file, returning the result as a DataFrame.
- parquet(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame in Parquet format at the specified path.
- parquet(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads a Parquet file stream, returning the result as a DataFrame.
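A minimal sketch of the batch and streaming Parquet entry points; the paths and the SparkSession named spark are assumptions:

    // Batch read and write.
    val df = spark.read.parquet("data/events.parquet")
    df.write.parquet("out/events.parquet")
    // File streams require an explicit schema.
    val stream = spark.readStream.schema(df.schema).parquet("in/events/")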
- parquetFile(String...) - Method in class org.apache.spark.sql.SQLContext
-
- parquetFile(Seq<String>) - Method in class org.apache.spark.sql.SQLContext
-
- parse(String) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- parse(String) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Parses a string produced by Vector.toString into a Vector.
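For example, parse round-trips the toString representation:

    import org.apache.spark.mllib.linalg.Vectors

    val v = Vectors.dense(1.0, 2.0, 3.0)
    val parsed = Vectors.parse(v.toString)  // equals v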
- parse(String) - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
Parses a string produced by LabeledPoint.toString into a LabeledPoint.
- parse(String) - Static method in class org.apache.spark.mllib.util.NumericParser
-
Parses a string into a Double, an Array[Double], or a Seq[Any].
- parseAll(Parsers.Parser<T>, Reader<Object>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- parseAll(Parsers.Parser<T>, Reader) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- parseAll(Parsers.Parser<T>, CharSequence) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- parseHostPort(String) - Static method in class org.apache.spark.util.Utils
-
- parseIgnoreCase(Class<E>, String) - Static method in class org.apache.spark.util.EnumUtil
-
- Parser(Function1<Reader<Object>, Parsers.ParseResult<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- parseStandaloneMasterUrls(String) - Static method in class org.apache.spark.util.Utils
-
Split the comma delimited string of master URLs into a list.
- PartialResult<R> - Class in org.apache.spark.partial
-
- PartialResult(R, boolean) - Constructor for class org.apache.spark.partial.PartialResult
-
- Partition - Interface in org.apache.spark
-
An identifier for a partition in an RDD.
- partition() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- partition() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- partition(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- partition() - Method in class org.apache.spark.streaming.kafka.OffsetRange
-
- partitionBy(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a copy of the RDD partitioned using the specified partitioner.
- partitionBy(PartitionStrategy) - Method in class org.apache.spark.graphx.Graph
-
Repartitions the edges in the graph according to partitionStrategy.
- partitionBy(PartitionStrategy, int) - Method in class org.apache.spark.graphx.Graph
-
Repartitions the edges in the graph according to partitionStrategy.
- partitionBy(PartitionStrategy) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- partitionBy(PartitionStrategy, int) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- partitionBy(Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return a copy of the RDD partitioned using the specified partitioner.
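A small sketch, assuming an existing SparkContext named sc:

    import org.apache.spark.HashPartitioner

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    // Hash-partition into 4 partitions so equal keys land in the same partition.
    val partitioned = pairs.partitionBy(new HashPartitioner(4))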
- partitionBy(String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Partitions the output by the given columns on the file system.
- partitionBy(Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Partitions the output by the given columns on the file system.
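For example (the DataFrame df and its year/month columns are hypothetical):

    df.write
      .partitionBy("year", "month")  // one directory per (year, month) value
      .parquet("out/events_by_month")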
- partitionBy(String, String...) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the partitioning defined.
- partitionBy(Column...) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the partitioning defined.
- partitionBy(String, Seq<String>) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the partitioning defined.
- partitionBy(Seq<Column>) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the partitioning defined.
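A minimal sketch of the window-spec builders above; the DataFrame df and its dept/salary columns are hypothetical:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, rank}

    val spec = Window.partitionBy("dept").orderBy(col("salary").desc)
    val ranked = df.withColumn("rank", rank().over(spec))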
- partitionBy(String, String...) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
- partitionBy(Column...) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
- partitionBy(String, Seq<String>) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
- partitionBy(Seq<Column>) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
- partitionBy(String...) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Partitions the output by the given columns on the file system.
- partitionBy(Seq<String>) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Partitions the output by the given columns on the file system.
- PartitionCoalescer - Interface in org.apache.spark.rdd
-
::DeveloperApi::
A PartitionCoalescer defines how to coalesce the partitions of a given RDD.
- partitioner() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- partitioner() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- partitioner() - Static method in class org.apache.spark.api.java.JavaRDD
-
- partitioner() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The partitioner of this RDD.
- partitioner() - Static method in class org.apache.spark.api.r.RRDD
-
- partitioner() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- partitioner() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
If partitionsRDD already has a partitioner, use it.
- partitioner() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- partitioner() - Static method in class org.apache.spark.graphx.VertexRDD
-
- Partitioner - Class in org.apache.spark
-
An object that defines how the elements in a key-value pair RDD are partitioned by key.
- Partitioner() - Constructor for class org.apache.spark.Partitioner
-
- partitioner() - Method in class org.apache.spark.rdd.CoGroupedRDD
-
- partitioner() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- partitioner() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- partitioner() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- partitioner() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- partitioner() - Method in class org.apache.spark.rdd.RDD
-
Optionally overridden by subclasses to specify how they are partitioned.
- partitioner() - Method in class org.apache.spark.rdd.ShuffledRDD
-
- partitioner() - Static method in class org.apache.spark.rdd.UnionRDD
-
- partitioner() - Method in class org.apache.spark.ShuffleDependency
-
- partitioner(Partitioner) - Method in class org.apache.spark.streaming.StateSpec
-
Set the partitioner by which the state RDDs generated by mapWithState
will be partitioned.
- PartitionGroup - Class in org.apache.spark.rdd
-
::DeveloperApi::
A group of Partitions. param: prefLoc preferred location for the partition group
- PartitionGroup(Option<String>) - Constructor for class org.apache.spark.rdd.PartitionGroup
-
- partitionID() - Method in class org.apache.spark.TaskCommitDenied
-
- partitionId() - Method in class org.apache.spark.TaskContext
-
The ID of the RDD partition that is computed by this task.
- PartitionLocations(RDD<?>) - Constructor for class org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
-
- PartitionPruningRDD<T> - Class in org.apache.spark.rdd
-
:: DeveloperApi ::
An RDD used to prune RDD partitions so we can avoid launching tasks on
all partitions.
- PartitionPruningRDD(RDD<T>, Function1<Object, Object>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.PartitionPruningRDD
-
- partitions() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- partitions() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- partitions() - Static method in class org.apache.spark.api.java.JavaRDD
-
- partitions() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Set of partitions in this RDD.
- partitions() - Static method in class org.apache.spark.api.r.RRDD
-
- partitions() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- partitions() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- partitions() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- partitions() - Static method in class org.apache.spark.graphx.VertexRDD
-
- partitions() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- partitions() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- partitions() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- partitions() - Method in class org.apache.spark.rdd.PartitionGroup
-
- partitions() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- partitions() - Method in class org.apache.spark.rdd.RDD
-
Get the array of partitions of this RDD, taking into account whether the
RDD is checkpointed or not.
- partitions() - Static method in class org.apache.spark.rdd.UnionRDD
-
- partitions() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
-
- partitionsRDD() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- partitionsRDD() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- PartitionStrategy - Interface in org.apache.spark.graphx
-
Represents the way edges are assigned to edge partitions based on their source and destination
vertex IDs.
- PartitionStrategy.CanonicalRandomVertexCut$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions by hashing the source and destination vertex IDs in a canonical
direction, resulting in a random vertex cut that colocates all edges between two vertices,
regardless of direction.
- PartitionStrategy.EdgePartition1D$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions using only the source vertex ID, colocating edges with the same
source.
- PartitionStrategy.EdgePartition2D$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions using a 2D partitioning of the sparse edge adjacency matrix,
guaranteeing a 2 * sqrt(numParts) bound on vertex replication.
- PartitionStrategy.RandomVertexCut$ - Class in org.apache.spark.graphx
-
Assigns edges to partitions by hashing the source and destination vertex IDs, resulting in a
random vertex cut that colocates all same-direction edges between two vertices.
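For example, to apply one of these strategies (graph is a hypothetical Graph):

    import org.apache.spark.graphx.PartitionStrategy

    // 2D partitioning bounds vertex replication at 2 * sqrt(numParts).
    val repartitioned = graph.partitionBy(PartitionStrategy.EdgePartition2D)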
- partsWithLocs() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
-
- partsWithoutLocs() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer.PartitionLocations
-
- patch(int, GenSeq<B>, int, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- path() - Method in class org.apache.spark.scheduler.InputFormatInfo
-
- path() - Method in class org.apache.spark.scheduler.SplitInfo
-
- pattern() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Regex pattern used to match delimiters if gaps is true or tokens if gaps is false.
- pc() - Method in class org.apache.spark.ml.feature.PCAModel
-
- pc() - Method in class org.apache.spark.mllib.feature.PCAModel
-
- PCA - Class in org.apache.spark.ml.feature
-
PCA trains a model to project vectors to a lower dimensional space of the top k
principal components.
- PCA(String) - Constructor for class org.apache.spark.ml.feature.PCA
-
- PCA() - Constructor for class org.apache.spark.ml.feature.PCA
-
- PCA - Class in org.apache.spark.mllib.feature
-
A feature transformer that projects vectors to a low-dimensional space using PCA.
- PCA(int) - Constructor for class org.apache.spark.mllib.feature.PCA
-
- PCAModel - Class in org.apache.spark.ml.feature
-
- PCAModel - Class in org.apache.spark.mllib.feature
-
Model fitted by PCA that can project vectors to a low-dimensional space using PCA.
- pdf(Vector) - Method in class org.apache.spark.ml.stat.distribution.MultivariateGaussian
-
Returns the density of this multivariate Gaussian at the given point x.
- pdf(Vector) - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
Returns the density of this multivariate Gaussian at the given point x.
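A small sketch of evaluating the density; the mean and covariance values here are arbitrary:

    import org.apache.spark.ml.linalg.{Matrices, Vectors}
    import org.apache.spark.ml.stat.distribution.MultivariateGaussian

    val g = new MultivariateGaussian(
      Vectors.dense(0.0, 0.0),
      Matrices.dense(2, 2, Array(1.0, 0.0, 0.0, 1.0)))  // identity covariance
    val density = g.pdf(Vectors.dense(0.5, -0.5))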
- PEAK_EXECUTION_MEMORY() - Static method in class org.apache.spark.InternalAccumulator
-
- PEAK_EXECUTION_MEMORY() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- PEAK_EXECUTION_MEMORY() - Static method in class org.apache.spark.ui.ToolTips
-
- peakExecutionMemory() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- PEARSON() - Static method in class org.apache.spark.mllib.stat.test.ChiSqTest
-
- PearsonCorrelation - Class in org.apache.spark.mllib.stat.correlation
-
Compute Pearson correlation for two RDDs of the type RDD[Double] or the correlation matrix
for an RDD of the type RDD[Vector].
- PearsonCorrelation() - Constructor for class org.apache.spark.mllib.stat.correlation.PearsonCorrelation
-
- pendingStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- percent_rank() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the relative rank (i.e. percentile) of rows within a window partition.
- percentile() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- percentile() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- percentile() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- percentiles() - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- percentilesHeader() - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- permutations() - Static method in class org.apache.spark.sql.types.StructType
-
- persist(StorageLevel) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Set this RDD's storage level to persist its values across operations after the first time
it is computed.
- persist(StorageLevel) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Set this RDD's storage level to persist its values across operations after the first time
it is computed.
- persist(StorageLevel) - Method in class org.apache.spark.api.java.JavaRDD
-
Set this RDD's storage level to persist its values across operations after the first time
it is computed.
- persist(StorageLevel) - Static method in class org.apache.spark.api.r.RRDD
-
- persist() - Static method in class org.apache.spark.api.r.RRDD
-
- persist(StorageLevel) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- persist() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- persist(StorageLevel) - Method in class org.apache.spark.graphx.Graph
-
Caches the vertices and edges associated with this graph at the specified storage level,
ignoring any target storage levels previously set.
- persist(StorageLevel) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
Persists the edge partitions at the specified storage level, ignoring any existing target
storage level.
- persist(StorageLevel) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- persist(StorageLevel) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
Persists the vertex partitions at the specified storage level, ignoring any existing target
storage level.
- persist(StorageLevel) - Static method in class org.apache.spark.graphx.VertexRDD
-
- persist() - Static method in class org.apache.spark.graphx.VertexRDD
-
- persist(StorageLevel) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Persists the underlying RDD with the specified storage level.
- persist(StorageLevel) - Method in class org.apache.spark.rdd.HadoopRDD
-
- persist(StorageLevel) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- persist() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- persist(StorageLevel) - Method in class org.apache.spark.rdd.NewHadoopRDD
-
- persist(StorageLevel) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- persist() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- persist(StorageLevel) - Method in class org.apache.spark.rdd.RDD
-
Set this RDD's storage level to persist its values across operations after the first time
it is computed.
- persist() - Method in class org.apache.spark.rdd.RDD
-
Persist this RDD with the default storage level (MEMORY_ONLY).
- persist(StorageLevel) - Static method in class org.apache.spark.rdd.UnionRDD
-
- persist() - Static method in class org.apache.spark.rdd.UnionRDD
-
- persist() - Method in class org.apache.spark.sql.Dataset
-
Persist this Dataset with the default storage level (MEMORY_AND_DISK).
- persist(StorageLevel) - Method in class org.apache.spark.sql.Dataset
-
Persist this Dataset with the given storage level.
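A quick sketch contrasting the RDD and Dataset defaults (rdd and ds are assumed to exist):

    import org.apache.spark.storage.StorageLevel

    rdd.persist(StorageLevel.MEMORY_AND_DISK)  // RDDs default to MEMORY_ONLY
    ds.persist()                               // Datasets default to MEMORY_AND_DISK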
- persist() - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- persist(StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Persist the RDDs of this DStream with the given storage level
- persist() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- persist(StorageLevel) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- persist() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- persist(StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Persist the RDDs of this DStream with the given storage level
- persist() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- persist(StorageLevel) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- persist() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- persist(StorageLevel) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- persist() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- persist(StorageLevel) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- persist(StorageLevel) - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist the RDDs of this DStream with the given storage level
- persist() - Method in class org.apache.spark.streaming.dstream.DStream
-
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER)
- persist$default$1() - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
- personalizedPageRank(long, double, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run personalized PageRank for a given vertex, such that all random walks
are started relative to the source node.
- phrase(Parsers.Parser<T>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- pi() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- pi() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- pi() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
-
- pi() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- pickBin(Partition, RDD<?>, double, DefaultPartitionCoalescer.PartitionLocations) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Takes a parent RDD partition and decides which partition group to put it in.
Takes locality into account, but also uses power-of-two choices to load balance.
It strikes a balance between the two using the balanceSlack variable.
- pickRandomVertex() - Method in class org.apache.spark.graphx.GraphOps
-
Picks a random vertex from the graph and returns its ID.
- pipe(String) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- pipe(List<String>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- pipe(List<String>, Map<String, String>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- pipe(List<String>, Map<String, String>, boolean, int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- pipe(List<String>, Map<String, String>, boolean, int, String) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- pipe(String) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- pipe(List<String>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- pipe(List<String>, Map<String, String>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- pipe(List<String>, Map<String, String>, boolean, int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- pipe(List<String>, Map<String, String>, boolean, int, String) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- pipe(String) - Static method in class org.apache.spark.api.java.JavaRDD
-
- pipe(List<String>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- pipe(List<String>, Map<String, String>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- pipe(List<String>, Map<String, String>, boolean, int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- pipe(List<String>, Map<String, String>, boolean, int, String) - Static method in class org.apache.spark.api.java.JavaRDD
-
- pipe(String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>, Map<String, String>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>, Map<String, String>, boolean, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(List<String>, Map<String, String>, boolean, int, String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an RDD created by piping elements to a forked external process.
- pipe(String) - Static method in class org.apache.spark.api.r.RRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.api.r.RRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.api.r.RRDD
-
- pipe(String) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe(String) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe(String) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe(String) - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe(String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe(String) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe(String) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe(String) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe(String) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by piping elements to a forked external process.
- pipe(String, Map<String, String>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by piping elements to a forked external process.
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD created by piping elements to a forked external process.
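For example, assuming a SparkContext named sc and that grep is on each executor's PATH:

    val words = sc.parallelize(Seq("spark", "hadoop", "flink"))
    // Each element is written to the process's stdin; its stdout becomes the new RDD.
    val withA = words.pipe("grep a")  // keeps "spark" and "hadoop"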
- pipe(String) - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe(String, Map<String, String>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe(Seq<String>, Map<String, String>, Function1<Function1<String, BoxedUnit>, BoxedUnit>, Function2<T, Function1<String, BoxedUnit>, BoxedUnit>, boolean, int, String) - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- pipe$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe$default$3() - Static method in class org.apache.spark.api.r.RRDD
-
- pipe$default$3() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe$default$3() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe$default$3() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe$default$3() - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe$default$3() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe$default$3() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe$default$3() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe$default$3() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe$default$3() - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe$default$4() - Static method in class org.apache.spark.api.r.RRDD
-
- pipe$default$4() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe$default$4() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe$default$4() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe$default$4() - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe$default$4() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe$default$4() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe$default$4() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe$default$4() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe$default$4() - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe$default$5() - Static method in class org.apache.spark.api.r.RRDD
-
- pipe$default$5() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe$default$5() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe$default$5() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe$default$5() - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe$default$5() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe$default$5() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe$default$5() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe$default$5() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe$default$5() - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe$default$6() - Static method in class org.apache.spark.api.r.RRDD
-
- pipe$default$6() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe$default$6() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe$default$6() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe$default$6() - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe$default$6() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe$default$6() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe$default$6() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe$default$6() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe$default$6() - Static method in class org.apache.spark.rdd.UnionRDD
-
- pipe$default$7() - Static method in class org.apache.spark.api.r.RRDD
-
- pipe$default$7() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- pipe$default$7() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- pipe$default$7() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- pipe$default$7() - Static method in class org.apache.spark.graphx.VertexRDD
-
- pipe$default$7() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- pipe$default$7() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- pipe$default$7() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- pipe$default$7() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- pipe$default$7() - Static method in class org.apache.spark.rdd.UnionRDD
-
- Pipeline - Class in org.apache.spark.ml
-
A simple pipeline, which acts as an estimator.
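A minimal sketch of assembling stages; the column names text/words/features are hypothetical:

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression()
    // fit() runs the stages in order and returns a PipelineModel.
    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))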
- Pipeline(String) - Constructor for class org.apache.spark.ml.Pipeline
-
- Pipeline() - Constructor for class org.apache.spark.ml.Pipeline
-
- Pipeline.SharedReadWrite$ - Class in org.apache.spark.ml
-
- PipelineModel - Class in org.apache.spark.ml
-
Represents a fitted pipeline.
- PipelineStage - Class in org.apache.spark.ml
-
- PipelineStage() - Constructor for class org.apache.spark.ml.PipelineStage
-
- pivot(String) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(String, Seq<Object>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
- pivot(String, List<Object>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Pivots a column of the current DataFrame and performs the specified aggregation.
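For example (df with year, course, and earnings columns is hypothetical):

    val pivoted = df.groupBy("year")
      .pivot("course", Seq("dotNET", "Java"))  // listing values skips a pass to compute them
      .sum("earnings")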
- PivotType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.PivotType$
-
- plan() - Method in exception org.apache.spark.sql.AnalysisException
-
- plus(Object) - Method in class org.apache.spark.sql.Column
-
Sum of this expression and another expression.
- plus(Duration) - Method in class org.apache.spark.streaming.Duration
-
- plus(Duration) - Method in class org.apache.spark.streaming.Time
-
- PMMLExportable - Interface in org.apache.spark.mllib.pmml
-
:: DeveloperApi ::
Export a model to the PMML format. Predictive Model Markup Language (PMML) is an
XML-based file format developed by the Data Mining Group (www.dmg.org).
- PMMLModelExportFactory - Class in org.apache.spark.mllib.pmml.export
-
- PMMLModelExportFactory() - Constructor for class org.apache.spark.mllib.pmml.export.PMMLModelExportFactory
-
- pmod(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the positive value of dividend mod divisor.
- point() - Method in class org.apache.spark.mllib.feature.VocabWord
-
- POINTS() - Static method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
- Poisson$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Poisson$
-
- PoissonBounds - Class in org.apache.spark.util.random
-
Utility functions that help us determine bounds on adjusted sampling rate to guarantee exact
sample sizes with high confidence when sampling with replacement.
- PoissonBounds() - Constructor for class org.apache.spark.util.random.PoissonBounds
-
- PoissonGenerator - Class in org.apache.spark.mllib.random
-
:: DeveloperApi ::
Generates i.i.d. samples from the Poisson distribution with the given mean.
- PoissonGenerator(double) - Constructor for class org.apache.spark.mllib.random.PoissonGenerator
-
- poissonJavaRDD(JavaSparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.poissonRDD.
- poissonJavaRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaRDD with the default seed.
- poissonJavaRDD(JavaSparkContext, double, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaRDD with the default number of partitions and the default seed.
- poissonJavaVectorRDD(JavaSparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Java-friendly version of RandomRDDs.poissonVectorRDD.
- poissonJavaVectorRDD(JavaSparkContext, double, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaVectorRDD with the default seed.
- poissonJavaVectorRDD(JavaSparkContext, double, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
RandomRDDs.poissonJavaVectorRDD with the default number of partitions and the default seed.
- poissonRDD(SparkContext, double, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
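A small sketch, assuming an existing SparkContext named sc:

    import org.apache.spark.mllib.random.RandomRDDs

    // 10000 i.i.d. Poisson(mean = 4.0) samples in 4 partitions, with a fixed seed.
    val samples = RandomRDDs.poissonRDD(sc, 4.0, 10000L, 4, 42L)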
- PoissonSampler<T> - Class in org.apache.spark.util.random
-
:: DeveloperApi ::
A sampler for sampling with replacement, based on values drawn from the Poisson distribution.
- PoissonSampler(double, boolean) - Constructor for class org.apache.spark.util.random.PoissonSampler
-
- PoissonSampler(double) - Constructor for class org.apache.spark.util.random.PoissonSampler
-
- poissonVectorRDD(SparkContext, double, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
Poisson distribution with the input mean.
- PolynomialExpansion - Class in org.apache.spark.ml.feature
-
Perform feature expansion in a polynomial space.
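For example, a degree-2 expansion (the column names are hypothetical):

    import org.apache.spark.ml.feature.PolynomialExpansion

    val px = new PolynomialExpansion()
      .setInputCol("features")
      .setOutputCol("polyFeatures")
      .setDegree(2)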
- PolynomialExpansion(String) - Constructor for class org.apache.spark.ml.feature.PolynomialExpansion
-
- PolynomialExpansion() - Constructor for class org.apache.spark.ml.feature.PolynomialExpansion
-
- poolToActiveStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- popStdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population standard deviation of this RDD's elements.
- popStdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population standard deviation of this RDD's elements.
- popStdev() - Method in class org.apache.spark.util.StatCounter
-
Return the population standard deviation of the values.
- popVariance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population variance of this RDD's elements.
- popVariance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population variance of this RDD's elements.
- popVariance() - Method in class org.apache.spark.util.StatCounter
-
Return the population variance of the values.
- port() - Method in interface org.apache.spark.SparkExecutorInfo
-
- port() - Method in class org.apache.spark.SparkExecutorInfoImpl
-
- port() - Method in class org.apache.spark.storage.BlockManagerId
-
- port() - Method in class org.apache.spark.streaming.kafka.Broker
-
Broker's port.
- port() - Method in class org.apache.spark.streaming.kafka.KafkaCluster.LeaderOffset
-
- PortableDataStream - Class in org.apache.spark.input
-
A class that allows DataStreams to be serialized and moved around by not creating them until they need to be read.
- PortableDataStream(CombineFileSplit, TaskAttemptContext, Integer) - Constructor for class org.apache.spark.input.PortableDataStream
-
- portMaxRetries(SparkConf) - Static method in class org.apache.spark.util.Utils
-
Maximum number of retries when binding to a port before giving up.
- posexplode(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element with position in the given array or map column.
- posexplode_outer(Column) - Static method in class org.apache.spark.sql.functions
-
Creates a new row for each element with position in the given array or map column.
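A brief sketch (assuming a DataFrame df with an id column and an array column named "values"; the names are illustrative):

    import org.apache.spark.sql.functions.{col, posexplode, posexplode_outer}

    // One output row per array element, emitted as (pos, col) columns
    df.select(col("id"), posexplode(col("values")))
    // The _outer variant also keeps rows whose array is null or empty
    df.select(col("id"), posexplode_outer(col("values")))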
- position() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
-
- positioned(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- Postfix$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.Postfix$
-
- PostgresDialect - Class in org.apache.spark.sql.jdbc
-
- PostgresDialect() - Constructor for class org.apache.spark.sql.jdbc.PostgresDialect
-
- pow(Column, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(Column, String) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(String, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(String, String) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(Column, double) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(String, double) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(double, Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
- pow(double, String) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the first argument raised to the power of the second argument.
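All of these overloads compute the same thing and differ only in whether each argument is a Column, a column name, or a double literal. For example (column names illustrative):

    import org.apache.spark.sql.functions.{col, pow}

    df.select(
      pow(col("base"), col("exp")), // both arguments as columns
      pow("base", 2.0),             // column name plus a literal exponent
      pow(2.0, col("exp"))          // literal base plus a column
    )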
- PowerIterationClustering - Class in org.apache.spark.mllib.clustering
-
Power Iteration Clustering (PIC), a scalable graph clustering algorithm developed by
Lin and Cohen.
- PowerIterationClustering() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Constructs a PIC instance with default parameters: {k: 2, maxIterations: 100,
initMode: "random"}.
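A minimal sketch of running PIC (assuming similarities: RDD[(Long, Long, Double)] of (srcId, dstId, similarity) tuples; k and the iteration count are arbitrary):

    import org.apache.spark.mllib.clustering.PowerIterationClustering

    val pic = new PowerIterationClustering()
      .setK(3)
      .setMaxIterations(20)
    val model = pic.run(similarities)
    model.assignments.collect().foreach { a =>
      println(s"vertex ${a.id} -> cluster ${a.cluster}")
    }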
- PowerIterationClustering.Assignment - Class in org.apache.spark.mllib.clustering
-
Cluster assignment.
- PowerIterationClustering.Assignment$ - Class in org.apache.spark.mllib.clustering
-
- PowerIterationClusteringModel - Class in org.apache.spark.mllib.clustering
-
- PowerIterationClusteringModel(int, RDD<PowerIterationClustering.Assignment>) - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
-
- PowerIterationClusteringModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.clustering
-
- pr() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
Returns the precision-recall curve, which is a DataFrame containing two fields (recall, precision), with (0.0, 1.0) prepended to it.
- pr() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the precision-recall curve, which is an RDD of (recall, precision),
NOT (precision, recall), with (0.0, 1.0) prepended to it.
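A short sketch of reading off the curve (assuming scoreAndLabels: RDD[(Double, Double)] of (score, label) pairs):

    import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

    val metrics = new BinaryClassificationMetrics(scoreAndLabels)
    val prCurve = metrics.pr()          // RDD[(recall, precision)]
    val auPR    = metrics.areaUnderPR() // scalar summary of the same curve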
- Precision - Class in org.apache.spark.mllib.evaluation.binary
-
Precision.
- Precision() - Constructor for class org.apache.spark.mllib.evaluation.binary.Precision
-
- precision(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns precision for a given label (category).
- precision() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
- precision() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns document-based precision averaged by the number of documents.
- precision(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns precision for a given label (category).
- precision() - Method in class org.apache.spark.sql.types.Decimal
-
- precision() - Method in class org.apache.spark.sql.types.DecimalType
-
- precisionAt(int) - Method in class org.apache.spark.mllib.evaluation.RankingMetrics
-
Compute the average precision of all the queries, truncated at ranking position k.
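A short sketch (assuming predictionAndLabels: RDD[(Array[Int], Array[Int])] pairing each query's ranked predictions with its ground-truth items):

    import org.apache.spark.mllib.evaluation.RankingMetrics

    val metrics = new RankingMetrics(predictionAndLabels)
    val p5 = metrics.precisionAt(5) // mean precision@5 over all queries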
- precisionByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
Returns a DataFrame with two fields (threshold, precision) representing the precision curve by threshold.
- precisionByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, precision) curve.
- predict(Vector) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- predict(RDD<Vector>) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
-
Predict values for the given data set using the model trained.
- predict(Vector) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
-
Predict values for a single data point using the model trained.
- predict(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.classification.ClassificationModel
-
Predict values for examples stored in a JavaRDD.
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- predict(Vector) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- predict(Vector) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- predict(Vector) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- predict(Vector) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Predicts the index of the cluster that the input point belongs to.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Predicts the indices of the clusters that the input points belong to.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
Java-friendly version of predict().
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Maps given points to their cluster indices.
- predict(Vector) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Maps given point to its cluster index.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Java-friendly version of predict().
- predict(Vector) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Returns the cluster index that a given point belongs to.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Maps given points to their cluster indices.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
Maps given points to their cluster indices.
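A minimal sketch of training and predicting (assuming data: RDD[Vector]; k and the iteration count are arbitrary):

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    val model    = KMeans.train(data, 2, 20)              // k = 2, maxIterations = 20
    val cluster  = model.predict(Vectors.dense(0.5, 0.5)) // index of the closest center
    val clusters = model.predict(data)                    // RDD[Int] of indices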
- predict(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Predict the rating of one user for one product.
- predict(RDD<Tuple2<Object, Object>>) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Predict the rating of many users for many products.
- predict(JavaPairRDD<Integer, Integer>) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Java-friendly version of MatrixFactorizationModel.predict.
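A brief sketch (assuming model came from ALS.train and userProducts: RDD[(Int, Int)]; the ids are hypothetical):

    val rating  = model.predict(1, 42)        // predicted rating of user 1 for product 42
    val ratings = model.predict(userProducts) // RDD[Rating] for many (user, product) pairs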
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
Predict values for the given data set using the model trained.
- predict(Vector) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
Predict values for a single data point using the model trained.
- predict(RDD<Object>) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
Predict labels for provided features.
- predict(JavaDoubleRDD) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
Predict labels for provided features.
- predict(double) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
Predict a single label.
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- predict(Vector) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- predict(Vector) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- predict(RDD<Vector>) - Method in interface org.apache.spark.mllib.regression.RegressionModel
-
Predict values for the given data set using the model trained.
- predict(Vector) - Method in interface org.apache.spark.mllib.regression.RegressionModel
-
Predict values for a single data point using the model trained.
- predict(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.regression.RegressionModel
-
Predict values for examples stored in a JavaRDD.
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- predict(Vector) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- predict(Vector) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Predict values for a single data point using the model trained.
- predict(RDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Predict values for the given data set using the model trained.
- predict(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Predict values for the given data set using the model trained.
- predict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- predict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- predict(Vector) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- predict() - Method in class org.apache.spark.mllib.tree.model.Node
-
- predict(Vector) - Method in class org.apache.spark.mllib.tree.model.Node
-
Predict value if node is not leaf.
- Predict - Class in org.apache.spark.mllib.tree.model
-
:: DeveloperApi ::
Predicted value for a node.
param: predict predicted value
param: prob probability of the label (classification only)
- Predict(double, double) - Constructor for class org.apache.spark.mllib.tree.model.Predict
-
- predict() - Method in class org.apache.spark.mllib.tree.model.Predict
-
- predict(Vector) - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- predict(RDD<Vector>) - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- predict(JavaRDD<Vector>) - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- PredictData(double, double) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- prediction() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
-
- prediction() - Method in class org.apache.spark.ml.tree.InternalNode
-
- prediction() - Method in class org.apache.spark.ml.tree.LeafNode
-
- prediction() - Method in class org.apache.spark.ml.tree.Node
-
Prediction a leaf node makes, or which an internal node would make if it were a leaf node.
- predictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- predictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- predictionCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- predictionCol() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- predictionCol() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
-
- predictionCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- predictionCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- predictionCol() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- predictionCol() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- predictionCol() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- predictionCol() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- predictionCol() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- predictionCol() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- predictionCol() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- predictionCol() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- predictionCol() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Field in "predictions" which gives the predicted value of each instance.
- predictionCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- predictionCol() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- predictionCol() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- PredictionModel<FeaturesType,M extends PredictionModel<FeaturesType,M>> - Class in org.apache.spark.ml
-
:: DeveloperApi ::
Abstraction for a model for prediction tasks (regression and classification).
- PredictionModel() - Constructor for class org.apache.spark.ml.PredictionModel
-
- predictions() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
- predictions() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
DataFrame output by the model's transform method.
- predictions() - Method in class org.apache.spark.ml.clustering.ClusteringSummary
-
- predictions() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Predictions output by the model's transform method.
- predictions() - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
Predictions associated with the boundaries at the same index, monotone because of isotonic
regression.
- predictions() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
- predictions() - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
- predictOn(DStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Use the clustering model to make predictions on batches of data from a DStream.
- predictOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Java-friendly version of predictOn.
- predictOn(DStream<Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Use the model to make predictions on batches of data from a DStream.
- predictOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Java-friendly version of predictOn.
- predictOnValues(DStream<Tuple2<K, Vector>>, ClassTag<K>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Use the model to make predictions on the values of a DStream and carry over its keys.
- predictOnValues(JavaPairDStream<K, Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Java-friendly version of predictOnValues.
- predictOnValues(DStream<Tuple2<K, Vector>>, ClassTag<K>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Use the model to make predictions on the values of a DStream and carry over its keys.
- predictOnValues(JavaPairDStream<K, Vector>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Java-friendly version of predictOnValues.
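A brief sketch in the streaming setting (assuming model is a configured StreamingLinearRegressionWithSGD and labeled: DStream[LabeledPoint]):

    model.trainOn(labeled) // keep updating the model on each incoming batch
    model.predictOnValues(labeled.map(lp => (lp.label, lp.features))).print()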
- Predictor<FeaturesType,Learner extends Predictor<FeaturesType,Learner,M>,M extends PredictionModel<FeaturesType,M>> - Class in org.apache.spark.ml
-
:: DeveloperApi ::
Abstraction for prediction problems (regression and classification).
- Predictor() - Constructor for class org.apache.spark.ml.Predictor
-
- predictProbabilities(RDD<Vector>) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
Predict values for the given data set using the model trained.
- predictProbabilities(Vector) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
Predict posterior class probabilities for a single data point using the model trained.
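A short sketch (assuming model came from NaiveBayes.train; the feature values are arbitrary):

    import org.apache.spark.mllib.linalg.Vectors

    val posterior = model.predictProbabilities(Vectors.dense(1.0, 0.0, 3.0))
    // posterior(i) is the probability of the i-th class in model.labels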
- predictQuantiles(Vector) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- predictSoft(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Given the input vectors, return the membership value of each vector
to all mixture components.
- predictSoft(Vector) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
Given the input vector, return the membership values to all mixture components.
- preferredLocation() - Method in class org.apache.spark.streaming.receiver.Receiver
-
Override this to specify a preferred location (hostname).
- preferredLocations(Partition) - Static method in class org.apache.spark.api.r.RRDD
-
- preferredLocations(Partition) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- preferredLocations(Partition) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- preferredLocations(Partition) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- preferredLocations(Partition) - Static method in class org.apache.spark.graphx.VertexRDD
-
- preferredLocations(Partition) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- preferredLocations(Partition) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- preferredLocations(Partition) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- preferredLocations(Partition) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- preferredLocations(Partition) - Method in class org.apache.spark.rdd.RDD
-
Get the preferred locations of a partition, taking into account whether the
RDD is checkpointed.
- preferredLocations(Partition) - Static method in class org.apache.spark.rdd.UnionRDD
-
- Prefix$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan.Prefix$
-
- prefixesToRewrite() - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- prefixLength(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- PrefixSpan - Class in org.apache.spark.mllib.fpm
-
A parallel PrefixSpan algorithm to mine frequent sequential patterns.
- PrefixSpan() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpan
-
Constructs a default instance with default parameters: {minSupport: 0.1, maxPatternLength: 10, maxLocalProjDBSize: 32000000L}.
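A minimal sketch mirroring the usual usage (assuming sequences: RDD[Array[Array[Int]]], where each element is one sequence of itemsets):

    import org.apache.spark.mllib.fpm.PrefixSpan

    val prefixSpan = new PrefixSpan()
      .setMinSupport(0.5)
      .setMaxPatternLength(5)
    val model = prefixSpan.run(sequences)
    model.freqSequences.collect().foreach { fs =>
      println(fs.sequence.map(_.mkString("{", ",", "}")).mkString("<", "", ">") + ": " + fs.freq)
    }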
- PrefixSpan.FreqSequence<Item> - Class in org.apache.spark.mllib.fpm
-
Represents a frequent sequence.
- PrefixSpan.Postfix$ - Class in org.apache.spark.mllib.fpm
-
- PrefixSpan.Prefix$ - Class in org.apache.spark.mllib.fpm
-
- PrefixSpanModel<Item> - Class in org.apache.spark.mllib.fpm
-
Model fitted by PrefixSpan.
param: freqSequences frequent sequences
- PrefixSpanModel(RDD<PrefixSpan.FreqSequence<Item>>) - Constructor for class org.apache.spark.mllib.fpm.PrefixSpanModel
-
- PrefixSpanModel.SaveLoadV1_0$ - Class in org.apache.spark.mllib.fpm
-
- prefLoc() - Method in class org.apache.spark.rdd.PartitionGroup
-
- pregel(A, int, EdgeDirection, Function3<Object, VD, A, VD>, Function1<EdgeTriplet<VD, ED>, Iterator<Tuple2<Object, A>>>, Function2<A, A, A>, ClassTag<A>) - Method in class org.apache.spark.graphx.GraphOps
-
Execute a Pregel-like iterative vertex-parallel abstraction.
- Pregel - Class in org.apache.spark.graphx
-
Implements a Pregel-like bulk-synchronous message-passing API.
- Pregel() - Constructor for class org.apache.spark.graphx.Pregel
-
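As an illustration, the canonical single-source shortest-paths use of the Pregel operator (assuming graph: Graph[Long, Double] and a source vertex id sourceId):

    import org.apache.spark.graphx.{Graph, Pregel}

    val initialGraph = graph.mapVertices((id, _) =>
      if (id == sourceId) 0.0 else Double.PositiveInfinity)
    val sssp = Pregel(initialGraph, Double.PositiveInfinity)(
      (id, dist, newDist) => math.min(dist, newDist), // vertex program: keep shortest distance
      triplet =>                                      // send a message along improving edges
        if (triplet.srcAttr + triplet.attr < triplet.dstAttr)
          Iterator((triplet.dstId, triplet.srcAttr + triplet.attr))
        else Iterator.empty,
      (a, b) => math.min(a, b)                        // merge concurrent messages
    )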
- prepare() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- prepareWritable(Writable, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.sql.hive.HiveShim
-
- prepareWrite(SparkSession, Job, Map<String, String>, StructType) - Method in class org.apache.spark.sql.hive.execution.HiveFileFormat
-
- prepareWrite(SparkSession, Job, Map<String, String>, StructType) - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat
-
- prependBaseUri(String, String) - Static method in class org.apache.spark.ui.UIUtils
-
- prettyJson() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- prettyJson() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- prettyJson() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- prettyJson() - Method in class org.apache.spark.sql.streaming.SinkProgress
-
The pretty (i.e. indented) JSON representation of this progress.
- prettyJson() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
The pretty (i.e. indented) JSON representation of this progress.
- prettyJson() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
The pretty (i.e. indented) JSON representation of this progress.
- prettyJson() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The pretty (i.e. indented) JSON representation of this progress.
- prettyJson() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
-
The pretty (i.e. indented) JSON representation of this status.
- prettyJson() - Static method in class org.apache.spark.sql.types.ArrayType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.BinaryType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.BooleanType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.ByteType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.CharType
-
- prettyJson() - Method in class org.apache.spark.sql.types.DataType
-
The pretty (i.e. indented) JSON representation of this data type.
- prettyJson() - Static method in class org.apache.spark.sql.types.DateType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.DecimalType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.DoubleType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.FloatType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.HiveStringType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.IntegerType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.LongType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.MapType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.NullType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.NumericType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.ObjectType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.ShortType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.StringType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.StructType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.TimestampType
-
- prettyJson() - Static method in class org.apache.spark.sql.types.VarcharType
-
- prettyPrint() - Method in class org.apache.spark.streaming.Duration
-
- prev() - Method in class org.apache.spark.rdd.ShuffledRDD
-
- print() - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- print(int) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- print() - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Print the first ten elements of each RDD generated in this DStream.
- print(int) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Print the first num elements of each RDD generated in this DStream.
- print() - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- print(int) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- print() - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- print(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- print() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- print(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- print() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- print(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- print() - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- print(int) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- print() - Method in class org.apache.spark.streaming.dstream.DStream
-
Print the first ten elements of each RDD generated in this DStream.
- print(int) - Method in class org.apache.spark.streaming.dstream.DStream
-
Print the first num elements of each RDD generated in this DStream.
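For example (assuming a DStream named stream):

    stream.print()   // first 10 elements of each batch
    stream.print(25) // first 25 elements of each batch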
- printSchema() - Method in class org.apache.spark.sql.Dataset
-
Prints the schema to the console in a nice tree format.
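For example (assuming an existing SparkSession spark and a hypothetical JSON file):

    val df = spark.read.json("examples/people.json") // hypothetical path
    df.printSchema() // prints a tree: root, then one "|--" line per field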
- printSchema() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- printSchema() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- printSchema() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- printStackTrace() - Static method in exception org.apache.spark.sql.AnalysisException
-
- printStackTrace(PrintStream) - Static method in exception org.apache.spark.sql.AnalysisException
-
- printStackTrace(PrintWriter) - Static method in exception org.apache.spark.sql.AnalysisException
-
- printStats() - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
-
- printTreeString() - Method in class org.apache.spark.sql.types.StructType
-
- prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in class org.apache.spark.storage.BasicBlockReplicationPolicy
-
Method to prioritize a bunch of candidate peers of a block manager.
- prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in interface org.apache.spark.storage.BlockReplicationPolicy
-
Method to prioritize a bunch of candidate peers of a block.
- prioritize(BlockManagerId, Seq<BlockManagerId>, HashSet<BlockManagerId>, BlockId, int) - Method in class org.apache.spark.storage.RandomBlockReplicationPolicy
-
Method to prioritize a bunch of candidate peers of a block.
- prob() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- prob() - Method in class org.apache.spark.mllib.tree.model.Predict
-
- ProbabilisticClassificationModel<FeaturesType,M extends ProbabilisticClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
-
:: DeveloperApi ::
- ProbabilisticClassificationModel() - Constructor for class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- ProbabilisticClassifier<FeaturesType,E extends ProbabilisticClassifier<FeaturesType,E,M>,M extends ProbabilisticClassificationModel<FeaturesType,M>> - Class in org.apache.spark.ml.classification
-
:: DeveloperApi ::
- ProbabilisticClassifier() - Constructor for class org.apache.spark.ml.classification.ProbabilisticClassifier
-
- probabilities() - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- probability() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
-
Probability of each cluster.
- probabilityCol() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- probabilityCol() - Method in interface org.apache.spark.ml.classification.LogisticRegressionSummary
-
Field in "predictions" which gives the probability of each class as a vector.
- probabilityCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- probabilityCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- probabilityCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- probabilityCol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- probabilityCol() - Method in class org.apache.spark.ml.clustering.GaussianMixtureSummary
-
- Probit$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Probit$
-
- process(T) - Method in class org.apache.spark.sql.ForeachWriter
-
Called to process the data on the executor side.
- PROCESS_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- processAllAvailable() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Blocks until all available data in the source has been processed and committed to the sink.
- processedRowsPerSecond() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
- processedRowsPerSecond() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
The aggregate (across all sources) rate at which Spark is processing data.
- processingDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
Time taken for all the jobs of this batch to finish processing from the time they started processing.
- processingEndTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- processingStartTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- ProcessingTime - Class in org.apache.spark.sql.streaming
-
- ProcessingTime(long) - Constructor for class org.apache.spark.sql.streaming.ProcessingTime
-
Deprecated.
- ProcessingTime(long) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger policy that runs a query periodically based on an interval in processing time.
- ProcessingTime(long, TimeUnit) - Static method in class org.apache.spark.sql.streaming.Trigger
-
(Java-friendly)
A trigger policy that runs a query periodically based on an interval in processing time.
- ProcessingTime(Duration) - Static method in class org.apache.spark.sql.streaming.Trigger
-
(Scala-friendly)
A trigger policy that runs a query periodically based on an interval in processing time.
- ProcessingTime(String) - Static method in class org.apache.spark.sql.streaming.Trigger
-
A trigger policy that runs a query periodically based on an interval in processing time.
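A brief sketch of attaching such a trigger to a query (assuming a streaming DataFrame streamingDF):

    import org.apache.spark.sql.streaming.Trigger

    val query = streamingDF.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds")) // fire every 10 seconds
      .start()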
- processingTime() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- ProcessingTimeTimeout() - Static method in class org.apache.spark.sql.streaming.GroupStateTimeout
-
Timeout based on processing time.
- processStreamByLine(String, InputStream, Function1<String, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Return and start a daemon thread that processes the content of the input stream line by line.
- producedAttributes() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- producedAttributes() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- producedAttributes() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- product() - Method in class org.apache.spark.mllib.recommendation.Rating
-
- product(TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's product type (tuples, case classes, etc.).
- product(Numeric<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- productArity() - Static method in class org.apache.spark.Aggregator
-
- productArity() - Static method in class org.apache.spark.CleanAccum
-
- productArity() - Static method in class org.apache.spark.CleanBroadcast
-
- productArity() - Static method in class org.apache.spark.CleanCheckpoint
-
- productArity() - Static method in class org.apache.spark.CleanRDD
-
- productArity() - Static method in class org.apache.spark.CleanShuffle
-
- productArity() - Static method in class org.apache.spark.ExceptionFailure
-
- productArity() - Static method in class org.apache.spark.ExecutorLostFailure
-
- productArity() - Static method in class org.apache.spark.ExecutorRegistered
-
- productArity() - Static method in class org.apache.spark.ExecutorRemoved
-
- productArity() - Static method in class org.apache.spark.ExpireDeadHosts
-
- productArity() - Static method in class org.apache.spark.FetchFailed
-
- productArity() - Static method in class org.apache.spark.graphx.Edge
-
- productArity() - Static method in class org.apache.spark.ml.feature.Dot
-
- productArity() - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- productArity() - Static method in class org.apache.spark.ml.param.ParamPair
-
- productArity() - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- productArity() - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- productArity() - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- productArity() - Static method in class org.apache.spark.mllib.linalg.QRDecomposition
-
- productArity() - Static method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
- productArity() - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- productArity() - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
- productArity() - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- productArity() - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- productArity() - Static method in class org.apache.spark.mllib.tree.model.Split
-
- productArity() - Static method in class org.apache.spark.Resubmitted
-
- productArity() - Static method in class org.apache.spark.rpc.netty.OnStart
-
- productArity() - Static method in class org.apache.spark.rpc.netty.OnStop
-
- productArity() - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- productArity() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
-
- productArity() - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- productArity() - Static method in class org.apache.spark.scheduler.BlacklistedExecutor
-
- productArity() - Static method in class org.apache.spark.scheduler.JobSucceeded
-
- productArity() - Static method in class org.apache.spark.scheduler.local.KillTask
-
- productArity() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- productArity() - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- productArity() - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- productArity() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
-
- productArity() - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- productArity() - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- productArity() - Static method in class org.apache.spark.scheduler.StopCoordinator
-
- productArity() - Static method in class org.apache.spark.sql.DatasetHolder
-
- productArity() - Static method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- productArity() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- productArity() - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- productArity() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- productArity() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- productArity() - Static method in class org.apache.spark.sql.hive.RelationConversions
-
- productArity() - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- productArity() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- productArity() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- productArity() - Static method in class org.apache.spark.sql.sources.And
-
- productArity() - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- productArity() - Static method in class org.apache.spark.sql.sources.EqualTo
-
- productArity() - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- productArity() - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- productArity() - Static method in class org.apache.spark.sql.sources.In
-
- productArity() - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- productArity() - Static method in class org.apache.spark.sql.sources.IsNull
-
- productArity() - Static method in class org.apache.spark.sql.sources.LessThan
-
- productArity() - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- productArity() - Static method in class org.apache.spark.sql.sources.Not
-
- productArity() - Static method in class org.apache.spark.sql.sources.Or
-
- productArity() - Static method in class org.apache.spark.sql.sources.StringContains
-
- productArity() - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- productArity() - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- productArity() - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
Deprecated.
- productArity() - Static method in class org.apache.spark.sql.types.ArrayType
-
- productArity() - Static method in class org.apache.spark.sql.types.CharType
-
- productArity() - Static method in class org.apache.spark.sql.types.DecimalType
-
- productArity() - Static method in class org.apache.spark.sql.types.MapType
-
- productArity() - Static method in class org.apache.spark.sql.types.ObjectType
-
- productArity() - Static method in class org.apache.spark.sql.types.StructField
-
- productArity() - Static method in class org.apache.spark.sql.types.StructType
-
- productArity() - Static method in class org.apache.spark.sql.types.VarcharType
-
- productArity() - Static method in class org.apache.spark.StopMapOutputTracker
-
- productArity() - Static method in class org.apache.spark.storage.BlockStatus
-
- productArity() - Static method in class org.apache.spark.storage.BlockUpdatedInfo
-
- productArity() - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- productArity() - Static method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- productArity() - Static method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- productArity() - Static method in class org.apache.spark.storage.RDDBlockId
-
- productArity() - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- productArity() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- productArity() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- productArity() - Static method in class org.apache.spark.storage.StreamBlockId
-
- productArity() - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- productArity() - Static method in class org.apache.spark.streaming.Duration
-
- productArity() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- productArity() - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- productArity() - Static method in class org.apache.spark.streaming.Time
-
- productArity() - Static method in class org.apache.spark.Success
-
- productArity() - Static method in class org.apache.spark.TaskCommitDenied
-
- productArity() - Static method in class org.apache.spark.TaskKilled
-
- productArity() - Static method in class org.apache.spark.TaskResultLost
-
- productArity() - Static method in class org.apache.spark.TaskSchedulerIsSet
-
- productArity() - Static method in class org.apache.spark.UnknownReason
-
- productArity() - Static method in class org.apache.spark.util.MethodIdentifier
-
- productArity() - Static method in class org.apache.spark.util.MutablePair
-
- productElement(int) - Static method in class org.apache.spark.Aggregator
-
- productElement(int) - Static method in class org.apache.spark.CleanAccum
-
- productElement(int) - Static method in class org.apache.spark.CleanBroadcast
-
- productElement(int) - Static method in class org.apache.spark.CleanCheckpoint
-
- productElement(int) - Static method in class org.apache.spark.CleanRDD
-
- productElement(int) - Static method in class org.apache.spark.CleanShuffle
-
- productElement(int) - Static method in class org.apache.spark.ExceptionFailure
-
- productElement(int) - Static method in class org.apache.spark.ExecutorLostFailure
-
- productElement(int) - Static method in class org.apache.spark.ExecutorRegistered
-
- productElement(int) - Static method in class org.apache.spark.ExecutorRemoved
-
- productElement(int) - Static method in class org.apache.spark.ExpireDeadHosts
-
- productElement(int) - Static method in class org.apache.spark.FetchFailed
-
- productElement(int) - Static method in class org.apache.spark.graphx.Edge
-
- productElement(int) - Static method in class org.apache.spark.ml.feature.Dot
-
- productElement(int) - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- productElement(int) - Static method in class org.apache.spark.ml.param.ParamPair
-
- productElement(int) - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- productElement(int) - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- productElement(int) - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- productElement(int) - Static method in class org.apache.spark.mllib.linalg.QRDecomposition
-
- productElement(int) - Static method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
- productElement(int) - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- productElement(int) - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
- productElement(int) - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- productElement(int) - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- productElement(int) - Static method in class org.apache.spark.mllib.tree.model.Split
-
- productElement(int) - Static method in class org.apache.spark.Resubmitted
-
- productElement(int) - Static method in class org.apache.spark.rpc.netty.OnStart
-
- productElement(int) - Static method in class org.apache.spark.rpc.netty.OnStop
-
- productElement(int) - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- productElement(int) - Static method in class org.apache.spark.scheduler.AllJobsCancelled
-
- productElement(int) - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- productElement(int) - Static method in class org.apache.spark.scheduler.BlacklistedExecutor
-
- productElement(int) - Static method in class org.apache.spark.scheduler.JobSucceeded
-
- productElement(int) - Static method in class org.apache.spark.scheduler.local.KillTask
-
- productElement(int) - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- productElement(int) - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- productElement(int) - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- productElement(int) - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
-
- productElement(int) - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- productElement(int) - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- productElement(int) - Static method in class org.apache.spark.scheduler.StopCoordinator
-
- productElement(int) - Static method in class org.apache.spark.sql.DatasetHolder
-
- productElement(int) - Static method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- productElement(int) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- productElement(int) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- productElement(int) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- productElement(int) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- productElement(int) - Static method in class org.apache.spark.sql.hive.RelationConversions
-
- productElement(int) - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- productElement(int) - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- productElement(int) - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.And
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.EqualTo
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.In
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.IsNull
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.LessThan
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.Not
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.Or
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.StringContains
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- productElement(int) - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- productElement(int) - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
Deprecated.
- productElement(int) - Static method in class org.apache.spark.sql.types.ArrayType
-
- productElement(int) - Static method in class org.apache.spark.sql.types.CharType
-
- productElement(int) - Static method in class org.apache.spark.sql.types.DecimalType
-
- productElement(int) - Static method in class org.apache.spark.sql.types.MapType
-
- productElement(int) - Static method in class org.apache.spark.sql.types.ObjectType
-
- productElement(int) - Static method in class org.apache.spark.sql.types.StructField
-
- productElement(int) - Static method in class org.apache.spark.sql.types.StructType
-
- productElement(int) - Static method in class org.apache.spark.sql.types.VarcharType
-
- productElement(int) - Static method in class org.apache.spark.StopMapOutputTracker
-
- productElement(int) - Static method in class org.apache.spark.storage.BlockStatus
-
- productElement(int) - Static method in class org.apache.spark.storage.BlockUpdatedInfo
-
- productElement(int) - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- productElement(int) - Static method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- productElement(int) - Static method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- productElement(int) - Static method in class org.apache.spark.storage.RDDBlockId
-
- productElement(int) - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- productElement(int) - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- productElement(int) - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- productElement(int) - Static method in class org.apache.spark.storage.StreamBlockId
-
- productElement(int) - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- productElement(int) - Static method in class org.apache.spark.streaming.Duration
-
- productElement(int) - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- productElement(int) - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- productElement(int) - Static method in class org.apache.spark.streaming.Time
-
- productElement(int) - Static method in class org.apache.spark.Success
-
- productElement(int) - Static method in class org.apache.spark.TaskCommitDenied
-
- productElement(int) - Static method in class org.apache.spark.TaskKilled
-
- productElement(int) - Static method in class org.apache.spark.TaskResultLost
-
- productElement(int) - Static method in class org.apache.spark.TaskSchedulerIsSet
-
- productElement(int) - Static method in class org.apache.spark.UnknownReason
-
- productElement(int) - Static method in class org.apache.spark.util.MethodIdentifier
-
- productElement(int) - Static method in class org.apache.spark.util.MutablePair
-
- productFeatures() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
- productIterator() - Static method in class org.apache.spark.Aggregator
-
- productIterator() - Static method in class org.apache.spark.CleanAccum
-
- productIterator() - Static method in class org.apache.spark.CleanBroadcast
-
- productIterator() - Static method in class org.apache.spark.CleanCheckpoint
-
- productIterator() - Static method in class org.apache.spark.CleanRDD
-
- productIterator() - Static method in class org.apache.spark.CleanShuffle
-
- productIterator() - Static method in class org.apache.spark.ExceptionFailure
-
- productIterator() - Static method in class org.apache.spark.ExecutorLostFailure
-
- productIterator() - Static method in class org.apache.spark.ExecutorRegistered
-
- productIterator() - Static method in class org.apache.spark.ExecutorRemoved
-
- productIterator() - Static method in class org.apache.spark.ExpireDeadHosts
-
- productIterator() - Static method in class org.apache.spark.FetchFailed
-
- productIterator() - Static method in class org.apache.spark.graphx.Edge
-
- productIterator() - Static method in class org.apache.spark.ml.feature.Dot
-
- productIterator() - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- productIterator() - Static method in class org.apache.spark.ml.param.ParamPair
-
- productIterator() - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- productIterator() - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- productIterator() - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- productIterator() - Static method in class org.apache.spark.mllib.linalg.QRDecomposition
-
- productIterator() - Static method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
- productIterator() - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- productIterator() - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
- productIterator() - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- productIterator() - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- productIterator() - Static method in class org.apache.spark.mllib.tree.model.Split
-
- productIterator() - Static method in class org.apache.spark.Resubmitted
-
- productIterator() - Static method in class org.apache.spark.rpc.netty.OnStart
-
- productIterator() - Static method in class org.apache.spark.rpc.netty.OnStop
-
- productIterator() - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- productIterator() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
-
- productIterator() - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- productIterator() - Static method in class org.apache.spark.scheduler.BlacklistedExecutor
-
- productIterator() - Static method in class org.apache.spark.scheduler.JobSucceeded
-
- productIterator() - Static method in class org.apache.spark.scheduler.local.KillTask
-
- productIterator() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- productIterator() - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- productIterator() - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- productIterator() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
-
- productIterator() - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- productIterator() - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- productIterator() - Static method in class org.apache.spark.scheduler.StopCoordinator
-
- productIterator() - Static method in class org.apache.spark.sql.DatasetHolder
-
- productIterator() - Static method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- productIterator() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- productIterator() - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- productIterator() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- productIterator() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- productIterator() - Static method in class org.apache.spark.sql.hive.RelationConversions
-
- productIterator() - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- productIterator() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- productIterator() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- productIterator() - Static method in class org.apache.spark.sql.sources.And
-
- productIterator() - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- productIterator() - Static method in class org.apache.spark.sql.sources.EqualTo
-
- productIterator() - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- productIterator() - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- productIterator() - Static method in class org.apache.spark.sql.sources.In
-
- productIterator() - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- productIterator() - Static method in class org.apache.spark.sql.sources.IsNull
-
- productIterator() - Static method in class org.apache.spark.sql.sources.LessThan
-
- productIterator() - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- productIterator() - Static method in class org.apache.spark.sql.sources.Not
-
- productIterator() - Static method in class org.apache.spark.sql.sources.Or
-
- productIterator() - Static method in class org.apache.spark.sql.sources.StringContains
-
- productIterator() - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- productIterator() - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- productIterator() - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
Deprecated.
- productIterator() - Static method in class org.apache.spark.sql.types.ArrayType
-
- productIterator() - Static method in class org.apache.spark.sql.types.CharType
-
- productIterator() - Static method in class org.apache.spark.sql.types.DecimalType
-
- productIterator() - Static method in class org.apache.spark.sql.types.MapType
-
- productIterator() - Static method in class org.apache.spark.sql.types.ObjectType
-
- productIterator() - Static method in class org.apache.spark.sql.types.StructField
-
- productIterator() - Static method in class org.apache.spark.sql.types.StructType
-
- productIterator() - Static method in class org.apache.spark.sql.types.VarcharType
-
- productIterator() - Static method in class org.apache.spark.StopMapOutputTracker
-
- productIterator() - Static method in class org.apache.spark.storage.BlockStatus
-
- productIterator() - Static method in class org.apache.spark.storage.BlockUpdatedInfo
-
- productIterator() - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- productIterator() - Static method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- productIterator() - Static method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- productIterator() - Static method in class org.apache.spark.storage.RDDBlockId
-
- productIterator() - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- productIterator() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- productIterator() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- productIterator() - Static method in class org.apache.spark.storage.StreamBlockId
-
- productIterator() - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- productIterator() - Static method in class org.apache.spark.streaming.Duration
-
- productIterator() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- productIterator() - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- productIterator() - Static method in class org.apache.spark.streaming.Time
-
- productIterator() - Static method in class org.apache.spark.Success
-
- productIterator() - Static method in class org.apache.spark.TaskCommitDenied
-
- productIterator() - Static method in class org.apache.spark.TaskKilled
-
- productIterator() - Static method in class org.apache.spark.TaskResultLost
-
- productIterator() - Static method in class org.apache.spark.TaskSchedulerIsSet
-
- productIterator() - Static method in class org.apache.spark.UnknownReason
-
- productIterator() - Static method in class org.apache.spark.util.MethodIdentifier
-
- productIterator() - Static method in class org.apache.spark.util.MutablePair
-
- productPrefix() - Static method in class org.apache.spark.Aggregator
-
- productPrefix() - Static method in class org.apache.spark.CleanAccum
-
- productPrefix() - Static method in class org.apache.spark.CleanBroadcast
-
- productPrefix() - Static method in class org.apache.spark.CleanCheckpoint
-
- productPrefix() - Static method in class org.apache.spark.CleanRDD
-
- productPrefix() - Static method in class org.apache.spark.CleanShuffle
-
- productPrefix() - Static method in class org.apache.spark.ExceptionFailure
-
- productPrefix() - Static method in class org.apache.spark.ExecutorLostFailure
-
- productPrefix() - Static method in class org.apache.spark.ExecutorRegistered
-
- productPrefix() - Static method in class org.apache.spark.ExecutorRemoved
-
- productPrefix() - Static method in class org.apache.spark.ExpireDeadHosts
-
- productPrefix() - Static method in class org.apache.spark.FetchFailed
-
- productPrefix() - Static method in class org.apache.spark.graphx.Edge
-
- productPrefix() - Static method in class org.apache.spark.ml.feature.Dot
-
- productPrefix() - Static method in class org.apache.spark.ml.feature.LabeledPoint
-
- productPrefix() - Static method in class org.apache.spark.ml.param.ParamPair
-
- productPrefix() - Static method in class org.apache.spark.mllib.feature.VocabWord
-
- productPrefix() - Static method in class org.apache.spark.mllib.linalg.distributed.IndexedRow
-
- productPrefix() - Static method in class org.apache.spark.mllib.linalg.distributed.MatrixEntry
-
- productPrefix() - Static method in class org.apache.spark.mllib.linalg.QRDecomposition
-
- productPrefix() - Static method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
- productPrefix() - Static method in class org.apache.spark.mllib.recommendation.Rating
-
- productPrefix() - Static method in class org.apache.spark.mllib.regression.LabeledPoint
-
- productPrefix() - Static method in class org.apache.spark.mllib.stat.test.BinarySample
-
- productPrefix() - Static method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- productPrefix() - Static method in class org.apache.spark.mllib.tree.model.Split
-
- productPrefix() - Static method in class org.apache.spark.Resubmitted
-
- productPrefix() - Static method in class org.apache.spark.rpc.netty.OnStart
-
- productPrefix() - Static method in class org.apache.spark.rpc.netty.OnStop
-
- productPrefix() - Static method in class org.apache.spark.scheduler.AccumulableInfo
-
- productPrefix() - Static method in class org.apache.spark.scheduler.AllJobsCancelled
-
- productPrefix() - Static method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- productPrefix() - Static method in class org.apache.spark.scheduler.BlacklistedExecutor
-
- productPrefix() - Static method in class org.apache.spark.scheduler.JobSucceeded
-
- productPrefix() - Static method in class org.apache.spark.scheduler.local.KillTask
-
- productPrefix() - Static method in class org.apache.spark.scheduler.local.ReviveOffers
-
- productPrefix() - Static method in class org.apache.spark.scheduler.local.StatusUpdate
-
- productPrefix() - Static method in class org.apache.spark.scheduler.local.StopExecutor
-
- productPrefix() - Static method in class org.apache.spark.scheduler.ResubmitFailedStages
-
- productPrefix() - Static method in class org.apache.spark.scheduler.RuntimePercentage
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- productPrefix() - Static method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- productPrefix() - Static method in class org.apache.spark.scheduler.StopCoordinator
-
- productPrefix() - Static method in class org.apache.spark.sql.DatasetHolder
-
- productPrefix() - Static method in class org.apache.spark.sql.expressions.UserDefinedFunction
-
- productPrefix() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- productPrefix() - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- productPrefix() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- productPrefix() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- productPrefix() - Static method in class org.apache.spark.sql.hive.RelationConversions
-
- productPrefix() - Static method in class org.apache.spark.sql.jdbc.JdbcType
-
- productPrefix() - Static method in class org.apache.spark.sql.jdbc.MySQLDialect
-
- productPrefix() - Static method in class org.apache.spark.sql.jdbc.OracleDialect
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.And
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.EqualNullSafe
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.EqualTo
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.GreaterThan
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.In
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.IsNotNull
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.IsNull
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.LessThan
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.Not
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.Or
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.StringContains
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.StringEndsWith
-
- productPrefix() - Static method in class org.apache.spark.sql.sources.StringStartsWith
-
- productPrefix() - Static method in class org.apache.spark.sql.streaming.ProcessingTime
-
Deprecated.
- productPrefix() - Static method in class org.apache.spark.sql.types.ArrayType
-
- productPrefix() - Static method in class org.apache.spark.sql.types.CharType
-
- productPrefix() - Static method in class org.apache.spark.sql.types.DecimalType
-
- productPrefix() - Static method in class org.apache.spark.sql.types.MapType
-
- productPrefix() - Static method in class org.apache.spark.sql.types.ObjectType
-
- productPrefix() - Static method in class org.apache.spark.sql.types.StructField
-
- productPrefix() - Static method in class org.apache.spark.sql.types.StructType
-
- productPrefix() - Static method in class org.apache.spark.sql.types.VarcharType
-
- productPrefix() - Static method in class org.apache.spark.StopMapOutputTracker
-
- productPrefix() - Static method in class org.apache.spark.storage.BlockStatus
-
- productPrefix() - Static method in class org.apache.spark.storage.BlockUpdatedInfo
-
- productPrefix() - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- productPrefix() - Static method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- productPrefix() - Static method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- productPrefix() - Static method in class org.apache.spark.storage.RDDBlockId
-
- productPrefix() - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- productPrefix() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- productPrefix() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- productPrefix() - Static method in class org.apache.spark.storage.StreamBlockId
-
- productPrefix() - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- productPrefix() - Static method in class org.apache.spark.streaming.Duration
-
- productPrefix() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.AllReceiverIds
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.GetAllReceiverInfo
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- productPrefix() - Static method in class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- productPrefix() - Static method in class org.apache.spark.streaming.Time
-
- productPrefix() - Static method in class org.apache.spark.Success
-
- productPrefix() - Static method in class org.apache.spark.TaskCommitDenied
-
- productPrefix() - Static method in class org.apache.spark.TaskKilled
-
- productPrefix() - Static method in class org.apache.spark.TaskResultLost
-
- productPrefix() - Static method in class org.apache.spark.TaskSchedulerIsSet
-
- productPrefix() - Static method in class org.apache.spark.UnknownReason
-
- productPrefix() - Static method in class org.apache.spark.util.MethodIdentifier
-
- productPrefix() - Static method in class org.apache.spark.util.MutablePair
-
- progress() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryProgressEvent
-
- project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Binomial$
-
- project(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression.Gaussian$
-
- properties() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- properties() - Method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- propertiesFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- propertiesToJson(Properties) - Static method in class org.apache.spark.util.JsonProtocol
-
- provider() - Static method in class org.apache.spark.streaming.kinesis.DefaultCredentials
-
- proxyBase() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.AddWebUIFilter
-
- PrunedFilteredScan - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can eliminate unneeded columns and filter using selected
predicates before producing an RDD containing all matching tuples as Row objects.
- PrunedScan - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can eliminate unneeded columns before producing an RDD
containing all of its tuples as Row objects.
- Pseudorandom - Interface in org.apache.spark.util.random
-
:: DeveloperApi ::
A class with pseudorandom behavior.
- put(ParamPair<?>...) - Method in class org.apache.spark.ml.param.ParamMap
-
Puts a list of param pairs (overwrites if the input params exist).
- put(Param<T>, T) - Method in class org.apache.spark.ml.param.ParamMap
-
Puts a (param, value) pair (overwrites if the input param exists).
- put(Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.param.ParamMap
-
Puts a list of param pairs (overwrites if the input params exist).
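For illustration, a minimal Scala sketch of these put overloads; the LogisticRegression params used as keys are just convenient examples:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.param.ParamMap

    val lr = new LogisticRegression()
    val pm = ParamMap(lr.maxIter -> 10)
    pm.put(lr.maxIter, 20)                               // (param, value) pair; overwrites maxIter -> 10
    pm.put(lr.regParam -> 0.1, lr.fitIntercept -> true)  // varargs of ParamPair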
- put(Object) - Method in class org.apache.spark.util.sketch.BloomFilter
-
Puts an item into this BloomFilter.
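A minimal sketch of the put/mightContain round trip; the sizing constants are illustrative:

    import org.apache.spark.util.sketch.BloomFilter

    val bf = BloomFilter.create(100000L, 0.01)  // expected insertions, target false-positive rate
    bf.put("user-42")
    assert(bf.mightContain("user-42"))          // Bloom filters never give false negatives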
- putBinary(byte[]) - Method in class org.apache.spark.util.sketch.BloomFilter
-
- putBoolean(String, boolean) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Boolean.
- putBooleanArray(String, boolean[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Boolean array.
- putDouble(String, double) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Double.
- putDoubleArray(String, double[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Double array.
- putLong(String, long) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Long.
- putLong(long) - Method in class org.apache.spark.util.sketch.BloomFilter
-
- putLongArray(String, long[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a Long array.
- putMetadata(String, Metadata) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
- putMetadataArray(String, Metadata[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
- putNull(String) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a null.
- putString(String, String) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a String.
- putString(String) - Method in class org.apache.spark.util.sketch.BloomFilter
-
- putStringArray(String, String[]) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
Puts a String array.
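The putXxx methods above chain fluently; a minimal sketch with hypothetical keys and values:

    import org.apache.spark.sql.types.MetadataBuilder

    val meta = new MetadataBuilder()
      .putString("comment", "age in whole years")    // hypothetical key/value
      .putLong("max", 150L)
      .putBooleanArray("flags", Array(true, false))
      .build()                                       // yields an immutable Metadata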
- pValue() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
-
- pValue() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
-
- pValue() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
The probability of obtaining a test statistic result at least as extreme as the one that was
actually observed, assuming that the null hypothesis is true.
- pValues() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
-
Two-sided p-value of estimated coefficients and intercept.
- pValues() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Two-sided p-value of estimated coefficients and intercept.
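As a worked illustration of pValue, a chi-squared goodness-of-fit test; the observed counts are made up:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.stat.Statistics

    val observed = Vectors.dense(40.0, 56.0, 44.0)  // hypothetical category counts
    val result = Statistics.chiSqTest(observed)     // goodness of fit against a uniform expectation
    println(result.pValue)                          // a small p-value argues against the null hypothesis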
- pyUDT() - Method in class org.apache.spark.mllib.linalg.VectorUDT
-
- R() - Method in class org.apache.spark.mllib.linalg.QRDecomposition
-
- r2() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns R^2^, the coefficient of determination.
- r2() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns R^2^, the unadjusted coefficient of determination.
- RACK_LOCAL() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- radians(Column) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
- radians(String) - Static method in class org.apache.spark.sql.functions
-
Converts an angle measured in degrees to an approximately equivalent angle measured in radians.
- rand(int, int, Random) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of i.i.d. uniform random numbers.
- rand(int, int, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a DenseMatrix consisting of i.i.d. uniform random numbers.
- rand(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of i.i.d. uniform random numbers.
- rand(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a DenseMatrix consisting of i.i.d. uniform random numbers.
- rand(long) - Static method in class org.apache.spark.sql.functions
-
Generate a random column with independent and identically distributed (i.i.d.) samples
from U[0.0, 1.0].
- rand() - Static method in class org.apache.spark.sql.functions
-
Generate a random column with independent and identically distributed (i.i.d.) samples
from U[0.0, 1.0].
- randn(int, int, Random) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of i.i.d. gaussian random numbers.
- randn(int, int, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a DenseMatrix consisting of i.i.d. gaussian random numbers.
- randn(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a DenseMatrix consisting of i.i.d. gaussian random numbers.
- randn(int, int, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a DenseMatrix consisting of i.i.d. gaussian random numbers.
- randn(long) - Static method in class org.apache.spark.sql.functions
-
Generate a column with independent and identically distributed (i.i.d.) samples from
the standard normal distribution.
- randn() - Static method in class org.apache.spark.sql.functions
-
Generate a column with independent and identically distributed (i.i.d.) samples from
the standard normal distribution.
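A minimal sketch of the column-generating variants, assuming a SparkSession named spark is in scope:

    import org.apache.spark.sql.functions.{rand, randn}

    val df = spark.range(5)
      .withColumn("u", rand(42L))    // i.i.d. uniform samples from U[0.0, 1.0]
      .withColumn("z", randn(42L))   // i.i.d. samples from the standard normal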
- RANDOM() - Static method in class org.apache.spark.mllib.clustering.KMeans
-
- random() - Static method in class org.apache.spark.util.Utils
-
- RandomBlockReplicationPolicy - Class in org.apache.spark.storage
-
- RandomBlockReplicationPolicy() - Constructor for class org.apache.spark.storage.RandomBlockReplicationPolicy
-
- RandomDataGenerator<T> - Interface in org.apache.spark.mllib.random
-
:: DeveloperApi ::
Trait for random data generators that generate i.i.d.
- RandomForest - Class in org.apache.spark.ml.tree.impl
-
ALGORITHM
- RandomForest() - Constructor for class org.apache.spark.ml.tree.impl.RandomForest
-
- RandomForest - Class in org.apache.spark.mllib.tree
-
A class that implements a Random Forest learning algorithm for classification and regression.
- RandomForest(Strategy, int, String, int) - Constructor for class org.apache.spark.mllib.tree.RandomForest
-
- RandomForestClassificationModel - Class in org.apache.spark.ml.classification
-
- RandomForestClassifier - Class in org.apache.spark.ml.classification
-
- RandomForestClassifier(String) - Constructor for class org.apache.spark.ml.classification.RandomForestClassifier
-
- RandomForestClassifier() - Constructor for class org.apache.spark.ml.classification.RandomForestClassifier
-
- RandomForestModel - Class in org.apache.spark.mllib.tree.model
-
Represents a random forest model.
- RandomForestModel(Enumeration.Value, DecisionTreeModel[]) - Constructor for class org.apache.spark.mllib.tree.model.RandomForestModel
-
- RandomForestRegressionModel - Class in org.apache.spark.ml.regression
-
- RandomForestRegressor - Class in org.apache.spark.ml.regression
-
- RandomForestRegressor(String) - Constructor for class org.apache.spark.ml.regression.RandomForestRegressor
-
- RandomForestRegressor() - Constructor for class org.apache.spark.ml.regression.RandomForestRegressor
-
- randomize(TraversableOnce<T>, ClassTag<T>) - Static method in class org.apache.spark.util.Utils
-
Shuffle the elements of a collection into a random order, returning the
result in a new collection.
- randomizeInPlace(Object, Random) - Static method in class org.apache.spark.util.Utils
-
Shuffle the elements of an array into a random order, modifying the
original array.
- randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
Generates an RDD comprised of i.i.d. samples produced by the input RandomDataGenerator.
- randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
RandomRDDs.randomJavaRDD with the default seed.
- randomJavaRDD(JavaSparkContext, RandomDataGenerator<T>, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
RandomRDDs.randomJavaRDD with the default seed & numPartitions.
- randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
Java-friendly version of RandomRDDs.randomVectorRDD.
- randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
RandomRDDs.randomJavaVectorRDD with the default seed.
- randomJavaVectorRDD(JavaSparkContext, RandomDataGenerator<Object>, long, int) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
RandomRDDs.randomJavaVectorRDD with the default number of partitions and the default seed.
- randomRDD(SparkContext, RandomDataGenerator<T>, long, int, long, ClassTag<T>) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
Generates an RDD comprised of i.i.d. samples produced by the input RandomDataGenerator.
- RandomRDDs - Class in org.apache.spark.mllib.random
-
Generator methods for creating RDDs comprised of i.i.d. samples from some distribution.
- RandomRDDs() - Constructor for class org.apache.spark.mllib.random.RandomRDDs
-
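A minimal sketch of randomRDD with a built-in generator, assuming a SparkContext named sc is in scope:

    import org.apache.spark.mllib.random.{RandomRDDs, UniformGenerator}

    // one million i.i.d. U[0, 1] samples in 10 partitions with a fixed seed
    val uniform = RandomRDDs.randomRDD(sc, new UniformGenerator(), 1000000L, 10, 7L)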
- RandomSampler<T,U> - Interface in org.apache.spark.util.random
-
:: DeveloperApi ::
A pseudorandom sampler.
- randomSplit(double[]) - Method in class org.apache.spark.api.java.JavaRDD
-
Randomly splits this RDD with the provided weights.
- randomSplit(double[], long) - Method in class org.apache.spark.api.java.JavaRDD
-
Randomly splits this RDD with the provided weights.
- randomSplit(double[], long) - Static method in class org.apache.spark.api.r.RRDD
-
- randomSplit(double[], long) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- randomSplit(double[], long) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- randomSplit(double[], long) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- randomSplit(double[], long) - Static method in class org.apache.spark.graphx.VertexRDD
-
- randomSplit(double[], long) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- randomSplit(double[], long) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- randomSplit(double[], long) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- randomSplit(double[], long) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- randomSplit(double[], long) - Method in class org.apache.spark.rdd.RDD
-
Randomly splits this RDD with the provided weights.
- randomSplit(double[], long) - Static method in class org.apache.spark.rdd.UnionRDD
-
- randomSplit(double[], long) - Method in class org.apache.spark.sql.Dataset
-
Randomly splits this Dataset with the provided weights.
- randomSplit(double[]) - Method in class org.apache.spark.sql.Dataset
-
Randomly splits this Dataset with the provided weights.
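For example, the common train/test split over a Dataset, assuming a DataFrame named df is in scope:

    // weights are normalized if they do not sum to 1; the seed makes the split reproducible
    val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42L)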
- randomSplit$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- randomSplit$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- randomSplit$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- randomSplit$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- randomSplitAsList(double[], long) - Method in class org.apache.spark.sql.Dataset
-
Returns a Java list that contains randomly split Dataset with the provided weights.
- randomVectorRDD(SparkContext, RandomDataGenerator<Object>, long, int, int, long) - Static method in class org.apache.spark.mllib.random.RandomRDDs
-
:: DeveloperApi ::
Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the input RandomDataGenerator.
- RandomVertexCut$() - Constructor for class org.apache.spark.graphx.PartitionStrategy.RandomVertexCut$
-
- range(long, long, long, int) - Method in class org.apache.spark.SparkContext
-
Creates a new RDD[Long] containing elements from start to end (exclusive), increased by step every element.
- range(long) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
- range(long, long) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
- range(long, long, long) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
- range(long, long, long, int) - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with partition number specified.
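A minimal sketch of the four-argument overload, assuming a SparkSession named spark is in scope:

    val ids = spark.range(0L, 100L, 10L, 4)  // values 0, 10, ..., 90 across 4 partitions
    ids.show(3)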
- range(long) - Method in class org.apache.spark.sql.SQLContext
-
- range(long, long) - Method in class org.apache.spark.sql.SQLContext
-
- range(long, long, long) - Method in class org.apache.spark.sql.SQLContext
-
- range(long, long, long, int) - Method in class org.apache.spark.sql.SQLContext
-
- rangeBetween(long, long) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).
- rangeBetween(long, long) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the frame boundaries, from start (inclusive) to end (inclusive).
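A sketch of a range-based frame; the DataFrame df and its columns are hypothetical:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.sum

    // sums "amount" over rows whose "day" value lies at most 6 below the current row's
    val w = Window.partitionBy("account").orderBy("day").rangeBetween(-6L, 0L)
    val withTotals = df.withColumn("weeklyTotal", sum("amount").over(w))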
- RangeDependency<T> - Class in org.apache.spark
-
:: DeveloperApi ::
Represents a one-to-one dependency between ranges of partitions in the parent and child RDDs.
- RangeDependency(RDD<T>, int, int, int) - Constructor for class org.apache.spark.RangeDependency
-
- RangePartitioner<K,V> - Class in org.apache.spark
-
A Partitioner that partitions sortable records by range into roughly equal ranges.
- RangePartitioner(int, RDD<? extends Product2<K, V>>, boolean, Ordering<K>, ClassTag<K>) - Constructor for class org.apache.spark.RangePartitioner
-
- rank() - Method in class org.apache.spark.graphx.lib.SVDPlusPlus.Conf
-
- rank() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- rank() - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- rank() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
The numeric rank of the fitted linear model.
- rank() - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
- rank() - Static method in class org.apache.spark.sql.functions
-
Window function: returns the rank of rows within a window partition.
- RankingMetrics<T> - Class in org.apache.spark.mllib.evaluation
-
Evaluator for ranking algorithms.
- RankingMetrics(RDD<Tuple2<Object, Object>>, ClassTag<T>) - Constructor for class org.apache.spark.mllib.evaluation.RankingMetrics
-
- Rating(ID, ID, float) - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating
-
- rating() - Method in class org.apache.spark.ml.recommendation.ALS.Rating
-
- Rating - Class in org.apache.spark.mllib.recommendation
-
A more compact class to represent a rating than Tuple3[Int, Int, Double].
- Rating(int, int, double) - Constructor for class org.apache.spark.mllib.recommendation.Rating
-
- rating() - Method in class org.apache.spark.mllib.recommendation.Rating
-
- Rating$() - Constructor for class org.apache.spark.ml.recommendation.ALS.Rating$
-
- RatingBlock$() - Constructor for class org.apache.spark.ml.recommendation.ALS.RatingBlock$
-
- ratingCol() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- rawPredictionCol() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- rawSocketStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream from network source hostname:port, where data is received as serialized
blocks (serialized using Spark's serializer) that can be directly pushed into the block manager
without deserializing them.
- rawSocketStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream from network source hostname:port, where data is received as serialized
blocks (serialized using Spark's serializer) that can be directly pushed into the block manager
without deserializing them.
- rawSocketStream(String, int, StorageLevel, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Create an input stream from network source hostname:port, where data is received as serialized
blocks (serialized using Spark's serializer) that can be directly pushed into the block manager
without deserializing them.
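A minimal sketch of the Scala variant, assuming a SparkConf named conf; the host and port are hypothetical:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(conf, Seconds(1))
    // blocks arrive pre-serialized, so they can go straight to the block manager
    val lines = ssc.rawSocketStream[String]("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER_2)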
- RawTextHelper - Class in org.apache.spark.streaming.util
-
- RawTextHelper() - Constructor for class org.apache.spark.streaming.util.RawTextHelper
-
- RawTextSender - Class in org.apache.spark.streaming.util
-
A helper program that sends blocks of Kryo-serialized text strings out on a socket at a
specified rate.
- RawTextSender() - Constructor for class org.apache.spark.streaming.util.RawTextSender
-
- RBackendAuthHandler - Class in org.apache.spark.api.r
-
Authentication handler for connections from the R process.
- RBackendAuthHandler(String) - Constructor for class org.apache.spark.api.r.RBackendAuthHandler
-
- rdd() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
- rdd() - Method in class org.apache.spark.api.java.JavaPairRDD
-
- rdd() - Method in class org.apache.spark.api.java.JavaRDD
-
- rdd() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
- RDD() - Static method in class org.apache.spark.api.r.RRunnerModes
-
- rdd() - Method in class org.apache.spark.Dependency
-
- rdd() - Method in class org.apache.spark.NarrowDependency
-
- RDD<T> - Class in org.apache.spark.rdd
-
A Resilient Distributed Dataset (RDD), the basic abstraction in Spark.
- RDD(SparkContext, Seq<Dependency<?>>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.RDD
-
- RDD(RDD<?>, ClassTag<T>) - Constructor for class org.apache.spark.rdd.RDD
-
Construct an RDD with just a one-to-one dependency on one parent.
- rdd() - Method in class org.apache.spark.ShuffleDependency
-
- rdd() - Method in class org.apache.spark.sql.Dataset
-
Represents the content of the Dataset as an RDD of T.
- RDD() - Static method in class org.apache.spark.storage.BlockId
-
- RDDBlockId - Class in org.apache.spark.storage
-
- RDDBlockId(int, int) - Constructor for class org.apache.spark.storage.RDDBlockId
-
- rddBlocks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- rddBlocks() - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the RDD blocks stored in this block manager.
- rddBlocksById(int) - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the blocks that belong to the given RDD stored in this block manager.
- RDDDataDistribution - Class in org.apache.spark.status.api.v1
-
- RDDFunctions<T> - Class in org.apache.spark.mllib.rdd
-
:: DeveloperApi ::
Machine learning specific RDD functions.
- RDDFunctions(RDD<T>, ClassTag<T>) - Constructor for class org.apache.spark.mllib.rdd.RDDFunctions
-
- rddId() - Method in class org.apache.spark.CleanCheckpoint
-
- rddId() - Method in class org.apache.spark.CleanRDD
-
- rddId() - Method in class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- rddId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveRdd
-
- rddId() - Method in class org.apache.spark.storage.RDDBlockId
-
- RDDInfo - Class in org.apache.spark.storage
-
- RDDInfo(int, String, int, StorageLevel, Seq<Object>, String, Option<org.apache.spark.rdd.RDDOperationScope>) - Constructor for class org.apache.spark.storage.RDDInfo
-
- rddInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- rddInfoList() - Method in class org.apache.spark.ui.storage.StorageListener
-
Deprecated.
Filter RDD info to include only those with cached partitions.
- rddInfos() - Method in class org.apache.spark.scheduler.StageInfo
-
- rddInfoToJson(RDDInfo) - Static method in class org.apache.spark.util.JsonProtocol
-
- RDDPartitionInfo - Class in org.apache.spark.status.api.v1
-
- rdds() - Method in class org.apache.spark.rdd.CoGroupedRDD
-
- rdds() - Method in class org.apache.spark.rdd.UnionRDD
-
- RDDStorageInfo - Class in org.apache.spark.status.api.v1
-
- rddStorageLevel(int) - Method in class org.apache.spark.storage.StorageStatus
-
Deprecated.
Return the storage level, if any, used by the given RDD in this block manager.
- rddToAsyncRDDActions(RDD<T>, ClassTag<T>) - Static method in class org.apache.spark.rdd.RDD
-
- rddToDatasetHolder(RDD<T>, Encoder<T>) - Method in class org.apache.spark.sql.SQLImplicits
-
- rddToOrderedRDDFunctions(RDD<Tuple2<K, V>>, Ordering<K>, ClassTag<K>, ClassTag<V>) - Static method in class org.apache.spark.rdd.RDD
-
- rddToPairRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Static method in class org.apache.spark.rdd.RDD
-
- rddToSequenceFileRDDFunctions(RDD<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, <any>, <any>) - Static method in class org.apache.spark.rdd.RDD
-
- read() - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- read(byte[], int, int) - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- read(byte[]) - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- read() - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- read(byte[], int, int) - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- read() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- read() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- read() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- read() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- read() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- read() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- read() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- read() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- read() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- read() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- read() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- read() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- read() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- read() - Static method in class org.apache.spark.ml.clustering.LDA
-
- read() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- read() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- read() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- read() - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- read() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- read() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- read() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- read() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- read() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- read() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- read() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- read() - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- read() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- read() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- read() - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- read() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- read() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- read() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- read() - Static method in class org.apache.spark.ml.Pipeline
-
- read() - Static method in class org.apache.spark.ml.PipelineModel
-
- read() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- read() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- read() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- read() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- read() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- read() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- read() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- read() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- read() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- read() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- read() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- read() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- read() - Method in interface org.apache.spark.ml.util.DefaultParamsReadable
-
- read() - Method in interface org.apache.spark.ml.util.MLReadable
-
Returns an MLReader instance for this class.
- read(Kryo, Input, Class<Iterable<?>>) - Method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
-
- read() - Method in class org.apache.spark.sql.SparkSession
-
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
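For orientation, a minimal Scala sketch of this DataFrameReader entry point; the format and path are illustrative, not taken from this index:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("reader-demo").getOrCreate()
    // spark.read returns a DataFrameReader; format and path below are hypothetical
    val df = spark.read.format("json").load("/tmp/people.json")
    df.printSchema()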
- read() - Method in class org.apache.spark.sql.SQLContext
-
- read() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- read(byte[]) - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- read(byte[], int, int) - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- read(String) - Static method in class org.apache.spark.streaming.CheckpointReader
-
Read checkpoint files present in the given checkpoint directory.
- read(String, SparkConf, Configuration, boolean) - Static method in class org.apache.spark.streaming.CheckpointReader
-
Read checkpoint files present in the given checkpoint directory.
- read(WriteAheadLogRecordHandle) - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Read a written record based on the given record handle.
- ReadableChannelFileRegion - Class in org.apache.spark.storage
-
- ReadableChannelFileRegion(ReadableByteChannel, long) - Constructor for class org.apache.spark.storage.ReadableChannelFileRegion
-
- readAll() - Method in class org.apache.spark.streaming.util.WriteAheadLog
-
Read and return an iterator of all the records that have been written but not yet cleaned up.
- readArray(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
-
- readBoolean(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readBooleanArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readBytes(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readBytes() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- readBytesArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readDate(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readDouble(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readDoubleArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readExternal(ObjectInput) - Method in class org.apache.spark.serializer.JavaSerializer
-
- readExternal(ObjectInput) - Method in class org.apache.spark.storage.BlockManagerId
-
- readExternal(ObjectInput) - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- readExternal(ObjectInput) - Method in class org.apache.spark.storage.StorageLevel
-
- readExternal(ObjectInput) - Static method in class org.apache.spark.streaming.flume.EventTransformer
-
- readExternal(ObjectInput) - Method in class org.apache.spark.streaming.flume.SparkFlumeEvent
-
- readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.ConfigEntryWithDefault
-
- readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultFunction
-
- readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.ConfigEntryWithDefaultString
-
- readFrom(ConfigReader) - Method in class org.apache.spark.internal.config.FallbackConfigEntry
-
- readFrom(InputStream) - Static method in class org.apache.spark.util.sketch.BloomFilter
-
- readFrom(InputStream) - Static method in class org.apache.spark.util.sketch.CountMinSketch
-
- readFrom(byte[]) - Static method in class org.apache.spark.util.sketch.CountMinSketch
-
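These readFrom methods deserialize sketches produced by the matching writeTo calls. A small round-trip sketch (illustrative only), shown here with BloomFilter:

    import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
    import org.apache.spark.util.sketch.BloomFilter

    val filter = BloomFilter.create(1000)          // expected number of items
    filter.put("spark")
    val out = new ByteArrayOutputStream()
    filter.writeTo(out)                            // serialize the sketch
    val restored = BloomFilter.readFrom(new ByteArrayInputStream(out.toByteArray))
    assert(restored.mightContain("spark"))         // survives the round trip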
- readInt(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readIntArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readKey(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
-
Reads the object representing the key of a key-value pair.
- readList(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
-
- readMap(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
-
- readObject(DataInputStream, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
-
- readObject(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
-
The most general-purpose method to read an object.
- readObjectType(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readRecords() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- readSchema(Seq<String>, Option<Configuration>) - Static method in class org.apache.spark.sql.hive.orc.OrcFileOperator
-
- readSqlObject(DataInputStream, char) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- readStream() - Method in class org.apache.spark.sql.SparkSession
-
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
- readStream() - Method in class org.apache.spark.sql.SQLContext
-
- readString(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readStringArr(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readStringBytes(DataInputStream, int) - Static method in class org.apache.spark.api.r.SerDe
-
- readTime(DataInputStream) - Static method in class org.apache.spark.api.r.SerDe
-
- readTypedObject(DataInputStream, char, JVMObjectTracker) - Static method in class org.apache.spark.api.r.SerDe
-
- readValue(ClassTag<T>) - Method in class org.apache.spark.serializer.DeserializationStream
-
Reads the object representing the value of a key-value pair.
- ready(Duration, CanAwait) - Method in class org.apache.spark.ComplexFutureAction
-
- ready(Duration, CanAwait) - Method in interface org.apache.spark.FutureAction
-
Blocks until this action completes.
- ready(Duration, CanAwait) - Method in class org.apache.spark.SimpleFutureAction
-
- reason() - Method in class org.apache.spark.ExecutorLostFailure
-
- reason() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
-
- reason() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
-
- reason() - Method in class org.apache.spark.scheduler.local.KillTask
-
- reason() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- reason() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- reason() - Method in class org.apache.spark.TaskKilled
-
- reason() - Method in exception org.apache.spark.TaskKilledException
-
- reasonToNumKilled() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- reasonToNumKilled() - Method in class org.apache.spark.ui.jobs.UIData.JobUIData
-
- reasonToNumKilled() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- Recall - Class in org.apache.spark.mllib.evaluation.binary
-
Recall.
- Recall() - Constructor for class org.apache.spark.mllib.evaluation.binary.Recall
-
- recall(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns recall for a given label (category).
- recall() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
- recall() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns document-based recall averaged by the number of documents.
- recall(double) - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns recall for a given label (category).
- recallByThreshold() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
Returns a dataframe with two fields (threshold, recall) describing the recall curve.
- recallByThreshold() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the (threshold, recall) curve.
- Receiver<T> - Class in org.apache.spark.streaming.receiver
-
:: DeveloperApi ::
Abstract class of a receiver that can be run on worker nodes to receive external data.
- Receiver(StorageLevel) - Constructor for class org.apache.spark.streaming.receiver.Receiver
-
- RECEIVER_WAL_CLASS_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- RECEIVER_WAL_CLOSE_AFTER_WRITE_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- RECEIVER_WAL_ENABLE_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- RECEIVER_WAL_MAX_FAILURES_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- RECEIVER_WAL_ROLLING_INTERVAL_CONF_KEY() - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- ReceiverInfo - Class in org.apache.spark.status.api.v1.streaming
-
- ReceiverInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
Class holding information about a receiver.
- ReceiverInfo(int, String, boolean, String, String, String, String, long) - Constructor for class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- receiverInfo() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- receiverInputDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- receiverInputDStream() - Method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- ReceiverInputDStream<T> - Class in org.apache.spark.streaming.dstream
-
Abstract class for defining any
InputDStream
that has to start a receiver on worker nodes to receive external data.
- ReceiverInputDStream(StreamingContext, ClassTag<T>) - Constructor for class org.apache.spark.streaming.dstream.ReceiverInputDStream
-
- ReceiverState - Class in org.apache.spark.streaming.scheduler
-
Enumeration to identify the current state of a Receiver.
- ReceiverState() - Constructor for class org.apache.spark.streaming.scheduler.ReceiverState
-
- receiverStream(Receiver<T>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream with any arbitrary user implemented receiver.
- receiverStream(Receiver<T>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Create an input stream with any arbitrary user implemented receiver.
- recentProgress() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
- recommendForAllItems(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Returns top numUsers users recommended for each item, for all items.
- recommendForAllUsers(int) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
Returns top numItems items recommended for each user, for all users.
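As a quick illustration of these batch-recommendation methods (assumes model is an already-fitted ALSModel):

    // `model` is assumed to be a fitted org.apache.spark.ml.recommendation.ALSModel
    val userRecs = model.recommendForAllUsers(10)   // top 10 items per user
    val itemRecs = model.recommendForAllItems(10)   // top 10 users per item
    userRecs.show(false)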
- recommendProducts(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends products to a user.
- recommendProductsForUsers(int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends top products for all users.
- recommendUsers(int, int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends users to a product.
- recommendUsersForProducts(int) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Recommends top users for all products.
- recordReader(InputStream, Configuration) - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- recordReaderClass() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- RECORDS_BETWEEN_BYTES_READ_METRIC_UPDATES() - Static method in class org.apache.spark.rdd.HadoopRDD
-
Update the input bytes read metric each time this number of records has been read.
- RECORDS_READ() - Method in class org.apache.spark.InternalAccumulator.input$
-
- RECORDS_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
-
- RECORDS_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.output$
-
- RECORDS_WRITTEN() - Method in class org.apache.spark.InternalAccumulator.shuffleWrite$
-
- recordsRead() - Method in class org.apache.spark.status.api.v1.InputMetricDistributions
-
- recordsRead() - Method in class org.apache.spark.status.api.v1.InputMetrics
-
- recordsRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
-
- recordsRead() - Method in class org.apache.spark.ui.jobs.UIData.InputMetricsUIData
-
- recordsRead() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- recordsWritten() - Method in class org.apache.spark.status.api.v1.OutputMetricDistributions
-
- recordsWritten() - Method in class org.apache.spark.status.api.v1.OutputMetrics
-
- recordsWritten() - Method in class org.apache.spark.status.api.v1.ShuffleWriteMetrics
-
- recordsWritten() - Method in class org.apache.spark.ui.jobs.UIData.OutputMetricsUIData
-
- recordsWritten() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleWriteMetricsUIData
-
- recordWriter(OutputStream, Configuration) - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- recordWriterClass() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- recoverPartitions(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Recovers all the partitions in the directory of a table and updates the catalog.
- redact(SparkConf, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.util.Utils
-
Redact the sensitive values in the given map.
- redact(SparkConf, String) - Static method in class org.apache.spark.util.Utils
-
Redact the sensitive information in the given string.
- redact(Option<Regex>, Seq<Tuple2<String, String>>) - Static method in class org.apache.spark.util.Utils
-
Redact the sensitive values in the given map.
- redact(Map<String, String>) - Static method in class org.apache.spark.util.Utils
-
Looks up the redaction regex from within the key value pairs and uses it to redact the rest
of the key value pairs.
- REDIRECT_CONNECTOR_NAME() - Static method in class org.apache.spark.ui.JettyUtils
-
- redirectError() - Method in class org.apache.spark.launcher.SparkLauncher
-
Specifies that stderr in spark-submit should be redirected to stdout.
- redirectError(ProcessBuilder.Redirect) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects error output to the specified Redirect.
- redirectError(File) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects error output to the specified File.
- redirectOutput(ProcessBuilder.Redirect) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects standard output to the specified Redirect.
- redirectOutput(File) - Method in class org.apache.spark.launcher.SparkLauncher
-
Redirects standard output to the specified File.
- redirectToLog(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Sets all output to be logged and redirected to a logger with the specified name.
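A hypothetical launcher configuration tying these redirect methods together (the jar path and main class are placeholders):

    import java.io.File
    import org.apache.spark.launcher.SparkLauncher

    val proc = new SparkLauncher()
      .setAppResource("/path/to/app.jar")       // placeholder jar
      .setMainClass("com.example.Main")         // placeholder class
      .redirectError()                          // merge stderr into stdout
      .redirectOutput(new File("/tmp/app.log")) // send stdout to a file
      .launch()
    proc.waitFor()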
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- reduce(Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Reduces the elements of this RDD using the specified commutative and associative binary
operator.
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.r.RRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.rdd.RDD
-
Reduces the elements of this RDD using the specified commutative and
associative binary operator.
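A one-line sketch, assuming an existing SparkContext sc:

    val rdd = sc.parallelize(1 to 100)
    val sum = rdd.reduce(_ + _)   // operator must be commutative and associative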
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Scala-specific)
Reduces the elements of this Dataset using the specified binary function.
- reduce(ReduceFunction<T>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
(Java-specific)
Reduces the elements of this Dataset using the specified binary function.
- reduce(BUF, IN) - Method in class org.apache.spark.sql.expressions.Aggregator
-
Combine two values to produce a new value.
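In the Aggregator contract, reduce folds one input value into the running buffer (merge later combines partial buffers). A self-contained sketch of a summing Aggregator, for illustration:

    import org.apache.spark.sql.{Encoder, Encoders}
    import org.apache.spark.sql.expressions.Aggregator

    // IN = Long, BUF = Long, OUT = Long
    object LongSum extends Aggregator[Long, Long, Long] {
      def zero: Long = 0L
      def reduce(buf: Long, in: Long): Long = buf + in  // fold one input into the buffer
      def merge(b1: Long, b2: Long): Long = b1 + b2     // combine partial buffers
      def finish(buf: Long): Long = buf
      def bufferEncoder: Encoder[Long] = Encoders.scalaLong
      def outputEncoder: Encoder[Long] = Encoders.scalaLong
    }
    // usage on a Dataset[Long] ds: ds.select(LongSum.toColumn)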
- reduce(Function2<A1, A1, A1>) - Static method in class org.apache.spark.sql.types.StructType
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- reduce(Function2<T, T, T>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by reducing each RDD
of this DStream.
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduce(Function2<T, T, T>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- reduce(Function2<T, T, T>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by reducing each RDD
of this DStream.
- reduceByKey(Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Partitioner, Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function.
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function.
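The classic word-count sketch (the input path is hypothetical; assumes a SparkContext sc):

    val counts = sc.textFile("/tmp/input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)   // values are merged map-side before the shuffle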
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKey(Function2<V, V, V>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKey(Function2<V, V, V>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKey(Function2<V, V, V>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKey(Function2<V, V, V>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKey(Function2<V, V, V>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKey(Function2<V, V, V>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey
to each RDD.
- reduceByKey(Function2<V, V, V>, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey
to each RDD.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Create a new DStream by applying reduceByKey over a sliding window on this DStream.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying incremental reduceByKey over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying incremental reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function<Tuple2<K, V>, Boolean>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying incremental reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function<Tuple2<K, V>, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function<Tuple2<K, V>, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function<Tuple2<K, V>, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function<Tuple2<K, V>, Boolean>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByKeyAndWindow(Function2<V, V, V>, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey over a sliding window on this DStream.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, int) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Duration, Duration, Partitioner) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, int, Function1<Tuple2<K, V>, Object>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying incremental reduceByKey
over a sliding window.
- reduceByKeyAndWindow(Function2<V, V, V>, Function2<V, V, V>, Duration, Duration, Partitioner, Function1<Tuple2<K, V>, Object>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying incremental reduceByKey
over a sliding window.
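The incremental variants take an "inverse reduce" function so values leaving the window can be subtracted rather than recomputing the whole window; they require checkpointing to be enabled. A sketch, assuming a DStream[(String, Int)] named pairs and a checkpointed StreamingContext:

    import org.apache.spark.streaming.Seconds

    val windowed = pairs.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b,   // add values entering the window
      (a: Int, b: Int) => a - b,   // inverse: subtract values leaving the window
      Seconds(30),                 // window duration
      Seconds(10))                 // slide duration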
- reduceByKeyLocally(Function2<V, V, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Merge the values for each key using an associative and commutative reduce function, but return
the result immediately to the master as a Map.
- reduceByKeyLocally(Function2<V, V, V>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Merge the values for each key using an associative and commutative reduce function, but return
the results immediately to the master as a Map.
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by reducing all
elements in a sliding window over this DStream.
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD has a single element generated by reducing all
elements in a sliding window over this DStream.
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- reduceByWindow(Function2<T, T, T>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by reducing all
elements in a sliding window over this DStream.
- reduceByWindow(Function2<T, T, T>, Function2<T, T, T>, Duration, Duration) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD has a single element generated by reducing all
elements in a sliding window over this DStream.
- ReduceFunction<T> - Interface in org.apache.spark.api.java.function
-
Base interface for a function used in Dataset's reduce.
- reduceGroups(Function2<V, V, V>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Scala-specific)
Reduces the elements of each group of data using the specified binary function.
- reduceGroups(ReduceFunction<V>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
-
(Java-specific)
Reduces the elements of each group of data using the specified binary function.
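For example (assumes a Dataset[(String, Int)] named ds with spark.implicits._ in scope):

    val maxPerKey = ds.groupByKey(_._1)
      .mapValues(_._2)
      .reduceGroups((a, b) => math.max(a, b))   // Dataset[(String, Int)] of per-key maxima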
- reduceId() - Method in class org.apache.spark.FetchFailed
-
- reduceId() - Method in class org.apache.spark.storage.ShuffleBlockId
-
- reduceId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
-
- reduceId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- reduceLeft(Function2<B, A, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- reduceLeftOption(Function2<B, A, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- reduceOption(Function2<A1, A1, A1>) - Static method in class org.apache.spark.sql.types.StructType
-
- reduceRight(Function2<A, B, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- reduceRightOption(Function2<A, B, B>) - Static method in class org.apache.spark.sql.types.StructType
-
- references() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- references() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- references() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- references() - Method in class org.apache.spark.sql.sources.And
-
- references() - Method in class org.apache.spark.sql.sources.EqualNullSafe
-
- references() - Method in class org.apache.spark.sql.sources.EqualTo
-
- references() - Method in class org.apache.spark.sql.sources.Filter
-
List of columns that are referenced by this filter.
- references() - Method in class org.apache.spark.sql.sources.GreaterThan
-
- references() - Method in class org.apache.spark.sql.sources.GreaterThanOrEqual
-
- references() - Method in class org.apache.spark.sql.sources.In
-
- references() - Method in class org.apache.spark.sql.sources.IsNotNull
-
- references() - Method in class org.apache.spark.sql.sources.IsNull
-
- references() - Method in class org.apache.spark.sql.sources.LessThan
-
- references() - Method in class org.apache.spark.sql.sources.LessThanOrEqual
-
- references() - Method in class org.apache.spark.sql.sources.Not
-
- references() - Method in class org.apache.spark.sql.sources.Or
-
- references() - Method in class org.apache.spark.sql.sources.StringContains
-
- references() - Method in class org.apache.spark.sql.sources.StringEndsWith
-
- references() - Method in class org.apache.spark.sql.sources.StringStartsWith
-
- refresh() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- refresh() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- refreshByPath(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Invalidates and refreshes all the cached data (and the associated metadata) for any Dataset
that contains the given data source path.
- refreshTable(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Invalidates and refreshes all the cached data and metadata of the given table.
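Both calls below are sketches with placeholder table and path names:

    // drop cached data/metadata for a table; it is reloaded lazily on next access
    spark.catalog.refreshTable("db.events")
    // refresh any cached Dataset backed by this (hypothetical) source path
    spark.catalog.refreshByPath("/data/events")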
- refreshTable(String) - Method in class org.apache.spark.sql.hive.HiveContext
-
Deprecated.
Invalidate and refresh all the cached metadata of the given table.
- regex(Regex) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- regexFromString(String, String) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- regexp_extract(Column, String, int) - Static method in class org.apache.spark.sql.functions
-
Extract a specific group matched by a Java regex, from the specified string column.
- regexp_replace(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Replace all substrings of the specified string value that match regexp with rep.
- regexp_replace(Column, Column, Column) - Static method in class org.apache.spark.sql.functions
-
Replace all substrings of the specified string value that match regexp with rep.
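A small illustration of both functions (the column name is hypothetical):

    import org.apache.spark.sql.functions.{col, regexp_extract, regexp_replace}

    df.select(
      regexp_extract(col("s"), "(\\d+)-(\\d+)", 1).as("first"),  // capture group 1
      regexp_replace(col("s"), "\\d", "#").as("masked"))         // mask every digit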
- RegexTokenizer - Class in org.apache.spark.ml.feature
-
A regex based tokenizer that extracts tokens either by using the provided regex pattern to split the text (default) or repeatedly matching the regex (if gaps is false).
- RegexTokenizer(String) - Constructor for class org.apache.spark.ml.feature.RegexTokenizer
-
- RegexTokenizer() - Constructor for class org.apache.spark.ml.feature.RegexTokenizer
-
- register(AccumulatorV2<?, ?>) - Method in class org.apache.spark.SparkContext
-
Register the given accumulator.
- register(AccumulatorV2<?, ?>, String) - Method in class org.apache.spark.SparkContext
-
Register the given accumulator with given name.
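A short sketch of driver-side registration (assumes a SparkContext sc; the accumulator name is illustrative):

    import org.apache.spark.util.LongAccumulator

    val errors = new LongAccumulator
    sc.register(errors, "parseErrors")   // named accumulators appear in the web UI
    sc.parallelize(Seq("1", "x", "3")).foreach { s =>
      if (scala.util.Try(s.toInt).isFailure) errors.add(1)
    }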
- register(String, String) - Static method in class org.apache.spark.sql.types.UDTRegistration
-
Registers a UserDefinedType to a user class.
- register(String, UserDefinedAggregateFunction) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined aggregate function (UDAF).
- register(String, UserDefinedFunction) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function (UDF), for a UDF that's already defined using the DataFrame API.
- register(String, Function0<RT>, TypeTags.TypeTag<RT>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 0 arguments as user-defined function (UDF).
- register(String, Function1<A1, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 1 argument as user-defined function (UDF).
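For instance (assumes a SparkSession named spark; the UDF name is illustrative):

    spark.udf.register("plusOne", (x: Int) => x + 1)
    spark.sql("SELECT plusOne(41)").show()   // 42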
- register(String, Function2<A1, A2, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 2 arguments as user-defined function (UDF).
- register(String, Function3<A1, A2, A3, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 3 arguments as user-defined function (UDF).
- register(String, Function4<A1, A2, A3, A4, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 4 arguments as user-defined function (UDF).
- register(String, Function5<A1, A2, A3, A4, A5, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 5 arguments as user-defined function (UDF).
- register(String, Function6<A1, A2, A3, A4, A5, A6, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 6 arguments as user-defined function (UDF).
- register(String, Function7<A1, A2, A3, A4, A5, A6, A7, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 7 arguments as user-defined function (UDF).
- register(String, Function8<A1, A2, A3, A4, A5, A6, A7, A8, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 8 arguments as user-defined function (UDF).
- register(String, Function9<A1, A2, A3, A4, A5, A6, A7, A8, A9, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 9 arguments as user-defined function (UDF).
- register(String, Function10<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 10 arguments as user-defined function (UDF).
- register(String, Function11<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 11 arguments as user-defined function (UDF).
- register(String, Function12<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 12 arguments as user-defined function (UDF).
- register(String, Function13<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 13 arguments as user-defined function (UDF).
- register(String, Function14<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 14 arguments as user-defined function (UDF).
- register(String, Function15<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 15 arguments as user-defined function (UDF).
- register(String, Function16<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 16 arguments as user-defined function (UDF).
- register(String, Function17<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 17 arguments as user-defined function (UDF).
- register(String, Function18<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 18 arguments as user-defined function (UDF).
- register(String, Function19<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 19 arguments as user-defined function (UDF).
- register(String, Function20<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 20 arguments as user-defined function (UDF).
- register(String, Function21<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>, TypeTags.TypeTag<A21>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 21 arguments as user-defined function (UDF).
- register(String, Function22<A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, RT>, TypeTags.TypeTag<RT>, TypeTags.TypeTag<A1>, TypeTags.TypeTag<A2>, TypeTags.TypeTag<A3>, TypeTags.TypeTag<A4>, TypeTags.TypeTag<A5>, TypeTags.TypeTag<A6>, TypeTags.TypeTag<A7>, TypeTags.TypeTag<A8>, TypeTags.TypeTag<A9>, TypeTags.TypeTag<A10>, TypeTags.TypeTag<A11>, TypeTags.TypeTag<A12>, TypeTags.TypeTag<A13>, TypeTags.TypeTag<A14>, TypeTags.TypeTag<A15>, TypeTags.TypeTag<A16>, TypeTags.TypeTag<A17>, TypeTags.TypeTag<A18>, TypeTags.TypeTag<A19>, TypeTags.TypeTag<A20>, TypeTags.TypeTag<A21>, TypeTags.TypeTag<A22>) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a Scala closure of 22 arguments as user-defined function (UDF).
- register(String, UDF1<?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 1 argument.
- register(String, UDF2<?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 2 arguments.
- register(String, UDF3<?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 3 arguments.
- register(String, UDF4<?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 4 arguments.
- register(String, UDF5<?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 5 arguments.
- register(String, UDF6<?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 6 arguments.
- register(String, UDF7<?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 7 arguments.
- register(String, UDF8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 8 arguments.
- register(String, UDF9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 9 arguments.
- register(String, UDF10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 10 arguments.
- register(String, UDF11<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 11 arguments.
- register(String, UDF12<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 12 arguments.
- register(String, UDF13<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 13 arguments.
- register(String, UDF14<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 14 arguments.
- register(String, UDF15<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 15 arguments.
- register(String, UDF16<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 16 arguments.
- register(String, UDF17<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 17 arguments.
- register(String, UDF18<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 18 arguments.
- register(String, UDF19<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 19 arguments.
- register(String, UDF20<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 20 arguments.
- register(String, UDF21<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 21 arguments.
- register(String, UDF22<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType) - Method in class org.apache.spark.sql.UDFRegistration
-
Register a user-defined function with 22 arguments.
- register(QueryExecutionListener) - Method in class org.apache.spark.sql.util.ExecutionListenerManager
-
- register(AccumulatorV2<?, ?>) - Static method in class org.apache.spark.util.AccumulatorContext
-
Registers an AccumulatorV2 created on the driver such that it can be used on the executors.
- register(String, Function0<Object>) - Static method in class org.apache.spark.util.SignalUtils
-
Adds an action to be run when a given signal is received by this process.
- registerAvroSchemas(Seq<Schema>) - Method in class org.apache.spark.SparkConf
-
Use Kryo serialization and register the given set of Avro schemas so that the generic
record serializer can decrease network IO.
- RegisterBlockManager(BlockManagerId, long, long, org.apache.spark.rpc.RpcEndpointRef) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
-
- RegisterBlockManager$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager$
-
- registerClasses(Kryo) - Method in interface org.apache.spark.serializer.KryoRegistrator
-
- RegisterClusterManager(org.apache.spark.rpc.RpcEndpointRef) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager
-
- RegisterClusterManager$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterClusterManager$
-
- registerDialect(JdbcDialect) - Static method in class org.apache.spark.sql.jdbc.JdbcDialects
-
Register a dialect for use on all new matching jdbc org.apache.spark.sql.DataFrame.
- RegisteredExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisteredExecutor$
-
- RegisterExecutor(String, org.apache.spark.rpc.RpcEndpointRef, String, int, Map<String, String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor
-
- RegisterExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutor$
-
- RegisterExecutorFailed(String) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed
-
- RegisterExecutorFailed$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RegisterExecutorFailed$
-
- registerKryoClasses(SparkConf) - Static method in class org.apache.spark.graphx.GraphXUtils
-
Registers classes that GraphX uses with Kryo.
- registerKryoClasses(Class<?>[]) - Method in class org.apache.spark.SparkConf
-
Use Kryo serialization and register the given set of classes with Kryo.
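A sketch with a hypothetical user class:

    import org.apache.spark.SparkConf

    case class Point(x: Double, y: Double)   // hypothetical class to serialize

    val conf = new SparkConf()
      .setAppName("kryo-demo")
      .registerKryoClasses(Array(classOf[Point], classOf[Array[Point]]))  // also enables Kryo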
- registerLogger(Logger) - Static method in class org.apache.spark.util.SignalUtils
-
Register a signal handler to log signals on UNIX-like systems.
- registerShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager
-
- registerStream(DStream<BinarySample>) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Register a DStream of values for significance testing.
- registerStream(JavaDStream<BinarySample>) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Register a JavaDStream of values for significance testing.
- registerTempTable(String) - Method in class org.apache.spark.sql.Dataset
-
- regParam() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- regParam() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- regParam() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- regParam() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- regParam() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- regParam() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- regParam() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- regParam() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- regParam() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- Regression() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- RegressionEvaluator - Class in org.apache.spark.ml.evaluation
-
:: Experimental ::
Evaluator for regression, which expects two input columns: prediction and label.
- RegressionEvaluator(String) - Constructor for class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- RegressionEvaluator() - Constructor for class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- RegressionMetrics - Class in org.apache.spark.mllib.evaluation
-
Evaluator for regression.
- RegressionMetrics(RDD<Tuple2<Object, Object>>, boolean) - Constructor for class org.apache.spark.mllib.evaluation.RegressionMetrics
-
- RegressionMetrics(RDD<Tuple2<Object, Object>>) - Constructor for class org.apache.spark.mllib.evaluation.RegressionMetrics
-
- RegressionModel<FeaturesType,M extends RegressionModel<FeaturesType,M>> - Class in org.apache.spark.ml.regression
-
:: DeveloperApi ::
- RegressionModel() - Constructor for class org.apache.spark.ml.regression.RegressionModel
-
- RegressionModel - Interface in org.apache.spark.mllib.regression
-
- reindex() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- reindex() - Method in class org.apache.spark.graphx.VertexRDD
-
Construct a new VertexRDD that is indexed by only the visible vertices.
- RelationalGroupedDataset - Class in org.apache.spark.sql
-
A set of methods for aggregations on a DataFrame, created by Dataset.groupBy.
- RelationalGroupedDataset.CubeType$ - Class in org.apache.spark.sql
-
To indicate it's the CUBE
- RelationalGroupedDataset.GroupByType$ - Class in org.apache.spark.sql
-
To indicate it's the GroupBy
- RelationalGroupedDataset.PivotType$ - Class in org.apache.spark.sql
-
- RelationalGroupedDataset.RollupType$ - Class in org.apache.spark.sql
-
To indicate it's the ROLLUP
- RelationConversions - Class in org.apache.spark.sql.hive
-
Relation conversion from metastore relations to data source relations for better performance
- RelationConversions(SQLConf, HiveSessionCatalog) - Constructor for class org.apache.spark.sql.hive.RelationConversions
-
- RelationProvider - Interface in org.apache.spark.sql.sources
-
Implemented by objects that produce relations for a specific kind of data source.
- relativeDirection(long) - Method in class org.apache.spark.graphx.Edge
-
Return the relative direction of the edge to the corresponding
vertex.
- relativeError() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- relativeError() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
- rem(Decimal, Decimal) - Method in class org.apache.spark.sql.types.Decimal.DecimalAsIfIntegral$
-
- remainder(Decimal) - Method in class org.apache.spark.sql.types.Decimal
-
- remember(Duration) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Sets each DStream in this context to remember RDDs it generated in the last given duration.
- remember(Duration) - Method in class org.apache.spark.streaming.StreamingContext
-
Set each DStream in this context to remember RDDs it generated in the last given duration.
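A short sketch (the batch interval and retention window are illustrative):

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    JavaStreamingContext jssc =
        new JavaStreamingContext(new SparkConf().setAppName("app"), Durations.seconds(1));
    jssc.remember(Durations.minutes(5));  // keep generated RDDs around for 5 minutes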
- REMOTE_BLOCKS_FETCHED() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
-
- REMOTE_BYTES_READ() - Method in class org.apache.spark.InternalAccumulator.shuffleRead$
-
- remoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- remoteBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
-
- remoteBlocksFetched() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- remoteBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- remoteBytesRead() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetrics
-
- remoteBytesRead() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- remove(Param<T>) - Method in class org.apache.spark.ml.param.ParamMap
-
Removes a key from this map and returns the value previously associated with it, as an Option.
- remove(String) - Method in class org.apache.spark.SparkConf
-
Remove a parameter from the configuration
- remove() - Method in interface org.apache.spark.sql.streaming.GroupState
-
Remove this state.
- remove(String) - Method in class org.apache.spark.sql.types.MetadataBuilder
-
- remove() - Method in class org.apache.spark.streaming.State
-
Remove the state if it exists.
- remove(long) - Static method in class org.apache.spark.util.AccumulatorContext
-
- RemoveBlock(BlockId) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBlock
-
- RemoveBlock$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBlock$
-
- RemoveBroadcast(long, boolean) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
-
- RemoveBroadcast$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast$
-
- RemoveExecutor(String, org.apache.spark.scheduler.ExecutorLossReason) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor
-
- RemoveExecutor(String) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor
-
- RemoveExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RemoveExecutor$
-
- RemoveExecutor$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveExecutor$
-
- removeFromDriver() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveBroadcast
-
- removeListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
- RemoveRdd(int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveRdd
-
- RemoveRdd$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveRdd$
-
- removeSelfEdges() - Method in class org.apache.spark.graphx.GraphOps
-
Remove self edges.
- RemoveShuffle(int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
-
- RemoveShuffle$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle$
-
- removeShutdownDeleteDir(File) - Static method in class org.apache.spark.util.ShutdownHookManager
-
- removeShutdownHook(Object) - Static method in class org.apache.spark.util.ShutdownHookManager
-
Remove a previously installed shutdown hook.
- removeSparkListener(SparkListenerInterface) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Deregister the listener from Spark's listener bus.
- rep(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- rep1(Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- rep1(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- rep1sep(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- repartition(int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- repartition(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return a new RDD that has exactly numPartitions partitions.
- repartition(int, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- repartition(int, Column...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions into numPartitions.
- repartition(Column...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions, using spark.sql.shuffle.partitions as number of partitions.
- repartition(int) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset that has exactly numPartitions partitions.
- repartition(int, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions into numPartitions.
- repartition(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset partitioned by the given partitioning expressions, using spark.sql.shuffle.partitions as number of partitions.
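A sketch of the two Dataset variants, assuming a Dataset<Row> df with a userId column:

    import static org.apache.spark.sql.functions.col;

    Dataset<Row> fixed = df.repartition(200, col("userId"));  // 200 hash partitions by userId
    Dataset<Row> auto  = df.repartition(col("userId"));       // spark.sql.shuffle.partitions partitions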
- repartition(int) - Method in class org.apache.spark.streaming.api.java.JavaDStream
-
Return a new DStream with an increased or decreased level of parallelism.
- repartition(int) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- repartition(int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream with an increased or decreased level of parallelism.
- repartition(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- repartition(int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- repartition(int) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- repartition(int) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream with an increased or decreased level of parallelism.
- repartition$default$2(int) - Static method in class org.apache.spark.api.r.RRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- repartition$default$2(int) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- repartition$default$2(int) - Static method in class org.apache.spark.graphx.VertexRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- repartition$default$2(int) - Static method in class org.apache.spark.rdd.UnionRDD
-
- repartitionAndSortWithinPartitions(Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Repartition the RDD according to the given partitioner and, within each resulting partition,
sort records by their keys.
- repartitionAndSortWithinPartitions(Partitioner, Comparator<K>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Repartition the RDD according to the given partitioner and, within each resulting partition,
sort records by their keys.
- repartitionAndSortWithinPartitions(Partitioner) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
-
Repartition the RDD according to the given partitioner and, within each resulting partition,
sort records by their keys.
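A sketch, assuming a JavaPairRDD<Integer, String> named pairs:

    import org.apache.spark.HashPartitioner;
    import org.apache.spark.api.java.JavaPairRDD;

    JavaPairRDD<Integer, String> shuffledAndSorted =
        pairs.repartitionAndSortWithinPartitions(new HashPartitioner(8));

This is more efficient than repartition followed by sorting inside each partition, because the sort is pushed down into the shuffle machinery.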
- repeat(Column, int) - Static method in class org.apache.spark.sql.functions
-
Repeats a string column n times, and returns it as a new string column.
- replace(String, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Replaces values matching keys in replacement map with the corresponding values.
- replace(String[], Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
Replaces values matching keys in replacement map with the corresponding values.
- replace(String, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Replaces values matching keys in replacement map.
- replace(Seq<String>, Map<T, T>) - Method in class org.apache.spark.sql.DataFrameNaFunctions
-
(Scala-specific) Replaces values matching keys in replacement map.
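A sketch, assuming a Dataset<Row> df whose height column uses -1.0 as a sentinel value:

    import com.google.common.collect.ImmutableMap;

    Dataset<Row> cleaned = df.na().replace("height", ImmutableMap.of(-1.0, Double.NaN));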
- replaceCharType(DataType) - Static method in class org.apache.spark.sql.types.HiveStringType
-
- replicas() - Method in class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
-
- ReplicateBlock(BlockId, Seq<BlockManagerId>, int) - Constructor for class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
-
- ReplicateBlock$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.ReplicateBlock$
-
- replicatedVertexView() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- replication() - Method in class org.apache.spark.storage.StorageLevel
-
- repN(int, Function0<Parsers.Parser<T>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- reportError(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Report exceptions in receiving data.
- repr() - Static method in class org.apache.spark.sql.types.StructType
-
- repsep(Function0<Parsers.Parser<T>>, Function0<Parsers.Parser<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- requestedTotal() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
-
- RequestExecutors(int, int, Map<String, Object>, Set<String>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors
-
- requestExecutors(int) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Request an additional number of executors from the cluster manager.
- RequestExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RequestExecutors$
-
- requestTotalExecutors(int, int, Map<String, Object>) - Method in class org.apache.spark.SparkContext
-
Update the cluster manager on our scheduling needs.
- requiredChildDistribution() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- requiredChildOrdering() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- res() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
-
- reservoirSampleAndCount(Iterator<T>, int, long, ClassTag<T>) - Static method in class org.apache.spark.util.random.SamplingUtils
-
Reservoir sampling implementation that also returns the input size.
- reset() - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- reset() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
Resets the values of all metrics to zero.
- reset() - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- reset() - Method in class org.apache.spark.util.AccumulatorV2
-
Resets this accumulator to its zero value.
- reset() - Method in class org.apache.spark.util.CollectionAccumulator
-
- reset() - Method in class org.apache.spark.util.DoubleAccumulator
-
- reset() - Method in class org.apache.spark.util.LegacyAccumulatorWrapper
-
- reset() - Method in class org.apache.spark.util.LongAccumulator
-
- resetMetrics() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- resetTerminated() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
-
Forget about past terminated queries so that awaitAnyTermination() can be used again to wait for new terminations.
- residualDegreeOfFreedom() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
The residual degrees of freedom.
- residualDegreeOfFreedomNull() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
The residual degrees of freedom for the null model.
- residuals() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Get the default residuals (deviance residuals) of the fitted model.
- residuals(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionSummary
-
Get the residuals of the fitted model by type.
- residuals() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Residuals (label - predicted value)
- resolve(StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolve(Seq<String>, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolve(StructType, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- resolve(Seq<String>, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- resolveChildren(Seq<String>, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolveChildren(Seq<String>, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- resolved() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolved() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- resolveExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolveExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- ResolveHiveSerdeTable - Class in org.apache.spark.sql.hive
-
Determine the database, serde/format and schema of the Hive serde table, according to the storage
properties.
- ResolveHiveSerdeTable(SparkSession) - Constructor for class org.apache.spark.sql.hive.ResolveHiveSerdeTable
-
- resolveOperators(PartialFunction<LogicalPlan, LogicalPlan>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolveOperators(PartialFunction<LogicalPlan, LogicalPlan>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- resolveQuoted(String, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- resolveQuoted(String, Function2<String, String, Object>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- resolveURI(String) - Static method in class org.apache.spark.util.Utils
-
Return a well-formed URI for the file described by a user input string.
- resolveURIs(String) - Static method in class org.apache.spark.util.Utils
-
Resolve a comma-separated list of paths.
- responder() - Method in class org.apache.spark.ui.JettyUtils.ServletParams
-
- responseFromBackup(String) - Static method in class org.apache.spark.util.Utils
-
Return true if the response message is sent from a backup Master on standby.
- restart(String) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Restart the receiver.
- restart(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Restart the receiver.
- restart(String, Throwable, int) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Restart the receiver.
- ResubmitFailedStages - Class in org.apache.spark.scheduler
-
- ResubmitFailedStages() - Constructor for class org.apache.spark.scheduler.ResubmitFailedStages
-
- Resubmitted - Class in org.apache.spark
-
:: DeveloperApi ::
A org.apache.spark.scheduler.ShuffleMapTask that completed successfully earlier, but we lost the executor before the stage completed.
- Resubmitted() - Constructor for class org.apache.spark.Resubmitted
-
- result(Duration, CanAwait) - Method in class org.apache.spark.ComplexFutureAction
-
- result(Duration, CanAwait) - Method in interface org.apache.spark.FutureAction
-
Awaits and returns the result (of type T) of this action.
- result(Duration, CanAwait) - Method in class org.apache.spark.SimpleFutureAction
-
- RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.InternalAccumulator
-
- RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- RESULT_SERIALIZATION_TIME() - Static method in class org.apache.spark.ui.ToolTips
-
- RESULT_SIZE() - Static method in class org.apache.spark.InternalAccumulator
-
- resultSerializationTime() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
-
- resultSerializationTime() - Method in class org.apache.spark.status.api.v1.TaskMetrics
-
- resultSerializationTime() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- resultSetToObjectArray(ResultSet) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- resultSize() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
-
- resultSize() - Method in class org.apache.spark.status.api.v1.TaskMetrics
-
- resultSize() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- retainedJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- retainedStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- retainedTasks() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- RetrieveLastAllocatedExecutorId$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveLastAllocatedExecutorId$
-
- RetrieveSparkAppConfig$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.RetrieveSparkAppConfig$
-
- retryWaitMs(SparkConf) - Static method in class org.apache.spark.util.RpcUtils
-
Returns the configured number of milliseconds to wait on each retry
- ReturnStatementFinder - Class in org.apache.spark.util
-
- ReturnStatementFinder() - Constructor for class org.apache.spark.util.ReturnStatementFinder
-
- reverse() - Method in class org.apache.spark.graphx.EdgeDirection
-
Reverse the direction of an edge.
- reverse() - Method in class org.apache.spark.graphx.EdgeRDD
-
Reverse all the edges in this RDD.
- reverse() - Method in class org.apache.spark.graphx.Graph
-
Reverses all edges in the graph.
- reverse() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- reverse() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- reverse(Column) - Static method in class org.apache.spark.sql.functions
-
Reverses the string column and returns it as a new string column.
- reverse() - Static method in class org.apache.spark.sql.types.StructType
-
- reverseIterator() - Static method in class org.apache.spark.sql.types.StructType
-
- reverseMap(Function1<A, B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- reverseRoutingTables() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- reverseRoutingTables() - Method in class org.apache.spark.graphx.VertexRDD
-
Returns a new VertexRDD reflecting a reversal of all edge directions in the corresponding EdgeRDD.
- ReviveOffers - Class in org.apache.spark.scheduler.local
-
- ReviveOffers() - Constructor for class org.apache.spark.scheduler.local.ReviveOffers
-
- ReviveOffers$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.ReviveOffers$
-
- RFormula - Class in org.apache.spark.ml.feature
-
:: Experimental ::
Implements the transforms required for fitting a dataset against an R model formula.
- RFormula(String) - Constructor for class org.apache.spark.ml.feature.RFormula
-
- RFormula() - Constructor for class org.apache.spark.ml.feature.RFormula
-
- RFormulaModel - Class in org.apache.spark.ml.feature
-
:: Experimental ::
Model fitted by RFormula.
- RFormulaParser - Class in org.apache.spark.ml.feature
-
Limited implementation of R formula parsing.
- RFormulaParser() - Constructor for class org.apache.spark.ml.feature.RFormulaParser
-
- RidgeRegressionModel - Class in org.apache.spark.mllib.regression
-
Regression model trained using RidgeRegression.
- RidgeRegressionModel(Vector, double) - Constructor for class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- RidgeRegressionWithSGD - Class in org.apache.spark.mllib.regression
-
Train a regression model with L2-regularization using Stochastic Gradient Descent.
- RidgeRegressionWithSGD() - Constructor for class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
- right() - Method in class org.apache.spark.sql.sources.And
-
- right() - Method in class org.apache.spark.sql.sources.Or
-
- rightCategories() - Method in class org.apache.spark.ml.tree.CategoricalSplit
-
Get sorted categories which split to the right
- rightChild() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
-
- rightChild() - Method in class org.apache.spark.ml.tree.InternalNode
-
- rightChildIndex(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the index of the right child of this node.
- rightImpurity() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
-
- rightNode() - Method in class org.apache.spark.mllib.tree.model.Node
-
- rightNodeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- rightOuterJoin(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a right outer join of this and other.
- rightOuterJoin(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a right outer join of this and other.
- rightOuterJoin(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Perform a right outer join of this and other.
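A sketch, assuming clicks: JavaPairRDD<String, Integer> and users: JavaPairRDD<String, String>:

    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.Optional;
    import scala.Tuple2;

    // Every key in users appears in the result; keys with no clicks get Optional.empty().
    JavaPairRDD<String, Tuple2<Optional<Integer>, String>> joined = clicks.rightOuterJoin(users);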
- rightOuterJoin(RDD<Tuple2<K, W>>, Partitioner) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a right outer join of this and other.
- rightOuterJoin(RDD<Tuple2<K, W>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a right outer join of this and other.
- rightOuterJoin(RDD<Tuple2<K, W>>, int) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Perform a right outer join of this and other.
- rightOuterJoin(JavaPairDStream<K, W>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(JavaPairDStream<K, W>, int) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(JavaPairDStream<K, W>, Partitioner) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- rightOuterJoin(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- rightOuterJoin(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- rightOuterJoin(JavaPairDStream<K, W>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- rightOuterJoin(JavaPairDStream<K, W>, int) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- rightOuterJoin(JavaPairDStream<K, W>, Partitioner) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- rightOuterJoin(DStream<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(DStream<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightOuterJoin(DStream<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
- rightPredict() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
-
- rint(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the double value that is closest in value to the argument and
is equal to a mathematical integer.
- rint(String) - Static method in class org.apache.spark.sql.functions
-
Returns the double value that is closest in value to the argument and
is equal to a mathematical integer.
- rlike(String) - Method in class org.apache.spark.sql.Column
-
SQL RLIKE expression (LIKE with Regex).
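A sketch, assuming a Dataset<Row> logs with a message string column:

    Dataset<Row> errors = logs.filter(logs.col("message").rlike("^ERROR"));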
- RMATa() - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
- RMATb() - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
- RMATc() - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
- RMATd() - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
- rmatGraph(SparkContext, int, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
A random graph generator using the R-MAT model, proposed in
"R-MAT: A Recursive Model for Graph Mining" by Chakrabarti et al.
- rnd() - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- roc() - Method in class org.apache.spark.ml.classification.BinaryLogisticRegressionSummary
-
Returns the receiver operating characteristic (ROC) curve,
which is a DataFrame having two fields (FPR, TPR)
with (0.0, 0.0) prepended and (1.0, 1.0) appended to it.
- roc() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns the receiver operating characteristic (ROC) curve,
which is an RDD of (false positive rate, true positive rate)
with (0.0, 0.0) prepended and (1.0, 1.0) appended to it.
- rollup(Column...) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns,
so we can run aggregation on them.
- rollup(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns,
so we can run aggregation on them.
- rollup(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns,
so we can run aggregation on them.
- rollup(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Create a multi-dimensional rollup for the current Dataset using the specified columns,
so we can run aggregation on them.
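A sketch, assuming columns department, group and salary:

    import static org.apache.spark.sql.functions.avg;
    import static org.apache.spark.sql.functions.col;

    Dataset<Row> agg = df.rollup(col("department"), col("group"))
                         .agg(avg(col("salary")));

Unlike a plain groupBy, the rollup also emits subtotal rows per department and a grand-total row, with null in the rolled-up columns.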
- RollupType$() - Constructor for class org.apache.spark.sql.RelationalGroupedDataset.RollupType$
-
- rootMeanSquaredError() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
Returns the root mean squared error, which is defined as the square root of
the mean squared error.
- rootMeanSquaredError() - Method in class org.apache.spark.mllib.evaluation.RegressionMetrics
-
Returns the root mean squared error, which is defined as the square root of
the mean squared error.
- rootNode() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- rootNode() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- round(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the value of the column e rounded to 0 decimal places with HALF_UP round mode.
- round(Column, int) - Static method in class org.apache.spark.sql.functions
-
Round the value of e to scale decimal places with HALF_UP round mode if scale is greater than or equal to 0, or at the integral part when scale is less than 0.
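A sketch, assuming a Dataset<Row> df with a numeric price column:

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.round;

    df.select(round(col("price"), 2));   // 12.345 -> 12.35 (HALF_UP)
    df.select(round(col("price"), -1));  // 1234.5 -> 1230.0 (negative scale rounds the integral part)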
- ROUND_CEILING() - Static method in class org.apache.spark.sql.types.Decimal
-
- ROUND_FLOOR() - Static method in class org.apache.spark.sql.types.Decimal
-
- ROUND_HALF_EVEN() - Static method in class org.apache.spark.sql.types.Decimal
-
- ROUND_HALF_UP() - Static method in class org.apache.spark.sql.types.Decimal
-
- ROW() - Static method in class org.apache.spark.api.r.SerializationFormats
-
- Row - Interface in org.apache.spark.sql
-
Represents one row of output from a relational operator.
- row_number() - Static method in class org.apache.spark.sql.functions
-
Window function: returns a sequential number starting at 1 within a window partition.
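A sketch, assuming columns dept and salary:

    import org.apache.spark.sql.expressions.Window;
    import org.apache.spark.sql.expressions.WindowSpec;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.row_number;

    WindowSpec byDept = Window.partitionBy("dept").orderBy(col("salary").desc());
    df.withColumn("rank", row_number().over(byDept));  // 1, 2, 3, ... per department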
- RowFactory - Class in org.apache.spark.sql
-
A factory class used to construct Row objects.
- RowFactory() - Constructor for class org.apache.spark.sql.RowFactory
-
- rowIndices() - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- rowIndices() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- rowIter() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- rowIter() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Returns an iterator of row vectors.
- rowIter() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- rowIter() - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- rowIter() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Returns an iterator of row vectors.
- rowIter() - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- RowMatrix - Class in org.apache.spark.mllib.linalg.distributed
-
Represents a row-oriented distributed Matrix with no meaningful row indices.
- RowMatrix(RDD<Vector>, long, int) - Constructor for class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
- RowMatrix(RDD<Vector>) - Constructor for class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
Alternative constructor leaving matrix dimensions to be determined automatically.
- rows() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
- rows() - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
- rowsBetween(long, long) - Static method in class org.apache.spark.sql.expressions.Window
-
Creates a WindowSpec with the frame boundaries defined, from start (inclusive) to end (inclusive).
- rowsBetween(long, long) - Method in class org.apache.spark.sql.expressions.WindowSpec
-
Defines the frame boundaries, from start (inclusive) to end (inclusive).
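A sketch of a trailing 3-row moving average, assuming columns userId, ts and amount:

    import org.apache.spark.sql.expressions.Window;
    import org.apache.spark.sql.expressions.WindowSpec;
    import static org.apache.spark.sql.functions.avg;
    import static org.apache.spark.sql.functions.col;

    // Frame: the two preceding rows plus the current row.
    WindowSpec trailing = Window.partitionBy("userId").orderBy("ts").rowsBetween(-2, 0);
    df.withColumn("movingAvg", avg(col("amount")).over(trailing));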
- rowsPerBlock() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
- rPackages() - Static method in class org.apache.spark.api.r.RUtils
-
- rpad(Column, int, String) - Static method in class org.apache.spark.sql.functions
-
Right-pad the string column with pad to a length of len.
- RpcUtils - Class in org.apache.spark.util
-
- RpcUtils() - Constructor for class org.apache.spark.util.RpcUtils
-
- RRDD<T> - Class in org.apache.spark.api.r
-
An RDD that stores serialized R objects as Array[Byte].
- RRDD(RDD<T>, byte[], String, String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.RRDD
-
- RRunnerModes - Class in org.apache.spark.api.r
-
- RRunnerModes() - Constructor for class org.apache.spark.api.r.RRunnerModes
-
- rtrim(Column) - Static method in class org.apache.spark.sql.functions
-
Trim the spaces from right end for the specified string value.
- ruleName() - Static method in class org.apache.spark.sql.hive.HiveAnalysis
-
- ruleName() - Static method in class org.apache.spark.sql.hive.RelationConversions
-
- run(Graph<VD, ED>, int, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ConnectedComponents
-
Compute the connected component membership of each vertex and return a graph with the vertex
value containing the lowest vertex id in the connected component containing that vertex.
- run(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ConnectedComponents
-
Compute the connected component membership of each vertex and return a graph with the vertex
value containing the lowest vertex id in the connected component containing that vertex.
- run(Graph<VD, ED>, int, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.LabelPropagation
-
Run static Label Propagation for detecting communities in networks.
- run(Graph<VD, ED>, int, double, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations returning a graph
with vertex attributes containing the PageRank and edge
attributes the normalized edge weight.
- run(Graph<VD, ED>, Seq<Object>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.ShortestPaths
-
Computes shortest paths to the given set of landmark vertices.
- run(Graph<VD, ED>, int, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.StronglyConnectedComponents
-
Compute the strongly connected component (SCC) of each vertex and return a graph with the
vertex value containing the lowest vertex id in the SCC containing that vertex.
- run(RDD<Edge<Object>>, SVDPlusPlus.Conf) - Static method in class org.apache.spark.graphx.lib.SVDPlusPlus
-
Implement SVD++ based on "Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model".
- run(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.TriangleCount
-
- run(RDD<LabeledPoint>, BoostingStrategy, long) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to train a gradient boosting model
- run(RDD<LabeledPoint>, Strategy, int, String, long, Option<<any>>, Option<String>) - Static method in class org.apache.spark.ml.tree.impl.RandomForest
-
Train a random forest.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
Run Logistic Regression with the configured parameters on an input RDD
of LabeledPoint entries.
- run(RDD<LabeledPoint>, Vector) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
Run Logistic Regression with the configured parameters on an input RDD
of LabeledPoint entries starting from the initial weights provided.
- run(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
- run(RDD<LabeledPoint>, Vector) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Run the algorithm with the configured parameters on an input RDD of LabeledPoint entries.
- run(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
- run(RDD<LabeledPoint>, Vector) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
- run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Runs the bisecting k-means algorithm.
- run(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Java-friendly version of run().
- run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Perform expectation maximization
- run(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Java-friendly version of run()
- run(RDD<Vector>) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Train a K-means model on the given set of points; data should be cached for high performance, because this is an iterative algorithm.
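A sketch, assuming points: JavaRDD<Vector> of feature vectors:

    import org.apache.spark.mllib.clustering.KMeans;
    import org.apache.spark.mllib.clustering.KMeansModel;

    points.cache();  // run() is iterative, so cache the input
    KMeansModel model = new KMeans().setK(3).setMaxIterations(20).run(points.rdd());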
- run(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LDA
-
Learn an LDA model using the given dataset.
- run(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LDA
-
Java-friendly version of run()
- run(Graph<Object, Object>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Run the PIC algorithm on Graph.
- run(RDD<Tuple3<Object, Object, Object>>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Run the PIC algorithm.
- run(JavaRDD<Tuple3<Long, Long, Double>>) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
A Java-friendly version of PowerIterationClustering.run.
- run(RDD<FPGrowth.FreqItemset<Item>>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Computes the association rules with confidence above minConfidence.
- run(JavaRDD<FPGrowth.FreqItemset<Item>>) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Java-friendly version of run.
- run(RDD<Object>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Computes an FP-Growth model that contains frequent itemsets.
- run(JavaRDD<Basket>) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Java-friendly version of run.
- run(RDD<Object[]>, ClassTag<Item>) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
- run(JavaRDD<Sequence>) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
A Java-friendly version of run() that reads sequences from a JavaRDD and returns frequent sequences in a PrefixSpanModel.
- run(RDD<Rating>) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Run ALS with the configured parameters on an input RDD of Rating objects.
- run(JavaRDD<Rating>) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Java-friendly version of ALS.run.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Run the algorithm with the configured parameters on an input
RDD of LabeledPoint entries.
- run(RDD<LabeledPoint>, Vector) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Run the algorithm with the configured parameters on an input RDD
of LabeledPoint entries starting from the initial weights provided.
- run(RDD<Tuple3<Object, Object, Object>>) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
-
Run IsotonicRegression algorithm to obtain isotonic regression model.
- run(JavaRDD<Tuple3<Double, Double, Double>>) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
-
Run pool adjacent violators algorithm to obtain isotonic regression model.
- run(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
- run(RDD<LabeledPoint>, Vector) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
- run(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
- run(RDD<LabeledPoint>, Vector) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
- run(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
- run(RDD<LabeledPoint>, Vector) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model over an RDD
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Method to train a gradient boosting model
- run(JavaRDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.run
.
- run(RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model over an RDD
- run(SparkSession) - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- run(SparkSession) - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
Inserts all the rows in the table into Hive.
- run() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread
-
- run() - Method in class org.apache.spark.util.SparkShutdownHook
-
- runApproximateJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, <any>, long) - Method in class org.apache.spark.SparkContext
-
:: DeveloperApi ::
Run a job that can return approximate results.
- runId() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Returns the unique id of this run of the query.
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryStartedEvent
-
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryListener.QueryTerminatedEvent
-
- runId() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- runInNewThread(String, boolean, Function0<T>) - Static method in class org.apache.spark.util.ThreadUtils
-
Run a piece of code in a new thread and return the result.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a function on a given set of partitions in an RDD and pass the results to the given
handler function.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Seq<Object>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a function on a given set of partitions in an RDD and return the results as an array.
- runJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a function on a given set of partitions in an RDD and return the results as an array.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and return the results in an array.
- runJob(RDD<T>, Function1<Iterator<T>, U>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and return the results in an array.
- runJob(RDD<T>, Function2<TaskContext, Iterator<T>, U>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and pass the results to a handler function.
- runJob(RDD<T>, Function1<Iterator<T>, U>, Function2<Object, U, BoxedUnit>, ClassTag<U>) - Method in class org.apache.spark.SparkContext
-
Run a job on all partitions in an RDD and pass the results to a handler function.
- runLBFGS(RDD<Tuple2<Object, Vector>>, Gradient, Updater, int, double, int, double, Vector) - Static method in class org.apache.spark.mllib.optimization.LBFGS
-
Run Limited-memory BFGS (L-BFGS) in parallel.
- runMiniBatchSGD(RDD<Tuple2<Object, Vector>>, Gradient, Updater, double, int, double, double, Vector, double) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
-
Run stochastic gradient descent (SGD) in parallel using mini batches.
- runMiniBatchSGD(RDD<Tuple2<Object, Vector>>, Gradient, Updater, double, int, double, double, Vector) - Static method in class org.apache.spark.mllib.optimization.GradientDescent
-
Alias of runMiniBatchSGD with convergenceTol set to the default value of 0.001.
- running() - Method in class org.apache.spark.scheduler.TaskInfo
-
- RUNNING() - Static method in class org.apache.spark.TaskState
-
- runParallelPersonalizedPageRank(Graph<VD, ED>, int, double, long[], ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run Personalized PageRank for a fixed number of iterations, for a
set of starting nodes in parallel.
- runPreCanonicalized(Graph<VD, ED>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.TriangleCount
-
- runtime() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
-
- RuntimeConfig - Class in org.apache.spark.sql
-
Runtime configuration interface for Spark.
- RuntimeInfo - Class in org.apache.spark.status.api.v1
-
- RuntimePercentage - Class in org.apache.spark.scheduler
-
- RuntimePercentage(double, Option<Object>, double) - Constructor for class org.apache.spark.scheduler.RuntimePercentage
-
- runUntilConvergence(Graph<VD, ED>, double, double, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run a dynamic version of PageRank returning a graph with vertex attributes containing the
PageRank and edge attributes containing the normalized edge weight.
- runUntilConvergenceWithOptions(Graph<VD, ED>, double, double, Option<Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run a dynamic version of PageRank returning a graph with vertex attributes containing the
PageRank and edge attributes containing the normalized edge weight.
- runWith(Function1<B, U>) - Static method in class org.apache.spark.sql.types.StructType
-
- runWithOptions(Graph<VD, ED>, int, double, Option<Object>, ClassTag<VD>, ClassTag<ED>) - Static method in class org.apache.spark.graphx.lib.PageRank
-
Run PageRank for a fixed number of iterations returning a graph
with vertex attributes containing the PageRank and edge
attributes the normalized edge weight.
- runWithValidation(RDD<LabeledPoint>, RDD<LabeledPoint>, BoostingStrategy, long) - Static method in class org.apache.spark.ml.tree.impl.GradientBoostedTrees
-
Method to validate a gradient boosting model
- runWithValidation(RDD<LabeledPoint>, RDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Method to validate a gradient boosting model
- runWithValidation(JavaRDD<LabeledPoint>, JavaRDD<LabeledPoint>) - Method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.runWithValidation.
- RUtils - Class in org.apache.spark.api.r
-
- RUtils() - Constructor for class org.apache.spark.api.r.RUtils
-
- RWrappers - Class in org.apache.spark.ml.r
-
This is the Scala stub of SparkR read.ml.
- RWrappers() - Constructor for class org.apache.spark.ml.r.RWrappers
-
- RWrapperUtils - Class in org.apache.spark.ml.r
-
- RWrapperUtils() - Constructor for class org.apache.spark.ml.r.RWrapperUtils
-
- s() - Method in class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
- sameElements(GenIterable<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- sameResult(PlanType) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- sameResult(PlanType) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- sameResult(PlanType) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- sameThread() - Static method in class org.apache.spark.util.ThreadUtils
-
An ExecutionContextExecutor that runs each task in the thread that invokes execute/submit.
- sample(boolean, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a sampled subset of this RDD.
- sample(boolean, Double, long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a sampled subset of this RDD.
- sample(boolean, double) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a sampled subset of this RDD.
- sample(boolean, double, long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a sampled subset of this RDD.
- sample(boolean, double) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a sampled subset of this RDD with a random seed.
- sample(boolean, double, long) - Method in class org.apache.spark.api.java.JavaRDD
-
Return a sampled subset of this RDD, with a user-supplied seed.
- sample(boolean, double, long) - Static method in class org.apache.spark.api.r.RRDD
-
- sample(boolean, double, long) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- sample(boolean, double, long) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- sample(boolean, double, long) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- sample(boolean, double, long) - Static method in class org.apache.spark.graphx.VertexRDD
-
- sample(boolean, double, long) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- sample(boolean, double, long) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- sample(boolean, double, long) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- sample(boolean, double, long) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- sample(boolean, double, long) - Method in class org.apache.spark.rdd.RDD
-
Return a sampled subset of this RDD.
- sample(boolean, double, long) - Static method in class org.apache.spark.rdd.UnionRDD
-
- sample(boolean, double, long) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset by sampling a fraction of rows, using a user-supplied seed.
- sample(boolean, double) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset by sampling a fraction of rows, using a random seed.
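A sketch, assuming a Dataset<Row> df:

    Dataset<Row> tenth = df.sample(false, 0.1, 42L);  // withReplacement=false, fraction=0.1, seed=42

Note that the fraction is a per-row probability, not a guaranteed row count.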
- sample() - Method in class org.apache.spark.util.random.BernoulliCellSampler
-
- sample() - Method in class org.apache.spark.util.random.BernoulliSampler
-
- sample() - Method in class org.apache.spark.util.random.PoissonSampler
-
- sample(Iterator<T>) - Method in class org.apache.spark.util.random.PoissonSampler
-
- sample(Iterator<T>) - Method in interface org.apache.spark.util.random.RandomSampler
-
Take a random sample.
- sample() - Method in interface org.apache.spark.util.random.RandomSampler
-
Whether to sample the next item or not.
- sample$default$3() - Static method in class org.apache.spark.api.r.RRDD
-
- sample$default$3() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- sample$default$3() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- sample$default$3() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- sample$default$3() - Static method in class org.apache.spark.graphx.VertexRDD
-
- sample$default$3() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- sample$default$3() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- sample$default$3() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- sample$default$3() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- sample$default$3() - Static method in class org.apache.spark.rdd.UnionRDD
-
- sampleBy(String, Map<T, Object>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Returns a stratified sample without replacement based on the fraction given on each stratum.
- sampleBy(String, Map<T, Double>, long) - Method in class org.apache.spark.sql.DataFrameStatFunctions
-
Returns a stratified sample without replacement based on the fraction given on each stratum.
- sampleByKey(boolean, Map<K, Double>, long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling).
- sampleByKey(boolean, Map<K, Double>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling).
- sampleByKey(boolean, Map<K, Object>, long) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return a subset of this RDD sampled by key (via stratified sampling).
- sampleByKeyExact(boolean, Map<K, Double>, long) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly
math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
- sampleByKeyExact(boolean, Map<K, Double>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly
math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
- sampleByKeyExact(boolean, Map<K, Object>, long) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly
math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
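A sketch, assuming pairs: JavaPairRDD<String, Integer>:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.spark.api.java.JavaPairRDD;

    Map<String, Double> fractions = new HashMap<>();
    fractions.put("a", 0.1);   // keep 10% of key "a"
    fractions.put("b", 0.5);   // keep 50% of key "b"
    JavaPairRDD<String, Integer> sampled = pairs.sampleByKeyExact(false, fractions, 7L);

The exact per-stratum sizes come at the cost of additional passes over the RDD compared to sampleByKey.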
- sampleStdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the sample standard deviation of this RDD's elements (which corrects for bias in
estimating the standard deviation by dividing by N-1 instead of N).
- sampleStdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the sample standard deviation of this RDD's elements (which corrects for bias in
estimating the standard deviation by dividing by N-1 instead of N).
- sampleStdev() - Method in class org.apache.spark.util.StatCounter
-
Return the sample standard deviation of the values, which corrects for bias in estimating the
variance by dividing by N-1 instead of N.
- sampleVariance() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the sample variance of this RDD's elements (which corrects for bias in
estimating the variance by dividing by N-1 instead of N).
- sampleVariance() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the sample variance of this RDD's elements (which corrects for bias in
estimating the variance by dividing by N-1 instead of N).
- sampleVariance() - Method in class org.apache.spark.util.StatCounter
-
Return the sample variance, which corrects for bias in estimating the variance by dividing
by N-1 instead of N.
- SamplingUtils - Class in org.apache.spark.util.random
-
- SamplingUtils() - Constructor for class org.apache.spark.util.random.SamplingUtils
-
- save(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- save(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- save(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- save(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- save(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- save(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- save(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- save(String) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- save(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- save(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- save(String) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- save(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- save(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- save(String) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- save(String) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- save(String) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- save(String) - Static method in class org.apache.spark.ml.clustering.LDA
-
- save(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- save(String) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- save(String) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- save(String) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- save(String) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- save(String) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- save(String) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- save(String) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.DCT
-
- save(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- save(String) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- save(String) - Static method in class org.apache.spark.ml.feature.IDF
-
- save(String) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.Imputer
-
- save(String) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- save(String) - Static method in class org.apache.spark.ml.feature.Interaction
-
- save(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- save(String) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- save(String) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- save(String) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.NGram
-
- save(String) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- save(String) - Static method in class org.apache.spark.ml.feature.PCA
-
- save(String) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- save(String) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.RFormula
-
- save(String) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- save(String) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- save(String) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- save(String) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- save(String) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- save(String) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- save(String) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- save(String) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- save(String) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- save(String) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- save(String) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- save(String) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- save(String) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- save(String) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- save(String) - Static method in class org.apache.spark.ml.Pipeline
-
- save(String) - Static method in class org.apache.spark.ml.PipelineModel
-
- save(String) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- save(String) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- save(String) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- save(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- save(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- save(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- save(String) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- save(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- save(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- save(String) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- save(String) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- save(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- save(String) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- save(String) - Method in interface org.apache.spark.ml.util.MLWritable
-
Saves this ML instance to the given path, a shortcut of write().save(path).
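For illustration, a hedged Java sketch using a fitted PipelineModel (the path is hypothetical; save declares IOException):

    import org.apache.spark.ml.PipelineModel;

    // `model` is assumed to be a fitted PipelineModel.
    model.save("/tmp/my-model");                      // shortcut for write().save(path)
    model.write().overwrite().save("/tmp/my-model");  // same, overwriting any existing output
    PipelineModel restored = PipelineModel.load("/tmp/my-model");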
- save(String) - Method in class org.apache.spark.ml.util.MLWriter
-
Saves the ML instance to the given path.
- save(SparkContext, String, String, int, int, Vector, double, Option<Object>) - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
Helper method for saving GLM classification model metadata and data.
- save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- save(SparkContext, String, org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0.Data) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
- save(SparkContext, String, org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0.Data) - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.classification.SVMModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel
-
- save(SparkContext, BisectingKMeansModel, String) - Method in class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.GaussianMixtureModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel
-
- save(SparkContext, KMeansModel, String) - Method in class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel
-
- save(SparkContext, PowerIterationClusteringModel, String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
- save(SparkContext, ChiSqSelectorModel, String) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel
-
Save this model to the given path.
- save(FPGrowthModel<?>, String) - Method in class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel
-
Save this model to the given path.
- save(PrefixSpanModel<?>, String) - Method in class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel
-
Save this model to the given path.
- save(MatrixFactorizationModel, String) - Method in class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
-
Saves a MatrixFactorizationModel, where user features are saved under data/users and product features are saved under data/products.
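A minimal sketch of the save/load round trip (assuming a trained model and an org.apache.spark.SparkContext named sc, e.g. jsc.sc(); the path is illustrative):

    import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;

    model.save(sc, "/tmp/als-model");  // writes data/users and data/products under the path
    MatrixFactorizationModel restored = MatrixFactorizationModel.load(sc, "/tmp/als-model");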
- save(SparkContext, String, String, Vector, double) - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
Helper method for saving GLM regression model metadata and data.
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.IsotonicRegressionModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.LassoModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- save(SparkContext, String, DecisionTreeModel) - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- save(SparkContext, String) - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- save(SparkContext, String) - Method in interface org.apache.spark.mllib.util.Saveable
-
Save this model to the given path.
- save(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame
at the specified path.
- save() - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame
to the data source configured by this writer's options (such as format and path).
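A sketch of both variants (assuming an existing Dataset<Row> named df; paths illustrative):

    df.write().format("parquet").save("/tmp/out");                   // path passed to save(String)
    df.write().format("parquet").option("path", "/tmp/out").save();  // path passed as an option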
- Saveable - Interface in org.apache.spark.mllib.util
-
:: DeveloperApi ::
- saveAsHadoopDataset(JobConf) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for
that storage system.
- saveAsHadoopDataset(JobConf) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported storage system, using a Hadoop JobConf object for
that storage system.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>, JobConf) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<F>, Class<? extends CompressionCodec>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system, compressing with the supplied codec.
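A hedged Java sketch with the old (mapred) API (assuming an existing JavaPairRDD<Text, IntWritable> named pairs; path illustrative):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.TextOutputFormat;

    pairs.saveAsHadoopFile("/tmp/out", Text.class, IntWritable.class, TextOutputFormat.class);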
- saveAsHadoopFile(String, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat
class
supporting the key and value types K and V in this RDD.
- saveAsHadoopFile(String, Class<? extends CompressionCodec>, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat
class
supporting the key and value types K and V in this RDD.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Class<? extends CompressionCodec>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat
class
supporting the key and value types K and V in this RDD.
- saveAsHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, JobConf, Option<Class<? extends CompressionCodec>>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a Hadoop OutputFormat
class
supporting the key and value types K and V in this RDD.
- saveAsHadoopFiles(String, String) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, JobConf) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsHadoopFiles(String, String) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, JobConf) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsHadoopFiles(String, String) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, JobConf) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsHadoopFiles(String, String, ClassTag<F>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsHadoopFiles(String, String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, JobConf) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsLibSVMFile(RDD<LabeledPoint>, String) - Static method in class org.apache.spark.mllib.util.MLUtils
-
Save labeled data in LIBSVM format.
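For illustration (assuming an existing RDD<LabeledPoint> named points, e.g. obtained via JavaRDD.rdd(); path illustrative):

    import org.apache.spark.mllib.regression.LabeledPoint;
    import org.apache.spark.mllib.util.MLUtils;
    import org.apache.spark.rdd.RDD;

    // points: RDD<LabeledPoint> assumed to exist
    MLUtils.saveAsLibSVMFile(points, "/tmp/libsvm-data");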
- saveAsNewAPIHadoopDataset(Configuration) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported storage system, using
a Configuration object for that storage system.
- saveAsNewAPIHadoopDataset(Configuration) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported storage system with the new Hadoop API, using a Hadoop
Configuration object for that storage system.
- saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<F>, Configuration) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Output the RDD to any Hadoop-supported file system.
- saveAsNewAPIHadoopFile(String, ClassTag<F>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat
(mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
- saveAsNewAPIHadoopFile(String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Configuration) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Output the RDD to any Hadoop-supported file system, using a new Hadoop API OutputFormat
(mapreduce.OutputFormat) object supporting the key and value types K and V in this RDD.
- saveAsNewAPIHadoopFiles(String, String) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, Configuration) - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, Configuration) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsNewAPIHadoopFiles(String, String) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<F>, Configuration) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsNewAPIHadoopFiles(String, String, ClassTag<F>) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles(String, String, Class<?>, Class<?>, Class<? extends OutputFormat<?, ?>>, Configuration) - Method in class org.apache.spark.streaming.dstream.PairDStreamFunctions
-
Save each RDD in this
DStream as a Hadoop file.
- saveAsNewAPIHadoopFiles$default$6() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- saveAsNewAPIHadoopFiles$default$6() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.api.java.JavaRDD
-
- saveAsObjectFile(String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Save this RDD as a SequenceFile of serialized objects.
- saveAsObjectFile(String) - Static method in class org.apache.spark.api.r.RRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.graphx.VertexRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- saveAsObjectFile(String) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- saveAsObjectFile(String) - Method in class org.apache.spark.rdd.RDD
-
Save this RDD as a SequenceFile of serialized objects.
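A round-trip sketch (assuming an existing JavaSparkContext named jsc; path illustrative):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaRDD;

    JavaRDD<String> words = jsc.parallelize(Arrays.asList("a", "b", "c"));
    words.saveAsObjectFile("/tmp/words");                     // SequenceFile of serialized objects
    JavaRDD<String> restored = jsc.objectFile("/tmp/words");  // read them back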
- saveAsObjectFile(String) - Static method in class org.apache.spark.rdd.UnionRDD
-
- saveAsObjectFiles(String, String) - Method in class org.apache.spark.streaming.dstream.DStream
-
Save each RDD in this DStream as a SequenceFile of serialized objects.
- saveAsSequenceFile(String, Option<Class<? extends CompressionCodec>>) - Method in class org.apache.spark.rdd.SequenceFileRDDFunctions
-
Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key
and value types.
- saveAsTable(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame
as the specified table.
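A minimal sketch (assuming an existing Dataset<Row> named df and a catalog configured for the session; the table name is illustrative):

    df.write().mode("overwrite").saveAsTable("my_db.events");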
- saveAsTextFile(String) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.api.java.JavaRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- saveAsTextFile(String) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Save this RDD as a text file, using string representations of elements.
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Save this RDD as a compressed text file, using string representations of elements.
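A sketch of both overloads (assuming an existing JavaRDD<String> named lines; paths illustrative):

    import org.apache.hadoop.io.compress.GzipCodec;

    lines.saveAsTextFile("/tmp/plain");                     // one text part-file per partition
    lines.saveAsTextFile("/tmp/gzipped", GzipCodec.class);  // gzip-compressed part-files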
- saveAsTextFile(String) - Static method in class org.apache.spark.api.r.RRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.api.r.RRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- saveAsTextFile(String) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- saveAsTextFile(String) - Static method in class org.apache.spark.graphx.VertexRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- saveAsTextFile(String) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- saveAsTextFile(String) - Method in class org.apache.spark.rdd.RDD
-
Save this RDD as a text file, using string representations of elements.
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Method in class org.apache.spark.rdd.RDD
-
Save this RDD as a compressed text file, using string representations of elements.
- saveAsTextFile(String) - Static method in class org.apache.spark.rdd.UnionRDD
-
- saveAsTextFile(String, Class<? extends CompressionCodec>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- saveAsTextFiles(String, String) - Method in class org.apache.spark.streaming.dstream.DStream
-
Save each RDD in this DStream as a text file, using string representations
of elements.
- saveImpl(Params, PipelineStage[], SparkContext, String) - Method in class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
Save metadata and stages for a Pipeline or PipelineModel: metadata is saved to path/metadata and stages to stages/IDX_UID.
- saveImpl(M, String, SparkSession, JsonAST.JObject) - Static method in class org.apache.spark.ml.tree.EnsembleModelReadWrite
-
Helper method for saving a tree ensemble to disk.
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.clustering.BisectingKMeansModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.clustering.KMeansModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.clustering.PowerIterationClusteringModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.feature.ChiSqSelectorModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.fpm.FPGrowthModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.fpm.PrefixSpanModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.recommendation.MatrixFactorizationModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
- SaveLoadV1_0$() - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
- SaveLoadV2_0$() - Constructor for class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
- saveMode(String) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- SaveMode - Enum in org.apache.spark.sql
-
SaveMode is used to specify the expected behavior of saving a DataFrame to a data source.
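Its values (Append, Overwrite, ErrorIfExists, Ignore) plug into DataFrameWriter.mode; a sketch (df and path assumed):

    import org.apache.spark.sql.SaveMode;

    df.write().mode(SaveMode.Append).parquet("/tmp/out");  // append to existing data
    df.write().mode(SaveMode.Ignore).parquet("/tmp/out");  // silently no-op if data exists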
- sc() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- sc() - Method in class org.apache.spark.sql.SQLImplicits.StringToColumn
-
- scal(double, Vector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
x = a * x
- scal(double, Vector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
x = a * x
- scalaBoolean() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive boolean type.
- scalaByte() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive byte type.
- scalaDouble() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive double type.
- scalaFloat() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive float type.
- scalaInt() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive int type.
- scalaIntToJavaLong(DStream<Object>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- scalaIntToJavaLong(DStream<Object>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
- scalaIntToJavaLong(DStream<Object>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- scalaIntToJavaLong(DStream<Object>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- scalaIntToJavaLong(DStream<Object>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- scalaIntToJavaLong(DStream<Object>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- scalaIntToJavaLong(DStream<Object>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- scalaLong() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive long type.
- scalaShort() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for Scala's primitive short type.
- scalaToJavaLong(JavaPairDStream<K, Object>, ClassTag<K>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- scalaVersion() - Method in class org.apache.spark.status.api.v1.RuntimeInfo
-
- scale() - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- scale() - Method in class org.apache.spark.mllib.random.GammaGenerator
-
- scale() - Method in class org.apache.spark.sql.types.Decimal
-
- scale() - Method in class org.apache.spark.sql.types.DecimalType
-
- scalingVec() - Method in class org.apache.spark.ml.feature.ElementwiseProduct
-
The vector to multiply with input vectors.
- scalingVec() - Method in class org.apache.spark.mllib.feature.ElementwiseProduct
-
- scan(B, Function2<B, B, B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- scanLeft(B, Function2<B, A, B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- scanRight(B, Function2<A, B, B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
-
- SCHEDULED() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- SCHEDULER_DELAY() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- SCHEDULER_DELAY() - Static method in class org.apache.spark.ui.ToolTips
-
- schedulingDelay() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- schedulingDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
Time taken for the first job of this batch to start processing from the time this batch
was submitted to the streaming scheduler.
- SchedulingMode - Class in org.apache.spark.scheduler
-
"FAIR" and "FIFO" determines which policy is used
to order tasks amongst a Schedulable's sub-queues
"NONE" is used when the a Schedulable has no sub-queues.
- SchedulingMode() - Constructor for class org.apache.spark.scheduler.SchedulingMode
-
- schedulingMode() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- schedulingPool() - Method in class org.apache.spark.status.api.v1.StageData
-
- schedulingPool() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- schema(StructType) - Method in class org.apache.spark.sql.DataFrameReader
-
Specifies the input schema.
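Supplying the schema up front avoids the extra pass otherwise needed for inference; a Java sketch (assuming an existing SparkSession named spark; the file path is illustrative):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    StructType schema = new StructType(new StructField[]{
        DataTypes.createStructField("name", DataTypes.StringType, true),
        DataTypes.createStructField("age", DataTypes.IntegerType, true)
    });
    Dataset<Row> people = spark.read().schema(schema).json("/tmp/people.json");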
- schema() - Method in class org.apache.spark.sql.Dataset
-
Returns the schema of this Dataset.
- schema() - Method in interface org.apache.spark.sql.Encoder
-
Returns the schema of encoding this type of object as a Row.
- schema() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- schema() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- schema() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- schema() - Method in interface org.apache.spark.sql.Row
-
Schema for the row.
- schema() - Method in class org.apache.spark.sql.sources.BaseRelation
-
- schema(StructType) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Specifies the input schema.
- schemaLess() - Method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- SchemaRelationProvider - Interface in org.apache.spark.sql.sources
-
Implemented by objects that produce relations for a specific kind of data source
with a given schema.
- schemaString() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- schemaString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- schemaString() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- SchemaUtils - Class in org.apache.spark.ml.util
-
Utils for handling schemas.
- SchemaUtils() - Constructor for class org.apache.spark.ml.util.SchemaUtils
-
- SchemaUtils - Class in org.apache.spark.sql.util
-
Utils for handling schemas.
- SchemaUtils() - Constructor for class org.apache.spark.sql.util.SchemaUtils
-
- scope() - Method in class org.apache.spark.storage.RDDInfo
-
- scoreAndLabels() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
- scratch() - Method in class org.apache.spark.mllib.optimization.NNLS.Workspace
-
- script() - Method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- ScriptTransformationExec - Class in org.apache.spark.sql.hive.execution
-
Transforms the input by forking and running the specified script.
- ScriptTransformationExec(Seq<Expression>, String, Seq<Attribute>, SparkPlan, HiveScriptIOSchema) - Constructor for class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- ScriptTransformationWriterThread - Class in org.apache.spark.sql.hive.execution
-
- ScriptTransformationWriterThread(Iterator<InternalRow>, Seq<DataType>, org.apache.spark.sql.catalyst.expressions.Projection, AbstractSerDe, ObjectInspector, HiveScriptIOSchema, OutputStream, Process, org.apache.spark.util.CircularBuffer, TaskContext, Configuration) - Constructor for class org.apache.spark.sql.hive.execution.ScriptTransformationWriterThread
-
- second(Column) - Static method in class org.apache.spark.sql.functions
-
Extracts the seconds as an integer from a given date/timestamp/string.
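For illustration (assuming df has a timestamp column "ts"):

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.second;

    df.select(second(col("ts")).alias("sec")).show();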
- seconds() - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- seconds(long) - Static method in class org.apache.spark.streaming.Durations
-
- Seconds - Class in org.apache.spark.streaming
-
Helper object that creates instances of
Duration
representing
a given number of seconds.
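These durations usually appear as the batch interval of a streaming context; a sketch:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("app");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10)); // 10s batches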
- Seconds() - Constructor for class org.apache.spark.streaming.Seconds
-
- securityManager() - Method in class org.apache.spark.SparkEnv
-
- seed() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- seed() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- seed() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- seed() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- seed() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- seed() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- seed() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- seed() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- seed() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- seed() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- seed() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- seed() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- seed() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- seed() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- seed() - Static method in class org.apache.spark.ml.clustering.LDA
-
- seed() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- seed() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- seed() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- seed() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- seed() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- seed() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- seed() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- seed() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- seed() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- seed() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- seed() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- seed() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- seed() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- seed() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- seed() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- seed() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- seedBrokers() - Method in class org.apache.spark.streaming.kafka.KafkaCluster.SimpleConsumerConfig
-
- segmentLength(Function1<A, Object>, int) - Static method in class org.apache.spark.sql.types.StructType
-
- select(Column...) - Method in class org.apache.spark.sql.Dataset
-
Selects a set of column based expressions.
- select(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Selects a set of columns.
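A sketch of the two styles (assuming df has columns "name" and "age"):

    import static org.apache.spark.sql.functions.col;

    df.select("name", "age");                    // by column name
    df.select(col("name"), col("age").plus(1));  // by Column expression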
- select(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Selects a set of column based expressions.
- select(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Selects a set of columns.
- select(TypedColumn<T, U1>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
Returns a new Dataset by computing the given
Column
expression for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
Returns a new Dataset by computing the given
Column
expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
Returns a new Dataset by computing the given
Column
expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
Returns a new Dataset by computing the given
Column
expressions for each element.
- select(TypedColumn<T, U1>, TypedColumn<T, U2>, TypedColumn<T, U3>, TypedColumn<T, U4>, TypedColumn<T, U5>) - Method in class org.apache.spark.sql.Dataset
-
:: Experimental ::
Returns a new Dataset by computing the given
Column
expressions for each element.
- selectedFeatures() - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
List of indices to select (filter).
- selectedFeatures() - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
- selectExpr(String...) - Method in class org.apache.spark.sql.Dataset
-
Selects a set of SQL expressions.
- selectExpr(Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Selects a set of SQL expressions.
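selectExpr is a shorthand for select with SQL fragments; for illustration (columns "a" and "b" assumed):

    df.selectExpr("a", "b * 2 AS b_doubled", "abs(b) AS b_abs");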
- selectorType() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- selectorType() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- selectorType() - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- semanticHash() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- semanticHash() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- semanticHash() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- sender() - Method in class org.apache.spark.storage.BlockManagerMessages.RegisterBlockManager
-
- sendToDst(A) - Method in class org.apache.spark.graphx.EdgeContext
-
Sends a message to the destination vertex.
- sendToDst(A) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- sendToSrc(A) - Method in class org.apache.spark.graphx.EdgeContext
-
Sends a message to the source vertex.
- sendToSrc(A) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- seq() - Static method in class org.apache.spark.sql.types.StructType
-
- seqToString(Seq<T>, Function1<T, String>) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- sequence() - Method in class org.apache.spark.mllib.fpm.PrefixSpan.FreqSequence
-
- sequenceFile(String, Class<K>, Class<V>, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop SequenceFile with given key and value types.
- sequenceFile(String, Class<K>, Class<V>) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Get an RDD for a Hadoop SequenceFile.
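A Java sketch of the two-class overload (assuming an existing JavaSparkContext named jsc; path illustrative):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.spark.api.java.JavaPairRDD;

    JavaPairRDD<Text, IntWritable> records =
        jsc.sequenceFile("/tmp/seq-input", Text.class, IntWritable.class);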
- sequenceFile(String, Class<K>, Class<V>, int) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop SequenceFile with given key and value types.
- sequenceFile(String, Class<K>, Class<V>) - Method in class org.apache.spark.SparkContext
-
Get an RDD for a Hadoop SequenceFile with given key and value types.
- sequenceFile(String, int, ClassTag<K>, ClassTag<V>, Function0<WritableConverter<K>>, Function0<WritableConverter<V>>) - Method in class org.apache.spark.SparkContext
-
Version of sequenceFile() for types implicitly convertible to Writables through a
WritableConverter.
- SequenceFileRDDFunctions<K,V> - Class in org.apache.spark.rdd
-
Extra functions available on RDDs of (key, value) pairs to create a Hadoop SequenceFile,
through an implicit conversion.
- SequenceFileRDDFunctions(RDD<Tuple2<K, V>>, Class<? extends Writable>, Class<? extends Writable>, Function1<K, Writable>, ClassTag<K>, Function1<V, Writable>, ClassTag<V>) - Constructor for class org.apache.spark.rdd.SequenceFileRDDFunctions
-
- SerDe - Class in org.apache.spark.api.r
-
Utility functions to serialize and deserialize objects to/from R.
- SerDe() - Constructor for class org.apache.spark.api.r.SerDe
-
- SERDE() - Static method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- serde() - Method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- serdeProperties() - Method in class org.apache.spark.sql.hive.execution.HiveOptions
-
- SerializableMapWrapper(Map<A, B>) - Constructor for class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
-
- SerializableWritable<T extends org.apache.hadoop.io.Writable> - Class in org.apache.spark
-
- SerializableWritable(T) - Constructor for class org.apache.spark.SerializableWritable
-
- SerializationDebugger - Class in org.apache.spark.serializer
-
- SerializationDebugger() - Constructor for class org.apache.spark.serializer.SerializationDebugger
-
- SerializationDebugger.ObjectStreamClassMethods - Class in org.apache.spark.serializer
-
An implicit class that allows us to call private methods of ObjectStreamClass.
- SerializationDebugger.ObjectStreamClassMethods$ - Class in org.apache.spark.serializer
-
- SerializationFormats - Class in org.apache.spark.api.r
-
- SerializationFormats() - Constructor for class org.apache.spark.api.r.SerializationFormats
-
- SerializationStream - Class in org.apache.spark.serializer
-
:: DeveloperApi ::
A stream for writing serialized objects.
- SerializationStream() - Constructor for class org.apache.spark.serializer.SerializationStream
-
- serialize(Vector) - Method in class org.apache.spark.mllib.linalg.VectorUDT
-
- serialize(T, ClassTag<T>) - Method in class org.apache.spark.serializer.DummySerializerInstance
-
- serialize(T, ClassTag<T>) - Method in class org.apache.spark.serializer.SerializerInstance
-
- serialize(T) - Static method in class org.apache.spark.util.Utils
-
Serialize an object using Java serialization.
- SERIALIZED_R_DATA_SCHEMA() - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- serializedData() - Method in class org.apache.spark.scheduler.local.StatusUpdate
-
- SerializedMemoryEntry<T> - Class in org.apache.spark.storage.memory
-
- SerializedMemoryEntry(org.apache.spark.util.io.ChunkedByteBuffer, MemoryMode, ClassTag<T>) - Constructor for class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- Serializer - Class in org.apache.spark.serializer
-
:: DeveloperApi ::
A serializer.
- Serializer() - Constructor for class org.apache.spark.serializer.Serializer
-
- serializer() - Method in class org.apache.spark.ShuffleDependency
-
- serializer() - Method in class org.apache.spark.SparkEnv
-
- SerializerInstance - Class in org.apache.spark.serializer
-
:: DeveloperApi ::
An instance of a serializer, for use by one thread at a time.
- SerializerInstance() - Constructor for class org.apache.spark.serializer.SerializerInstance
-
- serializerManager() - Method in class org.apache.spark.SparkEnv
-
- serializeStream(OutputStream) - Method in class org.apache.spark.serializer.DummySerializerInstance
-
- serializeStream(OutputStream) - Method in class org.apache.spark.serializer.SerializerInstance
-
- serializeViaNestedStream(OutputStream, SerializerInstance, Function1<SerializationStream, BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Serialize via a nested stream, using a specific serializer.
- ServletParams(Function1<HttpServletRequest, T>, String, Function1<T, String>, Function1<T, Object>) - Constructor for class org.apache.spark.ui.JettyUtils.ServletParams
-
- ServletParams$() - Constructor for class org.apache.spark.ui.JettyUtils.ServletParams$
-
- session(SparkSession) - Static method in class org.apache.spark.ml.r.RWrappers
-
- session(SparkSession) - Method in class org.apache.spark.ml.util.MLReader
-
- session(SparkSession) - Method in class org.apache.spark.ml.util.MLWriter
-
- session() - Static method in class org.apache.spark.sql.hive.HiveSessionStateBuilder
-
- sessionCatalog() - Method in class org.apache.spark.sql.hive.RelationConversions
-
- sessionState() - Method in class org.apache.spark.sql.SparkSession
-
State isolated across sessions, including SQL configurations, temporary tables, registered
functions, and everything else that accepts a SQLConf.
- set(long, long, int, int, VD, VD, ED) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.KMeans
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.LDA
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Binarizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.DCT
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.HashingTF
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.IDF
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Imputer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.IndexToString
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Interaction
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.NGram
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.PCA
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.RFormula
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- set(Param<T>, T) - Method in interface org.apache.spark.ml.param.Params
-
Sets a parameter in the embedded param map.
- set(String, Object) - Method in interface org.apache.spark.ml.param.Params
-
Sets a parameter (by name) in the embedded param map.
- set(ParamPair<?>) - Method in interface org.apache.spark.ml.param.Params
-
Sets a parameter in the embedded param map.
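For illustration, a minimal Scala sketch of param setting on an ML stage; LogisticRegression and its maxIter/regParam params are real, the values arbitrary. The typed setters shown delegate to set(param, value) on the embedded param map.

    import org.apache.spark.ml.classification.LogisticRegression

    val lr = new LogisticRegression()
    lr.setMaxIter(25)           // same effect as set(lr.maxIter, 25)
    lr.setRegParam(0.1)
    println(lr.explainParams()) // shows which params are set vs. defaulted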
- set(Param<T>, T) - Static method in class org.apache.spark.ml.Pipeline
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.PipelineModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.recommendation.ALS
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- set(Param<T>, T) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- set(String, long, long) - Static method in class org.apache.spark.rdd.InputFileBlockHolder
-
Sets the thread-local input block.
- set(String, String) - Method in class org.apache.spark.SparkConf
-
Set a configuration variable.
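A minimal sketch of SparkConf configuration; the property keys are standard Spark settings, the values arbitrary.

    import org.apache.spark.SparkConf

    // set(...) returns the conf itself, so configuration calls chain.
    val conf = new SparkConf()
      .set("spark.executor.memory", "2g")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")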
- set(SparkEnv) - Static method in class org.apache.spark.SparkEnv
-
- set(String, String) - Method in class org.apache.spark.sql.RuntimeConfig
-
Sets the given Spark runtime configuration property.
- set(String, boolean) - Method in class org.apache.spark.sql.RuntimeConfig
-
Sets the given Spark runtime configuration property.
- set(String, long) - Method in class org.apache.spark.sql.RuntimeConfig
-
Sets the given Spark runtime configuration property.
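A short sketch of the three RuntimeConfig.set overloads, assuming spark is an existing SparkSession; the keys are standard SQL properties, the values arbitrary.

    spark.conf.set("spark.sql.session.timeZone", "UTC")    // String overload
    spark.conf.set("spark.sql.shuffle.partitions", 64L)    // long overload
    spark.conf.set("spark.sql.crossJoin.enabled", true)    // boolean overload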
- set(long) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given Long.
- set(int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given Int.
- set(long, int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given unscaled Long, with a given precision and scale.
- set(BigDecimal, int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given BigDecimal value, with a given precision and scale.
- set(BigDecimal) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given BigDecimal value, inheriting its precision and scale.
- set(BigInteger) - Method in class org.apache.spark.sql.types.Decimal
-
If the value is not in the range of a long, it is converted to a BigDecimal, and the precision and scale are based on the converted value.
- set(Decimal) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given Decimal value.
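A sketch of the mutable Decimal setters listed above; each overload mutates the Decimal in place and returns it. The values are arbitrary.

    import org.apache.spark.sql.types.Decimal

    val d = new Decimal()
    d.set(42L)                    // from a Long
    d.set(123456789L, 12, 3)      // unscaled Long, precision 12, scale 3 => 123456.789
    d.set(BigDecimal("3.14159"))  // inherits the BigDecimal's precision and scale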
- setAcceptsNull(boolean) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
-
- setActive(SQLContext) - Static method in class org.apache.spark.sql.SQLContext
-
- setActiveSession(SparkSession) - Static method in class org.apache.spark.sql.SparkSession
-
Changes the SparkSession that will be returned in this thread and its children when
SparkSession.getOrCreate() is called.
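A sketch of switching the thread's active session, assuming a local master; newSession() shares the SparkContext but keeps independent SQL state.

    import org.apache.spark.sql.SparkSession

    val base  = SparkSession.builder().master("local[*]").appName("demo").getOrCreate()
    val other = base.newSession()        // same context, separate SQL conf/temp views
    SparkSession.setActiveSession(other)
    // builder().getOrCreate() now returns `other` on this thread:
    assert(SparkSession.builder().getOrCreate() eq other)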
- setAggregationDepth(int) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregationDepth(int) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregationDepth(int) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregationDepth(int) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Suggested depth for treeAggregate (greater than or equal to 2).
- setAggregator(Aggregator<K, V, C>) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set aggregator for RDD's shuffle.
- setAlgo(String) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Sets the algorithm using a String.
- setAll(Traversable<Tuple2<String, String>>) - Method in class org.apache.spark.SparkConf
-
Set multiple parameters together.
- setAlpha(double) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setAlpha(Vector) - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for setDocConcentration()
- setAlpha(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for setDocConcentration()
- setAlpha(double) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets the constant used in computing confidence in implicit ALS.
- setAppName(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set the application name.
- setAppName(String) - Method in class org.apache.spark.SparkConf
-
Set a name for your application.
- setAppResource(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set the main application resource.
- setBandwidth(double) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Sets the bandwidth (standard deviation) of the Gaussian kernel (default: 1.0).
- setBeta(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Alias for setTopicConcentration()
- setBinary(boolean) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- setBinary(boolean) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- setBinary(boolean) - Method in class org.apache.spark.ml.feature.HashingTF
-
- setBinary(boolean) - Method in class org.apache.spark.mllib.feature.HashingTF
-
If true, the term frequency vector will be binary, i.e., non-zero term counts are set to 1 (default: false).
- setBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of blocks for both user blocks and product blocks to parallelize the computation
into; pass -1 for an auto-configured number of blocks.
- setBlockSize(int) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param blockSize.
- setBucketLength(double) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- setCacheNodeIds(boolean) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setCacheNodeIds(boolean) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setCacheNodeIds(boolean) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setCacheNodeIds(boolean) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setCacheNodeIds(boolean) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setCacheNodeIds(boolean) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setCacheNodeIds(boolean) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setCallSite(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Pass-through to SparkContext.setCallSite.
- setCallSite(String) - Method in class org.apache.spark.SparkContext
-
Set the thread-local property for overriding the call sites
of actions and RDDs.
- setCaseSensitive(boolean) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- setCategoricalFeaturesInfo(Map<Integer, Integer>) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
Sets categoricalFeaturesInfo using a Java Map.
- setCensorCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- setCheckpointDir(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Set the directory under which RDDs are going to be checkpointed.
- setCheckpointDir(String) - Method in class org.apache.spark.SparkContext
-
Set the directory under which RDDs are going to be checkpointed.
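A sketch of RDD checkpointing, assuming sc is an existing SparkContext; the directory path is arbitrary (use an HDFS path on a real cluster).

    sc.setCheckpointDir("/tmp/spark-checkpoints")  // arbitrary local path for this sketch
    val rdd = sc.parallelize(1 to 100).map(_ * 2)
    rdd.checkpoint()  // marked now, materialized on the next action
    rdd.count()       // triggers the checkpoint write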
- setCheckpointInterval(int) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.clustering.LDA
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setCheckpointInterval(int) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setCheckpointInterval(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Specifies how often to checkpoint the cached node IDs.
- setCheckpointInterval(int) - Method in class org.apache.spark.mllib.clustering.LDA
-
Sets the checkpoint interval (greater than or equal to 1); pass -1 to disable checkpointing.
- setCheckpointInterval(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
:: DeveloperApi ::
Set period (in iterations) between checkpoints (default = 10).
- setCheckpointInterval(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setClassifier(Classifier<?, ?, ?>) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- setColdStartStrategy(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setColdStartStrategy(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- setConf(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set a single configuration value for the application.
- setConf(Properties) - Method in class org.apache.spark.sql.SQLContext
-
Set Spark SQL configuration properties.
- setConf(String, String) - Method in class org.apache.spark.sql.SQLContext
-
Set the given Spark SQL configuration property.
- setConfig(String, String) - Static method in class org.apache.spark.launcher.SparkLauncher
-
Set a configuration value for the launcher library.
- setConsumerOffsetMetadata(String, Map<TopicAndPartition, OffsetAndMetadata>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
Requires Kafka 0.8.1.1 or later.
- setConsumerOffsetMetadata(String, Map<TopicAndPartition, OffsetAndMetadata>, short) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- setConsumerOffsets(String, Map<TopicAndPartition, Object>) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
Requires Kafka 0.8.1.1 or later.
- setConsumerOffsets(String, Map<TopicAndPartition, Object>, short) - Method in class org.apache.spark.streaming.kafka.KafkaCluster
-
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the largest change in log-likelihood at which convergence is
considered to have occurred.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the convergence tolerance.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the convergence tolerance of iterations for L-BFGS.
- setConvergenceTol(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the convergence tolerance.
- setCurrentDatabase(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Sets the current default database in this session.
- setCustomHostname(String) - Static method in class org.apache.spark.util.Utils
-
Allow setting a custom host name because when we run on Mesos we need to use the same
hostname it reports to the master.
- setDecayFactor(double) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Set the forgetfulness of the previous centroids.
- setDefault(Param<T>, T) - Method in interface org.apache.spark.ml.param.Params
-
Sets a default value for a param.
- setDefault(Seq<ParamPair<?>>) - Method in interface org.apache.spark.ml.param.Params
-
Sets default values for a list of params.
- setDefaultClassLoader(ClassLoader) - Static method in class org.apache.spark.serializer.KryoSerializer
-
- setDefaultClassLoader(ClassLoader) - Method in class org.apache.spark.serializer.Serializer
-
Sets a class loader for the serializer to use in deserialization.
- setDefaultSession(SparkSession) - Static method in class org.apache.spark.sql.SparkSession
-
Sets the default SparkSession that is returned by the builder.
- setDegree(int) - Method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- setDeployMode(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set the deploy mode for the application.
- setDocConcentration(double[]) - Method in class org.apache.spark.ml.clustering.LDA
-
- setDocConcentration(double) - Method in class org.apache.spark.ml.clustering.LDA
-
- setDocConcentration(Vector) - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "alpha") for the prior placed on documents'
distributions over topics ("theta").
- setDocConcentration(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Replicates a Double docConcentration to create a symmetric prior.
- setDropLast(boolean) - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- setElasticNetParam(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the ElasticNet mixing parameter.
- setElasticNetParam(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the ElasticNet mixing parameter.
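A sketch of the elastic-net mixing parameter on LinearRegression; the values are arbitrary. A mixing parameter of 0 gives a pure L2 (ridge) penalty, 1 a pure L1 (lasso) penalty.

    import org.apache.spark.ml.regression.LinearRegression

    val lr = new LinearRegression()
      .setRegParam(0.3)         // overall regularization strength
      .setElasticNetParam(0.5)  // 50/50 mix of L1 and L2 penalties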
- setEpsilon(double) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the distance threshold within which we consider centers to have converged.
- setEstimator(Estimator<?>) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- setEstimator(Estimator<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- setEstimatorParamMaps(ParamMap[]) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- setEstimatorParamMaps(ParamMap[]) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- setEvaluator(Evaluator) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- setEvaluator(Evaluator) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- setExecutorEnv(String, String) - Method in class org.apache.spark.SparkConf
-
Set an environment variable to be used when launching executors for this application.
- setExecutorEnv(Seq<Tuple2<String, String>>) - Method in class org.apache.spark.SparkConf
-
Set multiple environment variables to be used when launching executors.
- setExecutorEnv(Tuple2<String, String>[]) - Method in class org.apache.spark.SparkConf
-
Set multiple environment variables to be used when launching executors.
- setFamily(String) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Sets the value of param family.
- setFamily(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param family.
- setFdr(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setFdr(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- setFeatureIndex(int) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- setFeatureIndex(int) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.KMeansModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.LDA
-
The features for LDA should be a Vector
representing the word counts in a document.
- setFeaturesCol(String) - Method in class org.apache.spark.ml.clustering.LDAModel
-
The features for LDA should be a Vector
representing the word counts in a document.
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.feature.RFormula
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.PredictionModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.Predictor
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- setFeaturesCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setFeaturesCol(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setFeatureSubsetStrategy(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setFeatureSubsetStrategy(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setFeatureSubsetStrategy(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setFinalRDDStorageLevel(StorageLevel) - Method in class org.apache.spark.mllib.recommendation.ALS
-
:: DeveloperApi ::
Sets storage level for final RDDs (user/product used in MatrixFactorizationModel).
- setFinalStorageLevel(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Whether to fit an intercept term.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Whether to fit an intercept term.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Set whether we should fit the intercept. Default is true.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets if we should fit the intercept.
- setFitIntercept(boolean) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set if we should fit the intercept.
- setForceIndexLabel(boolean) - Method in class org.apache.spark.ml.feature.RFormula
-
- setFormula(String) - Method in class org.apache.spark.ml.feature.RFormula
-
Sets the formula to use for this transformer.
- setFpr(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setFpr(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- setFwe(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setFwe(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- setGaps(boolean) - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- setGenerics(Kryo, Class<?>[]) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
-
- setGradient(Gradient) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the gradient function (of the loss function of a single data example) to be used for SGD.
- setGradient(Gradient) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the gradient function (of the loss function of a single data example) to be used for L-BFGS.
- setHalfLife(double, String) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Set the half life and time unit ("batches" or "points").
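A sketch of streaming k-means decay configuration; k, the half-life, and the dimensions are arbitrary.

    import org.apache.spark.mllib.clustering.StreamingKMeans

    // A half-life of 5 batches halves each point's weight every 5 batches.
    val model = new StreamingKMeans()
      .setK(3)
      .setHalfLife(5.0, "batches")
      .setRandomCenters(dim = 2, weight = 0.0)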
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.StringIndexer
-
- setHandleInvalid(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- setHashAlgorithm(String) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Set the hash algorithm used when mapping terms to integers.
- setIfMissing(String, String) - Method in class org.apache.spark.SparkConf
-
Set a parameter if it isn't already configured.
- setImmutable(boolean) - Static method in class org.apache.spark.serializer.JavaIterableWrapperSerializer
-
- setImplicitPrefs(boolean) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setImplicitPrefs(boolean) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets whether to use implicit preference.
- setImpurity(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setImpurity(String) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setImpurity(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setImpurity(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
The impurity setting is ignored for GBT models.
- setImpurity(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setImpurity(String) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setImpurity(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setImpurity(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setImpurity(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setImpurity(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
The impurity setting is ignored for GBT models.
- setImpurity(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setImpurity(String) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setImpurity(Impurity) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setIndices(int[]) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- setInitialCenters(Vector[], double[]) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Specify initial centers directly.
- setInitializationMode(String) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the initialization algorithm.
- setInitializationMode(String) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Set the initialization mode.
- setInitializationSteps(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the number of steps for the k-means|| initialization mode.
- setInitialModel(GaussianMixtureModel) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the initial GMM starting point, bypassing the random initialization.
- setInitialModel(KMeansModel) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the initial starting point, bypassing the random initialization or k-means||. The condition model.k == this.k must be met; failure results in an IllegalArgumentException.
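A sketch of seeding k-means with explicit centers; the centers are arbitrary, and the seed model's k must equal the configured k.

    import org.apache.spark.mllib.clustering.{KMeans, KMeansModel}
    import org.apache.spark.mllib.linalg.Vectors

    val centers = Array(Vectors.dense(0.0, 0.0), Vectors.dense(5.0, 5.0))
    val kmeans = new KMeans()
      .setK(2)                                   // must match centers.length
      .setInitialModel(new KMeansModel(centers))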
- setInitialWeights(Vector) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param initialWeights.
- setInitialWeights(Vector) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the initial weights.
- setInitialWeights(Vector) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the initial weights.
- setInitMode(String) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setInitSteps(int) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Binarizer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.DCT
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.HashingTF
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.IDF
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.IDFModel
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.IndexToString
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSH
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.NGram
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.PCA
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.PCAModel
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- setInputCol(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setInputCol(String) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
- setInputCol(String) - Method in class org.apache.spark.ml.UnaryTransformer
-
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Imputer
-
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.ImputerModel
-
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.Interaction
-
- setInputCols(String[]) - Method in class org.apache.spark.ml.feature.VectorAssembler
-
- setIntercept(boolean) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
- setIntercept(boolean) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
- setIntercept(boolean) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Set if the algorithm should add an intercept.
- setIntercept(boolean) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
- setIntercept(boolean) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
- setIntercept(boolean) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
- setIntermediateRDDStorageLevel(StorageLevel) - Method in class org.apache.spark.mllib.recommendation.ALS
-
:: DeveloperApi ::
Sets storage level for intermediate RDDs (user/product in/out links).
- setIntermediateStorageLevel(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setInverse(boolean) - Method in class org.apache.spark.ml.feature.DCT
-
- setIsotonic(boolean) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- setIsotonic(boolean) - Method in class org.apache.spark.mllib.regression.IsotonicRegression
-
Sets the isotonic parameter.
- setItemCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setItemCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- setItemsCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- setItemsCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- setIterations(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of iterations to run.
- setJars(Seq<String>) - Method in class org.apache.spark.SparkConf
-
Set JAR files to distribute to the cluster.
- setJars(String[]) - Method in class org.apache.spark.SparkConf
-
Set JAR files to distribute to the cluster.
- setJavaHome(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set a custom JAVA_HOME for launching the Spark application.
- setJobDescription(String) - Method in class org.apache.spark.SparkContext
-
Set a human readable description of the current job.
- setJobGroup(String, String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Assigns a group ID to all the jobs started by this thread until the group ID is set to a
different value or cleared.
- setJobGroup(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Assigns a group ID to all the jobs started by this thread until the group ID is set to a
different value or cleared.
- setJobGroup(String, String, boolean) - Method in class org.apache.spark.SparkContext
-
Assigns a group ID to all the jobs started by this thread until the group ID is set to a
different value or cleared.
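A sketch of job-group tagging and cancellation, assuming sc is an existing SparkContext; the group ID and description are arbitrary.

    sc.setJobGroup("nightly-etl", "nightly ETL jobs", interruptOnCancel = true)
    // ... actions submitted from this thread now carry the group ID ...
    sc.cancelJobGroup("nightly-etl")  // cancels every running job in the group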
- setK(int) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- setK(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setK(int) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setK(int) - Method in class org.apache.spark.ml.clustering.LDA
-
- setK(int) - Method in class org.apache.spark.ml.feature.PCA
-
- setK(int) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the desired number of leaf clusters (default: 4).
- setK(int) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the number of Gaussians in the mixture model.
- setK(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the number of clusters to create (k).
- setK(int) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the number of topics to infer, i.e., the number of soft cluster centers.
- setK(int) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Set the number of clusters.
- setK(int) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Set the number of clusters.
- setKappa(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Learning rate: exponential decay rate; should be in the interval (0.5, 1.0] to guarantee asymptotic convergence.
- setKeepLastCheckpoint(boolean) - Method in class org.apache.spark.ml.clustering.LDA
-
- setKeepLastCheckpoint(boolean) - Method in class org.apache.spark.mllib.clustering.EMLDAOptimizer
-
If using checkpointing, this indicates whether to keep the last checkpoint (versus cleaning it up).
- setKeyOrdering(Ordering<K>) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set key ordering for RDD's shuffle.
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- setLabelCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- setLabelCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- setLabelCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setLabelCol(String) - Method in class org.apache.spark.ml.feature.RFormula
-
- setLabelCol(String) - Method in class org.apache.spark.ml.Predictor
-
- setLabelCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- setLabelCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- setLabelCol(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setLabels(String[]) - Method in class org.apache.spark.ml.feature.IndexToString
-
- setLambda(double) - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Set the smoothing parameter.
- setLambda(double) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the regularization parameter, lambda.
- setLayers(int[]) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param layers.
- setLearningDecay(double) - Method in class org.apache.spark.ml.clustering.LDA
-
- setLearningOffset(double) - Method in class org.apache.spark.ml.clustering.LDA
-
- setLearningRate(double) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the initial learning rate (default: 0.025).
- setLearningRate(double) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- setLink(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param link.
- setLinkPower(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param linkPower.
- setLinkPredictionCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the link prediction (linear predictor) column name.
- setLinkPredictionCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
Sets the link prediction (linear predictor) column name.
- setLocalProperty(String, String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Set a local property that affects jobs submitted from this thread, and all child
threads, such as the Spark fair scheduler pool.
- setLocalProperty(String, String) - Method in class org.apache.spark.SparkContext
-
Set a local property that affects jobs submitted from this thread, such as the Spark fair
scheduler pool.
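A sketch of routing a thread's jobs to a fair-scheduler pool, assuming sc is an existing SparkContext and fair scheduling is enabled (spark.scheduler.mode=FAIR); the pool name is arbitrary.

    sc.setLocalProperty("spark.scheduler.pool", "production")
    // ... actions from this thread run in the "production" pool ...
    sc.setLocalProperty("spark.scheduler.pool", null)  // revert to the default pool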
- setLogLevel(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Control our logLevel.
- setLogLevel(String) - Method in class org.apache.spark.SparkContext
-
Control our logLevel.
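For reference, a one-line sketch, assuming sc is an existing SparkContext; valid levels include ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, and WARN.

    sc.setLogLevel("WARN")  // overrides any user-defined log settings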
- setLogLevel(Level) - Static method in class org.apache.spark.util.Utils
-
Configure a new log4j level.
- setLoss(Loss) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- setLossType(String) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setLossType(String) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setLowerBoundsOnCoefficients(Matrix) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the lower bounds on coefficients if fitting under bound constrained optimization.
- setLowerBoundsOnIntercepts(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the lower bounds on intercepts if fitting under bound constrained optimization.
- setMainClass(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Sets the application class name for Java/Scala applications.
- setMapSideCombine(boolean) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set mapSideCombine flag for RDD's shuffle.
- setMaster(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set the Spark master for the application.
- setMaster(String) - Method in class org.apache.spark.SparkConf
-
The master URL to connect to, such as "local" to run locally with one thread, "local[4]" to
run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone cluster.
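A sketch of the master URLs named in the description above; the app name is arbitrary.

    import org.apache.spark.SparkConf

    val conf = new SparkConf().setAppName("demo").setMaster("local[4]")
    // Alternatives: .setMaster("local") for one thread,
    // or .setMaster("spark://master:7077") for a standalone cluster.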
- setMax(double) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- setMax(double) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- setMaxBins(int) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setMaxBins(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setMaxBins(int) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setMaxBins(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setMaxBins(int) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setMaxBins(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setMaxBins(int) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setMaxBins(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setMaxBins(int) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setMaxBins(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setMaxBins(int) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setMaxBins(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setMaxBins(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setMaxCategories(int) - Method in class org.apache.spark.ml.feature.VectorIndexer
-
- setMaxDepth(int) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setMaxDepth(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setMaxDepth(int) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setMaxDepth(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setMaxDepth(int) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setMaxDepth(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setMaxDepth(int) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setMaxDepth(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setMaxDepth(int) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setMaxDepth(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setMaxDepth(int) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setMaxDepth(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setMaxDepth(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setMaxIter(int) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Set the maximum number of iterations.
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setMaxIter(int) - Method in class org.apache.spark.ml.clustering.LDA
-
- setMaxIter(int) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setMaxIter(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Set the maximum number of iterations.
- setMaxIter(int) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the maximum number of iterations (applicable for solver "irls").
- setMaxIter(int) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the maximum number of iterations.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the max number of k-means iterations to split clusters (default: 20).
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the maximum number of iterations allowed.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set maximum number of iterations allowed.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the maximum number of iterations allowed.
- setMaxIterations(int) - Method in class org.apache.spark.mllib.clustering.PowerIterationClustering
-
Set the maximum number of iterations of the power iteration loop.
- setMaxLocalProjDBSize(long) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Sets the maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing (default: 32000000L).
- setMaxMemoryInMB(int) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setMaxMemoryInMB(int) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setMaxMemoryInMB(int) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setMaxMemoryInMB(int) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setMaxMemoryInMB(int) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setMaxMemoryInMB(int) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setMaxMemoryInMB(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setMaxPatternLength(int) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Sets the maximal pattern length (default: 10).
- setMaxSentenceLength(int) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setMaxSentenceLength(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the maximum length (in words) of each sentence in the input data.
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- setMetricName(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- setMin(double) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- setMin(double) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- setMinConfidence(double) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- setMinConfidence(double) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- setMinConfidence(double) - Method in class org.apache.spark.mllib.fpm.AssociationRules
-
Sets the minimal confidence (default: 0.8).
- setMinCount(int) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setMinCount(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets minCount, the minimum number of times a token must appear to be included in the word2vec
model's vocabulary (default: 5).
- setMinDF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- setMinDivisibleClusterSize(double) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- setMinDivisibleClusterSize(double) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster (default: 1).
- setMinDocFreq(int) - Method in class org.apache.spark.ml.feature.IDF
-
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the fraction of each batch to use for updates.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Mini-batch fraction in (0, 1], which sets the fraction of documents sampled and used in each iteration.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set fraction of data to be used for each SGD iteration.
- setMiniBatchFraction(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the fraction of each batch to use for updates.
- setMinInfoGain(double) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setMinInfoGain(double) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setMinInfoGain(double) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setMinInfoGain(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setMinInfoGain(double) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setMinInfoGain(double) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setMinInfoGain(double) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setMinInfoGain(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setMinInfoGain(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setMinInstancesPerNode(int) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setMinInstancesPerNode(int) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setMinInstancesPerNode(int) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setMinInstancesPerNode(int) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setMinInstancesPerNode(int) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setMinInstancesPerNode(int) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setMinInstancesPerNode(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setMinSupport(double) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- setMinSupport(double) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Sets the minimal support level (default: 0.3).
- setMinSupport(double) - Method in class org.apache.spark.mllib.fpm.PrefixSpan
-
Sets the minimal support level (default: 0.1).
- setMinTF(double) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- setMinTF(double) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- setMinTokenLength(int) - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- setMissingValue(double) - Method in class org.apache.spark.ml.feature.Imputer
-
- setModelType(String) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
Set the model type using a string (case-sensitive).
- setModelType(String) - Method in class org.apache.spark.mllib.classification.NaiveBayes
-
Set the model type using a string (case-sensitive).
- setN(int) - Method in class org.apache.spark.ml.feature.NGram
-
- setName(String) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Assign a name to this RDD.
- setName(String) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Assign a name to this RDD.
- setName(String) - Method in class org.apache.spark.api.java.JavaRDD
-
Assign a name to this RDD.
- setName(String) - Static method in class org.apache.spark.api.r.RRDD
-
- setName(String) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- setName(String) - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- setName(String) - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- setName(String) - Static method in class org.apache.spark.graphx.VertexRDD
-
- setName(String) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- setName(String) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- setName(String) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- setName(String) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- setName(String) - Method in class org.apache.spark.rdd.RDD
-
Assign a name to this RDD.
- setName(String) - Static method in class org.apache.spark.rdd.UnionRDD
-
- setNames(String[]) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- setNonnegative(boolean) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setNonnegative(boolean) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set whether the least-squares problems solved at each iteration should have
nonnegativity constraints.
- setNumBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
Sets both numUserBlocks and numItemBlocks to the given value.
- setNumBuckets(int) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- setNumClasses(int) - Method in class org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
-
Set the number of possible outcomes for a k-class classification problem in
Multinomial Logistic Regression.
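For example, a hedged sketch of a three-class configuration, where training is an assumed RDD[LabeledPoint] with labels in {0.0, 1.0, 2.0}:

    import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.rdd.RDD

    // k = 3 possible outcomes selects the multinomial formulation.
    def trainMultinomial(training: RDD[LabeledPoint]) =
      new LogisticRegressionWithLBFGS()
        .setNumClasses(3)
        .run(training)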
- setNumClasses(int) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setNumCorrections(int) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the number of corrections used in the LBFGS update.
- setNumFeatures(int) - Method in class org.apache.spark.ml.feature.HashingTF
-
- setNumFolds(int) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- setNumHashTables(int) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- setNumHashTables(int) - Method in class org.apache.spark.ml.feature.MinHashLSH
-
- setNumItemBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setNumIterations(int) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the number of iterations of gradient descent to run per update.
- setNumIterations(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets number of iterations (default: 1), which should be smaller than or equal to number of
partitions.
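A sketch of respecting that constraint, where corpus is an assumed RDD[Seq[String]] of tokenized sentences:

    import org.apache.spark.mllib.feature.Word2Vec

    val partitions = corpus.getNumPartitions
    val model = new Word2Vec()
      .setVectorSize(100)
      .setNumPartitions(partitions)
      .setNumIterations(math.min(3, partitions)) // iterations <= partitions
      .fit(corpus)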
- setNumIterations(int) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the number of iterations for SGD.
- setNumIterations(int) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the maximal number of iterations for L-BFGS.
- setNumIterations(int) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the number of iterations of gradient descent to run per update.
- setNumIterations(int) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- setNumPartitions(int) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setNumPartitions(int) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- setNumPartitions(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets number of partitions (default: 1).
- setNumPartitions(int) - Method in class org.apache.spark.mllib.fpm.FPGrowth
-
Sets the number of partitions used by parallel FP-growth (default: same as input data).
- setNumTopFeatures(int) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setNumTopFeatures(int) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- setNumTrees(int) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setNumTrees(int) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setNumTrees(int) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setNumTrees(int) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setNumUserBlocks(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setOptimizeDocConcentration(boolean) - Method in class org.apache.spark.ml.clustering.LDA
-
- setOptimizeDocConcentration(boolean) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
Sets whether to optimize docConcentration parameter during training.
- setOptimizer(String) - Method in class org.apache.spark.ml.clustering.LDA
-
- setOptimizer(LDAOptimizer) - Method in class org.apache.spark.mllib.clustering.LDA
-
:: DeveloperApi ::
- setOptimizer(String) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the LDAOptimizer used to perform the actual calculation by algorithm name.
- setOrNull(long, int, int) - Method in class org.apache.spark.sql.types.Decimal
-
Set this Decimal to the given unscaled Long, with a given precision and scale,
and return it, or return null if it cannot be set due to overflow.
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Binarizer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.DCT
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.HashingTF
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.IDF
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.IDFModel
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.IndexToString
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Interaction
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinHashLSH
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.NGram
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.PCA
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.PCAModel
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- setOutputCol(String) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorAssembler
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setOutputCol(String) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
- setOutputCol(String) - Method in class org.apache.spark.ml.UnaryTransformer
-
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.Imputer
-
- setOutputCols(String[]) - Method in class org.apache.spark.ml.feature.ImputerModel
-
- setP(double) - Method in class org.apache.spark.ml.feature.Normalizer
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.IDFModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.PCAModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.RFormulaModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- setParent(Estimator<M>) - Method in class org.apache.spark.ml.Model
-
Sets the parent of this model (Java API).
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.PipelineModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- setParent(Estimator<M>) - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- setPattern(String) - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- setPeacePeriod(int) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Set the number of initial batches to ignore.
- setPercentile(double) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setPercentile(double) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.clustering.KMeansModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.PredictionModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.Predictor
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- setPredictionCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setPredictionCol(String) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- setProbabilityCol(String) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- setProbabilityCol(String) - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setProbabilityCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setProbabilityCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setProbabilityCol(String) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- setProductBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of product blocks to parallelize the computation.
- setPropertiesFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set a custom properties file with Spark configuration for the application.
- setQuantileCalculationStrategy(Enumeration.Value) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setQuantileProbabilities(double[]) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- setQuantileProbabilities(double[]) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- setQuantilesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- setQuantilesCol(String) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- setRandomCenters(int, double, long) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Initialize random centers, requiring only the number of dimensions.
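For instance, a sketch assuming a DStream[Vector] named trainingStream of 10-dimensional points:

    import org.apache.spark.mllib.clustering.StreamingKMeans

    val model = new StreamingKMeans()
      .setK(3)
      .setDecayFactor(1.0)
      .setRandomCenters(10, 0.0, 42L) // dim = 10, initial weight 0.0, seed 42
    model.trainOn(trainingStream)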
- setRank(int) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setRank(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the rank of the feature matrices computed (number of features).
- setRatingCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.ClassificationModel
-
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.classification.Classifier
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setRawPredictionCol(String) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setRawPredictionCol(String) - Method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- setRegParam(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setRegParam(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the regularization parameter for L2 regularization.
- setRegParam(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the regularization parameter.
- setRegParam(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the regularization parameter.
- setRelativeError(double) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- setRequiredColumns(Configuration, StructType, StructType) - Static method in class org.apache.spark.sql.hive.orc.OrcRelation
-
- setRest(long, int, VD, ED) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- setRuns(int) - Method in class org.apache.spark.mllib.clustering.KMeans
-
- setSample(RDD<Object>) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Sets the sample to use for density estimation.
- setSample(JavaRDD<Double>) - Method in class org.apache.spark.mllib.stat.KernelDensity
-
Sets the sample to use for density estimation (for Java users).
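A short sketch combining the sample and bandwidth setters, where sample is an assumed RDD[Double]:

    import org.apache.spark.mllib.stat.KernelDensity

    val kd = new KernelDensity()
      .setSample(sample)   // observations to estimate the density from
      .setBandwidth(3.0)   // standard deviation of the Gaussian kernel
    val densities: Array[Double] = kd.estimate(Array(-1.0, 2.0, 5.0))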
- setScalingVec(Vector) - Method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- setSeed(long) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setSeed(long) - Method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setSeed(long) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setSeed(long) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setSeed(long) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Set the seed for weights initialization if weights are not set.
- setSeed(long) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setSeed(long) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setSeed(long) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- setSeed(long) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- setSeed(long) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setSeed(long) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setSeed(long) - Method in class org.apache.spark.ml.clustering.LDA
-
- setSeed(long) - Method in class org.apache.spark.ml.clustering.LDAModel
-
- setSeed(long) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- setSeed(long) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- setSeed(long) - Method in class org.apache.spark.ml.feature.MinHashLSH
-
- setSeed(long) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setSeed(long) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setSeed(long) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setSeed(long) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setSeed(long) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setSeed(long) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setSeed(long) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setSeed(long) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setSeed(long) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- setSeed(long) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.BisectingKMeans
-
Sets the random seed (default: hash value of the class name).
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Set the random seed.
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.KMeans
-
Set the random seed for cluster initialization.
- setSeed(long) - Method in class org.apache.spark.mllib.clustering.LDA
-
Set the random seed for cluster initialization.
- setSeed(long) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets random seed (default: a random long integer).
- setSeed(long) - Method in class org.apache.spark.mllib.random.ExponentialGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.random.GammaGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.random.LogNormalGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.random.PoissonGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.random.StandardNormalGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.random.UniformGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.random.WeibullGenerator
-
- setSeed(long) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Sets a random seed to have deterministic results.
- setSeed(long) - Method in class org.apache.spark.util.random.BernoulliCellSampler
-
- setSeed(long) - Method in class org.apache.spark.util.random.BernoulliSampler
-
- setSeed(long) - Method in class org.apache.spark.util.random.PoissonSampler
-
- setSeed(long) - Method in interface org.apache.spark.util.random.Pseudorandom
-
Set random seed.
- setSelectorType(String) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- setSelectorType(String) - Method in class org.apache.spark.mllib.feature.ChiSqSelector
-
- setSerializer(Serializer) - Method in class org.apache.spark.rdd.CoGroupedRDD
-
Set a serializer for this RDD's shuffle, or null to use the default (spark.serializer).
- setSerializer(Serializer) - Method in class org.apache.spark.rdd.ShuffledRDD
-
Set a serializer for this RDD's shuffle, or null to use the default (spark.serializer).
- setSmoothing(double) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
Set the smoothing parameter.
- setSolver(String) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param solver.
- setSolver(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the solver algorithm used for optimization.
- setSolver(String) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the solver algorithm used for optimization.
- setSparkContextSessionConf(SparkSession, Map<Object, Object>) - Static method in class org.apache.spark.sql.api.r.SQLUtils
-
- setSparkHome(String) - Method in class org.apache.spark.launcher.SparkLauncher
-
Set a custom Spark installation location for the application.
- setSparkHome(String) - Method in class org.apache.spark.SparkConf
-
Set the location where Spark is installed on worker nodes.
- setSplits(double[]) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- setSQLReadObject(Function2<DataInputStream, Object, Object>) - Static method in class org.apache.spark.api.r.SerDe
-
- setSQLWriteObject(Function2<DataOutputStream, Object, Object>) - Static method in class org.apache.spark.api.r.SerDe
-
- setSrcOnly(long, int, VD) - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- setStackTrace(StackTraceElement[]) - Static method in exception org.apache.spark.sql.AnalysisException
-
- setStages(PipelineStage[]) - Method in class org.apache.spark.ml.Pipeline
-
- setStandardization(boolean) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Whether to standardize the training features before fitting the model.
- setStandardization(boolean) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Whether to standardize the training features before fitting the model.
- setStandardization(boolean) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Whether to standardize the training features before fitting the model.
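For example, a sketch of disabling standardization on LinearRegression, where training is an assumed DataFrame with "features" and "label" columns; the fitted coefficients are reported on the original feature scale either way:

    import org.apache.spark.ml.regression.LinearRegression

    val lr = new LinearRegression()
      .setRegParam(0.1)
      .setStandardization(false) // regularize on the raw feature scales
    val model = lr.fit(training)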
- setStatement(String) - Method in class org.apache.spark.ml.feature.SQLTransformer
-
- setStepSize(double) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setStepSize(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setStepSize(double) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Sets the value of param stepSize (applicable only for solver "gd").
- setStepSize(double) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setStepSize(double) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setStepSize(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setStepSize(double) - Method in class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Set the step size for gradient descent.
- setStepSize(double) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the initial step size of SGD for the first step.
- setStepSize(double) - Method in class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Set the step size for gradient descent.
- setStopWords(String[]) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- setStrategy(String) - Method in class org.apache.spark.ml.feature.Imputer
-
Imputation strategy.
- setSubsamplingRate(double) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.classification.GBTClassifier
-
- setSubsamplingRate(double) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.clustering.LDA
-
- setSubsamplingRate(double) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.regression.GBTRegressor
-
- setSubsamplingRate(double) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- setSubsamplingRate(double) - Method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- setSubsamplingRate(double) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setTau0(double) - Method in class org.apache.spark.mllib.clustering.OnlineLDAOptimizer
-
A (positive) learning parameter that downweights early iterations.
- setTestMethod(String) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Set the statistical method used for significance testing.
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set threshold in binary classification.
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- setThreshold(double) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setThreshold(double) - Method in class org.apache.spark.ml.feature.Binarizer
-
- setThreshold(double) - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
Sets the threshold that separates positive predictions from negative predictions
in Binary Logistic Regression.
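An illustrative sketch, where model is an assumed mllib LogisticRegressionModel and features a Vector:

    model.setThreshold(0.7) // predict 1.0 only when P(y = 1 | x) >= 0.7
    val label = model.predict(features)
    model.clearThreshold()  // predict() now returns the raw probability
    val prob = model.predict(features)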
- setThreshold(double) - Method in class org.apache.spark.mllib.classification.SVMModel
-
Sets the threshold that separates positive predictions from negative predictions.
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- setThresholds(double[]) - Method in class org.apache.spark.ml.classification.ProbabilisticClassifier
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- setThresholds(double[]) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- setTimeoutDuration(long) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout duration in ms for this key.
- setTimeoutDuration(String) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout duration for this key as a string.
- setTimeoutTimestamp(long) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as milliseconds in epoch time.
- setTimeoutTimestamp(long, String) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as milliseconds in epoch time and an additional
duration as a string (e.g. "1 hour").
- setTimeoutTimestamp(Date) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as a java.sql.Date.
- setTimeoutTimestamp(Date, String) - Method in interface org.apache.spark.sql.streaming.GroupState
-
Set the timeout timestamp for this key as a java.sql.Date and an additional
duration as a string (e.g. "1 hour").
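A hedged sketch of how these timeout setters are typically used inside mapGroupsWithState (the key type, state type, and durations are illustrative):

    import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

    // Count events per key, dropping state for keys idle beyond 10 minutes.
    def updateCount(key: String, events: Iterator[Long], state: GroupState[Long]): Long =
      if (state.hasTimedOut) {
        state.remove() // expire the idle key
        0L
      } else {
        val count = state.getOption.getOrElse(0L) + events.length
        state.update(count)
        state.setTimeoutDuration("10 minutes") // re-arm the timeout on every call
        count
      }
    // wired up with, e.g.:
    // ds.groupByKey(_.key).mapGroupsWithState(GroupStateTimeout.ProcessingTimeTimeout)(updateCount)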
- setTol(double) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- setTol(double) - Method in class org.apache.spark.ml.clustering.KMeans
-
- setTol(double) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
Set the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the convergence tolerance of iterations.
- setTol(double) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Set the convergence tolerance of iterations.
- setToLowercase(boolean) - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
- setTopicConcentration(double) - Method in class org.apache.spark.ml.clustering.LDA
-
- setTopicConcentration(double) - Method in class org.apache.spark.mllib.clustering.LDA
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics'
distributions over terms.
- setTopicDistributionCol(String) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- setTopicDistributionCol(String) - Method in class org.apache.spark.ml.clustering.LDA
-
- setTopicDistributionCol(String) - Method in class org.apache.spark.ml.clustering.LDAModel
-
- setTopicDistributionCol(String) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- setTrainRatio(double) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- setTreeStrategy(Strategy) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- setUiRoot(ContextHandler, UIRoot) - Static method in class org.apache.spark.status.api.v1.UIRootFromServletContext
-
- setUpdater(Updater) - Method in class org.apache.spark.mllib.optimization.GradientDescent
-
Set the updater function to actually perform a gradient step in a given direction.
- setUpdater(Updater) - Method in class org.apache.spark.mllib.optimization.LBFGS
-
Set the updater function to actually perform a gradient step in a given direction.
- SetupDriver(org.apache.spark.rpc.RpcEndpointRef) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver
-
- SetupDriver$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SetupDriver$
-
- setupGroups(int, DefaultPartitionCoalescer.PartitionLocations) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
Initializes targetLen partition groups.
- setupJob(JobContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Sets up a job.
- setupJob(JobContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
-
- setUpperBoundsOnCoefficients(Matrix) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the upper bounds on coefficients if fitting under bound constrained optimization.
- setUpperBoundsOnIntercepts(Vector) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Set the upper bounds on intercepts if fitting under bound constrained optimization.
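A sketch of box constraints for a binomial problem with three features (the bounds are illustrative; the coefficient-bound matrix must be 1 x numFeatures for binomial models):

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.linalg.{Matrices, Vectors}

    val blr = new LogisticRegression()
      .setLowerBoundsOnCoefficients(Matrices.dense(1, 3, Array(0.0, 0.0, 0.0)))
      .setUpperBoundsOnCoefficients(Matrices.dense(1, 3, Array(1.0, 1.0, 1.0)))
      .setUpperBoundsOnIntercepts(Vectors.dense(5.0))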
- setupSecureURLConnection(URLConnection, org.apache.spark.SecurityManager) - Static method in class org.apache.spark.util.Utils
-
If the given URL connection is HttpsURLConnection, it sets the SSL socket factory and
the host verifier from the given security manager.
- setupTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
-
Sets up a task within a job.
- setupTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
-
- setUseNodeIdCache(boolean) - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- setUserBlocks(int) - Method in class org.apache.spark.mllib.recommendation.ALS
-
Set the number of user blocks to parallelize the computation.
- setUserCol(String) - Method in class org.apache.spark.ml.recommendation.ALS
-
- setUserCol(String) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- setValidateData(boolean) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
- setValidateData(boolean) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
- setValidateData(boolean) - Method in class org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm
-
Set if the algorithm should validate data before training.
- setValidateData(boolean) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
- setValidateData(boolean) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
- setValidateData(boolean) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
- setValidationTol(double) - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- setValue(R) - Method in class org.apache.spark.Accumulable
-
Deprecated.
Set the accumulator's value.
- setValue(R) - Static method in class org.apache.spark.Accumulator
-
Deprecated.
- setVarianceCol(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- setVarianceCol(String) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- setVariancePower(double) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param variancePower.
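For instance, a sketch of a Tweedie model, where training is an assumed DataFrame; variance powers 0, 1, and 2 correspond to Gaussian, Poisson, and Gamma variance functions, and setLinkPower is assumed to be available alongside this param:

    import org.apache.spark.ml.regression.GeneralizedLinearRegression

    val glr = new GeneralizedLinearRegression()
      .setFamily("tweedie")
      .setVariancePower(1.5) // compound Poisson-gamma regime in (1, 2)
      .setLinkPower(0.0)     // log link
    val model = glr.fit(training)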
- setVectorSize(int) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setVectorSize(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets vector size (default: 100).
- setVerbose(boolean) - Method in class org.apache.spark.launcher.SparkLauncher
-
Enables verbose reporting for SparkSubmit.
- setVocabSize(int) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.LinearSVC
-
Set the value of param weightCol.
- setWeightCol(double) - Method in class org.apache.spark.ml.classification.LinearSVCModel
-
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.LogisticRegression
-
Sets the value of param weightCol.
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.NaiveBayes
-
Sets the value of param weightCol.
- setWeightCol(String) - Method in class org.apache.spark.ml.classification.OneVsRest
-
Sets the value of param weightCol.
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
Sets the value of param weightCol.
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- setWeightCol(String) - Method in class org.apache.spark.ml.regression.LinearRegression
-
Whether to over-/under-sample training instances according to the given weights in weightCol.
- setWindowSize(int) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- setWindowSize(int) - Method in class org.apache.spark.mllib.feature.Word2Vec
-
Sets the window of words (default: 5).
- setWindowSize(int) - Method in class org.apache.spark.mllib.stat.test.StreamingTest
-
Set the number of batches to compute significance tests over.
- setWithMean(boolean) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- setWithMean(boolean) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
-
:: DeveloperApi ::
- setWithStd(boolean) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- setWithStd(boolean) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
-
:: DeveloperApi ::
- sha1(Column) - Static method in class org.apache.spark.sql.functions
-
Calculates the SHA-1 digest of a binary column and returns the value
as a 40 character hex string.
- sha2(Column, int) - Static method in class org.apache.spark.sql.functions
-
Calculates the SHA-2 family of hash functions of a binary column and
returns the value as a hex string.
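A small sketch of both digests, where df is an assumed DataFrame with a binary (or string) column "payload"; the numBits argument of sha2 must be one of 224, 256, 384, or 512:

    import org.apache.spark.sql.functions.{col, sha1, sha2}

    val hashed = df.select(
      sha1(col("payload")).as("sha1_hex"), // 40-character hex digest
      sha2(col("payload"), 256).as("sha256_hex"))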
- shape() - Method in class org.apache.spark.mllib.random.GammaGenerator
-
- SharedParamsCodeGen - Class in org.apache.spark.ml.param.shared
-
Code generator for shared params (sharedParams.scala).
- SharedParamsCodeGen() - Constructor for class org.apache.spark.ml.param.shared.SharedParamsCodeGen
-
- SharedReadWrite$() - Constructor for class org.apache.spark.ml.Pipeline.SharedReadWrite$
-
- sharedState() - Method in class org.apache.spark.sql.SparkSession
-
State shared across sessions, including the SparkContext, cached data, listener, and a catalog that interacts with external systems.
- shiftLeft(Column, int) - Static method in class org.apache.spark.sql.functions
-
Shift the given value numBits left.
- shiftRight(Column, int) - Static method in class org.apache.spark.sql.functions
-
(Signed) shift the given value numBits right.
- shiftRightUnsigned(Column, int) - Static method in class org.apache.spark.sql.functions
-
Unsigned shift the given value numBits right.
- SHORT() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable short type.
- ShortestPaths - Class in org.apache.spark.graphx.lib
-
Computes shortest paths to the given set of landmark vertices, returning a graph where each
vertex attribute is a map containing the shortest-path distance to each reachable landmark.
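A brief sketch, where graph is an assumed Graph[VD, ED] and the landmark vertex ids are illustrative:

    import org.apache.spark.graphx.VertexId
    import org.apache.spark.graphx.lib.ShortestPaths

    val landmarks: Seq[VertexId] = Seq(1L, 4L)
    val result = ShortestPaths.run(graph, landmarks)
    // Each vertex attribute is now a Map[VertexId, Int] of hop counts
    // to every reachable landmark.
    result.vertices.take(5).foreach(println)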
- ShortestPaths() - Constructor for class org.apache.spark.graphx.lib.ShortestPaths
-
- shortName() - Method in class org.apache.spark.sql.hive.execution.HiveFileFormat
-
- shortName() - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat
-
- shortName() - Method in interface org.apache.spark.sql.sources.DataSourceRegister
-
The string that represents the format that this data source provider uses.
- shortTimeUnitString(TimeUnit) - Static method in class org.apache.spark.streaming.ui.UIUtils
-
Return the short string for a TimeUnit.
- ShortType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the ShortType object.
- ShortType - Class in org.apache.spark.sql.types
-
The data type representing Short values.
- shouldCloseFileAfterWrite(SparkConf, boolean) - Static method in class org.apache.spark.streaming.util.WriteAheadLogUtils
-
- shouldDistributeGaussians(int, int) - Static method in class org.apache.spark.mllib.clustering.GaussianMixture
-
Heuristic to distribute the computation of the MultivariateGaussians, approximately when d is greater than 25, except when k is very small.
- shouldGoLeft(Vector) - Method in interface org.apache.spark.ml.tree.Split
-
Return true (split to left) or false (split to right).
- shouldGoLeft(int, Split[]) - Method in interface org.apache.spark.ml.tree.Split
-
Return true (split to left) or false (split to right).
- shouldOwn(Param<?>) - Method in interface org.apache.spark.ml.param.Params
-
Validates that the input param belongs to this instance.
- show(int) - Method in class org.apache.spark.sql.Dataset
-
Displays the Dataset in a tabular form.
- show() - Method in class org.apache.spark.sql.Dataset
-
Displays the top 20 rows of Dataset in a tabular form.
- show(boolean) - Method in class org.apache.spark.sql.Dataset
-
Displays the top 20 rows of Dataset in a tabular form.
- show(int, boolean) - Method in class org.apache.spark.sql.Dataset
-
Displays the Dataset in a tabular form.
- show(int, int) - Method in class org.apache.spark.sql.Dataset
-
Displays the Dataset in a tabular form.
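The overloads compose as follows, where ds is an assumed Dataset (truncation limits how many characters of each string cell are printed):

    ds.show()          // top 20 rows, cells truncated to 20 characters
    ds.show(5)         // top 5 rows
    ds.show(false)     // top 20 rows, no truncation
    ds.show(5, false)  // top 5 rows, no truncation
    ds.show(5, 10)     // top 5 rows, cells truncated to 10 characters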
- showBytesDistribution(String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showBytesDistribution(String, Option<org.apache.spark.util.Distribution>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showBytesDistribution(String, org.apache.spark.util.Distribution) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showDagVizForJob(int, Seq<org.apache.spark.ui.scope.RDDOperationGraph>) - Static method in class org.apache.spark.ui.UIUtils
-
Return a "DAG visualization" DOM element that expands into a visualization for a job.
- showDagVizForStage(int, Option<org.apache.spark.ui.scope.RDDOperationGraph>) - Static method in class org.apache.spark.ui.UIUtils
-
Return a "DAG visualization" DOM element that expands into a visualization for a stage.
- showDistribution(String, org.apache.spark.util.Distribution, Function1<Object, String>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showDistribution(String, Option<org.apache.spark.util.Distribution>, Function1<Object, String>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showDistribution(String, Option<org.apache.spark.util.Distribution>, String) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showDistribution(String, String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showMillisDistribution(String, Option<org.apache.spark.util.Distribution>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showMillisDistribution(String, Function2<TaskInfo, TaskMetrics, Object>, Seq<Tuple2<TaskInfo, TaskMetrics>>) - Static method in class org.apache.spark.scheduler.StatsReportListener
-
- showMillisDistribution(String, Function1<BatchInfo, Option<Object>>) - Method in class org.apache.spark.streaming.scheduler.StatsReportListener
-
- SHUFFLE() - Static method in class org.apache.spark.storage.BlockId
-
- SHUFFLE_DATA() - Static method in class org.apache.spark.storage.BlockId
-
- SHUFFLE_INDEX() - Static method in class org.apache.spark.storage.BlockId
-
- SHUFFLE_READ() - Static method in class org.apache.spark.ui.ToolTips
-
- SHUFFLE_READ_BLOCKED_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- SHUFFLE_READ_BLOCKED_TIME() - Static method in class org.apache.spark.ui.ToolTips
-
- SHUFFLE_READ_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
-
- SHUFFLE_READ_REMOTE_SIZE() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- SHUFFLE_READ_REMOTE_SIZE() - Static method in class org.apache.spark.ui.ToolTips
-
- SHUFFLE_WRITE() - Static method in class org.apache.spark.ui.ToolTips
-
- SHUFFLE_WRITE_METRICS_PREFIX() - Static method in class org.apache.spark.InternalAccumulator
-
- ShuffleBlockId - Class in org.apache.spark.storage
-
- ShuffleBlockId(int, int, int) - Constructor for class org.apache.spark.storage.ShuffleBlockId
-
- ShuffleDataBlockId - Class in org.apache.spark.storage
-
- ShuffleDataBlockId(int, int, int) - Constructor for class org.apache.spark.storage.ShuffleDataBlockId
-
- ShuffleDependency<K,V,C> - Class in org.apache.spark
-
:: DeveloperApi ::
Represents a dependency on the output of a shuffle stage.
- ShuffleDependency(RDD<? extends Product2<K, V>>, Partitioner, Serializer, Option<Ordering<K>>, Option<Aggregator<K, V, C>>, boolean, ClassTag<K>, ClassTag<V>, ClassTag<C>) - Constructor for class org.apache.spark.ShuffleDependency
-
- ShuffledRDD<K,V,C> - Class in org.apache.spark.rdd
-
:: DeveloperApi ::
The resulting RDD from a shuffle (e.g. repartitioning of data).
- ShuffledRDD(RDD<? extends Product2<K, V>>, Partitioner, ClassTag<K>, ClassTag<V>, ClassTag<C>) - Constructor for class org.apache.spark.rdd.ShuffledRDD
-
- shuffleHandle() - Method in class org.apache.spark.ShuffleDependency
-
- shuffleId() - Method in class org.apache.spark.CleanShuffle
-
- shuffleId() - Method in class org.apache.spark.FetchFailed
-
- shuffleId() - Method in class org.apache.spark.ShuffleDependency
-
- shuffleId() - Method in class org.apache.spark.storage.BlockManagerMessages.RemoveShuffle
-
- shuffleId() - Method in class org.apache.spark.storage.ShuffleBlockId
-
- shuffleId() - Method in class org.apache.spark.storage.ShuffleDataBlockId
-
- shuffleId() - Method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- ShuffleIndexBlockId - Class in org.apache.spark.storage
-
- ShuffleIndexBlockId(int, int, int) - Constructor for class org.apache.spark.storage.ShuffleIndexBlockId
-
- shuffleManager() - Method in class org.apache.spark.SparkEnv
-
- shuffleRead() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- shuffleRead() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- shuffleRead$() - Constructor for class org.apache.spark.InternalAccumulator.shuffleRead$
-
- shuffleReadBytes() - Method in class org.apache.spark.status.api.v1.StageData
-
- ShuffleReadMetricDistributions - Class in org.apache.spark.status.api.v1
-
- ShuffleReadMetrics - Class in org.apache.spark.status.api.v1
-
- shuffleReadMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
-
- shuffleReadMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
-
- shuffleReadMetrics() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- ShuffleReadMetricsUIData(long, long, long, long, long, long, long, long) - Constructor for class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- ShuffleReadMetricsUIData$() - Constructor for class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData$
-
- shuffleReadRecords() - Method in class org.apache.spark.status.api.v1.StageData
-
- shuffleReadRecords() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- shuffleReadRecords() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- shuffleReadTotalBytes() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- shuffleWrite() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- shuffleWrite() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- shuffleWrite$() - Constructor for class org.apache.spark.InternalAccumulator.shuffleWrite$
-
- shuffleWriteBytes() - Method in class org.apache.spark.status.api.v1.StageData
-
- shuffleWriteBytes() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- ShuffleWriteMetricDistributions - Class in org.apache.spark.status.api.v1
-
- ShuffleWriteMetrics - Class in org.apache.spark.status.api.v1
-
- shuffleWriteMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetricDistributions
-
- shuffleWriteMetrics() - Method in class org.apache.spark.status.api.v1.TaskMetrics
-
- shuffleWriteMetrics() - Method in class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- ShuffleWriteMetricsUIData(long, long, long) - Constructor for class org.apache.spark.ui.jobs.UIData.ShuffleWriteMetricsUIData
-
- ShuffleWriteMetricsUIData$() - Constructor for class org.apache.spark.ui.jobs.UIData.ShuffleWriteMetricsUIData$
-
- shuffleWriteRecords() - Method in class org.apache.spark.status.api.v1.StageData
-
- shuffleWriteRecords() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- shuffleWriteRecords() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- Shutdown$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.Shutdown$
-
- ShutdownHookManager - Class in org.apache.spark.util
-
Various utility methods used by Spark.
- ShutdownHookManager() - Constructor for class org.apache.spark.util.ShutdownHookManager
-
- sigma() - Method in class org.apache.spark.mllib.stat.distribution.MultivariateGaussian
-
- sigmas() - Method in class org.apache.spark.mllib.clustering.ExpectationSum
-
- SignalUtils - Class in org.apache.spark.util
-
Contains utilities for working with POSIX signals.
- SignalUtils() - Constructor for class org.apache.spark.util.SignalUtils
-
- signum(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the signum of the given value.
- signum(String) - Static method in class org.apache.spark.sql.functions
-
Computes the signum of the given column.
- SimpleConsumerConfig$() - Constructor for class org.apache.spark.streaming.kafka.KafkaCluster.SimpleConsumerConfig$
-
- SimpleFutureAction<T> - Class in org.apache.spark
-
A FutureAction holding the result of an action that triggers a single job.
- simpleString() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- simpleString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- simpleString() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- simpleString() - Method in class org.apache.spark.sql.types.ArrayType
-
- simpleString() - Static method in class org.apache.spark.sql.types.BinaryType
-
- simpleString() - Static method in class org.apache.spark.sql.types.BooleanType
-
- simpleString() - Method in class org.apache.spark.sql.types.ByteType
-
- simpleString() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- simpleString() - Method in class org.apache.spark.sql.types.CharType
-
- simpleString() - Method in class org.apache.spark.sql.types.DataType
-
Readable string representation for the type.
- simpleString() - Static method in class org.apache.spark.sql.types.DateType
-
- simpleString() - Method in class org.apache.spark.sql.types.DecimalType
-
- simpleString() - Static method in class org.apache.spark.sql.types.DoubleType
-
- simpleString() - Static method in class org.apache.spark.sql.types.FloatType
-
- simpleString() - Static method in class org.apache.spark.sql.types.HiveStringType
-
- simpleString() - Method in class org.apache.spark.sql.types.IntegerType
-
- simpleString() - Method in class org.apache.spark.sql.types.LongType
-
- simpleString() - Method in class org.apache.spark.sql.types.MapType
-
- simpleString() - Static method in class org.apache.spark.sql.types.NullType
-
- simpleString() - Method in class org.apache.spark.sql.types.ObjectType
-
- simpleString() - Method in class org.apache.spark.sql.types.ShortType
-
- simpleString() - Static method in class org.apache.spark.sql.types.StringType
-
- simpleString() - Method in class org.apache.spark.sql.types.StructType
-
- simpleString() - Static method in class org.apache.spark.sql.types.TimestampType
-
- simpleString() - Method in class org.apache.spark.sql.types.VarcharType
-
- SimpleUpdater - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
A simple updater for gradient descent *without* any regularization.
- SimpleUpdater() - Constructor for class org.apache.spark.mllib.optimization.SimpleUpdater
-
- sin(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the sine of the given value.
- sin(String) - Static method in class org.apache.spark.sql.functions
-
Computes the sine of the given column.
- SingularValueDecomposition<UType,VType> - Class in org.apache.spark.mllib.linalg
-
Represents singular value decomposition (SVD) factors.
- SingularValueDecomposition(UType, Vector, VType) - Constructor for class org.apache.spark.mllib.linalg.SingularValueDecomposition
-
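A minimal Scala sketch (with made-up data and app name) of obtaining a SingularValueDecomposition via RowMatrix.computeSVD:

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val sc = new SparkContext("local[*]", "svd-example")
    val rows = sc.parallelize(Seq(Vectors.dense(1.0, 0.0), Vectors.dense(0.0, 2.0)))
    val mat = new RowMatrix(rows)
    // Keep the top 2 singular values; computeU = true also materializes the U factor.
    val svd = mat.computeSVD(2, computeU = true)
    println(svd.s)  // the singular values, here [2.0, 1.0]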
- sinh(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the hyperbolic sine of the given value.
- sinh(String) - Static method in class org.apache.spark.sql.functions
-
Computes the hyperbolic sine of the given column.
- sink() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- SinkProgress - Class in org.apache.spark.sql.streaming
-
Information about progress made for a sink in the execution of a StreamingQuery during a trigger.
- size() - Method in class org.apache.spark.api.java.JavaUtils.SerializableMapWrapper
-
- size() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Size of the attribute group.
- size() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- size() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- size() - Method in interface org.apache.spark.ml.linalg.Vector
-
Size of the vector.
- size() - Method in class org.apache.spark.ml.param.ParamMap
-
Number of param pairs in this map.
- size() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- size() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- size() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Size of the vector.
- size(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the length of an array or map.
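A minimal Scala sketch of the size function, assuming a local SparkSession and made-up data:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.size

    val spark = SparkSession.builder().master("local[*]").appName("size-example").getOrCreate()
    import spark.implicits._

    val df = Seq(Seq(1, 2, 3), Seq(4)).toDF("xs")
    df.select(size($"xs")).show()  // 3, then 1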
- size() - Method in interface org.apache.spark.sql.Row
-
Number of elements in the Row.
- size() - Static method in class org.apache.spark.sql.types.StructType
-
- size() - Method in class org.apache.spark.storage.EncryptedBlockData
-
- size() - Method in class org.apache.spark.storage.memory.DeserializedMemoryEntry
-
- size() - Method in interface org.apache.spark.storage.memory.MemoryEntry
-
- size() - Method in class org.apache.spark.storage.memory.SerializedMemoryEntry
-
- SizeEstimator - Class in org.apache.spark.util
-
:: DeveloperApi ::
Estimates the sizes of Java objects (number of bytes of memory they occupy), for use in
memory-aware caches.
- SizeEstimator() - Constructor for class org.apache.spark.util.SizeEstimator
-
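A minimal Scala sketch of SizeEstimator; the sample object is arbitrary:

    import org.apache.spark.util.SizeEstimator

    // Rough in-memory footprint of an object graph, in bytes.
    println(SizeEstimator.estimate(Array.fill(1000)(0L)))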
- sizeInBytes() - Method in class org.apache.spark.sql.sources.BaseRelation
-
Returns an estimated size of this relation in bytes.
- sketch(RDD<K>, int, ClassTag<K>) - Static method in class org.apache.spark.RangePartitioner
-
Sketches the input RDD via reservoir sampling on each partition.
- skewness(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the skewness of the values in a group.
- skewness(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the skewness of the values in a group.
- skip(long) - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- skip(long) - Method in class org.apache.spark.io.NioBufferedFileInputStream
-
- skip(long) - Method in class org.apache.spark.storage.BufferReleasingInputStream
-
- skippedStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- skipWhitespace() - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- slice(int, int) - Static method in class org.apache.spark.sql.types.StructType
-
- slice(Time, Time) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- slice(Time, Time) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return all the RDDs between 'fromDuration' and 'toDuration' (both included)
- slice(Time, Time) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- slice(Time, Time) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- slice(Time, Time) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- slice(Time, Time) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- slice(Time, Time) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- slice(org.apache.spark.streaming.Interval) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return all the RDDs defined by the Interval object (both end times included)
- slice(Time, Time) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return all the RDDs between 'fromTime' and 'toTime' (both included)
- slideDuration() - Method in class org.apache.spark.streaming.dstream.DStream
-
Time interval after which the DStream generates an RDD
- slideDuration() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
- sliding(int, int) - Method in class org.apache.spark.mllib.rdd.RDDFunctions
-
Returns an RDD from grouping items of its parent RDD in fixed size blocks by passing a sliding
window over them.
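A minimal Scala sketch of sliding, assuming a local SparkContext; the data is illustrative:

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.rdd.RDDFunctions._  // enriches RDDs with sliding()

    val sc = new SparkContext("local[*]", "sliding-example")
    val windows = sc.parallelize(1 to 5).sliding(3, 1).collect()
    windows.foreach(w => println(w.mkString(",")))  // 1,2,3  2,3,4  3,4,5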
- sliding(int) - Method in class org.apache.spark.mllib.rdd.RDDFunctions
-
Same as sliding(Int, Int) with step = 1.
- sliding(int) - Static method in class org.apache.spark.sql.types.StructType
-
- sliding(int, int) - Static method in class org.apache.spark.sql.types.StructType
-
- smoothing() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- smoothing() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- SnappyCompressionCodec - Class in org.apache.spark.io
-
- SnappyCompressionCodec(SparkConf) - Constructor for class org.apache.spark.io.SnappyCompressionCodec
-
- SnappyOutputStreamWrapper - Class in org.apache.spark.io
-
Wrapper over SnappyOutputStream which guards against write-after-close and double-close issues.
- SnappyOutputStreamWrapper(SnappyOutputStream) - Constructor for class org.apache.spark.io.SnappyOutputStreamWrapper
-
- socketStream(String, int, Function<InputStream, Iterable<T>>, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream from network source hostname:port.
- socketStream(String, int, Function1<InputStream, Iterator<T>>, StorageLevel, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Creates an input stream from TCP source hostname:port.
- socketTextStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream from network source hostname:port.
- socketTextStream(String, int) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream from network source hostname:port.
- socketTextStream(String, int, StorageLevel) - Method in class org.apache.spark.streaming.StreamingContext
-
Creates an input stream from TCP source hostname:port.
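A minimal Scala sketch of a socket-based word count, assuming text is fed to localhost:9999 (e.g. by running nc -lk 9999):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setMaster("local[2]").setAppName("socket-example")
    val ssc = new StreamingContext(conf, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
    ssc.start()
    ssc.awaitTermination()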
- solve(double[], double[]) - Static method in class org.apache.spark.mllib.linalg.CholeskyDecomposition
-
Solves a symmetric positive definite linear system via Cholesky factorization.
- solve(double[], double[], NNLS.Workspace) - Static method in class org.apache.spark.mllib.optimization.NNLS
-
Solve a least squares problem, possibly with nonnegativity constraints, by a modified
projected gradient method.
- solver() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- solver() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- solver() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- solver() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
-
- solver() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- solver() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- Sort() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- sort(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset sorted by the specified column, all in ascending order.
- sort(Column...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset sorted by the given expressions.
- sort(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset sorted by the specified column, all in ascending order.
- sort(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset sorted by the given expressions.
- sort_array(Column) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array for the given column in ascending order,
according to the natural ordering of the array elements.
- sort_array(Column, boolean) - Static method in class org.apache.spark.sql.functions
-
Sorts the input array for the given column in ascending or descending order,
according to the natural ordering of the array elements.
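A minimal Scala sketch of sort_array, assuming a local SparkSession and made-up data:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.sort_array

    val spark = SparkSession.builder().master("local[*]").appName("sort-array-example").getOrCreate()
    import spark.implicits._

    val df = Seq(Seq(3, 1, 2)).toDF("xs")
    df.select(sort_array($"xs"), sort_array($"xs", asc = false)).show()
    // [1, 2, 3] and [3, 2, 1]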
- sortBy(Function<T, S>, boolean, int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return this RDD sorted by the given key function.
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.api.r.RRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Method in class org.apache.spark.rdd.RDD
-
Return this RDD sorted by the given key function.
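A minimal Scala sketch of RDD.sortBy, assuming a local SparkContext; data and partition count are illustrative:

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local[*]", "sortBy-example")
    val rdd = sc.parallelize(Seq(("b", 2), ("a", 3), ("c", 1)))
    // Sort by the second tuple element, descending, into 2 partitions.
    rdd.sortBy(_._2, ascending = false, numPartitions = 2).collect()
    // => Array((a,3), (b,2), (c,1))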
- sortBy(Function1<T, K>, boolean, int, Ordering<K>, ClassTag<K>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- sortBy(String, String...) - Method in class org.apache.spark.sql.DataFrameWriter
-
Sorts the output in each bucket by the given columns.
- sortBy(String, Seq<String>) - Method in class org.apache.spark.sql.DataFrameWriter
-
Sorts the output in each bucket by the given columns.
- sortBy(Function1<A, B>, Ordering<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- sortBy$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- sortBy$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- sortBy$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- sortBy$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.api.r.RRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- sortBy$default$3() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- sortBy$default$3() - Static method in class org.apache.spark.graphx.VertexRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- sortBy$default$3() - Static method in class org.apache.spark.rdd.UnionRDD
-
- sortByKey() - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements in
ascending order.
- sortByKey(boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(boolean, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(Comparator<K>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(Comparator<K>, boolean) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(Comparator<K>, boolean, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
- sortByKey(boolean, int) - Method in class org.apache.spark.rdd.OrderedRDDFunctions
-
Sort the RDD by key, so that each partition contains a sorted range of the elements.
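A minimal Scala sketch of sortByKey on a pair RDD, assuming a local SparkContext:

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local[*]", "sortByKey-example")
    val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (2, "b")))
    pairs.sortByKey().collect()  // => Array((1,a), (2,b), (3,c))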
- sorted(Ordering<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- sortWith(Function2<A, A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- sortWithinPartitions(String, String...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(Column...) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(String, Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- sortWithinPartitions(Seq<Column>) - Method in class org.apache.spark.sql.Dataset
-
Returns a new Dataset with each partition sorted by the given expressions.
- soundex(Column) - Static method in class org.apache.spark.sql.functions
-
Returns the soundex code for the specified expression.
- sourceName() - Static method in class org.apache.spark.metrics.source.CodegenMetrics
-
- sourceName() - Static method in class org.apache.spark.metrics.source.HiveCatalogMetrics
-
- SourceProgress - Class in org.apache.spark.sql.streaming
-
Information about progress made for a source in the execution of a StreamingQuery during a trigger.
- sources() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- sourceSchema(SQLContext, Option<StructType>, String, Map<String, String>) - Method in interface org.apache.spark.sql.sources.StreamSourceProvider
-
Returns the name and schema of the source that can be used to continually read data.
- span(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- spark() - Method in class org.apache.spark.status.api.v1.VersionInfo
-
- SPARK_CONNECTOR_NAME() - Static method in class org.apache.spark.ui.JettyUtils
-
- SPARK_CONTEXT_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
-
The shutdown priority of the SparkContext instance.
- SPARK_IO_ENCRYPTION_COMMONS_CONFIG_PREFIX() - Static method in class org.apache.spark.security.CryptoStreamUtils
-
- SPARK_MASTER - Static variable in class org.apache.spark.launcher.SparkLauncher
-
The Spark master.
- spark_partition_id() - Static method in class org.apache.spark.sql.functions
-
Partition ID.
- SPARK_REGEX() - Static method in class org.apache.spark.SparkMasterRegex
-
- SparkAppConfig(Seq<Tuple2<String, String>>, Option<byte[]>) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
-
- SparkAppConfig$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig$
-
- SparkAppHandle - Interface in org.apache.spark.launcher
-
A handle to a running Spark application.
- SparkAppHandle.Listener - Interface in org.apache.spark.launcher
-
Listener for updates to a handle's state.
- SparkAppHandle.State - Enum in org.apache.spark.launcher
-
Represents the application's state.
- SparkConf - Class in org.apache.spark
-
Configuration for a Spark application.
- SparkConf(boolean) - Constructor for class org.apache.spark.SparkConf
-
- SparkConf() - Constructor for class org.apache.spark.SparkConf
-
Create a SparkConf that loads defaults from system properties and the classpath
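A minimal Scala sketch of building a SparkConf; the master, app name, and serializer setting are illustrative choices:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("conf-example")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)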
- sparkContext() - Static method in class org.apache.spark.api.r.RRDD
-
- sparkContext() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- sparkContext() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- sparkContext() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- sparkContext() - Static method in class org.apache.spark.graphx.VertexRDD
-
- sparkContext() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- sparkContext() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- sparkContext() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- sparkContext() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- sparkContext() - Method in class org.apache.spark.rdd.RDD
-
The SparkContext that created this RDD.
- sparkContext() - Static method in class org.apache.spark.rdd.UnionRDD
-
- SparkContext - Class in org.apache.spark
-
Main entry point for Spark functionality.
- SparkContext(SparkConf) - Constructor for class org.apache.spark.SparkContext
-
- SparkContext() - Constructor for class org.apache.spark.SparkContext
-
Create a SparkContext that loads settings from system properties (for instance, when
launching with ./bin/spark-submit).
- SparkContext(String, String, SparkConf) - Constructor for class org.apache.spark.SparkContext
-
Alternative constructor that allows setting common Spark properties directly
- SparkContext(String, String, String, Seq<String>, Map<String, String>) - Constructor for class org.apache.spark.SparkContext
-
Alternative constructor that allows setting common Spark properties directly
- sparkContext() - Method in class org.apache.spark.sql.SparkSession
-
- sparkContext() - Method in class org.apache.spark.sql.SQLContext
-
- sparkContext() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
The underlying SparkContext
- sparkContext() - Method in class org.apache.spark.streaming.StreamingContext
-
Return the associated Spark context
- SparkEnv - Class in org.apache.spark
-
:: DeveloperApi ::
Holds all the runtime environment objects for a running Spark instance (either master or worker),
including the serializer, RpcEnv, block manager, map output tracker, etc.
- SparkEnv(String, org.apache.spark.rpc.RpcEnv, Serializer, Serializer, SerializerManager, MapOutputTracker, ShuffleManager, org.apache.spark.broadcast.BroadcastManager, BlockManager, SecurityManager, org.apache.spark.metrics.MetricsSystem, MemoryManager, OutputCommitCoordinator, SparkConf) - Constructor for class org.apache.spark.SparkEnv
-
- sparkEventFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- sparkEventToJson(SparkListenerEvent) - Static method in class org.apache.spark.util.JsonProtocol
-
JSON serialization methods for SparkListenerEvents.
- SparkException - Exception in org.apache.spark
-
- SparkException(String, Throwable) - Constructor for exception org.apache.spark.SparkException
-
- SparkException(String) - Constructor for exception org.apache.spark.SparkException
-
- SparkExecutorInfo - Interface in org.apache.spark
-
Exposes information about Spark Executors.
- SparkExecutorInfoImpl - Class in org.apache.spark
-
- SparkExecutorInfoImpl(String, int, long, int) - Constructor for class org.apache.spark.SparkExecutorInfoImpl
-
- SparkExitCode - Class in org.apache.spark.util
-
- SparkExitCode() - Constructor for class org.apache.spark.util.SparkExitCode
-
- SparkFiles - Class in org.apache.spark
-
Resolves paths to files added through SparkContext.addFile().
- SparkFiles() - Constructor for class org.apache.spark.SparkFiles
-
- SparkFirehoseListener - Class in org.apache.spark
-
Class that allows users to receive all SparkListener events.
- SparkFirehoseListener() - Constructor for class org.apache.spark.SparkFirehoseListener
-
- SparkFlumeEvent - Class in org.apache.spark.streaming.flume
-
A wrapper class for AvroFlumeEvent's with a custom serialization format.
- SparkFlumeEvent() - Constructor for class org.apache.spark.streaming.flume.SparkFlumeEvent
-
- SparkHadoopMapReduceWriter - Class in org.apache.spark.internal.io
-
A helper object that saves an RDD using a Hadoop OutputFormat
(from the newer mapreduce API, not the old mapred API).
- SparkHadoopMapReduceWriter() - Constructor for class org.apache.spark.internal.io.SparkHadoopMapReduceWriter
-
- SparkHadoopMapRedUtil - Class in org.apache.spark.mapred
-
- SparkHadoopMapRedUtil() - Constructor for class org.apache.spark.mapred.SparkHadoopMapRedUtil
-
- SparkHadoopWriterUtils - Class in org.apache.spark.internal.io
-
A helper object that provides common utilities used when saving an RDD with a Hadoop OutputFormat (both the old mapred API and the new mapreduce API)
- SparkHadoopWriterUtils() - Constructor for class org.apache.spark.internal.io.SparkHadoopWriterUtils
-
- sparkJavaOpts(SparkConf, Function1<String, Object>) - Static method in class org.apache.spark.util.Utils
-
Convert all Spark properties set in the given SparkConf to a sequence of Java options.
- SparkJobInfo - Interface in org.apache.spark
-
Exposes information about Spark Jobs.
- SparkJobInfoImpl - Class in org.apache.spark
-
- SparkJobInfoImpl(int, int[], JobExecutionStatus) - Constructor for class org.apache.spark.SparkJobInfoImpl
-
- SparkLauncher - Class in org.apache.spark.launcher
-
Launcher for Spark applications.
- SparkLauncher() - Constructor for class org.apache.spark.launcher.SparkLauncher
-
- SparkLauncher(Map<String, String>) - Constructor for class org.apache.spark.launcher.SparkLauncher
-
Creates a launcher that will set the given environment variables in the child.
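A minimal Scala sketch of SparkLauncher; all paths and the main class are placeholders, and SPARK_HOME must point at a Spark installation:

    import org.apache.spark.launcher.SparkLauncher

    val handle = new SparkLauncher()
      .setSparkHome("/path/to/spark")       // placeholder
      .setAppResource("/path/to/app.jar")   // placeholder
      .setMainClass("com.example.Main")     // placeholder
      .setMaster("local[*]")
      .startApplication()
    println(handle.getState)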
- SparkListener - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
A default implementation for SparkListenerInterface that has no-op implementations for all callbacks.
- SparkListener() - Constructor for class org.apache.spark.scheduler.SparkListener
-
- SparkListenerApplicationEnd - Class in org.apache.spark.scheduler
-
- SparkListenerApplicationEnd(long) - Constructor for class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- SparkListenerApplicationStart - Class in org.apache.spark.scheduler
-
- SparkListenerApplicationStart(String, Option<String>, long, String, Option<String>, Option<Map<String, String>>) - Constructor for class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- SparkListenerBlockManagerAdded - Class in org.apache.spark.scheduler
-
- SparkListenerBlockManagerAdded(long, BlockManagerId, long, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- SparkListenerBlockManagerRemoved - Class in org.apache.spark.scheduler
-
- SparkListenerBlockManagerRemoved(long, BlockManagerId) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- SparkListenerBlockUpdated - Class in org.apache.spark.scheduler
-
- SparkListenerBlockUpdated(BlockUpdatedInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerBlockUpdated
-
- SparkListenerEnvironmentUpdate - Class in org.apache.spark.scheduler
-
- SparkListenerEnvironmentUpdate(Map<String, Seq<Tuple2<String, String>>>) - Constructor for class org.apache.spark.scheduler.SparkListenerEnvironmentUpdate
-
- SparkListenerEvent - Interface in org.apache.spark.scheduler
-
- SparkListenerExecutorAdded - Class in org.apache.spark.scheduler
-
- SparkListenerExecutorAdded(long, String, ExecutorInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- SparkListenerExecutorBlacklisted - Class in org.apache.spark.scheduler
-
- SparkListenerExecutorBlacklisted(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- SparkListenerExecutorMetricsUpdate - Class in org.apache.spark.scheduler
-
Periodic updates from executors.
- SparkListenerExecutorMetricsUpdate(String, Seq<Tuple4<Object, Object, Object, Seq<AccumulableInfo>>>) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
-
- SparkListenerExecutorRemoved - Class in org.apache.spark.scheduler
-
- SparkListenerExecutorRemoved(long, String, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- SparkListenerExecutorUnblacklisted - Class in org.apache.spark.scheduler
-
- SparkListenerExecutorUnblacklisted(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- SparkListenerJobEnd - Class in org.apache.spark.scheduler
-
- SparkListenerJobEnd(int, long, JobResult) - Constructor for class org.apache.spark.scheduler.SparkListenerJobEnd
-
- SparkListenerJobStart - Class in org.apache.spark.scheduler
-
- SparkListenerJobStart(int, long, Seq<StageInfo>, Properties) - Constructor for class org.apache.spark.scheduler.SparkListenerJobStart
-
- SparkListenerNodeBlacklisted - Class in org.apache.spark.scheduler
-
- SparkListenerNodeBlacklisted(long, String, int) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- SparkListenerNodeUnblacklisted - Class in org.apache.spark.scheduler
-
- SparkListenerNodeUnblacklisted(long, String) - Constructor for class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- SparkListenerStageCompleted - Class in org.apache.spark.scheduler
-
- SparkListenerStageCompleted(StageInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- SparkListenerStageSubmitted - Class in org.apache.spark.scheduler
-
- SparkListenerStageSubmitted(StageInfo, Properties) - Constructor for class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- SparkListenerTaskEnd - Class in org.apache.spark.scheduler
-
- SparkListenerTaskEnd(int, int, String, TaskEndReason, TaskInfo, TaskMetrics) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- SparkListenerTaskGettingResult - Class in org.apache.spark.scheduler
-
- SparkListenerTaskGettingResult(TaskInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- SparkListenerTaskStart - Class in org.apache.spark.scheduler
-
- SparkListenerTaskStart(int, int, TaskInfo) - Constructor for class org.apache.spark.scheduler.SparkListenerTaskStart
-
- SparkListenerUnpersistRDD - Class in org.apache.spark.scheduler
-
- SparkListenerUnpersistRDD(int) - Constructor for class org.apache.spark.scheduler.SparkListenerUnpersistRDD
-
- SparkMasterRegex - Class in org.apache.spark
-
A collection of regexes for extracting information from the master string.
- SparkMasterRegex() - Constructor for class org.apache.spark.SparkMasterRegex
-
- sparkProperties() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.SparkAppConfig
-
- sparkProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
-
- sparkProperties() - Method in class org.apache.spark.ui.env.EnvironmentListener
-
Deprecated.
- SparkRDefaults - Class in org.apache.spark.api.r
-
- SparkRDefaults() - Constructor for class org.apache.spark.api.r.SparkRDefaults
-
- sparkRPackagePath(boolean) - Static method in class org.apache.spark.api.r.RUtils
-
Get the list of paths for R packages in various deployment modes, of which the first
path is for the SparkR package itself.
- sparkSession() - Method in class org.apache.spark.sql.Dataset
-
- SparkSession - Class in org.apache.spark.sql
-
The entry point to programming Spark with the Dataset and DataFrame API.
- sparkSession() - Method in class org.apache.spark.sql.SQLContext
-
- sparkSession() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Returns the SparkSession associated with this.
- SparkSession.Builder - Class in org.apache.spark.sql
-
- SparkSession.implicits$ - Class in org.apache.spark.sql
-
:: Experimental ::
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
- SparkSessionExtensions - Class in org.apache.spark.sql
-
:: Experimental ::
Holder for injection points to the SparkSession.
- SparkSessionExtensions() - Constructor for class org.apache.spark.sql.SparkSessionExtensions
-
- SparkShutdownHook - Class in org.apache.spark.util
-
- SparkShutdownHook(int, Function0<BoxedUnit>) - Constructor for class org.apache.spark.util.SparkShutdownHook
-
- SparkStageInfo - Interface in org.apache.spark
-
Exposes information about Spark Stages.
- SparkStageInfoImpl - Class in org.apache.spark
-
- SparkStageInfoImpl(int, int, long, String, int, int, int, int) - Constructor for class org.apache.spark.SparkStageInfoImpl
-
- SparkStatusTracker - Class in org.apache.spark
-
Low-level status reporting APIs for monitoring job and stage progress.
- SparkUncaughtExceptionHandler - Class in org.apache.spark.util
-
The default uncaught exception handler for Executors terminates the whole process, to avoid
getting into a bad state indefinitely.
- SparkUncaughtExceptionHandler() - Constructor for class org.apache.spark.util.SparkUncaughtExceptionHandler
-
- sparkUser() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- sparkUser() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- sparkUser() - Method in class org.apache.spark.SparkContext
-
- sparkUser() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- sparse(int, int, int[], int[], double[]) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Creates a column-major sparse matrix in Compressed Sparse Column (CSC) format.
- sparse(int, int[], double[]) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a sparse vector providing its index array and value array.
- sparse(int, Seq<Tuple2<Object, Object>>) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs.
- sparse(int, Iterable<Tuple2<Integer, Double>>) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs in a Java friendly way.
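A minimal Scala sketch of the two ml.linalg Vectors.sparse forms; the values are made up:

    import org.apache.spark.ml.linalg.Vectors

    // A length-5 vector with nonzeros at indices 1 and 3; both forms are equivalent.
    val v1 = Vectors.sparse(5, Array(1, 3), Array(2.0, 4.0))
    val v2 = Vectors.sparse(5, Seq((1, 2.0), (3, 4.0)))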
- sparse(int, int, int[], int[], double[]) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Creates a column-major sparse matrix in Compressed Sparse Column (CSC) format.
- sparse(int, int[], double[]) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a sparse vector providing its index array and value array.
- sparse(int, Seq<Tuple2<Object, Object>>) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs.
- sparse(int, Iterable<Tuple2<Integer, Double>>) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Creates a sparse vector using unordered (index, value) pairs in a Java friendly way.
- SparseMatrix - Class in org.apache.spark.ml.linalg
-
Column-major sparse matrix.
- SparseMatrix(int, int, int[], int[], double[], boolean) - Constructor for class org.apache.spark.ml.linalg.SparseMatrix
-
- SparseMatrix(int, int, int[], int[], double[]) - Constructor for class org.apache.spark.ml.linalg.SparseMatrix
-
Column-major sparse matrix.
- SparseMatrix - Class in org.apache.spark.mllib.linalg
-
Column-major sparse matrix.
- SparseMatrix(int, int, int[], int[], double[], boolean) - Constructor for class org.apache.spark.mllib.linalg.SparseMatrix
-
- SparseMatrix(int, int, int[], int[], double[]) - Constructor for class org.apache.spark.mllib.linalg.SparseMatrix
-
Column-major sparse matrix.
- SparseVector - Class in org.apache.spark.ml.linalg
-
A sparse vector represented by an index array and a value array.
- SparseVector(int, int[], double[]) - Constructor for class org.apache.spark.ml.linalg.SparseVector
-
- SparseVector - Class in org.apache.spark.mllib.linalg
-
A sparse vector represented by an index array and a value array.
- SparseVector(int, int[], double[]) - Constructor for class org.apache.spark.mllib.linalg.SparseVector
-
- SPARSITY() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- sparsity() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- spdiag(Vector) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a diagonal matrix in SparseMatrix format from the supplied values.
- spdiag(Vector) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a diagonal matrix in SparseMatrix format from the supplied values.
- SpearmanCorrelation - Class in org.apache.spark.mllib.stat.correlation
-
Compute Spearman's correlation for two RDDs of the type RDD[Double] or the correlation matrix
for an RDD of the type RDD[Vector].
- SpearmanCorrelation() - Constructor for class org.apache.spark.mllib.stat.correlation.SpearmanCorrelation
-
- SpecialLengths - Class in org.apache.spark.api.r
-
- SpecialLengths() - Constructor for class org.apache.spark.api.r.SpecialLengths
-
- speculative() - Method in class org.apache.spark.scheduler.TaskInfo
-
- speculative() - Method in class org.apache.spark.status.api.v1.TaskData
-
- speye(int) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a sparse Identity Matrix in Matrix format.
- speye(int) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate an Identity Matrix in SparseMatrix format.
- speye(int) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a sparse Identity Matrix in Matrix format.
- speye(int) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate an Identity Matrix in SparseMatrix format.
- SpillListener - Class in org.apache.spark
-
A SparkListener that detects whether spills have occurred in Spark jobs.
- SpillListener() - Constructor for class org.apache.spark.SpillListener
-
- split() - Method in class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.NodeData
-
- split() - Method in class org.apache.spark.ml.tree.InternalNode
-
- Split - Interface in org.apache.spark.ml.tree
-
Interface for a "Split," which specifies a test made at a decision tree node
to choose the left or right path.
- split() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- split() - Method in class org.apache.spark.mllib.tree.model.Node
-
- Split - Class in org.apache.spark.mllib.tree.model
-
:: DeveloperApi ::
Split applied to a feature (param: feature - feature index; param: threshold - threshold for a continuous feature).
- Split(int, double, Enumeration.Value, List<Object>) - Constructor for class org.apache.spark.mllib.tree.model.Split
-
- split(Column, String) - Static method in class org.apache.spark.sql.functions
-
Splits str around pattern (pattern is a regular expression).
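A minimal Scala sketch of the split function, assuming a local SparkSession:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.split

    val spark = SparkSession.builder().master("local[*]").appName("split-example").getOrCreate()
    import spark.implicits._

    // The second argument is a Java regular expression.
    Seq("a,b,c").toDF("s").select(split($"s", ",")).show()  // [a, b, c]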
- splitAndCountPartitions(Iterator<String>) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
Splits lines and counts the words.
- splitAt(int) - Static method in class org.apache.spark.sql.types.StructType
-
- splitCommandString(String) - Static method in class org.apache.spark.util.Utils
-
Split a string of potentially quoted arguments from the command line the way that a shell
would do it to determine arguments to a command.
- SplitData(int, double[], int) - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData
-
- SplitData(int, double, int, Seq<Object>) - Constructor for class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- SplitData$() - Constructor for class org.apache.spark.ml.tree.DecisionTreeModelReadWrite.SplitData$
-
- splitIndex() - Method in class org.apache.spark.storage.RDDBlockId
-
- SplitInfo - Class in org.apache.spark.scheduler
-
- SplitInfo(Class<?>, String, String, long, Object) - Constructor for class org.apache.spark.scheduler.SplitInfo
-
- splits() - Method in class org.apache.spark.ml.feature.Bucketizer
-
Parameter for mapping continuous features into buckets.
- spr(double, Vector, DenseVector) - Static method in class org.apache.spark.ml.linalg.BLAS
-
Adds alpha * x * x.t to a matrix in-place.
- spr(double, Vector, double[]) - Static method in class org.apache.spark.ml.linalg.BLAS
-
Adds alpha * x * x.t to a matrix in-place.
- spr(double, Vector, DenseVector) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
Adds alpha * v * v.t to a matrix in-place.
- spr(double, Vector, double[]) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
Adds alpha * v * v.t to a matrix in-place.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprand(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. uniform random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.Matrices
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sprandn(int, int, double, Random) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a SparseMatrix consisting of i.i.d. gaussian random numbers.
- sqdist(Vector, Vector) - Static method in class org.apache.spark.ml.linalg.Vectors
-
Returns the squared distance between two Vectors.
- sqdist(Vector, Vector) - Static method in class org.apache.spark.mllib.linalg.Vectors
-
Returns the squared distance between two Vectors.
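A minimal Scala sketch of sqdist with hand-checked arithmetic:

    import org.apache.spark.ml.linalg.Vectors

    val a = Vectors.dense(1.0, 0.0, 3.0)
    val b = Vectors.sparse(3, Array(2), Array(1.0))
    // (1-0)^2 + (0-0)^2 + (3-1)^2 = 5.0
    println(Vectors.sqdist(a, b))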
- sql(String) - Method in class org.apache.spark.sql.SparkSession
-
Executes a SQL query using Spark, returning the result as a DataFrame.
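A minimal Scala sketch of SparkSession.sql, assuming a local session; the view name and data are made up:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("sql-example").getOrCreate()
    import spark.implicits._

    Seq((1, "a"), (2, "b")).toDF("id", "name").createOrReplaceTempView("t")
    spark.sql("SELECT name FROM t WHERE id = 2").show()  // b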
- sql(String) - Method in class org.apache.spark.sql.SQLContext
-
- sql() - Method in class org.apache.spark.sql.types.ArrayType
-
- sql() - Static method in class org.apache.spark.sql.types.BinaryType
-
- sql() - Static method in class org.apache.spark.sql.types.BooleanType
-
- sql() - Static method in class org.apache.spark.sql.types.ByteType
-
- sql() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- sql() - Static method in class org.apache.spark.sql.types.CharType
-
- sql() - Method in class org.apache.spark.sql.types.DataType
-
- sql() - Static method in class org.apache.spark.sql.types.DateType
-
- sql() - Method in class org.apache.spark.sql.types.DecimalType
-
- sql() - Static method in class org.apache.spark.sql.types.DoubleType
-
- sql() - Static method in class org.apache.spark.sql.types.FloatType
-
- sql() - Static method in class org.apache.spark.sql.types.HiveStringType
-
- sql() - Static method in class org.apache.spark.sql.types.IntegerType
-
- sql() - Static method in class org.apache.spark.sql.types.LongType
-
- sql() - Method in class org.apache.spark.sql.types.MapType
-
- sql() - Static method in class org.apache.spark.sql.types.NullType
-
- sql() - Static method in class org.apache.spark.sql.types.NumericType
-
- sql() - Static method in class org.apache.spark.sql.types.ObjectType
-
- sql() - Static method in class org.apache.spark.sql.types.ShortType
-
- sql() - Static method in class org.apache.spark.sql.types.StringType
-
- sql() - Method in class org.apache.spark.sql.types.StructType
-
- sql() - Static method in class org.apache.spark.sql.types.TimestampType
-
- sql() - Static method in class org.apache.spark.sql.types.VarcharType
-
- sqlContext() - Method in class org.apache.spark.sql.Dataset
-
- sqlContext() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- sqlContext() - Method in class org.apache.spark.sql.sources.BaseRelation
-
- sqlContext() - Method in class org.apache.spark.sql.SparkSession
-
A wrapped version of this session in the form of a SQLContext, for backward compatibility.
- SQLContext - Class in org.apache.spark.sql
-
The entry point for working with structured data (rows and columns) in Spark 1.x.
- SQLContext(SparkContext) - Constructor for class org.apache.spark.sql.SQLContext
-
- SQLContext(JavaSparkContext) - Constructor for class org.apache.spark.sql.SQLContext
-
- SQLContext.implicits$ - Class in org.apache.spark.sql
-
:: Experimental ::
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
- SQLDataTypes - Class in org.apache.spark.ml.linalg
-
:: DeveloperApi ::
SQL data types for vectors and matrices.
- SQLDataTypes() - Constructor for class org.apache.spark.ml.linalg.SQLDataTypes
-
- SQLImplicits - Class in org.apache.spark.sql
-
A collection of implicit methods for converting common Scala objects into Datasets.
- SQLImplicits() - Constructor for class org.apache.spark.sql.SQLImplicits
-
- SQLImplicits.StringToColumn - Class in org.apache.spark.sql
-
Converts $"col name" into a Column.
- SQLTransformer - Class in org.apache.spark.ml.feature
-
Implements the transformations defined by a SQL statement.
- SQLTransformer(String) - Constructor for class org.apache.spark.ml.feature.SQLTransformer
-
- SQLTransformer() - Constructor for class org.apache.spark.ml.feature.SQLTransformer
-
- sqlType() - Method in class org.apache.spark.mllib.linalg.VectorUDT
-
- SQLUserDefinedType - Annotation Type in org.apache.spark.sql.types
-
::DeveloperApi::
A user-defined type which can be automatically recognized by a SQLContext and registered.
- SQLUtils - Class in org.apache.spark.sql.api.r
-
- SQLUtils() - Constructor for class org.apache.spark.sql.api.r.SQLUtils
-
- sqrt(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the square root of the specified float value.
- sqrt(String) - Static method in class org.apache.spark.sql.functions
-
Computes the square root of the specified float value.
- Sqrt$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Sqrt$
-
- SquaredError - Class in org.apache.spark.mllib.tree.loss
-
:: DeveloperApi ::
Class for squared error loss calculation.
- SquaredError() - Constructor for class org.apache.spark.mllib.tree.loss.SquaredError
-
- SquaredL2Updater - Class in org.apache.spark.mllib.optimization
-
:: DeveloperApi ::
Updater for L2 regularized problems.
- SquaredL2Updater() - Constructor for class org.apache.spark.mllib.optimization.SquaredL2Updater
-
- Src - Static variable in class org.apache.spark.graphx.TripletFields
-
Expose the source and edge fields but not the destination field.
- srcAttr() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex attribute of the edge's source vertex.
- srcAttr() - Method in class org.apache.spark.graphx.EdgeTriplet
-
The source vertex attribute
- srcAttr() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- srcId() - Method in class org.apache.spark.graphx.Edge
-
- srcId() - Method in class org.apache.spark.graphx.EdgeContext
-
The vertex id of the edge's source vertex.
- srcId() - Method in class org.apache.spark.graphx.impl.AggregatingEdgeContext
-
- srdd() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
- ssc() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
- stackTrace() - Method in class org.apache.spark.ExceptionFailure
-
- stackTraceFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- stackTraceToJson(StackTraceElement[]) - Static method in class org.apache.spark.util.JsonProtocol
-
- stage() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- STAGE_DAG() - Static method in class org.apache.spark.ui.ToolTips
-
- STAGE_TIMELINE() - Static method in class org.apache.spark.ui.ToolTips
-
- stageAttempt() - Method in class org.apache.spark.scheduler.AskPermissionToCommitOutput
-
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- stageAttemptId() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- stageAttemptNumber() - Method in class org.apache.spark.TaskContext
-
How many times the stage that this task belongs to has been attempted.
- stageCompletedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- stageCompletedToJson(SparkListenerStageCompleted) - Static method in class org.apache.spark.util.JsonProtocol
-
- StageData - Class in org.apache.spark.status.api.v1
-
- stageFailed(String) - Method in class org.apache.spark.scheduler.StageInfo
-
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- stageId() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- stageId() - Method in class org.apache.spark.scheduler.StageInfo
-
- stageId() - Method in interface org.apache.spark.SparkStageInfo
-
- stageId() - Method in class org.apache.spark.SparkStageInfoImpl
-
- stageId() - Method in class org.apache.spark.status.api.v1.StageData
-
- stageId() - Method in class org.apache.spark.TaskContext
-
The ID of the stage that this task belongs to.
- stageIds() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- stageIds() - Method in interface org.apache.spark.SparkJobInfo
-
- stageIds() - Method in class org.apache.spark.SparkJobInfoImpl
-
- stageIds() - Method in class org.apache.spark.status.api.v1.JobData
-
- stageIds() - Method in class org.apache.spark.ui.jobs.UIData.JobUIData
-
- stageIdToActiveJobIds() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- stageIdToData() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- stageIdToInfo() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- stageInfo() - Method in class org.apache.spark.scheduler.SparkListenerStageCompleted
-
- stageInfo() - Method in class org.apache.spark.scheduler.SparkListenerStageSubmitted
-
- StageInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
Stores information about a stage to pass from the scheduler to SparkListeners.
- StageInfo(int, int, String, int, Seq<RDDInfo>, Seq<Object>, String, TaskMetrics, Seq<Seq<TaskLocation>>) - Constructor for class org.apache.spark.scheduler.StageInfo
-
- stageInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
JSON deserialization methods for classes that SparkListenerEvents depend on.
- stageInfos() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- stageInfoToJson(StageInfo) - Static method in class org.apache.spark.util.JsonProtocol
-
JSON serialization methods for classes that SparkListenerEvents depend on.
- stages() - Method in class org.apache.spark.ml.Pipeline
-
param for pipeline stages
- stages() - Method in class org.apache.spark.ml.PipelineModel
-
- StageStatus - Enum in org.apache.spark.status.api.v1
-
- stageSubmittedFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- stageSubmittedToJson(SparkListenerStageSubmitted) - Static method in class org.apache.spark.util.JsonProtocol
-
- StageUIData() - Constructor for class org.apache.spark.ui.jobs.UIData.StageUIData
-
- standardization() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- standardization() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- standardization() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- standardization() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- standardization() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- standardization() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- StandardNormalGenerator - Class in org.apache.spark.mllib.random
-
:: DeveloperApi ::
Generates i.i.d. samples from the standard normal distribution.
- StandardNormalGenerator() - Constructor for class org.apache.spark.mllib.random.StandardNormalGenerator
-
- StandardScaler - Class in org.apache.spark.ml.feature
-
Standardizes features by removing the mean and scaling to unit variance using column summary
statistics on the samples in the training set.
- StandardScaler(String) - Constructor for class org.apache.spark.ml.feature.StandardScaler
-
- StandardScaler() - Constructor for class org.apache.spark.ml.feature.StandardScaler
-
- StandardScaler - Class in org.apache.spark.mllib.feature
-
Standardizes features by removing the mean and scaling to unit standard deviation using column summary
statistics on the samples in the training set.
- StandardScaler(boolean, boolean) - Constructor for class org.apache.spark.mllib.feature.StandardScaler
-
- StandardScaler() - Constructor for class org.apache.spark.mllib.feature.StandardScaler
-
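A minimal Scala sketch of the mllib StandardScaler, assuming a local SparkContext and made-up vectors:

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.feature.StandardScaler
    import org.apache.spark.mllib.linalg.Vectors

    val sc = new SparkContext("local[*]", "scaler-example")
    val data = sc.parallelize(Seq(Vectors.dense(1.0, 10.0), Vectors.dense(3.0, 30.0)))
    // withMean = true subtracts column means; withStd = true scales to unit standard deviation.
    val model = new StandardScaler(true, true).fit(data)
    val scaled = data.map(model.transform)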
- StandardScalerModel - Class in org.apache.spark.ml.feature
-
- StandardScalerModel - Class in org.apache.spark.mllib.feature
-
Represents a StandardScaler model that can transform vectors.
- StandardScalerModel(Vector, Vector, boolean, boolean) - Constructor for class org.apache.spark.mllib.feature.StandardScalerModel
-
- StandardScalerModel(Vector, Vector) - Constructor for class org.apache.spark.mllib.feature.StandardScalerModel
-
- StandardScalerModel(Vector) - Constructor for class org.apache.spark.mllib.feature.StandardScalerModel
-
- starGraph(SparkContext, int) - Static method in class org.apache.spark.graphx.util.GraphGenerators
-
Create a star graph with vertex 0 being the center.
- start(String) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Starts the execution of the streaming query, which will continually output results to the given
path as new data arrives.
- start() - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Starts the execution of the streaming query, which will continually output results to the given
path as new data arrives.
- start() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Start the execution of the streams.
- start() - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
-
- start() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
Method called to start receiving data.
- start() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
-
- start() - Method in class org.apache.spark.streaming.StreamingContext
-
Start the execution of the streams.
- startApplication(SparkAppHandle.Listener...) - Method in class org.apache.spark.launcher.SparkLauncher
-
Starts a Spark application.
- startIndexInLevel(int) - Static method in class org.apache.spark.mllib.tree.model.Node
-
Return the index of the first node in the given level.
- startJettyServer(String, int, org.apache.spark.SSLOptions, Seq<ServletContextHandler>, SparkConf, String) - Static method in class org.apache.spark.ui.JettyUtils
-
Attempt to start a Jetty server bound to the supplied hostName:port using the given
context handlers.
- startOffset() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
- startOffset() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
- startPosition() - Method in exception org.apache.spark.sql.AnalysisException
-
- startServiceOnPort(int, Function1<Object, Tuple2<T, Object>>, SparkConf, String) - Static method in class org.apache.spark.util.Utils
-
Attempt to start a service on the given port, or fail after a number of attempts.
- startsWith(Column) - Method in class org.apache.spark.sql.Column
-
String starts with.
- startsWith(String) - Method in class org.apache.spark.sql.Column
-
String starts with another string literal.
- startsWith(GenSeq<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- startsWith(GenSeq<B>, int) - Static method in class org.apache.spark.sql.types.StructType
-
- startTime() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- startTime() - Method in class org.apache.spark.SparkContext
-
- startTime() - Method in class org.apache.spark.status.api.v1.ApplicationAttemptInfo
-
- startTime() - Method in class org.apache.spark.status.api.v1.streaming.OutputOperationInfo
-
- startTime() - Method in class org.apache.spark.status.api.v1.streaming.StreamingStatistics
-
- startTime() - Method in class org.apache.spark.streaming.scheduler.OutputOperationInfo
-
- startTime() - Method in class org.apache.spark.ui.jobs.JobProgressListener
-
Deprecated.
- stat() - Method in class org.apache.spark.sql.Dataset
-
- StatCounter - Class in org.apache.spark.util
-
A class for tracking the statistics of a set of numbers (count, mean and variance) in a
numerically robust way.
- StatCounter(TraversableOnce<Object>) - Constructor for class org.apache.spark.util.StatCounter
-
- StatCounter() - Constructor for class org.apache.spark.util.StatCounter
-
Initialize the StatCounter with no values.
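A small Scala sketch of StatCounter used on its own; merge() folds values in one numerically robust pass:

    import org.apache.spark.util.StatCounter

    val stats = new StatCounter()                 // no-arg constructor: empty counter
    Seq(1.0, 2.0, 3.0, 4.0).foreach(stats.merge)  // fold in values one at a time
    println(s"count=${stats.count} mean=${stats.mean} stdev=${stats.stdev}")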
- state() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
-
- state() - Method in class org.apache.spark.scheduler.local.StatusUpdate
-
- State<S> - Class in org.apache.spark.streaming
-
:: Experimental ::
Abstract class for getting and updating the state in mapping function used in the
mapWithState
operation of a
pair DStream
(Scala)
or a
JavaPairDStream
(Java).
- State() - Constructor for class org.apache.spark.streaming.State
-
- stateChanged(SparkAppHandle) - Method in interface org.apache.spark.launcher.SparkAppHandle.Listener
-
Callback for changes in the handle's state.
- statement() - Method in class org.apache.spark.ml.feature.SQLTransformer
-
SQL statement parameter.
- StateOperatorProgress - Class in org.apache.spark.sql.streaming
-
Information about updates made to stateful operators in a
StreamingQuery
during a trigger.
- stateOperators() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- stateSnapshots() - Method in class org.apache.spark.streaming.api.java.JavaMapWithStateDStream
-
- stateSnapshots() - Method in class org.apache.spark.streaming.dstream.MapWithStateDStream
-
Return a pair DStream where each RDD is the snapshot of the state of all the keys.
- StateSpec<KeyType,ValueType,StateType,MappedType> - Class in org.apache.spark.streaming
-
:: Experimental ::
Abstract class representing all the specifications of the DStream transformation
mapWithState operation of a pair DStream (Scala) or a JavaPairDStream (Java).
- StateSpec() - Constructor for class org.apache.spark.streaming.StateSpec
-
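Since State and StateSpec are used together, one hedged Scala sketch covers both: a per-word running count via mapWithState. The socket source and checkpoint directory are illustrative assumptions.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

    val conf = new SparkConf().setMaster("local[2]").setAppName("mapWithState-sketch")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint("/tmp/checkpoint")             // mapWithState requires checkpointing

    val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))

    // The mapping function reads and updates the per-key State.
    val spec = StateSpec.function { (word: String, one: Option[Int], state: State[Int]) =>
      val count = state.getOption.getOrElse(0) + one.getOrElse(0)
      state.update(count)
      (word, count)
    }

    val counts = words.map((_, 1)).mapWithState(spec)
    counts.stateSnapshots().print()               // snapshot of all (key, state) pairs
    ssc.start()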
- staticPageRank(int, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run PageRank for a fixed number of iterations returning a graph with vertex attributes
containing the PageRank and edge attributes the normalized edge weight.
- staticParallelPersonalizedPageRank(long[], int, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run parallel personalized PageRank for a given array of source vertices, such
that all random walks are started relative to the source vertices.
- staticPersonalizedPageRank(long, int, double) - Method in class org.apache.spark.graphx.GraphOps
-
Run personalized PageRank for a fixed number of iterations, with all iterations
originating at the source node, returning a graph with vertex attributes
containing the PageRank and edge attributes the normalized edge weight.
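A brief Scala sketch of the fixed-iteration variant; the edge-list path is a placeholder and sc is an existing SparkContext:

    import org.apache.spark.graphx.GraphLoader

    val graph = GraphLoader.edgeListFile(sc, "data/graphx/followers.txt")
    // Arguments are (numIter, resetProb): 10 iterations, 0.15 reset probability.
    val ranks = graph.staticPageRank(10, 0.15).vertices
    ranks.take(5).foreach(println)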
- StaticSources - Class in org.apache.spark.metrics.source
-
- StaticSources() - Constructor for class org.apache.spark.metrics.source.StaticSources
-
- statistic() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
-
- statistic() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
-
- statistic() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
Test statistic.
- Statistics - Class in org.apache.spark.mllib.stat
-
API for statistical functions in MLlib.
- Statistics() - Constructor for class org.apache.spark.mllib.stat.Statistics
-
- stats() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return a
StatCounter
object that captures the mean, variance and
count of the RDD's elements in one operation.
- stats() - Method in class org.apache.spark.mllib.tree.model.Node
-
- stats() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Return a
StatCounter
object that captures the mean, variance and
count of the RDD's elements in one operation.
- stats(SQLConf) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- stats(SQLConf) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- StatsReportListener - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
Simple SparkListener that logs a few summary statistics when each stage completes.
- StatsReportListener() - Constructor for class org.apache.spark.scheduler.StatsReportListener
-
- StatsReportListener - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
A simple StreamingListener that logs summary statistics across Spark Streaming batches
param: numBatchInfos Number of last batches to consider for generating statistics (default: 10)
- StatsReportListener(int) - Constructor for class org.apache.spark.streaming.scheduler.StatsReportListener
-
- status() - Method in class org.apache.spark.scheduler.TaskInfo
-
- status() - Method in interface org.apache.spark.SparkJobInfo
-
- status() - Method in class org.apache.spark.SparkJobInfoImpl
-
- status() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Returns the current status of the query.
- status() - Method in class org.apache.spark.status.api.v1.JobData
-
- status() - Method in class org.apache.spark.status.api.v1.StageData
-
- status() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- status() - Method in class org.apache.spark.status.api.v1.TaskData
-
- status() - Method in class org.apache.spark.ui.jobs.UIData.JobUIData
-
- statusTracker() - Method in class org.apache.spark.api.java.JavaSparkContext
-
- statusTracker() - Method in class org.apache.spark.SparkContext
-
- StatusUpdate(String, long, Enumeration.Value, org.apache.spark.util.SerializableBuffer) - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
-
- StatusUpdate - Class in org.apache.spark.scheduler.local
-
- StatusUpdate(long, Enumeration.Value, ByteBuffer) - Constructor for class org.apache.spark.scheduler.local.StatusUpdate
-
- StatusUpdate$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate$
-
- STD() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- std() - Method in class org.apache.spark.ml.attribute.NumericAttribute
-
- std() - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- std() - Method in class org.apache.spark.mllib.feature.StandardScalerModel
-
- std() - Method in class org.apache.spark.mllib.random.LogNormalGenerator
-
- stddev(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for stddev_samp
.
- stddev(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: alias for stddev_samp
.
- stddev_pop(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population standard deviation of
the expression in a group.
- stddev_pop(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the population standard deviation of
the expression in a group.
- stddev_samp(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample standard deviation of
the expression in a group.
- stddev_samp(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sample standard deviation of
the expression in a group.
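A short Scala sketch contrasting the variants, assuming an existing SparkSession named spark:

    import org.apache.spark.sql.functions.{stddev, stddev_pop, stddev_samp}

    val df = spark.range(1, 7).toDF("x")          // values 1..6
    // stddev is an alias for stddev_samp; stddev_pop divides by n rather than n - 1.
    df.agg(stddev("x"), stddev_samp("x"), stddev_pop("x")).show()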
- stdev() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Compute the population standard deviation of this RDD's elements.
- stdev() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Compute the population standard deviation of this RDD's elements.
- stdev() - Method in class org.apache.spark.util.StatCounter
-
Return the population standard deviation of the values.
- stepSize() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- stepSize() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- stepSize() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- stepSize() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- stepSize() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- stepSize() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- stepSize() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- stop() - Method in class org.apache.spark.api.java.JavaSparkContext
-
Shut down the SparkContext.
- stop() - Method in interface org.apache.spark.launcher.SparkAppHandle
-
Asks the application to stop.
- stop() - Method in class org.apache.spark.SparkContext
-
Shut down the SparkContext.
- stop() - Method in class org.apache.spark.sql.SparkSession
-
Stop the underlying SparkContext
.
- stop() - Method in interface org.apache.spark.sql.streaming.StreamingQuery
-
Stops the execution of this query if it is running.
- stop() - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Stop the execution of the streams.
- stop(boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Stop the execution of the streams.
- stop(boolean, boolean) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Stop the execution of the streams.
- stop() - Method in class org.apache.spark.streaming.dstream.ConstantInputDStream
-
- stop() - Method in class org.apache.spark.streaming.dstream.InputDStream
-
Method called to stop receiving data.
- stop() - Method in class org.apache.spark.streaming.dstream.ReceiverInputDStream
-
- stop(String) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Stop the receiver completely.
- stop(String, Throwable) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Stop the receiver completely due to an exception
- stop(boolean) - Method in class org.apache.spark.streaming.StreamingContext
-
Stop the execution of the streams immediately (does not wait for all received data
to be processed).
- stop(boolean, boolean) - Method in class org.apache.spark.streaming.StreamingContext
-
Stop the execution of the streams, with option of ensuring all received data
has been processed.
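A hedged Scala sketch of the two variants, assuming ssc is a running StreamingContext:

    // Stop the streams but keep the shared SparkContext alive for reuse.
    ssc.stop(stopSparkContext = false)

    // Or stop everything, waiting for all received data to be processed first.
    // ssc.stop(stopSparkContext = true, stopGracefully = true)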
- StopAllReceivers - Class in org.apache.spark.streaming.scheduler
-
This message will trigger ReceiverTrackerEndpoint to send stop signals to all registered
receivers.
- StopAllReceivers() - Constructor for class org.apache.spark.streaming.scheduler.StopAllReceivers
-
- StopBlockManagerMaster$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.StopBlockManagerMaster$
-
- StopCoordinator - Class in org.apache.spark.scheduler
-
- StopCoordinator() - Constructor for class org.apache.spark.scheduler.StopCoordinator
-
- StopDriver$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopDriver$
-
- StopExecutor - Class in org.apache.spark.scheduler.local
-
- StopExecutor() - Constructor for class org.apache.spark.scheduler.local.StopExecutor
-
- StopExecutor$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutor$
-
- StopExecutors$() - Constructor for class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StopExecutors$
-
- StopMapOutputTracker - Class in org.apache.spark
-
- StopMapOutputTracker() - Constructor for class org.apache.spark.StopMapOutputTracker
-
- StopReceiver - Class in org.apache.spark.streaming.receiver
-
- StopReceiver() - Constructor for class org.apache.spark.streaming.receiver.StopReceiver
-
- stopWords() - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
The words to be filtered out.
- StopWordsRemover - Class in org.apache.spark.ml.feature
-
A feature transformer that filters out stop words from input.
- StopWordsRemover(String) - Constructor for class org.apache.spark.ml.feature.StopWordsRemover
-
- StopWordsRemover() - Constructor for class org.apache.spark.ml.feature.StopWordsRemover
-
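A minimal Scala sketch of StopWordsRemover, assuming an existing SparkSession named spark:

    import org.apache.spark.ml.feature.StopWordsRemover

    val remover = new StopWordsRemover()
      .setInputCol("raw")
      .setOutputCol("filtered")

    val df = spark.createDataFrame(Seq(
      (0, Seq("I", "saw", "the", "red", "balloon")),
      (1, Seq("Mary", "had", "a", "little", "lamb"))
    )).toDF("id", "raw")

    remover.transform(df).show(false)             // stop words dropped from "raw"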
- STORAGE_MEMORY() - Static method in class org.apache.spark.ui.ToolTips
-
- storageLevel() - Method in class org.apache.spark.sql.Dataset
-
Get the Dataset's current storage level, or StorageLevel.NONE if not persisted.
- storageLevel() - Method in class org.apache.spark.status.api.v1.RDDPartitionInfo
-
- storageLevel() - Method in class org.apache.spark.status.api.v1.RDDStorageInfo
-
- storageLevel() - Method in class org.apache.spark.storage.BlockManagerMessages.UpdateBlockInfo
-
- storageLevel() - Method in class org.apache.spark.storage.BlockStatus
-
- storageLevel() - Method in class org.apache.spark.storage.BlockUpdatedInfo
-
- storageLevel() - Method in class org.apache.spark.storage.RDDInfo
-
- StorageLevel - Class in org.apache.spark.storage
-
:: DeveloperApi ::
Flags for controlling the storage of an RDD.
- StorageLevel() - Constructor for class org.apache.spark.storage.StorageLevel
-
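A small Scala sketch of pinning an RDD at an explicit storage level, assuming an existing SparkContext named sc:

    import org.apache.spark.storage.StorageLevel

    val rdd = sc.parallelize(1 to 1000000)
    rdd.persist(StorageLevel.MEMORY_AND_DISK_SER) // serialized in memory, spills to disk
    println(rdd.count())                          // first action materializes the cache
    rdd.unpersist()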
- storageLevel() - Method in class org.apache.spark.streaming.receiver.Receiver
-
- storageLevelFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- StorageLevels - Class in org.apache.spark.api.java
-
Expose some commonly useful storage level constants.
- StorageLevels() - Constructor for class org.apache.spark.api.java.StorageLevels
-
- storageLevelToJson(StorageLevel) - Static method in class org.apache.spark.util.JsonProtocol
-
- StorageListener - Class in org.apache.spark.ui.storage
-
- StorageListener(StorageStatusListener) - Constructor for class org.apache.spark.ui.storage.StorageListener
-
Deprecated.
- StorageStatus - Class in org.apache.spark.storage
-
- StorageStatus(BlockManagerId, long, Option<Object>, Option<Object>) - Constructor for class org.apache.spark.storage.StorageStatus
-
Deprecated.
- StorageStatus(BlockManagerId, long, Option<Object>, Option<Object>, Map<BlockId, BlockStatus>) - Constructor for class org.apache.spark.storage.StorageStatus
-
Deprecated.
Create a storage status with an initial set of blocks, leaving the source unmodified.
- storageStatusList() - Method in class org.apache.spark.storage.StorageStatusListener
-
Deprecated.
- StorageStatusListener - Class in org.apache.spark.storage
-
- StorageStatusListener(SparkConf) - Constructor for class org.apache.spark.storage.StorageStatusListener
-
Deprecated.
- StorageUtils - Class in org.apache.spark.storage
-
Helper methods for storage-related objects.
- StorageUtils() - Constructor for class org.apache.spark.storage.StorageUtils
-
- store(T) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store a single item of received data to Spark's memory.
- store(ArrayBuffer<T>) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an ArrayBuffer of received data as a data block into Spark's memory.
- store(ArrayBuffer<T>, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an ArrayBuffer of received data as a data block into Spark's memory.
- store(scala.collection.Iterator<T>) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(scala.collection.Iterator<T>, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(java.util.Iterator<T>) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(java.util.Iterator<T>, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store an iterator of received data as a data block into Spark's memory.
- store(ByteBuffer) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store the bytes of received data as a data block into Spark's memory.
- store(ByteBuffer, Object) - Method in class org.apache.spark.streaming.receiver.Receiver
-
Store the bytes of received data as a data block into Spark's memory.
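A hedged Scala sketch of a custom receiver pushing single items through store(T); the counter thread is a stand-in for a real data source:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver

    class CounterReceiver extends Receiver[String](StorageLevel.MEMORY_ONLY) {
      def onStart(): Unit = {
        new Thread("counter-receiver") {
          override def run(): Unit = {
            var i = 0L
            while (!isStopped()) {
              store(s"event-$i")                  // single-item variant; ArrayBuffer,
              i += 1                              // iterator and ByteBuffer variants
              Thread.sleep(100)                   // exist for batched ingestion
            }
          }
        }.start()
      }
      def onStop(): Unit = ()                     // loop exits once isStopped() is true
    }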
- strategy() - Static method in class org.apache.spark.ml.feature.Imputer
-
- strategy() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- Strategy - Class in org.apache.spark.mllib.tree.configuration
-
Stores all the configuration options for tree construction
param: algo Learning goal.
- Strategy(Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>, int, double, int, double, boolean, int) - Constructor for class org.apache.spark.mllib.tree.configuration.Strategy
-
- Strategy(Enumeration.Value, Impurity, int, int, int, Map<Integer, Integer>) - Constructor for class org.apache.spark.mllib.tree.configuration.Strategy
-
- StratifiedSamplingUtils - Class in org.apache.spark.util.random
-
Auxiliary functions and data structures for the sampleByKey method in PairRDDFunctions.
- StratifiedSamplingUtils() - Constructor for class org.apache.spark.util.random.StratifiedSamplingUtils
-
- STREAM() - Static method in class org.apache.spark.storage.BlockId
-
- StreamBlockId - Class in org.apache.spark.storage
-
- StreamBlockId(int, long) - Constructor for class org.apache.spark.storage.StreamBlockId
-
- streamId() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- streamId() - Method in class org.apache.spark.storage.StreamBlockId
-
- streamId() - Method in class org.apache.spark.streaming.receiver.Receiver
-
Get the unique identifier of the receiver input stream that this
receiver is associated with.
- streamId() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
-
- streamIdToInputInfo() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- StreamingContext - Class in org.apache.spark.streaming
-
Main entry point for Spark Streaming functionality.
- StreamingContext(SparkContext, Duration) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Create a StreamingContext using an existing SparkContext.
- StreamingContext(SparkConf, Duration) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Create a StreamingContext by providing the configuration necessary for a new SparkContext.
- StreamingContext(String, String, Duration, String, Seq<String>, Map<String, String>) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Create a StreamingContext by providing the details necessary for creating a new SparkContext.
- StreamingContext(String, Configuration) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Recreate a StreamingContext from a checkpoint file.
- StreamingContext(String) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Recreate a StreamingContext from a checkpoint file.
- StreamingContext(String, SparkContext) - Constructor for class org.apache.spark.streaming.StreamingContext
-
Recreate a StreamingContext from a checkpoint file using an existing SparkContext.
- StreamingContextPythonHelper - Class in org.apache.spark.streaming
-
- StreamingContextPythonHelper() - Constructor for class org.apache.spark.streaming.StreamingContextPythonHelper
-
- StreamingContextState - Enum in org.apache.spark.streaming
-
:: DeveloperApi ::
Represents the state of a StreamingContext.
- StreamingKMeans - Class in org.apache.spark.mllib.clustering
-
StreamingKMeans provides methods for configuring a
streaming k-means analysis, training the model on streaming data,
and using the model to make predictions on streaming data.
- StreamingKMeans(int, double, String) - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeans
-
- StreamingKMeans() - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeans
-
- StreamingKMeansModel - Class in org.apache.spark.mllib.clustering
-
StreamingKMeansModel extends MLlib's KMeansModel for streaming
algorithms, so it can keep track of a continuously updated weight
associated with each cluster, and also update the model by
doing a single iteration of the standard k-means algorithm.
- StreamingKMeansModel(Vector[], double[]) - Constructor for class org.apache.spark.mllib.clustering.StreamingKMeansModel
-
- StreamingLinearAlgorithm<M extends GeneralizedLinearModel,A extends GeneralizedLinearAlgorithm<M>> - Class in org.apache.spark.mllib.regression
-
:: DeveloperApi ::
StreamingLinearAlgorithm implements methods for continuously
training a generalized linear model on streaming data,
and using it for prediction on (possibly different) streaming data.
- StreamingLinearAlgorithm() - Constructor for class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
- StreamingLinearRegressionWithSGD - Class in org.apache.spark.mllib.regression
-
Train or predict a linear regression model on streaming data.
- StreamingLinearRegressionWithSGD() - Constructor for class org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD
-
Construct a StreamingLinearRegression object with default parameters:
{stepSize: 0.1, numIterations: 50, miniBatchFraction: 1.0}.
- StreamingListener - Interface in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
A listener interface for receiving information about an ongoing streaming
computation.
- StreamingListenerBatchCompleted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerBatchCompleted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchCompleted
-
- StreamingListenerBatchStarted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerBatchStarted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchStarted
-
- StreamingListenerBatchSubmitted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerBatchSubmitted(BatchInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerBatchSubmitted
-
- StreamingListenerEvent - Interface in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
Base trait for events related to StreamingListener.
- StreamingListenerOutputOperationCompleted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerOutputOperationCompleted(OutputOperationInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationCompleted
-
- StreamingListenerOutputOperationStarted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerOutputOperationStarted(OutputOperationInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerOutputOperationStarted
-
- StreamingListenerReceiverError - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerReceiverError(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverError
-
- StreamingListenerReceiverStarted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerReceiverStarted(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStarted
-
- StreamingListenerReceiverStopped - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerReceiverStopped(ReceiverInfo) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerReceiverStopped
-
- StreamingListenerStreamingStarted - Class in org.apache.spark.streaming.scheduler
-
- StreamingListenerStreamingStarted(long) - Constructor for class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- StreamingLogisticRegressionWithSGD - Class in org.apache.spark.mllib.classification
-
Train or predict a logistic regression model on streaming data.
- StreamingLogisticRegressionWithSGD() - Constructor for class org.apache.spark.mllib.classification.StreamingLogisticRegressionWithSGD
-
Construct a StreamingLogisticRegression object with default parameters:
{stepSize: 0.1, numIterations: 50, miniBatchFraction: 1.0, regParam: 0.0}.
- StreamingQuery - Interface in org.apache.spark.sql.streaming
-
A handle to a query that is executing continuously in the background as new data arrives.
- StreamingQueryException - Exception in org.apache.spark.sql.streaming
-
- StreamingQueryListener - Class in org.apache.spark.sql.streaming
-
- StreamingQueryListener() - Constructor for class org.apache.spark.sql.streaming.StreamingQueryListener
-
- StreamingQueryListener.Event - Interface in org.apache.spark.sql.streaming
-
- StreamingQueryListener.QueryProgressEvent - Class in org.apache.spark.sql.streaming
-
Event representing any progress updates in a query.
- StreamingQueryListener.QueryStartedEvent - Class in org.apache.spark.sql.streaming
-
Event representing the start of a query
param: id A unique query id that persists across restarts.
- StreamingQueryListener.QueryTerminatedEvent - Class in org.apache.spark.sql.streaming
-
Event representing the termination of a query.
- StreamingQueryManager - Class in org.apache.spark.sql.streaming
-
- StreamingQueryProgress - Class in org.apache.spark.sql.streaming
-
Information about progress made in the execution of a
StreamingQuery
during
a trigger.
- StreamingQueryStatus - Class in org.apache.spark.sql.streaming
-
Reports information about the instantaneous status of a streaming query.
- StreamingStatistics - Class in org.apache.spark.status.api.v1.streaming
-
- StreamingTest - Class in org.apache.spark.mllib.stat.test
-
Performs online 2-sample significance testing for a stream of (Boolean, Double) pairs.
- StreamingTest() - Constructor for class org.apache.spark.mllib.stat.test.StreamingTest
-
- StreamInputInfo - Class in org.apache.spark.streaming.scheduler
-
:: DeveloperApi ::
Track the information of the input stream at the specified batch time.
- StreamInputInfo(int, long, Map<String, Object>) - Constructor for class org.apache.spark.streaming.scheduler.StreamInputInfo
-
- streamName() - Method in class org.apache.spark.status.api.v1.streaming.ReceiverInfo
-
- streams() - Method in class org.apache.spark.sql.SparkSession
-
:: Experimental ::
Returns a StreamingQueryManager that allows managing all the
StreamingQuery instances active on this session.
- streams() - Method in class org.apache.spark.sql.SQLContext
-
- StreamSinkProvider - Interface in org.apache.spark.sql.sources
-
::Experimental::
Implemented by objects that can produce a streaming Sink
for a specific format or system.
- StreamSourceProvider - Interface in org.apache.spark.sql.sources
-
::Experimental::
Implemented by objects that can produce a streaming Source
for a specific format or system.
- STRING() - Static method in class org.apache.spark.api.r.SerializationFormats
-
- string() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField
of type string.
- STRING() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable string type.
- StringAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.StringAccumulatorParam$
-
Deprecated.
- StringArrayParam - Class in org.apache.spark.ml.param
-
:: DeveloperApi ::
Specialized version of Param[Array[String]]
for Java.
- StringArrayParam(Params, String, String, Function1<String[], Object>) - Constructor for class org.apache.spark.ml.param.StringArrayParam
-
- StringArrayParam(Params, String, String) - Constructor for class org.apache.spark.ml.param.StringArrayParam
-
- StringContains - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true
iff the attribute evaluates to
a string that contains the string value
.
- StringContains(String, String) - Constructor for class org.apache.spark.sql.sources.StringContains
-
- StringEndsWith - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true
iff the attribute evaluates to
a string that ends with value
.
- StringEndsWith(String, String) - Constructor for class org.apache.spark.sql.sources.StringEndsWith
-
- StringIndexer - Class in org.apache.spark.ml.feature
-
A label indexer that maps a string column of labels to an ML column of label indices.
- StringIndexer(String) - Constructor for class org.apache.spark.ml.feature.StringIndexer
-
- StringIndexer() - Constructor for class org.apache.spark.ml.feature.StringIndexer
-
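A minimal Scala sketch of StringIndexer, assuming an existing SparkSession named spark:

    import org.apache.spark.ml.feature.StringIndexer

    val df = spark.createDataFrame(
      Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"))
    ).toDF("id", "category")

    val indexer = new StringIndexer()
      .setInputCol("category")
      .setOutputCol("categoryIndex")

    indexer.fit(df).transform(df).show()          // most frequent label maps to 0.0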
- StringIndexerModel - Class in org.apache.spark.ml.feature
-
- StringIndexerModel(String, String[]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
-
- StringIndexerModel(String[]) - Constructor for class org.apache.spark.ml.feature.StringIndexerModel
-
- stringPrefix() - Static method in class org.apache.spark.sql.types.StructType
-
- StringRRDD<T> - Class in org.apache.spark.api.r
-
An RDD that stores R objects as Array[String].
- StringRRDD(RDD<T>, byte[], String, byte[], Object[], ClassTag<T>) - Constructor for class org.apache.spark.api.r.StringRRDD
-
- StringStartsWith - Class in org.apache.spark.sql.sources
-
A filter that evaluates to true
iff the attribute evaluates to
a string that starts with value
.
- StringStartsWith(String, String) - Constructor for class org.apache.spark.sql.sources.StringStartsWith
-
- StringToColumn(StringContext) - Constructor for class org.apache.spark.sql.SQLImplicits.StringToColumn
-
- stringToSeq(String, Function1<String, T>) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- StringType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the StringType object.
- StringType - Class in org.apache.spark.sql.types
-
The data type representing String
values.
- stripDirectory(String) - Static method in class org.apache.spark.util.Utils
-
Strip the directory from a path name.
- stripXSS(String) - Static method in class org.apache.spark.ui.UIUtils
-
Remove suspicious characters from user input to prevent cross-site scripting (XSS) attacks.
- stronglyConnectedComponents(int) - Method in class org.apache.spark.graphx.GraphOps
-
Compute the strongly connected component (SCC) of each vertex and return a graph with the
vertex value containing the lowest vertex id in the SCC containing that vertex.
- StronglyConnectedComponents - Class in org.apache.spark.graphx.lib
-
Strongly connected components algorithm implementation.
- StronglyConnectedComponents() - Constructor for class org.apache.spark.graphx.lib.StronglyConnectedComponents
-
- struct(Seq<StructField>) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField
of type struct.
- struct(StructType) - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField
of type struct.
- struct(Column...) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column.
- struct(String, String...) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column that composes multiple input columns.
- struct(Seq<Column>) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column.
- struct(String, Seq<String>) - Static method in class org.apache.spark.sql.functions
-
Creates a new struct column that composes multiple input columns.
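A short Scala sketch composing two columns into one struct column, assuming an existing SparkSession named spark:

    import org.apache.spark.sql.functions.struct
    import spark.implicits._

    val df = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")
    // Nest both columns under a single struct column named "person".
    df.select(struct($"name", $"age").as("person")).printSchema()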
- StructField - Class in org.apache.spark.sql.types
-
A field inside a StructType.
- StructField(String, DataType, boolean, Metadata) - Constructor for class org.apache.spark.sql.types.StructField
-
- StructType - Class in org.apache.spark.sql.types
-
- StructType(StructField[]) - Constructor for class org.apache.spark.sql.types.StructType
-
- StructType() - Constructor for class org.apache.spark.sql.types.StructType
-
No-arg constructor for kryo.
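A minimal Scala sketch of building a schema from StructField values:

    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    val schema = StructType(Array(
      StructField("id", IntegerType, nullable = false),
      StructField("name", StringType, nullable = true)
    ))
    println(schema.treeString)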
- StudentTTest - Class in org.apache.spark.mllib.stat.test
-
Performs Student's 2-sample t-test.
- StudentTTest() - Constructor for class org.apache.spark.mllib.stat.test.StudentTTest
-
- subexpressionEliminationEnabled() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- subgraph(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.Graph
-
Restricts the graph to only the vertices and edges satisfying the predicates.
- subgraph(Function1<EdgeTriplet<VD, ED>, Object>, Function2<Object, VD, Object>) - Method in class org.apache.spark.graphx.impl.GraphImpl
-
- subgraph$default$1() - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
- subgraph$default$2() - Static method in class org.apache.spark.graphx.impl.GraphImpl
-
- submissionTime() - Method in class org.apache.spark.scheduler.StageInfo
-
When this stage was submitted from the DAGScheduler to a TaskScheduler.
- submissionTime() - Method in interface org.apache.spark.SparkStageInfo
-
- submissionTime() - Method in class org.apache.spark.SparkStageInfoImpl
-
- submissionTime() - Method in class org.apache.spark.status.api.v1.JobData
-
- submissionTime() - Method in class org.apache.spark.status.api.v1.StageData
-
- submissionTime() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
- submissionTime() - Method in class org.apache.spark.ui.jobs.UIData.JobUIData
-
- submitJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, Function0<R>) - Method in interface org.apache.spark.JobSubmitter
-
Submit a job for execution and return a FutureAction holding the result.
- submitJob(RDD<T>, Function1<Iterator<T>, U>, Seq<Object>, Function2<Object, U, BoxedUnit>, Function0<R>) - Method in class org.apache.spark.SparkContext
-
Submit a job for execution and return a FutureJob holding the result.
- subqueries() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- subqueries() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- subqueries() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- subsamplingRate() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- subsamplingRate() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- subsamplingRate() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- subsamplingRate() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- subsamplingRate() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- subsamplingRate() - Static method in class org.apache.spark.ml.clustering.LDA
-
- subsamplingRate() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- subsamplingRate() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- subsamplingRate() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- subsamplingRate() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- subsamplingRate() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- subsamplingRate() - Method in class org.apache.spark.mllib.tree.configuration.Strategy
-
- subsetAccuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
-
Returns subset accuracy (for equal sets of labels).
- substr(Column, Column) - Method in class org.apache.spark.sql.Column
-
An expression that returns a substring.
- substr(int, int) - Method in class org.apache.spark.sql.Column
-
An expression that returns a substring.
- substring(Column, int, int) - Static method in class org.apache.spark.sql.functions
-
Substring starts at pos and is of length len when str is String type, or
returns the slice of the byte array that starts at pos (in bytes) and is of
length len when str is Binary type.
- substring_index(Column, String, int) - Static method in class org.apache.spark.sql.functions
-
Returns the substring from string str before count occurrences of the delimiter delim.
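A short Scala sketch of both functions, assuming an existing SparkSession named spark:

    import org.apache.spark.sql.functions.{substring, substring_index}
    import spark.implicits._

    val df = Seq("spark.apache.org").toDF("s")
    df.select(
      substring($"s", 1, 5),                      // "spark" (positions are 1-based)
      substring_index($"s", ".", 2)               // "spark.apache": before the 2nd "."
    ).show()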
- subtract(JavaDoubleRDD) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaDoubleRDD, int) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaDoubleRDD, Partitioner) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaPairRDD<K, V>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaPairRDD<K, V>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaPairRDD<K, V>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaRDD<T>) - Method in class org.apache.spark.api.java.JavaRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaRDD<T>, int) - Method in class org.apache.spark.api.java.JavaRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(JavaRDD<T>, Partitioner) - Method in class org.apache.spark.api.java.JavaRDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(RDD<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.api.r.RRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- subtract(RDD<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- subtract(RDD<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- subtract(RDD<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- subtract(RDD<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.graphx.VertexRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- subtract(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Subtracts the given block matrix other
from this
block matrix: this - other
.
- subtract(RDD<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- subtract(RDD<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- subtract(RDD<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- subtract(RDD<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- subtract(RDD<T>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(RDD<T>, int) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD with the elements from this
that are not in other
.
- subtract(RDD<T>, Partitioner, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Return an RDD with the elements from this
that are not in other
.
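A small Scala sketch of subtract, assuming an existing SparkContext named sc:

    val a = sc.parallelize(Seq(1, 2, 3, 4, 5))
    val b = sc.parallelize(Seq(3, 4))
    a.subtract(b).collect().sorted                // Array(1, 2, 5)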
- subtract(RDD<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- subtract(RDD<T>, int) - Static method in class org.apache.spark.rdd.UnionRDD
-
- subtract(RDD<T>, Partitioner, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- subtract(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.api.r.RRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.graphx.VertexRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- subtract$default$3(RDD<T>, Partitioner) - Static method in class org.apache.spark.rdd.UnionRDD
-
- subtractByKey(JavaPairRDD<K, W>) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the pairs from this
whose keys are not in other
.
- subtractByKey(JavaPairRDD<K, W>, int) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the pairs from this
whose keys are not in other
.
- subtractByKey(JavaPairRDD<K, W>, Partitioner) - Method in class org.apache.spark.api.java.JavaPairRDD
-
Return an RDD with the pairs from this
whose keys are not in other
.
- subtractByKey(RDD<Tuple2<K, W>>, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the pairs from this
whose keys are not in other
.
- subtractByKey(RDD<Tuple2<K, W>>, int, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the pairs from this
whose keys are not in other
.
- subtractByKey(RDD<Tuple2<K, W>>, Partitioner, ClassTag<W>) - Method in class org.apache.spark.rdd.PairRDDFunctions
-
Return an RDD with the pairs from this
whose keys are not in other
.
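A small Scala sketch of subtractByKey, assuming an existing SparkContext named sc:

    val left  = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
    val right = sc.parallelize(Seq(("b", 99)))
    // Keeps only the pairs whose key is absent from `right`: ("a", 1) and ("c", 3).
    left.subtractByKey(right).collect()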
- subtreeToString$default$1() - Static method in class org.apache.spark.ml.tree.InternalNode
-
- succeededTasks() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- succeededTasks() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- success(T) - Static method in class org.apache.spark.ml.feature.RFormulaParser
-
- Success - Class in org.apache.spark
-
:: DeveloperApi ::
Task succeeded.
- Success() - Constructor for class org.apache.spark.Success
-
- successful() - Method in class org.apache.spark.scheduler.TaskInfo
-
- sum() - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Add up the elements in this RDD.
- Sum() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- sum() - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Add up the elements in this RDD.
- sum(MapFunction<T, Double>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Sum aggregate function for floating point (double) type.
- sum(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Sum aggregate function for floating point (double) type.
- sum(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of all values in the expression.
- sum(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of all values in the given column.
- sum(String...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the sum for each numeric column for each group.
- sum(Seq<String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
-
Compute the sum for each numeric column for each group.
- sum(Numeric<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- sum() - Method in class org.apache.spark.util.DoubleAccumulator
-
Returns the sum of elements added to the accumulator.
- sum() - Method in class org.apache.spark.util.LongAccumulator
-
Returns the sum of elements added to the accumulator.
- sum() - Method in class org.apache.spark.util.StatCounter
-
- sumApprox(long, Double) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Approximate operation to return the sum within a timeout.
- sumApprox(long) - Method in class org.apache.spark.api.java.JavaDoubleRDD
-
Approximate operation to return the sum within a timeout.
- sumApprox(long, double) - Method in class org.apache.spark.rdd.DoubleRDDFunctions
-
Approximate operation to return the sum within a timeout.
- sumDistinct(Column) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of distinct values in the expression.
- sumDistinct(String) - Static method in class org.apache.spark.sql.functions
-
Aggregate function: returns the sum of distinct values in the expression.
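A short Scala sketch of sum beside sumDistinct, assuming an existing SparkSession named spark:

    import org.apache.spark.sql.functions.{sum, sumDistinct}
    import spark.implicits._

    val df = Seq(("a", 1), ("a", 1), ("a", 2)).toDF("k", "v")
    // For key "a": sum counts every row (4); sumDistinct counts each distinct value once (3).
    df.groupBy("k").agg(sum("v"), sumDistinct("v")).show()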
- sumLong(MapFunction<T, Long>) - Static method in class org.apache.spark.sql.expressions.javalang.typed
-
Sum aggregate function for integral (long, i.e. 64-bit integer) type.
- sumLong(Function1<IN, Object>) - Static method in class org.apache.spark.sql.expressions.scalalang.typed
-
Sum aggregate function for integral (long, i.e. 64-bit integer) type.
- summary() - Method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.clustering.KMeansModel
-
Gets summary of model on training set.
- summary() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
Gets R-like summary of model on training set.
- summary() - Method in class org.apache.spark.ml.regression.LinearRegressionModel
-
Gets summary (e.g. residuals, mse, r-squared) of model on training set.
- supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Accessor for supported featureSubsetStrategy settings: auto, all, onethird, sqrt, log2
- supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Accessor for supported featureSubsetStrategy settings: auto, all, onethird, sqrt, log2
- supportedFeatureSubsetStrategies() - Static method in class org.apache.spark.mllib.tree.RandomForest
-
List of supported feature subset sampling strategies.
- supportedImpurities() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
Accessor for supported impurities: entropy, gini
- supportedImpurities() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
Accessor for supported impurity settings: entropy, gini
- supportedImpurities() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
Accessor for supported impurities: variance
- supportedImpurities() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
Accessor for supported impurity settings: variance
- supportedLossTypes() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
Accessor for supported loss settings: logistic
- supportedLossTypes() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
Accessor for supported loss settings: squared (L2), absolute (L1)
- supportedOptimizers() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- supportedOptimizers() - Static method in class org.apache.spark.ml.clustering.LDA
-
- supportedOptimizers() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- supportedSelectorTypes() - Static method in class org.apache.spark.mllib.feature.ChiSqSelector
-
Set of selector types that ChiSqSelector supports.
- surrogateDF() - Method in class org.apache.spark.ml.feature.ImputerModel
-
- SVDPlusPlus - Class in org.apache.spark.graphx.lib
-
Implementation of SVD++ algorithm.
- SVDPlusPlus() - Constructor for class org.apache.spark.graphx.lib.SVDPlusPlus
-
- SVDPlusPlus.Conf - Class in org.apache.spark.graphx.lib
-
Configuration parameters for SVDPlusPlus.
- SVMDataGenerator - Class in org.apache.spark.mllib.util
-
:: DeveloperApi ::
Generate sample data used for SVM.
- SVMDataGenerator() - Constructor for class org.apache.spark.mllib.util.SVMDataGenerator
-
- SVMModel - Class in org.apache.spark.mllib.classification
-
Model for Support Vector Machines (SVMs).
- SVMModel(Vector, double) - Constructor for class org.apache.spark.mllib.classification.SVMModel
-
- SVMWithSGD - Class in org.apache.spark.mllib.classification
-
Train a Support Vector Machine (SVM) using Stochastic Gradient Descent.
- SVMWithSGD() - Constructor for class org.apache.spark.mllib.classification.SVMWithSGD
-
Construct a SVM object with default parameters: {stepSize: 1.0, numIterations: 100,
regParam: 0.01, miniBatchFraction: 1.0}.
- symbolToColumn(Symbol) - Method in class org.apache.spark.sql.SQLImplicits
-
An implicit conversion that turns a Scala
Symbol
into a
Column
.
- symlink(File, File) - Static method in class org.apache.spark.util.Utils
-
Creates a symlink.
- symmetricEigs(Function1<DenseVector<Object>, DenseVector<Object>>, int, int, double, int) - Static method in class org.apache.spark.mllib.linalg.EigenValueDecomposition
-
Compute the leading k eigenvalues and eigenvectors on a symmetric square matrix using ARPACK.
- syr(double, Vector, DenseMatrix) - Static method in class org.apache.spark.ml.linalg.BLAS
-
A := alpha * x * x^T + A
- syr(double, Vector, DenseMatrix) - Static method in class org.apache.spark.mllib.linalg.BLAS
-
A := alpha * x * x^T + A
- SYSTEM_DEFAULT() - Static method in class org.apache.spark.sql.types.DecimalType
-
- systemProperties() - Method in class org.apache.spark.status.api.v1.ApplicationEnvironmentInfo
-
- systemProperties() - Method in class org.apache.spark.ui.env.EnvironmentListener
-
Deprecated.
- t() - Method in class org.apache.spark.SerializableWritable
-
- Table - Class in org.apache.spark.sql.catalog
-
A table in Spark, as returned by the
listTables
method in
Catalog
.
- Table(String, String, String, String, boolean) - Constructor for class org.apache.spark.sql.catalog.Table
-
- table(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Returns the specified table as a DataFrame
.
- table() - Method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- table(String) - Method in class org.apache.spark.sql.SparkSession
-
Returns the specified table/view as a DataFrame
.
- table(String) - Method in class org.apache.spark.sql.SQLContext
-
- TABLE_CLASS_NOT_STRIPED() - Static method in class org.apache.spark.ui.UIUtils
-
- TABLE_CLASS_STRIPED() - Static method in class org.apache.spark.ui.UIUtils
-
- TABLE_CLASS_STRIPED_SORTABLE() - Static method in class org.apache.spark.ui.UIUtils
-
- tableDesc() - Method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- tableExists(String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Check if the table or view with the specified name exists.
- tableExists(String, String) - Method in class org.apache.spark.sql.catalog.Catalog
-
Check if the table or view with the specified name exists in the specified database.
- tableNames() - Method in class org.apache.spark.sql.SQLContext
-
- tableNames(String) - Method in class org.apache.spark.sql.SQLContext
-
- tables() - Method in class org.apache.spark.sql.SQLContext
-
- tables(String) - Method in class org.apache.spark.sql.SQLContext
-
- TableScan - Interface in org.apache.spark.sql.sources
-
A BaseRelation that can produce all of its tuples as an RDD of Row objects.
- tableType() - Method in class org.apache.spark.sql.catalog.Table
-
- tail() - Static method in class org.apache.spark.sql.types.StructType
-
- tails() - Static method in class org.apache.spark.sql.types.StructType
-
- take(int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- take(int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- take(int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- take(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Take the first num elements of the RDD.
- take(int) - Static method in class org.apache.spark.api.r.RRDD
-
- take(int) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- take(int) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- take(int) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- take(int) - Static method in class org.apache.spark.graphx.VertexRDD
-
- take(int) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- take(int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- take(int) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- take(int) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- take(int) - Method in class org.apache.spark.rdd.RDD
-
Take the first num elements of the RDD.
- take(int) - Static method in class org.apache.spark.rdd.UnionRDD
-
- take(int) - Method in class org.apache.spark.sql.Dataset
-
Returns the first n
rows in the Dataset.
- take(int) - Static method in class org.apache.spark.sql.types.StructType
-
- takeAsList(int) - Method in class org.apache.spark.sql.Dataset
-
Returns the first n
rows in the Dataset as a list.
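A short Scala sketch of the two variants, assuming an existing SparkSession named spark:

    val ds = spark.range(100)
    ds.take(3)                                    // Array(0, 1, 2) as a Scala array
    ds.takeAsList(3)                              // the same rows as a java.util.List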
- takeAsync(int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- takeAsync(int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- takeAsync(int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- takeAsync(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
The asynchronous version of the take
action, which returns a
future for retrieving the first num
elements of this RDD.
- takeAsync(int) - Method in class org.apache.spark.rdd.AsyncRDDActions
-
Returns a future for retrieving the first num elements of the RDD.
- takeOrdered(int, Comparator<T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- takeOrdered(int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- takeOrdered(int, Comparator<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- takeOrdered(int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- takeOrdered(int, Comparator<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- takeOrdered(int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- takeOrdered(int, Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the first k (smallest) elements from this RDD as defined by
the specified Comparator[T] and maintains the order.
- takeOrdered(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the first k (smallest) elements from this RDD using the
natural ordering for T while maintaining the order.
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- takeOrdered(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the first k (smallest) elements from this RDD as defined by the specified
implicit Ordering[T] and maintains the ordering.
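A sketch of takeOrdered, including an explicit reversed Ordering (sc is an assumed SparkContext):

    val rdd = sc.parallelize(Seq(7, 2, 9, 4))
    val smallest = rdd.takeOrdered(2)                        // Array(2, 4)
    val largest  = rdd.takeOrdered(2)(Ordering[Int].reverse) // Array(9, 7), equivalent to top(2)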
- takeOrdered(int, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- takeRight(int) - Static method in class org.apache.spark.sql.types.StructType
-
- takeSample(boolean, int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- takeSample(boolean, int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- takeSample(boolean, int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.api.java.JavaRDD
-
- takeSample(boolean, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
- takeSample(boolean, int, long) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.api.r.RRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.graphx.VertexRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- takeSample(boolean, int, long) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- takeSample(boolean, int, long) - Method in class org.apache.spark.rdd.RDD
-
Return a fixed-size sampled subset of this RDD in an array
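A sketch of takeSample with a fixed seed (sc assumed); note the sample is collected to the driver, so num should stay small:

    val rdd = sc.parallelize(1 to 1000)
    val sample = rdd.takeSample(withReplacement = false, num = 10, seed = 42L)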
- takeSample(boolean, int, long) - Static method in class org.apache.spark.rdd.UnionRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.api.r.RRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- takeSample$default$3() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- takeSample$default$3() - Static method in class org.apache.spark.graphx.VertexRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- takeSample$default$3() - Static method in class org.apache.spark.rdd.UnionRDD
-
- takeWhile(Function1<A, Object>) - Static method in class org.apache.spark.sql.types.StructType
-
- tallSkinnyQR(boolean) - Method in class org.apache.spark.mllib.linalg.distributed.RowMatrix
-
- tan(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the tangent of the given value.
- tan(String) - Static method in class org.apache.spark.sql.functions
-
Computes the tangent of the given column.
- tanh(Column) - Static method in class org.apache.spark.sql.functions
-
Computes the hyperbolic tangent of the given value.
- tanh(String) - Static method in class org.apache.spark.sql.functions
-
Computes the hyperbolic tangent of the given column.
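A sketch applying tan and tanh as column expressions (assuming a SparkSession named spark):

    import org.apache.spark.sql.functions.{col, tan, tanh}

    val df = spark.range(3).toDF("x")
    df.select(tan(col("x")), tanh(col("x"))).show()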
- targetStorageLevel() - Method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- targetStorageLevel() - Method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- task() - Method in class org.apache.spark.CleanupTaskWeakReference
-
- TASK_DESERIALIZATION_TIME() - Static method in class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- TASK_DESERIALIZATION_TIME() - Static method in class org.apache.spark.ui.ToolTips
-
- TASK_TIME() - Static method in class org.apache.spark.ui.ToolTips
-
- taskAttemptId() - Method in class org.apache.spark.TaskContext
-
An ID that is unique to this task attempt (within the same SparkContext, no two task attempts
will share the same attempt ID).
- TaskCommitDenied - Class in org.apache.spark
-
:: DeveloperApi ::
Task requested the driver to commit, but was denied.
- TaskCommitDenied(int, int, int) - Constructor for class org.apache.spark.TaskCommitDenied
-
- TaskCommitMessage(Object) - Constructor for class org.apache.spark.internal.io.FileCommitProtocol.TaskCommitMessage
-
- TaskCompletionListener - Interface in org.apache.spark.util
-
:: DeveloperApi ::
- TaskContext - Class in org.apache.spark
-
Contextual information about a task which can be read or mutated during
execution.
- TaskContext() - Constructor for class org.apache.spark.TaskContext
-
- TaskData - Class in org.apache.spark.status.api.v1
-
- taskData() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
-
- TaskDetailsClassNames - Class in org.apache.spark.ui.jobs
-
Names of the CSS classes corresponding to each type of task detail.
- TaskDetailsClassNames() - Constructor for class org.apache.spark.ui.jobs.TaskDetailsClassNames
-
- taskDuration() - Method in class org.apache.spark.ui.jobs.UIData.TaskUIData
-
- taskEndFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- TaskEndReason - Interface in org.apache.spark
-
:: DeveloperApi ::
Various possible reasons why a task ended.
- taskEndReasonFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskEndReasonToJson(TaskEndReason) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskEndToJson(SparkListenerTaskEnd) - Static method in class org.apache.spark.util.JsonProtocol
-
- TaskFailedReason - Interface in org.apache.spark
-
:: DeveloperApi ::
Various possible reasons why a task failed.
- TaskFailureListener - Interface in org.apache.spark.util
-
:: DeveloperApi ::
- taskFailures() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- taskGettingResultFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskGettingResultToJson(SparkListenerTaskGettingResult) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.KillTask
-
- taskId() - Method in class org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages.StatusUpdate
-
- taskId() - Method in class org.apache.spark.scheduler.local.KillTask
-
- taskId() - Method in class org.apache.spark.scheduler.local.StatusUpdate
-
- taskId() - Method in class org.apache.spark.scheduler.TaskInfo
-
- taskId() - Method in class org.apache.spark.status.api.v1.TaskData
-
- taskId() - Method in class org.apache.spark.storage.TaskResultBlockId
-
- taskInfo() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- taskInfo() - Method in class org.apache.spark.scheduler.SparkListenerTaskGettingResult
-
- taskInfo() - Method in class org.apache.spark.scheduler.SparkListenerTaskStart
-
- TaskInfo - Class in org.apache.spark.scheduler
-
:: DeveloperApi ::
Information about a running task attempt inside a TaskSet.
- TaskInfo(long, int, int, long, String, String, Enumeration.Value, boolean) - Constructor for class org.apache.spark.scheduler.TaskInfo
-
- taskInfo() - Method in class org.apache.spark.ui.jobs.UIData.TaskUIData
-
- taskInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskInfoToJson(TaskInfo) - Static method in class org.apache.spark.util.JsonProtocol
-
- TaskKilled - Class in org.apache.spark
-
:: DeveloperApi ::
Task was killed intentionally and needs to be rescheduled.
- TaskKilled(String) - Constructor for class org.apache.spark.TaskKilled
-
- TaskKilledException - Exception in org.apache.spark
-
:: DeveloperApi ::
Exception thrown when a task is explicitly killed (i.e., task failure is expected).
- TaskKilledException(String) - Constructor for exception org.apache.spark.TaskKilledException
-
- TaskKilledException() - Constructor for exception org.apache.spark.TaskKilledException
-
- taskLocality() - Method in class org.apache.spark.scheduler.TaskInfo
-
- TaskLocality - Class in org.apache.spark.scheduler
-
- TaskLocality() - Constructor for class org.apache.spark.scheduler.TaskLocality
-
- taskLocality() - Method in class org.apache.spark.status.api.v1.TaskData
-
- TaskMetricDistributions - Class in org.apache.spark.status.api.v1
-
- taskMetrics() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- taskMetrics() - Method in class org.apache.spark.scheduler.StageInfo
-
- taskMetrics() - Method in class org.apache.spark.status.api.v1.TaskData
-
- TaskMetrics - Class in org.apache.spark.status.api.v1
-
- taskMetrics() - Method in class org.apache.spark.TaskContext
-
- taskMetricsFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskMetricsToJson(TaskMetrics) - Static method in class org.apache.spark.util.JsonProtocol
-
- TaskMetricsUIData(long, long, long, long, long, long, long, long, long, long, UIData.InputMetricsUIData, UIData.OutputMetricsUIData, UIData.ShuffleReadMetricsUIData, UIData.ShuffleWriteMetricsUIData) - Constructor for class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData
-
- TaskMetricsUIData$() - Constructor for class org.apache.spark.ui.jobs.UIData.TaskMetricsUIData$
-
- TASKRESULT() - Static method in class org.apache.spark.storage.BlockId
-
- TaskResultBlockId - Class in org.apache.spark.storage
-
- TaskResultBlockId(long) - Constructor for class org.apache.spark.storage.TaskResultBlockId
-
- TaskResultLost - Class in org.apache.spark
-
:: DeveloperApi ::
The task finished successfully, but the result was lost from the executor's block manager before
it was fetched.
- TaskResultLost() - Constructor for class org.apache.spark.TaskResultLost
-
- tasks() - Method in class org.apache.spark.status.api.v1.StageData
-
- TaskSchedulerIsSet - Class in org.apache.spark
-
An event that SparkContext uses to notify HeartbeatReceiver that SparkContext.taskScheduler is
created.
- TaskSchedulerIsSet() - Constructor for class org.apache.spark.TaskSchedulerIsSet
-
- TaskSorting - Enum in org.apache.spark.status.api.v1
-
- taskStartFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
-
- taskStartToJson(SparkListenerTaskStart) - Static method in class org.apache.spark.util.JsonProtocol
-
- TaskState - Class in org.apache.spark
-
- TaskState() - Constructor for class org.apache.spark.TaskState
-
- taskTime() - Method in class org.apache.spark.status.api.v1.ExecutorStageSummary
-
- taskTime() - Method in class org.apache.spark.ui.jobs.UIData.ExecutorSummary
-
- taskType() - Method in class org.apache.spark.scheduler.SparkListenerTaskEnd
-
- TaskUIData$() - Constructor for class org.apache.spark.ui.jobs.UIData.TaskUIData$
-
- TEMP_DIR_SHUTDOWN_PRIORITY() - Static method in class org.apache.spark.util.ShutdownHookManager
-
The shutdown priority of temp directory must be lower than the SparkContext shutdown
priority.
- TEMP_LOCAL() - Static method in class org.apache.spark.storage.BlockId
-
- TEMP_SHUFFLE() - Static method in class org.apache.spark.storage.BlockId
-
- tempFileWith(File) - Static method in class org.apache.spark.util.Utils
-
Returns the path of a temporary file in the same directory as path.
- terminateProcess(Process, long) - Static method in class org.apache.spark.util.Utils
-
Terminates a process, waiting at most the specified duration for it to exit.
- test(Dataset<Row>, String, String) - Static method in class org.apache.spark.ml.stat.ChiSquareTest
-
Conduct Pearson's independence test for every feature against the label.
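A sketch of ChiSquareTest.test on a toy DataFrame (spark is an assumed SparkSession; the data is illustrative only):

    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.ml.stat.ChiSquareTest

    val data = Seq(
      (0.0, Vectors.dense(0.5, 10.0)),
      (0.0, Vectors.dense(1.5, 20.0)),
      (1.0, Vectors.dense(1.5, 30.0)),
      (1.0, Vectors.dense(3.5, 40.0))
    )
    val df = spark.createDataFrame(data).toDF("label", "features")
    val result = ChiSquareTest.test(df, "features", "label")
    result.select("pValues", "degreesOfFreedom", "statistics").show(truncate = false)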
- TEST() - Static method in class org.apache.spark.storage.BlockId
-
- TEST_ACCUM() - Static method in class org.apache.spark.InternalAccumulator
-
- testCommandAvailable(String) - Static method in class org.apache.spark.TestUtils
-
Test if a command is available.
- testOneSample(RDD<Object>, String, double...) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
-
A convenience function that runs the KS test for one set of sample data against a named distribution.
- testOneSample(RDD<Object>, Function1<Object, Object>) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
-
- testOneSample(RDD<Object>, RealDistribution) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
-
- testOneSample(RDD<Object>, String, Seq<Object>) - Static method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTest
-
- TestResult<DF> - Interface in org.apache.spark.mllib.stat.test
-
Trait for hypothesis test results.
- TestUtils - Class in org.apache.spark
-
Utilities for tests.
- TestUtils() - Constructor for class org.apache.spark.TestUtils
-
- text(String...) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- text(String) - Method in class org.apache.spark.sql.DataFrameWriter
-
Saves the content of the DataFrame
in a text file at the specified path.
- text(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
- textFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a text file from HDFS, a local file system (available on all nodes), or any
Hadoop-supported file system URI, and return it as an RDD of Strings.
- textFile(String, int) - Method in class org.apache.spark.api.java.JavaSparkContext
-
Read a text file from HDFS, a local file system (available on all nodes), or any
Hadoop-supported file system URI, and return it as an RDD of Strings.
- textFile(String, int) - Method in class org.apache.spark.SparkContext
-
Read a text file from HDFS, a local file system (available on all nodes), or any
Hadoop-supported file system URI, and return it as an RDD of Strings.
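A sketch of textFile feeding a word count (sc assumed; the path is a placeholder):

    val lines = sc.textFile("hdfs://namenode/data/input.txt", minPartitions = 4)
    val counts = lines.flatMap(_.split("\\s+"))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)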
- textFile(String...) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads text files and returns a Dataset of String.
- textFile(String) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads text files and returns a Dataset of String.
- textFile(Seq<String>) - Method in class org.apache.spark.sql.DataFrameReader
-
Loads text files and returns a Dataset of String.
- textFile(String) - Method in class org.apache.spark.sql.streaming.DataStreamReader
-
Loads text file(s) and returns a Dataset of String.
- textFileStream(String) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them as text files (using key as LongWritable, value
as Text and input format as TextInputFormat).
- textFileStream(String) - Method in class org.apache.spark.streaming.StreamingContext
-
Create an input stream that monitors a Hadoop-compatible filesystem
for new files and reads them as text files (using key as LongWritable, value
as Text and input format as TextInputFormat).
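A sketch of textFileStream (sc assumed; the monitored directory is a placeholder, and only files created after the stream starts are read):

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(10))
    val lines = ssc.textFileStream("hdfs://namenode/incoming")
    lines.count().print()
    ssc.start()
    ssc.awaitTermination()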
- textResponderToServlet(Function1<HttpServletRequest, String>) - Static method in class org.apache.spark.ui.JettyUtils
-
- theta() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- theta() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$.Data
-
- theta() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$.Data
-
- theta() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel
-
- thisClassName() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
Hard-code class name string in case it changes in the future
- thisClassName() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
Hard-code class name string in case it changes in the future
- thisClassName() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
- thisFormatVersion() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$
-
- thisFormatVersion() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV1_0$
-
- thisFormatVersion() - Method in class org.apache.spark.mllib.classification.NaiveBayesModel.SaveLoadV2_0$
-
- thisFormatVersion() - Method in class org.apache.spark.mllib.regression.impl.GLMRegressionModel.SaveLoadV1_0$
-
- thisFormatVersion() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$
-
- ThreadUtils - Class in org.apache.spark.util
-
- ThreadUtils() - Constructor for class org.apache.spark.util.ThreadUtils
-
- threshold() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- threshold() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- threshold() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- threshold() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- threshold() - Method in class org.apache.spark.ml.feature.Binarizer
-
Param for threshold used to binarize continuous features.
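A sketch of Binarizer using this param (column names and the input DataFrame df are assumptions):

    import org.apache.spark.ml.feature.Binarizer

    val binarizer = new Binarizer()
      .setInputCol("score")
      .setOutputCol("flag")
      .setThreshold(0.5)      // values greater than 0.5 map to 1.0, the rest to 0.0
    val binarized = binarizer.transform(df)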
- threshold() - Method in class org.apache.spark.ml.tree.ContinuousSplit
-
- threshold() - Method in class org.apache.spark.mllib.classification.impl.GLMClassificationModel.SaveLoadV1_0$.Data
-
- threshold() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- threshold() - Method in class org.apache.spark.mllib.tree.model.Split
-
- thresholds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- thresholds() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- thresholds() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- thresholds() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- thresholds() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- thresholds() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- thresholds() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- thresholds() - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- thresholds() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- thresholds() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- thresholds() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- thresholds() - Method in class org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
-
Returns thresholds in descending order.
- throwBalls(int, RDD<?>, double, DefaultPartitionCoalescer.PartitionLocations) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerApplicationEnd
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerApplicationStart
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerAdded
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerBlockManagerRemoved
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorAdded
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorBlacklisted
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorRemoved
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerExecutorUnblacklisted
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerJobEnd
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerJobStart
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeBlacklisted
-
- time() - Method in class org.apache.spark.scheduler.SparkListenerNodeUnblacklisted
-
- time(Function0<T>) - Method in class org.apache.spark.sql.SparkSession
-
Executes some code block and prints to stdout the time taken to execute the block.
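A sketch of time (spark is an assumed SparkSession); the block's result is returned unchanged:

    val n = spark.time {
      spark.range(0, 100000000L).count()   // "Time taken: ... ms" is printed to stdout
    }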
- time() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
Time when the exception occurred
- time() - Method in class org.apache.spark.streaming.scheduler.StreamingListenerStreamingStarted
-
- Time - Class in org.apache.spark.streaming
-
This is a simple class that represents an absolute instant of time.
- Time(long) - Constructor for class org.apache.spark.streaming.Time
-
- timeFromString(String, TimeUnit) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- timeIt(int, Function0<BoxedUnit>, Option<Function0<BoxedUnit>>) - Static method in class org.apache.spark.util.Utils
-
Timing method based on iterations that permit JVM JIT optimization.
- timeout(Duration) - Method in class org.apache.spark.streaming.StateSpec
-
Set the duration after which the state of an idle key will be removed.
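A sketch of a mapWithState pipeline with a 30-minute idle timeout (pairs, a DStream[(String, Int)], is assumed to exist):

    import org.apache.spark.streaming.{Minutes, State, StateSpec}

    // Running count per key; idle keys are dropped after 30 minutes.
    def updateCount(key: String, value: Option[Int], state: State[Int]): (String, Int) = {
      val sum = value.getOrElse(0) + state.getOption.getOrElse(0)
      if (!state.isTimingOut()) state.update(sum)   // updating a timing-out state would throw
      (key, sum)
    }

    val spec = StateSpec.function(updateCount _).timeout(Minutes(30))
    val counts = pairs.mapWithState(spec)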
- times(int) - Method in class org.apache.spark.streaming.Duration
-
- times(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Repeats a task a given number of times, for its side effects.
- timestamp() - Method in class org.apache.spark.sql.ColumnName
-
Creates a new StructField of type timestamp.
- TIMESTAMP() - Static method in class org.apache.spark.sql.Encoders
-
An encoder for nullable timestamp type.
- timestamp() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- TimestampType - Static variable in class org.apache.spark.sql.types.DataTypes
-
Gets the TimestampType object.
- TimestampType - Class in org.apache.spark.sql.types
-
The data type representing java.sql.Timestamp values.
- timeStringAsMs(String) - Static method in class org.apache.spark.util.Utils
-
Convert a time parameter such as (50s, 100ms, or 250us) to milliseconds for internal use.
- timeStringAsSeconds(String) - Static method in class org.apache.spark.util.Utils
-
Convert a time parameter such as (50s, 100ms, or 250us) to seconds for internal use.
- timeToString(long, TimeUnit) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- TimeTrackingOutputStream - Class in org.apache.spark.storage
-
Intercepts write calls and tracks total time spent writing in order to update shuffle write
metrics.
- TimeTrackingOutputStream(ShuffleWriteMetrics, OutputStream) - Constructor for class org.apache.spark.storage.TimeTrackingOutputStream
-
- timeUnit() - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
- TIMING_DATA() - Static method in class org.apache.spark.api.r.SpecialLengths
-
- to(CanBuildFrom<Nothing$, A, Col>) - Static method in class org.apache.spark.sql.types.StructType
-
- to(Time, Duration) - Method in class org.apache.spark.streaming.Time
-
- to_date(Column) - Static method in class org.apache.spark.sql.functions
-
Converts the column into DateType.
- to_date(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts the column into a DateType with a specified format
(see [http://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html]),
returning null on failure.
- to_json(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Scala-specific) Converts a column containing a StructType or ArrayType of StructTypes into a JSON string with the specified schema.
- to_json(Column, Map<String, String>) - Static method in class org.apache.spark.sql.functions
-
(Java-specific) Converts a column containing a StructType or ArrayType of StructTypes into a JSON string with the specified schema.
- to_json(Column) - Static method in class org.apache.spark.sql.functions
-
Converts a column containing a StructType or ArrayType of StructTypes into a JSON string with the specified schema.
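A sketch of to_json over a struct of columns (spark assumed; toDF on a Seq needs spark.implicits._):

    import org.apache.spark.sql.functions.{col, struct, to_json}
    import spark.implicits._

    val df = Seq(("alice", 30)).toDF("name", "age")
    df.select(to_json(struct(col("name"), col("age"))).as("json")).show(false)
    // {"name":"alice","age":30}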
- to_timestamp(Column) - Static method in class org.apache.spark.sql.functions
-
Convert time string to a Unix timestamp (in seconds).
- to_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Converts a time string with the specified format
(see [http://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html])
to a Unix timestamp (in seconds), returning null on failure.
- to_utc_timestamp(Column, String) - Static method in class org.apache.spark.sql.functions
-
Given a timestamp, which corresponds to a certain time of day in the given timezone, returns
another timestamp that corresponds to the same time of day in UTC.
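A sketch exercising the three conversions side by side (spark assumed; the timezone "PST" is illustrative):

    import org.apache.spark.sql.functions.{col, to_date, to_timestamp, to_utc_timestamp}
    import spark.implicits._

    val df = Seq("2017-06-01 09:30:00").toDF("t")
    df.select(
      to_date(col("t")),                              // DateType: 2017-06-01
      to_timestamp(col("t"), "yyyy-MM-dd HH:mm:ss"),  // TimestampType, null if parsing fails
      to_utc_timestamp(col("t"), "PST")               // same wall-clock time re-expressed in UTC
    ).show(false)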
- toArray() - Method in class org.apache.spark.input.PortableDataStream
-
Read the file as a byte array
- toArray() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toArray() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- toArray() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts to a dense array in column major.
- toArray() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toArray() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- toArray() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts the instance to a double array.
- toArray() - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- toArray() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- toArray() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Converts to a dense array in column major.
- toArray() - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- toArray() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- toArray() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the instance to a double array.
- toArray(ClassTag<B>) - Static method in class org.apache.spark.sql.types.StructType
-
- toBigDecimal() - Method in class org.apache.spark.sql.types.Decimal
-
- toBlockMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to BlockMatrix.
- toBlockMatrix(int, int) - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to BlockMatrix.
- toBlockMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Converts to BlockMatrix.
- toBlockMatrix(int, int) - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Converts to BlockMatrix.
- toBoolean(String, String) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- toBreeze() - Method in interface org.apache.spark.mllib.linalg.distributed.DistributedMatrix
-
Collects data and assembles a local dense breeze matrix (for test only).
- toBuffer() - Static method in class org.apache.spark.sql.types.StructType
-
- toByte() - Method in class org.apache.spark.sql.types.Decimal
-
- toByteArray() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
- toByteBuffer() - Method in class org.apache.spark.storage.EncryptedBlockData
-
- toCatalystDecimal(HiveDecimalObjectInspector, Object) - Static method in class org.apache.spark.sql.hive.HiveShim
-
- toChunkedByteBuffer(Function1<Object, ByteBuffer>) - Method in class org.apache.spark.storage.EncryptedBlockData
-
- toColumn() - Method in class org.apache.spark.sql.expressions.Aggregator
-
Returns this Aggregator as a TypedColumn that can be used in Dataset.
- toCoordinateMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Converts to CoordinateMatrix.
- toCoordinateMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
- toCryptoConf(SparkConf) - Static method in class org.apache.spark.security.CryptoStreamUtils
-
- toDebugString() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- toDebugString() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- toDebugString() - Static method in class org.apache.spark.api.java.JavaRDD
-
- toDebugString() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
A description of this RDD and its recursive dependencies for debugging.
- toDebugString() - Static method in class org.apache.spark.api.r.RRDD
-
- toDebugString() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- toDebugString() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- toDebugString() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- toDebugString() - Static method in class org.apache.spark.graphx.VertexRDD
-
- toDebugString() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- toDebugString() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- toDebugString() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- toDebugString() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- toDebugString() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- toDebugString() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- toDebugString() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Print the full model to a string.
- toDebugString() - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- toDebugString() - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- toDebugString() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- toDebugString() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- toDebugString() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- toDebugString() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- toDebugString() - Method in class org.apache.spark.rdd.RDD
-
A description of this RDD and its recursive dependencies for debugging.
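A sketch of inspecting lineage with toDebugString (sc assumed):

    val rdd = sc.parallelize(1 to 100).map(_ * 2).filter(_ % 4 == 0)
    println(rdd.toDebugString)   // one line per RDD in the lineage, indented by dependency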
- toDebugString() - Static method in class org.apache.spark.rdd.UnionRDD
-
- toDebugString() - Method in class org.apache.spark.SparkConf
-
Return a string listing all keys and values, one per line.
- toDebugString() - Method in class org.apache.spark.sql.types.Decimal
-
- toDegrees(Column) - Static method in class org.apache.spark.sql.functions
-
- toDegrees(String) - Static method in class org.apache.spark.sql.functions
-
- toDense() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toDense() - Static method in class org.apache.spark.ml.linalg.DenseVector
-
- toDense() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix while maintaining the layout of the current matrix.
- toDense() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toDense() - Static method in class org.apache.spark.ml.linalg.SparseVector
-
- toDense() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts this vector to a dense vector.
- toDense() - Static method in class org.apache.spark.mllib.linalg.DenseVector
-
- toDense() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
Generate a DenseMatrix from the given SparseMatrix.
- toDense() - Static method in class org.apache.spark.mllib.linalg.SparseVector
-
- toDense() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts this vector to a dense vector.
- toDenseColMajor() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toDenseColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix in column major order.
- toDenseColMajor() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toDenseMatrix(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix.
- toDenseRowMajor() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toDenseRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a dense matrix in row major order.
- toDenseRowMajor() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toDF(String...) - Method in class org.apache.spark.sql.Dataset
-
Converts this strongly typed collection of data to generic DataFrame with columns renamed.
- toDF() - Method in class org.apache.spark.sql.Dataset
-
Converts this strongly typed collection of data to a generic DataFrame.
- toDF(Seq<String>) - Method in class org.apache.spark.sql.Dataset
-
Converts this strongly typed collection of data to generic DataFrame with columns renamed.
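A sketch of renaming columns via toDF (spark.implicits._ assumed in scope):

    val ds = Seq(("alice", 30), ("bob", 25)).toDS()
    val df = ds.toDF("name", "age")   // same rows; columns renamed from _1/_2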
- toDF() - Method in class org.apache.spark.sql.DatasetHolder
-
- toDF(Seq<String>) - Method in class org.apache.spark.sql.DatasetHolder
-
- toDouble() - Method in class org.apache.spark.sql.types.Decimal
-
- toDS() - Method in class org.apache.spark.sql.DatasetHolder
-
- toEdgeTriplet() - Method in class org.apache.spark.graphx.EdgeContext
-
Converts the edge and vertex properties into an EdgeTriplet for convenience.
- toErrorString() - Method in class org.apache.spark.ExceptionFailure
-
- toErrorString() - Method in class org.apache.spark.ExecutorLostFailure
-
- toErrorString() - Method in class org.apache.spark.FetchFailed
-
- toErrorString() - Static method in class org.apache.spark.Resubmitted
-
- toErrorString() - Method in class org.apache.spark.TaskCommitDenied
-
- toErrorString() - Method in interface org.apache.spark.TaskFailedReason
-
Error message displayed in the web UI.
- toErrorString() - Method in class org.apache.spark.TaskKilled
-
- toErrorString() - Static method in class org.apache.spark.TaskResultLost
-
- toErrorString() - Static method in class org.apache.spark.UnknownReason
-
- toFloat() - Method in class org.apache.spark.sql.types.Decimal
-
- toFormattedString() - Method in class org.apache.spark.streaming.Duration
-
- toIndexedRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Converts to IndexedRowMatrix.
- toIndexedRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to IndexedRowMatrix.
- toIndexedSeq() - Static method in class org.apache.spark.sql.types.StructType
-
- toInputStream() - Method in class org.apache.spark.storage.EncryptedBlockData
-
- toInspector(DataType) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- toInspector(Expression) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- toInspector(DataType) - Static method in class org.apache.spark.sql.hive.orc.OrcRelation
-
- toInspector(Expression) - Static method in class org.apache.spark.sql.hive.orc.OrcRelation
-
- toInt() - Method in class org.apache.spark.sql.types.Decimal
-
- toInt() - Method in class org.apache.spark.storage.StorageLevel
-
- toIterable() - Static method in class org.apache.spark.sql.types.StructType
-
- toIterator() - Static method in class org.apache.spark.sql.types.StructType
-
- toJavaBigDecimal() - Method in class org.apache.spark.sql.types.Decimal
-
- toJavaBigInteger() - Method in class org.apache.spark.sql.types.Decimal
-
- toJavaDStream() - Method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
Convert to a JavaDStream
- toJavaDStream() - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- toJavaDStream() - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- toJavaRDD() - Static method in class org.apache.spark.api.r.RRDD
-
- toJavaRDD() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- toJavaRDD() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- toJavaRDD() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- toJavaRDD() - Static method in class org.apache.spark.graphx.VertexRDD
-
- toJavaRDD() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- toJavaRDD() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- toJavaRDD() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- toJavaRDD() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- toJavaRDD() - Method in class org.apache.spark.rdd.RDD
-
- toJavaRDD() - Static method in class org.apache.spark.rdd.UnionRDD
-
- toJavaRDD() - Method in class org.apache.spark.sql.Dataset
-
Returns the content of the Dataset as a JavaRDD of Ts.
- toJson(Matrix) - Static method in class org.apache.spark.ml.linalg.JsonMatrixConverter
-
Converts the Matrix to a JSON string.
- toJson(Vector) - Static method in class org.apache.spark.ml.linalg.JsonVectorConverter
-
Converts the vector to a JSON string.
- toJson() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- toJson() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- toJson() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts the vector to a JSON string.
- toJSON() - Method in class org.apache.spark.sql.Dataset
-
Returns the content of the Dataset as a Dataset of JSON strings.
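A sketch of toJSON (spark.implicits._ assumed in scope):

    val ds = Seq(("alice", 30)).toDF("name", "age")
    ds.toJSON.show(false)   // {"name":"alice","age":30}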
- toJSON() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- toJSON() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- toJSON() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- Tokenizer - Class in org.apache.spark.ml.feature
-
A tokenizer that converts the input string to lowercase and then splits it by whitespace.
- Tokenizer(String) - Constructor for class org.apache.spark.ml.feature.Tokenizer
-
- Tokenizer() - Constructor for class org.apache.spark.ml.feature.Tokenizer
-
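A sketch wiring a Tokenizer into a DataFrame transform (the input sentences DataFrame and its column name are assumptions):

    import org.apache.spark.ml.feature.Tokenizer

    val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
    val tokens = tokenizer.transform(sentences)   // adds an array<string> "words" column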
- tol() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- tol() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- tol() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- tol() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- tol() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- tol() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- tol() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- tol() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- tol() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- tol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- tol() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- tol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- tol() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- tol() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- tol() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- toList() - Static method in class org.apache.spark.sql.types.StructType
-
- toLocal() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
Convert this distributed model to a local representation.
- toLocal() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Convert model to a local model.
- toLocalIterator() - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- toLocalIterator() - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- toLocalIterator() - Static method in class org.apache.spark.api.java.JavaRDD
-
- toLocalIterator() - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Return an iterator that contains all of the elements in this RDD.
- toLocalIterator() - Static method in class org.apache.spark.api.r.RRDD
-
- toLocalIterator() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- toLocalIterator() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- toLocalIterator() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- toLocalIterator() - Static method in class org.apache.spark.graphx.VertexRDD
-
- toLocalIterator() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- toLocalIterator() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- toLocalIterator() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- toLocalIterator() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- toLocalIterator() - Method in class org.apache.spark.rdd.RDD
-
Return an iterator that contains all of the elements in this RDD.
- toLocalIterator() - Static method in class org.apache.spark.rdd.UnionRDD
-
- toLocalIterator() - Method in class org.apache.spark.sql.Dataset
-
Return an iterator that contains all rows in this Dataset.
- toLocalMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Collect the distributed matrix on the driver as a DenseMatrix.
- toLong() - Method in class org.apache.spark.sql.types.Decimal
-
- toLowercase() - Method in class org.apache.spark.ml.feature.RegexTokenizer
-
Indicates whether to convert all characters to lowercase before tokenizing.
- toMap(Predef.$less$colon$less<A, Tuple2<T, U>>) - Static method in class org.apache.spark.sql.types.StructType
-
- toMetadata(Metadata) - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to ML metadata with some existing metadata.
- toMetadata() - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to ML metadata
- toMetadata(Metadata) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to ML metadata with some existing metadata.
- toMetadata() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to ML metadata
- toMetadata(Metadata) - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- toMetadata() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- toMetadata(Metadata) - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
- toMetadata() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
- toMetadata(Metadata) - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
- toMetadata() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
- toMetadata(Metadata) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- toMetadata() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- toNetty() - Method in class org.apache.spark.storage.EncryptedBlockData
-
- toNumber(String, Function1<String, T>, String, String) - Static method in class org.apache.spark.internal.config.ConfigHelpers
-
- toOld() - Method in interface org.apache.spark.ml.tree.Split
-
Convert to old Split format
- tooltip(String, String) - Static method in class org.apache.spark.ui.UIUtils
-
- ToolTips - Class in org.apache.spark.ui
-
- ToolTips() - Constructor for class org.apache.spark.ui.ToolTips
-
- top(int, Comparator<T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- top(int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- top(int, Comparator<T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- top(int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- top(int, Comparator<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- top(int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- top(int, Comparator<T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the top k (largest) elements from this RDD as defined by
the specified Comparator[T] and maintains the order.
- top(int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Returns the top k (largest) elements from this RDD using the
natural ordering for T and maintains the order.
- top(int, Ordering<T>) - Static method in class org.apache.spark.api.r.RRDD
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- top(int, Ordering<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- top(int, Ordering<T>) - Method in class org.apache.spark.rdd.RDD
-
Returns the top k (largest) elements from this RDD as defined by the specified
implicit Ordering[T] and maintains the ordering.
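A sketch of top (sc assumed):

    val rdd = sc.parallelize(Seq(10, 4, 2, 12, 3))
    val largestTwo = rdd.top(2)   // Array(12, 10): largest elements, descending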
- top(int, Ordering<T>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- toPairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>) - Static method in class org.apache.spark.streaming.dstream.DStream
-
- topByKey(int, Ordering<V>) - Method in class org.apache.spark.mllib.rdd.MLPairRDDFunctions
-
Returns the top k (largest) elements for each key from this RDD as defined by the specified
implicit Ordering[T].
- topDocumentsPerTopic(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Return the top documents for each topic
- topic() - Method in class org.apache.spark.streaming.kafka.OffsetRange
-
- topicAndPartition() - Method in class org.apache.spark.streaming.kafka.OffsetRange
-
Kafka TopicAndPartition object, for convenience
- topicAssignments() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Return the top topic for each (doc, term) pair.
- topicConcentration() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- topicConcentration() - Static method in class org.apache.spark.ml.clustering.LDA
-
- topicConcentration() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- topicConcentration() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
- topicConcentration() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Concentration parameter (commonly named "beta" or "eta") for the prior placed on topics'
distributions over terms.
- topicConcentration() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- topicDistribution(Vector) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Predicts the topic mixture distribution for a document (often called "theta" in the
literature).
- topicDistributionCol() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- topicDistributionCol() - Static method in class org.apache.spark.ml.clustering.LDA
-
- topicDistributionCol() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- topicDistributions() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
For each document in the training set, return the distribution over topics for that document
("theta_doc").
- topicDistributions(RDD<Tuple2<Object, Vector>>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Predicts the topic mixture distribution for each document (often called "theta" in the
literature).
- topicDistributions(JavaPairRDD<Long, Vector>) - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
Java-friendly version of topicDistributions
- topics() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- topicsMatrix() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- topicsMatrix() - Method in class org.apache.spark.ml.clustering.LDAModel
-
Inferred topics, where each topic is represented by a distribution over terms.
- topicsMatrix() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- topicsMatrix() - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
Inferred topics, where each topic is represented by a distribution over terms.
- topicsMatrix() - Method in class org.apache.spark.mllib.clustering.LDAModel
-
Inferred topics, where each topic is represented by a distribution over terms.
- topicsMatrix() - Method in class org.apache.spark.mllib.clustering.LocalLDAModel
-
- topK(Iterator<Tuple2<String, Object>>, int) - Static method in class org.apache.spark.streaming.util.RawTextHelper
-
Gets the top k words in terms of word counts.
- toPMML(String) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- toPMML(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- toPMML(OutputStream) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- toPMML() - Static method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- toPMML(String) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- toPMML(SparkContext, String) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- toPMML(OutputStream) - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- toPMML() - Static method in class org.apache.spark.mllib.classification.SVMModel
-
- toPMML(String) - Static method in class org.apache.spark.mllib.clustering.KMeansModel
-
- toPMML(SparkContext, String) - Static method in class org.apache.spark.mllib.clustering.KMeansModel
-
- toPMML(OutputStream) - Static method in class org.apache.spark.mllib.clustering.KMeansModel
-
- toPMML() - Static method in class org.apache.spark.mllib.clustering.KMeansModel
-
- toPMML(StreamResult) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to the stream result in PMML format
- toPMML(String) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to a local file in PMML format
- toPMML(SparkContext, String) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to a directory on a distributed file system in PMML format
- toPMML(OutputStream) - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to the OutputStream in PMML format
- toPMML() - Method in interface org.apache.spark.mllib.pmml.PMMLExportable
-
Export the model to a String in PMML format
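A sketch exporting a model that mixes in PMMLExportable, here a KMeansModel (points, an RDD[Vector], is assumed; the output path is a placeholder):

    import org.apache.spark.mllib.clustering.KMeans

    val model = KMeans.train(points, k = 3, maxIterations = 20)
    println(model.toPMML())            // PMML document as a String
    model.toPMML("/tmp/kmeans.pmml")   // write PMML to a local file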
- toPMML(String) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- toPMML(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- toPMML(OutputStream) - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- toPMML() - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- toPMML(String) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- toPMML(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- toPMML(OutputStream) - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- toPMML() - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- toPMML(String) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- toPMML(SparkContext, String) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- toPMML(OutputStream) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- toPMML() - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- topNode() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
- topologyFile() - Method in class org.apache.spark.storage.FileBasedTopologyMapper
-
- topologyInfo() - Method in class org.apache.spark.storage.BlockManagerId
-
- topologyMap() - Method in class org.apache.spark.storage.FileBasedTopologyMapper
-
- TopologyMapper - Class in org.apache.spark.storage
-
:: DeveloperApi ::
TopologyMapper provides topology information for a given host.
param: conf SparkConf to get required properties, if needed
- TopologyMapper(SparkConf) - Constructor for class org.apache.spark.storage.TopologyMapper
-
- toPredict() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.PredictData
-
- topTopicsPerDocument(int) - Method in class org.apache.spark.mllib.clustering.DistributedLDAModel
-
For each document, return the top k weighted topics for that document and their weights.
- toRadians(Column) - Static method in class org.apache.spark.sql.functions
-
- toRadians(String) - Static method in class org.apache.spark.sql.functions
-
- toRDD(JavaDoubleRDD) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- toRDD(JavaPairRDD<K, V>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- toRDD(JavaRDD<T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- toRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Converts to RowMatrix, dropping row indices after grouping by row index.
- toRowMatrix() - Method in class org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix
-
Drops row indices and converts this matrix to a RowMatrix.
- toScalaBigInt() - Method in class org.apache.spark.sql.types.Decimal
-
- toSeq() - Method in class org.apache.spark.ml.param.ParamMap
-
Converts this param map to a sequence of param pairs.
- toSeq() - Method in interface org.apache.spark.sql.Row
-
Return a Scala Seq representing the row.
- toSeq() - Static method in class org.apache.spark.sql.types.StructType
-
- toSet() - Static method in class org.apache.spark.sql.types.StructType
-
- toShort() - Method in class org.apache.spark.sql.types.Decimal
-
- toSparkContext(JavaSparkContext) - Static method in class org.apache.spark.api.java.JavaSparkContext
-
- toSparse() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toSparse() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- toSparse() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix while maintaining the layout of the current matrix.
- toSparse() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toSparse() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- toSparse() - Method in interface org.apache.spark.ml.linalg.Vector
-
Converts this vector to a sparse vector with all explicit zeros removed.
- toSparse() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
Generate a SparseMatrix from the given DenseMatrix.
- toSparse() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- toSparse() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- toSparse() - Method in interface org.apache.spark.mllib.linalg.Vector
-
Converts this vector to a sparse vector with all explicit zeros removed.
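A sketch of the sparse/dense round trip on mllib vectors:

    import org.apache.spark.mllib.linalg.Vectors

    val dense  = Vectors.dense(0.0, 3.0, 0.0, 7.0)
    val sparse = dense.toSparse   // (4,[1,3],[3.0,7.0]) -- explicit zeros removed
    val round  = sparse.toDense   // [0.0,3.0,0.0,7.0]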
- toSparseColMajor() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toSparseColMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix in column major order.
- toSparseColMajor() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toSparseMatrix(boolean) - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix.
- toSparseRowMajor() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toSparseRowMajor() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Converts this matrix to a sparse matrix in row major order.
- toSparseRowMajor() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toSplit() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.SplitData
-
- toSplitInfo(Class<?>, String, InputSplit) - Static method in class org.apache.spark.scheduler.SplitInfo
-
- toSplitInfo(Class<?>, String, InputSplit) - Static method in class org.apache.spark.scheduler.SplitInfo
-
- toStream() - Static method in class org.apache.spark.sql.types.StructType
-
- toString() - Method in class org.apache.spark.Accumulable
-
Deprecated.
- toString() - Static method in class org.apache.spark.Accumulator
-
Deprecated.
- toString() - Method in class org.apache.spark.api.java.JavaRDD
-
- toString() - Method in class org.apache.spark.api.java.Optional
-
- toString() - Static method in class org.apache.spark.api.r.RRDD
-
- toString() - Method in class org.apache.spark.broadcast.Broadcast
-
- toString() - Method in class org.apache.spark.graphx.EdgeDirection
-
- toString() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- toString() - Method in class org.apache.spark.graphx.EdgeTriplet
-
- toString() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- toString() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- toString() - Static method in class org.apache.spark.graphx.VertexRDD
-
- toString() - Method in class org.apache.spark.io.LZ4BlockInputStream
-
- toString() - Method in class org.apache.spark.ml.attribute.Attribute
-
- toString() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
- toString() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- toString() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
- toString() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
- toString() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- toString() - Method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- toString() - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- toString() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- toString() - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- toString() - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- toString() - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- toString() - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- toString() - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- toString() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- toString() - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- toString() - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- toString() - Method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- toString() - Static method in class org.apache.spark.ml.classification.OneVsRest
-
- toString() - Static method in class org.apache.spark.ml.classification.OneVsRestModel
-
- toString() - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- toString() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- toString() - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- toString() - Static method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- toString() - Static method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- toString() - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- toString() - Static method in class org.apache.spark.ml.clustering.GaussianMixture
-
- toString() - Static method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- toString() - Static method in class org.apache.spark.ml.clustering.KMeans
-
- toString() - Static method in class org.apache.spark.ml.clustering.KMeansModel
-
- toString() - Static method in class org.apache.spark.ml.clustering.LDA
-
- toString() - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- toString() - Static method in class org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
-
- toString() - Static method in class org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
-
- toString() - Static method in class org.apache.spark.ml.evaluation.RegressionEvaluator
-
- toString() - Static method in class org.apache.spark.ml.feature.Binarizer
-
- toString() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- toString() - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- toString() - Static method in class org.apache.spark.ml.feature.Bucketizer
-
- toString() - Static method in class org.apache.spark.ml.feature.ChiSqSelector
-
- toString() - Static method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- toString() - Static method in class org.apache.spark.ml.feature.ColumnPruner
-
- toString() - Static method in class org.apache.spark.ml.feature.CountVectorizer
-
- toString() - Static method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.DCT
-
- toString() - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- toString() - Static method in class org.apache.spark.ml.feature.HashingTF
-
- toString() - Static method in class org.apache.spark.ml.feature.IDF
-
- toString() - Static method in class org.apache.spark.ml.feature.IDFModel
-
- toString() - Static method in class org.apache.spark.ml.feature.Imputer
-
- toString() - Static method in class org.apache.spark.ml.feature.ImputerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.IndexToString
-
- toString() - Static method in class org.apache.spark.ml.feature.Interaction
-
- toString() - Method in class org.apache.spark.ml.feature.LabeledPoint
-
- toString() - Static method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- toString() - Static method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.MinHashLSH
-
- toString() - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- toString() - Static method in class org.apache.spark.ml.feature.MinMaxScaler
-
- toString() - Static method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.NGram
-
- toString() - Static method in class org.apache.spark.ml.feature.Normalizer
-
- toString() - Static method in class org.apache.spark.ml.feature.OneHotEncoder
-
- toString() - Static method in class org.apache.spark.ml.feature.PCA
-
- toString() - Static method in class org.apache.spark.ml.feature.PCAModel
-
- toString() - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- toString() - Static method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- toString() - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- toString() - Method in class org.apache.spark.ml.feature.RFormula
-
- toString() - Method in class org.apache.spark.ml.feature.RFormulaModel
-
- toString() - Static method in class org.apache.spark.ml.feature.SQLTransformer
-
- toString() - Static method in class org.apache.spark.ml.feature.StandardScaler
-
- toString() - Static method in class org.apache.spark.ml.feature.StandardScalerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.StopWordsRemover
-
- toString() - Static method in class org.apache.spark.ml.feature.StringIndexer
-
- toString() - Static method in class org.apache.spark.ml.feature.StringIndexerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- toString() - Static method in class org.apache.spark.ml.feature.VectorAssembler
-
- toString() - Static method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- toString() - Static method in class org.apache.spark.ml.feature.VectorIndexer
-
- toString() - Static method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- toString() - Static method in class org.apache.spark.ml.feature.VectorSlicer
-
- toString() - Static method in class org.apache.spark.ml.feature.Word2Vec
-
- toString() - Static method in class org.apache.spark.ml.feature.Word2VecModel
-
- toString() - Static method in class org.apache.spark.ml.fpm.FPGrowth
-
- toString() - Static method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- toString() - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toString(int, int) - Static method in class org.apache.spark.ml.linalg.DenseMatrix
-
- toString() - Method in class org.apache.spark.ml.linalg.DenseVector
-
- toString() - Method in interface org.apache.spark.ml.linalg.Matrix
-
A human-readable representation of the matrix.
- toString(int, int) - Method in interface org.apache.spark.ml.linalg.Matrix
-
A human-readable representation of the matrix, capped at the given maximum number of lines and line width.
- toString() - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toString(int, int) - Static method in class org.apache.spark.ml.linalg.SparseMatrix
-
- toString() - Method in class org.apache.spark.ml.linalg.SparseVector
-
- toString() - Static method in class org.apache.spark.ml.param.DoubleParam
-
- toString() - Static method in class org.apache.spark.ml.param.FloatParam
-
- toString() - Method in class org.apache.spark.ml.param.Param
-
- toString() - Method in class org.apache.spark.ml.param.ParamMap
-
- toString() - Static method in class org.apache.spark.ml.Pipeline
-
- toString() - Static method in class org.apache.spark.ml.PipelineModel
-
- toString() - Static method in class org.apache.spark.ml.recommendation.ALS
-
- toString() - Static method in class org.apache.spark.ml.recommendation.ALSModel
-
- toString() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- toString() - Static method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- toString() - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- toString() - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- toString() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- toString() - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- toString() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- toString() - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- toString() - Static method in class org.apache.spark.ml.regression.IsotonicRegression
-
- toString() - Static method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- toString() - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- toString() - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- toString() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- toString() - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- toString() - Method in class org.apache.spark.ml.tree.InternalNode
-
- toString() - Method in class org.apache.spark.ml.tree.LeafNode
-
- toString() - Static method in class org.apache.spark.ml.tuning.CrossValidator
-
- toString() - Static method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- toString() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- toString() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- toString() - Method in interface org.apache.spark.ml.util.Identifiable
-
- toString() - Method in class org.apache.spark.mllib.classification.LogisticRegressionModel
-
- toString() - Method in class org.apache.spark.mllib.classification.SVMModel
-
- toString() - Method in class org.apache.spark.mllib.fpm.AssociationRules.Rule
-
- toString() - Method in class org.apache.spark.mllib.fpm.FPGrowth.FreqItemset
-
- toString() - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- toString(int, int) - Static method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- toString() - Method in class org.apache.spark.mllib.linalg.DenseVector
-
- toString() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
A human-readable representation of the matrix.
- toString(int, int) - Method in interface org.apache.spark.mllib.linalg.Matrix
-
A human-readable representation of the matrix, capped at the given maximum number of lines and line width.
- toString() - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- toString(int, int) - Static method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- toString() - Method in class org.apache.spark.mllib.linalg.SparseVector
-
- toString() - Method in class org.apache.spark.mllib.regression.GeneralizedLinearModel
-
Print a summary of the model.
- toString() - Method in class org.apache.spark.mllib.regression.LabeledPoint
-
- toString() - Static method in class org.apache.spark.mllib.regression.LassoModel
-
- toString() - Static method in class org.apache.spark.mllib.regression.LinearRegressionModel
-
- toString() - Static method in class org.apache.spark.mllib.regression.RidgeRegressionModel
-
- toString() - Method in class org.apache.spark.mllib.stat.test.BinarySample
-
- toString() - Method in class org.apache.spark.mllib.stat.test.ChiSqTestResult
-
- toString() - Method in class org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult
-
- toString() - Method in interface org.apache.spark.mllib.stat.test.TestResult
-
String explaining the hypothesis test result.
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.Algo
-
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.EnsembleCombiningStrategy
-
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.FeatureType
-
- toString() - Static method in class org.apache.spark.mllib.tree.configuration.QuantileStrategy
-
- toString() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel
-
Print a summary of the model.
- toString() - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- toString() - Method in class org.apache.spark.mllib.tree.model.InformationGainStats
-
- toString() - Method in class org.apache.spark.mllib.tree.model.Node
-
- toString() - Method in class org.apache.spark.mllib.tree.model.Predict
-
- toString() - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- toString() - Method in class org.apache.spark.mllib.tree.model.Split
-
- toString() - Method in class org.apache.spark.partial.BoundedDouble
-
- toString() - Method in class org.apache.spark.partial.PartialResult
-
- toString() - Static method in class org.apache.spark.rdd.CheckpointState
-
- toString() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- toString() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- toString() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- toString() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- toString() - Method in class org.apache.spark.rdd.RDD
-
- toString() - Static method in class org.apache.spark.rdd.UnionRDD
-
- toString() - Static method in class org.apache.spark.scheduler.ExecutorKilled
-
- toString() - Method in class org.apache.spark.scheduler.InputFormatInfo
-
- toString() - Static method in class org.apache.spark.scheduler.LossReasonPending
-
- toString() - Static method in class org.apache.spark.scheduler.SchedulingMode
-
- toString() - Method in class org.apache.spark.scheduler.SplitInfo
-
- toString() - Static method in class org.apache.spark.scheduler.TaskLocality
-
- toString() - Method in class org.apache.spark.SerializableWritable
-
- toString() - Static method in exception org.apache.spark.sql.AnalysisException
-
- toString() - Method in class org.apache.spark.sql.catalog.Column
-
- toString() - Method in class org.apache.spark.sql.catalog.Database
-
- toString() - Method in class org.apache.spark.sql.catalog.Function
-
- toString() - Method in class org.apache.spark.sql.catalog.Table
-
- toString() - Method in class org.apache.spark.sql.Column
-
- toString() - Method in class org.apache.spark.sql.Dataset
-
- toString() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- toString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- toString() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- toString() - Method in class org.apache.spark.sql.hive.orc.OrcFileFormat
-
- toString() - Method in interface org.apache.spark.sql.Row
-
- toString() - Method in class org.apache.spark.sql.sources.In
-
- toString() - Method in class org.apache.spark.sql.streaming.SinkProgress
-
- toString() - Method in class org.apache.spark.sql.streaming.SourceProgress
-
- toString() - Method in class org.apache.spark.sql.streaming.StateOperatorProgress
-
- toString() - Method in exception org.apache.spark.sql.streaming.StreamingQueryException
-
- toString() - Method in class org.apache.spark.sql.streaming.StreamingQueryProgress
-
- toString() - Method in class org.apache.spark.sql.streaming.StreamingQueryStatus
-
- toString() - Method in class org.apache.spark.sql.types.Decimal
-
- toString() - Method in class org.apache.spark.sql.types.DecimalType
-
- toString() - Method in class org.apache.spark.sql.types.Metadata
-
- toString() - Method in class org.apache.spark.sql.types.StructField
-
- toString() - Static method in class org.apache.spark.sql.types.StructType
-
- toString() - Method in class org.apache.spark.storage.BlockId
-
- toString() - Method in class org.apache.spark.storage.BlockManagerId
-
- toString() - Static method in class org.apache.spark.storage.BroadcastBlockId
-
- toString() - Static method in class org.apache.spark.storage.RDDBlockId
-
- toString() - Method in class org.apache.spark.storage.RDDInfo
-
- toString() - Static method in class org.apache.spark.storage.ShuffleBlockId
-
- toString() - Static method in class org.apache.spark.storage.ShuffleDataBlockId
-
- toString() - Static method in class org.apache.spark.storage.ShuffleIndexBlockId
-
- toString() - Method in class org.apache.spark.storage.StorageLevel
-
- toString() - Static method in class org.apache.spark.storage.StreamBlockId
-
- toString() - Static method in class org.apache.spark.storage.TaskResultBlockId
-
- toString() - Method in class org.apache.spark.streaming.Duration
-
- toString() - Method in class org.apache.spark.streaming.kafka.Broker
-
- toString() - Method in class org.apache.spark.streaming.kafka.OffsetRange
-
- toString() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
-
- toString() - Method in class org.apache.spark.streaming.State
-
- toString() - Method in class org.apache.spark.streaming.Time
-
- toString() - Static method in class org.apache.spark.TaskState
-
- toString() - Method in class org.apache.spark.util.AccumulatorV2
-
- toString() - Method in class org.apache.spark.util.MutablePair
-
- toString() - Method in class org.apache.spark.util.StatCounter
-
- toStructField(Metadata) - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to a StructField with some existing metadata.
- toStructField() - Method in class org.apache.spark.ml.attribute.Attribute
-
Converts to a StructField.
- toStructField(Metadata) - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to a StructField with some existing metadata.
- toStructField() - Method in class org.apache.spark.ml.attribute.AttributeGroup
-
Converts to a StructField.
- toStructField(Metadata) - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- toStructField() - Static method in class org.apache.spark.ml.attribute.BinaryAttribute
-
- toStructField(Metadata) - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
- toStructField() - Static method in class org.apache.spark.ml.attribute.NominalAttribute
-
- toStructField(Metadata) - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
- toStructField() - Static method in class org.apache.spark.ml.attribute.NumericAttribute
-
- toStructField(Metadata) - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- toStructField() - Static method in class org.apache.spark.ml.attribute.UnresolvedAttribute
-
- totalBlocksFetched() - Method in class org.apache.spark.status.api.v1.ShuffleReadMetricDistributions
-
- totalBlocksFetched() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- totalBytesRead() - Method in class org.apache.spark.ui.jobs.UIData.ShuffleReadMetricsUIData
-
- totalCores() - Method in class org.apache.spark.scheduler.cluster.ExecutorInfo
-
- totalCores() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- totalCount() - Method in class org.apache.spark.util.sketch.CountMinSketch
-
- totalDelay() - Method in class org.apache.spark.status.api.v1.streaming.BatchInfo
-
- totalDelay() - Method in class org.apache.spark.streaming.scheduler.BatchInfo
-
Time taken for all the jobs of this batch to finish processing from the time they
were submitted.
- totalDuration() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- totalGCTime() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- totalInputBytes() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- totalIterations() - Method in interface org.apache.spark.ml.classification.LogisticRegressionTrainingSummary
-
Number of training iterations until termination.
- totalIterations() - Method in class org.apache.spark.ml.regression.LinearRegressionTrainingSummary
-
Number of training iterations until termination.
- totalNumNodes() - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- totalNumNodes() - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- totalNumNodes() - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- totalNumNodes() - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- totalNumNodes() - Static method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- totalNumNodes() - Static method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- totalOffHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics
-
- totalOnHeapStorageMemory() - Method in class org.apache.spark.status.api.v1.MemoryMetrics
-
- totalShuffleRead() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- totalShuffleWrite() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- totalTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
-
- toTraversable() - Static method in class org.apache.spark.sql.types.StructType
-
- toTuple() - Method in class org.apache.spark.graphx.EdgeTriplet
-
- toUnscaledLong() - Method in class org.apache.spark.sql.types.Decimal
-
- toVector() - Static method in class org.apache.spark.sql.types.StructType
-
- toVirtualHosts(Seq<String>) - Static method in class org.apache.spark.ui.JettyUtils
-
- train(RDD<ALS.Rating<ID>>, int, int, int, int, double, boolean, double, boolean, StorageLevel, StorageLevel, int, long, ClassTag<ID>, Ordering<ID>) - Static method in class org.apache.spark.ml.recommendation.ALS
-
:: DeveloperApi ::
Implementation of the ALS algorithm.
- train(RDD<LabeledPoint>, int, double, double, Vector) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
Train a logistic regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
Train a logistic regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
Train a logistic regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.classification.LogisticRegressionWithSGD
-
Train a logistic regression model given an RDD of (label, features) pairs.
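A minimal training sketch with toy data, assuming an existing SparkContext sc (newer code generally prefers org.apache.spark.ml.classification.LogisticRegression over the SGD variant):

    import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint

    val data = sc.parallelize(Seq(
      LabeledPoint(0.0, Vectors.dense(0.0, 1.0)),
      LabeledPoint(1.0, Vectors.dense(1.0, 0.0))))
    // Simplest overload: only the iteration count; step size and
    // mini-batch fraction fall back to their defaults.
    val model = LogisticRegressionWithSGD.train(data, 100)
    println(model.predict(Vectors.dense(1.0, 0.0)))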
- train(RDD<LabeledPoint>) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, double) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, double, String) - Static method in class org.apache.spark.mllib.classification.NaiveBayes
-
Trains a Naive Bayes model given an RDD of (label, features) pairs.
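A minimal sketch of the three-argument overload, with toy data and an assumed SparkContext sc:

    import org.apache.spark.mllib.classification.NaiveBayes
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint

    val training = sc.parallelize(Seq(
      LabeledPoint(0.0, Vectors.dense(1.0, 0.0)),
      LabeledPoint(1.0, Vectors.dense(0.0, 1.0))))
    // lambda is the additive-smoothing parameter; "multinomial" is the default model type.
    val model = NaiveBayes.train(training, lambda = 1.0, modelType = "multinomial")
    println(model.predict(Vectors.dense(0.0, 1.0)))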
- train(RDD<LabeledPoint>, int, double, double, double, Vector) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, double) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.classification.SVMWithSGD
-
Train an SVM model given an RDD of (label, features) pairs.
- train(RDD<Vector>, int, int, String, long) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
Trains a k-means model using the given set of parameters.
- train(RDD<Vector>, int, int, String) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
Trains a k-means model using the given set of parameters.
- train(RDD<Vector>, int, int, int, String, long) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
- train(RDD<Vector>, int, int, int, String) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
- train(RDD<Vector>, int, int) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
Trains a k-means model using specified parameters and the default values for unspecified.
- train(RDD<Vector>, int, int, int) - Static method in class org.apache.spark.mllib.clustering.KMeans
-
- train(RDD<Rating>, int, int, double, int, long) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
- train(RDD<Rating>, int, int, double, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
- train(RDD<Rating>, int, int, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
- train(RDD<Rating>, int, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of ratings by users for a subset of products.
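A minimal sketch of the (ratings, rank, iterations, lambda) overload, with made-up ratings and an assumed SparkContext sc:

    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    val ratings = sc.parallelize(Seq(
      Rating(1, 10, 4.0), Rating(1, 20, 1.0), Rating(2, 10, 5.0)))
    // rank = 5 latent factors, 10 iterations, regularization lambda = 0.01.
    val model = ALS.train(ratings, 5, 10, 0.01)
    println(model.predict(2, 20))  // predicted rating of product 20 by user 2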
- train(RDD<LabeledPoint>, int, double, double, double, Vector) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
Train a Lasso model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, double) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
Train a Lasso model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
Train a Lasso model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.regression.LassoWithSGD
-
Train a Lasso model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, Vector) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
Train a Linear Regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
Train a Linear Regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
Train a Linear Regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.regression.LinearRegressionWithSGD
-
Train a Linear Regression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, double, Vector) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
Train a RidgeRegression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double, double) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
Train a RidgeRegression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int, double, double) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
Train a RidgeRegression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, int) - Static method in class org.apache.spark.mllib.regression.RidgeRegressionWithSGD
-
Train a RidgeRegression model given an RDD of (label, features) pairs.
- train(RDD<LabeledPoint>, Strategy) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, Enumeration.Value, Impurity, int, int, int, Enumeration.Value, Map<Object, Object>) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model.
- train(RDD<LabeledPoint>, BoostingStrategy) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Method to train a gradient boosting model.
- train(JavaRDD<LabeledPoint>, BoostingStrategy) - Static method in class org.apache.spark.mllib.tree.GradientBoostedTrees
-
Java-friendly API for org.apache.spark.mllib.tree.GradientBoostedTrees.train
- trainClassifier(RDD<LabeledPoint>, int, Map<Object, Object>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model for binary or multiclass classification.
- trainClassifier(JavaRDD<LabeledPoint>, int, Map<Integer, Integer>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree.trainClassifier
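A minimal classification sketch with toy data and an assumed SparkContext sc:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint
    import org.apache.spark.mllib.tree.DecisionTree

    val data = sc.parallelize(Seq(
      LabeledPoint(0.0, Vectors.dense(0.0)),
      LabeledPoint(1.0, Vectors.dense(1.0))))
    // 2 classes, no categorical features, Gini impurity, depth <= 4, 32 bins.
    val model = DecisionTree.trainClassifier(data, numClasses = 2,
      categoricalFeaturesInfo = Map[Int, Int](), impurity = "gini",
      maxDepth = 4, maxBins = 32)
    println(model.predict(Vectors.dense(1.0)))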
- trainClassifier(RDD<LabeledPoint>, Strategy, int, String, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for binary or multiclass classification.
- trainClassifier(RDD<LabeledPoint>, int, Map<Object, Object>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for binary or multiclass classification.
- trainClassifier(JavaRDD<LabeledPoint>, int, Map<Integer, Integer>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Java-friendly API for org.apache.spark.mllib.tree.RandomForest.trainClassifier
- trainImplicit(RDD<Rating>, int, int, double, int, double, long) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' expressed by users for some products, in the form of (userID, productID, preference) pairs.
- trainImplicit(RDD<Rating>, int, int, double, int, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a
subset of products.
- trainImplicit(RDD<Rating>, int, int, double, double) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a
subset of products.
- trainImplicit(RDD<Rating>, int, int) - Static method in class org.apache.spark.mllib.recommendation.ALS
-
Train a matrix factorization model given an RDD of 'implicit preferences' of users for a
subset of products.
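The implicit variant adds a confidence parameter alpha on top of the explicit-feedback arguments. A minimal sketch, assuming an existing SparkContext sc and ratings that hold preference strengths rather than explicit scores:

    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    // (userID, productID, preference) triples of implicit feedback.
    val prefs = sc.parallelize(Seq(Rating(1, 10, 1.0), Rating(2, 10, 3.0)))
    // rank = 5, 10 iterations, lambda = 0.01, alpha = 1.0 (confidence scaling).
    val model = ALS.trainImplicit(prefs, 5, 10, 0.01, 1.0)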
- trainingLogLikelihood() - Method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
Log likelihood of the observed tokens in the training set,
given the current parameter estimates:
log P(docs | topics, topic distributions for docs, Dirichlet hyperparameters)
- trainOn(DStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Update the clustering model by training on batches of data from a DStream.
- trainOn(JavaDStream<Vector>) - Method in class org.apache.spark.mllib.clustering.StreamingKMeans
-
Java-friendly version of trainOn.
- trainOn(DStream<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Update the model by training on batches of data from a DStream.
- trainOn(JavaDStream<LabeledPoint>) - Method in class org.apache.spark.mllib.regression.StreamingLinearAlgorithm
-
Java-friendly version of trainOn.
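A minimal streaming sketch; trainingStream and testStream are assumed DStream[LabeledPoint]s built elsewhere (e.g. from a socket or file source):

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD

    val model = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.zeros(2))
    model.trainOn(trainingStream)                       // weights update per batch
    model.predictOn(testStream.map(_.features)).print() // predictions per batch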
- trainRatio() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- trainRatio() - Static method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- trainRegressor(RDD<LabeledPoint>, Map<Object, Object>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Method to train a decision tree model for regression.
- trainRegressor(JavaRDD<LabeledPoint>, Map<Integer, Integer>, String, int, int) - Static method in class org.apache.spark.mllib.tree.DecisionTree
-
Java-friendly API for org.apache.spark.mllib.tree.DecisionTree.trainRegressor
- trainRegressor(RDD<LabeledPoint>, Strategy, int, String, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for regression.
- trainRegressor(RDD<LabeledPoint>, Map<Object, Object>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Method to train a decision tree model for regression.
- trainRegressor(JavaRDD<LabeledPoint>, Map<Integer, Integer>, int, String, String, int, int, int) - Static method in class org.apache.spark.mllib.tree.RandomForest
-
Java-friendly API for org.apache.spark.mllib.tree.RandomForest.trainRegressor
- TrainValidationSplit - Class in org.apache.spark.ml.tuning
-
Validation for hyper-parameter tuning.
- TrainValidationSplit(String) - Constructor for class org.apache.spark.ml.tuning.TrainValidationSplit
-
- TrainValidationSplit() - Constructor for class org.apache.spark.ml.tuning.TrainValidationSplit
-
- TrainValidationSplitModel - Class in org.apache.spark.ml.tuning
-
Model from train validation split.
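A minimal tuning sketch; training is an assumed DataFrame with "features" and "label" columns:

    import org.apache.spark.ml.evaluation.RegressionEvaluator
    import org.apache.spark.ml.regression.LinearRegression
    import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

    val lr = new LinearRegression()
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .build()
    val tvs = new TrainValidationSplit()
      .setEstimator(lr)
      .setEvaluator(new RegressionEvaluator())
      .setEstimatorParamMaps(grid)
      .setTrainRatio(0.75)          // 75% train, 25% validation
    val model = tvs.fit(training)   // TrainValidationSplitModel wrapping the best model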
- transfered() - Method in class org.apache.spark.storage.ReadableChannelFileRegion
-
- transferTo(WritableByteChannel, long) - Method in class org.apache.spark.storage.ReadableChannelFileRegion
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.ClassificationModel
-
Transforms dataset by reading from featuresCol, and appending new columns as specified by parameters:
- predicted labels as predictionCol of type Double
- raw predictions (confidences) as rawPredictionCol of type Vector
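In usage terms (a hedged sketch; model is any fitted classification model and df an assumed DataFrame with a features column):

    val predictions = model.transform(df)
    predictions.select("prediction", "rawPrediction").show()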
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
Transforms dataset by reading from featuresCol, and appending new columns as specified by parameters:
- predicted labels as predictionCol of type Double
- raw predictions (confidences) as rawPredictionCol of type Vector
- probability of each class as probabilityCol of type Vector
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.KMeansModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.clustering.LDAModel
-
Transforms the input dataset.
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Binarizer
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ColumnPruner
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.DCT
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.DCT
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.DCT
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.DCT
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.HashingTF
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.IDFModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.ImputerModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.IndexToString
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Interaction
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.NGram
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.NGram
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.NGram
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.NGram
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.PCAModel
-
Transform a vector by computed Principal Components.
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.RFormulaModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.SQLTransformer
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorAssembler
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
Transform a sentence column to a vector column to represent the whole sentence.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
The transform method first generates the association rules according to the frequent itemsets.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.PipelineModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.PredictionModel
-
Transforms dataset by reading from featuresCol, calling predict, and storing the predictions as a new column predictionCol.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- transform(Dataset<?>, ParamMap) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- transform(Dataset<?>) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- transform(Dataset<?>, ParamPair<?>, ParamPair<?>...) - Method in class org.apache.spark.ml.Transformer
-
Transforms the dataset with optional parameters.
- transform(Dataset<?>, ParamPair<?>, Seq<ParamPair<?>>) - Method in class org.apache.spark.ml.Transformer
-
Transforms the dataset with optional parameters.
- transform(Dataset<?>, ParamMap) - Method in class org.apache.spark.ml.Transformer
-
Transforms the dataset with provided parameter map as additional parameters.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.Transformer
-
Transforms the input dataset.
- transform(Dataset<?>) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- transform(Dataset<?>) - Method in class org.apache.spark.ml.UnaryTransformer
-
- transform(Vector) - Method in class org.apache.spark.mllib.feature.ChiSqSelectorModel
-
Applies transformation on a vector.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.ElementwiseProduct
-
Applies the Hadamard product transformation.
- transform(Iterable<?>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document into a sparse term frequency vector.
- transform(Iterable<?>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document into a sparse term frequency vector (Java version).
- transform(RDD<D>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document to term frequency vectors.
- transform(JavaRDD<D>) - Method in class org.apache.spark.mllib.feature.HashingTF
-
Transforms the input document to term frequency vectors (Java version).
- transform(RDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDFModel
-
Transforms term frequency (TF) vectors to TF-IDF vectors.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.IDFModel
-
Transforms a term frequency (TF) vector to a TF-IDF vector.
- transform(JavaRDD<Vector>) - Method in class org.apache.spark.mllib.feature.IDFModel
-
Transforms term frequency (TF) vectors to TF-IDF vectors (Java version).
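The two classes compose into the usual TF-IDF pipeline. A minimal sketch; docs is an assumed RDD[Seq[String]] of tokenized documents:

    import org.apache.spark.mllib.feature.{HashingTF, IDF}

    val tf = new HashingTF(numFeatures = 1 << 18).transform(docs)
    tf.cache()                           // IDF needs two passes: fit, then transform
    val idfModel = new IDF().fit(tf)
    val tfidf = idfModel.transform(tf)   // RDD of TF-IDF vectors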
- transform(Vector) - Method in class org.apache.spark.mllib.feature.Normalizer
-
Applies unit length normalization on a vector.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.PCAModel
-
Transform a vector by computed Principal Components.
- transform(Vector) - Method in class org.apache.spark.mllib.feature.StandardScalerModel
-
Applies standardization transformation on a vector.
- transform(Vector) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
-
Applies transformation on a vector.
- transform(RDD<Vector>) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
-
Applies transformation on an RDD[Vector].
- transform(JavaRDD<Vector>) - Method in interface org.apache.spark.mllib.feature.VectorTransformer
-
Applies transformation on a JavaRDD[Vector].
- transform(String) - Method in class org.apache.spark.mllib.feature.Word2VecModel
-
Transforms a word to its vector representation.
- transform(Function1<Dataset<T>, Dataset<U>>) - Method in class org.apache.spark.sql.Dataset
-
Concise syntax for chaining custom transformations.
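A minimal sketch with hypothetical helper functions; df is an assumed DataFrame with a numeric "value" column:

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.functions.col

    def onlyPositive(d: DataFrame): DataFrame = d.filter(col("value") > 0)
    def addDoubled(d: DataFrame): DataFrame = d.withColumn("doubled", col("value") * 2)

    // Equivalent to addDoubled(onlyPositive(df)), but reads left to right.
    val result = df.transform(onlyPositive).transform(addDoubled)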
- transform(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transform(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transform(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- transform(Function<R, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transform(Function2<R, Time, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transform(Function<R, JavaRDD<U>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream.
- transform(Function2<R, Time, JavaRDD<U>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream.
- transform(Function<R, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transform(Function2<R, Time, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transform(Function<R, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transform(Function2<R, Time, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transform(Function<R, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transform(Function2<R, Time, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transform(Function<R, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transform(Function2<R, Time, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transform(Function<R, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transform(Function2<R, Time, JavaRDD<U>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transform(List<JavaDStream<?>>, Function2<List<JavaRDD<?>>, Time, JavaRDD<T>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create a new DStream in which each RDD is generated by applying a function on RDDs of
the DStreams.
- transform(Function1<RDD<T>, RDD<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream.
- transform(Function2<RDD<T>, Time, RDD<U>>, ClassTag<U>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream.
- transform(Seq<DStream<?>>, Function2<Seq<RDD<?>>, Time, RDD<T>>, ClassTag<T>) - Method in class org.apache.spark.streaming.StreamingContext
-
Create a new DStream in which each RDD is generated by applying a function on RDDs of
the DStreams.
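A minimal sketch; words is an assumed DStream[String]. transform exposes the underlying RDD of each batch, so arbitrary RDD operations apply:

    // Per-batch word counts via plain RDD operations inside transform.
    val counts = words.transform { rdd =>
      rdd.map(w => (w, 1L)).reduceByKey(_ + _)
    }
    counts.print()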
- transformAllExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transformAllExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transformAllExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- transformDown(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transformDown(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transformDown(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- Transformer - Class in org.apache.spark.ml
-
:: DeveloperApi ::
Abstract class for transformers that transform one dataset into another.
- Transformer() - Constructor for class org.apache.spark.ml.Transformer
-
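As a rough illustration of the Transformer contract (transform plus transformSchema), a minimal custom implementation might look like the sketch below; the class and column names are hypothetical, not part of the Spark API:

    import org.apache.spark.ml.Transformer
    import org.apache.spark.ml.param.ParamMap
    import org.apache.spark.ml.util.Identifiable
    import org.apache.spark.sql.{DataFrame, Dataset}
    import org.apache.spark.sql.functions.col
    import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

    // Appends a "doubled" column computed from an assumed numeric "value" column.
    class DoublingTransformer(override val uid: String) extends Transformer {
      def this() = this(Identifiable.randomUID("doubler"))

      override def transform(dataset: Dataset[_]): DataFrame =
        dataset.withColumn("doubled", col("value") * 2)

      override def transformSchema(schema: StructType): StructType =
        schema.add(StructField("doubled", DoubleType, nullable = false))

      override def copy(extra: ParamMap): DoublingTransformer = defaultCopy(extra)
    }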
- transformExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transformExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transformExpressions(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- transformExpressionsDown(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transformExpressionsDown(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transformExpressionsDown(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- transformExpressionsUp(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transformExpressionsUp(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transformExpressionsUp(PartialFunction<Expression, Expression>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassificationModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.DecisionTreeClassifier
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.GBTClassifier
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.LinearSVC
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.LinearSVCModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.LogisticRegression
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.LogisticRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.MultilayerPerceptronClassifier
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.NaiveBayes
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.NaiveBayesModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.OneVsRest
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.classification.OneVsRestModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.ProbabilisticClassificationModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.classification.RandomForestClassifier
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.BisectingKMeans
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.BisectingKMeansModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.clustering.DistributedLDAModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.GaussianMixture
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.GaussianMixtureModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.KMeans
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.KMeansModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.LDA
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.clustering.LDAModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.clustering.LocalLDAModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Binarizer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSH
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.BucketedRandomProjectionLSHModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Bucketizer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ChiSqSelector
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ChiSqSelectorModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ColumnPruner
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.CountVectorizer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.CountVectorizerModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.DCT
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.ElementwiseProduct
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.HashingTF
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.IDF
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.IDFModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Imputer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.ImputerModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.IndexToString
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Interaction
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MaxAbsScaler
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MaxAbsScalerModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MinHashLSH
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.MinHashLSHModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MinMaxScaler
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.MinMaxScalerModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.NGram
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.Normalizer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.OneHotEncoder
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.PCA
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.PCAModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.PolynomialExpansion
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.QuantileDiscretizer
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.RegexTokenizer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.RFormula
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.RFormulaModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.SQLTransformer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StandardScaler
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StandardScalerModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StopWordsRemover
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StringIndexer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.StringIndexerModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.feature.Tokenizer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorAssembler
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorAttributeRewriter
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorIndexer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorIndexerModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.VectorSlicer
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Word2Vec
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.feature.Word2VecModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.fpm.FPGrowth
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.fpm.FPGrowthModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.Pipeline
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.PipelineModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.PipelineStage
-
:: DeveloperApi ::
Check transform validity and derive the output schema from the input schema.
- transformSchema(StructType) - Method in class org.apache.spark.ml.PredictionModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.Predictor
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.recommendation.ALS
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.recommendation.ALSModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegression
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.AFTSurvivalRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.DecisionTreeRegressor
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.GBTRegressor
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegression
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.IsotonicRegression
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.regression.IsotonicRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.LinearRegression
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.LinearRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- transformSchema(StructType) - Static method in class org.apache.spark.ml.regression.RandomForestRegressor
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.CrossValidator
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.CrossValidatorModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.TrainValidationSplit
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.tuning.TrainValidationSplitModel
-
- transformSchema(StructType) - Method in class org.apache.spark.ml.UnaryTransformer
-
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream.
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream.
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transformToPair(Function<R, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transformToPair(Function2<R, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transformToPair(List<JavaDStream<?>>, Function2<List<JavaRDD<?>>, Time, JavaPairRDD<K, V>>) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
-
Create a new DStream in which each RDD is generated by applying a function on RDDs of
the DStreams.
- transformUp(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- transformUp(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- transformUp(PartialFunction<BaseType, BaseType>) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream and 'other' DStream.
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream and 'other' DStream.
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transformWith(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transformWith(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaRDD<W>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transformWith(DStream<U>, Function2<RDD<T>, RDD<U>, RDD<V>>, ClassTag<U>, ClassTag<V>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream and 'other' DStream.
- transformWith(DStream<U>, Function3<RDD<T>, RDD<U>, Time, RDD<V>>, ClassTag<U>, ClassTag<V>) - Method in class org.apache.spark.streaming.dstream.DStream
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream and 'other' DStream.
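A minimal sketch of this pairwise variant, assuming two hypothetical streams left and right of type DStream[(String, Int)]:

    import org.apache.spark.rdd.RDD

    // Join the RDDs of the two DStreams that belong to the same batch.
    val joined = left.transformWith(right,
      (l: RDD[(String, Int)], r: RDD[(String, Int)]) => l.join(r))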
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Static method in class org.apache.spark.streaming.api.java.JavaDStream
-
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream and 'other' DStream.
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Method in interface org.apache.spark.streaming.api.java.JavaDStreamLike
-
Return a new DStream in which each RDD is generated by applying a function
on each RDD of 'this' DStream and 'other' DStream.
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Static method in class org.apache.spark.streaming.api.java.JavaInputDStream
-
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairDStream
-
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairInputDStream
-
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Static method in class org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream
-
- transformWithToPair(JavaDStream<U>, Function3<R, JavaRDD<U>, Time, JavaPairRDD<K2, V2>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- transformWithToPair(JavaPairDStream<K2, V2>, Function3<R, JavaPairRDD<K2, V2>, Time, JavaPairRDD<K3, V3>>) - Static method in class org.apache.spark.streaming.api.java.JavaReceiverInputDStream
-
- translate(Column, String, String) - Static method in class org.apache.spark.sql.functions
-
Translate each character in the src column that occurs in matchingString to the corresponding character in replaceString.
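E.g., a small sketch assuming a DataFrame df with a string column s (names hypothetical):

    import org.apache.spark.sql.functions.{col, translate}

    // Replace '$' with 'S' and ',' with '.' wherever they occur in column s.
    df.select(translate(col("s"), "$,", "S."))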
- transpose() - Method in class org.apache.spark.ml.linalg.DenseMatrix
-
- transpose() - Method in interface org.apache.spark.ml.linalg.Matrix
-
Transpose the Matrix.
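For instance, a minimal sketch using the ml.linalg factory methods:

    import org.apache.spark.ml.linalg.Matrices

    // 2x3 matrix given in column-major order; transpose yields a 3x2 matrix.
    val m = Matrices.dense(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
    val mt = m.transpose  // mt.numRows == 3, mt.numCols == 2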
- transpose() - Method in class org.apache.spark.ml.linalg.SparseMatrix
-
- transpose() - Method in class org.apache.spark.mllib.linalg.DenseMatrix
-
- transpose() - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
-
Transpose this BlockMatrix.
- transpose() - Method in class org.apache.spark.mllib.linalg.distributed.CoordinateMatrix
-
Transposes this CoordinateMatrix.
- transpose() - Method in interface org.apache.spark.mllib.linalg.Matrix
-
Transpose the Matrix.
- transpose() - Method in class org.apache.spark.mllib.linalg.SparseMatrix
-
- transpose(Function1<A, GenTraversableOnce<B>>) - Static method in class org.apache.spark.sql.types.StructType
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Aggregates the elements of this RDD in a multi-level tree pattern.
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
org.apache.spark.api.java.JavaRDDLike.treeAggregate with suggested depth 2.
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.api.r.RRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.graphx.VertexRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Method in class org.apache.spark.rdd.RDD
-
Aggregates the elements of this RDD in a multi-level tree pattern.
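A sketch of a tree aggregation, assuming an existing SparkContext sc; the depth parameter trades extra shuffle rounds for less combining pressure on the driver:

    val data = sc.parallelize(1 to 1000000, 200)

    // Sum of squares, combined through a two-level tree instead of all at the driver.
    val sumSq = data.treeAggregate(0L)(
      seqOp = (acc, x) => acc + x.toLong * x,
      combOp = (a, b) => a + b,
      depth = 2)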
- treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) - Static method in class org.apache.spark.rdd.UnionRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.api.r.RRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.graphx.VertexRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- treeAggregate$default$4(U) - Static method in class org.apache.spark.rdd.UnionRDD
-
- treeID() - Method in class org.apache.spark.ml.tree.EnsembleModelReadWrite.EnsembleNodeData
-
- treeId() - Method in class org.apache.spark.mllib.tree.model.DecisionTreeModel.SaveLoadV1_0$.NodeData
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- treeReduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaDoubleRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- treeReduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaPairRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.api.java.JavaRDD
-
- treeReduce(Function2<T, T, T>) - Static method in class org.apache.spark.api.java.JavaRDD
-
- treeReduce(Function2<T, T, T>, int) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
Reduces the elements of this RDD in a multi-level tree pattern.
- treeReduce(Function2<T, T, T>) - Method in interface org.apache.spark.api.java.JavaRDDLike
-
org.apache.spark.api.java.JavaRDDLike.treeReduce with suggested depth 2.
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.api.r.RRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.graphx.EdgeRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.graphx.VertexRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.rdd.HadoopRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.rdd.JdbcRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- treeReduce(Function2<T, T, T>, int) - Method in class org.apache.spark.rdd.RDD
-
Reduces the elements of this RDD in a multi-level tree pattern.
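Similarly for treeReduce (same assumed sc):

    val nums = sc.parallelize(1 to 100000, 100)

    // Reduce through intermediate combiners rather than a single driver-side pass.
    val maxVal = nums.treeReduce((a, b) => math.max(a, b), depth = 3)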
- treeReduce(Function2<T, T, T>, int) - Static method in class org.apache.spark.rdd.UnionRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.api.r.RRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.graphx.EdgeRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
-
- treeReduce$default$2() - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
-
- treeReduce$default$2() - Static method in class org.apache.spark.graphx.VertexRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.rdd.HadoopRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.rdd.JdbcRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.rdd.NewHadoopRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.rdd.PartitionPruningRDD
-
- treeReduce$default$2() - Static method in class org.apache.spark.rdd.UnionRDD
-
- trees() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- trees() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- trees() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- trees() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- trees() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- trees() - Method in class org.apache.spark.mllib.tree.model.RandomForestModel
-
- treeStrategy() - Method in class org.apache.spark.mllib.tree.configuration.BoostingStrategy
-
- treeString() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- treeString(boolean, boolean) - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- treeString() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- treeString(boolean, boolean) - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- treeString() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- treeString(boolean, boolean) - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- treeString() - Method in class org.apache.spark.sql.types.StructType
-
- treeString$default$2() - Static method in class org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand
-
- treeString$default$2() - Static method in class org.apache.spark.sql.hive.execution.InsertIntoHiveTable
-
- treeString$default$2() - Static method in class org.apache.spark.sql.hive.execution.ScriptTransformationExec
-
- treeWeights() - Method in class org.apache.spark.ml.classification.GBTClassificationModel
-
- treeWeights() - Method in class org.apache.spark.ml.classification.RandomForestClassificationModel
-
- treeWeights() - Method in class org.apache.spark.ml.regression.GBTRegressionModel
-
- treeWeights() - Method in class org.apache.spark.ml.regression.RandomForestRegressionModel
-
- treeWeights() - Method in class org.apache.spark.mllib.tree.model.GradientBoostedTreesModel
-
- triangleCount() - Method in class org.apache.spark.graphx.GraphOps
-
Compute the number of triangles passing through each vertex.
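A minimal sketch, assuming an existing SparkContext sc and an edge-list file (the path is illustrative):

    import org.apache.spark.graphx.GraphLoader

    val graph = GraphLoader.edgeListFile(sc, "data/graphx/followers.txt")
    // Per-vertex triangle counts, as a VertexRDD[Int].
    val triangles = graph.triangleCount().vertices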
- TriangleCount - Class in org.apache.spark.graphx.lib
-
Compute the number of triangles passing through each vertex.
- TriangleCount() - Constructor for class org.apache.spark.graphx.lib.TriangleCount
-
- trigger(Trigger) - Method in class org.apache.spark.sql.streaming.DataStreamWriter
-
Set the trigger for the stream query.
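For example, to fire a micro-batch every 10 seconds (a sketch assuming an existing streaming DataFrame df):

    import org.apache.spark.sql.streaming.Trigger

    val query = df.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()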
- Trigger - Class in org.apache.spark.sql.streaming
-
Policy used to indicate how often results should be produced by a StreamingQuery.
- Trigger() - Constructor for class org.apache.spark.sql.streaming.Trigger
-
- TriggerThreadDump$() - Constructor for class org.apache.spark.storage.BlockManagerMessages.TriggerThreadDump$
-
- trim(Column) - Static method in class org.apache.spark.sql.functions
-
Trim the spaces from both ends for the specified string column.
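E.g., assuming a DataFrame df with a string column name:

    import org.apache.spark.sql.functions.{col, trim}

    df.select(trim(col("name")).as("name_trimmed"))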
- TripletFields - Class in org.apache.spark.graphx
-
Represents a subset of the fields of an EdgeTriplet or EdgeContext.
- TripletFields() - Constructor for class org.apache.spark.graphx.TripletFields
-
Constructs a default TripletFields in which all fields are included.
- TripletFields(boolean, boolean, boolean) - Constructor for class org.apache.spark.graphx.TripletFields
-
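TripletFields matters mainly as the third argument of Graph.aggregateMessages, where requesting fewer fields can avoid shipping vertex data. A sketch assuming a hypothetical graph of type Graph[Double, Int]:

    import org.apache.spark.graphx.TripletFields

    // Only source-vertex attributes are read, so skip shipping destination attributes.
    val sums = graph.aggregateMessages[Double](
      ctx => ctx.sendToDst(ctx.srcAttr),
      (a, b) => a + b,
      TripletFields.Src)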
- triplets() - Method in class org.apache.spark.graphx.Graph
-
An RDD containing the edge triplets, which are edges along with the vertex data associated with
the adjacent vertices.
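A small sketch, assuming a hypothetical graph of type Graph[String, Int] with name attributes and weighted edges:

    // Render each heavy edge as "src -> dst" using the attached vertex data.
    val heavy = graph.triplets
      .filter(_.attr > 5)
      .map(t => s"${t.srcAttr} -> ${t.dstAttr}")
      .collect()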
- triplets() - Method in class org.apache.spark.graphx.impl.GraphImpl
-
Return an RDD that brings edges together with their source and destination vertices.
- truePositiveRate(double) - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
-
Returns the true positive rate for a given label (category).
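A sketch, assuming a hypothetical RDD predictionAndLabels of (prediction, label) pairs:

    import org.apache.spark.mllib.evaluation.MulticlassMetrics

    val metrics = new MulticlassMetrics(predictionAndLabels)
    // True positive rate (recall) for the class labeled 1.0.
    val tpr = metrics.truePositiveRate(1.0)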
- trunc(Column, String) - Static method in class org.apache.spark.sql.functions
-
Returns date truncated to the unit specified by the format.
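E.g., assuming a DataFrame df with a date column d:

    import org.apache.spark.sql.functions.{col, trunc}

    // Truncate to the first day of the month; "year" is the other common unit.
    df.select(trunc(col("d"), "month"))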
- truncatedString(Seq<T>, String, String, String, int) - Static method in class org.apache.spark.util.Utils
-
Format a sequence with semantics similar to calling .mkString().
- truncatedString(Seq<T>, String) - Static method in class org.apache.spark.util.Utils
-
Shorthand for calling truncatedString() without start or end strings.
- tryLog(Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Executes the given block in a Try, logging any uncaught exceptions.
- tryLogNonFatalError(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Executes the given block, logging any non-fatal errors and rethrowing only fatal ones.
- tryOrExit(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code that evaluates to Unit, forwarding any uncaught exceptions to the default UncaughtExceptionHandler.
- tryOrIOException(Function0<T>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code that returns a value, re-throwing any non-fatal uncaught
exceptions as IOException.
- tryOrStopSparkContext(SparkContext, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code that evaluates to Unit, stopping the SparkContext if there is any uncaught exception.
- tryRecoverFromCheckpoint(String) - Method in class org.apache.spark.streaming.StreamingContextPythonHelper
-
This is a private method only for Python to implement getOrCreate.
- tryWithResource(Function0<R>, Function1<R, T>) - Static method in class org.apache.spark.util.Utils
-
- tryWithSafeFinally(Function0<T>, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code, then a finally block, but if exceptions happen in
the finally block, do not suppress the original exception.
- tryWithSafeFinallyAndFailureCallbacks(Function0<T>, Function0<BoxedUnit>, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.Utils
-
Execute a block of code and call the failure callbacks in the catch block.
- tuple(Encoder<T1>, Encoder<T2>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 2-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 3-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>, Encoder<T4>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 4-ary tuples.
- tuple(Encoder<T1>, Encoder<T2>, Encoder<T3>, Encoder<T4>, Encoder<T5>) - Static method in class org.apache.spark.sql.Encoders
-
An encoder for 5-ary tuples.
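For instance, composing an encoder for pairs (a minimal sketch):

    import org.apache.spark.sql.Encoders

    // Encoder for (String, Integer) pairs, built from the primitive encoders.
    val pairEncoder = Encoders.tuple(Encoders.STRING, Encoders.INT)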
- tValues() - Method in class org.apache.spark.ml.regression.GeneralizedLinearRegressionTrainingSummary
-
T-statistic of estimated coefficients and intercept.
- tValues() - Method in class org.apache.spark.ml.regression.LinearRegressionSummary
-
T-statistic of estimated coefficients and intercept.
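A sketch of reading these from a fitted model, assuming a hypothetical training DataFrame with "features" and "label" columns; note that these statistics are only available when the model was fit with the "normal" solver:

    import org.apache.spark.ml.regression.LinearRegression

    val model = new LinearRegression().setSolver("normal").fit(training)
    val t = model.summary.tValues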
- Tweedie$() - Constructor for class org.apache.spark.ml.regression.GeneralizedLinearRegression.Tweedie$
-
- TYPE() - Static method in class org.apache.spark.ml.attribute.AttributeKeys
-
- typed - Class in org.apache.spark.sql.expressions.javalang
-
:: Experimental ::
Type-safe functions available for Dataset operations in Java.
- typed() - Constructor for class org.apache.spark.sql.expressions.javalang.typed
-
- typed - Class in org.apache.spark.sql.expressions.scalalang
-
:: Experimental ::
Type-safe functions available for Dataset operations in Scala.
- typed() - Constructor for class org.apache.spark.sql.expressions.scalalang.typed
-
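A small sketch of the Scala variant, with a hypothetical case class and Dataset:

    import org.apache.spark.sql.expressions.scalalang.typed
    import spark.implicits._  // spark: an existing SparkSession (assumed)

    case class Sale(region: String, price: Double)
    // ds: Dataset[Sale], assumed to exist.
    val avgByRegion = ds.groupByKey(_.region).agg(typed.avg[Sale](_.price))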
- TypedColumn<T,U> - Class in org.apache.spark.sql
-
A Column where an Encoder has been given for the expected input and return type.
- TypedColumn(Expression, ExpressionEncoder<U>) - Constructor for class org.apache.spark.sql.TypedColumn
-
- typedLit(T, TypeTags.TypeTag<T>) - Static method in class org.apache.spark.sql.functions
-
Creates a Column of literal value.
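Unlike lit, typedLit can also handle parameterized Scala types such as Seq and Map. A sketch (df hypothetical):

    import org.apache.spark.sql.functions.typedLit

    df.select(typedLit(Seq(1, 2, 3)).as("xs"), typedLit(Map("a" -> 1)).as("m"))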
- typeInfoConversions(DataType) - Static method in class org.apache.spark.sql.hive.execution.HiveScriptIOSchema
-
- typeInfoConversions(DataType) - Static method in class org.apache.spark.sql.hive.orc.OrcRelation
-
- typeName() - Method in class org.apache.spark.mllib.linalg.VectorUDT
-
- typeName() - Static method in class org.apache.spark.sql.types.ArrayType
-
- typeName() - Static method in class org.apache.spark.sql.types.BinaryType
-
- typeName() - Static method in class org.apache.spark.sql.types.BooleanType
-
- typeName() - Static method in class org.apache.spark.sql.types.ByteType
-
- typeName() - Static method in class org.apache.spark.sql.types.CalendarIntervalType
-
- typeName() - Static method in class org.apache.spark.sql.types.CharType
-
- typeName() - Method in class org.apache.spark.sql.types.DataType
-
Name of the type used in JSON serialization.
- typeName() - Static method in class org.apache.spark.sql.types.DateType
-
- typeName() - Method in class org.apache.spark.sql.types.DecimalType
-
- typeName() - Static method in class org.apache.spark.sql.types.DoubleType
-
- typeName() - Static method in class org.apache.spark.sql.types.FloatType
-
- typeName() - Static method in class org.apache.spark.sql.types.HiveStringType
-
- typeName() - Static method in class org.apache.spark.sql.types.IntegerType
-
- typeName() - Static method in class org.apache.spark.sql.types.LongType
-
- typeName() - Static method in class org.apache.spark.sql.types.MapType
-
- typeName() - Static method in class org.apache.spark.sql.types.NullType
-
- typeName() - Static method in class org.apache.spark.sql.types.NumericType
-
- typeName() - Static method in class org.apache.spark.sql.types.ObjectType
-
- typeName() - Static method in class org.apache.spark.sql.types.ShortType
-
- typeName() - Static method in class org.apache.spark.sql.types.StringType
-
- typeName() - Static method in class org.apache.spark.sql.types.StructType
-
- typeName() - Static method in class org.apache.spark.sql.types.TimestampType
-
- typeName() - Static method in class org.apache.spark.sql.types.VarcharType
-