Method and Description |
---|
org.apache.spark.streaming.StreamingContext.awaitTermination(long)
As of 1.3.0, replaced by
awaitTerminationOrTimeout(Long) . |
org.apache.spark.streaming.api.java.JavaStreamingContext.awaitTermination(long)
As of 1.3.0, replaced by
awaitTerminationOrTimeout(Long) . |
org.apache.spark.sql.functions.callUDF(Function0<?>, DataType)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function1<?, ?>, DataType, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function10<?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function2<?, ?, ?>, DataType, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function3<?, ?, ?, ?>, DataType, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function4<?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function5<?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function6<?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function7<?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function8<?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUDF(Function9<?, ?, ?, ?, ?, ?, ?, ?, ?, ?>, DataType, Column, Column, Column, Column, Column, Column, Column, Column, Column)
As of 1.5.0, since it's redundant with udf().
This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.callUdf(String, Seq<Column>)
As of 1.5.0, since it was not coherent to have two functions callUdf and callUDF.
This will be removed in Spark 2.0.
|
org.apache.spark.api.java.StorageLevels.create(boolean, boolean, boolean, int) |
org.apache.spark.sql.DataFrame.createJDBCTable(String, String, boolean)
As of 1.4.0, replaced by
write().jdbc() . This will be removed in Spark 2.0. |
org.apache.spark.sql.functions.cumeDist()
As of 1.6.0, replaced by
cume_dist . This will be removed in Spark 2.0. |
org.apache.spark.api.java.JavaSparkContext.defaultMinSplits()
As of Spark 1.0.0, defaultMinSplits is deprecated, use
JavaSparkContext.defaultMinPartitions() instead |
org.apache.spark.sql.functions.denseRank()
As of 1.6.0, replaced by
dense_rank . This will be removed in Spark 2.0. |
org.apache.spark.streaming.api.java.JavaDStreamLike.foreach(Function<R, Void>)
As of release 0.9.0, replaced by foreachRDD
|
org.apache.spark.streaming.dstream.DStream.foreach(Function1<RDD<T>, BoxedUnit>)
As of 0.9.0, replaced by
foreachRDD . |
org.apache.spark.streaming.dstream.DStream.foreach(Function2<RDD<T>, Time, BoxedUnit>)
As of 0.9.0, replaced by
foreachRDD . |
org.apache.spark.streaming.api.java.JavaDStreamLike.foreach(Function2<R, Time, Void>)
As of release 0.9.0, replaced by foreachRDD
|
org.apache.spark.streaming.api.java.JavaDStreamLike.foreachRDD(Function<R, Void>)
As of release 1.6.0, replaced by foreachRDD(JVoidFunction)
|
org.apache.spark.streaming.api.java.JavaDStreamLike.foreachRDD(Function2<R, Time, Void>)
As of release 1.6.0, replaced by foreachRDD(JVoidFunction2)
|
org.apache.spark.sql.types.DataType.fromCaseClassString(String)
As of 1.2.0, replaced by
DataType.fromJson() |
org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(String, Configuration, JavaStreamingContextFactory)
As of 1.4.0, replaced by
getOrCreate without JavaStreamingContextFactory. |
org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(String, Configuration, JavaStreamingContextFactory, boolean)
As of 1.4.0, replaced by
getOrCreate without JavaStreamingContextFactory. |
org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(String, JavaStreamingContextFactory)
As of 1.4.0, replaced by
getOrCreate without JavaStreamingContextFactory. |
org.apache.spark.sql.Column.in(Object...)
As of 1.5.0. Use isin. This will be removed in Spark 2.0.
|
org.apache.spark.sql.Column.in(Seq<Object>)
As of 1.5.0. Use isin. This will be removed in Spark 2.0.
|
org.apache.spark.sql.functions.inputFileName()
As of 1.6.0, replaced by
input_file_name . This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.insertInto(String)
As of 1.4.0, replaced by
write().mode(SaveMode.Append).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.insertInto(String, boolean)
As of 1.4.0, replaced by
write().mode(SaveMode.Append|SaveMode.Overwrite).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.insertIntoJDBC(String, String, boolean)
As of 1.4.0, replaced by
write().jdbc() . This will be removed in Spark 2.0. |
org.apache.spark.sql.functions.isNaN(Column)
As of 1.6.0, replaced by
isnan . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jdbc(String, String)
As of 1.4.0, replaced by
read().jdbc() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jdbc(String, String, String[])
As of 1.4.0, replaced by
read().jdbc() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jdbc(String, String, String, long, long, int)
As of 1.4.0, replaced by
read().jdbc() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonFile(String)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonFile(String, double)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonFile(String, StructType)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonRDD(JavaRDD<String>)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonRDD(JavaRDD<String>, double)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonRDD(JavaRDD<String>, StructType)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonRDD(RDD<String>)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonRDD(RDD<String>, double)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.jsonRDD(RDD<String>, StructType)
As of 1.4.0, replaced by
read().json() . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.load(String)
As of 1.4.0, replaced by
read().load(path) . This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.load(String, Map<String, String>)
As of 1.4.0, replaced by
read().format(source).options(options).load() .
This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.load(String, Map<String, String>)
As of 1.4.0, replaced by
read().format(source).options(options).load() .
This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.load(String, String)
As of 1.4.0, replaced by
read().format(source).load(path) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.load(String, StructType, Map<String, String>)
As of 1.4.0, replaced by
read().format(source).schema(schema).options(options).load() .
This will be removed in Spark 2.0. |
org.apache.spark.sql.SQLContext.load(String, StructType, Map<String, String>)
As of 1.4.0, replaced by
read().format(source).schema(schema).options(options).load() .
This will be removed in Spark 2.0. |
org.apache.spark.mllib.util.MLUtils.loadLabeledData(SparkContext, String)
Should use
RDD.saveAsTextFile(java.lang.String) for saving and
MLUtils.loadLabeledPoints(org.apache.spark.SparkContext, java.lang.String, int) for loading. |
org.apache.spark.streaming.StreamingContext.networkStream(Receiver<T>)
As of 1.0.0, replaced by
receiverStream . |
org.apache.spark.sql.SQLContext.parquetFile(String...)
As of 1.4.0, replaced by
read().parquet() . This will be removed in Spark 2.0. |
org.apache.spark.sql.functions.percentRank()
As of 1.6.0, replaced by
percent_rank . This will be removed in Spark 2.0. |
org.apache.spark.streaming.api.java.JavaDStreamLike.reduceByWindow(Function2<T, T, T>, Duration, Duration)
As this API is not Java compatible.
|
org.apache.spark.sql.functions.rowNumber()
As of 1.6.0, replaced by
row_number . This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.save(String)
As of 1.4.0, replaced by
write().save(path) . This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.save(String, SaveMode)
As of 1.4.0, replaced by
write().mode(mode).save(path) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.save(String, SaveMode, Map<String, String>)
As of 1.4.0, replaced by
write().format(source).mode(mode).options(options).save(path) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.save(String, SaveMode, Map<String, String>)
As of 1.4.0, replaced by
write().format(source).mode(mode).options(options).save(path) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.save(String, String)
As of 1.4.0, replaced by
write().format(source).save(path) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.save(String, String, SaveMode)
As of 1.4.0, replaced by
write().format(source).mode(mode).save(path) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsParquetFile(String)
As of 1.4.0, replaced by
write().parquet() . This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsTable(String)
As of 1.4.0, replaced by
write().saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsTable(String, SaveMode)
As of 1.4.0, replaced by
write().mode(mode).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsTable(String, String)
As of 1.4.0, replaced by
write().format(source).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsTable(String, String, SaveMode)
As of 1.4.0, replaced by
write().mode(mode).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsTable(String, String, SaveMode, Map<String, String>)
As of 1.4.0, replaced by
write().format(source).mode(mode).options(options).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.sql.DataFrame.saveAsTable(String, String, SaveMode, Map<String, String>)
As of 1.4.0, replaced by
write().format(source).mode(mode).options(options).saveAsTable(tableName) .
This will be removed in Spark 2.0. |
org.apache.spark.mllib.util.MLUtils.saveLabeledData(RDD<LabeledPoint>, String)
Should use
RDD.saveAsTextFile(java.lang.String) for saving and
MLUtils.loadLabeledPoints(org.apache.spark.SparkContext, java.lang.String, int) for loading. |
org.apache.spark.streaming.api.java.JavaStreamingContext.sc()
As of 0.9.0, replaced by
sparkContext |
org.apache.spark.mllib.optimization.LBFGS.setMaxNumIterations(int)
use
LBFGS.setNumIterations(int) instead |
org.apache.spark.ml.evaluation.BinaryClassificationEvaluator.setScoreCol(String)
use
setRawPredictionCol() instead |
org.apache.spark.sql.functions.sparkPartitionId()
As of 1.6.0, replaced by
spark_partition_id . This will be removed in Spark 2.0. |
org.apache.spark.api.java.JavaRDDLike.toArray()
As of Spark 1.0.0, toArray() is deprecated, use
JavaRDDLike.collect() instead |
org.apache.spark.streaming.StreamingContext.toPairDStreamFunctions(DStream<Tuple2<K, V>>, ClassTag<K>, ClassTag<V>, Ordering<K>)
As of 1.3.0, replaced by implicit functions in the DStream companion object.
This is kept here only for backward compatibility.
|
org.apache.spark.sql.DataFrame.toSchemaRDD()
As of 1.3.0, replaced by
toDF() . This will be removed in Spark 2.0. |
org.apache.spark.mllib.rdd.RDDFunctions.treeAggregate(U, Function2<U, T, U>, Function2<U, U, U>, int, ClassTag<U>) |
org.apache.spark.mllib.rdd.RDDFunctions.treeReduce(Function2<T, T, T>, int)
Use
RDD.treeReduce(scala.Function2<T, T, T>, int) instead. |
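
The awaitTermination(long) entries at the top of the table have a direct replacement. A minimal sketch in Scala; the app name, master, and batch interval are illustrative, and the return value indicates whether the context stopped within the timeout:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("migration-sketch").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(1))

// Deprecated: ssc.awaitTermination(10000)
// Replacement: returns true if the context stopped before the timeout elapsed
val stopped: Boolean = ssc.awaitTerminationOrTimeout(10000)
```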
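All of the callUDF(FunctionN, DataType, ...) overloads and callUdf(String, Seq<Column>) share one replacement: build the function once with udf(), which infers the return DataType, and apply it as a column expression. A sketch, assuming a DataFrame named df with an integer column value:

```scala
import org.apache.spark.sql.functions.udf

// Deprecated: callUDF((x: Int) => x + 1, IntegerType, df("value"))
// udf() infers the return type from the Scala function's signature
val plusOne = udf((x: Int) => x + 1)
val result = df.withColumn("value_plus_one", plusOne(df("value")))
```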
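The foreach entries on DStream and JavaDStreamLike are renamed to foreachRDD with unchanged semantics. A sketch, assuming an existing DStream[String] named stream:

```scala
// Deprecated: stream.foreach(rdd => rdd.take(10).foreach(println))
stream.foreachRDD { rdd =>
  rdd.take(10).foreach(println)
}

// The two-argument variant also receives the batch time
stream.foreachRDD { (rdd, time) =>
  println(s"Batch at $time contains ${rdd.count()} records")
}
```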
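Column.in is renamed to isin with the same varargs semantics. A sketch, assuming a DataFrame df with a string column category:

```scala
// Deprecated: df.filter(df("category").in("a", "b"))
val filtered = df.filter(df("category").isin("a", "b"))
```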
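The camelCase functions (cumeDist, denseRank, percentRank, rowNumber, plus inputFileName, isNaN, and sparkPartitionId) are one-to-one renames to snake_case. A sketch of the window-function renames, assuming a DataFrame df with dept and salary columns:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{cume_dist, dense_rank, percent_rank, row_number}

val w = Window.partitionBy("dept").orderBy("salary")
val ranked = df.select(
  df("dept"), df("salary"),
  row_number().over(w).as("rn"),     // was rowNumber()
  dense_rank().over(w).as("dr"),     // was denseRank()
  percent_rank().over(w).as("pr"),   // was percentRank()
  cume_dist().over(w).as("cd"))      // was cumeDist()
```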
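The SQLContext.jdbc, jsonFile, jsonRDD, load, and parquetFile entries all converge on the read() builder. A sketch, assuming an existing SQLContext named sqlContext; the paths and JDBC URL are illustrative:

```scala
import java.util.Properties

// Deprecated: sqlContext.jsonFile("people.json")
val people = sqlContext.read.json("people.json")

// Deprecated: sqlContext.parquetFile("events.parquet")
val events = sqlContext.read.parquet("events.parquet")

// Deprecated: sqlContext.load("json", Map("path" -> "people.json"))
val loaded = sqlContext.read.format("json").options(Map("path" -> "people.json")).load()

// Deprecated: sqlContext.jdbc("jdbc:postgresql:db", "parts")
val parts = sqlContext.read.jdbc("jdbc:postgresql:db", "parts", new Properties())
```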
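Likewise, the DataFrame.save, saveAsTable, insertInto, insertIntoJDBC, createJDBCTable, and saveAsParquetFile entries converge on the write() builder. A sketch, assuming a DataFrame df; the output paths, table names, and JDBC URL are illustrative:

```scala
import java.util.Properties
import org.apache.spark.sql.SaveMode

// Deprecated: df.saveAsParquetFile("out.parquet")
df.write.parquet("out.parquet")

// Deprecated: df.save("out", "json", SaveMode.Overwrite)
df.write.format("json").mode(SaveMode.Overwrite).save("out")

// Deprecated: df.insertInto("events")
df.write.mode(SaveMode.Append).saveAsTable("events")

// Deprecated: df.insertIntoJDBC("jdbc:postgresql:db", "events", false)
df.write.mode(SaveMode.Append).jdbc("jdbc:postgresql:db", "events", new Properties())
```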
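Two more entries are plain renames: JavaRDDLike.toArray() becomes collect(), and DataFrame.toSchemaRDD() becomes toDF(). A sketch, assuming an existing SparkContext sc and DataFrame df:

```scala
import java.util.Arrays
import org.apache.spark.api.java.JavaSparkContext

val jsc = new JavaSparkContext(sc)
val javaRdd = jsc.parallelize(Arrays.asList[Integer](1, 2, 3))

// Deprecated: javaRdd.toArray()
val collected = javaRdd.collect()

// Deprecated: df.toSchemaRDD()
val asDf = df.toDF()
```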