public interface ParquetTest

NOTE: Considering that the classes Tuple1 ... Tuple22 all extend Product, it is more convenient to use tuples rather than special case classes when writing test cases/suites. In particular, Tuple1.apply can be used to easily wrap a single type/value.
| Modifier and Type | Method and Description |
|---|---|
| `org.apache.hadoop.conf.Configuration` | `configuration()` |
| `<T extends scala.Product> void` | `makeParquetFile(DataFrame df, java.io.File path, scala.reflect.ClassTag<T> evidence$9, scala.reflect.api.TypeTags.TypeTag<T> evidence$10)` |
| `<T extends scala.Product> void` | `makeParquetFile(scala.collection.Seq<T> data, java.io.File path, scala.reflect.ClassTag<T> evidence$7, scala.reflect.api.TypeTags.TypeTag<T> evidence$8)` |
| `java.io.File` | `makePartitionDir(java.io.File basePath, String defaultPartitionName, scala.collection.Seq<scala.Tuple2<String,Object>> partitionCols)` |
| `SQLContext` | `sqlContext()` |
| `<T extends scala.Product> void` | `withParquetDataFrame(scala.collection.Seq<T> data, scala.Function1<DataFrame,scala.runtime.BoxedUnit> f, scala.reflect.ClassTag<T> evidence$3, scala.reflect.api.TypeTags.TypeTag<T> evidence$4)` Writes `data` to a Parquet file and reads it back as a `DataFrame`, which is then passed to `f`. |
| `<T extends scala.Product> void` | `withParquetFile(scala.collection.Seq<T> data, scala.Function1<String,scala.runtime.BoxedUnit> f, scala.reflect.ClassTag<T> evidence$1, scala.reflect.api.TypeTags.TypeTag<T> evidence$2)` Writes `data` to a Parquet file, which is then passed to `f` and will be deleted after `f` returns. |
| `<T extends scala.Product> void` | `withParquetTable(scala.collection.Seq<T> data, String tableName, scala.Function0<scala.runtime.BoxedUnit> f, scala.reflect.ClassTag<T> evidence$5, scala.reflect.api.TypeTags.TypeTag<T> evidence$6)` Writes `data` to a Parquet file, reads it back as a `DataFrame` and registers it as a temporary table named `tableName`, then calls `f`. |
| `void` | `withSQLConf(scala.collection.Seq<scala.Tuple2<String,String>> pairs, scala.Function0<scala.runtime.BoxedUnit> f)` Sets all SQL configurations specified in `pairs`, calls `f`, and then restores all SQL configurations. |
| `void` | `withTempDir(scala.Function1<java.io.File,scala.runtime.BoxedUnit> f)` Creates a temporary directory, which is then passed to `f` and will be deleted after `f` returns. |
| `void` | `withTempPath(scala.Function1<java.io.File,scala.runtime.BoxedUnit> f)` Generates a temporary path without creating the actual file/directory, then passes it to `f`. |
| `void` | `withTempTable(String tableName, scala.Function0<scala.runtime.BoxedUnit> f)` Drops temporary table `tableName` after calling `f`. |
SQLContext sqlContext()
org.apache.hadoop.conf.Configuration configuration()
void withSQLConf(scala.collection.Seq<scala.Tuple2<String,String>> pairs, scala.Function0<scala.runtime.BoxedUnit> f)
Sets all SQL configurations specified in pairs, calls f, and then restores all SQL configurations.
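The set-call-restore behavior described above can be sketched outside Spark with a plain map standing in for the SQL configuration. This is an illustrative sketch, not Spark's implementation; the class and method names are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class WithConfSketch {
    // Sketch of the withSQLConf pattern: apply the overrides, run the body,
    // then restore the original values in a finally block. Keys that were not
    // set before are removed again, so the body's failure cannot leak config.
    static void withConf(Map<String, String> conf,
                         Map<String, String> overrides,
                         Runnable f) {
        Map<String, String> saved = new HashMap<>();
        for (String key : overrides.keySet()) {
            saved.put(key, conf.get(key)); // null marks "was not set before"
        }
        conf.putAll(overrides);
        try {
            f.run();
        } finally {
            for (Map.Entry<String, String> e : saved.entrySet()) {
                if (e.getValue() == null) {
                    conf.remove(e.getKey()); // key was absent originally
                } else {
                    conf.put(e.getKey(), e.getValue());
                }
            }
        }
    }
}
```

Restoring in `finally` is the essential part: the configuration is reset even when `f` throws, so one failing test cannot change the settings seen by later tests.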
void withTempPath(scala.Function1<java.io.File,scala.runtime.BoxedUnit> f)
f. If a file/directory is created there by f, it will be deleted after f returns.
void withTempDir(scala.Function1<java.io.File,scala.runtime.BoxedUnit> f)
Creates a temporary directory, which is then passed to f and will be deleted after f returns.
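withTempDir and withTempPath are loan-pattern helpers: the resource is created, lent to f, and cleaned up after f returns or throws. A minimal self-contained sketch of that pattern using only the JDK (names are illustrative, not Spark's code):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.function.Consumer;

public class TempDirSketch {
    // Loan pattern: create a temporary directory, pass it to the body,
    // and delete it recursively after the body returns or throws.
    static void withTempDir(Consumer<File> f) {
        File dir;
        try {
            dir = Files.createTempDirectory("test").toFile();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        try {
            f.accept(dir);
        } finally {
            deleteRecursively(dir);
        }
    }

    static void deleteRecursively(File file) {
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        file.delete();
    }
}
```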
<T extends scala.Product> void withParquetFile(scala.collection.Seq<T> data, scala.Function1<String,scala.runtime.BoxedUnit> f, scala.reflect.ClassTag<T> evidence$1, scala.reflect.api.TypeTags.TypeTag<T> evidence$2)
Writes data to a Parquet file, which is then passed to f and will be deleted after f returns.

<T extends scala.Product> void withParquetDataFrame(scala.collection.Seq<T> data, scala.Function1<DataFrame,scala.runtime.BoxedUnit> f, scala.reflect.ClassTag<T> evidence$3, scala.reflect.api.TypeTags.TypeTag<T> evidence$4)
Writes data to a Parquet file and reads it back as a DataFrame, which is then passed to f. The Parquet file will be deleted after f returns.

void withTempTable(String tableName, scala.Function0<scala.runtime.BoxedUnit> f)
Drops temporary table tableName after calling f.

<T extends scala.Product> void withParquetTable(scala.collection.Seq<T> data, String tableName, scala.Function0<scala.runtime.BoxedUnit> f, scala.reflect.ClassTag<T> evidence$5, scala.reflect.api.TypeTags.TypeTag<T> evidence$6)
Writes data to a Parquet file, reads it back as a DataFrame and registers it as a temporary table named tableName, then calls f. The temporary table together with the Parquet file will be dropped/deleted after f returns.

<T extends scala.Product> void makeParquetFile(scala.collection.Seq<T> data, java.io.File path, scala.reflect.ClassTag<T> evidence$7, scala.reflect.api.TypeTags.TypeTag<T> evidence$8)
<T extends scala.Product> void makeParquetFile(DataFrame df, java.io.File path, scala.reflect.ClassTag<T> evidence$9, scala.reflect.api.TypeTags.TypeTag<T> evidence$10)
java.io.File makePartitionDir(java.io.File basePath, String defaultPartitionName, scala.collection.Seq<scala.Tuple2<String,Object>> partitionCols)
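The partition directory built by makePartitionDir follows Hive-style layout: one col=value path segment per partition column under basePath, with defaultPartitionName substituted for null values (Spark's default is "__HIVE_DEFAULT_PARTITION__"). A plain-JDK sketch of that layout, assuming that behavior; the helper mirrors the method above in name only:

```java
import java.io.File;
import java.util.Map;

public class PartitionDirSketch {
    // Sketch of a Hive-style partition path: basePath/col1=v1/col2=v2/...
    // Null partition values are replaced by defaultPartitionName.
    static File makePartitionDir(File basePath, String defaultPartitionName,
                                 Map<String, Object> partitionCols) {
        File dir = basePath;
        for (Map.Entry<String, Object> col : partitionCols.entrySet()) {
            Object value = col.getValue();
            String segment = col.getKey() + "="
                    + (value == null ? defaultPartitionName : value.toString());
            dir = new File(dir, segment);
        }
        dir.mkdirs(); // create the full directory chain
        return dir;
    }
}
```

A LinkedHashMap (or Spark's Seq of pairs, as in the real signature) keeps the partition columns in declaration order, which matters because the column order determines the directory nesting.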