public interface CompressibleColumnBuilder<T extends org.apache.spark.sql.types.NativeType> extends ColumnBuilder, Logging
```
.--------------------------- Column type ID (4 bytes)
|   .----------------------- Null count N (4 bytes)
|   |   .------------------- Null positions (4 x N bytes, empty if null count is zero)
|   |   |     .------------- Compression scheme ID (4 bytes)
|   |   |     |   .--------- Compressed non-null elements
V   V   V     V   V
+---+---+-----+---+---------+
|   |   | ... |   | ... ... |
+---+---+-----+---+---------+
 \-----------/ \-----------/
    header         body
```
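The layout above can be sketched as a plain `ByteBuffer` writer. This is an illustration of the byte layout only, not Spark's implementation; the type and scheme IDs passed in are made-up values.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the buffer layout described above: a header
// (column type ID, null count, null positions) followed by a body
// (compression scheme ID, compressed non-null elements).
public class ColumnLayoutSketch {
    public static ByteBuffer encode(int typeId, int[] nullPositions,
                                    int schemeId, byte[] compressedBody) {
        ByteBuffer buf = ByteBuffer.allocate(
            4 + 4 + 4 * nullPositions.length + 4 + compressedBody.length);
        buf.putInt(typeId);               // column type ID (4 bytes)
        buf.putInt(nullPositions.length); // null count N (4 bytes)
        for (int pos : nullPositions) {
            buf.putInt(pos);              // null positions (4 x N bytes)
        }
        buf.putInt(schemeId);             // compression scheme ID (4 bytes)
        buf.put(compressedBody);          // compressed non-null elements
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        // One null at ordinal 2, three compressed body bytes:
        // header is 4 + 4 + 4 bytes, body is 4 + 3 bytes.
        ByteBuffer buf = encode(4, new int[] {2}, 0, new byte[] {1, 2, 3});
        System.out.println("size=" + buf.remaining());      // 19
        System.out.println("typeId=" + buf.getInt());       // 4
        System.out.println("nullCount=" + buf.getInt());    // 1
    }
}
```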
| Modifier and Type | Method and Description |
|---|---|
| `void` | `appendFrom(org.apache.spark.sql.Row row, int ordinal)`<br>Appends `row(ordinal)` to the column builder. |
| `java.nio.ByteBuffer` | `build()`<br>Returns the final columnar byte buffer. |
| `scala.collection.Seq<Encoder<T>>` | `compressionEncoders()` |
| `void` | `gatherCompressibilityStats(org.apache.spark.sql.Row row, int ordinal)` |
| `void` | `initialize(int initialSize, String columnName, boolean useCompression)`<br>Initializes with an approximate lower bound on the expected number of elements in this column. |
| `boolean` | `isWorthCompressing(Encoder<T> encoder)` |
Methods inherited from interface `ColumnBuilder`: `columnStats`

Methods inherited from interface `Logging`: `initializeIfNecessary`, `initializeLogging`, `isTraceEnabled`, `log_`, `log`, `logDebug`, `logDebug`, `logError`, `logError`, `logInfo`, `logInfo`, `logName`, `logTrace`, `logTrace`, `logWarning`, `logWarning`
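`compressionEncoders()` and `isWorthCompressing(Encoder<T>)` together suggest a selection step: of the candidate encoders, keep only those whose estimated compression actually pays off, then pick the best. A minimal sketch of that idea, assuming a ratio-based heuristic; the `Encoder` interface, the 0.8 threshold, and the encoder names here are stand-ins, not Spark's actual types.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical encoder-selection sketch: filter by isWorthCompressing,
// then choose the encoder with the lowest estimated compression ratio.
public class EncoderSelectionSketch {
    interface Encoder {
        String name();
        double compressionRatio(); // compressed size / uncompressed size
    }

    // Assumed heuristic: compression is worth it only below some ratio.
    static boolean isWorthCompressing(Encoder e) {
        return e.compressionRatio() < 0.8;
    }

    static Optional<Encoder> pickBest(List<Encoder> encoders) {
        return encoders.stream()
                .filter(EncoderSelectionSketch::isWorthCompressing)
                .min(Comparator.comparingDouble(Encoder::compressionRatio));
    }

    record SimpleEncoder(String name, double compressionRatio) implements Encoder {}

    public static void main(String[] args) {
        List<Encoder> candidates = Arrays.asList(
                new SimpleEncoder("RunLengthEncoding", 0.3),
                new SimpleEncoder("DictionaryEncoding", 0.6),
                new SimpleEncoder("PassThrough", 1.0));
        System.out.println(pickBest(candidates).map(Encoder::name).orElse("none"));
    }
}
```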
void initialize(int initialSize, String columnName, boolean useCompression)

Initializes with an approximate lower bound on the expected number of elements in this column.

Specified by: `initialize` in interface `ColumnBuilder`
void gatherCompressibilityStats(org.apache.spark.sql.Row row, int ordinal)
void appendFrom(org.apache.spark.sql.Row row, int ordinal)

Appends `row(ordinal)` to the column builder.

Specified by: `appendFrom` in interface `ColumnBuilder`
java.nio.ByteBuffer build()

Returns the final columnar byte buffer.

Specified by: `build` in interface `ColumnBuilder`
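The methods above imply a builder lifecycle: `initialize` once, then feed each value through `gatherCompressibilityStats` and `appendFrom`, and finally call `build()` for the columnar byte buffer. A minimal toy sketch of that call order, storing `int` values uncompressed and tracking only a placeholder statistic; it is not Spark's implementation.

```java
import java.nio.ByteBuffer;

// Toy builder illustrating the lifecycle:
// initialize -> (gatherCompressibilityStats + appendFrom) per value -> build.
public class IntColumnBuilderSketch {
    private ByteBuffer buffer;
    private int maxSeen = Integer.MIN_VALUE; // stand-in for real stats

    public void initialize(int initialSize, String columnName, boolean useCompression) {
        // initialSize is an approximate lower bound on the element count
        buffer = ByteBuffer.allocate(initialSize * 4);
    }

    public void gatherCompressibilityStats(int value) {
        maxSeen = Math.max(maxSeen, value); // toy statistic only
    }

    public void appendFrom(int value) {
        buffer.putInt(value);
    }

    public ByteBuffer build() {
        buffer.flip(); // expose only the bytes written so far
        return buffer;
    }

    public static void main(String[] args) {
        IntColumnBuilderSketch builder = new IntColumnBuilderSketch();
        builder.initialize(4, "someColumn", true);
        for (int v : new int[] {10, 20, 30}) {
            builder.gatherCompressibilityStats(v);
            builder.appendFrom(v);
        }
        ByteBuffer out = builder.build();
        System.out.println(out.remaining()); // 12 bytes = three 4-byte ints
    }
}
```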