public class KryoSerializer extends Serializer implements Logging, scala.Serializable
A Spark serializer that uses the Kryo serialization library.
Note that this serializer is not guaranteed to be wire-compatible across different versions of Spark. It is intended to be used to serialize/de-serialize data within a single Spark application.
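Because the serializer is scoped to a single application, it is usually enabled through `SparkConf` rather than constructed directly. A minimal sketch, assuming a local master and a placeholder app name; `spark.serializer` and `spark.kryoserializer.buffer.max` are the standard configuration keys:

```scala
import org.apache.spark.SparkConf

// Select Kryo as the serializer for this application and raise the
// maximum Kryo buffer size (the value read back by maxBufferSizeMb()).
val conf = new SparkConf()
  .setAppName("kryo-example")  // placeholder
  .setMaster("local[*]")       // placeholder
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.max", "64m")
```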
| Constructor and Description |
|---|
| `KryoSerializer(SparkConf conf)` |
| Modifier and Type | Method and Description |
|---|---|
| `int` | `maxBufferSizeMb()` |
| `SerializerInstance` | `newInstance()` Creates a new `SerializerInstance`. |
| `com.esotericsoftware.kryo.Kryo` | `newKryo()` |
| `com.esotericsoftware.kryo.io.Output` | `newKryoOutput()` |
| `boolean` | `supportsRelocationOfSerializedObjects()` :: Private :: Returns true if this serializer supports relocation of its serialized objects and false otherwise. |
Methods inherited from class org.apache.spark.serializer.Serializer: defaultClassLoader, getSerializer, getSerializer, setDefaultClassLoader

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public KryoSerializer(SparkConf conf)
public int maxBufferSizeMb()
public com.esotericsoftware.kryo.io.Output newKryoOutput()
public com.esotericsoftware.kryo.Kryo newKryo()
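For callers that want to drive Kryo directly, `newKryo()` returns a `com.esotericsoftware.kryo.Kryo` instance configured from the `SparkConf` (class registrations, reference tracking, any custom registrator), and `newKryoOutput()` returns a Kryo `Output` buffer sized from the same conf. A minimal round-trip sketch, assuming a default conf; `writeClassAndObject`/`readClassAndObject` are standard Kryo calls and the `Map` payload is arbitrary:

```scala
import com.esotericsoftware.kryo.io.Input
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoSerializer

val serializer = new KryoSerializer(new SparkConf())
val kryo = serializer.newKryo()

// Write an object (with its class) into the conf-sized output buffer.
val output = serializer.newKryoOutput()
kryo.writeClassAndObject(output, Map("a" -> 1))
output.flush()

// Read it back from the bytes written so far.
val input = new Input(output.toBytes)
val back = kryo.readClassAndObject(input) // Map(a -> 1)
```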
public SerializerInstance newInstance()
Description copied from class: Serializer
Creates a new SerializerInstance.
Specified by: newInstance in class Serializer
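A minimal usage sketch, assuming a default `SparkConf`. In Scala, `SerializerInstance.serialize` and `deserialize` take an implicit `ClassTag` and operate on `java.nio.ByteBuffer`:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoSerializer

// A serializer instance is intended for use by one thread at a time,
// so each thread should obtain its own via newInstance().
val serializer = new KryoSerializer(new SparkConf())
val instance = serializer.newInstance()

val bytes = instance.serialize(Seq(1, 2, 3))       // java.nio.ByteBuffer
val back  = instance.deserialize[Seq[Int]](bytes)  // Seq(1, 2, 3)
```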
public boolean supportsRelocationOfSerializedObjects()
Description copied from class: Serializer
Returns true if this serializer supports relocation of its serialized objects and false otherwise. More specifically, the following should hold if a serializer supports relocation:
```
serOut.open()
position = 0
serOut.write(obj1)
serOut.flush()
position = # of bytes written to stream so far
obj1Bytes = output[0:position-1]
serOut.write(obj2)
serOut.flush()
position2 = # of bytes written to stream so far
obj2Bytes = output[position:position2-1]
serIn.open([obj2Bytes] concatenate [obj1Bytes]) should return (obj2, obj1)
```
In general, this property should hold for serializers that are stateless and that do not write special metadata at the beginning or end of the serialization stream.
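The property can be exercised end to end with the stream API. A minimal sketch, assuming a default `SparkConf` (under which `KryoSerializer` reports relocation support), mirroring the pseudocode above:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoSerializer

val instance = new KryoSerializer(new SparkConf()).newInstance()

// Serialize two objects, flushing after each so we can slice the output.
val out = new ByteArrayOutputStream()
val serOut = instance.serializeStream(out)
serOut.writeObject("obj1")
serOut.flush()
val position = out.size()   // bytes written so far
serOut.writeObject("obj2")
serOut.flush()
val position2 = out.size()
serOut.close()

val all = out.toByteArray
val obj1Bytes = all.slice(0, position)
val obj2Bytes = all.slice(position, position2)

// Reading the byte ranges back in swapped order yields (obj2, obj1).
val serIn = instance.deserializeStream(
  new ByteArrayInputStream(obj2Bytes ++ obj1Bytes))
println(serIn.readObject[String]())  // obj2
println(serIn.readObject[String]())  // obj1
serIn.close()
```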
This API is private to Spark; this method should not be overridden in third-party subclasses or called in user code, and it is subject to removal in future Spark releases.
See SPARK-7311 for more details.