public static class HiveStrategies.ParquetConversion
extends org.apache.spark.sql.catalyst.planning.GenericStrategy<org.apache.spark.sql.execution.SparkPlan>
| Modifier and Type | Class and Description |
|---|---|
| class | HiveStrategies.ParquetConversion.LogicalPlanHacks |
| class | HiveStrategies.ParquetConversion.PhysicalPlanHacks |
| Constructor and Description |
|---|
| HiveStrategies.ParquetConversion()<br>:: Experimental :: Finds table scans that would use the Hive SerDe and replaces them with our own native parquet table scan operator. |
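The conversion described above follows Catalyst's strategy contract: pattern-match the logical plan and, when a scan that would use the Hive SerDe is recognized, emit a native Parquet scan in its place. Below is a minimal sketch of that contract using simplified stand-in types rather than the real Spark classes; HiveTableScanNode, ParquetTableScanNode, and Strategy are invented for illustration only.

```scala
// Simplified stand-in types: the real LogicalPlan and SparkPlan hierarchies
// live in org.apache.spark.sql.catalyst.plans.logical and
// org.apache.spark.sql.execution; these placeholders exist only for the sketch.
sealed trait LogicalPlan
case class HiveTableScanNode(table: String) extends LogicalPlan
case class UnsupportedNode(name: String) extends LogicalPlan

sealed trait SparkPlan
case class ParquetTableScanNode(table: String) extends SparkPlan

// The GenericStrategy contract in miniature: produce candidate physical
// plans for a logical plan, or Nil when the strategy does not apply.
abstract class Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan]
}

// The conversion, sketched: a scan that would go through the Hive SerDe
// is replaced with a native Parquet scan; everything else is left alone.
object ParquetConversionSketch extends Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match {
    case HiveTableScanNode(table) => ParquetTableScanNode(table) :: Nil
    case _                        => Nil
  }
}
```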
| Modifier and Type | Method and Description |
|---|---|
| scala.collection.Seq<org.apache.spark.sql.execution.SparkPlan> | apply(org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan) |
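apply is called by the query planner, which tries each registered strategy in turn and keeps the first non-empty result. A hedged usage of the sketch above; the planner loop here is illustrative, not Spark's actual QueryPlanner.

```scala
object PlannerSketch {
  def main(args: Array[String]): Unit = {
    val strategies: Seq[Strategy] = Seq(ParquetConversionSketch)
    val plan: LogicalPlan = HiveTableScanNode("sales")

    // Try each strategy in order and keep the first non-empty result.
    val candidates: Seq[SparkPlan] = strategies.iterator
      .map(strategy => strategy(plan))
      .find(_.nonEmpty)
      .getOrElse(Nil)

    println(candidates) // List(ParquetTableScanNode(sales))
  }
}
```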
Methods inherited from interface org.apache.spark.Logging:
initializeIfNecessary, initializeLogging, isTraceEnabled, log, log_, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$Logging$$log__$eq, org$apache$spark$Logging$$log_

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public HiveStrategies.ParquetConversion()
TODO: Much of this logic is duplicated in HiveTableScan. Ideally we would do some refactoring, but since this is after the code freeze for 1.1, all of the logic lives here to minimize disruption.
Other issues:
- Much of this logic assumes case-insensitive resolution (see the sketch below).
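The case-insensitivity caveat refers to matching attribute names without regard to case when columns are resolved. A minimal sketch of that kind of matching, with a hypothetical resolve helper and column list that are not part of the real API:

```scala
// Hypothetical helper, for illustration only: resolve a requested column
// name against a schema while ignoring case, as the conversion logic assumes.
object CaseInsensitiveResolutionSketch {
  val schemaColumns: Seq[String] = Seq("Key", "Value")

  def resolve(name: String): Option[String] =
    schemaColumns.find(_.equalsIgnoreCase(name))

  def main(args: Array[String]): Unit = {
    println(resolve("key"))   // Some(Key)
    println(resolve("VALUE")) // Some(Value)
    println(resolve("other")) // None
  }
}
```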