Create table and insert the query result into it.
FileFormat for writing Hive tables.
TODO: implement the read logic.
Options for the Hive data source. Note that the rule DetermineHiveSerde will extract Hive serde/format information from these options.
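For instance, these options can be supplied when creating a Hive table through the data source API. The sketch below assumes a Hive-enabled SparkSession; the table name is hypothetical:

```sql
-- Store the Hive table as Parquet; the serde and input/output formats
-- are derived from the fileFormat option.
CREATE TABLE hive_records (key INT, value STRING)
USING hive
OPTIONS (fileFormat 'parquet')
```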
A wrapper class for the Hive input and output schema properties.
Command for writing the results of a query to the file system.
The syntax of using this command in SQL is:
INSERT OVERWRITE [LOCAL] DIRECTORY path [ROW FORMAT row_format] [STORED AS file_format] SELECT ...
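For example, a concrete use of this command might look like the following. The path, column names, and table name are hypothetical:

```sql
-- Write the query result as delimited text files to a local directory.
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/query_out'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
SELECT id, name FROM logs WHERE id > 0
```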
whether the path specified in storage is a local directory
the storage format used to describe how the query result is stored.
the logical plan representing the data to write
whether to overwrite the existing directory
Command for writing data out to a Hive table.
This class is mostly a mess, for legacy reasons: it evolved organically and had to follow Hive's internal implementations closely, which were themselves a mess. Please don't blame Reynold for this! He was just moving code around!
In the future we should converge the write path for Hive with the normal data source write path, as defined in org.apache.spark.sql.execution.datasources.FileFormatWriter.
the metadata of the table.
a map from the partition key to the partition value (optional). If the partition value is optional, dynamic partition insert will be performed. As an example, INSERT INTO tbl PARTITION (a=1, b=2) AS ... would have Map('a' -> Some('1'), 'b' -> Some('2')), and INSERT INTO tbl PARTITION (a=1, b) AS ... would have Map('a' -> Some('1'), 'b' -> None).
the logical plan representing the data to write.
whether to overwrite the existing table or partitions.
If true, only write if the partition does not exist. Only valid for static partitions.
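As a small illustration of the partition map described above, the two specs can be written as Scala maps (the val names here are illustrative, not from the Spark source):

```scala
// Fully static partition spec: INSERT INTO tbl PARTITION (a=1, b=2) ...
// Every partition column has a known value.
val staticSpec: Map[String, Option[String]] =
  Map("a" -> Some("1"), "b" -> Some("2"))

// Mixed spec: INSERT INTO tbl PARTITION (a=1, b) ...
// 'b' has no value here, so its value is resolved per row at write
// time (dynamic partition insert).
val dynamicSpec: Map[String, Option[String]] =
  Map("a" -> Some("1"), "b" -> None)

// Partition columns mapped to None are the dynamic ones.
val dynamicCols: Iterable[String] =
  dynamicSpec.collect { case (k, None) => k }
```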
Transforms the input by forking and running the specified script.
the set of expressions that should be passed to the script.
the command that should be executed.
the attributes that are produced by the script.
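A minimal SQL use of script transformation, assuming a hypothetical table src and using 'cat' as the script:

```sql
-- Fork 'cat' and pipe (key, value) rows through it; the script's
-- stdout is parsed back into the output attributes k and v.
SELECT TRANSFORM (key, value)
USING 'cat'
AS (k STRING, v STRING)
FROM src
```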
Create table and insert the query result into it.
the table description, which may contain serde, storage handler, etc.
the query whose result will be inserted into the new relation
the SaveMode to apply, e.g. what to do if the table already exists.
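Sketched in SQL, a create-table-as-select statement handled by this command might look like the following; the table names and storage format are hypothetical:

```sql
-- Create a Hive table with an explicit storage format and populate it
-- from a query in one statement.
CREATE TABLE summary
STORED AS ORC
AS SELECT key, count(*) AS cnt FROM src GROUP BY key
```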