public abstract class JdbcDialect
extends Object
implements scala.Serializable, org.apache.spark.internal.Logging
Currently, the only thing done by the dialect is type mapping. `getCatalystType` is used when reading from a JDBC table and `getJDBCType` is used when writing to a JDBC table. If `getCatalystType` returns `null`, the default type handling is used for the given JDBC type. Similarly, if `getJDBCType` returns `(null, None)`, the default type handling is used for the given Catalyst type.
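For example, a custom dialect can claim a URL prefix and customize both directions of the type mapping. The sketch below is illustrative, not a built-in dialect: the `jdbc:mydb` prefix, the `MyDialect` name, and the particular type choices are assumptions. Registration goes through `JdbcDialects.registerDialect`:

```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

object MyDialect extends JdbcDialect {
  // Claim URLs of the form jdbc:mydb://... (hypothetical prefix).
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Read path: map BIGINT UNSIGNED to DecimalType(20, 0); None falls
  // back to the default mapping for everything else.
  override def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] =
    if (sqlType == Types.BIGINT && typeName.equalsIgnoreCase("BIGINT UNSIGNED")) {
      Some(DecimalType(20, 0))
    } else None

  // Write path: store StringType as CLOB; None defers to the default mapping.
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("CLOB", Types.CLOB))
    case _ => None
  }
}

// Make the dialect visible to the JDBC data source.
JdbcDialects.registerDialect(MyDialect)
```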
| Constructor and Description |
|---|
| `JdbcDialect()` |
| Modifier and Type | Method and Description |
|---|---|
| `String[]` | `alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion)` Alter an existing table. |
| `void` | `beforeFetch(java.sql.Connection connection, scala.collection.immutable.Map<String,String> properties)` Override connection specific properties to run before a select is made. |
| `abstract boolean` | `canHandle(String url)` Check if this dialect instance can handle a certain jdbc url. |
| `AnalysisException` | `classifyException(String message, Throwable e)` Gets a dialect exception, classifies it and wraps it by `AnalysisException`. |
| `scala.Option<String>` | `compileAggregate(AggregateFunc aggFunction)` Converts an aggregate function to a String representing a SQL expression. |
| `scala.Option<String>` | `compileExpression(Expression expr)` Converts a V2 expression to a String representing a SQL expression. |
| `Object` | `compileValue(Object value)` Converts a value to a SQL expression. |
| `scala.Function1<Object,java.sql.Connection>` | `createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)` Returns a factory for creating connections to the given JDBC URL. |
| `String` | `createIndex(String indexName, String tableName, NamedReference[] columns, java.util.Map<NamedReference,java.util.Map<String,String>> columnsProperties, java.util.Map<String,String> properties)` Build a create index SQL statement. |
| `void` | `createSchema(java.sql.Statement statement, String schema, String comment)` Create a schema with an optional comment. |
| `String` | `dropIndex(String indexName, String tableName)` Build a drop index SQL statement. |
| `String` | `dropSchema(String schema, boolean cascade)` |
| `String` | `getAddColumnQuery(String tableName, String columnName, String dataType)` |
| `scala.Option<DataType>` | `getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)` Get the custom datatype mapping for the given jdbc meta information. |
| `String` | `getDeleteColumnQuery(String tableName, String columnName)` |
| `scala.Option<JdbcType>` | `getJDBCType(DataType dt)` Retrieve the jdbc / sql type for a given datatype. |
| `String` | `getLimitClause(Integer limit)` Returns the LIMIT clause for the SELECT statement. |
| `String` | `getRenameColumnQuery(String tableName, String columnName, String newName, int dbMajorVersion)` |
| `String` | `getSchemaCommentQuery(String schema, String comment)` |
| `String` | `getSchemaQuery(String table)` The SQL query that should be used to discover the schema of a table. |
| `String` | `getTableCommentQuery(String table, String comment)` |
| `String` | `getTableExistsQuery(String table)` Get the SQL query that should be used to find if the given table exists. |
| `String` | `getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)` |
| `String` | `getTruncateQuery(String table)` The SQL query that should be used to truncate a table. |
| `String` | `getTruncateQuery(String table, scala.Option<Object> cascade)` The SQL query that should be used to truncate a table. |
| `String` | `getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)` |
| `String` | `getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)` |
| `boolean` | `indexExists(java.sql.Connection conn, String indexName, String tableName, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)` Checks whether an index exists. |
| `scala.Option<Object>` | `isCascadingTruncateTable()` Return `Some[true]` iff `TRUNCATE TABLE` causes cascading by default. |
| `boolean` | `isSupportedFunction(String funcName)` Returns whether the database supports the given function. |
| `TableIndex[]` | `listIndexes(java.sql.Connection conn, String tableName, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)` Lists all the indexes in this table. |
| `String[][]` | `listSchemas(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)` Lists all the schemas in this database. |
| `String` | `quoteIdentifier(String colName)` Quotes the identifier. |
| `String` | `removeSchemaCommentQuery(String schema)` |
| `String` | `renameTable(String oldTable, String newTable)` Rename an existing table. |
| `boolean` | `schemasExists(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema)` Check whether the schema exists. |
| `boolean` | `supportsTableSample()` |
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.spark.internal.Logging:
$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
`public String[] alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion)`

Alter an existing table.

Parameters:
- `tableName` - The name of the table to be altered.
- `changes` - Changes to apply to the table.
- `dbMajorVersion` - (undocumented)
`public void beforeFetch(java.sql.Connection connection, scala.collection.immutable.Map<String,String> properties)`

Override connection specific properties to run before a select is made.

Parameters:
- `connection` - The connection object.
- `properties` - The connection properties. This is passed through from the relation.
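As a sketch of how this hook can be used, the dialect below runs a session-level statement before Spark issues its SELECT. The `SessionTuningDialect` name, the `jdbc:mydb` prefix, and the specific `SET` command are assumptions (PostgreSQL-style syntax shown as a placeholder):

```scala
import java.sql.Connection
import org.apache.spark.sql.jdbc.JdbcDialect

object SessionTuningDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Adjust the session before the fetch; the statement is closed even
  // if execution fails.
  override def beforeFetch(connection: Connection, properties: Map[String, String]): Unit = {
    val stmt = connection.createStatement()
    try stmt.execute("SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY")
    finally stmt.close()
  }
}
```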
`public abstract boolean canHandle(String url)`

Check if this dialect instance can handle a certain jdbc url.

Parameters:
- `url` - the jdbc url.

Throws:
- `NullPointerException` - if the url is null.
`public AnalysisException classifyException(String message, Throwable e)`

Gets a dialect exception, classifies it and wraps it by `AnalysisException`.

Parameters:
- `message` - The error message to be placed in the returned exception.
- `e` - The dialect-specific exception.

Returns:
- `AnalysisException` or its sub-class.
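A dialect can use this to surface driver errors with clearer semantics. A minimal sketch, assuming the vendor signals "object not found" with SQLState 42S02 (a common but not universal convention); the `AnalysisException` constructor shown is version-dependent, so treat this as illustrative:

```scala
import org.apache.spark.sql.AnalysisException
import org.apache.spark.sql.jdbc.JdbcDialect

object ClassifyingDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Map a vendor "object not found" SQLState to a clearer message;
  // anything else falls back to the default classification.
  override def classifyException(message: String, e: Throwable): AnalysisException = e match {
    case s: java.sql.SQLException if s.getSQLState == "42S02" =>
      new AnalysisException(s"Table or view not found. $message", cause = Some(e))
    case _ => super.classifyException(message, e)
  }
}
```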
`public scala.Option<String> compileAggregate(AggregateFunc aggFunction)`

Converts an aggregate function to a String representing a SQL expression.

Parameters:
- `aggFunction` - The aggregate function to be converted.
`public scala.Option<String> compileExpression(Expression expr)`

Converts a V2 expression to a String representing a SQL expression.

Parameters:
- `expr` - The V2 expression to be converted.
`public Object compileValue(Object value)`

Converts a value to a SQL expression.

Parameters:
- `value` - The value to be converted.
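For instance, a dialect whose literals need special formatting can override this. The sketch below wraps `java.sql.Date` values in a `DATE '...'` literal; the dialect name and literal syntax are assumptions, not tied to a real database:

```scala
import java.sql.Date
import org.apache.spark.sql.jdbc.JdbcDialect

object DateLiteralDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Wrap java.sql.Date values in a DATE '...' literal; everything else
  // keeps the default conversion rules.
  override def compileValue(value: Any): Any = value match {
    case d: Date => s"DATE '$d'"
    case _ => super.compileValue(value)
  }
}
```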
`public scala.Function1<Object,java.sql.Connection> createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)`

Returns a factory for creating connections to the given JDBC URL.

Parameters:
- `options` - JDBC options that contain the url, table and other information.

Throws:
- `IllegalArgumentException` - if the driver could not open a JDBC connection.
`public String createIndex(String indexName, String tableName, NamedReference[] columns, java.util.Map<NamedReference,java.util.Map<String,String>> columnsProperties, java.util.Map<String,String> properties)`

Build a create index SQL statement.

Parameters:
- `indexName` - the name of the index to be created
- `tableName` - the table on which the index is to be created
- `columns` - the columns on which the index is to be created
- `columnsProperties` - the properties of the columns on which the index is to be created
- `properties` - the properties of the index to be created
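A minimal sketch of such a statement builder, ignoring the property maps and assuming a generic `CREATE INDEX` syntax (the `IndexingDialect` name and URL prefix are hypothetical):

```scala
import org.apache.spark.sql.connector.expressions.NamedReference
import org.apache.spark.sql.jdbc.JdbcDialect

object IndexingDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Emit a plain CREATE INDEX statement from the column references,
  // ignoring the per-column and per-index property maps.
  override def createIndex(
      indexName: String,
      tableName: String,
      columns: Array[NamedReference],
      columnsProperties: java.util.Map[NamedReference, java.util.Map[String, String]],
      properties: java.util.Map[String, String]): String = {
    val cols = columns.map(c => quoteIdentifier(c.fieldNames.mkString("."))).mkString(", ")
    s"CREATE INDEX ${quoteIdentifier(indexName)} ON ${quoteIdentifier(tableName)} ($cols)"
  }
}
```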
`public void createSchema(java.sql.Statement statement, String schema, String comment)`

Create a schema with an optional comment.

Parameters:
- `statement` - (undocumented)
- `schema` - (undocumented)
- `comment` - (undocumented)
`public String dropIndex(String indexName, String tableName)`

Build a drop index SQL statement.

Parameters:
- `indexName` - the name of the index to be dropped.
- `tableName` - the table from which the index is to be dropped.
`public String dropSchema(String schema, boolean cascade)`

`public String getAddColumnQuery(String tableName, String columnName, String dataType)`
`public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)`

Get the custom datatype mapping for the given jdbc meta information.

Parameters:
- `sqlType` - The sql type (see `java.sql.Types`)
- `typeName` - The sql type name (e.g. "BIGINT UNSIGNED")
- `size` - The size of the type.
- `md` - Result metadata associated with this type.

Returns:
- The actual `DataType` (subclasses of `DataType`) or null if the default type mapping should be used.

`public String getDeleteColumnQuery(String tableName, String columnName)`
`public scala.Option<JdbcType> getJDBCType(DataType dt)`

Retrieve the jdbc / sql type for a given datatype.

Parameters:
- `dt` - The datatype (e.g. `StringType`)
`public String getLimitClause(Integer limit)`

Returns the LIMIT clause for the SELECT statement.

Parameters:
- `limit` - (undocumented)
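A dialect for a database that prefers ANSI `FETCH FIRST` syntax over `LIMIT` could override this. A sketch, with the dialect name and URL prefix assumed:

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object FetchFirstDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Emit ANSI FETCH FIRST syntax instead of LIMIT; an empty string
  // means no limit was pushed down.
  override def getLimitClause(limit: Integer): String =
    if (limit > 0) s"FETCH FIRST $limit ROWS ONLY" else ""
}
```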
`public String getRenameColumnQuery(String tableName, String columnName, String newName, int dbMajorVersion)`

`public String getSchemaCommentQuery(String schema, String comment)`
`public String getSchemaQuery(String table)`

The SQL query that should be used to discover the schema of a table.

Parameters:
- `table` - The name of the table.

`public String getTableCommentQuery(String table, String comment)`
`public String getTableExistsQuery(String table)`

Get the SQL query that should be used to find if the given table exists.

Parameters:
- `table` - The name of the table.
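A common pattern is a probe query that fails if the table is missing but never scans rows. A sketch (dialect name and URL prefix assumed):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object ProbeDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // WHERE 1=0 guarantees the query touches no rows; it still errors
  // out if the table does not exist.
  override def getTableExistsQuery(table: String): String =
    s"SELECT 1 FROM $table WHERE 1=0"
}
```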
`public String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)`

`public String getTruncateQuery(String table)`

The SQL query that should be used to truncate a table.

Parameters:
- `table` - The table to truncate
`public String getTruncateQuery(String table, scala.Option<Object> cascade)`

The SQL query that should be used to truncate a table.

Parameters:
- `table` - The table to truncate
- `cascade` - Whether or not to cascade the truncation
`public String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)`

`public String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)`
`public boolean indexExists(java.sql.Connection conn, String indexName, String tableName, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)`

Checks whether an index exists.

Parameters:
- `conn` - (undocumented)
- `indexName` - the name of the index
- `tableName` - the table on which the index is to be checked
- `options` - JDBCOptions of the table

Returns:
- `true` if the index with `indexName` exists in the table with `tableName`, `false` otherwise
`public scala.Option<Object> isCascadingTruncateTable()`

Return `Some[true]` iff `TRUNCATE TABLE` causes cascading by default.

Returns:
- `Some[true]`: `TRUNCATE TABLE` causes cascading.
- `Some[false]`: `TRUNCATE TABLE` does not cause cascading.
- `None`: The behavior of `TRUNCATE TABLE` is unknown (default).
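For a database whose `TRUNCATE` supports an explicit `CASCADE` keyword, this method and `getTruncateQuery` can be kept consistent. A sketch assuming PostgreSQL-style syntax (the dialect name and URL prefix are hypothetical):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object TruncatingDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Append CASCADE only when explicitly requested.
  override def getTruncateQuery(table: String, cascade: Option[Boolean]): String =
    cascade match {
      case Some(true) => s"TRUNCATE TABLE ONLY $table CASCADE"
      case _ => s"TRUNCATE TABLE ONLY $table"
    }

  // A bare TRUNCATE does not cascade here, so report Some(false).
  override def isCascadingTruncateTable(): Option[Boolean] = Some(false)
}
```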
`public boolean isSupportedFunction(String funcName)`

Returns whether the database supports the given function.

Parameters:
- `funcName` - Upper-cased function name
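Dialects typically check the name against a whitelist of functions they can translate. A sketch with an assumed supported set (dialect name and URL prefix hypothetical):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object PushdownDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Only functions named here are claimed as translatable, so Spark
  // will not push down any other function call.
  private val supportedFunctions = Set("ABS", "COALESCE", "UPPER", "LOWER")

  override def isSupportedFunction(funcName: String): Boolean =
    supportedFunctions.contains(funcName)
}
```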
`public TableIndex[] listIndexes(java.sql.Connection conn, String tableName, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)`

Lists all the indexes in this table.

Parameters:
- `conn` - (undocumented)
- `tableName` - (undocumented)
- `options` - (undocumented)
`public String[][] listSchemas(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)`

Lists all the schemas in this database.

Parameters:
- `conn` - (undocumented)
- `options` - (undocumented)
`public String quoteIdentifier(String colName)`

Quotes the identifier.

Parameters:
- `colName` - (undocumented)
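A MySQL-style dialect, for example, would quote with backticks rather than double quotes. A sketch (dialect name and URL prefix assumed):

```scala
import org.apache.spark.sql.jdbc.JdbcDialect

object BacktickDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")

  // Backtick quoting; embedded backticks are escaped by doubling them.
  override def quoteIdentifier(colName: String): String =
    s"`${colName.replace("`", "``")}`"
}
```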
`public String removeSchemaCommentQuery(String schema)`

`public String renameTable(String oldTable, String newTable)`

Rename an existing table.
Parameters:
- `oldTable` - The existing table.
- `newTable` - New name of the table.
`public boolean schemasExists(java.sql.Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema)`

Check whether the schema exists.

Parameters:
- `conn` - (undocumented)
- `options` - (undocumented)
- `schema` - (undocumented)

`public boolean supportsTableSample()`
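Putting it together, a registered dialect is consulted automatically by the JDBC data source; nothing else in the read path changes. A usage sketch referring to the hypothetical `MyDialect` defined earlier (the URL, table, and options are placeholders):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.jdbc.JdbcDialects

val spark = SparkSession.builder().appName("jdbc-dialect-demo").getOrCreate()

// Register once per JVM, before any JDBC reads or writes.
JdbcDialects.registerDialect(MyDialect)

// canHandle(url) selects MyDialect for this URL; getCatalystType then
// shapes the schema Spark infers for the table.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mydb://host:5555/sales")
  .option("dbtable", "orders")
  .load()
```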