pyspark.pandas.read_sql_table

pyspark.pandas.read_sql_table(table_name, con, schema=None, index_col=None, columns=None, **options)

Read SQL database table into a DataFrame.

Given a table name and a JDBC URI, returns a DataFrame.

Parameters
table_name : str

Name of SQL table in database.

con : str

A JDBC URI for connecting to the database, provided as a str.

Note

The URI must be a JDBC URI, not a Python database URI.

schema : str, default None

Name of SQL schema in database to query (if database flavor supports this). Uses default schema if None (default).

index_col : str or list of str, optional, default: None

Column(s) to set as the index (passing a list of names creates a MultiIndex).

columns : list, default None

List of column names to select from SQL table.

options : dict

All other options are passed directly to Spark’s JDBC data source.

Returns
DataFrame

A SQL table is returned as a two-dimensional data structure with labeled axes.

See also

read_sql_query

Read SQL query into a DataFrame.

read_sql

Read SQL query or database table into a DataFrame.

Examples

>>> ps.read_sql_table('table_name', 'jdbc:postgresql:db_name')
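Because con must be in JDBC form rather than a Python database URI, a SQLAlchemy-style URI has to be rewritten before it can be used here. A minimal sketch of such a conversion, assuming a hypothetical helper to_jdbc_uri (not part of the pyspark API) that moves the embedded credentials into keyword options suitable for forwarding via **options:

```python
from urllib.parse import urlsplit

def to_jdbc_uri(python_uri):
    """Hypothetical helper: convert a SQLAlchemy-style URI such as
    'postgresql://user:pass@host:5432/db' into the JDBC form
    'jdbc:postgresql://host:5432/db', returning any credentials as a
    separate dict of JDBC options."""
    parts = urlsplit(python_uri)
    netloc = parts.hostname or ""
    if parts.port:
        netloc += f":{parts.port}"
    jdbc_uri = f"jdbc:{parts.scheme}://{netloc}{parts.path}"
    options = {}
    if parts.username:
        options["user"] = parts.username
    if parts.password:
        options["password"] = parts.password
    return jdbc_uri, options

jdbc, opts = to_jdbc_uri("postgresql://reader:secret@localhost:5432/db_name")
# jdbc -> "jdbc:postgresql://localhost:5432/db_name"
# opts -> {"user": "reader", "password": "secret"}

# The credentials can then be forwarded as extra JDBC options
# (placeholder table and connection details, for illustration only):
# psdf = ps.read_sql_table("table_name", jdbc, index_col="id", **opts)
```

The helper name, URI layout, and credential-splitting logic above are illustrative assumptions; only the requirement that con be a JDBC URI comes from this API.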