pyspark.pandas.read_html
pyspark.pandas.read_html(io: Union[str, Any], match: str = '.+', flavor: Optional[str] = None, header: Union[int, List[int], None] = None, index_col: Union[int, List[int], None] = None, skiprows: Union[int, List[int], slice, None] = None, attrs: Optional[Dict[str, str]] = None, parse_dates: bool = False, thousands: str = ',', encoding: Optional[str] = None, decimal: str = '.', converters: Optional[Dict] = None, na_values: Optional[Any] = None, keep_default_na: bool = True, displayed_only: bool = True) → List[pyspark.pandas.frame.DataFrame]

Read HTML tables into a list of DataFrame objects.

Parameters
- io : str or file-like
A URL, a file-like object, or a raw string containing HTML. Note that lxml only accepts the http, ftp and file URL protocols. If you have a URL that starts with 'https' you might try removing the 's'.
- match : str or compiled regular expression, optional
The set of tables containing text matching this regex or string will be returned. Unless the HTML is extremely simple you will probably need to pass a non-empty string here. Defaults to ‘.+’ (match any non-empty string). The default value will return all tables contained on a page. This value is converted to a regular expression so that there is consistent behavior between Beautiful Soup and lxml.
- flavor : str or None, container of strings
The parsing engine to use. 'bs4' and 'html5lib' are synonymous with each other; they are both there for backwards compatibility. The default of None tries to use lxml to parse, and if that fails it falls back on bs4 + html5lib.
- header : int or list-like or None, optional
The row (or list of rows for a MultiIndex) to use to make the column headers.
- index_col : int or list-like or None, optional
The column (or list of columns) to use to create the index.
- skiprows : int or list-like or slice or None, optional
0-based. Number of rows to skip after parsing the column integer. If a sequence of integers or a slice is given, will skip the rows indexed by that sequence. Note that a single element sequence means ‘skip the nth row’ whereas an integer means ‘skip n rows’.
- attrs : dict or None, optional
This is a dictionary of attributes that you can pass to use to identify the table in the HTML. These are not checked for validity before being passed to lxml or Beautiful Soup. However, these attributes must be valid HTML table attributes to work correctly. For example, attrs = {'id': 'table'} is a valid attribute dictionary because the 'id' HTML tag attribute is a valid HTML attribute for any HTML tag. attrs = {'asdf': 'table'} is not a valid attribute dictionary because 'asdf' is not a valid HTML attribute even if it is a valid XML attribute. Valid table attributes are listed in the HTML 4.01 specification; the HTML5 working draft contains the latest information on table attributes for the modern web.
- parse_dates : bool, optional
See read_csv() for more details.
- thousands : str, optional
Separator to use to parse thousands. Defaults to ','.
- encoding : str or None, optional
The encoding used to decode the web page. Defaults to None. None preserves the previous encoding behavior, which depends on the underlying parser library (e.g., the parser library will try to use the encoding provided by the document).
- decimal : str, default '.'
Character to recognize as decimal point (example: use ‘,’ for European data).
- converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be integers or column labels, values are functions that take one input argument, the cell (not column) content, and return the transformed content.
- na_values : iterable, default None
Custom NA values.
- keep_default_na : bool, default True
If na_values are specified and keep_default_na is False, the default NaN values are overridden; otherwise the specified na_values are appended to the default NaN values.
- displayed_only : bool, default True
Whether elements with "display: none" should be parsed.
Returns
- dfs : list of DataFrames
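
Examples

The snippet below is a minimal sketch of the common case: parsing matching tables out of a raw HTML string, as io accepts above. The HTML fragment, the table id 'prices' and the column names are made up for illustration; a URL or file-like object would be handled the same way.

import pyspark.pandas as ps

html = """
<table id="prices">
  <tr><th>item</th><th>price</th></tr>
  <tr><td>apple</td><td>1,200</td></tr>
  <tr><td>pear</td><td>950</td></tr>
</table>
"""

# Keep only tables whose text matches the regex and whose <table> tag
# carries id="prices"; row 0 supplies the column headers.
dfs = ps.read_html(html, match="apple", attrs={"id": "prices"}, header=0)

# read_html always returns a list, even when a single table matches.
df = dfs[0]
print(df.head())

With the default thousands=',' the 'price' cell '1,200' is parsed as the number 1200.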
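A second sketch, again on a made-up HTML fragment with European-style numbers, shows the value-parsing parameters working together: thousands and decimal for locale-specific number formats, na_values plus keep_default_na for a custom missing-value marker, and converters for per-cell post-processing of one column. The city names and markers are purely illustrative.

import pyspark.pandas as ps

html = """
<table>
  <tr><th>city</th><th>population</th><th>area_km2</th></tr>
  <tr><td>Berlin</td><td>3.645.000</td><td>891,7</td></tr>
  <tr><td>Bremen</td><td>no data</td><td>419,4</td></tr>
</table>
"""

dfs = ps.read_html(
    html,
    header=0,
    thousands=".",                    # '3.645.000' is read as 3645000
    decimal=",",                      # '891,7' is read as 891.7
    na_values=["no data"],            # treat 'no data' as missing
    keep_default_na=True,             # keep the default NaN markers as well
    converters={"city": str.upper},   # applied to each cell of the 'city' column
)
print(dfs[0])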