Conversation
allow_large_results option to read_gbq_query. Set to False to enable faster queries
bigframes/pandas/io/api.py
    use_cache: Optional[bool] = None,
    col_order: Iterable[str] = (),
    dry_run: bool = False,
    allow_large_results: bool = True,
Should allow_large_results also default to None? This would allow it to inherit its value from ComputeOptions.allow_large_results.
If we do so, it'd technically be a breaking change. Users with query results > 10 GB would have to set this option to True.
Might be worth it for consistency with other places, though?
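To make the suggestion concrete, here is a minimal sketch of the None-default inheritance pattern being discussed: leaving the parameter as None defers to the global compute option, while an explicit bool always wins. The names mirror bigframes (`ComputeOptions.allow_large_results`, `read_gbq_query`) but the implementation below is purely illustrative, not the library's actual code.

```python
from typing import Optional


class ComputeOptions:
    """Stand-in for bpd.options.compute (illustrative only)."""

    allow_large_results: bool = False


compute_options = ComputeOptions()


def read_gbq_query(query: str, *, allow_large_results: Optional[bool] = None) -> bool:
    # None means "inherit from the global option"; an explicit True/False
    # passed by the caller takes precedence.
    if allow_large_results is None:
        allow_large_results = compute_options.allow_large_results
    # Return the resolved value so the fallback behavior is easy to inspect.
    return allow_large_results
```

The breaking-change concern above follows directly: today's default is True, so switching to None-with-inheritance silently flips the effective default to whatever the global option holds.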
Doctest looks like a real failure: It seems I'm not setting the index correctly.
pytest --doctest-modules bigframes/session/__init__.py::bigframes.session.Session.read_gbq_query
This has been fixed. I've confirmed the doctest passes locally and have added two tests for the columns and index_col arguments.
Looks like I have some more failing tests to address. Getting started on that now.
A lot more failures now. I'll take a look at this when I get back, I guess.
I think this is a real failure. We aren't sorting by the …
allow_large_results option to read_gbq_query, aligning with bpd.options.compute.allow_large_results option
This looks like a real failure: We will need to handle the case where this data is stored locally if we use the fast query path (aka the job-optional path). Edit: tested locally after switching to …
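An illustrative sketch of the dispatch described in the comment above: when allow_large_results is False, the query takes a job-optional fast path whose results come back inline and are held locally, so downstream code must not assume a destination table exists. This is not bigframes' actual implementation; every function name below is hypothetical.

```python
def run_query_job(query: str):
    # Placeholder for the path that creates a full query job and writes
    # results to a destination table (required for results over ~10 GB).
    return ("destination_table", query)


def run_job_optional_query(query: str):
    # Placeholder for the job-optional path: results are returned inline
    # and stored locally, with no destination table.
    return ("local", query)


def execute_query(query: str, allow_large_results: bool):
    if allow_large_results:
        return run_query_job(query)
    # Fast path: skip job creation for lower latency; callers must handle
    # locally-stored results here.
    return run_job_optional_query(query)
```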
Fixes #<issue_number_goes_here> 🦕
BEGIN_COMMIT_OVERRIDE
feat!: add allow_large_results option to read_gbq_query, aligning with bpd.options.compute.allow_large_results option (#1935)
Release-As: 2.18.0
END_COMMIT_OVERRIDE