content (string, 1-103k chars, nullable ⌀) | path (string, 8-216 chars) | filename (string, 2-179 chars) | language (string, 15 classes) | size_bytes (int64, 2-189k) | quality_score (float64, 0.5-0.95) | complexity (float64, 0-1) | documentation_ratio (float64, 0-1) | repository (string, 5 classes) | stars (int64, 0-1k) | created_date (string date, 2023-07-10 19:21:08 to 2025-07-09 19:11:45) | license (string, 4 classes) | is_test (bool, 2 classes) | file_hash (string, 32 chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\bokeh_util.cpython-313.pyc | bokeh_util.cpython-313.pyc | Other | 3,640 | 0.8 | 0 | 0 | react-lib | 704 | 2025-04-12T06:43:25.531891 | GPL-3.0 | false | 76ab22a9d5ee3902701f93aee2eaaf4b |
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\data.cpython-313.pyc | data.cpython-313.pyc | Other | 4,046 | 0.8 | 0.036145 | 0.026667 | react-lib | 377 | 2025-03-05T23:01:07.478934 | Apache-2.0 | false | 53718577e1655d4d08cf618db9bd6b50 |
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\mpl_renderer.cpython-313.pyc | mpl_renderer.cpython-313.pyc | Other | 25,766 | 0.8 | 0.012158 | 0.003215 | awesome-app | 259 | 2024-02-28T22:25:21.944478 | MIT | false | 61b7f14f8fa725531b70bfde65e3831b |
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\mpl_util.cpython-313.pyc | mpl_util.cpython-313.pyc | Other | 5,014 | 0.8 | 0 | 0 | vue-tools | 864 | 2023-08-28T08:17:10.458862 | GPL-3.0 | false | dfebb6b404f73482a876b82696cf1a52 |
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\renderer.cpython-313.pyc | renderer.cpython-313.pyc | Other | 7,080 | 0.95 | 0.03681 | 0 | vue-tools | 495 | 2024-03-08T11:28:44.032705 | BSD-3-Clause | false | c84e0cba72b56f48d545522faf7bccd6 |
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\_build_config.cpython-313.pyc | _build_config.cpython-313.pyc | Other | 2,182 | 0.8 | 0.166667 | 0 | python-kit | 528 | 2025-05-01T03:41:47.235587 | GPL-3.0 | false | 9ac537a27b7d374be17278ea7381245d |
\n\n | .venv\Lib\site-packages\contourpy\util\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 332 | 0.7 | 0 | 0 | react-lib | 69 | 2024-02-17T02:36:10.071271 | BSD-3-Clause | false | 1ef281ab52d15e1b80a6bfd2f2330869 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\array.cpython-313.pyc | array.cpython-313.pyc | Other | 12,812 | 0.8 | 0.028037 | 0.009524 | python-kit | 647 | 2023-11-23T10:28:06.883890 | MIT | false | 2945b0e7065ebaf7f46116602c9f1fc0 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\chunk.cpython-313.pyc | chunk.cpython-313.pyc | Other | 3,633 | 0.7 | 0.035714 | 0 | awesome-app | 708 | 2023-10-11T06:00:17.642433 | MIT | false | f9d4f9602f766d8b41eb11a3510a2345 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\convert.cpython-313.pyc | convert.cpython-313.pyc | Other | 27,009 | 0.8 | 0.064171 | 0 | python-kit | 334 | 2025-04-05T07:19:55.851381 | Apache-2.0 | false | fe9eab08c41d7e097aa99a2a3190e20f |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\dechunk.cpython-313.pyc | dechunk.cpython-313.pyc | Other | 8,221 | 0.8 | 0 | 0 | awesome-app | 940 | 2025-05-30T04:50:17.760255 | MIT | false | 24bdfa6bf734eaafff76844b0bb70b9a |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\enum_util.cpython-313.pyc | enum_util.cpython-313.pyc | Other | 2,088 | 0.7 | 0 | 0.037037 | python-kit | 199 | 2024-02-14T11:38:35.606484 | Apache-2.0 | false | 7cb0983f329214e55392add4ebd3be22 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\typecheck.cpython-313.pyc | typecheck.cpython-313.pyc | Other | 11,925 | 0.8 | 0 | 0 | python-kit | 113 | 2023-10-17T23:15:06.016584 | GPL-3.0 | false | 683d819a7d0463b63f088a2bfaaef6f9 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\types.cpython-313.pyc | types.cpython-313.pyc | Other | 487 | 0.7 | 0 | 0 | node-utils | 999 | 2024-08-29T03:38:40.333303 | GPL-3.0 | false | d6338ce9b55f290a445b435945018383 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\_version.cpython-313.pyc | _version.cpython-313.pyc | Other | 210 | 0.7 | 0 | 0 | react-lib | 592 | 2024-11-17T01:02:10.803881 | MIT | false | 4980d0b0b4503fc604449b79efb18606 |
\n\n | .venv\Lib\site-packages\contourpy\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 12,770 | 0.95 | 0.113636 | 0 | python-kit | 941 | 2024-05-11T11:28:09.131033 | MIT | false | 64cfcff3585f1ee2c1ab6e102f8ebf9b |
pip\n | .venv\Lib\site-packages\contourpy-1.3.2.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | vue-tools | 92 | 2023-07-28T07:19:34.083072 | Apache-2.0 | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
BSD 3-Clause License\n\nCopyright (c) 2021-2025, ContourPy Developers.\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
| .venv\Lib\site-packages\contourpy-1.3.2.dist-info\LICENSE | LICENSE | Other | 1,563 | 0.7 | 0 | 0 | vue-tools | 11 | 2024-12-02T10:35:57.670009 | GPL-3.0 | false | 0186404b1452548f04e644440ce58e3c |
Metadata-Version: 2.1\nName: contourpy\nVersion: 1.3.2\nSummary: Python library for calculating contours of 2D quadrilateral grids\nAuthor-Email: Ian Thomas <ianthomas23@gmail.com>\nLicense: BSD 3-Clause License\n \n Copyright (c) 2021-2025, ContourPy Developers.\n All rights reserved.\n \n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n \n 1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n \n 2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n \n 3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n \n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n \nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: Intended Audience :: Science/Research\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Programming Language :: C++\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Topic :: Scientific/Engineering :: Information Analysis\nClassifier: Topic :: Scientific/Engineering :: Mathematics\nClassifier: Topic :: Scientific/Engineering :: Visualization\nProject-URL: Homepage, https://github.com/contourpy/contourpy\nProject-URL: Changelog, https://contourpy.readthedocs.io/en/latest/changelog.html\nProject-URL: Documentation, https://contourpy.readthedocs.io\nProject-URL: Repository, https://github.com/contourpy/contourpy\nRequires-Python: >=3.10\nRequires-Dist: numpy>=1.23\nProvides-Extra: docs\nRequires-Dist: furo; extra == "docs"\nRequires-Dist: sphinx>=7.2; extra == "docs"\nRequires-Dist: sphinx-copybutton; extra == "docs"\nProvides-Extra: bokeh\nRequires-Dist: bokeh; extra == "bokeh"\nRequires-Dist: selenium; extra == "bokeh"\nProvides-Extra: mypy\nRequires-Dist: contourpy[bokeh,docs]; extra == "mypy"\nRequires-Dist: bokeh; extra == "mypy"\nRequires-Dist: docutils-stubs; extra == "mypy"\nRequires-Dist: mypy==1.15.0; extra == "mypy"\nRequires-Dist: types-Pillow; extra == "mypy"\nProvides-Extra: test\nRequires-Dist: contourpy[test-no-images]; extra == "test"\nRequires-Dist: matplotlib; extra == "test"\nRequires-Dist: Pillow; extra == "test"\nProvides-Extra: test-no-images\nRequires-Dist: pytest; extra == 
"test-no-images"\nRequires-Dist: pytest-cov; extra == "test-no-images"\nRequires-Dist: pytest-rerunfailures; extra == "test-no-images"\nRequires-Dist: pytest-xdist; extra == "test-no-images"\nRequires-Dist: wurlitzer; extra == "test-no-images"\nDescription-Content-Type: text/markdown\n\n<img alt="ContourPy" src="https://raw.githubusercontent.com/contourpy/contourpy/main/docs/_static/contourpy_logo_horiz.svg" height="90">\n\nContourPy is a Python library for calculating contours of 2D quadrilateral grids. It is written in C++11 and wrapped using pybind11.\n\nIt contains the 2005 and 2014 algorithms used in Matplotlib as well as a newer algorithm that includes more features and is available in both serial and multithreaded versions. It provides an easy way for Python libraries to use contouring algorithms without having to include Matplotlib as a dependency.\n\n * **Documentation**: https://contourpy.readthedocs.io\n * **Source code**: https://github.com/contourpy/contourpy\n\n| | |\n| --- | --- |\n| Latest release | [](https://pypi.python.org/pypi/contourpy) [](https://anaconda.org/conda-forge/contourpy) |\n| Downloads | [](https://pepy.tech/project/contourpy) |\n| Python version | [](https://pypi.org/project/contourpy/) |\n| Coverage | [](https://app.codecov.io/gh/contourpy/contourpy) |\n
| .venv\Lib\site-packages\contourpy-1.3.2.dist-info\METADATA | METADATA | Other | 5,461 | 0.8 | 0.031915 | 0.02439 | awesome-app | 507 | 2024-10-02T10:12:46.477251 | GPL-3.0 | false | aac2dbadfdb91073def3c19ff53f20e9 |
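The ContourPy README embedded in the METADATA record above describes a library for computing contours of 2D quadrilateral grids. A minimal usage sketch, not taken from this dataset and assuming `contourpy >= 1.0` and `numpy` are installed, looks like this:

```python
# Minimal sketch (assumptions: contourpy >= 1.0 and numpy available):
# compute contour lines of a small 2D grid with the serial algorithm.
import numpy as np
import contourpy

x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
z = np.sin(4 * x) * np.cos(4 * y)

gen = contourpy.contour_generator(
    x, y, z, name="serial", line_type=contourpy.LineType.Separate
)
lines = gen.lines(0.5)  # list of (N, 2) float arrays: vertices of the z = 0.5 contour
print(len(lines), lines[0].shape if lines else None)
```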
contourpy-1.3.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\ncontourpy-1.3.2.dist-info/LICENSE,sha256=hVMQLgMAHdlmJ4CYzAJ9BoPEErNOoUE1GwYL5DMPOak,1563\ncontourpy-1.3.2.dist-info/METADATA,sha256=DYYPfLivjoUAXCCUmeDKJdqFFbS2eRXui4i4hSwxbAg,5461\ncontourpy-1.3.2.dist-info/RECORD,,\ncontourpy-1.3.2.dist-info/WHEEL,sha256=suq8ARrxbiI7iLH3BgK-82uzxQ-4Hm-m8w01oCokrtA,85\ncontourpy/__init__.py,sha256=gxEB1RFZBZ4x2HHO3UZuEbnsCZvsYdgyjSqrHgaSj_0,12116\ncontourpy/__pycache__/__init__.cpython-313.pyc,,\ncontourpy/__pycache__/_version.cpython-313.pyc,,\ncontourpy/__pycache__/array.cpython-313.pyc,,\ncontourpy/__pycache__/chunk.cpython-313.pyc,,\ncontourpy/__pycache__/convert.cpython-313.pyc,,\ncontourpy/__pycache__/dechunk.cpython-313.pyc,,\ncontourpy/__pycache__/enum_util.cpython-313.pyc,,\ncontourpy/__pycache__/typecheck.cpython-313.pyc,,\ncontourpy/__pycache__/types.cpython-313.pyc,,\ncontourpy/_contourpy.cp313-win_amd64.lib,sha256=0Z_5BCIf_Y0ej6W8uboOgD0k8iD8hqCQiWvm5-KHH1c,2068\ncontourpy/_contourpy.cp313-win_amd64.pyd,sha256=BlWKllRBOjucDLznow4XCWrqlIBpZZv-ntFn44UbVh8,475136\ncontourpy/_contourpy.pyi,sha256=03SbkTNX4NqHwQhA2iiqZ62mvuy18oBKp-H9TN3BS90,7321\ncontourpy/_version.py,sha256=uCv73rgSEXLv0WZ2fvJpwCpBCFcxOh94XqSDccIBrL8,23\ncontourpy/array.py,sha256=telv6KYqRk7f985-FbrUrmTTigFQrIMZQHCutRsbGTY,9240\ncontourpy/chunk.py,sha256=vk1Eg6NVxz13MWz4xlbHA_PCGTaD18IeUZG0r2Wc2ZE,3374\ncontourpy/convert.py,sha256=CesgJzYlH_2CwQHVLH__4vB2eao3r7QA3WH8JL45z88,26775\ncontourpy/dechunk.py,sha256=fh4p5UaRBxoxDrYs3rcL3UISz2Hj7Ukhel7hmNulTqk,7963\ncontourpy/enum_util.py,sha256=hSwkZ9OZ4aVdFh6D774C0xzOPrMw0bxAbZdWFRptRtw,1576\ncontourpy/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\ncontourpy/typecheck.py,sha256=xySA3K7iawICRjJEisFU8lZi9ca3w1OlksYw-HlOZgg,10950\ncontourpy/types.py,sha256=Ob3EIVmG5oc3HjP_MX-r7RsJR7hVnffqmI6xR_1qcgA,260\ncontourpy/util/__init__.py,sha256=bqLYeBm_n6tulOyk8mgqdLUbwX2GttvCm0KkeNbASlY,123\ncontourpy/util/__pycache__/__init__.cpython-313.pyc,,\ncontourpy/util/__pycache__/_build_config.cpython-313.pyc,,\ncontourpy/util/__pycache__/bokeh_renderer.cpython-313.pyc,,\ncontourpy/util/__pycache__/bokeh_util.cpython-313.pyc,,\ncontourpy/util/__pycache__/data.cpython-313.pyc,,\ncontourpy/util/__pycache__/mpl_renderer.cpython-313.pyc,,\ncontourpy/util/__pycache__/mpl_util.cpython-313.pyc,,\ncontourpy/util/__pycache__/renderer.cpython-313.pyc,,\ncontourpy/util/_build_config.py,sha256=M0TV1E3XffANzlmchlZt0vJ-8Bul9mUP5zZCQlESkZ4,2085\ncontourpy/util/bokeh_renderer.py,sha256=_nB5FVcM9MjTTGJr91I0dxUs3fo4JNfdbEeqkyZwgoU,14298\ncontourpy/util/bokeh_util.py,sha256=0YlD2IPzW_TSS_s6tG6zGTThJwPfqsEo_pJJsUVbJhA,2878\ncontourpy/util/data.py,sha256=lHEByNIlie_4yQJal5iOFUxfFM9pfTTnQRuyP9BKdRs,2664\ncontourpy/util/mpl_renderer.py,sha256=NcfT8P0cBiyLvcnJIfq3VxZw6dYr7Ex8UikPS2_rRko,20660\ncontourpy/util/mpl_util.py,sha256=6oMTT3ymlMg6Pg99RQSXBDa3raZEZUDZCPu3y-f4S0E,3529\ncontourpy/util/renderer.py,sha256=UKSl9adWHreZLDwjnoJdAK91CKV39n3kKBswaf3Mzp0,5284\n
| .venv\Lib\site-packages\contourpy-1.3.2.dist-info\RECORD | RECORD | Other | 3,031 | 0.7 | 0 | 0 | react-lib | 190 | 2024-12-29T21:00:25.439503 | GPL-3.0 | false | 5b59f0f7dbe47f37c4072d6bef567efc |
Wheel-Version: 1.0\nGenerator: meson\nRoot-Is-Purelib: false\nTag: cp313-cp313-win_amd64
| .venv\Lib\site-packages\contourpy-1.3.2.dist-info\WHEEL | WHEEL | Other | 85 | 0.5 | 0 | 0 | awesome-app | 42 | 2024-11-13T06:26:57.134761 | Apache-2.0 | false | 51337c97620c3b1e0d781ad8efe86cea |
\n\n | .venv\Lib\site-packages\cycler\__pycache__\__init__.cpython-313.pyc | __init__.cpython-313.pyc | Other | 22,377 | 0.95 | 0.062147 | 0.006667 | node-utils | 958 | 2023-11-01T17:34:58.166301 | GPL-3.0 | false | f627f58722e1004f15b1cfe22d764e3c |
pip\n | .venv\Lib\site-packages\cycler-0.12.1.dist-info\INSTALLER | INSTALLER | Other | 4 | 0.5 | 0 | 0 | react-lib | 344 | 2024-11-29T01:05:01.693425 | GPL-3.0 | false | 365c9bfeb7d89244f2ce01c1de44cb85 |
Copyright (c) 2015, matplotlib project\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the matplotlib project nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| .venv\Lib\site-packages\cycler-0.12.1.dist-info\LICENSE | LICENSE | Other | 1,497 | 0.7 | 0 | 0.136364 | node-utils | 930 | 2023-11-21T09:21:24.709719 | MIT | false | 7713fe42cd766b15c710e19392bfa811 |
Metadata-Version: 2.1\nName: cycler\nVersion: 0.12.1\nSummary: Composable style cycles\nAuthor-email: Thomas A Caswell <matplotlib-users@python.org>\nLicense: Copyright (c) 2015, matplotlib project\n All rights reserved.\n \n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n \n * Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n \n * Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n \n * Neither the name of the matplotlib project nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n \n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nProject-URL: homepage, https://matplotlib.org/cycler/\nProject-URL: repository, https://github.com/matplotlib/cycler\nKeywords: cycle kwargs\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Development Status :: 4 - Beta\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3 :: Only\nRequires-Python: >=3.8\nDescription-Content-Type: text/x-rst\nLicense-File: LICENSE\nProvides-Extra: docs\nRequires-Dist: ipython ; extra == 'docs'\nRequires-Dist: matplotlib ; extra == 'docs'\nRequires-Dist: numpydoc ; extra == 'docs'\nRequires-Dist: sphinx ; extra == 'docs'\nProvides-Extra: tests\nRequires-Dist: pytest ; extra == 'tests'\nRequires-Dist: pytest-cov ; extra == 'tests'\nRequires-Dist: pytest-xdist ; extra == 'tests'\n\n|PyPi|_ |Conda|_ |Supported Python versions|_ |GitHub Actions|_ |Codecov|_\n\n.. |PyPi| image:: https://img.shields.io/pypi/v/cycler.svg?style=flat\n.. _PyPi: https://pypi.python.org/pypi/cycler\n\n.. |Conda| image:: https://img.shields.io/conda/v/conda-forge/cycler\n.. _Conda: https://anaconda.org/conda-forge/cycler\n\n.. |Supported Python versions| image:: https://img.shields.io/pypi/pyversions/cycler.svg\n.. _Supported Python versions: https://pypi.python.org/pypi/cycler\n\n.. |GitHub Actions| image:: https://github.com/matplotlib/cycler/actions/workflows/tests.yml/badge.svg\n.. _GitHub Actions: https://github.com/matplotlib/cycler/actions\n\n.. |Codecov| image:: https://codecov.io/github/matplotlib/cycler/badge.svg?branch=main&service=github\n.. 
_Codecov: https://codecov.io/github/matplotlib/cycler?branch=main\n\ncycler: composable cycles\n=========================\n\nDocs: https://matplotlib.org/cycler/\n
| .venv\Lib\site-packages\cycler-0.12.1.dist-info\METADATA | METADATA | Other | 3,779 | 0.8 | 0 | 0.046154 | node-utils | 448 | 2024-01-22T17:28:33.268821 | GPL-3.0 | false | 8b02dcd7cd864ffd7885f9a6cc4a611b |
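The cycler METADATA above summarizes the package as "composable style cycles". A minimal sketch of that idea, not taken from this dataset and assuming the `cycler` package is installed, is:

```python
# Minimal sketch (assumption: cycler 0.12.x installed): compose two style
# cycles element-wise, iterate them, and take their outer product.
from cycler import cycler

colors = cycler(color=["r", "g", "b"])
widths = cycler(lw=[1, 2, 3])

for style in colors + widths:   # element-wise sum of equal-length cycles: 3 dicts
    print(style)                # e.g. {'color': 'r', 'lw': 1}

print(len(colors * widths))     # outer product: 9 combinations
```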
cycler-0.12.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\ncycler-0.12.1.dist-info/LICENSE,sha256=8SGBQ9dm2j_qZvEzlrfxXfRqgzA_Kb-Wum6Y601C9Ag,1497\ncycler-0.12.1.dist-info/METADATA,sha256=IyieGbdvHgE5Qidpbmryts0c556JcxIJv5GVFIsY7TY,3779\ncycler-0.12.1.dist-info/RECORD,,\ncycler-0.12.1.dist-info/WHEEL,sha256=yQN5g4mg4AybRjkgi-9yy4iQEFibGQmlz78Pik5Or-A,92\ncycler-0.12.1.dist-info/top_level.txt,sha256=D8BVVDdAAelLb2FOEz7lDpc6-AL21ylKPrMhtG6yzyE,7\ncycler/__init__.py,sha256=1JdRgv5Zzxo-W1ev7B_LWquysWP6LZH6CHk_COtIaXE,16709\ncycler/__pycache__/__init__.cpython-313.pyc,,\ncycler/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\n
| .venv\Lib\site-packages\cycler-0.12.1.dist-info\RECORD | RECORD | Other | 672 | 0.7 | 0 | 0 | node-utils | 963 | 2024-10-28T00:25:00.360247 | GPL-3.0 | false | 90aa8c343b0497b84996e3ef22cab5e5 |
cycler\n | .venv\Lib\site-packages\cycler-0.12.1.dist-info\top_level.txt | top_level.txt | Other | 7 | 0.5 | 0 | 0 | vue-tools | 63 | 2025-02-23T09:36:45.743824 | GPL-3.0 | false | 728466bf379b90a20cc01e67c56ce021 |
Wheel-Version: 1.0\nGenerator: bdist_wheel (0.41.2)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n
| .venv\Lib\site-packages\cycler-0.12.1.dist-info\WHEEL | WHEEL | Other | 92 | 0.5 | 0 | 0 | python-kit | 415 | 2024-05-28T13:28:52.079214 | MIT | false | 18f1a484771c3f3a3d3b90df42acfbbe |
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n"""Arrow ArrowReader."""\n\nimport copy\nimport math\nimport os\nimport re\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Optional, Union\n\nimport pyarrow as pa\nimport pyarrow.parquet as pq\nfrom tqdm.contrib.concurrent import thread_map\n\nfrom .download.download_config import DownloadConfig # noqa: F401\nfrom .naming import _split_re, filenames_for_dataset_split\nfrom .table import InMemoryTable, MemoryMappedTable, Table, concat_tables\nfrom .utils import logging\nfrom .utils import tqdm as hf_tqdm\n\n\nif TYPE_CHECKING:\n from .info import DatasetInfo # noqa: F401\n from .splits import Split, SplitInfo # noqa: F401\n\n\nlogger = logging.get_logger(__name__)\n\nHF_GCP_BASE_URL = "https://storage.googleapis.com/huggingface-nlp/cache/datasets"\n\n_SUB_SPEC_RE = re.compile(\n rf"""\n^\n (?P<split>{_split_re[1:-1]})\n (\[\n ((?P<from>-?[\d_]+)\n (?P<from_pct>%)?)?\n :\n ((?P<to>-?[\d_]+)\n (?P<to_pct>%)?)?\n \])?(\((?P<rounding>[^\)]*)\))?\n$\n""", # remove ^ and $\n re.X,\n)\n\n_ADDITION_SEP_RE = re.compile(r"\s*\+\s*")\n\n\nclass DatasetNotOnHfGcsError(ConnectionError):\n """When you can't get the dataset from the Hf google cloud storage"""\n\n pass\n\n\nclass MissingFilesOnHfGcsError(ConnectionError):\n """When some files are missing on the Hf oogle cloud storage"""\n\n pass\n\n\n@dataclass(frozen=True)\nclass FileInstructions:\n """The file instructions associated with a split ReadInstruction.\n\n Attributes:\n num_examples: `int`, The total number of examples\n file_instructions: List[dict(filename, skip, take)], the files information.\n The filenames contains the relative path, not absolute.\n skip/take indicates which example read in the file: `ds.slice(skip, take)`\n """\n\n num_examples: int\n file_instructions: list[dict]\n\n\ndef make_file_instructions(\n name: str,\n split_infos: list["SplitInfo"],\n instruction: Union[str, "ReadInstruction"],\n filetype_suffix: Optional[str] = None,\n prefix_path: Optional[str] = None,\n) -> FileInstructions:\n """Returns instructions of the split dict.\n\n Args:\n name (`str`): Name of the dataset.\n split_infos (`list` of `[SplitInfo]`): Dataset splits information.\n instruction ([`ReadInstruction`] or `str`): Reading instruction for a dataset.\n filetype_suffix (`str`, *optional*): Suffix of dataset files, e.g. 'arrow' or 'parquet'.\n prefix_path (`str`, *optional*): Prefix of dataset files, e.g. 
directory name.\n\n Returns:\n [`FileInstructions`]\n """\n if not isinstance(name, str):\n raise TypeError(f"Expected str 'name', but got: {type(name).__name__}")\n elif not name:\n raise ValueError("Expected non-empty str 'name'")\n name2len = {info.name: info.num_examples for info in split_infos}\n name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}\n name2filenames = {\n info.name: filenames_for_dataset_split(\n path=prefix_path,\n dataset_name=name,\n split=info.name,\n filetype_suffix=filetype_suffix,\n shard_lengths=name2shard_lengths[info.name],\n )\n for info in split_infos\n }\n if not isinstance(instruction, ReadInstruction):\n instruction = ReadInstruction.from_spec(instruction)\n # Create the absolute instruction (per split)\n absolute_instructions = instruction.to_absolute(name2len)\n\n # For each split, return the files instruction (skip/take)\n file_instructions = []\n num_examples = 0\n for abs_instr in absolute_instructions:\n split_length = name2len[abs_instr.splitname]\n filenames = name2filenames[abs_instr.splitname]\n shard_lengths = name2shard_lengths[abs_instr.splitname]\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\n to = split_length if abs_instr.to is None else abs_instr.to\n if shard_lengths is None: # not sharded\n for filename in filenames:\n take = to - from_\n if take == 0:\n continue\n num_examples += take\n file_instructions.append({"filename": filename, "skip": from_, "take": take})\n else: # sharded\n index_start = 0 # Beginning (included) of moving window.\n index_end = 0 # End (excluded) of moving window.\n for filename, shard_length in zip(filenames, shard_lengths):\n index_end += shard_length\n if from_ < index_end and to > index_start: # There is something to take.\n skip = from_ - index_start if from_ > index_start else 0\n take = to - index_start - skip if to < index_end else -1\n if take == 0:\n continue\n file_instructions.append({"filename": filename, "skip": skip, "take": take})\n num_examples += shard_length - skip if take == -1 else take\n index_start += shard_length\n return FileInstructions(\n num_examples=num_examples,\n file_instructions=file_instructions,\n )\n\n\nclass BaseReader:\n """\n Build a Dataset object out of Instruction instance(s).\n """\n\n def __init__(self, path: str, info: Optional["DatasetInfo"]):\n """Initializes ArrowReader.\n\n Args:\n path (str): path where tfrecords are stored.\n info (DatasetInfo): info about the dataset.\n """\n self._path: str = path\n self._info: Optional["DatasetInfo"] = info\n self._filetype_suffix: Optional[str] = None\n\n def _get_table_from_filename(self, filename_skip_take, in_memory=False) -> Table:\n """Returns a Dataset instance from given (filename, skip, take)."""\n raise NotImplementedError\n\n def _read_files(self, files, in_memory=False) -> Table:\n """Returns Dataset for given file instructions.\n\n Args:\n files: List[dict(filename, skip, take)], the files information.\n The filenames contain the absolute path, not relative.\n skip/take indicates which example read in the file: `ds.slice(skip, take)`\n in_memory (bool, default False): Whether to copy the data in-memory.\n """\n if len(files) == 0 or not all(isinstance(f, dict) for f in files):\n raise ValueError("please provide valid file informations")\n files = copy.deepcopy(files)\n for f in files:\n f["filename"] = os.path.join(self._path, f["filename"])\n\n pa_tables = thread_map(\n partial(self._get_table_from_filename, in_memory=in_memory),\n files,\n tqdm_class=hf_tqdm,\n 
desc="Loading dataset shards",\n # set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\n disable=len(files) <= 16 or None,\n )\n pa_tables = [t for t in pa_tables if len(t) > 0]\n if not pa_tables and (self._info is None or self._info.features is None):\n raise ValueError(\n "Tried to read an empty table. Please specify at least info.features to create an empty table with the right type."\n )\n pa_tables = pa_tables or [InMemoryTable.from_batches([], schema=pa.schema(self._info.features.type))]\n pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0]\n return pa_table\n\n def get_file_instructions(self, name, instruction, split_infos):\n """Return list of dict {'filename': str, 'skip': int, 'take': int}"""\n file_instructions = make_file_instructions(\n name, split_infos, instruction, filetype_suffix=self._filetype_suffix, prefix_path=self._path\n )\n files = file_instructions.file_instructions\n return files\n\n def read(\n self,\n name,\n instructions,\n split_infos,\n in_memory=False,\n ):\n """Returns Dataset instance(s).\n\n Args:\n name (str): name of the dataset.\n instructions (ReadInstruction): instructions to read.\n Instruction can be string and will then be passed to the Instruction\n constructor as it.\n split_infos (list of SplitInfo proto): the available splits for dataset.\n in_memory (bool, default False): Whether to copy the data in-memory.\n\n Returns:\n kwargs to build a single Dataset instance.\n """\n\n files = self.get_file_instructions(name, instructions, split_infos)\n if not files:\n msg = f'Instruction "{instructions}" corresponds to no data!'\n raise ValueError(msg)\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\n\n def read_files(\n self,\n files: list[dict],\n original_instructions: Union[None, "ReadInstruction", "Split"] = None,\n in_memory=False,\n ):\n """Returns single Dataset instance for the set of file instructions.\n\n Args:\n files: List[dict(filename, skip, take)], the files information.\n The filenames contains the relative path, not absolute.\n skip/take indicates which example read in the file: `ds.skip().take()`\n original_instructions: store the original instructions used to build the dataset split in the dataset.\n in_memory (bool, default False): Whether to copy the data in-memory.\n\n Returns:\n kwargs to build a Dataset instance.\n """\n # Prepend path to filename\n pa_table = self._read_files(files, in_memory=in_memory)\n # If original_instructions is not None, convert it to a human-readable NamedSplit\n if original_instructions is not None:\n from .splits import Split # noqa\n\n split = Split(str(original_instructions))\n else:\n split = None\n dataset_kwargs = {"arrow_table": pa_table, "info": self._info, "split": split}\n return dataset_kwargs\n\n\nclass ArrowReader(BaseReader):\n """\n Build a Dataset object out of Instruction instance(s).\n This Reader uses either memory mapping or file descriptors (in-memory) on arrow files.\n """\n\n def __init__(self, path: str, info: Optional["DatasetInfo"]):\n """Initializes ArrowReader.\n\n Args:\n path (str): path where Arrow files are stored.\n info (DatasetInfo): info about the dataset.\n """\n super().__init__(path, info)\n self._filetype_suffix = "arrow"\n\n def _get_table_from_filename(self, filename_skip_take, in_memory=False) -> Table:\n """Returns a Dataset instance from given (filename, skip, take)."""\n filename, skip, take = (\n filename_skip_take["filename"],\n 
filename_skip_take["skip"] if "skip" in filename_skip_take else None,\n filename_skip_take["take"] if "take" in filename_skip_take else None,\n )\n table = ArrowReader.read_table(filename, in_memory=in_memory)\n if take == -1:\n take = len(table) - skip\n # here we don't want to slice an empty table, or it may segfault\n if skip is not None and take is not None and not (skip == 0 and take == len(table)):\n table = table.slice(skip, take)\n return table\n\n @staticmethod\n def read_table(filename, in_memory=False) -> Table:\n """\n Read table from file.\n\n Args:\n filename (str): File name of the table.\n in_memory (bool, default=False): Whether to copy the data in-memory.\n\n Returns:\n pyarrow.Table\n """\n table_cls = InMemoryTable if in_memory else MemoryMappedTable\n return table_cls.from_file(filename)\n\n\nclass ParquetReader(BaseReader):\n """\n Build a Dataset object out of Instruction instance(s).\n This Reader uses memory mapping on parquet files.\n """\n\n def __init__(self, path: str, info: Optional["DatasetInfo"]):\n """Initializes ParquetReader.\n\n Args:\n path (str): path where tfrecords are stored.\n info (DatasetInfo): info about the dataset.\n """\n super().__init__(path, info)\n self._filetype_suffix = "parquet"\n\n def _get_table_from_filename(self, filename_skip_take, **kwargs):\n """Returns a Dataset instance from given (filename, skip, take)."""\n filename, skip, take = (\n filename_skip_take["filename"],\n filename_skip_take["skip"] if "skip" in filename_skip_take else None,\n filename_skip_take["take"] if "take" in filename_skip_take else None,\n )\n # Parquet read_table always loads data in memory, independently of memory_map\n pa_table = pq.read_table(filename, memory_map=True)\n # here we don't want to slice an empty table, or it may segfault\n if skip is not None and take is not None and not (skip == 0 and take == len(pa_table)):\n pa_table = pa_table.slice(skip, take)\n return pa_table\n\n\n@dataclass(frozen=True)\nclass _AbsoluteInstruction:\n """A machine friendly slice: defined absolute positive boundaries."""\n\n splitname: str\n from_: int # uint (starting index).\n to: int # uint (ending index).\n\n\n@dataclass(frozen=True)\nclass _RelativeInstruction:\n """Represents a single parsed slicing instruction, can use % and negatives."""\n\n splitname: str\n from_: Optional[int] = None # int (starting index) or None if no lower boundary.\n to: Optional[int] = None # int (ending index) or None if no upper boundary.\n unit: Optional[str] = None\n rounding: Optional[str] = None\n\n def __post_init__(self):\n if self.unit is not None and self.unit not in ["%", "abs"]:\n raise ValueError("unit must be either % or abs")\n if self.rounding is not None and self.rounding not in ["closest", "pct1_dropremainder"]:\n raise ValueError("rounding must be either closest or pct1_dropremainder")\n if self.unit != "%" and self.rounding is not None:\n raise ValueError("It is forbidden to specify rounding if not using percent slicing.")\n if self.unit == "%" and self.from_ is not None and abs(self.from_) > 100:\n raise ValueError("Percent slice boundaries must be > -100 and < 100.")\n if self.unit == "%" and self.to is not None and abs(self.to) > 100:\n raise ValueError("Percent slice boundaries must be > -100 and < 100.")\n # Update via __dict__ due to instance being "frozen"\n self.__dict__["rounding"] = "closest" if self.rounding is None and self.unit == "%" else self.rounding\n\n\ndef _str_to_read_instruction(spec):\n """Returns ReadInstruction for given string."""\n res = 
_SUB_SPEC_RE.match(spec)\n if not res:\n raise ValueError(f"Unrecognized instruction format: {spec}")\n unit = "%" if res.group("from_pct") or res.group("to_pct") else "abs"\n return ReadInstruction(\n split_name=res.group("split"),\n rounding=res.group("rounding"),\n from_=int(res.group("from")) if res.group("from") else None,\n to=int(res.group("to")) if res.group("to") else None,\n unit=unit,\n )\n\n\ndef _pct_to_abs_pct1(boundary, num_examples):\n # Using math.trunc here, since -99.5% should give -99%, not -100%.\n if num_examples < 100:\n msg = (\n 'Using "pct1_dropremainder" rounding on a split with less than 100 '\n "elements is forbidden: it always results in an empty dataset."\n )\n raise ValueError(msg)\n return boundary * math.trunc(num_examples / 100.0)\n\n\ndef _pct_to_abs_closest(boundary, num_examples):\n return int(round(boundary * num_examples / 100.0))\n\n\ndef _rel_to_abs_instr(rel_instr, name2len):\n """Returns _AbsoluteInstruction instance for given RelativeInstruction.\n\n Args:\n rel_instr: RelativeInstruction instance.\n name2len: dict {split_name: num_examples}.\n """\n pct_to_abs = _pct_to_abs_closest if rel_instr.rounding == "closest" else _pct_to_abs_pct1\n split = rel_instr.splitname\n if split not in name2len:\n raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')\n num_examples = name2len[split]\n from_ = rel_instr.from_\n to = rel_instr.to\n if rel_instr.unit == "%":\n from_ = 0 if from_ is None else pct_to_abs(from_, num_examples)\n to = num_examples if to is None else pct_to_abs(to, num_examples)\n else:\n from_ = 0 if from_ is None else from_\n to = num_examples if to is None else to\n if from_ < 0:\n from_ = max(num_examples + from_, 0)\n if to < 0:\n to = max(num_examples + to, 0)\n from_ = min(from_, num_examples)\n to = min(to, num_examples)\n return _AbsoluteInstruction(split, from_, to)\n\n\nclass ReadInstruction:\n """Reading instruction for a dataset.\n\n Examples::\n\n # The following lines are equivalent:\n ds = datasets.load_dataset('mnist', split='test[:33%]')\n ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec('test[:33%]'))\n ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction('test', to=33, unit='%'))\n ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction(\n 'test', from_=0, to=33, unit='%'))\n\n # The following lines are equivalent:\n ds = datasets.load_dataset('mnist', split='test[:33%]+train[1:-1]')\n ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec(\n 'test[:33%]+train[1:-1]'))\n ds = datasets.load_dataset('mnist', split=(\n datasets.ReadInstruction('test', to=33, unit='%') +\n datasets.ReadInstruction('train', from_=1, to=-1, unit='abs')))\n\n # The following lines are equivalent:\n ds = datasets.load_dataset('mnist', split='test[:33%](pct1_dropremainder)')\n ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec(\n 'test[:33%](pct1_dropremainder)'))\n ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction(\n 'test', from_=0, to=33, unit='%', rounding="pct1_dropremainder"))\n\n # 10-fold validation:\n tests = datasets.load_dataset(\n 'mnist',\n [datasets.ReadInstruction('train', from_=k, to=k+10, unit='%')\n for k in range(0, 100, 10)])\n trains = datasets.load_dataset(\n 'mnist',\n [datasets.ReadInstruction('train', to=k, unit='%') + datasets.ReadInstruction('train', from_=k+10, unit='%')\n for k in range(0, 100, 10)])\n\n """\n\n def _init(self, relative_instructions):\n # Private 
initializer.\n self._relative_instructions = relative_instructions\n\n @classmethod\n def _read_instruction_from_relative_instructions(cls, relative_instructions):\n """Returns ReadInstruction obj initialized with relative_instructions."""\n # Use __new__ to bypass __init__ used by public API and not conveniant here.\n result = cls.__new__(cls)\n result._init(relative_instructions) # pylint: disable=protected-access\n return result\n\n def __init__(self, split_name, rounding=None, from_=None, to=None, unit=None):\n """Initialize ReadInstruction.\n\n Args:\n split_name (str): name of the split to read. Eg: 'train'.\n rounding (str, optional): The rounding behaviour to use when percent slicing is\n used. Ignored when slicing with absolute indices.\n Possible values:\n - 'closest' (default): The specified percentages are rounded to the\n closest value. Use this if you want specified percents to be as\n much exact as possible.\n - 'pct1_dropremainder': the specified percentages are treated as\n multiple of 1%. Use this option if you want consistency. Eg:\n len(5%) == 5 * len(1%).\n Using this option, one might not be able to use the full set of\n examples, if the number of those is not a multiple of 100.\n from_ (int):\n to (int): alternative way of specifying slicing boundaries. If any of\n {from_, to, unit} argument is used, slicing cannot be specified as\n string.\n unit (str): optional, one of:\n '%': to set the slicing unit as percents of the split size.\n 'abs': to set the slicing unit as absolute numbers.\n """\n # This constructor is not always called. See factory method\n # `_read_instruction_from_relative_instructions`. Common init instructions\n # MUST be placed in the _init method.\n self._init([_RelativeInstruction(split_name, from_, to, unit, rounding)])\n\n @classmethod\n def from_spec(cls, spec):\n """Creates a `ReadInstruction` instance out of a string spec.\n\n Args:\n spec (`str`):\n Split(s) + optional slice(s) to read + optional rounding\n if percents are used as the slicing unit. 
A slice can be specified,\n using absolute numbers (`int`) or percentages (`int`).\n\n Examples:\n\n ```\n test: test split.\n test + validation: test split + validation split.\n test[10:]: test split, minus its first 10 records.\n test[:10%]: first 10% records of test split.\n test[:20%](pct1_dropremainder): first 10% records, rounded with the pct1_dropremainder rounding.\n test[:-5%]+train[40%:60%]: first 95% of test + middle 20% of train.\n ```\n\n Returns:\n ReadInstruction instance.\n """\n spec = str(spec) # Need to convert to str in case of NamedSplit instance.\n subs = _ADDITION_SEP_RE.split(spec)\n if not subs:\n raise ValueError(f"No instructions could be built out of {spec}")\n instruction = _str_to_read_instruction(subs[0])\n return sum((_str_to_read_instruction(sub) for sub in subs[1:]), instruction)\n\n def to_spec(self):\n rel_instr_specs = []\n for rel_instr in self._relative_instructions:\n rel_instr_spec = rel_instr.splitname\n if rel_instr.from_ is not None or rel_instr.to is not None:\n from_ = rel_instr.from_\n to = rel_instr.to\n unit = rel_instr.unit\n rounding = rel_instr.rounding\n unit = unit if unit == "%" else ""\n from_ = str(from_) + unit if from_ is not None else ""\n to = str(to) + unit if to is not None else ""\n slice_str = f"[{from_}:{to}]"\n rounding_str = (\n f"({rounding})" if unit == "%" and rounding is not None and rounding != "closest" else ""\n )\n rel_instr_spec += slice_str + rounding_str\n rel_instr_specs.append(rel_instr_spec)\n return "+".join(rel_instr_specs)\n\n def __add__(self, other):\n """Returns a new ReadInstruction obj, result of appending other to self."""\n if not isinstance(other, ReadInstruction):\n msg = "ReadInstruction can only be added to another ReadInstruction obj."\n raise TypeError(msg)\n self_ris = self._relative_instructions\n other_ris = other._relative_instructions # pylint: disable=protected-access\n if (\n self_ris[0].unit != "abs"\n and other_ris[0].unit != "abs"\n and self._relative_instructions[0].rounding != other_ris[0].rounding\n ):\n raise ValueError("It is forbidden to sum ReadInstruction instances with different rounding values.")\n return self._read_instruction_from_relative_instructions(self_ris + other_ris)\n\n def __str__(self):\n return self.to_spec()\n\n def __repr__(self):\n return f"ReadInstruction({self._relative_instructions})"\n\n def to_absolute(self, name2len):\n """Translate instruction into a list of absolute instructions.\n\n Those absolute instructions are then to be added together.\n\n Args:\n name2len (`dict`):\n Associating split names to number of examples.\n\n Returns:\n list of _AbsoluteInstruction instances (corresponds to the + in spec).\n """\n return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]\n
| .venv\Lib\site-packages\datasets\arrow_reader.py | arrow_reader.py | Python | 25,131 | 0.95 | 0.190323 | 0.063098 | react-lib | 199 | 2023-07-30T17:53:23.883577 | BSD-3-Clause | false | 95cfbb478e7274e388ba18407f4c9f62 |
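The `arrow_reader.py` source above defines `ReadInstruction`, its split-spec grammar (`test[:33%]`, `+`, rounding modes), and the percent-to-absolute conversion. A minimal usage sketch, assuming the `datasets` package is installed, is:

```python
# Minimal sketch (assumption: `datasets` installed): parse a split spec and
# resolve it to absolute row boundaries, as arrow_reader.py does internally.
from datasets import ReadInstruction

ri = ReadInstruction.from_spec("test[:33%]+train[1:-1]")
print(ri.to_spec())                                # "test[:33%]+train[1:-1]"

# to_absolute() needs the number of examples per split; percent slices use
# "closest" rounding by default.
for instr in ri.to_absolute({"test": 300, "train": 1000}):
    print(instr.splitname, instr.from_, instr.to)  # test 0 99, then train 1 999
```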
from typing import Optional, TypeVar\n\nfrom .arrow_dataset import Dataset, _concatenate_map_style_datasets, _interleave_map_style_datasets\nfrom .dataset_dict import DatasetDict, IterableDatasetDict\nfrom .info import DatasetInfo\nfrom .iterable_dataset import IterableDataset, _concatenate_iterable_datasets, _interleave_iterable_datasets\nfrom .splits import NamedSplit\nfrom .utils import logging\nfrom .utils.py_utils import Literal\n\n\nlogger = logging.get_logger(__name__)\n\n\nDatasetType = TypeVar("DatasetType", Dataset, IterableDataset)\n\n\ndef interleave_datasets(\n datasets: list[DatasetType],\n probabilities: Optional[list[float]] = None,\n seed: Optional[int] = None,\n info: Optional[DatasetInfo] = None,\n split: Optional[NamedSplit] = None,\n stopping_strategy: Literal["first_exhausted", "all_exhausted"] = "first_exhausted",\n) -> DatasetType:\n """\n Interleave several datasets (sources) into a single dataset.\n The new dataset is constructed by alternating between the sources to get the examples.\n\n You can use this function on a list of [`Dataset`] objects, or on a list of [`IterableDataset`] objects.\n\n - If `probabilities` is `None` (default) the new dataset is constructed by cycling between each source to get the examples.\n - If `probabilities` is not `None`, the new dataset is constructed by getting examples from a random source at a time according to the provided probabilities.\n\n The resulting dataset ends when one of the source datasets runs out of examples except when `oversampling` is `True`,\n in which case, the resulting dataset ends when all datasets have ran out of examples at least one time.\n\n Note for iterable datasets:\n\n In a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process.\n Therefore the "first_exhausted" strategy on an sharded iterable dataset can generate less samples in total (up to 1 missing sample per subdataset per worker).\n\n Args:\n datasets (`List[Dataset]` or `List[IterableDataset]`):\n List of datasets to interleave.\n probabilities (`List[float]`, *optional*, defaults to `None`):\n If specified, the new dataset is constructed by sampling\n examples from one source at a time according to these probabilities.\n seed (`int`, *optional*, defaults to `None`):\n The random seed used to choose a source for each example.\n info ([`DatasetInfo`], *optional*):\n Dataset information, like description, citation, etc.\n <Added version="2.4.0"/>\n split ([`NamedSplit`], *optional*):\n Name of the dataset split.\n <Added version="2.4.0"/>\n stopping_strategy (`str`, defaults to `first_exhausted`):\n Two strategies are proposed right now, `first_exhausted` and `all_exhausted`.\n By default, `first_exhausted` is an undersampling strategy, i.e the dataset construction is stopped as soon as one dataset has ran out of samples.\n If the strategy is `all_exhausted`, we use an oversampling strategy, i.e the dataset construction is stopped as soon as every samples of every dataset has been added at least once.\n Note that if the strategy is `all_exhausted`, the interleaved dataset size can get enormous:\n - with no probabilities, the resulting dataset will have `max_length_datasets*nb_dataset` samples.\n - with given probabilities, the resulting dataset will have more samples if some datasets have really low probability of visiting.\n Returns:\n [`Dataset`] or [`IterableDataset`]: Return type depends on the input `datasets`\n parameter. 
`Dataset` if the input is a list of `Dataset`, `IterableDataset` if the input is a list of\n `IterableDataset`.\n\n Example:\n\n For regular datasets (map-style):\n\n ```python\n >>> from datasets import Dataset, interleave_datasets\n >>> d1 = Dataset.from_dict({"a": [0, 1, 2]})\n >>> d2 = Dataset.from_dict({"a": [10, 11, 12]})\n >>> d3 = Dataset.from_dict({"a": [20, 21, 22]})\n >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")\n >>> dataset["a"]\n [10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]\n >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\n >>> dataset["a"]\n [10, 0, 11, 1, 2]\n >>> dataset = interleave_datasets([d1, d2, d3])\n >>> dataset["a"]\n [0, 10, 20, 1, 11, 21, 2, 12, 22]\n >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")\n >>> dataset["a"]\n [0, 10, 20, 1, 11, 21, 2, 12, 22]\n >>> d1 = Dataset.from_dict({"a": [0, 1, 2]})\n >>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})\n >>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})\n >>> dataset = interleave_datasets([d1, d2, d3])\n >>> dataset["a"]\n [0, 10, 20, 1, 11, 21, 2, 12, 22]\n >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")\n >>> dataset["a"]\n [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]\n >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\n >>> dataset["a"]\n [10, 0, 11, 1, 2]\n >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")\n >>> dataset["a"]\n [10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]\n For datasets in streaming mode (iterable):\n\n >>> from datasets import interleave_datasets\n >>> d1 = load_dataset('allenai/c4', 'es', split='train', streaming=True)\n >>> d2 = load_dataset('allenai/c4', 'fr', split='train', streaming=True)\n >>> dataset = interleave_datasets([d1, d2])\n >>> iterator = iter(dataset)\n >>> next(iterator)\n {'text': 'Comprar Zapatillas para niña en chancla con goma por...'}\n >>> next(iterator)\n {'text': 'Le sacre de philippe ier, 23 mai 1059 - Compte Rendu...'\n ```\n """\n from .arrow_dataset import Dataset\n from .iterable_dataset import IterableDataset\n\n if not datasets:\n raise ValueError("Unable to interleave an empty list of datasets.")\n for i, dataset in enumerate(datasets):\n if not isinstance(dataset, (Dataset, IterableDataset)):\n if isinstance(dataset, (DatasetDict, IterableDatasetDict)):\n if not dataset:\n raise ValueError(\n f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} "\n "is an empty dataset dictionary."\n )\n raise ValueError(\n f"Dataset at position {i} has at least one split: {list(dataset)}\n"\n f"Please pick one to interleave with the other datasets, for example: dataset['{next(iter(dataset))}']"\n )\n raise ValueError(\n f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} is a {type(dataset).__name__}."\n )\n if i == 0:\n dataset_type, other_type = (\n (Dataset, IterableDataset) if isinstance(dataset, Dataset) else (IterableDataset, Dataset)\n )\n elif not isinstance(dataset, dataset_type):\n raise ValueError(\n f"Unable to interleave a {dataset_type.__name__} (at position 0) with a {other_type.__name__} (at position {i}). 
Expected a list of Dataset objects or a list of IterableDataset objects."\n )\n if stopping_strategy not in ["first_exhausted", "all_exhausted"]:\n raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")\n if dataset_type is Dataset:\n return _interleave_map_style_datasets(\n datasets, probabilities, seed, info=info, split=split, stopping_strategy=stopping_strategy\n )\n else:\n return _interleave_iterable_datasets(\n datasets, probabilities, seed, info=info, split=split, stopping_strategy=stopping_strategy\n )\n\n\ndef concatenate_datasets(\n dsets: list[DatasetType],\n info: Optional[DatasetInfo] = None,\n split: Optional[NamedSplit] = None,\n axis: int = 0,\n) -> DatasetType:\n """\n Converts a list of [`Dataset`] with the same schema into a single [`Dataset`].\n\n Args:\n dsets (`List[datasets.Dataset]`):\n List of Datasets to concatenate.\n info (`DatasetInfo`, *optional*):\n Dataset information, like description, citation, etc.\n split (`NamedSplit`, *optional*):\n Name of the dataset split.\n axis (`{0, 1}`, defaults to `0`):\n Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns\n (horizontally).\n\n <Added version="1.6.0"/>\n\n Example:\n\n ```py\n >>> ds3 = concatenate_datasets([ds1, ds2])\n ```\n """\n\n if not dsets:\n raise ValueError("Unable to concatenate an empty list of datasets.")\n for i, dataset in enumerate(dsets):\n if not isinstance(dataset, (Dataset, IterableDataset)):\n if isinstance(dataset, (DatasetDict, IterableDatasetDict)):\n if not dataset:\n raise ValueError(\n f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} "\n "is an empty dataset dictionary."\n )\n raise ValueError(\n f"Dataset at position {i} has at least one split: {list(dataset)}\n"\n f"Please pick one to interleave with the other datasets, for example: dataset['{next(iter(dataset))}']"\n )\n raise ValueError(\n f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} is a {type(dataset).__name__}."\n )\n if i == 0:\n dataset_type, other_type = (\n (Dataset, IterableDataset) if isinstance(dataset, Dataset) else (IterableDataset, Dataset)\n )\n elif not isinstance(dataset, dataset_type):\n raise ValueError(\n f"Unable to interleave a {dataset_type.__name__} (at position 0) with a {other_type.__name__} (at position {i}). Expected a list of Dataset objects or a list of IterableDataset objects."\n )\n if dataset_type is Dataset:\n return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis)\n else:\n return _concatenate_iterable_datasets(dsets, info=info, split=split, axis=axis)\n
| .venv\Lib\site-packages\datasets\combine.py | combine.py | Python | 10,892 | 0.85 | 0.130233 | 0 | node-utils | 762 | 2024-01-26T00:38:28.844884 | BSD-3-Clause | false | 4d0469521e920b08573f5ffdeedea5bf |
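`combine.py` above exposes `interleave_datasets` and `concatenate_datasets`. A minimal sketch mirroring its docstring examples, assuming `datasets` is installed, is:

```python
# Minimal sketch (assumption: `datasets` installed): interleave and concatenate
# small map-style datasets, mirroring the docstring examples in combine.py above.
from datasets import Dataset, concatenate_datasets, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12]})

mixed = interleave_datasets([d1, d2])    # round-robin: [0, 10, 1, 11, 2, 12]
joined = concatenate_datasets([d1, d2])  # rows of d1 followed by rows of d2
print(mixed["a"])
print(joined["a"])
```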
import importlib\nimport importlib.metadata\nimport logging\nimport os\nimport platform\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom huggingface_hub import constants\nfrom packaging import version\n\n\nlogger = logging.getLogger(__name__.split(".", 1)[0]) # to avoid circular import from .utils.logging\n\n# Datasets\nS3_DATASETS_BUCKET_PREFIX = "https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets"\nCLOUDFRONT_DATASETS_DISTRIB_PREFIX = "https://cdn-datasets.huggingface.co/datasets/datasets"\nREPO_DATASETS_URL = "https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}"\n\n# Hub\nHF_ENDPOINT = os.environ.get("HF_ENDPOINT", "https://huggingface.co")\nHUB_DATASETS_URL = HF_ENDPOINT + "/datasets/{repo_id}/resolve/{revision}/{path}"\nHUB_DATASETS_HFFS_URL = "hf://datasets/{repo_id}@{revision}/{path}"\nHUB_DEFAULT_VERSION = "main"\n\nPY_VERSION = version.parse(platform.python_version())\n\n# General environment variables accepted values for booleans\nENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}\nENV_VARS_FALSE_VALUES = {"0", "OFF", "NO", "FALSE"}\nENV_VARS_TRUE_AND_AUTO_VALUES = ENV_VARS_TRUE_VALUES.union({"AUTO"})\nENV_VARS_FALSE_AND_AUTO_VALUES = ENV_VARS_FALSE_VALUES.union({"AUTO"})\n\n\n# Imports\nDILL_VERSION = version.parse(importlib.metadata.version("dill"))\nFSSPEC_VERSION = version.parse(importlib.metadata.version("fsspec"))\nPANDAS_VERSION = version.parse(importlib.metadata.version("pandas"))\nPYARROW_VERSION = version.parse(importlib.metadata.version("pyarrow"))\nHF_HUB_VERSION = version.parse(importlib.metadata.version("huggingface_hub"))\n\nUSE_TF = os.environ.get("USE_TF", "AUTO").upper()\nUSE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()\nUSE_JAX = os.environ.get("USE_JAX", "AUTO").upper()\n\nTORCH_VERSION = "N/A"\nTORCH_AVAILABLE = False\n\nif USE_TORCH in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TF not in ENV_VARS_TRUE_VALUES:\n TORCH_AVAILABLE = importlib.util.find_spec("torch") is not None\n if TORCH_AVAILABLE:\n try:\n TORCH_VERSION = version.parse(importlib.metadata.version("torch"))\n logger.info(f"PyTorch version {TORCH_VERSION} available.")\n except importlib.metadata.PackageNotFoundError:\n pass\nelse:\n logger.info("Disabling PyTorch because USE_TF is set")\n\nPOLARS_VERSION = "N/A"\nPOLARS_AVAILABLE = importlib.util.find_spec("polars") is not None\n\nif POLARS_AVAILABLE:\n try:\n POLARS_VERSION = version.parse(importlib.metadata.version("polars"))\n logger.info(f"Polars version {POLARS_VERSION} available.")\n except importlib.metadata.PackageNotFoundError:\n pass\n\n\nDUCKDB_VERSION = "N/A"\nDUCKDB_AVAILABLE = importlib.util.find_spec("duckdb") is not None\n\nif DUCKDB_AVAILABLE:\n try:\n DUCKDB_VERSION = version.parse(importlib.metadata.version("duckdb"))\n logger.info(f"Duckdb version {DUCKDB_VERSION} available.")\n except importlib.metadata.PackageNotFoundError:\n pass\n\nTF_VERSION = "N/A"\nTF_AVAILABLE = False\n\nif USE_TF in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TORCH not in ENV_VARS_TRUE_VALUES:\n TF_AVAILABLE = importlib.util.find_spec("tensorflow") is not None\n if TF_AVAILABLE:\n # For the metadata, we have to look for both tensorflow and tensorflow-cpu\n for package in [\n "tensorflow",\n "tensorflow-cpu",\n "tensorflow-gpu",\n "tf-nightly",\n "tf-nightly-cpu",\n "tf-nightly-gpu",\n "intel-tensorflow",\n "tensorflow-rocm",\n "tensorflow-macos",\n ]:\n try:\n TF_VERSION = version.parse(importlib.metadata.version(package))\n except importlib.metadata.PackageNotFoundError:\n continue\n 
else:\n break\n else:\n TF_AVAILABLE = False\n if TF_AVAILABLE:\n if TF_VERSION.major < 2:\n logger.info(f"TensorFlow found but with version {TF_VERSION}. `datasets` requires version 2 minimum.")\n TF_AVAILABLE = False\n else:\n logger.info(f"TensorFlow version {TF_VERSION} available.")\nelse:\n logger.info("Disabling Tensorflow because USE_TORCH is set")\n\n\nJAX_VERSION = "N/A"\nJAX_AVAILABLE = False\n\nif USE_JAX in ENV_VARS_TRUE_AND_AUTO_VALUES:\n JAX_AVAILABLE = importlib.util.find_spec("jax") is not None and importlib.util.find_spec("jaxlib") is not None\n if JAX_AVAILABLE:\n try:\n JAX_VERSION = version.parse(importlib.metadata.version("jax"))\n logger.info(f"JAX version {JAX_VERSION} available.")\n except importlib.metadata.PackageNotFoundError:\n pass\nelse:\n logger.info("Disabling JAX because USE_JAX is set to False")\n\n\n# Optional tools for data loading\nSQLALCHEMY_AVAILABLE = importlib.util.find_spec("sqlalchemy") is not None\n\n# Optional tools for feature decoding\nPIL_AVAILABLE = importlib.util.find_spec("PIL") is not None\nIS_OPUS_SUPPORTED = importlib.util.find_spec("soundfile") is not None and version.parse(\n importlib.import_module("soundfile").__libsndfile_version__\n) >= version.parse("1.0.31")\nIS_MP3_SUPPORTED = importlib.util.find_spec("soundfile") is not None and version.parse(\n importlib.import_module("soundfile").__libsndfile_version__\n) >= version.parse("1.1.0")\nTORCHVISION_AVAILABLE = importlib.util.find_spec("torchvision") is not None\nPDFPLUMBER_AVAILABLE = importlib.util.find_spec("pdfplumber") is not None\n\n# Optional compression tools\nRARFILE_AVAILABLE = importlib.util.find_spec("rarfile") is not None\nZSTANDARD_AVAILABLE = importlib.util.find_spec("zstandard") is not None\nLZ4_AVAILABLE = importlib.util.find_spec("lz4") is not None\nPY7ZR_AVAILABLE = importlib.util.find_spec("py7zr") is not None\n\n# Cache location\nDEFAULT_XDG_CACHE_HOME = "~/.cache"\nXDG_CACHE_HOME = os.getenv("XDG_CACHE_HOME", DEFAULT_XDG_CACHE_HOME)\nDEFAULT_HF_CACHE_HOME = os.path.join(XDG_CACHE_HOME, "huggingface")\nHF_CACHE_HOME = os.path.expanduser(os.getenv("HF_HOME", DEFAULT_HF_CACHE_HOME))\n\nDEFAULT_HF_DATASETS_CACHE = os.path.join(HF_CACHE_HOME, "datasets")\nHF_DATASETS_CACHE = Path(os.getenv("HF_DATASETS_CACHE", DEFAULT_HF_DATASETS_CACHE))\n\nDEFAULT_HF_MODULES_CACHE = os.path.join(HF_CACHE_HOME, "modules")\nHF_MODULES_CACHE = Path(os.getenv("HF_MODULES_CACHE", DEFAULT_HF_MODULES_CACHE))\n\nDOWNLOADED_DATASETS_DIR = "downloads"\nDEFAULT_DOWNLOADED_DATASETS_PATH = os.path.join(HF_DATASETS_CACHE, DOWNLOADED_DATASETS_DIR)\nDOWNLOADED_DATASETS_PATH = Path(os.getenv("HF_DATASETS_DOWNLOADED_DATASETS_PATH", DEFAULT_DOWNLOADED_DATASETS_PATH))\n\nEXTRACTED_DATASETS_DIR = "extracted"\nDEFAULT_EXTRACTED_DATASETS_PATH = os.path.join(DEFAULT_DOWNLOADED_DATASETS_PATH, EXTRACTED_DATASETS_DIR)\nEXTRACTED_DATASETS_PATH = Path(os.getenv("HF_DATASETS_EXTRACTED_DATASETS_PATH", DEFAULT_EXTRACTED_DATASETS_PATH))\n\n# Download count for the website\nHF_UPDATE_DOWNLOAD_COUNTS = (\n os.environ.get("HF_UPDATE_DOWNLOAD_COUNTS", "AUTO").upper() in ENV_VARS_TRUE_AND_AUTO_VALUES\n)\n\n# For downloads and to check remote files metadata\nHF_DATASETS_MULTITHREADING_MAX_WORKERS = 16\n\n# Remote dataset scripts support\n__HF_DATASETS_TRUST_REMOTE_CODE = os.environ.get("HF_DATASETS_TRUST_REMOTE_CODE", "ask")\nHF_DATASETS_TRUST_REMOTE_CODE: Optional[bool] = (\n True\n if __HF_DATASETS_TRUST_REMOTE_CODE.upper() in ENV_VARS_TRUE_VALUES\n else False\n if __HF_DATASETS_TRUST_REMOTE_CODE.upper() in 
ENV_VARS_FALSE_VALUES\n else None\n)\nTIME_OUT_REMOTE_CODE = 15\n\n# Dataset viewer API\nUSE_PARQUET_EXPORT = True\n\n# Batch size constants. For more info, see:\n# https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations)\nDEFAULT_MAX_BATCH_SIZE = 1000\n\n# Size of the preloaded record batch in `Dataset.__iter__`\nARROW_READER_BATCH_SIZE_IN_DATASET_ITER = 10\n\n# Max shard size in bytes (e.g. to shard parquet datasets in push_to_hub or download_and_prepare)\nMAX_SHARD_SIZE = "500MB"\n\n# Parquet configuration\nPARQUET_ROW_GROUP_SIZE_FOR_AUDIO_DATASETS = 100\nPARQUET_ROW_GROUP_SIZE_FOR_IMAGE_DATASETS = 100\nPARQUET_ROW_GROUP_SIZE_FOR_BINARY_DATASETS = 100\nPARQUET_ROW_GROUP_SIZE_FOR_VIDEO_DATASETS = 10\n\n# Offline mode\n_offline = os.environ.get("HF_DATASETS_OFFLINE")\nHF_HUB_OFFLINE = constants.HF_HUB_OFFLINE if _offline is None else _offline.upper() in ENV_VARS_TRUE_VALUES\nHF_DATASETS_OFFLINE = HF_HUB_OFFLINE # kept for backward-compatibility\n\n# Here, `True` will disable progress bars globally without possibility of enabling it\n# programmatically. `False` will enable them without possibility of disabling them.\n# If environment variable is not set (None), then the user is free to enable/disable\n# them programmatically.\n# TL;DR: env variable has priority over code\n__HF_DATASETS_DISABLE_PROGRESS_BARS = os.environ.get("HF_DATASETS_DISABLE_PROGRESS_BARS")\nHF_DATASETS_DISABLE_PROGRESS_BARS: Optional[bool] = (\n __HF_DATASETS_DISABLE_PROGRESS_BARS.upper() in ENV_VARS_TRUE_VALUES\n if __HF_DATASETS_DISABLE_PROGRESS_BARS is not None\n else None\n)\n\n# In-memory\nDEFAULT_IN_MEMORY_MAX_SIZE = 0 # Disabled\nIN_MEMORY_MAX_SIZE = float(os.environ.get("HF_DATASETS_IN_MEMORY_MAX_SIZE", DEFAULT_IN_MEMORY_MAX_SIZE))\n\n# File names\nDATASET_ARROW_FILENAME = "dataset.arrow"\nDATASET_INDICES_FILENAME = "indices.arrow"\nDATASET_STATE_JSON_FILENAME = "state.json"\nDATASET_INFO_FILENAME = "dataset_info.json"\nDATASETDICT_INFOS_FILENAME = "dataset_infos.json"\nLICENSE_FILENAME = "LICENSE"\nDATASETDICT_JSON_FILENAME = "dataset_dict.json"\nMETADATA_CONFIGS_FIELD = "configs"\nREPOCARD_FILENAME = "README.md"\nREPOYAML_FILENAME = ".huggingface.yaml"\n\nMODULE_NAME_FOR_DYNAMIC_MODULES = "datasets_modules"\n\nMAX_DATASET_CONFIG_ID_READABLE_LENGTH = 255\n\n# Temporary cache directory prefix\nTEMP_CACHE_DIR_PREFIX = "hf_datasets-"\n\n# Streaming\nSTREAMING_READ_MAX_RETRIES = 20\nSTREAMING_READ_RETRY_INTERVAL = 5\n\n# Datasets without script\nDATA_FILES_MAX_NUMBER_FOR_MODULE_INFERENCE = 200\nGLOBBED_DATA_FILES_MAX_NUMBER_FOR_MODULE_INFERENCE = 10\nARCHIVED_DATA_FILES_MAX_NUMBER_FOR_MODULE_INFERENCE = 200\n\n# Async map functions\nMAX_NUM_RUNNING_ASYNC_MAP_FUNCTIONS_IN_PARALLEL = 1000\n\n# Progress bars\nPBAR_REFRESH_TIME_INTERVAL = 0.05 # 20 progress updates per sec\n\n# Maximum number of uploaded files per commit\nUPLOADS_MAX_NUMBER_PER_COMMIT = 50\n\n# Backward compatibility\nMAX_TABLE_NBYTES_FOR_PICKLING = 4 << 30\n
|
.venv\Lib\site-packages\datasets\config.py
|
config.py
|
Python
| 10,306 | 0.95 | 0.096654 | 0.152074 |
python-kit
| 274 |
2023-07-20T19:34:24.669309
|
BSD-3-Clause
| false |
91949fd9552cc0465c143bf7ffa83c8e
|
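The `config.py` record above resolves cache locations and optional-dependency availability from environment variables, comparing boolean-like variables against `ENV_VARS_TRUE_VALUES` / `ENV_VARS_FALSE_VALUES` (with `"AUTO"` meaning "decide automatically"). A minimal, self-contained sketch of that parsing pattern follows; the helper name `parse_bool_env` is illustrative and is not part of `datasets`.

```python
import os
from typing import Optional

# Same accepted spellings as in the config module above.
ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}
ENV_VARS_FALSE_VALUES = {"0", "OFF", "NO", "FALSE"}


def parse_bool_env(name: str, default: Optional[bool] = None) -> Optional[bool]:
    """Map an environment variable to True / False, or the default when unset or unrecognized."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    value = raw.upper()
    if value in ENV_VARS_TRUE_VALUES:
        return True
    if value in ENV_VARS_FALSE_VALUES:
        return False
    return default


if __name__ == "__main__":
    os.environ["HF_DATASETS_OFFLINE"] = "yes"
    print(parse_bool_env("HF_DATASETS_OFFLINE"))                # True
    print(parse_bool_env("HF_DATASETS_DISABLE_PROGRESS_BARS"))  # None when unset
```

The tri-state return mirrors settings such as `HF_DATASETS_TRUST_REMOTE_CODE`, where `None` means "not configured, fall back to asking or to a library default".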
from typing import TypeVar\n\nfrom .arrow_dataset import Dataset, _split_by_node_map_style_dataset\nfrom .iterable_dataset import IterableDataset, _split_by_node_iterable_dataset\n\n\nDatasetType = TypeVar("DatasetType", Dataset, IterableDataset)\n\n\ndef split_dataset_by_node(dataset: DatasetType, rank: int, world_size: int) -> DatasetType:\n """\n Split a dataset for the node at rank `rank` in a pool of nodes of size `world_size`.\n\n For map-style datasets:\n\n Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset.\n To maximize data loading throughput, chunks are made of contiguous data on disk if possible.\n\n For iterable datasets:\n\n If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.num_shards % world_size == 0`),\n then the shards are evenly assigned across the nodes, which is the most optimized.\n Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples.\n\n Args:\n dataset ([`Dataset`] or [`IterableDataset`]):\n The dataset to split by node.\n rank (`int`):\n Rank of the current node.\n world_size (`int`):\n Total number of nodes.\n\n Returns:\n [`Dataset`] or [`IterableDataset`]: The dataset to be used on the node at rank `rank`.\n """\n if isinstance(dataset, Dataset):\n return _split_by_node_map_style_dataset(dataset, rank=rank, world_size=world_size)\n else:\n return _split_by_node_iterable_dataset(dataset, rank=rank, world_size=world_size)\n
|
.venv\Lib\site-packages\datasets\distributed.py
|
distributed.py
|
Python
| 1,562 | 0.85 | 0.128205 | 0 |
react-lib
| 674 |
2024-03-14T14:35:39.704513
|
MIT
| false |
904bf99d3ccc87f3671c62802fed2121
|
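The `distributed.py` record documents two assignment strategies for `split_dataset_by_node`: contiguous chunks for map-style datasets, and for iterable datasets either whole shards (when `dataset.num_shards % world_size == 0`) or a keep-1-in-`world_size` stride. A usage sketch follows; it assumes `datasets` is installed and the Hub is reachable, and the dataset id is simply the one used in docstrings elsewhere in this dump.

```python
# Usage sketch only: requires `pip install datasets` and network access.
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

world_size = 4  # total number of nodes / workers

# streaming=True yields an IterableDataset, so the shard/stride logic applies.
streamed = load_dataset(
    "cornell-movie-review-data/rotten_tomatoes", split="train", streaming=True
)

for rank in range(world_size):
    node_ds = split_dataset_by_node(streamed, rank=rank, world_size=world_size)
    print(rank, next(iter(node_ds)))  # each rank iterates over a disjoint slice
```

In a real job each process would compute only its own `rank` (for example from its launcher or process group) instead of looping over all ranks in one process.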
import inspect\nimport os\nimport random\nimport shutil\nimport tempfile\nimport weakref\nfrom functools import wraps\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Any, Callable, Optional, Union\n\nimport numpy as np\nimport xxhash\n\nfrom . import config\nfrom .naming import INVALID_WINDOWS_CHARACTERS_IN_PATH\nfrom .utils._dill import dumps\nfrom .utils.logging import get_logger\n\n\nif TYPE_CHECKING:\n from .arrow_dataset import Dataset\n\n\nlogger = get_logger(__name__)\n\n\n# Fingerprinting allows to have one deterministic fingerprint per dataset state.\n# A dataset fingerprint is updated after each transform.\n# Re-running the same transforms on a dataset in a different session results in the same fingerprint.\n# This is possible thanks to a custom hashing function that works with most python objects.\n\n# Fingerprinting is the main mechanism that enables caching.\n# The caching mechanism allows to reload an existing cache file if it's already been computed.\n\n\n#################\n# Caching\n#################\n\n_CACHING_ENABLED = True\n_TEMP_DIR_FOR_TEMP_CACHE_FILES: Optional["_TempCacheDir"] = None\n_DATASETS_WITH_TABLE_IN_TEMP_DIR: Optional[weakref.WeakSet] = None\n\n\nclass _TempCacheDir:\n """\n A temporary directory for storing cached Arrow files with a cleanup that frees references to the Arrow files\n before deleting the directory itself to avoid permission errors on Windows.\n """\n\n def __init__(self):\n self.name = tempfile.mkdtemp(prefix=config.TEMP_CACHE_DIR_PREFIX)\n self._finalizer = weakref.finalize(self, self._cleanup)\n\n def _cleanup(self):\n for dset in get_datasets_with_cache_file_in_temp_dir():\n dset.__del__()\n if os.path.exists(self.name):\n try:\n shutil.rmtree(self.name)\n except Exception as e:\n raise OSError(\n f"An error occured while trying to delete temporary cache directory {self.name}. 
Please delete it manually."\n ) from e\n\n def cleanup(self):\n if self._finalizer.detach():\n self._cleanup()\n\n\ndef maybe_register_dataset_for_temp_dir_deletion(dataset):\n """\n This function registers the datasets that have cache files in _TEMP_DIR_FOR_TEMP_CACHE_FILES in order\n to properly delete them before deleting the temporary directory.\n The temporary directory _TEMP_DIR_FOR_TEMP_CACHE_FILES is used when caching is disabled.\n """\n if _TEMP_DIR_FOR_TEMP_CACHE_FILES is None:\n return\n\n global _DATASETS_WITH_TABLE_IN_TEMP_DIR\n if _DATASETS_WITH_TABLE_IN_TEMP_DIR is None:\n _DATASETS_WITH_TABLE_IN_TEMP_DIR = weakref.WeakSet()\n if any(\n Path(_TEMP_DIR_FOR_TEMP_CACHE_FILES.name) in Path(cache_file["filename"]).parents\n for cache_file in dataset.cache_files\n ):\n _DATASETS_WITH_TABLE_IN_TEMP_DIR.add(dataset)\n\n\ndef get_datasets_with_cache_file_in_temp_dir():\n return list(_DATASETS_WITH_TABLE_IN_TEMP_DIR) if _DATASETS_WITH_TABLE_IN_TEMP_DIR is not None else []\n\n\ndef enable_caching():\n """\n When applying transforms on a dataset, the data are stored in cache files.\n The caching mechanism allows to reload an existing cache file if it's already been computed.\n\n Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated\n after each transform.\n\n If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely, if the caching is disabled:\n - cache files are always recreated\n - cache files are written to a temporary directory that is deleted when session closes\n - cache files are named using a random hash instead of the dataset fingerprint\n - use [`~datasets.Dataset.save_to_disk`] to save a transformed dataset or it will be deleted when session closes\n - caching doesn't affect [`~datasets.load_dataset`]. If you want to regenerate a dataset from scratch you should use\n the `download_mode` parameter in [`~datasets.load_dataset`].\n """\n global _CACHING_ENABLED\n _CACHING_ENABLED = True\n\n\ndef disable_caching():\n """\n When applying transforms on a dataset, the data are stored in cache files.\n The caching mechanism allows to reload an existing cache file if it's already been computed.\n\n Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated\n after each transform.\n\n If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely, if the caching is disabled:\n - cache files are always recreated\n - cache files are written to a temporary directory that is deleted when session closes\n - cache files are named using a random hash instead of the dataset fingerprint\n - use [`~datasets.Dataset.save_to_disk`] to save a transformed dataset or it will be deleted when session closes\n - caching doesn't affect [`~datasets.load_dataset`]. 
If you want to regenerate a dataset from scratch you should use\n the `download_mode` parameter in [`~datasets.load_dataset`].\n """\n global _CACHING_ENABLED\n _CACHING_ENABLED = False\n\n\ndef is_caching_enabled() -> bool:\n """\n When applying transforms on a dataset, the data are stored in cache files.\n The caching mechanism allows to reload an existing cache file if it's already been computed.\n\n Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated\n after each transform.\n\n If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely, if the caching is disabled:\n - cache files are always recreated\n - cache files are written to a temporary directory that is deleted when session closes\n - cache files are named using a random hash instead of the dataset fingerprint\n - use [`~datasets.Dataset.save_to_disk`]] to save a transformed dataset or it will be deleted when session closes\n - caching doesn't affect [`~datasets.load_dataset`]. If you want to regenerate a dataset from scratch you should use\n the `download_mode` parameter in [`~datasets.load_dataset`].\n """\n global _CACHING_ENABLED\n return bool(_CACHING_ENABLED)\n\n\ndef get_temporary_cache_files_directory() -> str:\n """Return a directory that is deleted when session closes."""\n global _TEMP_DIR_FOR_TEMP_CACHE_FILES\n if _TEMP_DIR_FOR_TEMP_CACHE_FILES is None:\n _TEMP_DIR_FOR_TEMP_CACHE_FILES = _TempCacheDir()\n return _TEMP_DIR_FOR_TEMP_CACHE_FILES.name\n\n\n#################\n# Hashing\n#################\n\n\nclass Hasher:\n """Hasher that accepts python objects as inputs."""\n\n dispatch: dict = {}\n\n def __init__(self):\n self.m = xxhash.xxh64()\n\n @classmethod\n def hash_bytes(cls, value: Union[bytes, list[bytes]]) -> str:\n value = [value] if isinstance(value, bytes) else value\n m = xxhash.xxh64()\n for x in value:\n m.update(x)\n return m.hexdigest()\n\n @classmethod\n def hash(cls, value: Any) -> str:\n return cls.hash_bytes(dumps(value))\n\n def update(self, value: Any) -> None:\n header_for_update = f"=={type(value)}=="\n value_for_update = self.hash(value)\n self.m.update(header_for_update.encode("utf8"))\n self.m.update(value_for_update.encode("utf-8"))\n\n def hexdigest(self) -> str:\n return self.m.hexdigest()\n\n\n#################\n# Fingerprinting\n#################\n\nfingerprint_rng = random.Random()\n# we show a warning only once when fingerprinting fails to avoid spam\nfingerprint_warnings: dict[str, bool] = {}\n\n\ndef generate_fingerprint(dataset: "Dataset") -> str:\n state = dataset.__dict__\n hasher = Hasher()\n for key in sorted(state):\n if key == "_fingerprint":\n continue\n hasher.update(key)\n hasher.update(state[key])\n # hash data files last modification timestamps as well\n for cache_file in dataset.cache_files:\n hasher.update(os.path.getmtime(cache_file["filename"]))\n return hasher.hexdigest()\n\n\ndef generate_random_fingerprint(nbits: int = 64) -> str:\n return f"{fingerprint_rng.getrandbits(nbits):0{nbits // 4}x}"\n\n\ndef update_fingerprint(fingerprint, transform, transform_args):\n global fingerprint_warnings\n hasher = Hasher()\n hasher.update(fingerprint)\n try:\n hasher.update(transform)\n except: # noqa various errors might raise here from pickle or dill\n if _CACHING_ENABLED:\n if not fingerprint_warnings.get("update_fingerprint_transform_hash_failed", False):\n logger.warning(\n f"Transform {transform} couldn't be hashed properly, a random hash was 
used instead. "\n "Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. "\n "If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. "\n "This warning is only showed once. Subsequent hashing failures won't be showed."\n )\n fingerprint_warnings["update_fingerprint_transform_hash_failed"] = True\n else:\n logger.info(f"Transform {transform} couldn't be hashed properly, a random hash was used instead.")\n else:\n logger.info(\n f"Transform {transform} couldn't be hashed properly, a random hash was used instead. This doesn't affect caching since it's disabled."\n )\n\n return generate_random_fingerprint()\n for key in sorted(transform_args):\n hasher.update(key)\n try:\n hasher.update(transform_args[key])\n except: # noqa various errors might raise here from pickle or dill\n if _CACHING_ENABLED:\n if not fingerprint_warnings.get("update_fingerprint_transform_hash_failed", False):\n logger.warning(\n f"Parameter '{key}'={transform_args[key]} of the transform {transform} couldn't be hashed properly, a random hash was used instead. "\n "Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. "\n "If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. "\n "This warning is only showed once. Subsequent hashing failures won't be showed."\n )\n fingerprint_warnings["update_fingerprint_transform_hash_failed"] = True\n else:\n logger.info(\n f"Parameter '{key}'={transform_args[key]} of the transform {transform} couldn't be hashed properly, a random hash was used instead."\n )\n else:\n logger.info(\n f"Parameter '{key}'={transform_args[key]} of the transform {transform} couldn't be hashed properly, a random hash was used instead. This doesn't affect caching since it's disabled."\n )\n return generate_random_fingerprint()\n return hasher.hexdigest()\n\n\ndef validate_fingerprint(fingerprint: str, max_length=64):\n """\n Make sure the fingerprint is a non-empty string that is not longer that max_length=64 by default,\n so that the fingerprint can be used to name cache files without issues.\n """\n if not isinstance(fingerprint, str) or not fingerprint:\n raise ValueError(f"Invalid fingerprint '{fingerprint}': it should be a non-empty string.")\n for invalid_char in INVALID_WINDOWS_CHARACTERS_IN_PATH:\n if invalid_char in fingerprint:\n raise ValueError(\n f"Invalid fingerprint. Bad characters from black list '{INVALID_WINDOWS_CHARACTERS_IN_PATH}' found in '{fingerprint}'. "\n f"They could create issues when creating cache files."\n )\n if len(fingerprint) > max_length:\n raise ValueError(\n f"Invalid fingerprint. 
Maximum lenth is {max_length} but '{fingerprint}' has length {len(fingerprint)}."\n "It could create issues when creating cache files."\n )\n\n\ndef format_transform_for_fingerprint(func: Callable, version: Optional[str] = None) -> str:\n """\n Format a transform to the format that will be used to update the fingerprint.\n """\n transform = f"{func.__module__}.{func.__qualname__}"\n if version is not None:\n transform += f"@{version}"\n return transform\n\n\ndef format_kwargs_for_fingerprint(\n func: Callable,\n args: tuple,\n kwargs: dict[str, Any],\n use_kwargs: Optional[list[str]] = None,\n ignore_kwargs: Optional[list[str]] = None,\n randomized_function: bool = False,\n) -> dict[str, Any]:\n """\n Format the kwargs of a transform to the format that will be used to update the fingerprint.\n """\n kwargs_for_fingerprint = kwargs.copy()\n if args:\n params = [p.name for p in inspect.signature(func).parameters.values() if p != p.VAR_KEYWORD]\n args = args[1:] # assume the first argument is the dataset\n params = params[1:]\n kwargs_for_fingerprint.update(zip(params, args))\n else:\n del kwargs_for_fingerprint[\n next(iter(inspect.signature(func).parameters))\n ] # assume the first key is the dataset\n\n # keep the right kwargs to be hashed to generate the fingerprint\n\n if use_kwargs:\n kwargs_for_fingerprint = {k: v for k, v in kwargs_for_fingerprint.items() if k in use_kwargs}\n if ignore_kwargs:\n kwargs_for_fingerprint = {k: v for k, v in kwargs_for_fingerprint.items() if k not in ignore_kwargs}\n if randomized_function: # randomized functions have `seed` and `generator` parameters\n if kwargs_for_fingerprint.get("seed") is None and kwargs_for_fingerprint.get("generator") is None:\n _, seed, pos, *_ = np.random.get_state()\n seed = seed[pos] if pos < 624 else seed[0]\n kwargs_for_fingerprint["generator"] = np.random.default_rng(seed)\n\n # remove kwargs that are the default values\n\n default_values = {\n p.name: p.default for p in inspect.signature(func).parameters.values() if p.default != inspect._empty\n }\n for default_varname, default_value in default_values.items():\n if default_varname in kwargs_for_fingerprint and kwargs_for_fingerprint[default_varname] == default_value:\n kwargs_for_fingerprint.pop(default_varname)\n return kwargs_for_fingerprint\n\n\ndef fingerprint_transform(\n inplace: bool,\n use_kwargs: Optional[list[str]] = None,\n ignore_kwargs: Optional[list[str]] = None,\n fingerprint_names: Optional[list[str]] = None,\n randomized_function: bool = False,\n version: Optional[str] = None,\n):\n """\n Wrapper for dataset transforms to update the dataset fingerprint using ``update_fingerprint``\n Args:\n inplace (:obj:`bool`): If inplace is True, the fingerprint of the dataset is updated inplace.\n Otherwise, a parameter "new_fingerprint" is passed to the wrapped method that should take care of\n setting the fingerprint of the returned Dataset.\n use_kwargs (:obj:`List[str]`, optional): optional white list of argument names to take into account\n to update the fingerprint to the wrapped method that should take care of\n setting the fingerprint of the returned Dataset. By default all the arguments are used.\n ignore_kwargs (:obj:`List[str]`, optional): optional black list of argument names to take into account\n to update the fingerprint. 
Note that ignore_kwargs prevails on use_kwargs.\n fingerprint_names (:obj:`List[str]`, optional, defaults to ["new_fingerprint"]):\n If the dataset transforms is not inplace and returns a DatasetDict, then it can require\n several fingerprints (one per dataset in the DatasetDict). By specifying fingerprint_names,\n one fingerprint named after each element of fingerprint_names is going to be passed.\n randomized_function (:obj:`bool`, defaults to False): If the dataset transform is random and has\n optional parameters "seed" and "generator", then you can set randomized_function to True.\n This way, even if users set "seed" and "generator" to None, then the fingerprint is\n going to be randomly generated depending on numpy's current state. In this case, the\n generator is set to np.random.default_rng(np.random.get_state()[1][0]).\n version (:obj:`str`, optional): version of the transform. The version is taken into account when\n computing the fingerprint. If a datase transform changes (or at least if the output data\n that are cached changes), then one should increase the version. If the version stays the\n same, then old cached data could be reused that are not compatible with the new transform.\n It should be in the format "MAJOR.MINOR.PATCH".\n """\n\n if use_kwargs is not None and not isinstance(use_kwargs, list):\n raise ValueError(f"use_kwargs is supposed to be a list, not {type(use_kwargs)}")\n\n if ignore_kwargs is not None and not isinstance(ignore_kwargs, list):\n raise ValueError(f"ignore_kwargs is supposed to be a list, not {type(use_kwargs)}")\n\n if inplace and fingerprint_names:\n raise ValueError("fingerprint_names are only used when inplace is False")\n\n fingerprint_names = fingerprint_names if fingerprint_names is not None else ["new_fingerprint"]\n\n def _fingerprint(func):\n if not inplace and not all(name in func.__code__.co_varnames for name in fingerprint_names):\n raise ValueError(f"function {func} is missing parameters {fingerprint_names} in signature")\n\n if randomized_function: # randomized function have seed and generator parameters\n if "seed" not in func.__code__.co_varnames:\n raise ValueError(f"'seed' must be in {func}'s signature")\n if "generator" not in func.__code__.co_varnames:\n raise ValueError(f"'generator' must be in {func}'s signature")\n # this call has to be outside the wrapper or since __qualname__ changes in multiprocessing\n transform = format_transform_for_fingerprint(func, version=version)\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n kwargs_for_fingerprint = format_kwargs_for_fingerprint(\n func,\n args,\n kwargs,\n use_kwargs=use_kwargs,\n ignore_kwargs=ignore_kwargs,\n randomized_function=randomized_function,\n )\n\n if args:\n dataset: Dataset = args[0]\n args = args[1:]\n else:\n dataset: Dataset = kwargs.pop(next(iter(inspect.signature(func).parameters)))\n\n # compute new_fingerprint and add it to the args of not in-place transforms\n if inplace:\n new_fingerprint = update_fingerprint(dataset._fingerprint, transform, kwargs_for_fingerprint)\n else:\n for fingerprint_name in fingerprint_names: # transforms like `train_test_split` have several hashes\n if kwargs.get(fingerprint_name) is None:\n kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name\n kwargs[fingerprint_name] = update_fingerprint(\n dataset._fingerprint, transform, kwargs_for_fingerprint\n )\n else:\n validate_fingerprint(kwargs[fingerprint_name])\n\n # Call actual function\n\n out = func(dataset, *args, **kwargs)\n\n # Update fingerprint of in-place 
transforms + update in-place history of transforms\n\n if inplace: # update after calling func so that the fingerprint doesn't change if the function fails\n dataset._fingerprint = new_fingerprint\n\n return out\n\n wrapper._decorator_name_ = "fingerprint"\n return wrapper\n\n return _fingerprint\n
|
.venv\Lib\site-packages\datasets\fingerprint.py
|
fingerprint.py
|
Python
| 20,333 | 0.95 | 0.229075 | 0.062162 |
vue-tools
| 771 |
2025-05-13T13:43:03.002205
|
GPL-3.0
| false |
8eaf3ef739da8d281ef5541878c057c6
|
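In the `fingerprint.py` record, `update_fingerprint` chains the previous fingerprint with the transform name and its sorted kwargs into a new deterministic hash, which is what lets cache files be found again across sessions. Below is a rough, self-contained sketch of that chaining idea; it uses stdlib `hashlib.sha256` over `repr()` as a stand-in for the xxhash64-over-dill hashing the real module uses, and the `toy_*` names are mine.

```python
import hashlib
from typing import Any


def toy_hash(value: Any) -> str:
    # Stand-in for Hasher.hash (xxhash64 over a dill dump in the real module).
    return hashlib.sha256(repr(value).encode("utf-8")).hexdigest()[:16]


def toy_update_fingerprint(fingerprint: str, transform: str, transform_args: dict) -> str:
    m = hashlib.sha256()
    m.update(fingerprint.encode("utf-8"))
    m.update(toy_hash(transform).encode("utf-8"))
    for key in sorted(transform_args):  # sorted keys keep the result order-independent
        m.update(toy_hash(key).encode("utf-8"))
        m.update(toy_hash(transform_args[key]).encode("utf-8"))
    return m.hexdigest()[:16]


fp0 = toy_hash("initial dataset state")
fp1 = toy_update_fingerprint(fp0, "Dataset.map", {"batched": True, "num_proc": 4})
fp2 = toy_update_fingerprint(fp0, "Dataset.map", {"batched": True, "num_proc": 4})
assert fp1 == fp2  # identical transform + args -> identical fingerprint -> cache hit
print(fp0, "->", fp1)
```

When a transform or one of its arguments cannot be hashed, the real module falls back to `generate_random_fingerprint()`, which is why unpicklable callables defeat caching.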
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n"""DatasetInfo record information we know about a dataset.\n\nThis includes things that we know about the dataset statically, i.e.:\n - description\n - canonical location\n - does it have validation and tests splits\n - size\n - etc.\n\nThis also includes the things that can and should be computed once we've\nprocessed the dataset as well:\n - number of examples (in each split)\n - etc.\n"""\n\nimport copy\nimport dataclasses\nimport json\nimport os\nimport posixpath\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import ClassVar, Optional, Union\n\nimport fsspec\nfrom fsspec.core import url_to_fs\nfrom huggingface_hub import DatasetCard, DatasetCardData\n\nfrom . import config\nfrom .features import Features\nfrom .splits import SplitDict\nfrom .utils import Version\nfrom .utils.logging import get_logger\nfrom .utils.py_utils import asdict, unique_values\n\n\nlogger = get_logger(__name__)\n\n\n@dataclass\nclass SupervisedKeysData:\n input: str = ""\n output: str = ""\n\n\n@dataclass\nclass DownloadChecksumsEntryData:\n key: str = ""\n value: str = ""\n\n\nclass MissingCachedSizesConfigError(Exception):\n """The expected cached sizes of the download file are missing."""\n\n\nclass NonMatchingCachedSizesError(Exception):\n """The prepared split doesn't have expected sizes."""\n\n\n@dataclass\nclass PostProcessedInfo:\n features: Optional[Features] = None\n resources_checksums: Optional[dict] = None\n\n def __post_init__(self):\n # Convert back to the correct classes when we reload from dict\n if self.features is not None and not isinstance(self.features, Features):\n self.features = Features.from_dict(self.features)\n\n @classmethod\n def from_dict(cls, post_processed_info_dict: dict) -> "PostProcessedInfo":\n field_names = {f.name for f in dataclasses.fields(cls)}\n return cls(**{k: v for k, v in post_processed_info_dict.items() if k in field_names})\n\n\n@dataclass\nclass DatasetInfo:\n """Information about a dataset.\n\n `DatasetInfo` documents datasets, including its name, version, and features.\n See the constructor arguments and properties for a full list.\n\n Not all fields are known on construction and may be updated later.\n\n Attributes:\n description (`str`):\n A description of the dataset.\n citation (`str`):\n A BibTeX citation of the dataset.\n homepage (`str`):\n A URL to the official homepage for the dataset.\n license (`str`):\n The dataset's license. It can be the name of the license or a paragraph containing the terms of the license.\n features ([`Features`], *optional*):\n The features used to specify the dataset's column types.\n post_processed (`PostProcessedInfo`, *optional*):\n Information regarding the resources of a possible post-processing of a dataset. 
For example, it can contain the information of an index.\n supervised_keys (`SupervisedKeysData`, *optional*):\n Specifies the input feature and the label for supervised learning if applicable for the dataset (legacy from TFDS).\n builder_name (`str`, *optional*):\n The name of the `GeneratorBasedBuilder` subclass used to create the dataset. Usually matched to the corresponding script name. It is also the snake_case version of the dataset builder class name.\n config_name (`str`, *optional*):\n The name of the configuration derived from [`BuilderConfig`].\n version (`str` or [`Version`], *optional*):\n The version of the dataset.\n splits (`dict`, *optional*):\n The mapping between split name and metadata.\n download_checksums (`dict`, *optional*):\n The mapping between the URL to download the dataset's checksums and corresponding metadata.\n download_size (`int`, *optional*):\n The size of the files to download to generate the dataset, in bytes.\n post_processing_size (`int`, *optional*):\n Size of the dataset in bytes after post-processing, if any.\n dataset_size (`int`, *optional*):\n The combined size in bytes of the Arrow tables for all splits.\n size_in_bytes (`int`, *optional*):\n The combined size in bytes of all files associated with the dataset (downloaded files + Arrow files).\n **config_kwargs (additional keyword arguments):\n Keyword arguments to be passed to the [`BuilderConfig`] and used in the [`DatasetBuilder`].\n """\n\n # Set in the dataset scripts\n description: str = dataclasses.field(default_factory=str)\n citation: str = dataclasses.field(default_factory=str)\n homepage: str = dataclasses.field(default_factory=str)\n license: str = dataclasses.field(default_factory=str)\n features: Optional[Features] = None\n post_processed: Optional[PostProcessedInfo] = None\n supervised_keys: Optional[SupervisedKeysData] = None\n\n # Set later by the builder\n builder_name: Optional[str] = None\n dataset_name: Optional[str] = None # for packaged builders, to be different from builder_name\n config_name: Optional[str] = None\n version: Optional[Union[str, Version]] = None\n # Set later by `download_and_prepare`\n splits: Optional[dict] = None\n download_checksums: Optional[dict] = None\n download_size: Optional[int] = None\n post_processing_size: Optional[int] = None\n dataset_size: Optional[int] = None\n size_in_bytes: Optional[int] = None\n\n _INCLUDED_INFO_IN_YAML: ClassVar[list[str]] = [\n "config_name",\n "download_size",\n "dataset_size",\n "features",\n "splits",\n ]\n\n def __post_init__(self):\n # Convert back to the correct classes when we reload from dict\n if self.features is not None and not isinstance(self.features, Features):\n self.features = Features.from_dict(self.features)\n if self.post_processed is not None and not isinstance(self.post_processed, PostProcessedInfo):\n self.post_processed = PostProcessedInfo.from_dict(self.post_processed)\n if self.version is not None and not isinstance(self.version, Version):\n if isinstance(self.version, str):\n self.version = Version(self.version)\n else:\n self.version = Version.from_dict(self.version)\n if self.splits is not None and not isinstance(self.splits, SplitDict):\n self.splits = SplitDict.from_split_dict(self.splits)\n if self.supervised_keys is not None and not isinstance(self.supervised_keys, SupervisedKeysData):\n if isinstance(self.supervised_keys, (tuple, list)):\n self.supervised_keys = SupervisedKeysData(*self.supervised_keys)\n else:\n self.supervised_keys = SupervisedKeysData(**self.supervised_keys)\n\n 
def write_to_directory(self, dataset_info_dir, pretty_print=False, storage_options: Optional[dict] = None):\n """Write `DatasetInfo` and license (if present) as JSON files to `dataset_info_dir`.\n\n Args:\n dataset_info_dir (`str`):\n Destination directory.\n pretty_print (`bool`, defaults to `False`):\n If `True`, the JSON will be pretty-printed with the indent level of 4.\n storage_options (`dict`, *optional*):\n Key/value pairs to be passed on to the file-system backend, if any.\n\n <Added version="2.9.0"/>\n\n Example:\n\n ```py\n >>> from datasets import load_dataset\n >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation")\n >>> ds.info.write_to_directory("/path/to/directory/")\n ```\n """\n fs: fsspec.AbstractFileSystem\n fs, *_ = url_to_fs(dataset_info_dir, **(storage_options or {}))\n with fs.open(posixpath.join(dataset_info_dir, config.DATASET_INFO_FILENAME), "wb") as f:\n self._dump_info(f, pretty_print=pretty_print)\n if self.license:\n with fs.open(posixpath.join(dataset_info_dir, config.LICENSE_FILENAME), "wb") as f:\n self._dump_license(f)\n\n def _dump_info(self, file, pretty_print=False):\n """Dump info in `file` file-like object open in bytes mode (to support remote files)"""\n file.write(json.dumps(asdict(self), indent=4 if pretty_print else None).encode("utf-8"))\n\n def _dump_license(self, file):\n """Dump license in `file` file-like object open in bytes mode (to support remote files)"""\n file.write(self.license.encode("utf-8"))\n\n @classmethod\n def from_merge(cls, dataset_infos: list["DatasetInfo"]):\n dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]\n\n if len(dataset_infos) > 0 and all(dataset_infos[0] == dset_info for dset_info in dataset_infos):\n # if all dataset_infos are equal we don't need to merge. Just return the first.\n return dataset_infos[0]\n\n description = "\n\n".join(unique_values(info.description for info in dataset_infos)).strip()\n citation = "\n\n".join(unique_values(info.citation for info in dataset_infos)).strip()\n homepage = "\n\n".join(unique_values(info.homepage for info in dataset_infos)).strip()\n license = "\n\n".join(unique_values(info.license for info in dataset_infos)).strip()\n features = None\n supervised_keys = None\n\n return cls(\n description=description,\n citation=citation,\n homepage=homepage,\n license=license,\n features=features,\n supervised_keys=supervised_keys,\n )\n\n @classmethod\n def from_directory(cls, dataset_info_dir: str, storage_options: Optional[dict] = None) -> "DatasetInfo":\n """Create [`DatasetInfo`] from the JSON file in `dataset_info_dir`.\n\n This function updates all the dynamically generated fields (num_examples,\n hash, time of creation,...) of the [`DatasetInfo`].\n\n This will overwrite all previous metadata.\n\n Args:\n dataset_info_dir (`str`):\n The directory containing the metadata file. 
This\n should be the root directory of a specific dataset version.\n storage_options (`dict`, *optional*):\n Key/value pairs to be passed on to the file-system backend, if any.\n\n <Added version="2.9.0"/>\n\n Example:\n\n ```py\n >>> from datasets import DatasetInfo\n >>> ds_info = DatasetInfo.from_directory("/path/to/directory/")\n ```\n """\n fs: fsspec.AbstractFileSystem\n fs, *_ = url_to_fs(dataset_info_dir, **(storage_options or {}))\n logger.info(f"Loading Dataset info from {dataset_info_dir}")\n if not dataset_info_dir:\n raise ValueError("Calling DatasetInfo.from_directory() with undefined dataset_info_dir.")\n with fs.open(posixpath.join(dataset_info_dir, config.DATASET_INFO_FILENAME), "r", encoding="utf-8") as f:\n dataset_info_dict = json.load(f)\n return cls.from_dict(dataset_info_dict)\n\n @classmethod\n def from_dict(cls, dataset_info_dict: dict) -> "DatasetInfo":\n field_names = {f.name for f in dataclasses.fields(cls)}\n return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})\n\n def update(self, other_dataset_info: "DatasetInfo", ignore_none=True):\n self_dict = self.__dict__\n self_dict.update(\n **{\n k: copy.deepcopy(v)\n for k, v in other_dataset_info.__dict__.items()\n if (v is not None or not ignore_none)\n }\n )\n\n def copy(self) -> "DatasetInfo":\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\n\n def _to_yaml_dict(self) -> dict:\n yaml_dict = {}\n dataset_info_dict = asdict(self)\n for key in dataset_info_dict:\n if key in self._INCLUDED_INFO_IN_YAML:\n value = getattr(self, key)\n if hasattr(value, "_to_yaml_list"): # Features, SplitDict\n yaml_dict[key] = value._to_yaml_list()\n elif hasattr(value, "_to_yaml_string"): # Version\n yaml_dict[key] = value._to_yaml_string()\n else:\n yaml_dict[key] = value\n return yaml_dict\n\n @classmethod\n def _from_yaml_dict(cls, yaml_data: dict) -> "DatasetInfo":\n yaml_data = copy.deepcopy(yaml_data)\n if yaml_data.get("features") is not None:\n yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])\n if yaml_data.get("splits") is not None:\n yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])\n field_names = {f.name for f in dataclasses.fields(cls)}\n return cls(**{k: v for k, v in yaml_data.items() if k in field_names})\n\n\nclass DatasetInfosDict(dict[str, DatasetInfo]):\n def write_to_directory(self, dataset_infos_dir, overwrite=False, pretty_print=False) -> None:\n total_dataset_infos = {}\n dataset_infos_path = os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)\n dataset_readme_path = os.path.join(dataset_infos_dir, config.REPOCARD_FILENAME)\n if not overwrite:\n total_dataset_infos = self.from_directory(dataset_infos_dir)\n total_dataset_infos.update(self)\n if os.path.exists(dataset_infos_path):\n # for backward compatibility, let's update the JSON file if it exists\n with open(dataset_infos_path, "w", encoding="utf-8") as f:\n dataset_infos_dict = {\n config_name: asdict(dset_info) for config_name, dset_info in total_dataset_infos.items()\n }\n json.dump(dataset_infos_dict, f, indent=4 if pretty_print else None)\n # Dump the infos in the YAML part of the README.md file\n if os.path.exists(dataset_readme_path):\n dataset_card = DatasetCard.load(dataset_readme_path)\n dataset_card_data = dataset_card.data\n else:\n dataset_card = None\n dataset_card_data = DatasetCardData()\n if total_dataset_infos:\n total_dataset_infos.to_dataset_card_data(dataset_card_data)\n dataset_card = (\n DatasetCard("---\n" + 
str(dataset_card_data) + "\n---\n") if dataset_card is None else dataset_card\n )\n dataset_card.save(Path(dataset_readme_path))\n\n @classmethod\n def from_directory(cls, dataset_infos_dir) -> "DatasetInfosDict":\n logger.info(f"Loading Dataset Infos from {dataset_infos_dir}")\n # Load the info from the YAML part of README.md\n if os.path.exists(os.path.join(dataset_infos_dir, config.REPOCARD_FILENAME)):\n dataset_card_data = DatasetCard.load(Path(dataset_infos_dir) / config.REPOCARD_FILENAME).data\n if "dataset_info" in dataset_card_data:\n return cls.from_dataset_card_data(dataset_card_data)\n if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)):\n # this is just to have backward compatibility with dataset_infos.json files\n with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f:\n return cls(\n {\n config_name: DatasetInfo.from_dict(dataset_info_dict)\n for config_name, dataset_info_dict in json.load(f).items()\n }\n )\n else:\n return cls()\n\n @classmethod\n def from_dataset_card_data(cls, dataset_card_data: DatasetCardData) -> "DatasetInfosDict":\n if isinstance(dataset_card_data.get("dataset_info"), (list, dict)):\n if isinstance(dataset_card_data["dataset_info"], list):\n return cls(\n {\n dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict(\n dataset_info_yaml_dict\n )\n for dataset_info_yaml_dict in dataset_card_data["dataset_info"]\n }\n )\n else:\n dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])\n dataset_info.config_name = dataset_card_data["dataset_info"].get("config_name", "default")\n return cls({dataset_info.config_name: dataset_info})\n else:\n return cls()\n\n def to_dataset_card_data(self, dataset_card_data: DatasetCardData) -> None:\n if self:\n # first get existing metadata info\n if "dataset_info" in dataset_card_data and isinstance(dataset_card_data["dataset_info"], dict):\n dataset_metadata_infos = {\n dataset_card_data["dataset_info"].get("config_name", "default"): dataset_card_data["dataset_info"]\n }\n elif "dataset_info" in dataset_card_data and isinstance(dataset_card_data["dataset_info"], list):\n dataset_metadata_infos = {\n config_metadata["config_name"]: config_metadata\n for config_metadata in dataset_card_data["dataset_info"]\n }\n else:\n dataset_metadata_infos = {}\n # update/rewrite existing metadata info with the one to dump\n total_dataset_infos = {\n **dataset_metadata_infos,\n **{config_name: dset_info._to_yaml_dict() for config_name, dset_info in self.items()},\n }\n # the config_name from the dataset_infos_dict takes over the config_name of the DatasetInfo\n for config_name, dset_info_yaml_dict in total_dataset_infos.items():\n dset_info_yaml_dict["config_name"] = config_name\n if len(total_dataset_infos) == 1:\n # use a struct instead of a list of configurations, since there's only one\n dataset_card_data["dataset_info"] = next(iter(total_dataset_infos.values()))\n config_name = dataset_card_data["dataset_info"].pop("config_name", None)\n if config_name != "default":\n # if config_name is not "default" preserve it and put at the first position\n dataset_card_data["dataset_info"] = {\n "config_name": config_name,\n **dataset_card_data["dataset_info"],\n }\n else:\n dataset_card_data["dataset_info"] = []\n for config_name, dataset_info_yaml_dict in sorted(total_dataset_infos.items()):\n # add the config_name field in first position\n dataset_info_yaml_dict.pop("config_name", None)\n dataset_info_yaml_dict = 
{"config_name": config_name, **dataset_info_yaml_dict}\n dataset_card_data["dataset_info"].append(dataset_info_yaml_dict)\n
|
.venv\Lib\site-packages\datasets\info.py
|
info.py
|
Python
| 19,689 | 0.95 | 0.232558 | 0.093834 |
awesome-app
| 543 |
2025-02-27T11:49:47.933386
|
GPL-3.0
| false |
115220baf97af9fedcc8c2f637eaa294
|
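The `info.py` record shows `DatasetInfo` being dumped to `dataset_info.json` (plus a `LICENSE` file when a license string is set) and reloaded via `from_directory`. A small round-trip sketch based on the docstring examples above; it assumes `datasets` is installed, and the field values are placeholders.

```python
# Round-trip sketch, assuming `pip install datasets`; field values are placeholders.
import tempfile

from datasets import DatasetInfo

info = DatasetInfo(description="toy dataset", license="mit", version="1.0.0")

with tempfile.TemporaryDirectory() as tmp_dir:
    info.write_to_directory(tmp_dir, pretty_print=True)  # writes dataset_info.json + LICENSE
    reloaded = DatasetInfo.from_directory(tmp_dir)

print(reloaded.description, str(reloaded.version))
```

Note that `__post_init__` upgrades the plain `"1.0.0"` string into a `Version` object, the same normalization applied to `features`, `splits`, and `supervised_keys`.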
# Copyright 2020 The HuggingFace Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n"""List and inspect datasets."""\n\nimport os\nfrom collections.abc import Mapping, Sequence\nfrom typing import Optional, Union\n\nfrom .download.download_config import DownloadConfig\nfrom .download.download_manager import DownloadMode\nfrom .download.streaming_download_manager import StreamingDownloadManager\nfrom .info import DatasetInfo\nfrom .load import (\n dataset_module_factory,\n get_dataset_builder_class,\n load_dataset_builder,\n)\nfrom .utils.logging import get_logger\nfrom .utils.version import Version\n\n\nlogger = get_logger(__name__)\n\n\nclass SplitsNotFoundError(ValueError):\n pass\n\n\ndef get_dataset_infos(\n path: str,\n data_files: Optional[Union[dict, list, str]] = None,\n download_config: Optional[DownloadConfig] = None,\n download_mode: Optional[Union[DownloadMode, str]] = None,\n revision: Optional[Union[str, Version]] = None,\n token: Optional[Union[bool, str]] = None,\n **config_kwargs,\n):\n """Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.\n\n Args:\n path (`str`): path to the dataset processing script with the dataset builder. Can be either:\n\n - a local path to processing script or the directory containing the script (if the script has the same name as the directory),\n e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`\n - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`huggingface_hub.list_datasets`]),\n e.g. 
`'rajpurkar/squad'`, `'nyu-mll/glue'` or``'openai/webtext'`\n revision (`Union[str, datasets.Version]`, *optional*):\n If specified, the dataset module will be loaded from the datasets repository at this version.\n By default:\n - it is set to the local version of the lib.\n - it will also try to load it from the main branch if it's not available at the local version of the lib.\n Specifying a version that is different from your local version of the lib might cause compatibility issues.\n download_config ([`DownloadConfig`], *optional*):\n Specific download configuration parameters.\n download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):\n Download/generate mode.\n data_files (`Union[Dict, List, str]`, *optional*):\n Defining the data_files of the dataset configuration.\n token (`str` or `bool`, *optional*):\n Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.\n If `True`, or not specified, will get token from `"~/.huggingface"`.\n **config_kwargs (additional keyword arguments):\n Optional attributes for builder class which will override the attributes if supplied.\n\n Example:\n\n ```py\n >>> from datasets import get_dataset_infos\n >>> get_dataset_infos('cornell-movie-review-data/rotten_tomatoes')\n {'default': DatasetInfo(description="Movie Review Dataset.\nThis is a dataset of containing 5,331 positive and 5,331 negative processed\nsentences from Rotten Tomatoes movie reviews...), ...}\n ```\n """\n config_names = get_dataset_config_names(\n path=path,\n revision=revision,\n download_config=download_config,\n download_mode=download_mode,\n data_files=data_files,\n token=token,\n )\n return {\n config_name: get_dataset_config_info(\n path=path,\n config_name=config_name,\n data_files=data_files,\n download_config=download_config,\n download_mode=download_mode,\n revision=revision,\n token=token,\n **config_kwargs,\n )\n for config_name in config_names\n }\n\n\ndef get_dataset_config_names(\n path: str,\n revision: Optional[Union[str, Version]] = None,\n download_config: Optional[DownloadConfig] = None,\n download_mode: Optional[Union[DownloadMode, str]] = None,\n dynamic_modules_path: Optional[str] = None,\n data_files: Optional[Union[dict, list, str]] = None,\n **download_kwargs,\n):\n """Get the list of available config names for a particular dataset.\n\n Args:\n path (`str`): path to the dataset processing script with the dataset builder. Can be either:\n\n - a local path to processing script or the directory containing the script (if the script has the same name as the directory),\n e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`\n - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`huggingface_hub.list_datasets`]),\n e.g. 
`'rajpurkar/squad'`, `'nyu-mll/glue'` or `'openai/webtext'`\n revision (`Union[str, datasets.Version]`, *optional*):\n If specified, the dataset module will be loaded from the datasets repository at this version.\n By default:\n - it is set to the local version of the lib.\n - it will also try to load it from the main branch if it's not available at the local version of the lib.\n Specifying a version that is different from your local version of the lib might cause compatibility issues.\n download_config ([`DownloadConfig`], *optional*):\n Specific download configuration parameters.\n download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):\n Download/generate mode.\n dynamic_modules_path (`str`, defaults to `~/.cache/huggingface/modules/datasets_modules`):\n Optional path to the directory in which the dynamic modules are saved. It must have been initialized with `init_dynamic_modules`.\n By default the datasets are stored inside the `datasets_modules` module.\n data_files (`Union[Dict, List, str]`, *optional*):\n Defining the data_files of the dataset configuration.\n **download_kwargs (additional keyword arguments):\n Optional attributes for [`DownloadConfig`] which will override the attributes in `download_config` if supplied,\n for example `token`.\n\n Example:\n\n ```py\n >>> from datasets import get_dataset_config_names\n >>> get_dataset_config_names("nyu-mll/glue")\n ['cola',\n 'sst2',\n 'mrpc',\n 'qqp',\n 'stsb',\n 'mnli',\n 'mnli_mismatched',\n 'mnli_matched',\n 'qnli',\n 'rte',\n 'wnli',\n 'ax']\n ```\n """\n dataset_module = dataset_module_factory(\n path,\n revision=revision,\n download_config=download_config,\n download_mode=download_mode,\n dynamic_modules_path=dynamic_modules_path,\n data_files=data_files,\n **download_kwargs,\n )\n builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))\n return list(builder_cls.builder_configs.keys()) or [\n dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default")\n ]\n\n\ndef get_dataset_default_config_name(\n path: str,\n revision: Optional[Union[str, Version]] = None,\n download_config: Optional[DownloadConfig] = None,\n download_mode: Optional[Union[DownloadMode, str]] = None,\n dynamic_modules_path: Optional[str] = None,\n data_files: Optional[Union[dict, list, str]] = None,\n **download_kwargs,\n) -> Optional[str]:\n """Get the default config name for a particular dataset.\n Can return None only if the dataset has multiple configurations and no default configuration.\n\n Args:\n path (`str`): path to the dataset processing script with the dataset builder. Can be either:\n\n - a local path to processing script or the directory containing the script (if the script has the same name as the directory),\n e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`\n - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`huggingface_hub.list_datasets`]),\n e.g. 
`'rajpurkar/squad'`, `'nyu-mll/glue'` or `'openai/webtext'`\n revision (`Union[str, datasets.Version]`, *optional*):\n If specified, the dataset module will be loaded from the datasets repository at this version.\n By default:\n - it is set to the local version of the lib.\n - it will also try to load it from the main branch if it's not available at the local version of the lib.\n Specifying a version that is different from your local version of the lib might cause compatibility issues.\n download_config ([`DownloadConfig`], *optional*):\n Specific download configuration parameters.\n download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):\n Download/generate mode.\n dynamic_modules_path (`str`, defaults to `~/.cache/huggingface/modules/datasets_modules`):\n Optional path to the directory in which the dynamic modules are saved. It must have been initialized with `init_dynamic_modules`.\n By default the datasets are stored inside the `datasets_modules` module.\n data_files (`Union[Dict, List, str]`, *optional*):\n Defining the data_files of the dataset configuration.\n **download_kwargs (additional keyword arguments):\n Optional attributes for [`DownloadConfig`] which will override the attributes in `download_config` if supplied,\n for example `token`.\n\n Returns:\n Optional[str]: the default config name if there is one\n\n Example:\n\n ```py\n >>> from datasets import get_dataset_default_config_name\n >>> get_dataset_default_config_name("openbookqa")\n 'main'\n ```\n """\n dataset_module = dataset_module_factory(\n path,\n revision=revision,\n download_config=download_config,\n download_mode=download_mode,\n dynamic_modules_path=dynamic_modules_path,\n data_files=data_files,\n **download_kwargs,\n )\n builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))\n builder_configs = list(builder_cls.builder_configs.keys())\n if builder_configs:\n default_config_name = builder_configs[0] if len(builder_configs) == 1 else None\n else:\n default_config_name = "default"\n return builder_cls.DEFAULT_CONFIG_NAME or default_config_name\n\n\ndef get_dataset_config_info(\n path: str,\n config_name: Optional[str] = None,\n data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,\n download_config: Optional[DownloadConfig] = None,\n download_mode: Optional[Union[DownloadMode, str]] = None,\n revision: Optional[Union[str, Version]] = None,\n token: Optional[Union[bool, str]] = None,\n **config_kwargs,\n) -> DatasetInfo:\n """Get the meta information (DatasetInfo) about a dataset for a particular config\n\n Args:\n path (``str``): path to the dataset processing script with the dataset builder. Can be either:\n\n - a local path to processing script or the directory containing the script (if the script has the same name as the directory),\n e.g. ``'./dataset/squad'`` or ``'./dataset/squad/squad.py'``\n - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`huggingface_hub.list_datasets`]),\n e.g. 
``'rajpurkar/squad'``, ``'nyu-mll/glue'`` or ``'openai/webtext'``\n config_name (:obj:`str`, optional): Defining the name of the dataset configuration.\n data_files (:obj:`str` or :obj:`Sequence` or :obj:`Mapping`, optional): Path(s) to source data file(s).\n download_config (:class:`~download.DownloadConfig`, optional): Specific download configuration parameters.\n download_mode (:class:`DownloadMode` or :obj:`str`, default ``REUSE_DATASET_IF_EXISTS``): Download/generate mode.\n revision (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load.\n As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.\n You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.\n token (``str`` or :obj:`bool`, optional): Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.\n If True, or not specified, will get token from `"~/.huggingface"`.\n **config_kwargs (additional keyword arguments): optional attributes for builder class which will override the attributes if supplied.\n\n """\n builder = load_dataset_builder(\n path,\n name=config_name,\n data_files=data_files,\n download_config=download_config,\n download_mode=download_mode,\n revision=revision,\n token=token,\n **config_kwargs,\n )\n info = builder.info\n if info.splits is None:\n download_config = download_config.copy() if download_config else DownloadConfig()\n if token is not None:\n download_config.token = token\n builder._check_manual_download(\n StreamingDownloadManager(base_path=builder.base_path, download_config=download_config)\n )\n try:\n info.splits = {\n split_generator.name: {"name": split_generator.name, "dataset_name": path}\n for split_generator in builder._split_generators(\n StreamingDownloadManager(base_path=builder.base_path, download_config=download_config)\n )\n }\n except Exception as err:\n raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err\n return info\n\n\ndef get_dataset_split_names(\n path: str,\n config_name: Optional[str] = None,\n data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,\n download_config: Optional[DownloadConfig] = None,\n download_mode: Optional[Union[DownloadMode, str]] = None,\n revision: Optional[Union[str, Version]] = None,\n token: Optional[Union[bool, str]] = None,\n **config_kwargs,\n):\n """Get the list of available splits for a particular config and dataset.\n\n Args:\n path (`str`): path to the dataset processing script with the dataset builder. Can be either:\n\n - a local path to processing script or the directory containing the script (if the script has the same name as the directory),\n e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`\n - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`huggingface_hub.list_datasets`]),\n e.g. 
`'rajpurkar/squad'`, `'nyu-mll/glue'` or `'openai/webtext'`\n config_name (`str`, *optional*):\n Defining the name of the dataset configuration.\n data_files (`str` or `Sequence` or `Mapping`, *optional*):\n Path(s) to source data file(s).\n download_config ([`DownloadConfig`], *optional*):\n Specific download configuration parameters.\n download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):\n Download/generate mode.\n revision ([`Version`] or `str`, *optional*):\n Version of the dataset script to load.\n As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.\n You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.\n token (`str` or `bool`, *optional*):\n Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.\n If `True`, or not specified, will get token from `"~/.huggingface"`.\n **config_kwargs (additional keyword arguments):\n Optional attributes for builder class which will override the attributes if supplied.\n\n Example:\n\n ```py\n >>> from datasets import get_dataset_split_names\n >>> get_dataset_split_names('cornell-movie-review-data/rotten_tomatoes')\n ['train', 'validation', 'test']\n ```\n """\n info = get_dataset_config_info(\n path,\n config_name=config_name,\n data_files=data_files,\n download_config=download_config,\n download_mode=download_mode,\n revision=revision,\n token=token,\n **config_kwargs,\n )\n return list(info.splits.keys())\n
|
.venv\Lib\site-packages\datasets\inspect.py
|
inspect.py
|
Python
| 17,143 | 0.95 | 0.146006 | 0.088957 |
react-lib
| 12 |
2025-01-26T23:00:34.863613
|
GPL-3.0
| false |
298b018686d7f5a64e5ea40210264c51
|
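The `inspect.py` record above documents `get_dataset_default_config_name`, `get_dataset_config_info` and `get_dataset_split_names`. A minimal usage sketch combining them is shown below; it assumes network access to the Hugging Face Hub, and the dataset id is taken from the docstring examples in the record, so it may change over time.

```python
# Hedged sketch: inspect a dataset's configuration and splits without materializing it.
from datasets import (
    get_dataset_config_info,
    get_dataset_default_config_name,
    get_dataset_split_names,
)

repo_id = "cornell-movie-review-data/rotten_tomatoes"  # id from the docstring example above

# Default config name, if the builder defines one (may be None for multi-config datasets).
print(get_dataset_default_config_name(repo_id))

# Full DatasetInfo for the resolved configuration, including features and split metadata.
info = get_dataset_config_info(repo_id)
print(info.features)

# Just the split names, resolved from the builder's split generators.
print(get_dataset_split_names(repo_id))  # e.g. ['train', 'validation', 'test'] per the docstring
```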
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n\n"""\nHashing function for dataset keys using `hashlib.md5`\n\nRequirements for the hash function:\n\n- Provides a uniformly distributed hash from random space\n- Adequately fast speed\n- Working with multiple input types (in this case, `str`, `int` or `bytes`)\n- Should be platform independent (generates same hash on different OS and systems)\n\nThe hashing function provides a unique 128-bit integer hash of the key provided.\n\nThe split name is being used here as the hash salt to avoid having same hashes\nin different splits due to same keys\n"""\n\nfrom typing import Union\n\nfrom huggingface_hub.utils import insecure_hashlib\n\n\ndef _as_bytes(hash_data: Union[str, int, bytes, bytearray]) -> bytes:\n """\n Returns the input hash_data in its bytes form\n\n Args:\n hash_data: the hash salt/key to be converted to bytes\n """\n if isinstance(hash_data, (bytes, bytearray)):\n # Data already in bytes, returns as it as\n return hash_data\n elif isinstance(hash_data, str):\n # We keep the data as it as for it ot be later encoded to UTF-8\n # However replace `\\` with `/` for Windows compatibility\n hash_data = hash_data.replace("\\", "/")\n elif isinstance(hash_data, int):\n hash_data = str(hash_data)\n else:\n # If data is not of the required type, raise error\n raise InvalidKeyError(hash_data)\n\n return hash_data.encode("utf-8")\n\n\nclass InvalidKeyError(Exception):\n """Raises an error when given key is of invalid datatype."""\n\n def __init__(self, hash_data):\n self.prefix = "\nFAILURE TO GENERATE DATASET: Invalid key type detected"\n self.err_msg = f"\nFound Key {hash_data} of type {type(hash_data)}"\n self.suffix = "\nKeys should be either str, int or bytes type"\n super().__init__(f"{self.prefix}{self.err_msg}{self.suffix}")\n\n\nclass DuplicatedKeysError(Exception):\n """Raise an error when duplicate key found."""\n\n def __init__(self, key, duplicate_key_indices, fix_msg=""):\n self.key = key\n self.duplicate_key_indices = duplicate_key_indices\n self.fix_msg = fix_msg\n self.prefix = "Found multiple examples generated with the same key"\n if len(duplicate_key_indices) <= 20:\n self.err_msg = f"\nThe examples at index {', '.join(duplicate_key_indices)} have the key {key}"\n else:\n self.err_msg = f"\nThe examples at index {', '.join(duplicate_key_indices[:20])}... 
({len(duplicate_key_indices) - 20} more) have the key {key}"\n self.suffix = "\n" + fix_msg if fix_msg else ""\n super().__init__(f"{self.prefix}{self.err_msg}{self.suffix}")\n\n\nclass KeyHasher:\n """KeyHasher class for providing hash using md5"""\n\n def __init__(self, hash_salt: str):\n self._split_md5 = insecure_hashlib.md5(_as_bytes(hash_salt))\n\n def hash(self, key: Union[str, int, bytes]) -> int:\n """Returns 128-bits unique hash of input key\n\n Args:\n key: the input key to be hashed (should be str, int or bytes)\n\n Returns: 128-bit int hash key"""\n md5 = self._split_md5.copy()\n byte_key = _as_bytes(key)\n md5.update(byte_key)\n # Convert to integer with hexadecimal conversion\n return int(md5.hexdigest(), 16)\n
|
.venv\Lib\site-packages\datasets\keyhash.py
|
keyhash.py
|
Python
| 3,896 | 0.95 | 0.201923 | 0.2375 |
python-kit
| 967 |
2024-02-17T03:13:04.934974
|
GPL-3.0
| false |
0f2e0e7c9e98af6a43c8acf11476e60c
|
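The `keyhash.py` record above explains that the split name salts the MD5 state so that identical keys in different splits do not collide. A small sketch of that behaviour follows; the import path `datasets.keyhash` is inferred from this record's file path and is an assumption.

```python
# Hedged sketch of the key-hashing behaviour described above.
from datasets.keyhash import KeyHasher  # module path assumed from the record's file path

train_hasher = KeyHasher(hash_salt="train")
test_hasher = KeyHasher(hash_salt="test")

key = "example-000042"
h_train = train_hasher.hash(key)
h_test = test_hasher.hash(key)

assert h_train != h_test       # same key, different split salts -> different hashes
assert 0 <= h_train < 2**128   # MD5 hexdigest parsed as a 128-bit integer
print(hex(h_train))
```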
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n"""Utilities for file names."""\n\nimport itertools\nimport os\nimport re\n\n\n_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")\n_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")\n\n_single_underscore_re = re.compile(r"(?<!_)_(?!_)")\n_multiple_underscores_re = re.compile(r"(_{2,})")\n\n_split_re = r"^\w+(\.\w+)*$"\n\nINVALID_WINDOWS_CHARACTERS_IN_PATH = r"<>:/\|?*"\n\n\ndef camelcase_to_snakecase(name):\n """Convert camel-case string to snake-case."""\n name = _uppercase_uppercase_re.sub(r"\1_\2", name)\n name = _lowercase_uppercase_re.sub(r"\1_\2", name)\n return name.lower()\n\n\ndef snakecase_to_camelcase(name):\n """Convert snake-case string to camel-case string."""\n name = _single_underscore_re.split(name)\n name = [_multiple_underscores_re.split(n) for n in name]\n return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")\n\n\ndef filename_prefix_for_name(name):\n if os.path.basename(name) != name:\n raise ValueError(f"Should be a dataset name, not a path: {name}")\n return camelcase_to_snakecase(name)\n\n\ndef filename_prefix_for_split(name, split):\n if os.path.basename(name) != name:\n raise ValueError(f"Should be a dataset name, not a path: {name}")\n if not re.match(_split_re, split):\n raise ValueError(f"Split name should match '{_split_re}'' but got '{split}'.")\n return f"{filename_prefix_for_name(name)}-{split}"\n\n\ndef filepattern_for_dataset_split(dataset_name, split, data_dir, filetype_suffix=None):\n prefix = filename_prefix_for_split(dataset_name, split)\n if filetype_suffix:\n prefix += f".{filetype_suffix}"\n filepath = os.path.join(data_dir, prefix)\n return f"{filepath}*"\n\n\ndef filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None):\n prefix = filename_prefix_for_split(dataset_name, split)\n prefix = os.path.join(path, prefix)\n\n if shard_lengths:\n num_shards = len(shard_lengths)\n filenames = [f"{prefix}-{shard_id:05d}-of-{num_shards:05d}" for shard_id in range(num_shards)]\n if filetype_suffix:\n filenames = [filename + f".{filetype_suffix}" for filename in filenames]\n return filenames\n else:\n filename = prefix\n if filetype_suffix:\n filename += f".{filetype_suffix}"\n return [filename]\n
|
.venv\Lib\site-packages\datasets\naming.py
|
naming.py
|
Python
| 3,001 | 0.95 | 0.238095 | 0.21875 |
node-utils
| 866 |
2023-07-13T09:42:14.553284
|
BSD-3-Clause
| false |
a6b329f92bcc81bd6cfbf559ab798e1f
|
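The `naming.py` record above converts between camel case and snake case and derives shard file names for a split. The sketch below exercises those helpers with made-up dataset and split names; expected outputs in the comments follow from the regexes and formatting shown in the record (path separators will differ on Windows).

```python
# Illustrative use of the naming helpers; names and paths are hypothetical.
from datasets.naming import (
    camelcase_to_snakecase,
    snakecase_to_camelcase,
    filenames_for_dataset_split,
)

print(camelcase_to_snakecase("OpenBookQA"))    # open_book_qa
print(snakecase_to_camelcase("open_book_qa"))  # OpenBookQa (roughly the inverse)

# Arrow shard names for a split written in three shards.
print(
    filenames_for_dataset_split(
        path="/tmp/my_dataset",
        dataset_name="my_dataset",
        split="train",
        filetype_suffix="arrow",
        shard_lengths=[1000, 1000, 500],
    )
)
# ['/tmp/my_dataset/my_dataset-train-00000-of-00003.arrow', ...]
```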
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n"""Splits related API."""\n\nimport abc\nimport collections\nimport copy\nimport dataclasses\nimport re\nfrom dataclasses import dataclass\nfrom typing import Optional, Union\n\nfrom .arrow_reader import FileInstructions, make_file_instructions\nfrom .naming import _split_re\nfrom .utils.py_utils import NonMutableDict, asdict\n\n\n@dataclass\nclass SplitInfo:\n name: str = dataclasses.field(default="", metadata={"include_in_asdict_even_if_is_default": True})\n num_bytes: int = dataclasses.field(default=0, metadata={"include_in_asdict_even_if_is_default": True})\n num_examples: int = dataclasses.field(default=0, metadata={"include_in_asdict_even_if_is_default": True})\n shard_lengths: Optional[list[int]] = None\n\n # Deprecated\n # For backward compatibility, this field needs to always be included in files like\n # dataset_infos.json and dataset_info.json files\n # To do so, we always include it in the output of datasets.utils.py_utils.asdict(split_info)\n dataset_name: Optional[str] = dataclasses.field(\n default=None, metadata={"include_in_asdict_even_if_is_default": True}\n )\n\n @property\n def file_instructions(self):\n """Returns the list of dict(filename, take, skip)."""\n # `self.dataset_name` is assigned in `SplitDict.add()`.\n instructions = make_file_instructions(\n name=self.dataset_name,\n split_infos=[self],\n instruction=str(self.name),\n )\n return instructions.file_instructions\n\n\n@dataclass\nclass SubSplitInfo:\n """Wrapper around a sub split info.\n This class expose info on the subsplit:\n ```\n ds, info = datasets.load_dataset(..., split='train[75%:]', with_info=True)\n info.splits['train[75%:]'].num_examples\n ```\n """\n\n instructions: FileInstructions\n\n @property\n def num_examples(self):\n """Returns the number of example in the subsplit."""\n return self.instructions.num_examples\n\n @property\n def file_instructions(self):\n """Returns the list of dict(filename, take, skip)."""\n return self.instructions.file_instructions\n\n\nclass SplitBase(metaclass=abc.ABCMeta):\n # pylint: disable=line-too-long\n """Abstract base class for Split compositionality.\n\n See the\n [guide on splits](../loading#slice-splits)\n for more information.\n\n There are three parts to the composition:\n 1) The splits are composed (defined, merged, split,...) together before\n calling the `.as_dataset()` function. This is done with the `__add__`,\n `__getitem__`, which return a tree of `SplitBase` (whose leaf\n are the `NamedSplit` objects)\n\n ```\n split = datasets.Split.TRAIN + datasets.Split.TEST.subsplit(datasets.percent[:50])\n ```\n\n 2) The `SplitBase` is forwarded to the `.as_dataset()` function\n to be resolved into actual read instruction. This is done by the\n `.get_read_instruction()` method which takes the real dataset splits\n (name, number of shards,...) 
and parse the tree to return a\n `SplitReadInstruction()` object\n\n ```\n read_instruction = split.get_read_instruction(self.info.splits)\n ```\n\n 3) The `SplitReadInstruction` is then used in the `tf.data.Dataset` pipeline\n to define which files to read and how to skip examples within file.\n\n """\n\n # pylint: enable=line-too-long\n\n @abc.abstractmethod\n def get_read_instruction(self, split_dict):\n """Parse the descriptor tree and compile all read instructions together.\n\n Args:\n split_dict: `dict`, The `dict[split_name, SplitInfo]` of the dataset\n\n Returns:\n split_read_instruction: `SplitReadInstruction`\n """\n raise NotImplementedError("Abstract method")\n\n def __eq__(self, other):\n """Equality: datasets.Split.TRAIN == 'train'."""\n if isinstance(other, (NamedSplit, str)):\n return False\n raise NotImplementedError("Equality is not implemented between merged/sub splits.")\n\n def __ne__(self, other):\n """InEquality: datasets.Split.TRAIN != 'test'."""\n return not self.__eq__(other)\n\n def __add__(self, other):\n """Merging: datasets.Split.TRAIN + datasets.Split.TEST."""\n return _SplitMerged(self, other)\n\n def subsplit(self, arg=None, k=None, percent=None, weighted=None): # pylint: disable=redefined-outer-name\n """Divides this split into subsplits.\n\n There are 3 ways to define subsplits, which correspond to the 3\n arguments `k` (get `k` even subsplits), `percent` (get a slice of the\n dataset with `datasets.percent`), and `weighted` (get subsplits with proportions\n specified by `weighted`).\n\n Example::\n\n ```\n # 50% train, 50% test\n train, test = split.subsplit(k=2)\n # 50% train, 25% test, 25% validation\n train, test, validation = split.subsplit(weighted=[2, 1, 1])\n # Extract last 20%\n subsplit = split.subsplit(datasets.percent[-20:])\n ```\n\n Warning: k and weighted will be converted into percent which mean that\n values below the percent will be rounded up or down. The final split may be\n bigger to deal with remainders. For instance:\n\n ```\n train, test, valid = split.subsplit(k=3) # 33%, 33%, 34%\n s1, s2, s3, s4 = split.subsplit(weighted=[2, 2, 1, 1]) # 33%, 33%, 16%, 18%\n ```\n\n Args:\n arg: If no kwargs are given, `arg` will be interpreted as one of\n `k`, `percent`, or `weighted` depending on the type.\n For example:\n ```\n split.subsplit(10) # Equivalent to split.subsplit(k=10)\n split.subsplit(datasets.percent[:-20]) # percent=datasets.percent[:-20]\n split.subsplit([1, 1, 2]) # weighted=[1, 1, 2]\n ```\n k: `int` If set, subdivide the split into `k` equal parts.\n percent: `datasets.percent slice`, return a single subsplit corresponding to\n a slice of the original split. For example:\n `split.subsplit(datasets.percent[-20:]) # Last 20% of the dataset`.\n weighted: `list[int]`, return a list of subsplits whose proportions match\n the normalized sum of the list. For example:\n `split.subsplit(weighted=[1, 1, 2]) # 25%, 25%, 50%`.\n\n Returns:\n A subsplit or list of subsplits extracted from this split object.\n """\n # Note that the percent kwargs redefine the outer name datasets.percent. This\n # is done for consistency (.subsplit(percent=datasets.percent[:40]))\n if sum(bool(x) for x in (arg, k, percent, weighted)) != 1:\n raise ValueError("Only one argument of subsplit should be set.")\n\n # Auto deduce k\n if isinstance(arg, int):\n k = arg\n elif isinstance(arg, slice):\n percent = arg\n elif isinstance(arg, list):\n weighted = arg\n\n if not (k or percent or weighted):\n raise ValueError(\n f"Invalid split argument {arg}. 
Only list, slice and int supported. "\n "One of k, weighted or percent should be set to a non empty value."\n )\n\n def assert_slices_coverage(slices):\n # Ensure that the expended slices cover all percents.\n assert sum((list(range(*s.indices(100))) for s in slices), []) == list(range(100))\n\n if k:\n if not 0 < k <= 100:\n raise ValueError(f"Subsplit k should be between 0 and 100, got {k}")\n shift = 100 // k\n slices = [slice(i * shift, (i + 1) * shift) for i in range(k)]\n # Round up last element to ensure all elements are taken\n slices[-1] = slice(slices[-1].start, 100)\n # Internal check to ensure full coverage\n assert_slices_coverage(slices)\n return tuple(_SubSplit(self, s) for s in slices)\n elif percent:\n return _SubSplit(self, percent)\n elif weighted:\n # Normalize the weighted sum\n total = sum(weighted)\n weighted = [100 * x // total for x in weighted]\n # Create the slice for each of the elements\n start = 0\n stop = 0\n slices = []\n for v in weighted:\n stop += v\n slices.append(slice(start, stop))\n start = stop\n # Round up last element to ensure all elements are taken\n slices[-1] = slice(slices[-1].start, 100)\n # Internal check to ensure full coverage\n assert_slices_coverage(slices)\n return tuple(_SubSplit(self, s) for s in slices)\n else:\n # Should not be possible\n raise ValueError("Could not determine the split")\n\n\n# 2 requirements:\n# 1. datasets.percent be sliceable\n# 2. datasets.percent be documented\n#\n# Instances are not documented, so we want datasets.percent to be a class, but to\n# have it be sliceable, we need this metaclass.\nclass PercentSliceMeta(type):\n def __getitem__(cls, slice_value):\n if not isinstance(slice_value, slice):\n raise ValueError(f"datasets.percent should only be called with slice, not {slice_value}")\n return slice_value\n\n\nclass PercentSlice(metaclass=PercentSliceMeta):\n # pylint: disable=line-too-long\n """Syntactic sugar for defining slice subsplits: `datasets.percent[75:-5]`.\n\n See the\n [guide on splits](../loading#slice-splits)\n for more information.\n """\n\n # pylint: enable=line-too-long\n pass\n\n\npercent = PercentSlice # pylint: disable=invalid-name\n\n\nclass _SplitMerged(SplitBase):\n """Represent two split descriptors merged together."""\n\n def __init__(self, split1, split2):\n self._split1 = split1\n self._split2 = split2\n\n def get_read_instruction(self, split_dict):\n read_instruction1 = self._split1.get_read_instruction(split_dict)\n read_instruction2 = self._split2.get_read_instruction(split_dict)\n return read_instruction1 + read_instruction2\n\n def __repr__(self):\n return f"({repr(self._split1)} + {repr(self._split2)})"\n\n\nclass _SubSplit(SplitBase):\n """Represent a sub split of a split descriptor."""\n\n def __init__(self, split, slice_value):\n self._split = split\n self._slice_value = slice_value\n\n def get_read_instruction(self, split_dict):\n return self._split.get_read_instruction(split_dict)[self._slice_value]\n\n def __repr__(self):\n slice_str = "{start}:{stop}"\n if self._slice_value.step is not None:\n slice_str += ":{step}"\n slice_str = slice_str.format(\n start="" if self._slice_value.start is None else self._slice_value.start,\n stop="" if self._slice_value.stop is None else self._slice_value.stop,\n step=self._slice_value.step,\n )\n return f"{repr(self._split)}(datasets.percent[{slice_str}])"\n\n\nclass NamedSplit(SplitBase):\n """Descriptor corresponding to a named split (train, test, ...).\n\n Example:\n Each descriptor can be composed with other using addition or 
slice:\n\n ```py\n split = datasets.Split.TRAIN.subsplit(datasets.percent[0:25]) + datasets.Split.TEST\n ```\n\n The resulting split will correspond to 25% of the train split merged with\n 100% of the test split.\n\n A split cannot be added twice, so the following will fail:\n\n ```py\n split = (\n datasets.Split.TRAIN.subsplit(datasets.percent[:25]) +\n datasets.Split.TRAIN.subsplit(datasets.percent[75:])\n ) # Error\n split = datasets.Split.TEST + datasets.Split.ALL # Error\n ```\n\n The slices can be applied only one time. So the following are valid:\n\n ```py\n split = (\n datasets.Split.TRAIN.subsplit(datasets.percent[:25]) +\n datasets.Split.TEST.subsplit(datasets.percent[:50])\n )\n split = (datasets.Split.TRAIN + datasets.Split.TEST).subsplit(datasets.percent[:50])\n ```\n\n But this is not valid:\n\n ```py\n train = datasets.Split.TRAIN\n test = datasets.Split.TEST\n split = train.subsplit(datasets.percent[:25]).subsplit(datasets.percent[:25])\n split = (train.subsplit(datasets.percent[:25]) + test).subsplit(datasets.percent[:50])\n ```\n """\n\n def __init__(self, name):\n self._name = name\n split_names_from_instruction = [split_instruction.split("[")[0] for split_instruction in name.split("+")]\n for split_name in split_names_from_instruction:\n if not re.match(_split_re, split_name):\n raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")\n\n def __str__(self):\n return self._name\n\n def __repr__(self):\n return f"NamedSplit({self._name!r})"\n\n def __eq__(self, other):\n """Equality: datasets.Split.TRAIN == 'train'."""\n if isinstance(other, NamedSplit):\n return self._name == other._name # pylint: disable=protected-access\n elif isinstance(other, SplitBase):\n return False\n elif isinstance(other, str): # Other should be string\n return self._name == other\n else:\n return False\n\n def __lt__(self, other):\n return self._name < other._name # pylint: disable=protected-access\n\n def __hash__(self):\n return hash(self._name)\n\n def get_read_instruction(self, split_dict):\n return SplitReadInstruction(split_dict[self._name])\n\n\nclass NamedSplitAll(NamedSplit):\n """Split corresponding to the union of all defined dataset splits."""\n\n def __init__(self):\n super().__init__("all")\n\n def __repr__(self):\n return "NamedSplitAll()"\n\n def get_read_instruction(self, split_dict):\n # Merge all dataset split together\n read_instructions = [SplitReadInstruction(s) for s in split_dict.values()]\n return sum(read_instructions, SplitReadInstruction())\n\n\nclass Split:\n # pylint: disable=line-too-long\n """`Enum` for dataset splits.\n\n Datasets are typically split into different subsets to be used at various\n stages of training and evaluation.\n\n - `TRAIN`: the training data.\n - `VALIDATION`: the validation data. If present, this is typically used as\n evaluation data while iterating on a model (e.g. changing hyperparameters,\n model architecture, etc.).\n - `TEST`: the testing data. This is the data to report metrics on. Typically\n you do not want to use this during model iteration as you may overfit to it.\n - `ALL`: the union of all defined dataset splits.\n\n All splits, including compositions inherit from `datasets.SplitBase`.\n\n See the [guide](../load_hub#splits) on splits for more information.\n\n Example:\n\n ```py\n >>> datasets.SplitGenerator(\n ... name=datasets.Split.TRAIN,\n ... gen_kwargs={"split_key": "train", "files": dl_manager.download_and extract(url)},\n ... ),\n ... datasets.SplitGenerator(\n ... 
name=datasets.Split.VALIDATION,\n ... gen_kwargs={"split_key": "validation", "files": dl_manager.download_and extract(url)},\n ... ),\n ... datasets.SplitGenerator(\n ... name=datasets.Split.TEST,\n ... gen_kwargs={"split_key": "test", "files": dl_manager.download_and extract(url)},\n ... )\n ```\n """\n\n # pylint: enable=line-too-long\n TRAIN = NamedSplit("train")\n TEST = NamedSplit("test")\n VALIDATION = NamedSplit("validation")\n ALL = NamedSplitAll()\n\n def __new__(cls, name):\n """Create a custom split with datasets.Split('custom_name')."""\n return NamedSplitAll() if name == "all" else NamedSplit(name)\n\n\n# Similar to SplitInfo, but contain an additional slice info\nSlicedSplitInfo = collections.namedtuple(\n "SlicedSplitInfo",\n [\n "split_info",\n "slice_value",\n ],\n) # noqa: E231\n\n\nclass SplitReadInstruction:\n """Object containing the reading instruction for the dataset.\n\n Similarly to `SplitDescriptor` nodes, this object can be composed with itself,\n but the resolution happens instantaneously, instead of keeping track of the\n tree, such as all instructions are compiled and flattened in a single\n SplitReadInstruction object containing the list of files and slice to use.\n\n Once resolved, the instructions can be accessed with:\n\n ```\n read_instructions.get_list_sliced_split_info() # List of splits to use\n ```\n\n """\n\n def __init__(self, split_info=None):\n self._splits = NonMutableDict(error_msg="Overlap between splits. Split {key} has been added with itself.")\n\n if split_info:\n self.add(SlicedSplitInfo(split_info=split_info, slice_value=None))\n\n def add(self, sliced_split):\n """Add a SlicedSplitInfo the read instructions."""\n # TODO(epot): Check that the number of examples per shard % 100 == 0\n # Otherwise the slices value may be unbalanced and not exactly reflect the\n # requested slice.\n self._splits[sliced_split.split_info.name] = sliced_split\n\n def __add__(self, other):\n """Merging split together."""\n # Will raise error if a split has already be added (NonMutableDict)\n # TODO(epot): If a split is already added but there is no overlap between\n # the slices, should merge the slices (ex: [:10] + [80:])\n split_instruction = SplitReadInstruction()\n split_instruction._splits.update(self._splits) # pylint: disable=protected-access\n split_instruction._splits.update(other._splits) # pylint: disable=protected-access\n return split_instruction\n\n def __getitem__(self, slice_value):\n """Sub-splits."""\n # Will raise an error if a split has already been sliced\n split_instruction = SplitReadInstruction()\n for v in self._splits.values():\n if v.slice_value is not None:\n raise ValueError(f"Trying to slice Split {v.split_info.name} which has already been sliced")\n v = v._asdict()\n v["slice_value"] = slice_value\n split_instruction.add(SlicedSplitInfo(**v))\n return split_instruction\n\n def get_list_sliced_split_info(self):\n return list(self._splits.values())\n\n\nclass SplitDict(dict):\n """Split info object."""\n\n def __init__(self, *args, dataset_name=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.dataset_name = dataset_name\n\n def __getitem__(self, key: Union[SplitBase, str]):\n # 1st case: The key exists: `info.splits['train']`\n if str(key) in self:\n return super().__getitem__(str(key))\n # 2nd case: Uses instructions: `info.splits['train[50%]']`\n else:\n instructions = make_file_instructions(\n name=self.dataset_name,\n split_infos=self.values(),\n instruction=key,\n )\n return SubSplitInfo(instructions)\n\n def 
__setitem__(self, key: Union[SplitBase, str], value: SplitInfo):\n if key != value.name:\n raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')")\n super().__setitem__(key, value)\n\n def add(self, split_info: SplitInfo):\n """Add the split info."""\n if split_info.name in self:\n raise ValueError(f"Split {split_info.name} already present")\n split_info.dataset_name = self.dataset_name\n super().__setitem__(split_info.name, split_info)\n\n @property\n def total_num_examples(self):\n """Return the total number of examples."""\n return sum(s.num_examples for s in self.values())\n\n @classmethod\n def from_split_dict(cls, split_infos: Union[list, dict], dataset_name: Optional[str] = None):\n """Returns a new SplitDict initialized from a Dict or List of `split_infos`."""\n if isinstance(split_infos, dict):\n split_infos = list(split_infos.values())\n\n if dataset_name is None:\n dataset_name = split_infos[0].get("dataset_name") if split_infos else None\n\n split_dict = cls(dataset_name=dataset_name)\n\n for split_info in split_infos:\n if isinstance(split_info, dict):\n split_info = SplitInfo(**split_info)\n split_dict.add(split_info)\n\n return split_dict\n\n def to_split_dict(self):\n """Returns a list of SplitInfo protos that we have."""\n out = []\n for split_name, split_info in self.items():\n split_info = copy.deepcopy(split_info)\n split_info.name = split_name\n out.append(split_info)\n return out\n\n def copy(self):\n return SplitDict.from_split_dict(self.to_split_dict(), self.dataset_name)\n\n def _to_yaml_list(self) -> list:\n out = [asdict(s) for s in self.to_split_dict()]\n # we don't need the shard lengths in YAML, since it depends on max_shard_size and num_proc\n for split_info_dict in out:\n split_info_dict.pop("shard_lengths", None)\n # we don't need the dataset_name attribute that is deprecated\n for split_info_dict in out:\n split_info_dict.pop("dataset_name", None)\n return out\n\n @classmethod\n def _from_yaml_list(cls, yaml_data: list) -> "SplitDict":\n return cls.from_split_dict(yaml_data)\n\n\n@dataclass\nclass SplitGenerator:\n """Defines the split information for the generator.\n\n This should be used as returned value of\n `GeneratorBasedBuilder._split_generators`.\n See `GeneratorBasedBuilder._split_generators` for more info and example\n of usage.\n\n Args:\n name (`str`):\n Name of the `Split` for which the generator will\n create the examples.\n **gen_kwargs (additional keyword arguments):\n Keyword arguments to forward to the `DatasetBuilder._generate_examples` method\n of the builder.\n\n Example:\n\n ```py\n >>> datasets.SplitGenerator(\n ... name=datasets.Split.TRAIN,\n ... gen_kwargs={"split_key": "train", "files": dl_manager.download_and_extract(url)},\n ... )\n ```\n """\n\n name: str\n gen_kwargs: dict = dataclasses.field(default_factory=dict)\n split_info: SplitInfo = dataclasses.field(init=False)\n\n def __post_init__(self):\n self.name = str(self.name) # Make sure we convert NamedSplits in strings\n NamedSplit(self.name) # check that it's a valid split name\n self.split_info = SplitInfo(name=self.name)\n
|
.venv\Lib\site-packages\datasets\splits.py
|
splits.py
|
Python
| 23,430 | 0.95 | 0.182677 | 0.116601 |
react-lib
| 649 |
2023-12-24T07:58:05.546344
|
MIT
| false |
7ca6394ac56b4d2bcc30ed3c86ab0e74
|
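The `splits.py` record above defines the split primitives (`NamedSplit`, `Split`, `SplitInfo`, `SplitDict`). The sketch below builds a `SplitDict` in memory only, without loading any dataset; the import path mirrors this record's module layout and the numbers are made up.

```python
# Hedged sketch of the split primitives defined in the record above.
from datasets.splits import NamedSplit, Split, SplitDict, SplitInfo

# Named splits compare equal to their string names.
assert Split.TRAIN == "train"
assert NamedSplit("validation") == Split.VALIDATION

# A SplitDict keyed by split name, as stored in DatasetInfo.splits.
splits = SplitDict(dataset_name="my_dataset")
splits.add(SplitInfo(name="train", num_bytes=1_000_000, num_examples=8_000))
splits.add(SplitInfo(name="test", num_bytes=125_000, num_examples=1_000))

print(splits.total_num_examples)                                   # 9000
print({name: info.num_examples for name, info in splits.items()})  # {'train': 8000, 'test': 1000}
```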
import importlib\nimport inspect\nfrom functools import wraps\nfrom typing import TYPE_CHECKING, Optional\n\nfrom .download.download_config import DownloadConfig\nfrom .utils.file_utils import (\n xbasename,\n xdirname,\n xet_parse,\n xexists,\n xgetsize,\n xglob,\n xgzip_open,\n xisdir,\n xisfile,\n xjoin,\n xlistdir,\n xnumpy_load,\n xopen,\n xpandas_read_csv,\n xpandas_read_excel,\n xPath,\n xpyarrow_parquet_read_table,\n xrelpath,\n xsio_loadmat,\n xsplit,\n xsplitext,\n xwalk,\n xxml_dom_minidom_parse,\n)\nfrom .utils.logging import get_logger\nfrom .utils.patching import patch_submodule\nfrom .utils.py_utils import get_imports, lock_importable_file\n\n\nlogger = get_logger(__name__)\n\n\nif TYPE_CHECKING:\n from .builder import DatasetBuilder\n\n\ndef extend_module_for_streaming(module_path, download_config: Optional[DownloadConfig] = None):\n """Extend the module to support streaming.\n\n We patch some functions in the module to use `fsspec` to support data streaming:\n - We use `fsspec.open` to open and read remote files. We patch the module function:\n - `open`\n - We use the "::" hop separator to join paths and navigate remote compressed/archive files. We patch the module\n functions:\n - `os.path.join`\n - `pathlib.Path.joinpath` and `pathlib.Path.__truediv__` (called when using the "/" operator)\n\n The patched functions are replaced with custom functions defined to work with the\n :class:`~download.streaming_download_manager.StreamingDownloadManager`.\n\n Args:\n module_path: Path to the module to be extended.\n download_config: Mainly use `token` or `storage_options` to support different platforms and auth types.\n """\n\n module = importlib.import_module(module_path)\n\n # TODO(QL): always update the module to add subsequent new authentication without removing old ones\n if hasattr(module, "_patched_for_streaming") and module._patched_for_streaming:\n if isinstance(module._patched_for_streaming, DownloadConfig):\n module._patched_for_streaming.token = download_config.token\n module._patched_for_streaming.storage_options = download_config.storage_options\n return\n\n def wrap_auth(function):\n @wraps(function)\n def wrapper(*args, **kwargs):\n return function(*args, download_config=download_config, **kwargs)\n\n wrapper._decorator_name_ = "wrap_auth"\n return wrapper\n\n # open files in a streaming fashion\n patch_submodule(module, "open", wrap_auth(xopen)).start()\n patch_submodule(module, "os.listdir", wrap_auth(xlistdir)).start()\n patch_submodule(module, "os.walk", wrap_auth(xwalk)).start()\n patch_submodule(module, "glob.glob", wrap_auth(xglob)).start()\n # allow to navigate in remote zip files\n patch_submodule(module, "os.path.join", xjoin).start()\n patch_submodule(module, "os.path.dirname", xdirname).start()\n patch_submodule(module, "os.path.basename", xbasename).start()\n patch_submodule(module, "os.path.relpath", xrelpath).start()\n patch_submodule(module, "os.path.split", xsplit).start()\n patch_submodule(module, "os.path.splitext", xsplitext).start()\n # allow checks on paths\n patch_submodule(module, "os.path.exists", wrap_auth(xexists)).start()\n patch_submodule(module, "os.path.isdir", wrap_auth(xisdir)).start()\n patch_submodule(module, "os.path.isfile", wrap_auth(xisfile)).start()\n patch_submodule(module, "os.path.getsize", wrap_auth(xgetsize)).start()\n patch_submodule(module, "pathlib.Path", xPath).start()\n # file readers\n patch_submodule(module, "gzip.open", wrap_auth(xgzip_open)).start()\n patch_submodule(module, "numpy.load", 
wrap_auth(xnumpy_load)).start()\n patch_submodule(module, "pandas.read_csv", wrap_auth(xpandas_read_csv), attrs=["__version__"]).start()\n patch_submodule(module, "pandas.read_excel", wrap_auth(xpandas_read_excel), attrs=["__version__"]).start()\n patch_submodule(module, "scipy.io.loadmat", wrap_auth(xsio_loadmat), attrs=["__version__"]).start()\n patch_submodule(module, "xml.etree.ElementTree.parse", wrap_auth(xet_parse)).start()\n patch_submodule(module, "xml.dom.minidom.parse", wrap_auth(xxml_dom_minidom_parse)).start()\n # pyarrow: do not patch pyarrow attribute in packaged modules\n if not module.__name__.startswith("datasets.packaged_modules."):\n patch_submodule(module, "pyarrow.parquet.read_table", wrap_auth(xpyarrow_parquet_read_table)).start()\n module._patched_for_streaming = download_config\n\n\ndef extend_dataset_builder_for_streaming(builder: "DatasetBuilder"):\n """Extend the dataset builder module and the modules imported by it to support streaming.\n\n Args:\n builder (:class:`DatasetBuilder`): Dataset builder instance.\n """\n # this extends the open and os.path.join functions for data streaming\n download_config = DownloadConfig(storage_options=builder.storage_options, token=builder.token)\n extend_module_for_streaming(builder.__module__, download_config=download_config)\n # if needed, we also have to extend additional internal imports (like wmt14 -> wmt_utils)\n if not builder.__module__.startswith("datasets."): # check that it's not a packaged builder like csv\n importable_file = inspect.getfile(builder.__class__)\n with lock_importable_file(importable_file):\n for imports in get_imports(importable_file):\n if imports[0] == "internal":\n internal_import_name = imports[1]\n internal_module_name = ".".join(builder.__module__.split(".")[:-1] + [internal_import_name])\n extend_module_for_streaming(internal_module_name, download_config=download_config)\n\n # builders can inherit from other builders that might use streaming functionality\n # (for example, ImageFolder and AudioFolder inherit from FolderBuilder which implements examples generation)\n # but these parents builders are not patched automatically as they are not instantiated, so we patch them here\n from .builder import DatasetBuilder\n\n parent_builder_modules = [\n cls.__module__\n for cls in type(builder).__mro__[1:] # make sure it's not the same module we've already patched\n if issubclass(cls, DatasetBuilder) and cls.__module__ != DatasetBuilder.__module__\n ] # check it's not a standard builder from datasets.builder\n for module in parent_builder_modules:\n extend_module_for_streaming(module, download_config=download_config)\n
|
.venv\Lib\site-packages\datasets\streaming.py
|
streaming.py
|
Python
| 6,534 | 0.95 | 0.161972 | 0.090164 |
awesome-app
| 546 |
2025-01-04T15:52:29.222812
|
BSD-3-Clause
| false |
0df68eb351b04e7a349d06a78cad4873
|
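The `streaming.py` record above patches a builder's module so that `open`, `os.path.join`, `glob.glob` and friends resolve remote files and archive members through fsspec. In normal use the library applies this for you when a dataset is streamed; the sketch below calls it by hand purely for illustration and assumes Hub access for the example builder.

```python
# Hedged sketch: explicitly extending a builder for streaming.
from datasets import load_dataset_builder
from datasets.streaming import extend_dataset_builder_for_streaming

builder = load_dataset_builder("cornell-movie-review-data/rotten_tomatoes")

# After this call, file access inside the builder's module goes through the
# x* helpers (xopen, xjoin, ...) listed at the top of the record above.
extend_dataset_builder_for_streaming(builder)
```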
import copy\nimport os\nfrom collections.abc import Iterator\nfrom functools import partial\nfrom itertools import groupby\nfrom typing import TYPE_CHECKING, Any, Callable, Optional, TypeVar, Union\n\nimport numpy as np\nimport pyarrow as pa\nimport pyarrow.compute as pc\nimport pyarrow.types\n\nfrom .utils.logging import get_logger\n\n\nif TYPE_CHECKING:\n from .features.features import Features, FeatureType\n\n\nlogger = get_logger(__name__)\n\n\ndef inject_arrow_table_documentation(arrow_table_method):\n def wrapper(fn):\n fn.__doc__ = arrow_table_method.__doc__ + (fn.__doc__ if fn.__doc__ is not None else "")\n fn.__doc__ = fn.__doc__.replace("pyarrow.Table", "Table")\n if hasattr(arrow_table_method, "__annotations__"):\n fn.__annotations__ = arrow_table_method.__annotations__\n return fn\n\n return wrapper\n\n\ndef _in_memory_arrow_table_from_file(filename: str) -> pa.Table:\n in_memory_stream = pa.input_stream(filename)\n opened_stream = pa.ipc.open_stream(in_memory_stream)\n pa_table = opened_stream.read_all()\n return pa_table\n\n\ndef _in_memory_arrow_table_from_buffer(buffer: pa.Buffer) -> pa.Table:\n stream = pa.BufferReader(buffer)\n opened_stream = pa.ipc.open_stream(stream)\n table = opened_stream.read_all()\n return table\n\n\ndef _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:\n memory_mapped_stream = pa.memory_map(filename)\n return pa.ipc.open_stream(memory_mapped_stream)\n\n\ndef read_schema_from_file(filename: str) -> pa.Schema:\n """\n Infer arrow table schema from file without loading whole file into memory.\n Useful especially while having very big files.\n """\n with pa.memory_map(filename) as memory_mapped_stream:\n schema = pa.ipc.open_stream(memory_mapped_stream).schema\n return schema\n\n\ndef _memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\n opened_stream = _memory_mapped_record_batch_reader_from_file(filename)\n pa_table = opened_stream.read_all()\n return pa_table\n\n\ndef _deepcopy(x, memo: dict):\n """deepcopy a regular class instance"""\n cls = x.__class__\n result = cls.__new__(cls)\n memo[id(x)] = result\n for k, v in x.__dict__.items():\n setattr(result, k, copy.deepcopy(v, memo))\n return result\n\n\ndef _interpolation_search(arr: list[int], x: int) -> int:\n """\n Return the position i of a sorted array so that arr[i] <= x < arr[i+1]\n\n Args:\n arr (`List[int]`): non-empty sorted list of integers\n x (`int`): query\n\n Returns:\n `int`: the position i so that arr[i] <= x < arr[i+1]\n\n Raises:\n `IndexError`: if the array is empty or if the query is outside the array values\n """\n i, j = 0, len(arr) - 1\n while i < j and arr[i] <= x < arr[j]:\n k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))\n if arr[k] <= x < arr[k + 1]:\n return k\n elif arr[k] < x:\n i, j = k + 1, j\n else:\n i, j = i, k\n raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")\n\n\nclass IndexedTableMixin:\n def __init__(self, table: pa.Table):\n self._schema: pa.Schema = table.schema\n self._batches: list[pa.RecordBatch] = [\n recordbatch for recordbatch in table.to_batches() if len(recordbatch) > 0\n ]\n self._offsets: np.ndarray = np.cumsum([0] + [len(b) for b in self._batches], dtype=np.int64)\n\n def fast_gather(self, indices: Union[list[int], np.ndarray]) -> pa.Table:\n """\n Create a pa.Table by gathering the records at the records at the specified indices. 
Should be faster\n than pa.concat_tables(table.fast_slice(int(i) % table.num_rows, 1) for i in indices) since NumPy can compute\n the binary searches in parallel, highly optimized C\n """\n if not len(indices):\n raise ValueError("Indices must be non-empty")\n batch_indices = np.searchsorted(self._offsets, indices, side="right") - 1\n return pa.Table.from_batches(\n [\n self._batches[batch_idx].slice(i - self._offsets[batch_idx], 1)\n for batch_idx, i in zip(batch_indices, indices)\n ],\n schema=self._schema,\n )\n\n def fast_slice(self, offset=0, length=None) -> pa.Table:\n """\n Slice the Table using interpolation search.\n The behavior is the same as `pyarrow.Table.slice` but it's significantly faster.\n\n Interpolation search is used to find the start and end indexes of the batches we want to keep.\n The batches to keep are then concatenated to form the sliced Table.\n """\n if offset < 0:\n raise IndexError("Offset must be non-negative")\n elif offset >= self._offsets[-1] or (length is not None and length <= 0):\n return pa.Table.from_batches([], schema=self._schema)\n i = _interpolation_search(self._offsets, offset)\n if length is None or length + offset >= self._offsets[-1]:\n batches = self._batches[i:]\n batches[0] = batches[0].slice(offset - self._offsets[i])\n else:\n j = _interpolation_search(self._offsets, offset + length - 1)\n batches = self._batches[i : j + 1]\n batches[-1] = batches[-1].slice(0, offset + length - self._offsets[j])\n batches[0] = batches[0].slice(offset - self._offsets[i])\n return pa.Table.from_batches(batches, schema=self._schema)\n\n\nclass Table(IndexedTableMixin):\n """\n Wraps a pyarrow Table by using composition.\n This is the base class for `InMemoryTable`, `MemoryMappedTable` and `ConcatenationTable`.\n\n It implements all the basic attributes/methods of the pyarrow Table class except\n the Table transforms: `slice, filter, flatten, combine_chunks, cast, add_column,\n append_column, remove_column, set_column, rename_columns` and `drop`.\n\n The implementation of these methods differs for the subclasses.\n """\n\n def __init__(self, table: pa.Table):\n super().__init__(table)\n self.table = table\n\n def __deepcopy__(self, memo: dict):\n # arrow tables are immutable, so there's no need to copy self.table\n # moreover calling deepcopy on a pyarrow table seems to make pa.total_allocated_bytes() decrease for some reason\n # by adding it to the memo, self.table won't be copied\n memo[id(self.table)] = self.table\n # same for the recordbatches used by the index\n memo[id(self._batches)] = list(self._batches)\n return _deepcopy(self, memo)\n\n def validate(self, *args, **kwargs):\n """\n Perform validation checks. An exception is raised if validation fails.\n\n By default only cheap validation checks are run. 
Pass `full=True`\n for thorough validation checks (potentially `O(n)`).\n\n Args:\n full (`bool`, defaults to `False`):\n If `True`, run expensive checks, otherwise cheap checks only.\n\n Raises:\n `pa.lib.ArrowInvalid`: if validation fails\n """\n return self.table.validate(*args, **kwargs)\n\n def equals(self, *args, **kwargs):\n """\n Check if contents of two tables are equal.\n\n Args:\n other ([`~datasets.table.Table`]):\n Table to compare against.\n check_metadata `bool`, defaults to `False`):\n Whether schema metadata equality should be checked as well.\n\n Returns:\n `bool`\n """\n args = tuple(arg.table if isinstance(arg, Table) else arg for arg in args)\n kwargs = {k: v.table if isinstance(v, Table) else v for k, v in kwargs}\n return self.table.equals(*args, **kwargs)\n\n def to_batches(self, *args, **kwargs):\n """\n Convert Table to list of (contiguous) `RecordBatch` objects.\n\n Args:\n max_chunksize (`int`, defaults to `None`):\n Maximum size for `RecordBatch` chunks. Individual chunks may be\n smaller depending on the chunk layout of individual columns.\n\n Returns:\n `List[pyarrow.RecordBatch]`\n """\n return self.table.to_batches(*args, **kwargs)\n\n def to_pydict(self, *args, **kwargs):\n """\n Convert the Table to a `dict` or `OrderedDict`.\n\n Returns:\n `dict`\n """\n return self.table.to_pydict(*args, **kwargs)\n\n def to_pylist(self, *args, **kwargs):\n """\n Convert the Table to a list\n\n Returns:\n `list`\n """\n return self.table.to_pylist(*args, **kwargs)\n\n def to_pandas(self, *args, **kwargs):\n """\n Convert to a pandas-compatible NumPy array or DataFrame, as appropriate.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n Arrow MemoryPool to use for allocations. Uses the default memory\n pool is not passed.\n strings_to_categorical (`bool`, defaults to `False`):\n Encode string (UTF8) and binary types to `pandas.Categorical`.\n categories (`list`, defaults to `empty`):\n List of fields that should be returned as `pandas.Categorical`. Only\n applies to table-like data structures.\n zero_copy_only (`bool`, defaults to `False`):\n Raise an `ArrowException` if this function call would require copying\n the underlying data.\n integer_object_nulls (`bool`, defaults to `False`):\n Cast integers with nulls to objects.\n date_as_object (`bool`, defaults to `True`):\n Cast dates to objects. If `False`, convert to `datetime64[ns]` dtype.\n timestamp_as_object (`bool`, defaults to `False`):\n Cast non-nanosecond timestamps (`np.datetime64`) to objects. This is\n useful if you have timestamps that don't fit in the normal date\n range of nanosecond timestamps (1678 CE-2262 CE).\n If `False`, all timestamps are converted to `datetime64[ns]` dtype.\n use_threads (`bool`, defaults to `True`):\n Whether to parallelize the conversion using multiple threads.\n deduplicate_objects (`bool`, defaults to `False`):\n Do not create multiple copies Python objects when created, to save\n on memory use. Conversion will be slower.\n ignore_metadata (`bool`, defaults to `False`):\n If `True`, do not use the 'pandas' metadata to reconstruct the\n DataFrame index, if present.\n safe (`bool`, defaults to `True`):\n For certain data types, a cast is needed in order to store the\n data in a pandas DataFrame or Series (e.g. timestamps are always\n stored as nanoseconds in pandas). 
This option controls whether it\n is a safe cast or not.\n split_blocks (`bool`, defaults to `False`):\n If `True`, generate one internal "block" for each column when\n creating a pandas.DataFrame from a `RecordBatch` or `Table`. While this\n can temporarily reduce memory note that various pandas operations\n can trigger "consolidation" which may balloon memory use.\n self_destruct (`bool`, defaults to `False`):\n EXPERIMENTAL: If `True`, attempt to deallocate the originating Arrow\n memory while converting the Arrow object to pandas. If you use the\n object after calling `to_pandas` with this option it will crash your\n program.\n types_mapper (`function`, defaults to `None`):\n A function mapping a pyarrow DataType to a pandas `ExtensionDtype`.\n This can be used to override the default pandas type for conversion\n of built-in pyarrow types or in absence of `pandas_metadata` in the\n Table schema. The function receives a pyarrow DataType and is\n expected to return a pandas `ExtensionDtype` or `None` if the\n default conversion should be used for that type. If you have\n a dictionary mapping, you can pass `dict.get` as function.\n\n Returns:\n `pandas.Series` or `pandas.DataFrame`: `pandas.Series` or `pandas.DataFrame` depending on type of object\n """\n return self.table.to_pandas(*args, **kwargs)\n\n def to_string(self, *args, **kwargs):\n return self.table.to_string(*args, **kwargs)\n\n def to_reader(self, max_chunksize: Optional[int] = None):\n """\n Convert the Table to a RecordBatchReader.\n\n Note that this method is zero-copy, it merely exposes the same data under a different API.\n\n Args:\n max_chunksize (`int`, defaults to `None`)\n Maximum size for RecordBatch chunks. Individual chunks may be smaller depending\n on the chunk layout of individual columns.\n\n Returns:\n `pyarrow.RecordBatchReader`\n """\n return self.table.to_reader(max_chunksize=max_chunksize)\n\n def field(self, *args, **kwargs):\n """\n Select a schema field by its column name or numeric index.\n\n Args:\n i (`Union[int, str]`):\n The index or name of the field to retrieve.\n\n Returns:\n `pyarrow.Field`\n """\n return self.table.field(*args, **kwargs)\n\n def column(self, *args, **kwargs):\n """\n Select a column by its column name, or numeric index.\n\n Args:\n i (`Union[int, str]`):\n The index or name of the column to retrieve.\n\n Returns:\n `pyarrow.ChunkedArray`\n """\n return self.table.column(*args, **kwargs)\n\n def itercolumns(self, *args, **kwargs):\n """\n Iterator over all columns in their numerical order.\n\n Yields:\n `pyarrow.ChunkedArray`\n """\n return self.table.itercolumns(*args, **kwargs)\n\n @property\n def schema(self):\n """\n Schema of the table and its columns.\n\n Returns:\n `pyarrow.Schema`\n """\n return self.table.schema\n\n @property\n def columns(self):\n """\n List of all columns in numerical order.\n\n Returns:\n `List[pa.ChunkedArray]`\n """\n return self.table.columns\n\n @property\n def num_columns(self):\n """\n Number of columns in this table.\n\n Returns:\n int\n """\n return self.table.num_columns\n\n @property\n def num_rows(self):\n """\n Number of rows in this table.\n\n Due to the definition of a table, all columns have the same number of\n rows.\n\n Returns:\n int\n """\n return self.table.num_rows\n\n @property\n def shape(self):\n """\n Dimensions of the table: (#rows, #columns).\n\n Returns:\n `(int, int)`: Number of rows and number of columns.\n """\n return self.table.shape\n\n @property\n def nbytes(self):\n """\n Total number of bytes consumed by the 
elements of the table.\n """\n return self.table.nbytes\n\n @property\n def column_names(self):\n """\n Names of the table's columns.\n """\n return self.table.column_names\n\n def __eq__(self, other):\n return self.equals(other)\n\n def __getitem__(self, i):\n return self.table[i]\n\n def __len__(self):\n return len(self.table)\n\n def __repr__(self):\n return self.table.__repr__().replace("pyarrow.Table", self.__class__.__name__)\n\n def __str__(self):\n return self.table.__str__().replace("pyarrow.Table", self.__class__.__name__)\n\n def slice(self, *args, **kwargs):\n """\n Compute zero-copy slice of this Table.\n\n Args:\n offset (`int`, defaults to `0`):\n Offset from start of table to slice.\n length (`int`, defaults to `None`):\n Length of slice (default is until end of table starting from\n offset).\n\n Returns:\n `datasets.table.Table`\n """\n raise NotImplementedError()\n\n def filter(self, *args, **kwargs):\n """\n Select records from a Table. See `pyarrow.compute.filter` for full usage.\n """\n raise NotImplementedError()\n\n def flatten(self, *args, **kwargs):\n """\n Flatten this Table. Each column with a struct type is flattened\n into one column per struct field. Other columns are left unchanged.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n raise NotImplementedError()\n\n def combine_chunks(self, *args, **kwargs):\n """\n Make a new table by combining the chunks this table has.\n\n All the underlying chunks in the `ChunkedArray` of each column are\n concatenated into zero or one chunk.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n raise NotImplementedError()\n\n def cast(self, *args, **kwargs):\n """\n Cast table values to another schema.\n\n Args:\n target_schema (`Schema`):\n Schema to cast to, the names and order of fields must match.\n safe (`bool`, defaults to `True`):\n Check for overflows or other unsafe conversions.\n\n Returns:\n `datasets.table.Table`\n """\n raise NotImplementedError()\n\n def replace_schema_metadata(self, *args, **kwargs):\n """\n EXPERIMENTAL: Create shallow copy of table by replacing schema\n key-value metadata with the indicated new metadata (which may be None,\n which deletes any existing metadata\n\n Args:\n metadata (`dict`, defaults to `None`):\n\n Returns:\n `datasets.table.Table`: shallow_copy\n """\n raise NotImplementedError()\n\n def add_column(self, *args, **kwargs):\n """\n Add column to Table at position.\n\n A new table is returned with the column added, the original table\n object is left unchanged.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`: New table with the passed column added.\n """\n raise NotImplementedError()\n\n def append_column(self, *args, **kwargs):\n """\n Append column at end of columns.\n\n Args:\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`: New table with the passed column added.\n """\n raise NotImplementedError()\n\n def remove_column(self, *args, 
**kwargs):\n """\n Create new Table with the indicated column removed.\n\n Args:\n i (`int`):\n Index of column to remove.\n\n Returns:\n `datasets.table.Table`: New table without the column.\n """\n raise NotImplementedError()\n\n def set_column(self, *args, **kwargs):\n """\n Replace column in Table at position.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`: New table with the passed column set.\n """\n raise NotImplementedError()\n\n def rename_columns(self, *args, **kwargs):\n """\n Create new table with columns renamed to provided names.\n """\n raise NotImplementedError()\n\n def drop(self, *args, **kwargs):\n """\n Drop one or more columns and return a new table.\n\n Args:\n columns (`List[str]`):\n List of field names referencing existing columns.\n\n Raises:\n `KeyError` : if any of the passed columns name are not existing.\n\n Returns:\n `datasets.table.Table`: New table without the columns.\n """\n raise NotImplementedError()\n\n def select(self, *args, **kwargs):\n """\n Select columns of the table.\n\n Returns a new table with the specified columns, and metadata preserved.\n\n Args:\n columns (:obj:`Union[List[str], List[int]]`):\n The column names or integer indices to select.\n\n Returns:\n `datasets.table.Table`: table with only a subset of the columns\n """\n raise NotImplementedError()\n\n\nclass TableBlock(Table):\n """\n `TableBlock` is the allowed class inside a `ConcanetationTable`.\n Only `MemoryMappedTable` and `InMemoryTable` are `TableBlock`.\n This is because we don't want a `ConcanetationTable` made out of other `ConcanetationTables`.\n """\n\n pass\n\n\nclass InMemoryTable(TableBlock):\n """\n The table is said in-memory when it is loaded into the user's RAM.\n\n Pickling it does copy all the data using memory.\n Its implementation is simple and uses the underlying pyarrow Table methods directly.\n\n This is different from the `MemoryMapped` table, for which pickling doesn't copy all the\n data in memory. For a `MemoryMapped`, unpickling instead reloads the table from the disk.\n\n `InMemoryTable` must be used when data fit in memory, while `MemoryMapped` are reserved for\n data bigger than memory or when you want the memory footprint of your application to\n stay low.\n """\n\n @classmethod\n def from_file(cls, filename: str):\n table = _in_memory_arrow_table_from_file(filename)\n return cls(table)\n\n @classmethod\n def from_buffer(cls, buffer: pa.Buffer):\n table = _in_memory_arrow_table_from_buffer(buffer)\n return cls(table)\n\n @classmethod\n def from_pandas(cls, *args, **kwargs):\n """\n Convert pandas.DataFrame to an Arrow Table.\n\n The column types in the resulting Arrow Table are inferred from the\n dtypes of the pandas.Series in the DataFrame. In the case of non-object\n Series, the NumPy dtype is translated to its Arrow equivalent. In the\n case of `object`, we need to guess the datatype by looking at the\n Python objects in this Series.\n\n Be aware that Series of the `object` dtype don't carry enough\n information to always lead to a meaningful Arrow type. In the case that\n we cannot infer a type, e.g. because the DataFrame is of length 0 or\n the Series only contains `None/nan` objects, the type is set to\n null. 
This behavior can be avoided by constructing an explicit schema\n and passing it to this function.\n\n Args:\n df (`pandas.DataFrame`):\n schema (`pyarrow.Schema`, *optional*):\n The expected schema of the Arrow Table. This can be used to\n indicate the type of columns if we cannot infer it automatically.\n If passed, the output will have exactly this schema. Columns\n specified in the schema that are not found in the DataFrame columns\n or its index will raise an error. Additional columns or index\n levels in the DataFrame which are not specified in the schema will\n be ignored.\n preserve_index (`bool`, *optional*):\n Whether to store the index as an additional column in the resulting\n `Table`. The default of None will store the index as a column,\n except for RangeIndex which is stored as metadata only. Use\n `preserve_index=True` to force it to be stored as a column.\n nthreads (`int`, defaults to `None` (may use up to system CPU count threads))\n If greater than 1, convert columns to Arrow in parallel using\n indicated number of threads.\n columns (`List[str]`, *optional*):\n List of column to be converted. If `None`, use all columns.\n safe (`bool`, defaults to `True`):\n Check for overflows or other unsafe conversions,\n\n Returns:\n `datasets.table.Table`:\n\n Examples:\n ```python\n >>> import pandas as pd\n >>> import pyarrow as pa\n >>> df = pd.DataFrame({\n ... 'int': [1, 2],\n ... 'str': ['a', 'b']\n ... })\n >>> pa.Table.from_pandas(df)\n <pyarrow.lib.Table object at 0x7f05d1fb1b40>\n ```\n """\n return cls(pa.Table.from_pandas(*args, **kwargs))\n\n @classmethod\n def from_arrays(cls, *args, **kwargs):\n """\n Construct a Table from Arrow arrays.\n\n Args:\n arrays (`List[Union[pyarrow.Array, pyarrow.ChunkedArray]]`):\n Equal-length arrays that should form the table.\n names (`List[str]`, *optional*):\n Names for the table columns. If not passed, schema must be passed.\n schema (`Schema`, defaults to `None`):\n Schema for the created table. 
If not passed, names must be passed.\n metadata (`Union[dict, Mapping]`, defaults to `None`):\n Optional metadata for the schema (if inferred).\n\n Returns:\n `datasets.table.Table`\n """\n return cls(pa.Table.from_arrays(*args, **kwargs))\n\n @classmethod\n def from_pydict(cls, *args, **kwargs):\n """\n Construct a Table from Arrow arrays or columns.\n\n Args:\n mapping (`Union[dict, Mapping]`):\n A mapping of strings to Arrays or Python lists.\n schema (`Schema`, defaults to `None`):\n If not passed, will be inferred from the Mapping values\n metadata (`Union[dict, Mapping]`, defaults to `None`):\n Optional metadata for the schema (if inferred).\n\n Returns:\n `datasets.table.Table`\n """\n return cls(pa.Table.from_pydict(*args, **kwargs))\n\n @classmethod\n def from_pylist(cls, mapping, *args, **kwargs):\n """\n Construct a Table from list of rows / dictionaries.\n\n Args:\n mapping (`List[dict]`):\n A mapping of strings to row values.\n schema (`Schema`, defaults to `None`):\n If not passed, will be inferred from the Mapping values\n metadata (`Union[dict, Mapping]`, defaults to `None`):\n Optional metadata for the schema (if inferred).\n\n Returns:\n `datasets.table.Table`\n """\n return cls(pa.Table.from_pylist(mapping, *args, **kwargs))\n\n @classmethod\n def from_batches(cls, *args, **kwargs):\n """\n Construct a Table from a sequence or iterator of Arrow `RecordBatches`.\n\n Args:\n batches (`Union[Sequence[pyarrow.RecordBatch], Iterator[pyarrow.RecordBatch]]`):\n Sequence of `RecordBatch` to be converted, all schemas must be equal.\n schema (`Schema`, defaults to `None`):\n If not passed, will be inferred from the first `RecordBatch`.\n\n Returns:\n `datasets.table.Table`:\n """\n return cls(pa.Table.from_batches(*args, **kwargs))\n\n def slice(self, offset=0, length=None):\n """\n Compute zero-copy slice of this Table.\n\n Args:\n offset (`int`, defaults to `0`):\n Offset from start of table to slice.\n length (`int`, defaults to `None`):\n Length of slice (default is until end of table starting from\n offset).\n\n Returns:\n `datasets.table.Table`\n """\n # Use fast slicing here\n return InMemoryTable(self.fast_slice(offset=offset, length=length))\n\n def filter(self, *args, **kwargs):\n """\n Select records from a Table. See `pyarrow.compute.filter` for full usage.\n """\n return InMemoryTable(self.table.filter(*args, **kwargs))\n\n def flatten(self, *args, **kwargs):\n """\n Flatten this Table. Each column with a struct type is flattened\n into one column per struct field. 
Other columns are left unchanged.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n return InMemoryTable(table_flatten(self.table, *args, **kwargs))\n\n def combine_chunks(self, *args, **kwargs):\n """\n Make a new table by combining the chunks this table has.\n\n All the underlying chunks in the `ChunkedArray` of each column are\n concatenated into zero or one chunk.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n return InMemoryTable(self.table.combine_chunks(*args, **kwargs))\n\n def cast(self, *args, **kwargs):\n """\n Cast table values to another schema.\n\n Args:\n target_schema (`Schema`):\n Schema to cast to, the names and order of fields must match.\n safe (`bool`, defaults to `True`):\n Check for overflows or other unsafe conversions.\n\n Returns:\n `datasets.table.Table`\n """\n return InMemoryTable(table_cast(self.table, *args, **kwargs))\n\n def replace_schema_metadata(self, *args, **kwargs):\n """\n EXPERIMENTAL: Create shallow copy of table by replacing schema\n key-value metadata with the indicated new metadata (which may be `None`,\n which deletes any existing metadata).\n\n Args:\n metadata (`dict`, defaults to `None`):\n\n Returns:\n `datasets.table.Table`: shallow_copy\n """\n return InMemoryTable(self.table.replace_schema_metadata(*args, **kwargs))\n\n def add_column(self, *args, **kwargs):\n """\n Add column to Table at position.\n\n A new table is returned with the column added, the original table\n object is left unchanged.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`: New table with the passed column added.\n """\n return InMemoryTable(self.table.add_column(*args, **kwargs))\n\n def append_column(self, *args, **kwargs):\n """\n Append column at end of columns.\n\n Args:\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`:\n New table with the passed column added.\n """\n return InMemoryTable(self.table.append_column(*args, **kwargs))\n\n def remove_column(self, *args, **kwargs):\n """\n Create new Table with the indicated column removed.\n\n Args:\n i (`int`):\n Index of column to remove.\n\n Returns:\n `datasets.table.Table`:\n New table without the column.\n """\n return InMemoryTable(self.table.remove_column(*args, **kwargs))\n\n def set_column(self, *args, **kwargs):\n """\n Replace column in Table at position.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`:\n New table with the passed column set.\n """\n return InMemoryTable(self.table.set_column(*args, **kwargs))\n\n def rename_columns(self, *args, **kwargs):\n """\n Create new table with columns renamed to provided names.\n """\n return InMemoryTable(self.table.rename_columns(*args, **kwargs))\n\n def drop(self, *args, **kwargs):\n """\n Drop 
one or more columns and return a new table.\n\n Args:\n columns (`List[str]`):\n List of field names referencing existing columns.\n\n Raises:\n `KeyError` : if any of the passed column names do not exist.\n\n Returns:\n `datasets.table.Table`:\n New table without the columns.\n """\n return InMemoryTable(self.table.drop(*args, **kwargs))\n\n def select(self, *args, **kwargs):\n """\n Select columns of the table.\n\n Returns a new table with the specified columns, and metadata preserved.\n\n Args:\n columns (:obj:`Union[List[str], List[int]]`):\n The column names or integer indices to select.\n\n Returns:\n :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved.\n """\n return InMemoryTable(self.table.select(*args, **kwargs))\n\n\n# The MemoryMappedTable needs replays to properly reload tables from the disk\nReplay = tuple[str, tuple, dict]\n\n\nclass MemoryMappedTable(TableBlock):\n """\n A table is said to be memory-mapped when it doesn't use the user's RAM but loads the data\n from the disk instead.\n\n Pickling it doesn't copy the data into memory.\n Instead, only the path to the memory mapped arrow file is pickled, as well as the list\n of transforms to "replay" when reloading the table from the disk.\n\n Its implementation requires storing a history of all the transforms that were applied\n to the underlying pyarrow Table, so that they can be "replayed" when reloading the Table\n from the disk.\n\n This is different from the `InMemoryTable` table, for which pickling does copy all the\n data in memory.\n\n `InMemoryTable` must be used when data fits in memory, while `MemoryMapped` tables are reserved for\n data bigger than memory or when you want the memory footprint of your application to\n stay low.\n """\n\n def __init__(self, table: pa.Table, path: str, replays: Optional[list[Replay]] = None):\n super().__init__(table)\n self.path = os.path.abspath(path)\n self.replays: list[Replay] = replays if replays is not None else []\n\n @classmethod\n def from_file(cls, filename: str, replays=None):\n table = _memory_mapped_arrow_table_from_file(filename)\n table = cls._apply_replays(table, replays)\n return cls(table, filename, replays)\n\n def __getstate__(self):\n return {"path": self.path, "replays": self.replays}\n\n def __setstate__(self, state):\n path = state["path"]\n replays = state["replays"]\n table = _memory_mapped_arrow_table_from_file(path)\n table = self._apply_replays(table, replays)\n MemoryMappedTable.__init__(self, table, path=path, replays=replays)\n\n @staticmethod\n def _apply_replays(table: pa.Table, replays: Optional[list[Replay]] = None) -> pa.Table:\n if replays is not None:\n for name, args, kwargs in replays:\n if name == "cast":\n table = table_cast(table, *args, **kwargs)\n elif name == "flatten":\n table = table_flatten(table, *args, **kwargs)\n else:\n table = getattr(table, name)(*args, **kwargs)\n return table\n\n def _append_replay(self, replay: Replay) -> list[Replay]:\n replays = copy.deepcopy(self.replays)\n replays.append(replay)\n return replays\n\n def slice(self, offset=0, length=None):\n """\n Compute zero-copy slice of this Table.\n\n Args:\n offset (`int`, defaults to `0`):\n Offset from start of table to slice.\n length (`int`, defaults to `None`):\n Length of slice (default is until end of table starting from\n offset).\n\n Returns:\n `datasets.table.Table`\n """\n replay = ("slice", (offset, length), {})\n replays = self._append_replay(replay)\n # Use fast slicing here\n return 
MemoryMappedTable(self.fast_slice(offset=offset, length=length), self.path, replays)\n\n def filter(self, *args, **kwargs):\n """\n Select records from a Table. See `pyarrow.compute.filter` for full usage.\n """\n replay = ("filter", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.filter(*args, **kwargs), self.path, replays)\n\n def flatten(self, *args, **kwargs):\n """\n Flatten this Table. Each column with a struct type is flattened\n into one column per struct field. Other columns are left unchanged.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n replay = ("flatten", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(table_flatten(self.table, *args, **kwargs), self.path, replays)\n\n def combine_chunks(self, *args, **kwargs):\n """\n Make a new table by combining the chunks this table has.\n\n All the underlying chunks in the ChunkedArray of each column are\n concatenated into zero or one chunk.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n replay = ("combine_chunks", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.combine_chunks(*args, **kwargs), self.path, replays)\n\n def cast(self, *args, **kwargs):\n """\n Cast table values to another schema\n\n Args:\n target_schema (`Schema`):\n Schema to cast to, the names and order of fields must match.\n safe (`bool`, defaults to `True`):\n Check for overflows or other unsafe conversions.\n\n Returns:\n `datasets.table.Table`\n """\n replay = ("cast", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays)\n\n def replace_schema_metadata(self, *args, **kwargs):\n """\n EXPERIMENTAL: Create shallow copy of table by replacing schema\n key-value metadata with the indicated new metadata (which may be None,\n which deletes any existing metadata.\n\n Args:\n metadata (`dict`, defaults to `None`):\n\n Returns:\n `datasets.table.Table`: shallow_copy\n """\n replay = ("replace_schema_metadata", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.replace_schema_metadata(*args, **kwargs), self.path, replays)\n\n def add_column(self, *args, **kwargs):\n """\n Add column to Table at position.\n\n A new table is returned with the column added, the original table\n object is left unchanged.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`: New table with the passed column added.\n """\n replay = ("add_column", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.add_column(*args, **kwargs), self.path, replays)\n\n def append_column(self, *args, **kwargs):\n """\n Append column at end of columns.\n\n Args:\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column 
(`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`:\n New table with the passed column added.\n """\n replay = ("append_column", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.append_column(*args, **kwargs), self.path, replays)\n\n def remove_column(self, *args, **kwargs):\n """\n Create new Table with the indicated column removed.\n\n Args:\n i (`int`):\n Index of column to remove.\n\n Returns:\n `datasets.table.Table`:\n New table without the column.\n """\n replay = ("remove_column", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.remove_column(*args, **kwargs), self.path, replays)\n\n def set_column(self, *args, **kwargs):\n """\n Replace column in Table at position.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`:\n New table with the passed column set.\n """\n replay = ("set_column", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.set_column(*args, **kwargs), self.path, replays)\n\n def rename_columns(self, *args, **kwargs):\n """\n Create new table with columns renamed to provided names.\n """\n replay = ("rename_columns", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.rename_columns(*args, **kwargs), self.path, replays)\n\n def drop(self, *args, **kwargs):\n """\n Drop one or more columns and return a new table.\n\n Args:\n columns (`List[str]`):\n List of field names referencing existing columns.\n\n Raises:\n `KeyError` : if any of the passed columns name are not existing.\n\n Returns:\n `datasets.table.Table`:\n New table without the columns.\n """\n replay = ("drop", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.drop(*args, **kwargs), self.path, replays)\n\n def select(self, *args, **kwargs):\n """\n Select columns of the table.\n\n Returns a new table with the specified columns, and metadata preserved.\n\n Args:\n columns (:obj:`Union[List[str], List[int]]`):\n The column names or integer indices to select.\n\n Returns:\n :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved.\n """\n replay = ("select", copy.deepcopy(args), copy.deepcopy(kwargs))\n replays = self._append_replay(replay)\n return MemoryMappedTable(self.table.select(*args, **kwargs), self.path, replays)\n\n\n# A ConcatenationTable is the concatenation of several tables.\n# The ``blocks`` attributes stores a list of list of blocks.\n# The first axis concatenates the tables along the axis 0 (it appends rows),\n# while the second axis concatenates tables along the axis 1 (it appends columns).\nTableBlockContainer = TypeVar("TableBlockContainer", TableBlock, list[TableBlock], list[list[TableBlock]])\n\n\nclass ConcatenationTable(Table):\n """\n The table comes from the concatenation of several tables called blocks.\n It enables concatenation on both axis 0 (append rows) and axis 1 (append columns).\n\n The underlying tables are called "blocks" and can be either `InMemoryTable`\n or `MemoryMappedTable` objects.\n This allows to combine 
tables that come from memory or that are memory mapped.\n When a `ConcatenationTable` is pickled, each block is pickled:\n - the `InMemoryTable` objects are pickled by copying all the data in memory.\n - the MemoryMappedTable objects are pickled without copying the data into memory.\n Instead, only the path to the memory mapped arrow file is pickled, as well as the list\n of transforms to "replay" when reloading the table from the disk.\n\n Its implementation requires storing each block separately.\n The `blocks` attribute stores a list of lists of blocks.\n The first axis concatenates the tables along the axis 0 (it appends rows),\n while the second axis concatenates tables along the axis 1 (it appends columns).\n\n If some columns are missing when concatenating on axis 0, they are filled with null values.\n This is done using `pyarrow.concat_tables(tables, promote_options="default")`.\n\n You can access the fully combined table by accessing the `ConcatenationTable.table` attribute,\n and the blocks by accessing the `ConcatenationTable.blocks` attribute.\n """\n\n def __init__(self, table: pa.Table, blocks: list[list[TableBlock]]):\n super().__init__(table)\n self.blocks = blocks\n # Check that all the blocks have the right type.\n # Only InMemoryTable and MemoryMappedTable are allowed.\n for subtables in blocks:\n for subtable in subtables:\n if not isinstance(subtable, TableBlock):\n raise TypeError(\n "The blocks of a ConcatenationTable must be InMemoryTable or MemoryMappedTable objects"\n f", but got {_short_str(subtable)}."\n )\n\n def __getstate__(self):\n return {"blocks": self.blocks, "schema": self.table.schema}\n\n def __setstate__(self, state):\n blocks = state["blocks"]\n schema = state["schema"]\n table = self._concat_blocks_horizontally_and_vertically(blocks)\n if schema is not None and table.schema != schema:\n # We fix the columns by concatenating with an empty table with the right columns\n empty_table = pa.Table.from_batches([], schema=schema)\n # We set promote_options="default" to fill missing columns with null values\n table = pa.concat_tables([table, empty_table], promote_options="default")\n ConcatenationTable.__init__(self, table, blocks=blocks)\n\n @staticmethod\n def _concat_blocks(blocks: list[Union[TableBlock, pa.Table]], axis: int = 0) -> pa.Table:\n pa_tables = [table.table if hasattr(table, "table") else table for table in blocks]\n if axis == 0:\n # We set promote_options="default" to fill missing columns with null values\n return pa.concat_tables(pa_tables, promote_options="default")\n elif axis == 1:\n for i, table in enumerate(pa_tables):\n if i == 0:\n pa_table = table\n else:\n for name, col in zip(table.column_names, table.columns):\n pa_table = pa_table.append_column(name, col)\n return pa_table\n else:\n raise ValueError("'axis' must be either 0 or 1")\n\n @classmethod\n def _concat_blocks_horizontally_and_vertically(cls, blocks: list[list[TableBlock]]) -> pa.Table:\n pa_tables_to_concat_vertically = []\n for i, tables in enumerate(blocks):\n if not tables:\n continue\n pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1)\n pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated)\n return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0)\n\n @classmethod\n def _merge_blocks(cls, blocks: TableBlockContainer, axis: Optional[int] = None) -> TableBlockContainer:\n if axis is not None:\n merged_blocks = []\n for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)):\n if 
is_in_memory:\n block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))]\n merged_blocks += list(block_group)\n else: # both\n merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]\n if all(len(row_block) == 1 for row_block in merged_blocks):\n merged_blocks = cls._merge_blocks(\n [block for row_block in merged_blocks for block in row_block], axis=0\n )\n return merged_blocks\n\n @classmethod\n def _consolidate_blocks(cls, blocks: TableBlockContainer) -> TableBlockContainer:\n if isinstance(blocks, TableBlock):\n return blocks\n elif isinstance(blocks[0], TableBlock):\n return cls._merge_blocks(blocks, axis=0)\n else:\n return cls._merge_blocks(blocks)\n\n @classmethod\n def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable":\n blocks = cls._consolidate_blocks(blocks)\n if isinstance(blocks, TableBlock):\n table = blocks\n return cls(table.table, [[table]])\n elif isinstance(blocks[0], TableBlock):\n table = cls._concat_blocks(blocks, axis=0)\n blocks = [[t] for t in blocks]\n return cls(table, blocks)\n else:\n table = cls._concat_blocks_horizontally_and_vertically(blocks)\n return cls(table, blocks)\n\n @classmethod\n def from_tables(cls, tables: list[Union[pa.Table, Table]], axis: int = 0) -> "ConcatenationTable":\n """Create `ConcatenationTable` from list of tables.\n\n Args:\n tables (list of `Table` or list of `pyarrow.Table`):\n List of tables.\n axis (`{0, 1}`, defaults to `0`, meaning over rows):\n Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns\n (horizontally).\n\n <Added version="1.6.0"/>\n """\n\n def to_blocks(table: Union[pa.Table, Table]) -> list[list[TableBlock]]:\n if isinstance(table, pa.Table):\n return [[InMemoryTable(table)]]\n elif isinstance(table, ConcatenationTable):\n return copy.deepcopy(table.blocks)\n else:\n return [[table]]\n\n def _slice_row_block(row_block: list[TableBlock], length: int) -> tuple[list[TableBlock], list[TableBlock]]:\n sliced = [table.slice(0, length) for table in row_block]\n remainder = [table.slice(length, len(row_block[0]) - length) for table in row_block]\n return sliced, remainder\n\n def _split_both_like(\n result: list[list[TableBlock]], blocks: list[list[TableBlock]]\n ) -> tuple[list[list[TableBlock]], list[list[TableBlock]]]:\n """\n Make sure each row_block contain the same num_rows to be able to concatenate them on axis=1.\n\n To do so, we modify both blocks sets to have the same row_blocks boundaries.\n For example, if `result` has 2 row_blocks of 3 rows and `blocks` has 3 row_blocks of 2 rows,\n we modify both to have 4 row_blocks of size 2, 1, 1 and 2:\n\n [ x x x | x x x ]\n + [ y y | y y | y y ]\n -----------------------------\n = [ x x | x | x | x x ]\n [ y y | y | y | y y ]\n\n """\n result, blocks = list(result), list(blocks)\n new_result, new_blocks = [], []\n while result and blocks:\n # we slice the longest row block to save two row blocks of same length\n # and we replace the long row block by its remainder if necessary\n if len(result[0][0]) > len(blocks[0][0]):\n new_blocks.append(blocks[0])\n sliced, result[0] = _slice_row_block(result[0], len(blocks.pop(0)[0]))\n new_result.append(sliced)\n elif len(result[0][0]) < len(blocks[0][0]):\n new_result.append(result[0])\n sliced, blocks[0] = _slice_row_block(blocks[0], len(result.pop(0)[0]))\n new_blocks.append(sliced)\n else:\n new_result.append(result.pop(0))\n new_blocks.append(blocks.pop(0))\n if result or blocks:\n raise ValueError("Failed to concatenate 
on axis=1 because tables don't have the same number of rows")\n return new_result, new_blocks\n\n def _extend_blocks(\n result: list[list[TableBlock]], blocks: list[list[TableBlock]], axis: int = 0\n ) -> list[list[TableBlock]]:\n if axis == 0:\n result.extend(blocks)\n elif axis == 1:\n # We make sure each row_block have the same num_rows\n result, blocks = _split_both_like(result, blocks)\n for i, row_block in enumerate(blocks):\n result[i].extend(row_block)\n return result\n\n blocks = to_blocks(tables[0])\n for table in tables[1:]:\n table_blocks = to_blocks(table)\n blocks = _extend_blocks(blocks, table_blocks, axis=axis)\n return cls.from_blocks(blocks)\n\n @property\n def _slices(self):\n offset = 0\n for tables in self.blocks:\n length = len(tables[0])\n yield (offset, length)\n offset += length\n\n def slice(self, offset=0, length=None):\n """\n Compute zero-copy slice of this Table.\n\n Args:\n offset (`int`, defaults to `0`):\n Offset from start of table to slice.\n length (`int`, defaults to `None`):\n Length of slice (default is until end of table starting from\n offset).\n\n Returns:\n `datasets.table.Table`\n """\n table = self.table.slice(offset, length=length)\n length = length if length is not None else self.num_rows - offset\n blocks = []\n for tables in self.blocks:\n n_rows = len(tables[0])\n if length == 0:\n break\n elif n_rows <= offset:\n offset = offset - n_rows\n elif n_rows <= offset + length:\n blocks.append([t.slice(offset) for t in tables])\n length, offset = length + offset - n_rows, 0\n else:\n blocks.append([t.slice(offset, length) for t in tables])\n length, offset = 0, 0\n return ConcatenationTable(table, blocks)\n\n def filter(self, mask, *args, **kwargs):\n """\n Select records from a Table. See `pyarrow.compute.filter` for full usage.\n """\n table = self.table.filter(mask, *args, **kwargs)\n blocks = []\n for (offset, length), tables in zip(self._slices, self.blocks):\n submask = mask.slice(offset, length)\n blocks.append([t.filter(submask, *args, **kwargs) for t in tables])\n return ConcatenationTable(table, blocks)\n\n def flatten(self, *args, **kwargs):\n """\n Flatten this Table. Each column with a struct type is flattened\n into one column per struct field. 
Other columns are left unchanged.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n table = table_flatten(self.table, *args, **kwargs)\n blocks = []\n for tables in self.blocks:\n blocks.append([t.flatten(*args, **kwargs) for t in tables])\n return ConcatenationTable(table, blocks)\n\n def combine_chunks(self, *args, **kwargs):\n """\n Make a new table by combining the chunks this table has.\n\n All the underlying chunks in the `ChunkedArray` of each column are\n concatenated into zero or one chunk.\n\n Args:\n memory_pool (`MemoryPool`, defaults to `None`):\n For memory allocations, if required, otherwise use default pool.\n\n Returns:\n `datasets.table.Table`\n """\n table = self.table.combine_chunks(*args, **kwargs)\n blocks = []\n for tables in self.blocks:\n blocks.append([t.combine_chunks(*args, **kwargs) for t in tables])\n return ConcatenationTable(table, blocks)\n\n def cast(self, target_schema, *args, **kwargs):\n """\n Cast table values to another schema.\n\n Args:\n target_schema (`Schema`):\n Schema to cast to, the names and order of fields must match.\n safe (`bool`, defaults to `True`):\n Check for overflows or other unsafe conversions.\n\n Returns:\n `datasets.table.Table`\n """\n from .features import Features\n\n table = table_cast(self.table, target_schema, *args, **kwargs)\n target_features = Features.from_arrow_schema(target_schema)\n blocks = []\n for subtables in self.blocks:\n new_tables = []\n fields = list(target_schema)\n for subtable in subtables:\n subfields = []\n for name in subtable.column_names:\n subfields.append(fields.pop(next(i for i, field in enumerate(fields) if field.name == name)))\n subfeatures = Features({subfield.name: target_features[subfield.name] for subfield in subfields})\n subschema = subfeatures.arrow_schema\n new_tables.append(subtable.cast(subschema, *args, **kwargs))\n blocks.append(new_tables)\n return ConcatenationTable(table, blocks)\n\n def replace_schema_metadata(self, *args, **kwargs):\n """\n EXPERIMENTAL: Create shallow copy of table by replacing schema\n key-value metadata with the indicated new metadata (which may be `None`,\n which deletes any existing metadata).\n\n Args:\n metadata (`dict`, defaults to `None`):\n\n Returns:\n `datasets.table.Table`: shallow_copy\n """\n table = self.table.replace_schema_metadata(*args, **kwargs)\n blocks = []\n for tables in self.blocks:\n blocks.append([t.replace_schema_metadata(*args, **kwargs) for t in tables])\n return ConcatenationTable(table, self.blocks)\n\n def add_column(self, *args, **kwargs):\n """\n Add column to Table at position.\n\n A new table is returned with the column added, the original table\n object is left unchanged.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`: New table with the passed column added.\n """\n raise NotImplementedError()\n\n def append_column(self, *args, **kwargs):\n """\n Append column at end of columns.\n\n Args:\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`:\n New table with the passed column added.\n """\n raise 
NotImplementedError()\n\n def remove_column(self, i, *args, **kwargs):\n """\n Create new Table with the indicated column removed.\n\n Args:\n i (`int`):\n Index of column to remove.\n\n Returns:\n `datasets.table.Table`:\n New table without the column.\n """\n table = self.table.remove_column(i, *args, **kwargs)\n name = self.table.column_names[i]\n blocks = []\n for tables in self.blocks:\n blocks.append(\n [\n t.remove_column(t.column_names.index(name), *args, **kwargs) if name in t.column_names else t\n for t in tables\n ]\n )\n return ConcatenationTable(table, blocks)\n\n def set_column(self, *args, **kwargs):\n """\n Replace column in Table at position.\n\n Args:\n i (`int`):\n Index to place the column at.\n field_ (`Union[str, pyarrow.Field]`):\n If a string is passed then the type is deduced from the column\n data.\n column (`Union[pyarrow.Array, List[pyarrow.Array]]`):\n Column data.\n\n Returns:\n `datasets.table.Table`:\n New table with the passed column set.\n """\n raise NotImplementedError()\n\n def rename_columns(self, names, *args, **kwargs):\n """\n Create new table with columns renamed to provided names.\n """\n table = self.table.rename_columns(names, *args, **kwargs)\n names = dict(zip(self.table.column_names, names))\n blocks = []\n for tables in self.blocks:\n blocks.append(\n [t.rename_columns([names[name] for name in t.column_names], *args, **kwargs) for t in tables]\n )\n return ConcatenationTable(table, blocks)\n\n def drop(self, columns, *args, **kwargs):\n """\n Drop one or more columns and return a new table.\n\n Args:\n columns (`List[str]`):\n List of field names referencing existing columns.\n\n Raises:\n `KeyError` : if any of the passed columns name are not existing.\n\n Returns:\n `datasets.table.Table`:\n New table without the columns.\n """\n table = self.table.drop(columns, *args, **kwargs)\n blocks = []\n for tables in self.blocks:\n blocks.append([t.drop([c for c in columns if c in t.column_names], *args, **kwargs) for t in tables])\n return ConcatenationTable(table, blocks)\n\n def select(self, columns, *args, **kwargs):\n """\n Select columns of the table.\n\n Returns a new table with the specified columns, and metadata preserved.\n\n Args:\n columns (:obj:`Union[List[str], List[int]]`):\n The column names or integer indices to select.\n\n Returns:\n :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved.\n """\n table = self.table.select(columns, *args, **kwargs)\n blocks = []\n for tables in self.blocks:\n blocks.append([t.select([c for c in columns if c in t.column_names], *args, **kwargs) for t in tables])\n return ConcatenationTable(table, blocks)\n\n\ndef concat_tables(tables: list[Table], axis: int = 0) -> Table:\n """\n Concatenate tables.\n\n Args:\n tables (list of `Table`):\n List of tables to be concatenated.\n axis (`{0, 1}`, defaults to `0`, meaning over rows):\n Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns\n (horizontally).\n\n <Added version="1.6.0"/>\n Returns:\n `datasets.table.Table`:\n If the number of input tables is > 1, then the returned table is a `datasets.table.ConcatenationTable`.\n Otherwise if there's only one table, it is returned as is.\n """\n tables = list(tables)\n if len(tables) == 1:\n return tables[0]\n return ConcatenationTable.from_tables(tables, axis=axis)\n\n\ndef list_table_cache_files(table: Table) -> list[str]:\n """\n Get the cache files that are loaded by the table.\n Cache file are used when parts of the table come 
from the disk via memory mapping.\n\n Returns:\n `List[str]`:\n A list of paths to the cache files loaded by the table.\n """\n if isinstance(table, ConcatenationTable):\n cache_files = []\n for subtables in table.blocks:\n for subtable in subtables:\n cache_files += list_table_cache_files(subtable)\n return cache_files\n elif isinstance(table, MemoryMappedTable):\n return [table.path]\n else:\n return []\n\n\ndef _wrap_for_chunked_arrays(func):\n """Apply the function on each chunk of a `pyarrow.ChunkedArray`, or on the array directly"""\n\n def wrapper(array, *args, **kwargs):\n if isinstance(array, pa.ChunkedArray):\n return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\n else:\n return func(array, *args, **kwargs)\n\n return wrapper\n\n\ndef _are_list_values_of_length(array: pa.ListArray, length: int) -> bool:\n """Check if all the sub-lists of a `pa.ListArray` have the specified length."""\n return pc.all(pc.equal(array.value_lengths(), length)).as_py() or array.null_count == len(array)\n\n\ndef _combine_list_array_offsets_with_mask(array: pa.ListArray) -> pa.Array:\n """Add the null bitmap to the offsets of a `pa.ListArray`."""\n offsets = array.offsets\n if array.null_count > 0:\n offsets = pa.concat_arrays(\n [\n pc.replace_with_mask(offsets[:-1], array.is_null(), pa.nulls(len(array), pa.int32())),\n offsets[-1:],\n ]\n )\n return offsets\n\n\ndef _storage_type(type: pa.DataType) -> pa.DataType:\n """Convert a (possibly nested) `pa.ExtensionType` to its storage type."""\n if isinstance(type, pa.ExtensionType):\n return _storage_type(type.storage_type)\n elif isinstance(type, pa.StructType):\n return pa.struct([pa.field(field.name, _storage_type(field.type)) for field in type])\n elif isinstance(type, pa.ListType):\n return pa.list_(_storage_type(type.value_type))\n elif isinstance(type, pa.FixedSizeListType):\n return pa.list_(_storage_type(type.value_type), type.list_size)\n return type\n\n\ndef _short_str(value: Any) -> str:\n out = str(value)\n if len(out) > 3000:\n out = out[:1500] + "\n...\n" + out[-1500:]\n return out\n\n\n@_wrap_for_chunked_arrays\ndef array_cast(\n array: pa.Array, pa_type: pa.DataType, allow_primitive_to_str: bool = True, allow_decimal_to_str: bool = True\n) -> Union[pa.Array, pa.FixedSizeListArray, pa.ListArray, pa.StructArray, pa.ExtensionArray]:\n """Improved version of `pa.Array.cast`\n\n It supports casting `pa.StructArray` objects to re-order the fields.\n It also let you control certain aspects of the casting, e.g. 
whether\n to disable casting primitives (`booleans`, `floats` or `ints`) or\n disable casting decimals to strings.\n\n Args:\n array (`pa.Array`):\n PyArrow array to cast\n pa_type (`pa.DataType`):\n Target PyArrow type\n allow_primitive_to_str (`bool`, defaults to `True`):\n Whether to allow casting primitives to strings.\n Defaults to `True`.\n allow_decimal_to_str (`bool`, defaults to `True`):\n Whether to allow casting decimals to strings.\n Defaults to `True`.\n\n Raises:\n `pa.ArrowInvalidError`: if the arrow data casting fails\n `TypeError`: if the target type is not supported according, e.g.\n\n - if a field is missing\n - if casting from primitives to strings and `allow_primitive_to_str` is `False`\n - if casting from decimals to strings and `allow_decimal_to_str` is `False`\n\n Returns:\n `List[pyarrow.Array]`: the casted array\n """\n _c = partial(array_cast, allow_primitive_to_str=allow_primitive_to_str, allow_decimal_to_str=allow_decimal_to_str)\n if isinstance(array, pa.ExtensionArray):\n array = array.storage\n if isinstance(pa_type, pa.ExtensionType):\n return pa_type.wrap_array(_c(array, pa_type.storage_type))\n elif array.type == pa_type:\n return array\n elif pa.types.is_struct(array.type):\n if pa.types.is_struct(pa_type) and ({field.name for field in pa_type} == {field.name for field in array.type}):\n if array.type.num_fields == 0:\n return array\n arrays = [_c(array.field(field.name), field.type) for field in pa_type]\n return pa.StructArray.from_arrays(arrays, fields=list(pa_type), mask=array.is_null())\n elif pa.types.is_list(array.type) or pa.types.is_large_list(array.type):\n if pa.types.is_fixed_size_list(pa_type):\n if _are_list_values_of_length(array, pa_type.list_size):\n if array.null_count > 0:\n # Ensure each null value in the array translates to [null] * pa_type.list_size in the array's values array\n array_type = array.type\n storage_type = _storage_type(array_type)\n if array_type != storage_type:\n # Temporarily convert to the storage type to support extension types in the slice operation\n array = _c(array, storage_type)\n array = pc.list_slice(array, 0, pa_type.list_size, return_fixed_size_list=True)\n array = _c(array, array_type)\n else:\n array = pc.list_slice(array, 0, pa_type.list_size, return_fixed_size_list=True)\n array_values = array.values\n return pa.FixedSizeListArray.from_arrays(\n _c(array_values, pa_type.value_type), pa_type.list_size, mask=array.is_null()\n )\n else:\n array_values = array.values[\n array.offset * pa_type.list_size : (array.offset + len(array)) * pa_type.list_size\n ]\n return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size)\n elif pa.types.is_list(pa_type):\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n return pa.ListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type))\n elif pa.types.is_large_list(pa_type):\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n return pa.LargeListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type))\n elif pa.types.is_fixed_size_list(array.type):\n if pa.types.is_fixed_size_list(pa_type):\n if pa_type.list_size == array.type.list_size:\n array_values = array.values[\n array.offset * array.type.list_size : (array.offset + len(array)) * 
array.type.list_size\n ]\n return pa.FixedSizeListArray.from_arrays(\n _c(array_values, pa_type.value_type), pa_type.list_size, mask=array.is_null()\n )\n elif pa.types.is_list(pa_type):\n array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size\n return pa.ListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type), mask=array.is_null())\n elif pa.types.is_large_list(pa_type):\n array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size\n return pa.LargeListArray.from_arrays(\n array_offsets, _c(array.values, pa_type.value_type), mask=array.is_null()\n )\n else:\n if pa.types.is_string(pa_type):\n if not allow_primitive_to_str and pa.types.is_primitive(array.type):\n raise TypeError(\n f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)} "\n f"since allow_primitive_to_str is set to {allow_primitive_to_str} "\n )\n if not allow_decimal_to_str and pa.types.is_decimal(array.type):\n raise TypeError(\n f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)} "\n f"and allow_decimal_to_str is set to {allow_decimal_to_str}"\n )\n if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):\n raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")\n return array.cast(pa_type)\n raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")\n\n\n@_wrap_for_chunked_arrays\ndef cast_array_to_feature(\n array: pa.Array, feature: "FeatureType", allow_primitive_to_str: bool = True, allow_decimal_to_str: bool = True\n) -> pa.Array:\n """Cast an array to the arrow type that corresponds to the requested feature type.\n For custom features like [`Audio`] or [`Image`], it takes into account the "cast_storage" methods\n they defined to enable casting from other arrow types.\n\n Args:\n array (`pa.Array`):\n The PyArrow array to cast.\n feature (`datasets.features.FeatureType`):\n The target feature type.\n allow_primitive_to_str (`bool`, defaults to `True`):\n Whether to allow casting primitives to strings.\n Defaults to `True`.\n allow_decimal_to_str (`bool`, defaults to `True`):\n Whether to allow casting decimals to strings.\n Defaults to `True`.\n\n Raises:\n `pa.ArrowInvalidError`: if the arrow data casting fails\n `TypeError`: if the target type is not supported according, e.g.\n\n - if a field is missing\n - if casting from primitives and `allow_primitive_to_str` is `False`\n - if casting from decimals and `allow_decimal_to_str` is `False`\n\n Returns:\n array (`pyarrow.Array`): the casted array\n """\n from .features.features import LargeList, Sequence, get_nested_type\n\n _c = partial(\n cast_array_to_feature,\n allow_primitive_to_str=allow_primitive_to_str,\n allow_decimal_to_str=allow_decimal_to_str,\n )\n\n if isinstance(array, pa.ExtensionArray):\n array = array.storage\n if hasattr(feature, "cast_storage"):\n return feature.cast_storage(array)\n\n elif pa.types.is_struct(array.type):\n # feature must be a dict or Sequence(subfeatures_dict)\n if isinstance(feature, Sequence) and isinstance(feature.feature, dict):\n sequence_kwargs = vars(feature).copy()\n feature = sequence_kwargs.pop("feature")\n feature = {name: Sequence(subfeature, **sequence_kwargs) for name, subfeature in feature.items()}\n if isinstance(feature, dict) and (array_fields := {field.name for field in array.type}) <= set(feature):\n null_array = pa.array([None] * len(array))\n arrays = [\n _c(array.field(name) if name in array_fields 
else null_array, subfeature)\n for name, subfeature in feature.items()\n ]\n return pa.StructArray.from_arrays(arrays, names=list(feature), mask=array.is_null())\n elif pa.types.is_list(array.type) or pa.types.is_large_list(array.type):\n # feature must be either [subfeature] or LargeList(subfeature) or Sequence(subfeature)\n if isinstance(feature, list):\n casted_array_values = _c(array.values, feature[0])\n if pa.types.is_list(array.type) and casted_array_values.type == array.values.type:\n # Both array and feature have equal list type and values (within the list) type\n return array\n else:\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n return pa.ListArray.from_arrays(array_offsets, casted_array_values)\n elif isinstance(feature, LargeList):\n casted_array_values = _c(array.values, feature.feature)\n if pa.types.is_large_list(array.type) and casted_array_values.type == array.values.type:\n # Both array and feature have equal large_list type and values (within the list) type\n return array\n else:\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n return pa.LargeListArray.from_arrays(array_offsets, casted_array_values)\n elif isinstance(feature, Sequence):\n if feature.length > -1:\n if _are_list_values_of_length(array, feature.length):\n if array.null_count > 0:\n # Ensure each null value in the array translates to [null] * pa_type.list_size in the array's values array\n array_type = array.type\n storage_type = _storage_type(array_type)\n if array_type != storage_type:\n # Temporarily convert to the storage type to support extension types in the slice operation\n array = array_cast(\n array,\n storage_type,\n allow_primitive_to_str=allow_primitive_to_str,\n allow_decimal_to_str=allow_decimal_to_str,\n )\n array = pc.list_slice(array, 0, feature.length, return_fixed_size_list=True)\n array = array_cast(\n array,\n array_type,\n allow_primitive_to_str=allow_primitive_to_str,\n allow_decimal_to_str=allow_decimal_to_str,\n )\n else:\n array = pc.list_slice(array, 0, feature.length, return_fixed_size_list=True)\n array_values = array.values\n casted_array_values = _c(array_values, feature.feature)\n return pa.FixedSizeListArray.from_arrays(\n casted_array_values, feature.length, mask=array.is_null()\n )\n else:\n array_values = array.values[\n array.offset * feature.length : (array.offset + len(array)) * feature.length\n ]\n return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)\n else:\n casted_array_values = _c(array.values, feature.feature)\n if pa.types.is_list(array.type) and casted_array_values.type == array.values.type:\n # Both array and feature have equal list type and values (within the list) type\n return array\n else:\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n return pa.ListArray.from_arrays(array_offsets, casted_array_values)\n elif pa.types.is_fixed_size_list(array.type):\n # feature must be either [subfeature] or Sequence(subfeature)\n if isinstance(feature, list):\n array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size\n return pa.ListArray.from_arrays(array_offsets, _c(array.values, 
feature[0]), mask=array.is_null())\n elif isinstance(feature, LargeList):\n array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size\n return pa.LargeListArray.from_arrays(\n array_offsets, _c(array.values, feature.feature), mask=array.is_null()\n )\n elif isinstance(feature, Sequence):\n if feature.length > -1:\n if feature.length == array.type.list_size:\n array_values = array.values[\n array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size\n ]\n casted_array_values = _c(array_values, feature.feature)\n return pa.FixedSizeListArray.from_arrays(casted_array_values, feature.length, mask=array.is_null())\n else:\n array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size\n return pa.ListArray.from_arrays(array_offsets, _c(array.values, feature.feature), mask=array.is_null())\n if pa.types.is_null(array.type):\n return array_cast(\n array,\n get_nested_type(feature),\n allow_primitive_to_str=allow_primitive_to_str,\n allow_decimal_to_str=allow_decimal_to_str,\n )\n elif not isinstance(feature, (Sequence, dict, list, tuple)):\n return array_cast(\n array,\n feature(),\n allow_primitive_to_str=allow_primitive_to_str,\n allow_decimal_to_str=allow_decimal_to_str,\n )\n raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")\n\n\n@_wrap_for_chunked_arrays\ndef embed_array_storage(array: pa.Array, feature: "FeatureType"):\n """Embed data into an arrays's storage.\n For custom features like Audio or Image, it takes into account the "embed_storage" methods\n they define to embed external data (e.g. an image file) into an array.\n\n <Added version="2.4.0"/>\n\n Args:\n array (`pa.Array`):\n The PyArrow array in which to embed data.\n feature (`datasets.features.FeatureType`):\n Array features.\n\n Raises:\n `TypeError`: if the target type is not supported according, e.g.\n\n - if a field is missing\n\n Returns:\n array (`pyarrow.Array`): the casted array\n """\n from .features import Sequence\n\n _e = embed_array_storage\n\n if isinstance(array, pa.ExtensionArray):\n array = array.storage\n if hasattr(feature, "embed_storage"):\n return feature.embed_storage(array)\n elif pa.types.is_struct(array.type):\n # feature must be a dict or Sequence(subfeatures_dict)\n if isinstance(feature, Sequence) and isinstance(feature.feature, dict):\n feature = {\n name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items()\n }\n if isinstance(feature, dict):\n arrays = [_e(array.field(name), subfeature) for name, subfeature in feature.items()]\n return pa.StructArray.from_arrays(arrays, names=list(feature), mask=array.is_null())\n elif pa.types.is_list(array.type):\n # feature must be either [subfeature] or Sequence(subfeature)\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n if isinstance(feature, list):\n return pa.ListArray.from_arrays(array_offsets, _e(array.values, feature[0]))\n if isinstance(feature, Sequence) and feature.length == -1:\n return pa.ListArray.from_arrays(array_offsets, _e(array.values, feature.feature))\n elif pa.types.is_large_list(array.type):\n # feature must be LargeList(subfeature)\n # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError\n array_offsets = _combine_list_array_offsets_with_mask(array)\n return 
pa.LargeListArray.from_arrays(array_offsets, _e(array.values, feature.feature))\n elif pa.types.is_fixed_size_list(array.type):\n # feature must be Sequence(subfeature)\n if isinstance(feature, Sequence) and feature.length > -1:\n array_values = array.values[\n array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size\n ]\n embedded_array_values = _e(array_values, feature.feature)\n return pa.FixedSizeListArray.from_arrays(embedded_array_values, feature.length, mask=array.is_null())\n if not isinstance(feature, (Sequence, dict, list, tuple)):\n return array\n raise TypeError(f"Couldn't embed array of type\n{_short_str(array.type)}\nwith\n{_short_str(feature)}")\n\n\nclass CastError(ValueError):\n """When it's not possible to cast an Arrow table to a specific schema or set of features"""\n\n def __init__(self, *args, table_column_names: list[str], requested_column_names: list[str]) -> None:\n super().__init__(*args)\n self.table_column_names = table_column_names\n self.requested_column_names = requested_column_names\n\n def __reduce__(self):\n # Fix unpickling: TypeError: __init__() missing 2 required keyword-only arguments: 'table_column_names' and 'requested_column_names'\n return partial(\n CastError, table_column_names=self.table_column_names, requested_column_names=self.requested_column_names\n ), ()\n\n def details(self):\n new_columns = set(self.table_column_names) - set(self.requested_column_names)\n missing_columns = set(self.requested_column_names) - set(self.table_column_names)\n if new_columns and missing_columns:\n return f"there are {len(new_columns)} new columns ({_short_str(new_columns)}) and {len(missing_columns)} missing columns ({_short_str(missing_columns)})."\n elif new_columns:\n return f"there are {len(new_columns)} new columns ({_short_str(new_columns)})"\n else:\n return f"there are {len(missing_columns)} missing columns ({_short_str(missing_columns)})"\n\n\ndef cast_table_to_features(table: pa.Table, features: "Features"):\n """Cast a table to the arrow schema that corresponds to the requested features.\n\n Args:\n table (`pyarrow.Table`):\n PyArrow table to cast.\n features ([`Features`]):\n Target features.\n\n Returns:\n table (`pyarrow.Table`): the casted table\n """\n if sorted(table.column_names) != sorted(features):\n raise CastError(\n f"Couldn't cast\n{_short_str(table.schema)}\nto\n{_short_str(features)}\nbecause column names don't match",\n table_column_names=table.column_names,\n requested_column_names=list(features),\n )\n arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]\n return pa.Table.from_arrays(arrays, schema=features.arrow_schema)\n\n\ndef cast_table_to_schema(table: pa.Table, schema: pa.Schema):\n """Cast a table to the arrow schema. 
Different from `cast_table_to_features`, this method can preserve nullability.\n\n Args:\n table (`pa.Table`):\n PyArrow table to cast.\n features ([`Features`]):\n Target features.\n\n Returns:\n `pa.Table`: the casted table\n """\n from .features import Features\n\n features = Features.from_arrow_schema(schema)\n table_column_names = set(table.column_names)\n if not table_column_names <= set(schema.names):\n raise CastError(\n f"Couldn't cast\n{_short_str(table.schema)}\nto\n{_short_str(features)}\nbecause column names don't match",\n table_column_names=table.column_names,\n requested_column_names=list(features),\n )\n arrays = [\n cast_array_to_feature(\n table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),\n feature,\n )\n for name, feature in features.items()\n ]\n return pa.Table.from_arrays(arrays, schema=schema)\n\n\ndef embed_table_storage(table: pa.Table):\n """Embed external data into a table's storage.\n\n <Added version="2.4.0"/>\n\n Args:\n table (`pyarrow.Table`):\n PyArrow table in which to embed data.\n\n Returns:\n table (`pyarrow.Table`): the table with embedded data\n """\n from .features.features import Features, require_storage_embed\n\n features = Features.from_arrow_schema(table.schema)\n arrays = [\n embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]\n for name, feature in features.items()\n ]\n return pa.Table.from_arrays(arrays, schema=features.arrow_schema)\n\n\ndef table_cast(table: pa.Table, schema: pa.Schema):\n """Improved version of `pa.Table.cast`.\n\n It supports casting to feature types stored in the schema metadata.\n\n Args:\n table (`pyarrow.Table`):\n PyArrow table to cast.\n schema (`pyarrow.Schema`):\n Target PyArrow schema.\n\n Returns:\n table (`pyarrow.Table`): the casted table\n """\n if table.schema != schema:\n return cast_table_to_schema(table, schema)\n elif table.schema.metadata != schema.metadata:\n return table.replace_schema_metadata(schema.metadata)\n else:\n return table\n\n\ndef table_flatten(table: pa.Table):\n """Improved version of `pa.Table.flatten`.\n\n It behaves as `pa.Table.flatten` in a sense it does 1-step flatten of the columns with a struct type into one column per struct field,\n but updates the metadata and skips decodable features unless the `decode` attribute of these features is set to False.\n\n Args:\n table (`pa.Table`):\n PyArrow table to flatten.\n\n Returns:\n `Table`: the flattened table\n """\n from .features import Features\n\n features = Features.from_arrow_schema(table.schema)\n if any(hasattr(subfeature, "flatten") and subfeature.flatten() == subfeature for subfeature in features.values()):\n flat_arrays = []\n flat_column_names = []\n for field in table.schema:\n array = table.column(field.name)\n subfeature = features[field.name]\n if pa.types.is_struct(field.type) and (\n not hasattr(subfeature, "flatten") or subfeature.flatten() != subfeature\n ):\n flat_arrays.extend(array.flatten())\n flat_column_names.extend([f"{field.name}.{subfield.name}" for subfield in field.type])\n else:\n flat_arrays.append(array)\n flat_column_names.append(field.name)\n flat_table = pa.Table.from_arrays(\n flat_arrays,\n names=flat_column_names,\n )\n else:\n flat_table = table.flatten()\n # Preserve complex types in the metadata\n flat_features = features.flatten(max_depth=2)\n flat_features = Features({column_name: flat_features[column_name] for column_name in flat_table.column_names})\n return 
flat_table.replace_schema_metadata(flat_features.arrow_schema.metadata)\n\n\ndef table_visitor(table: pa.Table, function: Callable[[pa.Array], None]):\n """Visit all arrays in a table and apply a function to them.\n\n Args:\n table (`pyarrow.Table`):\n PyArrow table to visit.\n function (`Callable[[pa.Array], None]`):\n Function to apply to each array.\n """\n from .features import Features, Sequence\n\n features = Features.from_arrow_schema(table.schema)\n\n def _visit(array, feature):\n if isinstance(array, pa.ChunkedArray):\n for chunk in array.chunks:\n _visit(chunk, feature)\n else:\n if isinstance(array, pa.ExtensionArray):\n array = array.storage\n function(array, feature)\n if pa.types.is_struct(array.type) and not hasattr(feature, "cast_storage"):\n if isinstance(feature, Sequence) and isinstance(feature.feature, dict):\n feature = {\n name: Sequence(subfeature, length=feature.length)\n for name, subfeature in feature.feature.items()\n }\n for name, subfeature in feature.items():\n _visit(array.field(name), subfeature)\n elif pa.types.is_list(array.type):\n if isinstance(feature, list):\n _visit(array.values, feature[0])\n elif isinstance(feature, Sequence):\n _visit(array.values, feature.feature)\n\n for name, feature in features.items():\n _visit(table[name], feature)\n\n\ndef table_iter(table: Table, batch_size: int, drop_last_batch=False) -> Iterator[pa.Table]:\n """Iterate over sub-tables of size `batch_size`.\n\n Args:\n table (`pyarrow.Table`):\n PyArrow table to iterate over.\n batch_size (`int`):\n Size of each sub-table to yield.\n drop_last_batch (`bool`, defaults to `False`):\n Drop the last batch if it is smaller than `batch_size`.\n """\n chunks_buffer = []\n chunks_buffer_size = 0\n for chunk in table.to_reader(max_chunksize=batch_size):\n if len(chunk) == 0:\n continue\n elif chunks_buffer_size + len(chunk) < batch_size:\n chunks_buffer.append(chunk)\n chunks_buffer_size += len(chunk)\n continue\n elif chunks_buffer_size + len(chunk) == batch_size:\n chunks_buffer.append(chunk)\n yield pa.Table.from_batches(chunks_buffer)\n chunks_buffer = []\n chunks_buffer_size = 0\n else:\n cropped_chunk_length = batch_size - chunks_buffer_size\n chunks_buffer.append(chunk.slice(0, cropped_chunk_length))\n yield pa.Table.from_batches(chunks_buffer)\n chunks_buffer = [chunk.slice(cropped_chunk_length, len(chunk) - cropped_chunk_length)]\n chunks_buffer_size = len(chunk) - cropped_chunk_length\n if not drop_last_batch and chunks_buffer:\n yield pa.Table.from_batches(chunks_buffer)\n
|
.venv\Lib\site-packages\datasets\table.py
|
table.py
|
Python
| 95,878 | 0.75 | 0.171985 | 0.020823 |
node-utils
| 472 |
2025-07-09T14:11:52.581025
|
BSD-3-Clause
| false |
171ca37c7c423ad0261fb015913ac876
|
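The `table_iter` helper in the `table.py` row above buffers record batches until they add up to exactly `batch_size` rows. A minimal usage sketch, assuming a toy in-memory table (the column names and sizes are invented):

```python
import pyarrow as pa

from datasets.table import table_iter  # defined in the table.py shown above

# Toy table: 10 rows split into fixed-size sub-tables.
table = pa.table({"id": list(range(10)), "text": [f"row {i}" for i in range(10)]})

for subtable in table_iter(table, batch_size=4, drop_last_batch=False):
    print(len(subtable), subtable.column_names)
# Expected lengths: 4, 4, 2 -- the smaller trailing batch is kept because
# drop_last_batch=False.
```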
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n__version__ = "3.6.0"\n\nfrom .arrow_dataset import Dataset\nfrom .arrow_reader import ReadInstruction\nfrom .builder import ArrowBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\nfrom .combine import concatenate_datasets, interleave_datasets\nfrom .dataset_dict import DatasetDict, IterableDatasetDict\nfrom .download import *\nfrom .features import *\nfrom .fingerprint import disable_caching, enable_caching, is_caching_enabled\nfrom .info import DatasetInfo\nfrom .inspect import (\n get_dataset_config_info,\n get_dataset_config_names,\n get_dataset_default_config_name,\n get_dataset_infos,\n get_dataset_split_names,\n)\nfrom .iterable_dataset import IterableDataset\nfrom .load import load_dataset, load_dataset_builder, load_from_disk\nfrom .splits import (\n NamedSplit,\n NamedSplitAll,\n Split,\n SplitBase,\n SplitDict,\n SplitGenerator,\n SplitInfo,\n SubSplitInfo,\n percent,\n)\nfrom .utils import *\nfrom .utils import logging\n
|
.venv\Lib\site-packages\datasets\__init__.py
|
__init__.py
|
Python
| 1,606 | 0.95 | 0.021277 | 0.288889 |
react-lib
| 923 |
2024-01-19T10:02:35.776562
|
BSD-3-Clause
| false |
8a879673c6d56bba87b0be4c06bfcd9b
|
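A short usage sketch of the public API re-exported by this `__init__.py`; the dataset name and splits are illustrative assumptions, not taken from the file:

```python
from datasets import concatenate_datasets, load_dataset

train = load_dataset("imdb", split="train")   # "imdb" is a placeholder dataset id
test = load_dataset("imdb", split="test")
both = concatenate_datasets([train, test])    # one Dataset holding both splits
print(both.num_rows, both.features)
```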
import os\nimport re\nimport shutil\nfrom argparse import ArgumentParser, Namespace\n\nfrom datasets.commands import BaseDatasetsCLICommand\nfrom datasets.utils.logging import get_logger\n\n\nHIGHLIGHT_MESSAGE_PRE = """<<<<<<< This should probably be modified because it mentions: """\n\nHIGHLIGHT_MESSAGE_POST = """=======\n>>>>>>>\n"""\n\nTO_HIGHLIGHT = [\n "TextEncoderConfig",\n "ByteTextEncoder",\n "SubwordTextEncoder",\n "encoder_config",\n "maybe_build_from_corpus",\n "manual_dir",\n]\n\nTO_CONVERT = [\n # (pattern, replacement)\n # Order is important here for some replacements\n (r"tfds\.core", r"datasets"),\n (r"tf\.io\.gfile\.GFile", r"open"),\n (r"tf\.([\w\d]+)", r"datasets.Value('\1')"),\n (r"tfds\.features\.Text\(\)", r"datasets.Value('string')"),\n (r"tfds\.features\.Text\(", r"datasets.Value('string'),"),\n (r"features\s*=\s*tfds.features.FeaturesDict\(", r"features=datasets.Features("),\n (r"tfds\.features\.FeaturesDict\(", r"dict("),\n (r"The TensorFlow Datasets Authors", r"The TensorFlow Datasets Authors and the HuggingFace Datasets Authors"),\n (r"tfds\.", r"datasets."),\n (r"dl_manager\.manual_dir", r"self.config.data_dir"),\n (r"self\.builder_config", r"self.config"),\n]\n\n\ndef convert_command_factory(args: Namespace):\n """\n Factory function used to convert a model TF 1.0 checkpoint in a PyTorch checkpoint.\n\n Returns: ConvertCommand\n """\n return ConvertCommand(args.tfds_path, args.datasets_directory)\n\n\nclass ConvertCommand(BaseDatasetsCLICommand):\n @staticmethod\n def register_subcommand(parser: ArgumentParser):\n """\n Register this command to argparse so it's available for the datasets-cli\n\n Args:\n parser: Root parser to register command-specific arguments\n """\n train_parser = parser.add_parser(\n "convert",\n help="Convert a TensorFlow Datasets dataset to a HuggingFace Datasets dataset.",\n )\n train_parser.add_argument(\n "--tfds_path",\n type=str,\n required=True,\n help="Path to a TensorFlow Datasets folder to convert or a single tfds file to convert.",\n )\n train_parser.add_argument(\n "--datasets_directory", type=str, required=True, help="Path to the HuggingFace Datasets folder."\n )\n train_parser.set_defaults(func=convert_command_factory)\n\n def __init__(self, tfds_path: str, datasets_directory: str, *args):\n self._logger = get_logger("datasets-cli/converting")\n\n self._tfds_path = tfds_path\n self._datasets_directory = datasets_directory\n\n def run(self):\n if os.path.isdir(self._tfds_path):\n abs_tfds_path = os.path.abspath(self._tfds_path)\n elif os.path.isfile(self._tfds_path):\n abs_tfds_path = os.path.dirname(self._tfds_path)\n else:\n raise ValueError("--tfds_path is neither a directory nor a file. 
Please check path.")\n\n abs_datasets_path = os.path.abspath(self._datasets_directory)\n\n self._logger.info(f"Converting datasets from {abs_tfds_path} to {abs_datasets_path}")\n\n utils_files = []\n with_manual_update = []\n imports_to_builder_map = {}\n\n if os.path.isdir(self._tfds_path):\n file_names = os.listdir(abs_tfds_path)\n else:\n file_names = [os.path.basename(self._tfds_path)]\n\n for f_name in file_names:\n self._logger.info(f"Looking at file {f_name}")\n input_file = os.path.join(abs_tfds_path, f_name)\n output_file = os.path.join(abs_datasets_path, f_name)\n\n if not os.path.isfile(input_file) or "__init__" in f_name or "_test" in f_name or ".py" not in f_name:\n self._logger.info("Skipping file")\n continue\n\n with open(input_file, encoding="utf-8") as f:\n lines = f.readlines()\n\n out_lines = []\n is_builder = False\n needs_manual_update = False\n tfds_imports = []\n for line in lines:\n out_line = line\n\n # Convert imports\n if "import tensorflow.compat.v2 as tf" in out_line:\n continue\n elif "@tfds.core" in out_line:\n continue\n elif "builder=self" in out_line:\n continue\n elif "import tensorflow_datasets.public_api as tfds" in out_line:\n out_line = "import datasets\n"\n elif "import tensorflow" in out_line:\n # order is important here\n out_line = ""\n continue\n elif "from absl import logging" in out_line:\n out_line = "from datasets import logging\n"\n elif "getLogger" in out_line:\n out_line = out_line.replace("getLogger", "get_logger")\n elif any(expression in out_line for expression in TO_HIGHLIGHT):\n needs_manual_update = True\n to_remove = list(filter(lambda e: e in out_line, TO_HIGHLIGHT))\n out_lines.append(HIGHLIGHT_MESSAGE_PRE + str(to_remove) + "\n")\n out_lines.append(out_line)\n out_lines.append(HIGHLIGHT_MESSAGE_POST)\n continue\n else:\n for pattern, replacement in TO_CONVERT:\n out_line = re.sub(pattern, replacement, out_line)\n\n # Take care of saving utilities (to later move them together with main script)\n if "tensorflow_datasets" in out_line:\n match = re.match(r"from\stensorflow_datasets.*import\s([^\.\r\n]+)", out_line)\n tfds_imports.extend(imp.strip() for imp in match.group(1).split(","))\n out_line = "from . import " + match.group(1)\n\n # Check we have not forget anything\n if "tf." in out_line or "tfds." in out_line or "tensorflow_datasets" in out_line:\n raise ValueError(f"Error converting {out_line.strip()}")\n\n if "GeneratorBasedBuilder" in out_line:\n is_builder = True\n out_lines.append(out_line)\n\n if is_builder or "wmt" in f_name:\n # We create a new directory for each dataset\n dir_name = f_name.replace(".py", "")\n output_dir = os.path.join(abs_datasets_path, dir_name)\n output_file = os.path.join(output_dir, f_name)\n os.makedirs(output_dir, exist_ok=True)\n self._logger.info(f"Adding directory {output_dir}")\n imports_to_builder_map.update(dict.fromkeys(tfds_imports, output_dir))\n else:\n # Utilities will be moved at the end\n utils_files.append(output_file)\n\n if needs_manual_update:\n with_manual_update.append(output_file)\n\n with open(output_file, "w", encoding="utf-8") as f:\n f.writelines(out_lines)\n self._logger.info(f"Converted in {output_file}")\n\n for utils_file in utils_files:\n try:\n f_name = os.path.basename(utils_file)\n dest_folder = imports_to_builder_map[f_name.replace(".py", "")]\n self._logger.info(f"Moving {dest_folder} to {utils_file}")\n shutil.copy(utils_file, dest_folder)\n except KeyError:\n self._logger.error(f"Cannot find destination folder for {utils_file}. 
Please copy manually.")\n\n if with_manual_update:\n for file_path in with_manual_update:\n self._logger.warning(\n f"You need to manually update file {file_path} to remove configurations using 'TextEncoderConfig'."\n )\n
|
.venv\Lib\site-packages\datasets\commands\convert.py
|
convert.py
|
Python
| 7,878 | 0.95 | 0.14359 | 0.04908 |
python-kit
| 318 |
2025-01-10T22:51:09.556485
|
GPL-3.0
| false |
14b2dcfb29a2a631dec4889b9a384a45
|
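The conversion in `convert.py` works by running every source line through the ordered `(pattern, replacement)` pairs in `TO_CONVERT`. A minimal sketch of that regex pass, using an invented TFDS-style input line and only two of the patterns listed above:

```python
import re

TO_CONVERT = [
    (r"tfds\.features\.Text\(\)", r"datasets.Value('string')"),
    (r"tfds\.", r"datasets."),
]

line = "text = tfds.features.Text()"
for pattern, replacement in TO_CONVERT:
    line = re.sub(pattern, replacement, line)
print(line)  # text = datasets.Value('string')
```

On the command line this pass is driven by `datasets-cli convert --tfds_path ... --datasets_directory ...`, per the argument parser registered above.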
from argparse import ArgumentParser\nfrom typing import Optional\n\nfrom datasets.commands import BaseDatasetsCLICommand\nfrom datasets.hub import convert_to_parquet\n\n\ndef _command_factory(args):\n return ConvertToParquetCommand(\n args.dataset_id,\n args.token,\n args.revision,\n args.trust_remote_code,\n )\n\n\nclass ConvertToParquetCommand(BaseDatasetsCLICommand):\n @staticmethod\n def register_subcommand(parser):\n parser: ArgumentParser = parser.add_parser("convert_to_parquet", help="Convert dataset to Parquet")\n parser.add_argument(\n "dataset_id", help="source dataset ID, e.g. USERNAME/DATASET_NAME or ORGANIZATION/DATASET_NAME"\n )\n parser.add_argument("--token", help="access token to the Hugging Face Hub (defaults to logged-in user's one)")\n parser.add_argument("--revision", help="source revision")\n parser.add_argument(\n "--trust_remote_code", action="store_true", help="whether to trust the code execution of the load script"\n )\n parser.set_defaults(func=_command_factory)\n\n def __init__(\n self,\n dataset_id: str,\n token: Optional[str],\n revision: Optional[str],\n trust_remote_code: bool,\n ):\n self._dataset_id = dataset_id\n self._token = token\n self._revision = revision\n self._trust_remote_code = trust_remote_code\n\n def run(self) -> None:\n _ = convert_to_parquet(\n self._dataset_id, revision=self._revision, token=self._token, trust_remote_code=self._trust_remote_code\n )\n
|
.venv\Lib\site-packages\datasets\commands\convert_to_parquet.py
|
convert_to_parquet.py
|
Python
| 1,593 | 0.85 | 0.108696 | 0 |
node-utils
| 526 |
2025-05-30T11:09:12.844664
|
MIT
| false |
4471f64b781289ebaa0644d494fce177
|
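A usage sketch equivalent to `datasets-cli convert_to_parquet <dataset_id>`, calling the `convert_to_parquet` helper the command delegates to; the repository id and revision are placeholders:

```python
from datasets.hub import convert_to_parquet

convert_to_parquet(
    "USERNAME/DATASET_NAME",   # placeholder source repo
    revision="main",           # placeholder revision
    token=None,                # defaults to the logged-in user's token
    trust_remote_code=False,
)
```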
from argparse import ArgumentParser\nfrom typing import Optional\n\nfrom datasets.commands import BaseDatasetsCLICommand\nfrom datasets.hub import delete_from_hub\n\n\ndef _command_factory(args):\n return DeleteFromHubCommand(\n args.dataset_id,\n args.config_name,\n args.token,\n args.revision,\n )\n\n\nclass DeleteFromHubCommand(BaseDatasetsCLICommand):\n @staticmethod\n def register_subcommand(parser):\n parser: ArgumentParser = parser.add_parser("delete_from_hub", help="Delete dataset config from the Hub")\n parser.add_argument(\n "dataset_id", help="source dataset ID, e.g. USERNAME/DATASET_NAME or ORGANIZATION/DATASET_NAME"\n )\n parser.add_argument("config_name", help="config name to delete")\n parser.add_argument("--token", help="access token to the Hugging Face Hub")\n parser.add_argument("--revision", help="source revision")\n parser.set_defaults(func=_command_factory)\n\n def __init__(\n self,\n dataset_id: str,\n config_name: str,\n token: Optional[str],\n revision: Optional[str],\n ):\n self._dataset_id = dataset_id\n self._config_name = config_name\n self._token = token\n self._revision = revision\n\n def run(self) -> None:\n _ = delete_from_hub(self._dataset_id, self._config_name, revision=self._revision, token=self._token)\n
|
.venv\Lib\site-packages\datasets\commands\delete_from_hub.py
|
delete_from_hub.py
|
Python
| 1,396 | 0.85 | 0.119048 | 0 |
react-lib
| 463 |
2025-03-25T21:16:54.490317
|
Apache-2.0
| false |
d117b83bfc551ba727da706d85ce20e5
|
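Likewise, a sketch of the call behind `datasets-cli delete_from_hub <dataset_id> <config_name>`; the identifiers are placeholders:

```python
from datasets.hub import delete_from_hub

delete_from_hub("USERNAME/DATASET_NAME", "default", revision=None, token=None)
```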
import platform\nfrom argparse import ArgumentParser\n\nimport fsspec\nimport huggingface_hub\nimport pandas\nimport pyarrow\n\nfrom datasets import __version__ as version\nfrom datasets.commands import BaseDatasetsCLICommand\n\n\ndef info_command_factory(_):\n return EnvironmentCommand()\n\n\nclass EnvironmentCommand(BaseDatasetsCLICommand):\n @staticmethod\n def register_subcommand(parser: ArgumentParser):\n download_parser = parser.add_parser("env", help="Print relevant system environment info.")\n download_parser.set_defaults(func=info_command_factory)\n\n def run(self):\n info = {\n "`datasets` version": version,\n "Platform": platform.platform(),\n "Python version": platform.python_version(),\n "`huggingface_hub` version": huggingface_hub.__version__,\n "PyArrow version": pyarrow.__version__,\n "Pandas version": pandas.__version__,\n "`fsspec` version": fsspec.__version__,\n }\n\n print("\nCopy-and-paste the text below in your GitHub issue.\n")\n print(self.format_dict(info))\n\n return info\n\n @staticmethod\n def format_dict(d):\n return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n"\n
|
.venv\Lib\site-packages\datasets\commands\env.py
|
env.py
|
Python
| 1,239 | 0.85 | 0.146341 | 0 |
awesome-app
| 353 |
2024-05-20T17:58:34.766892
|
MIT
| false |
15eaf90fe44d76e13753830dd0eea2d2
|
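A stripped-down sketch of what the `env` subcommand assembles and prints; the values come from whatever is installed at runtime, and only a subset of the keys above is shown:

```python
import platform

import datasets
import pyarrow

info = {
    "`datasets` version": datasets.__version__,
    "Platform": platform.platform(),
    "Python version": platform.python_version(),
    "PyArrow version": pyarrow.__version__,
}
print("\n".join(f"- {prop}: {val}" for prop, val in info.items()))
```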
import logging\nimport os\nfrom argparse import ArgumentParser\nfrom collections.abc import Generator\nfrom pathlib import Path\nfrom shutil import copyfile, rmtree\nfrom typing import Optional\n\nimport datasets.config\nfrom datasets.builder import DatasetBuilder\nfrom datasets.commands import BaseDatasetsCLICommand\nfrom datasets.download.download_manager import DownloadMode\nfrom datasets.load import dataset_module_factory, import_main_class\nfrom datasets.utils.info_utils import VerificationMode\nfrom datasets.utils.logging import ERROR, get_logger\n\n\nlogger = get_logger(__name__)\n\n\ndef _test_command_factory(args):\n return TestCommand(\n args.dataset,\n args.name,\n args.cache_dir,\n args.data_dir,\n args.all_configs,\n args.save_info or args.save_infos,\n args.ignore_verifications,\n args.force_redownload,\n args.clear_cache,\n args.num_proc,\n args.trust_remote_code,\n )\n\n\nclass TestCommand(BaseDatasetsCLICommand):\n __test__ = False # to tell pytest it's not a test class\n\n @staticmethod\n def register_subcommand(parser: ArgumentParser):\n test_parser = parser.add_parser("test", help="Test dataset implementation.")\n test_parser.add_argument("--name", type=str, default=None, help="Dataset processing name")\n test_parser.add_argument(\n "--cache_dir",\n type=str,\n default=None,\n help="Cache directory where the datasets are stored.",\n )\n test_parser.add_argument(\n "--data_dir",\n type=str,\n default=None,\n help="Can be used to specify a manual directory to get the files from.",\n )\n test_parser.add_argument("--all_configs", action="store_true", help="Test all dataset configurations")\n test_parser.add_argument(\n "--save_info", action="store_true", help="Save the dataset infos in the dataset card (README.md)"\n )\n test_parser.add_argument(\n "--ignore_verifications",\n action="store_true",\n help="Run the test without checksums and splits checks.",\n )\n test_parser.add_argument("--force_redownload", action="store_true", help="Force dataset redownload")\n test_parser.add_argument(\n "--clear_cache",\n action="store_true",\n help="Remove downloaded files and cached datasets after each config test",\n )\n test_parser.add_argument("--num_proc", type=int, default=None, help="Number of processes")\n test_parser.add_argument(\n "--trust_remote_code", action="store_true", help="whether to trust the code execution of the load script"\n )\n # aliases\n test_parser.add_argument("--save_infos", action="store_true", help="alias to save_info")\n test_parser.add_argument("dataset", type=str, help="Name of the dataset to download")\n test_parser.set_defaults(func=_test_command_factory)\n\n def __init__(\n self,\n dataset: str,\n name: str,\n cache_dir: str,\n data_dir: str,\n all_configs: bool,\n save_infos: bool,\n ignore_verifications: bool,\n force_redownload: bool,\n clear_cache: bool,\n num_proc: int,\n trust_remote_code: Optional[bool],\n ):\n self._dataset = dataset\n self._name = name\n self._cache_dir = cache_dir\n self._data_dir = data_dir\n self._all_configs = all_configs\n self._save_infos = save_infos\n self._ignore_verifications = ignore_verifications\n self._force_redownload = force_redownload\n self._clear_cache = clear_cache\n self._num_proc = num_proc\n self._trust_remote_code = trust_remote_code\n if clear_cache and not cache_dir:\n print(\n "When --clear_cache is used, specifying a cache directory is mandatory.\n"\n "The 'download' folder of the cache directory and the dataset builder cache will be deleted after each configuration test.\n"\n "Please provide a 
--cache_dir that will be used to test the dataset script."\n )\n exit(1)\n if save_infos:\n self._ignore_verifications = True\n\n def run(self):\n logging.getLogger("filelock").setLevel(ERROR)\n if self._name is not None and self._all_configs:\n print("Both parameters `config` and `all_configs` can't be used at once.")\n exit(1)\n path, config_name = self._dataset, self._name\n module = dataset_module_factory(path, trust_remote_code=self._trust_remote_code)\n builder_cls = import_main_class(module.module_path)\n n_builders = len(builder_cls.BUILDER_CONFIGS) if self._all_configs and builder_cls.BUILDER_CONFIGS else 1\n\n def get_builders() -> Generator[DatasetBuilder, None, None]:\n if self._all_configs and builder_cls.BUILDER_CONFIGS:\n for i, config in enumerate(builder_cls.BUILDER_CONFIGS):\n if "config_name" in module.builder_kwargs:\n yield builder_cls(\n cache_dir=self._cache_dir,\n data_dir=self._data_dir,\n **module.builder_kwargs,\n )\n else:\n yield builder_cls(\n config_name=config.name,\n cache_dir=self._cache_dir,\n data_dir=self._data_dir,\n **module.builder_kwargs,\n )\n else:\n if "config_name" in module.builder_kwargs:\n yield builder_cls(cache_dir=self._cache_dir, data_dir=self._data_dir, **module.builder_kwargs)\n else:\n yield builder_cls(\n config_name=config_name,\n cache_dir=self._cache_dir,\n data_dir=self._data_dir,\n **module.builder_kwargs,\n )\n\n for j, builder in enumerate(get_builders()):\n print(f"Testing builder '{builder.config.name}' ({j + 1}/{n_builders})")\n builder._record_infos = os.path.exists(\n os.path.join(builder.get_imported_module_dir(), datasets.config.DATASETDICT_INFOS_FILENAME)\n ) # record checksums only if we need to update a (deprecated) dataset_infos.json\n builder.download_and_prepare(\n download_mode=DownloadMode.REUSE_CACHE_IF_EXISTS\n if not self._force_redownload\n else DownloadMode.FORCE_REDOWNLOAD,\n verification_mode=VerificationMode.NO_CHECKS\n if self._ignore_verifications\n else VerificationMode.ALL_CHECKS,\n num_proc=self._num_proc,\n )\n builder.as_dataset()\n if self._save_infos:\n builder._save_infos()\n\n # If save_infos=True, the dataset card (README.md) is created next to the loaded module file.\n # The dataset_infos are saved in the YAML part of the README.md\n\n # Let's move it to the original directory of the dataset script, to allow the user to\n # upload them on S3 at the same time afterwards.\n if self._save_infos:\n dataset_readme_path = os.path.join(\n builder_cls.get_imported_module_dir(), datasets.config.REPOCARD_FILENAME\n )\n name = Path(path).name + ".py"\n combined_path = os.path.join(path, name)\n if os.path.isfile(path):\n dataset_dir = os.path.dirname(path)\n elif os.path.isfile(combined_path):\n dataset_dir = path\n elif os.path.isdir(path): # for local directories containing only data files\n dataset_dir = path\n else: # in case of a remote dataset\n dataset_dir = None\n print(f"Dataset card saved at {dataset_readme_path}")\n\n # Move dataset_info back to the user\n if dataset_dir is not None:\n user_dataset_readme_path = os.path.join(dataset_dir, datasets.config.REPOCARD_FILENAME)\n copyfile(dataset_readme_path, user_dataset_readme_path)\n print(f"Dataset card saved at {user_dataset_readme_path}")\n\n # If clear_cache=True, the download folder and the dataset builder cache directory are deleted\n if self._clear_cache:\n if os.path.isdir(builder._cache_dir):\n logger.warning(f"Clearing cache at {builder._cache_dir}")\n rmtree(builder._cache_dir)\n download_dir = os.path.join(self._cache_dir, 
datasets.config.DOWNLOADED_DATASETS_DIR)\n if os.path.isdir(download_dir):\n logger.warning(f"Clearing cache at {download_dir}")\n rmtree(download_dir)\n\n print("Test successful.")\n
|
.venv\Lib\site-packages\datasets\commands\test.py
|
test.py
|
Python
| 9,115 | 0.95 | 0.130435 | 0.052632 |
awesome-app
| 218 |
2024-01-08T05:36:32.477864
|
BSD-3-Clause
| true |
db3e174d765fe872f94c26985168464f
|
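At its core the `test` command builds each configuration and materialises it. A minimal sketch of that flow using the public `load_dataset_builder` API instead of the internal `dataset_module_factory`/`import_main_class` pair used above; the script path is hypothetical:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("path/to/my_dataset.py", trust_remote_code=True)
builder.download_and_prepare()   # downloads data and writes the Arrow cache
ds = builder.as_dataset()        # DatasetDict with one entry per split
print(ds)
```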
from abc import ABC, abstractmethod\nfrom argparse import ArgumentParser\n\n\nclass BaseDatasetsCLICommand(ABC):\n @staticmethod\n @abstractmethod\n def register_subcommand(parser: ArgumentParser):\n raise NotImplementedError()\n\n @abstractmethod\n def run(self):\n raise NotImplementedError()\n
|
.venv\Lib\site-packages\datasets\commands\__init__.py
|
__init__.py
|
Python
| 312 | 0.85 | 0.230769 | 0 |
awesome-app
| 952 |
2024-07-02T01:11:27.149647
|
GPL-3.0
| false |
ee7dcd83c54d25acf17afcc31e901c6b
|
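A hypothetical subclass sketch showing how the two abstract methods of `BaseDatasetsCLICommand` are meant to be implemented, following the factory pattern used by the other commands above (a `hello` command does not actually exist in `datasets-cli`):

```python
from argparse import ArgumentParser

from datasets.commands import BaseDatasetsCLICommand


class HelloCommand(BaseDatasetsCLICommand):
    @staticmethod
    def register_subcommand(parser: ArgumentParser):
        hello_parser = parser.add_parser("hello", help="Print a greeting.")
        hello_parser.add_argument("--name", default="world")
        # the CLI entry point calls args.func(args) and then .run() on the result
        hello_parser.set_defaults(func=lambda args: HelloCommand(args.name))

    def __init__(self, name: str):
        self._name = name

    def run(self):
        print(f"Hello, {self._name}!")
```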
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\convert.cpython-313.pyc
|
convert.cpython-313.pyc
|
Other
| 9,773 | 0.95 | 0.041667 | 0 |
python-kit
| 204 |
2024-04-21T01:23:28.637307
|
Apache-2.0
| false |
89a73212a701d022ed680be843a18312
|
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\convert_to_parquet.cpython-313.pyc
|
convert_to_parquet.cpython-313.pyc
|
Other
| 2,625 | 0.8 | 0 | 0 |
node-utils
| 51 |
2024-09-15T00:36:57.049081
|
BSD-3-Clause
| false |
4c19c1809d1f66624db47ab899cdb04b
|
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\datasets_cli.cpython-313.pyc
|
datasets_cli.cpython-313.pyc
|
Other
| 2,237 | 0.8 | 0 | 0 |
node-utils
| 206 |
2024-10-29T07:33:48.721221
|
MIT
| false |
c63f30a7f37919900eb4e0fc3a53337f
|
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\delete_from_hub.cpython-313.pyc
|
delete_from_hub.cpython-313.pyc
|
Other
| 2,472 | 0.8 | 0 | 0 |
awesome-app
| 708 |
2024-05-18T08:39:39.967796
|
BSD-3-Clause
| false |
2118d6e242bd1f30df08a7ced7edccbf
|
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\env.cpython-313.pyc
|
env.cpython-313.pyc
|
Other
| 2,435 | 0.8 | 0 | 0 |
python-kit
| 193 |
2024-02-21T12:38:26.509730
|
BSD-3-Clause
| false |
a27d0fbfd45777f7092a30b65f4346a4
|
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\test.cpython-313.pyc
|
test.cpython-313.pyc
|
Other
| 10,562 | 0.95 | 0 | 0 |
awesome-app
| 106 |
2025-06-26T05:36:06.681134
|
BSD-3-Clause
| true |
278012aa87dc79b8c515a1982be29145
|
\n\n
|
.venv\Lib\site-packages\datasets\commands\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 1,009 | 0.8 | 0 | 0 |
vue-tools
| 563 |
2025-02-07T17:15:09.530213
|
MIT
| false |
d8af7bcdcc3a109d287a8e80618207dc
|
import copy\nfrom dataclasses import dataclass, field\nfrom pathlib import Path\nfrom typing import Any, Optional, Union\n\nfrom .. import config\n\n\n@dataclass\nclass DownloadConfig:\n """Configuration for our cached path manager.\n\n Attributes:\n cache_dir (`str` or `Path`, *optional*):\n Specify a cache directory to save the file to (overwrite the\n default cache dir).\n force_download (`bool`, defaults to `False`):\n If `True`, re-download the file even if it's already cached in\n the cache dir.\n resume_download (`bool`, defaults to `False`):\n If `True`, resume the download if an incompletely received file is\n found.\n proxies (`dict`, *optional*):\n user_agent (`str`, *optional*):\n Optional string or dict that will be appended to the user-agent on remote\n requests.\n extract_compressed_file (`bool`, defaults to `False`):\n If `True` and the path point to a zip or tar file,\n extract the compressed file in a folder along the archive.\n force_extract (`bool`, defaults to `False`):\n If `True` when `extract_compressed_file` is `True` and the archive\n was already extracted, re-extract the archive and override the folder where it was extracted.\n delete_extracted (`bool`, defaults to `False`):\n Whether to delete (or keep) the extracted files.\n extract_on_the_fly (`bool`, defaults to `False`):\n If `True`, extract compressed files while they are being read.\n use_etag (`bool`, defaults to `True`):\n Whether to use the ETag HTTP response header to validate the cached files.\n num_proc (`int`, *optional*):\n The number of processes to launch to download the files in parallel.\n max_retries (`int`, default to `1`):\n The number of times to retry an HTTP request if it fails.\n token (`str` or `bool`, *optional*):\n Optional string or boolean to use as Bearer token\n for remote files on the Datasets Hub. If `True`, or not specified, will get token from `~/.huggingface`.\n storage_options (`dict`, *optional*):\n Key/value pairs to be passed on to the dataset file-system backend, if any.\n download_desc (`str`, *optional*):\n A description to be displayed alongside with the progress bar while downloading the files.\n disable_tqdm (`bool`, defaults to `False`):\n Whether to disable the individual files download progress bar\n """\n\n cache_dir: Optional[Union[str, Path]] = None\n force_download: bool = False\n resume_download: bool = False\n local_files_only: bool = False\n proxies: Optional[dict] = None\n user_agent: Optional[str] = None\n extract_compressed_file: bool = False\n force_extract: bool = False\n delete_extracted: bool = False\n extract_on_the_fly: bool = False\n use_etag: bool = True\n num_proc: Optional[int] = None\n max_retries: int = 1\n token: Optional[Union[str, bool]] = None\n storage_options: dict[str, Any] = field(default_factory=dict)\n download_desc: Optional[str] = None\n disable_tqdm: bool = False\n\n def copy(self) -> "DownloadConfig":\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\n\n def __setattr__(self, name, value):\n if name == "token" and getattr(self, "storage_options", None) is not None:\n if "hf" not in self.storage_options:\n self.storage_options["hf"] = {"token": value, "endpoint": config.HF_ENDPOINT}\n elif getattr(self.storage_options["hf"], "token", None) is None:\n self.storage_options["hf"]["token"] = value\n super().__setattr__(name, value)\n
|
.venv\Lib\site-packages\datasets\download\download_config.py
|
download_config.py
|
Python
| 3,796 | 0.85 | 0.17284 | 0 |
python-kit
| 558 |
2023-12-03T13:37:38.623410
|
GPL-3.0
| false |
a6dd15e866757890f1bd320da9648951
|
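A usage sketch, not taken from the source: building a `DownloadConfig` and passing it to `cached_path`, the same call `DownloadManager._download_single` makes below; the cache directory and URL are placeholders:

```python
from datasets import DownloadConfig
from datasets.utils.file_utils import cached_path

download_config = DownloadConfig(cache_dir="/tmp/hf_cache", max_retries=3, use_etag=True)
local_path = cached_path("https://example.com/data.csv", download_config=download_config)
print(local_path)  # path of the cached copy inside /tmp/hf_cache
```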
# Copyright 2020 The TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\n"""Download manager interface."""\n\nimport enum\nimport io\nimport multiprocessing\nimport os\nfrom datetime import datetime\nfrom functools import partial\nfrom typing import Optional, Union\n\nimport fsspec\nfrom fsspec.core import url_to_fs\nfrom tqdm.contrib.concurrent import thread_map\n\nfrom .. import config\nfrom ..utils import tqdm as hf_tqdm\nfrom ..utils.file_utils import (\n ArchiveIterable,\n FilesIterable,\n cached_path,\n is_relative_path,\n stack_multiprocessing_download_progress_bars,\n url_or_path_join,\n)\nfrom ..utils.info_utils import get_size_checksum_dict\nfrom ..utils.logging import get_logger, tqdm\nfrom ..utils.py_utils import NestedDataStructure, map_nested\nfrom ..utils.track import tracked_str\nfrom .download_config import DownloadConfig\n\n\nlogger = get_logger(__name__)\n\n\nclass DownloadMode(enum.Enum):\n """`Enum` for how to treat pre-existing downloads and data.\n\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\n raw downloads and the prepared dataset if they exist.\n\n The generations modes:\n\n | | Downloads | Dataset |\n |-------------------------------------|-----------|---------|\n | `REUSE_DATASET_IF_EXISTS` (default) | Reuse | Reuse |\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\n\n """\n\n REUSE_DATASET_IF_EXISTS = "reuse_dataset_if_exists"\n REUSE_CACHE_IF_EXISTS = "reuse_cache_if_exists"\n FORCE_REDOWNLOAD = "force_redownload"\n\n\nclass DownloadManager:\n is_streaming = False\n\n def __init__(\n self,\n dataset_name: Optional[str] = None,\n data_dir: Optional[str] = None,\n download_config: Optional[DownloadConfig] = None,\n base_path: Optional[str] = None,\n record_checksums=True,\n ):\n """Download manager constructor.\n\n Args:\n data_dir:\n can be used to specify a manual directory to get the files from.\n dataset_name (`str`):\n name of dataset this instance will be used for. If\n provided, downloads will contain which datasets they were used for.\n download_config (`DownloadConfig`):\n to specify the cache directory and other\n download options\n base_path (`str`):\n base path that is used when relative paths are used to\n download files. This can be a remote url.\n record_checksums (`bool`, defaults to `True`):\n Whether to record the checksums of the downloaded files. 
If None, the value is inferred from the builder.\n """\n self._dataset_name = dataset_name\n self._data_dir = data_dir\n self._base_path = base_path or os.path.abspath(".")\n # To record what is being used: {url: {num_bytes: int, checksum: str}}\n self._recorded_sizes_checksums: dict[str, dict[str, Optional[Union[int, str]]]] = {}\n self.record_checksums = record_checksums\n self.download_config = download_config or DownloadConfig()\n self.downloaded_paths = {}\n self.extracted_paths = {}\n\n @property\n def manual_dir(self):\n return self._data_dir\n\n @property\n def downloaded_size(self):\n """Returns the total size of downloaded files."""\n return sum(checksums_dict["num_bytes"] for checksums_dict in self._recorded_sizes_checksums.values())\n\n def _record_sizes_checksums(self, url_or_urls: NestedDataStructure, downloaded_path_or_paths: NestedDataStructure):\n """Record size/checksum of downloaded files."""\n delay = 5\n for url, path in hf_tqdm(\n list(zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten())),\n delay=delay,\n desc="Computing checksums",\n ):\n # call str to support PathLike objects\n self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict(\n path, record_checksum=self.record_checksums\n )\n\n def download(self, url_or_urls):\n """Download given URL(s).\n\n By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.\n\n Args:\n url_or_urls (`str` or `list` or `dict`):\n URL or `list` or `dict` of URLs to download. Each URL is a `str`.\n\n Returns:\n `str` or `list` or `dict`:\n The downloaded paths matching the given input `url_or_urls`.\n\n Example:\n\n ```py\n >>> downloaded_files = dl_manager.download('https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz')\n ```\n """\n download_config = self.download_config.copy()\n download_config.extract_compressed_file = False\n if download_config.download_desc is None:\n download_config.download_desc = "Downloading data"\n\n download_func = partial(self._download_batched, download_config=download_config)\n\n start_time = datetime.now()\n with stack_multiprocessing_download_progress_bars():\n downloaded_path_or_paths = map_nested(\n download_func,\n url_or_urls,\n map_tuple=True,\n num_proc=download_config.num_proc,\n desc="Downloading data files",\n batched=True,\n batch_size=-1,\n )\n duration = datetime.now() - start_time\n logger.info(f"Downloading took {duration.total_seconds() // 60} min")\n url_or_urls = NestedDataStructure(url_or_urls)\n downloaded_path_or_paths = NestedDataStructure(downloaded_path_or_paths)\n self.downloaded_paths.update(dict(zip(url_or_urls.flatten(), downloaded_path_or_paths.flatten())))\n\n start_time = datetime.now()\n self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\n duration = datetime.now() - start_time\n logger.info(f"Checksum Computation took {duration.total_seconds() // 60} min")\n\n return downloaded_path_or_paths.data\n\n def _download_batched(\n self,\n url_or_filenames: list[str],\n download_config: DownloadConfig,\n ) -> list[str]:\n if len(url_or_filenames) >= 16:\n download_config = download_config.copy()\n download_config.disable_tqdm = True\n download_func = partial(self._download_single, download_config=download_config)\n\n fs: fsspec.AbstractFileSystem\n path = str(url_or_filenames[0])\n if is_relative_path(path):\n # append the relative path to the base_path\n path = url_or_path_join(self._base_path, path)\n fs, path = url_to_fs(path, 
**download_config.storage_options)\n size = 0\n try:\n size = fs.info(path).get("size", 0)\n except Exception:\n pass\n max_workers = (\n config.HF_DATASETS_MULTITHREADING_MAX_WORKERS if size < (20 << 20) else 1\n ) # enable multithreading if files are small\n\n return thread_map(\n download_func,\n url_or_filenames,\n desc=download_config.download_desc or "Downloading",\n unit="files",\n position=multiprocessing.current_process()._identity[-1] # contains the ranks of subprocesses\n if os.environ.get("HF_DATASETS_STACK_MULTIPROCESSING_DOWNLOAD_PROGRESS_BARS") == "1"\n and multiprocessing.current_process()._identity\n else None,\n max_workers=max_workers,\n tqdm_class=tqdm,\n )\n else:\n return [\n self._download_single(url_or_filename, download_config=download_config)\n for url_or_filename in url_or_filenames\n ]\n\n def _download_single(self, url_or_filename: str, download_config: DownloadConfig) -> str:\n url_or_filename = str(url_or_filename)\n if is_relative_path(url_or_filename):\n # append the relative path to the base_path\n url_or_filename = url_or_path_join(self._base_path, url_or_filename)\n out = cached_path(url_or_filename, download_config=download_config)\n out = tracked_str(out)\n out.set_origin(url_or_filename)\n return out\n\n def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):\n """Iterate over files within an archive.\n\n Args:\n path_or_buf (`str` or `io.BufferedReader`):\n Archive path or archive binary file object.\n\n Yields:\n `tuple[str, io.BufferedReader]`:\n 2-tuple (path_within_archive, file_object).\n File object is opened in binary mode.\n\n Example:\n\n ```py\n >>> archive = dl_manager.download('https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz')\n >>> files = dl_manager.iter_archive(archive)\n ```\n """\n\n if hasattr(path_or_buf, "read"):\n return ArchiveIterable.from_buf(path_or_buf)\n else:\n return ArchiveIterable.from_urlpath(path_or_buf)\n\n def iter_files(self, paths: Union[str, list[str]]):\n """Iterate over file paths.\n\n Args:\n paths (`str` or `list` of `str`):\n Root paths.\n\n Yields:\n `str`: File path.\n\n Example:\n\n ```py\n >>> files = dl_manager.download_and_extract('https://huggingface.co/datasets/beans/resolve/main/data/train.zip')\n >>> files = dl_manager.iter_files(files)\n ```\n """\n return FilesIterable.from_urlpaths(paths)\n\n def extract(self, path_or_paths):\n """Extract given path(s).\n\n Args:\n path_or_paths (path or `list` or `dict`):\n Path of file to extract. 
Each path is a `str`.\n\n Returns:\n extracted_path(s): `str`, The extracted paths matching the given input\n path_or_paths.\n\n Example:\n\n ```py\n >>> downloaded_files = dl_manager.download('https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz')\n >>> extracted_files = dl_manager.extract(downloaded_files)\n ```\n """\n download_config = self.download_config.copy()\n download_config.extract_compressed_file = True\n extract_func = partial(self._download_single, download_config=download_config)\n extracted_paths = map_nested(\n extract_func,\n path_or_paths,\n num_proc=download_config.num_proc,\n desc="Extracting data files",\n )\n path_or_paths = NestedDataStructure(path_or_paths)\n extracted_paths = NestedDataStructure(extracted_paths)\n self.extracted_paths.update(dict(zip(path_or_paths.flatten(), extracted_paths.flatten())))\n return extracted_paths.data\n\n def download_and_extract(self, url_or_urls):\n """Download and extract given `url_or_urls`.\n\n Is roughly equivalent to:\n\n ```\n extracted_paths = dl_manager.extract(dl_manager.download(url_or_urls))\n ```\n\n Args:\n url_or_urls (`str` or `list` or `dict`):\n URL or `list` or `dict` of URLs to download and extract. Each URL is a `str`.\n\n Returns:\n extracted_path(s): `str`, extracted paths of given URL(s).\n """\n return self.extract(self.download(url_or_urls))\n\n def get_recorded_sizes_checksums(self):\n return self._recorded_sizes_checksums.copy()\n\n def delete_extracted_files(self):\n paths_to_delete = set(self.extracted_paths.values()) - set(self.downloaded_paths.values())\n for key, path in list(self.extracted_paths.items()):\n if path in paths_to_delete and os.path.isfile(path):\n os.remove(path)\n del self.extracted_paths[key]\n\n def manage_extracted_files(self):\n if self.download_config.delete_extracted:\n self.delete_extracted_files()\n
|
.venv\Lib\site-packages\datasets\download\download_manager.py
|
download_manager.py
|
Python
| 12,762 | 0.95 | 0.108824 | 0.06383 |
awesome-app
| 151 |
2024-01-15T21:26:16.795390
|
BSD-3-Clause
| false |
46a61bce6aa92cbd78113fa7f1ce2fcf
|
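A minimal sketch of how a dataset script typically uses this manager inside `_split_generators`; the URL, features, and example row are assumptions:

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # dl_manager is a DownloadManager (or its streaming counterpart)
        data_dir = dl_manager.download_and_extract("https://example.com/data.zip")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir}
            )
        ]

    def _generate_examples(self, data_dir):
        yield 0, {"text": "hello"}
```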
import io\nimport os\nfrom collections.abc import Iterable\nfrom typing import Optional, Union\n\nfrom ..utils.file_utils import ( # noqa: F401 # backward compatibility\n SINGLE_FILE_COMPRESSION_PROTOCOLS,\n ArchiveIterable,\n FilesIterable,\n _get_extraction_protocol,\n _get_path_extension,\n _prepare_path_and_storage_options,\n is_relative_path,\n url_or_path_join,\n xbasename,\n xdirname,\n xet_parse,\n xexists,\n xgetsize,\n xglob,\n xgzip_open,\n xisdir,\n xisfile,\n xjoin,\n xlistdir,\n xnumpy_load,\n xopen,\n xpandas_read_csv,\n xpandas_read_excel,\n xPath,\n xpyarrow_parquet_read_table,\n xrelpath,\n xsio_loadmat,\n xsplit,\n xsplitext,\n xwalk,\n xxml_dom_minidom_parse,\n)\nfrom ..utils.logging import get_logger\nfrom ..utils.py_utils import map_nested\nfrom .download_config import DownloadConfig\n\n\nlogger = get_logger(__name__)\n\n\nclass StreamingDownloadManager:\n """\n Download manager that uses the "::" separator to navigate through (possibly remote) compressed archives.\n Contrary to the regular `DownloadManager`, the `download` and `extract` methods don't actually download nor extract\n data, but they rather return the path or url that could be opened using the `xopen` function which extends the\n built-in `open` function to stream data from remote files.\n """\n\n is_streaming = True\n\n def __init__(\n self,\n dataset_name: Optional[str] = None,\n data_dir: Optional[str] = None,\n download_config: Optional[DownloadConfig] = None,\n base_path: Optional[str] = None,\n ):\n self._dataset_name = dataset_name\n self._data_dir = data_dir\n self._base_path = base_path or os.path.abspath(".")\n self.download_config = download_config or DownloadConfig()\n self.downloaded_size = None\n self.record_checksums = False\n\n @property\n def manual_dir(self):\n return self._data_dir\n\n def download(self, url_or_urls):\n """Normalize URL(s) of files to stream data from.\n This is the lazy version of `DownloadManager.download` for streaming.\n\n Args:\n url_or_urls (`str` or `list` or `dict`):\n URL(s) of files to stream data from. Each url is a `str`.\n\n Returns:\n url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input url_or_urls.\n\n Example:\n\n ```py\n >>> downloaded_files = dl_manager.download('https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz')\n ```\n """\n url_or_urls = map_nested(self._download_single, url_or_urls, map_tuple=True)\n return url_or_urls\n\n def _download_single(self, urlpath: str) -> str:\n urlpath = str(urlpath)\n if is_relative_path(urlpath):\n # append the relative path to the base_path\n urlpath = url_or_path_join(self._base_path, urlpath)\n return urlpath\n\n def extract(self, url_or_urls):\n """Add extraction protocol for given url(s) for streaming.\n\n This is the lazy version of `DownloadManager.extract` for streaming.\n\n Args:\n url_or_urls (`str` or `list` or `dict`):\n URL(s) of files to stream data from. 
Each url is a `str`.\n\n Returns:\n url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\n\n Example:\n\n ```py\n >>> downloaded_files = dl_manager.download('https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz')\n >>> extracted_files = dl_manager.extract(downloaded_files)\n ```\n """\n urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\n return urlpaths\n\n def _extract(self, urlpath: str) -> str:\n urlpath = str(urlpath)\n protocol = _get_extraction_protocol(urlpath, download_config=self.download_config)\n # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\n path = urlpath.split("::")[0]\n extension = _get_path_extension(path)\n if extension in ["tgz", "tar"] or path.endswith((".tar.gz", ".tar.bz2", ".tar.xz")):\n raise NotImplementedError(\n f"Extraction protocol for TAR archives like '{urlpath}' is not implemented in streaming mode. "\n f"Please use `dl_manager.iter_archive` instead.\n\n"\n f"Example usage:\n\n"\n f"\turl = dl_manager.download(url)\n"\n f"\ttar_archive_iterator = dl_manager.iter_archive(url)\n\n"\n f"\tfor filename, file in tar_archive_iterator:\n"\n f"\t\t..."\n )\n if protocol is None:\n # no extraction\n return urlpath\n elif protocol in SINGLE_FILE_COMPRESSION_PROTOCOLS:\n # there is one single file which is the uncompressed file\n inner_file = os.path.basename(urlpath.split("::")[0])\n inner_file = inner_file[: inner_file.rindex(".")] if "." in inner_file else inner_file\n return f"{protocol}://{inner_file}::{urlpath}"\n else:\n return f"{protocol}://::{urlpath}"\n\n def download_and_extract(self, url_or_urls):\n """Prepare given `url_or_urls` for streaming (add extraction protocol).\n\n This is the lazy version of `DownloadManager.download_and_extract` for streaming.\n\n Is equivalent to:\n\n ```\n urls = dl_manager.extract(dl_manager.download(url_or_urls))\n ```\n\n Args:\n url_or_urls (`str` or `list` or `dict`):\n URL(s) to stream from data from. 
Each url is a `str`.\n\n Returns:\n url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\n """\n return self.extract(self.download(url_or_urls))\n\n def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[tuple]:\n """Iterate over files within an archive.\n\n Args:\n urlpath_or_buf (`str` or `io.BufferedReader`):\n Archive path or archive binary file object.\n\n Yields:\n `tuple[str, io.BufferedReader]`:\n 2-tuple (path_within_archive, file_object).\n File object is opened in binary mode.\n\n Example:\n\n ```py\n >>> archive = dl_manager.download('https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz')\n >>> files = dl_manager.iter_archive(archive)\n ```\n """\n\n if hasattr(urlpath_or_buf, "read"):\n return ArchiveIterable.from_buf(urlpath_or_buf)\n else:\n return ArchiveIterable.from_urlpath(urlpath_or_buf, download_config=self.download_config)\n\n def iter_files(self, urlpaths: Union[str, list[str]]) -> Iterable[str]:\n """Iterate over files.\n\n Args:\n urlpaths (`str` or `list` of `str`):\n Root paths.\n\n Yields:\n str: File URL path.\n\n Example:\n\n ```py\n >>> files = dl_manager.download_and_extract('https://huggingface.co/datasets/beans/resolve/main/data/train.zip')\n >>> files = dl_manager.iter_files(files)\n ```\n """\n return FilesIterable.from_urlpaths(urlpaths, download_config=self.download_config)\n\n def manage_extracted_files(self):\n pass\n\n def get_recorded_sizes_checksums(self):\n pass\n
|
.venv\Lib\site-packages\datasets\download\streaming_download_manager.py
|
streaming_download_manager.py
|
Python
| 7,537 | 0.95 | 0.118721 | 0.022346 |
react-lib
| 86 |
2025-07-08T13:14:02.977428
|
Apache-2.0
| false |
11ca85524a0681427eb3368b5982b607
|
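A usage sketch of the lazy behaviour described above: in streaming mode `download_and_extract` only rewrites the URL with fsspec chaining (`::`), so the result can later be opened with `xopen`; the URL is a placeholder:

```python
from datasets.download import StreamingDownloadManager

dl_manager = StreamingDownloadManager()
url = dl_manager.download_and_extract("https://example.com/data.json.gz")
print(url)
# e.g. "gzip://data.json::https://example.com/data.json.gz" -- nothing has been
# downloaded yet; xopen(url) would stream and decompress on the fly.
```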
__all__ = [\n "DownloadConfig",\n "DownloadManager",\n "DownloadMode",\n "StreamingDownloadManager",\n]\n\nfrom .download_config import DownloadConfig\nfrom .download_manager import DownloadManager, DownloadMode\nfrom .streaming_download_manager import StreamingDownloadManager\n
|
.venv\Lib\site-packages\datasets\download\__init__.py
|
__init__.py
|
Python
| 281 | 0.85 | 0 | 0 |
python-kit
| 356 |
2023-11-05T12:00:20.788157
|
BSD-3-Clause
| false |
c86f4f8501f5f1c76fb3c75c44336133
|
\n\n
|
.venv\Lib\site-packages\datasets\download\__pycache__\download_config.cpython-313.pyc
|
download_config.cpython-313.pyc
|
Other
| 4,996 | 0.8 | 0.108108 | 0 |
awesome-app
| 92 |
2024-01-28T19:30:19.325836
|
MIT
| false |
90adf20bbb9624beaffc425cb8d9b5df
|
\n\n
|
.venv\Lib\site-packages\datasets\download\__pycache__\download_manager.cpython-313.pyc
|
download_manager.cpython-313.pyc
|
Other
| 14,699 | 0.8 | 0.026738 | 0 |
python-kit
| 813 |
2023-08-10T13:51:43.566685
|
MIT
| false |
251ade61c4303551c6472235c971970b
|
\n\n
|
.venv\Lib\site-packages\datasets\download\__pycache__\streaming_download_manager.cpython-313.pyc
|
streaming_download_manager.cpython-313.pyc
|
Other
| 8,947 | 0.95 | 0.071429 | 0 |
react-lib
| 576 |
2024-02-13T20:12:38.868625
|
MIT
| false |
788c2fcec3dd5fbf74d53c7bf5de384e
|
\n\n
|
.venv\Lib\site-packages\datasets\download\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 453 | 0.7 | 0 | 0 |
python-kit
| 705 |
2023-08-29T23:25:04.489543
|
Apache-2.0
| false |
9ccfec95def5b0d2d845cf8ff98df527
|
__all__ = [\n "Audio",\n "Array2D",\n "Array3D",\n "Array4D",\n "Array5D",\n "ClassLabel",\n "Features",\n "LargeList",\n "Sequence",\n "Value",\n "Image",\n "Translation",\n "TranslationVariableLanguages",\n "Video",\n "Pdf",\n]\nfrom .audio import Audio\nfrom .features import Array2D, Array3D, Array4D, Array5D, ClassLabel, Features, LargeList, Sequence, Value\nfrom .image import Image\nfrom .pdf import Pdf\nfrom .translation import Translation, TranslationVariableLanguages\nfrom .video import Video\n
|
.venv\Lib\site-packages\datasets\features\__init__.py
|
__init__.py
|
Python
| 529 | 0.85 | 0 | 0 |
python-kit
| 258 |
2023-12-08T02:25:51.812495
|
Apache-2.0
| false |
27b73b22728ad4d4521e2aafc3d0f9e8
|
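A short usage sketch of the feature types re-exported here; the column names and labels are illustrative:

```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["neg", "pos"]),
        "tokens": Sequence(Value("string")),
    }
)
print(features.arrow_schema)
```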
\n\n
|
.venv\Lib\site-packages\datasets\features\__pycache__\audio.cpython-313.pyc
|
audio.cpython-313.pyc
|
Other
| 15,195 | 0.95 | 0.030303 | 0.011696 |
react-lib
| 992 |
2025-01-06T14:43:14.460676
|
BSD-3-Clause
| false |
cd2f56e81baaa7ae7bbd58f76057fe28
|
\n\n
|
.venv\Lib\site-packages\datasets\features\__pycache__\image.cpython-313.pyc
|
image.cpython-313.pyc
|
Other
| 19,796 | 0.95 | 0.03125 | 0.020305 |
vue-tools
| 371 |
2024-09-01T11:46:02.645498
|
BSD-3-Clause
| false |
d7923875373cb32f19f23be4ad9d1fa9
|
\n\n
|
.venv\Lib\site-packages\datasets\features\__pycache__\pdf.cpython-313.pyc
|
pdf.cpython-313.pyc
|
Other
| 11,366 | 0.95 | 0.017143 | 0.019737 |
python-kit
| 820 |
2024-08-09T14:05:26.980793
|
BSD-3-Clause
| false |
e1e4a0e5237d0086163f74fe018e060c
|
\n\n
|
.venv\Lib\site-packages\datasets\features\__pycache__\translation.cpython-313.pyc
|
translation.cpython-313.pyc
|
Other
| 6,461 | 0.8 | 0.06383 | 0 |
vue-tools
| 741 |
2025-03-14T06:33:18.027387
|
Apache-2.0
| false |
92b0164d73cd15ed47333a15f7db2a92
|
\n\n
|
.venv\Lib\site-packages\datasets\features\__pycache__\video.cpython-313.pyc
|
video.cpython-313.pyc
|
Other
| 13,642 | 0.95 | 0.027624 | 0.018868 |
react-lib
| 502 |
2023-08-03T12:23:29.251599
|
MIT
| false |
bc61cda11b5eb5ad0b3a7e05e0960717
|
\n\n
|
.venv\Lib\site-packages\datasets\features\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 720 | 0.7 | 0 | 0 |
awesome-app
| 433 |
2025-03-04T03:27:22.527774
|
Apache-2.0
| false |
98cef2093cd48faeb2a33e36d431ed08
|
\n\n
|
.venv\Lib\site-packages\datasets\filesystems\__pycache__\compression.cpython-313.pyc
|
compression.cpython-313.pyc
|
Other
| 6,202 | 0.8 | 0.014493 | 0.0625 |
awesome-app
| 558 |
2025-04-11T00:38:34.189068
|
Apache-2.0
| false |
4221a070b6ae5213f2d3962fe04e7c35
|
\n\n
|
.venv\Lib\site-packages\datasets\filesystems\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 2,335 | 0.95 | 0.129032 | 0 |
python-kit
| 35 |
2024-12-02T04:51:55.673176
|
GPL-3.0
| false |
6a896e2dbcc4c0e28c52e6ea86f36cef
|
# Copyright 2020 The HuggingFace Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom collections.abc import Mapping\n\nimport numpy as np\nimport pyarrow as pa\n\nfrom .. import config\nfrom ..utils.py_utils import map_nested\nfrom .formatting import TensorFormatter\n\n\nclass NumpyFormatter(TensorFormatter[Mapping, np.ndarray, Mapping]):\n def __init__(self, features=None, token_per_repo_id=None, **np_array_kwargs):\n super().__init__(features=features, token_per_repo_id=token_per_repo_id)\n self.np_array_kwargs = np_array_kwargs\n\n def _consolidate(self, column):\n if isinstance(column, list):\n if column and all(\n isinstance(x, np.ndarray) and x.shape == column[0].shape and x.dtype == column[0].dtype for x in column\n ):\n return np.stack(column)\n else:\n # don't use np.array(column, dtype=object)\n # since it fails in certain cases\n # see https://stackoverflow.com/q/51005699\n out = np.empty(len(column), dtype=object)\n out[:] = column\n return out\n return column\n\n def _tensorize(self, value):\n if isinstance(value, (str, bytes, type(None))):\n return value\n elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character):\n return value\n elif isinstance(value, np.number):\n return value\n\n default_dtype = {}\n\n if isinstance(value, np.ndarray) and np.issubdtype(value.dtype, np.integer):\n default_dtype = {"dtype": np.int64}\n elif isinstance(value, np.ndarray) and np.issubdtype(value.dtype, np.floating):\n default_dtype = {"dtype": np.float32}\n\n if config.PIL_AVAILABLE and "PIL" in sys.modules:\n import PIL.Image\n\n if isinstance(value, PIL.Image.Image):\n return np.asarray(value, **self.np_array_kwargs)\n if config.TORCHVISION_AVAILABLE and "torchvision" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to np arrays ?\n\n return np.asarray(value, **{**default_dtype, **self.np_array_kwargs})\n\n def _recursive_tensorize(self, data_struct):\n # support for torch, tf, jax etc.\n if config.TORCH_AVAILABLE and "torch" in sys.modules:\n import torch\n\n if isinstance(data_struct, torch.Tensor):\n return self._tensorize(data_struct.detach().cpu().numpy()[()])\n if hasattr(data_struct, "__array__") and not isinstance(data_struct, (np.ndarray, np.character, np.number)):\n data_struct = data_struct.__array__()\n # support for nested types like struct of list of struct\n if isinstance(data_struct, np.ndarray):\n if data_struct.dtype == object:\n return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\n if isinstance(data_struct, (list, tuple)):\n return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\n return self._tensorize(data_struct)\n\n def recursive_tensorize(self, data_struct: dict):\n return map_nested(self._recursive_tensorize, data_struct, map_list=False)\n\n def format_row(self, pa_table: pa.Table) -> Mapping:\n row = 
self.numpy_arrow_extractor().extract_row(pa_table)\n row = self.python_features_decoder.decode_row(row)\n return self.recursive_tensorize(row)\n\n def format_column(self, pa_table: pa.Table) -> np.ndarray:\n column = self.numpy_arrow_extractor().extract_column(pa_table)\n column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])\n column = self.recursive_tensorize(column)\n column = self._consolidate(column)\n return column\n\n def format_batch(self, pa_table: pa.Table) -> Mapping:\n batch = self.numpy_arrow_extractor().extract_batch(pa_table)\n batch = self.python_features_decoder.decode_batch(batch)\n batch = self.recursive_tensorize(batch)\n for column_name in batch:\n batch[column_name] = self._consolidate(batch[column_name])\n return batch\n
|
.venv\Lib\site-packages\datasets\formatting\np_formatter.py
|
np_formatter.py
|
Python
| 4,826 | 0.95 | 0.267857 | 0.193548 |
react-lib
| 969 |
2023-09-10T02:36:21.557090
|
Apache-2.0
| false |
9da2b39d04d16a98c2e87e9e313b321c
|
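In practice this formatter is selected through `Dataset.with_format("numpy")` rather than instantiated directly. A usage sketch with a made-up in-memory dataset:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1, 2], [3, 4]], "y": [0.5, 1.5]})
ds = ds.with_format("numpy")

print(type(ds["x"]))  # numpy.ndarray (equal-shape rows are stacked by _consolidate)
print(ds["y"].dtype)  # float32: floating values default to np.float32 in _tensorize
```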
# Copyright 2020 The HuggingFace Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Optional\n\nimport pyarrow as pa\n\nfrom .. import config\nfrom ..features import Features\nfrom ..features.features import decode_nested_example\nfrom ..utils.py_utils import no_op_if_value_is_null\nfrom .formatting import BaseArrowExtractor, TableFormatter\n\n\nif TYPE_CHECKING:\n import polars as pl\n\n\nclass PolarsArrowExtractor(BaseArrowExtractor["pl.DataFrame", "pl.Series", "pl.DataFrame"]):\n def extract_row(self, pa_table: pa.Table) -> "pl.DataFrame":\n if config.POLARS_AVAILABLE:\n if "polars" not in sys.modules:\n import polars\n else:\n polars = sys.modules["polars"]\n\n return polars.from_arrow(pa_table.slice(length=1))\n else:\n raise ValueError("Polars needs to be installed to be able to return Polars dataframes.")\n\n def extract_column(self, pa_table: pa.Table) -> "pl.Series":\n if config.POLARS_AVAILABLE:\n if "polars" not in sys.modules:\n import polars\n else:\n polars = sys.modules["polars"]\n\n return polars.from_arrow(pa_table.select([0]))[pa_table.column_names[0]]\n else:\n raise ValueError("Polars needs to be installed to be able to return Polars dataframes.")\n\n def extract_batch(self, pa_table: pa.Table) -> "pl.DataFrame":\n if config.POLARS_AVAILABLE:\n if "polars" not in sys.modules:\n import polars\n else:\n polars = sys.modules["polars"]\n\n return polars.from_arrow(pa_table)\n else:\n raise ValueError("Polars needs to be installed to be able to return Polars dataframes.")\n\n\nclass PolarsFeaturesDecoder:\n def __init__(self, features: Optional[Features]):\n self.features = features\n import polars as pl # noqa: F401 - import pl at initialization\n\n def decode_row(self, row: "pl.DataFrame") -> "pl.DataFrame":\n decode = (\n {\n column_name: no_op_if_value_is_null(partial(decode_nested_example, feature))\n for column_name, feature in self.features.items()\n if self.features._column_requires_decoding[column_name]\n }\n if self.features\n else {}\n )\n if decode:\n row[list(decode.keys())] = row.map_rows(decode)\n return row\n\n def decode_column(self, column: "pl.Series", column_name: str) -> "pl.Series":\n decode = (\n no_op_if_value_is_null(partial(decode_nested_example, self.features[column_name]))\n if self.features and column_name in self.features and self.features._column_requires_decoding[column_name]\n else None\n )\n if decode:\n column = column.map_elements(decode)\n return column\n\n def decode_batch(self, batch: "pl.DataFrame") -> "pl.DataFrame":\n return self.decode_row(batch)\n\n\nclass PolarsFormatter(TableFormatter["pl.DataFrame", "pl.Series", "pl.DataFrame"]):\n table_type = "polars dataframe"\n column_type = "polars series"\n\n def __init__(self, features=None, **np_array_kwargs):\n super().__init__(features=features)\n self.np_array_kwargs = np_array_kwargs\n self.polars_arrow_extractor = PolarsArrowExtractor\n self.polars_features_decoder = PolarsFeaturesDecoder(features)\n 
import polars as pl # noqa: F401 - import pl at initialization\n\n def format_row(self, pa_table: pa.Table) -> "pl.DataFrame":\n row = self.polars_arrow_extractor().extract_row(pa_table)\n row = self.polars_features_decoder.decode_row(row)\n return row\n\n def format_column(self, pa_table: pa.Table) -> "pl.Series":\n column = self.polars_arrow_extractor().extract_column(pa_table)\n column = self.polars_features_decoder.decode_column(column, pa_table.column_names[0])\n return column\n\n def format_batch(self, pa_table: pa.Table) -> "pl.DataFrame":\n row = self.polars_arrow_extractor().extract_batch(pa_table)\n row = self.polars_features_decoder.decode_batch(row)\n return row\n
|
.venv\Lib\site-packages\datasets\formatting\polars_formatter.py
|
polars_formatter.py
|
Python
| 4,744 | 0.95 | 0.225806 | 0.128713 |
awesome-app
| 485 |
2023-12-26T00:26:55.698793
|
GPL-3.0
| false |
3a8460a5569829ad8ad65e2bc05e58a5
|
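The PolarsFormatter recorded above is normally reached through the public datasets API rather than constructed directly. A minimal sketch, assuming datasets and polars are installed and that Dataset.from_dict and with_format("polars") behave as in current releases; the toy columns are purely illustrative:

    from datasets import Dataset

    # Hypothetical in-memory data; any dataset would do.
    ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})

    # Selecting the "polars" format (alias "pl") routes row/column/batch access
    # through PolarsFormatter, so indexing returns polars objects.
    ds_pl = ds.with_format("polars")
    print(type(ds_pl[0]))        # expected: a one-row polars.DataFrame
    print(type(ds_pl["label"]))  # expected: polars.Series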
# Copyright 2020 The HuggingFace Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nimport sys\nfrom collections.abc import Mapping\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\nimport pyarrow as pa\n\nfrom .. import config\nfrom ..utils.py_utils import map_nested\nfrom .formatting import TensorFormatter\n\n\nif TYPE_CHECKING:\n import tensorflow as tf\n\n\nclass TFFormatter(TensorFormatter[Mapping, "tf.Tensor", Mapping]):\n def __init__(self, features=None, token_per_repo_id=None, **tf_tensor_kwargs):\n super().__init__(features=features, token_per_repo_id=token_per_repo_id)\n self.tf_tensor_kwargs = tf_tensor_kwargs\n import tensorflow as tf # noqa: F401 - import tf at initialization\n\n def _consolidate(self, column):\n import tensorflow as tf\n\n if isinstance(column, list) and column:\n if all(\n isinstance(x, tf.Tensor) and x.shape == column[0].shape and x.dtype == column[0].dtype for x in column\n ):\n return tf.stack(column)\n elif all(\n isinstance(x, (tf.Tensor, tf.RaggedTensor)) and x.ndim == 1 and x.dtype == column[0].dtype\n for x in column\n ):\n # only rag 1-D tensors, otherwise some dimensions become ragged even though they were consolidated\n return tf.ragged.stack(column)\n\n return column\n\n def _tensorize(self, value):\n import tensorflow as tf\n\n if value is None:\n return value\n\n default_dtype = {}\n\n if isinstance(value, (np.number, np.ndarray)) and np.issubdtype(value.dtype, np.integer):\n default_dtype = {"dtype": tf.int64}\n elif isinstance(value, (np.number, np.ndarray)) and np.issubdtype(value.dtype, np.floating):\n default_dtype = {"dtype": tf.float32}\n\n if config.PIL_AVAILABLE and "PIL" in sys.modules:\n import PIL.Image\n\n if isinstance(value, PIL.Image.Image):\n value = np.asarray(value)\n if config.TORCHVISION_AVAILABLE and "torchvision" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to tf tensors ?\n\n return tf.convert_to_tensor(value, **{**default_dtype, **self.tf_tensor_kwargs})\n\n def _recursive_tensorize(self, data_struct):\n import tensorflow as tf\n\n # support for torch, tf, jax etc.\n if config.TORCH_AVAILABLE and "torch" in sys.modules:\n import torch\n\n if isinstance(data_struct, torch.Tensor):\n return self._tensorize(data_struct.detach().cpu().numpy()[()])\n if hasattr(data_struct, "__array__") and not isinstance(data_struct, tf.Tensor):\n data_struct = data_struct.__array__()\n # support for nested types like struct of list of struct\n if isinstance(data_struct, np.ndarray):\n if data_struct.dtype == object: # tf tensors cannot be instantied from an array of objects\n return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\n elif isinstance(data_struct, (list, tuple)):\n return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\n return self._tensorize(data_struct)\n\n def recursive_tensorize(self, data_struct: dict):\n return 
map_nested(self._recursive_tensorize, data_struct, map_list=False)\n\n def format_row(self, pa_table: pa.Table) -> Mapping:\n row = self.numpy_arrow_extractor().extract_row(pa_table)\n row = self.python_features_decoder.decode_row(row)\n return self.recursive_tensorize(row)\n\n def format_column(self, pa_table: pa.Table) -> "tf.Tensor":\n column = self.numpy_arrow_extractor().extract_column(pa_table)\n column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])\n column = self.recursive_tensorize(column)\n column = self._consolidate(column)\n return column\n\n def format_batch(self, pa_table: pa.Table) -> Mapping:\n batch = self.numpy_arrow_extractor().extract_batch(pa_table)\n batch = self.python_features_decoder.decode_batch(batch)\n batch = self.recursive_tensorize(batch)\n for column_name in batch:\n batch[column_name] = self._consolidate(batch[column_name])\n return batch\n
|
.venv\Lib\site-packages\datasets\formatting\tf_formatter.py
|
tf_formatter.py
|
Python
| 4,959 | 0.95 | 0.256198 | 0.177083 |
react-lib
| 959 |
2023-10-25T12:30:57.914449
|
BSD-3-Clause
| false |
ad614c3ad943beabc62c2ad621c9cc7b
|
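As with the other tensor formatters, TFFormatter is usually selected by format name. A hedged sketch, assuming TensorFlow is installed; the dtype comments follow the _tensorize defaults shown above (int64 for integer inputs, float32 for floating inputs):

    from datasets import Dataset

    ds = Dataset.from_dict({"ids": [[1, 2], [3, 4]], "score": [0.5, 0.75]})

    # "tensorflow" (alias "tf") selects TFFormatter; batches come back as a
    # mapping of column name to tf.Tensor (or tf.RaggedTensor for ragged 1-D columns).
    ds_tf = ds.with_format("tf")
    batch = ds_tf[:2]
    print({k: v.dtype for k, v in batch.items()})  # e.g. {'ids': tf.int64, 'score': tf.float32}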
# Copyright 2020 The HuggingFace Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nimport sys\nfrom collections.abc import Mapping\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\nimport pyarrow as pa\n\nfrom .. import config\nfrom ..utils.py_utils import map_nested\nfrom .formatting import TensorFormatter\n\n\nif TYPE_CHECKING:\n import torch\n\n\nclass TorchFormatter(TensorFormatter[Mapping, "torch.Tensor", Mapping]):\n def __init__(self, features=None, token_per_repo_id=None, **torch_tensor_kwargs):\n super().__init__(features=features, token_per_repo_id=token_per_repo_id)\n self.torch_tensor_kwargs = torch_tensor_kwargs\n import torch # noqa import torch at initialization\n\n def _consolidate(self, column):\n import torch\n\n if isinstance(column, list) and column:\n if all(\n isinstance(x, torch.Tensor) and x.shape == column[0].shape and x.dtype == column[0].dtype\n for x in column\n ):\n return torch.stack(column)\n return column\n\n def _tensorize(self, value):\n import torch\n\n if isinstance(value, (str, bytes, type(None))):\n return value\n elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character):\n return value.tolist()\n\n default_dtype = {}\n\n if isinstance(value, (np.number, np.ndarray)) and np.issubdtype(value.dtype, np.integer):\n default_dtype = {"dtype": torch.int64}\n\n # Convert dtype to np.int64 if it's either np.uint16 or np.uint32 to ensure compatibility.\n # np.uint64 is excluded from this conversion as there is no compatible PyTorch dtype that can handle it without loss.\n if value.dtype in [np.uint16, np.uint32]:\n value = value.astype(np.int64)\n\n elif isinstance(value, (np.number, np.ndarray)) and np.issubdtype(value.dtype, np.floating):\n default_dtype = {"dtype": torch.float32}\n\n if config.PIL_AVAILABLE and "PIL" in sys.modules:\n import PIL.Image\n\n if isinstance(value, PIL.Image.Image):\n value = np.asarray(value)\n if value.ndim == 2:\n value = value[:, :, np.newaxis]\n\n value = value.transpose((2, 0, 1))\n if config.TORCHVISION_AVAILABLE and "torchvision" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to torch tensors ?\n\n return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})\n\n def _recursive_tensorize(self, data_struct):\n import torch\n\n # support for torch, tf, jax etc.\n if hasattr(data_struct, "__array__") and not isinstance(data_struct, torch.Tensor):\n data_struct = data_struct.__array__()\n # support for nested types like struct of list of struct\n if isinstance(data_struct, np.ndarray):\n if data_struct.dtype == object: # torch tensors cannot be instantied from an array of objects\n return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\n elif isinstance(data_struct, (list, tuple)):\n return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\n return self._tensorize(data_struct)\n\n def 
recursive_tensorize(self, data_struct: dict):\n return map_nested(self._recursive_tensorize, data_struct, map_list=False)\n\n def format_row(self, pa_table: pa.Table) -> Mapping:\n row = self.numpy_arrow_extractor().extract_row(pa_table)\n row = self.python_features_decoder.decode_row(row)\n return self.recursive_tensorize(row)\n\n def format_column(self, pa_table: pa.Table) -> "torch.Tensor":\n column = self.numpy_arrow_extractor().extract_column(pa_table)\n column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])\n column = self.recursive_tensorize(column)\n column = self._consolidate(column)\n return column\n\n def format_batch(self, pa_table: pa.Table) -> Mapping:\n batch = self.numpy_arrow_extractor().extract_batch(pa_table)\n batch = self.python_features_decoder.decode_batch(batch)\n batch = self.recursive_tensorize(batch)\n for column_name in batch:\n batch[column_name] = self._consolidate(batch[column_name])\n return batch\n
|
.venv\Lib\site-packages\datasets\formatting\torch_formatter.py
|
torch_formatter.py
|
Python
| 5,034 | 0.95 | 0.254098 | 0.1875 |
awesome-app
| 913 |
2024-07-04T19:58:19.663118
|
GPL-3.0
| false |
f9347af14730ac05b6a05f3767897518
|
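The same pattern applies to PyTorch. A minimal sketch assuming torch is installed; note that the uint16/uint32 widening in _tensorize means such NumPy columns should come back as torch.int64:

    import numpy as np
    from datasets import Dataset

    # Hypothetical uint16 data to exercise the widening branch.
    ds = Dataset.from_dict({"pixels": [np.arange(4, dtype=np.uint16) for _ in range(3)]})

    # "torch" (aliases "pt", "pytorch") selects TorchFormatter.
    ds_pt = ds.with_format("torch")
    print(ds_pt[0]["pixels"].dtype)  # expected: torch.int64, since uint16 is widened before tensorization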
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, List, Optional, Type\n\nfrom .. import config\nfrom ..utils import logging\nfrom .formatting import (\n ArrowFormatter,\n CustomFormatter,\n Formatter,\n PandasFormatter,\n PythonFormatter,\n TableFormatter,\n TensorFormatter,\n format_table,\n query_table,\n)\nfrom .np_formatter import NumpyFormatter\n\n\nlogger = logging.get_logger(__name__)\n\n_FORMAT_TYPES: dict[Optional[str], type[Formatter]] = {}\n_FORMAT_TYPES_ALIASES: dict[Optional[str], str] = {}\n_FORMAT_TYPES_ALIASES_UNAVAILABLE: dict[Optional[str], Exception] = {}\n\n\ndef _register_formatter(\n formatter_cls: type,\n format_type: Optional[str],\n aliases: Optional[list[str]] = None,\n):\n """\n Register a Formatter object using a name and optional aliases.\n This function must be used on a Formatter class.\n """\n aliases = aliases if aliases is not None else []\n if format_type in _FORMAT_TYPES:\n logger.warning(\n f"Overwriting format type '{format_type}' ({_FORMAT_TYPES[format_type].__name__} -> {formatter_cls.__name__})"\n )\n _FORMAT_TYPES[format_type] = formatter_cls\n for alias in set(aliases + [format_type]):\n if alias in _FORMAT_TYPES_ALIASES:\n logger.warning(\n f"Overwriting format type alias '{alias}' ({_FORMAT_TYPES_ALIASES[alias]} -> {format_type})"\n )\n _FORMAT_TYPES_ALIASES[alias] = format_type\n\n\ndef _register_unavailable_formatter(\n unavailable_error: Exception, format_type: Optional[str], aliases: Optional[list[str]] = None\n):\n """\n Register an unavailable Formatter object using a name and optional aliases.\n This function must be used on an Exception object that is raised when trying to get the unavailable formatter.\n """\n aliases = aliases if aliases is not None else []\n for alias in set(aliases + [format_type]):\n _FORMAT_TYPES_ALIASES_UNAVAILABLE[alias] = unavailable_error\n\n\n# Here we define all the available formatting functions that can be used by `Dataset.set_format`\n_register_formatter(PythonFormatter, None, aliases=["python"])\n_register_formatter(ArrowFormatter, "arrow", aliases=["pa", "pyarrow"])\n_register_formatter(NumpyFormatter, "numpy", aliases=["np"])\n_register_formatter(PandasFormatter, "pandas", aliases=["pd"])\n_register_formatter(CustomFormatter, "custom")\n\nif config.POLARS_AVAILABLE:\n from .polars_formatter import PolarsFormatter\n\n _register_formatter(PolarsFormatter, "polars", aliases=["pl"])\nelse:\n _polars_error = ValueError("Polars needs to be installed to be able to return Polars dataframes.")\n _register_unavailable_formatter(_polars_error, "polars", aliases=["pl"])\n\nif config.TORCH_AVAILABLE:\n from .torch_formatter import TorchFormatter\n\n _register_formatter(TorchFormatter, "torch", aliases=["pt", "pytorch"])\nelse:\n _torch_error = ValueError("PyTorch needs to be installed to be able to return PyTorch tensors.")\n _register_unavailable_formatter(_torch_error, "torch", aliases=["pt", 
"pytorch"])\n\nif config.TF_AVAILABLE:\n from .tf_formatter import TFFormatter\n\n _register_formatter(TFFormatter, "tensorflow", aliases=["tf"])\nelse:\n _tf_error = ValueError("Tensorflow needs to be installed to be able to return Tensorflow tensors.")\n _register_unavailable_formatter(_tf_error, "tensorflow", aliases=["tf"])\n\nif config.JAX_AVAILABLE:\n from .jax_formatter import JaxFormatter\n\n _register_formatter(JaxFormatter, "jax", aliases=[])\nelse:\n _jax_error = ValueError("JAX needs to be installed to be able to return JAX arrays.")\n _register_unavailable_formatter(_jax_error, "jax", aliases=[])\n\n\ndef get_format_type_from_alias(format_type: Optional[str]) -> Optional[str]:\n """If the given format type is a known alias, then return its main type name. Otherwise return the type with no change."""\n if format_type in _FORMAT_TYPES_ALIASES:\n return _FORMAT_TYPES_ALIASES[format_type]\n else:\n return format_type\n\n\ndef get_formatter(format_type: Optional[str], **format_kwargs) -> Formatter:\n """\n Factory function to get a Formatter given its type name and keyword arguments.\n A formatter is an object that extracts and formats data from pyarrow table.\n It defines the formatting for rows, columns and batches.\n If the formatter for a given type name doesn't exist or is not available, an error is raised.\n """\n format_type = get_format_type_from_alias(format_type)\n if format_type in _FORMAT_TYPES:\n return _FORMAT_TYPES[format_type](**format_kwargs)\n if format_type in _FORMAT_TYPES_ALIASES_UNAVAILABLE:\n raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]\n else:\n raise ValueError(f"Format type should be one of {list(_FORMAT_TYPES.keys())}, but got '{format_type}'")\n
|
.venv\Lib\site-packages\datasets\formatting\__init__.py
|
__init__.py
|
Python
| 5,412 | 0.95 | 0.176471 | 0.123894 |
python-kit
| 833 |
2024-05-05T00:06:38.671486
|
Apache-2.0
| false |
f814af8d17c0bfa428a4924fbafc0898
|
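The registry above is what Dataset.set_format / with_format consults. The lookup itself can be exercised directly; get_formatter and get_format_type_from_alias are defined in this module, and importing them from datasets.formatting matches the package layout shown here:

    from datasets.formatting import get_format_type_from_alias, get_formatter

    # Aliases resolve to their canonical format name before lookup.
    assert get_format_type_from_alias("np") == "numpy"

    # A known format name returns a Formatter instance; an unknown one raises ValueError,
    # and a registered-but-unavailable one re-raises the stored import error.
    formatter = get_formatter("numpy")
    print(type(formatter).__name__)  # NumpyFormatter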
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\formatting.cpython-313.pyc
|
formatting.cpython-313.pyc
|
Other
| 43,233 | 0.95 | 0.078652 | 0.007874 |
awesome-app
| 854 |
2024-02-10T01:51:03.246856
|
BSD-3-Clause
| false |
496a89fa2e4f608f1d91e844d69e62a6
|
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\jax_formatter.cpython-313.pyc
|
jax_formatter.cpython-313.pyc
|
Other
| 9,876 | 0.8 | 0 | 0 |
awesome-app
| 647 |
2024-10-07T13:49:58.317719
|
MIT
| false |
34678040d046129b03162e126c945d8c
|
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\np_formatter.cpython-313.pyc
|
np_formatter.cpython-313.pyc
|
Other
| 7,537 | 0.8 | 0 | 0.037736 |
node-utils
| 316 |
2024-02-05T01:48:06.703367
|
MIT
| false |
f04ed0be4c3fd5505ae541472b582746
|
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\polars_formatter.cpython-313.pyc
|
polars_formatter.cpython-313.pyc
|
Other
| 6,735 | 0.95 | 0 | 0.017544 |
vue-tools
| 14 |
2025-05-10T22:59:29.816237
|
GPL-3.0
| false |
6501ec0631513243df1bcdf870bcf846
|
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\tf_formatter.cpython-313.pyc
|
tf_formatter.cpython-313.pyc
|
Other
| 7,573 | 0.8 | 0 | 0 |
awesome-app
| 796 |
2024-07-04T07:26:53.033961
|
MIT
| false |
610eb27e161287dba103b3ca328e177e
|
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\torch_formatter.cpython-313.pyc
|
torch_formatter.cpython-313.pyc
|
Other
| 7,428 | 0.8 | 0 | 0 |
react-lib
| 49 |
2024-11-14T20:51:44.336821
|
MIT
| false |
11fbfc76e002362d502890de775b8782
|
\n\n
|
.venv\Lib\site-packages\datasets\formatting\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 5,288 | 0.95 | 0.111111 | 0 |
react-lib
| 20 |
2024-11-08T12:31:18.352454
|
GPL-3.0
| false |
246f280f9303d1a1fb0dee2d5467ca73
|
from abc import ABC, abstractmethod\nfrom typing import Optional, Union\n\nfrom .. import Dataset, DatasetDict, Features, IterableDataset, IterableDatasetDict, NamedSplit\nfrom ..utils.typing import NestedDataStructureLike, PathLike\n\n\nclass AbstractDatasetReader(ABC):\n def __init__(\n self,\n path_or_paths: Optional[NestedDataStructureLike[PathLike]] = None,\n split: Optional[NamedSplit] = None,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n num_proc: Optional[int] = None,\n **kwargs,\n ):\n self.path_or_paths = path_or_paths\n self.split = split if split or isinstance(path_or_paths, dict) else "train"\n self.features = features\n self.cache_dir = cache_dir\n self.keep_in_memory = keep_in_memory\n self.streaming = streaming\n self.num_proc = num_proc\n self.kwargs = kwargs\n\n @abstractmethod\n def read(self) -> Union[Dataset, DatasetDict, IterableDataset, IterableDatasetDict]:\n pass\n\n\nclass AbstractDatasetInputStream(ABC):\n def __init__(\n self,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n num_proc: Optional[int] = None,\n **kwargs,\n ):\n self.features = features\n self.cache_dir = cache_dir\n self.keep_in_memory = keep_in_memory\n self.streaming = streaming\n self.num_proc = num_proc\n self.kwargs = kwargs\n\n @abstractmethod\n def read(self) -> Union[Dataset, IterableDataset]:\n pass\n
|
.venv\Lib\site-packages\datasets\io\abc.py
|
abc.py
|
Python
| 1,672 | 0.85 | 0.132075 | 0.043478 |
node-utils
| 645 |
2023-10-23T18:30:40.722546
|
GPL-3.0
| false |
62d9650c5de91869c06e6b357255d652
|
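The two abstract bases above only fix a constructor signature and a read() contract; the concrete readers in the sibling modules plug in a DatasetBuilder. A skeletal, purely hypothetical subclass (not part of the library) just to show the shape the rest of this package follows:

    from typing import Union

    from datasets import Dataset, IterableDataset
    from datasets.io.abc import AbstractDatasetInputStream


    class InMemoryListInputStream(AbstractDatasetInputStream):
        """Hypothetical reader that wraps a list of dicts."""

        def __init__(self, rows, features=None, **kwargs):
            super().__init__(features=features, **kwargs)
            self.rows = rows

        def read(self) -> Union[Dataset, IterableDataset]:
            # Real readers delegate to a builder; a plain from_list is enough for a sketch.
            return Dataset.from_list(self.rows, features=self.features)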
import multiprocessing\nimport os\nfrom typing import BinaryIO, Optional, Union\n\nimport fsspec\n\nfrom .. import Dataset, Features, NamedSplit, config\nfrom ..formatting import query_table\nfrom ..packaged_modules.csv.csv import Csv\nfrom ..utils import tqdm as hf_tqdm\nfrom ..utils.typing import NestedDataStructureLike, PathLike\nfrom .abc import AbstractDatasetReader\n\n\nclass CsvDatasetReader(AbstractDatasetReader):\n def __init__(\n self,\n path_or_paths: NestedDataStructureLike[PathLike],\n split: Optional[NamedSplit] = None,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n num_proc: Optional[int] = None,\n **kwargs,\n ):\n super().__init__(\n path_or_paths,\n split=split,\n features=features,\n cache_dir=cache_dir,\n keep_in_memory=keep_in_memory,\n streaming=streaming,\n num_proc=num_proc,\n **kwargs,\n )\n path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\n self.builder = Csv(\n cache_dir=cache_dir,\n data_files=path_or_paths,\n features=features,\n **kwargs,\n )\n\n def read(self):\n # Build iterable dataset\n if self.streaming:\n dataset = self.builder.as_streaming_dataset(split=self.split)\n # Build regular (map-style) dataset\n else:\n download_config = None\n download_mode = None\n verification_mode = None\n base_path = None\n\n self.builder.download_and_prepare(\n download_config=download_config,\n download_mode=download_mode,\n verification_mode=verification_mode,\n base_path=base_path,\n num_proc=self.num_proc,\n )\n dataset = self.builder.as_dataset(\n split=self.split, verification_mode=verification_mode, in_memory=self.keep_in_memory\n )\n return dataset\n\n\nclass CsvDatasetWriter:\n def __init__(\n self,\n dataset: Dataset,\n path_or_buf: Union[PathLike, BinaryIO],\n batch_size: Optional[int] = None,\n num_proc: Optional[int] = None,\n storage_options: Optional[dict] = None,\n **to_csv_kwargs,\n ):\n if num_proc is not None and num_proc <= 0:\n raise ValueError(f"num_proc {num_proc} must be an integer > 0.")\n\n self.dataset = dataset\n self.path_or_buf = path_or_buf\n self.batch_size = batch_size if batch_size else config.DEFAULT_MAX_BATCH_SIZE\n self.num_proc = num_proc\n self.encoding = "utf-8"\n self.storage_options = storage_options or {}\n self.to_csv_kwargs = to_csv_kwargs\n\n def write(self) -> int:\n _ = self.to_csv_kwargs.pop("path_or_buf", None)\n header = self.to_csv_kwargs.pop("header", True)\n index = self.to_csv_kwargs.pop("index", False)\n\n if isinstance(self.path_or_buf, (str, bytes, os.PathLike)):\n with fsspec.open(self.path_or_buf, "wb", **(self.storage_options or {})) as buffer:\n written = self._write(file_obj=buffer, header=header, index=index, **self.to_csv_kwargs)\n else:\n written = self._write(file_obj=self.path_or_buf, header=header, index=index, **self.to_csv_kwargs)\n return written\n\n def _batch_csv(self, args):\n offset, header, index, to_csv_kwargs = args\n\n batch = query_table(\n table=self.dataset.data,\n key=slice(offset, offset + self.batch_size),\n indices=self.dataset._indices,\n )\n csv_str = batch.to_pandas().to_csv(\n path_or_buf=None, header=header if (offset == 0) else False, index=index, **to_csv_kwargs\n )\n return csv_str.encode(self.encoding)\n\n def _write(self, file_obj: BinaryIO, header, index, **to_csv_kwargs) -> int:\n """Writes the pyarrow table as CSV to a binary file handle.\n\n Caller is responsible for opening and closing the handle.\n """\n written = 0\n\n if self.num_proc is None or 
self.num_proc == 1:\n for offset in hf_tqdm(\n range(0, len(self.dataset), self.batch_size),\n unit="ba",\n desc="Creating CSV from Arrow format",\n ):\n csv_str = self._batch_csv((offset, header, index, to_csv_kwargs))\n written += file_obj.write(csv_str)\n\n else:\n num_rows, batch_size = len(self.dataset), self.batch_size\n with multiprocessing.Pool(self.num_proc) as pool:\n for csv_str in hf_tqdm(\n pool.imap(\n self._batch_csv,\n [(offset, header, index, to_csv_kwargs) for offset in range(0, num_rows, batch_size)],\n ),\n total=(num_rows // batch_size) + 1 if num_rows % batch_size else num_rows // batch_size,\n unit="ba",\n desc="Creating CSV from Arrow format",\n ):\n written += file_obj.write(csv_str)\n\n return written\n
|
.venv\Lib\site-packages\datasets\io\csv.py
|
csv.py
|
Python
| 5,265 | 0.95 | 0.137931 | 0.047244 |
react-lib
| 102 |
2024-11-11T20:20:07.685284
|
GPL-3.0
| false |
3e18f6ccd1d47b15e6195a7cc890e594
|
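In practice CsvDatasetReader and CsvDatasetWriter sit behind load_dataset("csv", ...) and Dataset.to_csv(...). A hedged round-trip sketch; the file name is illustrative:

    from datasets import Dataset, load_dataset

    ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})

    # Dataset.to_csv drives CsvDatasetWriter; passing num_proc > 1 would take the
    # multiprocessing path shown above.
    ds.to_csv("toy.csv", index=False)

    # load_dataset("csv", ...) goes through CsvDatasetReader and the Csv packaged builder.
    reloaded = load_dataset("csv", data_files="toy.csv", split="train")
    print(reloaded[0])  # expected: {'a': 1, 'b': 'x'}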
from typing import Callable, Optional\n\nfrom .. import Features, NamedSplit, Split\nfrom ..packaged_modules.generator.generator import Generator\nfrom .abc import AbstractDatasetInputStream\n\n\nclass GeneratorDatasetInputStream(AbstractDatasetInputStream):\n def __init__(\n self,\n generator: Callable,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n gen_kwargs: Optional[dict] = None,\n num_proc: Optional[int] = None,\n split: NamedSplit = Split.TRAIN,\n **kwargs,\n ):\n super().__init__(\n features=features,\n cache_dir=cache_dir,\n keep_in_memory=keep_in_memory,\n streaming=streaming,\n num_proc=num_proc,\n **kwargs,\n )\n self.builder = Generator(\n cache_dir=cache_dir,\n features=features,\n generator=generator,\n gen_kwargs=gen_kwargs,\n split=split,\n **kwargs,\n )\n\n def read(self):\n # Build iterable dataset\n if self.streaming:\n dataset = self.builder.as_streaming_dataset(split=self.builder.config.split)\n # Build regular (map-style) dataset\n else:\n download_config = None\n download_mode = None\n verification_mode = None\n base_path = None\n\n self.builder.download_and_prepare(\n download_config=download_config,\n download_mode=download_mode,\n verification_mode=verification_mode,\n base_path=base_path,\n num_proc=self.num_proc,\n )\n dataset = self.builder.as_dataset(\n split=self.builder.config.split, verification_mode=verification_mode, in_memory=self.keep_in_memory\n )\n return dataset\n
|
.venv\Lib\site-packages\datasets\io\generator.py
|
generator.py
|
Python
| 1,909 | 0.95 | 0.067797 | 0.092593 |
awesome-app
| 607 |
2024-01-09T13:10:28.362580
|
MIT
| false |
e25ba26d2585add970e0b04e26e1746e
|
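GeneratorDatasetInputStream is the backend of Dataset.from_generator. A minimal sketch; the generator and its gen_kwargs are illustrative:

    from datasets import Dataset

    def squares(n):
        # Each yielded dict becomes one example.
        for i in range(n):
            yield {"i": i, "sq": i * i}

    # from_generator forwards gen_kwargs to the generator and caches the result
    # like any other builder-backed dataset.
    ds = Dataset.from_generator(squares, gen_kwargs={"n": 5})
    print(len(ds), ds[2])  # expected: 5 {'i': 2, 'sq': 4}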
import multiprocessing\nimport os\nfrom typing import BinaryIO, Optional, Union\n\nimport fsspec\n\nfrom .. import Dataset, Features, NamedSplit, config\nfrom ..formatting import query_table\nfrom ..packaged_modules.json.json import Json\nfrom ..utils import tqdm as hf_tqdm\nfrom ..utils.typing import NestedDataStructureLike, PathLike\nfrom .abc import AbstractDatasetReader\n\n\nclass JsonDatasetReader(AbstractDatasetReader):\n def __init__(\n self,\n path_or_paths: NestedDataStructureLike[PathLike],\n split: Optional[NamedSplit] = None,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n field: Optional[str] = None,\n num_proc: Optional[int] = None,\n **kwargs,\n ):\n super().__init__(\n path_or_paths,\n split=split,\n features=features,\n cache_dir=cache_dir,\n keep_in_memory=keep_in_memory,\n streaming=streaming,\n num_proc=num_proc,\n **kwargs,\n )\n self.field = field\n path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\n self.builder = Json(\n cache_dir=cache_dir,\n data_files=path_or_paths,\n features=features,\n field=field,\n **kwargs,\n )\n\n def read(self):\n # Build iterable dataset\n if self.streaming:\n dataset = self.builder.as_streaming_dataset(split=self.split)\n # Build regular (map-style) dataset\n else:\n download_config = None\n download_mode = None\n verification_mode = None\n base_path = None\n\n self.builder.download_and_prepare(\n download_config=download_config,\n download_mode=download_mode,\n verification_mode=verification_mode,\n base_path=base_path,\n num_proc=self.num_proc,\n )\n dataset = self.builder.as_dataset(\n split=self.split, verification_mode=verification_mode, in_memory=self.keep_in_memory\n )\n return dataset\n\n\nclass JsonDatasetWriter:\n def __init__(\n self,\n dataset: Dataset,\n path_or_buf: Union[PathLike, BinaryIO],\n batch_size: Optional[int] = None,\n num_proc: Optional[int] = None,\n storage_options: Optional[dict] = None,\n **to_json_kwargs,\n ):\n if num_proc is not None and num_proc <= 0:\n raise ValueError(f"num_proc {num_proc} must be an integer > 0.")\n\n self.dataset = dataset\n self.path_or_buf = path_or_buf\n self.batch_size = batch_size if batch_size else config.DEFAULT_MAX_BATCH_SIZE\n self.num_proc = num_proc\n self.encoding = "utf-8"\n self.storage_options = storage_options or {}\n self.to_json_kwargs = to_json_kwargs\n\n def write(self) -> int:\n _ = self.to_json_kwargs.pop("path_or_buf", None)\n orient = self.to_json_kwargs.pop("orient", "records")\n lines = self.to_json_kwargs.pop("lines", True if orient == "records" else False)\n if "index" not in self.to_json_kwargs and orient in ["split", "table"]:\n self.to_json_kwargs["index"] = False\n\n # Determine the default compression value based on self.path_or_buf type\n default_compression = "infer" if isinstance(self.path_or_buf, (str, bytes, os.PathLike)) else None\n compression = self.to_json_kwargs.pop("compression", default_compression)\n\n if compression not in [None, "infer", "gzip", "bz2", "xz"]:\n raise NotImplementedError(f"`datasets` currently does not support {compression} compression")\n\n if not lines and self.batch_size < self.dataset.num_rows:\n raise NotImplementedError(\n "Output JSON will not be formatted correctly when lines = False and batch_size < number of rows in the dataset. 
Use pandas.DataFrame.to_json() instead."\n )\n\n if isinstance(self.path_or_buf, (str, bytes, os.PathLike)):\n with fsspec.open(\n self.path_or_buf, "wb", compression=compression, **(self.storage_options or {})\n ) as buffer:\n written = self._write(file_obj=buffer, orient=orient, lines=lines, **self.to_json_kwargs)\n else:\n if compression:\n raise NotImplementedError(\n f"The compression parameter is not supported when writing to a buffer, but compression={compression}"\n " was passed. Please provide a local path instead."\n )\n written = self._write(file_obj=self.path_or_buf, orient=orient, lines=lines, **self.to_json_kwargs)\n return written\n\n def _batch_json(self, args):\n offset, orient, lines, to_json_kwargs = args\n\n batch = query_table(\n table=self.dataset.data,\n key=slice(offset, offset + self.batch_size),\n indices=self.dataset._indices,\n )\n json_str = batch.to_pandas().to_json(path_or_buf=None, orient=orient, lines=lines, **to_json_kwargs)\n if not json_str.endswith("\n"):\n json_str += "\n"\n return json_str.encode(self.encoding)\n\n def _write(\n self,\n file_obj: BinaryIO,\n orient,\n lines,\n **to_json_kwargs,\n ) -> int:\n """Writes the pyarrow table as JSON lines to a binary file handle.\n\n Caller is responsible for opening and closing the handle.\n """\n written = 0\n\n if self.num_proc is None or self.num_proc == 1:\n for offset in hf_tqdm(\n range(0, len(self.dataset), self.batch_size),\n unit="ba",\n desc="Creating json from Arrow format",\n ):\n json_str = self._batch_json((offset, orient, lines, to_json_kwargs))\n written += file_obj.write(json_str)\n else:\n num_rows, batch_size = len(self.dataset), self.batch_size\n with multiprocessing.Pool(self.num_proc) as pool:\n for json_str in hf_tqdm(\n pool.imap(\n self._batch_json,\n [(offset, orient, lines, to_json_kwargs) for offset in range(0, num_rows, batch_size)],\n ),\n total=(num_rows // batch_size) + 1 if num_rows % batch_size else num_rows // batch_size,\n unit="ba",\n desc="Creating json from Arrow format",\n ):\n written += file_obj.write(json_str)\n\n return written\n
|
.venv\Lib\site-packages\datasets\io\json.py
|
json.py
|
Python
| 6,697 | 0.95 | 0.149425 | 0.051948 |
node-utils
| 806 |
2024-12-23T21:09:42.115743
|
Apache-2.0
| false |
617066c31ed726ea2f6fb5c86f22d47d
|
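JsonDatasetWriter defaults to JSON Lines (orient="records", lines=True), which is why the non-lines path above refuses batched output. A hedged sketch of the usual entry points; file names are illustrative:

    from datasets import Dataset, load_dataset

    ds = Dataset.from_dict({"q": ["hi", "bye"], "len": [2, 3]})

    # Dataset.to_json drives JsonDatasetWriter; by default one JSON object per line.
    ds.to_json("toy.jsonl")

    # Reading back goes through JsonDatasetReader and the Json packaged builder.
    reloaded = load_dataset("json", data_files="toy.jsonl", split="train")
    print(reloaded.column_names)  # expected: ['q', 'len']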
import os\nfrom typing import BinaryIO, Optional, Union\n\nimport fsspec\nimport pyarrow.parquet as pq\n\nfrom .. import Dataset, Features, NamedSplit, config\nfrom ..arrow_writer import get_writer_batch_size\nfrom ..formatting import query_table\nfrom ..packaged_modules import _PACKAGED_DATASETS_MODULES\nfrom ..packaged_modules.parquet.parquet import Parquet\nfrom ..utils import tqdm as hf_tqdm\nfrom ..utils.typing import NestedDataStructureLike, PathLike\nfrom .abc import AbstractDatasetReader\n\n\nclass ParquetDatasetReader(AbstractDatasetReader):\n def __init__(\n self,\n path_or_paths: NestedDataStructureLike[PathLike],\n split: Optional[NamedSplit] = None,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n num_proc: Optional[int] = None,\n **kwargs,\n ):\n super().__init__(\n path_or_paths,\n split=split,\n features=features,\n cache_dir=cache_dir,\n keep_in_memory=keep_in_memory,\n streaming=streaming,\n num_proc=num_proc,\n **kwargs,\n )\n path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\n hash = _PACKAGED_DATASETS_MODULES["parquet"][1]\n self.builder = Parquet(\n cache_dir=cache_dir,\n data_files=path_or_paths,\n features=features,\n hash=hash,\n **kwargs,\n )\n\n def read(self):\n # Build iterable dataset\n if self.streaming:\n dataset = self.builder.as_streaming_dataset(split=self.split)\n # Build regular (map-style) dataset\n else:\n download_config = None\n download_mode = None\n verification_mode = None\n base_path = None\n\n self.builder.download_and_prepare(\n download_config=download_config,\n download_mode=download_mode,\n verification_mode=verification_mode,\n base_path=base_path,\n num_proc=self.num_proc,\n )\n dataset = self.builder.as_dataset(\n split=self.split, verification_mode=verification_mode, in_memory=self.keep_in_memory\n )\n return dataset\n\n\nclass ParquetDatasetWriter:\n def __init__(\n self,\n dataset: Dataset,\n path_or_buf: Union[PathLike, BinaryIO],\n batch_size: Optional[int] = None,\n storage_options: Optional[dict] = None,\n **parquet_writer_kwargs,\n ):\n self.dataset = dataset\n self.path_or_buf = path_or_buf\n self.batch_size = batch_size or get_writer_batch_size(dataset.features)\n self.storage_options = storage_options or {}\n self.parquet_writer_kwargs = parquet_writer_kwargs\n\n def write(self) -> int:\n batch_size = self.batch_size if self.batch_size else config.DEFAULT_MAX_BATCH_SIZE\n\n if isinstance(self.path_or_buf, (str, bytes, os.PathLike)):\n with fsspec.open(self.path_or_buf, "wb", **(self.storage_options or {})) as buffer:\n written = self._write(file_obj=buffer, batch_size=batch_size, **self.parquet_writer_kwargs)\n else:\n written = self._write(file_obj=self.path_or_buf, batch_size=batch_size, **self.parquet_writer_kwargs)\n return written\n\n def _write(self, file_obj: BinaryIO, batch_size: int, **parquet_writer_kwargs) -> int:\n """Writes the pyarrow table as Parquet to a binary file handle.\n\n Caller is responsible for opening and closing the handle.\n """\n written = 0\n _ = parquet_writer_kwargs.pop("path_or_buf", None)\n schema = self.dataset.features.arrow_schema\n\n writer = pq.ParquetWriter(file_obj, schema=schema, **parquet_writer_kwargs)\n\n for offset in hf_tqdm(\n range(0, len(self.dataset), batch_size),\n unit="ba",\n desc="Creating parquet from Arrow format",\n ):\n batch = query_table(\n table=self.dataset._data,\n key=slice(offset, offset + batch_size),\n indices=self.dataset._indices,\n )\n 
writer.write_table(batch)\n written += batch.nbytes\n writer.close()\n return written\n
|
.venv\Lib\site-packages\datasets\io\parquet.py
|
parquet.py
|
Python
| 4,354 | 0.95 | 0.106557 | 0.055556 |
awesome-app
| 909 |
2023-11-30T08:53:26.227355
|
MIT
| false |
c0ee71665fb1b2cf777ba3078d1183fe
|
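ParquetDatasetWriter streams the Arrow table into a single Parquet file in batches sized by get_writer_batch_size (or the config default); the returned count is accumulated from the Arrow batches' nbytes rather than the on-disk size. A hedged usage sketch with an illustrative file name:

    from datasets import Dataset, load_dataset

    ds = Dataset.from_dict({"x": list(range(10))})

    # Dataset.to_parquet drives ParquetDatasetWriter.
    n_bytes = ds.to_parquet("toy.parquet")

    # Reading back goes through ParquetDatasetReader and the Parquet packaged builder.
    reloaded = load_dataset("parquet", data_files="toy.parquet", split="train")
    assert reloaded["x"] == list(range(10))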
from typing import Optional\n\nimport pyspark\n\nfrom .. import Features, NamedSplit\nfrom ..download import DownloadMode\nfrom ..packaged_modules.spark.spark import Spark\nfrom .abc import AbstractDatasetReader\n\n\nclass SparkDatasetReader(AbstractDatasetReader):\n """A dataset reader that reads from a Spark DataFrame.\n\n When caching, cache materialization is parallelized over Spark; an NFS that is accessible to the driver must be\n provided. Streaming is not currently supported.\n """\n\n def __init__(\n self,\n df: pyspark.sql.DataFrame,\n split: Optional[NamedSplit] = None,\n features: Optional[Features] = None,\n streaming: bool = True,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n working_dir: str = None,\n load_from_cache_file: bool = True,\n file_format: str = "arrow",\n **kwargs,\n ):\n super().__init__(\n split=split,\n features=features,\n cache_dir=cache_dir,\n keep_in_memory=keep_in_memory,\n streaming=streaming,\n **kwargs,\n )\n self._load_from_cache_file = load_from_cache_file\n self._file_format = file_format\n self.builder = Spark(\n df=df,\n features=features,\n cache_dir=cache_dir,\n working_dir=working_dir,\n **kwargs,\n )\n\n def read(self):\n if self.streaming:\n return self.builder.as_streaming_dataset(split=self.split)\n download_mode = None if self._load_from_cache_file else DownloadMode.FORCE_REDOWNLOAD\n self.builder.download_and_prepare(\n download_mode=download_mode,\n file_format=self._file_format,\n )\n return self.builder.as_dataset(split=self.split)\n
|
.venv\Lib\site-packages\datasets\io\spark.py
|
spark.py
|
Python
| 1,797 | 0.85 | 0.087719 | 0.06 |
python-kit
| 540 |
2024-09-13T08:38:10.471562
|
BSD-3-Clause
| false |
0a5d0a1fae2f86803d9289e05c5dcb73
|
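SparkDatasetReader is normally reached through Dataset.from_spark. The sketch below assumes PySpark is installed, a local session is acceptable, and that from_spark accepts the DataFrame directly, as the reader signature above suggests:

    from pyspark.sql import SparkSession
    from datasets import Dataset

    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "txt"])

    # When not streaming, cache materialization is parallelized over Spark, as noted
    # in the reader docstring; here we just materialize a map-style dataset.
    ds = Dataset.from_spark(df)
    print(ds.column_names)  # expected: ['id', 'txt']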
import multiprocessing\nfrom typing import TYPE_CHECKING, Optional, Union\n\nfrom .. import Dataset, Features, config\nfrom ..formatting import query_table\nfrom ..packaged_modules.sql.sql import Sql\nfrom ..utils import tqdm as hf_tqdm\nfrom .abc import AbstractDatasetInputStream\n\n\nif TYPE_CHECKING:\n import sqlite3\n\n import sqlalchemy\n\n\nclass SqlDatasetReader(AbstractDatasetInputStream):\n def __init__(\n self,\n sql: Union[str, "sqlalchemy.sql.Selectable"],\n con: Union[str, "sqlalchemy.engine.Connection", "sqlalchemy.engine.Engine", "sqlite3.Connection"],\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n **kwargs,\n ):\n super().__init__(features=features, cache_dir=cache_dir, keep_in_memory=keep_in_memory, **kwargs)\n self.builder = Sql(\n cache_dir=cache_dir,\n features=features,\n sql=sql,\n con=con,\n **kwargs,\n )\n\n def read(self):\n download_config = None\n download_mode = None\n verification_mode = None\n base_path = None\n\n self.builder.download_and_prepare(\n download_config=download_config,\n download_mode=download_mode,\n verification_mode=verification_mode,\n base_path=base_path,\n )\n\n # Build dataset for splits\n dataset = self.builder.as_dataset(\n split="train", verification_mode=verification_mode, in_memory=self.keep_in_memory\n )\n return dataset\n\n\nclass SqlDatasetWriter:\n def __init__(\n self,\n dataset: Dataset,\n name: str,\n con: Union[str, "sqlalchemy.engine.Connection", "sqlalchemy.engine.Engine", "sqlite3.Connection"],\n batch_size: Optional[int] = None,\n num_proc: Optional[int] = None,\n **to_sql_kwargs,\n ):\n if num_proc is not None and num_proc <= 0:\n raise ValueError(f"num_proc {num_proc} must be an integer > 0.")\n\n self.dataset = dataset\n self.name = name\n self.con = con\n self.batch_size = batch_size if batch_size else config.DEFAULT_MAX_BATCH_SIZE\n self.num_proc = num_proc\n self.to_sql_kwargs = to_sql_kwargs\n\n def write(self) -> int:\n _ = self.to_sql_kwargs.pop("sql", None)\n _ = self.to_sql_kwargs.pop("con", None)\n index = self.to_sql_kwargs.pop("index", False)\n\n written = self._write(index=index, **self.to_sql_kwargs)\n return written\n\n def _batch_sql(self, args):\n offset, index, to_sql_kwargs = args\n to_sql_kwargs = {**to_sql_kwargs, "if_exists": "append"} if offset > 0 else to_sql_kwargs\n batch = query_table(\n table=self.dataset.data,\n key=slice(offset, offset + self.batch_size),\n indices=self.dataset._indices,\n )\n df = batch.to_pandas()\n num_rows = df.to_sql(self.name, self.con, index=index, **to_sql_kwargs)\n return num_rows or len(df)\n\n def _write(self, index, **to_sql_kwargs) -> int:\n """Writes the pyarrow table as SQL to a database.\n\n Caller is responsible for opening and closing the SQL connection.\n """\n written = 0\n\n if self.num_proc is None or self.num_proc == 1:\n for offset in hf_tqdm(\n range(0, len(self.dataset), self.batch_size),\n unit="ba",\n desc="Creating SQL from Arrow format",\n ):\n written += self._batch_sql((offset, index, to_sql_kwargs))\n else:\n num_rows, batch_size = len(self.dataset), self.batch_size\n with multiprocessing.Pool(self.num_proc) as pool:\n for num_rows in hf_tqdm(\n pool.imap(\n self._batch_sql,\n [(offset, index, to_sql_kwargs) for offset in range(0, num_rows, batch_size)],\n ),\n total=(num_rows // batch_size) + 1 if num_rows % batch_size else num_rows // batch_size,\n unit="ba",\n desc="Creating SQL from Arrow format",\n ):\n written += num_rows\n\n return written\n
|
.venv\Lib\site-packages\datasets\io\sql.py
|
sql.py
|
Python
| 4,234 | 0.95 | 0.153226 | 0.038095 |
react-lib
| 797 |
2024-08-05T06:51:14.686883
|
GPL-3.0
| false |
887047dbeaae93e0b1b3a0d0a4c73e5d
|
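SqlDatasetReader and SqlDatasetWriter back Dataset.from_sql and Dataset.to_sql. A hedged sketch against a SQLite database addressed by URI (which assumes SQLAlchemy is installed); the table name and path are illustrative:

    from datasets import Dataset

    ds = Dataset.from_dict({"name": ["ada", "bob"], "score": [10, 7]})

    # to_sql drives SqlDatasetWriter; the connection may be a URI string, a SQLAlchemy
    # engine/connection, or a sqlite3 connection, per the type hints above.
    ds.to_sql("people", "sqlite:///toy.db")

    # from_sql drives SqlDatasetReader; the first argument is a table name or SQL query.
    reloaded = Dataset.from_sql("SELECT * FROM people", "sqlite:///toy.db")
    print(reloaded[0])  # expected: {'name': 'ada', 'score': 10}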
from typing import Optional\n\nfrom .. import Features, NamedSplit\nfrom ..packaged_modules.text.text import Text\nfrom ..utils.typing import NestedDataStructureLike, PathLike\nfrom .abc import AbstractDatasetReader\n\n\nclass TextDatasetReader(AbstractDatasetReader):\n def __init__(\n self,\n path_or_paths: NestedDataStructureLike[PathLike],\n split: Optional[NamedSplit] = None,\n features: Optional[Features] = None,\n cache_dir: str = None,\n keep_in_memory: bool = False,\n streaming: bool = False,\n num_proc: Optional[int] = None,\n **kwargs,\n ):\n super().__init__(\n path_or_paths,\n split=split,\n features=features,\n cache_dir=cache_dir,\n keep_in_memory=keep_in_memory,\n streaming=streaming,\n num_proc=num_proc,\n **kwargs,\n )\n path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\n self.builder = Text(\n cache_dir=cache_dir,\n data_files=path_or_paths,\n features=features,\n **kwargs,\n )\n\n def read(self):\n # Build iterable dataset\n if self.streaming:\n dataset = self.builder.as_streaming_dataset(split=self.split)\n # Build regular (map-style) dataset\n else:\n download_config = None\n download_mode = None\n verification_mode = None\n base_path = None\n\n self.builder.download_and_prepare(\n download_config=download_config,\n download_mode=download_mode,\n verification_mode=verification_mode,\n base_path=base_path,\n num_proc=self.num_proc,\n )\n dataset = self.builder.as_dataset(\n split=self.split, verification_mode=verification_mode, in_memory=self.keep_in_memory\n )\n return dataset\n
|
.venv\Lib\site-packages\datasets\io\text.py
|
text.py
|
Python
| 1,975 | 0.95 | 0.083333 | 0.090909 |
node-utils
| 258 |
2024-11-07T16:39:31.037994
|
GPL-3.0
| false |
4526bf1feffd4aad06fa4e23f46389ef
|
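TextDatasetReader is the builder-backed path behind load_dataset("text", ...), which yields one example per line under a single "text" column. A short sketch; the file name is illustrative:

    from datasets import load_dataset

    with open("notes.txt", "w", encoding="utf-8") as f:
        f.write("first line\nsecond line\n")

    # "text" routes through TextDatasetReader and the Text packaged builder.
    ds = load_dataset("text", data_files="notes.txt", split="train")
    print(ds["text"])  # expected: ['first line', 'second line']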
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\abc.cpython-313.pyc
|
abc.cpython-313.pyc
|
Other
| 2,757 | 0.8 | 0 | 0 |
react-lib
| 612 |
2025-01-13T11:35:03.326617
|
BSD-3-Clause
| false |
77efa289c806ced8580e25110d637092
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\csv.cpython-313.pyc
|
csv.cpython-313.pyc
|
Other
| 7,047 | 0.8 | 0.010204 | 0 |
vue-tools
| 697 |
2025-04-30T20:38:55.593176
|
Apache-2.0
| false |
5681158cac11ef999b7039aa15de4a95
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\generator.cpython-313.pyc
|
generator.cpython-313.pyc
|
Other
| 2,479 | 0.8 | 0 | 0 |
awesome-app
| 898 |
2023-10-12T15:23:00.940161
|
BSD-3-Clause
| false |
3b32dbe3d788bd6f52dd942631b86231
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\json.cpython-313.pyc
|
json.cpython-313.pyc
|
Other
| 8,390 | 0.8 | 0.009615 | 0.010309 |
awesome-app
| 553 |
2024-01-02T02:01:55.547252
|
BSD-3-Clause
| false |
4f274876f8d947a3c1f3774ed83671da
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\parquet.cpython-313.pyc
|
parquet.cpython-313.pyc
|
Other
| 5,948 | 0.8 | 0.011765 | 0 |
awesome-app
| 240 |
2025-01-13T15:37:28.224491
|
MIT
| false |
b71a2e1bef790f78f88d2b5003e08831
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\spark.cpython-313.pyc
|
spark.cpython-313.pyc
|
Other
| 2,629 | 0.8 | 0 | 0 |
vue-tools
| 656 |
2025-06-15T02:12:37.762439
|
GPL-3.0
| false |
8f28b7455d59cd23da0d4aa9be22cdef
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\sql.cpython-313.pyc
|
sql.cpython-313.pyc
|
Other
| 5,824 | 0.8 | 0.012658 | 0.013514 |
react-lib
| 769 |
2023-12-16T18:21:50.009992
|
GPL-3.0
| false |
f861b8923740093b96e9bb83db313c9e
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\text.cpython-313.pyc
|
text.cpython-313.pyc
|
Other
| 2,430 | 0.8 | 0 | 0 |
node-utils
| 976 |
2024-03-17T17:01:50.388991
|
MIT
| false |
b73519a8c49926bc3f220a162f02447c
|
\n\n
|
.venv\Lib\site-packages\datasets\io\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 186 | 0.7 | 0 | 0 |
react-lib
| 150 |
2024-11-19T22:29:16.047262
|
Apache-2.0
| false |
9e857124c9c45d038703bbe72b8d321c
|