column               type           range / classes
content              stringlengths  1 to 103k
path                 stringlengths  8 to 216
filename             stringlengths  2 to 179
language             stringclasses  15 values
size_bytes           int64          2 to 189k
quality_score        float64        0.5 to 0.95
complexity           float64        0 to 1
documentation_ratio  float64        0 to 1
repository           stringclasses  5 values
stars                int64          0 to 1k
created_date         stringdate     2023-07-10 19:21:08 to 2025-07-09 19:11:45
license              stringclasses  4 values
is_test              bool           2 classes
file_hash            stringlengths  32 to 32
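The rows below follow this schema, with the content field holding the raw file text (newlines escaped as \n) and the remaining fields appearing in the order listed above. A minimal sketch of how such rows might be loaded and filtered, assuming the dump has been exported to a local Parquet file named "files.parquet" (a hypothetical name, not given above) and that pandas with a Parquet engine is available:

```python
# Minimal sketch, not part of the dataset itself: load the dump and keep
# non-test Python files with a high quality_score, using the column names
# from the schema above. The file name "files.parquet" is an assumption.
import pandas as pd

df = pd.read_parquet("files.parquet")

python_files = df[
    (df["language"] == "Python")      # language column, 15 classes
    & (~df["is_test"])                # bool column, 2 classes
    & (df["quality_score"] >= 0.9)    # float64, 0.5 to 0.95
]

# path, size_bytes, license, and file_hash are per-row metadata as in the
# records below; content would hold the full file text.
for _, row in python_files.iterrows():
    print(row["path"], row["size_bytes"], row["license"], row["file_hash"])
```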
\n\n
.venv\Lib\site-packages\ipywidgets\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
2,130
0.95
0.08
0
python-kit
937
2024-12-21T19:44:26.511866
GPL-3.0
false
69135b8fd568b04e25009510082d5eee
pip\n
.venv\Lib\site-packages\ipywidgets-8.1.7.dist-info\INSTALLER
INSTALLER
Other
4
0.5
0
0
python-kit
728
2024-03-01T02:18:55.055453
MIT
false
365c9bfeb7d89244f2ce01c1de44cb85
Metadata-Version: 2.4\nName: ipywidgets\nVersion: 8.1.7\nSummary: Jupyter interactive widgets\nHome-page: http://jupyter.org\nAuthor: Jupyter Development Team\nAuthor-email: jupyter@googlegroups.com\nLicense: BSD 3-Clause License\nKeywords: Interactive,Interpreter,Shell,Web,ipython,widgets,Jupyter\nPlatform: Linux\nPlatform: Mac OS X\nPlatform: Windows\nClassifier: Intended Audience :: Developers\nClassifier: Intended Audience :: System Administrators\nClassifier: Intended Audience :: Science/Research\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Programming Language :: Python :: 3 :: Only\nClassifier: Framework :: Jupyter\nRequires-Python: >=3.7\nDescription-Content-Type: text/markdown\nLicense-File: LICENSE\nRequires-Dist: comm>=0.1.3\nRequires-Dist: ipython>=6.1.0\nRequires-Dist: traitlets>=4.3.1\nRequires-Dist: widgetsnbextension~=4.0.14\nRequires-Dist: jupyterlab_widgets~=3.0.15\nProvides-Extra: test\nRequires-Dist: jsonschema; extra == "test"\nRequires-Dist: ipykernel; extra == "test"\nRequires-Dist: pytest>=3.6.0; extra == "test"\nRequires-Dist: pytest-cov; extra == "test"\nRequires-Dist: pytz; extra == "test"\nDynamic: license-file\n\n# ipywidgets: Interactive HTML Widgets\n\n**ipywidgets**, also known as jupyter-widgets or simply widgets, are\n[interactive HTML widgets](https://github.com/jupyter-widgets/ipywidgets/blob/main/docs/source/examples/Index.ipynb)\nfor Jupyter notebooks and the IPython kernel.\n\nThis package contains the python implementation of the core interactive widgets bundled in ipywidgets.\n\n## Core Interactive Widgets\n\nThe fundamental widgets provided by this library are called core interactive\nwidgets. A [demonstration notebook](https://github.com/jupyter-widgets/ipywidgets/blob/main/docs/source/examples/Index.ipynb)\nprovides an overview of the core interactive widgets, including:\n\n- sliders\n- progress bars\n- text boxes\n- toggle buttons and checkboxes\n- display areas\n- and more\n\nFor more information, see the main [documentation](https://github.com/jupyter-widgets/ipywidgets#readme).\n
.venv\Lib\site-packages\ipywidgets-8.1.7.dist-info\METADATA
METADATA
Other
2,371
0.8
0.015873
0.053571
vue-tools
480
2024-08-01T05:00:27.708007
MIT
false
7ccc4294ea646db61d79877540d78729
ipywidgets-8.1.7.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nipywidgets-8.1.7.dist-info/METADATA,sha256=09d_LRv97wpEEUwSQNG9TuXzgh0zMAKzZhVTWJSL3a4,2371\nipywidgets-8.1.7.dist-info/RECORD,,\nipywidgets-8.1.7.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nipywidgets-8.1.7.dist-info/WHEEL,sha256=0CuiUZ_p9E4cD6NyLD6UG80LBXYyiSYZOKDm5lp32xk,91\nipywidgets-8.1.7.dist-info/licenses/LICENSE,sha256=5awr4wsrzv4zDunlrX_lsNEVb-yzVV_KqpCKjSn5_e0,1513\nipywidgets-8.1.7.dist-info/top_level.txt,sha256=Q6IYzaNK-b3ZCOEt1514fjWEXOfh-WAjCWZTcObuHkg,11\nipywidgets/__init__.py,sha256=OuA1lvgMj-Dq8u4PaYqI24mpivI6nIBPQSfQCrv7FeM,1728\nipywidgets/__pycache__/__init__.cpython-313.pyc,,\nipywidgets/__pycache__/_version.cpython-313.pyc,,\nipywidgets/__pycache__/comm.cpython-313.pyc,,\nipywidgets/__pycache__/embed.cpython-313.pyc,,\nipywidgets/_version.py,sha256=fBUHQQW5ter4_U8GcVmxJ3lniobJnnmpOh0Cjr3rvQE,610\nipywidgets/comm.py,sha256=y7w0e39H5Ig69bPABmSeHCHV3CxsAkm-I24CkAgT10c,764\nipywidgets/embed.py,sha256=K2YrBtHgRxkWjoRV09PRNNNsJkj9XnnvyXCd_U6qAvg,11287\nipywidgets/state.schema.json,sha256=YPSqZqFDioDGeTVkgm61kob6lrKo74m4Q50nw6szS_4,2883\nipywidgets/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nipywidgets/tests/__pycache__/__init__.cpython-313.pyc,,\nipywidgets/tests/__pycache__/test_embed.cpython-313.pyc,,\nipywidgets/tests/test_embed.py,sha256=3AKqjcI8kfEumEslrFqOsjewXIvgHnqEe8-bclYqgmI,6729\nipywidgets/view.schema.json,sha256=2IvzD9yH0_KszTivptVAJfgmAS0ZFIC7q347U1YtWk0,595\nipywidgets/widgets/__init__.py,sha256=u1hzkzWFRmdp9jOl-RM4mKrjnxbISsopmNT9WzS8WjQ,1709\nipywidgets/widgets/__pycache__/__init__.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/docutils.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/domwidget.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/interaction.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/trait_types.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/utils.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/valuewidget.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_bool.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_box.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_button.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_color.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_controller.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_core.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_date.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_datetime.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_description.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_float.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_int.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_layout.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_link.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_media.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_output.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_selection.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_selectioncontainer.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_string.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_style.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_tagsinput.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_templates.cpython-313.pyc,,\nipywidgets/widgets/__pycache__/widget_time.cpython-313.pyc,,\nipywidg
ets/widgets/__pycache__/widget_upload.cpython-313.pyc,,\nipywidgets/widgets/docutils.py,sha256=wDicSkSBiI-exhm-YN_yC2YF5kY2VLQMlEvb_Amgzm8,639\nipywidgets/widgets/domwidget.py,sha256=b_Gb6RQJKxCqnrjTZiwr7cgF6VsWqWYs7PsNDNnNzPA,2290\nipywidgets/widgets/interaction.py,sha256=qhrIbungDdH7RXmRxVdHDtnrMj29X9THGEnQRtJsUdQ,21249\nipywidgets/widgets/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nipywidgets/widgets/tests/__pycache__/__init__.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_datetime_serializers.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_docutils.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_interaction.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_link.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_selectioncontainer.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_send_state.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_set_state.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_traits.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_utils.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_box.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_button.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_datetime.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_float.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_image.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_naive_datetime.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_output.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_selection.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_string.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_templates.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_time.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/test_widget_upload.cpython-313.pyc,,\nipywidgets/widgets/tests/__pycache__/utils.cpython-313.pyc,,\nipywidgets/widgets/tests/data/jupyter-logo-transparent.png,sha256=P_nq_XGXCDFT6DM5py56M1U5uuGJwzVUxoDkOCyYrwI,36297\nipywidgets/widgets/tests/test_datetime_serializers.py,sha256=Zo3v-aiM3Xfx_HYiMTqoF3wDld-H-Gr2zoPLgbDqBcM,2111\nipywidgets/widgets/tests/test_docutils.py,sha256=jb3nb3crL76ZsURAWCkQPYMg1Di22M5Wcq1UC-0Nels,669\nipywidgets/widgets/tests/test_interaction.py,sha256=8AAAjbNzTXxA0fMDBNgnxyr-hujDJYiGbEOLeS94nqI,17849\nipywidgets/widgets/tests/test_link.py,sha256=CDcoFWs3ZfXtrSDqmgxfMF1-2ZjA5pzaz-EepHMRlyM,970\nipywidgets/widgets/tests/test_selectioncontainer.py,sha256=cqE0nycjptKRvqbsNVkGdGORC0KQNOAg8p0VgHaHL2Y,5619\nipywidgets/widgets/tests/test_send_state.py,sha256=0IBwSB1m_bU2vZtqGK5FX3eCVeqmx4r-pZ-OrNBBpzQ,711\nipywidgets/widgets/tests/test_set_state.py,sha256=z2bGBsJPvh6go13PQZngFIxk0mRwBqGGAO5Tc6ihPRw,11007\nipywidgets/widgets/tests/test_traits.py,sha256=ZxtiNiOAXepCjtRlTK0Mw9uKSe8WiHIudR2DylWWQjM,8361\nipywidgets/widgets/tests/test_utils.py,sha256=KOzHjVLThxV9SHtBxccgXcjzfrapz2c76iS4AMok8oo,2478\nipywidgets/widgets/tests/test_widget.py,sha256=vnSpj43CYsYUqOnNiga1UcIBeX6O6eYpi4g4bexwYcI,2811\nipywidgets/widgets/tests/test_widget_box.py,sha256=7RmP7dk4b1XkW3F6q0u9uaR3os_3re3067Hywx2OQd0,995\nipywidgets/widgets/tests/test_widget_button.py,sha256=nB3BWxabzJC3wJ5f1L-15wf87VaMeFJImyc6zLdrMFk,369\nipywidgets/widgets/tests/test_widget_datetime.py,sha256=Ysxz
j8sZ9q1GZos1j4Rs_aT0mu8XvVlZv5o6xx0soHc,4147\nipywidgets/widgets/tests/test_widget_float.py,sha256=u3RFfoMFTnHM49-KrwmWAPX24aoiJGw1KxiIVEepueA,606\nipywidgets/widgets/tests/test_widget_image.py,sha256=ATcTxV42mEgb_6KaXQwmCg3cG0LBHz6jdUOe-GQjEuE,4422\nipywidgets/widgets/tests/test_widget_naive_datetime.py,sha256=Ur7H-n8XBwcX-V6AxZndQz9EbAlS1N5FHQ5zeaWHBbw,2633\nipywidgets/widgets/tests/test_widget_output.py,sha256=GyenGL9KWED9CSxpolqZrU0pmQMmuvvO2JEeJVh44z8,7466\nipywidgets/widgets/tests/test_widget_selection.py,sha256=LZHMPk3v7uf-un5ZeQ8GVtSVwrFT0GqsZ6ECR6DCj1s,3372\nipywidgets/widgets/tests/test_widget_string.py,sha256=pRWSA-mr4kT-_SUX6dt3mdFUdiRDESMI4ymrQKM55rM,1796\nipywidgets/widgets/tests/test_widget_templates.py,sha256=L57Ag3QX0C34PzUYLsH0evssf84ecv29CZ_riZx0NC4,28435\nipywidgets/widgets/tests/test_widget_time.py,sha256=e8As3ZhlvD6TRLon_nuPt_uKOmRq8LOjNMAgUi_wBR8,2447\nipywidgets/widgets/tests/test_widget_upload.py,sha256=ufdxHbsbB2qTM64FFKClmm90qC0OOw7LzukbCRXwOHM,4164\nipywidgets/widgets/tests/utils.py,sha256=8UCwcoilDLKbqSkc4koXoOAHvKhNPYzM10nkv7fXU9Y,2443\nipywidgets/widgets/trait_types.py,sha256=4ru8AMCBj23OpwNalLKAtsW5_Nm_7v3DQAVK141XX2Q,16270\nipywidgets/widgets/utils.py,sha256=LuP0pCR0ZN4uQCYRzKfKoKPqK78fEPAzG0JJ_f0SEKM,2608\nipywidgets/widgets/valuewidget.py,sha256=4Th-UZyC84pYMK8VvXAclcoFAYz3--wVRSEVchLLE-s,817\nipywidgets/widgets/widget.py,sha256=Z2pNVW4VrgflQMjRhP1F4ZaXIhHmU38eTuXYNtkOVNE,35129\nipywidgets/widgets/widget_bool.py,sha256=S1HV1lA7xEI6YFvOhO5FPjIQRVlkALA9v_Qv-wOU6nc,4328\nipywidgets/widgets/widget_box.py,sha256=-KJC-I0IbFb60jYP8MfdobkLax_kLRunHystyRrw9tQ,3731\nipywidgets/widgets/widget_button.py,sha256=p_3u8oEgL-khHs1_y_NcGSbMPi8rDBUuGs82J8PZpHw,4204\nipywidgets/widgets/widget_color.py,sha256=OKW_Yhq4AL1AHjM1vFDhypkCKCCJkrOlQfXFBW24XbM,807\nipywidgets/widgets/widget_controller.py,sha256=ejI7rSQZg0Y52OuAn6uBlHdhbKhTp9NffyfxqdoXhVE,2320\nipywidgets/widgets/widget_core.py,sha256=MRkZxDFTYLXWPY3dfRk-fg2LlypTyMgwc1ZWgyoGa9w,622\nipywidgets/widgets/widget_date.py,sha256=_tuU549mhHMHNzSAWoURE0pxkW-FfxsIbjes_3g8hj8,2597\nipywidgets/widgets/widget_datetime.py,sha256=-gMb7OPorCE2-_bfPSlmI7Xp_9NwB9fePdILHffUGpg,4134\nipywidgets/widgets/widget_description.py,sha256=lpNTpEduTEvw9YnBUYLzH_YkkzxqEmhRntP0joqR_Sw,2278\nipywidgets/widgets/widget_float.py,sha256=suM-DAmn7LyeKtw31KZcWo-fsJU9xL-2IGbv32DnIrE,15028\nipywidgets/widgets/widget_int.py,sha256=VUto4Z28MJ3bGTw_qTkxHaIJlxf5wvu3nBWkL37j8-k,12125\nipywidgets/widgets/widget_layout.py,sha256=_YvxUDVumX39YXpn5GFKnDYZDDlXbqwndgwjN25Zx0c,7067\nipywidgets/widgets/widget_link.py,sha256=1ZgRdSKrmXdQHoOrxpjuLm1PcTWbmtLYh3a-Ua-HeUU,3708\nipywidgets/widgets/widget_media.py,sha256=pPj126b1jePp6Ci4NIuOjYIioBXQMBNoMbQQbOz1gIY,7783\nipywidgets/widgets/widget_output.py,sha256=9aza06_azRkcUCOKHEXDYRWpub2jre_fd1cyJKGzmUA,6823\nipywidgets/widgets/widget_selection.py,sha256=rwCkfLZBUiFWdWHR0dx2lpYl-h1BR_lQfQR16oWnU8A,24617\nipywidgets/widgets/widget_selectioncontainer.py,sha256=HJvb8FJ6Xhy8pAM4SIGEL5RtyECz0i7MPyZLjnh4lf8,4198\nipywidgets/widgets/widget_string.py,sha256=SvAyBlG4dlWU5rBmO_9NivxSmFPTbUC4w_ID1q5LPDY,6869\nipywidgets/widgets/widget_style.py,sha256=oC6I5Hkfc7wv3mG0HwzipIlrDujQ1agxU5F7fm3oTz4,559\nipywidgets/widgets/widget_tagsinput.py,sha256=UbDiLh9_UayEvlUYF66qrZryr8eX9En5UEW6qaYu0UI,3498\nipywidgets/widgets/widget_templates.py,sha256=jRntJI9KKDOxBp-ebpMHfiTiLxXk9e8ooHBkycQLg9U,15507\nipywidgets/widgets/widget_time.py,sha256=IjKfVh2hQ3Chum2og2l6P8WHc0p4ZnqHnk_4yQ1Dsrk,2779\nipywidgets/widgets/widget_upload.py,sha256=1WfI2OExVtCP1jde
zhnyWVuEPP3cQZWCWo6O4D_rqkg,4637\n
.venv\Lib\site-packages\ipywidgets-8.1.7.dist-info\RECORD
RECORD
Other
10,699
0.7
0
0
awesome-app
647
2024-06-17T18:49:18.492867
GPL-3.0
false
f74a1663cd27966f818253014af3c34f
ipywidgets\n
.venv\Lib\site-packages\ipywidgets-8.1.7.dist-info\top_level.txt
top_level.txt
Other
11
0.5
0
0
react-lib
678
2023-10-23T19:34:14.947305
Apache-2.0
false
28f61ddfd28e4c53497f8fc7cd99ae8f
Wheel-Version: 1.0\nGenerator: setuptools (80.3.1)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n
.venv\Lib\site-packages\ipywidgets-8.1.7.dist-info\WHEEL
WHEEL
Other
91
0.5
0
0
python-kit
792
2024-02-24T18:19:38.714554
GPL-3.0
false
247b432e7b64e56273a8978fd6cf45b6
Copyright (c) 2015 Project Jupyter Contributors\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
.venv\Lib\site-packages\ipywidgets-8.1.7.dist-info\licenses\LICENSE
LICENSE
Other
1,513
0.7
0
0
vue-tools
960
2025-03-15T23:27:46.840369
Apache-2.0
false
20a40995a0b2f0ae1f2a70d2dc995bbf
MZ
.venv\Lib\site-packages\isapi\PyISAPI_loader.dll
PyISAPI_loader.dll
Other
69,120
0.75
0.019685
0.005941
vue-tools
904
2025-06-09T13:39:44.337815
MIT
false
1ba6930bf707adb2d5d8be0b749711a2
A Python ISAPI extension. Contributed by Phillip Frantz, and is\nCopyright 2002-2003 by Blackdog Software Pty Ltd.\n\nSee the 'samples' directory, and particularly samples\README.txt\n\nYou can find documentation in the PyWin32.chm file that comes with pywin32 -\nyou can open this from Pythonwin->Help, or from the start menu.\n
.venv\Lib\site-packages\isapi\README.txt
README.txt
Other
330
0.7
0
0
react-lib
171
2025-04-20T21:48:17.199261
BSD-3-Clause
false
a081b649e53f0e785b454fc16d9dfea4
"""Simple base-classes for extensions and filters.\n\nNone of the filter and extension functions are considered 'optional' by the\nframework. These base-classes provide simple implementations for the\nInitialize and Terminate functions, allowing you to omit them,\n\nIt is not necessary to use these base-classes - but if you don't, you\nmust ensure each of the required methods are implemented.\n"""\n\nfrom __future__ import annotations\n\n\nclass SimpleExtension:\n "Base class for a simple ISAPI extension"\n\n def __init__(self):\n pass\n\n def GetExtensionVersion(self, vi):\n """Called by the ISAPI framework to get the extension version\n\n The default implementation uses the classes docstring to\n set the extension description."""\n # nod to our reload capability - vi is None when we are reloaded.\n if vi is not None:\n vi.ExtensionDesc = self.__doc__\n\n def HttpExtensionProc(self, control_block):\n """Called by the ISAPI framework for each extension request.\n\n sub-classes must provide an implementation for this method.\n """\n raise NotImplementedError("sub-classes should override HttpExtensionProc")\n\n def TerminateExtension(self, status):\n """Called by the ISAPI framework as the extension terminates."""\n pass\n\n\nclass SimpleFilter:\n "Base class for a a simple ISAPI filter"\n\n filter_flags: int | None = None\n\n def __init__(self):\n pass\n\n def GetFilterVersion(self, fv):\n """Called by the ISAPI framework to get the extension version\n\n The default implementation uses the classes docstring to\n set the extension description, and uses the classes\n filter_flags attribute to set the ISAPI filter flags - you\n must specify filter_flags in your class.\n """\n if self.filter_flags is None:\n raise RuntimeError("You must specify the filter flags")\n # nod to our reload capability - fv is None when we are reloaded.\n if fv is not None:\n fv.Flags = self.filter_flags\n fv.FilterDesc = self.__doc__\n\n def HttpFilterProc(self, fc):\n """Called by the ISAPI framework for each filter request.\n\n sub-classes must provide an implementation for this method.\n """\n raise NotImplementedError("sub-classes should override HttpExtensionProc")\n\n def TerminateFilter(self, status):\n """Called by the ISAPI framework as the filter terminates."""\n pass\n
.venv\Lib\site-packages\isapi\simple.py
simple.py
Python
2,566
0.95
0.342466
0.037736
react-lib
299
2025-06-14T11:32:04.354202
GPL-3.0
false
e1d431debecedb746d7fc07e137b69c0
"""An ISAPI extension base class implemented using a thread-pool."""\n\n# $Id$\n\nimport sys\nimport threading\nimport time\nimport traceback\n\nfrom pywintypes import OVERLAPPED\nfrom win32event import INFINITE\nfrom win32file import (\n CloseHandle,\n CreateIoCompletionPort,\n GetQueuedCompletionStatus,\n PostQueuedCompletionStatus,\n)\nfrom win32security import SetThreadToken\n\nimport isapi.simple\nfrom isapi import ExtensionError, isapicon\n\nISAPI_REQUEST = 1\nISAPI_SHUTDOWN = 2\n\n\nclass WorkerThread(threading.Thread):\n def __init__(self, extension, io_req_port):\n self.running = False\n self.io_req_port = io_req_port\n self.extension = extension\n threading.Thread.__init__(self)\n # We wait 15 seconds for a thread to terminate, but if it fails to,\n # we don't want the process to hang at exit waiting for it...\n self.setDaemon(True)\n\n def run(self):\n self.running = True\n while self.running:\n errCode, bytes, key, overlapped = GetQueuedCompletionStatus(\n self.io_req_port, INFINITE\n )\n if key == ISAPI_SHUTDOWN and overlapped is None:\n break\n\n # Let the parent extension handle the command.\n dispatcher = self.extension.dispatch_map.get(key)\n if dispatcher is None:\n raise RuntimeError(f"Bad request '{key}'")\n\n dispatcher(errCode, bytes, key, overlapped)\n\n def call_handler(self, cblock):\n self.extension.Dispatch(cblock)\n\n\n# A generic thread-pool based extension, using IO Completion Ports.\n# Sub-classes can override one method to implement a simple extension, or\n# may leverage the CompletionPort to queue their own requests, and implement a\n# fully asynch extension.\nclass ThreadPoolExtension(isapi.simple.SimpleExtension):\n "Base class for an ISAPI extension based around a thread-pool"\n\n max_workers = 20\n worker_shutdown_wait = 15000 # 15 seconds for workers to quit...\n\n def __init__(self):\n self.workers = []\n # extensible dispatch map, for sub-classes that need to post their\n # own requests to the completion port.\n # Each of these functions is called with the result of\n # GetQueuedCompletionStatus for our port.\n self.dispatch_map = {\n ISAPI_REQUEST: self.DispatchConnection,\n }\n\n def GetExtensionVersion(self, vi):\n isapi.simple.SimpleExtension.GetExtensionVersion(self, vi)\n # As per Q192800, the CompletionPort should be created with the number\n # of processors, even if the number of worker threads is much larger.\n # Passing 0 means the system picks the number.\n self.io_req_port = CreateIoCompletionPort(-1, None, 0, 0)\n # start up the workers\n self.workers = []\n for i in range(self.max_workers):\n worker = WorkerThread(self, self.io_req_port)\n worker.start()\n self.workers.append(worker)\n\n def HttpExtensionProc(self, control_block):\n overlapped = OVERLAPPED()\n overlapped.object = control_block\n PostQueuedCompletionStatus(self.io_req_port, 0, ISAPI_REQUEST, overlapped)\n return isapicon.HSE_STATUS_PENDING\n\n def TerminateExtension(self, status):\n for worker in self.workers:\n worker.running = False\n for worker in self.workers:\n PostQueuedCompletionStatus(self.io_req_port, 0, ISAPI_SHUTDOWN, None)\n # wait for them to terminate - pity we aren't using 'native' threads\n # as then we could do a smart wait - but now we need to poll....\n end_time = time.time() + self.worker_shutdown_wait / 1000\n alive = self.workers\n while alive:\n if time.time() > end_time:\n # xxx - might be nice to log something here.\n break\n time.sleep(0.2)\n alive = [w for w in alive if w.is_alive()]\n self.dispatch_map = {} # break circles\n 
CloseHandle(self.io_req_port)\n\n # This is the one operation the base class supports - a simple\n # Connection request. We setup the thread-token, and dispatch to the\n # sub-class's 'Dispatch' method.\n def DispatchConnection(self, errCode, bytes, key, overlapped):\n control_block = overlapped.object\n # setup the correct user for this request\n hRequestToken = control_block.GetImpersonationToken()\n SetThreadToken(None, hRequestToken)\n try:\n try:\n self.Dispatch(control_block)\n except:\n self.HandleDispatchError(control_block)\n finally:\n # reset the security context\n SetThreadToken(None, None)\n\n def Dispatch(self, ecb):\n """Overridden by the sub-class to handle connection requests.\n\n This class creates a thread-pool using a Windows completion port,\n and dispatches requests via this port. Sub-classes can generally\n implement each connection request using blocking reads and writes, and\n the thread-pool will still provide decent response to the end user.\n\n The sub-class can set a max_workers attribute (default is 20). Note\n that this generally does *not* mean 20 threads will all be concurrently\n running, via the magic of Windows completion ports.\n\n There is no default implementation - sub-classes must implement this.\n """\n raise NotImplementedError("sub-classes should override Dispatch")\n\n def HandleDispatchError(self, ecb):\n """Handles errors in the Dispatch method.\n\n When a Dispatch method call fails, this method is called to handle\n the exception. The default implementation formats the traceback\n in the browser.\n """\n ecb.HttpStatusCode = isapicon.HSE_STATUS_ERROR\n # control_block.LogData = "we failed!"\n exc_typ, exc_val, exc_tb = sys.exc_info()\n limit = None\n try:\n try:\n import html\n\n ecb.SendResponseHeaders(\n "200 OK", "Content-type: text/html\r\n\r\n", False\n )\n print(file=ecb)\n print("<H3>Traceback (most recent call last):</H3>", file=ecb)\n list = traceback.format_tb(\n exc_tb, limit\n ) + traceback.format_exception_only(exc_typ, exc_val)\n bold = list.pop()\n print(\n "<PRE>{}<B>{}</B></PRE>".format(\n html.escape("".join(list)),\n html.escape(bold),\n ),\n file=ecb,\n )\n except ExtensionError:\n # The client disconnected without reading the error body -\n # it's probably not a real browser at the other end, ignore it.\n pass\n except:\n print("FAILED to render the error message!")\n traceback.print_exc()\n print("ORIGINAL extension error:")\n traceback.print_exception(exc_typ, exc_val, exc_tb)\n finally:\n # holding tracebacks in a local of a frame that may itself be\n # part of a traceback used to be evil and cause leaks!\n exc_tb = None\n ecb.DoneWithSession()\n
.venv\Lib\site-packages\isapi\threaded_extension.py
threaded_extension.py
Python
7,526
0.95
0.225131
0.175758
vue-tools
576
2025-05-26T14:47:33.965674
MIT
false
514913e9620a2c60dbdbaa253fc1be97
# This extension demonstrates some advanced features of the Python ISAPI\n# framework.\n# We demonstrate:\n# * Reloading your Python module without shutting down IIS (eg, when your\n# .py implementation file changes.)\n# * Custom command-line handling - both additional options and commands.\n# * Using a query string - any part of the URL after a '?' is assumed to\n# be "variable names" separated by '&' - we will print the values of\n# these server variables.\n# * If the tail portion of the URL is "ReportUnhealthy", IIS will be\n# notified we are unhealthy via a HSE_REQ_REPORT_UNHEALTHY request.\n# Whether this is acted upon depends on if the IIS health-checking\n# tools are installed, but you should always see the reason written\n# to the Windows event log - see the IIS documentation for more.\n\nimport os\nimport stat\nimport sys\n\nfrom isapi import isapicon\nfrom isapi.simple import SimpleExtension\n\nif hasattr(sys, "isapidllhandle"):\n import win32traceutil\n\n# Notes on reloading\n# If your HttpFilterProc or HttpExtensionProc functions raises\n# 'isapi.InternalReloadException', the framework will not treat it\n# as an error but instead will terminate your extension, reload your\n# extension module, re-initialize the instance, and re-issue the request.\n# The Initialize functions are called with None as their param. The\n# return code from the terminate function is ignored.\n#\n# This is all the framework does to help you. It is up to your code\n# when you raise this exception. This sample uses a Win32 "find\n# notification". Whenever windows tells us one of the files in the\n# directory has changed, we check if the time of our source-file has\n# changed, and set a flag. Next imcoming request, we check the flag and\n# raise the special exception if set.\n#\n# The end result is that the module is automatically reloaded whenever\n# the source-file changes - you need take no further action to see your\n# changes reflected in the running server.\n\n# The framework only reloads your module - if you have libraries you\n# depend on and also want reloaded, you must arrange for this yourself.\n# One way of doing this would be to special case the import of these\n# modules. Eg:\n# --\n# try:\n# my_module = reload(my_module) # module already imported - reload it\n# except NameError:\n# import my_module # first time around - import it.\n# --\n# When your module is imported for the first time, the NameError will\n# be raised, and the module imported. 
When the ISAPI framework reloads\n# your module, the existing module will avoid the NameError, and allow\n# you to reload that module.\n\nimport threading\n\nimport win32con\nimport win32event\nimport win32file\nimport winerror\n\nfrom isapi import InternalReloadException\n\ntry:\n reload_counter += 1 # type: ignore[used-before-def]\nexcept NameError:\n reload_counter = 0\n\n\n# A watcher thread that checks for __file__ changing.\n# When it detects it, it simply sets "change_detected" to true.\nclass ReloadWatcherThread(threading.Thread):\n def __init__(self):\n self.change_detected = False\n self.filename = __file__\n if self.filename.endswith("c") or self.filename.endswith("o"):\n self.filename = self.filename[:-1]\n self.handle = win32file.FindFirstChangeNotification(\n os.path.dirname(self.filename),\n False, # watch tree?\n win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,\n )\n threading.Thread.__init__(self)\n\n def run(self):\n last_time = os.stat(self.filename)[stat.ST_MTIME]\n while 1:\n try:\n rc = win32event.WaitForSingleObject(self.handle, win32event.INFINITE)\n win32file.FindNextChangeNotification(self.handle)\n except win32event.error as details:\n # handle closed - thread should terminate.\n if details.winerror != winerror.ERROR_INVALID_HANDLE:\n raise\n break\n this_time = os.stat(self.filename)[stat.ST_MTIME]\n if this_time != last_time:\n print("Detected file change - flagging for reload.")\n self.change_detected = True\n last_time = this_time\n\n def stop(self):\n win32file.FindCloseChangeNotification(self.handle)\n\n\n# The ISAPI extension - handles requests in our virtual dir, and sends the\n# response to the client.\nclass Extension(SimpleExtension):\n "Python advanced sample Extension"\n\n def __init__(self):\n self.reload_watcher = ReloadWatcherThread()\n self.reload_watcher.start()\n\n def HttpExtensionProc(self, ecb):\n # NOTE: If you use a ThreadPoolExtension, you must still perform\n # this check in HttpExtensionProc - raising the exception from\n # The "Dispatch" method will just cause the exception to be\n # rendered to the browser.\n if self.reload_watcher.change_detected:\n print("Doing reload")\n raise InternalReloadException\n\n url = ecb.GetServerVariable("UNICODE_URL")\n if url.endswith("ReportUnhealthy"):\n ecb.ReportUnhealthy("I'm a little sick")\n\n ecb.SendResponseHeaders("200 OK", "Content-Type: text/html\r\n\r\n", 0)\n print("<HTML><BODY>", file=ecb)\n\n qs = ecb.GetServerVariable("QUERY_STRING")\n if qs:\n queries = qs.split("&")\n print("<PRE>", file=ecb)\n for q in queries:\n val = ecb.GetServerVariable(q, "&lt;no such variable&gt;")\n print(f"{q}={val!r}", file=ecb)\n print("</PRE><P/>", file=ecb)\n\n print("This module has been imported", file=ecb)\n print("%d times" % (reload_counter,), file=ecb)\n print("</BODY></HTML>", file=ecb)\n ecb.close()\n return isapicon.HSE_STATUS_SUCCESS\n\n def TerminateExtension(self, status):\n self.reload_watcher.stop()\n\n\n# The entry points for the ISAPI extension.\ndef __ExtensionFactory__():\n return Extension()\n\n\n# Our special command line customization.\n# Pre-install hook for our virtual directory.\ndef PreInstallDirectory(params, options):\n # If the user used our special '--description' option,\n # then we override our default.\n if options.description:\n params.Description = options.description\n\n\n# Post install hook for our entire script\ndef PostInstall(params, options):\n print()\n print("The sample has been installed.")\n print("Point your browser to /AdvancedPythonSample")\n print("If you modify the 
source file and reload the page,")\n print("you should see the reload counter increment")\n\n\n# Handler for our custom 'status' argument.\ndef status_handler(options, log, arg):\n "Query the status of something"\n print("Everything seems to be fine!")\n\n\ncustom_arg_handlers = {"status": status_handler}\n\nif __name__ == "__main__":\n # If run from the command-line, install ourselves.\n from isapi.install import *\n\n params = ISAPIParameters(PostInstall=PostInstall)\n # Setup the virtual directories - this is a list of directories our\n # extension uses - in this case only 1.\n # Each extension has a "script map" - this is the mapping of ISAPI\n # extensions.\n sm = [ScriptMapParams(Extension="*", Flags=0)]\n vd = VirtualDirParameters(\n Name="AdvancedPythonSample",\n Description=Extension.__doc__,\n ScriptMaps=sm,\n ScriptMapUpdate="replace",\n # specify the pre-install hook.\n PreInstall=PreInstallDirectory,\n )\n params.VirtualDirs = [vd]\n # Setup our custom option parser.\n from optparse import OptionParser\n\n parser = OptionParser("") # blank usage, so isapi sets it.\n parser.add_option(\n "",\n "--description",\n action="store",\n help="custom description to use for the virtual directory",\n )\n\n HandleCommandLine(\n params, opt_parser=parser, custom_arg_handlers=custom_arg_handlers\n )\n
.venv\Lib\site-packages\isapi\samples\advanced.py
advanced.py
Python
8,124
0.95
0.192661
0.379121
node-utils
43
2024-05-10T06:37:27.139511
MIT
false
27fc14cda7ff80de5159b21d697dafff
In this directory you will find examples of ISAPI filters and extensions.\n\nThe filter loading mechanism works like this:\n* IIS loads the special Python "loader" DLL. This DLL will generally have a\n leading underscore as part of its name.\n* This loader DLL looks for a Python module, by removing the first letter of\n the DLL base name.\n\nThis means that an ISAPI extension module consists of 2 key files - the loader\nDLL (eg, "_MyIISModule.dll", and a Python module (which for this example\nwould be "MyIISModule.py")\n\nWhen you install an ISAPI extension, the installation code checks to see if\nthere is a loader DLL for your implementation file - if one does not exist,\nor the standard loader is different, it is copied and renamed accordingly.\n\nWe use this mechanism to provide the maximum separation between different\nPython extensions installed on the same server - otherwise filter order and\nother tricky IIS semantics would need to be replicated. Also, each filter\ngets its own thread-pool, etc.\n
.venv\Lib\site-packages\isapi\samples\README.txt
README.txt
Other
1,023
0.7
0.25
0.125
awesome-app
135
2024-02-04T14:14:15.294341
BSD-3-Clause
false
ee5b43ddd920845f9e1d41fc2f7ac362
# This is a sample ISAPI extension written in Python.\n#\n# Please see README.txt in this directory, and specifically the\n# information about the "loader" DLL - installing this sample will create\n# "_redirector.dll" in the current directory. The readme explains this.\n\n# Executing this script (or any server config script) will install the extension\n# into your web server. As the server executes, the PyISAPI framework will load\n# this module and create your Extension and Filter objects.\n\n# This is the simplest possible redirector (or proxy) we can write. The\n# extension installs with a mask of '*' in the root of the site.\n# As an added bonus though, we optionally show how, on IIS6 and later, we\n# can use HSE_ERQ_EXEC_URL to ignore certain requests - in IIS5 and earlier\n# we can only do this with an ISAPI filter - see redirector_with_filter for\n# an example. If this sample is run on IIS5 or earlier it simply ignores\n# any excludes.\n\nimport sys\nfrom urllib.request import urlopen\n\nimport win32api\n\nfrom isapi import isapicon, threaded_extension\n\n# sys.isapidllhandle will exist when we are loaded by the IIS framework.\n# In this case we redirect our output to the win32traceutil collector.\nif hasattr(sys, "isapidllhandle"):\n import win32traceutil\n\n# The site we are proxying.\nproxy = "https://www.python.org"\n\n# Urls we exclude (ie, allow IIS to handle itself) - all are lowered,\n# and these entries exist by default on Vista...\nexcludes = ["/iisstart.htm", "/welcome.png"]\n\n\n# An "io completion" function, called when ecb.ExecURL completes...\ndef io_callback(ecb, url, cbIO, errcode):\n # Get the status of our ExecURL\n httpstatus, substatus, win32 = ecb.GetExecURLStatus()\n print(\n "ExecURL of %r finished with http status %d.%d, win32 status %d (%s)"\n % (url, httpstatus, substatus, win32, win32api.FormatMessage(win32).strip())\n )\n # nothing more to do!\n ecb.DoneWithSession()\n\n\n# The ISAPI extension - handles all requests in the site.\nclass Extension(threaded_extension.ThreadPoolExtension):\n "Python sample Extension"\n\n def Dispatch(self, ecb):\n # Note that our ThreadPoolExtension base class will catch exceptions\n # in our Dispatch method, and write the traceback to the client.\n # That is perfect for this sample, so we don't catch our own.\n # print(f'IIS dispatching "{ecb.GetServerVariable("URL")}"')\n url = ecb.GetServerVariable("URL").decode("ascii")\n for exclude in excludes:\n if url.lower().startswith(exclude):\n print("excluding %s" % url)\n if ecb.Version < 0x60000:\n print("(but this is IIS5 or earlier - can't do 'excludes')")\n else:\n ecb.IOCompletion(io_callback, url)\n ecb.ExecURL(\n None,\n None,\n None,\n None,\n None,\n isapicon.HSE_EXEC_URL_IGNORE_CURRENT_INTERCEPTOR,\n )\n return isapicon.HSE_STATUS_PENDING\n\n new_url = proxy + url\n print("Opening %s" % new_url)\n fp = urlopen(new_url)\n headers = fp.info()\n # subtle breakage: str(headers) normalizes \r\n\n # back to \n and also sticks an extra \n term.\n # take *all* trailing \n off, replace remaining with\n # \r\n, then add the 2 trailing \r\n.\n header_text = str(headers).rstrip("\n").replace("\n", "\r\n") + "\r\n\r\n"\n ecb.SendResponseHeaders("200 OK", header_text, False)\n ecb.WriteClient(fp.read())\n ecb.DoneWithSession()\n print(f"Returned data from '{new_url}'")\n return isapicon.HSE_STATUS_SUCCESS\n\n\n# The entry points for the ISAPI extension.\ndef __ExtensionFactory__():\n return Extension()\n\n\nif __name__ == "__main__":\n # If run from the command-line, install 
ourselves.\n from isapi.install import *\n\n params = ISAPIParameters()\n # Setup the virtual directories - this is a list of directories our\n # extension uses - in this case only 1.\n # Each extension has a "script map" - this is the mapping of ISAPI\n # extensions.\n sm = [ScriptMapParams(Extension="*", Flags=0)]\n vd = VirtualDirParameters(\n Name="/",\n Description=Extension.__doc__,\n ScriptMaps=sm,\n ScriptMapUpdate="replace",\n )\n params.VirtualDirs = [vd]\n HandleCommandLine(params)\n
.venv\Lib\site-packages\isapi\samples\redirector.py
redirector.py
Python
4,587
0.95
0.137931
0.391753
node-utils
605
2024-05-29T17:55:35.324962
MIT
false
e14a2f5af5c2355940d78cbe23803bf0
# This is a sample ISAPI extension written in Python.\n\n# This is like the other 'redirector' samples, but uses asnch IO when writing\n# back to the client (it does *not* use asynch io talking to the remote\n# server!)\n\nimport sys\nimport urllib.error\nimport urllib.parse\nimport urllib.request\n\nfrom isapi import isapicon, threaded_extension\n\n# sys.isapidllhandle will exist when we are loaded by the IIS framework.\n# In this case we redirect our output to the win32traceutil collector.\nif hasattr(sys, "isapidllhandle"):\n import win32traceutil\n\n# The site we are proxying.\nproxy = "https://www.python.org"\n\n# We synchronously read chunks of this size then asynchronously write them.\nCHUNK_SIZE = 8192\n\n\n# The callback made when IIS completes the asynch write.\ndef io_callback(ecb, fp, cbIO, errcode):\n print("IO callback", ecb, fp, cbIO, errcode)\n chunk = fp.read(CHUNK_SIZE)\n if chunk:\n ecb.WriteClient(chunk, isapicon.HSE_IO_ASYNC)\n # and wait for the next callback to say this chunk is done.\n else:\n # eof - say we are complete.\n fp.close()\n ecb.DoneWithSession()\n\n\n# The ISAPI extension - handles all requests in the site.\nclass Extension(threaded_extension.ThreadPoolExtension):\n "Python sample proxy server - asynch version."\n\n def Dispatch(self, ecb):\n print('IIS dispatching "{}"'.format(ecb.GetServerVariable("URL")))\n url = ecb.GetServerVariable("URL")\n\n new_url = proxy + url\n print("Opening %s" % new_url)\n fp = urllib.request.urlopen(new_url)\n headers = fp.info()\n ecb.SendResponseHeaders("200 OK", str(headers) + "\r\n", False)\n # now send the first chunk asynchronously\n ecb.ReqIOCompletion(io_callback, fp)\n chunk = fp.read(CHUNK_SIZE)\n if chunk:\n ecb.WriteClient(chunk, isapicon.HSE_IO_ASYNC)\n return isapicon.HSE_STATUS_PENDING\n # no data - just close things now.\n ecb.DoneWithSession()\n return isapicon.HSE_STATUS_SUCCESS\n\n\n# The entry points for the ISAPI extension.\ndef __ExtensionFactory__():\n return Extension()\n\n\nif __name__ == "__main__":\n # If run from the command-line, install ourselves.\n from isapi.install import *\n\n params = ISAPIParameters()\n # Setup the virtual directories - this is a list of directories our\n # extension uses - in this case only 1.\n # Each extension has a "script map" - this is the mapping of ISAPI\n # extensions.\n sm = [ScriptMapParams(Extension="*", Flags=0)]\n vd = VirtualDirParameters(\n Name="/",\n Description=Extension.__doc__,\n ScriptMaps=sm,\n ScriptMapUpdate="replace",\n )\n params.VirtualDirs = [vd]\n HandleCommandLine(params)\n
.venv\Lib\site-packages\isapi\samples\redirector_asynch.py
redirector_asynch.py
Python
2,812
0.95
0.117647
0.294118
awesome-app
994
2024-11-25T19:18:28.720067
GPL-3.0
false
7e3d938233ad25ec928d0c57e1d92b96
# This is a sample configuration file for an ISAPI filter and extension\n# written in Python.\n#\n# Please see README.txt in this directory, and specifically the\n# information about the "loader" DLL - installing this sample will create\n# "_redirector_with_filter.dll" in the current directory. The readme explains\n# this.\n\n# Executing this script (or any server config script) will install the extension\n# into your web server. As the server executes, the PyISAPI framework will load\n# this module and create your Extension and Filter objects.\n\n# This sample provides sample redirector:\n# It is implemented by a filter and an extension, so that some requests can\n# be ignored. Compare with 'redirector_simple' which avoids the filter, but\n# is unable to selectively ignore certain requests.\n# The process is sample uses is:\n# * The filter is installed globally, as all filters are.\n# * A Virtual Directory named "python" is setup. This dir has our ISAPI\n# extension as the only application, mapped to file-extension '*'. Thus, our\n# extension handles *all* requests in this directory.\n# The basic process is that the filter does URL rewriting, redirecting every\n# URL to our Virtual Directory. Our extension then handles this request,\n# forwarding the data from the proxied site.\n# For example:\n# * URL of "index.html" comes in.\n# * Filter rewrites this to "/python/index.html"\n# * Our extension sees the full "/python/index.html", removes the leading\n# portion, and opens and forwards the remote URL.\n\n\n# This sample is very small - it avoid most error handling, etc. It is for\n# demonstration purposes only.\n\nimport sys\nimport urllib.error\nimport urllib.parse\nimport urllib.request\n\nfrom isapi import isapicon, threaded_extension\nfrom isapi.simple import SimpleFilter\n\n# sys.isapidllhandle will exist when we are loaded by the IIS framework.\n# In this case we redirect our output to the win32traceutil collector.\nif hasattr(sys, "isapidllhandle"):\n import win32traceutil\n\n# The site we are proxying.\nproxy = "https://www.python.org"\n# The name of the virtual directory we install in, and redirect from.\nvirtualdir = "/python"\n\n# The key feature of this redirector over the simple redirector is that it\n# can choose to ignore certain responses by having the filter not rewrite them\n# to our virtual dir. 
For this sample, we just exclude the IIS help directory.\n\n\n# The ISAPI extension - handles requests in our virtual dir, and sends the\n# response to the client.\nclass Extension(threaded_extension.ThreadPoolExtension):\n "Python sample Extension"\n\n def Dispatch(self, ecb):\n # Note that our ThreadPoolExtension base class will catch exceptions\n # in our Dispatch method, and write the traceback to the client.\n # That is perfect for this sample, so we don't catch our own.\n # print(f'IIS dispatching "{ecb.GetServerVariable("URL")}"')\n url = ecb.GetServerVariable("URL")\n if url.startswith(virtualdir):\n new_url = proxy + url[len(virtualdir) :]\n print("Opening", new_url)\n fp = urllib.request.urlopen(new_url)\n headers = fp.info()\n ecb.SendResponseHeaders("200 OK", str(headers) + "\r\n", False)\n ecb.WriteClient(fp.read())\n ecb.DoneWithSession()\n print(f"Returned data from '{new_url}'!")\n else:\n # this should never happen - we should only see requests that\n # start with our virtual directory name.\n print(f"Not proxying '{url}'")\n\n\n# The ISAPI filter.\nclass Filter(SimpleFilter):\n "Sample Python Redirector"\n\n filter_flags = isapicon.SF_NOTIFY_PREPROC_HEADERS | isapicon.SF_NOTIFY_ORDER_DEFAULT\n\n def HttpFilterProc(self, fc):\n # print("Filter Dispatch")\n nt = fc.NotificationType\n if nt != isapicon.SF_NOTIFY_PREPROC_HEADERS:\n return isapicon.SF_STATUS_REQ_NEXT_NOTIFICATION\n\n pp = fc.GetData()\n url = pp.GetHeader("url")\n # print(f"URL is '{url}'")\n prefix = virtualdir\n if not url.startswith(prefix):\n new_url = prefix + url\n print(f"New proxied URL is '{new_url}'")\n pp.SetHeader("url", new_url)\n # For the sake of demonstration, show how the FilterContext\n # attribute is used. It always starts out life as None, and\n # any assignments made are automatically decref'd by the\n # framework during a SF_NOTIFY_END_OF_NET_SESSION notification.\n if fc.FilterContext is None:\n fc.FilterContext = 0\n fc.FilterContext += 1\n print("This is request number %d on this connection" % fc.FilterContext)\n return isapicon.SF_STATUS_REQ_HANDLED_NOTIFICATION\n else:\n print(f"Filter ignoring URL '{url}'")\n\n # Some older code that handled SF_NOTIFY_URL_MAP.\n # print("Have URL_MAP notify")\n # urlmap = fc.GetData()\n # print("URI is", urlmap.URL)\n # print("Path is", urlmap.PhysicalPath)\n # if urlmap.URL.startswith("/UC/"):\n # # Find the /UC/ in the physical path, and nuke it (except\n # # as the path is physical, it is \)\n # p = urlmap.PhysicalPath\n # pos = p.index("\\UC\\")\n # p = p[:pos] + p[pos+3:]\n # p = r"E:\src\pyisapi\webroot\PyTest\formTest.htm"\n # print("New path is", p)\n # urlmap.PhysicalPath = p\n\n\n# The entry points for the ISAPI extension.\ndef __FilterFactory__():\n return Filter()\n\n\ndef __ExtensionFactory__():\n return Extension()\n\n\nif __name__ == "__main__":\n # If run from the command-line, install ourselves.\n from isapi.install import *\n\n params = ISAPIParameters()\n # Setup all filters - these are global to the site.\n params.Filters = [\n FilterParameters(Name="PythonRedirector", Description=Filter.__doc__),\n ]\n # Setup the virtual directories - this is a list of directories our\n # extension uses - in this case only 1.\n # Each extension has a "script map" - this is the mapping of ISAPI\n # extensions.\n sm = [ScriptMapParams(Extension="*", Flags=0)]\n vd = VirtualDirParameters(\n Name=virtualdir[1:],\n Description=Extension.__doc__,\n ScriptMaps=sm,\n ScriptMapUpdate="replace",\n )\n params.VirtualDirs = [vd]\n 
HandleCommandLine(params)\n
.venv\Lib\site-packages\isapi\samples\redirector_with_filter.py
redirector_with_filter.py
Python
6,574
0.95
0.123457
0.525547
node-utils
996
2024-11-05T02:52:50.487441
GPL-3.0
false
31d0cf6ba8bda4d714aae9ec5160f8e6
# This extension is used mainly for testing purposes - it is not\n# designed to be a simple sample, but instead is a hotch-potch of things\n# that attempts to exercise the framework.\n\nimport os\nimport stat\nimport sys\n\nfrom isapi import isapicon\nfrom isapi.simple import SimpleExtension\n\nif hasattr(sys, "isapidllhandle"):\n import win32traceutil\n\n# We use the same reload support as 'advanced.py' demonstrates.\nimport threading\n\nimport win32con\nimport win32event\nimport win32file\nimport winerror\n\nfrom isapi import InternalReloadException\n\n\n# A watcher thread that checks for __file__ changing.\n# When it detects it, it simply sets "change_detected" to true.\nclass ReloadWatcherThread(threading.Thread):\n def __init__(self):\n self.change_detected = False\n self.filename = __file__\n if self.filename.endswith("c") or self.filename.endswith("o"):\n self.filename = self.filename[:-1]\n self.handle = win32file.FindFirstChangeNotification(\n os.path.dirname(self.filename),\n False, # watch tree?\n win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,\n )\n threading.Thread.__init__(self)\n\n def run(self):\n last_time = os.stat(self.filename)[stat.ST_MTIME]\n while 1:\n try:\n rc = win32event.WaitForSingleObject(self.handle, win32event.INFINITE)\n win32file.FindNextChangeNotification(self.handle)\n except win32event.error as details:\n # handle closed - thread should terminate.\n if details.winerror != winerror.ERROR_INVALID_HANDLE:\n raise\n break\n this_time = os.stat(self.filename)[stat.ST_MTIME]\n if this_time != last_time:\n print("Detected file change - flagging for reload.")\n self.change_detected = True\n last_time = this_time\n\n def stop(self):\n win32file.FindCloseChangeNotification(self.handle)\n\n\ndef TransmitFileCallback(ecb, hFile, cbIO, errCode):\n print("Transmit complete!")\n ecb.close()\n\n\n# The ISAPI extension - handles requests in our virtual dir, and sends the\n# response to the client.\nclass Extension(SimpleExtension):\n "Python test Extension"\n\n def __init__(self):\n self.reload_watcher = ReloadWatcherThread()\n self.reload_watcher.start()\n\n def HttpExtensionProc(self, ecb):\n # NOTE: If you use a ThreadPoolExtension, you must still perform\n # this check in HttpExtensionProc - raising the exception from\n # The "Dispatch" method will just cause the exception to be\n # rendered to the browser.\n if self.reload_watcher.change_detected:\n print("Doing reload")\n raise InternalReloadException\n\n if ecb.GetServerVariable("UNICODE_URL").endswith("test.py"):\n file_flags = (\n win32con.FILE_FLAG_SEQUENTIAL_SCAN | win32con.FILE_FLAG_OVERLAPPED\n )\n hfile = win32file.CreateFile(\n __file__,\n win32con.GENERIC_READ,\n 0,\n None,\n win32con.OPEN_EXISTING,\n file_flags,\n None,\n )\n flags = (\n isapicon.HSE_IO_ASYNC\n | isapicon.HSE_IO_DISCONNECT_AFTER_SEND\n | isapicon.HSE_IO_SEND_HEADERS\n )\n # We pass hFile to the callback simply as a way of keeping it alive\n # for the duration of the transmission\n try:\n ecb.TransmitFile(\n TransmitFileCallback,\n hfile,\n int(hfile),\n "200 OK",\n 0,\n 0,\n None,\n None,\n flags,\n )\n except:\n # Errors keep this source file open!\n hfile.Close()\n raise\n else:\n # default response\n ecb.SendResponseHeaders("200 OK", "Content-Type: text/html\r\n\r\n", 0)\n print("<HTML><BODY>", file=ecb)\n print("The root of this site is at", ecb.MapURLToPath("/"), file=ecb)\n print("</BODY></HTML>", file=ecb)\n ecb.close()\n return isapicon.HSE_STATUS_SUCCESS\n\n def TerminateExtension(self, status):\n self.reload_watcher.stop()\n\n\n# The 
entry points for the ISAPI extension.\ndef __ExtensionFactory__():\n return Extension()\n\n\n# Our special command line customization.\n# Pre-install hook for our virtual directory.\ndef PreInstallDirectory(params, options):\n # If the user used our special '--description' option,\n # then we override our default.\n if options.description:\n params.Description = options.description\n\n\n# Post install hook for our entire script\ndef PostInstall(params, options):\n print()\n print("The sample has been installed.")\n print("Point your browser to /PyISAPITest")\n\n\n# Handler for our custom 'status' argument.\ndef status_handler(options, log, arg):\n "Query the status of something"\n print("Everything seems to be fine!")\n\n\ncustom_arg_handlers = {"status": status_handler}\n\nif __name__ == "__main__":\n # If run from the command-line, install ourselves.\n from isapi.install import *\n\n params = ISAPIParameters(PostInstall=PostInstall)\n # Setup the virtual directories - this is a list of directories our\n # extension uses - in this case only 1.\n # Each extension has a "script map" - this is the mapping of ISAPI\n # extensions.\n sm = [ScriptMapParams(Extension="*", Flags=0)]\n vd = VirtualDirParameters(\n Name="PyISAPITest",\n Description=Extension.__doc__,\n ScriptMaps=sm,\n ScriptMapUpdate="replace",\n # specify the pre-install hook.\n PreInstall=PreInstallDirectory,\n )\n params.VirtualDirs = [vd]\n # Setup our custom option parser.\n from optparse import OptionParser\n\n parser = OptionParser("") # blank usage, so isapi sets it.\n parser.add_option(\n "",\n "--description",\n action="store",\n help="custom description to use for the virtual directory",\n )\n\n HandleCommandLine(\n params, opt_parser=parser, custom_arg_handlers=custom_arg_handlers\n )\n
.venv\Lib\site-packages\isapi\samples\test.py
test.py
Python
6,513
0.95
0.169231
0.190184
python-kit
142
2025-05-01T22:01:26.760921
MIT
true
4898630adaf813d8b0a23e92c377746a
\n\n
.venv\Lib\site-packages\isapi\samples\__pycache__\advanced.cpython-313.pyc
advanced.cpython-313.pyc
Other
7,166
0.95
0.029412
0.030769
awesome-app
38
2025-05-05T11:13:00.403956
GPL-3.0
false
aeafbf6102e39ec12e53b44d56d24895
\n\n
.venv\Lib\site-packages\isapi\samples\__pycache__\redirector.cpython-313.pyc
redirector.cpython-313.pyc
Other
3,680
0.8
0
0
react-lib
513
2025-04-01T07:17:02.245394
GPL-3.0
false
f7182b76e95f88897b74d08042d1ab94
\n\n
.venv\Lib\site-packages\isapi\samples\__pycache__\redirector_asynch.cpython-313.pyc
redirector_asynch.cpython-313.pyc
Other
3,151
0.8
0
0.060606
awesome-app
295
2024-11-27T20:05:12.985315
GPL-3.0
false
2198d67c73bef379c35da357c562fe8d
\n\n
.venv\Lib\site-packages\isapi\samples\__pycache__\redirector_with_filter.cpython-313.pyc
redirector_with_filter.cpython-313.pyc
Other
4,292
0.8
0
0
awesome-app
871
2024-05-11T17:07:53.371511
GPL-3.0
false
cbb56a726e37395e870e5380f3a276a9
\n\n
.venv\Lib\site-packages\isapi\samples\__pycache__\test.cpython-313.pyc
test.cpython-313.pyc
Other
7,461
0.8
0.03125
0.016393
node-utils
970
2024-01-04T16:54:46.034060
Apache-2.0
true
957251b7729f4074b098a41d2fd63cf4
# This is an ISAPI extension purely for testing purposes. It is NOT\n# a 'demo' (even though it may be useful!)\n#\n# Install this extension, then point your browser to:\n# "http://localhost/pyisapi_test/test1"\n# This will execute the method 'test1' below. See below for the list of\n# test methods that are acceptable.\n\n# If we have no console (eg, am running from inside IIS), redirect output\n# somewhere useful - in this case, the standard win32 trace collector.\nimport win32api\nimport winerror\n\nfrom isapi import ExtensionError, threaded_extension\n\ntry:\n win32api.GetConsoleTitle()\nexcept win32api.error:\n # No console - redirect\n import win32traceutil\n\n\n# The ISAPI extension - handles requests in our virtual dir, and sends the\n# response to the client.\nclass Extension(threaded_extension.ThreadPoolExtension):\n "Python ISAPI Tester"\n\n def Dispatch(self, ecb):\n print('Tester dispatching "{}"'.format(ecb.GetServerVariable("URL")))\n url = ecb.GetServerVariable("URL")\n test_name = url.split("/")[-1]\n meth = getattr(self, test_name, None)\n if meth is None:\n raise AttributeError(f"No test named '{test_name}'")\n result = meth(ecb)\n if result is None:\n # This means the test finalized everything\n return\n ecb.SendResponseHeaders("200 OK", "Content-type: text/html\r\n\r\n", False)\n print("<HTML><BODY>Finished running test <i>", test_name, "</i>", file=ecb)\n print("<pre>", file=ecb)\n print(result, file=ecb)\n print("</pre>", file=ecb)\n print("</BODY></HTML>", file=ecb)\n ecb.DoneWithSession()\n\n def test1(self, ecb):\n try:\n ecb.GetServerVariable("foo bar")\n raise AssertionError("should have failed!")\n except ExtensionError as err:\n assert err.errno == winerror.ERROR_INVALID_INDEX, err\n return "worked!"\n\n def test_long_vars(self, ecb):\n qs = ecb.GetServerVariable("QUERY_STRING")\n # Our implementation has a default buffer size of 8k - so we test\n # the code that handles an overflow by ensuring there are more\n # than 8k worth of chars in the URL.\n expected_query = "x" * 8500\n if len(qs) == 0:\n # Just the URL with no query part - redirect to myself, but with\n # a huge query portion.\n me = ecb.GetServerVariable("URL")\n headers = "Location: " + me + "?" + expected_query + "\r\n\r\n"\n ecb.SendResponseHeaders("301 Moved", headers)\n ecb.DoneWithSession()\n return None\n if qs == expected_query:\n return "Total length of variable is %d - test worked!" % (len(qs),)\n else:\n return "Unexpected query portion! Got %d chars, expected %d" % (\n len(qs),\n len(expected_query),\n )\n\n def test_unicode_vars(self, ecb):\n # We need to check that we are running IIS6! 
This seems the only\n # effective way from an extension.\n ver = float(ecb.GetServerVariable("SERVER_SOFTWARE").split("/")[1])\n if ver < 6.0:\n return "This is IIS version %g - unicode only works in IIS6 and later" % ver\n\n us = ecb.GetServerVariable("UNICODE_SERVER_NAME")\n assert isinstance(us, str), "unexpected type!"\n assert us == str(\n ecb.GetServerVariable("SERVER_NAME")\n ), "Unicode and non-unicode values were not the same"\n return "worked!"\n\n\n# The entry points for the ISAPI extension.\ndef __ExtensionFactory__():\n return Extension()\n\n\nif __name__ == "__main__":\n # If run from the command-line, install ourselves.\n from isapi.install import *\n\n params = ISAPIParameters()\n # Setup the virtual directories - this is a list of directories our\n # extension uses - in this case only 1.\n # Each extension has a "script map" - this is the mapping of ISAPI\n # extensions.\n sm = [ScriptMapParams(Extension="*", Flags=0)]\n vd = VirtualDirParameters(\n Name="pyisapi_test",\n Description=Extension.__doc__,\n ScriptMaps=sm,\n ScriptMapUpdate="replace",\n )\n params.VirtualDirs = [vd]\n HandleCommandLine(params)\n
.venv\Lib\site-packages\isapi\test\extension_simple.py
extension_simple.py
Python
4,339
0.95
0.149123
0.262626
node-utils
729
2024-03-02T23:33:30.628822
Apache-2.0
true
0b445d3df37149e2990f5425534a60c3
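The Dispatch method in the extension above resolves the final URL segment to a test method via getattr. Below is a minimal, IIS-free sketch of that lookup only; FakeECB and MiniDispatcher are hypothetical stand-ins for illustration, not part of the isapi package.

class FakeECB:
    """Hypothetical stand-in for the ISAPI extension control block."""
    def __init__(self, url):
        self._url = url

    def GetServerVariable(self, name):
        assert name == "URL"
        return self._url


class MiniDispatcher:
    """Mirrors the getattr-based test lookup in Extension.Dispatch."""
    def test1(self, ecb):
        return "worked!"

    def dispatch(self, ecb):
        test_name = ecb.GetServerVariable("URL").split("/")[-1]
        meth = getattr(self, test_name, None)
        if meth is None:
            raise AttributeError(f"No test named {test_name!r}")
        return meth(ecb)


print(MiniDispatcher().dispatch(FakeECB("/pyisapi_test/test1")))  # worked!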
This is a directory for tests of the PyISAPI framework.\n\nFor demos, please see the pyisapi 'samples' directory.\n
.venv\Lib\site-packages\isapi\test\README.txt
README.txt
Other
115
0.7
0.333333
0
vue-tools
732
2024-01-09T15:55:27.698859
BSD-3-Clause
true
18a0a408b0582bb211eccda2d4f5f616
\n\n
.venv\Lib\site-packages\isapi\test\__pycache__\extension_simple.cpython-313.pyc
extension_simple.cpython-313.pyc
Other
4,600
0.8
0
0
awesome-app
570
2024-06-23T11:31:58.836810
GPL-3.0
true
c97e73af704ebcdcaa03e946115c4fea
\n\n
.venv\Lib\site-packages\isapi\__pycache__\install.cpython-313.pyc
install.cpython-313.pyc
Other
33,756
0.95
0.026403
0.010601
python-kit
618
2023-08-24T06:05:30.285679
BSD-3-Clause
false
8050f73b39e0f8e62e27d2e7b44d07d1
\n\n
.venv\Lib\site-packages\isapi\__pycache__\isapicon.cpython-313.pyc
isapicon.cpython-313.pyc
Other
3,411
0.8
0
0
node-utils
582
2024-09-21T07:26:23.777601
MIT
false
899c2b02ef4717a30e191a92c8c88fe2
\n\n
.venv\Lib\site-packages\isapi\__pycache__\simple.cpython-313.pyc
simple.cpython-313.pyc
Other
3,596
0.95
0.235294
0
python-kit
502
2023-11-29T13:43:54.761773
GPL-3.0
false
74beb8bcaaba4a834b2274aa368df55a
\n\n
.venv\Lib\site-packages\isapi\__pycache__\threaded_extension.cpython-313.pyc
threaded_extension.cpython-313.pyc
Other
8,300
0.95
0.076923
0
awesome-app
614
2023-11-22T10:25:50.006061
MIT
false
887aeb45426caa09769b678866f4739e
\n\n
.venv\Lib\site-packages\isapi\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,805
0.8
0
0
react-lib
399
2025-06-02T16:41:55.266783
Apache-2.0
false
4f5fe3bb9609b3a78b4b62b2fdfd0ac3
PERIOD_PREFIX = "P"\nTIME_PREFIX = "T"\nWEEK_PREFIX = "W"\n
.venv\Lib\site-packages\isoduration\constants.py
constants.py
Python
56
0.5
0
0
python-kit
232
2025-02-25T03:53:19.448302
GPL-3.0
false
cf3bdd59d7fa2a5dc004210d02c3ecb5
from __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom decimal import Decimal\n\nfrom isoduration.formatter import format_duration\nfrom isoduration.operations import add\n\n\n@dataclass\nclass DateDuration:\n years: Decimal = Decimal(0)\n months: Decimal = Decimal(0)\n days: Decimal = Decimal(0)\n weeks: Decimal = Decimal(0)\n\n def __neg__(self) -> DateDuration:\n return DateDuration(\n years=-self.years,\n months=-self.months,\n days=-self.days,\n weeks=-self.weeks,\n )\n\n\n@dataclass\nclass TimeDuration:\n hours: Decimal = Decimal(0)\n minutes: Decimal = Decimal(0)\n seconds: Decimal = Decimal(0)\n\n def __neg__(self) -> TimeDuration:\n return TimeDuration(\n hours=-self.hours,\n minutes=-self.minutes,\n seconds=-self.seconds,\n )\n\n\nclass Duration:\n def __init__(self, date_duration: DateDuration, time_duration: TimeDuration):\n self.date = date_duration\n self.time = time_duration\n\n def __repr__(self) -> str:\n return f"{self.__class__.__name__}({self.date}, {self.time})"\n\n def __str__(self) -> str:\n return format_duration(self)\n\n def __hash__(self) -> int:\n return hash(\n (\n self.date.years,\n self.date.months,\n self.date.days,\n self.date.weeks,\n self.time.hours,\n self.time.minutes,\n self.time.seconds,\n )\n )\n\n def __eq__(self, other: object) -> bool:\n if isinstance(other, Duration):\n return self.date == other.date and self.time == other.time\n\n raise NotImplementedError\n\n def __neg__(self) -> Duration:\n return Duration(-self.date, -self.time)\n\n def __add__(self, other: datetime) -> datetime:\n if isinstance(other, datetime):\n return add(other, self)\n\n raise NotImplementedError\n\n __radd__ = __add__\n\n def __sub__(self, other: object) -> None:\n raise NotImplementedError\n\n def __rsub__(self, other: datetime) -> datetime:\n if isinstance(other, datetime):\n return -self + other\n\n raise NotImplementedError\n
.venv\Lib\site-packages\isoduration\types.py
types.py
Python
2,266
0.85
0.191011
0
node-utils
837
2024-06-15T10:35:58.038980
MIT
false
3fbc52a95605e34c1a9c5f1929a3b061
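A short usage sketch of the Duration, DateDuration and TimeDuration types above, assuming the isoduration package is importable: str() delegates to format_duration, and datetime + Duration goes through operations.add via __radd__.

from datetime import datetime
from decimal import Decimal

from isoduration.types import DateDuration, Duration, TimeDuration

d = Duration(
    DateDuration(years=Decimal(1), months=Decimal(2)),
    TimeDuration(hours=Decimal(3)),
)
print(str(d))                    # P1Y2MT3H (via format_duration)
print(datetime(2020, 1, 1) + d)  # 2021-03-01 03:00:00 (via operations.add)
print(-d)                        # -P1Y2MT3H (negation flips every component)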
from isoduration.formatter import format_duration\nfrom isoduration.formatter.exceptions import DurationFormattingException\nfrom isoduration.parser import parse_duration\nfrom isoduration.parser.exceptions import DurationParsingException\n\n__all__ = (\n "format_duration",\n "parse_duration",\n "DurationParsingException",\n "DurationFormattingException",\n)\n
.venv\Lib\site-packages\isoduration\__init__.py
__init__.py
Python
363
0.85
0
0
node-utils
306
2024-10-04T06:33:17.749843
Apache-2.0
false
90710af955fd7a48d7a8986d07dc95cd
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom isoduration.formatter.exceptions import DurationFormattingException\n\nif TYPE_CHECKING: # pragma: no cover\n from isoduration.types import DateDuration, Duration\n\n\ndef check_global_sign(duration: Duration) -> int:\n is_date_zero = (\n duration.date.years == 0\n and duration.date.months == 0\n and duration.date.days == 0\n and duration.date.weeks == 0\n )\n is_time_zero = (\n duration.time.hours == 0\n and duration.time.minutes == 0\n and duration.time.seconds == 0\n )\n\n is_date_negative = (\n duration.date.years <= 0\n and duration.date.months <= 0\n and duration.date.days <= 0\n and duration.date.weeks <= 0\n )\n is_time_negative = (\n duration.time.hours <= 0\n and duration.time.minutes <= 0\n and duration.time.seconds <= 0\n )\n\n if not is_date_zero and not is_time_zero:\n if is_date_negative and is_time_negative:\n return -1\n elif not is_date_zero:\n if is_date_negative:\n return -1\n elif not is_time_zero:\n if is_time_negative:\n return -1\n\n return +1\n\n\ndef validate_date_duration(date_duration: DateDuration) -> None:\n if date_duration.weeks:\n if date_duration.years or date_duration.months or date_duration.days:\n raise DurationFormattingException(\n "Weeks are incompatible with other date designators"\n )\n
.venv\Lib\site-packages\isoduration\formatter\checking.py
checking.py
Python
1,511
0.95
0.166667
0
react-lib
851
2023-11-20T08:42:02.931487
Apache-2.0
false
26b3124485a30d8da6bc11b90361eeab
"""\nException\n +- ValueError\n | +- DurationFormattingException\n"""\n\n\nclass DurationFormattingException(ValueError):\n ...\n
.venv\Lib\site-packages\isoduration\formatter\exceptions.py
exceptions.py
Python
126
0.85
0.111111
0
python-kit
240
2024-07-18T12:02:40.052608
MIT
false
476aa328a6329cf76fac1590f64da297
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom isoduration.constants import PERIOD_PREFIX, TIME_PREFIX\nfrom isoduration.formatter.checking import validate_date_duration\n\nif TYPE_CHECKING: # pragma: no cover\n from isoduration.types import DateDuration, TimeDuration\n\n\ndef format_date(date_duration: DateDuration, global_sign: int) -> str:\n validate_date_duration(date_duration)\n\n date_duration_str = PERIOD_PREFIX\n\n if date_duration.weeks != 0:\n date_duration_str += f"{(date_duration.weeks * global_sign):g}W"\n\n if date_duration.years != 0:\n date_duration_str += f"{(date_duration.years * global_sign):g}Y"\n if date_duration.months != 0:\n date_duration_str += f"{(date_duration.months * global_sign):g}M"\n if date_duration.days != 0:\n date_duration_str += f"{(date_duration.days * global_sign):g}D"\n\n return date_duration_str\n\n\ndef format_time(time_duration: TimeDuration, global_sign: int) -> str:\n time_duration_str = TIME_PREFIX\n\n if time_duration.hours != 0:\n time_duration_str += f"{(time_duration.hours * global_sign):g}H"\n if time_duration.minutes != 0:\n time_duration_str += f"{(time_duration.minutes * global_sign):g}M"\n if time_duration.seconds != 0:\n time_duration_str += f"{(time_duration.seconds * global_sign):g}S"\n\n if time_duration_str == TIME_PREFIX:\n return ""\n return time_duration_str\n
.venv\Lib\site-packages\isoduration\formatter\formatting.py
formatting.py
Python
1,432
0.95
0.261905
0
vue-tools
557
2024-12-19T23:09:55.777922
BSD-3-Clause
false
4941cd8a17420cc53ea6da5c710a6e30
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom isoduration.constants import PERIOD_PREFIX\nfrom isoduration.formatter.checking import check_global_sign\nfrom isoduration.formatter.formatting import format_date, format_time\n\nif TYPE_CHECKING: # pragma: no cover\n from isoduration.types import Duration\n\n\ndef format_duration(duration: Duration) -> str:\n global_sign = check_global_sign(duration)\n date_duration_str = format_date(duration.date, global_sign)\n time_duration_str = format_time(duration.time, global_sign)\n\n duration_str = f"{date_duration_str}{time_duration_str}"\n sign_str = "-" if global_sign < 0 else ""\n\n if duration_str == PERIOD_PREFIX:\n return f"{PERIOD_PREFIX}0D"\n\n return f"{sign_str}{duration_str}"\n
.venv\Lib\site-packages\isoduration\formatter\__init__.py
__init__.py
Python
778
0.95
0.166667
0
vue-tools
464
2024-09-15T14:49:22.084648
MIT
false
9da1bc8592689cb3e23bea2e04d39b5a
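A brief sketch of format_duration as defined above (assuming isoduration is importable), showing the all-zero normalisation to P0D and the single leading sign factored out by check_global_sign.

from decimal import Decimal

from isoduration import format_duration
from isoduration.types import DateDuration, Duration, TimeDuration

zero = Duration(DateDuration(), TimeDuration())
neg = Duration(DateDuration(days=Decimal(-4)), TimeDuration(hours=Decimal(-12)))

print(format_duration(zero))  # P0D
print(format_duration(neg))   # -P4DT12H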
\n\n
.venv\Lib\site-packages\isoduration\formatter\__pycache__\checking.cpython-313.pyc
checking.cpython-313.pyc
Other
2,559
0.8
0
0
node-utils
490
2024-04-11T19:38:14.600591
BSD-3-Clause
false
3d986e74888c7e2e64098da627d51f5c
\n\n
.venv\Lib\site-packages\isoduration\formatter\__pycache__\exceptions.cpython-313.pyc
exceptions.cpython-313.pyc
Other
543
0.7
0
0
python-kit
953
2024-07-12T01:29:25.330140
MIT
false
46c7c1002884be9ec7d16d8a49931a6c
\n\n
.venv\Lib\site-packages\isoduration\formatter\__pycache__\formatting.cpython-313.pyc
formatting.cpython-313.pyc
Other
1,960
0.8
0
0
vue-tools
792
2024-08-31T06:50:55.405927
Apache-2.0
false
11edf770b5706974fa68aaae0c9ed9dc
\n\n
.venv\Lib\site-packages\isoduration\formatter\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,149
0.8
0
0
vue-tools
909
2025-04-02T00:37:53.086579
Apache-2.0
false
12f2320a055d46549e3ebf7f09f168a4
from decimal import ROUND_FLOOR, Decimal\n\n\ndef quot2(dividend: Decimal, divisor: Decimal) -> Decimal:\n return (dividend / divisor).to_integral_value(rounding=ROUND_FLOOR)\n\n\ndef mod2(dividend: Decimal, divisor: Decimal) -> Decimal:\n return dividend - quot2(dividend, divisor) * divisor\n\n\ndef quot3(value: Decimal, low: Decimal, high: Decimal) -> Decimal:\n dividend = value - low\n divisor = high - low\n return (dividend / divisor).to_integral_value(rounding=ROUND_FLOOR)\n\n\ndef mod3(value: Decimal, low: Decimal, high: Decimal) -> Decimal:\n dividend = value - low\n divisor = high - low\n return mod2(dividend, divisor) + low\n\n\ndef max_day_in_month(year: Decimal, month: Decimal) -> Decimal:\n norm_month = int(mod3(month, Decimal(1), Decimal(13)))\n norm_year = year + quot3(month, Decimal(1), Decimal(13))\n\n if norm_month in (1, 3, 5, 7, 8, 10, 12):\n return Decimal(31)\n if norm_month in (4, 6, 9, 11):\n return Decimal(30)\n\n is_leap_year = (\n mod2(norm_year, Decimal(400)) == 0\n or mod2(norm_year, Decimal(100)) != 0\n and mod2(norm_year, Decimal(4)) == 0\n )\n if norm_month == 2 and is_leap_year:\n return Decimal(29)\n\n return Decimal(28)\n
.venv\Lib\site-packages\isoduration\operations\util.py
util.py
Python
1,222
0.85
0.195122
0
react-lib
603
2025-03-25T23:01:29.081014
BSD-3-Clause
false
dac546eb5b2898c8e26776ad972cc3e7
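A small worked example of the helpers above (assuming isoduration is importable): mod3/quot3 normalise an out-of-range month back into [1, 13) with a year carry, and max_day_in_month applies the Gregorian leap-year rule.

from decimal import Decimal

from isoduration.operations.util import max_day_in_month, mod3, quot3

# Month 14 of some year normalises to month 2 of the following year:
print(mod3(Decimal(14), Decimal(1), Decimal(13)))   # 2
print(quot3(Decimal(14), Decimal(1), Decimal(13)))  # 1 (carried into the year)

print(max_day_in_month(Decimal(2020), Decimal(2)))  # 29 (divisible by 4, not by 100)
print(max_day_in_month(Decimal(2100), Decimal(2)))  # 28 (divisible by 100, not by 400)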
from __future__ import annotations\n\nfrom datetime import datetime\nfrom decimal import ROUND_HALF_UP, Decimal\nfrom typing import TYPE_CHECKING\n\nfrom isoduration.operations.util import max_day_in_month, mod2, mod3, quot2, quot3\n\nif TYPE_CHECKING: # pragma: no cover\n from isoduration.types import Duration\n\n\ndef add(start: datetime, duration: Duration) -> datetime:\n """\n https://www.w3.org/TR/xmlschema-2/#adding-durations-to-dateTimes\n """\n\n # Months.\n temp = Decimal(start.month) + duration.date.months\n end_month = mod3(temp, Decimal(1), Decimal(13))\n carry = quot3(temp, Decimal(1), Decimal(13))\n\n # Years.\n end_year = Decimal(start.year) + duration.date.years + carry\n\n # Zone.\n end_tzinfo = start.tzinfo\n\n # Seconds.\n temp = Decimal(start.second) + duration.time.seconds\n end_second = mod2(temp, Decimal("60"))\n carry = quot2(temp, Decimal("60"))\n\n # Minutes.\n temp = Decimal(start.minute) + duration.time.minutes + carry\n end_minute = mod2(temp, Decimal("60"))\n carry = quot2(temp, Decimal("60"))\n\n # Hours.\n temp = Decimal(start.hour) + duration.time.hours + carry\n end_hour = mod2(temp, Decimal("24"))\n carry = quot2(temp, Decimal("24"))\n\n # Days.\n end_max_day_in_month = max_day_in_month(end_year, end_month)\n\n if start.day > end_max_day_in_month:\n temp = end_max_day_in_month\n else:\n temp = Decimal(start.day)\n\n end_day = temp + duration.date.days + (7 * duration.date.weeks) + carry\n\n while True:\n if end_day < 1:\n end_day += max_day_in_month(end_year, end_month - 1)\n carry = Decimal(-1)\n elif end_day > max_day_in_month(end_year, end_month):\n end_day -= max_day_in_month(end_year, end_month)\n carry = Decimal(1)\n else:\n break\n\n temp = end_month + carry\n end_month = mod3(temp, Decimal(1), Decimal(13))\n end_year = end_year + quot3(temp, Decimal(1), Decimal(13))\n\n return datetime(\n year=int(end_year.to_integral_value(ROUND_HALF_UP)),\n month=int(end_month.to_integral_value(ROUND_HALF_UP)),\n day=int(end_day.to_integral_value(ROUND_HALF_UP)),\n hour=int(end_hour.to_integral_value(ROUND_HALF_UP)),\n minute=int(end_minute.to_integral_value(ROUND_HALF_UP)),\n second=int(end_second.to_integral_value(ROUND_HALF_UP)),\n tzinfo=end_tzinfo,\n )\n
.venv\Lib\site-packages\isoduration\operations\__init__.py
__init__.py
Python
2,406
0.95
0.065789
0.118644
react-lib
938
2024-01-13T20:25:26.190539
GPL-3.0
false
327eacbbe7900e2208dfa7fc8f772c43
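A hedged usage sketch of the addition algorithm above (assuming isoduration is importable). Note how the start day is clamped to the last day of the target month before the day and week components are applied, following the W3C rules referenced in the docstring.

from datetime import datetime

from isoduration import parse_duration

print(datetime(2020, 1, 31) + parse_duration("P1M"))  # 2020-02-29 00:00:00 (clamped, leap year)
print(datetime(2021, 1, 31) + parse_duration("P1M"))  # 2021-02-28 00:00:00 (clamped)
print(datetime(2020, 3, 15) - parse_duration("P1Y"))  # 2019-03-15 00:00:00 (via Duration.__rsub__)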
\n\n
.venv\Lib\site-packages\isoduration\operations\__pycache__\util.cpython-313.pyc
util.cpython-313.pyc
Other
2,227
0.7
0
0
vue-tools
691
2025-07-02T07:00:56.001372
MIT
false
afb1b2e8ffcb62b364f1a808011e29e2
\n\n
.venv\Lib\site-packages\isoduration\operations\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
3,578
0.8
0
0
awesome-app
471
2024-12-30T19:14:26.273964
BSD-3-Clause
false
af45707b3b437a41ec81bf373f7744bc
"""\nException\n +- ValueError\n | +- DurationParsingException\n | | +- EmptyDuration\n | | +- IncorrectDesignator\n | | +- NoTime\n | | +- UnknownToken\n | | +- UnparseableValue\n +- KeyError\n +- OutOfDesignators\n"""\n\n\nclass DurationParsingException(ValueError):\n ...\n\n\nclass EmptyDuration(DurationParsingException):\n ...\n\n\nclass IncorrectDesignator(DurationParsingException):\n ...\n\n\nclass NoTime(DurationParsingException):\n ...\n\n\nclass UnknownToken(DurationParsingException):\n ...\n\n\nclass UnparseableValue(DurationParsingException):\n ...\n\n\nclass OutOfDesignators(KeyError):\n ...\n
.venv\Lib\site-packages\isoduration\parser\exceptions.py
exceptions.py
Python
619
0.85
0.175
0
awesome-app
762
2024-01-14T05:07:24.424616
BSD-3-Clause
false
b229c53f17e7e6829fe0bbf6ecb4eb67
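Because every parsing error above derives from DurationParsingException (itself a ValueError), a single except clause covers them all. A short sketch, assuming isoduration is importable:

from isoduration import DurationParsingException, parse_duration

for bad in ("", "1D", "P1X"):
    try:
        parse_duration(bad)
    except DurationParsingException as exc:
        print(type(exc).__name__, "-", exc)
# EmptyDuration - No duration information provided
# EmptyDuration - No prefix provided
# IncorrectDesignator - Wrong date designator, or designator in the wrong order: X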
from collections import OrderedDict\nfrom decimal import Decimal, InvalidOperation\n\nimport arrow # type: ignore\n\nfrom isoduration.parser.exceptions import (\n IncorrectDesignator,\n NoTime,\n OutOfDesignators,\n UnknownToken,\n UnparseableValue,\n)\nfrom isoduration.parser.util import (\n is_letter,\n is_number,\n is_time,\n is_week,\n parse_designator,\n)\nfrom isoduration.types import DateDuration, Duration, TimeDuration\n\n\ndef parse_datetime_duration(duration_str: str, sign: int) -> Duration:\n try:\n duration: arrow.Arrow = arrow.get(duration_str)\n except (arrow.ParserError, ValueError):\n raise UnparseableValue(f"Value could not be parsed as datetime: {duration_str}")\n\n return Duration(\n DateDuration(\n years=sign * duration.year,\n months=sign * duration.month,\n days=sign * duration.day,\n ),\n TimeDuration(\n hours=sign * duration.hour,\n minutes=sign * duration.minute,\n seconds=sign * duration.second,\n ),\n )\n\n\ndef parse_date_duration(date_str: str, sign: int) -> Duration:\n date_designators = OrderedDict(\n (("Y", "years"), ("M", "months"), ("D", "days"), ("W", "weeks"))\n )\n\n duration = DateDuration()\n tmp_value = ""\n\n for idx, ch in enumerate(date_str):\n if is_time(ch):\n if tmp_value != "" and tmp_value == date_str[:idx]:\n # PYYYY-MM-DDThh:mm:ss\n # PYYYYMMDDThhmmss\n return parse_datetime_duration(date_str, sign)\n\n time_idx = idx + 1\n time_str = date_str[time_idx:]\n\n if time_str == "":\n raise NoTime("Wanted time, no time provided")\n\n return Duration(duration, parse_time_duration(time_str, sign))\n\n if is_letter(ch):\n try:\n key = parse_designator(date_designators, ch)\n value = sign * Decimal(tmp_value)\n except OutOfDesignators as exc:\n raise IncorrectDesignator(\n f"Wrong date designator, or designator in the wrong order: {ch}"\n ) from exc\n except InvalidOperation as exc:\n raise UnparseableValue(\n f"Value could not be parsed as decimal: {tmp_value}"\n ) from exc\n\n if is_week(ch) and duration != DateDuration():\n raise IncorrectDesignator(\n "Week is incompatible with any other date designator"\n )\n\n setattr(duration, key, value)\n tmp_value = ""\n\n continue\n\n if is_number(ch):\n if ch == ",":\n tmp_value += "."\n else:\n tmp_value += ch\n\n continue\n\n raise UnknownToken(f"Token not recognizable: {ch}")\n\n return Duration(duration, TimeDuration())\n\n\ndef parse_time_duration(time_str: str, sign: int) -> TimeDuration:\n time_designators = OrderedDict((("H", "hours"), ("M", "minutes"), ("S", "seconds")))\n\n duration = TimeDuration()\n tmp_value = ""\n\n for ch in time_str:\n if is_letter(ch):\n try:\n key = parse_designator(time_designators, ch)\n value = sign * Decimal(tmp_value)\n except OutOfDesignators as exc:\n raise IncorrectDesignator(\n f"Wrong time designator, or designator in the wrong order: {ch}"\n ) from exc\n except InvalidOperation as exc:\n raise UnparseableValue(\n f"Value could not be parsed as decimal: {tmp_value}"\n ) from exc\n\n setattr(duration, key, value)\n tmp_value = ""\n\n continue\n\n if is_number(ch):\n if ch == ",":\n tmp_value += "."\n else:\n tmp_value += ch\n\n continue\n\n raise UnknownToken(f"Token not recognizable: {ch}")\n\n return duration\n
.venv\Lib\site-packages\isoduration\parser\parsing.py
parsing.py
Python
3,990
0.95
0.131387
0.018692
node-utils
781
2023-10-21T01:20:30.363420
MIT
false
01112764fe5d533d580d77241c1cb837
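A brief sketch (assuming isoduration is importable) of the two parsing paths implemented above: the designator form handled character by character, and the alternative datetime-like form delegated to arrow.

from isoduration import parse_duration

d = parse_duration("P3Y6M4DT12H30M5S")
print(d.date.years, d.time.seconds)            # 3 5

# is_number() also accepts commas as decimal separators:
print(parse_duration("PT0,5S").time.seconds)   # 0.5

# PYYYY-MM-DDThh:mm:ss falls through to parse_datetime_duration():
print(parse_duration("P0003-06-04T12:30:05"))  # P3Y6M4DT12H30M5S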
import re\nfrom typing import Dict\n\nfrom isoduration.constants import PERIOD_PREFIX, TIME_PREFIX, WEEK_PREFIX\nfrom isoduration.parser.exceptions import OutOfDesignators\n\n\ndef is_period(ch: str) -> bool:\n return ch == PERIOD_PREFIX\n\n\ndef is_time(ch: str) -> bool:\n return ch == TIME_PREFIX\n\n\ndef is_week(ch: str) -> bool:\n return ch == WEEK_PREFIX\n\n\ndef is_number(ch: str) -> bool:\n return bool(re.match(r"[+\-0-9.,eE]", ch))\n\n\ndef is_letter(ch: str) -> bool:\n return ch.isalpha() and ch.lower() != "e"\n\n\ndef parse_designator(designators: Dict[str, str], target: str) -> str:\n while True:\n try:\n key, value = designators.popitem(last=False) # type: ignore\n except KeyError as exc:\n raise OutOfDesignators from exc\n\n if key == target:\n return value\n
.venv\Lib\site-packages\isoduration\parser\util.py
util.py
Python
819
0.95
0.25
0
node-utils
115
2025-03-06T00:18:30.813652
MIT
false
038d4d4ef4eae14d183ec25dfeb693ff
from isoduration.parser.exceptions import EmptyDuration\nfrom isoduration.parser.parsing import parse_date_duration\nfrom isoduration.parser.util import is_period\nfrom isoduration.types import Duration\n\n\ndef parse_duration(duration_str: str) -> Duration:\n if len(duration_str) < 2:\n raise EmptyDuration("No duration information provided")\n\n beginning = 1\n first = duration_str[beginning - 1]\n\n sign = +1\n\n if first == "+":\n beginning += 1\n if first == "-":\n sign = -1\n beginning += 1\n\n prefix = duration_str[beginning - 1]\n duration = duration_str[beginning:]\n\n if not is_period(prefix):\n raise EmptyDuration("No prefix provided")\n\n return parse_date_duration(duration, sign)\n
.venv\Lib\site-packages\isoduration\parser\__init__.py
__init__.py
Python
739
0.85
0.178571
0
react-lib
618
2024-10-21T16:26:03.586921
BSD-3-Clause
false
6c623543c662bdb7c19e3dfadf0cec4c
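A short sketch of the sign and prefix handling above (assuming isoduration is importable); the leading sign is passed down so it ends up on every parsed component.

from isoduration import parse_duration

print(parse_duration("-P1DT2H"))            # -P1DT2H
print(parse_duration("-P1DT2H").date.days)  # -1
print(parse_duration("+P1DT2H"))            # P1DT2H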
\n\n
.venv\Lib\site-packages\isoduration\parser\__pycache__\exceptions.cpython-313.pyc
exceptions.cpython-313.pyc
Other
1,694
0.8
0
0
python-kit
215
2023-12-26T03:36:30.512043
MIT
false
6b6149f8c08d670c2c24463ecd7433bc
\n\n
.venv\Lib\site-packages\isoduration\parser\__pycache__\parsing.cpython-313.pyc
parsing.cpython-313.pyc
Other
4,497
0.8
0
0
node-utils
490
2023-11-01T20:14:19.865563
Apache-2.0
false
30f04718630fbf4206c50977831efad2
\n\n
.venv\Lib\site-packages\isoduration\parser\__pycache__\util.cpython-313.pyc
util.cpython-313.pyc
Other
1,832
0.8
0
0
python-kit
45
2025-01-16T02:44:11.343855
GPL-3.0
false
4a82f416fbe13c8462c93f997ad8ff29
\n\n
.venv\Lib\site-packages\isoduration\parser\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,102
0.7
0
0
react-lib
999
2023-08-02T12:45:06.811154
MIT
false
4dbbb67f3fba3c8be2995d8249e57d67
\n\n
.venv\Lib\site-packages\isoduration\__pycache__\constants.cpython-313.pyc
constants.cpython-313.pyc
Other
261
0.7
0
0
awesome-app
516
2024-03-16T13:31:07.179232
Apache-2.0
false
2d6534e15ffbc82ed00db40658a8b146
\n\n
.venv\Lib\site-packages\isoduration\__pycache__\types.cpython-313.pyc
types.cpython-313.pyc
Other
4,755
0.8
0
0
react-lib
383
2025-05-09T04:08:50.321514
Apache-2.0
false
5726b28b9baf0cccc9c41599219ca4d1
\n\n
.venv\Lib\site-packages\isoduration\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
514
0.7
0
0
react-lib
339
2024-07-23T02:47:21.378575
Apache-2.0
false
0e6f597cd48af6cf0a61fbbcf956f3e1
pip\n
.venv\Lib\site-packages\isoduration-20.11.0.dist-info\INSTALLER
INSTALLER
Other
4
0.5
0
0
node-utils
917
2024-01-25T22:54:13.262653
GPL-3.0
false
365c9bfeb7d89244f2ce01c1de44cb85
Copyright (c) 2020 Víctor Muñoz <victorm@marshland.es>\n\nPermission to use, copy, modify, and distribute this software for any\npurpose with or without fee is hereby granted, provided that the above\ncopyright notice and this permission notice appear in all copies.\n\nTHE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\nWITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\nANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\nWHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\nACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\nOR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n
.venv\Lib\site-packages\isoduration-20.11.0.dist-info\LICENSE
LICENSE
Other
752
0.7
0.076923
0
python-kit
809
2024-01-02T15:45:32.591911
Apache-2.0
false
e3a2f086f31826f64c46f8b519e5dfc4
Metadata-Version: 2.1\nName: isoduration\nVersion: 20.11.0\nSummary: Operations with ISO 8601 durations\nHome-page: https://github.com/bolsote/isoduration\nAuthor: Víctor Muñoz\nAuthor-email: victorm@marshland.es\nLicense: UNKNOWN\nProject-URL: Repository, https://github.com/bolsote/isoduration\nProject-URL: Bug Reports, https://github.com/bolsote/isoduration/issues\nProject-URL: Changelog, https://github.com/bolsote/isoduration/blob/master/CHANGELOG\nKeywords: datetime,date,time,duration,duration-parsing,duration-string,iso8601,iso8601-duration\nPlatform: UNKNOWN\nClassifier: Programming Language :: Python :: 3\nClassifier: Operating System :: OS Independent\nClassifier: License :: OSI Approved :: ISC License (ISCL)\nClassifier: Development Status :: 4 - Beta\nClassifier: Intended Audience :: Developers\nClassifier: Topic :: Software Development :: Libraries :: Python Modules\nRequires-Python: >=3.7\nDescription-Content-Type: text/markdown\nRequires-Dist: arrow (>=0.15.0)\n\n# isoduration: Operations with ISO 8601 durations.\n\n[![PyPI Package](https://img.shields.io/pypi/v/isoduration?style=flat-square)](https://pypi.org/project/isoduration/)\n\n## What is this.\n\nISO 8601 is most commonly known as a way to exchange datetimes in textual format. A\nlesser known aspect of the standard is the representation of durations. They have a\nshape similar to this:\n\n```\nP3Y6M4DT12H30M5S\n```\n\nThis string represents a duration of 3 years, 6 months, 4 days, 12 hours, 30 minutes,\nand 5 seconds.\n\nThe state of the art of ISO 8601 duration handling in Python is more or less limited to\nwhat's offered by [`isodate`](https://pypi.org/project/isodate/). What we are trying to\nachieve here is to address the shortcomings of `isodate` (as described in their own\n[_Limitations_](https://github.com/gweis/isodate/#limitations) section), and a few of\nour own annoyances with their interface, such as the lack of uniformity in their\nhandling of types, and the use of regular expressions for parsing.\n\n## How to use it.\n\nThis package revolves around the [`Duration`](src/isoduration/types.py) type.\n\nGiven a ISO duration string we can produce such a type by using the `parse_duration()`\nfunction:\n\n```py\n>>> from isoduration import parse_duration\n>>> duration = parse_duration("P3Y6M4DT12H30M5S")\n>>> duration.date\nDateDuration(years=Decimal('3'), months=Decimal('6'), days=Decimal('4'), weeks=Decimal('0'))\n>>> duration.time\nTimeDuration(hours=Decimal('12'), minutes=Decimal('30'), seconds=Decimal('5'))\n```\n\nThe `date` and `time` portions of the parsed duration are just regular\n[dataclasses](https://docs.python.org/3/library/dataclasses.html), so their members can\nbe accessed in a non-surprising way.\n\nBesides just parsing them, a number of additional operations are available:\n\n- Durations can be compared and negated:\n ```py\n >>> parse_duration("P3Y4D") == parse_duration("P3Y4DT0H")\n True\n >>> -parse_duration("P3Y4D")\n Duration(DateDuration(years=Decimal('-3'), months=Decimal('0'), days=Decimal('-4'), weeks=Decimal('0')), TimeDuration(hours=Decimal('0'), minutes=Decimal('0'), seconds=Decimal('0')))\n ```\n- Durations can be added to, or subtracted from, Python datetimes:\n ```py\n >>> from datetime import datetime\n >>> datetime(2020, 3, 15) + parse_duration("P2Y")\n datetime.datetime(2022, 3, 15, 0, 0)\n >>> datetime(2020, 3, 15) - parse_duration("P33Y1M4D")\n datetime.datetime(1987, 2, 11, 0, 0)\n ```\n- Durations are hashable, so they can be used as dictionary keys or as part of sets.\n- 
Durations can be formatted back to a ISO 8601-compliant duration string:\n ```py\n >>> from isoduration import parse_duration, format_duration\n >>> format_duration(parse_duration("P11YT2H"))\n 'P11YT2H'\n >>> str(parse_duration("P11YT2H"))\n 'P11YT2H'\n ```\n\n## How to improve it.\n\nThese steps, in this order, should land you in a development environment:\n\n```sh\ngit clone git@github.com:bolsote/isoduration.git\ncd isoduration/\npython -m venv ve\n. ve/bin/activate\npip install -U pip\npip install -e .\npip install -r requirements/dev.txt\n```\n\nAdapt to your own likings and/or needs.\n\nTesting is driven by [tox](https://tox.readthedocs.io). The output of `tox -l` and a\ncareful read of [tox.ini](tox.ini) should get you there.\n\n## FAQs.\n\n### How come `P1Y != P365D`?\nSome years have 366 days. If it's not always the same, then it's not the same.\n\n### Why do you create your own types, instead of somewhat shoehorning a `timedelta`?\n`timedelta` cannot represent certain durations, such as those involving years or months.\nSince it cannot represent all possible durations without dangerous arithmetic, then it\nmust not be the right type.\n\n### Why don't you use regular expressions to parse duration strings?\n[Regular expressions should only be used to parse regular languages.](https://stackoverflow.com/a/1732454)\n\n### Why is parsing the inverse of formatting, but the converse is not true?\nBecause this wonderful representation is not unique.\n\n### Why do you support `<insert here a weird case>`?\nProbably because the standard made me to.\n\n### Why do you not support `<insert here a weird case>`?\nProbably because the standard doesn't allow me to.\n\n### Why is it not possible to subtract a datetime from a duration?\nI'm confused.\n\n### Why should I use this over some other thing?\nYou shouldn't do what people on the Internet tell you to do.\n\n### Why are ISO standards so strange?\nYes.\n\n## References.\n\n- [XML Schema Part 2: Datatypes, Appendix D](https://www.w3.org/TR/xmlschema-2/#isoformats):\n This excitingly named document contains more details about ISO 8601 than any human\n should be allowed to understand.\n- [`isodate`](https://pypi.org/project/isodate/): The original implementation of ISO\n durations in Python. Worth a look. But ours is cooler.\n\n\n
.venv\Lib\site-packages\isoduration-20.11.0.dist-info\METADATA
METADATA
Other
5,742
0.95
0.013072
0.125
react-lib
0
2023-07-16T04:00:01.386416
MIT
false
787358f0b08aec5a7d8b215e841dc911
isoduration-20.11.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nisoduration-20.11.0.dist-info/LICENSE,sha256=RSwEEOmj11q77xtstRnTGyMEcJLKAPm9YZ33yw2Lmpk,752\nisoduration-20.11.0.dist-info/METADATA,sha256=RABFIBFUgneTytJLr71U3spoAIGSWeOxf27bwGyntWs,5742\nisoduration-20.11.0.dist-info/RECORD,,\nisoduration-20.11.0.dist-info/WHEEL,sha256=EVRjI69F5qVjm_YgqcTXPnTAv3BfSUr0WVAHuSP3Xoo,92\nisoduration-20.11.0.dist-info/top_level.txt,sha256=wTZajj7Q01cAPxUFkF30MQ2Qmewtk1LOMJTg4S-VB58,12\nisoduration/__init__.py,sha256=Z_S-YBAr65l0V03dinLxRLu7QFjeMzA3ahFpUOQmSdA,363\nisoduration/__pycache__/__init__.cpython-313.pyc,,\nisoduration/__pycache__/constants.cpython-313.pyc,,\nisoduration/__pycache__/types.cpython-313.pyc,,\nisoduration/constants.py,sha256=qa_KFYHwCDJzriC1g6yNcXrA2F648QbtRjqZoggouIQ,56\nisoduration/formatter/__init__.py,sha256=JvSMXT7Oq2I6PoNN6nO1c23AIqEDocUM8LT-NUasf2Y,778\nisoduration/formatter/__pycache__/__init__.cpython-313.pyc,,\nisoduration/formatter/__pycache__/checking.cpython-313.pyc,,\nisoduration/formatter/__pycache__/exceptions.cpython-313.pyc,,\nisoduration/formatter/__pycache__/formatting.cpython-313.pyc,,\nisoduration/formatter/checking.py,sha256=Blxq7eDL6Fn_6msnO6Hr9rA7dLvIsuzR6Hpn8jZc1zA,1511\nisoduration/formatter/exceptions.py,sha256=b-dxFOFoXidhCMQa3Zt8uXL6DTjJgl9Fx7z_SMCM9DE,126\nisoduration/formatter/formatting.py,sha256=uamV1Cn1NE_6opP2vvf2EGPsjUkk_9UVXaN2qXaptLU,1432\nisoduration/operations/__init__.py,sha256=0nq9HguBzogrirRSbMFAAJEc20ANTNe8VlsEuzXrdaw,2406\nisoduration/operations/__pycache__/__init__.cpython-313.pyc,,\nisoduration/operations/__pycache__/util.cpython-313.pyc,,\nisoduration/operations/util.py,sha256=LxwNzGLIddf1m6c4iKC3_ZSkPlmXRCDk1OG2Dp7-rS0,1222\nisoduration/parser/__init__.py,sha256=XUKdlikQ8pbAFqEQK-F7zwexj-KP69Q7lRulEIOQyio,739\nisoduration/parser/__pycache__/__init__.cpython-313.pyc,,\nisoduration/parser/__pycache__/exceptions.cpython-313.pyc,,\nisoduration/parser/__pycache__/parsing.cpython-313.pyc,,\nisoduration/parser/__pycache__/util.cpython-313.pyc,,\nisoduration/parser/exceptions.py,sha256=SyBuy2fGDsacvspc5g-YX0Lf9EbuoSmYhvNeYmeGLk4,619\nisoduration/parser/parsing.py,sha256=j6O5N2m5pJvWucdyGY_K_Q6T2c5fqsCup-h3kzmzios,3990\nisoduration/parser/util.py,sha256=JoUAuQ0CgMG6pYq9HotuHwSKDwo4Qt25XGluC8xK1gU,819\nisoduration/types.py,sha256=hDwsvb_BENb0wG6f-IgDUAkzUCNvQyrL6lx2xa-L8VM,2266\n
.venv\Lib\site-packages\isoduration-20.11.0.dist-info\RECORD
RECORD
Other
2,405
0.7
0
0
react-lib
396
2024-01-10T07:33:54.346085
BSD-3-Clause
false
cc1e62ae37a2232bbd8b1e82b5dd0eb1
isoduration\n
.venv\Lib\site-packages\isoduration-20.11.0.dist-info\top_level.txt
top_level.txt
Other
12
0.5
0
0
vue-tools
527
2024-03-10T05:46:27.940908
BSD-3-Clause
false
9beea9ac4e9be9df8a39103c98f479f6
Wheel-Version: 1.0\nGenerator: bdist_wheel (0.35.1)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n
.venv\Lib\site-packages\isoduration-20.11.0.dist-info\WHEEL
WHEEL
Other
92
0.5
0
0
vue-tools
677
2025-04-25T14:12:52.329581
GPL-3.0
false
5ccc7519eb42f1dfceee6e7d685f1ff5
"""\nThis caching is very important for speed and memory optimizations. There's\nnothing really spectacular, just some decorators. The following cache types are\navailable:\n\n- ``time_cache`` can be used to cache something for just a limited time span,\n which can be useful if there's user interaction and the user cannot react\n faster than a certain time.\n\nThis module is one of the reasons why |jedi| is not thread-safe. As you can see\nthere are global variables, which are holding the cache information. Some of\nthese variables are being cleaned after every API usage.\n"""\nimport time\nfrom functools import wraps\nfrom typing import Any, Dict, Tuple\n\nfrom jedi import settings\nfrom parso.cache import parser_cache\n\n_time_caches: Dict[str, Dict[Any, Tuple[float, Any]]] = {}\n\n\ndef clear_time_caches(delete_all: bool = False) -> None:\n """ Jedi caches many things, that should be completed after each completion\n finishes.\n\n :param delete_all: Deletes also the cache that is normally not deleted,\n like parser cache, which is important for faster parsing.\n """\n global _time_caches\n\n if delete_all:\n for cache in _time_caches.values():\n cache.clear()\n parser_cache.clear()\n else:\n # normally just kill the expired entries, not all\n for tc in _time_caches.values():\n # check time_cache for expired entries\n for key, (t, value) in list(tc.items()):\n if t < time.time():\n # delete expired entries\n del tc[key]\n\n\ndef signature_time_cache(time_add_setting):\n """\n This decorator works as follows: Call it with a setting and after that\n use the function with a callable that returns the key.\n But: This function is only called if the key is not available. After a\n certain amount of time (`time_add_setting`) the cache is invalid.\n\n If the given key is None, the function will not be cached.\n """\n def _temp(key_func):\n dct = {}\n _time_caches[time_add_setting] = dct\n\n def wrapper(*args, **kwargs):\n generator = key_func(*args, **kwargs)\n key = next(generator)\n try:\n expiry, value = dct[key]\n if expiry > time.time():\n return value\n except KeyError:\n pass\n\n value = next(generator)\n time_add = getattr(settings, time_add_setting)\n if key is not None:\n dct[key] = time.time() + time_add, value\n return value\n return wrapper\n return _temp\n\n\ndef time_cache(seconds):\n def decorator(func):\n cache = {}\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n key = (args, frozenset(kwargs.items()))\n try:\n created, result = cache[key]\n if time.time() < created + seconds:\n return result\n except KeyError:\n pass\n result = func(*args, **kwargs)\n cache[key] = time.time(), result\n return result\n\n wrapper.clear_cache = lambda: cache.clear()\n return wrapper\n\n return decorator\n\n\ndef memoize_method(method):\n """A normal memoize function."""\n @wraps(method)\n def wrapper(self, *args, **kwargs):\n cache_dict = self.__dict__.setdefault('_memoize_method_dct', {})\n dct = cache_dict.setdefault(method, {})\n key = (args, frozenset(kwargs.items()))\n try:\n return dct[key]\n except KeyError:\n result = method(self, *args, **kwargs)\n dct[key] = result\n return result\n return wrapper\n
.venv\Lib\site-packages\jedi\cache.py
cache.py
Python
3,674
0.95
0.26087
0.031579
python-kit
841
2025-07-05T22:18:12.980524
MIT
false
fe52c32418ee97efb6aa729499856bff
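An illustrative sketch, not taken from jedi itself, of the time_cache decorator defined above (assuming jedi is importable): results are reused until `seconds` have elapsed, and the wrapper exposes clear_cache for manual invalidation.

import time

from jedi.cache import time_cache

@time_cache(seconds=0.5)
def now():
    return time.time()

a = now()
b = now()        # within 0.5s: served from the cache
time.sleep(0.6)
c = now()        # expired: recomputed and re-cached
print(a == b, a == c)  # True False
now.clear_cache()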
from contextlib import contextmanager\n\n\n@contextmanager\ndef monkeypatch(obj, attribute_name, new_value):\n """\n Like pytest's monkeypatch, but as a value manager.\n """\n old_value = getattr(obj, attribute_name)\n try:\n setattr(obj, attribute_name, new_value)\n yield\n finally:\n setattr(obj, attribute_name, old_value)\n\n\ndef indent_block(text, indention=' '):\n """This function indents a text block with a default of four spaces."""\n temp = ''\n while text and text[-1] == '\n':\n temp += text[-1]\n text = text[:-1]\n lines = text.split('\n')\n return '\n'.join(map(lambda s: indention + s, lines)) + temp\n
.venv\Lib\site-packages\jedi\common.py
common.py
Python
668
0.85
0.208333
0
react-lib
191
2024-05-16T00:16:26.931643
Apache-2.0
false
7e2aba8cf04dce3a9aabea5a4355a3a8
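A quick sketch of the two helpers above (assuming jedi is importable): monkeypatch restores the original attribute even if the body raises, and indent_block preserves trailing newlines.

from jedi.common import indent_block, monkeypatch

print(indent_block("a\nb\n"))  # both lines indented by the default indention, final newline kept

class Cfg:
    value = 1

with monkeypatch(Cfg, "value", 99):
    assert Cfg.value == 99
assert Cfg.value == 1  # restored on exit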
import os\nimport time\nfrom contextlib import contextmanager\nfrom typing import Callable, Optional\n\n_inited = False\n\n\ndef _lazy_colorama_init():\n """\n Lazily init colorama if necessary, not to screw up stdout if debugging is\n not enabled.\n\n This version of the function does nothing.\n """\n\n\ntry:\n if os.name == 'nt':\n # Does not work on Windows, as pyreadline and colorama interfere\n raise ImportError\n else:\n # Use colorama for nicer console output.\n from colorama import Fore, init # type: ignore[import]\n from colorama import initialise\n\n def _lazy_colorama_init(): # noqa: F811\n """\n Lazily init colorama if necessary, not to screw up stdout is\n debug not enabled.\n\n This version of the function does init colorama.\n """\n global _inited\n if not _inited:\n # pytest resets the stream at the end - causes troubles. Since\n # after every output the stream is reset automatically we don't\n # need this.\n initialise.atexit_done = True\n try:\n init(strip=False)\n except Exception:\n # Colorama fails with initializing under vim and is buggy in\n # version 0.3.6.\n pass\n _inited = True\n\nexcept ImportError:\n class Fore: # type: ignore[no-redef]\n RED = ''\n GREEN = ''\n YELLOW = ''\n MAGENTA = ''\n RESET = ''\n BLUE = ''\n\nNOTICE = object()\nWARNING = object()\nSPEED = object()\n\nenable_speed = False\nenable_warning = False\nenable_notice = False\n\n# callback, interface: level, str\ndebug_function: Optional[Callable[[str, str], None]] = None\n_debug_indent = 0\n_start_time = time.time()\n\n\ndef reset_time():\n global _start_time, _debug_indent\n _start_time = time.time()\n _debug_indent = 0\n\n\ndef increase_indent(func):\n """Decorator for makin """\n def wrapper(*args, **kwargs):\n with increase_indent_cm():\n return func(*args, **kwargs)\n return wrapper\n\n\n@contextmanager\ndef increase_indent_cm(title=None, color='MAGENTA'):\n global _debug_indent\n if title:\n dbg('Start: ' + title, color=color)\n _debug_indent += 1\n try:\n yield\n finally:\n _debug_indent -= 1\n if title:\n dbg('End: ' + title, color=color)\n\n\ndef dbg(message, *args, color='GREEN'):\n """ Looks at the stack, to see if a debug message should be printed. """\n assert color\n\n if debug_function and enable_notice:\n i = ' ' * _debug_indent\n _lazy_colorama_init()\n debug_function(color, i + 'dbg: ' + message % tuple(repr(a) for a in args))\n\n\ndef warning(message, *args, format=True):\n if debug_function and enable_warning:\n i = ' ' * _debug_indent\n if format:\n message = message % tuple(repr(a) for a in args)\n debug_function('RED', i + 'warning: ' + message)\n\n\ndef speed(name):\n if debug_function and enable_speed:\n now = time.time()\n i = ' ' * _debug_indent\n debug_function('YELLOW', i + 'speed: ' + '%s %s' % (name, now - _start_time))\n\n\ndef print_to_stdout(color, str_out):\n """\n The default debug function that prints to standard out.\n\n :param str color: A string that is an attribute of ``colorama.Fore``.\n """\n col = getattr(Fore, color)\n _lazy_colorama_init()\n print(col + str_out + Fore.RESET)\n
.venv\Lib\site-packages\jedi\debug.py
debug.py
Python
3,504
0.95
0.25
0.076923
node-utils
591
2023-10-05T15:10:13.232990
GPL-3.0
false
88de153aa4d7bd3268a192f77e0754aa
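A hedged sketch of how the hooks above fit together (assuming jedi is importable); jedi's public API also exposes jedi.set_debug_function() for the same purpose.

from jedi import debug

debug.debug_function = debug.print_to_stdout
debug.enable_notice = True

with debug.increase_indent_cm("demo block"):
    debug.dbg("answer is %s", 42)  # printed indented via print_to_stdout

debug.debug_function = None  # switch debug output back off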
import os\n\nfrom parso import file_io\n\n\nclass AbstractFolderIO:\n def __init__(self, path):\n self.path = path\n\n def get_base_name(self):\n raise NotImplementedError\n\n def list(self):\n raise NotImplementedError\n\n def get_file_io(self, name):\n raise NotImplementedError\n\n def get_parent_folder(self):\n raise NotImplementedError\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.path)\n\n\nclass FolderIO(AbstractFolderIO):\n def get_base_name(self):\n return os.path.basename(self.path)\n\n def list(self):\n return os.listdir(self.path)\n\n def get_file_io(self, name):\n return FileIO(os.path.join(self.path, name))\n\n def get_parent_folder(self):\n return FolderIO(os.path.dirname(self.path))\n\n def walk(self):\n for root, dirs, files in os.walk(self.path):\n root_folder_io = FolderIO(root)\n original_folder_ios = [FolderIO(os.path.join(root, d)) for d in dirs]\n modified_folder_ios = list(original_folder_ios)\n yield (\n root_folder_io,\n modified_folder_ios,\n [FileIO(os.path.join(root, f)) for f in files],\n )\n modified_iterator = iter(reversed(modified_folder_ios))\n current = next(modified_iterator, None)\n i = len(original_folder_ios)\n for folder_io in reversed(original_folder_ios):\n i -= 1 # Basically enumerate but reversed\n if current is folder_io:\n current = next(modified_iterator, None)\n else:\n del dirs[i]\n\n\nclass FileIOFolderMixin:\n def get_parent_folder(self):\n return FolderIO(os.path.dirname(self.path))\n\n\nclass ZipFileIO(file_io.KnownContentFileIO, FileIOFolderMixin):\n """For .zip and .egg archives"""\n def __init__(self, path, code, zip_path):\n super().__init__(path, code)\n self._zip_path = zip_path\n\n def get_last_modified(self):\n try:\n return os.path.getmtime(self._zip_path)\n except (FileNotFoundError, PermissionError, NotADirectoryError):\n return None\n\n\nclass FileIO(file_io.FileIO, FileIOFolderMixin):\n pass\n\n\nclass KnownContentFileIO(file_io.KnownContentFileIO, FileIOFolderMixin):\n pass\n
.venv\Lib\site-packages\jedi\file_io.py
file_io.py
Python
2,337
0.95
0.313253
0
vue-tools
226
2024-10-24T18:56:33.199785
MIT
false
2e98bebc58761ae6c2441a3fd8ff13b0
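A sketch of the pruning contract of FolderIO.walk() above (assuming jedi is importable): as with os.walk, removing entries from the yielded folder list stops descent into them, because walk() matches the surviving FolderIO objects by identity.

from jedi.file_io import FolderIO

for folder_io, folder_ios, file_ios in FolderIO(".").walk():
    # Filter in place so the kept FolderIO objects stay identical.
    folder_ios[:] = [f for f in folder_ios if f.get_base_name() != ".git"]
    print(folder_io.path, [f.path for f in file_ios])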
"""\nUtilities for end-users.\n"""\n\nimport __main__\nfrom collections import namedtuple\nimport logging\nimport traceback\nimport re\nimport os\nimport sys\n\nfrom jedi import Interpreter\n\n\nREADLINE_DEBUG = False\n\n\ndef setup_readline(namespace_module=__main__, fuzzy=False):\n """\n This function sets up :mod:`readline` to use Jedi in a Python interactive\n shell.\n\n If you want to use a custom ``PYTHONSTARTUP`` file (typically\n ``$HOME/.pythonrc.py``), you can add this piece of code::\n\n try:\n from jedi.utils import setup_readline\n except ImportError:\n # Fallback to the stdlib readline completer if it is installed.\n # Taken from http://docs.python.org/2/library/rlcompleter.html\n print("Jedi is not installed, falling back to readline")\n try:\n import readline\n import rlcompleter\n readline.parse_and_bind("tab: complete")\n except ImportError:\n print("Readline is not installed either. No tab completion is enabled.")\n else:\n setup_readline()\n\n This will fallback to the readline completer if Jedi is not installed.\n The readline completer will only complete names in the global namespace,\n so for example::\n\n ran<TAB>\n\n will complete to ``range``.\n\n With Jedi the following code::\n\n range(10).cou<TAB>\n\n will complete to ``range(10).count``, this does not work with the default\n cPython :mod:`readline` completer.\n\n You will also need to add ``export PYTHONSTARTUP=$HOME/.pythonrc.py`` to\n your shell profile (usually ``.bash_profile`` or ``.profile`` if you use\n bash).\n """\n if READLINE_DEBUG:\n logging.basicConfig(\n filename='/tmp/jedi.log',\n filemode='a',\n level=logging.DEBUG\n )\n\n class JediRL:\n def complete(self, text, state):\n """\n This complete stuff is pretty weird, a generator would make\n a lot more sense, but probably due to backwards compatibility\n this is still the way how it works.\n\n The only important part is stuff in the ``state == 0`` flow,\n everything else has been copied from the ``rlcompleter`` std.\n library module.\n """\n if state == 0:\n sys.path.insert(0, os.getcwd())\n # Calling python doesn't have a path, so add to sys.path.\n try:\n logging.debug("Start REPL completion: " + repr(text))\n interpreter = Interpreter(text, [namespace_module.__dict__])\n\n completions = interpreter.complete(fuzzy=fuzzy)\n logging.debug("REPL completions: %s", completions)\n\n self.matches = [\n text[:len(text) - c._like_name_length] + c.name_with_symbols\n for c in completions\n ]\n except:\n logging.error("REPL Completion error:\n" + traceback.format_exc())\n raise\n finally:\n sys.path.pop(0)\n try:\n return self.matches[state]\n except IndexError:\n return None\n\n try:\n # Need to import this one as well to make sure it's executed before\n # this code. This didn't use to be an issue until 3.3. 
Starting with\n # 3.4 this is different, it always overwrites the completer if it's not\n # already imported here.\n import rlcompleter # noqa: F401\n import readline\n except ImportError:\n print("Jedi: Module readline not available.")\n else:\n readline.set_completer(JediRL().complete)\n readline.parse_and_bind("tab: complete")\n # jedi itself does the case matching\n readline.parse_and_bind("set completion-ignore-case on")\n # because it's easier to hit the tab just once\n readline.parse_and_bind("set show-all-if-unmodified")\n readline.parse_and_bind("set show-all-if-ambiguous on")\n # don't repeat all the things written in the readline all the time\n readline.parse_and_bind("set completion-prefix-display-length 2")\n # No delimiters, Jedi handles that.\n readline.set_completer_delims('')\n\n\ndef version_info():\n """\n Returns a namedtuple of Jedi's version, similar to Python's\n ``sys.version_info``.\n """\n Version = namedtuple('Version', 'major, minor, micro')\n from jedi import __version__\n tupl = re.findall(r'[a-z]+|\d+', __version__)\n return Version(*[x if i == 3 else int(x) for i, x in enumerate(tupl)])\n
.venv\Lib\site-packages\jedi\utils.py
utils.py
Python
4,680
0.95
0.171642
0.098214
react-lib
593
2025-04-30T08:34:07.421431
MIT
false
7dfead2d6a61a165bfa4736b1cc6c3ba
"""\nThis module is here to ensure compatibility of Windows/Linux/MacOS and\ndifferent Python versions.\n"""\nimport errno\nimport sys\nimport pickle\nfrom typing import Any\n\n\nclass Unpickler(pickle.Unpickler):\n def find_class(self, module: str, name: str) -> Any:\n # Python 3.13 moved pathlib implementation out of __init__.py as part of\n # generalising its implementation. Ensure that we support loading\n # pickles from 3.13 on older version of Python. Since 3.13 maintained a\n # compatible API, pickles from older Python work natively on the newer\n # version.\n if module == 'pathlib._local':\n module = 'pathlib'\n return super().find_class(module, name)\n\n\ndef pickle_load(file):\n try:\n return Unpickler(file).load()\n # Python on Windows don't throw EOF errors for pipes. So reraise them with\n # the correct type, which is caught upwards.\n except OSError:\n if sys.platform == 'win32':\n raise EOFError()\n raise\n\n\ndef pickle_dump(data, file, protocol):\n try:\n pickle.dump(data, file, protocol)\n # On Python 3.3 flush throws sometimes an error even though the writing\n # operation should be completed.\n file.flush()\n # Python on Windows don't throw EPIPE errors for pipes. So reraise them with\n # the correct type and error number.\n except OSError:\n if sys.platform == 'win32':\n raise IOError(errno.EPIPE, "Broken pipe")\n raise\n
.venv\Lib\site-packages\jedi\_compatibility.py
_compatibility.py
Python
1,491
0.95
0.244444
0.282051
node-utils
999
2024-02-17T23:39:41.095491
BSD-3-Clause
false
0b1c802b7e9812bcdcbaba2f840374c9
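A round-trip sketch for the helpers above (assuming jedi is importable); the Windows-specific error translation only applies when the underlying OSError is actually raised.

import io
import pickle

from jedi._compatibility import pickle_dump, pickle_load

buf = io.BytesIO()
pickle_dump({"a": 1}, buf, pickle.HIGHEST_PROTOCOL)
buf.seek(0)
print(pickle_load(buf))  # {'a': 1}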
import sys\nfrom os.path import join, dirname, abspath, isdir\n\n\ndef _start_linter():\n """\n This is a pre-alpha API. You're not supposed to use it at all, except for\n testing. It will very likely change.\n """\n import jedi\n\n if '--debug' in sys.argv:\n jedi.set_debug_function()\n\n for path in sys.argv[2:]:\n if path.startswith('--'):\n continue\n if isdir(path):\n import fnmatch\n import os\n\n paths = []\n for root, dirnames, filenames in os.walk(path):\n for filename in fnmatch.filter(filenames, '*.py'):\n paths.append(os.path.join(root, filename))\n else:\n paths = [path]\n\n try:\n for p in paths:\n for error in jedi.Script(path=p)._analysis():\n print(error)\n except Exception:\n if '--pdb' in sys.argv:\n import traceback\n traceback.print_exc()\n import pdb\n pdb.post_mortem()\n else:\n raise\n\n\ndef _complete():\n import jedi\n import pdb\n\n if '-d' in sys.argv:\n sys.argv.remove('-d')\n jedi.set_debug_function()\n\n try:\n completions = jedi.Script(sys.argv[2]).complete()\n for c in completions:\n c.docstring()\n c.type\n except Exception as e:\n print(repr(e))\n pdb.post_mortem()\n else:\n print(completions)\n\n\nif len(sys.argv) == 2 and sys.argv[1] == 'repl':\n # don't want to use __main__ only for repl yet, maybe we want to use it for\n # something else. So just use the keyword ``repl`` for now.\n print(join(dirname(abspath(__file__)), 'api', 'replstartup.py'))\nelif len(sys.argv) > 1 and sys.argv[1] == '_linter':\n _start_linter()\nelif len(sys.argv) > 1 and sys.argv[1] == '_complete':\n _complete()\nelse:\n print('Command not implemented: %s' % sys.argv[1])\n
.venv\Lib\site-packages\jedi\__main__.py
__main__.py
Python
1,950
0.95
0.277778
0.033333
awesome-app
596
2024-10-12T03:14:45.649085
MIT
false
86f3f1194847c586f80344ec01b3c97e
"""\nThere are a couple of classes documented in here:\n\n- :class:`.BaseName` as an abstact base class for almost everything.\n- :class:`.Name` used in a lot of places\n- :class:`.Completion` for completions\n- :class:`.BaseSignature` as a base class for signatures\n- :class:`.Signature` for :meth:`.Script.get_signatures` only\n- :class:`.ParamName` used for parameters of signatures\n- :class:`.Refactoring` for refactorings\n- :class:`.SyntaxError` for :meth:`.Script.get_syntax_errors` only\n\nThese classes are the much biggest part of the API, because they contain\nthe interesting information about all operations.\n"""\nimport re\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom parso.tree import search_ancestor\n\nfrom jedi import settings\nfrom jedi import debug\nfrom jedi.inference.utils import unite\nfrom jedi.cache import memoize_method\nfrom jedi.inference.compiled.mixed import MixedName\nfrom jedi.inference.names import ImportName, SubModuleName\nfrom jedi.inference.gradual.stub_value import StubModuleValue\nfrom jedi.inference.gradual.conversion import convert_names, convert_values\nfrom jedi.inference.base_value import ValueSet, HasNoContext\nfrom jedi.api.keywords import KeywordName\nfrom jedi.api import completion_cache\nfrom jedi.api.helpers import filter_follow_imports\n\n\ndef _sort_names_by_start_pos(names):\n return sorted(names, key=lambda s: s.start_pos or (0, 0))\n\n\ndef defined_names(inference_state, value):\n """\n List sub-definitions (e.g., methods in class).\n\n :type scope: Scope\n :rtype: list of Name\n """\n try:\n context = value.as_context()\n except HasNoContext:\n return []\n filter = next(context.get_filters())\n names = [name for name in filter.values()]\n return [Name(inference_state, n) for n in _sort_names_by_start_pos(names)]\n\n\ndef _values_to_definitions(values):\n return [Name(c.inference_state, c.name) for c in values]\n\n\nclass BaseName:\n """\n The base class for all definitions, completions and signatures.\n """\n _mapping = {\n 'posixpath': 'os.path',\n 'riscospath': 'os.path',\n 'ntpath': 'os.path',\n 'os2emxpath': 'os.path',\n 'macpath': 'os.path',\n 'genericpath': 'os.path',\n 'posix': 'os',\n '_io': 'io',\n '_functools': 'functools',\n '_collections': 'collections',\n '_socket': 'socket',\n '_sqlite3': 'sqlite3',\n }\n\n _tuple_mapping = dict((tuple(k.split('.')), v) for (k, v) in {\n 'argparse._ActionsContainer': 'argparse.ArgumentParser',\n }.items())\n\n def __init__(self, inference_state, name):\n self._inference_state = inference_state\n self._name = name\n """\n An instance of :class:`parso.python.tree.Name` subclass.\n """\n self.is_keyword = isinstance(self._name, KeywordName)\n\n @memoize_method\n def _get_module_context(self):\n # This can take a while to complete, because in the worst case of\n # imports (consider `import a` completions), we need to load all\n # modules starting with a first.\n return self._name.get_root_context()\n\n @property\n def module_path(self) -> Optional[Path]:\n """\n Shows the file path of a module. e.g. 
``/usr/lib/python3.9/os.py``\n """\n module = self._get_module_context()\n if module.is_stub() or not module.is_compiled():\n # Compiled modules should not return a module path even if they\n # have one.\n path: Optional[Path] = self._get_module_context().py__file__()\n return path\n\n return None\n\n @property\n def name(self):\n """\n Name of variable/function/class/module.\n\n For example, for ``x = None`` it returns ``'x'``.\n\n :rtype: str or None\n """\n return self._name.get_public_name()\n\n @property\n def type(self):\n """\n The type of the definition.\n\n Here is an example of the value of this attribute. Let's consider\n the following source. As what is in ``variable`` is unambiguous\n to Jedi, :meth:`jedi.Script.infer` should return a list of\n definition for ``sys``, ``f``, ``C`` and ``x``.\n\n >>> from jedi import Script\n >>> source = '''\n ... import keyword\n ...\n ... class C:\n ... pass\n ...\n ... class D:\n ... pass\n ...\n ... x = D()\n ...\n ... def f():\n ... pass\n ...\n ... for variable in [keyword, f, C, x]:\n ... variable'''\n\n >>> script = Script(source)\n >>> defs = script.infer()\n\n Before showing what is in ``defs``, let's sort it by :attr:`line`\n so that it is easy to relate the result to the source code.\n\n >>> defs = sorted(defs, key=lambda d: d.line)\n >>> print(defs) # doctest: +NORMALIZE_WHITESPACE\n [<Name full_name='keyword', description='module keyword'>,\n <Name full_name='__main__.C', description='class C'>,\n <Name full_name='__main__.D', description='instance D'>,\n <Name full_name='__main__.f', description='def f'>]\n\n Finally, here is what you can get from :attr:`type`:\n\n >>> defs = [d.type for d in defs]\n >>> defs[0]\n 'module'\n >>> defs[1]\n 'class'\n >>> defs[2]\n 'instance'\n >>> defs[3]\n 'function'\n\n Valid values for type are ``module``, ``class``, ``instance``, ``function``,\n ``param``, ``path``, ``keyword``, ``property`` and ``statement``.\n\n """\n tree_name = self._name.tree_name\n resolve = False\n if tree_name is not None:\n # TODO move this to their respective names.\n definition = tree_name.get_definition()\n if definition is not None and definition.type == 'import_from' and \\n tree_name.is_definition():\n resolve = True\n\n if isinstance(self._name, SubModuleName) or resolve:\n for value in self._name.infer():\n return value.api_type\n return self._name.api_type\n\n @property\n def module_name(self):\n """\n The module name, a bit similar to what ``__name__`` is in a random\n Python module.\n\n >>> from jedi import Script\n >>> source = 'import json'\n >>> script = Script(source, path='example.py')\n >>> d = script.infer()[0]\n >>> print(d.module_name) # doctest: +ELLIPSIS\n json\n """\n return self._get_module_context().py__name__()\n\n def in_builtin_module(self):\n """\n Returns True, if this is a builtin module.\n """\n value = self._get_module_context().get_value()\n if isinstance(value, StubModuleValue):\n return any(v.is_compiled() for v in value.non_stub_value_set)\n return value.is_compiled()\n\n @property\n def line(self):\n """The line where the definition occurs (starting with 1)."""\n start_pos = self._name.start_pos\n if start_pos is None:\n return None\n return start_pos[0]\n\n @property\n def column(self):\n """The column where the definition occurs (starting with 0)."""\n start_pos = self._name.start_pos\n if start_pos is None:\n return None\n return start_pos[1]\n\n def get_definition_start_position(self):\n """\n The (row, column) of the start of the definition range. 
Rows start with\n 1, columns start with 0.\n\n :rtype: Optional[Tuple[int, int]]\n """\n if self._name.tree_name is None:\n return None\n definition = self._name.tree_name.get_definition()\n if definition is None:\n return self._name.start_pos\n return definition.start_pos\n\n def get_definition_end_position(self):\n """\n The (row, column) of the end of the definition range. Rows start with\n 1, columns start with 0.\n\n :rtype: Optional[Tuple[int, int]]\n """\n if self._name.tree_name is None:\n return None\n definition = self._name.tree_name.get_definition()\n if definition is None:\n return self._name.tree_name.end_pos\n if self.type in ("function", "class"):\n last_leaf = definition.get_last_leaf()\n if last_leaf.type == "newline":\n return last_leaf.get_previous_leaf().end_pos\n return last_leaf.end_pos\n return definition.end_pos\n\n def docstring(self, raw=False, fast=True):\n r"""\n Return a document string for this completion object.\n\n Example:\n\n >>> from jedi import Script\n >>> source = '''\\n ... def f(a, b=1):\n ... "Document for function f."\n ... '''\n >>> script = Script(source, path='example.py')\n >>> doc = script.infer(1, len('def f'))[0].docstring()\n >>> print(doc)\n f(a, b=1)\n <BLANKLINE>\n Document for function f.\n\n Notice that useful extra information is added to the actual\n docstring, e.g. function signatures are prepended to their docstrings.\n If you need the actual docstring, use ``raw=True`` instead.\n\n >>> print(script.infer(1, len('def f'))[0].docstring(raw=True))\n Document for function f.\n\n :param fast: Don't follow imports that are only one level deep like\n ``import foo``, but follow ``from foo import bar``. This makes\n sense for speed reasons. Completing `import a` is slow if you use\n the ``foo.docstring(fast=False)`` on every object, because it\n parses all libraries starting with ``a``.\n """\n if isinstance(self._name, ImportName) and fast:\n return ''\n doc = self._get_docstring()\n if raw:\n return doc\n\n signature_text = self._get_docstring_signature()\n if signature_text and doc:\n return signature_text + '\n\n' + doc\n else:\n return signature_text + doc\n\n def _get_docstring(self):\n return self._name.py__doc__()\n\n def _get_docstring_signature(self):\n return '\n'.join(\n signature.to_string()\n for signature in self._get_signatures(for_docstring=True)\n )\n\n @property\n def description(self):\n """\n A description of the :class:`.Name` object, which is heavily used\n in testing. e.g. for ``isinstance`` it returns ``def isinstance``.\n\n Example:\n\n >>> from jedi import Script\n >>> source = '''\n ... def f():\n ... pass\n ...\n ... class C:\n ... pass\n ...\n ... 
variable = f if random.choice([0,1]) else C'''\n >>> script = Script(source) # line is maximum by default\n >>> defs = script.infer(column=3)\n >>> defs = sorted(defs, key=lambda d: d.line)\n >>> print(defs) # doctest: +NORMALIZE_WHITESPACE\n [<Name full_name='__main__.f', description='def f'>,\n <Name full_name='__main__.C', description='class C'>]\n >>> str(defs[0].description)\n 'def f'\n >>> str(defs[1].description)\n 'class C'\n\n """\n typ = self.type\n tree_name = self._name.tree_name\n if typ == 'param':\n return typ + ' ' + self._name.to_string()\n if typ in ('function', 'class', 'module', 'instance') or tree_name is None:\n if typ == 'function':\n # For the description we want a short and a pythonic way.\n typ = 'def'\n return typ + ' ' + self._name.get_public_name()\n\n definition = tree_name.get_definition(include_setitem=True) or tree_name\n # Remove the prefix, because that's not what we want for get_code\n # here.\n txt = definition.get_code(include_prefix=False)\n # Delete comments:\n txt = re.sub(r'#[^\n]+\n', ' ', txt)\n # Delete multi spaces/newlines\n txt = re.sub(r'\s+', ' ', txt).strip()\n return txt\n\n @property\n def full_name(self):\n """\n Dot-separated path of this object.\n\n It is in the form of ``<module>[.<submodule>[...]][.<object>]``.\n It is useful when you want to look up Python manual of the\n object at hand.\n\n Example:\n\n >>> from jedi import Script\n >>> source = '''\n ... import os\n ... os.path.join'''\n >>> script = Script(source, path='example.py')\n >>> print(script.infer(3, len('os.path.join'))[0].full_name)\n os.path.join\n\n Notice that it returns ``'os.path.join'`` instead of (for example)\n ``'posixpath.join'``. This is not correct, since the modules name would\n be ``<module 'posixpath' ...>```. However most users find the latter\n more practical.\n """\n if not self._name.is_value_name:\n return None\n\n names = self._name.get_qualified_names(include_module_names=True)\n if names is None:\n return None\n\n names = list(names)\n try:\n names[0] = self._mapping[names[0]]\n except KeyError:\n pass\n\n return '.'.join(names)\n\n def is_stub(self):\n """\n Returns True if the current name is defined in a stub file.\n """\n if not self._name.is_value_name:\n return False\n\n return self._name.get_root_context().is_stub()\n\n def is_side_effect(self):\n """\n Checks if a name is defined as ``self.foo = 3``. In case of self, this\n function would return False, for foo it would return True.\n """\n tree_name = self._name.tree_name\n if tree_name is None:\n return False\n return tree_name.is_definition() and tree_name.parent.type == 'trailer'\n\n @debug.increase_indent_cm('goto on name')\n def goto(self, *, follow_imports=False, follow_builtin_imports=False,\n only_stubs=False, prefer_stubs=False):\n\n """\n Like :meth:`.Script.goto` (also supports the same params), but does it\n for the current name. This is typically useful if you are using\n something like :meth:`.Script.get_names()`.\n\n :param follow_imports: The goto call will follow imports.\n :param follow_builtin_imports: If follow_imports is True will try to\n look up names in builtins (i.e. 
compiled or extension modules).\n :param only_stubs: Only return stubs for this goto call.\n :param prefer_stubs: Prefer stubs to Python objects for this goto call.\n :rtype: list of :class:`Name`\n """\n if not self._name.is_value_name:\n return []\n\n names = self._name.goto()\n if follow_imports:\n names = filter_follow_imports(names, follow_builtin_imports)\n names = convert_names(\n names,\n only_stubs=only_stubs,\n prefer_stubs=prefer_stubs,\n )\n return [self if n == self._name else Name(self._inference_state, n)\n for n in names]\n\n @debug.increase_indent_cm('infer on name')\n def infer(self, *, only_stubs=False, prefer_stubs=False):\n """\n Like :meth:`.Script.infer`, it can be useful to understand which type\n the current name has.\n\n Return the actual definitions. I strongly recommend not using it for\n your completions, because it might slow down |jedi|. If you want to\n read only a few objects (<=20), it might be useful, especially to get\n the original docstrings. The basic problem of this function is that it\n follows all results. This means with 1000 completions (e.g. numpy),\n it's just very, very slow.\n\n :param only_stubs: Only return stubs for this goto call.\n :param prefer_stubs: Prefer stubs to Python objects for this type\n inference call.\n :rtype: list of :class:`Name`\n """\n assert not (only_stubs and prefer_stubs)\n\n if not self._name.is_value_name:\n return []\n\n # First we need to make sure that we have stub names (if possible) that\n # we can follow. If we don't do that, we can end up with the inferred\n # results of Python objects instead of stubs.\n names = convert_names([self._name], prefer_stubs=True)\n values = convert_values(\n ValueSet.from_sets(n.infer() for n in names),\n only_stubs=only_stubs,\n prefer_stubs=prefer_stubs,\n )\n resulting_names = [c.name for c in values]\n return [self if n == self._name else Name(self._inference_state, n)\n for n in resulting_names]\n\n def parent(self):\n """\n Returns the parent scope of this identifier.\n\n :rtype: Name\n """\n if not self._name.is_value_name:\n return None\n\n if self.type in ('function', 'class', 'param') and self._name.tree_name is not None:\n # Since the parent_context doesn't really match what the user\n # thinks of that the parent is here, we do these cases separately.\n # The reason for this is the following:\n # - class: Nested classes parent_context is always the\n # parent_context of the most outer one.\n # - function: Functions in classes have the module as\n # parent_context.\n # - param: The parent_context of a param is not its function but\n # e.g. 
the outer class or module.\n cls_or_func_node = self._name.tree_name.get_definition()\n parent = search_ancestor(cls_or_func_node, 'funcdef', 'classdef', 'file_input')\n context = self._get_module_context().create_value(parent).as_context()\n else:\n context = self._name.parent_context\n\n if context is None:\n return None\n while context.name is None:\n # Happens for comprehension contexts\n context = context.parent_context\n\n return Name(self._inference_state, context.name)\n\n def __repr__(self):\n return "<%s %sname=%r, description=%r>" % (\n self.__class__.__name__,\n 'full_' if self.full_name else '',\n self.full_name or self.name,\n self.description,\n )\n\n def get_line_code(self, before=0, after=0):\n """\n Returns the line of code where this object was defined.\n\n :param before: Add n lines before the current line to the output.\n :param after: Add n lines after the current line to the output.\n\n :return str: Returns the line(s) of code or an empty string if it's a\n builtin.\n """\n if not self._name.is_value_name:\n return ''\n\n lines = self._name.get_root_context().code_lines\n if lines is None:\n # Probably a builtin module, just ignore in that case.\n return ''\n\n index = self._name.start_pos[0] - 1\n start_index = max(index - before, 0)\n return ''.join(lines[start_index:index + after + 1])\n\n def _get_signatures(self, for_docstring=False):\n if self._name.api_type == 'property':\n return []\n if for_docstring and self._name.api_type == 'statement' and not self.is_stub():\n # For docstrings we don't resolve signatures if they are simple\n # statements and not stubs. This is a speed optimization.\n return []\n\n if isinstance(self._name, MixedName):\n # While this would eventually happen anyway, it's basically just a\n # shortcut to not infer anything tree related, because it's really\n # not necessary.\n return self._name.infer_compiled_value().get_signatures()\n\n names = convert_names([self._name], prefer_stubs=True)\n return [sig for name in names for sig in name.infer().get_signatures()]\n\n def get_signatures(self):\n """\n Returns all potential signatures for a function or a class. Multiple\n signatures are typical if you use Python stubs with ``@overload``.\n\n :rtype: list of :class:`BaseSignature`\n """\n return [\n BaseSignature(self._inference_state, s)\n for s in self._get_signatures()\n ]\n\n def execute(self):\n """\n Uses type inference to "execute" this identifier and returns the\n executed objects.\n\n :rtype: list of :class:`Name`\n """\n return _values_to_definitions(self._name.infer().execute_with_values())\n\n def get_type_hint(self):\n """\n Returns type hints like ``Iterable[int]`` or ``Union[int, str]``.\n\n This method might be quite slow, especially for functions. The problem\n is finding executions for those functions to return something like\n ``Callable[[int, str], str]``.\n\n :rtype: str\n """\n return self._name.infer().get_type_hint()\n\n\nclass Completion(BaseName):\n """\n ``Completion`` objects are returned from :meth:`.Script.complete`. 
They\n provide additional information about a completion.\n """\n def __init__(self, inference_state, name, stack, like_name_length,\n is_fuzzy, cached_name=None):\n super().__init__(inference_state, name)\n\n self._like_name_length = like_name_length\n self._stack = stack\n self._is_fuzzy = is_fuzzy\n self._cached_name = cached_name\n\n # Completion objects with the same Completion name (which means\n # duplicate items in the completion)\n self._same_name_completions = []\n\n def _complete(self, like_name):\n append = ''\n if settings.add_bracket_after_function \\n and self.type == 'function':\n append = '('\n\n name = self._name.get_public_name()\n if like_name:\n name = name[self._like_name_length:]\n return name + append\n\n @property\n def complete(self):\n """\n Only works with non-fuzzy completions. Returns None if fuzzy\n completions are used.\n\n Return the rest of the word, e.g. completing ``isinstance``::\n\n isinstan# <-- Cursor is here\n\n would return the string 'ce'. It also adds additional stuff, depending\n on your ``settings.py``.\n\n Assuming the following function definition::\n\n def foo(param=0):\n pass\n\n completing ``foo(par`` would give a ``Completion`` which ``complete``\n would be ``am=``.\n """\n if self._is_fuzzy:\n return None\n return self._complete(True)\n\n @property\n def name_with_symbols(self):\n """\n Similar to :attr:`.name`, but like :attr:`.name` returns also the\n symbols, for example assuming the following function definition::\n\n def foo(param=0):\n pass\n\n completing ``foo(`` would give a ``Completion`` which\n ``name_with_symbols`` would be "param=".\n\n """\n return self._complete(False)\n\n def docstring(self, raw=False, fast=True):\n """\n Documented under :meth:`BaseName.docstring`.\n """\n if self._like_name_length >= 3:\n # In this case we can just resolve the like name, because we\n # wouldn't load like > 100 Python modules anymore.\n fast = False\n\n return super().docstring(raw=raw, fast=fast)\n\n def _get_docstring(self):\n if self._cached_name is not None:\n return completion_cache.get_docstring(\n self._cached_name,\n self._name.get_public_name(),\n lambda: self._get_cache()\n )\n return super()._get_docstring()\n\n def _get_docstring_signature(self):\n if self._cached_name is not None:\n return completion_cache.get_docstring_signature(\n self._cached_name,\n self._name.get_public_name(),\n lambda: self._get_cache()\n )\n return super()._get_docstring_signature()\n\n def _get_cache(self):\n return (\n super().type,\n super()._get_docstring_signature(),\n super()._get_docstring(),\n )\n\n @property\n def type(self):\n """\n Documented under :meth:`BaseName.type`.\n """\n # Purely a speed optimization.\n if self._cached_name is not None:\n return completion_cache.get_type(\n self._cached_name,\n self._name.get_public_name(),\n lambda: self._get_cache()\n )\n\n return super().type\n\n def get_completion_prefix_length(self):\n """\n Returns the length of the prefix being completed.\n For example, completing ``isinstance``::\n\n isinstan# <-- Cursor is here\n\n would return 8, because len('isinstan') == 8.\n\n Assuming the following function definition::\n\n def foo(param=0):\n pass\n\n completing ``foo(par`` would return 3.\n """\n return self._like_name_length\n\n def __repr__(self):\n return '<%s: %s>' % (type(self).__name__, self._name.get_public_name())\n\n\nclass Name(BaseName):\n """\n *Name* objects are returned from many different APIs including\n :meth:`.Script.goto` or :meth:`.Script.infer`.\n """\n def __init__(self, 
inference_state, definition):\n super().__init__(inference_state, definition)\n\n @memoize_method\n def defined_names(self):\n """\n List sub-definitions (e.g., methods in class).\n\n :rtype: list of :class:`Name`\n """\n defs = self._name.infer()\n return sorted(\n unite(defined_names(self._inference_state, d) for d in defs),\n key=lambda s: s._name.start_pos or (0, 0)\n )\n\n def is_definition(self):\n """\n Returns True, if defined as a name in a statement, function or class.\n Returns False, if it's a reference to such a definition.\n """\n if self._name.tree_name is None:\n return True\n else:\n return self._name.tree_name.is_definition()\n\n def __eq__(self, other):\n return self._name.start_pos == other._name.start_pos \\n and self.module_path == other.module_path \\n and self.name == other.name \\n and self._inference_state == other._inference_state\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def __hash__(self):\n return hash((self._name.start_pos, self.module_path, self.name, self._inference_state))\n\n\nclass BaseSignature(Name):\n """\n These signatures are returned by :meth:`BaseName.get_signatures`\n calls.\n """\n def __init__(self, inference_state, signature):\n super().__init__(inference_state, signature.name)\n self._signature = signature\n\n @property\n def params(self):\n """\n Returns definitions for all parameters that a signature defines.\n This includes stuff like ``*args`` and ``**kwargs``.\n\n :rtype: list of :class:`.ParamName`\n """\n return [ParamName(self._inference_state, n)\n for n in self._signature.get_param_names(resolve_stars=True)]\n\n def to_string(self):\n """\n Returns a text representation of the signature. This could for example\n look like ``foo(bar, baz: int, **kwargs)``.\n\n :rtype: str\n """\n return self._signature.to_string()\n\n\nclass Signature(BaseSignature):\n """\n A full signature object is the return value of\n :meth:`.Script.get_signatures`.\n """\n def __init__(self, inference_state, signature, call_details):\n super().__init__(inference_state, signature)\n self._call_details = call_details\n self._signature = signature\n\n @property\n def index(self):\n """\n Returns the param index of the current cursor position.\n Returns None if the index cannot be found in the curent call.\n\n :rtype: int\n """\n return self._call_details.calculate_index(\n self._signature.get_param_names(resolve_stars=True)\n )\n\n @property\n def bracket_start(self):\n """\n Returns a line/column tuple of the bracket that is responsible for the\n last function call. 
The first line is 1 and the first column 0.\n\n :rtype: int, int\n """\n return self._call_details.bracket_leaf.start_pos\n\n def __repr__(self):\n return '<%s: index=%r %s>' % (\n type(self).__name__,\n self.index,\n self._signature.to_string(),\n )\n\n\nclass ParamName(Name):\n def infer_default(self):\n """\n Returns default values like the ``1`` of ``def foo(x=1):``.\n\n :rtype: list of :class:`.Name`\n """\n return _values_to_definitions(self._name.infer_default())\n\n def infer_annotation(self, **kwargs):\n """\n :param execute_annotation: Default True; If False, values are not\n executed and classes are returned instead of instances.\n :rtype: list of :class:`.Name`\n """\n return _values_to_definitions(self._name.infer_annotation(ignore_stars=True, **kwargs))\n\n def to_string(self):\n """\n Returns a simple representation of a param, like\n ``f: Callable[..., Any]``.\n\n :rtype: str\n """\n return self._name.to_string()\n\n @property\n def kind(self):\n """\n Returns an enum instance of :mod:`inspect`'s ``Parameter`` enum.\n\n :rtype: :py:attr:`inspect.Parameter.kind`\n """\n return self._name.get_kind()\n
.venv\Lib\site-packages\jedi\api\classes.py
classes.py
Python
29,600
0.95
0.287151
0.048714
vue-tools
408
2024-06-13T17:00:43.388305
Apache-2.0
false
fa24d4c013024f999e690829b4678570
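As a usage illustration of the Name/BaseName API archived in classes.py above, here is a minimal sketch. It assumes the jedi package is importable; the source snippet and the line/column position are invented for the example, and infer() may return an empty list in other environments.

    import jedi

    source = '''
    import json

    def dump(data):
        return json.dumps(data)
    '''

    script = jedi.Script(source, path='example.py')

    # Cursor inside ``dumps`` on line 5; infer() returns Name objects.
    for definition in script.infer(line=5, column=18):
        print(definition.name, definition.type)       # e.g. 'dumps' 'function'
        print(definition.module_name)                 # module the definition lives in
        print(definition.docstring(fast=False)[:80])  # signature prepended to the docstring
        for signature in definition.get_signatures():
            print(signature.to_string())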
import re\nfrom textwrap import dedent\nfrom inspect import Parameter\n\nfrom parso.python.token import PythonTokenTypes\nfrom parso.python import tree\nfrom parso.tree import search_ancestor, Leaf\nfrom parso import split_lines\n\nfrom jedi import debug\nfrom jedi import settings\nfrom jedi.api import classes\nfrom jedi.api import helpers\nfrom jedi.api import keywords\nfrom jedi.api.strings import complete_dict\nfrom jedi.api.file_name import complete_file_name\nfrom jedi.inference import imports\nfrom jedi.inference.base_value import ValueSet\nfrom jedi.inference.helpers import infer_call_of_leaf, parse_dotted_names\nfrom jedi.inference.context import get_global_filters\nfrom jedi.inference.value import TreeInstance\nfrom jedi.inference.docstring_utils import DocstringModule\nfrom jedi.inference.names import ParamNameWrapper, SubModuleName\nfrom jedi.inference.gradual.conversion import convert_values, convert_names\nfrom jedi.parser_utils import cut_value_at_position\nfrom jedi.plugins import plugin_manager\n\n\nclass ParamNameWithEquals(ParamNameWrapper):\n def get_public_name(self):\n return self.string_name + '='\n\n\ndef _get_signature_param_names(signatures, positional_count, used_kwargs):\n # Add named params\n for call_sig in signatures:\n for i, p in enumerate(call_sig.params):\n kind = p.kind\n if i < positional_count and kind == Parameter.POSITIONAL_OR_KEYWORD:\n continue\n if kind in (Parameter.POSITIONAL_OR_KEYWORD, Parameter.KEYWORD_ONLY) \\n and p.name not in used_kwargs:\n yield ParamNameWithEquals(p._name)\n\n\ndef _must_be_kwarg(signatures, positional_count, used_kwargs):\n if used_kwargs:\n return True\n\n must_be_kwarg = True\n for signature in signatures:\n for i, p in enumerate(signature.params):\n kind = p.kind\n if kind is Parameter.VAR_POSITIONAL:\n # In case there were not already kwargs, the next param can\n # always be a normal argument.\n return False\n\n if i >= positional_count and kind in (Parameter.POSITIONAL_OR_KEYWORD,\n Parameter.POSITIONAL_ONLY):\n must_be_kwarg = False\n break\n if not must_be_kwarg:\n break\n return must_be_kwarg\n\n\ndef filter_names(inference_state, completion_names, stack, like_name, fuzzy,\n imported_names, cached_name):\n comp_dct = set()\n if settings.case_insensitive_completion:\n like_name = like_name.lower()\n for name in completion_names:\n string = name.string_name\n if string in imported_names and string != like_name:\n continue\n if settings.case_insensitive_completion:\n string = string.lower()\n if helpers.match(string, like_name, fuzzy=fuzzy):\n new = classes.Completion(\n inference_state,\n name,\n stack,\n len(like_name),\n is_fuzzy=fuzzy,\n cached_name=cached_name,\n )\n k = (new.name, new.complete) # key\n if k not in comp_dct:\n comp_dct.add(k)\n tree_name = name.tree_name\n if tree_name is not None:\n definition = tree_name.get_definition()\n if definition is not None and definition.type == 'del_stmt':\n continue\n yield new\n\n\ndef _remove_duplicates(completions, other_completions):\n names = {d.name for d in other_completions}\n return [c for c in completions if c.name not in names]\n\n\ndef get_user_context(module_context, position):\n """\n Returns the scope in which the user resides. 
This includes flows.\n """\n leaf = module_context.tree_node.get_leaf_for_position(position, include_prefixes=True)\n return module_context.create_context(leaf)\n\n\ndef get_flow_scope_node(module_node, position):\n node = module_node.get_leaf_for_position(position, include_prefixes=True)\n while not isinstance(node, (tree.Scope, tree.Flow)):\n node = node.parent\n\n return node\n\n\n@plugin_manager.decorate()\ndef complete_param_names(context, function_name, decorator_nodes):\n # Basically there's no way to do param completion. The plugins are\n # responsible for this.\n return []\n\n\nclass Completion:\n def __init__(self, inference_state, module_context, code_lines, position,\n signatures_callback, fuzzy=False):\n self._inference_state = inference_state\n self._module_context = module_context\n self._module_node = module_context.tree_node\n self._code_lines = code_lines\n\n # The first step of completions is to get the name\n self._like_name = helpers.get_on_completion_name(self._module_node, code_lines, position)\n # The actual cursor position is not what we need to calculate\n # everything. We want the start of the name we're on.\n self._original_position = position\n self._signatures_callback = signatures_callback\n\n self._fuzzy = fuzzy\n\n # Return list of completions in this order:\n # - Beginning with what user is typing\n # - Public (alphabet)\n # - Private ("_xxx")\n # - Dunder ("__xxx")\n def complete(self):\n leaf = self._module_node.get_leaf_for_position(\n self._original_position,\n include_prefixes=True\n )\n string, start_leaf, quote = _extract_string_while_in_string(leaf, self._original_position)\n\n prefixed_completions = complete_dict(\n self._module_context,\n self._code_lines,\n start_leaf or leaf,\n self._original_position,\n None if string is None else quote + string,\n fuzzy=self._fuzzy,\n )\n\n if string is not None and not prefixed_completions:\n prefixed_completions = list(complete_file_name(\n self._inference_state, self._module_context, start_leaf, quote, string,\n self._like_name, self._signatures_callback,\n self._code_lines, self._original_position,\n self._fuzzy\n ))\n if string is not None:\n if not prefixed_completions and '\n' in string:\n # Complete only multi line strings\n prefixed_completions = self._complete_in_string(start_leaf, string)\n return prefixed_completions\n\n cached_name, completion_names = self._complete_python(leaf)\n\n imported_names = []\n if leaf.parent is not None and leaf.parent.type in ['import_as_names', 'dotted_as_names']:\n imported_names.extend(extract_imported_names(leaf.parent))\n\n completions = list(filter_names(self._inference_state, completion_names,\n self.stack, self._like_name,\n self._fuzzy, imported_names, cached_name=cached_name))\n\n return (\n # Removing duplicates mostly to remove False/True/None duplicates.\n _remove_duplicates(prefixed_completions, completions)\n + sorted(completions, key=lambda x: (not x.name.startswith(self._like_name),\n x.name.startswith('__'),\n x.name.startswith('_'),\n x.name.lower()))\n )\n\n def _complete_python(self, leaf):\n """\n Analyzes the current context of a completion and decides what to\n return.\n\n Technically this works by generating a parser stack and analysing the\n current stack for possible grammar nodes.\n\n Possible enhancements:\n - global/nonlocal search global\n - yield from / raise from <- could be only exceptions/generators\n - In args: */**: no completion\n - In params (also lambda): no completion before =\n """\n grammar = self._inference_state.grammar\n 
self.stack = stack = None\n self._position = (\n self._original_position[0],\n self._original_position[1] - len(self._like_name)\n )\n cached_name = None\n\n try:\n self.stack = stack = helpers.get_stack_at_position(\n grammar, self._code_lines, leaf, self._position\n )\n except helpers.OnErrorLeaf as e:\n value = e.error_leaf.value\n if value == '.':\n # After ErrorLeaf's that are dots, we will not do any\n # completions since this probably just confuses the user.\n return cached_name, []\n\n # If we don't have a value, just use global completion.\n return cached_name, self._complete_global_scope()\n\n allowed_transitions = \\n list(stack._allowed_transition_names_and_token_types())\n\n if 'if' in allowed_transitions:\n leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)\n previous_leaf = leaf.get_previous_leaf()\n\n indent = self._position[1]\n if not (leaf.start_pos <= self._position <= leaf.end_pos):\n indent = leaf.start_pos[1]\n\n if previous_leaf is not None:\n stmt = previous_leaf\n while True:\n stmt = search_ancestor(\n stmt, 'if_stmt', 'for_stmt', 'while_stmt', 'try_stmt',\n 'error_node',\n )\n if stmt is None:\n break\n\n type_ = stmt.type\n if type_ == 'error_node':\n first = stmt.children[0]\n if isinstance(first, Leaf):\n type_ = first.value + '_stmt'\n # Compare indents\n if stmt.start_pos[1] == indent:\n if type_ == 'if_stmt':\n allowed_transitions += ['elif', 'else']\n elif type_ == 'try_stmt':\n allowed_transitions += ['except', 'finally', 'else']\n elif type_ == 'for_stmt':\n allowed_transitions.append('else')\n\n completion_names = []\n\n kwargs_only = False\n if any(t in allowed_transitions for t in (PythonTokenTypes.NAME,\n PythonTokenTypes.INDENT)):\n # This means that we actually have to do type inference.\n\n nonterminals = [stack_node.nonterminal for stack_node in stack]\n\n nodes = _gather_nodes(stack)\n if nodes and nodes[-1] in ('as', 'def', 'class'):\n # No completions for ``with x as foo`` and ``import x as foo``.\n # Also true for defining names as a class or function.\n return cached_name, list(self._complete_inherited(is_function=True))\n elif "import_stmt" in nonterminals:\n level, names = parse_dotted_names(nodes, "import_from" in nonterminals)\n\n only_modules = not ("import_from" in nonterminals and 'import' in nodes)\n completion_names += self._get_importer_names(\n names,\n level,\n only_modules=only_modules,\n )\n elif nonterminals[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':\n dot = self._module_node.get_leaf_for_position(self._position)\n if dot.type == "endmarker":\n # This is a bit of a weird edge case, maybe we can somehow\n # generalize this.\n dot = leaf.get_previous_leaf()\n cached_name, n = self._complete_trailer(dot.get_previous_leaf())\n completion_names += n\n elif self._is_parameter_completion():\n completion_names += self._complete_params(leaf)\n else:\n # Apparently this looks like it's good enough to filter most cases\n # so that signature completions don't randomly appear.\n # To understand why this works, three things are important:\n # 1. trailer with a `,` in it is either a subscript or an arglist.\n # 2. If there's no `,`, it's at the start and only signatures start\n # with `(`. Other trailers could start with `.` or `[`.\n # 3. 
Decorators are very primitive and have an optional `(` with\n # optional arglist in them.\n if nodes[-1] in ['(', ','] \\n and nonterminals[-1] in ('trailer', 'arglist', 'decorator'):\n signatures = self._signatures_callback(*self._position)\n if signatures:\n call_details = signatures[0]._call_details\n used_kwargs = list(call_details.iter_used_keyword_arguments())\n positional_count = call_details.count_positional_arguments()\n\n completion_names += _get_signature_param_names(\n signatures,\n positional_count,\n used_kwargs,\n )\n\n kwargs_only = _must_be_kwarg(signatures, positional_count, used_kwargs)\n\n if not kwargs_only:\n completion_names += self._complete_global_scope()\n completion_names += self._complete_inherited(is_function=False)\n\n if not kwargs_only:\n current_line = self._code_lines[self._position[0] - 1][:self._position[1]]\n completion_names += self._complete_keywords(\n allowed_transitions,\n only_values=not (not current_line or current_line[-1] in ' \t.;'\n and current_line[-3:] != '...')\n )\n\n return cached_name, completion_names\n\n def _is_parameter_completion(self):\n tos = self.stack[-1]\n if tos.nonterminal == 'lambdef' and len(tos.nodes) == 1:\n # We are at the position `lambda `, where basically the next node\n # is a param.\n return True\n if tos.nonterminal in 'parameters':\n # Basically we are at the position `foo(`, there's nothing there\n # yet, so we have no `typedargslist`.\n return True\n # var args is for lambdas and typed args for normal functions\n return tos.nonterminal in ('typedargslist', 'varargslist') and tos.nodes[-1] == ','\n\n def _complete_params(self, leaf):\n stack_node = self.stack[-2]\n if stack_node.nonterminal == 'parameters':\n stack_node = self.stack[-3]\n if stack_node.nonterminal == 'funcdef':\n context = get_user_context(self._module_context, self._position)\n node = search_ancestor(leaf, 'error_node', 'funcdef')\n if node is not None:\n if node.type == 'error_node':\n n = node.children[0]\n if n.type == 'decorators':\n decorators = n.children\n elif n.type == 'decorator':\n decorators = [n]\n else:\n decorators = []\n else:\n decorators = node.get_decorators()\n function_name = stack_node.nodes[1]\n\n return complete_param_names(context, function_name.value, decorators)\n return []\n\n def _complete_keywords(self, allowed_transitions, only_values):\n for k in allowed_transitions:\n if isinstance(k, str) and k.isalpha():\n if not only_values or k in ('True', 'False', 'None'):\n yield keywords.KeywordName(self._inference_state, k)\n\n def _complete_global_scope(self):\n context = get_user_context(self._module_context, self._position)\n debug.dbg('global completion scope: %s', context)\n flow_scope_node = get_flow_scope_node(self._module_node, self._position)\n filters = get_global_filters(\n context,\n self._position,\n flow_scope_node\n )\n completion_names = []\n for filter in filters:\n completion_names += filter.values()\n return completion_names\n\n def _complete_trailer(self, previous_leaf):\n inferred_context = self._module_context.create_context(previous_leaf)\n values = infer_call_of_leaf(inferred_context, previous_leaf)\n debug.dbg('trailer completion values: %s', values, color='MAGENTA')\n\n # The cached name simply exists to make speed optimizations for certain\n # modules.\n cached_name = None\n if len(values) == 1:\n v, = values\n if v.is_module():\n if len(v.string_names) == 1:\n module_name = v.string_names[0]\n if module_name in ('numpy', 'tensorflow', 'matplotlib', 'pandas'):\n cached_name = module_name\n\n 
return cached_name, self._complete_trailer_for_values(values)\n\n def _complete_trailer_for_values(self, values):\n user_context = get_user_context(self._module_context, self._position)\n\n return complete_trailer(user_context, values)\n\n def _get_importer_names(self, names, level=0, only_modules=True):\n names = [n.value for n in names]\n i = imports.Importer(self._inference_state, names, self._module_context, level)\n return i.completion_names(self._inference_state, only_modules=only_modules)\n\n def _complete_inherited(self, is_function=True):\n """\n Autocomplete inherited methods when overriding in child class.\n """\n leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)\n cls = tree.search_ancestor(leaf, 'classdef')\n if cls is None:\n return\n\n # Complete the methods that are defined in the super classes.\n class_value = self._module_context.create_value(cls)\n\n if cls.start_pos[1] >= leaf.start_pos[1]:\n return\n\n filters = class_value.get_filters(is_instance=True)\n # The first dict is the dictionary of class itself.\n next(filters)\n for filter in filters:\n for name in filter.values():\n # TODO we should probably check here for properties\n if (name.api_type == 'function') == is_function:\n yield name\n\n def _complete_in_string(self, start_leaf, string):\n """\n To make it possible for people to have completions in doctests or\n generally in "Python" code in docstrings, we use the following\n heuristic:\n\n - Having an indented block of code\n - Having some doctest code that starts with `>>>`\n - Having backticks that doesn't have whitespace inside it\n """\n\n def iter_relevant_lines(lines):\n include_next_line = False\n for l in code_lines:\n if include_next_line or l.startswith('>>>') or l.startswith(' '):\n yield re.sub(r'^( *>>> ?| +)', '', l)\n else:\n yield None\n\n include_next_line = bool(re.match(' *>>>', l))\n\n string = dedent(string)\n code_lines = split_lines(string, keepends=True)\n relevant_code_lines = list(iter_relevant_lines(code_lines))\n if relevant_code_lines[-1] is not None:\n # Some code lines might be None, therefore get rid of that.\n relevant_code_lines = ['\n' if c is None else c for c in relevant_code_lines]\n return self._complete_code_lines(relevant_code_lines)\n match = re.search(r'`([^`\s]+)', code_lines[-1])\n if match:\n return self._complete_code_lines([match.group(1)])\n return []\n\n def _complete_code_lines(self, code_lines):\n module_node = self._inference_state.grammar.parse(''.join(code_lines))\n module_value = DocstringModule(\n in_module_context=self._module_context,\n inference_state=self._inference_state,\n module_node=module_node,\n code_lines=code_lines,\n )\n return Completion(\n self._inference_state,\n module_value.as_context(),\n code_lines=code_lines,\n position=module_node.end_pos,\n signatures_callback=lambda *args, **kwargs: [],\n fuzzy=self._fuzzy\n ).complete()\n\n\ndef _gather_nodes(stack):\n nodes = []\n for stack_node in stack:\n if stack_node.dfa.from_rule == 'small_stmt':\n nodes = []\n else:\n nodes += stack_node.nodes\n return nodes\n\n\n_string_start = re.compile(r'^\w*(\'{3}|"{3}|\'|")')\n\n\ndef _extract_string_while_in_string(leaf, position):\n def return_part_of_leaf(leaf):\n kwargs = {}\n if leaf.line == position[0]:\n kwargs['endpos'] = position[1] - leaf.column\n match = _string_start.match(leaf.value, **kwargs)\n if not match:\n return None, None, None\n start = match.group(0)\n if leaf.line == position[0] and position[1] < leaf.column + match.end():\n return None, 
None, None\n return cut_value_at_position(leaf, position)[match.end():], leaf, start\n\n if position < leaf.start_pos:\n return None, None, None\n\n if leaf.type == 'string':\n return return_part_of_leaf(leaf)\n\n leaves = []\n while leaf is not None:\n if leaf.type == 'error_leaf' and ('"' in leaf.value or "'" in leaf.value):\n if len(leaf.value) > 1:\n return return_part_of_leaf(leaf)\n prefix_leaf = None\n if not leaf.prefix:\n prefix_leaf = leaf.get_previous_leaf()\n if prefix_leaf is None or prefix_leaf.type != 'name' \\n or not all(c in 'rubf' for c in prefix_leaf.value.lower()):\n prefix_leaf = None\n\n return (\n ''.join(cut_value_at_position(l, position) for l in leaves),\n prefix_leaf or leaf,\n ('' if prefix_leaf is None else prefix_leaf.value)\n + cut_value_at_position(leaf, position),\n )\n if leaf.line != position[0]:\n # Multi line strings are always simple error leaves and contain the\n # whole string, single line error leaves are atherefore important\n # now and since the line is different, it's not really a single\n # line string anymore.\n break\n leaves.insert(0, leaf)\n leaf = leaf.get_previous_leaf()\n return None, None, None\n\n\ndef complete_trailer(user_context, values):\n completion_names = []\n for value in values:\n for filter in value.get_filters(origin_scope=user_context.tree_node):\n completion_names += filter.values()\n\n if not value.is_stub() and isinstance(value, TreeInstance):\n completion_names += _complete_getattr(user_context, value)\n\n python_values = convert_values(values)\n for c in python_values:\n if c not in values:\n for filter in c.get_filters(origin_scope=user_context.tree_node):\n completion_names += filter.values()\n return completion_names\n\n\ndef _complete_getattr(user_context, instance):\n """\n A heuristic to make completion for proxy objects work. This is not\n intended to work in all cases. It works exactly in this case:\n\n def __getattr__(self, name):\n ...\n return getattr(any_object, name)\n\n It is important that the return contains getattr directly, otherwise it\n won't work anymore. It's really just a stupid heuristic. It will not\n work if you write e.g. `return (getatr(o, name))`, because of the\n additional parentheses. It will also not work if you move the getattr\n to some other place that is not the return statement itself.\n\n It is intentional that it doesn't work in all cases. Generally it's\n really hard to do even this case (as you can see below). 
Most people\n will write it like this anyway and the other ones, well they are just\n out of luck I guess :) ~dave.\n """\n names = (instance.get_function_slot_names('__getattr__')\n or instance.get_function_slot_names('__getattribute__'))\n functions = ValueSet.from_sets(\n name.infer()\n for name in names\n )\n for func in functions:\n tree_node = func.tree_node\n if tree_node is None or tree_node.type != 'funcdef':\n continue\n\n for return_stmt in tree_node.iter_return_stmts():\n # Basically until the next comment we just try to find out if a\n # return statement looks exactly like `return getattr(x, name)`.\n if return_stmt.type != 'return_stmt':\n continue\n atom_expr = return_stmt.children[1]\n if atom_expr.type != 'atom_expr':\n continue\n atom = atom_expr.children[0]\n trailer = atom_expr.children[1]\n if len(atom_expr.children) != 2 or atom.type != 'name' \\n or atom.value != 'getattr':\n continue\n arglist = trailer.children[1]\n if arglist.type != 'arglist' or len(arglist.children) < 3:\n continue\n context = func.as_context()\n object_node = arglist.children[0]\n\n # Make sure it's a param: foo in __getattr__(self, foo)\n name_node = arglist.children[2]\n name_list = context.goto(name_node, name_node.start_pos)\n if not any(n.api_type == 'param' for n in name_list):\n continue\n\n # Now that we know that these are most probably completion\n # objects, we just infer the object and return them as\n # completions.\n objects = context.infer_node(object_node)\n return complete_trailer(user_context, objects)\n return []\n\n\ndef search_in_module(inference_state, module_context, names, wanted_names,\n wanted_type, complete=False, fuzzy=False,\n ignore_imports=False, convert=False):\n for s in wanted_names[:-1]:\n new_names = []\n for n in names:\n if s == n.string_name:\n if n.tree_name is not None and n.api_type in ('module', 'namespace') \\n and ignore_imports:\n continue\n new_names += complete_trailer(\n module_context,\n n.infer()\n )\n debug.dbg('dot lookup on search %s from %s', new_names, names[:10])\n names = new_names\n\n last_name = wanted_names[-1].lower()\n for n in names:\n string = n.string_name.lower()\n if complete and helpers.match(string, last_name, fuzzy=fuzzy) \\n or not complete and string == last_name:\n if isinstance(n, SubModuleName):\n names = [v.name for v in n.infer()]\n else:\n names = [n]\n if convert:\n names = convert_names(names)\n for n2 in names:\n if complete:\n def_ = classes.Completion(\n inference_state, n2,\n stack=None,\n like_name_length=len(last_name),\n is_fuzzy=fuzzy,\n )\n else:\n def_ = classes.Name(inference_state, n2)\n if not wanted_type or wanted_type == def_.type:\n yield def_\n\n\ndef extract_imported_names(node):\n imported_names = []\n\n if node.type in ['import_as_names', 'dotted_as_names', 'import_as_name']:\n for index, child in enumerate(node.children):\n if child.type == 'name':\n if (index > 0 and node.children[index - 1].type == "keyword"\n and node.children[index - 1].value == "as"):\n continue\n imported_names.append(child.value)\n elif child.type == 'import_as_name':\n imported_names.extend(extract_imported_names(child))\n\n return imported_names\n
.venv\Lib\site-packages\jedi\api\completion.py
completion.py
Python
28,379
0.95
0.252874
0.089226
node-utils
854
2024-05-06T23:28:42.776240
MIT
false
5394cb47926297bcc1e323857263bc2c
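A short, hedged sketch of driving the completion machinery above through the public API; the module and cursor position are arbitrary, and the exact completion names depend on the installed Python.

    import jedi

    source = 'import json\njson.lo'
    script = jedi.Script(source)

    for c in script.complete(line=2, column=len('json.lo')):
        # ``complete`` is the missing suffix, e.g. 'ad' for 'load';
        # ``name_with_symbols`` may append '=' for keyword parameters.
        print(c.name, c.complete, c.type, c.get_completion_prefix_length())

    # Fuzzy matching goes through the same code path:
    fuzzy_completions = script.complete(line=2, column=len('json.lo'), fuzzy=True)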
from typing import Dict, Tuple, Callable\n\nCacheValues = Tuple[str, str, str]\nCacheValuesCallback = Callable[[], CacheValues]\n\n\n_cache: Dict[str, Dict[str, CacheValues]] = {}\n\n\ndef save_entry(module_name: str, name: str, cache: CacheValues) -> None:\n try:\n module_cache = _cache[module_name]\n except KeyError:\n module_cache = _cache[module_name] = {}\n module_cache[name] = cache\n\n\ndef _create_get_from_cache(number: int) -> Callable[[str, str, CacheValuesCallback], str]:\n def _get_from_cache(module_name: str, name: str, get_cache_values: CacheValuesCallback) -> str:\n try:\n return _cache[module_name][name][number]\n except KeyError:\n v = get_cache_values()\n save_entry(module_name, name, v)\n return v[number]\n return _get_from_cache\n\n\nget_type = _create_get_from_cache(0)\nget_docstring_signature = _create_get_from_cache(1)\nget_docstring = _create_get_from_cache(2)\n
.venv\Lib\site-packages\jedi\api\completion_cache.py
completion_cache.py
Python
954
0.85
0.16129
0
node-utils
944
2024-02-02T21:39:14.948818
GPL-3.0
false
0579102e6bd01fb6559fd39ddbc32cf4
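The cache above is keyed by (module_name, name) and fills all three values on the first miss. A small sketch of that behaviour follows; the 'numpy'/'ndarray' strings and the returned tuple are placeholders.

    from jedi.api import completion_cache

    calls = []

    def compute():
        # Stand-in for the expensive (type, docstring signature, docstring) lookup.
        calls.append(1)
        return ('class', 'ndarray(shape, dtype=float)', 'An array object ...')

    print(completion_cache.get_type('numpy', 'ndarray', compute))       # 'class'
    print(completion_cache.get_docstring('numpy', 'ndarray', compute))  # served from the cache
    print(len(calls))                                                    # 1: compute() ran once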
"""\nThis file is about errors in Python files and not about exception handling in\nJedi.\n"""\n\n\ndef parso_to_jedi_errors(grammar, module_node):\n return [SyntaxError(e) for e in grammar.iter_errors(module_node)]\n\n\nclass SyntaxError:\n """\n Syntax errors are generated by :meth:`.Script.get_syntax_errors`.\n """\n def __init__(self, parso_error):\n self._parso_error = parso_error\n\n @property\n def line(self):\n """The line where the error starts (starting with 1)."""\n return self._parso_error.start_pos[0]\n\n @property\n def column(self):\n """The column where the error starts (starting with 0)."""\n return self._parso_error.start_pos[1]\n\n @property\n def until_line(self):\n """The line where the error ends (starting with 1)."""\n return self._parso_error.end_pos[0]\n\n @property\n def until_column(self):\n """The column where the error ends (starting with 0)."""\n return self._parso_error.end_pos[1]\n\n def get_message(self):\n return self._parso_error.message\n\n def __repr__(self):\n return '<%s from=%s to=%s>' % (\n self.__class__.__name__,\n self._parso_error.start_pos,\n self._parso_error.end_pos,\n )\n
.venv\Lib\site-packages\jedi\api\errors.py
errors.py
Python
1,253
0.85
0.217391
0
vue-tools
734
2025-06-23T20:34:28.659564
MIT
false
65ec38abeb76d2c201441e37940a69ac
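These SyntaxError objects are what Script.get_syntax_errors() hands back. A minimal sketch, with an invented broken snippet:

    import jedi

    script = jedi.Script('def broken(:\n    pass\n')
    for error in script.get_syntax_errors():
        print(error.line, error.column, error.until_line, error.until_column)
        print(error.get_message())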
class _JediError(Exception):\n pass\n\n\nclass InternalError(_JediError):\n """\n This error might happen when a subprocess is crashing. The reason for this is\n usually broken C code in third party libraries. This is not a very common\n thing and it is safe to use Jedi again. However, using the same calls might\n result in the same error again.\n """\n\n\nclass WrongVersion(_JediError):\n """\n This error is reserved for the future, shouldn't really be happening at the\n moment.\n """\n\n\nclass RefactoringError(_JediError):\n """\n Refactorings can fail for various reasons. So if you work with refactorings\n like :meth:`.Script.rename`, :meth:`.Script.inline`,\n :meth:`.Script.extract_variable` and :meth:`.Script.extract_function`, make\n sure to catch these. The descriptions in the errors are usually valuable\n for end users.\n\n A typical ``RefactoringError`` would tell the user that inlining is not\n possible if no name is under the cursor.\n """\n
.venv\Lib\site-packages\jedi\api\exceptions.py
exceptions.py
Python
990
0.85
0.354839
0
react-lib
782
2025-05-19T13:05:12.004774
Apache-2.0
false
fc0081602b8c0af4ef76c88000465c48
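A hedged example of catching RefactoringError: renaming at a position with no name under the cursor is expected to fail with a user-readable message. The snippet and position are made up.

    import jedi
    from jedi.api.exceptions import RefactoringError

    script = jedi.Script('x = 1\n')
    try:
        script.rename(line=1, column=2, new_name='y')  # cursor on '=', no name there
    except RefactoringError as exc:
        print('refactoring failed:', exc)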
import os\n\nfrom jedi.api import classes\nfrom jedi.api.strings import StringName, get_quote_ending\nfrom jedi.api.helpers import match\nfrom jedi.inference.helpers import get_str_or_none\n\n\nclass PathName(StringName):\n api_type = 'path'\n\n\ndef complete_file_name(inference_state, module_context, start_leaf, quote, string,\n like_name, signatures_callback, code_lines, position, fuzzy):\n # First we want to find out what can actually be changed as a name.\n like_name_length = len(os.path.basename(string))\n\n addition = _get_string_additions(module_context, start_leaf)\n if string.startswith('~'):\n string = os.path.expanduser(string)\n if addition is None:\n return\n string = addition + string\n\n # Here we use basename again, because if strings are added like\n # `'foo' + 'bar`, it should complete to `foobar/`.\n must_start_with = os.path.basename(string)\n string = os.path.dirname(string)\n\n sigs = signatures_callback(*position)\n is_in_os_path_join = sigs and all(s.full_name == 'os.path.join' for s in sigs)\n if is_in_os_path_join:\n to_be_added = _add_os_path_join(module_context, start_leaf, sigs[0].bracket_start)\n if to_be_added is None:\n is_in_os_path_join = False\n else:\n string = to_be_added + string\n base_path = os.path.join(inference_state.project.path, string)\n try:\n listed = sorted(os.scandir(base_path), key=lambda e: e.name)\n # OSError: [Errno 36] File name too long: '...'\n except (FileNotFoundError, OSError):\n return\n quote_ending = get_quote_ending(quote, code_lines, position)\n for entry in listed:\n name = entry.name\n if match(name, must_start_with, fuzzy=fuzzy):\n if is_in_os_path_join or not entry.is_dir():\n name += quote_ending\n else:\n name += os.path.sep\n\n yield classes.Completion(\n inference_state,\n PathName(inference_state, name[len(must_start_with) - like_name_length:]),\n stack=None,\n like_name_length=like_name_length,\n is_fuzzy=fuzzy,\n )\n\n\ndef _get_string_additions(module_context, start_leaf):\n def iterate_nodes():\n node = addition.parent\n was_addition = True\n for child_node in reversed(node.children[:node.children.index(addition)]):\n if was_addition:\n was_addition = False\n yield child_node\n continue\n\n if child_node != '+':\n break\n was_addition = True\n\n addition = start_leaf.get_previous_leaf()\n if addition != '+':\n return ''\n context = module_context.create_context(start_leaf)\n return _add_strings(context, reversed(list(iterate_nodes())))\n\n\ndef _add_strings(context, nodes, add_slash=False):\n string = ''\n first = True\n for child_node in nodes:\n values = context.infer_node(child_node)\n if len(values) != 1:\n return None\n c, = values\n s = get_str_or_none(c)\n if s is None:\n return None\n if not first and add_slash:\n string += os.path.sep\n string += s\n first = False\n return string\n\n\ndef _add_os_path_join(module_context, start_leaf, bracket_start):\n def check(maybe_bracket, nodes):\n if maybe_bracket.start_pos != bracket_start:\n return None\n\n if not nodes:\n return ''\n context = module_context.create_context(nodes[0])\n return _add_strings(context, nodes, add_slash=True) or ''\n\n if start_leaf.type == 'error_leaf':\n # Unfinished string literal, like `join('`\n value_node = start_leaf.parent\n index = value_node.children.index(start_leaf)\n if index > 0:\n error_node = value_node.children[index - 1]\n if error_node.type == 'error_node' and len(error_node.children) >= 2:\n index = -2\n if error_node.children[-1].type == 'arglist':\n arglist_nodes = error_node.children[-1].children\n index -= 1\n 
else:\n arglist_nodes = []\n\n return check(error_node.children[index + 1], arglist_nodes[::2])\n return None\n\n # Maybe an arglist or some weird error case. Therefore checked below.\n searched_node_child = start_leaf\n while searched_node_child.parent is not None \\n and searched_node_child.parent.type not in ('arglist', 'trailer', 'error_node'):\n searched_node_child = searched_node_child.parent\n\n if searched_node_child.get_first_leaf() is not start_leaf:\n return None\n searched_node = searched_node_child.parent\n if searched_node is None:\n return None\n\n index = searched_node.children.index(searched_node_child)\n arglist_nodes = searched_node.children[:index]\n if searched_node.type == 'arglist':\n trailer = searched_node.parent\n if trailer.type == 'error_node':\n trailer_index = trailer.children.index(searched_node)\n assert trailer_index >= 2\n assert trailer.children[trailer_index - 1] == '('\n return check(trailer.children[trailer_index - 1], arglist_nodes[::2])\n elif trailer.type == 'trailer':\n return check(trailer.children[0], arglist_nodes[::2])\n elif searched_node.type == 'trailer':\n return check(searched_node.children[0], [])\n elif searched_node.type == 'error_node':\n # Stuff like `join(""`\n return check(arglist_nodes[-1], [])\n
.venv\Lib\site-packages\jedi\api\file_name.py
file_name.py
Python
5,620
0.95
0.232258
0.05303
node-utils
961
2024-07-05T16:03:49.596252
Apache-2.0
false
7ea29ff366945c143715dfd3fac8ebd8
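File-name completion is reached through ordinary completion inside an unfinished string literal. A sketch, assuming the current project directory contains files to list; the resulting completions carry the 'path' type defined above.

    import jedi

    source = 'open("./'
    script = jedi.Script(source, path='example.py')
    for c in script.complete(line=1, column=len(source)):
        print(c.type, c.name)  # e.g. 'path example.py'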
"""\nHelpers for the API\n"""\nimport re\nfrom collections import namedtuple\nfrom textwrap import dedent\nfrom itertools import chain\nfrom functools import wraps\nfrom inspect import Parameter\n\nfrom parso.python.parser import Parser\nfrom parso.python import tree\n\nfrom jedi.inference.base_value import NO_VALUES\nfrom jedi.inference.syntax_tree import infer_atom\nfrom jedi.inference.helpers import infer_call_of_leaf\nfrom jedi.inference.compiled import get_string_value_set\nfrom jedi.cache import signature_time_cache, memoize_method\nfrom jedi.parser_utils import get_parent_scope\n\n\nCompletionParts = namedtuple('CompletionParts', ['path', 'has_dot', 'name'])\n\n\ndef _start_match(string, like_name):\n return string.startswith(like_name)\n\n\ndef _fuzzy_match(string, like_name):\n if len(like_name) <= 1:\n return like_name in string\n pos = string.find(like_name[0])\n if pos >= 0:\n return _fuzzy_match(string[pos + 1:], like_name[1:])\n return False\n\n\ndef match(string, like_name, fuzzy=False):\n if fuzzy:\n return _fuzzy_match(string, like_name)\n else:\n return _start_match(string, like_name)\n\n\ndef sorted_definitions(defs):\n # Note: `or ''` below is required because `module_path` could be\n return sorted(defs, key=lambda x: (str(x.module_path or ''),\n x.line or 0,\n x.column or 0,\n x.name))\n\n\ndef get_on_completion_name(module_node, lines, position):\n leaf = module_node.get_leaf_for_position(position)\n if leaf is None or leaf.type in ('string', 'error_leaf'):\n # Completions inside strings are a bit special, we need to parse the\n # string. The same is true for comments and error_leafs.\n line = lines[position[0] - 1]\n # The first step of completions is to get the name\n return re.search(r'(?!\d)\w+$|$', line[:position[1]]).group(0)\n elif leaf.type not in ('name', 'keyword'):\n return ''\n\n return leaf.value[:position[1] - leaf.start_pos[1]]\n\n\ndef _get_code(code_lines, start_pos, end_pos):\n # Get relevant lines.\n lines = code_lines[start_pos[0] - 1:end_pos[0]]\n # Remove the parts at the end of the line.\n lines[-1] = lines[-1][:end_pos[1]]\n # Remove first line indentation.\n lines[0] = lines[0][start_pos[1]:]\n return ''.join(lines)\n\n\nclass OnErrorLeaf(Exception):\n @property\n def error_leaf(self):\n return self.args[0]\n\n\ndef _get_code_for_stack(code_lines, leaf, position):\n # It might happen that we're on whitespace or on a comment. 
This means\n # that we would not get the right leaf.\n if leaf.start_pos >= position:\n # If we're not on a comment simply get the previous leaf and proceed.\n leaf = leaf.get_previous_leaf()\n if leaf is None:\n return '' # At the beginning of the file.\n\n is_after_newline = leaf.type == 'newline'\n while leaf.type == 'newline':\n leaf = leaf.get_previous_leaf()\n if leaf is None:\n return ''\n\n if leaf.type == 'error_leaf' or leaf.type == 'string':\n if leaf.start_pos[0] < position[0]:\n # On a different line, we just begin anew.\n return ''\n\n # Error leafs cannot be parsed, completion in strings is also\n # impossible.\n raise OnErrorLeaf(leaf)\n else:\n user_stmt = leaf\n while True:\n if user_stmt.parent.type in ('file_input', 'suite', 'simple_stmt'):\n break\n user_stmt = user_stmt.parent\n\n if is_after_newline:\n if user_stmt.start_pos[1] > position[1]:\n # This means that it's actually a dedent and that means that we\n # start without value (part of a suite).\n return ''\n\n # This is basically getting the relevant lines.\n return _get_code(code_lines, user_stmt.get_start_pos_of_prefix(), position)\n\n\ndef get_stack_at_position(grammar, code_lines, leaf, pos):\n """\n Returns the possible node names (e.g. import_from, xor_test or yield_stmt).\n """\n class EndMarkerReached(Exception):\n pass\n\n def tokenize_without_endmarker(code):\n # TODO This is for now not an official parso API that exists purely\n # for Jedi.\n tokens = grammar._tokenize(code)\n for token in tokens:\n if token.string == safeword:\n raise EndMarkerReached()\n elif token.prefix.endswith(safeword):\n # This happens with comments.\n raise EndMarkerReached()\n elif token.string.endswith(safeword):\n yield token # Probably an f-string literal that was not finished.\n raise EndMarkerReached()\n else:\n yield token\n\n # The code might be indedented, just remove it.\n code = dedent(_get_code_for_stack(code_lines, leaf, pos))\n # We use a word to tell Jedi when we have reached the start of the\n # completion.\n # Use Z as a prefix because it's not part of a number suffix.\n safeword = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI'\n code = code + ' ' + safeword\n\n p = Parser(grammar._pgen_grammar, error_recovery=True)\n try:\n p.parse(tokens=tokenize_without_endmarker(code))\n except EndMarkerReached:\n return p.stack\n raise SystemError(\n "This really shouldn't happen. There's a bug in Jedi:\n%s"\n % list(tokenize_without_endmarker(code))\n )\n\n\ndef infer(inference_state, context, leaf):\n if leaf.type == 'name':\n return inference_state.infer(context, leaf)\n\n parent = leaf.parent\n definitions = NO_VALUES\n if parent.type == 'atom':\n # e.g. `(a + b)`\n definitions = context.infer_node(leaf.parent)\n elif parent.type == 'trailer':\n # e.g. `a()`\n definitions = infer_call_of_leaf(context, leaf)\n elif isinstance(leaf, tree.Literal):\n # e.g. 
`"foo"` or `1.0`\n return infer_atom(context, leaf)\n elif leaf.type in ('fstring_string', 'fstring_start', 'fstring_end'):\n return get_string_value_set(inference_state)\n return definitions\n\n\ndef filter_follow_imports(names, follow_builtin_imports=False):\n for name in names:\n if name.is_import():\n new_names = list(filter_follow_imports(\n name.goto(),\n follow_builtin_imports=follow_builtin_imports,\n ))\n found_builtin = False\n if follow_builtin_imports:\n for new_name in new_names:\n if new_name.start_pos is None:\n found_builtin = True\n\n if found_builtin:\n yield name\n else:\n yield from new_names\n else:\n yield name\n\n\nclass CallDetails:\n def __init__(self, bracket_leaf, children, position):\n self.bracket_leaf = bracket_leaf\n self._children = children\n self._position = position\n\n @property\n def index(self):\n return _get_index_and_key(self._children, self._position)[0]\n\n @property\n def keyword_name_str(self):\n return _get_index_and_key(self._children, self._position)[1]\n\n @memoize_method\n def _list_arguments(self):\n return list(_iter_arguments(self._children, self._position))\n\n def calculate_index(self, param_names):\n positional_count = 0\n used_names = set()\n star_count = -1\n args = self._list_arguments()\n if not args:\n if param_names:\n return 0\n else:\n return None\n\n is_kwarg = False\n for i, (star_count, key_start, had_equal) in enumerate(args):\n is_kwarg |= had_equal | (star_count == 2)\n if star_count:\n pass # For now do nothing, we don't know what's in there here.\n else:\n if i + 1 != len(args): # Not last\n if had_equal:\n used_names.add(key_start)\n else:\n positional_count += 1\n\n for i, param_name in enumerate(param_names):\n kind = param_name.get_kind()\n\n if not is_kwarg:\n if kind == Parameter.VAR_POSITIONAL:\n return i\n if kind in (Parameter.POSITIONAL_OR_KEYWORD, Parameter.POSITIONAL_ONLY):\n if i == positional_count:\n return i\n\n if key_start is not None and not star_count == 1 or star_count == 2:\n if param_name.string_name not in used_names \\n and (kind == Parameter.KEYWORD_ONLY\n or kind == Parameter.POSITIONAL_OR_KEYWORD\n and positional_count <= i):\n if star_count:\n return i\n if had_equal:\n if param_name.string_name == key_start:\n return i\n else:\n if param_name.string_name.startswith(key_start):\n return i\n\n if kind == Parameter.VAR_KEYWORD:\n return i\n return None\n\n def iter_used_keyword_arguments(self):\n for star_count, key_start, had_equal in list(self._list_arguments()):\n if had_equal and key_start:\n yield key_start\n\n def count_positional_arguments(self):\n count = 0\n for star_count, key_start, had_equal in self._list_arguments()[:-1]:\n if star_count or key_start:\n break\n count += 1\n return count\n\n\ndef _iter_arguments(nodes, position):\n def remove_after_pos(name):\n if name.type != 'name':\n return None\n return name.value[:position[1] - name.start_pos[1]]\n\n # Returns Generator[Tuple[star_count, Optional[key_start: str], had_equal]]\n nodes_before = [c for c in nodes if c.start_pos < position]\n if nodes_before[-1].type == 'arglist':\n yield from _iter_arguments(nodes_before[-1].children, position)\n return\n\n previous_node_yielded = False\n stars_seen = 0\n for i, node in enumerate(nodes_before):\n if node.type == 'argument':\n previous_node_yielded = True\n first = node.children[0]\n second = node.children[1]\n if second == '=':\n if second.start_pos < position and first.type == 'name':\n yield 0, first.value, True\n else:\n yield 0, remove_after_pos(first), False\n elif first in 
('*', '**'):\n yield len(first.value), remove_after_pos(second), False\n else:\n # Must be a Comprehension\n first_leaf = node.get_first_leaf()\n if first_leaf.type == 'name' and first_leaf.start_pos >= position:\n yield 0, remove_after_pos(first_leaf), False\n else:\n yield 0, None, False\n stars_seen = 0\n elif node.type == 'testlist_star_expr':\n for n in node.children[::2]:\n if n.type == 'star_expr':\n stars_seen = 1\n n = n.children[1]\n yield stars_seen, remove_after_pos(n), False\n stars_seen = 0\n # The count of children is even if there's a comma at the end.\n previous_node_yielded = bool(len(node.children) % 2)\n elif isinstance(node, tree.PythonLeaf) and node.value == ',':\n if not previous_node_yielded:\n yield stars_seen, '', False\n stars_seen = 0\n previous_node_yielded = False\n elif isinstance(node, tree.PythonLeaf) and node.value in ('*', '**'):\n stars_seen = len(node.value)\n elif node == '=' and nodes_before[-1]:\n previous_node_yielded = True\n before = nodes_before[i - 1]\n if before.type == 'name':\n yield 0, before.value, True\n else:\n yield 0, None, False\n # Just ignore the star that is probably a syntax error.\n stars_seen = 0\n\n if not previous_node_yielded:\n if nodes_before[-1].type == 'name':\n yield stars_seen, remove_after_pos(nodes_before[-1]), False\n else:\n yield stars_seen, '', False\n\n\ndef _get_index_and_key(nodes, position):\n """\n Returns the amount of commas and the keyword argument string.\n """\n nodes_before = [c for c in nodes if c.start_pos < position]\n if nodes_before[-1].type == 'arglist':\n return _get_index_and_key(nodes_before[-1].children, position)\n\n key_str = None\n\n last = nodes_before[-1]\n if last.type == 'argument' and last.children[1] == '=' \\n and last.children[1].end_pos <= position:\n # Checked if the argument\n key_str = last.children[0].value\n elif last == '=':\n key_str = nodes_before[-2].value\n\n return nodes_before.count(','), key_str\n\n\ndef _get_signature_details_from_error_node(node, additional_children, position):\n for index, element in reversed(list(enumerate(node.children))):\n # `index > 0` means that it's a trailer and not an atom.\n if element == '(' and element.end_pos <= position and index > 0:\n # It's an error node, we don't want to match too much, just\n # until the parentheses is enough.\n children = node.children[index:]\n name = element.get_previous_leaf()\n if name is None:\n continue\n if name.type == 'name' or name.parent.type in ('trailer', 'atom'):\n return CallDetails(element, children + additional_children, position)\n\n\ndef get_signature_details(module, position):\n leaf = module.get_leaf_for_position(position, include_prefixes=True)\n # It's easier to deal with the previous token than the next one in this\n # case.\n if leaf.start_pos >= position:\n # Whitespace / comments after the leaf count towards the previous leaf.\n leaf = leaf.get_previous_leaf()\n if leaf is None:\n return None\n\n # Now that we know where we are in the syntax tree, we start to look at\n # parents for possible function definitions.\n node = leaf.parent\n while node is not None:\n if node.type in ('funcdef', 'classdef', 'decorated', 'async_stmt'):\n # Don't show signatures if there's stuff before it that just\n # makes it feel strange to have a signature.\n return None\n\n additional_children = []\n for n in reversed(node.children):\n if n.start_pos < position:\n if n.type == 'error_node':\n result = _get_signature_details_from_error_node(\n n, additional_children, position\n )\n if result is not None:\n 
return result\n\n additional_children[0:0] = n.children\n continue\n additional_children.insert(0, n)\n\n # Find a valid trailer\n if node.type == 'trailer' and node.children[0] == '(' \\n or node.type == 'decorator' and node.children[2] == '(':\n # Additionally we have to check that an ending parenthesis isn't\n # interpreted wrong. There are two cases:\n # 1. Cursor before paren -> The current signature is good\n # 2. Cursor after paren -> We need to skip the current signature\n if not (leaf is node.children[-1] and position >= leaf.end_pos):\n leaf = node.get_previous_leaf()\n if leaf is None:\n return None\n return CallDetails(\n node.children[0] if node.type == 'trailer' else node.children[2],\n node.children,\n position\n )\n\n node = node.parent\n\n return None\n\n\n@signature_time_cache("call_signatures_validity")\ndef cache_signatures(inference_state, context, bracket_leaf, code_lines, user_pos):\n """This function calculates the cache key."""\n line_index = user_pos[0] - 1\n\n before_cursor = code_lines[line_index][:user_pos[1]]\n other_lines = code_lines[bracket_leaf.start_pos[0]:line_index]\n whole = ''.join(other_lines + [before_cursor])\n before_bracket = re.match(r'.*\(', whole, re.DOTALL)\n\n module_path = context.get_root_context().py__file__()\n if module_path is None:\n yield None # Don't cache!\n else:\n yield (module_path, before_bracket, bracket_leaf.start_pos)\n yield infer(\n inference_state,\n context,\n bracket_leaf.get_previous_leaf(),\n )\n\n\ndef validate_line_column(func):\n @wraps(func)\n def wrapper(self, line=None, column=None, *args, **kwargs):\n line = max(len(self._code_lines), 1) if line is None else line\n if not (0 < line <= len(self._code_lines)):\n raise ValueError('`line` parameter is not in a valid range.')\n\n line_string = self._code_lines[line - 1]\n line_len = len(line_string)\n if line_string.endswith('\r\n'):\n line_len -= 2\n elif line_string.endswith('\n'):\n line_len -= 1\n\n column = line_len if column is None else column\n if not (0 <= column <= line_len):\n raise ValueError('`column` parameter (%d) is not in a valid range '\n '(0-%d) for line %d (%r).' % (\n column, line_len, line, line_string))\n return func(self, line, column, *args, **kwargs)\n return wrapper\n\n\ndef get_module_names(module, all_scopes, definitions=True, references=False):\n """\n Returns a dictionary with name parts as keys and their call paths as\n values.\n """\n def def_ref_filter(name):\n is_def = name.is_definition()\n return definitions and is_def or references and not is_def\n\n names = list(chain.from_iterable(module.get_used_names().values()))\n if not all_scopes:\n # We have to filter all the names that don't have the module as a\n # parent_scope. There's None as a parent, because nodes in the module\n # node have the parent module and not suite as all the others.\n # Therefore it's important to catch that case.\n\n def is_module_scope_name(name):\n parent_scope = get_parent_scope(name)\n # async functions have an extra wrapper. Strip it.\n if parent_scope and parent_scope.type == 'async_stmt':\n parent_scope = parent_scope.parent\n return parent_scope in (module, None)\n\n names = [n for n in names if is_module_scope_name(n)]\n return filter(def_ref_filter, names)\n\n\ndef split_search_string(name):\n type, _, dotted_names = name.rpartition(' ')\n if type == 'def':\n type = 'function'\n return type, dotted_names.split('.')\n
.venv\Lib\site-packages\jedi\api\helpers.py
helpers.py
Python
18,982
0.95
0.270115
0.116705
node-utils
949
2023-09-10T10:16:55.105790
GPL-3.0
false
c2846b9c6952d335686f8818d203d726
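The `split_search_string` helper above backs `Script.search`; a minimal sketch of its behaviour, read directly off the source (it is an internal helper of `jedi.api.helpers`, not public API)::

    from jedi.api.helpers import split_search_string

    print(split_search_string('def foo.bar'))    # ('function', ['foo', 'bar'])
    print(split_search_string('class MyClass'))  # ('class', ['MyClass'])
    print(split_search_string('foo.bar'))        # ('', ['foo', 'bar'])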
"""\nTODO Some parts of this module are still not well documented.\n"""\n\nfrom jedi.inference import compiled\nfrom jedi.inference.base_value import ValueSet\nfrom jedi.inference.filters import ParserTreeFilter, MergedFilter\nfrom jedi.inference.names import TreeNameDefinition\nfrom jedi.inference.compiled import mixed\nfrom jedi.inference.compiled.access import create_access_path\nfrom jedi.inference.context import ModuleContext\n\n\ndef _create(inference_state, obj):\n return compiled.create_from_access_path(\n inference_state, create_access_path(inference_state, obj)\n )\n\n\nclass NamespaceObject:\n def __init__(self, dct):\n self.__dict__ = dct\n\n\nclass MixedTreeName(TreeNameDefinition):\n def infer(self):\n """\n In IPython notebook it is typical that some parts of the code that is\n provided was already executed. In that case if something is not properly\n inferred, it should still infer from the variables it already knows.\n """\n inferred = super().infer()\n if not inferred:\n for compiled_value in self.parent_context.mixed_values:\n for f in compiled_value.get_filters():\n values = ValueSet.from_sets(\n n.infer() for n in f.get(self.string_name)\n )\n if values:\n return values\n return inferred\n\n\nclass MixedParserTreeFilter(ParserTreeFilter):\n name_class = MixedTreeName\n\n\nclass MixedModuleContext(ModuleContext):\n def __init__(self, tree_module_value, namespaces):\n super().__init__(tree_module_value)\n self.mixed_values = [\n self._get_mixed_object(\n _create(self.inference_state, NamespaceObject(n))\n ) for n in namespaces\n ]\n\n def _get_mixed_object(self, compiled_value):\n return mixed.MixedObject(\n compiled_value=compiled_value,\n tree_value=self._value\n )\n\n def get_filters(self, until_position=None, origin_scope=None):\n yield MergedFilter(\n MixedParserTreeFilter(\n parent_context=self,\n until_position=until_position,\n origin_scope=origin_scope\n ),\n self.get_global_filter(),\n )\n\n for mixed_object in self.mixed_values:\n yield from mixed_object.get_filters(until_position, origin_scope)\n
.venv\Lib\site-packages\jedi\api\interpreter.py
interpreter.py
Python
2,415
0.85
0.243243
0
vue-tools
640
2024-08-19T00:16:42.627431
BSD-3-Clause
false
48f8e0637479ea4fa3bfbbcbe5ea6d87
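`MixedModuleContext` is normally reached through the public `jedi.Interpreter` class rather than used directly; a minimal sketch of that entry point (the variable name and source string are made up for illustration)::

    import jedi

    counts = {'apples': 3}
    interp = jedi.Interpreter('counts.it', [locals()])
    # Completes at the end of the code; expected to include 'items',
    # because the live dict in the namespace is mixed into inference.
    print([c.name for c in interp.complete()])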
import pydoc\nfrom contextlib import suppress\nfrom typing import Dict, Optional\n\nfrom jedi.inference.names import AbstractArbitraryName\n\ntry:\n from pydoc_data import topics\n pydoc_topics: Optional[Dict[str, str]] = topics.topics\nexcept ImportError:\n # Python 3.6.8 embeddable does not have pydoc_data.\n pydoc_topics = None\n\n\nclass KeywordName(AbstractArbitraryName):\n api_type = 'keyword'\n\n def py__doc__(self):\n return imitate_pydoc(self.string_name)\n\n\ndef imitate_pydoc(string):\n """\n It's not possible to get the pydoc's without starting the annoying pager\n stuff.\n """\n if pydoc_topics is None:\n return ''\n\n h = pydoc.help\n with suppress(KeyError):\n # try to access symbols\n string = h.symbols[string]\n string, _, related = string.partition(' ')\n\n def get_target(s):\n return h.topics.get(s, h.keywords.get(s))\n\n while isinstance(string, str):\n string = get_target(string)\n\n try:\n # is a tuple now\n label, related = string\n except TypeError:\n return ''\n\n try:\n return pydoc_topics[label].strip() if pydoc_topics else ''\n except KeyError:\n return ''\n
.venv\Lib\site-packages\jedi\api\keywords.py
keywords.py
Python
1,192
0.95
0.215686
0.076923
react-lib
771
2025-06-24T22:09:48.495252
Apache-2.0
false
828e7e7c49c62d6f7c1620912c72173c
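`imitate_pydoc` is an internal helper, but it can be exercised directly to see where keyword docstrings come from; a minimal sketch (output depends on whether `pydoc_data` is available in the running interpreter)::

    from jedi.api.keywords import imitate_pydoc

    text = imitate_pydoc('if')
    print(text.splitlines()[0] if text else '(no pydoc topics available)')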
"""\nTo use Jedi completion in Python interpreter, add the following in your shell\nsetup (e.g., ``.bashrc``). This works only on Linux/Mac, because readline is\nnot available on Windows. If you still want Jedi autocompletion in your REPL,\njust use IPython instead::\n\n export PYTHONSTARTUP="$(python -m jedi repl)"\n\nThen you will be able to use Jedi completer in your Python interpreter::\n\n $ python\n Python 3.9.2+ (default, Jul 20 2020, 22:15:08)\n [GCC 4.6.1] on linux2\n Type "help", "copyright", "credits" or "license" for more information.\n >>> import os\n >>> os.path.join('a', 'b').split().in<TAB> # doctest: +SKIP\n ..dex ..sert\n\n"""\nimport jedi.utils\nfrom jedi import __version__ as __jedi_version__\n\nprint('REPL completion using Jedi %s' % __jedi_version__)\njedi.utils.setup_readline(fuzzy=False)\n\ndel jedi\n\n# Note: try not to do many things here, as it will contaminate global\n# namespace of the interpreter.\n
.venv\Lib\site-packages\jedi\api\replstartup.py
replstartup.py
Python
950
0.95
0.068966
0.090909
node-utils
182
2023-10-14T08:58:34.017440
MIT
false
5f4a67540bb5948e26c3346da3dd53a0
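Outside of the `PYTHONSTARTUP` hook shown above, the same effect can be had by hand inside a plain `python` REPL; a minimal sketch::

    import jedi.utils

    # Enables Jedi-based tab completion via readline (Linux/macOS only).
    jedi.utils.setup_readline(fuzzy=False)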
"""\nThis module is here for string completions. This means mostly stuff where\nstrings are returned, like `foo = dict(bar=3); foo["ba` would complete to\n`"bar"]`.\n\nIt however does the same for numbers. The difference between string completions\nand other completions is mostly that this module doesn't return defined\nnames in a module, but pretty much an arbitrary string.\n"""\nimport re\n\nfrom jedi.inference.names import AbstractArbitraryName\nfrom jedi.inference.helpers import infer_call_of_leaf\nfrom jedi.api.classes import Completion\nfrom jedi.parser_utils import cut_value_at_position\n\n_sentinel = object()\n\n\nclass StringName(AbstractArbitraryName):\n api_type = 'string'\n is_value_name = False\n\n\ndef complete_dict(module_context, code_lines, leaf, position, string, fuzzy):\n bracket_leaf = leaf\n if bracket_leaf != '[':\n bracket_leaf = leaf.get_previous_leaf()\n\n cut_end_quote = ''\n if string:\n cut_end_quote = get_quote_ending(string, code_lines, position, invert_result=True)\n\n if bracket_leaf == '[':\n if string is None and leaf is not bracket_leaf:\n string = cut_value_at_position(leaf, position)\n\n context = module_context.create_context(bracket_leaf)\n\n before_node = before_bracket_leaf = bracket_leaf.get_previous_leaf()\n if before_node in (')', ']', '}'):\n before_node = before_node.parent\n if before_node.type in ('atom', 'trailer', 'name'):\n values = infer_call_of_leaf(context, before_bracket_leaf)\n return list(_completions_for_dicts(\n module_context.inference_state,\n values,\n '' if string is None else string,\n cut_end_quote,\n fuzzy=fuzzy,\n ))\n return []\n\n\ndef _completions_for_dicts(inference_state, dicts, literal_string, cut_end_quote, fuzzy):\n for dict_key in sorted(_get_python_keys(dicts), key=lambda x: repr(x)):\n dict_key_str = _create_repr_string(literal_string, dict_key)\n if dict_key_str.startswith(literal_string):\n name = StringName(inference_state, dict_key_str[:-len(cut_end_quote) or None])\n yield Completion(\n inference_state,\n name,\n stack=None,\n like_name_length=len(literal_string),\n is_fuzzy=fuzzy\n )\n\n\ndef _create_repr_string(literal_string, dict_key):\n if not isinstance(dict_key, (str, bytes)) or not literal_string:\n return repr(dict_key)\n\n r = repr(dict_key)\n prefix, quote = _get_string_prefix_and_quote(literal_string)\n if quote is None:\n return r\n if quote == r[0]:\n return prefix + r\n return prefix + quote + r[1:-1] + quote\n\n\ndef _get_python_keys(dicts):\n for dct in dicts:\n if dct.array_type == 'dict':\n for key in dct.get_key_values():\n dict_key = key.get_safe_value(default=_sentinel)\n if dict_key is not _sentinel:\n yield dict_key\n\n\ndef _get_string_prefix_and_quote(string):\n match = re.match(r'(\w*)("""|\'{3}|"|\')', string)\n if match is None:\n return None, None\n return match.group(1), match.group(2)\n\n\ndef _matches_quote_at_position(code_lines, quote, position):\n string = code_lines[position[0] - 1][position[1]:position[1] + len(quote)]\n return string == quote\n\n\ndef get_quote_ending(string, code_lines, position, invert_result=False):\n _, quote = _get_string_prefix_and_quote(string)\n if quote is None:\n return ''\n\n # Add a quote only if it's not already there.\n if _matches_quote_at_position(code_lines, quote, position) != invert_result:\n return ''\n return quote\n
.venv\Lib\site-packages\jedi\api\strings.py
strings.py
Python
3,711
0.95
0.27027
0.011628
awesome-app
938
2024-09-04T12:49:00.539912
MIT
false
e4a0b117180eee7c3e8121ba7ad3722a
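The dict-key completion described in the module docstring is reachable through the public `jedi.Script` API; a minimal sketch (the code string is made up, and the exact completion text may vary by Jedi version)::

    import jedi

    code = 'foo = dict(bar=3); foo["ba'
    completions = jedi.Script(code).complete(1, len(code))
    print([c.name for c in completions])  # expected to include the 'bar' key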
"""\nThe API basically only provides one class. You can create a :class:`Script` and\nuse its methods.\n\nAdditionally you can add a debug function with :func:`set_debug_function`.\nAlternatively, if you don't need a custom function and are happy with printing\ndebug messages to stdout, simply call :func:`set_debug_function` without\narguments.\n"""\nimport sys\nfrom pathlib import Path\n\nimport parso\nfrom parso.python import tree\n\nfrom jedi.parser_utils import get_executable_nodes\nfrom jedi import debug\nfrom jedi import settings\nfrom jedi import cache\nfrom jedi.file_io import KnownContentFileIO\nfrom jedi.api import classes\nfrom jedi.api import interpreter\nfrom jedi.api import helpers\nfrom jedi.api.helpers import validate_line_column\nfrom jedi.api.completion import Completion, search_in_module\nfrom jedi.api.keywords import KeywordName\nfrom jedi.api.environment import InterpreterEnvironment\nfrom jedi.api.project import get_default_project, Project\nfrom jedi.api.errors import parso_to_jedi_errors\nfrom jedi.api import refactoring\nfrom jedi.api.refactoring.extract import extract_function, extract_variable\nfrom jedi.inference import InferenceState\nfrom jedi.inference import imports\nfrom jedi.inference.references import find_references\nfrom jedi.inference.arguments import try_iter_content\nfrom jedi.inference.helpers import infer_call_of_leaf\nfrom jedi.inference.sys_path import transform_path_to_dotted\nfrom jedi.inference.syntax_tree import tree_name_to_values\nfrom jedi.inference.value import ModuleValue\nfrom jedi.inference.base_value import ValueSet\nfrom jedi.inference.value.iterable import unpack_tuple_to_dict\nfrom jedi.inference.gradual.conversion import convert_names, convert_values\nfrom jedi.inference.gradual.utils import load_proper_stub_module\nfrom jedi.inference.utils import to_list\n\n# Jedi uses lots and lots of recursion. By setting this a little bit higher, we\n# can remove some "maximum recursion depth" errors.\nsys.setrecursionlimit(3000)\n\n\nclass Script:\n """\n A Script is the base for completions, goto or whatever you want to do with\n Jedi. The counter part of this class is :class:`Interpreter`, which works\n with actual dictionaries and can work with a REPL. This class\n should be used when a user edits code in an editor.\n\n You can either use the ``code`` parameter or ``path`` to read a file.\n Usually you're going to want to use both of them (in an editor).\n\n The Script's ``sys.path`` is very customizable:\n\n - If `project` is provided with a ``sys_path``, that is going to be used.\n - If `environment` is provided, its ``sys.path`` will be used\n (see :func:`Environment.get_sys_path <jedi.api.environment.Environment.get_sys_path>`);\n - Otherwise ``sys.path`` will match that of the default environment of\n Jedi, which typically matches the sys path that was used at the time\n when Jedi was imported.\n\n Most methods have a ``line`` and a ``column`` parameter. Lines in Jedi are\n always 1-based and columns are always zero based. To avoid repetition they\n are not always documented. You can omit both line and column. Jedi will\n then just do whatever action you are calling at the end of the file. If you\n provide only the line, just will complete at the end of that line.\n\n .. warning:: By default :attr:`jedi.settings.fast_parser` is enabled, which means\n that parso reuses modules (i.e. they are not immutable). 
With this setting\n Jedi is **not thread safe** and it is also not safe to use multiple\n :class:`.Script` instances and its definitions at the same time.\n\n If you are a normal plugin developer this should not be an issue. It is\n an issue for people that do more complex stuff with Jedi.\n\n This is purely a performance optimization and works pretty well for all\n typical usages, however consider to turn the setting off if it causes\n you problems. See also\n `this discussion <https://github.com/davidhalter/jedi/issues/1240>`_.\n\n :param code: The source code of the current file, separated by newlines.\n :type code: str\n :param path: The path of the file in the file system, or ``''`` if\n it hasn't been saved yet.\n :type path: str or pathlib.Path or None\n :param Environment environment: Provide a predefined :ref:`Environment <environments>`\n to work with a specific Python version or virtualenv.\n :param Project project: Provide a :class:`.Project` to make sure finding\n references works well, because the right folder is searched. There are\n also ways to modify the sys path and other things.\n """\n def __init__(self, code=None, *, path=None, environment=None, project=None):\n self._orig_path = path\n if isinstance(path, str):\n path = Path(path)\n\n self.path = path.absolute() if path else None\n\n if code is None:\n if path is None:\n raise ValueError("Must provide at least one of code or path")\n\n # TODO add a better warning than the traceback!\n with open(path, 'rb') as f:\n code = f.read()\n\n if project is None:\n # Load the Python grammar of the current interpreter.\n project = get_default_project(None if self.path is None else self.path.parent)\n\n self._inference_state = InferenceState(\n project, environment=environment, script_path=self.path\n )\n debug.speed('init')\n self._module_node, code = self._inference_state.parse_and_get_code(\n code=code,\n path=self.path,\n use_latest_grammar=path and path.suffix == '.pyi',\n cache=False, # No disk cache, because the current script often changes.\n diff_cache=settings.fast_parser,\n cache_path=settings.cache_directory,\n )\n debug.speed('parsed')\n self._code_lines = parso.split_lines(code, keepends=True)\n self._code = code\n\n cache.clear_time_caches()\n debug.reset_time()\n\n # Cache the module, this is mostly useful for testing, since this shouldn't\n # be called multiple times.\n @cache.memoize_method\n def _get_module(self):\n names = None\n is_package = False\n if self.path is not None:\n import_names, is_p = transform_path_to_dotted(\n self._inference_state.get_sys_path(add_parent_paths=False),\n self.path\n )\n if import_names is not None:\n names = import_names\n is_package = is_p\n\n if self.path is None:\n file_io = None\n else:\n file_io = KnownContentFileIO(self.path, self._code)\n if self.path is not None and self.path.suffix == '.pyi':\n # We are in a stub file. 
Try to load the stub properly.\n stub_module = load_proper_stub_module(\n self._inference_state,\n self._inference_state.latest_grammar,\n file_io,\n names,\n self._module_node\n )\n if stub_module is not None:\n return stub_module\n\n if names is None:\n names = ('__main__',)\n\n module = ModuleValue(\n self._inference_state, self._module_node,\n file_io=file_io,\n string_names=names,\n code_lines=self._code_lines,\n is_package=is_package,\n )\n if names[0] not in ('builtins', 'typing'):\n # These modules are essential for Jedi, so don't overwrite them.\n self._inference_state.module_cache.add(names, ValueSet([module]))\n return module\n\n def _get_module_context(self):\n return self._get_module().as_context()\n\n def __repr__(self):\n return '<%s: %s %r>' % (\n self.__class__.__name__,\n repr(self._orig_path),\n self._inference_state.environment,\n )\n\n @validate_line_column\n def complete(self, line=None, column=None, *, fuzzy=False):\n """\n Completes objects under the cursor.\n\n Those objects contain information about the completions, more than just\n names.\n\n :param fuzzy: Default False. Will return fuzzy completions, which means\n that e.g. ``ooa`` will match ``foobar``.\n :return: Completion objects, sorted by name. Normal names appear\n before "private" names that start with ``_`` and those appear\n before magic methods and name mangled names that start with ``__``.\n :rtype: list of :class:`.Completion`\n """\n self._inference_state.reset_recursion_limitations()\n with debug.increase_indent_cm('complete'):\n completion = Completion(\n self._inference_state, self._get_module_context(), self._code_lines,\n (line, column), self.get_signatures, fuzzy=fuzzy,\n )\n return completion.complete()\n\n @validate_line_column\n def infer(self, line=None, column=None, *, only_stubs=False, prefer_stubs=False):\n """\n Return the definitions of under the cursor. It is basically a wrapper\n around Jedi's type inference.\n\n This method follows complicated paths and returns the end, not the\n first definition. The big difference between :meth:`goto` and\n :meth:`infer` is that :meth:`goto` doesn't\n follow imports and statements. Multiple objects may be returned,\n because depending on an option you can have two different versions of a\n function.\n\n :param only_stubs: Only return stubs for this method.\n :param prefer_stubs: Prefer stubs to Python objects for this method.\n :rtype: list of :class:`.Name`\n """\n self._inference_state.reset_recursion_limitations()\n pos = line, column\n leaf = self._module_node.get_name_of_position(pos)\n if leaf is None:\n leaf = self._module_node.get_leaf_for_position(pos)\n if leaf is None or leaf.type == 'string':\n return []\n if leaf.end_pos == (line, column) and leaf.type == 'operator':\n next_ = leaf.get_next_leaf()\n if next_.start_pos == leaf.end_pos \\n and next_.type in ('number', 'string', 'keyword'):\n leaf = next_\n\n context = self._get_module_context().create_context(leaf)\n\n values = helpers.infer(self._inference_state, context, leaf)\n values = convert_values(\n values,\n only_stubs=only_stubs,\n prefer_stubs=prefer_stubs,\n )\n\n defs = [classes.Name(self._inference_state, c.name) for c in values]\n # The additional set here allows the definitions to become unique in an\n # API sense. 
In the internals we want to separate more things than in\n # the API.\n return helpers.sorted_definitions(set(defs))\n\n @validate_line_column\n def goto(self, line=None, column=None, *, follow_imports=False, follow_builtin_imports=False,\n only_stubs=False, prefer_stubs=False):\n """\n Goes to the name that defined the object under the cursor. Optionally\n you can follow imports.\n Multiple objects may be returned, depending on an if you can have two\n different versions of a function.\n\n :param follow_imports: The method will follow imports.\n :param follow_builtin_imports: If ``follow_imports`` is True will try\n to look up names in builtins (i.e. compiled or extension modules).\n :param only_stubs: Only return stubs for this method.\n :param prefer_stubs: Prefer stubs to Python objects for this method.\n :rtype: list of :class:`.Name`\n """\n self._inference_state.reset_recursion_limitations()\n tree_name = self._module_node.get_name_of_position((line, column))\n if tree_name is None:\n # Without a name we really just want to jump to the result e.g.\n # executed by `foo()`, if we the cursor is after `)`.\n return self.infer(line, column, only_stubs=only_stubs, prefer_stubs=prefer_stubs)\n name = self._get_module_context().create_name(tree_name)\n\n # Make it possible to goto the super class function/attribute\n # definitions, when they are overwritten.\n names = []\n if name.tree_name.is_definition() and name.parent_context.is_class():\n class_node = name.parent_context.tree_node\n class_value = self._get_module_context().create_value(class_node)\n mro = class_value.py__mro__()\n next(mro) # Ignore the first entry, because it's the class itself.\n for cls in mro:\n names = cls.goto(tree_name.value)\n if names:\n break\n\n if not names:\n names = list(name.goto())\n\n if follow_imports:\n names = helpers.filter_follow_imports(names, follow_builtin_imports)\n names = convert_names(\n names,\n only_stubs=only_stubs,\n prefer_stubs=prefer_stubs,\n )\n\n defs = [classes.Name(self._inference_state, d) for d in set(names)]\n # Avoid duplicates\n return list(set(helpers.sorted_definitions(defs)))\n\n def search(self, string, *, all_scopes=False):\n """\n Searches a name in the current file. For a description of how the\n search string should look like, please have a look at\n :meth:`.Project.search`.\n\n :param bool all_scopes: Default False; searches not only for\n definitions on the top level of a module level, but also in\n functions and classes.\n :yields: :class:`.Name`\n """\n return self._search_func(string, all_scopes=all_scopes)\n\n @to_list\n def _search_func(self, string, all_scopes=False, complete=False, fuzzy=False):\n names = self._names(all_scopes=all_scopes)\n wanted_type, wanted_names = helpers.split_search_string(string)\n return search_in_module(\n self._inference_state,\n self._get_module_context(),\n names=names,\n wanted_type=wanted_type,\n wanted_names=wanted_names,\n complete=complete,\n fuzzy=fuzzy,\n )\n\n def complete_search(self, string, **kwargs):\n """\n Like :meth:`.Script.search`, but completes that string. If you want to\n have all possible definitions in a file you can also provide an empty\n string.\n\n :param bool all_scopes: Default False; searches not only for\n definitions on the top level of a module level, but also in\n functions and classes.\n :param fuzzy: Default False. Will return fuzzy completions, which means\n that e.g. 
``ooa`` will match ``foobar``.\n :yields: :class:`.Completion`\n """\n return self._search_func(string, complete=True, **kwargs)\n\n @validate_line_column\n def help(self, line=None, column=None):\n """\n Used to display a help window to users. Uses :meth:`.Script.goto` and\n returns additional definitions for keywords and operators.\n\n Typically you will want to display :meth:`.BaseName.docstring` to the\n user for all the returned definitions.\n\n The additional definitions are ``Name(...).type == 'keyword'``.\n These definitions do not have a lot of value apart from their docstring\n attribute, which contains the output of Python's :func:`help` function.\n\n :rtype: list of :class:`.Name`\n """\n self._inference_state.reset_recursion_limitations()\n definitions = self.goto(line, column, follow_imports=True)\n if definitions:\n return definitions\n leaf = self._module_node.get_leaf_for_position((line, column))\n\n if leaf is not None and leaf.end_pos == (line, column) and leaf.type == 'newline':\n next_ = leaf.get_next_leaf()\n if next_ is not None and next_.start_pos == leaf.end_pos:\n leaf = next_\n\n if leaf is not None and leaf.type in ('keyword', 'operator', 'error_leaf'):\n def need_pydoc():\n if leaf.value in ('(', ')', '[', ']'):\n if leaf.parent.type == 'trailer':\n return False\n if leaf.parent.type == 'atom':\n return False\n grammar = self._inference_state.grammar\n # This parso stuff is not public, but since I control it, this\n # is fine :-) ~dave\n reserved = grammar._pgen_grammar.reserved_syntax_strings.keys()\n return leaf.value in reserved\n\n if need_pydoc():\n name = KeywordName(self._inference_state, leaf.value)\n return [classes.Name(self._inference_state, name)]\n return []\n\n @validate_line_column\n def get_references(self, line=None, column=None, **kwargs):\n """\n Lists all references of a variable in a project. Since this can be\n quite hard to do for Jedi, if it is too complicated, Jedi will stop\n searching.\n\n :param include_builtins: Default ``True``. If ``False``, checks if a definition\n is a builtin (e.g. ``sys``) and in that case does not return it.\n :param scope: Default ``'project'``. If ``'file'``, include references in\n the current module only.\n :rtype: list of :class:`.Name`\n """\n self._inference_state.reset_recursion_limitations()\n\n def _references(include_builtins=True, scope='project'):\n if scope not in ('project', 'file'):\n raise ValueError('Only the scopes "file" and "project" are allowed')\n tree_name = self._module_node.get_name_of_position((line, column))\n if tree_name is None:\n # Must be syntax\n return []\n\n names = find_references(self._get_module_context(), tree_name, scope == 'file')\n\n definitions = [classes.Name(self._inference_state, n) for n in names]\n if not include_builtins or scope == 'file':\n definitions = [d for d in definitions if not d.in_builtin_module()]\n return helpers.sorted_definitions(definitions)\n return _references(**kwargs)\n\n @validate_line_column\n def get_signatures(self, line=None, column=None):\n """\n Return the function object of the call under the cursor.\n\n E.g. if the cursor is here::\n\n abs(# <-- cursor is here\n\n This would return the ``abs`` function. 
On the other hand::\n\n abs()# <-- cursor is here\n\n This would return an empty list..\n\n :rtype: list of :class:`.Signature`\n """\n self._inference_state.reset_recursion_limitations()\n pos = line, column\n call_details = helpers.get_signature_details(self._module_node, pos)\n if call_details is None:\n return []\n\n context = self._get_module_context().create_context(call_details.bracket_leaf)\n definitions = helpers.cache_signatures(\n self._inference_state,\n context,\n call_details.bracket_leaf,\n self._code_lines,\n pos\n )\n debug.speed('func_call followed')\n\n # TODO here we use stubs instead of the actual values. We should use\n # the signatures from stubs, but the actual values, probably?!\n return [classes.Signature(self._inference_state, signature, call_details)\n for signature in definitions.get_signatures()]\n\n @validate_line_column\n def get_context(self, line=None, column=None):\n """\n Returns the scope context under the cursor. This basically means the\n function, class or module where the cursor is at.\n\n :rtype: :class:`.Name`\n """\n pos = (line, column)\n leaf = self._module_node.get_leaf_for_position(pos, include_prefixes=True)\n if leaf.start_pos > pos or leaf.type == 'endmarker':\n previous_leaf = leaf.get_previous_leaf()\n if previous_leaf is not None:\n leaf = previous_leaf\n\n module_context = self._get_module_context()\n\n n = tree.search_ancestor(leaf, 'funcdef', 'classdef')\n if n is not None and n.start_pos < pos <= n.children[-1].start_pos:\n # This is a bit of a special case. The context of a function/class\n # name/param/keyword is always it's parent context, not the\n # function itself. Catch all the cases here where we are before the\n # suite object, but still in the function.\n context = module_context.create_value(n).as_context()\n else:\n context = module_context.create_context(leaf)\n\n while context.name is None:\n context = context.parent_context # comprehensions\n\n definition = classes.Name(self._inference_state, context.name)\n while definition.type != 'module':\n name = definition._name # TODO private access\n tree_name = name.tree_name\n if tree_name is not None: # Happens with lambdas.\n scope = tree_name.get_definition()\n if scope.start_pos[1] < column:\n break\n definition = definition.parent()\n return definition\n\n def _analysis(self):\n self._inference_state.is_analysis = True\n self._inference_state.analysis_modules = [self._module_node]\n module = self._get_module_context()\n try:\n for node in get_executable_nodes(self._module_node):\n context = module.create_context(node)\n if node.type in ('funcdef', 'classdef'):\n # Resolve the decorators.\n tree_name_to_values(self._inference_state, context, node.children[1])\n elif isinstance(node, tree.Import):\n import_names = set(node.get_defined_names())\n if node.is_nested():\n import_names |= set(path[-1] for path in node.get_paths())\n for n in import_names:\n imports.infer_import(context, n)\n elif node.type == 'expr_stmt':\n types = context.infer_node(node)\n for testlist in node.children[:-1:2]:\n # Iterate tuples.\n unpack_tuple_to_dict(context, types, testlist)\n else:\n if node.type == 'name':\n defs = self._inference_state.infer(context, node)\n else:\n defs = infer_call_of_leaf(context, node)\n try_iter_content(defs)\n self._inference_state.reset_recursion_limitations()\n\n ana = [a for a in self._inference_state.analysis if self.path == a.path]\n return sorted(set(ana), key=lambda x: x.line)\n finally:\n self._inference_state.is_analysis = False\n\n def get_names(self, 
**kwargs):\n """\n Returns names defined in the current file.\n\n :param all_scopes: If True lists the names of all scopes instead of\n only the module namespace.\n :param definitions: If True lists the names that have been defined by a\n class, function or a statement (``a = b`` returns ``a``).\n :param references: If True lists all the names that are not listed by\n ``definitions=True``. E.g. ``a = b`` returns ``b``.\n :rtype: list of :class:`.Name`\n """\n names = self._names(**kwargs)\n return [classes.Name(self._inference_state, n) for n in names]\n\n def get_syntax_errors(self):\n """\n Lists all syntax errors in the current file.\n\n :rtype: list of :class:`.SyntaxError`\n """\n return parso_to_jedi_errors(self._inference_state.grammar, self._module_node)\n\n def _names(self, all_scopes=False, definitions=True, references=False):\n self._inference_state.reset_recursion_limitations()\n # Set line/column to a random position, because they don't matter.\n module_context = self._get_module_context()\n defs = [\n module_context.create_name(name)\n for name in helpers.get_module_names(\n self._module_node,\n all_scopes=all_scopes,\n definitions=definitions,\n references=references,\n )\n ]\n return sorted(defs, key=lambda x: x.start_pos)\n\n def rename(self, line=None, column=None, *, new_name):\n """\n Renames all references of the variable under the cursor.\n\n :param new_name: The variable under the cursor will be renamed to this\n string.\n :raises: :exc:`.RefactoringError`\n :rtype: :class:`.Refactoring`\n """\n definitions = self.get_references(line, column, include_builtins=False)\n return refactoring.rename(self._inference_state, definitions, new_name)\n\n @validate_line_column\n def extract_variable(self, line, column, *, new_name, until_line=None, until_column=None):\n """\n Moves an expression to a new statement.\n\n For example if you have the cursor on ``foo`` and provide a\n ``new_name`` called ``bar``::\n\n foo = 3.1\n x = int(foo + 1)\n\n the code above will become::\n\n foo = 3.1\n bar = foo + 1\n x = int(bar)\n\n :param new_name: The expression under the cursor will be renamed to\n this string.\n :param int until_line: The the selection range ends at this line, when\n omitted, Jedi will be clever and try to define the range itself.\n :param int until_column: The the selection range ends at this column, when\n omitted, Jedi will be clever and try to define the range itself.\n :raises: :exc:`.RefactoringError`\n :rtype: :class:`.Refactoring`\n """\n if until_line is None and until_column is None:\n until_pos = None\n else:\n if until_line is None:\n until_line = line\n if until_column is None:\n until_column = len(self._code_lines[until_line - 1])\n until_pos = until_line, until_column\n return extract_variable(\n self._inference_state, self.path, self._module_node,\n new_name, (line, column), until_pos\n )\n\n @validate_line_column\n def extract_function(self, line, column, *, new_name, until_line=None, until_column=None):\n """\n Moves an expression to a new function.\n\n For example if you have the cursor on ``foo`` and provide a\n ``new_name`` called ``bar``::\n\n global_var = 3\n\n def x():\n foo = 3.1\n x = int(foo + 1 + global_var)\n\n the code above will become::\n\n global_var = 3\n\n def bar(foo):\n return int(foo + 1 + global_var)\n\n def x():\n foo = 3.1\n x = bar(foo)\n\n :param new_name: The expression under the cursor will be replaced with\n a function with this name.\n :param int until_line: The the selection range ends at this line, when\n omitted, Jedi 
will be clever and try to define the range itself.\n :param int until_column: The the selection range ends at this column, when\n omitted, Jedi will be clever and try to define the range itself.\n :raises: :exc:`.RefactoringError`\n :rtype: :class:`.Refactoring`\n """\n if until_line is None and until_column is None:\n until_pos = None\n else:\n if until_line is None:\n until_line = line\n if until_column is None:\n until_column = len(self._code_lines[until_line - 1])\n until_pos = until_line, until_column\n return extract_function(\n self._inference_state, self.path, self._get_module_context(),\n new_name, (line, column), until_pos\n )\n\n def inline(self, line=None, column=None):\n """\n Inlines a variable under the cursor. This is basically the opposite of\n extracting a variable. For example with the cursor on bar::\n\n foo = 3.1\n bar = foo + 1\n x = int(bar)\n\n the code above will become::\n\n foo = 3.1\n x = int(foo + 1)\n\n :raises: :exc:`.RefactoringError`\n :rtype: :class:`.Refactoring`\n """\n names = [d._name for d in self.get_references(line, column, include_builtins=True)]\n return refactoring.inline(self._inference_state, names)\n\n\nclass Interpreter(Script):\n """\n Jedi's API for Python REPLs.\n\n Implements all of the methods that are present in :class:`.Script` as well.\n\n In addition to completions that normal REPL completion does like\n ``str.upper``, Jedi also supports code completion based on static code\n analysis. For example Jedi will complete ``str().upper``.\n\n >>> from os.path import join\n >>> namespace = locals()\n >>> script = Interpreter('join("").up', [namespace])\n >>> print(script.complete()[0].name)\n upper\n\n All keyword arguments are same as the arguments for :class:`.Script`.\n\n :param str code: Code to parse.\n :type namespaces: typing.List[dict]\n :param namespaces: A list of namespace dictionaries such as the one\n returned by :func:`globals` and :func:`locals`.\n """\n\n def __init__(self, code, namespaces, *, project=None, **kwds):\n try:\n namespaces = [dict(n) for n in namespaces]\n except Exception:\n raise TypeError("namespaces must be a non-empty list of dicts.")\n\n environment = kwds.get('environment', None)\n if environment is None:\n environment = InterpreterEnvironment()\n else:\n if not isinstance(environment, InterpreterEnvironment):\n raise TypeError("The environment needs to be an InterpreterEnvironment subclass.")\n\n if project is None:\n project = Project(Path.cwd())\n\n super().__init__(code, environment=environment, project=project, **kwds)\n\n self.namespaces = namespaces\n self._inference_state.allow_unsafe_executions = \\n settings.allow_unsafe_interpreter_executions\n # Dynamic params search is important when we work on functions that are\n # called by other pieces of code. 
However for interpreter completions\n # this is not important at all, because the current code is always new\n # and will never be called by something.\n # Also sometimes this logic goes a bit too far like in\n # https://github.com/ipython/ipython/issues/13866, where it takes\n # seconds to do a simple completion.\n self._inference_state.do_dynamic_params_search = False\n\n @cache.memoize_method\n def _get_module_context(self):\n if self.path is None:\n file_io = None\n else:\n file_io = KnownContentFileIO(self.path, self._code)\n tree_module_value = ModuleValue(\n self._inference_state, self._module_node,\n file_io=file_io,\n string_names=('__main__',),\n code_lines=self._code_lines,\n )\n return interpreter.MixedModuleContext(\n tree_module_value,\n self.namespaces,\n )\n\n\ndef preload_module(*modules):\n """\n Preloading modules tells Jedi to load a module now, instead of lazy parsing\n of modules. This can be useful for IDEs, to control which modules to load\n on startup.\n\n :param modules: different module names, list of string.\n """\n for m in modules:\n s = "import %s as x; x." % m\n Script(s).complete(1, len(s))\n\n\ndef set_debug_function(func_cb=debug.print_to_stdout, warnings=True,\n notices=True, speed=True):\n """\n Define a callback debug function to get all the debug messages.\n\n If you don't specify any arguments, debug messages will be printed to stdout.\n\n :param func_cb: The callback function for debug messages.\n """\n debug.debug_function = func_cb\n debug.enable_warning = warnings\n debug.enable_notice = notices\n debug.enable_speed = speed\n
.venv\Lib\site-packages\jedi\api\__init__.py
__init__.py
Python
32,428
0.95
0.234336
0.051775
python-kit
863
2024-03-17T03:23:41.007661
GPL-3.0
false
4a49032a9d8658fca455c6a5d7c9c40d
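A short sketch of the `Script` workflow documented above (completion, inference and call signatures); the source string and path are made up for illustration::

    import jedi

    source = 'import json\njson.loads('
    script = jedi.Script(source, path='example.py')

    print([c.name for c in script.complete(2, 7)])         # expected: 'load', 'loads', ...
    print([d.name for d in script.infer(1, 8)])            # expected: ['json']
    print([s.name for s in script.get_signatures(2, 11)])  # expected: ['loads']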
from textwrap import dedent\n\nfrom parso import split_lines\n\nfrom jedi import debug\nfrom jedi.api.exceptions import RefactoringError\nfrom jedi.api.refactoring import Refactoring, EXPRESSION_PARTS\nfrom jedi.common import indent_block\nfrom jedi.parser_utils import function_is_classmethod, function_is_staticmethod\n\n\n_DEFINITION_SCOPES = ('suite', 'file_input')\n_VARIABLE_EXCTRACTABLE = EXPRESSION_PARTS + \\n ('atom testlist_star_expr testlist test lambdef lambdef_nocond '\n 'keyword name number string fstring').split()\n\n\ndef extract_variable(inference_state, path, module_node, name, pos, until_pos):\n nodes = _find_nodes(module_node, pos, until_pos)\n debug.dbg('Extracting nodes: %s', nodes)\n\n is_expression, message = _is_expression_with_error(nodes)\n if not is_expression:\n raise RefactoringError(message)\n\n generated_code = name + ' = ' + _expression_nodes_to_string(nodes)\n file_to_node_changes = {path: _replace(nodes, name, generated_code, pos)}\n return Refactoring(inference_state, file_to_node_changes)\n\n\ndef _is_expression_with_error(nodes):\n """\n Returns a tuple (is_expression, error_string).\n """\n if any(node.type == 'name' and node.is_definition() for node in nodes):\n return False, 'Cannot extract a name that defines something'\n\n if nodes[0].type not in _VARIABLE_EXCTRACTABLE:\n return False, 'Cannot extract a "%s"' % nodes[0].type\n return True, ''\n\n\ndef _find_nodes(module_node, pos, until_pos):\n """\n Looks up a module and tries to find the appropriate amount of nodes that\n are in there.\n """\n start_node = module_node.get_leaf_for_position(pos, include_prefixes=True)\n\n if until_pos is None:\n if start_node.type == 'operator':\n next_leaf = start_node.get_next_leaf()\n if next_leaf is not None and next_leaf.start_pos == pos:\n start_node = next_leaf\n\n if _is_not_extractable_syntax(start_node):\n start_node = start_node.parent\n\n if start_node.parent.type == 'trailer':\n start_node = start_node.parent.parent\n while start_node.parent.type in EXPRESSION_PARTS:\n start_node = start_node.parent\n\n nodes = [start_node]\n else:\n # Get the next leaf if we are at the end of a leaf\n if start_node.end_pos == pos:\n next_leaf = start_node.get_next_leaf()\n if next_leaf is not None:\n start_node = next_leaf\n\n # Some syntax is not exactable, just use its parent\n if _is_not_extractable_syntax(start_node):\n start_node = start_node.parent\n\n # Find the end\n end_leaf = module_node.get_leaf_for_position(until_pos, include_prefixes=True)\n if end_leaf.start_pos > until_pos:\n end_leaf = end_leaf.get_previous_leaf()\n if end_leaf is None:\n raise RefactoringError('Cannot extract anything from that')\n\n parent_node = start_node\n while parent_node.end_pos < end_leaf.end_pos:\n parent_node = parent_node.parent\n\n nodes = _remove_unwanted_expression_nodes(parent_node, pos, until_pos)\n\n # If the user marks just a return statement, we return the expression\n # instead of the whole statement, because the user obviously wants to\n # extract that part.\n if len(nodes) == 1 and start_node.type in ('return_stmt', 'yield_expr'):\n return [nodes[0].children[1]]\n return nodes\n\n\ndef _replace(nodes, expression_replacement, extracted, pos,\n insert_before_leaf=None, remaining_prefix=None):\n # Now try to replace the nodes found with a variable and move the code\n # before the current statement.\n definition = _get_parent_definition(nodes[0])\n if insert_before_leaf is None:\n insert_before_leaf = definition.get_first_leaf()\n first_node_leaf = 
nodes[0].get_first_leaf()\n\n lines = split_lines(insert_before_leaf.prefix, keepends=True)\n if first_node_leaf is insert_before_leaf:\n if remaining_prefix is not None:\n # The remaining prefix has already been calculated.\n lines[:-1] = remaining_prefix\n lines[-1:-1] = [indent_block(extracted, lines[-1]) + '\n']\n extracted_prefix = ''.join(lines)\n\n replacement_dct = {}\n if first_node_leaf is insert_before_leaf:\n replacement_dct[nodes[0]] = extracted_prefix + expression_replacement\n else:\n if remaining_prefix is None:\n p = first_node_leaf.prefix\n else:\n p = remaining_prefix + _get_indentation(nodes[0])\n replacement_dct[nodes[0]] = p + expression_replacement\n replacement_dct[insert_before_leaf] = extracted_prefix + insert_before_leaf.value\n\n for node in nodes[1:]:\n replacement_dct[node] = ''\n return replacement_dct\n\n\ndef _expression_nodes_to_string(nodes):\n return ''.join(n.get_code(include_prefix=i != 0) for i, n in enumerate(nodes))\n\n\ndef _suite_nodes_to_string(nodes, pos):\n n = nodes[0]\n prefix, part_of_code = _split_prefix_at(n.get_first_leaf(), pos[0] - 1)\n code = part_of_code + n.get_code(include_prefix=False) \\n + ''.join(n.get_code() for n in nodes[1:])\n return prefix, code\n\n\ndef _split_prefix_at(leaf, until_line):\n """\n Returns a tuple of the leaf's prefix, split at the until_line\n position.\n """\n # second means the second returned part\n second_line_count = leaf.start_pos[0] - until_line\n lines = split_lines(leaf.prefix, keepends=True)\n return ''.join(lines[:-second_line_count]), ''.join(lines[-second_line_count:])\n\n\ndef _get_indentation(node):\n return split_lines(node.get_first_leaf().prefix)[-1]\n\n\ndef _get_parent_definition(node):\n """\n Returns the statement where a node is defined.\n """\n while node is not None:\n if node.parent.type in _DEFINITION_SCOPES:\n return node\n node = node.parent\n raise NotImplementedError('We should never even get here')\n\n\ndef _remove_unwanted_expression_nodes(parent_node, pos, until_pos):\n """\n This function makes it so for `1 * 2 + 3` you can extract `2 + 3`, even\n though it is not part of the expression.\n """\n typ = parent_node.type\n is_suite_part = typ in ('suite', 'file_input')\n if typ in EXPRESSION_PARTS or is_suite_part:\n nodes = parent_node.children\n for i, n in enumerate(nodes):\n if n.end_pos > pos:\n start_index = i\n if n.type == 'operator':\n start_index -= 1\n break\n for i, n in reversed(list(enumerate(nodes))):\n if n.start_pos < until_pos:\n end_index = i\n if n.type == 'operator':\n end_index += 1\n\n # Something like `not foo or bar` should not be cut after not\n for n2 in nodes[i:]:\n if _is_not_extractable_syntax(n2):\n end_index += 1\n else:\n break\n break\n nodes = nodes[start_index:end_index + 1]\n if not is_suite_part:\n nodes[0:1] = _remove_unwanted_expression_nodes(nodes[0], pos, until_pos)\n nodes[-1:] = _remove_unwanted_expression_nodes(nodes[-1], pos, until_pos)\n return nodes\n return [parent_node]\n\n\ndef _is_not_extractable_syntax(node):\n return node.type == 'operator' \\n or node.type == 'keyword' and node.value not in ('None', 'True', 'False')\n\n\ndef extract_function(inference_state, path, module_context, name, pos, until_pos):\n nodes = _find_nodes(module_context.tree_node, pos, until_pos)\n assert len(nodes)\n\n is_expression, _ = _is_expression_with_error(nodes)\n context = module_context.create_context(nodes[0])\n is_bound_method = context.is_bound_method()\n params, return_variables = list(_find_inputs_and_outputs(module_context, context, 
nodes))\n\n # Find variables\n # Is a class method / method\n if context.is_module():\n insert_before_leaf = None # Leaf will be determined later\n else:\n node = _get_code_insertion_node(context.tree_node, is_bound_method)\n insert_before_leaf = node.get_first_leaf()\n if is_expression:\n code_block = 'return ' + _expression_nodes_to_string(nodes) + '\n'\n remaining_prefix = None\n has_ending_return_stmt = False\n else:\n has_ending_return_stmt = _is_node_ending_return_stmt(nodes[-1])\n if not has_ending_return_stmt:\n # Find the actually used variables (of the defined ones). If none are\n # used (e.g. if the range covers the whole function), return the last\n # defined variable.\n return_variables = list(_find_needed_output_variables(\n context,\n nodes[0].parent,\n nodes[-1].end_pos,\n return_variables\n )) or [return_variables[-1]] if return_variables else []\n\n remaining_prefix, code_block = _suite_nodes_to_string(nodes, pos)\n after_leaf = nodes[-1].get_next_leaf()\n first, second = _split_prefix_at(after_leaf, until_pos[0])\n code_block += first\n\n code_block = dedent(code_block)\n if not has_ending_return_stmt:\n output_var_str = ', '.join(return_variables)\n code_block += 'return ' + output_var_str + '\n'\n\n # Check if we have to raise RefactoringError\n _check_for_non_extractables(nodes[:-1] if has_ending_return_stmt else nodes)\n\n decorator = ''\n self_param = None\n if is_bound_method:\n if not function_is_staticmethod(context.tree_node):\n function_param_names = context.get_value().get_param_names()\n if len(function_param_names):\n self_param = function_param_names[0].string_name\n params = [p for p in params if p != self_param]\n\n if function_is_classmethod(context.tree_node):\n decorator = '@classmethod\n'\n else:\n code_block += '\n'\n\n function_code = '%sdef %s(%s):\n%s' % (\n decorator,\n name,\n ', '.join(params if self_param is None else [self_param] + params),\n indent_block(code_block)\n )\n\n function_call = '%s(%s)' % (\n ('' if self_param is None else self_param + '.') + name,\n ', '.join(params)\n )\n if is_expression:\n replacement = function_call\n else:\n if has_ending_return_stmt:\n replacement = 'return ' + function_call + '\n'\n else:\n replacement = output_var_str + ' = ' + function_call + '\n'\n\n replacement_dct = _replace(nodes, replacement, function_code, pos,\n insert_before_leaf, remaining_prefix)\n if not is_expression:\n replacement_dct[after_leaf] = second + after_leaf.value\n file_to_node_changes = {path: replacement_dct}\n return Refactoring(inference_state, file_to_node_changes)\n\n\ndef _check_for_non_extractables(nodes):\n for n in nodes:\n try:\n children = n.children\n except AttributeError:\n if n.value == 'return':\n raise RefactoringError(\n 'Can only extract return statements if they are at the end.')\n if n.value == 'yield':\n raise RefactoringError('Cannot extract yield statements.')\n else:\n _check_for_non_extractables(children)\n\n\ndef _is_name_input(module_context, names, first, last):\n for name in names:\n if name.api_type == 'param' or not name.parent_context.is_module():\n if name.get_root_context() is not module_context:\n return True\n if name.start_pos is None or not (first <= name.start_pos < last):\n return True\n return False\n\n\ndef _find_inputs_and_outputs(module_context, context, nodes):\n first = nodes[0].start_pos\n last = nodes[-1].end_pos\n\n inputs = []\n outputs = []\n for name in _find_non_global_names(nodes):\n if name.is_definition():\n if name not in outputs:\n outputs.append(name.value)\n else:\n if 
name.value not in inputs:\n name_definitions = context.goto(name, name.start_pos)\n if not name_definitions \\n or _is_name_input(module_context, name_definitions, first, last):\n inputs.append(name.value)\n\n # Check if outputs are really needed:\n return inputs, outputs\n\n\ndef _find_non_global_names(nodes):\n for node in nodes:\n try:\n children = node.children\n except AttributeError:\n if node.type == 'name':\n yield node\n else:\n # We only want to check foo in foo.bar\n if node.type == 'trailer' and node.children[0] == '.':\n continue\n\n yield from _find_non_global_names(children)\n\n\ndef _get_code_insertion_node(node, is_bound_method):\n if not is_bound_method or function_is_staticmethod(node):\n while node.parent.type != 'file_input':\n node = node.parent\n\n while node.parent.type in ('async_funcdef', 'decorated', 'async_stmt'):\n node = node.parent\n return node\n\n\ndef _find_needed_output_variables(context, search_node, at_least_pos, return_variables):\n """\n Searches everything after at_least_pos in a node and checks if any of the\n return_variables are used in there and returns those.\n """\n for node in search_node.children:\n if node.start_pos < at_least_pos:\n continue\n\n return_variables = set(return_variables)\n for name in _find_non_global_names([node]):\n if not name.is_definition() and name.value in return_variables:\n return_variables.remove(name.value)\n yield name.value\n\n\ndef _is_node_ending_return_stmt(node):\n t = node.type\n if t == 'simple_stmt':\n return _is_node_ending_return_stmt(node.children[0])\n return t == 'return_stmt'\n
.venv\Lib\site-packages\jedi\api\refactoring\extract.py
extract.py
Python
13,933
0.95
0.282383
0.060703
node-utils
604
2025-02-18T11:45:45.654478
Apache-2.0
false
efdedbd063b1547201a8c97ced2b574e
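The public entry points for the extraction code above are `Script.extract_variable` and `Script.extract_function`; a minimal sketch of the variable case, mirroring the docstring example (file path and new name are made up)::

    import jedi

    source = 'foo = 3.1\nx = int(foo + 1)\n'
    script = jedi.Script(source, path='example.py')
    refactoring = script.extract_variable(line=2, column=8, new_name='bar')
    print(refactoring.get_diff())  # expected to introduce `bar = foo + 1`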
\n\n
.venv\Lib\site-packages\jedi\api\refactoring\__pycache__\extract.cpython-313.pyc
extract.cpython-313.pyc
Other
16,429
0.95
0.032258
0
react-lib
33
2024-11-25T21:08:28.381779
BSD-3-Clause
false
cd59a065767eecd46389003311ff8b64
\n\n
.venv\Lib\site-packages\jedi\api\refactoring\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
14,254
0.95
0.017094
0
python-kit
521
2023-11-14T22:52:44.917852
MIT
false
32e050f9d07f4b505a30413f54373272
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\classes.cpython-313.pyc
classes.cpython-313.pyc
Other
38,086
0.95
0.207547
0.002179
react-lib
858
2023-11-17T07:26:29.891605
MIT
false
e99000249db10075d51d8a416dfee0a9
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\completion.cpython-313.pyc
completion.cpython-313.pyc
Other
31,918
0.95
0.036496
0
react-lib
672
2024-12-16T04:55:36.366759
GPL-3.0
false
95c66e8a928e5b9f367ec2e26e755a9b
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\completion_cache.cpython-313.pyc
completion_cache.cpython-313.pyc
Other
1,700
0.8
0
0
python-kit
446
2024-12-24T07:43:27.664030
GPL-3.0
false
6bb787d77d05fd016435ce5447ae0699
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\environment.cpython-313.pyc
environment.cpython-313.pyc
Other
19,604
0.95
0.096774
0.034483
node-utils
217
2023-10-12T06:03:02.040971
GPL-3.0
false
fc1344d9f73614a9ca3ddd116741e359
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\errors.cpython-313.pyc
errors.cpython-313.pyc
Other
2,689
0.8
0
0
react-lib
768
2024-06-07T00:39:12.594560
Apache-2.0
false
90fb53f41936d2807d84b37c303ed66e
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\exceptions.cpython-313.pyc
exceptions.cpython-313.pyc
Other
1,739
0.85
0.28
0
node-utils
20
2024-09-09T03:49:18.592332
MIT
false
74f9858b8f4e2060edaaeb02b8d55c0a
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\file_name.cpython-313.pyc
file_name.cpython-313.pyc
Other
7,402
0.8
0
0
vue-tools
662
2024-11-23T07:25:22.612534
Apache-2.0
false
f5249372caf638c7585007921d6830e2
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\helpers.cpython-313.pyc
helpers.cpython-313.pyc
Other
21,825
0.95
0.02809
0.00578
python-kit
380
2023-10-29T16:36:15.779317
BSD-3-Clause
false
b80d59ca031f148c2b45e07fdb2d261c
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\interpreter.cpython-313.pyc
interpreter.cpython-313.pyc
Other
4,383
0.8
0.023256
0
awesome-app
397
2025-06-30T09:19:06.610209
MIT
false
a5fa0ceca58a5cdf5109f4f88ff80ee2
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\keywords.cpython-313.pyc
keywords.cpython-313.pyc
Other
2,398
0.8
0
0
node-utils
476
2023-12-24T01:38:31.427912
GPL-3.0
false
80b4df8d0d32b81cef9a7a6b93d5b4d4
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\project.cpython-313.pyc
project.cpython-313.pyc
Other
19,056
0.95
0.108808
0
react-lib
781
2025-06-21T12:28:34.765187
GPL-3.0
false
5017907a2387e10ce8fd2e90c25f9db8
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\replstartup.cpython-313.pyc
replstartup.cpython-313.pyc
Other
1,144
0.95
0.041667
0
vue-tools
742
2024-07-05T09:40:23.878540
GPL-3.0
false
ea0a4e805a0a68a81f56fe1d6866de69
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\strings.cpython-313.pyc
strings.cpython-313.pyc
Other
4,788
0.8
0.041667
0
awesome-app
639
2024-12-12T16:03:16.308153
GPL-3.0
false
e19b349d28e360dfbbc1acaf7bb0eee0
\n\n
.venv\Lib\site-packages\jedi\api\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
35,809
0.95
0.156182
0
python-kit
231
2025-01-09T01:25:50.531729
MIT
false
84958d196858e02f273869ccf6410dcf
import re\nfrom itertools import zip_longest\n\nfrom parso.python import tree\n\nfrom jedi import debug\nfrom jedi.inference.utils import PushBackIterator\nfrom jedi.inference import analysis\nfrom jedi.inference.lazy_value import LazyKnownValue, LazyKnownValues, \\n LazyTreeValue, get_merged_lazy_value\nfrom jedi.inference.names import ParamName, TreeNameDefinition, AnonymousParamName\nfrom jedi.inference.base_value import NO_VALUES, ValueSet, ContextualizedNode\nfrom jedi.inference.value import iterable\nfrom jedi.inference.cache import inference_state_as_method_param_cache\n\n\ndef try_iter_content(types, depth=0):\n """Helper method for static analysis."""\n if depth > 10:\n # It's possible that a loop has references on itself (especially with\n # CompiledValue). Therefore don't loop infinitely.\n return\n\n for typ in types:\n try:\n f = typ.py__iter__\n except AttributeError:\n pass\n else:\n for lazy_value in f():\n try_iter_content(lazy_value.infer(), depth + 1)\n\n\nclass ParamIssue(Exception):\n pass\n\n\ndef repack_with_argument_clinic(clinic_string):\n """\n Transforms a function or method with arguments to the signature that is\n given as an argument clinic notation.\n\n Argument clinic is part of CPython and used for all the functions that are\n implemented in C (Python 3.7):\n\n str.split.__text_signature__\n # Results in: '($self, /, sep=None, maxsplit=-1)'\n """\n def decorator(func):\n def wrapper(value, arguments):\n try:\n args = tuple(iterate_argument_clinic(\n value.inference_state,\n arguments,\n clinic_string,\n ))\n except ParamIssue:\n return NO_VALUES\n else:\n return func(value, *args)\n\n return wrapper\n return decorator\n\n\ndef iterate_argument_clinic(inference_state, arguments, clinic_string):\n """Uses a list with argument clinic information (see PEP 436)."""\n clinic_args = list(_parse_argument_clinic(clinic_string))\n\n iterator = PushBackIterator(arguments.unpack())\n for i, (name, optional, allow_kwargs, stars) in enumerate(clinic_args):\n if stars == 1:\n lazy_values = []\n for key, argument in iterator:\n if key is not None:\n iterator.push_back((key, argument))\n break\n\n lazy_values.append(argument)\n yield ValueSet([iterable.FakeTuple(inference_state, lazy_values)])\n lazy_values\n continue\n elif stars == 2:\n raise NotImplementedError()\n key, argument = next(iterator, (None, None))\n if key is not None:\n debug.warning('Keyword arguments in argument clinic are currently not supported.')\n raise ParamIssue\n if argument is None and not optional:\n debug.warning('TypeError: %s expected at least %s arguments, got %s',\n name, len(clinic_args), i)\n raise ParamIssue\n\n value_set = NO_VALUES if argument is None else argument.infer()\n\n if not value_set and not optional:\n # For the stdlib we always want values. If we don't get them,\n # that's ok, maybe something is too hard to resolve, however,\n # we will not proceed with the type inference of that function.\n debug.warning('argument_clinic "%s" not resolvable.', name)\n raise ParamIssue\n yield value_set\n\n\ndef _parse_argument_clinic(string):\n allow_kwargs = False\n optional = False\n while string:\n # Optional arguments have to begin with a bracket. And should always be\n # at the end of the arguments. This is therefore not a proper argument\n # clinic implementation. `range()` for exmple allows an optional start\n # value at the beginning.\n match = re.match(r'(?:(?:(\[),? 
?|, ?|)(\**\w+)|, ?/)\]*', string)\n string = string[len(match.group(0)):]\n if not match.group(2): # A slash -> allow named arguments\n allow_kwargs = True\n continue\n optional = optional or bool(match.group(1))\n word = match.group(2)\n stars = word.count('*')\n word = word[stars:]\n yield (word, optional, allow_kwargs, stars)\n if stars:\n allow_kwargs = True\n\n\nclass _AbstractArgumentsMixin:\n def unpack(self, funcdef=None):\n raise NotImplementedError\n\n def get_calling_nodes(self):\n return []\n\n\nclass AbstractArguments(_AbstractArgumentsMixin):\n context = None\n argument_node = None\n trailer = None\n\n\ndef unpack_arglist(arglist):\n if arglist is None:\n return\n\n if arglist.type != 'arglist' and not (\n arglist.type == 'argument' and arglist.children[0] in ('*', '**')):\n yield 0, arglist\n return\n\n iterator = iter(arglist.children)\n for child in iterator:\n if child == ',':\n continue\n elif child in ('*', '**'):\n c = next(iterator, None)\n assert c is not None\n yield len(child.value), c\n elif child.type == 'argument' and \\n child.children[0] in ('*', '**'):\n assert len(child.children) == 2\n yield len(child.children[0].value), child.children[1]\n else:\n yield 0, child\n\n\nclass TreeArguments(AbstractArguments):\n def __init__(self, inference_state, context, argument_node, trailer=None):\n """\n :param argument_node: May be an argument_node or a list of nodes.\n """\n self.argument_node = argument_node\n self.context = context\n self._inference_state = inference_state\n self.trailer = trailer # Can be None, e.g. in a class definition.\n\n @classmethod\n @inference_state_as_method_param_cache()\n def create_cached(cls, *args, **kwargs):\n return cls(*args, **kwargs)\n\n def unpack(self, funcdef=None):\n named_args = []\n for star_count, el in unpack_arglist(self.argument_node):\n if star_count == 1:\n arrays = self.context.infer_node(el)\n iterators = [_iterate_star_args(self.context, a, el, funcdef)\n for a in arrays]\n for values in list(zip_longest(*iterators)):\n yield None, get_merged_lazy_value(\n [v for v in values if v is not None]\n )\n elif star_count == 2:\n arrays = self.context.infer_node(el)\n for dct in arrays:\n yield from _star_star_dict(self.context, dct, el, funcdef)\n else:\n if el.type == 'argument':\n c = el.children\n if len(c) == 3: # Keyword argument.\n named_args.append((c[0].value, LazyTreeValue(self.context, c[2]),))\n else: # Generator comprehension.\n # Include the brackets with the parent.\n sync_comp_for = el.children[1]\n if sync_comp_for.type == 'comp_for':\n sync_comp_for = sync_comp_for.children[1]\n comp = iterable.GeneratorComprehension(\n self._inference_state,\n defining_context=self.context,\n sync_comp_for_node=sync_comp_for,\n entry_node=el.children[0],\n )\n yield None, LazyKnownValue(comp)\n else:\n yield None, LazyTreeValue(self.context, el)\n\n # Reordering arguments is necessary, because star args sometimes appear\n # after named argument, but in the actual order it's prepended.\n yield from named_args\n\n def _as_tree_tuple_objects(self):\n for star_count, argument in unpack_arglist(self.argument_node):\n default = None\n if argument.type == 'argument':\n if len(argument.children) == 3: # Keyword argument.\n argument, default = argument.children[::2]\n yield argument, default, star_count\n\n def iter_calling_names_with_star(self):\n for name, default, star_count in self._as_tree_tuple_objects():\n # TODO this function is a bit strange. 
probably refactor?\n if not star_count or not isinstance(name, tree.Name):\n continue\n\n yield TreeNameDefinition(self.context, name)\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self.argument_node)\n\n def get_calling_nodes(self):\n old_arguments_list = []\n arguments = self\n\n while arguments not in old_arguments_list:\n if not isinstance(arguments, TreeArguments):\n break\n\n old_arguments_list.append(arguments)\n for calling_name in reversed(list(arguments.iter_calling_names_with_star())):\n names = calling_name.goto()\n if len(names) != 1:\n break\n if isinstance(names[0], AnonymousParamName):\n # Dynamic parameters should not have calling nodes, because\n # they are dynamic and extremely random.\n return []\n if not isinstance(names[0], ParamName):\n break\n executed_param_name = names[0].get_executed_param_name()\n arguments = executed_param_name.arguments\n break\n\n if arguments.argument_node is not None:\n return [ContextualizedNode(arguments.context, arguments.argument_node)]\n if arguments.trailer is not None:\n return [ContextualizedNode(arguments.context, arguments.trailer)]\n return []\n\n\nclass ValuesArguments(AbstractArguments):\n def __init__(self, values_list):\n self._values_list = values_list\n\n def unpack(self, funcdef=None):\n for values in self._values_list:\n yield None, LazyKnownValues(values)\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self._values_list)\n\n\nclass TreeArgumentsWrapper(_AbstractArgumentsMixin):\n def __init__(self, arguments):\n self._wrapped_arguments = arguments\n\n @property\n def context(self):\n return self._wrapped_arguments.context\n\n @property\n def argument_node(self):\n return self._wrapped_arguments.argument_node\n\n @property\n def trailer(self):\n return self._wrapped_arguments.trailer\n\n def unpack(self, func=None):\n raise NotImplementedError\n\n def get_calling_nodes(self):\n return self._wrapped_arguments.get_calling_nodes()\n\n def __repr__(self):\n return '<%s: %s>' % (self.__class__.__name__, self._wrapped_arguments)\n\n\ndef _iterate_star_args(context, array, input_node, funcdef=None):\n if not array.py__getattribute__('__iter__'):\n if funcdef is not None:\n # TODO this funcdef should not be needed.\n m = "TypeError: %s() argument after * must be a sequence, not %s" \\n % (funcdef.name.value, array)\n analysis.add(context, 'type-error-star', input_node, message=m)\n try:\n iter_ = array.py__iter__\n except AttributeError:\n pass\n else:\n yield from iter_()\n\n\ndef _star_star_dict(context, array, input_node, funcdef):\n from jedi.inference.value.instance import CompiledInstance\n if isinstance(array, CompiledInstance) and array.name.string_name == 'dict':\n # For now ignore this case. In the future add proper iterators and just\n # make one call without crazy isinstance checks.\n return {}\n elif isinstance(array, iterable.Sequence) and array.array_type == 'dict':\n return array.exact_key_items()\n else:\n if funcdef is not None:\n m = "TypeError: %s argument after ** must be a mapping, not %s" \\n % (funcdef.name.value, array)\n analysis.add(context, 'type-error-star-star', input_node, message=m)\n return {}\n
.venv\Lib\site-packages\jedi\inference\arguments.py
arguments.py
Python
12,218
0.95
0.268657
0.068592
react-lib
358
2025-07-01T15:00:07.713472
Apache-2.0
false
2a9c4e1b216bd5541b8874024592e716
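Editor's note on the record above (jedi/inference/arguments.py): the repack_with_argument_clinic docstring explains that jedi matches call arguments against CPython's argument-clinic notation. The snippet below is a minimal, self-contained check of that notation using only the standard library; it does not touch jedi internals, and the printed strings come straight from CPython as quoted in the docstring.

# Inspect the argument-clinic text signature referenced above.
import inspect

print(str.split.__text_signature__)   # '($self, /, sep=None, maxsplit=-1)'
print(inspect.signature(str.split))   # (self, /, sep=None, maxsplit=-1)

In the code shown in the record, this clinic string is what iterate_argument_clinic walks while pulling values out of arguments.unpack(); if a required positional value is missing or a keyword argument appears, it raises ParamIssue and the wrapped function resolves to NO_VALUES.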
from abc import abstractmethod\nfrom contextlib import contextmanager\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom parso.tree import search_ancestor\nfrom parso.python.tree import Name\n\nfrom jedi.inference.filters import ParserTreeFilter, MergedFilter, \\n GlobalNameFilter\nfrom jedi.inference.names import AnonymousParamName, TreeNameDefinition\nfrom jedi.inference.base_value import NO_VALUES, ValueSet\nfrom jedi.parser_utils import get_parent_scope\nfrom jedi import debug\nfrom jedi import parser_utils\n\n\nclass AbstractContext:\n # Must be defined: inference_state and tree_node and parent_context as an attribute/property\n\n def __init__(self, inference_state):\n self.inference_state = inference_state\n self.predefined_names = {}\n\n @abstractmethod\n def get_filters(self, until_position=None, origin_scope=None):\n raise NotImplementedError\n\n def goto(self, name_or_str, position):\n from jedi.inference import finder\n filters = _get_global_filters_for_name(\n self, name_or_str if isinstance(name_or_str, Name) else None, position,\n )\n names = finder.filter_name(filters, name_or_str)\n debug.dbg('context.goto %s in (%s): %s', name_or_str, self, names)\n return names\n\n def py__getattribute__(self, name_or_str, name_context=None, position=None,\n analysis_errors=True):\n """\n :param position: Position of the last statement -> tuple of line, column\n """\n if name_context is None:\n name_context = self\n names = self.goto(name_or_str, position)\n\n string_name = name_or_str.value if isinstance(name_or_str, Name) else name_or_str\n\n # This paragraph is currently needed for proper branch type inference\n # (static analysis).\n found_predefined_types = None\n if self.predefined_names and isinstance(name_or_str, Name):\n node = name_or_str\n while node is not None and not parser_utils.is_scope(node):\n node = node.parent\n if node.type in ("if_stmt", "for_stmt", "comp_for", 'sync_comp_for'):\n try:\n name_dict = self.predefined_names[node]\n types = name_dict[string_name]\n except KeyError:\n continue\n else:\n found_predefined_types = types\n break\n if found_predefined_types is not None and names:\n from jedi.inference import flow_analysis\n check = flow_analysis.reachability_check(\n context=self,\n value_scope=self.tree_node,\n node=name_or_str,\n )\n if check is flow_analysis.UNREACHABLE:\n values = NO_VALUES\n else:\n values = found_predefined_types\n else:\n values = ValueSet.from_sets(name.infer() for name in names)\n\n if not names and not values and analysis_errors:\n if isinstance(name_or_str, Name):\n from jedi.inference import analysis\n message = ("NameError: name '%s' is not defined." 
% string_name)\n analysis.add(name_context, 'name-error', name_or_str, message)\n\n debug.dbg('context.names_to_types: %s -> %s', names, values)\n if values:\n return values\n return self._check_for_additional_knowledge(name_or_str, name_context, position)\n\n def _check_for_additional_knowledge(self, name_or_str, name_context, position):\n name_context = name_context or self\n # Add isinstance and other if/assert knowledge.\n if isinstance(name_or_str, Name) and not name_context.is_instance():\n flow_scope = name_or_str\n base_nodes = [name_context.tree_node]\n\n if any(b.type in ('comp_for', 'sync_comp_for') for b in base_nodes):\n return NO_VALUES\n from jedi.inference.finder import check_flow_information\n while True:\n flow_scope = get_parent_scope(flow_scope, include_flows=True)\n n = check_flow_information(name_context, flow_scope,\n name_or_str, position)\n if n is not None:\n return n\n if flow_scope in base_nodes:\n break\n return NO_VALUES\n\n def get_root_context(self):\n parent_context = self.parent_context\n if parent_context is None:\n return self\n return parent_context.get_root_context()\n\n def is_module(self):\n return False\n\n def is_builtins_module(self):\n return False\n\n def is_class(self):\n return False\n\n def is_stub(self):\n return False\n\n def is_instance(self):\n return False\n\n def is_compiled(self):\n return False\n\n def is_bound_method(self):\n return False\n\n @abstractmethod\n def py__name__(self):\n raise NotImplementedError\n\n def get_value(self):\n raise NotImplementedError\n\n @property\n def name(self):\n return None\n\n def get_qualified_names(self):\n return ()\n\n def py__doc__(self):\n return ''\n\n @contextmanager\n def predefine_names(self, flow_scope, dct):\n predefined = self.predefined_names\n predefined[flow_scope] = dct\n try:\n yield\n finally:\n del predefined[flow_scope]\n\n\nclass ValueContext(AbstractContext):\n """\n Should be defined, otherwise the API returns empty types.\n """\n def __init__(self, value):\n super().__init__(value.inference_state)\n self._value = value\n\n @property\n def tree_node(self):\n return self._value.tree_node\n\n @property\n def parent_context(self):\n return self._value.parent_context\n\n def is_module(self):\n return self._value.is_module()\n\n def is_builtins_module(self):\n return self._value == self.inference_state.builtins_module\n\n def is_class(self):\n return self._value.is_class()\n\n def is_stub(self):\n return self._value.is_stub()\n\n def is_instance(self):\n return self._value.is_instance()\n\n def is_compiled(self):\n return self._value.is_compiled()\n\n def is_bound_method(self):\n return self._value.is_bound_method()\n\n def py__name__(self):\n return self._value.py__name__()\n\n @property\n def name(self):\n return self._value.name\n\n def get_qualified_names(self):\n return self._value.get_qualified_names()\n\n def py__doc__(self):\n return self._value.py__doc__()\n\n def get_value(self):\n return self._value\n\n def __repr__(self):\n return '%s(%s)' % (self.__class__.__name__, self._value)\n\n\nclass TreeContextMixin:\n def infer_node(self, node):\n from jedi.inference.syntax_tree import infer_node\n return infer_node(self, node)\n\n def create_value(self, node):\n from jedi.inference import value\n\n if node == self.tree_node:\n assert self.is_module()\n return self.get_value()\n\n parent_context = self.create_context(node)\n\n if node.type in ('funcdef', 'lambdef'):\n func = value.FunctionValue.from_context(parent_context, node)\n if parent_context.is_class():\n class_value = 
parent_context.parent_context.create_value(parent_context.tree_node)\n instance = value.AnonymousInstance(\n self.inference_state, parent_context.parent_context, class_value)\n func = value.BoundMethod(\n instance=instance,\n class_context=class_value.as_context(),\n function=func\n )\n return func\n elif node.type == 'classdef':\n return value.ClassValue(self.inference_state, parent_context, node)\n else:\n raise NotImplementedError("Probably shouldn't happen: %s" % node)\n\n def create_context(self, node):\n def from_scope_node(scope_node, is_nested=True):\n if scope_node == self.tree_node:\n return self\n\n if scope_node.type in ('funcdef', 'lambdef', 'classdef'):\n return self.create_value(scope_node).as_context()\n elif scope_node.type in ('comp_for', 'sync_comp_for'):\n parent_context = from_scope_node(parent_scope(scope_node.parent))\n if node.start_pos >= scope_node.children[-1].start_pos:\n return parent_context\n return CompForContext(parent_context, scope_node)\n raise Exception("There's a scope that was not managed: %s" % scope_node)\n\n def parent_scope(node):\n while True:\n node = node.parent\n\n if parser_utils.is_scope(node):\n return node\n elif node.type in ('argument', 'testlist_comp'):\n if node.children[1].type in ('comp_for', 'sync_comp_for'):\n return node.children[1]\n elif node.type == 'dictorsetmaker':\n for n in node.children[1:4]:\n # In dictionaries it can be pretty much anything.\n if n.type in ('comp_for', 'sync_comp_for'):\n return n\n\n scope_node = parent_scope(node)\n if scope_node.type in ('funcdef', 'classdef'):\n colon = scope_node.children[scope_node.children.index(':')]\n if node.start_pos < colon.start_pos:\n parent = node.parent\n if not (parent.type == 'param' and parent.name == node):\n scope_node = parent_scope(scope_node)\n return from_scope_node(scope_node, is_nested=True)\n\n def create_name(self, tree_name):\n definition = tree_name.get_definition()\n if definition and definition.type == 'param' and definition.name == tree_name:\n funcdef = search_ancestor(definition, 'funcdef', 'lambdef')\n func = self.create_value(funcdef)\n return AnonymousParamName(func, tree_name)\n else:\n context = self.create_context(tree_name)\n return TreeNameDefinition(context, tree_name)\n\n\nclass FunctionContext(TreeContextMixin, ValueContext):\n def get_filters(self, until_position=None, origin_scope=None):\n yield ParserTreeFilter(\n self.inference_state,\n parent_context=self,\n until_position=until_position,\n origin_scope=origin_scope\n )\n\n\nclass ModuleContext(TreeContextMixin, ValueContext):\n def py__file__(self) -> Optional[Path]:\n return self._value.py__file__() # type: ignore[no-any-return]\n\n def get_filters(self, until_position=None, origin_scope=None):\n filters = self._value.get_filters(origin_scope)\n # Skip the first filter and replace it.\n next(filters, None)\n yield MergedFilter(\n ParserTreeFilter(\n parent_context=self,\n until_position=until_position,\n origin_scope=origin_scope\n ),\n self.get_global_filter(),\n )\n yield from filters\n\n def get_global_filter(self):\n return GlobalNameFilter(self)\n\n @property\n def string_names(self):\n return self._value.string_names\n\n @property\n def code_lines(self):\n return self._value.code_lines\n\n def get_value(self):\n """\n This is the only function that converts a context back to a value.\n This is necessary for stub -> python conversion and vice versa. 
However\n this method shouldn't be moved to AbstractContext.\n """\n return self._value\n\n\nclass NamespaceContext(TreeContextMixin, ValueContext):\n def get_filters(self, until_position=None, origin_scope=None):\n return self._value.get_filters()\n\n def get_value(self):\n return self._value\n\n @property\n def string_names(self):\n return self._value.string_names\n\n def py__file__(self) -> Optional[Path]:\n return self._value.py__file__() # type: ignore[no-any-return]\n\n\nclass ClassContext(TreeContextMixin, ValueContext):\n def get_filters(self, until_position=None, origin_scope=None):\n yield self.get_global_filter(until_position, origin_scope)\n\n def get_global_filter(self, until_position=None, origin_scope=None):\n return ParserTreeFilter(\n parent_context=self,\n until_position=until_position,\n origin_scope=origin_scope\n )\n\n\nclass CompForContext(TreeContextMixin, AbstractContext):\n def __init__(self, parent_context, comp_for):\n super().__init__(parent_context.inference_state)\n self.tree_node = comp_for\n self.parent_context = parent_context\n\n def get_filters(self, until_position=None, origin_scope=None):\n yield ParserTreeFilter(self)\n\n def get_value(self):\n return None\n\n def py__name__(self):\n return '<comprehension context>'\n\n def __repr__(self):\n return '%s(%s)' % (self.__class__.__name__, self.tree_node)\n\n\nclass CompiledContext(ValueContext):\n def get_filters(self, until_position=None, origin_scope=None):\n return self._value.get_filters()\n\n\nclass CompiledModuleContext(CompiledContext):\n code_lines = None\n\n def get_value(self):\n return self._value\n\n @property\n def string_names(self):\n return self._value.string_names\n\n def py__file__(self) -> Optional[Path]:\n return self._value.py__file__() # type: ignore[no-any-return]\n\n\ndef _get_global_filters_for_name(context, name_or_none, position):\n # For functions and classes the defaults don't belong to the\n # function and get inferred in the value before the function. So\n # make sure to exclude the function/class name.\n if name_or_none is not None:\n ancestor = search_ancestor(name_or_none, 'funcdef', 'classdef', 'lambdef')\n lambdef = None\n if ancestor == 'lambdef':\n # For lambdas it's even more complicated since parts will\n # be inferred later.\n lambdef = ancestor\n ancestor = search_ancestor(name_or_none, 'funcdef', 'classdef')\n if ancestor is not None:\n colon = ancestor.children[-2]\n if position is not None and position < colon.start_pos:\n if lambdef is None or position < lambdef.children[-2].start_pos:\n position = ancestor.start_pos\n\n return get_global_filters(context, position, name_or_none)\n\n\ndef get_global_filters(context, until_position, origin_scope):\n """\n Returns all filters in order of priority for name resolution.\n\n For global name lookups. The filters will handle name resolution\n themselves, but here we gather possible filters downwards.\n\n >>> from jedi import Script\n >>> script = Script('''\n ... x = ['a', 'b', 'c']\n ... def func():\n ... y = None\n ... 
''')\n >>> module_node = script._module_node\n >>> scope = next(module_node.iter_funcdefs())\n >>> scope\n <Function: func@3-5>\n >>> context = script._get_module_context().create_context(scope)\n >>> filters = list(get_global_filters(context, (4, 0), None))\n\n First we get the names from the function scope.\n\n >>> print(filters[0]) # doctest: +ELLIPSIS\n MergedFilter(<ParserTreeFilter: ...>, <GlobalNameFilter: ...>)\n >>> sorted(str(n) for n in filters[0].values()) # doctest: +NORMALIZE_WHITESPACE\n ['<TreeNameDefinition: string_name=func start_pos=(3, 4)>',\n '<TreeNameDefinition: string_name=x start_pos=(2, 0)>']\n >>> filters[0]._filters[0]._until_position\n (4, 0)\n >>> filters[0]._filters[1]._until_position\n\n Then it yields the names from one level "lower". In this example, this is\n the module scope (including globals).\n As a side note, you can see, that the position in the filter is None on the\n globals filter, because there the whole module is searched.\n\n >>> list(filters[1].values()) # package modules -> Also empty.\n []\n >>> sorted(name.string_name for name in filters[2].values()) # Module attributes\n ['__doc__', '__name__', '__package__']\n\n Finally, it yields the builtin filter, if `include_builtin` is\n true (default).\n\n >>> list(filters[3].values()) # doctest: +ELLIPSIS\n [...]\n """\n base_context = context\n from jedi.inference.value.function import BaseFunctionExecutionContext\n while context is not None:\n # Names in methods cannot be resolved within the class.\n yield from context.get_filters(\n until_position=until_position,\n origin_scope=origin_scope\n )\n if isinstance(context, (BaseFunctionExecutionContext, ModuleContext)):\n # The position should be reset if the current scope is a function.\n until_position = None\n\n context = context.parent_context\n\n b = next(base_context.inference_state.builtins_module.get_filters(), None)\n assert b is not None\n # Add builtins to the global scope.\n yield b\n
.venv\Lib\site-packages\jedi\inference\context.py
context.py
Python
17,164
0.95
0.274549
0.035264
python-kit
634
2024-12-02T21:50:18.656190
GPL-3.0
false
bbf7480967f2db0f8a7e428e8a100e94
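Editor's note on the record above (jedi/inference/context.py): the doctest in get_global_filters demonstrates scope-walking (function scope, then module scope, then builtins) through jedi's private API (script._get_module_context()). The sketch below is a hedged public-API equivalent that exercises the same name-resolution path indirectly; it assumes the `jedi` package is installed, and the exact output may differ between jedi versions.

# Resolve a module-level name used inside a function via jedi's public API.
import jedi

source = """x = ['a', 'b', 'c']
def func():
    y = None
    return x
"""

script = jedi.Script(source)
# `x` on line 4 is not in func()'s own scope, so resolution falls through
# to the module-level filter, mirroring the filter order in get_global_filters.
for name in script.goto(line=4, column=11):
    print(name.name, name.line)   # e.g. prints: x 1

The filter ordering matters: get_global_filters yields the innermost context's filters first, resets until_position once a function or module boundary is crossed, and always ends with the builtins filter, which is why builtin names are only found after every enclosing scope has been tried.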