Dataset schema (one row per extracted file):

| column | dtype | range / classes |
|---|---|---|
| content | string | lengths 1-103k (nullable) |
| path | string | lengths 8-216 |
| filename | string | lengths 2-179 |
| language | string | 15 classes |
| size_bytes | int64 | 2-189k |
| quality_score | float64 | 0.5-0.95 |
| complexity | float64 | 0-1 |
| documentation_ratio | float64 | 0-1 |
| repository | string | 5 classes |
| stars | int64 | 0-1k |
| created_date | string (datetime) | 2023-07-10 19:21:08 to 2025-07-09 19:11:45 |
| license | string | 4 classes |
| is_test | bool | 2 classes |
| file_hash | string | length 32 |

Each row below lists the file's extracted content (with newlines escaped as \n) followed by its metadata fields.
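Since the rows below only make sense against this schema, here is a minimal sketch of filtering such rows with pandas; the parquet file name, and the assumption that the dataset ships as parquet at all, are placeholders for illustration only.

```python
import pandas as pd

# Hypothetical loading step: the actual file name and storage format are assumptions.
df = pd.read_parquet("code_files.parquet")

# Keep plausible "real" source files: non-test Python files with some documentation
# and a quality_score in the upper part of the 0.5-0.95 range described above.
candidates = df[
    ~df["is_test"]
    & (df["language"] == "Python")
    & (df["quality_score"] >= 0.8)
    & (df["documentation_ratio"] > 0)
]

# Show provenance metadata for the largest matches.
columns = ["path", "repository", "stars", "license", "size_bytes", "quality_score"]
print(candidates.sort_values("size_bytes", ascending=False)[columns].head(10))
```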
content: "\n\n" | path: .venv\Lib\site-packages\chardet\__pycache__\__init__.cpython-313.pyc | filename: __init__.cpython-313.pyc | language: Other | size_bytes: 4,601 | quality_score: 0.8 | complexity: 0 | documentation_ratio: 0.018519 | repository: awesome-app | stars: 133 | created_date: 2024-08-09T05:17:58.879919 | license: Apache-2.0 | is_test: false | file_hash: baf9ea7617c7a78ea271f02dbe327a46
content: "\n\n" | path: .venv\Lib\site-packages\chardet\__pycache__\__main__.cpython-313.pyc | filename: __main__.cpython-313.pyc | language: Other | size_bytes: 355 | quality_score: 0.7 | complexity: 0 | documentation_ratio: 0 | repository: react-lib | stars: 482 | created_date: 2024-10-24T08:50:27.618068 | license: MIT | is_test: false | file_hash: a3ebc4b1c13994a8301a73f1bf577eb3
content: "[console_scripts]\nchardetect = chardet.cli.chardetect:main\n" | path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\entry_points.txt | filename: entry_points.txt | language: Other | size_bytes: 59 | quality_score: 0.5 | complexity: 0 | documentation_ratio: 0 | repository: python-kit | stars: 202 | created_date: 2025-02-03T15:49:19.451153 | license: GPL-3.0 | is_test: false | file_hash: 735bb4ba93a089398144993471b6cb56
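The entry_points.txt row above is the reason a `chardetect` command exists after installation: installers read the `[console_scripts]` section and generate a launcher that calls `chardet.cli.chardetect:main`. A minimal sketch of what such a launcher boils down to (the exact generated script varies by installer):

```python
# Minimal console-script launcher equivalent to the entry point
# "chardetect = chardet.cli.chardetect:main".
import sys

from chardet.cli.chardetect import main

if __name__ == "__main__":
    sys.exit(main())
```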
content: "pip\n" | path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\INSTALLER | filename: INSTALLER | language: Other | size_bytes: 4 | quality_score: 0.5 | complexity: 0 | documentation_ratio: 0 | repository: python-kit | stars: 843 | created_date: 2024-10-14T13:17:09.446150 | license: BSD-3-Clause | is_test: false | file_hash: 365c9bfeb7d89244f2ce01c1de44cb85
GNU LESSER GENERAL PUBLIC LICENSE\n Version 2.1, February 1999\n\n Copyright (C) 1991, 1999 Free Software Foundation, Inc.\n 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n[This is the first released version of the Lesser GPL. It also counts\n as the successor of the GNU Library Public License, version 2, hence\n the version number 2.1.]\n\n Preamble\n\n The licenses for most software are designed to take away your\nfreedom to share and change it. By contrast, the GNU General Public\nLicenses are intended to guarantee your freedom to share and change\nfree software--to make sure the software is free for all its users.\n\n This license, the Lesser General Public License, applies to some\nspecially designated software packages--typically libraries--of the\nFree Software Foundation and other authors who decide to use it. You\ncan use it too, but we suggest you first think carefully about whether\nthis license or the ordinary General Public License is the better\nstrategy to use in any particular case, based on the explanations below.\n\n When we speak of free software, we are referring to freedom of use,\nnot price. Our General Public Licenses are designed to make sure that\nyou have the freedom to distribute copies of free software (and charge\nfor this service if you wish); that you receive source code or can get\nit if you want it; that you can change the software and use pieces of\nit in new free programs; and that you are informed that you can do\nthese things.\n\n To protect your rights, we need to make restrictions that forbid\ndistributors to deny you these rights or to ask you to surrender these\nrights. These restrictions translate to certain responsibilities for\nyou if you distribute copies of the library or if you modify it.\n\n For example, if you distribute copies of the library, whether gratis\nor for a fee, you must give the recipients all the rights that we gave\nyou. You must make sure that they, too, receive or can get the source\ncode. If you link other code with the library, you must provide\ncomplete object files to the recipients, so that they can relink them\nwith the library after making changes to the library and recompiling\nit. And you must show them these terms so they know their rights.\n\n We protect your rights with a two-step method: (1) we copyright the\nlibrary, and (2) we offer you this license, which gives you legal\npermission to copy, distribute and/or modify the library.\n\n To protect each distributor, we want to make it very clear that\nthere is no warranty for the free library. Also, if the library is\nmodified by someone else and passed on, the recipients should know\nthat what they have is not the original version, so that the original\nauthor's reputation will not be affected by problems that might be\nintroduced by others.\n\n Finally, software patents pose a constant threat to the existence of\nany free program. We wish to make sure that a company cannot\neffectively restrict the users of a free program by obtaining a\nrestrictive license from a patent holder. Therefore, we insist that\nany patent license obtained for a version of the library must be\nconsistent with the full freedom of use specified in this license.\n\n Most GNU software, including some libraries, is covered by the\nordinary GNU General Public License. 
This license, the GNU Lesser\nGeneral Public License, applies to certain designated libraries, and\nis quite different from the ordinary General Public License. We use\nthis license for certain libraries in order to permit linking those\nlibraries into non-free programs.\n\n When a program is linked with a library, whether statically or using\na shared library, the combination of the two is legally speaking a\ncombined work, a derivative of the original library. The ordinary\nGeneral Public License therefore permits such linking only if the\nentire combination fits its criteria of freedom. The Lesser General\nPublic License permits more lax criteria for linking other code with\nthe library.\n\n We call this license the "Lesser" General Public License because it\ndoes Less to protect the user's freedom than the ordinary General\nPublic License. It also provides other free software developers Less\nof an advantage over competing non-free programs. These disadvantages\nare the reason we use the ordinary General Public License for many\nlibraries. However, the Lesser license provides advantages in certain\nspecial circumstances.\n\n For example, on rare occasions, there may be a special need to\nencourage the widest possible use of a certain library, so that it becomes\na de-facto standard. To achieve this, non-free programs must be\nallowed to use the library. A more frequent case is that a free\nlibrary does the same job as widely used non-free libraries. In this\ncase, there is little to gain by limiting the free library to free\nsoftware only, so we use the Lesser General Public License.\n\n In other cases, permission to use a particular library in non-free\nprograms enables a greater number of people to use a large body of\nfree software. For example, permission to use the GNU C Library in\nnon-free programs enables many more people to use the whole GNU\noperating system, as well as its variant, the GNU/Linux operating\nsystem.\n\n Although the Lesser General Public License is Less protective of the\nusers' freedom, it does ensure that the user of a program that is\nlinked with the Library has the freedom and the wherewithal to run\nthat program using a modified version of the Library.\n\n The precise terms and conditions for copying, distribution and\nmodification follow. Pay close attention to the difference between a\n"work based on the library" and a "work that uses the library". The\nformer contains code derived from the library, whereas the latter must\nbe combined with the library in order to run.\n\n GNU LESSER GENERAL PUBLIC LICENSE\n TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n 0. This License Agreement applies to any software library or other\nprogram which contains a notice placed by the copyright holder or\nother authorized party saying it may be distributed under the terms of\nthis Lesser General Public License (also called "this License").\nEach licensee is addressed as "you".\n\n A "library" means a collection of software functions and/or data\nprepared so as to be conveniently linked with application programs\n(which use some of those functions and data) to form executables.\n\n The "Library", below, refers to any such software library or work\nwhich has been distributed under these terms. 
A "work based on the\nLibrary" means either the Library or any derivative work under\ncopyright law: that is to say, a work containing the Library or a\nportion of it, either verbatim or with modifications and/or translated\nstraightforwardly into another language. (Hereinafter, translation is\nincluded without limitation in the term "modification".)\n\n "Source code" for a work means the preferred form of the work for\nmaking modifications to it. For a library, complete source code means\nall the source code for all modules it contains, plus any associated\ninterface definition files, plus the scripts used to control compilation\nand installation of the library.\n\n Activities other than copying, distribution and modification are not\ncovered by this License; they are outside its scope. The act of\nrunning a program using the Library is not restricted, and output from\nsuch a program is covered only if its contents constitute a work based\non the Library (independent of the use of the Library in a tool for\nwriting it). Whether that is true depends on what the Library does\nand what the program that uses the Library does.\n\n 1. You may copy and distribute verbatim copies of the Library's\ncomplete source code as you receive it, in any medium, provided that\nyou conspicuously and appropriately publish on each copy an\nappropriate copyright notice and disclaimer of warranty; keep intact\nall the notices that refer to this License and to the absence of any\nwarranty; and distribute a copy of this License along with the\nLibrary.\n\n You may charge a fee for the physical act of transferring a copy,\nand you may at your option offer warranty protection in exchange for a\nfee.\n\n 2. You may modify your copy or copies of the Library or any portion\nof it, thus forming a work based on the Library, and copy and\ndistribute such modifications or work under the terms of Section 1\nabove, provided that you also meet all of these conditions:\n\n a) The modified work must itself be a software library.\n\n b) You must cause the files modified to carry prominent notices\n stating that you changed the files and the date of any change.\n\n c) You must cause the whole of the work to be licensed at no\n charge to all third parties under the terms of this License.\n\n d) If a facility in the modified Library refers to a function or a\n table of data to be supplied by an application program that uses\n the facility, other than as an argument passed when the facility\n is invoked, then you must make a good faith effort to ensure that,\n in the event an application does not supply such function or\n table, the facility still operates, and performs whatever part of\n its purpose remains meaningful.\n\n (For example, a function in a library to compute square roots has\n a purpose that is entirely well-defined independent of the\n application. Therefore, Subsection 2d requires that any\n application-supplied function or table used by this function must\n be optional: if the application does not supply it, the square\n root function must still compute square roots.)\n\nThese requirements apply to the modified work as a whole. If\nidentifiable sections of that work are not derived from the Library,\nand can be reasonably considered independent and separate works in\nthemselves, then this License, and its terms, do not apply to those\nsections when you distribute them as separate works. 
But when you\ndistribute the same sections as part of a whole which is a work based\non the Library, the distribution of the whole must be on the terms of\nthis License, whose permissions for other licensees extend to the\nentire whole, and thus to each and every part regardless of who wrote\nit.\n\nThus, it is not the intent of this section to claim rights or contest\nyour rights to work written entirely by you; rather, the intent is to\nexercise the right to control the distribution of derivative or\ncollective works based on the Library.\n\nIn addition, mere aggregation of another work not based on the Library\nwith the Library (or with a work based on the Library) on a volume of\na storage or distribution medium does not bring the other work under\nthe scope of this License.\n\n 3. You may opt to apply the terms of the ordinary GNU General Public\nLicense instead of this License to a given copy of the Library. To do\nthis, you must alter all the notices that refer to this License, so\nthat they refer to the ordinary GNU General Public License, version 2,\ninstead of to this License. (If a newer version than version 2 of the\nordinary GNU General Public License has appeared, then you can specify\nthat version instead if you wish.) Do not make any other change in\nthese notices.\n\n Once this change is made in a given copy, it is irreversible for\nthat copy, so the ordinary GNU General Public License applies to all\nsubsequent copies and derivative works made from that copy.\n\n This option is useful when you wish to copy part of the code of\nthe Library into a program that is not a library.\n\n 4. You may copy and distribute the Library (or a portion or\nderivative of it, under Section 2) in object code or executable form\nunder the terms of Sections 1 and 2 above provided that you accompany\nit with the complete corresponding machine-readable source code, which\nmust be distributed under the terms of Sections 1 and 2 above on a\nmedium customarily used for software interchange.\n\n If distribution of object code is made by offering access to copy\nfrom a designated place, then offering equivalent access to copy the\nsource code from the same place satisfies the requirement to\ndistribute the source code, even though third parties are not\ncompelled to copy the source along with the object code.\n\n 5. A program that contains no derivative of any portion of the\nLibrary, but is designed to work with the Library by being compiled or\nlinked with it, is called a "work that uses the Library". Such a\nwork, in isolation, is not a derivative work of the Library, and\ntherefore falls outside the scope of this License.\n\n However, linking a "work that uses the Library" with the Library\ncreates an executable that is a derivative of the Library (because it\ncontains portions of the Library), rather than a "work that uses the\nlibrary". The executable is therefore covered by this License.\nSection 6 states terms for distribution of such executables.\n\n When a "work that uses the Library" uses material from a header file\nthat is part of the Library, the object code for the work may be a\nderivative work of the Library even though the source code is not.\nWhether this is true is especially significant if the work can be\nlinked without the Library, or if the work is itself a library. 
The\nthreshold for this to be true is not precisely defined by law.\n\n If such an object file uses only numerical parameters, data\nstructure layouts and accessors, and small macros and small inline\nfunctions (ten lines or less in length), then the use of the object\nfile is unrestricted, regardless of whether it is legally a derivative\nwork. (Executables containing this object code plus portions of the\nLibrary will still fall under Section 6.)\n\n Otherwise, if the work is a derivative of the Library, you may\ndistribute the object code for the work under the terms of Section 6.\nAny executables containing that work also fall under Section 6,\nwhether or not they are linked directly with the Library itself.\n\n 6. As an exception to the Sections above, you may also combine or\nlink a "work that uses the Library" with the Library to produce a\nwork containing portions of the Library, and distribute that work\nunder terms of your choice, provided that the terms permit\nmodification of the work for the customer's own use and reverse\nengineering for debugging such modifications.\n\n You must give prominent notice with each copy of the work that the\nLibrary is used in it and that the Library and its use are covered by\nthis License. You must supply a copy of this License. If the work\nduring execution displays copyright notices, you must include the\ncopyright notice for the Library among them, as well as a reference\ndirecting the user to the copy of this License. Also, you must do one\nof these things:\n\n a) Accompany the work with the complete corresponding\n machine-readable source code for the Library including whatever\n changes were used in the work (which must be distributed under\n Sections 1 and 2 above); and, if the work is an executable linked\n with the Library, with the complete machine-readable "work that\n uses the Library", as object code and/or source code, so that the\n user can modify the Library and then relink to produce a modified\n executable containing the modified Library. (It is understood\n that the user who changes the contents of definitions files in the\n Library will not necessarily be able to recompile the application\n to use the modified definitions.)\n\n b) Use a suitable shared library mechanism for linking with the\n Library. A suitable mechanism is one that (1) uses at run time a\n copy of the library already present on the user's computer system,\n rather than copying library functions into the executable, and (2)\n will operate properly with a modified version of the library, if\n the user installs one, as long as the modified version is\n interface-compatible with the version that the work was made with.\n\n c) Accompany the work with a written offer, valid for at\n least three years, to give the same user the materials\n specified in Subsection 6a, above, for a charge no more\n than the cost of performing this distribution.\n\n d) If distribution of the work is made by offering access to copy\n from a designated place, offer equivalent access to copy the above\n specified materials from the same place.\n\n e) Verify that the user has already received a copy of these\n materials or that you have already sent this user a copy.\n\n For an executable, the required form of the "work that uses the\nLibrary" must include any data and utility programs needed for\nreproducing the executable from it. 
However, as a special exception,\nthe materials to be distributed need not include anything that is\nnormally distributed (in either source or binary form) with the major\ncomponents (compiler, kernel, and so on) of the operating system on\nwhich the executable runs, unless that component itself accompanies\nthe executable.\n\n It may happen that this requirement contradicts the license\nrestrictions of other proprietary libraries that do not normally\naccompany the operating system. Such a contradiction means you cannot\nuse both them and the Library together in an executable that you\ndistribute.\n\n 7. You may place library facilities that are a work based on the\nLibrary side-by-side in a single library together with other library\nfacilities not covered by this License, and distribute such a combined\nlibrary, provided that the separate distribution of the work based on\nthe Library and of the other library facilities is otherwise\npermitted, and provided that you do these two things:\n\n a) Accompany the combined library with a copy of the same work\n based on the Library, uncombined with any other library\n facilities. This must be distributed under the terms of the\n Sections above.\n\n b) Give prominent notice with the combined library of the fact\n that part of it is a work based on the Library, and explaining\n where to find the accompanying uncombined form of the same work.\n\n 8. You may not copy, modify, sublicense, link with, or distribute\nthe Library except as expressly provided under this License. Any\nattempt otherwise to copy, modify, sublicense, link with, or\ndistribute the Library is void, and will automatically terminate your\nrights under this License. However, parties who have received copies,\nor rights, from you under this License will not have their licenses\nterminated so long as such parties remain in full compliance.\n\n 9. You are not required to accept this License, since you have not\nsigned it. However, nothing else grants you permission to modify or\ndistribute the Library or its derivative works. These actions are\nprohibited by law if you do not accept this License. Therefore, by\nmodifying or distributing the Library (or any work based on the\nLibrary), you indicate your acceptance of this License to do so, and\nall its terms and conditions for copying, distributing or modifying\nthe Library or works based on it.\n\n 10. Each time you redistribute the Library (or any work based on the\nLibrary), the recipient automatically receives a license from the\noriginal licensor to copy, distribute, link with or modify the Library\nsubject to these terms and conditions. You may not impose any further\nrestrictions on the recipients' exercise of the rights granted herein.\nYou are not responsible for enforcing compliance by third parties with\nthis License.\n\n 11. If, as a consequence of a court judgment or allegation of patent\ninfringement or for any other reason (not limited to patent issues),\nconditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License. If you cannot\ndistribute so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you\nmay not distribute the Library at all. 
For example, if a patent\nlicense would not permit royalty-free redistribution of the Library by\nall those who receive copies directly or indirectly through you, then\nthe only way you could satisfy both it and this License would be to\nrefrain entirely from distribution of the Library.\n\nIf any portion of this section is held invalid or unenforceable under any\nparticular circumstance, the balance of the section is intended to apply,\nand the section as a whole is intended to apply in other circumstances.\n\nIt is not the purpose of this section to induce you to infringe any\npatents or other property right claims or to contest validity of any\nsuch claims; this section has the sole purpose of protecting the\nintegrity of the free software distribution system which is\nimplemented by public license practices. Many people have made\ngenerous contributions to the wide range of software distributed\nthrough that system in reliance on consistent application of that\nsystem; it is up to the author/donor to decide if he or she is willing\nto distribute software through any other system and a licensee cannot\nimpose that choice.\n\nThis section is intended to make thoroughly clear what is believed to\nbe a consequence of the rest of this License.\n\n 12. If the distribution and/or use of the Library is restricted in\ncertain countries either by patents or by copyrighted interfaces, the\noriginal copyright holder who places the Library under this License may add\nan explicit geographical distribution limitation excluding those countries,\nso that distribution is permitted only in or among countries not thus\nexcluded. In such case, this License incorporates the limitation as if\nwritten in the body of this License.\n\n 13. The Free Software Foundation may publish revised and/or new\nversions of the Lesser General Public License from time to time.\nSuch new versions will be similar in spirit to the present version,\nbut may differ in detail to address new problems or concerns.\n\nEach version is given a distinguishing version number. If the Library\nspecifies a version number of this License which applies to it and\n"any later version", you have the option of following the terms and\nconditions either of that version or of any later version published by\nthe Free Software Foundation. If the Library does not specify a\nlicense version number, you may choose any version ever published by\nthe Free Software Foundation.\n\n 14. If you wish to incorporate parts of the Library into other free\nprograms whose distribution conditions are incompatible with these,\nwrite to the author to ask for permission. For software which is\ncopyrighted by the Free Software Foundation, write to the Free\nSoftware Foundation; we sometimes make exceptions for this. Our\ndecision will be guided by the two goals of preserving the free status\nof all derivatives of our free software and of promoting the sharing\nand reuse of software generally.\n\n NO WARRANTY\n\n 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO\nWARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.\nEXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR\nOTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY\nKIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE\nLIBRARY IS WITH YOU. 
SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME\nTHE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN\nWRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY\nAND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU\nFOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR\nCONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE\nLIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING\nRENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A\nFAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF\nSUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH\nDAMAGES.\n\n END OF TERMS AND CONDITIONS\n\n How to Apply These Terms to Your New Libraries\n\n If you develop a new library, and you want it to be of the greatest\npossible use to the public, we recommend making it free software that\neveryone can redistribute and change. You can do so by permitting\nredistribution under these terms (or, alternatively, under the terms of the\nordinary General Public License).\n\n To apply these terms, attach the following notices to the library. It is\nsafest to attach them to the start of each source file to most effectively\nconvey the exclusion of warranty; and each file should have at least the\n"copyright" line and a pointer to where the full notice is found.\n\n <one line to give the library's name and a brief idea of what it does.>\n Copyright (C) <year> <name of author>\n\n This library is free software; you can redistribute it and/or\n modify it under the terms of the GNU Lesser General Public\n License as published by the Free Software Foundation; either\n version 2.1 of the License, or (at your option) any later version.\n\n This library is distributed in the hope that it will be useful,\n but WITHOUT ANY WARRANTY; without even the implied warranty of\n MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n Lesser General Public License for more details.\n\n You should have received a copy of the GNU Lesser General Public\n License along with this library; if not, write to the Free Software\n Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA\n\nAlso add information on how to contact you by electronic and paper mail.\n\nYou should also get your employer (if you work as a programmer) or your\nschool, if any, to sign a "copyright disclaimer" for the library, if\nnecessary. Here is a sample; alter the names:\n\n Yoyodyne, Inc., hereby disclaims all copyright interest in the\n library `Frob' (a library for tweaking knobs) written by James Random Hacker.\n\n <signature of Ty Coon>, 1 April 1990\n Ty Coon, President of Vice\n\nThat's all there is to it!\n
path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\LICENSE | filename: LICENSE | language: Other | size_bytes: 26,530 | quality_score: 0.85 | complexity: 0.13745 | documentation_ratio: 0 | repository: python-kit | stars: 188 | created_date: 2025-02-09T13:37:30.284086 | license: Apache-2.0 | is_test: false | file_hash: 4fbd65380cdd255951079008b364516c
Metadata-Version: 2.1\nName: chardet\nVersion: 5.2.0\nSummary: Universal encoding detector for Python 3\nHome-page: https://github.com/chardet/chardet\nAuthor: Mark Pilgrim\nAuthor-email: mark@diveintomark.org\nMaintainer: Daniel Blanchard\nMaintainer-email: dan.blanchard@gmail.com\nLicense: LGPL\nProject-URL: Documentation, https://chardet.readthedocs.io/\nProject-URL: GitHub Project, https://github.com/chardet/chardet\nProject-URL: Issue Tracker, https://github.com/chardet/chardet/issues\nKeywords: encoding,i18n,xml\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: Programming Language :: Python :: Implementation :: PyPy\nClassifier: Topic :: Software Development :: Libraries :: Python Modules\nClassifier: Topic :: Text Processing :: Linguistic\nRequires-Python: >=3.7\nLicense-File: LICENSE\n\nChardet: The Universal Character Encoding Detector\n--------------------------------------------------\n\n.. image:: https://img.shields.io/travis/chardet/chardet/stable.svg\n :alt: Build status\n :target: https://travis-ci.org/chardet/chardet\n\n.. image:: https://img.shields.io/coveralls/chardet/chardet/stable.svg\n :target: https://coveralls.io/r/chardet/chardet\n\n.. image:: https://img.shields.io/pypi/v/chardet.svg\n :target: https://warehouse.python.org/project/chardet/\n :alt: Latest version on PyPI\n\n.. image:: https://img.shields.io/pypi/l/chardet.svg\n :alt: License\n\n\nDetects\n - ASCII, UTF-8, UTF-16 (2 variants), UTF-32 (4 variants)\n - Big5, GB2312, EUC-TW, HZ-GB-2312, ISO-2022-CN (Traditional and Simplified Chinese)\n - EUC-JP, SHIFT_JIS, CP932, ISO-2022-JP (Japanese)\n - EUC-KR, ISO-2022-KR, Johab (Korean)\n - KOI8-R, MacCyrillic, IBM855, IBM866, ISO-8859-5, windows-1251 (Cyrillic)\n - ISO-8859-5, windows-1251 (Bulgarian)\n - ISO-8859-1, windows-1252, MacRoman (Western European languages)\n - ISO-8859-7, windows-1253 (Greek)\n - ISO-8859-8, windows-1255 (Visual and Logical Hebrew)\n - TIS-620 (Thai)\n\n.. note::\n Our ISO-8859-2 and windows-1250 (Hungarian) probers have been temporarily\n disabled until we can retrain the models.\n\nRequires Python 3.7+.\n\nInstallation\n------------\n\nInstall from `PyPI <https://pypi.org/project/chardet/>`_::\n\n pip install chardet\n\nDocumentation\n-------------\n\nFor users, docs are now available at https://chardet.readthedocs.io/.\n\nCommand-line Tool\n-----------------\n\nchardet comes with a command-line script which reports on the encodings of one\nor more files::\n\n % chardetect somefile someotherfile\n somefile: windows-1252 with confidence 0.5\n someotherfile: ascii with confidence 1.0\n\nAbout\n-----\n\nThis is a continuation of Mark Pilgrim's excellent original chardet port from C, and `Ian Cordasco <https://github.com/sigmavirus24>`_'s\n`charade <https://github.com/sigmavirus24/charade>`_ Python 3-compatible fork.\n\n:maintainer: Dan Blanchard\n
path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\METADATA | filename: METADATA | language: Other | size_bytes: 3,418 | quality_score: 0.8 | complexity: 0.010309 | documentation_ratio: 0 | repository: awesome-app | stars: 559 | created_date: 2023-08-12T13:02:11.570484 | license: MIT | is_test: false | file_hash: 80352d14ccb37516b6aea57075e5a166
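The METADATA row above documents the `chardetect` command-line tool; the same detection is available from Python via `chardet.detect` for one-shot calls and `UniversalDetector` for incremental feeding. A short sketch (the input path is a placeholder):

```python
import chardet
from chardet.universaldetector import UniversalDetector

# One-shot detection: returns a dict with 'encoding', 'confidence' and 'language'.
print(chardet.detect("Comment ça va ?".encode("cp1252")))

# Incremental detection: feed chunks until the detector is confident, then close().
detector = UniversalDetector()
with open("somefile", "rb") as handle:  # placeholder file name
    for chunk in handle:
        detector.feed(chunk)
        if detector.done:
            break
detector.close()
print(detector.result)
```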
../../Scripts/chardetect.exe,sha256=eNgk23mwNvkaVJ6dHFhNElFdafRC7PhjTKWISO8XI10,108423\nchardet-5.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nchardet-5.2.0.dist-info/LICENSE,sha256=3GJlINzVOiL3J68-5Cx3DlbJemT-OtsGN5nYqwMv5VE,26530\nchardet-5.2.0.dist-info/METADATA,sha256=PAr2NQ6hQWpjyFnwlI7MoxHt2S_6oRiUsucOKMNhzGw,3418\nchardet-5.2.0.dist-info/RECORD,,\nchardet-5.2.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nchardet-5.2.0.dist-info/WHEEL,sha256=AtBG6SXL3KF_v0NxLf0ehyVOh0cold-JbJYXNGorC6Q,92\nchardet-5.2.0.dist-info/entry_points.txt,sha256=_cdvYc4jyY68GYfsQAAthNMxO-yodcGkvNC1xOEsLmI,59\nchardet-5.2.0.dist-info/top_level.txt,sha256=AowzBbZy4x8EirABDdJSLJZMkJ_53iIag8xfKR6D7kI,8\nchardet/__init__.py,sha256=57R-HSxj0PWmILMN0GFmUNqEMfrEVSamXyjD-W6_fbs,4797\nchardet/__main__.py,sha256=puNj2o_QfBRKElEkiVp1zEIL1gGYD2o-JuXLFlqHDC4,123\nchardet/__pycache__/__init__.cpython-313.pyc,,\nchardet/__pycache__/__main__.cpython-313.pyc,,\nchardet/__pycache__/big5freq.cpython-313.pyc,,\nchardet/__pycache__/big5prober.cpython-313.pyc,,\nchardet/__pycache__/chardistribution.cpython-313.pyc,,\nchardet/__pycache__/charsetgroupprober.cpython-313.pyc,,\nchardet/__pycache__/charsetprober.cpython-313.pyc,,\nchardet/__pycache__/codingstatemachine.cpython-313.pyc,,\nchardet/__pycache__/codingstatemachinedict.cpython-313.pyc,,\nchardet/__pycache__/cp949prober.cpython-313.pyc,,\nchardet/__pycache__/enums.cpython-313.pyc,,\nchardet/__pycache__/escprober.cpython-313.pyc,,\nchardet/__pycache__/escsm.cpython-313.pyc,,\nchardet/__pycache__/eucjpprober.cpython-313.pyc,,\nchardet/__pycache__/euckrfreq.cpython-313.pyc,,\nchardet/__pycache__/euckrprober.cpython-313.pyc,,\nchardet/__pycache__/euctwfreq.cpython-313.pyc,,\nchardet/__pycache__/euctwprober.cpython-313.pyc,,\nchardet/__pycache__/gb2312freq.cpython-313.pyc,,\nchardet/__pycache__/gb2312prober.cpython-313.pyc,,\nchardet/__pycache__/hebrewprober.cpython-313.pyc,,\nchardet/__pycache__/jisfreq.cpython-313.pyc,,\nchardet/__pycache__/johabfreq.cpython-313.pyc,,\nchardet/__pycache__/johabprober.cpython-313.pyc,,\nchardet/__pycache__/jpcntx.cpython-313.pyc,,\nchardet/__pycache__/langbulgarianmodel.cpython-313.pyc,,\nchardet/__pycache__/langgreekmodel.cpython-313.pyc,,\nchardet/__pycache__/langhebrewmodel.cpython-313.pyc,,\nchardet/__pycache__/langhungarianmodel.cpython-313.pyc,,\nchardet/__pycache__/langrussianmodel.cpython-313.pyc,,\nchardet/__pycache__/langthaimodel.cpython-313.pyc,,\nchardet/__pycache__/langturkishmodel.cpython-313.pyc,,\nchardet/__pycache__/latin1prober.cpython-313.pyc,,\nchardet/__pycache__/macromanprober.cpython-313.pyc,,\nchardet/__pycache__/mbcharsetprober.cpython-313.pyc,,\nchardet/__pycache__/mbcsgroupprober.cpython-313.pyc,,\nchardet/__pycache__/mbcssm.cpython-313.pyc,,\nchardet/__pycache__/resultdict.cpython-313.pyc,,\nchardet/__pycache__/sbcharsetprober.cpython-313.pyc,,\nchardet/__pycache__/sbcsgroupprober.cpython-313.pyc,,\nchardet/__pycache__/sjisprober.cpython-313.pyc,,\nchardet/__pycache__/universaldetector.cpython-313.pyc,,\nchardet/__pycache__/utf1632prober.cpython-313.pyc,,\nchardet/__pycache__/utf8prober.cpython-313.pyc,,\nchardet/__pycache__/version.cpython-313.pyc,,\nchardet/big5freq.py,sha256=ltcfP-3PjlNHCoo5e4a7C4z-2DhBTXRfY6jbMbB7P30,31274\nchardet/big5prober.py,sha256=lPMfwCX6v2AaPgvFh_cSWZcgLDbWiFCHLZ_p9RQ9uxE,1763\nchardet/chardistribution.py,sha256=13B8XUG4oXDuLdXvfbIWwLFeR-ZU21AqTS1zcdON8bU,10032\nchardet/charsetgroupprober.py,sha256=UKK3SaIZB2PCdKSIS0gnvMtLR9JJX62M-
fZJu3OlWyg,3915\nchardet/charsetprober.py,sha256=L3t8_wIOov8em-vZWOcbkdsrwe43N6_gqNh5pH7WPd4,5420\nchardet/cli/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nchardet/cli/__pycache__/__init__.cpython-313.pyc,,\nchardet/cli/__pycache__/chardetect.cpython-313.pyc,,\nchardet/cli/chardetect.py,sha256=zibMVg5RpKb-ME9_7EYG4ZM2Sf07NHcQzZ12U-rYJho,3242\nchardet/codingstatemachine.py,sha256=K7k69sw3jY5DmTXoSJQVsUtFIQKYPQVOSJJhBuGv_yE,3732\nchardet/codingstatemachinedict.py,sha256=0GY3Hi2qIZvDrOOJ3AtqppM1RsYxr_66ER4EHjuMiMc,542\nchardet/cp949prober.py,sha256=0jKRV7fECuWI16rNnks0ZECKA1iZYCIEaP8A1ZvjUSI,1860\nchardet/enums.py,sha256=TzECiZoCKNMqgwU76cPCeKWFBqaWvAdLMev5_bCkhY8,1683\nchardet/escprober.py,sha256=Kho48X65xE0scFylIdeJjM2bcbvRvv0h0WUbMWrJD3A,4006\nchardet/escsm.py,sha256=AqyXpA2FQFD7k-buBty_7itGEYkhmVa8X09NLRul3QM,12176\nchardet/eucjpprober.py,sha256=5KYaM9fsxkRYzw1b5k0fL-j_-ezIw-ij9r97a9MHxLY,3934\nchardet/euckrfreq.py,sha256=3mHuRvXfsq_QcQysDQFb8qSudvTiol71C6Ic2w57tKM,13566\nchardet/euckrprober.py,sha256=hiFT6wM174GIwRvqDsIcuOc-dDsq2uPKMKbyV8-1Xnc,1753\nchardet/euctwfreq.py,sha256=2alILE1Lh5eqiFJZjzRkMQXolNJRHY5oBQd-vmZYFFM,36913\nchardet/euctwprober.py,sha256=NxbpNdBtU0VFI0bKfGfDkpP7S2_8_6FlO87dVH0ogws,1753\nchardet/gb2312freq.py,sha256=49OrdXzD-HXqwavkqjo8Z7gvs58hONNzDhAyMENNkvY,20735\nchardet/gb2312prober.py,sha256=KPEBueaSLSvBpFeINMu0D6TgHcR90e5PaQawifzF4o0,1759\nchardet/hebrewprober.py,sha256=96T_Lj_OmW-fK7JrSHojYjyG3fsGgbzkoTNleZ3kfYE,14537\nchardet/jisfreq.py,sha256=mm8tfrwqhpOd3wzZKS4NJqkYBQVcDfTM2JiQ5aW932E,25796\nchardet/johabfreq.py,sha256=dBpOYG34GRX6SL8k_LbS9rxZPMjLjoMlgZ03Pz5Hmqc,42498\nchardet/johabprober.py,sha256=O1Qw9nVzRnun7vZp4UZM7wvJSv9W941mEU9uDMnY3DU,1752\nchardet/jpcntx.py,sha256=uhHrYWkLxE_rF5OkHKInm0HUsrjgKHHVQvtt3UcvotA,27055\nchardet/langbulgarianmodel.py,sha256=bGoRpxBYtrbSHa6mX6PkEA26v30pWmhDjemhdxmkew8,104550\nchardet/langgreekmodel.py,sha256=3wMlEzQ8oU2MbrL2xN8lkuOB0dCMLBhW6heekxusoc0,98472\nchardet/langhebrewmodel.py,sha256=ZUTqusxMvR_earWPs5w-rH10xoe5sPjd9FLMu1DUIvE,98184\nchardet/langhungarianmodel.py,sha256=N-YtC2EiswyS7XsUicCPRycrIzRNj47Y048odp9qOoo,101351\nchardet/langrussianmodel.py,sha256=6v7RcZKGj0VH0864BHzizKNceAYbHvGts2p00ifC7w4,128023\nchardet/langthaimodel.py,sha256=Mr673U9U8rkQFfUDtLP01pp-0TOsl2o6sb75YEjvpcs,102762\nchardet/langturkishmodel.py,sha256=LkXCjWhGUEzqKXvfasHN0SFBigwKJ3xeWNVZ0EyI0kA,95360\nchardet/latin1prober.py,sha256=p15EEmFbmQUwbKLC7lOJVGHEZwcG45ubEZYTGu01J5g,5380\nchardet/macromanprober.py,sha256=9anfzmY6TBfUPDyBDOdY07kqmTHpZ1tK0jL-p1JWcOY,6077\nchardet/mbcharsetprober.py,sha256=Wr04WNI4F3X_VxEverNG-H25g7u-MDDKlNt-JGj-_uU,3715\nchardet/mbcsgroupprober.py,sha256=iRpaNBjV0DNwYPu_z6TiHgRpwYahiM7ztI_4kZ4Uz9A,2131\nchardet/mbcssm.py,sha256=hUtPvDYgWDaA2dWdgLsshbwRfm3Q5YRlRogdmeRUNQw,30391\nchardet/metadata/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nchardet/metadata/__pycache__/__init__.cpython-313.pyc,,\nchardet/metadata/__pycache__/languages.cpython-313.pyc,,\nchardet/metadata/languages.py,sha256=FhvBIdZFxRQ-dTwkb_0madRKgVBCaUMQz9I5xqjE5iQ,13560\nchardet/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nchardet/resultdict.py,sha256=ez4FRvN5KaSosJeJ2WzUyKdDdg35HDy_SSLPXKCdt5M,402\nchardet/sbcharsetprober.py,sha256=-nd3F90i7GpXLjehLVHqVBE0KlWzGvQUPETLBNn4o6U,6400\nchardet/sbcsgroupprober.py,sha256=gcgI0fOfgw_3YTClpbra_MNxwyEyJ3eUXraoLHYb59E,4137\nchardet/sjisprober.py,sha256=aqQufMzRw46ZpFlzmYaYeT2-nzmKb-hmcrApppJ862k,4007\nchardet/universaldetector.py,sha256=xYBrg4x0dd9WnT8qclfADV
D9ondrUNkqPmvte1pa520,14848\nchardet/utf1632prober.py,sha256=pw1epGdMj1hDGiCu1AHqqzOEfjX8MVdiW7O1BlT8-eQ,8505\nchardet/utf8prober.py,sha256=8m08Ub5490H4jQ6LYXvFysGtgKoKsHUd2zH_i8_TnVw,2812\nchardet/version.py,sha256=jp8ePp1zC63YxruGcHSuKxtf3-fF1LYAMUZD2eDWYok,244\n
path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\RECORD | filename: RECORD | language: Other | size_bytes: 7,375 | quality_score: 0.7 | complexity: 0 | documentation_ratio: 0 | repository: node-utils | stars: 20 | created_date: 2024-10-18T22:21:56.771548 | license: GPL-3.0 | is_test: false | file_hash: 01760c5d85e2d488114c8d9f032c5849
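Each line in the RECORD content above has the form `path,sha256=<digest>,<size>`, where the digest is the URL-safe base64 encoding of the file's SHA-256 hash with the trailing `=` padding removed. A sketch of checking one entry against the installed file; the .venv path is taken from the rows above and may differ on another machine.

```python
import base64
import hashlib
from pathlib import Path

def record_digest(data: bytes) -> str:
    """SHA-256 digest in RECORD format: URL-safe base64 without '=' padding."""
    return base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=").decode("ascii")

# Entry taken from the RECORD content above:
# chardet-5.2.0.dist-info/entry_points.txt,sha256=_cdvYc4jyY68GYfsQAAthNMxO-yodcGkvNC1xOEsLmI,59
expected = "_cdvYc4jyY68GYfsQAAthNMxO-yodcGkvNC1xOEsLmI"
path = Path(".venv/Lib/site-packages/chardet-5.2.0.dist-info/entry_points.txt")
data = path.read_bytes()

print(len(data) == 59 and record_digest(data) == expected)
```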
content: "chardet\n" | path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\top_level.txt | filename: top_level.txt | language: Other | size_bytes: 8 | quality_score: 0.5 | complexity: 0 | documentation_ratio: 0 | repository: react-lib | stars: 461 | created_date: 2024-05-03T14:37:44.030850 | license: GPL-3.0 | is_test: false | file_hash: dfa288092949be4ded87cfe9be2702a5
content: "Wheel-Version: 1.0\nGenerator: bdist_wheel (0.41.0)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n" | path: .venv\Lib\site-packages\chardet-5.2.0.dist-info\WHEEL | filename: WHEEL | language: Other | size_bytes: 92 | quality_score: 0.5 | complexity: 0 | documentation_ratio: 0 | repository: awesome-app | stars: 715 | created_date: 2024-09-09T18:44:29.583733 | license: MIT | is_test: false | file_hash: 3eced86d3f01a481e60a39780b007038
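The WHEEL content above is a set of RFC 822-style `Key: Value` headers, so it can be read with the standard library's email parser; the path below mirrors the rows above and is only illustrative.

```python
from email.parser import Parser
from pathlib import Path

wheel_path = Path(".venv/Lib/site-packages/chardet-5.2.0.dist-info/WHEEL")
headers = Parser().parsestr(wheel_path.read_text())

print(headers["Wheel-Version"])    # "1.0"
print(headers["Root-Is-Purelib"])  # "true"
print(headers.get_all("Tag"))      # ["py3-none-any"] (a wheel may list several tags)
```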
from __future__ import annotations\n\nimport logging\nfrom os import PathLike\nfrom typing import BinaryIO\n\nfrom .cd import (\n coherence_ratio,\n encoding_languages,\n mb_encoding_languages,\n merge_coherence_ratios,\n)\nfrom .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE\nfrom .md import mess_ratio\nfrom .models import CharsetMatch, CharsetMatches\nfrom .utils import (\n any_specified_encoding,\n cut_sequence_chunks,\n iana_name,\n identify_sig_or_bom,\n is_cp_similar,\n is_multi_byte_encoding,\n should_strip_sig_or_bom,\n)\n\nlogger = logging.getLogger("charset_normalizer")\nexplain_handler = logging.StreamHandler()\nexplain_handler.setFormatter(\n logging.Formatter("%(asctime)s | %(levelname)s | %(message)s")\n)\n\n\ndef from_bytes(\n sequences: bytes | bytearray,\n steps: int = 5,\n chunk_size: int = 512,\n threshold: float = 0.2,\n cp_isolation: list[str] | None = None,\n cp_exclusion: list[str] | None = None,\n preemptive_behaviour: bool = True,\n explain: bool = False,\n language_threshold: float = 0.1,\n enable_fallback: bool = True,\n) -> CharsetMatches:\n """\n Given a raw bytes sequence, return the best possibles charset usable to render str objects.\n If there is no results, it is a strong indicator that the source is binary/not text.\n By default, the process will extract 5 blocks of 512o each to assess the mess and coherence of a given sequence.\n And will give up a particular code page after 20% of measured mess. Those criteria are customizable at will.\n\n The preemptive behavior DOES NOT replace the traditional detection workflow, it prioritize a particular code page\n but never take it for granted. Can improve the performance.\n\n You may want to focus your attention to some code page or/and not others, use cp_isolation and cp_exclusion for that\n purpose.\n\n This function will strip the SIG in the payload/sequence every time except on UTF-16, UTF-32.\n By default the library does not setup any handler other than the NullHandler, if you choose to set the 'explain'\n toggle to True it will alter the logger configuration to add a StreamHandler that is suitable for debugging.\n Custom logging format and handler can be set manually.\n """\n\n if not isinstance(sequences, (bytearray, bytes)):\n raise TypeError(\n "Expected object of type bytes or bytearray, got: {}".format(\n type(sequences)\n )\n )\n\n if explain:\n previous_logger_level: int = logger.level\n logger.addHandler(explain_handler)\n logger.setLevel(TRACE)\n\n length: int = len(sequences)\n\n if length == 0:\n logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.")\n if explain: # Defensive: ensure exit path clean handler\n logger.removeHandler(explain_handler)\n logger.setLevel(previous_logger_level or logging.WARNING)\n return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")])\n\n if cp_isolation is not None:\n logger.log(\n TRACE,\n "cp_isolation is set. use this flag for debugging purpose. "\n "limited list of encoding allowed : %s.",\n ", ".join(cp_isolation),\n )\n cp_isolation = [iana_name(cp, False) for cp in cp_isolation]\n else:\n cp_isolation = []\n\n if cp_exclusion is not None:\n logger.log(\n TRACE,\n "cp_exclusion is set. use this flag for debugging purpose. 
"\n "limited list of encoding excluded : %s.",\n ", ".join(cp_exclusion),\n )\n cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion]\n else:\n cp_exclusion = []\n\n if length <= (chunk_size * steps):\n logger.log(\n TRACE,\n "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.",\n steps,\n chunk_size,\n length,\n )\n steps = 1\n chunk_size = length\n\n if steps > 1 and length / steps < chunk_size:\n chunk_size = int(length / steps)\n\n is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE\n is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE\n\n if is_too_small_sequence:\n logger.log(\n TRACE,\n "Trying to detect encoding from a tiny portion of ({}) byte(s).".format(\n length\n ),\n )\n elif is_too_large_sequence:\n logger.log(\n TRACE,\n "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format(\n length\n ),\n )\n\n prioritized_encodings: list[str] = []\n\n specified_encoding: str | None = (\n any_specified_encoding(sequences) if preemptive_behaviour else None\n )\n\n if specified_encoding is not None:\n prioritized_encodings.append(specified_encoding)\n logger.log(\n TRACE,\n "Detected declarative mark in sequence. Priority +1 given for %s.",\n specified_encoding,\n )\n\n tested: set[str] = set()\n tested_but_hard_failure: list[str] = []\n tested_but_soft_failure: list[str] = []\n\n fallback_ascii: CharsetMatch | None = None\n fallback_u8: CharsetMatch | None = None\n fallback_specified: CharsetMatch | None = None\n\n results: CharsetMatches = CharsetMatches()\n\n early_stop_results: CharsetMatches = CharsetMatches()\n\n sig_encoding, sig_payload = identify_sig_or_bom(sequences)\n\n if sig_encoding is not None:\n prioritized_encodings.append(sig_encoding)\n logger.log(\n TRACE,\n "Detected a SIG or BOM mark on first %i byte(s). Priority +1 given for %s.",\n len(sig_payload),\n sig_encoding,\n )\n\n prioritized_encodings.append("ascii")\n\n if "utf_8" not in prioritized_encodings:\n prioritized_encodings.append("utf_8")\n\n for encoding_iana in prioritized_encodings + IANA_SUPPORTED:\n if cp_isolation and encoding_iana not in cp_isolation:\n continue\n\n if cp_exclusion and encoding_iana in cp_exclusion:\n continue\n\n if encoding_iana in tested:\n continue\n\n tested.add(encoding_iana)\n\n decoded_payload: str | None = None\n bom_or_sig_available: bool = sig_encoding == encoding_iana\n strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom(\n encoding_iana\n )\n\n if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available:\n logger.log(\n TRACE,\n "Encoding %s won't be tested as-is because it require a BOM. 
Will try some sub-encoder LE/BE.",\n encoding_iana,\n )\n continue\n if encoding_iana in {"utf_7"} and not bom_or_sig_available:\n logger.log(\n TRACE,\n "Encoding %s won't be tested as-is because detection is unreliable without BOM/SIG.",\n encoding_iana,\n )\n continue\n\n try:\n is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana)\n except (ModuleNotFoundError, ImportError):\n logger.log(\n TRACE,\n "Encoding %s does not provide an IncrementalDecoder",\n encoding_iana,\n )\n continue\n\n try:\n if is_too_large_sequence and is_multi_byte_decoder is False:\n str(\n (\n sequences[: int(50e4)]\n if strip_sig_or_bom is False\n else sequences[len(sig_payload) : int(50e4)]\n ),\n encoding=encoding_iana,\n )\n else:\n decoded_payload = str(\n (\n sequences\n if strip_sig_or_bom is False\n else sequences[len(sig_payload) :]\n ),\n encoding=encoding_iana,\n )\n except (UnicodeDecodeError, LookupError) as e:\n if not isinstance(e, LookupError):\n logger.log(\n TRACE,\n "Code page %s does not fit given bytes sequence at ALL. %s",\n encoding_iana,\n str(e),\n )\n tested_but_hard_failure.append(encoding_iana)\n continue\n\n similar_soft_failure_test: bool = False\n\n for encoding_soft_failed in tested_but_soft_failure:\n if is_cp_similar(encoding_iana, encoding_soft_failed):\n similar_soft_failure_test = True\n break\n\n if similar_soft_failure_test:\n logger.log(\n TRACE,\n "%s is deemed too similar to code page %s and was consider unsuited already. Continuing!",\n encoding_iana,\n encoding_soft_failed,\n )\n continue\n\n r_ = range(\n 0 if not bom_or_sig_available else len(sig_payload),\n length,\n int(length / steps),\n )\n\n multi_byte_bonus: bool = (\n is_multi_byte_decoder\n and decoded_payload is not None\n and len(decoded_payload) < length\n )\n\n if multi_byte_bonus:\n logger.log(\n TRACE,\n "Code page %s is a multi byte encoding table and it appear that at least one character "\n "was encoded using n-bytes.",\n encoding_iana,\n )\n\n max_chunk_gave_up: int = int(len(r_) / 4)\n\n max_chunk_gave_up = max(max_chunk_gave_up, 2)\n early_stop_count: int = 0\n lazy_str_hard_failure = False\n\n md_chunks: list[str] = []\n md_ratios = []\n\n try:\n for chunk in cut_sequence_chunks(\n sequences,\n encoding_iana,\n r_,\n chunk_size,\n bom_or_sig_available,\n strip_sig_or_bom,\n sig_payload,\n is_multi_byte_decoder,\n decoded_payload,\n ):\n md_chunks.append(chunk)\n\n md_ratios.append(\n mess_ratio(\n chunk,\n threshold,\n explain is True and 1 <= len(cp_isolation) <= 2,\n )\n )\n\n if md_ratios[-1] >= threshold:\n early_stop_count += 1\n\n if (early_stop_count >= max_chunk_gave_up) or (\n bom_or_sig_available and strip_sig_or_bom is False\n ):\n break\n except (\n UnicodeDecodeError\n ) as e: # Lazy str loading may have missed something there\n logger.log(\n TRACE,\n "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. %s",\n encoding_iana,\n str(e),\n )\n early_stop_count = max_chunk_gave_up\n lazy_str_hard_failure = True\n\n # We might want to check the sequence again with the whole content\n # Only if initial MD tests passes\n if (\n not lazy_str_hard_failure\n and is_too_large_sequence\n and not is_multi_byte_decoder\n ):\n try:\n sequences[int(50e3) :].decode(encoding_iana, errors="strict")\n except UnicodeDecodeError as e:\n logger.log(\n TRACE,\n "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. 
%s",\n encoding_iana,\n str(e),\n )\n tested_but_hard_failure.append(encoding_iana)\n continue\n\n mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0\n if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up:\n tested_but_soft_failure.append(encoding_iana)\n logger.log(\n TRACE,\n "%s was excluded because of initial chaos probing. Gave up %i time(s). "\n "Computed mean chaos is %f %%.",\n encoding_iana,\n early_stop_count,\n round(mean_mess_ratio * 100, ndigits=3),\n )\n # Preparing those fallbacks in case we got nothing.\n if (\n enable_fallback\n and encoding_iana in ["ascii", "utf_8", specified_encoding]\n and not lazy_str_hard_failure\n ):\n fallback_entry = CharsetMatch(\n sequences,\n encoding_iana,\n threshold,\n False,\n [],\n decoded_payload,\n preemptive_declaration=specified_encoding,\n )\n if encoding_iana == specified_encoding:\n fallback_specified = fallback_entry\n elif encoding_iana == "ascii":\n fallback_ascii = fallback_entry\n else:\n fallback_u8 = fallback_entry\n continue\n\n logger.log(\n TRACE,\n "%s passed initial chaos probing. Mean measured chaos is %f %%",\n encoding_iana,\n round(mean_mess_ratio * 100, ndigits=3),\n )\n\n if not is_multi_byte_decoder:\n target_languages: list[str] = encoding_languages(encoding_iana)\n else:\n target_languages = mb_encoding_languages(encoding_iana)\n\n if target_languages:\n logger.log(\n TRACE,\n "{} should target any language(s) of {}".format(\n encoding_iana, str(target_languages)\n ),\n )\n\n cd_ratios = []\n\n # We shall skip the CD when its about ASCII\n # Most of the time its not relevant to run "language-detection" on it.\n if encoding_iana != "ascii":\n for chunk in md_chunks:\n chunk_languages = coherence_ratio(\n chunk,\n language_threshold,\n ",".join(target_languages) if target_languages else None,\n )\n\n cd_ratios.append(chunk_languages)\n\n cd_ratios_merged = merge_coherence_ratios(cd_ratios)\n\n if cd_ratios_merged:\n logger.log(\n TRACE,\n "We detected language {} using {}".format(\n cd_ratios_merged, encoding_iana\n ),\n )\n\n current_match = CharsetMatch(\n sequences,\n encoding_iana,\n mean_mess_ratio,\n bom_or_sig_available,\n cd_ratios_merged,\n (\n decoded_payload\n if (\n is_too_large_sequence is False\n or encoding_iana in [specified_encoding, "ascii", "utf_8"]\n )\n else None\n ),\n preemptive_declaration=specified_encoding,\n )\n\n results.append(current_match)\n\n if (\n encoding_iana in [specified_encoding, "ascii", "utf_8"]\n and mean_mess_ratio < 0.1\n ):\n # If md says nothing to worry about, then... 
stop immediately!\n if mean_mess_ratio == 0.0:\n logger.debug(\n "Encoding detection: %s is most likely the one.",\n current_match.encoding,\n )\n if explain: # Defensive: ensure exit path clean handler\n logger.removeHandler(explain_handler)\n logger.setLevel(previous_logger_level)\n return CharsetMatches([current_match])\n\n early_stop_results.append(current_match)\n\n if (\n len(early_stop_results)\n and (specified_encoding is None or specified_encoding in tested)\n and "ascii" in tested\n and "utf_8" in tested\n ):\n probable_result: CharsetMatch = early_stop_results.best() # type: ignore[assignment]\n logger.debug(\n "Encoding detection: %s is most likely the one.",\n probable_result.encoding,\n )\n if explain: # Defensive: ensure exit path clean handler\n logger.removeHandler(explain_handler)\n logger.setLevel(previous_logger_level)\n\n return CharsetMatches([probable_result])\n\n if encoding_iana == sig_encoding:\n logger.debug(\n "Encoding detection: %s is most likely the one as we detected a BOM or SIG within "\n "the beginning of the sequence.",\n encoding_iana,\n )\n if explain: # Defensive: ensure exit path clean handler\n logger.removeHandler(explain_handler)\n logger.setLevel(previous_logger_level)\n return CharsetMatches([results[encoding_iana]])\n\n if len(results) == 0:\n if fallback_u8 or fallback_ascii or fallback_specified:\n logger.log(\n TRACE,\n "Nothing got out of the detection process. Using ASCII/UTF-8/Specified fallback.",\n )\n\n if fallback_specified:\n logger.debug(\n "Encoding detection: %s will be used as a fallback match",\n fallback_specified.encoding,\n )\n results.append(fallback_specified)\n elif (\n (fallback_u8 and fallback_ascii is None)\n or (\n fallback_u8\n and fallback_ascii\n and fallback_u8.fingerprint != fallback_ascii.fingerprint\n )\n or (fallback_u8 is not None)\n ):\n logger.debug("Encoding detection: utf_8 will be used as a fallback match")\n results.append(fallback_u8)\n elif fallback_ascii:\n logger.debug("Encoding detection: ascii will be used as a fallback match")\n results.append(fallback_ascii)\n\n if results:\n logger.debug(\n "Encoding detection: Found %s as plausible (best-candidate) for content. 
With %i alternatives.",\n results.best().encoding, # type: ignore\n len(results) - 1,\n )\n else:\n logger.debug("Encoding detection: Unable to determine any suitable charset.")\n\n if explain:\n logger.removeHandler(explain_handler)\n logger.setLevel(previous_logger_level)\n\n return results\n\n\ndef from_fp(\n fp: BinaryIO,\n steps: int = 5,\n chunk_size: int = 512,\n threshold: float = 0.20,\n cp_isolation: list[str] | None = None,\n cp_exclusion: list[str] | None = None,\n preemptive_behaviour: bool = True,\n explain: bool = False,\n language_threshold: float = 0.1,\n enable_fallback: bool = True,\n) -> CharsetMatches:\n """\n Same thing than the function from_bytes but using a file pointer that is already ready.\n Will not close the file pointer.\n """\n return from_bytes(\n fp.read(),\n steps,\n chunk_size,\n threshold,\n cp_isolation,\n cp_exclusion,\n preemptive_behaviour,\n explain,\n language_threshold,\n enable_fallback,\n )\n\n\ndef from_path(\n path: str | bytes | PathLike, # type: ignore[type-arg]\n steps: int = 5,\n chunk_size: int = 512,\n threshold: float = 0.20,\n cp_isolation: list[str] | None = None,\n cp_exclusion: list[str] | None = None,\n preemptive_behaviour: bool = True,\n explain: bool = False,\n language_threshold: float = 0.1,\n enable_fallback: bool = True,\n) -> CharsetMatches:\n """\n Same thing than the function from_bytes but with one extra step. Opening and reading given file path in binary mode.\n Can raise IOError.\n """\n with open(path, "rb") as fp:\n return from_fp(\n fp,\n steps,\n chunk_size,\n threshold,\n cp_isolation,\n cp_exclusion,\n preemptive_behaviour,\n explain,\n language_threshold,\n enable_fallback,\n )\n\n\ndef is_binary(\n fp_or_path_or_payload: PathLike | str | BinaryIO | bytes, # type: ignore[type-arg]\n steps: int = 5,\n chunk_size: int = 512,\n threshold: float = 0.20,\n cp_isolation: list[str] | None = None,\n cp_exclusion: list[str] | None = None,\n preemptive_behaviour: bool = True,\n explain: bool = False,\n language_threshold: float = 0.1,\n enable_fallback: bool = False,\n) -> bool:\n """\n Detect if the given input (file, bytes, or path) points to a binary file. aka. not a string.\n Based on the same main heuristic algorithms and default kwargs at the sole exception that fallbacks match\n are disabled to be stricter around ASCII-compatible but unlikely to be a string.\n """\n if isinstance(fp_or_path_or_payload, (str, PathLike)):\n guesses = from_path(\n fp_or_path_or_payload,\n steps=steps,\n chunk_size=chunk_size,\n threshold=threshold,\n cp_isolation=cp_isolation,\n cp_exclusion=cp_exclusion,\n preemptive_behaviour=preemptive_behaviour,\n explain=explain,\n language_threshold=language_threshold,\n enable_fallback=enable_fallback,\n )\n elif isinstance(\n fp_or_path_or_payload,\n (\n bytes,\n bytearray,\n ),\n ):\n guesses = from_bytes(\n fp_or_path_or_payload,\n steps=steps,\n chunk_size=chunk_size,\n threshold=threshold,\n cp_isolation=cp_isolation,\n cp_exclusion=cp_exclusion,\n preemptive_behaviour=preemptive_behaviour,\n explain=explain,\n language_threshold=language_threshold,\n enable_fallback=enable_fallback,\n )\n else:\n guesses = from_fp(\n fp_or_path_or_payload,\n steps=steps,\n chunk_size=chunk_size,\n threshold=threshold,\n cp_isolation=cp_isolation,\n cp_exclusion=cp_exclusion,\n preemptive_behaviour=preemptive_behaviour,\n explain=explain,\n language_threshold=language_threshold,\n enable_fallback=enable_fallback,\n )\n\n return not guesses\n
|
.venv\Lib\site-packages\charset_normalizer\api.py
|
api.py
|
Python
| 23,285 | 0.95 | 0.121257 | 0.010187 |
vue-tools
| 549 |
2024-04-24T04:16:49.510233
|
GPL-3.0
| false |
339d93f83fe3de7b801b9b975451d780
|
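The api.py record above defines the public entry points (from_bytes, from_fp, from_path, is_binary). Below is a minimal usage sketch, assuming these helpers are re-exported at the charset_normalizer package top level (otherwise import them from charset_normalizer.api); the sample payload and the commented-out file path are invented for illustration.

from charset_normalizer import from_bytes, from_path, is_binary

payload = "Bonjour, ceci est un petit test de détection.".encode("cp1252")

# from_bytes returns a CharsetMatches collection; .best() yields the most
# plausible CharsetMatch, or None when nothing fits.
best_guess = from_bytes(payload).best()
if best_guess is not None:
    print(best_guess.encoding, best_guess.language)
    print(str(best_guess))  # the decoded payload

# from_path opens and reads the file in binary mode itself and can raise IOError.
# The path below is hypothetical, hence commented out:
# matches = from_path("./some_legacy_export.txt")
# print(len(matches), "plausible encoding(s)")

# is_binary runs the same pipeline with fallback matches disabled, so it is stricter.
print(is_binary(b"\x00\x01\x02\xfe\xff" * 8))  # likely True for raw binary
print(is_binary(payload))                      # likely False for decodable text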
from __future__ import annotations\n\nimport importlib\nfrom codecs import IncrementalDecoder\nfrom collections import Counter\nfrom functools import lru_cache\nfrom typing import Counter as TypeCounter\n\nfrom .constant import (\n FREQUENCIES,\n KO_NAMES,\n LANGUAGE_SUPPORTED_COUNT,\n TOO_SMALL_SEQUENCE,\n ZH_NAMES,\n)\nfrom .md import is_suspiciously_successive_range\nfrom .models import CoherenceMatches\nfrom .utils import (\n is_accentuated,\n is_latin,\n is_multi_byte_encoding,\n is_unicode_range_secondary,\n unicode_range,\n)\n\n\ndef encoding_unicode_range(iana_name: str) -> list[str]:\n """\n Return associated unicode ranges in a single byte code page.\n """\n if is_multi_byte_encoding(iana_name):\n raise OSError("Function not supported on multi-byte code page")\n\n decoder = importlib.import_module(f"encodings.{iana_name}").IncrementalDecoder\n\n p: IncrementalDecoder = decoder(errors="ignore")\n seen_ranges: dict[str, int] = {}\n character_count: int = 0\n\n for i in range(0x40, 0xFF):\n chunk: str = p.decode(bytes([i]))\n\n if chunk:\n character_range: str | None = unicode_range(chunk)\n\n if character_range is None:\n continue\n\n if is_unicode_range_secondary(character_range) is False:\n if character_range not in seen_ranges:\n seen_ranges[character_range] = 0\n seen_ranges[character_range] += 1\n character_count += 1\n\n return sorted(\n [\n character_range\n for character_range in seen_ranges\n if seen_ranges[character_range] / character_count >= 0.15\n ]\n )\n\n\ndef unicode_range_languages(primary_range: str) -> list[str]:\n """\n Return inferred languages used with a unicode range.\n """\n languages: list[str] = []\n\n for language, characters in FREQUENCIES.items():\n for character in characters:\n if unicode_range(character) == primary_range:\n languages.append(language)\n break\n\n return languages\n\n\n@lru_cache()\ndef encoding_languages(iana_name: str) -> list[str]:\n """\n Single-byte encoding language association. Some code page are heavily linked to particular language(s).\n This function does the correspondence.\n """\n unicode_ranges: list[str] = encoding_unicode_range(iana_name)\n primary_range: str | None = None\n\n for specified_range in unicode_ranges:\n if "Latin" not in specified_range:\n primary_range = specified_range\n break\n\n if primary_range is None:\n return ["Latin Based"]\n\n return unicode_range_languages(primary_range)\n\n\n@lru_cache()\ndef mb_encoding_languages(iana_name: str) -> list[str]:\n """\n Multi-byte encoding language association. 
Some code page are heavily linked to particular language(s).\n This function does the correspondence.\n """\n if (\n iana_name.startswith("shift_")\n or iana_name.startswith("iso2022_jp")\n or iana_name.startswith("euc_j")\n or iana_name == "cp932"\n ):\n return ["Japanese"]\n if iana_name.startswith("gb") or iana_name in ZH_NAMES:\n return ["Chinese"]\n if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES:\n return ["Korean"]\n\n return []\n\n\n@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT)\ndef get_target_features(language: str) -> tuple[bool, bool]:\n """\n Determine main aspects from a supported language if it contains accents and if is pure Latin.\n """\n target_have_accents: bool = False\n target_pure_latin: bool = True\n\n for character in FREQUENCIES[language]:\n if not target_have_accents and is_accentuated(character):\n target_have_accents = True\n if target_pure_latin and is_latin(character) is False:\n target_pure_latin = False\n\n return target_have_accents, target_pure_latin\n\n\ndef alphabet_languages(\n characters: list[str], ignore_non_latin: bool = False\n) -> list[str]:\n """\n Return associated languages associated to given characters.\n """\n languages: list[tuple[str, float]] = []\n\n source_have_accents = any(is_accentuated(character) for character in characters)\n\n for language, language_characters in FREQUENCIES.items():\n target_have_accents, target_pure_latin = get_target_features(language)\n\n if ignore_non_latin and target_pure_latin is False:\n continue\n\n if target_have_accents is False and source_have_accents:\n continue\n\n character_count: int = len(language_characters)\n\n character_match_count: int = len(\n [c for c in language_characters if c in characters]\n )\n\n ratio: float = character_match_count / character_count\n\n if ratio >= 0.2:\n languages.append((language, ratio))\n\n languages = sorted(languages, key=lambda x: x[1], reverse=True)\n\n return [compatible_language[0] for compatible_language in languages]\n\n\ndef characters_popularity_compare(\n language: str, ordered_characters: list[str]\n) -> float:\n """\n Determine if a ordered characters list (by occurrence from most appearance to rarest) match a particular language.\n The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit).\n Beware that is function is not strict on the match in order to ease the detection. 
(Meaning close match is 1.)\n """\n if language not in FREQUENCIES:\n raise ValueError(f"{language} not available")\n\n character_approved_count: int = 0\n FREQUENCIES_language_set = set(FREQUENCIES[language])\n\n ordered_characters_count: int = len(ordered_characters)\n target_language_characters_count: int = len(FREQUENCIES[language])\n\n large_alphabet: bool = target_language_characters_count > 26\n\n for character, character_rank in zip(\n ordered_characters, range(0, ordered_characters_count)\n ):\n if character not in FREQUENCIES_language_set:\n continue\n\n character_rank_in_language: int = FREQUENCIES[language].index(character)\n expected_projection_ratio: float = (\n target_language_characters_count / ordered_characters_count\n )\n character_rank_projection: int = int(character_rank * expected_projection_ratio)\n\n if (\n large_alphabet is False\n and abs(character_rank_projection - character_rank_in_language) > 4\n ):\n continue\n\n if (\n large_alphabet is True\n and abs(character_rank_projection - character_rank_in_language)\n < target_language_characters_count / 3\n ):\n character_approved_count += 1\n continue\n\n characters_before_source: list[str] = FREQUENCIES[language][\n 0:character_rank_in_language\n ]\n characters_after_source: list[str] = FREQUENCIES[language][\n character_rank_in_language:\n ]\n characters_before: list[str] = ordered_characters[0:character_rank]\n characters_after: list[str] = ordered_characters[character_rank:]\n\n before_match_count: int = len(\n set(characters_before) & set(characters_before_source)\n )\n\n after_match_count: int = len(\n set(characters_after) & set(characters_after_source)\n )\n\n if len(characters_before_source) == 0 and before_match_count <= 4:\n character_approved_count += 1\n continue\n\n if len(characters_after_source) == 0 and after_match_count <= 4:\n character_approved_count += 1\n continue\n\n if (\n before_match_count / len(characters_before_source) >= 0.4\n or after_match_count / len(characters_after_source) >= 0.4\n ):\n character_approved_count += 1\n continue\n\n return character_approved_count / len(ordered_characters)\n\n\ndef alpha_unicode_split(decoded_sequence: str) -> list[str]:\n """\n Given a decoded text sequence, return a list of str. Unicode range / alphabet separation.\n Ex. 
a text containing English/Latin with a bit a Hebrew will return two items in the resulting list;\n One containing the latin letters and the other hebrew.\n """\n layers: dict[str, str] = {}\n\n for character in decoded_sequence:\n if character.isalpha() is False:\n continue\n\n character_range: str | None = unicode_range(character)\n\n if character_range is None:\n continue\n\n layer_target_range: str | None = None\n\n for discovered_range in layers:\n if (\n is_suspiciously_successive_range(discovered_range, character_range)\n is False\n ):\n layer_target_range = discovered_range\n break\n\n if layer_target_range is None:\n layer_target_range = character_range\n\n if layer_target_range not in layers:\n layers[layer_target_range] = character.lower()\n continue\n\n layers[layer_target_range] += character.lower()\n\n return list(layers.values())\n\n\ndef merge_coherence_ratios(results: list[CoherenceMatches]) -> CoherenceMatches:\n """\n This function merge results previously given by the function coherence_ratio.\n The return type is the same as coherence_ratio.\n """\n per_language_ratios: dict[str, list[float]] = {}\n for result in results:\n for sub_result in result:\n language, ratio = sub_result\n if language not in per_language_ratios:\n per_language_ratios[language] = [ratio]\n continue\n per_language_ratios[language].append(ratio)\n\n merge = [\n (\n language,\n round(\n sum(per_language_ratios[language]) / len(per_language_ratios[language]),\n 4,\n ),\n )\n for language in per_language_ratios\n ]\n\n return sorted(merge, key=lambda x: x[1], reverse=True)\n\n\ndef filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches:\n """\n We shall NOT return "English—" in CoherenceMatches because it is an alternative\n of "English". This function only keeps the best match and remove the em-dash in it.\n """\n index_results: dict[str, list[float]] = dict()\n\n for result in results:\n language, ratio = result\n no_em_name: str = language.replace("—", "")\n\n if no_em_name not in index_results:\n index_results[no_em_name] = []\n\n index_results[no_em_name].append(ratio)\n\n if any(len(index_results[e]) > 1 for e in index_results):\n filtered_results: CoherenceMatches = []\n\n for language in index_results:\n filtered_results.append((language, max(index_results[language])))\n\n return filtered_results\n\n return results\n\n\n@lru_cache(maxsize=2048)\ndef coherence_ratio(\n decoded_sequence: str, threshold: float = 0.1, lg_inclusion: str | None = None\n) -> CoherenceMatches:\n """\n Detect ANY language that can be identified in given sequence. 
The sequence will be analysed by layers.\n A layer = Character extraction by alphabets/ranges.\n """\n\n results: list[tuple[str, float]] = []\n ignore_non_latin: bool = False\n\n sufficient_match_count: int = 0\n\n lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else []\n if "Latin Based" in lg_inclusion_list:\n ignore_non_latin = True\n lg_inclusion_list.remove("Latin Based")\n\n for layer in alpha_unicode_split(decoded_sequence):\n sequence_frequencies: TypeCounter[str] = Counter(layer)\n most_common = sequence_frequencies.most_common()\n\n character_count: int = sum(o for c, o in most_common)\n\n if character_count <= TOO_SMALL_SEQUENCE:\n continue\n\n popular_character_ordered: list[str] = [c for c, o in most_common]\n\n for language in lg_inclusion_list or alphabet_languages(\n popular_character_ordered, ignore_non_latin\n ):\n ratio: float = characters_popularity_compare(\n language, popular_character_ordered\n )\n\n if ratio < threshold:\n continue\n elif ratio >= 0.8:\n sufficient_match_count += 1\n\n results.append((language, round(ratio, 4)))\n\n if sufficient_match_count >= 3:\n break\n\n return sorted(\n filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True\n )\n
|
.venv\Lib\site-packages\charset_normalizer\cd.py
|
cd.py
|
Python
| 12,917 | 0.85 | 0.205063 | 0 |
awesome-app
| 216 |
2025-01-01T18:31:22.160729
|
BSD-3-Clause
| false |
0f85d12a12255b461a01e23207764719
|
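To make the coherence layer in cd.py above concrete, here is a small sketch that calls its helpers directly. This is illustrative only: the sample sentence and character pool are made up, and in normal use these functions are driven internally by from_bytes rather than called by hand.

from charset_normalizer.cd import (
    alphabet_languages,
    coherence_ratio,
    encoding_languages,
)

# Language guesses for a decoded sequence: a list of (language, ratio) tuples
# sorted by ratio, one entry per language above the threshold.
sample = "le chien ne mange pas la soupe parce qu'elle est bien trop chaude"
print(coherence_ratio(sample))

# Languages plausibly tied to a single-byte code page (encoding_languages
# raises OSError for multi-byte codecs such as utf_8).
print(encoding_languages("cp1256"))

# Languages whose frequency tables overlap a given character pool.
print(alphabet_languages(list("etaoinshrdlucmfw")))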
from __future__ import annotations\n\nfrom codecs import BOM_UTF8, BOM_UTF16_BE, BOM_UTF16_LE, BOM_UTF32_BE, BOM_UTF32_LE\nfrom encodings.aliases import aliases\nfrom re import IGNORECASE\nfrom re import compile as re_compile\n\n# Contain for each eligible encoding a list of/item bytes SIG/BOM\nENCODING_MARKS: dict[str, bytes | list[bytes]] = {\n "utf_8": BOM_UTF8,\n "utf_7": [\n b"\x2b\x2f\x76\x38",\n b"\x2b\x2f\x76\x39",\n b"\x2b\x2f\x76\x2b",\n b"\x2b\x2f\x76\x2f",\n b"\x2b\x2f\x76\x38\x2d",\n ],\n "gb18030": b"\x84\x31\x95\x33",\n "utf_32": [BOM_UTF32_BE, BOM_UTF32_LE],\n "utf_16": [BOM_UTF16_BE, BOM_UTF16_LE],\n}\n\nTOO_SMALL_SEQUENCE: int = 32\nTOO_BIG_SEQUENCE: int = int(10e6)\n\nUTF8_MAXIMAL_ALLOCATION: int = 1_112_064\n\n# Up-to-date Unicode ucd/15.0.0\nUNICODE_RANGES_COMBINED: dict[str, range] = {\n "Control character": range(32),\n "Basic Latin": range(32, 128),\n "Latin-1 Supplement": range(128, 256),\n "Latin Extended-A": range(256, 384),\n "Latin Extended-B": range(384, 592),\n "IPA Extensions": range(592, 688),\n "Spacing Modifier Letters": range(688, 768),\n "Combining Diacritical Marks": range(768, 880),\n "Greek and Coptic": range(880, 1024),\n "Cyrillic": range(1024, 1280),\n "Cyrillic Supplement": range(1280, 1328),\n "Armenian": range(1328, 1424),\n "Hebrew": range(1424, 1536),\n "Arabic": range(1536, 1792),\n "Syriac": range(1792, 1872),\n "Arabic Supplement": range(1872, 1920),\n "Thaana": range(1920, 1984),\n "NKo": range(1984, 2048),\n "Samaritan": range(2048, 2112),\n "Mandaic": range(2112, 2144),\n "Syriac Supplement": range(2144, 2160),\n "Arabic Extended-B": range(2160, 2208),\n "Arabic Extended-A": range(2208, 2304),\n "Devanagari": range(2304, 2432),\n "Bengali": range(2432, 2560),\n "Gurmukhi": range(2560, 2688),\n "Gujarati": range(2688, 2816),\n "Oriya": range(2816, 2944),\n "Tamil": range(2944, 3072),\n "Telugu": range(3072, 3200),\n "Kannada": range(3200, 3328),\n "Malayalam": range(3328, 3456),\n "Sinhala": range(3456, 3584),\n "Thai": range(3584, 3712),\n "Lao": range(3712, 3840),\n "Tibetan": range(3840, 4096),\n "Myanmar": range(4096, 4256),\n "Georgian": range(4256, 4352),\n "Hangul Jamo": range(4352, 4608),\n "Ethiopic": range(4608, 4992),\n "Ethiopic Supplement": range(4992, 5024),\n "Cherokee": range(5024, 5120),\n "Unified Canadian Aboriginal Syllabics": range(5120, 5760),\n "Ogham": range(5760, 5792),\n "Runic": range(5792, 5888),\n "Tagalog": range(5888, 5920),\n "Hanunoo": range(5920, 5952),\n "Buhid": range(5952, 5984),\n "Tagbanwa": range(5984, 6016),\n "Khmer": range(6016, 6144),\n "Mongolian": range(6144, 6320),\n "Unified Canadian Aboriginal Syllabics Extended": range(6320, 6400),\n "Limbu": range(6400, 6480),\n "Tai Le": range(6480, 6528),\n "New Tai Lue": range(6528, 6624),\n "Khmer Symbols": range(6624, 6656),\n "Buginese": range(6656, 6688),\n "Tai Tham": range(6688, 6832),\n "Combining Diacritical Marks Extended": range(6832, 6912),\n "Balinese": range(6912, 7040),\n "Sundanese": range(7040, 7104),\n "Batak": range(7104, 7168),\n "Lepcha": range(7168, 7248),\n "Ol Chiki": range(7248, 7296),\n "Cyrillic Extended-C": range(7296, 7312),\n "Georgian Extended": range(7312, 7360),\n "Sundanese Supplement": range(7360, 7376),\n "Vedic Extensions": range(7376, 7424),\n "Phonetic Extensions": range(7424, 7552),\n "Phonetic Extensions Supplement": range(7552, 7616),\n "Combining Diacritical Marks Supplement": range(7616, 7680),\n "Latin Extended Additional": range(7680, 7936),\n "Greek Extended": range(7936, 8192),\n "General Punctuation": 
range(8192, 8304),\n "Superscripts and Subscripts": range(8304, 8352),\n "Currency Symbols": range(8352, 8400),\n "Combining Diacritical Marks for Symbols": range(8400, 8448),\n "Letterlike Symbols": range(8448, 8528),\n "Number Forms": range(8528, 8592),\n "Arrows": range(8592, 8704),\n "Mathematical Operators": range(8704, 8960),\n "Miscellaneous Technical": range(8960, 9216),\n "Control Pictures": range(9216, 9280),\n "Optical Character Recognition": range(9280, 9312),\n "Enclosed Alphanumerics": range(9312, 9472),\n "Box Drawing": range(9472, 9600),\n "Block Elements": range(9600, 9632),\n "Geometric Shapes": range(9632, 9728),\n "Miscellaneous Symbols": range(9728, 9984),\n "Dingbats": range(9984, 10176),\n "Miscellaneous Mathematical Symbols-A": range(10176, 10224),\n "Supplemental Arrows-A": range(10224, 10240),\n "Braille Patterns": range(10240, 10496),\n "Supplemental Arrows-B": range(10496, 10624),\n "Miscellaneous Mathematical Symbols-B": range(10624, 10752),\n "Supplemental Mathematical Operators": range(10752, 11008),\n "Miscellaneous Symbols and Arrows": range(11008, 11264),\n "Glagolitic": range(11264, 11360),\n "Latin Extended-C": range(11360, 11392),\n "Coptic": range(11392, 11520),\n "Georgian Supplement": range(11520, 11568),\n "Tifinagh": range(11568, 11648),\n "Ethiopic Extended": range(11648, 11744),\n "Cyrillic Extended-A": range(11744, 11776),\n "Supplemental Punctuation": range(11776, 11904),\n "CJK Radicals Supplement": range(11904, 12032),\n "Kangxi Radicals": range(12032, 12256),\n "Ideographic Description Characters": range(12272, 12288),\n "CJK Symbols and Punctuation": range(12288, 12352),\n "Hiragana": range(12352, 12448),\n "Katakana": range(12448, 12544),\n "Bopomofo": range(12544, 12592),\n "Hangul Compatibility Jamo": range(12592, 12688),\n "Kanbun": range(12688, 12704),\n "Bopomofo Extended": range(12704, 12736),\n "CJK Strokes": range(12736, 12784),\n "Katakana Phonetic Extensions": range(12784, 12800),\n "Enclosed CJK Letters and Months": range(12800, 13056),\n "CJK Compatibility": range(13056, 13312),\n "CJK Unified Ideographs Extension A": range(13312, 19904),\n "Yijing Hexagram Symbols": range(19904, 19968),\n "CJK Unified Ideographs": range(19968, 40960),\n "Yi Syllables": range(40960, 42128),\n "Yi Radicals": range(42128, 42192),\n "Lisu": range(42192, 42240),\n "Vai": range(42240, 42560),\n "Cyrillic Extended-B": range(42560, 42656),\n "Bamum": range(42656, 42752),\n "Modifier Tone Letters": range(42752, 42784),\n "Latin Extended-D": range(42784, 43008),\n "Syloti Nagri": range(43008, 43056),\n "Common Indic Number Forms": range(43056, 43072),\n "Phags-pa": range(43072, 43136),\n "Saurashtra": range(43136, 43232),\n "Devanagari Extended": range(43232, 43264),\n "Kayah Li": range(43264, 43312),\n "Rejang": range(43312, 43360),\n "Hangul Jamo Extended-A": range(43360, 43392),\n "Javanese": range(43392, 43488),\n "Myanmar Extended-B": range(43488, 43520),\n "Cham": range(43520, 43616),\n "Myanmar Extended-A": range(43616, 43648),\n "Tai Viet": range(43648, 43744),\n "Meetei Mayek Extensions": range(43744, 43776),\n "Ethiopic Extended-A": range(43776, 43824),\n "Latin Extended-E": range(43824, 43888),\n "Cherokee Supplement": range(43888, 43968),\n "Meetei Mayek": range(43968, 44032),\n "Hangul Syllables": range(44032, 55216),\n "Hangul Jamo Extended-B": range(55216, 55296),\n "High Surrogates": range(55296, 56192),\n "High Private Use Surrogates": range(56192, 56320),\n "Low Surrogates": range(56320, 57344),\n "Private Use Area": range(57344, 
63744),\n "CJK Compatibility Ideographs": range(63744, 64256),\n "Alphabetic Presentation Forms": range(64256, 64336),\n "Arabic Presentation Forms-A": range(64336, 65024),\n "Variation Selectors": range(65024, 65040),\n "Vertical Forms": range(65040, 65056),\n "Combining Half Marks": range(65056, 65072),\n "CJK Compatibility Forms": range(65072, 65104),\n "Small Form Variants": range(65104, 65136),\n "Arabic Presentation Forms-B": range(65136, 65280),\n "Halfwidth and Fullwidth Forms": range(65280, 65520),\n "Specials": range(65520, 65536),\n "Linear B Syllabary": range(65536, 65664),\n "Linear B Ideograms": range(65664, 65792),\n "Aegean Numbers": range(65792, 65856),\n "Ancient Greek Numbers": range(65856, 65936),\n "Ancient Symbols": range(65936, 66000),\n "Phaistos Disc": range(66000, 66048),\n "Lycian": range(66176, 66208),\n "Carian": range(66208, 66272),\n "Coptic Epact Numbers": range(66272, 66304),\n "Old Italic": range(66304, 66352),\n "Gothic": range(66352, 66384),\n "Old Permic": range(66384, 66432),\n "Ugaritic": range(66432, 66464),\n "Old Persian": range(66464, 66528),\n "Deseret": range(66560, 66640),\n "Shavian": range(66640, 66688),\n "Osmanya": range(66688, 66736),\n "Osage": range(66736, 66816),\n "Elbasan": range(66816, 66864),\n "Caucasian Albanian": range(66864, 66928),\n "Vithkuqi": range(66928, 67008),\n "Linear A": range(67072, 67456),\n "Latin Extended-F": range(67456, 67520),\n "Cypriot Syllabary": range(67584, 67648),\n "Imperial Aramaic": range(67648, 67680),\n "Palmyrene": range(67680, 67712),\n "Nabataean": range(67712, 67760),\n "Hatran": range(67808, 67840),\n "Phoenician": range(67840, 67872),\n "Lydian": range(67872, 67904),\n "Meroitic Hieroglyphs": range(67968, 68000),\n "Meroitic Cursive": range(68000, 68096),\n "Kharoshthi": range(68096, 68192),\n "Old South Arabian": range(68192, 68224),\n "Old North Arabian": range(68224, 68256),\n "Manichaean": range(68288, 68352),\n "Avestan": range(68352, 68416),\n "Inscriptional Parthian": range(68416, 68448),\n "Inscriptional Pahlavi": range(68448, 68480),\n "Psalter Pahlavi": range(68480, 68528),\n "Old Turkic": range(68608, 68688),\n "Old Hungarian": range(68736, 68864),\n "Hanifi Rohingya": range(68864, 68928),\n "Rumi Numeral Symbols": range(69216, 69248),\n "Yezidi": range(69248, 69312),\n "Arabic Extended-C": range(69312, 69376),\n "Old Sogdian": range(69376, 69424),\n "Sogdian": range(69424, 69488),\n "Old Uyghur": range(69488, 69552),\n "Chorasmian": range(69552, 69600),\n "Elymaic": range(69600, 69632),\n "Brahmi": range(69632, 69760),\n "Kaithi": range(69760, 69840),\n "Sora Sompeng": range(69840, 69888),\n "Chakma": range(69888, 69968),\n "Mahajani": range(69968, 70016),\n "Sharada": range(70016, 70112),\n "Sinhala Archaic Numbers": range(70112, 70144),\n "Khojki": range(70144, 70224),\n "Multani": range(70272, 70320),\n "Khudawadi": range(70320, 70400),\n "Grantha": range(70400, 70528),\n "Newa": range(70656, 70784),\n "Tirhuta": range(70784, 70880),\n "Siddham": range(71040, 71168),\n "Modi": range(71168, 71264),\n "Mongolian Supplement": range(71264, 71296),\n "Takri": range(71296, 71376),\n "Ahom": range(71424, 71504),\n "Dogra": range(71680, 71760),\n "Warang Citi": range(71840, 71936),\n "Dives Akuru": range(71936, 72032),\n "Nandinagari": range(72096, 72192),\n "Zanabazar Square": range(72192, 72272),\n "Soyombo": range(72272, 72368),\n "Unified Canadian Aboriginal Syllabics Extended-A": range(72368, 72384),\n "Pau Cin Hau": range(72384, 72448),\n "Devanagari Extended-A": range(72448, 
72544),\n "Bhaiksuki": range(72704, 72816),\n "Marchen": range(72816, 72896),\n "Masaram Gondi": range(72960, 73056),\n "Gunjala Gondi": range(73056, 73136),\n "Makasar": range(73440, 73472),\n "Kawi": range(73472, 73568),\n "Lisu Supplement": range(73648, 73664),\n "Tamil Supplement": range(73664, 73728),\n "Cuneiform": range(73728, 74752),\n "Cuneiform Numbers and Punctuation": range(74752, 74880),\n "Early Dynastic Cuneiform": range(74880, 75088),\n "Cypro-Minoan": range(77712, 77824),\n "Egyptian Hieroglyphs": range(77824, 78896),\n "Egyptian Hieroglyph Format Controls": range(78896, 78944),\n "Anatolian Hieroglyphs": range(82944, 83584),\n "Bamum Supplement": range(92160, 92736),\n "Mro": range(92736, 92784),\n "Tangsa": range(92784, 92880),\n "Bassa Vah": range(92880, 92928),\n "Pahawh Hmong": range(92928, 93072),\n "Medefaidrin": range(93760, 93856),\n "Miao": range(93952, 94112),\n "Ideographic Symbols and Punctuation": range(94176, 94208),\n "Tangut": range(94208, 100352),\n "Tangut Components": range(100352, 101120),\n "Khitan Small Script": range(101120, 101632),\n "Tangut Supplement": range(101632, 101760),\n "Kana Extended-B": range(110576, 110592),\n "Kana Supplement": range(110592, 110848),\n "Kana Extended-A": range(110848, 110896),\n "Small Kana Extension": range(110896, 110960),\n "Nushu": range(110960, 111360),\n "Duployan": range(113664, 113824),\n "Shorthand Format Controls": range(113824, 113840),\n "Znamenny Musical Notation": range(118528, 118736),\n "Byzantine Musical Symbols": range(118784, 119040),\n "Musical Symbols": range(119040, 119296),\n "Ancient Greek Musical Notation": range(119296, 119376),\n "Kaktovik Numerals": range(119488, 119520),\n "Mayan Numerals": range(119520, 119552),\n "Tai Xuan Jing Symbols": range(119552, 119648),\n "Counting Rod Numerals": range(119648, 119680),\n "Mathematical Alphanumeric Symbols": range(119808, 120832),\n "Sutton SignWriting": range(120832, 121520),\n "Latin Extended-G": range(122624, 122880),\n "Glagolitic Supplement": range(122880, 122928),\n "Cyrillic Extended-D": range(122928, 123024),\n "Nyiakeng Puachue Hmong": range(123136, 123216),\n "Toto": range(123536, 123584),\n "Wancho": range(123584, 123648),\n "Nag Mundari": range(124112, 124160),\n "Ethiopic Extended-B": range(124896, 124928),\n "Mende Kikakui": range(124928, 125152),\n "Adlam": range(125184, 125280),\n "Indic Siyaq Numbers": range(126064, 126144),\n "Ottoman Siyaq Numbers": range(126208, 126288),\n "Arabic Mathematical Alphabetic Symbols": range(126464, 126720),\n "Mahjong Tiles": range(126976, 127024),\n "Domino Tiles": range(127024, 127136),\n "Playing Cards": range(127136, 127232),\n "Enclosed Alphanumeric Supplement": range(127232, 127488),\n "Enclosed Ideographic Supplement": range(127488, 127744),\n "Miscellaneous Symbols and Pictographs": range(127744, 128512),\n "Emoticons range(Emoji)": range(128512, 128592),\n "Ornamental Dingbats": range(128592, 128640),\n "Transport and Map Symbols": range(128640, 128768),\n "Alchemical Symbols": range(128768, 128896),\n "Geometric Shapes Extended": range(128896, 129024),\n "Supplemental Arrows-C": range(129024, 129280),\n "Supplemental Symbols and Pictographs": range(129280, 129536),\n "Chess Symbols": range(129536, 129648),\n "Symbols and Pictographs Extended-A": range(129648, 129792),\n "Symbols for Legacy Computing": range(129792, 130048),\n "CJK Unified Ideographs Extension B": range(131072, 173792),\n "CJK Unified Ideographs Extension C": range(173824, 177984),\n "CJK Unified Ideographs Extension D": 
range(177984, 178208),\n "CJK Unified Ideographs Extension E": range(178208, 183984),\n "CJK Unified Ideographs Extension F": range(183984, 191472),\n "CJK Compatibility Ideographs Supplement": range(194560, 195104),\n "CJK Unified Ideographs Extension G": range(196608, 201552),\n "CJK Unified Ideographs Extension H": range(201552, 205744),\n "Tags": range(917504, 917632),\n "Variation Selectors Supplement": range(917760, 918000),\n "Supplementary Private Use Area-A": range(983040, 1048576),\n "Supplementary Private Use Area-B": range(1048576, 1114112),\n}\n\n\nUNICODE_SECONDARY_RANGE_KEYWORD: list[str] = [\n "Supplement",\n "Extended",\n "Extensions",\n "Modifier",\n "Marks",\n "Punctuation",\n "Symbols",\n "Forms",\n "Operators",\n "Miscellaneous",\n "Drawing",\n "Block",\n "Shapes",\n "Supplemental",\n "Tags",\n]\n\nRE_POSSIBLE_ENCODING_INDICATION = re_compile(\n r"(?:(?:encoding)|(?:charset)|(?:coding))(?:[\:= ]{1,10})(?:[\"\']?)([a-zA-Z0-9\-_]+)(?:[\"\']?)",\n IGNORECASE,\n)\n\nIANA_NO_ALIASES = [\n "cp720",\n "cp737",\n "cp856",\n "cp874",\n "cp875",\n "cp1006",\n "koi8_r",\n "koi8_t",\n "koi8_u",\n]\n\nIANA_SUPPORTED: list[str] = sorted(\n filter(\n lambda x: x.endswith("_codec") is False\n and x not in {"rot_13", "tactis", "mbcs"},\n list(set(aliases.values())) + IANA_NO_ALIASES,\n )\n)\n\nIANA_SUPPORTED_COUNT: int = len(IANA_SUPPORTED)\n\n# pre-computed code page that are similar using the function cp_similarity.\nIANA_SUPPORTED_SIMILAR: dict[str, list[str]] = {\n "cp037": ["cp1026", "cp1140", "cp273", "cp500"],\n "cp1026": ["cp037", "cp1140", "cp273", "cp500"],\n "cp1125": ["cp866"],\n "cp1140": ["cp037", "cp1026", "cp273", "cp500"],\n "cp1250": ["iso8859_2"],\n "cp1251": ["kz1048", "ptcp154"],\n "cp1252": ["iso8859_15", "iso8859_9", "latin_1"],\n "cp1253": ["iso8859_7"],\n "cp1254": ["iso8859_15", "iso8859_9", "latin_1"],\n "cp1257": ["iso8859_13"],\n "cp273": ["cp037", "cp1026", "cp1140", "cp500"],\n "cp437": ["cp850", "cp858", "cp860", "cp861", "cp862", "cp863", "cp865"],\n "cp500": ["cp037", "cp1026", "cp1140", "cp273"],\n "cp850": ["cp437", "cp857", "cp858", "cp865"],\n "cp857": ["cp850", "cp858", "cp865"],\n "cp858": ["cp437", "cp850", "cp857", "cp865"],\n "cp860": ["cp437", "cp861", "cp862", "cp863", "cp865"],\n "cp861": ["cp437", "cp860", "cp862", "cp863", "cp865"],\n "cp862": ["cp437", "cp860", "cp861", "cp863", "cp865"],\n "cp863": ["cp437", "cp860", "cp861", "cp862", "cp865"],\n "cp865": ["cp437", "cp850", "cp857", "cp858", "cp860", "cp861", "cp862", "cp863"],\n "cp866": ["cp1125"],\n "iso8859_10": ["iso8859_14", "iso8859_15", "iso8859_4", "iso8859_9", "latin_1"],\n "iso8859_11": ["tis_620"],\n "iso8859_13": ["cp1257"],\n "iso8859_14": [\n "iso8859_10",\n "iso8859_15",\n "iso8859_16",\n "iso8859_3",\n "iso8859_9",\n "latin_1",\n ],\n "iso8859_15": [\n "cp1252",\n "cp1254",\n "iso8859_10",\n "iso8859_14",\n "iso8859_16",\n "iso8859_3",\n "iso8859_9",\n "latin_1",\n ],\n "iso8859_16": [\n "iso8859_14",\n "iso8859_15",\n "iso8859_2",\n "iso8859_3",\n "iso8859_9",\n "latin_1",\n ],\n "iso8859_2": ["cp1250", "iso8859_16", "iso8859_4"],\n "iso8859_3": ["iso8859_14", "iso8859_15", "iso8859_16", "iso8859_9", "latin_1"],\n "iso8859_4": ["iso8859_10", "iso8859_2", "iso8859_9", "latin_1"],\n "iso8859_7": ["cp1253"],\n "iso8859_9": [\n "cp1252",\n "cp1254",\n "cp1258",\n "iso8859_10",\n "iso8859_14",\n "iso8859_15",\n "iso8859_16",\n "iso8859_3",\n "iso8859_4",\n "latin_1",\n ],\n "kz1048": ["cp1251", "ptcp154"],\n "latin_1": [\n "cp1252",\n "cp1254",\n "cp1258",\n "iso8859_10",\n 
"iso8859_14",\n "iso8859_15",\n "iso8859_16",\n "iso8859_3",\n "iso8859_4",\n "iso8859_9",\n ],\n "mac_iceland": ["mac_roman", "mac_turkish"],\n "mac_roman": ["mac_iceland", "mac_turkish"],\n "mac_turkish": ["mac_iceland", "mac_roman"],\n "ptcp154": ["cp1251", "kz1048"],\n "tis_620": ["iso8859_11"],\n}\n\n\nCHARDET_CORRESPONDENCE: dict[str, str] = {\n "iso2022_kr": "ISO-2022-KR",\n "iso2022_jp": "ISO-2022-JP",\n "euc_kr": "EUC-KR",\n "tis_620": "TIS-620",\n "utf_32": "UTF-32",\n "euc_jp": "EUC-JP",\n "koi8_r": "KOI8-R",\n "iso8859_1": "ISO-8859-1",\n "iso8859_2": "ISO-8859-2",\n "iso8859_5": "ISO-8859-5",\n "iso8859_6": "ISO-8859-6",\n "iso8859_7": "ISO-8859-7",\n "iso8859_8": "ISO-8859-8",\n "utf_16": "UTF-16",\n "cp855": "IBM855",\n "mac_cyrillic": "MacCyrillic",\n "gb2312": "GB2312",\n "gb18030": "GB18030",\n "cp932": "CP932",\n "cp866": "IBM866",\n "utf_8": "utf-8",\n "utf_8_sig": "UTF-8-SIG",\n "shift_jis": "SHIFT_JIS",\n "big5": "Big5",\n "cp1250": "windows-1250",\n "cp1251": "windows-1251",\n "cp1252": "Windows-1252",\n "cp1253": "windows-1253",\n "cp1255": "windows-1255",\n "cp1256": "windows-1256",\n "cp1254": "Windows-1254",\n "cp949": "CP949",\n}\n\n\nCOMMON_SAFE_ASCII_CHARACTERS: set[str] = {\n "<",\n ">",\n "=",\n ":",\n "/",\n "&",\n ";",\n "{",\n "}",\n "[",\n "]",\n ",",\n "|",\n '"',\n "-",\n "(",\n ")",\n}\n\n# Sample character sets — replace with full lists if needed\nCOMMON_CHINESE_CHARACTERS = "的一是在不了有和人这中大为上个国我以要他时来用们生到作地于出就分对成会可主发年动同工也能下过子说产种面而方后多定行学法所民得经十三之进着等部度家电力里如水化高自二理起小物现实加量都两体制机当使点从业本去把性好应开它合还因由其些然前外天政四日那社义事平形相全表间样与关各重新线内数正心反你明看原又么利比或但质气第向道命此变条只没结解问意建月公无系军很情者最立代想已通并提直题党程展五果料象员革位入常文总次品式活设及管特件长求老头基资边流路级少图山统接知较将组见计别她手角期根论运农指几九区强放决西被干做必战先回则任取据处队南给色光门即保治北造百规热领七海口东导器压志世金增争济阶油思术极交受联什认六共权收证改清己美再采转更单风切打白教速花带安场身车例真务具万每目至达走积示议声报斗完类八离华名确才科张信马节话米整空元况今集温传土许步群广石记需段研界拉林律叫且究观越织装影算低持音众书布复容儿须际商非验连断深难近矿千周委素技备半办青省列习响约支般史感劳便团往酸历市克何除消构府太准精值号率族维划选标写存候毛亲快效斯院查江型眼王按格养易置派层片始却专状育厂京识适属圆包火住调满县局照参红细引听该铁价严龙飞"\n\nCOMMON_JAPANESE_CHARACTERS = "日一国年大十二本中長出三時行見月分後前生五間上東四今金九入学高円子外八六下来気小七山話女北午百書先名川千水半男西電校語土木聞食車何南万毎白天母火右読友左休父雨"\n\nCOMMON_KOREAN_CHARACTERS = "一二三四五六七八九十百千萬上下左右中人女子大小山川日月火水木金土父母天地國名年時文校學生"\n\n# Combine all into a set\nCOMMON_CJK_CHARACTERS = set(\n "".join(\n [\n COMMON_CHINESE_CHARACTERS,\n COMMON_JAPANESE_CHARACTERS,\n COMMON_KOREAN_CHARACTERS,\n ]\n )\n)\n\nKO_NAMES: set[str] = {"johab", "cp949", "euc_kr"}\nZH_NAMES: set[str] = {"big5", "cp950", "big5hkscs", "hz"}\n\n# Logging LEVEL below DEBUG\nTRACE: int = 5\n\n\n# Language label that contain the em dash "—"\n# character are to be considered alternative seq to origin\nFREQUENCIES: dict[str, list[str]] = {\n "English": [\n "e",\n "a",\n "t",\n "i",\n "o",\n "n",\n "s",\n "r",\n "h",\n "l",\n "d",\n "c",\n "u",\n "m",\n "f",\n "p",\n "g",\n "w",\n "y",\n "b",\n "v",\n "k",\n "x",\n "j",\n "z",\n "q",\n ],\n "English—": [\n "e",\n "a",\n "t",\n "i",\n "o",\n "n",\n "s",\n "r",\n "h",\n "l",\n "d",\n "c",\n "m",\n "u",\n "f",\n "p",\n "g",\n "w",\n "b",\n "y",\n "v",\n "k",\n "j",\n "x",\n "z",\n "q",\n ],\n "German": [\n "e",\n "n",\n "i",\n "r",\n "s",\n "t",\n "a",\n "d",\n "h",\n "u",\n "l",\n "g",\n "o",\n "c",\n "m",\n "b",\n "f",\n "k",\n "w",\n "z",\n "p",\n "v",\n "ü",\n "ä",\n "ö",\n "j",\n ],\n "French": [\n "e",\n "a",\n "s",\n "n",\n "i",\n "t",\n "r",\n "l",\n "u",\n "o",\n "d",\n "c",\n "p",\n "m",\n "é",\n "v",\n "g",\n "f",\n "b",\n "h",\n "q",\n "à",\n "x",\n "è",\n "y",\n "j",\n ],\n "Dutch": [\n "e",\n "n",\n "a",\n "i",\n "r",\n "t",\n "o",\n "d",\n "s",\n "l",\n "g",\n "h",\n "v",\n "m",\n "u",\n "k",\n "c",\n 
"p",\n "b",\n "w",\n "j",\n "z",\n "f",\n "y",\n "x",\n "ë",\n ],\n "Italian": [\n "e",\n "i",\n "a",\n "o",\n "n",\n "l",\n "t",\n "r",\n "s",\n "c",\n "d",\n "u",\n "p",\n "m",\n "g",\n "v",\n "f",\n "b",\n "z",\n "h",\n "q",\n "è",\n "à",\n "k",\n "y",\n "ò",\n ],\n "Polish": [\n "a",\n "i",\n "o",\n "e",\n "n",\n "r",\n "z",\n "w",\n "s",\n "c",\n "t",\n "k",\n "y",\n "d",\n "p",\n "m",\n "u",\n "l",\n "j",\n "ł",\n "g",\n "b",\n "h",\n "ą",\n "ę",\n "ó",\n ],\n "Spanish": [\n "e",\n "a",\n "o",\n "n",\n "s",\n "r",\n "i",\n "l",\n "d",\n "t",\n "c",\n "u",\n "m",\n "p",\n "b",\n "g",\n "v",\n "f",\n "y",\n "ó",\n "h",\n "q",\n "í",\n "j",\n "z",\n "á",\n ],\n "Russian": [\n "о",\n "а",\n "е",\n "и",\n "н",\n "с",\n "т",\n "р",\n "в",\n "л",\n "к",\n "м",\n "д",\n "п",\n "у",\n "г",\n "я",\n "ы",\n "з",\n "б",\n "й",\n "ь",\n "ч",\n "х",\n "ж",\n "ц",\n ],\n # Jap-Kanji\n "Japanese": [\n "人",\n "一",\n "大",\n "亅",\n "丁",\n "丨",\n "竹",\n "笑",\n "口",\n "日",\n "今",\n "二",\n "彳",\n "行",\n "十",\n "土",\n "丶",\n "寸",\n "寺",\n "時",\n "乙",\n "丿",\n "乂",\n "气",\n "気",\n "冂",\n "巾",\n "亠",\n "市",\n "目",\n "儿",\n "見",\n "八",\n "小",\n "凵",\n "県",\n "月",\n "彐",\n "門",\n "間",\n "木",\n "東",\n "山",\n "出",\n "本",\n "中",\n "刀",\n "分",\n "耳",\n "又",\n "取",\n "最",\n "言",\n "田",\n "心",\n "思",\n "刂",\n "前",\n "京",\n "尹",\n "事",\n "生",\n "厶",\n "云",\n "会",\n "未",\n "来",\n "白",\n "冫",\n "楽",\n "灬",\n "馬",\n "尸",\n "尺",\n "駅",\n "明",\n "耂",\n "者",\n "了",\n "阝",\n "都",\n "高",\n "卜",\n "占",\n "厂",\n "广",\n "店",\n "子",\n "申",\n "奄",\n "亻",\n "俺",\n "上",\n "方",\n "冖",\n "学",\n "衣",\n "艮",\n "食",\n "自",\n ],\n # Jap-Katakana\n "Japanese—": [\n "ー",\n "ン",\n "ス",\n "・",\n "ル",\n "ト",\n "リ",\n "イ",\n "ア",\n "ラ",\n "ッ",\n "ク",\n "ド",\n "シ",\n "レ",\n "ジ",\n "タ",\n "フ",\n "ロ",\n "カ",\n "テ",\n "マ",\n "ィ",\n "グ",\n "バ",\n "ム",\n "プ",\n "オ",\n "コ",\n "デ",\n "ニ",\n "ウ",\n "メ",\n "サ",\n "ビ",\n "ナ",\n "ブ",\n "ャ",\n "エ",\n "ュ",\n "チ",\n "キ",\n "ズ",\n "ダ",\n "パ",\n "ミ",\n "ェ",\n "ョ",\n "ハ",\n "セ",\n "ベ",\n "ガ",\n "モ",\n "ツ",\n "ネ",\n "ボ",\n "ソ",\n "ノ",\n "ァ",\n "ヴ",\n "ワ",\n "ポ",\n "ペ",\n "ピ",\n "ケ",\n "ゴ",\n "ギ",\n "ザ",\n "ホ",\n "ゲ",\n "ォ",\n "ヤ",\n "ヒ",\n "ユ",\n "ヨ",\n "ヘ",\n "ゼ",\n "ヌ",\n "ゥ",\n "ゾ",\n "ヶ",\n "ヂ",\n "ヲ",\n "ヅ",\n "ヵ",\n "ヱ",\n "ヰ",\n "ヮ",\n "ヽ",\n "゠",\n "ヾ",\n "ヷ",\n "ヿ",\n "ヸ",\n "ヹ",\n "ヺ",\n ],\n # Jap-Hiragana\n "Japanese——": [\n "の",\n "に",\n "る",\n "た",\n "と",\n "は",\n "し",\n "い",\n "を",\n "で",\n "て",\n "が",\n "な",\n "れ",\n "か",\n "ら",\n "さ",\n "っ",\n "り",\n "す",\n "あ",\n "も",\n "こ",\n "ま",\n "う",\n "く",\n "よ",\n "き",\n "ん",\n "め",\n "お",\n "け",\n "そ",\n "つ",\n "だ",\n "や",\n "え",\n "ど",\n "わ",\n "ち",\n "み",\n "せ",\n "じ",\n "ば",\n "へ",\n "び",\n "ず",\n "ろ",\n "ほ",\n "げ",\n "む",\n "べ",\n "ひ",\n "ょ",\n "ゆ",\n "ぶ",\n "ご",\n "ゃ",\n "ね",\n "ふ",\n "ぐ",\n "ぎ",\n "ぼ",\n "ゅ",\n "づ",\n "ざ",\n "ぞ",\n "ぬ",\n "ぜ",\n "ぱ",\n "ぽ",\n "ぷ",\n "ぴ",\n "ぃ",\n "ぁ",\n "ぇ",\n "ぺ",\n "ゞ",\n "ぢ",\n "ぉ",\n "ぅ",\n "ゐ",\n "ゝ",\n "ゑ",\n "゛",\n "゜",\n "ゎ",\n "ゔ",\n "゚",\n "ゟ",\n "゙",\n "ゕ",\n "ゖ",\n ],\n "Portuguese": [\n "a",\n "e",\n "o",\n "s",\n "i",\n "r",\n "d",\n "n",\n "t",\n "m",\n "u",\n "c",\n "l",\n "p",\n "g",\n "v",\n "b",\n "f",\n "h",\n "ã",\n "q",\n "é",\n "ç",\n "á",\n "z",\n "í",\n ],\n "Swedish": [\n "e",\n "a",\n "n",\n "r",\n "t",\n "s",\n "i",\n "l",\n "d",\n "o",\n "m",\n "k",\n "g",\n "v",\n "h",\n "f",\n "u",\n "p",\n "ä",\n "c",\n "b",\n "ö",\n "å",\n "y",\n "j",\n "x",\n ],\n "Chinese": [\n "的",\n "一",\n "是",\n "不",\n "了",\n "在",\n "人",\n "有",\n "我",\n "他",\n "这",\n "个",\n "们",\n "中",\n "来",\n "上",\n "大",\n 
"为",\n "和",\n "国",\n "地",\n "到",\n "以",\n "说",\n "时",\n "要",\n "就",\n "出",\n "会",\n "可",\n "也",\n "你",\n "对",\n "生",\n "能",\n "而",\n "子",\n "那",\n "得",\n "于",\n "着",\n "下",\n "自",\n "之",\n "年",\n "过",\n "发",\n "后",\n "作",\n "里",\n "用",\n "道",\n "行",\n "所",\n "然",\n "家",\n "种",\n "事",\n "成",\n "方",\n "多",\n "经",\n "么",\n "去",\n "法",\n "学",\n "如",\n "都",\n "同",\n "现",\n "当",\n "没",\n "动",\n "面",\n "起",\n "看",\n "定",\n "天",\n "分",\n "还",\n "进",\n "好",\n "小",\n "部",\n "其",\n "些",\n "主",\n "样",\n "理",\n "心",\n "她",\n "本",\n "前",\n "开",\n "但",\n "因",\n "只",\n "从",\n "想",\n "实",\n ],\n "Ukrainian": [\n "о",\n "а",\n "н",\n "і",\n "и",\n "р",\n "в",\n "т",\n "е",\n "с",\n "к",\n "л",\n "у",\n "д",\n "м",\n "п",\n "з",\n "я",\n "ь",\n "б",\n "г",\n "й",\n "ч",\n "х",\n "ц",\n "ї",\n ],\n "Norwegian": [\n "e",\n "r",\n "n",\n "t",\n "a",\n "s",\n "i",\n "o",\n "l",\n "d",\n "g",\n "k",\n "m",\n "v",\n "f",\n "p",\n "u",\n "b",\n "h",\n "å",\n "y",\n "j",\n "ø",\n "c",\n "æ",\n "w",\n ],\n "Finnish": [\n "a",\n "i",\n "n",\n "t",\n "e",\n "s",\n "l",\n "o",\n "u",\n "k",\n "ä",\n "m",\n "r",\n "v",\n "j",\n "h",\n "p",\n "y",\n "d",\n "ö",\n "g",\n "c",\n "b",\n "f",\n "w",\n "z",\n ],\n "Vietnamese": [\n "n",\n "h",\n "t",\n "i",\n "c",\n "g",\n "a",\n "o",\n "u",\n "m",\n "l",\n "r",\n "à",\n "đ",\n "s",\n "e",\n "v",\n "p",\n "b",\n "y",\n "ư",\n "d",\n "á",\n "k",\n "ộ",\n "ế",\n ],\n "Czech": [\n "o",\n "e",\n "a",\n "n",\n "t",\n "s",\n "i",\n "l",\n "v",\n "r",\n "k",\n "d",\n "u",\n "m",\n "p",\n "í",\n "c",\n "h",\n "z",\n "á",\n "y",\n "j",\n "b",\n "ě",\n "é",\n "ř",\n ],\n "Hungarian": [\n "e",\n "a",\n "t",\n "l",\n "s",\n "n",\n "k",\n "r",\n "i",\n "o",\n "z",\n "á",\n "é",\n "g",\n "m",\n "b",\n "y",\n "v",\n "d",\n "h",\n "u",\n "p",\n "j",\n "ö",\n "f",\n "c",\n ],\n "Korean": [\n "이",\n "다",\n "에",\n "의",\n "는",\n "로",\n "하",\n "을",\n "가",\n "고",\n "지",\n "서",\n "한",\n "은",\n "기",\n "으",\n "년",\n "대",\n "사",\n "시",\n "를",\n "리",\n "도",\n "인",\n "스",\n "일",\n ],\n "Indonesian": [\n "a",\n "n",\n "e",\n "i",\n "r",\n "t",\n "u",\n "s",\n "d",\n "k",\n "m",\n "l",\n "g",\n "p",\n "b",\n "o",\n "h",\n "y",\n "j",\n "c",\n "w",\n "f",\n "v",\n "z",\n "x",\n "q",\n ],\n "Turkish": [\n "a",\n "e",\n "i",\n "n",\n "r",\n "l",\n "ı",\n "k",\n "d",\n "t",\n "s",\n "m",\n "y",\n "u",\n "o",\n "b",\n "ü",\n "ş",\n "v",\n "g",\n "z",\n "h",\n "c",\n "p",\n "ç",\n "ğ",\n ],\n "Romanian": [\n "e",\n "i",\n "a",\n "r",\n "n",\n "t",\n "u",\n "l",\n "o",\n "c",\n "s",\n "d",\n "p",\n "m",\n "ă",\n "f",\n "v",\n "î",\n "g",\n "b",\n "ș",\n "ț",\n "z",\n "h",\n "â",\n "j",\n ],\n "Farsi": [\n "ا",\n "ی",\n "ر",\n "د",\n "ن",\n "ه",\n "و",\n "م",\n "ت",\n "ب",\n "س",\n "ل",\n "ک",\n "ش",\n "ز",\n "ف",\n "گ",\n "ع",\n "خ",\n "ق",\n "ج",\n "آ",\n "پ",\n "ح",\n "ط",\n "ص",\n ],\n "Arabic": [\n "ا",\n "ل",\n "ي",\n "م",\n "و",\n "ن",\n "ر",\n "ت",\n "ب",\n "ة",\n "ع",\n "د",\n "س",\n "ف",\n "ه",\n "ك",\n "ق",\n "أ",\n "ح",\n "ج",\n "ش",\n "ط",\n "ص",\n "ى",\n "خ",\n "إ",\n ],\n "Danish": [\n "e",\n "r",\n "n",\n "t",\n "a",\n "i",\n "s",\n "d",\n "l",\n "o",\n "g",\n "m",\n "k",\n "f",\n "v",\n "u",\n "b",\n "h",\n "p",\n "å",\n "y",\n "ø",\n "æ",\n "c",\n "j",\n "w",\n ],\n "Serbian": [\n "а",\n "и",\n "о",\n "е",\n "н",\n "р",\n "с",\n "у",\n "т",\n "к",\n "ј",\n "в",\n "д",\n "м",\n "п",\n "л",\n "г",\n "з",\n "б",\n "a",\n "i",\n "e",\n "o",\n "n",\n "ц",\n "ш",\n ],\n "Lithuanian": [\n "i",\n "a",\n "s",\n "o",\n "r",\n "e",\n "t",\n "n",\n "u",\n "k",\n "m",\n "l",\n "p",\n "v",\n "d",\n "j",\n 
"g",\n "ė",\n "b",\n "y",\n "ų",\n "š",\n "ž",\n "c",\n "ą",\n "į",\n ],\n "Slovene": [\n "e",\n "a",\n "i",\n "o",\n "n",\n "r",\n "s",\n "l",\n "t",\n "j",\n "v",\n "k",\n "d",\n "p",\n "m",\n "u",\n "z",\n "b",\n "g",\n "h",\n "č",\n "c",\n "š",\n "ž",\n "f",\n "y",\n ],\n "Slovak": [\n "o",\n "a",\n "e",\n "n",\n "i",\n "r",\n "v",\n "t",\n "s",\n "l",\n "k",\n "d",\n "m",\n "p",\n "u",\n "c",\n "h",\n "j",\n "b",\n "z",\n "á",\n "y",\n "ý",\n "í",\n "č",\n "é",\n ],\n "Hebrew": [\n "י",\n "ו",\n "ה",\n "ל",\n "ר",\n "ב",\n "ת",\n "מ",\n "א",\n "ש",\n "נ",\n "ע",\n "ם",\n "ד",\n "ק",\n "ח",\n "פ",\n "ס",\n "כ",\n "ג",\n "ט",\n "צ",\n "ן",\n "ז",\n "ך",\n ],\n "Bulgarian": [\n "а",\n "и",\n "о",\n "е",\n "н",\n "т",\n "р",\n "с",\n "в",\n "л",\n "к",\n "д",\n "п",\n "м",\n "з",\n "г",\n "я",\n "ъ",\n "у",\n "б",\n "ч",\n "ц",\n "й",\n "ж",\n "щ",\n "х",\n ],\n "Croatian": [\n "a",\n "i",\n "o",\n "e",\n "n",\n "r",\n "j",\n "s",\n "t",\n "u",\n "k",\n "l",\n "v",\n "d",\n "m",\n "p",\n "g",\n "z",\n "b",\n "c",\n "č",\n "h",\n "š",\n "ž",\n "ć",\n "f",\n ],\n "Hindi": [\n "क",\n "र",\n "स",\n "न",\n "त",\n "म",\n "ह",\n "प",\n "य",\n "ल",\n "व",\n "ज",\n "द",\n "ग",\n "ब",\n "श",\n "ट",\n "अ",\n "ए",\n "थ",\n "भ",\n "ड",\n "च",\n "ध",\n "ष",\n "इ",\n ],\n "Estonian": [\n "a",\n "i",\n "e",\n "s",\n "t",\n "l",\n "u",\n "n",\n "o",\n "k",\n "r",\n "d",\n "m",\n "v",\n "g",\n "p",\n "j",\n "h",\n "ä",\n "b",\n "õ",\n "ü",\n "f",\n "c",\n "ö",\n "y",\n ],\n "Thai": [\n "า",\n "น",\n "ร",\n "อ",\n "ก",\n "เ",\n "ง",\n "ม",\n "ย",\n "ล",\n "ว",\n "ด",\n "ท",\n "ส",\n "ต",\n "ะ",\n "ป",\n "บ",\n "ค",\n "ห",\n "แ",\n "จ",\n "พ",\n "ช",\n "ข",\n "ใ",\n ],\n "Greek": [\n "α",\n "τ",\n "ο",\n "ι",\n "ε",\n "ν",\n "ρ",\n "σ",\n "κ",\n "η",\n "π",\n "ς",\n "υ",\n "μ",\n "λ",\n "ί",\n "ό",\n "ά",\n "γ",\n "έ",\n "δ",\n "ή",\n "ω",\n "χ",\n "θ",\n "ύ",\n ],\n "Tamil": [\n "க",\n "த",\n "ப",\n "ட",\n "ர",\n "ம",\n "ல",\n "ன",\n "வ",\n "ற",\n "ய",\n "ள",\n "ச",\n "ந",\n "இ",\n "ண",\n "அ",\n "ஆ",\n "ழ",\n "ங",\n "எ",\n "உ",\n "ஒ",\n "ஸ",\n ],\n "Kazakh": [\n "а",\n "ы",\n "е",\n "н",\n "т",\n "р",\n "л",\n "і",\n "д",\n "с",\n "м",\n "қ",\n "к",\n "о",\n "б",\n "и",\n "у",\n "ғ",\n "ж",\n "ң",\n "з",\n "ш",\n "й",\n "п",\n "г",\n "ө",\n ],\n}\n\nLANGUAGE_SUPPORTED_COUNT: int = len(FREQUENCIES)\n
|
.venv\Lib\site-packages\charset_normalizer\constant.py
|
constant.py
|
Python
| 44,728 | 0.95 | 0.002481 | 0.005528 |
python-kit
| 939 |
2023-10-05T20:40:25.991472
|
Apache-2.0
| false |
b7726e2b58c69f65168a35c7c9ec3653
|
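The constant.py record above is pure data. The sketch below shows how those tables are typically consumed; the BOM check is a simplified version of what api.py does, and the payloads are invented for illustration.

from codecs import BOM_UTF8
from charset_normalizer.constant import (
    ENCODING_MARKS,
    RE_POSSIBLE_ENCODING_INDICATION,
    UNICODE_RANGES_COMBINED,
)

payload = BOM_UTF8 + "déjà vu".encode("utf_8")

# Naive BOM/SIG sniffing: ENCODING_MARKS maps an encoding to one or many marks.
for encoding, marks in ENCODING_MARKS.items():
    candidates = [marks] if isinstance(marks, bytes) else marks
    if any(payload.startswith(mark) for mark in candidates):
        print("signature suggests:", encoding)

# Pre-emptive declaration lookup, e.g. in an XML/HTML prologue.
m = RE_POSSIBLE_ENCODING_INDICATION.search('<?xml version="1.0" encoding="ISO-8859-1"?>')
print(m.group(1) if m else None)  # expected: ISO-8859-1

# Which named Unicode block does a code point belong to?
cp = ord("Ж")
print([name for name, rng in UNICODE_RANGES_COMBINED.items() if cp in rng])  # expected: ['Cyrillic']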
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any\nfrom warnings import warn\n\nfrom .api import from_bytes\nfrom .constant import CHARDET_CORRESPONDENCE\n\n# TODO: remove this check when dropping Python 3.7 support\nif TYPE_CHECKING:\n from typing_extensions import TypedDict\n\n class ResultDict(TypedDict):\n encoding: str | None\n language: str\n confidence: float | None\n\n\ndef detect(\n byte_str: bytes, should_rename_legacy: bool = False, **kwargs: Any\n) -> ResultDict:\n """\n chardet legacy method\n Detect the encoding of the given byte string. It should be mostly backward-compatible.\n Encoding name will match Chardet own writing whenever possible. (Not on encoding name unsupported by it)\n This function is deprecated and should be used to migrate your project easily, consult the documentation for\n further information. Not planned for removal.\n\n :param byte_str: The byte sequence to examine.\n :param should_rename_legacy: Should we rename legacy encodings\n to their more modern equivalents?\n """\n if len(kwargs):\n warn(\n f"charset-normalizer disregard arguments '{','.join(list(kwargs.keys()))}' in legacy function detect()"\n )\n\n if not isinstance(byte_str, (bytearray, bytes)):\n raise TypeError( # pragma: nocover\n f"Expected object of type bytes or bytearray, got: {type(byte_str)}"\n )\n\n if isinstance(byte_str, bytearray):\n byte_str = bytes(byte_str)\n\n r = from_bytes(byte_str).best()\n\n encoding = r.encoding if r is not None else None\n language = r.language if r is not None and r.language != "Unknown" else ""\n confidence = 1.0 - r.chaos if r is not None else None\n\n # Note: CharsetNormalizer does not return 'UTF-8-SIG' as the sig get stripped in the detection/normalization process\n # but chardet does return 'utf-8-sig' and it is a valid codec name.\n if r is not None and encoding == "utf_8" and r.bom:\n encoding += "_sig"\n\n if should_rename_legacy is False and encoding in CHARDET_CORRESPONDENCE:\n encoding = CHARDET_CORRESPONDENCE[encoding]\n\n return {\n "encoding": encoding,\n "language": language,\n "confidence": confidence,\n }\n
|
.venv\Lib\site-packages\charset_normalizer\legacy.py
|
legacy.py
|
Python
| 2,351 | 0.95 | 0.234375 | 0.06 |
node-utils
| 274 |
2025-05-02T23:01:26.751545
|
BSD-3-Clause
| false |
6def13d1e62db3716bfda2348273f88e
|
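legacy.py above provides the chardet-style shim. A minimal sketch follows, assuming detect() is re-exported at the package top level (otherwise import it from charset_normalizer.legacy); the byte strings are invented examples and what gets detected for such short payloads may vary.

from charset_normalizer import detect

# Returns a chardet-like dict: {'encoding': ..., 'language': ..., 'confidence': ...}
result = detect("Привет, мир!".encode("cp1251"))
print(result)

# With should_rename_legacy left False (the default), detected names are mapped
# through CHARDET_CORRESPONDENCE (e.g. cp1251 -> "windows-1251"); with True,
# the modern codec spelling is kept instead.
print(detect("Привет, мир!".encode("cp1251"), should_rename_legacy=True))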
MZ
|
.venv\Lib\site-packages\charset_normalizer\md.cp313-win_amd64.pyd
|
md.cp313-win_amd64.pyd
|
Other
| 10,752 | 0.8 | 0 | 0 |
react-lib
| 711 |
2024-02-22T20:50:26.424363
|
BSD-3-Clause
| false |
86ac2f49250785380f7c3841468354bc
|
from __future__ import annotations\n\nfrom functools import lru_cache\nfrom logging import getLogger\n\nfrom .constant import (\n COMMON_SAFE_ASCII_CHARACTERS,\n TRACE,\n UNICODE_SECONDARY_RANGE_KEYWORD,\n)\nfrom .utils import (\n is_accentuated,\n is_arabic,\n is_arabic_isolated_form,\n is_case_variable,\n is_cjk,\n is_emoticon,\n is_hangul,\n is_hiragana,\n is_katakana,\n is_latin,\n is_punctuation,\n is_separator,\n is_symbol,\n is_thai,\n is_unprintable,\n remove_accent,\n unicode_range,\n is_cjk_uncommon,\n)\n\n\nclass MessDetectorPlugin:\n """\n Base abstract class used for mess detection plugins.\n All detectors MUST extend and implement given methods.\n """\n\n def eligible(self, character: str) -> bool:\n """\n Determine if given character should be fed in.\n """\n raise NotImplementedError # pragma: nocover\n\n def feed(self, character: str) -> None:\n """\n The main routine to be executed upon character.\n Insert the logic in witch the text would be considered chaotic.\n """\n raise NotImplementedError # pragma: nocover\n\n def reset(self) -> None: # pragma: no cover\n """\n Permit to reset the plugin to the initial state.\n """\n raise NotImplementedError\n\n @property\n def ratio(self) -> float:\n """\n Compute the chaos ratio based on what your feed() has seen.\n Must NOT be lower than 0.; No restriction gt 0.\n """\n raise NotImplementedError # pragma: nocover\n\n\nclass TooManySymbolOrPunctuationPlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._punctuation_count: int = 0\n self._symbol_count: int = 0\n self._character_count: int = 0\n\n self._last_printable_char: str | None = None\n self._frenzy_symbol_in_word: bool = False\n\n def eligible(self, character: str) -> bool:\n return character.isprintable()\n\n def feed(self, character: str) -> None:\n self._character_count += 1\n\n if (\n character != self._last_printable_char\n and character not in COMMON_SAFE_ASCII_CHARACTERS\n ):\n if is_punctuation(character):\n self._punctuation_count += 1\n elif (\n character.isdigit() is False\n and is_symbol(character)\n and is_emoticon(character) is False\n ):\n self._symbol_count += 2\n\n self._last_printable_char = character\n\n def reset(self) -> None: # Abstract\n self._punctuation_count = 0\n self._character_count = 0\n self._symbol_count = 0\n\n @property\n def ratio(self) -> float:\n if self._character_count == 0:\n return 0.0\n\n ratio_of_punctuation: float = (\n self._punctuation_count + self._symbol_count\n ) / self._character_count\n\n return ratio_of_punctuation if ratio_of_punctuation >= 0.3 else 0.0\n\n\nclass TooManyAccentuatedPlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._character_count: int = 0\n self._accentuated_count: int = 0\n\n def eligible(self, character: str) -> bool:\n return character.isalpha()\n\n def feed(self, character: str) -> None:\n self._character_count += 1\n\n if is_accentuated(character):\n self._accentuated_count += 1\n\n def reset(self) -> None: # Abstract\n self._character_count = 0\n self._accentuated_count = 0\n\n @property\n def ratio(self) -> float:\n if self._character_count < 8:\n return 0.0\n\n ratio_of_accentuation: float = self._accentuated_count / self._character_count\n return ratio_of_accentuation if ratio_of_accentuation >= 0.35 else 0.0\n\n\nclass UnprintablePlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._unprintable_count: int = 0\n self._character_count: int = 0\n\n def eligible(self, character: str) -> bool:\n return True\n\n def feed(self, character: str) -> None:\n if 
is_unprintable(character):\n self._unprintable_count += 1\n self._character_count += 1\n\n def reset(self) -> None: # Abstract\n self._unprintable_count = 0\n\n @property\n def ratio(self) -> float:\n if self._character_count == 0:\n return 0.0\n\n return (self._unprintable_count * 8) / self._character_count\n\n\nclass SuspiciousDuplicateAccentPlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._successive_count: int = 0\n self._character_count: int = 0\n\n self._last_latin_character: str | None = None\n\n def eligible(self, character: str) -> bool:\n return character.isalpha() and is_latin(character)\n\n def feed(self, character: str) -> None:\n self._character_count += 1\n if (\n self._last_latin_character is not None\n and is_accentuated(character)\n and is_accentuated(self._last_latin_character)\n ):\n if character.isupper() and self._last_latin_character.isupper():\n self._successive_count += 1\n # Worse if its the same char duplicated with different accent.\n if remove_accent(character) == remove_accent(self._last_latin_character):\n self._successive_count += 1\n self._last_latin_character = character\n\n def reset(self) -> None: # Abstract\n self._successive_count = 0\n self._character_count = 0\n self._last_latin_character = None\n\n @property\n def ratio(self) -> float:\n if self._character_count == 0:\n return 0.0\n\n return (self._successive_count * 2) / self._character_count\n\n\nclass SuspiciousRange(MessDetectorPlugin):\n def __init__(self) -> None:\n self._suspicious_successive_range_count: int = 0\n self._character_count: int = 0\n self._last_printable_seen: str | None = None\n\n def eligible(self, character: str) -> bool:\n return character.isprintable()\n\n def feed(self, character: str) -> None:\n self._character_count += 1\n\n if (\n character.isspace()\n or is_punctuation(character)\n or character in COMMON_SAFE_ASCII_CHARACTERS\n ):\n self._last_printable_seen = None\n return\n\n if self._last_printable_seen is None:\n self._last_printable_seen = character\n return\n\n unicode_range_a: str | None = unicode_range(self._last_printable_seen)\n unicode_range_b: str | None = unicode_range(character)\n\n if is_suspiciously_successive_range(unicode_range_a, unicode_range_b):\n self._suspicious_successive_range_count += 1\n\n self._last_printable_seen = character\n\n def reset(self) -> None: # Abstract\n self._character_count = 0\n self._suspicious_successive_range_count = 0\n self._last_printable_seen = None\n\n @property\n def ratio(self) -> float:\n if self._character_count <= 13:\n return 0.0\n\n ratio_of_suspicious_range_usage: float = (\n self._suspicious_successive_range_count * 2\n ) / self._character_count\n\n return ratio_of_suspicious_range_usage\n\n\nclass SuperWeirdWordPlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._word_count: int = 0\n self._bad_word_count: int = 0\n self._foreign_long_count: int = 0\n\n self._is_current_word_bad: bool = False\n self._foreign_long_watch: bool = False\n\n self._character_count: int = 0\n self._bad_character_count: int = 0\n\n self._buffer: str = ""\n self._buffer_accent_count: int = 0\n self._buffer_glyph_count: int = 0\n\n def eligible(self, character: str) -> bool:\n return True\n\n def feed(self, character: str) -> None:\n if character.isalpha():\n self._buffer += character\n if is_accentuated(character):\n self._buffer_accent_count += 1\n if (\n self._foreign_long_watch is False\n and (is_latin(character) is False or is_accentuated(character))\n and is_cjk(character) is False\n and 
is_hangul(character) is False\n and is_katakana(character) is False\n and is_hiragana(character) is False\n and is_thai(character) is False\n ):\n self._foreign_long_watch = True\n if (\n is_cjk(character)\n or is_hangul(character)\n or is_katakana(character)\n or is_hiragana(character)\n or is_thai(character)\n ):\n self._buffer_glyph_count += 1\n return\n if not self._buffer:\n return\n if (\n character.isspace() or is_punctuation(character) or is_separator(character)\n ) and self._buffer:\n self._word_count += 1\n buffer_length: int = len(self._buffer)\n\n self._character_count += buffer_length\n\n if buffer_length >= 4:\n if self._buffer_accent_count / buffer_length >= 0.5:\n self._is_current_word_bad = True\n # Word/Buffer ending with an upper case accentuated letter are so rare,\n # that we will consider them all as suspicious. Same weight as foreign_long suspicious.\n elif (\n is_accentuated(self._buffer[-1])\n and self._buffer[-1].isupper()\n and all(_.isupper() for _ in self._buffer) is False\n ):\n self._foreign_long_count += 1\n self._is_current_word_bad = True\n elif self._buffer_glyph_count == 1:\n self._is_current_word_bad = True\n self._foreign_long_count += 1\n if buffer_length >= 24 and self._foreign_long_watch:\n camel_case_dst = [\n i\n for c, i in zip(self._buffer, range(0, buffer_length))\n if c.isupper()\n ]\n probable_camel_cased: bool = False\n\n if camel_case_dst and (len(camel_case_dst) / buffer_length <= 0.3):\n probable_camel_cased = True\n\n if not probable_camel_cased:\n self._foreign_long_count += 1\n self._is_current_word_bad = True\n\n if self._is_current_word_bad:\n self._bad_word_count += 1\n self._bad_character_count += len(self._buffer)\n self._is_current_word_bad = False\n\n self._foreign_long_watch = False\n self._buffer = ""\n self._buffer_accent_count = 0\n self._buffer_glyph_count = 0\n elif (\n character not in {"<", ">", "-", "=", "~", "|", "_"}\n and character.isdigit() is False\n and is_symbol(character)\n ):\n self._is_current_word_bad = True\n self._buffer += character\n\n def reset(self) -> None: # Abstract\n self._buffer = ""\n self._is_current_word_bad = False\n self._foreign_long_watch = False\n self._bad_word_count = 0\n self._word_count = 0\n self._character_count = 0\n self._bad_character_count = 0\n self._foreign_long_count = 0\n\n @property\n def ratio(self) -> float:\n if self._word_count <= 10 and self._foreign_long_count == 0:\n return 0.0\n\n return self._bad_character_count / self._character_count\n\n\nclass CjkUncommonPlugin(MessDetectorPlugin):\n """\n Detect messy CJK text that probably means nothing.\n """\n\n def __init__(self) -> None:\n self._character_count: int = 0\n self._uncommon_count: int = 0\n\n def eligible(self, character: str) -> bool:\n return is_cjk(character)\n\n def feed(self, character: str) -> None:\n self._character_count += 1\n\n if is_cjk_uncommon(character):\n self._uncommon_count += 1\n return\n\n def reset(self) -> None: # Abstract\n self._character_count = 0\n self._uncommon_count = 0\n\n @property\n def ratio(self) -> float:\n if self._character_count < 8:\n return 0.0\n\n uncommon_form_usage: float = self._uncommon_count / self._character_count\n\n # we can be pretty sure it's garbage when uncommon characters are widely\n # used. 
otherwise it could just be traditional chinese for example.\n return uncommon_form_usage / 10 if uncommon_form_usage > 0.5 else 0.0\n\n\nclass ArchaicUpperLowerPlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._buf: bool = False\n\n self._character_count_since_last_sep: int = 0\n\n self._successive_upper_lower_count: int = 0\n self._successive_upper_lower_count_final: int = 0\n\n self._character_count: int = 0\n\n self._last_alpha_seen: str | None = None\n self._current_ascii_only: bool = True\n\n def eligible(self, character: str) -> bool:\n return True\n\n def feed(self, character: str) -> None:\n is_concerned = character.isalpha() and is_case_variable(character)\n chunk_sep = is_concerned is False\n\n if chunk_sep and self._character_count_since_last_sep > 0:\n if (\n self._character_count_since_last_sep <= 64\n and character.isdigit() is False\n and self._current_ascii_only is False\n ):\n self._successive_upper_lower_count_final += (\n self._successive_upper_lower_count\n )\n\n self._successive_upper_lower_count = 0\n self._character_count_since_last_sep = 0\n self._last_alpha_seen = None\n self._buf = False\n self._character_count += 1\n self._current_ascii_only = True\n\n return\n\n if self._current_ascii_only is True and character.isascii() is False:\n self._current_ascii_only = False\n\n if self._last_alpha_seen is not None:\n if (character.isupper() and self._last_alpha_seen.islower()) or (\n character.islower() and self._last_alpha_seen.isupper()\n ):\n if self._buf is True:\n self._successive_upper_lower_count += 2\n self._buf = False\n else:\n self._buf = True\n else:\n self._buf = False\n\n self._character_count += 1\n self._character_count_since_last_sep += 1\n self._last_alpha_seen = character\n\n def reset(self) -> None: # Abstract\n self._character_count = 0\n self._character_count_since_last_sep = 0\n self._successive_upper_lower_count = 0\n self._successive_upper_lower_count_final = 0\n self._last_alpha_seen = None\n self._buf = False\n self._current_ascii_only = True\n\n @property\n def ratio(self) -> float:\n if self._character_count == 0:\n return 0.0\n\n return self._successive_upper_lower_count_final / self._character_count\n\n\nclass ArabicIsolatedFormPlugin(MessDetectorPlugin):\n def __init__(self) -> None:\n self._character_count: int = 0\n self._isolated_form_count: int = 0\n\n def reset(self) -> None: # Abstract\n self._character_count = 0\n self._isolated_form_count = 0\n\n def eligible(self, character: str) -> bool:\n return is_arabic(character)\n\n def feed(self, character: str) -> None:\n self._character_count += 1\n\n if is_arabic_isolated_form(character):\n self._isolated_form_count += 1\n\n @property\n def ratio(self) -> float:\n if self._character_count < 8:\n return 0.0\n\n isolated_form_usage: float = self._isolated_form_count / self._character_count\n\n return isolated_form_usage\n\n\n@lru_cache(maxsize=1024)\ndef is_suspiciously_successive_range(\n unicode_range_a: str | None, unicode_range_b: str | None\n) -> bool:\n """\n Determine if two Unicode range seen next to each other can be considered as suspicious.\n """\n if unicode_range_a is None or unicode_range_b is None:\n return True\n\n if unicode_range_a == unicode_range_b:\n return False\n\n if "Latin" in unicode_range_a and "Latin" in unicode_range_b:\n return False\n\n if "Emoticons" in unicode_range_a or "Emoticons" in unicode_range_b:\n return False\n\n # Latin characters can be accompanied with a combining diacritical mark\n # eg. 
Vietnamese.\n if ("Latin" in unicode_range_a or "Latin" in unicode_range_b) and (\n "Combining" in unicode_range_a or "Combining" in unicode_range_b\n ):\n return False\n\n keywords_range_a, keywords_range_b = (\n unicode_range_a.split(" "),\n unicode_range_b.split(" "),\n )\n\n for el in keywords_range_a:\n if el in UNICODE_SECONDARY_RANGE_KEYWORD:\n continue\n if el in keywords_range_b:\n return False\n\n # Japanese Exception\n range_a_jp_chars, range_b_jp_chars = (\n unicode_range_a\n in (\n "Hiragana",\n "Katakana",\n ),\n unicode_range_b in ("Hiragana", "Katakana"),\n )\n if (range_a_jp_chars or range_b_jp_chars) and (\n "CJK" in unicode_range_a or "CJK" in unicode_range_b\n ):\n return False\n if range_a_jp_chars and range_b_jp_chars:\n return False\n\n if "Hangul" in unicode_range_a or "Hangul" in unicode_range_b:\n if "CJK" in unicode_range_a or "CJK" in unicode_range_b:\n return False\n if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":\n return False\n\n # Chinese/Japanese use dedicated range for punctuation and/or separators.\n if ("CJK" in unicode_range_a or "CJK" in unicode_range_b) or (\n unicode_range_a in ["Katakana", "Hiragana"]\n and unicode_range_b in ["Katakana", "Hiragana"]\n ):\n if "Punctuation" in unicode_range_a or "Punctuation" in unicode_range_b:\n return False\n if "Forms" in unicode_range_a or "Forms" in unicode_range_b:\n return False\n if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":\n return False\n\n return True\n\n\n@lru_cache(maxsize=2048)\ndef mess_ratio(\n decoded_sequence: str, maximum_threshold: float = 0.2, debug: bool = False\n) -> float:\n """\n Compute a mess ratio given a decoded bytes sequence. The maximum threshold does stop the computation earlier.\n """\n\n detectors: list[MessDetectorPlugin] = [\n md_class() for md_class in MessDetectorPlugin.__subclasses__()\n ]\n\n length: int = len(decoded_sequence) + 1\n\n mean_mess_ratio: float = 0.0\n\n if length < 512:\n intermediary_mean_mess_ratio_calc: int = 32\n elif length <= 1024:\n intermediary_mean_mess_ratio_calc = 64\n else:\n intermediary_mean_mess_ratio_calc = 128\n\n for character, index in zip(decoded_sequence + "\n", range(length)):\n for detector in detectors:\n if detector.eligible(character):\n detector.feed(character)\n\n if (\n index > 0 and index % intermediary_mean_mess_ratio_calc == 0\n ) or index == length - 1:\n mean_mess_ratio = sum(dt.ratio for dt in detectors)\n\n if mean_mess_ratio >= maximum_threshold:\n break\n\n if debug:\n logger = getLogger("charset_normalizer")\n\n logger.log(\n TRACE,\n "Mess-detector extended-analysis start. "\n f"intermediary_mean_mess_ratio_calc={intermediary_mean_mess_ratio_calc} mean_mess_ratio={mean_mess_ratio} "\n f"maximum_threshold={maximum_threshold}",\n )\n\n if len(decoded_sequence) > 16:\n logger.log(TRACE, f"Starting with: {decoded_sequence[:16]}")\n logger.log(TRACE, f"Ending with: {decoded_sequence[-16::]}")\n\n for dt in detectors:\n logger.log(TRACE, f"{dt.__class__}: {dt.ratio}")\n\n return round(mean_mess_ratio, 3)\n
|
.venv\Lib\site-packages\charset_normalizer\md.py
|
md.py
|
Python
| 20,780 | 0.95 | 0.222047 | 0.017928 |
awesome-app
| 654 |
2023-08-18T06:34:29.413725
|
Apache-2.0
| false |
ad21a83e729a104bd0b04eaf9e6ae3de
|
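The `mess_ratio` function above is the module-level entry point of `md.py`: every `MessDetectorPlugin` subclass is instantiated automatically, fed the decoded text character by character, and the summed plugin ratios are checkpointed until `maximum_threshold` is crossed. A minimal sketch of calling it directly, assuming only that `charset_normalizer` is importable; the sample strings and the magnitudes in the comments are illustrative expectations, not guaranteed values.

```python
from charset_normalizer.md import mess_ratio

# Clean, human-written text: the plugins should report next to no "mess".
clean = "Bonjour, ceci est un texte parfaitement lisible."

# The same kind of text decoded with the wrong single-byte table (mojibake):
# accentuated letters degrade into sequences the plugins flag as suspicious.
garbled = "Bonjour, câ€™est un texte dÃ©codÃ© avec la mauvaise table."

print(mess_ratio(clean))                           # expected: close to 0.0
print(mess_ratio(garbled))                         # expected: noticeably higher
print(mess_ratio(garbled, maximum_threshold=0.1))  # stops accumulating once 0.1 is crossed
```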
MZ
|
.venv\Lib\site-packages\charset_normalizer\md__mypyc.cp313-win_amd64.pyd
|
md__mypyc.cp313-win_amd64.pyd
|
Other
| 125,440 | 0.75 | 0.00722 | 0.011861 |
vue-tools
| 707 |
2024-12-27T15:09:55.180774
|
Apache-2.0
| false |
562dfc796db8f0748cd2cf8ff25ed346
|
from __future__ import annotations\n\nfrom encodings.aliases import aliases\nfrom hashlib import sha256\nfrom json import dumps\nfrom re import sub\nfrom typing import Any, Iterator, List, Tuple\n\nfrom .constant import RE_POSSIBLE_ENCODING_INDICATION, TOO_BIG_SEQUENCE\nfrom .utils import iana_name, is_multi_byte_encoding, unicode_range\n\n\nclass CharsetMatch:\n def __init__(\n self,\n payload: bytes,\n guessed_encoding: str,\n mean_mess_ratio: float,\n has_sig_or_bom: bool,\n languages: CoherenceMatches,\n decoded_payload: str | None = None,\n preemptive_declaration: str | None = None,\n ):\n self._payload: bytes = payload\n\n self._encoding: str = guessed_encoding\n self._mean_mess_ratio: float = mean_mess_ratio\n self._languages: CoherenceMatches = languages\n self._has_sig_or_bom: bool = has_sig_or_bom\n self._unicode_ranges: list[str] | None = None\n\n self._leaves: list[CharsetMatch] = []\n self._mean_coherence_ratio: float = 0.0\n\n self._output_payload: bytes | None = None\n self._output_encoding: str | None = None\n\n self._string: str | None = decoded_payload\n\n self._preemptive_declaration: str | None = preemptive_declaration\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, CharsetMatch):\n if isinstance(other, str):\n return iana_name(other) == self.encoding\n return False\n return self.encoding == other.encoding and self.fingerprint == other.fingerprint\n\n def __lt__(self, other: object) -> bool:\n """\n Implemented to make sorted available upon CharsetMatches items.\n """\n if not isinstance(other, CharsetMatch):\n raise ValueError\n\n chaos_difference: float = abs(self.chaos - other.chaos)\n coherence_difference: float = abs(self.coherence - other.coherence)\n\n # Below 1% difference --> Use Coherence\n if chaos_difference < 0.01 and coherence_difference > 0.02:\n return self.coherence > other.coherence\n elif chaos_difference < 0.01 and coherence_difference <= 0.02:\n # When having a difficult decision, use the result that decoded as many multi-byte as possible.\n # preserve RAM usage!\n if len(self._payload) >= TOO_BIG_SEQUENCE:\n return self.chaos < other.chaos\n return self.multi_byte_usage > other.multi_byte_usage\n\n return self.chaos < other.chaos\n\n @property\n def multi_byte_usage(self) -> float:\n return 1.0 - (len(str(self)) / len(self.raw))\n\n def __str__(self) -> str:\n # Lazy Str Loading\n if self._string is None:\n self._string = str(self._payload, self._encoding, "strict")\n return self._string\n\n def __repr__(self) -> str:\n return f"<CharsetMatch '{self.encoding}' bytes({self.fingerprint})>"\n\n def add_submatch(self, other: CharsetMatch) -> None:\n if not isinstance(other, CharsetMatch) or other == self:\n raise ValueError(\n "Unable to add instance <{}> as a submatch of a CharsetMatch".format(\n other.__class__\n )\n )\n\n other._string = None # Unload RAM usage; dirty trick.\n self._leaves.append(other)\n\n @property\n def encoding(self) -> str:\n return self._encoding\n\n @property\n def encoding_aliases(self) -> list[str]:\n """\n Encoding name are known by many name, using this could help when searching for IBM855 when it's listed as CP855.\n """\n also_known_as: list[str] = []\n for u, p in aliases.items():\n if self.encoding == u:\n also_known_as.append(p)\n elif self.encoding == p:\n also_known_as.append(u)\n return also_known_as\n\n @property\n def bom(self) -> bool:\n return self._has_sig_or_bom\n\n @property\n def byte_order_mark(self) -> bool:\n return self._has_sig_or_bom\n\n @property\n def languages(self) -> 
list[str]:\n """\n Return the complete list of possible languages found in decoded sequence.\n Usually not really useful. Returned list may be empty even if 'language' property return something != 'Unknown'.\n """\n return [e[0] for e in self._languages]\n\n @property\n def language(self) -> str:\n """\n Most probable language found in decoded sequence. If none were detected or inferred, the property will return\n "Unknown".\n """\n if not self._languages:\n # Trying to infer the language based on the given encoding\n # Its either English or we should not pronounce ourselves in certain cases.\n if "ascii" in self.could_be_from_charset:\n return "English"\n\n # doing it there to avoid circular import\n from charset_normalizer.cd import encoding_languages, mb_encoding_languages\n\n languages = (\n mb_encoding_languages(self.encoding)\n if is_multi_byte_encoding(self.encoding)\n else encoding_languages(self.encoding)\n )\n\n if len(languages) == 0 or "Latin Based" in languages:\n return "Unknown"\n\n return languages[0]\n\n return self._languages[0][0]\n\n @property\n def chaos(self) -> float:\n return self._mean_mess_ratio\n\n @property\n def coherence(self) -> float:\n if not self._languages:\n return 0.0\n return self._languages[0][1]\n\n @property\n def percent_chaos(self) -> float:\n return round(self.chaos * 100, ndigits=3)\n\n @property\n def percent_coherence(self) -> float:\n return round(self.coherence * 100, ndigits=3)\n\n @property\n def raw(self) -> bytes:\n """\n Original untouched bytes.\n """\n return self._payload\n\n @property\n def submatch(self) -> list[CharsetMatch]:\n return self._leaves\n\n @property\n def has_submatch(self) -> bool:\n return len(self._leaves) > 0\n\n @property\n def alphabets(self) -> list[str]:\n if self._unicode_ranges is not None:\n return self._unicode_ranges\n # list detected ranges\n detected_ranges: list[str | None] = [unicode_range(char) for char in str(self)]\n # filter and sort\n self._unicode_ranges = sorted(list({r for r in detected_ranges if r}))\n return self._unicode_ranges\n\n @property\n def could_be_from_charset(self) -> list[str]:\n """\n The complete list of encoding that output the exact SAME str result and therefore could be the originating\n encoding.\n This list does include the encoding available in property 'encoding'.\n """\n return [self._encoding] + [m.encoding for m in self._leaves]\n\n def output(self, encoding: str = "utf_8") -> bytes:\n """\n Method to get re-encoded bytes payload using given target encoding. Default to UTF-8.\n Any errors will be simply ignored by the encoder NOT replaced.\n """\n if self._output_encoding is None or self._output_encoding != encoding:\n self._output_encoding = encoding\n decoded_string = str(self)\n if (\n self._preemptive_declaration is not None\n and self._preemptive_declaration.lower()\n not in ["utf-8", "utf8", "utf_8"]\n ):\n patched_header = sub(\n RE_POSSIBLE_ENCODING_INDICATION,\n lambda m: m.string[m.span()[0] : m.span()[1]].replace(\n m.groups()[0],\n iana_name(self._output_encoding).replace("_", "-"), # type: ignore[arg-type]\n ),\n decoded_string[:8192],\n count=1,\n )\n\n decoded_string = patched_header + decoded_string[8192:]\n\n self._output_payload = decoded_string.encode(encoding, "replace")\n\n return self._output_payload # type: ignore\n\n @property\n def fingerprint(self) -> str:\n """\n Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. 
Not the original one.\n """\n return sha256(self.output()).hexdigest()\n\n\nclass CharsetMatches:\n """\n Container with every CharsetMatch items ordered by default from most probable to the less one.\n Act like a list(iterable) but does not implements all related methods.\n """\n\n def __init__(self, results: list[CharsetMatch] | None = None):\n self._results: list[CharsetMatch] = sorted(results) if results else []\n\n def __iter__(self) -> Iterator[CharsetMatch]:\n yield from self._results\n\n def __getitem__(self, item: int | str) -> CharsetMatch:\n """\n Retrieve a single item either by its position or encoding name (alias may be used here).\n Raise KeyError upon invalid index or encoding not present in results.\n """\n if isinstance(item, int):\n return self._results[item]\n if isinstance(item, str):\n item = iana_name(item, False)\n for result in self._results:\n if item in result.could_be_from_charset:\n return result\n raise KeyError\n\n def __len__(self) -> int:\n return len(self._results)\n\n def __bool__(self) -> bool:\n return len(self._results) > 0\n\n def append(self, item: CharsetMatch) -> None:\n """\n Insert a single match. Will be inserted accordingly to preserve sort.\n Can be inserted as a submatch.\n """\n if not isinstance(item, CharsetMatch):\n raise ValueError(\n "Cannot append instance '{}' to CharsetMatches".format(\n str(item.__class__)\n )\n )\n # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage)\n if len(item.raw) < TOO_BIG_SEQUENCE:\n for match in self._results:\n if match.fingerprint == item.fingerprint and match.chaos == item.chaos:\n match.add_submatch(item)\n return\n self._results.append(item)\n self._results = sorted(self._results)\n\n def best(self) -> CharsetMatch | None:\n """\n Simply return the first match. Strict equivalent to matches[0].\n """\n if not self._results:\n return None\n return self._results[0]\n\n def first(self) -> CharsetMatch | None:\n """\n Redundant method, call the method best(). Kept for BC reasons.\n """\n return self.best()\n\n\nCoherenceMatch = Tuple[str, float]\nCoherenceMatches = List[CoherenceMatch]\n\n\nclass CliDetectionResult:\n def __init__(\n self,\n path: str,\n encoding: str | None,\n encoding_aliases: list[str],\n alternative_encodings: list[str],\n language: str,\n alphabets: list[str],\n has_sig_or_bom: bool,\n chaos: float,\n coherence: float,\n unicode_path: str | None,\n is_preferred: bool,\n ):\n self.path: str = path\n self.unicode_path: str | None = unicode_path\n self.encoding: str | None = encoding\n self.encoding_aliases: list[str] = encoding_aliases\n self.alternative_encodings: list[str] = alternative_encodings\n self.language: str = language\n self.alphabets: list[str] = alphabets\n self.has_sig_or_bom: bool = has_sig_or_bom\n self.chaos: float = chaos\n self.coherence: float = coherence\n self.is_preferred: bool = is_preferred\n\n @property\n def __dict__(self) -> dict[str, Any]: # type: ignore\n return {\n "path": self.path,\n "encoding": self.encoding,\n "encoding_aliases": self.encoding_aliases,\n "alternative_encodings": self.alternative_encodings,\n "language": self.language,\n "alphabets": self.alphabets,\n "has_sig_or_bom": self.has_sig_or_bom,\n "chaos": self.chaos,\n "coherence": self.coherence,\n "unicode_path": self.unicode_path,\n "is_preferred": self.is_preferred,\n }\n\n def to_json(self) -> str:\n return dumps(self.__dict__, ensure_ascii=True, indent=4)\n
|
.venv\Lib\site-packages\charset_normalizer\models.py
|
models.py
|
Python
| 12,754 | 0.95 | 0.202778 | 0.033333 |
vue-tools
| 854 |
2025-03-16T21:01:31.299683
|
Apache-2.0
| false |
2abdd713a6c2c3822626c1c2ba6d907d
|
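`CharsetMatch` and `CharsetMatches` are the objects handed back by the detection API (`from_bytes` and friends). A short sketch of how they are typically obtained and inspected, assuming the package is importable; the Cyrillic sample payload and the `cp1251` guess in the comment are illustrative.

```python
from charset_normalizer import from_bytes

payload = "Bсеки човек има право на образование.".encode("cp1251")

matches = from_bytes(payload)          # CharsetMatches: kept sorted, most probable first
best = matches.best()                  # strict equivalent of matches[0], or None when empty

if best is not None:
    print(best.encoding)               # guessed codec, e.g. "cp1251" for this payload
    print(best.language)               # most probable language, or "Unknown"
    print(best.could_be_from_charset)  # every codec that decodes to the exact same str
    print(best.output()[:24])          # decoded payload re-encoded as UTF-8 bytes

# Lookup by encoding name or alias is also supported; it raises KeyError when absent.
# match = matches["cp1251"]
```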
from __future__ import annotations\n\nimport importlib\nimport logging\nimport unicodedata\nfrom codecs import IncrementalDecoder\nfrom encodings.aliases import aliases\nfrom functools import lru_cache\nfrom re import findall\nfrom typing import Generator\n\nfrom _multibytecodec import ( # type: ignore[import-not-found,import]\n MultibyteIncrementalDecoder,\n)\n\nfrom .constant import (\n ENCODING_MARKS,\n IANA_SUPPORTED_SIMILAR,\n RE_POSSIBLE_ENCODING_INDICATION,\n UNICODE_RANGES_COMBINED,\n UNICODE_SECONDARY_RANGE_KEYWORD,\n UTF8_MAXIMAL_ALLOCATION,\n COMMON_CJK_CHARACTERS,\n)\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_accentuated(character: str) -> bool:\n try:\n description: str = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n return (\n "WITH GRAVE" in description\n or "WITH ACUTE" in description\n or "WITH CEDILLA" in description\n or "WITH DIAERESIS" in description\n or "WITH CIRCUMFLEX" in description\n or "WITH TILDE" in description\n or "WITH MACRON" in description\n or "WITH RING ABOVE" in description\n )\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef remove_accent(character: str) -> str:\n decomposed: str = unicodedata.decomposition(character)\n if not decomposed:\n return character\n\n codes: list[str] = decomposed.split(" ")\n\n return chr(int(codes[0], 16))\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef unicode_range(character: str) -> str | None:\n """\n Retrieve the Unicode range official name from a single character.\n """\n character_ord: int = ord(character)\n\n for range_name, ord_range in UNICODE_RANGES_COMBINED.items():\n if character_ord in ord_range:\n return range_name\n\n return None\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_latin(character: str) -> bool:\n try:\n description: str = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n return "LATIN" in description\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_punctuation(character: str) -> bool:\n character_category: str = unicodedata.category(character)\n\n if "P" in character_category:\n return True\n\n character_range: str | None = unicode_range(character)\n\n if character_range is None:\n return False\n\n return "Punctuation" in character_range\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_symbol(character: str) -> bool:\n character_category: str = unicodedata.category(character)\n\n if "S" in character_category or "N" in character_category:\n return True\n\n character_range: str | None = unicode_range(character)\n\n if character_range is None:\n return False\n\n return "Forms" in character_range and character_category != "Lo"\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_emoticon(character: str) -> bool:\n character_range: str | None = unicode_range(character)\n\n if character_range is None:\n return False\n\n return "Emoticons" in character_range or "Pictographs" in character_range\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_separator(character: str) -> bool:\n if character.isspace() or character in {"|", "+", "<", ">"}:\n return True\n\n character_category: str = unicodedata.category(character)\n\n return "Z" in character_category or character_category in {"Po", "Pd", "Pc"}\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_case_variable(character: str) -> bool:\n return character.islower() != character.isupper()\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_cjk(character: str) -> bool:\n 
try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "CJK" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_hiragana(character: str) -> bool:\n try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "HIRAGANA" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_katakana(character: str) -> bool:\n try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "KATAKANA" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_hangul(character: str) -> bool:\n try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "HANGUL" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_thai(character: str) -> bool:\n try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "THAI" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_arabic(character: str) -> bool:\n try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "ARABIC" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_arabic_isolated_form(character: str) -> bool:\n try:\n character_name = unicodedata.name(character)\n except ValueError: # Defensive: unicode database outdated?\n return False\n\n return "ARABIC" in character_name and "ISOLATED FORM" in character_name\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_cjk_uncommon(character: str) -> bool:\n return character not in COMMON_CJK_CHARACTERS\n\n\n@lru_cache(maxsize=len(UNICODE_RANGES_COMBINED))\ndef is_unicode_range_secondary(range_name: str) -> bool:\n return any(keyword in range_name for keyword in UNICODE_SECONDARY_RANGE_KEYWORD)\n\n\n@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)\ndef is_unprintable(character: str) -> bool:\n return (\n character.isspace() is False # includes \n \t \r \v\n and character.isprintable() is False\n and character != "\x1a" # Why? 
Its the ASCII substitute character.\n and character != "\ufeff" # bug discovered in Python,\n # Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space.\n )\n\n\ndef any_specified_encoding(sequence: bytes, search_zone: int = 8192) -> str | None:\n """\n Extract using ASCII-only decoder any specified encoding in the first n-bytes.\n """\n if not isinstance(sequence, bytes):\n raise TypeError\n\n seq_len: int = len(sequence)\n\n results: list[str] = findall(\n RE_POSSIBLE_ENCODING_INDICATION,\n sequence[: min(seq_len, search_zone)].decode("ascii", errors="ignore"),\n )\n\n if len(results) == 0:\n return None\n\n for specified_encoding in results:\n specified_encoding = specified_encoding.lower().replace("-", "_")\n\n encoding_alias: str\n encoding_iana: str\n\n for encoding_alias, encoding_iana in aliases.items():\n if encoding_alias == specified_encoding:\n return encoding_iana\n if encoding_iana == specified_encoding:\n return encoding_iana\n\n return None\n\n\n@lru_cache(maxsize=128)\ndef is_multi_byte_encoding(name: str) -> bool:\n """\n Verify is a specific encoding is a multi byte one based on it IANA name\n """\n return name in {\n "utf_8",\n "utf_8_sig",\n "utf_16",\n "utf_16_be",\n "utf_16_le",\n "utf_32",\n "utf_32_le",\n "utf_32_be",\n "utf_7",\n } or issubclass(\n importlib.import_module(f"encodings.{name}").IncrementalDecoder,\n MultibyteIncrementalDecoder,\n )\n\n\ndef identify_sig_or_bom(sequence: bytes) -> tuple[str | None, bytes]:\n """\n Identify and extract SIG/BOM in given sequence.\n """\n\n for iana_encoding in ENCODING_MARKS:\n marks: bytes | list[bytes] = ENCODING_MARKS[iana_encoding]\n\n if isinstance(marks, bytes):\n marks = [marks]\n\n for mark in marks:\n if sequence.startswith(mark):\n return iana_encoding, mark\n\n return None, b""\n\n\ndef should_strip_sig_or_bom(iana_encoding: str) -> bool:\n return iana_encoding not in {"utf_16", "utf_32"}\n\n\ndef iana_name(cp_name: str, strict: bool = True) -> str:\n """Returns the Python normalized encoding name (Not the IANA official name)."""\n cp_name = cp_name.lower().replace("-", "_")\n\n encoding_alias: str\n encoding_iana: str\n\n for encoding_alias, encoding_iana in aliases.items():\n if cp_name in [encoding_alias, encoding_iana]:\n return encoding_iana\n\n if strict:\n raise ValueError(f"Unable to retrieve IANA for '{cp_name}'")\n\n return cp_name\n\n\ndef cp_similarity(iana_name_a: str, iana_name_b: str) -> float:\n if is_multi_byte_encoding(iana_name_a) or is_multi_byte_encoding(iana_name_b):\n return 0.0\n\n decoder_a = importlib.import_module(f"encodings.{iana_name_a}").IncrementalDecoder\n decoder_b = importlib.import_module(f"encodings.{iana_name_b}").IncrementalDecoder\n\n id_a: IncrementalDecoder = decoder_a(errors="ignore")\n id_b: IncrementalDecoder = decoder_b(errors="ignore")\n\n character_match_count: int = 0\n\n for i in range(255):\n to_be_decoded: bytes = bytes([i])\n if id_a.decode(to_be_decoded) == id_b.decode(to_be_decoded):\n character_match_count += 1\n\n return character_match_count / 254\n\n\ndef is_cp_similar(iana_name_a: str, iana_name_b: str) -> bool:\n """\n Determine if two code page are at least 80% similar. 
IANA_SUPPORTED_SIMILAR dict was generated using\n the function cp_similarity.\n """\n return (\n iana_name_a in IANA_SUPPORTED_SIMILAR\n and iana_name_b in IANA_SUPPORTED_SIMILAR[iana_name_a]\n )\n\n\ndef set_logging_handler(\n name: str = "charset_normalizer",\n level: int = logging.INFO,\n format_string: str = "%(asctime)s | %(levelname)s | %(message)s",\n) -> None:\n logger = logging.getLogger(name)\n logger.setLevel(level)\n\n handler = logging.StreamHandler()\n handler.setFormatter(logging.Formatter(format_string))\n logger.addHandler(handler)\n\n\ndef cut_sequence_chunks(\n sequences: bytes,\n encoding_iana: str,\n offsets: range,\n chunk_size: int,\n bom_or_sig_available: bool,\n strip_sig_or_bom: bool,\n sig_payload: bytes,\n is_multi_byte_decoder: bool,\n decoded_payload: str | None = None,\n) -> Generator[str, None, None]:\n if decoded_payload and is_multi_byte_decoder is False:\n for i in offsets:\n chunk = decoded_payload[i : i + chunk_size]\n if not chunk:\n break\n yield chunk\n else:\n for i in offsets:\n chunk_end = i + chunk_size\n if chunk_end > len(sequences) + 8:\n continue\n\n cut_sequence = sequences[i : i + chunk_size]\n\n if bom_or_sig_available and strip_sig_or_bom is False:\n cut_sequence = sig_payload + cut_sequence\n\n chunk = cut_sequence.decode(\n encoding_iana,\n errors="ignore" if is_multi_byte_decoder else "strict",\n )\n\n # multi-byte bad cutting detector and adjustment\n # not the cleanest way to perform that fix but clever enough for now.\n if is_multi_byte_decoder and i > 0:\n chunk_partial_size_chk: int = min(chunk_size, 16)\n\n if (\n decoded_payload\n and chunk[:chunk_partial_size_chk] not in decoded_payload\n ):\n for j in range(i, i - 4, -1):\n cut_sequence = sequences[j:chunk_end]\n\n if bom_or_sig_available and strip_sig_or_bom is False:\n cut_sequence = sig_payload + cut_sequence\n\n chunk = cut_sequence.decode(encoding_iana, errors="ignore")\n\n if chunk[:chunk_partial_size_chk] in decoded_payload:\n break\n\n yield chunk\n
|
.venv\Lib\site-packages\charset_normalizer\utils.py
|
utils.py
|
Python
| 12,584 | 0.95 | 0.190821 | 0.009934 |
awesome-app
| 518 |
2025-03-26T02:57:54.921701
|
GPL-3.0
| false |
76073a099d3afbf374b233dcaeb9aaad
|
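The helpers in `utils.py` are small, cached predicates and normalizers used throughout the detector. A quick sketch exercising a few of them, assuming the package is importable; the values in the comments follow from the definitions above and should be read as expected output rather than guarantees.

```python
from charset_normalizer.utils import (
    iana_name,
    identify_sig_or_bom,
    is_accentuated,
    is_multi_byte_encoding,
    unicode_range,
)

print(iana_name("ISO-8859-1"))           # "latin_1": normalized Python codec name
print(is_multi_byte_encoding("utf_8"))   # True
print(is_multi_byte_encoding("cp1252"))  # False: single-byte charmap codec

print(unicode_range("é"))                # "Latin-1 Supplement"
print(is_accentuated("é"))               # True: its Unicode name contains "WITH ACUTE"

# Signature / BOM detection on raw bytes.
encoding, mark = identify_sig_or_bom("bonjour".encode("utf_8_sig"))
print(encoding, mark)                    # "utf_8", b"\xef\xbb\xbf"
```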
"""\nExpose version\n"""\n\nfrom __future__ import annotations\n\n__version__ = "3.4.2"\nVERSION = __version__.split(".")\n
|
.venv\Lib\site-packages\charset_normalizer\version.py
|
version.py
|
Python
| 123 | 0.85 | 0 | 0 |
react-lib
| 870 |
2023-10-06T02:39:38.955522
|
Apache-2.0
| false |
22bfc76bbdcc66cb401e2c9c921b4687
|
"""\nCharset-Normalizer\n~~~~~~~~~~~~~~\nThe Real First Universal Charset Detector.\nA library that helps you read text from an unknown charset encoding.\nMotivated by chardet, This package is trying to resolve the issue by taking a new approach.\nAll IANA character set names for which the Python core library provides codecs are supported.\n\nBasic usage:\n >>> from charset_normalizer import from_bytes\n >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8'))\n >>> best_guess = results.best()\n >>> str(best_guess)\n 'Bсеки човек има право на образование. Oбразованието!'\n\nOthers methods and usages are available - see the full documentation\nat <https://github.com/Ousret/charset_normalizer>.\n:copyright: (c) 2021 by Ahmed TAHRI\n:license: MIT, see LICENSE for more details.\n"""\n\nfrom __future__ import annotations\n\nimport logging\n\nfrom .api import from_bytes, from_fp, from_path, is_binary\nfrom .legacy import detect\nfrom .models import CharsetMatch, CharsetMatches\nfrom .utils import set_logging_handler\nfrom .version import VERSION, __version__\n\n__all__ = (\n "from_fp",\n "from_path",\n "from_bytes",\n "is_binary",\n "detect",\n "CharsetMatch",\n "CharsetMatches",\n "__version__",\n "VERSION",\n "set_logging_handler",\n)\n\n# Attach a NullHandler to the top level logger by default\n# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library\n\nlogging.getLogger("charset_normalizer").addHandler(logging.NullHandler())\n
|
.venv\Lib\site-packages\charset_normalizer\__init__.py
|
__init__.py
|
Python
| 1,638 | 0.95 | 0.0625 | 0.05 |
awesome-app
| 165 |
2023-09-17T06:48:17.376743
|
GPL-3.0
| false |
307f5a947843fd468106001212156178
|
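Besides `from_bytes`, the package top-level exports `from_path`, `is_binary` and the chardet-compatible `detect` from `legacy`. A small sketch, assuming the package is importable; the byte payloads are illustrative and the exact confidence value will vary.

```python
from charset_normalizer import detect, is_binary

# chardet-style drop-in: a dict with "encoding", "language" and "confidence" keys.
result = detect("Bсеки човек има право на образование.".encode("cp1251"))
print(result["encoding"], result["confidence"])

# Fast binary-vs-text probe built on the same detection machinery.
print(is_binary(b"\x00\x01\x02\xff\xfe\x00\x00payload"))  # likely True: full of unprintables
print(is_binary("plain readable text".encode("utf_8")))   # likely False
```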
from __future__ import annotations\n\nfrom .cli import cli_detect\n\nif __name__ == "__main__":\n cli_detect()\n
|
.venv\Lib\site-packages\charset_normalizer\__main__.py
|
__main__.py
|
Python
| 115 | 0.85 | 0.166667 | 0 |
python-kit
| 887 |
2024-12-15T05:02:13.551670
|
Apache-2.0
| false |
fb780dc34cd71306500b916e316a2cbd
|
from __future__ import annotations\n\nfrom .__main__ import cli_detect, query_yes_no\n\n__all__ = (\n "cli_detect",\n "query_yes_no",\n)\n
|
.venv\Lib\site-packages\charset_normalizer\cli\__init__.py
|
__init__.py
|
Python
| 144 | 0.85 | 0 | 0 |
node-utils
| 247 |
2025-05-27T10:07:00.483994
|
BSD-3-Clause
| false |
b1bbd2ff8505ff3edd43b1857907f2fb
|
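The `cli` subpackage re-exports `cli_detect`, the function behind the `normalizer` console script. A hedged sketch of calling it programmatically: the argv-style list argument and the integer exit code are assumptions based on how the console entry point invokes it, and the sample path is a placeholder.

```python
from charset_normalizer.cli import cli_detect

# Roughly equivalent to running `normalizer -m ./data/sample.1.fr.srt` in a shell.
# Assumption: cli_detect accepts an optional argv list and returns an exit code.
exit_code = cli_detect(["-m", "./data/sample.1.fr.srt"])
print(exit_code)  # 0 when detection succeeded
```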
\n\n
|
.venv\Lib\site-packages\charset_normalizer\cli\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 338 | 0.7 | 0 | 0 |
node-utils
| 459 |
2024-07-07T15:51:57.345523
|
GPL-3.0
| false |
3a27d23a5592c02444de4a218e3a290c
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\cli\__pycache__\__main__.cpython-313.pyc
|
__main__.cpython-313.pyc
|
Other
| 14,515 | 0.95 | 0.084337 | 0 |
react-lib
| 826 |
2024-10-26T05:35:30.002956
|
MIT
| false |
472efc754b1aac624c4a93fea61f9181
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\api.cpython-313.pyc
|
api.cpython-313.pyc
|
Other
| 18,709 | 0.95 | 0.061404 | 0 |
vue-tools
| 459 |
2024-04-12T09:51:11.051759
|
Apache-2.0
| false |
ffc1bfd56a1a5b106469a34fefd098c3
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\cd.cpython-313.pyc
|
cd.cpython-313.pyc
|
Other
| 13,391 | 0.95 | 0.06338 | 0.014286 |
react-lib
| 931 |
2024-05-28T05:00:55.533638
|
MIT
| false |
a3164749499a553c42b0014b286750f0
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\constant.cpython-313.pyc
|
constant.cpython-313.pyc
|
Other
| 40,815 | 0.8 | 0.010929 | 0 |
node-utils
| 617 |
2025-03-27T06:34:31.961388
|
MIT
| false |
77256f2367c1033d94280cc8d0ac4506
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\legacy.cpython-313.pyc
|
legacy.cpython-313.pyc
|
Other
| 2,877 | 0.95 | 0.108108 | 0 |
vue-tools
| 784 |
2024-03-31T17:09:19.058354
|
MIT
| false |
f2a77e0c96d7c9e39be05bd2505f4eea
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\md.cpython-313.pyc
|
md.cpython-313.pyc
|
Other
| 25,432 | 0.95 | 0.021858 | 0.005587 |
vue-tools
| 628 |
2024-06-01T23:56:33.274101
|
Apache-2.0
| false |
9b91289f0e9c95b2f149de666e416b1e
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\models.cpython-313.pyc
|
models.cpython-313.pyc
|
Other
| 17,296 | 0.8 | 0.022388 | 0 |
python-kit
| 256 |
2023-12-17T19:29:58.769574
|
GPL-3.0
| false |
0bc64362ed0c0a296d92977e9c7ee823
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\utils.cpython-313.pyc
|
utils.cpython-313.pyc
|
Other
| 14,051 | 0.95 | 0.022222 | 0.007634 |
vue-tools
| 406 |
2025-05-14T01:45:57.868200
|
GPL-3.0
| false |
633c01b512b3fd712c648ab097d49a83
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\version.cpython-313.pyc
|
version.cpython-313.pyc
|
Other
| 377 | 0.8 | 0 | 0 |
node-utils
| 590 |
2024-08-17T21:58:09.260195
|
MIT
| false |
caed6d492c3d9407243a92b1b361fffc
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 1,770 | 0.95 | 0.058824 | 0 |
react-lib
| 982 |
2025-04-07T19:25:30.992894
|
GPL-3.0
| false |
bfc700f4d5e9a2d67fb951de35a1a5bd
|
\n\n
|
.venv\Lib\site-packages\charset_normalizer\__pycache__\__main__.cpython-313.pyc
|
__main__.cpython-313.pyc
|
Other
| 352 | 0.7 | 0 | 0 |
vue-tools
| 1,000 |
2024-08-13T09:04:26.172670
|
GPL-3.0
| false |
566043b19f9266f0fcf429ebc67f0894
|
[console_scripts]\nnormalizer = charset_normalizer:cli.cli_detect\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\entry_points.txt
|
entry_points.txt
|
Other
| 65 | 0.5 | 0 | 0 |
awesome-app
| 200 |
2023-12-18T20:16:54.296468
|
MIT
| false |
7bf3687ce46264babb237f70762b472d
|
pip\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\INSTALLER
|
INSTALLER
|
Other
| 4 | 0.5 | 0 | 0 |
python-kit
| 938 |
2023-09-29T15:56:55.275121
|
MIT
| false |
365c9bfeb7d89244f2ce01c1de44cb85
|
Metadata-Version: 2.4\nName: charset-normalizer\nVersion: 3.4.2\nSummary: The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.\nAuthor-email: "Ahmed R. TAHRI" <tahri.ahmed@proton.me>\nMaintainer-email: "Ahmed R. TAHRI" <tahri.ahmed@proton.me>\nLicense: MIT\nProject-URL: Changelog, https://github.com/jawah/charset_normalizer/blob/master/CHANGELOG.md\nProject-URL: Documentation, https://charset-normalizer.readthedocs.io/\nProject-URL: Code, https://github.com/jawah/charset_normalizer\nProject-URL: Issue tracker, https://github.com/jawah/charset_normalizer/issues\nKeywords: encoding,charset,charset-detector,detector,normalization,unicode,chardet,detect\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: MIT License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Programming Language :: Python :: 3 :: Only\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: Programming Language :: Python :: Implementation :: PyPy\nClassifier: Topic :: Text Processing :: Linguistic\nClassifier: Topic :: Utilities\nClassifier: Typing :: Typed\nRequires-Python: >=3.7\nDescription-Content-Type: text/markdown\nLicense-File: LICENSE\nProvides-Extra: unicode-backport\nDynamic: license-file\n\n<h1 align="center">Charset Detection, for Everyone 👋</h1>\n\n<p align="center">\n <sup>The Real First Universal Charset Detector</sup><br>\n <a href="https://pypi.org/project/charset-normalizer">\n <img src="https://img.shields.io/pypi/pyversions/charset_normalizer.svg?orange=blue" />\n </a>\n <a href="https://pepy.tech/project/charset-normalizer/">\n <img alt="Download Count Total" src="https://static.pepy.tech/badge/charset-normalizer/month" />\n </a>\n <a href="https://bestpractices.coreinfrastructure.org/projects/7297">\n <img src="https://bestpractices.coreinfrastructure.org/projects/7297/badge">\n </a>\n</p>\n<p align="center">\n <sup><i>Featured Packages</i></sup><br>\n <a href="https://github.com/jawah/niquests">\n <img alt="Static Badge" src="https://img.shields.io/badge/Niquests-Best_HTTP_Client-cyan">\n </a>\n <a href="https://github.com/jawah/wassima">\n <img alt="Static Badge" src="https://img.shields.io/badge/Wassima-Certifi_Killer-cyan">\n </a>\n</p>\n<p align="center">\n <sup><i>In other language (unofficial port - by the community)</i></sup><br>\n <a href="https://github.com/nickspring/charset-normalizer-rs">\n <img alt="Static Badge" src="https://img.shields.io/badge/Rust-red">\n </a>\n</p>\n\n> A library that helps you read text from an unknown charset encoding.<br /> Motivated by `chardet`,\n> I'm trying to resolve the issue by taking a new approach.\n> All IANA character set names for which the Python core library provides codecs are supported.\n\n<p align="center">\n >>>>> <a href="https://charsetnormalizerweb.ousret.now.sh" target="_blank">👉 Try Me Online Now, Then Adopt Me 👈 </a> <<<<<\n</p>\n\nThis project offers you an alternative to **Universal Charset Encoding Detector**, 
also known as **Chardet**.\n\n| Feature | [Chardet](https://github.com/chardet/chardet) | Charset Normalizer | [cChardet](https://github.com/PyYoshi/cChardet) |\n|--------------------------------------------------|:---------------------------------------------:|:--------------------------------------------------------------------------------------------------:|:-----------------------------------------------:|\n| `Fast` | ❌ | ✅ | ✅ |\n| `Universal**` | ❌ | ✅ | ❌ |\n| `Reliable` **without** distinguishable standards | ❌ | ✅ | ✅ |\n| `Reliable` **with** distinguishable standards | ✅ | ✅ | ✅ |\n| `License` | LGPL-2.1<br>_restrictive_ | MIT | MPL-1.1<br>_restrictive_ |\n| `Native Python` | ✅ | ✅ | ❌ |\n| `Detect spoken language` | ❌ | ✅ | N/A |\n| `UnicodeDecodeError Safety` | ❌ | ✅ | ❌ |\n| `Whl Size (min)` | 193.6 kB | 42 kB | ~200 kB |\n| `Supported Encoding` | 33 | 🎉 [99](https://charset-normalizer.readthedocs.io/en/latest/user/support.html#supported-encodings) | 40 |\n\n<p align="center">\n<img src="https://i.imgflip.com/373iay.gif" alt="Reading Normalized Text" width="226"/><img src="https://media.tenor.com/images/c0180f70732a18b4965448d33adba3d0/tenor.gif" alt="Cat Reading Text" width="200"/>\n</p>\n\n*\*\* : They are clearly using specific code for a specific encoding even if covering most of used one*<br>\n\n## ⚡ Performance\n\nThis package offer better performance than its counterpart Chardet. Here are some numbers.\n\n| Package | Accuracy | Mean per file (ms) | File per sec (est) |\n|-----------------------------------------------|:--------:|:------------------:|:------------------:|\n| [chardet](https://github.com/chardet/chardet) | 86 % | 63 ms | 16 file/sec |\n| charset-normalizer | **98 %** | **10 ms** | 100 file/sec |\n\n| Package | 99th percentile | 95th percentile | 50th percentile |\n|-----------------------------------------------|:---------------:|:---------------:|:---------------:|\n| [chardet](https://github.com/chardet/chardet) | 265 ms | 71 ms | 7 ms |\n| charset-normalizer | 100 ms | 50 ms | 5 ms |\n\n_updated as of december 2024 using CPython 3.12_\n\nChardet's performance on larger file (1MB+) are very poor. Expect huge difference on large payload.\n\n> Stats are generated using 400+ files using default parameters. More details on used files, see GHA workflows.\n> And yes, these results might change at any time. The dataset can be updated to include more files.\n> The actual delays heavily depends on your CPU capabilities. The factors should remain the same.\n> Keep in mind that the stats are generous and that Chardet accuracy vs our is measured using Chardet initial capability\n> (e.g. Supported Encoding) Challenge-them if you want.\n\n## ✨ Installation\n\nUsing pip:\n\n```sh\npip install charset-normalizer -U\n```\n\n## 🚀 Basic Usage\n\n### CLI\nThis package comes with a CLI.\n\n```\nusage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]\n file [file ...]\n\nThe Real First Universal Charset Detector. Discover originating encoding used\non text file. Normalize text to unicode.\n\npositional arguments:\n files File(s) to be analysed\n\noptional arguments:\n -h, --help show this help message and exit\n -v, --verbose Display complementary information about file if any.\n Stdout will contain logs about the detection process.\n -a, --with-alternative\n Output complementary possibilities if any. Top-level\n JSON WILL be a list.\n -n, --normalize Permit to normalize input file. 
If not set, program\n does not write anything.\n -m, --minimal Only output the charset detected to STDOUT. Disabling\n JSON output.\n -r, --replace Replace file when trying to normalize it instead of\n creating a new one.\n -f, --force Replace file without asking if you are sure, use this\n flag with caution.\n -t THRESHOLD, --threshold THRESHOLD\n Define a custom maximum amount of chaos allowed in\n decoded content. 0. <= chaos <= 1.\n --version Show version information and exit.\n```\n\n```bash\nnormalizer ./data/sample.1.fr.srt\n```\n\nor\n\n```bash\npython -m charset_normalizer ./data/sample.1.fr.srt\n```\n\n🎉 Since version 1.4.0 the CLI produce easily usable stdout result in JSON format.\n\n```json\n{\n "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",\n "encoding": "cp1252",\n "encoding_aliases": [\n "1252",\n "windows_1252"\n ],\n "alternative_encodings": [\n "cp1254",\n "cp1256",\n "cp1258",\n "iso8859_14",\n "iso8859_15",\n "iso8859_16",\n "iso8859_3",\n "iso8859_9",\n "latin_1",\n "mbcs"\n ],\n "language": "French",\n "alphabets": [\n "Basic Latin",\n "Latin-1 Supplement"\n ],\n "has_sig_or_bom": false,\n "chaos": 0.149,\n "coherence": 97.152,\n "unicode_path": null,\n "is_preferred": true\n}\n```\n\n### Python\n*Just print out normalized text*\n```python\nfrom charset_normalizer import from_path\n\nresults = from_path('./my_subtitle.srt')\n\nprint(str(results.best()))\n```\n\n*Upgrade your code without effort*\n```python\nfrom charset_normalizer import detect\n```\n\nThe above code will behave the same as **chardet**. We ensure that we offer the best (reasonable) BC result possible.\n\nSee the docs for advanced usage : [readthedocs.io](https://charset-normalizer.readthedocs.io/en/latest/)\n\n## 😇 Why\n\nWhen I started using Chardet, I noticed that it was not suited to my expectations, and I wanted to propose a\nreliable alternative using a completely different method. Also! I never back down on a good challenge!\n\nI **don't care** about the **originating charset** encoding, because **two different tables** can\nproduce **two identical rendered string.**\nWhat I want is to get readable text, the best I can.\n\nIn a way, **I'm brute forcing text decoding.** How cool is that ? 😎\n\nDon't confuse package **ftfy** with charset-normalizer or chardet. ftfy goal is to repair Unicode string whereas charset-normalizer to convert raw file in unknown encoding to unicode.\n\n## 🍰 How\n\n - Discard all charset encoding table that could not fit the binary content.\n - Measure noise, or the mess once opened (by chunks) with a corresponding charset encoding.\n - Extract matches with the lowest mess detected.\n - Additionally, we measure coherence / probe for a language.\n\n**Wait a minute**, what is noise/mess and coherence according to **YOU ?**\n\n*Noise :* I opened hundred of text files, **written by humans**, with the wrong encoding table. **I observed**, then\n**I established** some ground rules about **what is obvious** when **it seems like** a mess (aka. defining noise in rendered text).\n I know that my interpretation of what is noise is probably incomplete, feel free to contribute in order to\n improve or rewrite it.\n\n*Coherence :* For each language there is on earth, we have computed ranked letter appearance occurrences (the best we can). So I thought\nthat intel is worth something here. 
So I use those records against decoded text to check if I can detect intelligent design.\n\n## ⚡ Known limitations\n\n - Language detection is unreliable when text contains two or more languages sharing identical letters. (eg. HTML (english tags) + Turkish content (Sharing Latin characters))\n - Every charset detector heavily depends on sufficient content. In common cases, do not bother run detection on very tiny content.\n\n## ⚠️ About Python EOLs\n\n**If you are running:**\n\n- Python >=2.7,<3.5: Unsupported\n- Python 3.5: charset-normalizer < 2.1\n- Python 3.6: charset-normalizer < 3.1\n- Python 3.7: charset-normalizer < 4.0\n\nUpgrade your Python interpreter as soon as possible.\n\n## 👤 Contributing\n\nContributions, issues and feature requests are very much welcome.<br />\nFeel free to check [issues page](https://github.com/ousret/charset_normalizer/issues) if you want to contribute.\n\n## 📝 License\n\nCopyright © [Ahmed TAHRI @Ousret](https://github.com/Ousret).<br />\nThis project is [MIT](https://github.com/Ousret/charset_normalizer/blob/master/LICENSE) licensed.\n\nCharacters frequencies used in this project © 2012 [Denny Vrandečić](http://simia.net/letters/)\n\n## 💼 For Enterprise\n\nProfessional support for charset-normalizer is available as part of the [Tidelift\nSubscription][1]. Tidelift gives software development teams a single source for\npurchasing and maintaining their software, with professional grade assurances\nfrom the experts who know it best, while seamlessly integrating with existing\ntools.\n\n[1]: https://tidelift.com/subscription/pkg/pypi-charset-normalizer?utm_source=pypi-charset-normalizer&utm_medium=readme\n\n[](https://www.bestpractices.dev/projects/7297)\n\n# Changelog\nAll notable changes to charset-normalizer will be documented in this file. This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).\n\n## [3.4.2](https://github.com/Ousret/charset_normalizer/compare/3.4.1...3.4.2) (2025-05-02)\n\n### Fixed\n- Addressed the DeprecationWarning in our CLI regarding `argparse.FileType` by backporting the target class into the package. (#591)\n- Improved the overall reliability of the detector with CJK Ideographs. (#605) (#587)\n\n### Changed\n- Optional mypyc compilation upgraded to version 1.15 for Python >= 3.8\n\n## [3.4.1](https://github.com/Ousret/charset_normalizer/compare/3.4.0...3.4.1) (2024-12-24)\n\n### Changed\n- Project metadata are now stored using `pyproject.toml` instead of `setup.cfg` using setuptools as the build backend.\n- Enforce annotation delayed loading for a simpler and consistent types in the project.\n- Optional mypyc compilation upgraded to version 1.14 for Python >= 3.8\n\n### Added\n- pre-commit configuration.\n- noxfile.\n\n### Removed\n- `build-requirements.txt` as per using `pyproject.toml` native build configuration.\n- `bin/integration.py` and `bin/serve.py` in favor of downstream integration test (see noxfile).\n- `setup.cfg` in favor of `pyproject.toml` metadata configuration.\n- Unused `utils.range_scan` function.\n\n### Fixed\n- Converting content to Unicode bytes may insert `utf_8` instead of preferred `utf-8`. 
(#572)\n- Deprecation warning "'count' is passed as positional argument" when converting to Unicode bytes on Python 3.13+\n\n## [3.4.0](https://github.com/Ousret/charset_normalizer/compare/3.3.2...3.4.0) (2024-10-08)\n\n### Added\n- Argument `--no-preemptive` in the CLI to prevent the detector to search for hints.\n- Support for Python 3.13 (#512)\n\n### Fixed\n- Relax the TypeError exception thrown when trying to compare a CharsetMatch with anything else than a CharsetMatch.\n- Improved the general reliability of the detector based on user feedbacks. (#520) (#509) (#498) (#407) (#537)\n- Declared charset in content (preemptive detection) not changed when converting to utf-8 bytes. (#381)\n\n## [3.3.2](https://github.com/Ousret/charset_normalizer/compare/3.3.1...3.3.2) (2023-10-31)\n\n### Fixed\n- Unintentional memory usage regression when using large payload that match several encoding (#376)\n- Regression on some detection case showcased in the documentation (#371)\n\n### Added\n- Noise (md) probe that identify malformed arabic representation due to the presence of letters in isolated form (credit to my wife)\n\n## [3.3.1](https://github.com/Ousret/charset_normalizer/compare/3.3.0...3.3.1) (2023-10-22)\n\n### Changed\n- Optional mypyc compilation upgraded to version 1.6.1 for Python >= 3.8\n- Improved the general detection reliability based on reports from the community\n\n## [3.3.0](https://github.com/Ousret/charset_normalizer/compare/3.2.0...3.3.0) (2023-09-30)\n\n### Added\n- Allow to execute the CLI (e.g. normalizer) through `python -m charset_normalizer.cli` or `python -m charset_normalizer`\n- Support for 9 forgotten encoding that are supported by Python but unlisted in `encoding.aliases` as they have no alias (#323)\n\n### Removed\n- (internal) Redundant utils.is_ascii function and unused function is_private_use_only\n- (internal) charset_normalizer.assets is moved inside charset_normalizer.constant\n\n### Changed\n- (internal) Unicode code blocks in constants are updated using the latest v15.0.0 definition to improve detection\n- Optional mypyc compilation upgraded to version 1.5.1 for Python >= 3.8\n\n### Fixed\n- Unable to properly sort CharsetMatch when both chaos/noise and coherence were close due to an unreachable condition in \_\_lt\_\_ (#350)\n\n## [3.2.0](https://github.com/Ousret/charset_normalizer/compare/3.1.0...3.2.0) (2023-06-07)\n\n### Changed\n- Typehint for function `from_path` no longer enforce `PathLike` as its first argument\n- Minor improvement over the global detection reliability\n\n### Added\n- Introduce function `is_binary` that relies on main capabilities, and optimized to detect binaries\n- Propagate `enable_fallback` argument throughout `from_bytes`, `from_path`, and `from_fp` that allow a deeper control over the detection (default True)\n- Explicit support for Python 3.12\n\n### Fixed\n- Edge case detection failure where a file would contain 'very-long' camel cased word (Issue #289)\n\n## [3.1.0](https://github.com/Ousret/charset_normalizer/compare/3.0.1...3.1.0) (2023-03-06)\n\n### Added\n- Argument `should_rename_legacy` for legacy function `detect` and disregard any new arguments without errors (PR #262)\n\n### Removed\n- Support for Python 3.6 (PR #260)\n\n### Changed\n- Optional speedup provided by mypy/c 1.0.1\n\n## [3.0.1](https://github.com/Ousret/charset_normalizer/compare/3.0.0...3.0.1) (2022-11-18)\n\n### Fixed\n- Multi-bytes cutter/chunk generator did not always cut correctly (PR #233)\n\n### Changed\n- Speedup provided by mypy/c 0.990 on 
Python >= 3.7\n\n## [3.0.0](https://github.com/Ousret/charset_normalizer/compare/2.1.1...3.0.0) (2022-10-20)\n\n### Added\n- Extend the capability of explain=True when cp_isolation contains at most two entries (min one), will log in details of the Mess-detector results\n- Support for alternative language frequency set in charset_normalizer.assets.FREQUENCIES\n- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio\n- `normalizer --version` now specify if current version provide extra speedup (meaning mypyc compilation whl)\n\n### Changed\n- Build with static metadata using 'build' frontend\n- Make the language detection stricter\n- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1\n\n### Fixed\n- CLI with opt --normalize fail when using full path for files\n- TooManyAccentuatedPlugin induce false positive on the mess detection when too few alpha character have been fed to it\n- Sphinx warnings when generating the documentation\n\n### Removed\n- Coherence detector no longer return 'Simple English' instead return 'English'\n- Coherence detector no longer return 'Classical Chinese' instead return 'Chinese'\n- Breaking: Method `first()` and `best()` from CharsetMatch\n- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (is unreliable/conflict with ASCII)\n- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches\n- Breaking: Top-level function `normalize`\n- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch\n- Support for the backport `unicodedata2`\n\n## [3.0.0rc1](https://github.com/Ousret/charset_normalizer/compare/3.0.0b2...3.0.0rc1) (2022-10-18)\n\n### Added\n- Extend the capability of explain=True when cp_isolation contains at most two entries (min one), will log in details of the Mess-detector results\n- Support for alternative language frequency set in charset_normalizer.assets.FREQUENCIES\n- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio\n\n### Changed\n- Build with static metadata using 'build' frontend\n- Make the language detection stricter\n\n### Fixed\n- CLI with opt --normalize fail when using full path for files\n- TooManyAccentuatedPlugin induce false positive on the mess detection when too few alpha character have been fed to it\n\n### Removed\n- Coherence detector no longer return 'Simple English' instead return 'English'\n- Coherence detector no longer return 'Classical Chinese' instead return 'Chinese'\n\n## [3.0.0b2](https://github.com/Ousret/charset_normalizer/compare/3.0.0b1...3.0.0b2) (2022-08-21)\n\n### Added\n- `normalizer --version` now specify if current version provide extra speedup (meaning mypyc compilation whl)\n\n### Removed\n- Breaking: Method `first()` and `best()` from CharsetMatch\n- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (is unreliable/conflict with ASCII)\n\n### Fixed\n- Sphinx warnings when generating the documentation\n\n## [3.0.0b1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...3.0.0b1) (2022-08-15)\n\n### Changed\n- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1\n\n### Removed\n- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches\n- Breaking: Top-level function 
`normalize`\n- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch\n- Support for the backport `unicodedata2`\n\n## [2.1.1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...2.1.1) (2022-08-19)\n\n### Deprecated\n- Function `normalize` scheduled for removal in 3.0\n\n### Changed\n- Removed a useless call to decode in fn is_unprintable (#206)\n\n### Fixed\n- Third-party library (i18n xgettext) crashing because utf_8 (PEP 263) with an underscore was not recognized, from [@aleksandernovikov](https://github.com/aleksandernovikov) (#204)\n\n## [2.1.0](https://github.com/Ousret/charset_normalizer/compare/2.0.12...2.1.0) (2022-06-19)\n\n### Added\n- Output the Unicode table version when running the CLI with `--version` (PR #194)\n\n### Changed\n- Re-use the decoded buffer for single-byte character sets, from [@nijel](https://github.com/nijel) (PR #175)\n- Fixed some performance bottlenecks, from [@deedy5](https://github.com/deedy5) (PR #183)\n\n### Fixed\n- Work around a potential CPython bug where the Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1, is not acknowledged as a space (PR #175)\n- CLI default threshold aligned with the API threshold, from [@oleksandr-kuzmenko](https://github.com/oleksandr-kuzmenko) (PR #181)\n\n### Removed\n- Support for Python 3.5 (PR #192)\n\n### Deprecated\n- Use of the backport unicodedata from `unicodedata2`, as Python is quickly catching up; scheduled for removal in 3.0 (PR #194)\n\n## [2.0.12](https://github.com/Ousret/charset_normalizer/compare/2.0.11...2.0.12) (2022-02-12)\n\n### Fixed\n- ASCII misdetection in rare cases (PR #170)\n\n## [2.0.11](https://github.com/Ousret/charset_normalizer/compare/2.0.10...2.0.11) (2022-01-30)\n\n### Added\n- Explicit support for Python 3.11 (PR #164)\n\n### Changed\n- The logging behavior has been completely reviewed and now uses only the TRACE and DEBUG levels (PR #163 #165)\n\n## [2.0.10](https://github.com/Ousret/charset_normalizer/compare/2.0.9...2.0.10) (2022-01-04)\n\n### Fixed\n- Fallback match entries might lead to UnicodeDecodeError for large byte sequences (PR #154)\n\n### Changed\n- Skip the language detection (CD) on ASCII (PR #155)\n\n## [2.0.9](https://github.com/Ousret/charset_normalizer/compare/2.0.8...2.0.9) (2021-12-03)\n\n### Changed\n- Moderated the logging impact (since 2.0.8) for specific environments (PR #147)\n\n### Fixed\n- Wrong logging level applied when setting the kwarg `explain` to True (PR #146)\n\n## [2.0.8](https://github.com/Ousret/charset_normalizer/compare/2.0.7...2.0.8) (2021-11-24)\n### Changed\n- Improved Vietnamese detection (PR #126)\n- MD improvement on trailing data and long foreign (non-pure Latin) data (PR #124)\n- Efficiency improvements in cd/alphabet_languages, from [@adbar](https://github.com/adbar) (PR #122)\n- Call sum() without an intermediary list, following PEP 289 recommendations, from [@adbar](https://github.com/adbar) (PR #129)\n- Code style as refactored by Sourcery-AI (PR #131)\n- Minor adjustment of the MD around European words (PR #133)\n- Remove and replace SRTs from assets / tests (PR #139)\n- Initialize the library logger with a `NullHandler` by default, from [@nmaynes](https://github.com/nmaynes) (PR #135)\n- Setting the kwarg `explain` to True will provisionally add (bounded to the function lifespan) a specific stream handler (PR #135)\n\n### Fixed\n- Fix large (misleading) sequences giving UnicodeDecodeError (PR #137)\n- Avoid using insignificantly small chunks (PR #137)\n\n### Added\n- Add and expose the function
`set_logging_handler` to configure a specific StreamHandler, from [@nmaynes](https://github.com/nmaynes) (PR #135)\n- Add `CHANGELOG.md` entries; the format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) (PR #141)\n\n## [2.0.7](https://github.com/Ousret/charset_normalizer/compare/2.0.6...2.0.7) (2021-10-11)\n### Added\n- Add support for Kazakh (Cyrillic) language detection (PR #109)\n\n### Changed\n- Further improve inferring the language from a given single-byte code page (PR #112)\n- Vainly trying to leverage PEP 263 when PEP 3120 is not supported (PR #116)\n- Refactoring for potential performance improvements in loops, from [@adbar](https://github.com/adbar) (PR #113)\n- Various detection improvements (MD+CD) (PR #117)\n\n### Removed\n- Remove redundant logging entry about detected language(s) (PR #115)\n\n### Fixed\n- Fix a minor inconsistency between Python 3.5 and other versions regarding language detection (PR #117 #102)\n\n## [2.0.6](https://github.com/Ousret/charset_normalizer/compare/2.0.5...2.0.6) (2021-09-18)\n### Fixed\n- Unforeseen regression causing the loss of backward compatibility with some older minor versions of Python 3.5.x (PR #100)\n- Fix CLI crash when using --minimal output in certain cases (PR #103)\n\n### Changed\n- Minor improvement to the detection efficiency (less than 1%) (PR #106 #101)\n\n## [2.0.5](https://github.com/Ousret/charset_normalizer/compare/2.0.4...2.0.5) (2021-09-14)\n### Changed\n- The project now complies with flake8, mypy, isort and black to ensure a better overall quality (PR #81)\n- The BC support with v1.x was improved; the old staticmethods are restored (PR #82)\n- The Unicode detection is slightly improved (PR #93)\n- Add syntactic sugar \_\_bool\_\_ for the results CharsetMatches list-container (PR #91)\n\n### Removed\n- The project no longer raises a warning on tiny content given for detection; it is simply logged as a warning instead (PR #92)\n\n### Fixed\n- In some rare cases, the chunk extractor could cut in the middle of a multi-byte character and mislead the mess detection (PR #95)\n- Some rare 'space' characters could trip up the UnprintablePlugin/Mess detection (PR #96)\n- The MANIFEST.in was not exhaustive (PR #78)\n\n## [2.0.4](https://github.com/Ousret/charset_normalizer/compare/2.0.3...2.0.4) (2021-07-30)\n### Fixed\n- The CLI no longer raises an unexpected exception when no encoding has been found (PR #70)\n- Fix accessing the 'alphabets' property when the payload contains surrogate characters (PR #68)\n- The logger could mislead (explain=True) on detected languages and the impact of one MBCS match (PR #72)\n- Submatch factoring could be wrong in rare edge cases (PR #72)\n- Multiple files given to the CLI were ignored (after the first path) when publishing results to STDOUT (PR #72)\n- Fix line endings from CRLF to LF for certain project files (PR #67)\n\n### Changed\n- Adjust the MD to lower the sensitivity, thus improving the global detection reliability (PR #69 #76)\n- Allow fallback on a specified encoding if any (PR #71)\n\n## [2.0.3](https://github.com/Ousret/charset_normalizer/compare/2.0.2...2.0.3) (2021-07-16)\n### Changed\n- Part of the detection mechanism has been improved to be less sensitive, resulting in more accurate detection results, especially for ASCII. (PR #63)\n- According to community wishes, the detection will fall back on ASCII or UTF-8 in a last-resort case.
(PR #64)\n\n## [2.0.2](https://github.com/Ousret/charset_normalizer/compare/2.0.1...2.0.2) (2021-07-15)\n### Fixed\n- Empty/too-small JSON payload misdetection fixed. Report from [@tseaver](https://github.com/tseaver) (PR #59)\n\n### Changed\n- Don't inject unicodedata2 into sys.modules, from [@akx](https://github.com/akx) (PR #57)\n\n## [2.0.1](https://github.com/Ousret/charset_normalizer/compare/2.0.0...2.0.1) (2021-07-13)\n### Fixed\n- Make it work where there isn't a filesystem available, dropping the asset frequencies.json. Report from [@sethmlarson](https://github.com/sethmlarson). (PR #55)\n- Using explain=False permanently disabled the verbose output in the current runtime (PR #47)\n- One log entry (language target preemptive) was not shown in the logs when using explain=True (PR #47)\n- Fix undesired exception (ValueError) on getitem of a CharsetMatches instance (PR #52)\n\n### Changed\n- The default argument values of the public function normalize were not aligned with from_bytes (PR #53)\n\n### Added\n- You may now use charset aliases in the cp_isolation and cp_exclusion arguments (PR #47)\n\n## [2.0.0](https://github.com/Ousret/charset_normalizer/compare/1.4.1...2.0.0) (2021-07-02)\n### Changed\n- 4 to 5 times faster than the previous 1.4.0 release. At least 2x faster than Chardet.\n- Emphasis has been placed on UTF-8 detection, which should perform nearly instantaneously.\n- The backward compatibility with Chardet has been greatly improved. The legacy detect function returns an identical charset name whenever possible.\n- The detection mechanism has been slightly improved; Turkish content is now detected correctly (most of the time)\n- The program has been rewritten to improve readability and maintainability (now using static typing).\n- utf_7 detection has been reinstated.\n\n### Removed\n- This package no longer requires anything when used with Python 3.5 (dropped cached_property)\n- Removed support for these languages: Catalan, Esperanto, Kazakh, Basque, Volapük, Azeri, Galician, Nynorsk, Macedonian, and Serbo-Croatian.\n- The exception hook on UnicodeDecodeError has been removed.\n\n### Deprecated\n- The methods coherence_non_latin, w_counter and chaos_secondary_pass of the class CharsetMatch are now deprecated and scheduled for removal in v3.0\n\n### Fixed\n- The CLI output used the relative path of the file(s); it should be absolute.\n\n## [1.4.1](https://github.com/Ousret/charset_normalizer/compare/1.4.0...1.4.1) (2021-05-28)\n### Fixed\n- Logger configuration/usage no longer conflicts with others (PR #44)\n\n## [1.4.0](https://github.com/Ousret/charset_normalizer/compare/1.3.9...1.4.0) (2021-05-21)\n### Removed\n- Using standard logging instead of the loguru package.\n- Dropping the nose test framework in favor of the maintained pytest.\n- Chose not to use the dragonmapper package to help with gibberish Chinese/CJK text.\n- Require cached_property only for Python 3.5 due to a constraint; dropped for every other interpreter version.\n- Stop supporting UTF-7 that does not contain a SIG.\n- Dropping PrettyTable; replaced with pure JSON output in the CLI.\n\n### Fixed\n- The BOM marker in a CharsetNormalizerMatch instance could be False in rare cases even if obviously present, due to the sub-match factoring process.\n- Not searching properly for the BOM when trying the utf32/16 parent codec.\n\n### Changed\n- Improving the package's final size by compressing frequencies.json.\n- Huge improvement on the largest payloads.\n\n### Added\n- The CLI now produces JSON-consumable output.\n- Return ASCII if the given sequences fit.
Given reasonable confidence.\n\n## [1.3.9](https://github.com/Ousret/charset_normalizer/compare/1.3.8...1.3.9) (2021-05-13)\n\n### Fixed\n- In some very rare cases, you may end up getting encode/decode errors due to a bad bytes payload (PR #40)\n\n## [1.3.8](https://github.com/Ousret/charset_normalizer/compare/1.3.7...1.3.8) (2021-05-12)\n\n### Fixed\n- Empty given payload for detection may cause an exception if trying to access the `alphabets` property. (PR #39)\n\n## [1.3.7](https://github.com/Ousret/charset_normalizer/compare/1.3.6...1.3.7) (2021-05-12)\n\n### Fixed\n- The legacy detect function should return UTF-8-SIG if sig is present in the payload. (PR #38)\n\n## [1.3.6](https://github.com/Ousret/charset_normalizer/compare/1.3.5...1.3.6) (2021-02-09)\n\n### Changed\n- Amend the previous release to allow prettytable 2.0 (PR #35)\n\n## [1.3.5](https://github.com/Ousret/charset_normalizer/compare/1.3.4...1.3.5) (2021-02-08)\n\n### Fixed\n- Fix error while using the package with a python pre-release interpreter (PR #33)\n\n### Changed\n- Dependencies refactoring, constraints revised.\n\n### Added\n- Add python 3.9 and 3.10 to the supported interpreters\n\nMIT License\n\nCopyright (c) 2025 TAHRI Ahmed R.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the "Software"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\METADATA
|
METADATA
|
Other
| 36,474 | 0.95 | 0.102599 | 0.258123 |
node-utils
| 162 |
2025-03-26T11:45:54.682640
|
MIT
| false |
5934c3f4fdc78b5c5da4e81afe331f88
|
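The changelog quoted above repeatedly references the `from_bytes` API, the `explain` keyword, the `cp_isolation`/`cp_exclusion` arguments (which accept charset aliases since 2.0.1) and the chardet-compatible legacy `detect` function. The sketch below is illustrative only and is not taken from the record above; the keyword names shown reflect the 3.x signatures as best understood and should be treated as assumptions.

```python
# Illustrative sketch (not part of the packaged METADATA above): exercising the
# charset_normalizer entry points mentioned in the changelog. Keyword names such as
# cp_isolation and explain are assumptions based on the 3.x from_bytes() signature.
from charset_normalizer import detect, from_bytes

payload = "Bonjour, voici un exemple accentué pour la détection.".encode("cp1252")

# from_bytes() returns a CharsetMatches container; best() picks the most plausible match.
results = from_bytes(payload, cp_isolation=["cp1252", "utf_8"], explain=False)
best = results.best()
if best is not None:
    print(best.encoding, "->", str(best))

# Chardet-compatible shim kept for backward compatibility.
print(detect(payload))
```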
../../Scripts/normalizer.exe,sha256=NQ7bIsSQu0gaeWi_HF30B9yChP0R-OMK1eQRkBOxVOo,108428\ncharset_normalizer-3.4.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\ncharset_normalizer-3.4.2.dist-info/METADATA,sha256=WneNNyl9QvsRZYzK1FeEC6Wwag4iIFoTAoevPgpZFTY,36474\ncharset_normalizer-3.4.2.dist-info/RECORD,,\ncharset_normalizer-3.4.2.dist-info/WHEEL,sha256=XW6WKtYlnajQfCPWECQ2MFigACDlBYXsBDvusy92CnU,101\ncharset_normalizer-3.4.2.dist-info/entry_points.txt,sha256=8C-Y3iXIfyXQ83Tpir2B8t-XLJYpxF5xbb38d_js-h4,65\ncharset_normalizer-3.4.2.dist-info/licenses/LICENSE,sha256=GFd0hdNwTxpHne2OVzwJds_tMV_S_ReYP6mI2kwvcNE,1092\ncharset_normalizer-3.4.2.dist-info/top_level.txt,sha256=7ASyzePr8_xuZWJsnqJjIBtyV8vhEo0wBCv1MPRRi3Q,19\ncharset_normalizer/__init__.py,sha256=0NT8MHi7SKq3juMqYfOdrkzjisK0L73lneNHH4qaUAs,1638\ncharset_normalizer/__main__.py,sha256=2sj_BS6H0sU25C1bMqz9DVwa6kOK9lchSEbSU-_iu7M,115\ncharset_normalizer/__pycache__/__init__.cpython-313.pyc,,\ncharset_normalizer/__pycache__/__main__.cpython-313.pyc,,\ncharset_normalizer/__pycache__/api.cpython-313.pyc,,\ncharset_normalizer/__pycache__/cd.cpython-313.pyc,,\ncharset_normalizer/__pycache__/constant.cpython-313.pyc,,\ncharset_normalizer/__pycache__/legacy.cpython-313.pyc,,\ncharset_normalizer/__pycache__/md.cpython-313.pyc,,\ncharset_normalizer/__pycache__/models.cpython-313.pyc,,\ncharset_normalizer/__pycache__/utils.cpython-313.pyc,,\ncharset_normalizer/__pycache__/version.cpython-313.pyc,,\ncharset_normalizer/api.py,sha256=2a0p2Gnhbdo9O6C04CNxTSN23fIbgOF20nxb0pWPNFM,23285\ncharset_normalizer/cd.py,sha256=uq8nVxRpR6Guc16ACvOWtL8KO3w7vYaCh8hHisuOyTg,12917\ncharset_normalizer/cli/__init__.py,sha256=d9MUx-1V_qD3x9igIy4JT4oC5CU0yjulk7QyZWeRFhg,144\ncharset_normalizer/cli/__main__.py,sha256=-pdJCyPywouPyFsC8_eTSgTmvh1YEvgjsvy1WZ0XjaA,13027\ncharset_normalizer/cli/__pycache__/__init__.cpython-313.pyc,,\ncharset_normalizer/cli/__pycache__/__main__.cpython-313.pyc,,\ncharset_normalizer/constant.py,sha256=mCJmYzpBU27Ut9kiNWWoBbhhxQ-aRVw3K7LSwoFwBGI,44728\ncharset_normalizer/legacy.py,sha256=NgK-8ZQa_M9FHgQjdNSiYzMaB332QGuElZSfCf2y2sQ,2351\ncharset_normalizer/md.cp313-win_amd64.pyd,sha256=dshwD_yYO77AdGjjVAObIeJeSefBn0PXNDmUyQ1Lt78,10752\ncharset_normalizer/md.py,sha256=LSuW2hNgXSgF7JGdRapLAHLuj6pABHiP85LTNAYmu7c,20780\ncharset_normalizer/md__mypyc.cp313-win_amd64.pyd,sha256=6cMIJF_gHTPvkscCYRWgqTD9hl--G_zvqR52xqoyoLM,125440\ncharset_normalizer/models.py,sha256=ZR2PE-fqf6dASZfqdE5Uhkmr0o1MciSdXOjuNqwkmvg,12754\ncharset_normalizer/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\ncharset_normalizer/utils.py,sha256=XtWIQeOuz7cnGebMzyi4Vvi1JtA84QBSIeR9PDzF7pw,12584\ncharset_normalizer/version.py,sha256=wtpyUZ7M57rCLclP3QjzRD0Nj2hvnMOzLZI-vwfTdWs,123\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\RECORD
|
RECORD
|
Other
| 2,775 | 0.7 | 0 | 0 |
vue-tools
| 362 |
2024-08-02T05:25:01.055413
|
MIT
| false |
1e46a1ab419d757796c48c2b2c565d4f
|
charset_normalizer\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\top_level.txt
|
top_level.txt
|
Other
| 19 | 0.5 | 0 | 0 |
awesome-app
| 866 |
2024-06-01T12:52:31.056587
|
GPL-3.0
| false |
2272ed22c63ebee3f83cd23e68ee7407
|
Wheel-Version: 1.0\nGenerator: setuptools (80.1.0)\nRoot-Is-Purelib: false\nTag: cp313-cp313-win_amd64\n\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\WHEEL
|
WHEEL
|
Other
| 101 | 0.7 | 0 | 0 |
node-utils
| 780 |
2024-05-19T03:01:55.681839
|
MIT
| false |
5fd3e273201118821d4cffda0d7c3549
|
MIT License\n\nCopyright (c) 2025 TAHRI Ahmed R.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the "Software"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
|
.venv\Lib\site-packages\charset_normalizer-3.4.2.dist-info\licenses\LICENSE
|
LICENSE
|
Other
| 1,092 | 0.7 | 0 | 0 |
react-lib
| 853 |
2024-05-17T20:15:12.055749
|
BSD-3-Clause
| false |
48178f3fc1374ad7e830412f812bde05
|
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nimport re\nimport sys\nimport os\n\nfrom .ansi import AnsiFore, AnsiBack, AnsiStyle, Style, BEL\nfrom .winterm import enable_vt_processing, WinTerm, WinColor, WinStyle\nfrom .win32 import windll, winapi_test\n\n\nwinterm = None\nif windll is not None:\n winterm = WinTerm()\n\n\nclass StreamWrapper(object):\n '''\n Wraps a stream (such as stdout), acting as a transparent proxy for all\n attribute access apart from method 'write()', which is delegated to our\n Converter instance.\n '''\n def __init__(self, wrapped, converter):\n # double-underscore everything to prevent clashes with names of\n # attributes on the wrapped stream object.\n self.__wrapped = wrapped\n self.__convertor = converter\n\n def __getattr__(self, name):\n return getattr(self.__wrapped, name)\n\n def __enter__(self, *args, **kwargs):\n # special method lookup bypasses __getattr__/__getattribute__, see\n # https://stackoverflow.com/questions/12632894/why-doesnt-getattr-work-with-exit\n # thus, contextlib magic methods are not proxied via __getattr__\n return self.__wrapped.__enter__(*args, **kwargs)\n\n def __exit__(self, *args, **kwargs):\n return self.__wrapped.__exit__(*args, **kwargs)\n\n def __setstate__(self, state):\n self.__dict__ = state\n\n def __getstate__(self):\n return self.__dict__\n\n def write(self, text):\n self.__convertor.write(text)\n\n def isatty(self):\n stream = self.__wrapped\n if 'PYCHARM_HOSTED' in os.environ:\n if stream is not None and (stream is sys.__stdout__ or stream is sys.__stderr__):\n return True\n try:\n stream_isatty = stream.isatty\n except AttributeError:\n return False\n else:\n return stream_isatty()\n\n @property\n def closed(self):\n stream = self.__wrapped\n try:\n return stream.closed\n # AttributeError in the case that the stream doesn't support being closed\n # ValueError for the case that the stream has already been detached when atexit runs\n except (AttributeError, ValueError):\n return True\n\n\nclass AnsiToWin32(object):\n '''\n Implements a 'write()' method which, on Windows, will strip ANSI character\n sequences from the text, and if outputting to a tty, will convert them into\n win32 function calls.\n '''\n ANSI_CSI_RE = re.compile('\001?\033\\[((?:\\d|;)*)([a-zA-Z])\002?') # Control Sequence Introducer\n ANSI_OSC_RE = re.compile('\001?\033\\]([^\a]*)(\a)\002?') # Operating System Command\n\n def __init__(self, wrapped, convert=None, strip=None, autoreset=False):\n # The wrapped stream (normally sys.stdout or sys.stderr)\n self.wrapped = wrapped\n\n # should we reset colors to defaults after every .write()\n self.autoreset = autoreset\n\n # create the proxy wrapping our output stream\n self.stream = StreamWrapper(wrapped, self)\n\n on_windows = os.name == 'nt'\n # We test if the WinAPI works, because even if we are on Windows\n # we may be using a terminal that doesn't support the WinAPI\n # (e.g. Cygwin Terminal). 
In this case it's up to the terminal\n # to support the ANSI codes.\n conversion_supported = on_windows and winapi_test()\n try:\n fd = wrapped.fileno()\n except Exception:\n fd = -1\n system_has_native_ansi = not on_windows or enable_vt_processing(fd)\n have_tty = not self.stream.closed and self.stream.isatty()\n need_conversion = conversion_supported and not system_has_native_ansi\n\n # should we strip ANSI sequences from our output?\n if strip is None:\n strip = need_conversion or not have_tty\n self.strip = strip\n\n # should we should convert ANSI sequences into win32 calls?\n if convert is None:\n convert = need_conversion and have_tty\n self.convert = convert\n\n # dict of ansi codes to win32 functions and parameters\n self.win32_calls = self.get_win32_calls()\n\n # are we wrapping stderr?\n self.on_stderr = self.wrapped is sys.stderr\n\n def should_wrap(self):\n '''\n True if this class is actually needed. If false, then the output\n stream will not be affected, nor will win32 calls be issued, so\n wrapping stdout is not actually required. This will generally be\n False on non-Windows platforms, unless optional functionality like\n autoreset has been requested using kwargs to init()\n '''\n return self.convert or self.strip or self.autoreset\n\n def get_win32_calls(self):\n if self.convert and winterm:\n return {\n AnsiStyle.RESET_ALL: (winterm.reset_all, ),\n AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT),\n AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL),\n AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL),\n AnsiFore.BLACK: (winterm.fore, WinColor.BLACK),\n AnsiFore.RED: (winterm.fore, WinColor.RED),\n AnsiFore.GREEN: (winterm.fore, WinColor.GREEN),\n AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW),\n AnsiFore.BLUE: (winterm.fore, WinColor.BLUE),\n AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA),\n AnsiFore.CYAN: (winterm.fore, WinColor.CYAN),\n AnsiFore.WHITE: (winterm.fore, WinColor.GREY),\n AnsiFore.RESET: (winterm.fore, ),\n AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True),\n AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True),\n AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True),\n AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True),\n AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True),\n AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True),\n AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True),\n AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True),\n AnsiBack.BLACK: (winterm.back, WinColor.BLACK),\n AnsiBack.RED: (winterm.back, WinColor.RED),\n AnsiBack.GREEN: (winterm.back, WinColor.GREEN),\n AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW),\n AnsiBack.BLUE: (winterm.back, WinColor.BLUE),\n AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA),\n AnsiBack.CYAN: (winterm.back, WinColor.CYAN),\n AnsiBack.WHITE: (winterm.back, WinColor.GREY),\n AnsiBack.RESET: (winterm.back, ),\n AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True),\n AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True),\n AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True),\n AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True),\n AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True),\n AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True),\n AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True),\n AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True),\n }\n return dict()\n\n def write(self, text):\n if self.strip or self.convert:\n self.write_and_convert(text)\n else:\n 
self.wrapped.write(text)\n self.wrapped.flush()\n if self.autoreset:\n self.reset_all()\n\n\n def reset_all(self):\n if self.convert:\n self.call_win32('m', (0,))\n elif not self.strip and not self.stream.closed:\n self.wrapped.write(Style.RESET_ALL)\n\n\n def write_and_convert(self, text):\n '''\n Write the given text to our wrapped stream, stripping any ANSI\n sequences from the text, and optionally converting them into win32\n calls.\n '''\n cursor = 0\n text = self.convert_osc(text)\n for match in self.ANSI_CSI_RE.finditer(text):\n start, end = match.span()\n self.write_plain_text(text, cursor, start)\n self.convert_ansi(*match.groups())\n cursor = end\n self.write_plain_text(text, cursor, len(text))\n\n\n def write_plain_text(self, text, start, end):\n if start < end:\n self.wrapped.write(text[start:end])\n self.wrapped.flush()\n\n\n def convert_ansi(self, paramstring, command):\n if self.convert:\n params = self.extract_params(command, paramstring)\n self.call_win32(command, params)\n\n\n def extract_params(self, command, paramstring):\n if command in 'Hf':\n params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';'))\n while len(params) < 2:\n # defaults:\n params = params + (1,)\n else:\n params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0)\n if len(params) == 0:\n # defaults:\n if command in 'JKm':\n params = (0,)\n elif command in 'ABCD':\n params = (1,)\n\n return params\n\n\n def call_win32(self, command, params):\n if command == 'm':\n for param in params:\n if param in self.win32_calls:\n func_args = self.win32_calls[param]\n func = func_args[0]\n args = func_args[1:]\n kwargs = dict(on_stderr=self.on_stderr)\n func(*args, **kwargs)\n elif command in 'J':\n winterm.erase_screen(params[0], on_stderr=self.on_stderr)\n elif command in 'K':\n winterm.erase_line(params[0], on_stderr=self.on_stderr)\n elif command in 'Hf': # cursor position - absolute\n winterm.set_cursor_position(params, on_stderr=self.on_stderr)\n elif command in 'ABCD': # cursor position - relative\n n = params[0]\n # A - up, B - down, C - forward, D - back\n x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command]\n winterm.cursor_adjust(x, y, on_stderr=self.on_stderr)\n\n\n def convert_osc(self, text):\n for match in self.ANSI_OSC_RE.finditer(text):\n start, end = match.span()\n text = text[:start] + text[end:]\n paramstring, command = match.groups()\n if command == BEL:\n if paramstring.count(";") == 1:\n params = paramstring.split(";")\n # 0 - change title and icon (we will only change title)\n # 1 - change icon (we don't support this)\n # 2 - change title\n if params[0] in '02':\n winterm.set_title(params[1])\n return text\n\n\n def flush(self):\n self.wrapped.flush()\n
|
.venv\Lib\site-packages\colorama\ansitowin32.py
|
ansitowin32.py
|
Python
| 11,128 | 0.95 | 0.220217 | 0.106838 |
react-lib
| 199 |
2023-09-03T14:14:57.874403
|
GPL-3.0
| false |
0ca18c79c4292fce0b3067b001b53b45
|
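A minimal usage sketch for the `AnsiToWin32` wrapper defined in ansitowin32.py above. The `AnsiToWin32(stream).stream` pattern mirrors colorama's documented idiom; the printed text is purely illustrative.

```python
# Hedged sketch: route writes through AnsiToWin32 so ANSI escapes are converted to
# win32 calls on legacy Windows consoles, or stripped/passed through elsewhere.
import sys

from colorama import AnsiToWin32, Fore, Style

wrapped = AnsiToWin32(sys.stdout).stream  # StreamWrapper proxy; write() goes via the converter
print(Fore.CYAN + "colour via the wrapped stream" + Style.RESET_ALL, file=wrapped)
```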
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nimport atexit\nimport contextlib\nimport sys\n\nfrom .ansitowin32 import AnsiToWin32\n\n\ndef _wipe_internal_state_for_tests():\n global orig_stdout, orig_stderr\n orig_stdout = None\n orig_stderr = None\n\n global wrapped_stdout, wrapped_stderr\n wrapped_stdout = None\n wrapped_stderr = None\n\n global atexit_done\n atexit_done = False\n\n global fixed_windows_console\n fixed_windows_console = False\n\n try:\n # no-op if it wasn't registered\n atexit.unregister(reset_all)\n except AttributeError:\n # python 2: no atexit.unregister. Oh well, we did our best.\n pass\n\n\ndef reset_all():\n if AnsiToWin32 is not None: # Issue #74: objects might become None at exit\n AnsiToWin32(orig_stdout).reset_all()\n\n\ndef init(autoreset=False, convert=None, strip=None, wrap=True):\n\n if not wrap and any([autoreset, convert, strip]):\n raise ValueError('wrap=False conflicts with any other arg=True')\n\n global wrapped_stdout, wrapped_stderr\n global orig_stdout, orig_stderr\n\n orig_stdout = sys.stdout\n orig_stderr = sys.stderr\n\n if sys.stdout is None:\n wrapped_stdout = None\n else:\n sys.stdout = wrapped_stdout = \\n wrap_stream(orig_stdout, convert, strip, autoreset, wrap)\n if sys.stderr is None:\n wrapped_stderr = None\n else:\n sys.stderr = wrapped_stderr = \\n wrap_stream(orig_stderr, convert, strip, autoreset, wrap)\n\n global atexit_done\n if not atexit_done:\n atexit.register(reset_all)\n atexit_done = True\n\n\ndef deinit():\n if orig_stdout is not None:\n sys.stdout = orig_stdout\n if orig_stderr is not None:\n sys.stderr = orig_stderr\n\n\ndef just_fix_windows_console():\n global fixed_windows_console\n\n if sys.platform != "win32":\n return\n if fixed_windows_console:\n return\n if wrapped_stdout is not None or wrapped_stderr is not None:\n # Someone already ran init() and it did stuff, so we won't second-guess them\n return\n\n # On newer versions of Windows, AnsiToWin32.__init__ will implicitly enable the\n # native ANSI support in the console as a side-effect. We only need to actually\n # replace sys.stdout/stderr if we're in the old-style conversion mode.\n new_stdout = AnsiToWin32(sys.stdout, convert=None, strip=None, autoreset=False)\n if new_stdout.convert:\n sys.stdout = new_stdout\n new_stderr = AnsiToWin32(sys.stderr, convert=None, strip=None, autoreset=False)\n if new_stderr.convert:\n sys.stderr = new_stderr\n\n fixed_windows_console = True\n\n@contextlib.contextmanager\ndef colorama_text(*args, **kwargs):\n init(*args, **kwargs)\n try:\n yield\n finally:\n deinit()\n\n\ndef reinit():\n if wrapped_stdout is not None:\n sys.stdout = wrapped_stdout\n if wrapped_stderr is not None:\n sys.stderr = wrapped_stderr\n\n\ndef wrap_stream(stream, convert, strip, autoreset, wrap):\n if wrap:\n wrapper = AnsiToWin32(stream,\n convert=convert, strip=strip, autoreset=autoreset)\n if wrapper.should_wrap():\n stream = wrapper.stream\n return stream\n\n\n# Use this for initial setup as well, to reduce code duplication\n_wipe_internal_state_for_tests()\n
|
.venv\Lib\site-packages\colorama\initialise.py
|
initialise.py
|
Python
| 3,325 | 0.95 | 0.239669 | 0.087912 |
node-utils
| 280 |
2023-08-23T02:03:38.037557
|
BSD-3-Clause
| false |
1a15620a349c61b3c9c135dfcd47bd73
|
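initialise.py above defines the public entry points `init`, `deinit`, `reinit`, `colorama_text` and `just_fix_windows_console`. A short, hedged sketch of how they are typically called:

```python
# Sketch of the entry points defined in initialise.py above; nothing here is new API,
# but the printed strings are of course illustrative.
from colorama import Fore, Style, colorama_text, deinit, init, just_fix_windows_console

just_fix_windows_console()   # no-op off Windows; enables ANSI handling on modern consoles

init(autoreset=True)         # wrap sys.stdout/sys.stderr; reset styles after every write
print(Fore.GREEN + "autoreset appends Style.RESET_ALL for you")
deinit()                     # restore the original streams

with colorama_text():        # context-manager form of init()/deinit()
    print(Fore.YELLOW + "wrapped only inside this block" + Style.RESET_ALL)
```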
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\n\n# from winbase.h\nSTDOUT = -11\nSTDERR = -12\n\nENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004\n\ntry:\n import ctypes\n from ctypes import LibraryLoader\n windll = LibraryLoader(ctypes.WinDLL)\n from ctypes import wintypes\nexcept (AttributeError, ImportError):\n windll = None\n SetConsoleTextAttribute = lambda *_: None\n winapi_test = lambda *_: None\nelse:\n from ctypes import byref, Structure, c_char, POINTER\n\n COORD = wintypes._COORD\n\n class CONSOLE_SCREEN_BUFFER_INFO(Structure):\n """struct in wincon.h."""\n _fields_ = [\n ("dwSize", COORD),\n ("dwCursorPosition", COORD),\n ("wAttributes", wintypes.WORD),\n ("srWindow", wintypes.SMALL_RECT),\n ("dwMaximumWindowSize", COORD),\n ]\n def __str__(self):\n return '(%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d)' % (\n self.dwSize.Y, self.dwSize.X\n , self.dwCursorPosition.Y, self.dwCursorPosition.X\n , self.wAttributes\n , self.srWindow.Top, self.srWindow.Left, self.srWindow.Bottom, self.srWindow.Right\n , self.dwMaximumWindowSize.Y, self.dwMaximumWindowSize.X\n )\n\n _GetStdHandle = windll.kernel32.GetStdHandle\n _GetStdHandle.argtypes = [\n wintypes.DWORD,\n ]\n _GetStdHandle.restype = wintypes.HANDLE\n\n _GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo\n _GetConsoleScreenBufferInfo.argtypes = [\n wintypes.HANDLE,\n POINTER(CONSOLE_SCREEN_BUFFER_INFO),\n ]\n _GetConsoleScreenBufferInfo.restype = wintypes.BOOL\n\n _SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute\n _SetConsoleTextAttribute.argtypes = [\n wintypes.HANDLE,\n wintypes.WORD,\n ]\n _SetConsoleTextAttribute.restype = wintypes.BOOL\n\n _SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition\n _SetConsoleCursorPosition.argtypes = [\n wintypes.HANDLE,\n COORD,\n ]\n _SetConsoleCursorPosition.restype = wintypes.BOOL\n\n _FillConsoleOutputCharacterA = windll.kernel32.FillConsoleOutputCharacterA\n _FillConsoleOutputCharacterA.argtypes = [\n wintypes.HANDLE,\n c_char,\n wintypes.DWORD,\n COORD,\n POINTER(wintypes.DWORD),\n ]\n _FillConsoleOutputCharacterA.restype = wintypes.BOOL\n\n _FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute\n _FillConsoleOutputAttribute.argtypes = [\n wintypes.HANDLE,\n wintypes.WORD,\n wintypes.DWORD,\n COORD,\n POINTER(wintypes.DWORD),\n ]\n _FillConsoleOutputAttribute.restype = wintypes.BOOL\n\n _SetConsoleTitleW = windll.kernel32.SetConsoleTitleW\n _SetConsoleTitleW.argtypes = [\n wintypes.LPCWSTR\n ]\n _SetConsoleTitleW.restype = wintypes.BOOL\n\n _GetConsoleMode = windll.kernel32.GetConsoleMode\n _GetConsoleMode.argtypes = [\n wintypes.HANDLE,\n POINTER(wintypes.DWORD)\n ]\n _GetConsoleMode.restype = wintypes.BOOL\n\n _SetConsoleMode = windll.kernel32.SetConsoleMode\n _SetConsoleMode.argtypes = [\n wintypes.HANDLE,\n wintypes.DWORD\n ]\n _SetConsoleMode.restype = wintypes.BOOL\n\n def _winapi_test(handle):\n csbi = CONSOLE_SCREEN_BUFFER_INFO()\n success = _GetConsoleScreenBufferInfo(\n handle, byref(csbi))\n return bool(success)\n\n def winapi_test():\n return any(_winapi_test(h) for h in\n (_GetStdHandle(STDOUT), _GetStdHandle(STDERR)))\n\n def GetConsoleScreenBufferInfo(stream_id=STDOUT):\n handle = _GetStdHandle(stream_id)\n csbi = CONSOLE_SCREEN_BUFFER_INFO()\n success = _GetConsoleScreenBufferInfo(\n handle, byref(csbi))\n return csbi\n\n def SetConsoleTextAttribute(stream_id, attrs):\n handle = _GetStdHandle(stream_id)\n return _SetConsoleTextAttribute(handle, attrs)\n\n def 
SetConsoleCursorPosition(stream_id, position, adjust=True):\n position = COORD(*position)\n # If the position is out of range, do nothing.\n if position.Y <= 0 or position.X <= 0:\n return\n # Adjust for Windows' SetConsoleCursorPosition:\n # 1. being 0-based, while ANSI is 1-based.\n # 2. expecting (x,y), while ANSI uses (y,x).\n adjusted_position = COORD(position.Y - 1, position.X - 1)\n if adjust:\n # Adjust for viewport's scroll position\n sr = GetConsoleScreenBufferInfo(STDOUT).srWindow\n adjusted_position.Y += sr.Top\n adjusted_position.X += sr.Left\n # Resume normal processing\n handle = _GetStdHandle(stream_id)\n return _SetConsoleCursorPosition(handle, adjusted_position)\n\n def FillConsoleOutputCharacter(stream_id, char, length, start):\n handle = _GetStdHandle(stream_id)\n char = c_char(char.encode())\n length = wintypes.DWORD(length)\n num_written = wintypes.DWORD(0)\n # Note that this is hard-coded for ANSI (vs wide) bytes.\n success = _FillConsoleOutputCharacterA(\n handle, char, length, start, byref(num_written))\n return num_written.value\n\n def FillConsoleOutputAttribute(stream_id, attr, length, start):\n ''' FillConsoleOutputAttribute( hConsole, csbi.wAttributes, dwConSize, coordScreen, &cCharsWritten )'''\n handle = _GetStdHandle(stream_id)\n attribute = wintypes.WORD(attr)\n length = wintypes.DWORD(length)\n num_written = wintypes.DWORD(0)\n # Note that this is hard-coded for ANSI (vs wide) bytes.\n return _FillConsoleOutputAttribute(\n handle, attribute, length, start, byref(num_written))\n\n def SetConsoleTitle(title):\n return _SetConsoleTitleW(title)\n\n def GetConsoleMode(handle):\n mode = wintypes.DWORD()\n success = _GetConsoleMode(handle, byref(mode))\n if not success:\n raise ctypes.WinError()\n return mode.value\n\n def SetConsoleMode(handle, mode):\n success = _SetConsoleMode(handle, mode)\n if not success:\n raise ctypes.WinError()\n
|
.venv\Lib\site-packages\colorama\win32.py
|
win32.py
|
Python
| 6,181 | 0.95 | 0.133333 | 0.064103 |
python-kit
| 865 |
2023-07-18T23:11:06.988503
|
BSD-3-Clause
| false |
0af1249cc740b035c9018a878510ee8e
|
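win32.py above binds the kernel32 console functions through ctypes and degrades to no-ops when the bindings are unavailable. The sketch below uses only the module-level helpers defined there, guarded the same way the module guards itself; the 0x0C attribute value (red | intensity) is an assumption, not something taken from the module.

```python
# Hedged sketch using colorama.win32's helpers: only touch the console API when the
# ctypes bindings exist and the WinAPI probe succeeds, as win32.py itself does.
from colorama import win32

if win32.windll is not None and win32.winapi_test():
    info = win32.GetConsoleScreenBufferInfo(win32.STDOUT)
    print("buffer size:", info.dwSize.X, "x", info.dwSize.Y)
    win32.SetConsoleTextAttribute(win32.STDOUT, 0x0C)   # assumption: FOREGROUND_RED | INTENSITY
    print("low-level coloured text")
    win32.SetConsoleTextAttribute(win32.STDOUT, info.wAttributes)  # restore saved attributes
else:
    print("No usable Win32 console here; nothing to do.")
```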
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\ntry:\n from msvcrt import get_osfhandle\nexcept ImportError:\n def get_osfhandle(_):\n raise OSError("This isn't windows!")\n\n\nfrom . import win32\n\n# from wincon.h\nclass WinColor(object):\n BLACK = 0\n BLUE = 1\n GREEN = 2\n CYAN = 3\n RED = 4\n MAGENTA = 5\n YELLOW = 6\n GREY = 7\n\n# from wincon.h\nclass WinStyle(object):\n NORMAL = 0x00 # dim text, dim background\n BRIGHT = 0x08 # bright text, dim background\n BRIGHT_BACKGROUND = 0x80 # dim text, bright background\n\nclass WinTerm(object):\n\n def __init__(self):\n self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes\n self.set_attrs(self._default)\n self._default_fore = self._fore\n self._default_back = self._back\n self._default_style = self._style\n # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style.\n # So that LIGHT_EX colors and BRIGHT style do not clobber each other,\n # we track them separately, since LIGHT_EX is overwritten by Fore/Back\n # and BRIGHT is overwritten by Style codes.\n self._light = 0\n\n def get_attrs(self):\n return self._fore + self._back * 16 + (self._style | self._light)\n\n def set_attrs(self, value):\n self._fore = value & 7\n self._back = (value >> 4) & 7\n self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND)\n\n def reset_all(self, on_stderr=None):\n self.set_attrs(self._default)\n self.set_console(attrs=self._default)\n self._light = 0\n\n def fore(self, fore=None, light=False, on_stderr=False):\n if fore is None:\n fore = self._default_fore\n self._fore = fore\n # Emulate LIGHT_EX with BRIGHT Style\n if light:\n self._light |= WinStyle.BRIGHT\n else:\n self._light &= ~WinStyle.BRIGHT\n self.set_console(on_stderr=on_stderr)\n\n def back(self, back=None, light=False, on_stderr=False):\n if back is None:\n back = self._default_back\n self._back = back\n # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style\n if light:\n self._light |= WinStyle.BRIGHT_BACKGROUND\n else:\n self._light &= ~WinStyle.BRIGHT_BACKGROUND\n self.set_console(on_stderr=on_stderr)\n\n def style(self, style=None, on_stderr=False):\n if style is None:\n style = self._default_style\n self._style = style\n self.set_console(on_stderr=on_stderr)\n\n def set_console(self, attrs=None, on_stderr=False):\n if attrs is None:\n attrs = self.get_attrs()\n handle = win32.STDOUT\n if on_stderr:\n handle = win32.STDERR\n win32.SetConsoleTextAttribute(handle, attrs)\n\n def get_position(self, handle):\n position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition\n # Because Windows coordinates are 0-based,\n # and win32.SetConsoleCursorPosition expects 1-based.\n position.X += 1\n position.Y += 1\n return position\n\n def set_cursor_position(self, position=None, on_stderr=False):\n if position is None:\n # I'm not currently tracking the position, so there is no default.\n # position = self.get_position()\n return\n handle = win32.STDOUT\n if on_stderr:\n handle = win32.STDERR\n win32.SetConsoleCursorPosition(handle, position)\n\n def cursor_adjust(self, x, y, on_stderr=False):\n handle = win32.STDOUT\n if on_stderr:\n handle = win32.STDERR\n position = self.get_position(handle)\n adjusted_position = (position.Y + y, position.X + x)\n win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False)\n\n def erase_screen(self, mode=0, on_stderr=False):\n # 0 should clear from the cursor to the end of the screen.\n # 1 should clear from the cursor to the beginning of the screen.\n # 2 should clear the entire 
screen, and move cursor to (1,1)\n handle = win32.STDOUT\n if on_stderr:\n handle = win32.STDERR\n csbi = win32.GetConsoleScreenBufferInfo(handle)\n # get the number of character cells in the current buffer\n cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y\n # get number of character cells before current cursor position\n cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X\n if mode == 0:\n from_coord = csbi.dwCursorPosition\n cells_to_erase = cells_in_screen - cells_before_cursor\n elif mode == 1:\n from_coord = win32.COORD(0, 0)\n cells_to_erase = cells_before_cursor\n elif mode == 2:\n from_coord = win32.COORD(0, 0)\n cells_to_erase = cells_in_screen\n else:\n # invalid mode\n return\n # fill the entire screen with blanks\n win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord)\n # now set the buffer's attributes accordingly\n win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord)\n if mode == 2:\n # put the cursor where needed\n win32.SetConsoleCursorPosition(handle, (1, 1))\n\n def erase_line(self, mode=0, on_stderr=False):\n # 0 should clear from the cursor to the end of the line.\n # 1 should clear from the cursor to the beginning of the line.\n # 2 should clear the entire line.\n handle = win32.STDOUT\n if on_stderr:\n handle = win32.STDERR\n csbi = win32.GetConsoleScreenBufferInfo(handle)\n if mode == 0:\n from_coord = csbi.dwCursorPosition\n cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X\n elif mode == 1:\n from_coord = win32.COORD(0, csbi.dwCursorPosition.Y)\n cells_to_erase = csbi.dwCursorPosition.X\n elif mode == 2:\n from_coord = win32.COORD(0, csbi.dwCursorPosition.Y)\n cells_to_erase = csbi.dwSize.X\n else:\n # invalid mode\n return\n # fill the entire screen with blanks\n win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord)\n # now set the buffer's attributes accordingly\n win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord)\n\n def set_title(self, title):\n win32.SetConsoleTitle(title)\n\n\ndef enable_vt_processing(fd):\n if win32.windll is None or not win32.winapi_test():\n return False\n\n try:\n handle = get_osfhandle(fd)\n mode = win32.GetConsoleMode(handle)\n win32.SetConsoleMode(\n handle,\n mode | win32.ENABLE_VIRTUAL_TERMINAL_PROCESSING,\n )\n\n mode = win32.GetConsoleMode(handle)\n if mode & win32.ENABLE_VIRTUAL_TERMINAL_PROCESSING:\n return True\n # Can get TypeError in testsuite where 'fd' is a Mock()\n except (OSError, TypeError):\n return False\n
|
.venv\Lib\site-packages\colorama\winterm.py
|
winterm.py
|
Python
| 7,134 | 0.95 | 0.194872 | 0.168605 |
awesome-app
| 392 |
2024-02-10T10:01:18.527972
|
MIT
| false |
a52a65aeedfbf43c54d6302f0d2809cb
|
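winterm.py above ends with `enable_vt_processing(fd)`, which tries to switch a Windows 10+ console to native ANSI handling and returns False everywhere else. A hedged sketch of calling it directly; the escape sequences printed are illustrative.

```python
# Sketch: ask the console attached to stdout for native VT processing; otherwise rely
# on colorama's conversion path instead.
import sys

from colorama.winterm import enable_vt_processing

try:
    native = enable_vt_processing(sys.stdout.fileno())
except (OSError, ValueError):      # stdout may not expose a real file descriptor
    native = False

if native:
    print("\x1b[32mnative VT processing enabled\x1b[0m")
else:
    print("no native VT support; colorama.init() would convert or strip codes instead")
```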
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nfrom .initialise import init, deinit, reinit, colorama_text, just_fix_windows_console\nfrom .ansi import Fore, Back, Style, Cursor\nfrom .ansitowin32 import AnsiToWin32\n\n__version__ = '0.4.6'\n\n
|
.venv\Lib\site-packages\colorama\__init__.py
|
__init__.py
|
Python
| 266 | 0.95 | 0 | 0.2 |
python-kit
| 990 |
2023-09-16T17:52:34.727030
|
GPL-3.0
| false |
c2daa3dfab2ba0694195cf5f15a32808
|
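The `__init__.py` above re-exports `Fore`, `Back`, `Style`, `Cursor` and the initialisation helpers. A small sketch of the constant-based API; the cursor movements assume the hosting terminal (or a prior `init()`) understands raw ANSI codes.

```python
# Sketch of the names re-exported by colorama/__init__.py above.
from colorama import Back, Cursor, Fore, Style, init

init()
print(Fore.WHITE + Back.BLUE + " status " + Style.RESET_ALL + " ready")
# Cursor helpers emit plain ANSI strings, e.g. Cursor.UP(2) == "\x1b[2A".
print(Cursor.UP(1) + Cursor.FORWARD(10) + "overwritten in place" + Style.RESET_ALL)
```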
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nfrom io import StringIO, TextIOWrapper\nfrom unittest import TestCase, main\ntry:\n from contextlib import ExitStack\nexcept ImportError:\n # python 2\n from contextlib2 import ExitStack\n\ntry:\n from unittest.mock import MagicMock, Mock, patch\nexcept ImportError:\n from mock import MagicMock, Mock, patch\n\nfrom ..ansitowin32 import AnsiToWin32, StreamWrapper\nfrom ..win32 import ENABLE_VIRTUAL_TERMINAL_PROCESSING\nfrom .utils import osname\n\n\nclass StreamWrapperTest(TestCase):\n\n def testIsAProxy(self):\n mockStream = Mock()\n wrapper = StreamWrapper(mockStream, None)\n self.assertTrue( wrapper.random_attr is mockStream.random_attr )\n\n def testDelegatesWrite(self):\n mockStream = Mock()\n mockConverter = Mock()\n wrapper = StreamWrapper(mockStream, mockConverter)\n wrapper.write('hello')\n self.assertTrue(mockConverter.write.call_args, (('hello',), {}))\n\n def testDelegatesContext(self):\n mockConverter = Mock()\n s = StringIO()\n with StreamWrapper(s, mockConverter) as fp:\n fp.write(u'hello')\n self.assertTrue(s.closed)\n\n def testProxyNoContextManager(self):\n mockStream = MagicMock()\n mockStream.__enter__.side_effect = AttributeError()\n mockConverter = Mock()\n with self.assertRaises(AttributeError) as excinfo:\n with StreamWrapper(mockStream, mockConverter) as wrapper:\n wrapper.write('hello')\n\n def test_closed_shouldnt_raise_on_closed_stream(self):\n stream = StringIO()\n stream.close()\n wrapper = StreamWrapper(stream, None)\n self.assertEqual(wrapper.closed, True)\n\n def test_closed_shouldnt_raise_on_detached_stream(self):\n stream = TextIOWrapper(StringIO())\n stream.detach()\n wrapper = StreamWrapper(stream, None)\n self.assertEqual(wrapper.closed, True)\n\nclass AnsiToWin32Test(TestCase):\n\n def testInit(self):\n mockStdout = Mock()\n auto = Mock()\n stream = AnsiToWin32(mockStdout, autoreset=auto)\n self.assertEqual(stream.wrapped, mockStdout)\n self.assertEqual(stream.autoreset, auto)\n\n @patch('colorama.ansitowin32.winterm', None)\n @patch('colorama.ansitowin32.winapi_test', lambda *_: True)\n def testStripIsTrueOnWindows(self):\n with osname('nt'):\n mockStdout = Mock()\n stream = AnsiToWin32(mockStdout)\n self.assertTrue(stream.strip)\n\n def testStripIsFalseOffWindows(self):\n with osname('posix'):\n mockStdout = Mock(closed=False)\n stream = AnsiToWin32(mockStdout)\n self.assertFalse(stream.strip)\n\n def testWriteStripsAnsi(self):\n mockStdout = Mock()\n stream = AnsiToWin32(mockStdout)\n stream.wrapped = Mock()\n stream.write_and_convert = Mock()\n stream.strip = True\n\n stream.write('abc')\n\n self.assertFalse(stream.wrapped.write.called)\n self.assertEqual(stream.write_and_convert.call_args, (('abc',), {}))\n\n def testWriteDoesNotStripAnsi(self):\n mockStdout = Mock()\n stream = AnsiToWin32(mockStdout)\n stream.wrapped = Mock()\n stream.write_and_convert = Mock()\n stream.strip = False\n stream.convert = False\n\n stream.write('abc')\n\n self.assertFalse(stream.write_and_convert.called)\n self.assertEqual(stream.wrapped.write.call_args, (('abc',), {}))\n\n def assert_autoresets(self, convert, autoreset=True):\n stream = AnsiToWin32(Mock())\n stream.convert = convert\n stream.reset_all = Mock()\n stream.autoreset = autoreset\n stream.winterm = Mock()\n\n stream.write('abc')\n\n self.assertEqual(stream.reset_all.called, autoreset)\n\n def testWriteAutoresets(self):\n self.assert_autoresets(convert=True)\n self.assert_autoresets(convert=False)\n 
self.assert_autoresets(convert=True, autoreset=False)\n self.assert_autoresets(convert=False, autoreset=False)\n\n def testWriteAndConvertWritesPlainText(self):\n stream = AnsiToWin32(Mock())\n stream.write_and_convert( 'abc' )\n self.assertEqual( stream.wrapped.write.call_args, (('abc',), {}) )\n\n def testWriteAndConvertStripsAllValidAnsi(self):\n stream = AnsiToWin32(Mock())\n stream.call_win32 = Mock()\n data = [\n 'abc\033[mdef',\n 'abc\033[0mdef',\n 'abc\033[2mdef',\n 'abc\033[02mdef',\n 'abc\033[002mdef',\n 'abc\033[40mdef',\n 'abc\033[040mdef',\n 'abc\033[0;1mdef',\n 'abc\033[40;50mdef',\n 'abc\033[50;30;40mdef',\n 'abc\033[Adef',\n 'abc\033[0Gdef',\n 'abc\033[1;20;128Hdef',\n ]\n for datum in data:\n stream.wrapped.write.reset_mock()\n stream.write_and_convert( datum )\n self.assertEqual(\n [args[0] for args in stream.wrapped.write.call_args_list],\n [ ('abc',), ('def',) ]\n )\n\n def testWriteAndConvertSkipsEmptySnippets(self):\n stream = AnsiToWin32(Mock())\n stream.call_win32 = Mock()\n stream.write_and_convert( '\033[40m\033[41m' )\n self.assertFalse( stream.wrapped.write.called )\n\n def testWriteAndConvertCallsWin32WithParamsAndCommand(self):\n stream = AnsiToWin32(Mock())\n stream.convert = True\n stream.call_win32 = Mock()\n stream.extract_params = Mock(return_value='params')\n data = {\n 'abc\033[adef': ('a', 'params'),\n 'abc\033[;;bdef': ('b', 'params'),\n 'abc\033[0cdef': ('c', 'params'),\n 'abc\033[;;0;;Gdef': ('G', 'params'),\n 'abc\033[1;20;128Hdef': ('H', 'params'),\n }\n for datum, expected in data.items():\n stream.call_win32.reset_mock()\n stream.write_and_convert( datum )\n self.assertEqual( stream.call_win32.call_args[0], expected )\n\n def test_reset_all_shouldnt_raise_on_closed_orig_stdout(self):\n stream = StringIO()\n converter = AnsiToWin32(stream)\n stream.close()\n\n converter.reset_all()\n\n def test_wrap_shouldnt_raise_on_closed_orig_stdout(self):\n stream = StringIO()\n stream.close()\n with \\n patch("colorama.ansitowin32.os.name", "nt"), \\n patch("colorama.ansitowin32.winapi_test", lambda: True):\n converter = AnsiToWin32(stream)\n self.assertTrue(converter.strip)\n self.assertFalse(converter.convert)\n\n def test_wrap_shouldnt_raise_on_missing_closed_attr(self):\n with \\n patch("colorama.ansitowin32.os.name", "nt"), \\n patch("colorama.ansitowin32.winapi_test", lambda: True):\n converter = AnsiToWin32(object())\n self.assertTrue(converter.strip)\n self.assertFalse(converter.convert)\n\n def testExtractParams(self):\n stream = AnsiToWin32(Mock())\n data = {\n '': (0,),\n ';;': (0,),\n '2': (2,),\n ';;002;;': (2,),\n '0;1': (0, 1),\n ';;003;;456;;': (3, 456),\n '11;22;33;44;55': (11, 22, 33, 44, 55),\n }\n for datum, expected in data.items():\n self.assertEqual(stream.extract_params('m', datum), expected)\n\n def testCallWin32UsesLookup(self):\n listener = Mock()\n stream = AnsiToWin32(listener)\n stream.win32_calls = {\n 1: (lambda *_, **__: listener(11),),\n 2: (lambda *_, **__: listener(22),),\n 3: (lambda *_, **__: listener(33),),\n }\n stream.call_win32('m', (3, 1, 99, 2))\n self.assertEqual(\n [a[0][0] for a in listener.call_args_list],\n [33, 11, 22] )\n\n def test_osc_codes(self):\n mockStdout = Mock()\n stream = AnsiToWin32(mockStdout, convert=True)\n with patch('colorama.ansitowin32.winterm') as winterm:\n data = [\n '\033]0\x07', # missing arguments\n '\033]0;foo\x08', # wrong OSC command\n '\033]0;colorama_test_title\x07', # should work\n '\033]1;colorama_test_title\x07', # wrong set command\n '\033]2;colorama_test_title\x07', # should 
work\n '\033]' + ';' * 64 + '\x08', # see issue #247\n ]\n for code in data:\n stream.write(code)\n self.assertEqual(winterm.set_title.call_count, 2)\n\n def test_native_windows_ansi(self):\n with ExitStack() as stack:\n def p(a, b):\n stack.enter_context(patch(a, b, create=True))\n # Pretend to be on Windows\n p("colorama.ansitowin32.os.name", "nt")\n p("colorama.ansitowin32.winapi_test", lambda: True)\n p("colorama.win32.winapi_test", lambda: True)\n p("colorama.winterm.win32.windll", "non-None")\n p("colorama.winterm.get_osfhandle", lambda _: 1234)\n\n # Pretend that our mock stream has native ANSI support\n p(\n "colorama.winterm.win32.GetConsoleMode",\n lambda _: ENABLE_VIRTUAL_TERMINAL_PROCESSING,\n )\n SetConsoleMode = Mock()\n p("colorama.winterm.win32.SetConsoleMode", SetConsoleMode)\n\n stdout = Mock()\n stdout.closed = False\n stdout.isatty.return_value = True\n stdout.fileno.return_value = 1\n\n # Our fake console says it has native vt support, so AnsiToWin32 should\n # enable that support and do nothing else.\n stream = AnsiToWin32(stdout)\n SetConsoleMode.assert_called_with(1234, ENABLE_VIRTUAL_TERMINAL_PROCESSING)\n self.assertFalse(stream.strip)\n self.assertFalse(stream.convert)\n self.assertFalse(stream.should_wrap())\n\n # Now let's pretend we're on an old Windows console, that doesn't have\n # native ANSI support.\n p("colorama.winterm.win32.GetConsoleMode", lambda _: 0)\n SetConsoleMode = Mock()\n p("colorama.winterm.win32.SetConsoleMode", SetConsoleMode)\n\n stream = AnsiToWin32(stdout)\n SetConsoleMode.assert_called_with(1234, ENABLE_VIRTUAL_TERMINAL_PROCESSING)\n self.assertTrue(stream.strip)\n self.assertTrue(stream.convert)\n self.assertTrue(stream.should_wrap())\n\n\nif __name__ == '__main__':\n main()\n
|
.venv\Lib\site-packages\colorama\tests\ansitowin32_test.py
|
ansitowin32_test.py
|
Python
| 10,678 | 0.95 | 0.12585 | 0.031873 |
node-utils
| 267 |
2024-01-23T22:13:56.542483
|
MIT
| true |
ffd5754e37673ceac9f2c816e1d354a6
|
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nimport sys\nfrom unittest import TestCase, main\n\nfrom ..ansi import Back, Fore, Style\nfrom ..ansitowin32 import AnsiToWin32\n\nstdout_orig = sys.stdout\nstderr_orig = sys.stderr\n\n\nclass AnsiTest(TestCase):\n\n def setUp(self):\n # sanity check: stdout should be a file or StringIO object.\n # It will only be AnsiToWin32 if init() has previously wrapped it\n self.assertNotEqual(type(sys.stdout), AnsiToWin32)\n self.assertNotEqual(type(sys.stderr), AnsiToWin32)\n\n def tearDown(self):\n sys.stdout = stdout_orig\n sys.stderr = stderr_orig\n\n\n def testForeAttributes(self):\n self.assertEqual(Fore.BLACK, '\033[30m')\n self.assertEqual(Fore.RED, '\033[31m')\n self.assertEqual(Fore.GREEN, '\033[32m')\n self.assertEqual(Fore.YELLOW, '\033[33m')\n self.assertEqual(Fore.BLUE, '\033[34m')\n self.assertEqual(Fore.MAGENTA, '\033[35m')\n self.assertEqual(Fore.CYAN, '\033[36m')\n self.assertEqual(Fore.WHITE, '\033[37m')\n self.assertEqual(Fore.RESET, '\033[39m')\n\n # Check the light, extended versions.\n self.assertEqual(Fore.LIGHTBLACK_EX, '\033[90m')\n self.assertEqual(Fore.LIGHTRED_EX, '\033[91m')\n self.assertEqual(Fore.LIGHTGREEN_EX, '\033[92m')\n self.assertEqual(Fore.LIGHTYELLOW_EX, '\033[93m')\n self.assertEqual(Fore.LIGHTBLUE_EX, '\033[94m')\n self.assertEqual(Fore.LIGHTMAGENTA_EX, '\033[95m')\n self.assertEqual(Fore.LIGHTCYAN_EX, '\033[96m')\n self.assertEqual(Fore.LIGHTWHITE_EX, '\033[97m')\n\n\n def testBackAttributes(self):\n self.assertEqual(Back.BLACK, '\033[40m')\n self.assertEqual(Back.RED, '\033[41m')\n self.assertEqual(Back.GREEN, '\033[42m')\n self.assertEqual(Back.YELLOW, '\033[43m')\n self.assertEqual(Back.BLUE, '\033[44m')\n self.assertEqual(Back.MAGENTA, '\033[45m')\n self.assertEqual(Back.CYAN, '\033[46m')\n self.assertEqual(Back.WHITE, '\033[47m')\n self.assertEqual(Back.RESET, '\033[49m')\n\n # Check the light, extended versions.\n self.assertEqual(Back.LIGHTBLACK_EX, '\033[100m')\n self.assertEqual(Back.LIGHTRED_EX, '\033[101m')\n self.assertEqual(Back.LIGHTGREEN_EX, '\033[102m')\n self.assertEqual(Back.LIGHTYELLOW_EX, '\033[103m')\n self.assertEqual(Back.LIGHTBLUE_EX, '\033[104m')\n self.assertEqual(Back.LIGHTMAGENTA_EX, '\033[105m')\n self.assertEqual(Back.LIGHTCYAN_EX, '\033[106m')\n self.assertEqual(Back.LIGHTWHITE_EX, '\033[107m')\n\n\n def testStyleAttributes(self):\n self.assertEqual(Style.DIM, '\033[2m')\n self.assertEqual(Style.NORMAL, '\033[22m')\n self.assertEqual(Style.BRIGHT, '\033[1m')\n\n\nif __name__ == '__main__':\n main()\n
|
.venv\Lib\site-packages\colorama\tests\ansi_test.py
|
ansi_test.py
|
Python
| 2,839 | 0.95 | 0.105263 | 0.083333 |
awesome-app
| 122 |
2024-03-06T14:44:53.649729
|
MIT
| true |
5986a9683e8505bb1a6bb312767143e3
|
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nimport sys\nfrom unittest import TestCase, main, skipUnless\n\ntry:\n from unittest.mock import patch, Mock\nexcept ImportError:\n from mock import patch, Mock\n\nfrom ..ansitowin32 import StreamWrapper\nfrom ..initialise import init, just_fix_windows_console, _wipe_internal_state_for_tests\nfrom .utils import osname, replace_by\n\norig_stdout = sys.stdout\norig_stderr = sys.stderr\n\n\nclass InitTest(TestCase):\n\n @skipUnless(sys.stdout.isatty(), "sys.stdout is not a tty")\n def setUp(self):\n # sanity check\n self.assertNotWrapped()\n\n def tearDown(self):\n _wipe_internal_state_for_tests()\n sys.stdout = orig_stdout\n sys.stderr = orig_stderr\n\n def assertWrapped(self):\n self.assertIsNot(sys.stdout, orig_stdout, 'stdout should be wrapped')\n self.assertIsNot(sys.stderr, orig_stderr, 'stderr should be wrapped')\n self.assertTrue(isinstance(sys.stdout, StreamWrapper),\n 'bad stdout wrapper')\n self.assertTrue(isinstance(sys.stderr, StreamWrapper),\n 'bad stderr wrapper')\n\n def assertNotWrapped(self):\n self.assertIs(sys.stdout, orig_stdout, 'stdout should not be wrapped')\n self.assertIs(sys.stderr, orig_stderr, 'stderr should not be wrapped')\n\n @patch('colorama.initialise.reset_all')\n @patch('colorama.ansitowin32.winapi_test', lambda *_: True)\n @patch('colorama.ansitowin32.enable_vt_processing', lambda *_: False)\n def testInitWrapsOnWindows(self, _):\n with osname("nt"):\n init()\n self.assertWrapped()\n\n @patch('colorama.initialise.reset_all')\n @patch('colorama.ansitowin32.winapi_test', lambda *_: False)\n def testInitDoesntWrapOnEmulatedWindows(self, _):\n with osname("nt"):\n init()\n self.assertNotWrapped()\n\n def testInitDoesntWrapOnNonWindows(self):\n with osname("posix"):\n init()\n self.assertNotWrapped()\n\n def testInitDoesntWrapIfNone(self):\n with replace_by(None):\n init()\n # We can't use assertNotWrapped here because replace_by(None)\n # changes stdout/stderr already.\n self.assertIsNone(sys.stdout)\n self.assertIsNone(sys.stderr)\n\n def testInitAutoresetOnWrapsOnAllPlatforms(self):\n with osname("posix"):\n init(autoreset=True)\n self.assertWrapped()\n\n def testInitWrapOffDoesntWrapOnWindows(self):\n with osname("nt"):\n init(wrap=False)\n self.assertNotWrapped()\n\n def testInitWrapOffIncompatibleWithAutoresetOn(self):\n self.assertRaises(ValueError, lambda: init(autoreset=True, wrap=False))\n\n @patch('colorama.win32.SetConsoleTextAttribute')\n @patch('colorama.initialise.AnsiToWin32')\n def testAutoResetPassedOn(self, mockATW32, _):\n with osname("nt"):\n init(autoreset=True)\n self.assertEqual(len(mockATW32.call_args_list), 2)\n self.assertEqual(mockATW32.call_args_list[1][1]['autoreset'], True)\n self.assertEqual(mockATW32.call_args_list[0][1]['autoreset'], True)\n\n @patch('colorama.initialise.AnsiToWin32')\n def testAutoResetChangeable(self, mockATW32):\n with osname("nt"):\n init()\n\n init(autoreset=True)\n self.assertEqual(len(mockATW32.call_args_list), 4)\n self.assertEqual(mockATW32.call_args_list[2][1]['autoreset'], True)\n self.assertEqual(mockATW32.call_args_list[3][1]['autoreset'], True)\n\n init()\n self.assertEqual(len(mockATW32.call_args_list), 6)\n self.assertEqual(\n mockATW32.call_args_list[4][1]['autoreset'], False)\n self.assertEqual(\n mockATW32.call_args_list[5][1]['autoreset'], False)\n\n\n @patch('colorama.initialise.atexit.register')\n def testAtexitRegisteredOnlyOnce(self, mockRegister):\n init()\n self.assertTrue(mockRegister.called)\n 
mockRegister.reset_mock()\n init()\n self.assertFalse(mockRegister.called)\n\n\nclass JustFixWindowsConsoleTest(TestCase):\n def _reset(self):\n _wipe_internal_state_for_tests()\n sys.stdout = orig_stdout\n sys.stderr = orig_stderr\n\n def tearDown(self):\n self._reset()\n\n @patch("colorama.ansitowin32.winapi_test", lambda: True)\n def testJustFixWindowsConsole(self):\n if sys.platform != "win32":\n # just_fix_windows_console should be a no-op\n just_fix_windows_console()\n self.assertIs(sys.stdout, orig_stdout)\n self.assertIs(sys.stderr, orig_stderr)\n else:\n def fake_std():\n # Emulate stdout=not a tty, stderr=tty\n # to check that we handle both cases correctly\n stdout = Mock()\n stdout.closed = False\n stdout.isatty.return_value = False\n stdout.fileno.return_value = 1\n sys.stdout = stdout\n\n stderr = Mock()\n stderr.closed = False\n stderr.isatty.return_value = True\n stderr.fileno.return_value = 2\n sys.stderr = stderr\n\n for native_ansi in [False, True]:\n with patch(\n 'colorama.ansitowin32.enable_vt_processing',\n lambda *_: native_ansi\n ):\n self._reset()\n fake_std()\n\n # Regular single-call test\n prev_stdout = sys.stdout\n prev_stderr = sys.stderr\n just_fix_windows_console()\n self.assertIs(sys.stdout, prev_stdout)\n if native_ansi:\n self.assertIs(sys.stderr, prev_stderr)\n else:\n self.assertIsNot(sys.stderr, prev_stderr)\n\n # second call without resetting is always a no-op\n prev_stdout = sys.stdout\n prev_stderr = sys.stderr\n just_fix_windows_console()\n self.assertIs(sys.stdout, prev_stdout)\n self.assertIs(sys.stderr, prev_stderr)\n\n self._reset()\n fake_std()\n\n # If init() runs first, just_fix_windows_console should be a no-op\n init()\n prev_stdout = sys.stdout\n prev_stderr = sys.stderr\n just_fix_windows_console()\n self.assertIs(prev_stdout, sys.stdout)\n self.assertIs(prev_stderr, sys.stderr)\n\n\nif __name__ == '__main__':\n main()\n
|
.venv\Lib\site-packages\colorama\tests\initialise_test.py
|
initialise_test.py
|
Python
| 6,741 | 0.95 | 0.132275 | 0.064516 |
python-kit
| 53 |
2025-07-05T03:12:10.582496
|
Apache-2.0
| true |
711f7c7a03992d3c9b8523960e2cbffb
|
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nimport sys\nfrom unittest import TestCase, main\n\nfrom ..ansitowin32 import StreamWrapper, AnsiToWin32\nfrom .utils import pycharm, replace_by, replace_original_by, StreamTTY, StreamNonTTY\n\n\ndef is_a_tty(stream):\n return StreamWrapper(stream, None).isatty()\n\nclass IsattyTest(TestCase):\n\n def test_TTY(self):\n tty = StreamTTY()\n self.assertTrue(is_a_tty(tty))\n with pycharm():\n self.assertTrue(is_a_tty(tty))\n\n def test_nonTTY(self):\n non_tty = StreamNonTTY()\n self.assertFalse(is_a_tty(non_tty))\n with pycharm():\n self.assertFalse(is_a_tty(non_tty))\n\n def test_withPycharm(self):\n with pycharm():\n self.assertTrue(is_a_tty(sys.stderr))\n self.assertTrue(is_a_tty(sys.stdout))\n\n def test_withPycharmTTYOverride(self):\n tty = StreamTTY()\n with pycharm(), replace_by(tty):\n self.assertTrue(is_a_tty(tty))\n\n def test_withPycharmNonTTYOverride(self):\n non_tty = StreamNonTTY()\n with pycharm(), replace_by(non_tty):\n self.assertFalse(is_a_tty(non_tty))\n\n def test_withPycharmNoneOverride(self):\n with pycharm():\n with replace_by(None), replace_original_by(None):\n self.assertFalse(is_a_tty(None))\n self.assertFalse(is_a_tty(StreamNonTTY()))\n self.assertTrue(is_a_tty(StreamTTY()))\n\n def test_withPycharmStreamWrapped(self):\n with pycharm():\n self.assertTrue(AnsiToWin32(StreamTTY()).stream.isatty())\n self.assertFalse(AnsiToWin32(StreamNonTTY()).stream.isatty())\n self.assertTrue(AnsiToWin32(sys.stdout).stream.isatty())\n self.assertTrue(AnsiToWin32(sys.stderr).stream.isatty())\n\n\nif __name__ == '__main__':\n main()\n
|
.venv\Lib\site-packages\colorama\tests\isatty_test.py
|
isatty_test.py
|
Python
| 1,866 | 0.95 | 0.175439 | 0.022727 |
node-utils
| 381 |
2025-03-23T17:30:16.815032
|
Apache-2.0
| true |
7634e0302b0f5f962627b1922b07a3b9
|
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nfrom contextlib import contextmanager\nfrom io import StringIO\nimport sys\nimport os\n\n\nclass StreamTTY(StringIO):\n def isatty(self):\n return True\n\nclass StreamNonTTY(StringIO):\n def isatty(self):\n return False\n\n@contextmanager\ndef osname(name):\n orig = os.name\n os.name = name\n yield\n os.name = orig\n\n@contextmanager\ndef replace_by(stream):\n orig_stdout = sys.stdout\n orig_stderr = sys.stderr\n sys.stdout = stream\n sys.stderr = stream\n yield\n sys.stdout = orig_stdout\n sys.stderr = orig_stderr\n\n@contextmanager\ndef replace_original_by(stream):\n orig_stdout = sys.__stdout__\n orig_stderr = sys.__stderr__\n sys.__stdout__ = stream\n sys.__stderr__ = stream\n yield\n sys.__stdout__ = orig_stdout\n sys.__stderr__ = orig_stderr\n\n@contextmanager\ndef pycharm():\n os.environ["PYCHARM_HOSTED"] = "1"\n non_tty = StreamNonTTY()\n with replace_by(non_tty), replace_original_by(non_tty):\n yield\n del os.environ["PYCHARM_HOSTED"]\n
|
.venv\Lib\site-packages\colorama\tests\utils.py
|
utils.py
|
Python
| 1,079 | 0.95 | 0.163265 | 0.02381 |
vue-tools
| 217 |
2023-12-17T00:21:51.033165
|
Apache-2.0
| true |
31142629e641450ac51d1d4556112c7c
|
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\nimport sys\nfrom unittest import TestCase, main, skipUnless\n\ntry:\n from unittest.mock import Mock, patch\nexcept ImportError:\n from mock import Mock, patch\n\nfrom ..winterm import WinColor, WinStyle, WinTerm\n\n\nclass WinTermTest(TestCase):\n\n @patch('colorama.winterm.win32')\n def testInit(self, mockWin32):\n mockAttr = Mock()\n mockAttr.wAttributes = 7 + 6 * 16 + 8\n mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr\n term = WinTerm()\n self.assertEqual(term._fore, 7)\n self.assertEqual(term._back, 6)\n self.assertEqual(term._style, 8)\n\n @skipUnless(sys.platform.startswith("win"), "requires Windows")\n def testGetAttrs(self):\n term = WinTerm()\n\n term._fore = 0\n term._back = 0\n term._style = 0\n self.assertEqual(term.get_attrs(), 0)\n\n term._fore = WinColor.YELLOW\n self.assertEqual(term.get_attrs(), WinColor.YELLOW)\n\n term._back = WinColor.MAGENTA\n self.assertEqual(\n term.get_attrs(),\n WinColor.YELLOW + WinColor.MAGENTA * 16)\n\n term._style = WinStyle.BRIGHT\n self.assertEqual(\n term.get_attrs(),\n WinColor.YELLOW + WinColor.MAGENTA * 16 + WinStyle.BRIGHT)\n\n @patch('colorama.winterm.win32')\n def testResetAll(self, mockWin32):\n mockAttr = Mock()\n mockAttr.wAttributes = 1 + 2 * 16 + 8\n mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr\n term = WinTerm()\n\n term.set_console = Mock()\n term._fore = -1\n term._back = -1\n term._style = -1\n\n term.reset_all()\n\n self.assertEqual(term._fore, 1)\n self.assertEqual(term._back, 2)\n self.assertEqual(term._style, 8)\n self.assertEqual(term.set_console.called, True)\n\n @skipUnless(sys.platform.startswith("win"), "requires Windows")\n def testFore(self):\n term = WinTerm()\n term.set_console = Mock()\n term._fore = 0\n\n term.fore(5)\n\n self.assertEqual(term._fore, 5)\n self.assertEqual(term.set_console.called, True)\n\n @skipUnless(sys.platform.startswith("win"), "requires Windows")\n def testBack(self):\n term = WinTerm()\n term.set_console = Mock()\n term._back = 0\n\n term.back(5)\n\n self.assertEqual(term._back, 5)\n self.assertEqual(term.set_console.called, True)\n\n @skipUnless(sys.platform.startswith("win"), "requires Windows")\n def testStyle(self):\n term = WinTerm()\n term.set_console = Mock()\n term._style = 0\n\n term.style(22)\n\n self.assertEqual(term._style, 22)\n self.assertEqual(term.set_console.called, True)\n\n @patch('colorama.winterm.win32')\n def testSetConsole(self, mockWin32):\n mockAttr = Mock()\n mockAttr.wAttributes = 0\n mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr\n term = WinTerm()\n term.windll = Mock()\n\n term.set_console()\n\n self.assertEqual(\n mockWin32.SetConsoleTextAttribute.call_args,\n ((mockWin32.STDOUT, term.get_attrs()), {})\n )\n\n @patch('colorama.winterm.win32')\n def testSetConsoleOnStderr(self, mockWin32):\n mockAttr = Mock()\n mockAttr.wAttributes = 0\n mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr\n term = WinTerm()\n term.windll = Mock()\n\n term.set_console(on_stderr=True)\n\n self.assertEqual(\n mockWin32.SetConsoleTextAttribute.call_args,\n ((mockWin32.STDERR, term.get_attrs()), {})\n )\n\n\nif __name__ == '__main__':\n main()\n
|
.venv\Lib\site-packages\colorama\tests\winterm_test.py
|
winterm_test.py
|
Python
| 3,709 | 0.95 | 0.083969 | 0.01 |
node-utils
| 119 |
2025-07-09T08:21:48.249031
|
GPL-3.0
| true |
3322cabd2108da984bd053bf61b8c1cc
|
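`testGetAttrs` in the record above asserts that the Win32 console attribute word is packed as foreground + background * 16 + style. A tiny worked example of that arithmetic, using the same enums, is sketched below.

```python
# Worked example of the packing asserted in testGetAttrs above:
# attribute word = foreground + background * 16 + style.
from colorama.winterm import WinColor, WinStyle

attrs = WinColor.YELLOW + WinColor.MAGENTA * 16 + WinStyle.BRIGHT
print(attrs)  # 6 + 5 * 16 + 8 = 94
```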
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.\n
|
.venv\Lib\site-packages\colorama\tests\__init__.py
|
__init__.py
|
Python
| 75 | 0.6 | 0 | 1 |
node-utils
| 241 |
2024-09-13T01:30:20.567907
|
BSD-3-Clause
| true |
b1fda43e92dec74456ef61c18b3071ff
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\ansitowin32_test.cpython-313.pyc
|
ansitowin32_test.cpython-313.pyc
|
Other
| 17,811 | 0.8 | 0.006494 | 0 |
awesome-app
| 823 |
2025-04-10T07:41:05.405550
|
Apache-2.0
| true |
88f225e73cd7719af3c90086461cdfff
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\ansi_test.cpython-313.pyc
|
ansi_test.cpython-313.pyc
|
Other
| 5,518 | 0.8 | 0 | 0 |
python-kit
| 253 |
2024-02-05T16:49:20.070748
|
MIT
| true |
f6eb6cfc82c3ccaf2015968fc097bdcc
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\initialise_test.cpython-313.pyc
|
initialise_test.cpython-313.pyc
|
Other
| 11,787 | 0.8 | 0 | 0 |
python-kit
| 331 |
2025-06-08T18:48:03.671172
|
BSD-3-Clause
| true |
fed60bf1426032a5ae93fc6d806b0060
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\isatty_test.cpython-313.pyc
|
isatty_test.cpython-313.pyc
|
Other
| 4,942 | 0.8 | 0 | 0 |
vue-tools
| 453 |
2024-06-16T02:51:30.697815
|
GPL-3.0
| true |
642a42c0d2eeb0ff69635d9d9782e3f7
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\utils.cpython-313.pyc
|
utils.cpython-313.pyc
|
Other
| 2,553 | 0.8 | 0 | 0 |
node-utils
| 77 |
2024-06-19T23:28:09.922334
|
Apache-2.0
| true |
3a3e4a36bc6d3f1a719e3df06c654188
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\winterm_test.cpython-313.pyc
|
winterm_test.cpython-313.pyc
|
Other
| 6,643 | 0.95 | 0 | 0.022222 |
python-kit
| 130 |
2023-08-03T15:49:58.804077
|
GPL-3.0
| true |
7dbe8dd888c535723372c4f2f5c2b417
|
\n\n
|
.venv\Lib\site-packages\colorama\tests\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 189 | 0.7 | 0 | 0 |
vue-tools
| 748 |
2023-11-01T04:19:56.170602
|
BSD-3-Clause
| true |
0bc873c2070a8f0318be977b48e24731
|
\n\n
|
.venv\Lib\site-packages\colorama\__pycache__\ansi.cpython-313.pyc
|
ansi.cpython-313.pyc
|
Other
| 4,121 | 0.8 | 0 | 0 |
react-lib
| 470 |
2025-01-11T08:49:27.581354
|
BSD-3-Clause
| false |
f135ab91a924248857e100dd6de5acdc
|
\n\n
|
.venv\Lib\site-packages\colorama\__pycache__\ansitowin32.cpython-313.pyc
|
ansitowin32.cpython-313.pyc
|
Other
| 16,587 | 0.95 | 0.048077 | 0 |
node-utils
| 363 |
2023-08-16T14:16:57.276207
|
BSD-3-Clause
| false |
6b9364153843b53fd53b42779b6d507b
|
\n\n
|
.venv\Lib\site-packages\colorama\__pycache__\initialise.cpython-313.pyc
|
initialise.cpython-313.pyc
|
Other
| 3,617 | 0.8 | 0 | 0 |
python-kit
| 254 |
2024-11-09T01:16:24.273070
|
MIT
| false |
6376b5643c2e65098cd614d752871f58
|
\n\n
|
.venv\Lib\site-packages\colorama\__pycache__\win32.cpython-313.pyc
|
win32.cpython-313.pyc
|
Other
| 8,217 | 0.8 | 0 | 0 |
vue-tools
| 143 |
2023-11-04T17:16:20.671325
|
BSD-3-Clause
| false |
1d28bef3799efcb99e08128493a6408b
|
\n\n
|
.venv\Lib\site-packages\colorama\__pycache__\winterm.cpython-313.pyc
|
winterm.cpython-313.pyc
|
Other
| 9,327 | 0.8 | 0 | 0 |
vue-tools
| 608 |
2024-10-25T22:07:06.659157
|
Apache-2.0
| false |
5573c0767888fbe6d942cb8d7d982906
|
\n\n
|
.venv\Lib\site-packages\colorama\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 483 | 0.7 | 0 | 0 |
node-utils
| 839 |
2024-12-24T12:44:33.342848
|
Apache-2.0
| false |
7bcb48b0e6336a8b5eb7efa14bc1a116
|
pip\n
|
.venv\Lib\site-packages\colorama-0.4.6.dist-info\INSTALLER
|
INSTALLER
|
Other
| 4 | 0.5 | 0 | 0 |
react-lib
| 83 |
2025-06-26T05:21:17.653637
|
MIT
| false |
365c9bfeb7d89244f2ce01c1de44cb85
|
Metadata-Version: 2.1\nName: colorama\nVersion: 0.4.6\nSummary: Cross-platform colored terminal text.\nProject-URL: Homepage, https://github.com/tartley/colorama\nAuthor-email: Jonathan Hartley <tartley@tartley.com>\nLicense-File: LICENSE.txt\nKeywords: ansi,color,colour,crossplatform,terminal,text,windows,xplatform\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Environment :: Console\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 2\nClassifier: Programming Language :: Python :: 2.7\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: Programming Language :: Python :: Implementation :: PyPy\nClassifier: Topic :: Terminals\nRequires-Python: !=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7\nDescription-Content-Type: text/x-rst\n\n.. image:: https://img.shields.io/pypi/v/colorama.svg\n :target: https://pypi.org/project/colorama/\n :alt: Latest Version\n\n.. image:: https://img.shields.io/pypi/pyversions/colorama.svg\n :target: https://pypi.org/project/colorama/\n :alt: Supported Python versions\n\n.. image:: https://github.com/tartley/colorama/actions/workflows/test.yml/badge.svg\n :target: https://github.com/tartley/colorama/actions/workflows/test.yml\n :alt: Build Status\n\nColorama\n========\n\nMakes ANSI escape character sequences (for producing colored terminal text and\ncursor positioning) work under MS Windows.\n\n.. |donate| image:: https://www.paypalobjects.com/en_US/i/btn/btn_donate_SM.gif\n :target: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=2MZ9D2GMLYCUJ&item_name=Colorama¤cy_code=USD\n :alt: Donate with Paypal\n\n`PyPI for releases <https://pypi.org/project/colorama/>`_ |\n`Github for source <https://github.com/tartley/colorama>`_ |\n`Colorama for enterprise on Tidelift <https://github.com/tartley/colorama/blob/master/ENTERPRISE.md>`_\n\nIf you find Colorama useful, please |donate| to the authors. Thank you!\n\nInstallation\n------------\n\nTested on CPython 2.7, 3.7, 3.8, 3.9 and 3.10 and Pypy 2.7 and 3.8.\n\nNo requirements other than the standard library.\n\n.. code-block:: bash\n\n pip install colorama\n # or\n conda install -c anaconda colorama\n\nDescription\n-----------\n\nANSI escape character sequences have long been used to produce colored terminal\ntext and cursor positioning on Unix and Macs. Colorama makes this work on\nWindows, too, by wrapping ``stdout``, stripping ANSI sequences it finds (which\nwould appear as gobbledygook in the output), and converting them into the\nappropriate win32 calls to modify the state of the terminal. 
On other platforms,\nColorama does nothing.\n\nThis has the upshot of providing a simple cross-platform API for printing\ncolored terminal text from Python, and has the happy side-effect that existing\napplications or libraries which use ANSI sequences to produce colored output on\nLinux or Macs can now also work on Windows, simply by calling\n``colorama.just_fix_windows_console()`` (since v0.4.6) or ``colorama.init()``\n(all versions, but may have other side-effects – see below).\n\nAn alternative approach is to install ``ansi.sys`` on Windows machines, which\nprovides the same behaviour for all applications running in terminals. Colorama\nis intended for situations where that isn't easy (e.g., maybe your app doesn't\nhave an installer.)\n\nDemo scripts in the source code repository print some colored text using\nANSI sequences. Compare their output under Gnome-terminal's built in ANSI\nhandling, versus on Windows Command-Prompt using Colorama:\n\n.. image:: https://github.com/tartley/colorama/raw/master/screenshots/ubuntu-demo.png\n :width: 661\n :height: 357\n :alt: ANSI sequences on Ubuntu under gnome-terminal.\n\n.. image:: https://github.com/tartley/colorama/raw/master/screenshots/windows-demo.png\n :width: 668\n :height: 325\n :alt: Same ANSI sequences on Windows, using Colorama.\n\nThese screenshots show that, on Windows, Colorama does not support ANSI 'dim\ntext'; it looks the same as 'normal text'.\n\nUsage\n-----\n\nInitialisation\n..............\n\nIf the only thing you want from Colorama is to get ANSI escapes to work on\nWindows, then run:\n\n.. code-block:: python\n\n from colorama import just_fix_windows_console\n just_fix_windows_console()\n\nIf you're on a recent version of Windows 10 or better, and your stdout/stderr\nare pointing to a Windows console, then this will flip the magic configuration\nswitch to enable Windows' built-in ANSI support.\n\nIf you're on an older version of Windows, and your stdout/stderr are pointing to\na Windows console, then this will wrap ``sys.stdout`` and/or ``sys.stderr`` in a\nmagic file object that intercepts ANSI escape sequences and issues the\nappropriate Win32 calls to emulate them.\n\nIn all other circumstances, it does nothing whatsoever. Basically the idea is\nthat this makes Windows act like Unix with respect to ANSI escape handling.\n\nIt's safe to call this function multiple times. It's safe to call this function\non non-Windows platforms, but it won't do anything. It's safe to call this\nfunction when one or both of your stdout/stderr are redirected to a file – it\nwon't do anything to those streams.\n\nAlternatively, you can use the older interface with more features (but also more\npotential footguns):\n\n.. code-block:: python\n\n from colorama import init\n init()\n\nThis does the same thing as ``just_fix_windows_console``, except for the\nfollowing differences:\n\n- It's not safe to call ``init`` multiple times; you can end up with multiple\n layers of wrapping and broken ANSI support.\n\n- Colorama will apply a heuristic to guess whether stdout/stderr support ANSI,\n and if it thinks they don't, then it will wrap ``sys.stdout`` and\n ``sys.stderr`` in a magic file object that strips out ANSI escape sequences\n before printing them. This happens on all platforms, and can be convenient if\n you want to write your code to emit ANSI escape sequences unconditionally, and\n let Colorama decide whether they should actually be output. 
But note that\n Colorama's heuristic is not particularly clever.\n\n- ``init`` also accepts explicit keyword args to enable/disable various\n functionality – see below.\n\nTo stop using Colorama before your program exits, simply call ``deinit()``.\nThis will restore ``stdout`` and ``stderr`` to their original values, so that\nColorama is disabled. To resume using Colorama again, call ``reinit()``; it is\ncheaper than calling ``init()`` again (but does the same thing).\n\nMost users should depend on ``colorama >= 0.4.6``, and use\n``just_fix_windows_console``. The old ``init`` interface will be supported\nindefinitely for backwards compatibility, but we don't plan to fix any issues\nwith it, also for backwards compatibility.\n\nColored Output\n..............\n\nCross-platform printing of colored text can then be done using Colorama's\nconstant shorthand for ANSI escape sequences. These are deliberately\nrudimentary, see below.\n\n.. code-block:: python\n\n from colorama import Fore, Back, Style\n print(Fore.RED + 'some red text')\n print(Back.GREEN + 'and with a green background')\n print(Style.DIM + 'and in dim text')\n print(Style.RESET_ALL)\n print('back to normal now')\n\n...or simply by manually printing ANSI sequences from your own code:\n\n.. code-block:: python\n\n print('\033[31m' + 'some red text')\n print('\033[39m') # and reset to default color\n\n...or, Colorama can be used in conjunction with existing ANSI libraries\nsuch as the venerable `Termcolor <https://pypi.org/project/termcolor/>`_\nthe fabulous `Blessings <https://pypi.org/project/blessings/>`_,\nor the incredible `_Rich <https://pypi.org/project/rich/>`_.\n\nIf you wish Colorama's Fore, Back and Style constants were more capable,\nthen consider using one of the above highly capable libraries to generate\ncolors, etc, and use Colorama just for its primary purpose: to convert\nthose ANSI sequences to also work on Windows:\n\nSIMILARLY, do not send PRs adding the generation of new ANSI types to Colorama.\nWe are only interested in converting ANSI codes to win32 API calls, not\nshortcuts like the above to generate ANSI characters.\n\n.. code-block:: python\n\n from colorama import just_fix_windows_console\n from termcolor import colored\n\n # use Colorama to make Termcolor work on Windows too\n just_fix_windows_console()\n\n # then use Termcolor for all colored text output\n print(colored('Hello, World!', 'green', 'on_red'))\n\nAvailable formatting constants are::\n\n Fore: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.\n Back: BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, RESET.\n Style: DIM, NORMAL, BRIGHT, RESET_ALL\n\n``Style.RESET_ALL`` resets foreground, background, and brightness. Colorama will\nperform this reset automatically on program exit.\n\nThese are fairly well supported, but not part of the standard::\n\n Fore: LIGHTBLACK_EX, LIGHTRED_EX, LIGHTGREEN_EX, LIGHTYELLOW_EX, LIGHTBLUE_EX, LIGHTMAGENTA_EX, LIGHTCYAN_EX, LIGHTWHITE_EX\n Back: LIGHTBLACK_EX, LIGHTRED_EX, LIGHTGREEN_EX, LIGHTYELLOW_EX, LIGHTBLUE_EX, LIGHTMAGENTA_EX, LIGHTCYAN_EX, LIGHTWHITE_EX\n\nCursor Positioning\n..................\n\nANSI codes to reposition the cursor are supported. 
See ``demos/demo06.py`` for\nan example of how to generate them.\n\nInit Keyword Args\n.................\n\n``init()`` accepts some ``**kwargs`` to override default behaviour.\n\ninit(autoreset=False):\n If you find yourself repeatedly sending reset sequences to turn off color\n changes at the end of every print, then ``init(autoreset=True)`` will\n automate that:\n\n .. code-block:: python\n\n from colorama import init\n init(autoreset=True)\n print(Fore.RED + 'some red text')\n print('automatically back to default color again')\n\ninit(strip=None):\n Pass ``True`` or ``False`` to override whether ANSI codes should be\n stripped from the output. The default behaviour is to strip if on Windows\n or if output is redirected (not a tty).\n\ninit(convert=None):\n Pass ``True`` or ``False`` to override whether to convert ANSI codes in the\n output into win32 calls. The default behaviour is to convert if on Windows\n and output is to a tty (terminal).\n\ninit(wrap=True):\n On Windows, Colorama works by replacing ``sys.stdout`` and ``sys.stderr``\n with proxy objects, which override the ``.write()`` method to do their work.\n If this wrapping causes you problems, then this can be disabled by passing\n ``init(wrap=False)``. The default behaviour is to wrap if ``autoreset`` or\n ``strip`` or ``convert`` are True.\n\n When wrapping is disabled, colored printing on non-Windows platforms will\n continue to work as normal. To do cross-platform colored output, you can\n use Colorama's ``AnsiToWin32`` proxy directly:\n\n .. code-block:: python\n\n import sys\n from colorama import init, AnsiToWin32\n init(wrap=False)\n stream = AnsiToWin32(sys.stderr).stream\n\n # Python 2\n print >>stream, Fore.BLUE + 'blue text on stderr'\n\n # Python 3\n print(Fore.BLUE + 'blue text on stderr', file=stream)\n\nRecognised ANSI Sequences\n.........................\n\nANSI sequences generally take the form::\n\n ESC [ <param> ; <param> ... <command>\n\nWhere ``<param>`` is an integer, and ``<command>`` is a single letter. Zero or\nmore params are passed to a ``<command>``. If no params are passed, it is\ngenerally synonymous with passing a single zero. No spaces exist in the\nsequence; they have been inserted here simply to read more easily.\n\nThe only ANSI sequences that Colorama converts into win32 calls are::\n\n ESC [ 0 m # reset all (colors and brightness)\n ESC [ 1 m # bright\n ESC [ 2 m # dim (looks same as normal brightness)\n ESC [ 22 m # normal brightness\n\n # FOREGROUND:\n ESC [ 30 m # black\n ESC [ 31 m # red\n ESC [ 32 m # green\n ESC [ 33 m # yellow\n ESC [ 34 m # blue\n ESC [ 35 m # magenta\n ESC [ 36 m # cyan\n ESC [ 37 m # white\n ESC [ 39 m # reset\n\n # BACKGROUND\n ESC [ 40 m # black\n ESC [ 41 m # red\n ESC [ 42 m # green\n ESC [ 43 m # yellow\n ESC [ 44 m # blue\n ESC [ 45 m # magenta\n ESC [ 46 m # cyan\n ESC [ 47 m # white\n ESC [ 49 m # reset\n\n # cursor positioning\n ESC [ y;x H # position cursor at x across, y down\n ESC [ y;x f # position cursor at x across, y down\n ESC [ n A # move cursor n lines up\n ESC [ n B # move cursor n lines down\n ESC [ n C # move cursor n characters forward\n ESC [ n D # move cursor n characters backward\n\n # clear the screen\n ESC [ mode J # clear the screen\n\n # clear the line\n ESC [ mode K # clear the line\n\nMultiple numeric params to the ``'m'`` command can be combined into a single\nsequence::\n\n ESC [ 36 ; 45 ; 1 m # bright cyan text on magenta background\n\nAll other ANSI sequences of the form ``ESC [ <param> ; <param> ... 
<command>``\nare silently stripped from the output on Windows.\n\nAny other form of ANSI sequence, such as single-character codes or alternative\ninitial characters, are not recognised or stripped. It would be cool to add\nthem though. Let me know if it would be useful for you, via the Issues on\nGitHub.\n\nStatus & Known Problems\n-----------------------\n\nI've personally only tested it on Windows XP (CMD, Console2), Ubuntu\n(gnome-terminal, xterm), and OS X.\n\nSome valid ANSI sequences aren't recognised.\n\nIf you're hacking on the code, see `README-hacking.md`_. ESPECIALLY, see the\nexplanation there of why we do not want PRs that allow Colorama to generate new\ntypes of ANSI codes.\n\nSee outstanding issues and wish-list:\nhttps://github.com/tartley/colorama/issues\n\nIf anything doesn't work for you, or doesn't do what you expected or hoped for,\nI'd love to hear about it on that issues list, would be delighted by patches,\nand would be happy to grant commit access to anyone who submits a working patch\nor two.\n\n.. _README-hacking.md: README-hacking.md\n\nLicense\n-------\n\nCopyright Jonathan Hartley & Arnon Yaari, 2013-2020. BSD 3-Clause license; see\nLICENSE file.\n\nProfessional support\n--------------------\n\n.. |tideliftlogo| image:: https://cdn2.hubspot.net/hubfs/4008838/website/logos/logos_for_download/Tidelift_primary-shorthand-logo.png\n :alt: Tidelift\n :target: https://tidelift.com/subscription/pkg/pypi-colorama?utm_source=pypi-colorama&utm_medium=referral&utm_campaign=readme\n\n.. list-table::\n :widths: 10 100\n\n * - |tideliftlogo|\n - Professional support for colorama is available as part of the\n `Tidelift Subscription`_.\n Tidelift gives software development teams a single source for purchasing\n and maintaining their software, with professional grade assurances from\n the experts who know it best, while seamlessly integrating with existing\n tools.\n\n.. 
_Tidelift Subscription: https://tidelift.com/subscription/pkg/pypi-colorama?utm_source=pypi-colorama&utm_medium=referral&utm_campaign=readme\n\nThanks\n------\n\nSee the CHANGELOG for more thanks!\n\n* Marc Schlaich (schlamar) for a ``setup.py`` fix for Python2.5.\n* Marc Abramowitz, reported & fixed a crash on exit with closed ``stdout``,\n providing a solution to issue #7's setuptools/distutils debate,\n and other fixes.\n* User 'eryksun', for guidance on correctly instantiating ``ctypes.windll``.\n* Matthew McCormick for politely pointing out a longstanding crash on non-Win.\n* Ben Hoyt, for a magnificent fix under 64-bit Windows.\n* Jesse at Empty Square for submitting a fix for examples in the README.\n* User 'jamessp', an observant documentation fix for cursor positioning.\n* User 'vaal1239', Dave Mckee & Lackner Kristof for a tiny but much-needed Win7\n fix.\n* Julien Stuyck, for wisely suggesting Python3 compatible updates to README.\n* Daniel Griffith for multiple fabulous patches.\n* Oscar Lesta for a valuable fix to stop ANSI chars being sent to non-tty\n output.\n* Roger Binns, for many suggestions, valuable feedback, & bug reports.\n* Tim Golden for thought and much appreciated feedback on the initial idea.\n* User 'Zearin' for updates to the README file.\n* John Szakmeister for adding support for light colors\n* Charles Merriam for adding documentation to demos\n* Jurko for a fix on 64-bit Windows CPython2.5 w/o ctypes\n* Florian Bruhin for a fix when stdout or stderr are None\n* Thomas Weininger for fixing ValueError on Windows\n* Remi Rampin for better Github integration and fixes to the README file\n* Simeon Visser for closing a file handle using 'with' and updating classifiers\n to include Python 3.3 and 3.4\n* Andy Neff for fixing RESET of LIGHT_EX colors.\n* Jonathan Hartley for the initial idea and implementation.\n
|
.venv\Lib\site-packages\colorama-0.4.6.dist-info\METADATA
|
METADATA
|
Other
| 17,158 | 0.95 | 0.129252 | 0.10119 |
react-lib
| 826 |
2025-05-02T01:08:41.336209
|
BSD-3-Clause
| false |
40a32558d34334475bc175d03087174d
|
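The README in the record above documents the combined SGR sequence ``ESC [ 36 ; 45 ; 1 m`` (bright cyan on magenta). A hedged sketch below prints the same effect twice: once as a raw escape string and once via the Fore/Back/Style constants, which emit separate sequences with the same result.

```python
# Hedged sketch: the combined sequence "ESC [ 36 ; 45 ; 1 m" documented above,
# written as a raw escape string and via colorama's constants.
import colorama
from colorama import Back, Fore, Style

colorama.just_fix_windows_console()  # no-op outside older Windows consoles

print("\033[36;45;1m" + "bright cyan on magenta" + "\033[0m")
print(Fore.CYAN + Back.MAGENTA + Style.BRIGHT + "bright cyan on magenta" + Style.RESET_ALL)
```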
colorama-0.4.6.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\ncolorama-0.4.6.dist-info/METADATA,sha256=e67SnrUMOym9sz_4TjF3vxvAV4T3aF7NyqRHHH3YEMw,17158\ncolorama-0.4.6.dist-info/RECORD,,\ncolorama-0.4.6.dist-info/WHEEL,sha256=cdcF4Fbd0FPtw2EMIOwH-3rSOTUdTCeOSXRMD1iLUb8,105\ncolorama-0.4.6.dist-info/licenses/LICENSE.txt,sha256=ysNcAmhuXQSlpxQL-zs25zrtSWZW6JEQLkKIhteTAxg,1491\ncolorama/__init__.py,sha256=wePQA4U20tKgYARySLEC047ucNX-g8pRLpYBuiHlLb8,266\ncolorama/__pycache__/__init__.cpython-313.pyc,,\ncolorama/__pycache__/ansi.cpython-313.pyc,,\ncolorama/__pycache__/ansitowin32.cpython-313.pyc,,\ncolorama/__pycache__/initialise.cpython-313.pyc,,\ncolorama/__pycache__/win32.cpython-313.pyc,,\ncolorama/__pycache__/winterm.cpython-313.pyc,,\ncolorama/ansi.py,sha256=Top4EeEuaQdBWdteKMEcGOTeKeF19Q-Wo_6_Cj5kOzQ,2522\ncolorama/ansitowin32.py,sha256=vPNYa3OZbxjbuFyaVo0Tmhmy1FZ1lKMWCnT7odXpItk,11128\ncolorama/initialise.py,sha256=-hIny86ClXo39ixh5iSCfUIa2f_h_bgKRDW7gqs-KLU,3325\ncolorama/tests/__init__.py,sha256=MkgPAEzGQd-Rq0w0PZXSX2LadRWhUECcisJY8lSrm4Q,75\ncolorama/tests/__pycache__/__init__.cpython-313.pyc,,\ncolorama/tests/__pycache__/ansi_test.cpython-313.pyc,,\ncolorama/tests/__pycache__/ansitowin32_test.cpython-313.pyc,,\ncolorama/tests/__pycache__/initialise_test.cpython-313.pyc,,\ncolorama/tests/__pycache__/isatty_test.cpython-313.pyc,,\ncolorama/tests/__pycache__/utils.cpython-313.pyc,,\ncolorama/tests/__pycache__/winterm_test.cpython-313.pyc,,\ncolorama/tests/ansi_test.py,sha256=FeViDrUINIZcr505PAxvU4AjXz1asEiALs9GXMhwRaE,2839\ncolorama/tests/ansitowin32_test.py,sha256=RN7AIhMJ5EqDsYaCjVo-o4u8JzDD4ukJbmevWKS70rY,10678\ncolorama/tests/initialise_test.py,sha256=BbPy-XfyHwJ6zKozuQOvNvQZzsx9vdb_0bYXn7hsBTc,6741\ncolorama/tests/isatty_test.py,sha256=Pg26LRpv0yQDB5Ac-sxgVXG7hsA1NYvapFgApZfYzZg,1866\ncolorama/tests/utils.py,sha256=1IIRylG39z5-dzq09R_ngufxyPZxgldNbrxKxUGwGKE,1079\ncolorama/tests/winterm_test.py,sha256=qoWFPEjym5gm2RuMwpf3pOis3a5r_PJZFCzK254JL8A,3709\ncolorama/win32.py,sha256=YQOKwMTwtGBbsY4dL5HYTvwTeP9wIQra5MvPNddpxZs,6181\ncolorama/winterm.py,sha256=XCQFDHjPi6AHYNdZwy0tA02H-Jh48Jp-HvCjeLeLp3U,7134\n
|
.venv\Lib\site-packages\colorama-0.4.6.dist-info\RECORD
|
RECORD
|
Other
| 2,174 | 0.7 | 0 | 0 |
node-utils
| 940 |
2023-08-02T17:34:28.639989
|
GPL-3.0
| false |
b30fa6ac5d3b1418c880dea1ae32b7b6
|
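Assuming the standard wheel RECORD convention, each entry in the record above is ``path,sha256=<urlsafe-base64 digest with padding stripped>,<size in bytes>``. The sketch below, written under that assumption, recomputes such a hash for comparison; the path used is illustrative.

```python
# Sketch (assumes the standard wheel RECORD convention for the sha256= values above).
import base64
import hashlib
from pathlib import Path


def record_hash(path: Path) -> str:
    # SHA-256 digest, urlsafe base64, "=" padding stripped, as RECORD stores it.
    digest = hashlib.sha256(path.read_bytes()).digest()
    return "sha256=" + base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


# Usage (illustrative path, relative to site-packages):
# record_hash(Path("colorama/ansi.py"))  # expected to match the RECORD entry above
```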
Wheel-Version: 1.0\nGenerator: hatchling 1.11.1\nRoot-Is-Purelib: true\nTag: py2-none-any\nTag: py3-none-any\n
|
.venv\Lib\site-packages\colorama-0.4.6.dist-info\WHEEL
|
WHEEL
|
Other
| 105 | 0.7 | 0 | 0 |
vue-tools
| 753 |
2023-10-18T05:35:55.770698
|
BSD-3-Clause
| false |
292bc427f145c72dff4f09705f9581e4
|
Copyright (c) 2010 Jonathan Hartley\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the copyright holders, nor those of its contributors\n may be used to endorse or promote products derived from this software without\n specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
|
.venv\Lib\site-packages\colorama-0.4.6.dist-info\licenses\LICENSE.txt
|
LICENSE.txt
|
Other
| 1,491 | 0.7 | 0 | 0.136364 |
react-lib
| 711 |
2025-04-23T11:52:54.083936
|
GPL-3.0
| false |
b4936429a56a652b84c5c01280dcaa26
|
"""Default classes for Comm and CommManager, for usage in IPython.\n"""\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\nfrom __future__ import annotations\n\nimport contextlib\nimport logging\nimport typing as t\nimport uuid\n\nfrom traitlets.utils.importstring import import_item\n\nimport comm\n\nif t.TYPE_CHECKING:\n from zmq.eventloop.zmqstream import ZMQStream\n\nlogger = logging.getLogger("Comm")\n\nMessageType = t.Dict[str, t.Any]\nMaybeDict = t.Optional[t.Dict[str, t.Any]]\nBuffersType = t.Optional[t.List[bytes]]\nCommCallback = t.Callable[[MessageType], None]\nCommTargetCallback = t.Callable[["BaseComm", MessageType], None]\n\n\nclass BaseComm:\n """Class for communicating between a Frontend and a Kernel\n\n Must be subclassed with a publish_msg method implementation which\n sends comm messages through the iopub channel.\n """\n\n def __init__(\n self,\n target_name: str = "comm",\n data: MaybeDict = None,\n metadata: MaybeDict = None,\n buffers: BuffersType = None,\n comm_id: str | None = None,\n primary: bool = True,\n target_module: str | None = None,\n topic: bytes | None = None,\n _open_data: MaybeDict = None,\n _close_data: MaybeDict = None,\n **kwargs: t.Any,\n ) -> None:\n super().__init__(**kwargs)\n\n self.comm_id = comm_id if comm_id else uuid.uuid4().hex\n self.primary = primary\n self.target_name = target_name\n self.target_module = target_module\n self.topic = topic if topic else ("comm-%s" % self.comm_id).encode("ascii")\n\n self._open_data = _open_data if _open_data else {}\n self._close_data = _close_data if _close_data else {}\n\n self._msg_callback: CommCallback | None = None\n self._close_callback: CommCallback | None = None\n\n self._closed = True\n\n if self.primary:\n # I am primary, open my peer.\n self.open(data=data, metadata=metadata, buffers=buffers)\n else:\n self._closed = False\n\n def publish_msg(\n self,\n msg_type: str, # noqa: ARG002\n data: MaybeDict = None, # noqa: ARG002\n metadata: MaybeDict = None, # noqa: ARG002\n buffers: BuffersType = None, # noqa: ARG002\n **keys: t.Any, # noqa: ARG002\n ) -> None:\n msg = "publish_msg Comm method is not implemented"\n raise NotImplementedError(msg)\n\n def __del__(self) -> None:\n """trigger close on gc"""\n with contextlib.suppress(Exception):\n # any number of things can have gone horribly wrong\n # when called during interpreter teardown\n self.close(deleting=True)\n\n # publishing messages\n\n def open(\n self, data: MaybeDict = None, metadata: MaybeDict = None, buffers: BuffersType = None\n ) -> None:\n """Open the frontend-side version of this comm"""\n\n if data is None:\n data = self._open_data\n comm_manager = comm.get_comm_manager()\n if comm_manager is None:\n msg = "Comms cannot be opened without a comm_manager." 
# type:ignore[unreachable]\n raise RuntimeError(msg)\n\n comm_manager.register_comm(self)\n try:\n self.publish_msg(\n "comm_open",\n data=data,\n metadata=metadata,\n buffers=buffers,\n target_name=self.target_name,\n target_module=self.target_module,\n )\n self._closed = False\n except Exception:\n comm_manager.unregister_comm(self)\n raise\n\n def close(\n self,\n data: MaybeDict = None,\n metadata: MaybeDict = None,\n buffers: BuffersType = None,\n deleting: bool = False,\n ) -> None:\n """Close the frontend-side version of this comm"""\n if self._closed:\n # only close once\n return\n self._closed = True\n if data is None:\n data = self._close_data\n self.publish_msg(\n "comm_close",\n data=data,\n metadata=metadata,\n buffers=buffers,\n )\n if not deleting:\n # If deleting, the comm can't be registered\n comm.get_comm_manager().unregister_comm(self)\n\n def send(\n self, data: MaybeDict = None, metadata: MaybeDict = None, buffers: BuffersType = None\n ) -> None:\n """Send a message to the frontend-side version of this comm"""\n self.publish_msg(\n "comm_msg",\n data=data,\n metadata=metadata,\n buffers=buffers,\n )\n\n # registering callbacks\n\n def on_close(self, callback: CommCallback | None) -> None:\n """Register a callback for comm_close\n\n Will be called with the `data` of the close message.\n\n Call `on_close(None)` to disable an existing callback.\n """\n self._close_callback = callback\n\n def on_msg(self, callback: CommCallback | None) -> None:\n """Register a callback for comm_msg\n\n Will be called with the `data` of any comm_msg messages.\n\n Call `on_msg(None)` to disable an existing callback.\n """\n self._msg_callback = callback\n\n # handling of incoming messages\n\n def handle_close(self, msg: MessageType) -> None:\n """Handle a comm_close message"""\n logger.debug("handle_close[%s](%s)", self.comm_id, msg)\n if self._close_callback:\n self._close_callback(msg)\n\n def handle_msg(self, msg: MessageType) -> None:\n """Handle a comm_msg message"""\n logger.debug("handle_msg[%s](%s)", self.comm_id, msg)\n if self._msg_callback:\n from IPython import get_ipython\n\n shell = get_ipython()\n if shell:\n shell.events.trigger("pre_execute")\n self._msg_callback(msg)\n if shell:\n shell.events.trigger("post_execute")\n\n\nclass CommManager:\n """Default CommManager singleton implementation for Comms in the Kernel"""\n\n # Public APIs\n\n def __init__(self) -> None:\n self.comms: dict[str, BaseComm] = {}\n self.targets: dict[str, CommTargetCallback] = {}\n\n def register_target(self, target_name: str, f: CommTargetCallback | str) -> None:\n """Register a callable f for a given target name\n\n f will be called with two arguments when a comm_open message is received with `target`:\n\n - the Comm instance\n - the `comm_open` message itself.\n\n f can be a Python callable or an import string for one.\n """\n if isinstance(f, str):\n f = import_item(f)\n\n self.targets[target_name] = t.cast(CommTargetCallback, f)\n\n def unregister_target(self, target_name: str, f: CommTargetCallback) -> CommTargetCallback: # noqa: ARG002\n """Unregister a callable registered with register_target"""\n return self.targets.pop(target_name)\n\n def register_comm(self, comm: BaseComm) -> str:\n """Register a new comm"""\n comm_id = comm.comm_id\n self.comms[comm_id] = comm\n return comm_id\n\n def unregister_comm(self, comm: BaseComm) -> None:\n """Unregister a comm, and close its counterpart"""\n # unlike get_comm, this should raise a KeyError\n comm = self.comms.pop(comm.comm_id)\n\n def 
get_comm(self, comm_id: str) -> BaseComm | None:\n """Get a comm with a particular id\n\n Returns the comm if found, otherwise None.\n\n This will not raise an error,\n it will log messages if the comm cannot be found.\n """\n try:\n return self.comms[comm_id]\n except KeyError:\n logger.warning("No such comm: %s", comm_id)\n if logger.isEnabledFor(logging.DEBUG):\n # don't create the list of keys if debug messages aren't enabled\n logger.debug("Current comms: %s", list(self.comms.keys()))\n return None\n\n # Message handlers\n\n def comm_open(self, stream: ZMQStream, ident: str, msg: MessageType) -> None: # noqa: ARG002\n """Handler for comm_open messages"""\n from comm import create_comm\n\n content = msg["content"]\n comm_id = content["comm_id"]\n target_name = content["target_name"]\n f = self.targets.get(target_name, None)\n comm = create_comm(\n comm_id=comm_id,\n primary=False,\n target_name=target_name,\n )\n self.register_comm(comm)\n if f is None:\n logger.error("No such comm target registered: %s", target_name)\n else:\n try:\n f(comm, msg)\n return\n except Exception:\n logger.error("Exception opening comm with target: %s", target_name, exc_info=True)\n\n # Failure.\n try:\n comm.close()\n except Exception:\n logger.error(\n """Could not close comm during `comm_open` failure\n clean-up. The comm may not have been opened yet.""",\n exc_info=True,\n )\n\n def comm_msg(self, stream: ZMQStream, ident: str, msg: MessageType) -> None: # noqa: ARG002\n """Handler for comm_msg messages"""\n content = msg["content"]\n comm_id = content["comm_id"]\n comm = self.get_comm(comm_id)\n if comm is None:\n return\n\n try:\n comm.handle_msg(msg)\n except Exception:\n logger.error("Exception in comm_msg for %s", comm_id, exc_info=True)\n\n def comm_close(self, stream: ZMQStream, ident: str, msg: MessageType) -> None: # noqa: ARG002\n """Handler for comm_close messages"""\n content = msg["content"]\n comm_id = content["comm_id"]\n comm = self.get_comm(comm_id)\n if comm is None:\n return\n\n self.comms[comm_id]._closed = True\n del self.comms[comm_id]\n\n try:\n comm.handle_close(msg)\n except Exception:\n logger.error("Exception in comm_close for %s", comm_id, exc_info=True)\n\n\n__all__ = ["CommManager", "BaseComm"]\n
|
.venv\Lib\site-packages\comm\base_comm.py
|
base_comm.py
|
Python
| 10,153 | 0.95 | 0.198738 | 0.066406 |
node-utils
| 531 |
2023-09-27T01:29:29.726665
|
MIT
| false |
25496c9c3eda20ced5209b918d2dc239
|
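`BaseComm` in the record above must be subclassed with a `publish_msg` implementation, and `CommManager.register_target` wires a callback to a target name. A minimal sketch follows; `EchoComm`, `on_echo_open` and the "echo" target name are illustrative, not part of the comm package.

```python
# Minimal sketch of the API defined above; names here are illustrative only.
from comm.base_comm import BaseComm, CommManager


class EchoComm(BaseComm):
    def __init__(self, *args, **kwargs):
        self.published = []  # set before super().__init__, which may call publish_msg
        super().__init__(*args, **kwargs)

    def publish_msg(self, msg_type, data=None, metadata=None, buffers=None, **keys):
        # A real kernel sends this over the iopub channel; here we only record it.
        self.published.append((msg_type, data))


def on_echo_open(opened_comm, open_msg):
    # Called by a kernel's CommManager when a comm_open for "echo" arrives.
    opened_comm.on_msg(lambda msg: opened_comm.send({"echo": msg["content"]["data"]}))


manager = CommManager()
manager.register_target("echo", on_echo_open)

echo = EchoComm(target_name="echo")   # primary comm: publishes comm_open on creation
echo.send({"value": 42})              # publishes comm_msg
echo.close()                          # publishes comm_close
print(echo.published)
```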
"""Comm package.\n\nCopyright (c) IPython Development Team.\nDistributed under the terms of the Modified BSD License.\n\nThis package provides a way to register a Kernel Comm implementation, as per\nthe Jupyter kernel protocol.\nIt also provides a base Comm implementation and a default CommManager for the IPython case.\n"""\nfrom __future__ import annotations\n\nfrom typing import Any\n\nfrom .base_comm import BaseComm, BuffersType, CommManager, MaybeDict\n\n__version__ = "0.2.2"\n__all__ = [\n "create_comm",\n "get_comm_manager",\n "__version__",\n]\n\n_comm_manager = None\n\n\nclass DummyComm(BaseComm):\n def publish_msg(\n self,\n msg_type: str,\n data: MaybeDict = None,\n metadata: MaybeDict = None,\n buffers: BuffersType = None,\n **keys: Any,\n ) -> None:\n pass\n\n\ndef _create_comm(*args: Any, **kwargs: Any) -> BaseComm:\n """Create a Comm.\n\n This method is intended to be replaced, so that it returns your Comm instance.\n """\n return DummyComm(*args, **kwargs)\n\n\ndef _get_comm_manager() -> CommManager:\n """Get the current Comm manager, creates one if there is none.\n\n This method is intended to be replaced if needed (if you want to manage multiple CommManagers).\n """\n global _comm_manager # noqa: PLW0603\n\n if _comm_manager is None:\n _comm_manager = CommManager()\n\n return _comm_manager\n\n\ncreate_comm = _create_comm\nget_comm_manager = _get_comm_manager\n
|
.venv\Lib\site-packages\comm\__init__.py
|
__init__.py
|
Python
| 1,441 | 0.95 | 0.15 | 0.02381 |
node-utils
| 73 |
2024-05-10T08:00:45.780771
|
GPL-3.0
| false |
85c38c9ea9b5b391c170961f793b9dbb
|
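`create_comm` and `get_comm_manager` in the record above are module-level hooks that a kernel replaces with its own implementations. A brief hedged sketch, using the package's own no-op `DummyComm` as the stand-in factory, is shown below.

```python
# Hedged sketch of the hook points above: a kernel assigns its own factory and
# manager getter; here the package's no-op DummyComm stands in as the factory.
import comm
from comm import DummyComm

comm.create_comm = DummyComm               # replace the factory for this process
manager = comm.get_comm_manager()          # lazily creates the default CommManager

c = comm.create_comm(target_name="example")
print(manager.get_comm(c.comm_id) is c)    # True: opening a comm registers it
```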
\n\n
|
.venv\Lib\site-packages\comm\__pycache__\base_comm.cpython-313.pyc
|
base_comm.cpython-313.pyc
|
Other
| 12,858 | 0.95 | 0.099338 | 0 |
node-utils
| 59 |
2023-11-03T14:25:42.911615
|
GPL-3.0
| false |
a93f9c112c3dc43124ecde27cc939249
|
\n\n
|
.venv\Lib\site-packages\comm\__pycache__\__init__.cpython-313.pyc
|
__init__.cpython-313.pyc
|
Other
| 2,096 | 0.8 | 0.093023 | 0 |
awesome-app
| 931 |
2024-12-26T11:32:09.133247
|
GPL-3.0
| false |
d9886945802327c165cb7960b61e2b0c
|
pip\n
|
.venv\Lib\site-packages\comm-0.2.2.dist-info\INSTALLER
|
INSTALLER
|
Other
| 4 | 0.5 | 0 | 0 |
node-utils
| 256 |
2025-01-07T16:32:09.034541
|
Apache-2.0
| false |
365c9bfeb7d89244f2ce01c1de44cb85
|
Metadata-Version: 2.1\nName: comm\nVersion: 0.2.2\nSummary: Jupyter Python Comm implementation, for usage in ipykernel, xeus-python etc.\nProject-URL: Homepage, https://github.com/ipython/comm\nAuthor: Jupyter contributors\nLicense: BSD 3-Clause License\n \n Copyright (c) 2022, Jupyter\n All rights reserved.\n \n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n \n 1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n \n 2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n \n 3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n \n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nLicense-File: LICENSE\nKeywords: ipykernel,jupyter,xeus-python\nClassifier: Framework :: Jupyter\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nRequires-Python: >=3.8\nRequires-Dist: traitlets>=4\nProvides-Extra: test\nRequires-Dist: pytest; extra == 'test'\nDescription-Content-Type: text/markdown\n\n# Comm\n\nIt provides a way to register a Kernel Comm implementation, as per the Jupyter kernel protocol.\nIt also provides a base Comm implementation and a default CommManager that can be used.\n\n## Register a comm implementation in the kernel:\n\n### Case 1: Using the default CommManager and the BaseComm implementations\n\nWe provide default implementations for usage in IPython:\n\n```python\nimport comm\n\n\nclass MyCustomComm(comm.base_comm.BaseComm):\n def publish_msg(self, msg_type, data=None, metadata=None, buffers=None, **keys):\n # TODO implement the logic for sending comm messages through the iopub channel\n pass\n\n\ncomm.create_comm = MyCustomComm\n```\n\nThis is typically what ipykernel and JupyterLite's pyolite kernel will do.\n\n### Case 2: Providing your own comm manager creation implementation\n\n```python\nimport comm\n\ncomm.create_comm = custom_create_comm\ncomm.get_comm_manager = custom_comm_manager_getter\n```\n\nThis is typically what xeus-python does (it has its own manager implementation using xeus's C++ messaging logic).\n\n## Comm users\n\nLibraries like ipywidgets can then use the comms implementation that has been registered by the kernel:\n\n```python\nfrom comm import create_comm, get_comm_manager\n\n# Create a comm\ncomm_manager = get_comm_manager()\ncomm = 
create_comm()\n\ncomm_manager.register_comm(comm)\n```\n
|
.venv\Lib\site-packages\comm-0.2.2.dist-info\METADATA
|
METADATA
|
Other
| 3,689 | 0.95 | 0.051546 | 0.098592 |
node-utils
| 998 |
2024-10-11T21:44:24.328811
|
MIT
| false |
48060e8999641b8c00ea519103a135f6
|
comm-0.2.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\ncomm-0.2.2.dist-info/METADATA,sha256=o5oGQm64kDFK0M9OjFoJBG32bQfW4dNJWrdqlsoChAA,3689\ncomm-0.2.2.dist-info/RECORD,,\ncomm-0.2.2.dist-info/WHEEL,sha256=TJPnKdtrSue7xZ_AVGkp9YXcvDrobsjBds1du3Nx6dc,87\ncomm-0.2.2.dist-info/licenses/LICENSE,sha256=l6fgoUK3wdzSZGGyD8E5gJiLREm3RG7XdpKIJv0sqcQ,1515\ncomm/__init__.py,sha256=aoo4fpF-iJaMgT5FSRUrhdLF6c29N-Cp3g_2eZhKddA,1441\ncomm/__pycache__/__init__.cpython-313.pyc,,\ncomm/__pycache__/base_comm.cpython-313.pyc,,\ncomm/base_comm.py,sha256=PZr8Bj-pny4x4tK9TW6HX-rP8TI4uSOF_O6ObflqECk,10153\ncomm/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\n
|
.venv\Lib\site-packages\comm-0.2.2.dist-info\RECORD
|
RECORD
|
Other
| 689 | 0.7 | 0 | 0 |
react-lib
| 957 |
2024-08-12T18:44:35.268389
|
BSD-3-Clause
| false |
2a6498368c278f2cf17666b2ccd1811e
|
Wheel-Version: 1.0\nGenerator: hatchling 1.21.1\nRoot-Is-Purelib: true\nTag: py3-none-any\n
|
.venv\Lib\site-packages\comm-0.2.2.dist-info\WHEEL
|
WHEEL
|
Other
| 87 | 0.5 | 0 | 0 |
python-kit
| 307 |
2024-10-31T14:36:10.149638
|
MIT
| false |
a2e74b4e3aea204ad48eb8854874f5a5
|
BSD 3-Clause License\n\nCopyright (c) 2022, Jupyter\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n3. Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
|
.venv\Lib\site-packages\comm-0.2.2.dist-info\licenses\LICENSE
|
LICENSE
|
Other
| 1,515 | 0.7 | 0 | 0 |
python-kit
| 13 |
2024-06-29T23:07:58.941994
|
BSD-3-Clause
| false |
3a623b76c23287751590a7793a86e552
|
from __future__ import annotations\n\nfrom itertools import chain, pairwise\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\n\nfrom contourpy.typecheck import check_code_array, check_offset_array, check_point_array\nfrom contourpy.types import CLOSEPOLY, LINETO, MOVETO, code_dtype, offset_dtype, point_dtype\n\nif TYPE_CHECKING:\n import contourpy._contourpy as cpy\n\n\ndef codes_from_offsets(offsets: cpy.OffsetArray) -> cpy.CodeArray:\n """Determine codes from offsets, assuming they all correspond to closed polygons.\n """\n check_offset_array(offsets)\n\n n = offsets[-1]\n codes = np.full(n, LINETO, dtype=code_dtype)\n codes[offsets[:-1]] = MOVETO\n codes[offsets[1:] - 1] = CLOSEPOLY\n return codes\n\n\ndef codes_from_offsets_and_points(\n offsets: cpy.OffsetArray,\n points: cpy.PointArray,\n) -> cpy.CodeArray:\n """Determine codes from offsets and points, using the equality of the start and end points of\n each line to determine if lines are closed or not.\n """\n check_offset_array(offsets)\n check_point_array(points)\n\n codes = np.full(len(points), LINETO, dtype=code_dtype)\n codes[offsets[:-1]] = MOVETO\n\n end_offsets = offsets[1:] - 1\n closed = np.all(points[offsets[:-1]] == points[end_offsets], axis=1)\n codes[end_offsets[closed]] = CLOSEPOLY\n\n return codes\n\n\ndef codes_from_points(points: cpy.PointArray) -> cpy.CodeArray:\n """Determine codes for a single line, using the equality of the start and end points to\n determine if the line is closed or not.\n """\n check_point_array(points)\n\n n = len(points)\n codes = np.full(n, LINETO, dtype=code_dtype)\n codes[0] = MOVETO\n if np.all(points[0] == points[-1]):\n codes[-1] = CLOSEPOLY\n return codes\n\n\ndef concat_codes(list_of_codes: list[cpy.CodeArray]) -> cpy.CodeArray:\n """Concatenate a list of codes arrays into a single code array.\n """\n if not list_of_codes:\n raise ValueError("Empty list passed to concat_codes")\n\n return np.concatenate(list_of_codes, dtype=code_dtype)\n\n\ndef concat_codes_or_none(list_of_codes_or_none: list[cpy.CodeArray | None]) -> cpy.CodeArray | None:\n """Concatenate a list of codes arrays or None into a single code array or None.\n """\n list_of_codes = [codes for codes in list_of_codes_or_none if codes is not None]\n if list_of_codes:\n return concat_codes(list_of_codes)\n else:\n return None\n\n\ndef concat_offsets(list_of_offsets: list[cpy.OffsetArray]) -> cpy.OffsetArray:\n """Concatenate a list of offsets arrays into a single offset array.\n """\n if not list_of_offsets:\n raise ValueError("Empty list passed to concat_offsets")\n\n n = len(list_of_offsets)\n cumulative = np.cumsum([offsets[-1] for offsets in list_of_offsets], dtype=offset_dtype)\n ret: cpy.OffsetArray = np.concatenate(\n (list_of_offsets[0], *(list_of_offsets[i+1][1:] + cumulative[i] for i in range(n-1))),\n dtype=offset_dtype,\n )\n return ret\n\n\ndef concat_offsets_or_none(\n list_of_offsets_or_none: list[cpy.OffsetArray | None],\n) -> cpy.OffsetArray | None:\n """Concatenate a list of offsets arrays or None into a single offset array or None.\n """\n list_of_offsets = [offsets for offsets in list_of_offsets_or_none if offsets is not None]\n if list_of_offsets:\n return concat_offsets(list_of_offsets)\n else:\n return None\n\n\ndef concat_points(list_of_points: list[cpy.PointArray]) -> cpy.PointArray:\n """Concatenate a list of point arrays into a single point array.\n """\n if not list_of_points:\n raise ValueError("Empty list passed to concat_points")\n\n return np.concatenate(list_of_points, 
dtype=point_dtype)\n\n\ndef concat_points_or_none(\n list_of_points_or_none: list[cpy.PointArray | None],\n) -> cpy.PointArray | None:\n """Concatenate a list of point arrays or None into a single point array or None.\n """\n list_of_points = [points for points in list_of_points_or_none if points is not None]\n if list_of_points:\n return concat_points(list_of_points)\n else:\n return None\n\n\ndef concat_points_or_none_with_nan(\n list_of_points_or_none: list[cpy.PointArray | None],\n) -> cpy.PointArray | None:\n """Concatenate a list of points or None into a single point array or None, with NaNs used to\n separate each line.\n """\n list_of_points = [points for points in list_of_points_or_none if points is not None]\n if list_of_points:\n return concat_points_with_nan(list_of_points)\n else:\n return None\n\n\ndef concat_points_with_nan(list_of_points: list[cpy.PointArray]) -> cpy.PointArray:\n """Concatenate a list of points into a single point array with NaNs used to separate each line.\n """\n if not list_of_points:\n raise ValueError("Empty list passed to concat_points_with_nan")\n\n if len(list_of_points) == 1:\n return list_of_points[0]\n else:\n nan_spacer = np.full((1, 2), np.nan, dtype=point_dtype)\n list_of_points = [list_of_points[0],\n *list(chain(*((nan_spacer, x) for x in list_of_points[1:])))]\n return concat_points(list_of_points)\n\n\ndef insert_nan_at_offsets(points: cpy.PointArray, offsets: cpy.OffsetArray) -> cpy.PointArray:\n """Insert NaNs into a point array at locations specified by an offset array.\n """\n check_point_array(points)\n check_offset_array(offsets)\n\n if len(offsets) <= 2:\n return points\n else:\n nan_spacer = np.array([np.nan, np.nan], dtype=point_dtype)\n # Convert offsets to int64 to avoid numpy error when mixing signed and unsigned ints.\n return np.insert(points, offsets[1:-1].astype(np.int64), nan_spacer, axis=0)\n\n\ndef offsets_from_codes(codes: cpy.CodeArray) -> cpy.OffsetArray:\n """Determine offsets from codes using locations of MOVETO codes.\n """\n check_code_array(codes)\n\n return np.append(np.nonzero(codes == MOVETO)[0], len(codes)).astype(offset_dtype)\n\n\ndef offsets_from_lengths(list_of_points: list[cpy.PointArray]) -> cpy.OffsetArray:\n """Determine offsets from lengths of point arrays.\n """\n if not list_of_points:\n raise ValueError("Empty list passed to offsets_from_lengths")\n\n return np.cumsum([0] + [len(line) for line in list_of_points], dtype=offset_dtype)\n\n\ndef outer_offsets_from_list_of_codes(list_of_codes: list[cpy.CodeArray]) -> cpy.OffsetArray:\n """Determine outer offsets from codes using locations of MOVETO codes.\n """\n if not list_of_codes:\n raise ValueError("Empty list passed to outer_offsets_from_list_of_codes")\n\n return np.cumsum([0] + [np.count_nonzero(codes == MOVETO) for codes in list_of_codes],\n dtype=offset_dtype)\n\n\ndef outer_offsets_from_list_of_offsets(list_of_offsets: list[cpy.OffsetArray]) -> cpy.OffsetArray:\n """Determine outer offsets from a list of offsets.\n """\n if not list_of_offsets:\n raise ValueError("Empty list passed to outer_offsets_from_list_of_offsets")\n\n return np.cumsum([0] + [len(offsets)-1 for offsets in list_of_offsets], dtype=offset_dtype)\n\n\ndef remove_nan(points: cpy.PointArray) -> tuple[cpy.PointArray, cpy.OffsetArray]:\n """Remove NaN from a points array, also return the offsets corresponding to the NaN removed.\n """\n check_point_array(points)\n\n nan_offsets = np.nonzero(np.isnan(points[:, 0]))[0]\n if len(nan_offsets) == 0:\n return points, np.array([0, 
len(points)], dtype=offset_dtype)\n else:\n points = np.delete(points, nan_offsets, axis=0)\n nan_offsets -= np.arange(len(nan_offsets))\n offsets: cpy.OffsetArray = np.empty(len(nan_offsets)+2, dtype=offset_dtype)\n offsets[0] = 0\n offsets[1:-1] = nan_offsets\n offsets[-1] = len(points)\n return points, offsets\n\n\ndef split_codes_by_offsets(codes: cpy.CodeArray, offsets: cpy.OffsetArray) -> list[cpy.CodeArray]:\n """Split a code array at locations specified by an offset array into a list of code arrays.\n """\n check_code_array(codes)\n check_offset_array(offsets)\n\n if len(offsets) > 2:\n return np.split(codes, offsets[1:-1])\n else:\n return [codes]\n\n\ndef split_points_by_offsets(\n points: cpy.PointArray,\n offsets: cpy.OffsetArray,\n) -> list[cpy.PointArray]:\n """Split a point array at locations specified by an offset array into a list of point arrays.\n """\n check_point_array(points)\n check_offset_array(offsets)\n\n if len(offsets) > 2:\n return np.split(points, offsets[1:-1])\n else:\n return [points]\n\n\ndef split_points_at_nan(points: cpy.PointArray) -> list[cpy.PointArray]:\n """Split a points array at NaNs into a list of point arrays.\n """\n check_point_array(points)\n\n nan_offsets = np.nonzero(np.isnan(points[:, 0]))[0]\n if len(nan_offsets) == 0:\n return [points]\n else:\n nan_offsets = np.concatenate(([-1], nan_offsets, [len(points)]))\n return [points[s+1:e] for s, e in pairwise(nan_offsets)]\n
|
.venv\Lib\site-packages\contourpy\array.py
|
array.py
|
Python
| 9,240 | 0.95 | 0.218391 | 0.01005 |
awesome-app
| 870 |
2025-01-16T22:27:04.109369
|
GPL-3.0
| false |
5c501bfde86fbfcf02b0b1ceaa39c3fa
|
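A short round trip with the helpers in the record above is sketched here; the offset values are made up for illustration and describe two closed polygons of 4 and 3 points.

```python
# Illustrative round trip with the helpers above: offsets -> path codes -> offsets.
import numpy as np

from contourpy.array import codes_from_offsets, offsets_from_codes

offsets = np.array([0, 4, 7], dtype=np.uint32)   # contourpy's offset_dtype is uint32
codes = codes_from_offsets(offsets)
print(codes)                      # MOVETO, LINETO, LINETO, CLOSEPOLY, MOVETO, LINETO, CLOSEPOLY
print(offsets_from_codes(codes))  # [0 4 7]
```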
from __future__ import annotations\n\nimport math\n\n\ndef calc_chunk_sizes(\n chunk_size: int | tuple[int, int] | None,\n chunk_count: int | tuple[int, int] | None,\n total_chunk_count: int | None,\n ny: int,\n nx: int,\n) -> tuple[int, int]:\n """Calculate chunk sizes.\n\n Args:\n chunk_size (int or tuple(int, int), optional): Chunk size in (y, x) directions, or the same\n size in both directions if only one is specified. Cannot be negative.\n chunk_count (int or tuple(int, int), optional): Chunk count in (y, x) directions, or the\n same count in both directions if only one is specified. If less than 1, set to 1.\n total_chunk_count (int, optional): Total number of chunks. If less than 1, set to 1.\n ny (int): Number of grid points in y-direction.\n nx (int): Number of grid points in x-direction.\n\n Return:\n tuple(int, int): Chunk sizes (y_chunk_size, x_chunk_size).\n\n Note:\n Zero or one of ``chunk_size``, ``chunk_count`` and ``total_chunk_count`` should be\n specified.\n """\n if sum([chunk_size is not None, chunk_count is not None, total_chunk_count is not None]) > 1:\n raise ValueError("Only one of chunk_size, chunk_count and total_chunk_count should be set")\n\n if nx < 2 or ny < 2:\n raise ValueError(f"(ny, nx) must be at least (2, 2), not ({ny}, {nx})")\n\n if total_chunk_count is not None:\n max_chunk_count = (nx-1)*(ny-1)\n total_chunk_count = min(max(total_chunk_count, 1), max_chunk_count)\n if total_chunk_count == 1:\n chunk_size = 0\n elif total_chunk_count == max_chunk_count:\n chunk_size = (1, 1)\n else:\n factors = two_factors(total_chunk_count)\n if ny > nx:\n chunk_count = factors\n else:\n chunk_count = (factors[1], factors[0])\n\n if chunk_count is not None:\n if isinstance(chunk_count, tuple):\n y_chunk_count, x_chunk_count = chunk_count\n else:\n y_chunk_count = x_chunk_count = chunk_count\n x_chunk_count = min(max(x_chunk_count, 1), nx-1)\n y_chunk_count = min(max(y_chunk_count, 1), ny-1)\n chunk_size = (math.ceil((ny-1) / y_chunk_count), math.ceil((nx-1) / x_chunk_count))\n\n if chunk_size is None:\n y_chunk_size = x_chunk_size = 0\n elif isinstance(chunk_size, tuple):\n y_chunk_size, x_chunk_size = chunk_size\n else:\n y_chunk_size = x_chunk_size = chunk_size\n\n if x_chunk_size < 0 or y_chunk_size < 0:\n raise ValueError("chunk_size cannot be negative")\n\n return y_chunk_size, x_chunk_size\n\n\ndef two_factors(n: int) -> tuple[int, int]:\n """Split an integer into two integer factors.\n\n The two factors will be as close as possible to the sqrt of n, and are returned in decreasing\n order. Worst case returns (n, 1).\n\n Args:\n n (int): The integer to factorize, must be positive.\n\n Return:\n tuple(int, int): The two factors of n, in decreasing order.\n """\n if n < 0:\n raise ValueError(f"two_factors expects positive integer not {n}")\n\n i = math.ceil(math.sqrt(n))\n while n % i != 0:\n i -= 1\n j = n // i\n if i > j:\n return i, j\n else:\n return j, i\n
|
.venv\Lib\site-packages\contourpy\chunk.py
|
chunk.py
|
Python
| 3,374 | 0.95 | 0.168421 | 0 |
awesome-app
| 851 |
2024-04-17T02:26:11.294196
|
Apache-2.0
| false |
da4f48ae0db5cc99836b478a7ae7427c
|
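A brief usage sketch for the chunking helpers in the record above, with a made-up 100 x 200 grid, follows.

```python
# Hedged usage sketch for calc_chunk_sizes and two_factors defined above.
from contourpy.chunk import calc_chunk_sizes, two_factors

print(two_factors(12))  # (4, 3): the two factors closest to sqrt(12), decreasing order

# Six chunks in total: the factors (3, 2) are oriented so the larger count
# follows the longer axis, then converted to per-chunk sizes in grid points.
print(calc_chunk_sizes(None, None, total_chunk_count=6, ny=100, nx=200))  # (50, 67)
```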
from __future__ import annotations\n\nfrom itertools import pairwise\nfrom typing import TYPE_CHECKING, cast\n\nimport numpy as np\n\nfrom contourpy._contourpy import FillType, LineType\nimport contourpy.array as arr\nfrom contourpy.enum_util import as_fill_type, as_line_type\nfrom contourpy.typecheck import check_filled, check_lines\nfrom contourpy.types import MOVETO, offset_dtype\n\nif TYPE_CHECKING:\n import contourpy._contourpy as cpy\n\n\ndef _convert_filled_from_OuterCode(\n filled: cpy.FillReturn_OuterCode,\n fill_type_to: FillType,\n) -> cpy.FillReturn:\n if fill_type_to == FillType.OuterCode:\n return filled\n elif fill_type_to == FillType.OuterOffset:\n return (filled[0], [arr.offsets_from_codes(codes) for codes in filled[1]])\n\n if len(filled[0]) > 0:\n points = arr.concat_points(filled[0])\n codes = arr.concat_codes(filled[1])\n else:\n points = None\n codes = None\n\n if fill_type_to == FillType.ChunkCombinedCode:\n return ([points], [codes])\n elif fill_type_to == FillType.ChunkCombinedOffset:\n return ([points], [None if codes is None else arr.offsets_from_codes(codes)])\n elif fill_type_to == FillType.ChunkCombinedCodeOffset:\n outer_offsets = None if points is None else arr.offsets_from_lengths(filled[0])\n ret1: cpy.FillReturn_ChunkCombinedCodeOffset = ([points], [codes], [outer_offsets])\n return ret1\n elif fill_type_to == FillType.ChunkCombinedOffsetOffset:\n if codes is None:\n ret2: cpy.FillReturn_ChunkCombinedOffsetOffset = ([None], [None], [None])\n else:\n offsets = arr.offsets_from_codes(codes)\n outer_offsets = arr.outer_offsets_from_list_of_codes(filled[1])\n ret2 = ([points], [offsets], [outer_offsets])\n return ret2\n else:\n raise ValueError(f"Invalid FillType {fill_type_to}")\n\n\ndef _convert_filled_from_OuterOffset(\n filled: cpy.FillReturn_OuterOffset,\n fill_type_to: FillType,\n) -> cpy.FillReturn:\n if fill_type_to == FillType.OuterCode:\n separate_codes = [arr.codes_from_offsets(offsets) for offsets in filled[1]]\n return (filled[0], separate_codes)\n elif fill_type_to == FillType.OuterOffset:\n return filled\n\n if len(filled[0]) > 0:\n points = arr.concat_points(filled[0])\n offsets = arr.concat_offsets(filled[1])\n else:\n points = None\n offsets = None\n\n if fill_type_to == FillType.ChunkCombinedCode:\n return ([points], [None if offsets is None else arr.codes_from_offsets(offsets)])\n elif fill_type_to == FillType.ChunkCombinedOffset:\n return ([points], [offsets])\n elif fill_type_to == FillType.ChunkCombinedCodeOffset:\n if offsets is None:\n ret1: cpy.FillReturn_ChunkCombinedCodeOffset = ([None], [None], [None])\n else:\n codes = arr.codes_from_offsets(offsets)\n outer_offsets = arr.offsets_from_lengths(filled[0])\n ret1 = ([points], [codes], [outer_offsets])\n return ret1\n elif fill_type_to == FillType.ChunkCombinedOffsetOffset:\n if points is None:\n ret2: cpy.FillReturn_ChunkCombinedOffsetOffset = ([None], [None], [None])\n else:\n outer_offsets = arr.outer_offsets_from_list_of_offsets(filled[1])\n ret2 = ([points], [offsets], [outer_offsets])\n return ret2\n else:\n raise ValueError(f"Invalid FillType {fill_type_to}")\n\n\ndef _convert_filled_from_ChunkCombinedCode(\n filled: cpy.FillReturn_ChunkCombinedCode,\n fill_type_to: FillType,\n) -> cpy.FillReturn:\n if fill_type_to == FillType.ChunkCombinedCode:\n return filled\n elif fill_type_to == FillType.ChunkCombinedOffset:\n codes = [None if codes is None else arr.offsets_from_codes(codes) for codes in filled[1]]\n return (filled[0], codes)\n else:\n raise ValueError(\n f"Conversion 
from {FillType.ChunkCombinedCode} to {fill_type_to} not supported")\n\n\ndef _convert_filled_from_ChunkCombinedOffset(\n filled: cpy.FillReturn_ChunkCombinedOffset,\n fill_type_to: FillType,\n) -> cpy.FillReturn:\n if fill_type_to == FillType.ChunkCombinedCode:\n chunk_codes: list[cpy.CodeArray | None] = []\n for points, offsets in zip(*filled):\n if points is None:\n chunk_codes.append(None)\n else:\n if TYPE_CHECKING:\n assert offsets is not None\n chunk_codes.append(arr.codes_from_offsets_and_points(offsets, points))\n return (filled[0], chunk_codes)\n elif fill_type_to == FillType.ChunkCombinedOffset:\n return filled\n else:\n raise ValueError(\n f"Conversion from {FillType.ChunkCombinedOffset} to {fill_type_to} not supported")\n\n\ndef _convert_filled_from_ChunkCombinedCodeOffset(\n filled: cpy.FillReturn_ChunkCombinedCodeOffset,\n fill_type_to: FillType,\n) -> cpy.FillReturn:\n if fill_type_to == FillType.OuterCode:\n separate_points = []\n separate_codes = []\n for points, codes, outer_offsets in zip(*filled):\n if points is not None:\n if TYPE_CHECKING:\n assert codes is not None\n assert outer_offsets is not None\n separate_points += arr.split_points_by_offsets(points, outer_offsets)\n separate_codes += arr.split_codes_by_offsets(codes, outer_offsets)\n return (separate_points, separate_codes)\n elif fill_type_to == FillType.OuterOffset:\n separate_points = []\n separate_offsets = []\n for points, codes, outer_offsets in zip(*filled):\n if points is not None:\n if TYPE_CHECKING:\n assert codes is not None\n assert outer_offsets is not None\n separate_points += arr.split_points_by_offsets(points, outer_offsets)\n separate_codes = arr.split_codes_by_offsets(codes, outer_offsets)\n separate_offsets += [arr.offsets_from_codes(codes) for codes in separate_codes]\n return (separate_points, separate_offsets)\n elif fill_type_to == FillType.ChunkCombinedCode:\n ret1: cpy.FillReturn_ChunkCombinedCode = (filled[0], filled[1])\n return ret1\n elif fill_type_to == FillType.ChunkCombinedOffset:\n all_offsets = [None if codes is None else arr.offsets_from_codes(codes)\n for codes in filled[1]]\n ret2: cpy.FillReturn_ChunkCombinedOffset = (filled[0], all_offsets)\n return ret2\n elif fill_type_to == FillType.ChunkCombinedCodeOffset:\n return filled\n elif fill_type_to == FillType.ChunkCombinedOffsetOffset:\n chunk_offsets: list[cpy.OffsetArray | None] = []\n chunk_outer_offsets: list[cpy.OffsetArray | None] = []\n for codes, outer_offsets in zip(*filled[1:]):\n if codes is None:\n chunk_offsets.append(None)\n chunk_outer_offsets.append(None)\n else:\n if TYPE_CHECKING:\n assert outer_offsets is not None\n offsets = arr.offsets_from_codes(codes)\n outer_offsets = np.array([np.nonzero(offsets == oo)[0][0] for oo in outer_offsets],\n dtype=offset_dtype)\n chunk_offsets.append(offsets)\n chunk_outer_offsets.append(outer_offsets)\n ret3: cpy.FillReturn_ChunkCombinedOffsetOffset = (\n filled[0], chunk_offsets, chunk_outer_offsets,\n )\n return ret3\n else:\n raise ValueError(f"Invalid FillType {fill_type_to}")\n\n\ndef _convert_filled_from_ChunkCombinedOffsetOffset(\n filled: cpy.FillReturn_ChunkCombinedOffsetOffset,\n fill_type_to: FillType,\n) -> cpy.FillReturn:\n if fill_type_to == FillType.OuterCode:\n separate_points = []\n separate_codes = []\n for points, offsets, outer_offsets in zip(*filled):\n if points is not None:\n if TYPE_CHECKING:\n assert offsets is not None\n assert outer_offsets is not None\n codes = arr.codes_from_offsets_and_points(offsets, points)\n outer_offsets = 
offsets[outer_offsets]\n separate_points += arr.split_points_by_offsets(points, outer_offsets)\n separate_codes += arr.split_codes_by_offsets(codes, outer_offsets)\n return (separate_points, separate_codes)\n elif fill_type_to == FillType.OuterOffset:\n separate_points = []\n separate_offsets = []\n for points, offsets, outer_offsets in zip(*filled):\n if points is not None:\n if TYPE_CHECKING:\n assert offsets is not None\n assert outer_offsets is not None\n if len(outer_offsets) > 2:\n separate_offsets += [offsets[s:e+1] - offsets[s] for s, e in\n pairwise(outer_offsets)]\n else:\n separate_offsets.append(offsets)\n separate_points += arr.split_points_by_offsets(points, offsets[outer_offsets])\n return (separate_points, separate_offsets)\n elif fill_type_to == FillType.ChunkCombinedCode:\n chunk_codes: list[cpy.CodeArray | None] = []\n for points, offsets, outer_offsets in zip(*filled):\n if points is None:\n chunk_codes.append(None)\n else:\n if TYPE_CHECKING:\n assert offsets is not None\n assert outer_offsets is not None\n chunk_codes.append(arr.codes_from_offsets_and_points(offsets, points))\n ret1: cpy.FillReturn_ChunkCombinedCode = (filled[0], chunk_codes)\n return ret1\n elif fill_type_to == FillType.ChunkCombinedOffset:\n return (filled[0], filled[1])\n elif fill_type_to == FillType.ChunkCombinedCodeOffset:\n chunk_codes = []\n chunk_outer_offsets: list[cpy.OffsetArray | None] = []\n for points, offsets, outer_offsets in zip(*filled):\n if points is None:\n chunk_codes.append(None)\n chunk_outer_offsets.append(None)\n else:\n if TYPE_CHECKING:\n assert offsets is not None\n assert outer_offsets is not None\n chunk_codes.append(arr.codes_from_offsets_and_points(offsets, points))\n chunk_outer_offsets.append(offsets[outer_offsets])\n ret2: cpy.FillReturn_ChunkCombinedCodeOffset = (filled[0], chunk_codes, chunk_outer_offsets)\n return ret2\n elif fill_type_to == FillType.ChunkCombinedOffsetOffset:\n return filled\n else:\n raise ValueError(f"Invalid FillType {fill_type_to}")\n\n\ndef convert_filled(\n filled: cpy.FillReturn,\n fill_type_from: FillType | str,\n fill_type_to: FillType | str,\n) -> cpy.FillReturn:\n """Convert filled contours from one :class:`~.FillType` to another.\n\n Args:\n filled (sequence of arrays): Filled contour polygons to convert, such as those returned by\n :meth:`.ContourGenerator.filled`.\n fill_type_from (FillType or str): :class:`~.FillType` to convert from as enum or\n string equivalent.\n fill_type_to (FillType or str): :class:`~.FillType` to convert to as enum or string\n equivalent.\n\n Return:\n Converted filled contour polygons.\n\n When converting non-chunked fill types (``FillType.OuterCode`` or ``FillType.OuterOffset``) to\n chunked ones, all polygons are placed in the first chunk. When converting in the other\n direction, all chunk information is discarded. Converting a fill type that is not aware of the\n relationship between outer boundaries and contained holes (``FillType.ChunkCombinedCode`` or\n ``FillType.ChunkCombinedOffset``) to one that is will raise a ``ValueError``.\n\n .. 
versionadded:: 1.2.0\n """\n fill_type_from = as_fill_type(fill_type_from)\n fill_type_to = as_fill_type(fill_type_to)\n\n check_filled(filled, fill_type_from)\n\n if fill_type_from == FillType.OuterCode:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_OuterCode, filled)\n return _convert_filled_from_OuterCode(filled, fill_type_to)\n elif fill_type_from == FillType.OuterOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_OuterOffset, filled)\n return _convert_filled_from_OuterOffset(filled, fill_type_to)\n elif fill_type_from == FillType.ChunkCombinedCode:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedCode, filled)\n return _convert_filled_from_ChunkCombinedCode(filled, fill_type_to)\n elif fill_type_from == FillType.ChunkCombinedOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled)\n return _convert_filled_from_ChunkCombinedOffset(filled, fill_type_to)\n elif fill_type_from == FillType.ChunkCombinedCodeOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled)\n return _convert_filled_from_ChunkCombinedCodeOffset(filled, fill_type_to)\n elif fill_type_from == FillType.ChunkCombinedOffsetOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled)\n return _convert_filled_from_ChunkCombinedOffsetOffset(filled, fill_type_to)\n else:\n raise ValueError(f"Invalid FillType {fill_type_from}")\n\n\ndef _convert_lines_from_Separate(\n lines: cpy.LineReturn_Separate,\n line_type_to: LineType,\n) -> cpy.LineReturn:\n if line_type_to == LineType.Separate:\n return lines\n elif line_type_to == LineType.SeparateCode:\n separate_codes = [arr.codes_from_points(line) for line in lines]\n return (lines, separate_codes)\n elif line_type_to == LineType.ChunkCombinedCode:\n if not lines:\n ret1: cpy.LineReturn_ChunkCombinedCode = ([None], [None])\n else:\n points = arr.concat_points(lines)\n offsets = arr.offsets_from_lengths(lines)\n codes = arr.codes_from_offsets_and_points(offsets, points)\n ret1 = ([points], [codes])\n return ret1\n elif line_type_to == LineType.ChunkCombinedOffset:\n if not lines:\n ret2: cpy.LineReturn_ChunkCombinedOffset = ([None], [None])\n else:\n ret2 = ([arr.concat_points(lines)], [arr.offsets_from_lengths(lines)])\n return ret2\n elif line_type_to == LineType.ChunkCombinedNan:\n if not lines:\n ret3: cpy.LineReturn_ChunkCombinedNan = ([None],)\n else:\n ret3 = ([arr.concat_points_with_nan(lines)],)\n return ret3\n else:\n raise ValueError(f"Invalid LineType {line_type_to}")\n\n\ndef _convert_lines_from_SeparateCode(\n lines: cpy.LineReturn_SeparateCode,\n line_type_to: LineType,\n) -> cpy.LineReturn:\n if line_type_to == LineType.Separate:\n # Drop codes.\n return lines[0]\n elif line_type_to == LineType.SeparateCode:\n return lines\n elif line_type_to == LineType.ChunkCombinedCode:\n if not lines[0]:\n ret1: cpy.LineReturn_ChunkCombinedCode = ([None], [None])\n else:\n ret1 = ([arr.concat_points(lines[0])], [arr.concat_codes(lines[1])])\n return ret1\n elif line_type_to == LineType.ChunkCombinedOffset:\n if not lines[0]:\n ret2: cpy.LineReturn_ChunkCombinedOffset = ([None], [None])\n else:\n ret2 = ([arr.concat_points(lines[0])], [arr.offsets_from_lengths(lines[0])])\n return ret2\n elif line_type_to == LineType.ChunkCombinedNan:\n if not lines[0]:\n ret3: cpy.LineReturn_ChunkCombinedNan = ([None],)\n else:\n ret3 = ([arr.concat_points_with_nan(lines[0])],)\n return ret3\n else:\n raise ValueError(f"Invalid LineType {line_type_to}")\n\n\ndef 
_convert_lines_from_ChunkCombinedCode(\n lines: cpy.LineReturn_ChunkCombinedCode,\n line_type_to: LineType,\n) -> cpy.LineReturn:\n if line_type_to in (LineType.Separate, LineType.SeparateCode):\n separate_lines = []\n for points, codes in zip(*lines):\n if points is not None:\n if TYPE_CHECKING:\n assert codes is not None\n split_at = np.nonzero(codes == MOVETO)[0]\n if len(split_at) > 1:\n separate_lines += np.split(points, split_at[1:])\n else:\n separate_lines.append(points)\n if line_type_to == LineType.Separate:\n return separate_lines\n else:\n separate_codes = [arr.codes_from_points(line) for line in separate_lines]\n return (separate_lines, separate_codes)\n elif line_type_to == LineType.ChunkCombinedCode:\n return lines\n elif line_type_to == LineType.ChunkCombinedOffset:\n chunk_offsets = [None if codes is None else arr.offsets_from_codes(codes)\n for codes in lines[1]]\n return (lines[0], chunk_offsets)\n elif line_type_to == LineType.ChunkCombinedNan:\n points_nan: list[cpy.PointArray | None] = []\n for points, codes in zip(*lines):\n if points is None:\n points_nan.append(None)\n else:\n if TYPE_CHECKING:\n assert codes is not None\n offsets = arr.offsets_from_codes(codes)\n points_nan.append(arr.insert_nan_at_offsets(points, offsets))\n return (points_nan,)\n else:\n raise ValueError(f"Invalid LineType {line_type_to}")\n\n\ndef _convert_lines_from_ChunkCombinedOffset(\n lines: cpy.LineReturn_ChunkCombinedOffset,\n line_type_to: LineType,\n) -> cpy.LineReturn:\n if line_type_to in (LineType.Separate, LineType.SeparateCode):\n separate_lines = []\n for points, offsets in zip(*lines):\n if points is not None:\n if TYPE_CHECKING:\n assert offsets is not None\n separate_lines += arr.split_points_by_offsets(points, offsets)\n if line_type_to == LineType.Separate:\n return separate_lines\n else:\n separate_codes = [arr.codes_from_points(line) for line in separate_lines]\n return (separate_lines, separate_codes)\n elif line_type_to == LineType.ChunkCombinedCode:\n chunk_codes: list[cpy.CodeArray | None] = []\n for points, offsets in zip(*lines):\n if points is None:\n chunk_codes.append(None)\n else:\n if TYPE_CHECKING:\n assert offsets is not None\n chunk_codes.append(arr.codes_from_offsets_and_points(offsets, points))\n return (lines[0], chunk_codes)\n elif line_type_to == LineType.ChunkCombinedOffset:\n return lines\n elif line_type_to == LineType.ChunkCombinedNan:\n points_nan: list[cpy.PointArray | None] = []\n for points, offsets in zip(*lines):\n if points is None:\n points_nan.append(None)\n else:\n if TYPE_CHECKING:\n assert offsets is not None\n points_nan.append(arr.insert_nan_at_offsets(points, offsets))\n return (points_nan,)\n else:\n raise ValueError(f"Invalid LineType {line_type_to}")\n\n\ndef _convert_lines_from_ChunkCombinedNan(\n lines: cpy.LineReturn_ChunkCombinedNan,\n line_type_to: LineType,\n) -> cpy.LineReturn:\n if line_type_to in (LineType.Separate, LineType.SeparateCode):\n separate_lines = []\n for points in lines[0]:\n if points is not None:\n separate_lines += arr.split_points_at_nan(points)\n if line_type_to == LineType.Separate:\n return separate_lines\n else:\n separate_codes = [arr.codes_from_points(points) for points in separate_lines]\n return (separate_lines, separate_codes)\n elif line_type_to == LineType.ChunkCombinedCode:\n chunk_points: list[cpy.PointArray | None] = []\n chunk_codes: list[cpy.CodeArray | None] = []\n for points in lines[0]:\n if points is None:\n chunk_points.append(None)\n chunk_codes.append(None)\n else:\n points, offsets = 
arr.remove_nan(points)\n chunk_points.append(points)\n chunk_codes.append(arr.codes_from_offsets_and_points(offsets, points))\n return (chunk_points, chunk_codes)\n elif line_type_to == LineType.ChunkCombinedOffset:\n chunk_points = []\n chunk_offsets: list[cpy.OffsetArray | None] = []\n for points in lines[0]:\n if points is None:\n chunk_points.append(None)\n chunk_offsets.append(None)\n else:\n points, offsets = arr.remove_nan(points)\n chunk_points.append(points)\n chunk_offsets.append(offsets)\n return (chunk_points, chunk_offsets)\n elif line_type_to == LineType.ChunkCombinedNan:\n return lines\n else:\n raise ValueError(f"Invalid LineType {line_type_to}")\n\n\ndef convert_lines(\n lines: cpy.LineReturn,\n line_type_from: LineType | str,\n line_type_to: LineType | str,\n) -> cpy.LineReturn:\n """Convert contour lines from one :class:`~.LineType` to another.\n\n Args:\n lines (sequence of arrays): Contour lines to convert, such as those returned by\n :meth:`.ContourGenerator.lines`.\n line_type_from (LineType or str): :class:`~.LineType` to convert from as enum or\n string equivalent.\n line_type_to (LineType or str): :class:`~.LineType` to convert to as enum or string\n equivalent.\n\n Return:\n Converted contour lines.\n\n When converting non-chunked line types (``LineType.Separate`` or ``LineType.SeparateCode``) to\n chunked ones (``LineType.ChunkCombinedCode``, ``LineType.ChunkCombinedOffset`` or\n ``LineType.ChunkCombinedNan``), all lines are placed in the first chunk. When converting in the\n other direction, all chunk information is discarded.\n\n .. versionadded:: 1.2.0\n """\n line_type_from = as_line_type(line_type_from)\n line_type_to = as_line_type(line_type_to)\n\n check_lines(lines, line_type_from)\n\n if line_type_from == LineType.Separate:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_Separate, lines)\n return _convert_lines_from_Separate(lines, line_type_to)\n elif line_type_from == LineType.SeparateCode:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_SeparateCode, lines)\n return _convert_lines_from_SeparateCode(lines, line_type_to)\n elif line_type_from == LineType.ChunkCombinedCode:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedCode, lines)\n return _convert_lines_from_ChunkCombinedCode(lines, line_type_to)\n elif line_type_from == LineType.ChunkCombinedOffset:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines)\n return _convert_lines_from_ChunkCombinedOffset(lines, line_type_to)\n elif line_type_from == LineType.ChunkCombinedNan:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedNan, lines)\n return _convert_lines_from_ChunkCombinedNan(lines, line_type_to)\n else:\n raise ValueError(f"Invalid LineType {line_type_from}")\n\n\ndef convert_multi_filled(\n multi_filled: list[cpy.FillReturn],\n fill_type_from: FillType | str,\n fill_type_to: FillType | str,\n) -> list[cpy.FillReturn]:\n """Convert multiple sets of filled contours from one :class:`~.FillType` to another.\n\n Args:\n multi_filled (nested sequence of arrays): Filled contour polygons to convert, such as those\n returned by :meth:`.ContourGenerator.multi_filled`.\n fill_type_from (FillType or str): :class:`~.FillType` to convert from as enum or\n string equivalent.\n fill_type_to (FillType or str): :class:`~.FillType` to convert to as enum or string\n equivalent.\n\n Return:\n Converted sets filled contour polygons.\n\n When converting non-chunked fill types (``FillType.OuterCode`` or ``FillType.OuterOffset``) to\n chunked ones, all polygons 
are placed in the first chunk. When converting in the other\n direction, all chunk information is discarded. Converting a fill type that is not aware of the\n relationship between outer boundaries and contained holes (``FillType.ChunkCombinedCode`` or\n ``FillType.ChunkCombinedOffset``) to one that is will raise a ``ValueError``.\n\n .. versionadded:: 1.3.0\n """\n fill_type_from = as_fill_type(fill_type_from)\n fill_type_to = as_fill_type(fill_type_to)\n\n return [convert_filled(filled, fill_type_from, fill_type_to) for filled in multi_filled]\n\n\ndef convert_multi_lines(\n multi_lines: list[cpy.LineReturn],\n line_type_from: LineType | str,\n line_type_to: LineType | str,\n) -> list[cpy.LineReturn]:\n """Convert multiple sets of contour lines from one :class:`~.LineType` to another.\n\n Args:\n multi_lines (nested sequence of arrays): Contour lines to convert, such as those returned by\n :meth:`.ContourGenerator.multi_lines`.\n line_type_from (LineType or str): :class:`~.LineType` to convert from as enum or\n string equivalent.\n line_type_to (LineType or str): :class:`~.LineType` to convert to as enum or string\n equivalent.\n\n Return:\n Converted set of contour lines.\n\n When converting non-chunked line types (``LineType.Separate`` or ``LineType.SeparateCode``) to\n chunked ones (``LineType.ChunkCombinedCode``, ``LineType.ChunkCombinedOffset`` or\n ``LineType.ChunkCombinedNan``), all lines are placed in the first chunk. When converting in the\n other direction, all chunk information is discarded.\n\n .. versionadded:: 1.3.0\n """\n line_type_from = as_line_type(line_type_from)\n line_type_to = as_line_type(line_type_to)\n\n return [convert_lines(lines, line_type_from, line_type_to) for lines in multi_lines]\n
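A minimal usage sketch for the convert_lines API defined above, assuming contourpy is installed; the grid, the contour level and the choice of target representation are arbitrary illustrations, not part of the library source.

import numpy as np
import contourpy
from contourpy import LineType, convert_lines

# Small illustrative grid; any 2D z array works.
x, y = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
z = x * y

gen = contourpy.contour_generator(x, y, z, line_type=LineType.Separate)
lines = gen.lines(0.25)  # LineType.Separate: a list of (n, 2) point arrays

# Convert to the chunked NaN-separated representation; string names are accepted too.
nan_lines = convert_lines(lines, LineType.Separate, "ChunkCombinedNan")
print(type(nan_lines), len(nan_lines[0]))  # tuple containing one list (a single chunk)

convert_filled follows the same pattern with FillType values, subject to the restriction in the docstring that conversions to hole-aware fill types are only possible from hole-aware sources.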
|
.venv\Lib\site-packages\contourpy\convert.py
|
convert.py
|
Python
| 26,775 | 0.95 | 0.217391 | 0.001783 |
react-lib
| 685 |
2025-04-06T00:29:31.411490
|
BSD-3-Clause
| false |
76d1258f2a3e1fed5809daf0512a8ab5
|
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, cast\n\nfrom contourpy._contourpy import FillType, LineType\nfrom contourpy.array import (\n concat_codes_or_none,\n concat_offsets_or_none,\n concat_points_or_none,\n concat_points_or_none_with_nan,\n)\nfrom contourpy.enum_util import as_fill_type, as_line_type\nfrom contourpy.typecheck import check_filled, check_lines\n\nif TYPE_CHECKING:\n import contourpy._contourpy as cpy\n\n\ndef dechunk_filled(filled: cpy.FillReturn, fill_type: FillType | str) -> cpy.FillReturn:\n """Return the specified filled contours with chunked data moved into the first chunk.\n\n Filled contours that are not chunked (``FillType.OuterCode`` and ``FillType.OuterOffset``) and\n those that are but only contain a single chunk are returned unmodified. Individual polygons are\n unchanged, they are not geometrically combined.\n\n Args:\n filled (sequence of arrays): Filled contour data, such as returned by\n :meth:`.ContourGenerator.filled`.\n fill_type (FillType or str): Type of :meth:`~.ContourGenerator.filled` as enum or string\n equivalent.\n\n Return:\n Filled contours in a single chunk.\n\n .. versionadded:: 1.2.0\n """\n fill_type = as_fill_type(fill_type)\n\n if fill_type in (FillType.OuterCode, FillType.OuterOffset):\n # No-op if fill_type is not chunked.\n return filled\n\n check_filled(filled, fill_type)\n if len(filled[0]) < 2:\n # No-op if just one chunk.\n return filled\n\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_Chunk, filled)\n points = concat_points_or_none(filled[0])\n\n if fill_type == FillType.ChunkCombinedCode:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedCode, filled)\n if points is None:\n ret1: cpy.FillReturn_ChunkCombinedCode = ([None], [None])\n else:\n ret1 = ([points], [concat_codes_or_none(filled[1])])\n return ret1\n elif fill_type == FillType.ChunkCombinedOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled)\n if points is None:\n ret2: cpy.FillReturn_ChunkCombinedOffset = ([None], [None])\n else:\n ret2 = ([points], [concat_offsets_or_none(filled[1])])\n return ret2\n elif fill_type == FillType.ChunkCombinedCodeOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled)\n if points is None:\n ret3: cpy.FillReturn_ChunkCombinedCodeOffset = ([None], [None], [None])\n else:\n outer_offsets = concat_offsets_or_none(filled[2])\n ret3 = ([points], [concat_codes_or_none(filled[1])], [outer_offsets])\n return ret3\n elif fill_type == FillType.ChunkCombinedOffsetOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled)\n if points is None:\n ret4: cpy.FillReturn_ChunkCombinedOffsetOffset = ([None], [None], [None])\n else:\n outer_offsets = concat_offsets_or_none(filled[2])\n ret4 = ([points], [concat_offsets_or_none(filled[1])], [outer_offsets])\n return ret4\n else:\n raise ValueError(f"Invalid FillType {fill_type}")\n\n\ndef dechunk_lines(lines: cpy.LineReturn, line_type: LineType | str) -> cpy.LineReturn:\n """Return the specified contour lines with chunked data moved into the first chunk.\n\n Contour lines that are not chunked (``LineType.Separate`` and ``LineType.SeparateCode``) and\n those that are but only contain a single chunk are returned unmodified. 
Individual lines are\n unchanged, they are not geometrically combined.\n\n Args:\n lines (sequence of arrays): Contour line data, such as returned by\n :meth:`.ContourGenerator.lines`.\n line_type (LineType or str): Type of :meth:`~.ContourGenerator.lines` as enum or string\n equivalent.\n\n Return:\n Contour lines in a single chunk.\n\n .. versionadded:: 1.2.0\n """\n line_type = as_line_type(line_type)\n\n if line_type in (LineType.Separate, LineType.SeparateCode):\n # No-op if line_type is not chunked.\n return lines\n\n check_lines(lines, line_type)\n if len(lines[0]) < 2:\n # No-op if just one chunk.\n return lines\n\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_Chunk, lines)\n\n if line_type == LineType.ChunkCombinedCode:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedCode, lines)\n points = concat_points_or_none(lines[0])\n if points is None:\n ret1: cpy.LineReturn_ChunkCombinedCode = ([None], [None])\n else:\n ret1 = ([points], [concat_codes_or_none(lines[1])])\n return ret1\n elif line_type == LineType.ChunkCombinedOffset:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines)\n points = concat_points_or_none(lines[0])\n if points is None:\n ret2: cpy.LineReturn_ChunkCombinedOffset = ([None], [None])\n else:\n ret2 = ([points], [concat_offsets_or_none(lines[1])])\n return ret2\n elif line_type == LineType.ChunkCombinedNan:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedNan, lines)\n points = concat_points_or_none_with_nan(lines[0])\n ret3: cpy.LineReturn_ChunkCombinedNan = ([points],)\n return ret3\n else:\n raise ValueError(f"Invalid LineType {line_type}")\n\n\ndef dechunk_multi_filled(\n multi_filled: list[cpy.FillReturn],\n fill_type: FillType | str,\n) -> list[cpy.FillReturn]:\n """Return multiple sets of filled contours with chunked data moved into the first chunks.\n\n Filled contours that are not chunked (``FillType.OuterCode`` and ``FillType.OuterOffset``) and\n those that are but only contain a single chunk are returned unmodified. Individual polygons are\n unchanged, they are not geometrically combined.\n\n Args:\n multi_filled (nested sequence of arrays): Filled contour data, such as returned by\n :meth:`.ContourGenerator.multi_filled`.\n fill_type (FillType or str): Type of :meth:`~.ContourGenerator.filled` as enum or string\n equivalent.\n\n Return:\n Multiple sets of filled contours in a single chunk.\n\n .. versionadded:: 1.3.0\n """\n fill_type = as_fill_type(fill_type)\n\n if fill_type in (FillType.OuterCode, FillType.OuterOffset):\n # No-op if fill_type is not chunked.\n return multi_filled\n\n return [dechunk_filled(filled, fill_type) for filled in multi_filled]\n\n\ndef dechunk_multi_lines(\n multi_lines: list[cpy.LineReturn],\n line_type: LineType | str,\n) -> list[cpy.LineReturn]:\n """Return multiple sets of contour lines with all chunked data moved into the first chunks.\n\n Contour lines that are not chunked (``LineType.Separate`` and ``LineType.SeparateCode``) and\n those that are but only contain a single chunk are returned unmodified. Individual lines are\n unchanged, they are not geometrically combined.\n\n Args:\n multi_lines (nested sequence of arrays): Contour line data, such as returned by\n :meth:`.ContourGenerator.multi_lines`.\n line_type (LineType or str): Type of :meth:`~.ContourGenerator.lines` as enum or string\n equivalent.\n\n Return:\n Multiple sets of contour lines in a single chunk.\n\n .. 
versionadded:: 1.3.0\n """\n line_type = as_line_type(line_type)\n\n if line_type in (LineType.Separate, LineType.SeparateCode):\n # No-op if line_type is not chunked.\n return multi_lines\n\n return [dechunk_lines(lines, line_type) for lines in multi_lines]\n
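A short sketch of dechunk_lines from the module above: it collapses a multi-chunk result into a single chunk. The grid, seed and level below are illustrative; chunk_count=2 is used only to force several chunks so there is something to combine.

import numpy as np
from contourpy import LineType, contour_generator, dechunk_lines

z = np.random.default_rng(0).random((20, 20))
gen = contour_generator(z=z, line_type=LineType.ChunkCombinedOffset, chunk_count=2)
lines = gen.lines(0.5)
print(len(lines[0]))  # 4 chunks (2 x 2 grid of chunks)

single = dechunk_lines(lines, LineType.ChunkCombinedOffset)
print(len(single[0]))  # 1: all points are now in the first chunk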
|
.venv\Lib\site-packages\contourpy\dechunk.py
|
dechunk.py
|
Python
| 7,963 | 0.95 | 0.173913 | 0.035714 |
awesome-app
| 436 |
2025-01-13T17:25:07.915323
|
Apache-2.0
| false |
5b29f2e1bde1fbe9dbac6fa164adf604
|
from __future__ import annotations\n\nfrom contourpy._contourpy import FillType, LineType, ZInterp\n\n\ndef as_fill_type(fill_type: FillType | str) -> FillType:\n """Coerce a FillType or string value to a FillType.\n\n Args:\n fill_type (FillType or str): Value to convert.\n\n Return:\n FillType: Converted value.\n """\n if isinstance(fill_type, str):\n try:\n return FillType.__members__[fill_type]\n except KeyError as e:\n raise ValueError(f"'{fill_type}' is not a valid FillType") from e\n else:\n return fill_type\n\n\ndef as_line_type(line_type: LineType | str) -> LineType:\n """Coerce a LineType or string value to a LineType.\n\n Args:\n line_type (LineType or str): Value to convert.\n\n Return:\n LineType: Converted value.\n """\n if isinstance(line_type, str):\n try:\n return LineType.__members__[line_type]\n except KeyError as e:\n raise ValueError(f"'{line_type}' is not a valid LineType") from e\n else:\n return line_type\n\n\ndef as_z_interp(z_interp: ZInterp | str) -> ZInterp:\n """Coerce a ZInterp or string value to a ZInterp.\n\n Args:\n z_interp (ZInterp or str): Value to convert.\n\n Return:\n ZInterp: Converted value.\n """\n if isinstance(z_interp, str):\n try:\n return ZInterp.__members__[z_interp]\n except KeyError as e:\n raise ValueError(f"'{z_interp}' is not a valid ZInterp") from e\n else:\n return z_interp\n
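The coercion helpers above accept either an enum member or its string name and raise ValueError otherwise. A couple of illustrative calls:

from contourpy import FillType, LineType
from contourpy.enum_util import as_fill_type, as_line_type

assert as_fill_type("OuterOffset") == FillType.OuterOffset
assert as_line_type(LineType.Separate) == LineType.Separate  # enum values pass through unchanged

try:
    as_fill_type("NotAFillType")
except ValueError as exc:
    print(exc)  # "'NotAFillType' is not a valid FillType"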
|
.venv\Lib\site-packages\contourpy\enum_util.py
|
enum_util.py
|
Python
| 1,576 | 0.85 | 0.157895 | 0 |
vue-tools
| 267 |
2023-11-14T13:53:24.217514
|
Apache-2.0
| false |
66f2bf9eb052e750f879cc2f9de1ed2b
|
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, cast\n\nimport numpy as np\n\nfrom contourpy import FillType, LineType\nfrom contourpy.enum_util import as_fill_type, as_line_type\nfrom contourpy.types import MOVETO, code_dtype, offset_dtype, point_dtype\n\nif TYPE_CHECKING:\n import contourpy._contourpy as cpy\n\n\n# Minimalist array-checking functions that check dtype, ndims and shape only.\n# They do not walk the arrays to check the contents for performance reasons.\ndef check_code_array(codes: Any) -> None:\n if not isinstance(codes, np.ndarray):\n raise TypeError(f"Expected numpy array not {type(codes)}")\n if codes.dtype != code_dtype:\n raise ValueError(f"Expected numpy array of dtype {code_dtype} not {codes.dtype}")\n if not (codes.ndim == 1 and len(codes) > 1):\n raise ValueError(f"Expected numpy array of shape (?,) not {codes.shape}")\n if codes[0] != MOVETO:\n raise ValueError(f"First element of code array must be {MOVETO}, not {codes[0]}")\n\n\ndef check_offset_array(offsets: Any) -> None:\n if not isinstance(offsets, np.ndarray):\n raise TypeError(f"Expected numpy array not {type(offsets)}")\n if offsets.dtype != offset_dtype:\n raise ValueError(f"Expected numpy array of dtype {offset_dtype} not {offsets.dtype}")\n if not (offsets.ndim == 1 and len(offsets) > 1):\n raise ValueError(f"Expected numpy array of shape (?,) not {offsets.shape}")\n if offsets[0] != 0:\n raise ValueError(f"First element of offset array must be 0, not {offsets[0]}")\n\n\ndef check_point_array(points: Any) -> None:\n if not isinstance(points, np.ndarray):\n raise TypeError(f"Expected numpy array not {type(points)}")\n if points.dtype != point_dtype:\n raise ValueError(f"Expected numpy array of dtype {point_dtype} not {points.dtype}")\n if not (points.ndim == 2 and points.shape[1] ==2 and points.shape[0] > 1):\n raise ValueError(f"Expected numpy array of shape (?, 2) not {points.shape}")\n\n\ndef _check_tuple_of_lists_with_same_length(\n maybe_tuple: Any,\n tuple_length: int,\n allow_empty_lists: bool = True,\n) -> None:\n if not isinstance(maybe_tuple, tuple):\n raise TypeError(f"Expected tuple not {type(maybe_tuple)}")\n if len(maybe_tuple) != tuple_length:\n raise ValueError(f"Expected tuple of length {tuple_length} not {len(maybe_tuple)}")\n for maybe_list in maybe_tuple:\n if not isinstance(maybe_list, list):\n msg = f"Expected tuple to contain {tuple_length} lists but found a {type(maybe_list)}"\n raise TypeError(msg)\n lengths = [len(item) for item in maybe_tuple]\n if len(set(lengths)) != 1:\n msg = f"Expected {tuple_length} lists with same length but lengths are {lengths}"\n raise ValueError(msg)\n if not allow_empty_lists and lengths[0] == 0:\n raise ValueError(f"Expected {tuple_length} non-empty lists")\n\n\ndef check_filled(filled: cpy.FillReturn, fill_type: FillType | str) -> None:\n fill_type = as_fill_type(fill_type)\n\n if fill_type == FillType.OuterCode:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_OuterCode, filled)\n _check_tuple_of_lists_with_same_length(filled, 2)\n for i, (points, codes) in enumerate(zip(*filled)):\n check_point_array(points)\n check_code_array(codes)\n if len(points) != len(codes):\n raise ValueError(f"Points and codes have different lengths in polygon {i}")\n elif fill_type == FillType.OuterOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_OuterOffset, filled)\n _check_tuple_of_lists_with_same_length(filled, 2)\n for i, (points, offsets) in enumerate(zip(*filled)):\n check_point_array(points)\n 
check_offset_array(offsets)\n if offsets[-1] != len(points):\n raise ValueError(f"Inconsistent points and offsets in polygon {i}")\n elif fill_type == FillType.ChunkCombinedCode:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedCode, filled)\n _check_tuple_of_lists_with_same_length(filled, 2, allow_empty_lists=False)\n for chunk, (points_or_none, codes_or_none) in enumerate(zip(*filled)):\n if points_or_none is not None and codes_or_none is not None:\n check_point_array(points_or_none)\n check_code_array(codes_or_none)\n if len(points_or_none) != len(codes_or_none):\n raise ValueError(f"Points and codes have different lengths in chunk {chunk}")\n elif not (points_or_none is None and codes_or_none is None):\n raise ValueError(f"Inconsistent Nones in chunk {chunk}")\n elif fill_type == FillType.ChunkCombinedOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedOffset, filled)\n _check_tuple_of_lists_with_same_length(filled, 2, allow_empty_lists=False)\n for chunk, (points_or_none, offsets_or_none) in enumerate(zip(*filled)):\n if points_or_none is not None and offsets_or_none is not None:\n check_point_array(points_or_none)\n check_offset_array(offsets_or_none)\n if offsets_or_none[-1] != len(points_or_none):\n raise ValueError(f"Inconsistent points and offsets in chunk {chunk}")\n elif not (points_or_none is None and offsets_or_none is None):\n raise ValueError(f"Inconsistent Nones in chunk {chunk}")\n elif fill_type == FillType.ChunkCombinedCodeOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedCodeOffset, filled)\n _check_tuple_of_lists_with_same_length(filled, 3, allow_empty_lists=False)\n for i, (points_or_none, codes_or_none, outer_offsets_or_none) in enumerate(zip(*filled)):\n if (points_or_none is not None and codes_or_none is not None and\n outer_offsets_or_none is not None):\n check_point_array(points_or_none)\n check_code_array(codes_or_none)\n check_offset_array(outer_offsets_or_none)\n if len(codes_or_none) != len(points_or_none):\n raise ValueError(f"Points and codes have different lengths in chunk {i}")\n if outer_offsets_or_none[-1] != len(codes_or_none):\n raise ValueError(f"Inconsistent codes and outer_offsets in chunk {i}")\n elif not (points_or_none is None and codes_or_none is None and\n outer_offsets_or_none is None):\n raise ValueError(f"Inconsistent Nones in chunk {i}")\n elif fill_type == FillType.ChunkCombinedOffsetOffset:\n if TYPE_CHECKING:\n filled = cast(cpy.FillReturn_ChunkCombinedOffsetOffset, filled)\n _check_tuple_of_lists_with_same_length(filled, 3, allow_empty_lists=False)\n for i, (points_or_none, offsets_or_none, outer_offsets_or_none) in enumerate(zip(*filled)):\n if (points_or_none is not None and offsets_or_none is not None and\n outer_offsets_or_none is not None):\n check_point_array(points_or_none)\n check_offset_array(offsets_or_none)\n check_offset_array(outer_offsets_or_none)\n if offsets_or_none[-1] != len(points_or_none):\n raise ValueError(f"Inconsistent points and offsets in chunk {i}")\n if outer_offsets_or_none[-1] != len(offsets_or_none) - 1:\n raise ValueError(f"Inconsistent offsets and outer_offsets in chunk {i}")\n elif not (points_or_none is None and offsets_or_none is None and\n outer_offsets_or_none is None):\n raise ValueError(f"Inconsistent Nones in chunk {i}")\n else:\n raise ValueError(f"Invalid FillType {fill_type}")\n\n\ndef check_lines(lines: cpy.LineReturn, line_type: LineType | str) -> None:\n line_type = as_line_type(line_type)\n\n if line_type == LineType.Separate:\n if 
TYPE_CHECKING:\n lines = cast(cpy.LineReturn_Separate, lines)\n if not isinstance(lines, list):\n raise TypeError(f"Expected list not {type(lines)}")\n for points in lines:\n check_point_array(points)\n elif line_type == LineType.SeparateCode:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_SeparateCode, lines)\n _check_tuple_of_lists_with_same_length(lines, 2)\n for i, (points, codes) in enumerate(zip(*lines)):\n check_point_array(points)\n check_code_array(codes)\n if len(points) != len(codes):\n raise ValueError(f"Points and codes have different lengths in line {i}")\n elif line_type == LineType.ChunkCombinedCode:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedCode, lines)\n _check_tuple_of_lists_with_same_length(lines, 2, allow_empty_lists=False)\n for chunk, (points_or_none, codes_or_none) in enumerate(zip(*lines)):\n if points_or_none is not None and codes_or_none is not None:\n check_point_array(points_or_none)\n check_code_array(codes_or_none)\n if len(points_or_none) != len(codes_or_none):\n raise ValueError(f"Points and codes have different lengths in chunk {chunk}")\n elif not (points_or_none is None and codes_or_none is None):\n raise ValueError(f"Inconsistent Nones in chunk {chunk}")\n elif line_type == LineType.ChunkCombinedOffset:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedOffset, lines)\n _check_tuple_of_lists_with_same_length(lines, 2, allow_empty_lists=False)\n for chunk, (points_or_none, offsets_or_none) in enumerate(zip(*lines)):\n if points_or_none is not None and offsets_or_none is not None:\n check_point_array(points_or_none)\n check_offset_array(offsets_or_none)\n if offsets_or_none[-1] != len(points_or_none):\n raise ValueError(f"Inconsistent points and offsets in chunk {chunk}")\n elif not (points_or_none is None and offsets_or_none is None):\n raise ValueError(f"Inconsistent Nones in chunk {chunk}")\n elif line_type == LineType.ChunkCombinedNan:\n if TYPE_CHECKING:\n lines = cast(cpy.LineReturn_ChunkCombinedNan, lines)\n _check_tuple_of_lists_with_same_length(lines, 1, allow_empty_lists=False)\n for _chunk, points_or_none in enumerate(lines[0]):\n if points_or_none is not None:\n check_point_array(points_or_none)\n else:\n raise ValueError(f"Invalid LineType {line_type}")\n
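As the comment in the module notes, these validators only check dtypes, dimensions and shapes, not the geometric contents. A small sketch with a passing and a failing check; the point values are made up for illustration.

import numpy as np
from contourpy import LineType
from contourpy.typecheck import check_lines, check_point_array

line = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])  # float64, shape (3, 2)
check_point_array(line)                  # passes silently
check_lines([line], LineType.Separate)   # LineReturn_Separate is just a list of point arrays

try:
    check_point_array(line.astype(np.float32))  # wrong dtype
except ValueError as exc:
    print(exc)  # expected float64, not float32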
|
.venv\Lib\site-packages\contourpy\typecheck.py
|
typecheck.py
|
Python
| 10,950 | 0.95 | 0.339901 | 0.010811 |
node-utils
| 859 |
2024-03-16T03:57:07.691168
|
BSD-3-Clause
| false |
2d6cf917e2e721c9855914e72cf11399
|
from __future__ import annotations\n\nimport numpy as np\n\n# dtypes of arrays returned by ContourPy.\npoint_dtype = np.float64\ncode_dtype = np.uint8\noffset_dtype = np.uint32\n\n# Kind codes used in Matplotlib Paths.\nMOVETO = 1\nLINETO = 2\nCLOSEPOLY = 79\n
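The kind codes above match Matplotlib's Path vertex codes, so a (points, codes) pair in contourpy's format can be passed straight to a Path. A hedged sketch; the unit-square points are illustrative and the Matplotlib lines are commented out so the snippet runs without it installed.

import numpy as np
from contourpy.types import MOVETO, LINETO, CLOSEPOLY, point_dtype, code_dtype

# A unit square as one closed polygon in the (points, codes) representation.
points = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=point_dtype)
codes = np.array([MOVETO, LINETO, LINETO, CLOSEPOLY], dtype=code_dtype)

# Same values Matplotlib uses, so with Matplotlib installed this would work:
# from matplotlib.path import Path
# path = Path(points, codes)
assert codes[0] == MOVETO and codes.dtype == np.uint8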
|
.venv\Lib\site-packages\contourpy\types.py
|
types.py
|
Python
| 260 | 0.95 | 0 | 0.2 |
react-lib
| 925 |
2024-10-03T11:37:32.980838
|
BSD-3-Clause
| false |
6e88dddf9e5154385cf690ecd5324a93
|
!<arch>\n/ -1 0 182 `\n
|
.venv\Lib\site-packages\contourpy\_contourpy.cp313-win_amd64.lib
|
_contourpy.cp313-win_amd64.lib
|
Other
| 2,068 | 0.8 | 0 | 0 |
python-kit
| 744 |
2023-11-01T06:16:46.557252
|
MIT
| false |
5ee1c35f56b031c658f700170e9cf60f
|
from typing import ClassVar, NoReturn, TypeAlias\n\nimport numpy as np\nimport numpy.typing as npt\n\nimport contourpy._contourpy as cpy\n\n# Input numpy array types, the same as in common.h\nCoordinateArray: TypeAlias = npt.NDArray[np.float64]\nMaskArray: TypeAlias = npt.NDArray[np.bool_]\nLevelArray: TypeAlias = npt.ArrayLike\n\n# Output numpy array types, the same as in common.h\nPointArray: TypeAlias = npt.NDArray[np.float64]\nCodeArray: TypeAlias = npt.NDArray[np.uint8]\nOffsetArray: TypeAlias = npt.NDArray[np.uint32]\n\n# Types returned from filled()\nFillReturn_OuterCode: TypeAlias = tuple[list[PointArray], list[CodeArray]]\nFillReturn_OuterOffset: TypeAlias = tuple[list[PointArray], list[OffsetArray]]\nFillReturn_ChunkCombinedCode: TypeAlias = tuple[list[PointArray | None], list[CodeArray | None]]\nFillReturn_ChunkCombinedOffset: TypeAlias = tuple[list[PointArray | None], list[OffsetArray | None]]\nFillReturn_ChunkCombinedCodeOffset: TypeAlias = tuple[list[PointArray | None], list[CodeArray | None], list[OffsetArray | None]]\nFillReturn_ChunkCombinedOffsetOffset: TypeAlias = tuple[list[PointArray | None], list[OffsetArray | None], list[OffsetArray | None]]\nFillReturn_Chunk: TypeAlias = FillReturn_ChunkCombinedCode | FillReturn_ChunkCombinedOffset | FillReturn_ChunkCombinedCodeOffset | FillReturn_ChunkCombinedOffsetOffset\nFillReturn: TypeAlias = FillReturn_OuterCode | FillReturn_OuterOffset | FillReturn_Chunk\n\n# Types returned from lines()\nLineReturn_Separate: TypeAlias = list[PointArray]\nLineReturn_SeparateCode: TypeAlias = tuple[list[PointArray], list[CodeArray]]\nLineReturn_ChunkCombinedCode: TypeAlias = tuple[list[PointArray | None], list[CodeArray | None]]\nLineReturn_ChunkCombinedOffset: TypeAlias = tuple[list[PointArray | None], list[OffsetArray | None]]\nLineReturn_ChunkCombinedNan: TypeAlias = tuple[list[PointArray | None]]\nLineReturn_Chunk: TypeAlias = LineReturn_ChunkCombinedCode | LineReturn_ChunkCombinedOffset | LineReturn_ChunkCombinedNan\nLineReturn: TypeAlias = LineReturn_Separate | LineReturn_SeparateCode | LineReturn_Chunk\n\n\nNDEBUG: int\n__version__: str\n\nclass FillType:\n ChunkCombinedCode: ClassVar[cpy.FillType]\n ChunkCombinedCodeOffset: ClassVar[cpy.FillType]\n ChunkCombinedOffset: ClassVar[cpy.FillType]\n ChunkCombinedOffsetOffset: ClassVar[cpy.FillType]\n OuterCode: ClassVar[cpy.FillType]\n OuterOffset: ClassVar[cpy.FillType]\n __members__: ClassVar[dict[str, cpy.FillType]]\n def __eq__(self, other: object) -> bool: ...\n def __getstate__(self) -> int: ...\n def __hash__(self) -> int: ...\n def __index__(self) -> int: ...\n def __init__(self, value: int) -> None: ...\n def __int__(self) -> int: ...\n def __ne__(self, other: object) -> bool: ...\n def __setstate__(self, state: int) -> NoReturn: ...\n @property\n def name(self) -> str: ...\n @property\n def value(self) -> int: ...\n\nclass LineType:\n ChunkCombinedCode: ClassVar[cpy.LineType]\n ChunkCombinedNan: ClassVar[cpy.LineType]\n ChunkCombinedOffset: ClassVar[cpy.LineType]\n Separate: ClassVar[cpy.LineType]\n SeparateCode: ClassVar[cpy.LineType]\n __members__: ClassVar[dict[str, cpy.LineType]]\n def __eq__(self, other: object) -> bool: ...\n def __getstate__(self) -> int: ...\n def __hash__(self) -> int: ...\n def __index__(self) -> int: ...\n def __init__(self, value: int) -> None: ...\n def __int__(self) -> int: ...\n def __ne__(self, other: object) -> bool: ...\n def __setstate__(self, state: int) -> NoReturn: ...\n @property\n def name(self) -> str: ...\n @property\n def value(self) -> 
int: ...\n\nclass ZInterp:\n Linear: ClassVar[cpy.ZInterp]\n Log: ClassVar[cpy.ZInterp]\n __members__: ClassVar[dict[str, cpy.ZInterp]]\n def __eq__(self, other: object) -> bool: ...\n def __getstate__(self) -> int: ...\n def __hash__(self) -> int: ...\n def __index__(self) -> int: ...\n def __init__(self, value: int) -> None: ...\n def __int__(self) -> int: ...\n def __ne__(self, other: object) -> bool: ...\n def __setstate__(self, state: int) -> NoReturn: ...\n @property\n def name(self) -> str: ...\n @property\n def value(self) -> int: ...\n\ndef max_threads() -> int: ...\n\nclass ContourGenerator:\n def create_contour(self, level: float) -> LineReturn: ...\n def create_filled_contour(self, lower_level: float, upper_level: float) -> FillReturn: ...\n def filled(self, lower_level: float, upper_level: float) -> FillReturn: ...\n def lines(self, level: float) -> LineReturn: ...\n def multi_filled(self, levels: LevelArray) -> list[FillReturn]: ...\n def multi_lines(self, levels: LevelArray) -> list[LineReturn]: ...\n @staticmethod\n def supports_corner_mask() -> bool: ...\n @staticmethod\n def supports_fill_type(fill_type: FillType) -> bool: ...\n @staticmethod\n def supports_line_type(line_type: LineType) -> bool: ...\n @staticmethod\n def supports_quad_as_tri() -> bool: ...\n @staticmethod\n def supports_threads() -> bool: ...\n @staticmethod\n def supports_z_interp() -> bool: ...\n @property\n def chunk_count(self) -> tuple[int, int]: ...\n @property\n def chunk_size(self) -> tuple[int, int]: ...\n @property\n def corner_mask(self) -> bool: ...\n @property\n def fill_type(self) -> FillType: ...\n @property\n def line_type(self) -> LineType: ...\n @property\n def quad_as_tri(self) -> bool: ...\n @property\n def thread_count(self) -> int: ...\n @property\n def z_interp(self) -> ZInterp: ...\n default_fill_type: cpy.FillType\n default_line_type: cpy.LineType\n\nclass Mpl2005ContourGenerator(ContourGenerator):\n def __init__(\n self,\n x: CoordinateArray,\n y: CoordinateArray,\n z: CoordinateArray,\n mask: MaskArray,\n *,\n x_chunk_size: int = 0,\n y_chunk_size: int = 0,\n ) -> None: ...\n\nclass Mpl2014ContourGenerator(ContourGenerator):\n def __init__(\n self,\n x: CoordinateArray,\n y: CoordinateArray,\n z: CoordinateArray,\n mask: MaskArray,\n *,\n corner_mask: bool,\n x_chunk_size: int = 0,\n y_chunk_size: int = 0,\n ) -> None: ...\n\nclass SerialContourGenerator(ContourGenerator):\n def __init__(\n self,\n x: CoordinateArray,\n y: CoordinateArray,\n z: CoordinateArray,\n mask: MaskArray,\n *,\n corner_mask: bool,\n line_type: LineType,\n fill_type: FillType,\n quad_as_tri: bool,\n z_interp: ZInterp,\n x_chunk_size: int = 0,\n y_chunk_size: int = 0,\n ) -> None: ...\n def _write_cache(self) -> NoReturn: ...\n\nclass ThreadedContourGenerator(ContourGenerator):\n def __init__(\n self,\n x: CoordinateArray,\n y: CoordinateArray,\n z: CoordinateArray,\n mask: MaskArray,\n *,\n corner_mask: bool,\n line_type: LineType,\n fill_type: FillType,\n quad_as_tri: bool,\n z_interp: ZInterp,\n x_chunk_size: int = 0,\n y_chunk_size: int = 0,\n thread_count: int = 0,\n ) -> None: ...\n def _write_cache(self) -> None: ...\n
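The stubs above expose static supports_* queries on ContourGenerator subclasses, so capabilities can be inspected without building a grid. A sketch for the serial algorithm; the printed values depend on the installed build and are not asserted here.

from contourpy import FillType, LineType, SerialContourGenerator

print(SerialContourGenerator.supports_corner_mask())
print(SerialContourGenerator.supports_line_type(LineType.ChunkCombinedNan))
print(SerialContourGenerator.supports_fill_type(FillType.ChunkCombinedOffsetOffset))
print(SerialContourGenerator.supports_z_interp())
print(SerialContourGenerator.default_line_type, SerialContourGenerator.default_fill_type)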
|
.venv\Lib\site-packages\contourpy\_contourpy.pyi
|
_contourpy.pyi
|
Other
| 7,321 | 0.95 | 0.326633 | 0.043956 |
node-utils
| 440 |
2024-11-20T22:50:58.377879
|
Apache-2.0
| false |
6cc069763a1bc941adf429fa3b085363
|
__version__ = "1.3.2"\n
|
.venv\Lib\site-packages\contourpy\_version.py
|
_version.py
|
Python
| 23 | 0.5 | 0 | 0 |
node-utils
| 951 |
2025-01-02T13:13:04.947960
|
Apache-2.0
| false |
99c0fd59a954826e9422399f22c94d7f
|
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\n\nfrom contourpy._contourpy import (\n ContourGenerator,\n FillType,\n LineType,\n Mpl2005ContourGenerator,\n Mpl2014ContourGenerator,\n SerialContourGenerator,\n ThreadedContourGenerator,\n ZInterp,\n max_threads,\n)\nfrom contourpy._version import __version__\nfrom contourpy.chunk import calc_chunk_sizes\nfrom contourpy.convert import (\n convert_filled,\n convert_lines,\n convert_multi_filled,\n convert_multi_lines,\n)\nfrom contourpy.dechunk import (\n dechunk_filled,\n dechunk_lines,\n dechunk_multi_filled,\n dechunk_multi_lines,\n)\nfrom contourpy.enum_util import as_fill_type, as_line_type, as_z_interp\n\nif TYPE_CHECKING:\n from typing import Any\n\n from numpy.typing import ArrayLike\n\n from ._contourpy import CoordinateArray, MaskArray\n\n__all__ = [\n "__version__",\n "contour_generator",\n "convert_filled",\n "convert_lines",\n "convert_multi_filled",\n "convert_multi_lines",\n "dechunk_filled",\n "dechunk_lines",\n "dechunk_multi_filled",\n "dechunk_multi_lines",\n "max_threads",\n "FillType",\n "LineType",\n "ContourGenerator",\n "Mpl2005ContourGenerator",\n "Mpl2014ContourGenerator",\n "SerialContourGenerator",\n "ThreadedContourGenerator",\n "ZInterp",\n]\n\n\n# Simple mapping of algorithm name to class name.\n_class_lookup: dict[str, type[ContourGenerator]] = {\n "mpl2005": Mpl2005ContourGenerator,\n "mpl2014": Mpl2014ContourGenerator,\n "serial": SerialContourGenerator,\n "threaded": ThreadedContourGenerator,\n}\n\n\ndef _remove_z_mask(\n z: ArrayLike | np.ma.MaskedArray[Any, Any] | None,\n) -> tuple[CoordinateArray, MaskArray | None]:\n # Preserve mask if present.\n z_array = np.ma.asarray(z, dtype=np.float64) # type: ignore[no-untyped-call]\n z_masked = np.ma.masked_invalid(z_array, copy=False) # type: ignore[no-untyped-call]\n\n if np.ma.is_masked(z_masked): # type: ignore[no-untyped-call]\n mask = np.ma.getmask(z_masked) # type: ignore[no-untyped-call]\n else:\n mask = None\n\n return np.ma.getdata(z_masked), mask # type: ignore[no-untyped-call]\n\n\ndef contour_generator(\n x: ArrayLike | None = None,\n y: ArrayLike | None = None,\n z: ArrayLike | np.ma.MaskedArray[Any, Any] | None = None,\n *,\n name: str = "serial",\n corner_mask: bool | None = None,\n line_type: LineType | str | None = None,\n fill_type: FillType | str | None = None,\n chunk_size: int | tuple[int, int] | None = None,\n chunk_count: int | tuple[int, int] | None = None,\n total_chunk_count: int | None = None,\n quad_as_tri: bool = False,\n z_interp: ZInterp | str | None = ZInterp.Linear,\n thread_count: int = 0,\n) -> ContourGenerator:\n """Create and return a :class:`~.ContourGenerator` object.\n\n The class and properties of the returned :class:`~.ContourGenerator` are determined by the\n function arguments, with sensible defaults.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,), optional): The x-coordinates of the ``z`` values.\n May be 2D with the same shape as ``z.shape``, or 1D with length ``nx = z.shape[1]``.\n If not specified are assumed to be ``np.arange(nx)``. Must be ordered monotonically.\n y (array-like of shape (ny, nx) or (ny,), optional): The y-coordinates of the ``z`` values.\n May be 2D with the same shape as ``z.shape``, or 1D with length ``ny = z.shape[0]``.\n If not specified are assumed to be ``np.arange(ny)``. Must be ordered monotonically.\n z (array-like of shape (ny, nx), may be a masked array): The 2D gridded values to calculate\n the contours of. 
May be a masked array, and any invalid values (``np.inf`` or\n ``np.nan``) will also be masked out.\n name (str): Algorithm name, one of ``"serial"``, ``"threaded"``, ``"mpl2005"`` or\n ``"mpl2014"``, default ``"serial"``.\n corner_mask (bool, optional): Enable/disable corner masking, which only has an effect if\n ``z`` is a masked array. If ``False``, any quad touching a masked point is masked out.\n If ``True``, only the triangular corners of quads nearest these points are always masked\n out, other triangular corners comprising three unmasked points are contoured as usual.\n If not specified, uses the default provided by the algorithm ``name``.\n line_type (LineType or str, optional): The format of contour line data returned from calls\n to :meth:`~.ContourGenerator.lines`, specified either as a :class:`~.LineType` or its\n string equivalent such as ``"SeparateCode"``.\n If not specified, uses the default provided by the algorithm ``name``.\n The relationship between the :class:`~.LineType` enum and the data format returned from\n :meth:`~.ContourGenerator.lines` is explained at :ref:`line_type`.\n fill_type (FillType or str, optional): The format of filled contour data returned from calls\n to :meth:`~.ContourGenerator.filled`, specified either as a :class:`~.FillType` or its\n string equivalent such as ``"OuterOffset"``.\n If not specified, uses the default provided by the algorithm ``name``.\n The relationship between the :class:`~.FillType` enum and the data format returned from\n :meth:`~.ContourGenerator.filled` is explained at :ref:`fill_type`.\n chunk_size (int or tuple(int, int), optional): Chunk size in (y, x) directions, or the same\n size in both directions if only one value is specified.\n chunk_count (int or tuple(int, int), optional): Chunk count in (y, x) directions, or the\n same count in both directions if only one value is specified.\n total_chunk_count (int, optional): Total number of chunks.\n quad_as_tri (bool): Enable/disable treating quads as 4 triangles, default ``False``.\n If ``False``, a contour line within a quad is a straight line between points on two of\n its edges. If ``True``, each full quad is divided into 4 triangles using a virtual point\n at the centre (mean x, y of the corner points) and a contour line is piecewise linear\n within those triangles. Corner-masked triangles are not affected by this setting, only\n full unmasked quads.\n z_interp (ZInterp or str, optional): How to interpolate ``z`` values when determining where\n contour lines intersect the edges of quads and the ``z`` values of the central points of\n quads, specified either as a :class:`~contourpy.ZInterp` or its string equivalent such\n as ``"Log"``. Default is ``ZInterp.Linear``.\n thread_count (int): Number of threads to use for contour calculation, default 0. Threads can\n only be used with an algorithm ``name`` that supports threads (currently only\n ``name="threaded"``) and there must be at least the same number of chunks as threads.\n If ``thread_count=0`` and ``name="threaded"`` then it uses the maximum number of threads\n as determined by the C++11 call ``std::thread::hardware_concurrency()``. 
If ``name`` is\n something other than ``"threaded"`` then the ``thread_count`` will be set to ``1``.\n\n Return:\n :class:`~.ContourGenerator`.\n\n Note:\n A maximum of one of ``chunk_size``, ``chunk_count`` and ``total_chunk_count`` may be\n specified.\n\n Warning:\n The ``name="mpl2005"`` algorithm does not implement chunking for contour lines.\n """\n x = np.asarray(x, dtype=np.float64)\n y = np.asarray(y, dtype=np.float64)\n z, mask = _remove_z_mask(z)\n\n # Check arguments: z.\n if z.ndim != 2:\n raise TypeError(f"Input z must be 2D, not {z.ndim}D")\n\n if z.shape[0] < 2 or z.shape[1] < 2:\n raise TypeError(f"Input z must be at least a (2, 2) shaped array, but has shape {z.shape}")\n\n ny, nx = z.shape\n\n # Check arguments: x and y.\n if x.ndim != y.ndim:\n raise TypeError(f"Number of dimensions of x ({x.ndim}) and y ({y.ndim}) do not match")\n\n if x.ndim == 0:\n x = np.arange(nx, dtype=np.float64)\n y = np.arange(ny, dtype=np.float64)\n x, y = np.meshgrid(x, y)\n elif x.ndim == 1:\n if len(x) != nx:\n raise TypeError(f"Length of x ({len(x)}) must match number of columns in z ({nx})")\n if len(y) != ny:\n raise TypeError(f"Length of y ({len(y)}) must match number of rows in z ({ny})")\n x, y = np.meshgrid(x, y)\n elif x.ndim == 2:\n if x.shape != z.shape:\n raise TypeError(f"Shapes of x {x.shape} and z {z.shape} do not match")\n if y.shape != z.shape:\n raise TypeError(f"Shapes of y {y.shape} and z {z.shape} do not match")\n else:\n raise TypeError(f"Inputs x and y must be None, 1D or 2D, not {x.ndim}D")\n\n # Check mask shape just in case.\n if mask is not None and mask.shape != z.shape:\n raise ValueError("If mask is set it must be a 2D array with the same shape as z")\n\n # Check arguments: name.\n if name not in _class_lookup:\n raise ValueError(f"Unrecognised contour generator name: {name}")\n\n # Check arguments: chunk_size, chunk_count and total_chunk_count.\n y_chunk_size, x_chunk_size = calc_chunk_sizes(\n chunk_size, chunk_count, total_chunk_count, ny, nx)\n\n cls = _class_lookup[name]\n\n # Check arguments: corner_mask.\n if corner_mask is None:\n # Set it to default, which is True if the algorithm supports it.\n corner_mask = cls.supports_corner_mask()\n elif corner_mask and not cls.supports_corner_mask():\n raise ValueError(f"{name} contour generator does not support corner_mask=True")\n\n # Check arguments: line_type.\n if line_type is None:\n line_type = cls.default_line_type\n else:\n line_type = as_line_type(line_type)\n\n if not cls.supports_line_type(line_type):\n raise ValueError(f"{name} contour generator does not support line_type {line_type}")\n\n # Check arguments: fill_type.\n if fill_type is None:\n fill_type = cls.default_fill_type\n else:\n fill_type = as_fill_type(fill_type)\n\n if not cls.supports_fill_type(fill_type):\n raise ValueError(f"{name} contour generator does not support fill_type {fill_type}")\n\n # Check arguments: quad_as_tri.\n if quad_as_tri and not cls.supports_quad_as_tri():\n raise ValueError(f"{name} contour generator does not support quad_as_tri=True")\n\n # Check arguments: z_interp.\n if z_interp is None:\n z_interp = ZInterp.Linear\n else:\n z_interp = as_z_interp(z_interp)\n\n if z_interp != ZInterp.Linear and not cls.supports_z_interp():\n raise ValueError(f"{name} contour generator does not support z_interp {z_interp}")\n\n # Check arguments: thread_count.\n if thread_count not in (0, 1) and not cls.supports_threads():\n raise ValueError(f"{name} contour generator does not support thread_count {thread_count}")\n\n # Prepare 
args and kwargs for contour generator constructor.\n args = [x, y, z, mask]\n kwargs: dict[str, int | bool | LineType | FillType | ZInterp] = {\n "x_chunk_size": x_chunk_size,\n "y_chunk_size": y_chunk_size,\n }\n\n if name not in ("mpl2005", "mpl2014"):\n kwargs["line_type"] = line_type\n kwargs["fill_type"] = fill_type\n\n if cls.supports_corner_mask():\n kwargs["corner_mask"] = corner_mask\n\n if cls.supports_quad_as_tri():\n kwargs["quad_as_tri"] = quad_as_tri\n\n if cls.supports_z_interp():\n kwargs["z_interp"] = z_interp\n\n if cls.supports_threads():\n kwargs["thread_count"] = thread_count\n\n # Create contour generator.\n return cls(*args, **kwargs)\n
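An end-to-end sketch of the contour_generator factory documented above, using the default serial algorithm; the Gaussian grid and the level values are illustrative only.

import numpy as np
import contourpy

x = np.linspace(-2.0, 2.0, 50)
y = np.linspace(-2.0, 2.0, 40)
z = np.exp(-(x[np.newaxis, :] ** 2 + y[:, np.newaxis] ** 2))  # shape (ny, nx) = (40, 50)

gen = contourpy.contour_generator(x, y, z, name="serial")
print(gen.line_type, gen.fill_type)          # defaults provided by the serial algorithm

lines = gen.lines(0.5)                       # contour lines at z == 0.5
filled = gen.filled(0.25, 0.75)              # polygons where 0.25 <= z <= 0.75
multi = gen.multi_lines([0.25, 0.5, 0.75])   # one LineReturn per level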
|
.venv\Lib\site-packages\contourpy\__init__.py
|
__init__.py
|
Python
| 12,116 | 0.95 | 0.164912 | 0.070833 |
python-kit
| 767 |
2025-05-24T05:22:14.195920
|
BSD-3-Clause
| false |
4cc52508f40bbf83d18ef7f47b50a861
|
from __future__ import annotations\n\nimport io\nfrom typing import TYPE_CHECKING, Any\n\nfrom bokeh.io import export_png, export_svg, show\nfrom bokeh.io.export import get_screenshot_as_png\nfrom bokeh.layouts import gridplot\nfrom bokeh.models.annotations.labels import Label\nfrom bokeh.palettes import Category10\nfrom bokeh.plotting import figure\nimport numpy as np\n\nfrom contourpy.enum_util import as_fill_type, as_line_type\nfrom contourpy.util.bokeh_util import filled_to_bokeh, lines_to_bokeh\nfrom contourpy.util.renderer import Renderer\n\nif TYPE_CHECKING:\n from bokeh.core.enums import OutputBackendType\n from bokeh.models import GridPlot\n from bokeh.palettes import Palette\n from numpy.typing import ArrayLike\n from selenium.webdriver.remote.webdriver import WebDriver\n\n from contourpy import FillType, LineType\n from contourpy._contourpy import FillReturn, LineReturn\n\n\nclass BokehRenderer(Renderer):\n """Utility renderer using Bokeh to render a grid of plots over the same (x, y) range.\n\n Args:\n nrows (int, optional): Number of rows of plots, default ``1``.\n ncols (int, optional): Number of columns of plots, default ``1``.\n figsize (tuple(float, float), optional): Figure size in inches (assuming 100 dpi), default\n ``(9, 9)``.\n show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``.\n want_svg (bool, optional): Whether output is required in SVG format or not, default\n ``False``.\n\n Warning:\n :class:`~.BokehRenderer`, unlike :class:`~.MplRenderer`, needs to be told in advance if\n output to SVG format will be required later, otherwise it will assume PNG output.\n """\n _figures: list[figure]\n _layout: GridPlot\n _palette: Palette\n _want_svg: bool\n\n def __init__(\n self,\n nrows: int = 1,\n ncols: int = 1,\n figsize: tuple[float, float] = (9, 9),\n show_frame: bool = True,\n want_svg: bool = False,\n ) -> None:\n self._want_svg = want_svg\n self._palette = Category10[10]\n\n total_size = 100*np.asarray(figsize, dtype=int) # Assuming 100 dpi.\n\n nfigures = nrows*ncols\n self._figures = []\n backend: OutputBackendType = "svg" if self._want_svg else "canvas"\n for _ in range(nfigures):\n fig = figure(output_backend=backend)\n fig.xgrid.visible = False # type: ignore[attr-defined]\n fig.ygrid.visible = False # type: ignore[attr-defined]\n self._figures.append(fig)\n if not show_frame:\n fig.outline_line_color = None\n fig.axis.visible = False # type: ignore[attr-defined]\n\n self._layout = gridplot(\n self._figures, ncols=ncols, toolbar_location=None, # type: ignore[arg-type]\n width=total_size[0] // ncols, height=total_size[1] // nrows)\n\n def _convert_color(self, color: str) -> str:\n if isinstance(color, str) and color[0] == "C":\n index = int(color[1:])\n color = self._palette[index]\n return color\n\n def _get_figure(self, ax: figure | int) -> figure:\n if isinstance(ax, int):\n ax = self._figures[ax]\n return ax\n\n def filled(\n self,\n filled: FillReturn,\n fill_type: FillType | str,\n ax: figure | int = 0,\n color: str = "C0",\n alpha: float = 0.7,\n ) -> None:\n """Plot filled contours on a single plot.\n\n Args:\n filled (sequence of arrays): Filled contour data as returned by\n :meth:`~.ContourGenerator.filled`.\n fill_type (FillType or str): Type of :meth:`~.ContourGenerator.filled` data as returned\n by :attr:`~.ContourGenerator.fill_type`, or a string equivalent.\n ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.\n color (str, optional): Color to plot with. 
May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``Category10`` palette. Default ``"C0"``.\n alpha (float, optional): Opacity to plot with, default ``0.7``.\n """\n fill_type = as_fill_type(fill_type)\n fig = self._get_figure(ax)\n color = self._convert_color(color)\n xs, ys = filled_to_bokeh(filled, fill_type)\n if len(xs) > 0:\n fig.multi_polygons(xs=[xs], ys=[ys], color=color, fill_alpha=alpha, line_width=0) # type: ignore[arg-type]\n\n def grid(\n self,\n x: ArrayLike,\n y: ArrayLike,\n ax: figure | int = 0,\n color: str = "black",\n alpha: float = 0.1,\n point_color: str | None = None,\n quad_as_tri_alpha: float = 0,\n ) -> None:\n """Plot quad grid lines on a single plot.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.\n y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.\n ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.\n color (str, optional): Color to plot grid lines, default ``"black"``.\n alpha (float, optional): Opacity to plot lines with, default ``0.1``.\n point_color (str, optional): Color to plot grid points or ``None`` if grid points\n should not be plotted, default ``None``.\n quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default\n ``0``.\n\n Colors may be a string color or the letter ``"C"`` followed by an integer in the range\n ``"C0"`` to ``"C9"`` to use a color from the ``Category10`` palette.\n\n Warning:\n ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked.\n """\n fig = self._get_figure(ax)\n x, y = self._grid_as_2d(x, y)\n xs = list(x) + list(x.T)\n ys = list(y) + list(y.T)\n kwargs = {"line_color": color, "alpha": alpha}\n fig.multi_line(xs, ys, **kwargs)\n if quad_as_tri_alpha > 0:\n # Assumes no quad mask.\n xmid = (0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])).ravel()\n ymid = (0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])).ravel()\n fig.multi_line(\n list(np.stack((x[:-1, :-1].ravel(), xmid, x[1:, 1:].ravel()), axis=1)),\n list(np.stack((y[:-1, :-1].ravel(), ymid, y[1:, 1:].ravel()), axis=1)),\n **kwargs)\n fig.multi_line(\n list(np.stack((x[:-1, 1:].ravel(), xmid, x[1:, :-1].ravel()), axis=1)),\n list(np.stack((y[:-1, 1:].ravel(), ymid, y[1:, :-1].ravel()), axis=1)),\n **kwargs)\n if point_color is not None:\n fig.scatter(\n x=x.ravel(), y=y.ravel(), fill_color=color, line_color=None, alpha=alpha,\n marker="circle", size=8)\n\n def lines(\n self,\n lines: LineReturn,\n line_type: LineType | str,\n ax: figure | int = 0,\n color: str = "C0",\n alpha: float = 1.0,\n linewidth: float = 1,\n ) -> None:\n """Plot contour lines on a single plot.\n\n Args:\n lines (sequence of arrays): Contour line data as returned by\n :meth:`~.ContourGenerator.lines`.\n line_type (LineType or str): Type of :meth:`~.ContourGenerator.lines` data as returned\n by :attr:`~.ContourGenerator.line_type`, or a string equivalent.\n ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.\n color (str, optional): Color to plot lines. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``Category10`` palette. 
Default ``"C0"``.\n alpha (float, optional): Opacity to plot lines with, default ``1.0``.\n linewidth (float, optional): Width of lines, default ``1``.\n\n Note:\n Assumes all lines are open line strips not closed line loops.\n """\n line_type = as_line_type(line_type)\n fig = self._get_figure(ax)\n color = self._convert_color(color)\n xs, ys = lines_to_bokeh(lines, line_type)\n if xs is not None:\n assert ys is not None\n fig.line(xs, ys, line_color=color, line_alpha=alpha, line_width=linewidth)\n\n def mask(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike | np.ma.MaskedArray[Any, Any],\n ax: figure | int = 0,\n color: str = "black",\n ) -> None:\n """Plot masked out grid points as circles on a single plot.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.\n y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.\n z (masked array of shape (ny, nx): z-values.\n ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.\n color (str, optional): Circle color, default ``"black"``.\n """\n mask = np.ma.getmask(z) # type: ignore[no-untyped-call]\n if mask is np.ma.nomask:\n return\n fig = self._get_figure(ax)\n color = self._convert_color(color)\n x, y = self._grid_as_2d(x, y)\n fig.scatter(x[mask], y[mask], fill_color=color, marker="circle", size=10)\n\n def save(\n self,\n filename: str,\n transparent: bool = False,\n *,\n webdriver: WebDriver | None = None,\n ) -> None:\n """Save plots to SVG or PNG file.\n\n Args:\n filename (str): Filename to save to.\n transparent (bool, optional): Whether background should be transparent, default\n ``False``.\n webdriver (WebDriver, optional): Selenium WebDriver instance to use to create the image.\n\n .. versionadded:: 1.1.1\n\n Warning:\n To output to SVG file, ``want_svg=True`` must have been passed to the constructor.\n """\n if transparent:\n for fig in self._figures:\n fig.background_fill_color = None\n fig.border_fill_color = None\n\n if self._want_svg:\n export_svg(self._layout, filename=filename, webdriver=webdriver)\n else:\n export_png(self._layout, filename=filename, webdriver=webdriver)\n\n def save_to_buffer(self, *, webdriver: WebDriver | None = None) -> io.BytesIO:\n """Save plots to an ``io.BytesIO`` buffer.\n\n Args:\n webdriver (WebDriver, optional): Selenium WebDriver instance to use to create the image.\n\n .. versionadded:: 1.1.1\n\n Return:\n BytesIO: PNG image buffer.\n """\n image = get_screenshot_as_png(self._layout, driver=webdriver)\n buffer = io.BytesIO()\n image.save(buffer, "png")\n return buffer\n\n def show(self) -> None:\n """Show plots in web browser, in usual Bokeh manner.\n """\n show(self._layout)\n\n def title(self, title: str, ax: figure | int = 0, color: str | None = None) -> None:\n """Set the title of a single plot.\n\n Args:\n title (str): Title text.\n ax (int or Bokeh Figure, optional): Which plot to set the title of, default ``0``.\n color (str, optional): Color to set title. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``Category10`` palette. 
Default ``None`` which is ``black``.\n """\n fig = self._get_figure(ax)\n fig.title = title\n fig.title.align = "center" # type: ignore[attr-defined]\n if color is not None:\n fig.title.text_color = self._convert_color(color) # type: ignore[attr-defined]\n\n def z_values(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike,\n ax: figure | int = 0,\n color: str = "green",\n fmt: str = ".1f",\n quad_as_tri: bool = False,\n ) -> None:\n """Show ``z`` values on a single plot.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.\n y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.\n z (array-like of shape (ny, nx): z-values.\n ax (int or Bokeh Figure, optional): Which plot to use, default ``0``.\n color (str, optional): Color of added text. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``Category10`` palette. Default ``"green"``.\n fmt (str, optional): Format to display z-values, default ``".1f"``.\n quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centres\n of quads.\n\n Warning:\n ``quad_as_tri=True`` shows z-values for all quads, even if masked.\n """\n fig = self._get_figure(ax)\n color = self._convert_color(color)\n x, y = self._grid_as_2d(x, y)\n z = np.asarray(z)\n ny, nx = z.shape\n kwargs = {"text_color": color, "text_align": "center", "text_baseline": "middle"}\n for j in range(ny):\n for i in range(nx):\n label = Label(x=x[j, i], y=y[j, i], text=f"{z[j, i]:{fmt}}", **kwargs) # type: ignore[arg-type]\n fig.add_layout(label)\n if quad_as_tri:\n for j in range(ny-1):\n for i in range(nx-1):\n xx = np.mean(x[j:j+2, i:i+2])\n yy = np.mean(y[j:j+2, i:i+2])\n zz = np.mean(z[j:j+2, i:i+2])\n fig.add_layout(Label(x=xx, y=yy, text=f"{zz:{fmt}}", **kwargs)) # type: ignore[arg-type]\n
|
.venv\Lib\site-packages\contourpy\util\bokeh_renderer.py
|
bokeh_renderer.py
|
Python
| 14,298 | 0.95 | 0.115044 | 0.013423 |
node-utils
| 95 |
2025-01-16T03:38:56.436143
|
Apache-2.0
| false |
3428255bb19d0af4fc20f0f354c405f2
|
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, cast\n\nfrom contourpy import FillType, LineType\nfrom contourpy.array import offsets_from_codes\nfrom contourpy.convert import convert_lines\nfrom contourpy.dechunk import dechunk_lines\n\nif TYPE_CHECKING:\n from contourpy._contourpy import (\n CoordinateArray,\n FillReturn,\n LineReturn,\n LineReturn_ChunkCombinedNan,\n )\n\n\ndef filled_to_bokeh(\n filled: FillReturn,\n fill_type: FillType,\n) -> tuple[list[list[CoordinateArray]], list[list[CoordinateArray]]]:\n xs: list[list[CoordinateArray]] = []\n ys: list[list[CoordinateArray]] = []\n if fill_type in (FillType.OuterOffset, FillType.ChunkCombinedOffset,\n FillType.OuterCode, FillType.ChunkCombinedCode):\n have_codes = fill_type in (FillType.OuterCode, FillType.ChunkCombinedCode)\n\n for points, offsets in zip(*filled):\n if points is None:\n continue\n if have_codes:\n offsets = offsets_from_codes(offsets)\n xs.append([]) # New outer with zero or more holes.\n ys.append([])\n for i in range(len(offsets)-1):\n xys = points[offsets[i]:offsets[i+1]]\n xs[-1].append(xys[:, 0])\n ys[-1].append(xys[:, 1])\n elif fill_type in (FillType.ChunkCombinedCodeOffset, FillType.ChunkCombinedOffsetOffset):\n for points, codes_or_offsets, outer_offsets in zip(*filled):\n if points is None:\n continue\n for j in range(len(outer_offsets)-1):\n if fill_type == FillType.ChunkCombinedCodeOffset:\n codes = codes_or_offsets[outer_offsets[j]:outer_offsets[j+1]]\n offsets = offsets_from_codes(codes) + outer_offsets[j]\n else:\n offsets = codes_or_offsets[outer_offsets[j]:outer_offsets[j+1]+1]\n xs.append([]) # New outer with zero or more holes.\n ys.append([])\n for k in range(len(offsets)-1):\n xys = points[offsets[k]:offsets[k+1]]\n xs[-1].append(xys[:, 0])\n ys[-1].append(xys[:, 1])\n else:\n raise RuntimeError(f"Conversion of FillType {fill_type} to Bokeh is not implemented")\n\n return xs, ys\n\n\ndef lines_to_bokeh(\n lines: LineReturn,\n line_type: LineType,\n) -> tuple[CoordinateArray | None, CoordinateArray | None]:\n lines = convert_lines(lines, line_type, LineType.ChunkCombinedNan)\n lines = dechunk_lines(lines, LineType.ChunkCombinedNan)\n if TYPE_CHECKING:\n lines = cast(LineReturn_ChunkCombinedNan, lines)\n points = lines[0][0]\n if points is None:\n return None, None\n else:\n return points[:, 0], points[:, 1]\n
|
.venv\Lib\site-packages\contourpy\util\bokeh_util.py
|
bokeh_util.py
|
Python
| 2,878 | 0.95 | 0.202703 | 0 |
vue-tools
| 61 |
2025-02-13T15:19:56.115613
|
Apache-2.0
| false |
07499f2a15a42827c140cbcc071f609a
|
from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any\n\nimport numpy as np\n\nif TYPE_CHECKING:\n from contourpy._contourpy import CoordinateArray\n\n\ndef simple(\n shape: tuple[int, int], want_mask: bool = False,\n) -> tuple[CoordinateArray, CoordinateArray, CoordinateArray | np.ma.MaskedArray[Any, Any]]:\n """Return simple test data consisting of the sum of two gaussians.\n\n Args:\n shape (tuple(int, int)): 2D shape of data to return.\n want_mask (bool, optional): Whether test data should be masked or not, default ``False``.\n\n Return:\n Tuple of 3 arrays: ``x``, ``y``, ``z`` test data, ``z`` will be masked if\n ``want_mask=True``.\n """\n ny, nx = shape\n x = np.arange(nx, dtype=np.float64)\n y = np.arange(ny, dtype=np.float64)\n x, y = np.meshgrid(x, y)\n\n xscale = nx - 1.0\n yscale = ny - 1.0\n\n # z is sum of 2D gaussians.\n amp = np.asarray([1.0, -1.0, 0.8, -0.9, 0.7])\n mid = np.asarray([[0.4, 0.2], [0.3, 0.8], [0.9, 0.75], [0.7, 0.3], [0.05, 0.7]])\n width = np.asarray([0.4, 0.2, 0.2, 0.2, 0.1])\n\n z = np.zeros_like(x)\n for i in range(len(amp)):\n z += amp[i]*np.exp(-((x/xscale - mid[i, 0])**2 + (y/yscale - mid[i, 1])**2) / width[i]**2)\n\n if want_mask:\n mask = np.logical_or(\n ((x/xscale - 1.0)**2 / 0.2 + (y/yscale - 0.0)**2 / 0.1) < 1.0,\n ((x/xscale - 0.2)**2 / 0.02 + (y/yscale - 0.45)**2 / 0.08) < 1.0,\n )\n z = np.ma.array(z, mask=mask) # type: ignore[no-untyped-call]\n\n return x, y, z\n\n\ndef random(\n shape: tuple[int, int], seed: int = 2187, mask_fraction: float = 0.0,\n) -> tuple[CoordinateArray, CoordinateArray, CoordinateArray | np.ma.MaskedArray[Any, Any]]:\n """Return random test data in the range 0 to 1.\n\n Args:\n shape (tuple(int, int)): 2D shape of data to return.\n seed (int, optional): Seed for random number generator, default 2187.\n mask_fraction (float, optional): Fraction of elements to mask, default 0.\n\n Return:\n Tuple of 3 arrays: ``x``, ``y``, ``z`` test data, ``z`` will be masked if\n ``mask_fraction`` is greater than zero.\n """\n ny, nx = shape\n x = np.arange(nx, dtype=np.float64)\n y = np.arange(ny, dtype=np.float64)\n x, y = np.meshgrid(x, y)\n\n rng = np.random.default_rng(seed)\n z = rng.uniform(size=shape)\n\n if mask_fraction > 0.0:\n mask_fraction = min(mask_fraction, 0.99)\n mask = rng.uniform(size=shape) < mask_fraction\n z = np.ma.array(z, mask=mask) # type: ignore[no-untyped-call]\n\n return x, y, z\n
|
.venv\Lib\site-packages\contourpy\util\data.py
|
data.py
|
Python
| 2,664 | 0.95 | 0.115385 | 0.016949 |
node-utils
| 0 |
2024-01-18T13:40:32.097925
|
GPL-3.0
| false |
a7d54f52b0ddfdfb49164deef4ec4907
|
from __future__ import annotations\n\nimport io\nfrom itertools import pairwise\nfrom typing import TYPE_CHECKING, Any, cast\n\nimport matplotlib.collections as mcollections\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom contourpy import FillType, LineType\nfrom contourpy.convert import convert_filled, convert_lines\nfrom contourpy.enum_util import as_fill_type, as_line_type\nfrom contourpy.util.mpl_util import filled_to_mpl_paths, lines_to_mpl_paths\nfrom contourpy.util.renderer import Renderer\n\nif TYPE_CHECKING:\n from collections.abc import Sequence\n\n from matplotlib.axes import Axes\n from matplotlib.figure import Figure\n from numpy.typing import ArrayLike\n\n import contourpy._contourpy as cpy\n\n\nclass MplRenderer(Renderer):\n """Utility renderer using Matplotlib to render a grid of plots over the same (x, y) range.\n\n Args:\n nrows (int, optional): Number of rows of plots, default ``1``.\n ncols (int, optional): Number of columns of plots, default ``1``.\n figsize (tuple(float, float), optional): Figure size in inches, default ``(9, 9)``.\n show_frame (bool, optional): Whether to show frame and axes ticks, default ``True``.\n backend (str, optional): Matplotlib backend to use or ``None`` for default backend.\n Default ``None``.\n gridspec_kw (dict, optional): Gridspec keyword arguments to pass to ``plt.subplots``,\n default None.\n """\n _axes: Sequence[Axes]\n _fig: Figure\n _want_tight: bool\n\n def __init__(\n self,\n nrows: int = 1,\n ncols: int = 1,\n figsize: tuple[float, float] = (9, 9),\n show_frame: bool = True,\n backend: str | None = None,\n gridspec_kw: dict[str, Any] | None = None,\n ) -> None:\n if backend is not None:\n import matplotlib as mpl\n mpl.use(backend)\n\n kwargs: dict[str, Any] = {"figsize": figsize, "squeeze": False,\n "sharex": True, "sharey": True}\n if gridspec_kw is not None:\n kwargs["gridspec_kw"] = gridspec_kw\n else:\n kwargs["subplot_kw"] = {"aspect": "equal"}\n\n self._fig, axes = plt.subplots(nrows, ncols, **kwargs)\n self._axes = axes.flatten()\n if not show_frame:\n for ax in self._axes:\n ax.axis("off")\n\n self._want_tight = True\n\n def __del__(self) -> None:\n if hasattr(self, "_fig"):\n plt.close(self._fig)\n\n def _autoscale(self) -> None:\n # Using axes._need_autoscale attribute if need to autoscale before rendering after adding\n # lines/filled. Only want to autoscale once per axes regardless of how many lines/filled\n # added.\n for ax in self._axes:\n if getattr(ax, "_need_autoscale", False):\n ax.autoscale_view(tight=True)\n ax._need_autoscale = False # type: ignore[attr-defined]\n if self._want_tight and len(self._axes) > 1:\n self._fig.tight_layout()\n\n def _get_ax(self, ax: Axes | int) -> Axes:\n if isinstance(ax, int):\n ax = self._axes[ax]\n return ax\n\n def filled(\n self,\n filled: cpy.FillReturn,\n fill_type: FillType | str,\n ax: Axes | int = 0,\n color: str = "C0",\n alpha: float = 0.7,\n ) -> None:\n """Plot filled contours on a single Axes.\n\n Args:\n filled (sequence of arrays): Filled contour data as returned by\n :meth:`~.ContourGenerator.filled`.\n fill_type (FillType or str): Type of :meth:`~.ContourGenerator.filled` data as returned\n by :attr:`~.ContourGenerator.fill_type`, or string equivalent\n ax (int or Maplotlib Axes, optional): Which axes to plot on, default ``0``.\n color (str, optional): Color to plot with. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``tab10`` colormap. 
Default ``"C0"``.\n alpha (float, optional): Opacity to plot with, default ``0.7``.\n """\n fill_type = as_fill_type(fill_type)\n ax = self._get_ax(ax)\n paths = filled_to_mpl_paths(filled, fill_type)\n collection = mcollections.PathCollection(\n paths, facecolors=color, edgecolors="none", lw=0, alpha=alpha)\n ax.add_collection(collection)\n ax._need_autoscale = True # type: ignore[attr-defined]\n\n def grid(\n self,\n x: ArrayLike,\n y: ArrayLike,\n ax: Axes | int = 0,\n color: str = "black",\n alpha: float = 0.1,\n point_color: str | None = None,\n quad_as_tri_alpha: float = 0,\n ) -> None:\n """Plot quad grid lines on a single Axes.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.\n y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.\n ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.\n color (str, optional): Color to plot grid lines, default ``"black"``.\n alpha (float, optional): Opacity to plot lines with, default ``0.1``.\n point_color (str, optional): Color to plot grid points or ``None`` if grid points\n should not be plotted, default ``None``.\n quad_as_tri_alpha (float, optional): Opacity to plot ``quad_as_tri`` grid, default 0.\n\n Colors may be a string color or the letter ``"C"`` followed by an integer in the range\n ``"C0"`` to ``"C9"`` to use a color from the ``tab10`` colormap.\n\n Warning:\n ``quad_as_tri_alpha > 0`` plots all quads as though they are unmasked.\n """\n ax = self._get_ax(ax)\n x, y = self._grid_as_2d(x, y)\n kwargs: dict[str, Any] = {"color": color, "alpha": alpha}\n ax.plot(x, y, x.T, y.T, **kwargs)\n if quad_as_tri_alpha > 0:\n # Assumes no quad mask.\n xmid = 0.25*(x[:-1, :-1] + x[1:, :-1] + x[:-1, 1:] + x[1:, 1:])\n ymid = 0.25*(y[:-1, :-1] + y[1:, :-1] + y[:-1, 1:] + y[1:, 1:])\n kwargs["alpha"] = quad_as_tri_alpha\n ax.plot(\n np.stack((x[:-1, :-1], xmid, x[1:, 1:])).reshape((3, -1)),\n np.stack((y[:-1, :-1], ymid, y[1:, 1:])).reshape((3, -1)),\n np.stack((x[1:, :-1], xmid, x[:-1, 1:])).reshape((3, -1)),\n np.stack((y[1:, :-1], ymid, y[:-1, 1:])).reshape((3, -1)),\n **kwargs)\n if point_color is not None:\n ax.plot(x, y, color=point_color, alpha=alpha, marker="o", lw=0)\n ax._need_autoscale = True # type: ignore[attr-defined]\n\n def lines(\n self,\n lines: cpy.LineReturn,\n line_type: LineType | str,\n ax: Axes | int = 0,\n color: str = "C0",\n alpha: float = 1.0,\n linewidth: float = 1,\n ) -> None:\n """Plot contour lines on a single Axes.\n\n Args:\n lines (sequence of arrays): Contour line data as returned by\n :meth:`~.ContourGenerator.lines`.\n line_type (LineType or str): Type of :meth:`~.ContourGenerator.lines` data as returned\n by :attr:`~.ContourGenerator.line_type`, or string equivalent.\n ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.\n color (str, optional): Color to plot lines. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``tab10`` colormap. 
Default ``"C0"``.\n alpha (float, optional): Opacity to plot lines with, default ``1.0``.\n linewidth (float, optional): Width of lines, default ``1``.\n """\n line_type = as_line_type(line_type)\n ax = self._get_ax(ax)\n paths = lines_to_mpl_paths(lines, line_type)\n collection = mcollections.PathCollection(\n paths, facecolors="none", edgecolors=color, lw=linewidth, alpha=alpha)\n ax.add_collection(collection)\n ax._need_autoscale = True # type: ignore[attr-defined]\n\n def mask(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike | np.ma.MaskedArray[Any, Any],\n ax: Axes | int = 0,\n color: str = "black",\n ) -> None:\n """Plot masked out grid points as circles on a single Axes.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.\n y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.\n z (masked array of shape (ny, nx): z-values.\n ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.\n color (str, optional): Circle color, default ``"black"``.\n """\n mask = np.ma.getmask(z) # type: ignore[no-untyped-call]\n if mask is np.ma.nomask:\n return\n ax = self._get_ax(ax)\n x, y = self._grid_as_2d(x, y)\n ax.plot(x[mask], y[mask], "o", c=color)\n\n def save(self, filename: str, transparent: bool = False) -> None:\n """Save plots to SVG or PNG file.\n\n Args:\n filename (str): Filename to save to.\n transparent (bool, optional): Whether background should be transparent, default\n ``False``.\n """\n self._autoscale()\n self._fig.savefig(filename, transparent=transparent)\n\n def save_to_buffer(self) -> io.BytesIO:\n """Save plots to an ``io.BytesIO`` buffer.\n\n Return:\n BytesIO: PNG image buffer.\n """\n self._autoscale()\n buf = io.BytesIO()\n self._fig.savefig(buf, format="png")\n buf.seek(0)\n return buf\n\n def show(self) -> None:\n """Show plots in an interactive window, in the usual Matplotlib manner.\n """\n self._autoscale()\n plt.show()\n\n def title(self, title: str, ax: Axes | int = 0, color: str | None = None) -> None:\n """Set the title of a single Axes.\n\n Args:\n title (str): Title text.\n ax (int or Matplotlib Axes, optional): Which Axes to set the title of, default ``0``.\n color (str, optional): Color to set title. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``tab10`` colormap. Default is ``None`` which uses Matplotlib's default title color\n that depends on the stylesheet in use.\n """\n if color:\n self._get_ax(ax).set_title(title, color=color)\n else:\n self._get_ax(ax).set_title(title)\n\n def z_values(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike,\n ax: Axes | int = 0,\n color: str = "green",\n fmt: str = ".1f",\n quad_as_tri: bool = False,\n ) -> None:\n """Show ``z`` values on a single Axes.\n\n Args:\n x (array-like of shape (ny, nx) or (nx,)): The x-coordinates of the grid points.\n y (array-like of shape (ny, nx) or (ny,)): The y-coordinates of the grid points.\n z (array-like of shape (ny, nx): z-values.\n ax (int or Matplotlib Axes, optional): Which Axes to plot on, default ``0``.\n color (str, optional): Color of added text. May be a string color or the letter ``"C"``\n followed by an integer in the range ``"C0"`` to ``"C9"`` to use a color from the\n ``tab10`` colormap. 
Default ``"green"``.\n fmt (str, optional): Format to display z-values, default ``".1f"``.\n quad_as_tri (bool, optional): Whether to show z-values at the ``quad_as_tri`` centers\n of quads.\n\n Warning:\n ``quad_as_tri=True`` shows z-values for all quads, even if masked.\n """\n ax = self._get_ax(ax)\n x, y = self._grid_as_2d(x, y)\n z = np.asarray(z)\n ny, nx = z.shape\n for j in range(ny):\n for i in range(nx):\n ax.text(x[j, i], y[j, i], f"{z[j, i]:{fmt}}", ha="center", va="center",\n color=color, clip_on=True)\n if quad_as_tri:\n for j in range(ny-1):\n for i in range(nx-1):\n xx = np.mean(x[j:j+2, i:i+2], dtype=np.float64)\n yy = np.mean(y[j:j+2, i:i+2], dtype=np.float64)\n zz = np.mean(z[j:j+2, i:i+2])\n ax.text(xx, yy, f"{zz:{fmt}}", ha="center", va="center", color=color,\n clip_on=True)\n\n\nclass MplTestRenderer(MplRenderer):\n """Test renderer implemented using Matplotlib.\n\n No whitespace around plots and no spines/ticks displayed.\n Uses Agg backend, so can only save to file/buffer, cannot call ``show()``.\n """\n def __init__(\n self,\n nrows: int = 1,\n ncols: int = 1,\n figsize: tuple[float, float] = (9, 9),\n ) -> None:\n gridspec = {\n "left": 0.01,\n "right": 0.99,\n "top": 0.99,\n "bottom": 0.01,\n "wspace": 0.01,\n "hspace": 0.01,\n }\n super().__init__(\n nrows, ncols, figsize, show_frame=True, backend="Agg", gridspec_kw=gridspec,\n )\n\n for ax in self._axes:\n ax.set_xmargin(0.0)\n ax.set_ymargin(0.0)\n ax.set_xticks([])\n ax.set_yticks([])\n\n self._want_tight = False\n\n\nclass MplDebugRenderer(MplRenderer):\n """Debug renderer implemented using Matplotlib.\n\n Extends ``MplRenderer`` to add extra information to help in debugging such as markers, arrows,\n text, etc.\n """\n def __init__(\n self,\n nrows: int = 1,\n ncols: int = 1,\n figsize: tuple[float, float] = (9, 9),\n show_frame: bool = True,\n ) -> None:\n super().__init__(nrows, ncols, figsize, show_frame)\n\n def _arrow(\n self,\n ax: Axes,\n line_start: cpy.CoordinateArray,\n line_end: cpy.CoordinateArray,\n color: str,\n alpha: float,\n arrow_size: float,\n ) -> None:\n mid = 0.5*(line_start + line_end)\n along = line_end - line_start\n along /= np.sqrt(np.dot(along, along)) # Unit vector.\n right = np.asarray((along[1], -along[0]))\n arrow = np.stack((\n mid - (along*0.5 - right)*arrow_size,\n mid + along*0.5*arrow_size,\n mid - (along*0.5 + right)*arrow_size,\n ))\n ax.plot(arrow[:, 0], arrow[:, 1], "-", c=color, alpha=alpha)\n\n def filled(\n self,\n filled: cpy.FillReturn,\n fill_type: FillType | str,\n ax: Axes | int = 0,\n color: str = "C1",\n alpha: float = 0.7,\n line_color: str = "C0",\n line_alpha: float = 0.7,\n point_color: str = "C0",\n start_point_color: str = "red",\n arrow_size: float = 0.1,\n ) -> None:\n fill_type = as_fill_type(fill_type)\n super().filled(filled, fill_type, ax, color, alpha)\n\n if line_color is None and point_color is None:\n return\n\n ax = self._get_ax(ax)\n filled = convert_filled(filled, fill_type, FillType.ChunkCombinedOffset)\n\n # Lines.\n if line_color is not None:\n for points, offsets in zip(*filled):\n if points is None:\n continue\n for start, end in pairwise(offsets):\n xys = points[start:end]\n ax.plot(xys[:, 0], xys[:, 1], c=line_color, alpha=line_alpha)\n\n if arrow_size > 0.0:\n n = len(xys)\n for i in range(n-1):\n self._arrow(ax, xys[i], xys[i+1], line_color, line_alpha, arrow_size)\n\n # Points.\n if point_color is not None:\n for points, offsets in zip(*filled):\n if points is None:\n continue\n mask = np.ones(offsets[-1], dtype=bool)\n 
mask[offsets[1:]-1] = False # Exclude end points.\n if start_point_color is not None:\n start_indices = offsets[:-1]\n mask[start_indices] = False # Exclude start points.\n ax.plot(\n points[:, 0][mask], points[:, 1][mask], "o", c=point_color, alpha=line_alpha)\n\n if start_point_color is not None:\n ax.plot(points[:, 0][start_indices], points[:, 1][start_indices], "o",\n c=start_point_color, alpha=line_alpha)\n\n def lines(\n self,\n lines: cpy.LineReturn,\n line_type: LineType | str,\n ax: Axes | int = 0,\n color: str = "C0",\n alpha: float = 1.0,\n linewidth: float = 1,\n point_color: str = "C0",\n start_point_color: str = "red",\n arrow_size: float = 0.1,\n ) -> None:\n line_type = as_line_type(line_type)\n super().lines(lines, line_type, ax, color, alpha, linewidth)\n\n if arrow_size == 0.0 and point_color is None:\n return\n\n ax = self._get_ax(ax)\n separate_lines = convert_lines(lines, line_type, LineType.Separate)\n if TYPE_CHECKING:\n separate_lines = cast(cpy.LineReturn_Separate, separate_lines)\n\n if arrow_size > 0.0:\n for line in separate_lines:\n for i in range(len(line)-1):\n self._arrow(ax, line[i], line[i+1], color, alpha, arrow_size)\n\n if point_color is not None:\n for line in separate_lines:\n start_index = 0\n end_index = len(line)\n if start_point_color is not None:\n ax.plot(line[0, 0], line[0, 1], "o", c=start_point_color, alpha=alpha)\n start_index = 1\n if line[0][0] == line[-1][0] and line[0][1] == line[-1][1]:\n end_index -= 1\n ax.plot(line[start_index:end_index, 0], line[start_index:end_index, 1], "o",\n c=color, alpha=alpha)\n\n def point_numbers(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike,\n ax: Axes | int = 0,\n color: str = "red",\n ) -> None:\n ax = self._get_ax(ax)\n x, y = self._grid_as_2d(x, y)\n z = np.asarray(z)\n ny, nx = z.shape\n for j in range(ny):\n for i in range(nx):\n quad = i + j*nx\n ax.text(x[j, i], y[j, i], str(quad), ha="right", va="top", color=color,\n clip_on=True)\n\n def quad_numbers(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike,\n ax: Axes | int = 0,\n color: str = "blue",\n ) -> None:\n ax = self._get_ax(ax)\n x, y = self._grid_as_2d(x, y)\n z = np.asarray(z)\n ny, nx = z.shape\n for j in range(1, ny):\n for i in range(1, nx):\n quad = i + j*nx\n xmid = x[j-1:j+1, i-1:i+1].mean()\n ymid = y[j-1:j+1, i-1:i+1].mean()\n ax.text(xmid, ymid, str(quad), ha="center", va="center", color=color, clip_on=True)\n\n def z_levels(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike,\n lower_level: float,\n upper_level: float | None = None,\n ax: Axes | int = 0,\n color: str = "green",\n ) -> None:\n ax = self._get_ax(ax)\n x, y = self._grid_as_2d(x, y)\n z = np.asarray(z)\n ny, nx = z.shape\n for j in range(ny):\n for i in range(nx):\n zz = z[j, i]\n if upper_level is not None and zz > upper_level:\n z_level = 2\n elif zz > lower_level:\n z_level = 1\n else:\n z_level = 0\n ax.text(x[j, i], y[j, i], str(z_level), ha="left", va="bottom", color=color,\n clip_on=True)\n
|
.venv\Lib\site-packages\contourpy\util\mpl_renderer.py
|
mpl_renderer.py
|
Python
| 20,660 | 0.95 | 0.143925 | 0.014737 |
react-lib
| 360 |
2024-12-22T15:11:59.132612
|
Apache-2.0
| false |
0b29ceb7764c581f0b5e274f369deaec
|
from __future__ import annotations\n\nfrom itertools import pairwise\nfrom typing import TYPE_CHECKING, cast\n\nimport matplotlib.path as mpath\nimport numpy as np\n\nfrom contourpy import FillType, LineType\nfrom contourpy.array import codes_from_offsets\n\nif TYPE_CHECKING:\n from contourpy._contourpy import FillReturn, LineReturn, LineReturn_Separate\n\n\ndef filled_to_mpl_paths(filled: FillReturn, fill_type: FillType) -> list[mpath.Path]:\n if fill_type in (FillType.OuterCode, FillType.ChunkCombinedCode):\n paths = [mpath.Path(points, codes) for points, codes in zip(*filled) if points is not None]\n elif fill_type in (FillType.OuterOffset, FillType.ChunkCombinedOffset):\n paths = [mpath.Path(points, codes_from_offsets(offsets))\n for points, offsets in zip(*filled) if points is not None]\n elif fill_type == FillType.ChunkCombinedCodeOffset:\n paths = []\n for points, codes, outer_offsets in zip(*filled):\n if points is None:\n continue\n points = np.split(points, outer_offsets[1:-1])\n codes = np.split(codes, outer_offsets[1:-1])\n paths += [mpath.Path(p, c) for p, c in zip(points, codes)]\n elif fill_type == FillType.ChunkCombinedOffsetOffset:\n paths = []\n for points, offsets, outer_offsets in zip(*filled):\n if points is None:\n continue\n for i in range(len(outer_offsets)-1):\n offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1]\n pts = points[offs[0]:offs[-1]]\n paths += [mpath.Path(pts, codes_from_offsets(offs - offs[0]))]\n else:\n raise RuntimeError(f"Conversion of FillType {fill_type} to MPL Paths is not implemented")\n return paths\n\n\ndef lines_to_mpl_paths(lines: LineReturn, line_type: LineType) -> list[mpath.Path]:\n if line_type == LineType.Separate:\n if TYPE_CHECKING:\n lines = cast(LineReturn_Separate, lines)\n paths = []\n for line in lines:\n # Drawing as Paths so that they can be closed correctly.\n closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1]\n paths.append(mpath.Path(line, closed=closed))\n elif line_type in (LineType.SeparateCode, LineType.ChunkCombinedCode):\n paths = [mpath.Path(points, codes) for points, codes in zip(*lines) if points is not None]\n elif line_type == LineType.ChunkCombinedOffset:\n paths = []\n for points, offsets in zip(*lines):\n if points is None:\n continue\n for i in range(len(offsets)-1):\n line = points[offsets[i]:offsets[i+1]]\n closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1]\n paths.append(mpath.Path(line, closed=closed))\n elif line_type == LineType.ChunkCombinedNan:\n paths = []\n for points in lines[0]:\n if points is None:\n continue\n nan_offsets = np.nonzero(np.isnan(points[:, 0]))[0]\n nan_offsets = np.concatenate([[-1], nan_offsets, [len(points)]])\n for s, e in pairwise(nan_offsets):\n line = points[s+1:e]\n closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1]\n paths.append(mpath.Path(line, closed=closed))\n else:\n raise RuntimeError(f"Conversion of LineType {line_type} to MPL Paths is not implemented")\n return paths\n
|
.venv\Lib\site-packages\contourpy\util\mpl_util.py
|
mpl_util.py
|
Python
| 3,529 | 0.95 | 0.324675 | 0.014493 |
awesome-app
| 272 |
2025-06-08T02:04:01.116402
|
GPL-3.0
| false |
5741ead8fb69ac8a6d4aa021e242218b
|
from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom typing import TYPE_CHECKING, Any\n\nimport numpy as np\n\nif TYPE_CHECKING:\n import io\n\n from numpy.typing import ArrayLike\n\n from contourpy._contourpy import CoordinateArray, FillReturn, FillType, LineReturn, LineType\n\n\nclass Renderer(ABC):\n """Abstract base class for renderers."""\n\n def _grid_as_2d(self, x: ArrayLike, y: ArrayLike) -> tuple[CoordinateArray, CoordinateArray]:\n x = np.asarray(x)\n y = np.asarray(y)\n if x.ndim == 1:\n x, y = np.meshgrid(x, y)\n return x, y\n\n @abstractmethod\n def filled(\n self,\n filled: FillReturn,\n fill_type: FillType | str,\n ax: Any = 0,\n color: str = "C0",\n alpha: float = 0.7,\n ) -> None:\n pass\n\n @abstractmethod\n def grid(\n self,\n x: ArrayLike,\n y: ArrayLike,\n ax: Any = 0,\n color: str = "black",\n alpha: float = 0.1,\n point_color: str | None = None,\n quad_as_tri_alpha: float = 0,\n ) -> None:\n pass\n\n @abstractmethod\n def lines(\n self,\n lines: LineReturn,\n line_type: LineType | str,\n ax: Any = 0,\n color: str = "C0",\n alpha: float = 1.0,\n linewidth: float = 1,\n ) -> None:\n pass\n\n @abstractmethod\n def mask(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike | np.ma.MaskedArray[Any, Any],\n ax: Any = 0,\n color: str = "black",\n ) -> None:\n pass\n\n def multi_filled(\n self,\n multi_filled: list[FillReturn],\n fill_type: FillType | str,\n ax: Any = 0,\n color: str | None = None,\n **kwargs: Any,\n ) -> None:\n """Plot multiple sets of filled contours on a single axes.\n\n Args:\n multi_filled (list of filled contour arrays): Multiple filled contour sets as returned\n by :meth:`.ContourGenerator.multi_filled`.\n fill_type (FillType or str): Type of filled data as returned by\n :attr:`~.ContourGenerator.fill_type`, or string equivalent.\n ax (int or Renderer-specific axes or figure object, optional): Which axes to plot on,\n default ``0``.\n color (str or None, optional): If a string color then this same color is used for all\n filled contours. If ``None``, the default, then the filled contour sets use colors\n from the ``tab10`` colormap in order, wrapping around to the beginning if more than\n 10 sets of filled contours are rendered.\n kwargs: All other keyword argument are passed on to\n :meth:`.Renderer.filled` unchanged.\n\n .. versionadded:: 1.3.0\n """\n if color is not None:\n kwargs["color"] = color\n for i, filled in enumerate(multi_filled):\n if color is None:\n kwargs["color"] = f"C{i % 10}"\n self.filled(filled, fill_type, ax, **kwargs)\n\n def multi_lines(\n self,\n multi_lines: list[LineReturn],\n line_type: LineType | str,\n ax: Any = 0,\n color: str | None = None,\n **kwargs: Any,\n ) -> None:\n """Plot multiple sets of contour lines on a single axes.\n\n Args:\n multi_lines (list of contour line arrays): Multiple contour line sets as returned by\n :meth:`.ContourGenerator.multi_lines`.\n line_type (LineType or str): Type of line data as returned by\n :attr:`~.ContourGenerator.line_type`, or string equivalent.\n ax (int or Renderer-specific axes or figure object, optional): Which axes to plot on,\n default ``0``.\n color (str or None, optional): If a string color then this same color is used for all\n lines. If ``None``, the default, then the line sets use colors from the ``tab10``\n colormap in order, wrapping around to the beginning if more than 10 sets of lines\n are rendered.\n kwargs: All other keyword argument are passed on to\n :meth:`Renderer.lines` unchanged.\n\n .. 
versionadded:: 1.3.0\n """\n if color is not None:\n kwargs["color"] = color\n for i, lines in enumerate(multi_lines):\n if color is None:\n kwargs["color"] = f"C{i % 10}"\n self.lines(lines, line_type, ax, **kwargs)\n\n @abstractmethod\n def save(self, filename: str, transparent: bool = False) -> None:\n pass\n\n @abstractmethod\n def save_to_buffer(self) -> io.BytesIO:\n pass\n\n @abstractmethod\n def show(self) -> None:\n pass\n\n @abstractmethod\n def title(self, title: str, ax: Any = 0, color: str | None = None) -> None:\n pass\n\n @abstractmethod\n def z_values(\n self,\n x: ArrayLike,\n y: ArrayLike,\n z: ArrayLike,\n ax: Any = 0,\n color: str = "green",\n fmt: str = ".1f",\n quad_as_tri: bool = False,\n ) -> None:\n pass\n
|
.venv\Lib\site-packages\contourpy\util\renderer.py
|
renderer.py
|
Python
| 5,284 | 0.85 | 0.162651 | 0.013986 |
python-kit
| 135 |
2023-09-24T05:01:26.988400
|
BSD-3-Clause
| false |
b01872683ccf489b3b5d467e38c538e3
|
# _build_config.py.in is converted into _build_config.py during the meson build process.\n\nfrom __future__ import annotations\n\n\ndef build_config() -> dict[str, str]:\n """\n Return a dictionary containing build configuration settings.\n\n All dictionary keys and values are strings, for example ``False`` is\n returned as ``"False"``.\n\n .. versionadded:: 1.1.0\n """\n return dict(\n # Python settings\n python_version="3.13",\n python_install_dir=r"c:/Lib/site-packages/",\n python_path=r"C:/Users/runneradmin/AppData/Local/Temp/build-env-hycg5pau/Scripts/python.exe",\n\n # Package versions\n contourpy_version="1.3.2",\n meson_version="1.7.2",\n mesonpy_version="0.17.1",\n pybind11_version="2.13.6",\n\n # Misc meson settings\n meson_backend="ninja",\n build_dir=r"D:/a/contourpy/contourpy/.mesonpy-u6taogop/lib/contourpy/util",\n source_dir=r"D:/a/contourpy/contourpy/lib/contourpy/util",\n cross_build="False",\n\n # Build options\n build_options=r"-Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=mt '-Dcpp_link_args=['ucrt.lib','vcruntime.lib','/nodefaultlib:libucrt.lib','/nodefaultlib:libvcruntime.lib']' -Dvsenv=True '--native-file=D:/a/contourpy/contourpy/.mesonpy-u6taogop/meson-python-native-file.ini'",\n buildtype="release",\n cpp_std="c++17",\n debug="False",\n optimization="3",\n vsenv="True",\n b_ndebug="if-release",\n b_vscrt="mt",\n\n # C++ compiler\n compiler_name="msvc",\n compiler_version="19.43.34808",\n linker_id="link",\n compile_command="cl",\n\n # Host machine\n host_cpu="x86_64",\n host_cpu_family="x86_64",\n host_cpu_endian="little",\n host_cpu_system="windows",\n\n # Build machine, same as host machine if not a cross_build\n build_cpu="x86_64",\n build_cpu_family="x86_64",\n build_cpu_endian="little",\n build_cpu_system="windows",\n )\n
|
.venv\Lib\site-packages\contourpy\util\_build_config.py
|
_build_config.py
|
Python
| 2,085 | 0.95 | 0.083333 | 0.163265 |
python-kit
| 715 |
2024-12-30T05:12:58.368860
|
BSD-3-Clause
| false |
b029b41fe4d98fb624e148bc9f7504fd
|
from __future__ import annotations\n\nfrom contourpy.util._build_config import build_config\n\n__all__ = ["build_config"]\n
|
.venv\Lib\site-packages\contourpy\util\__init__.py
|
__init__.py
|
Python
| 123 | 0.85 | 0 | 0 |
vue-tools
| 733 |
2023-07-29T09:18:40.057393
|
MIT
| false |
67b33e2bbb6822381a7429ab758a3d57
|
\n\n
|
.venv\Lib\site-packages\contourpy\util\__pycache__\bokeh_renderer.cpython-313.pyc
|
bokeh_renderer.cpython-313.pyc
|
Other
| 17,708 | 0.95 | 0.02439 | 0 |
react-lib
| 218 |
2025-02-20T02:49:42.417902
|
MIT
| false |
34f561b6f1e35ceccc253f40b411f7bb
|