column               type         range / classes
-------------------  -----------  ------------------------------------------
content              string       length 1 – 103k
path                 string       length 8 – 216
filename             string       length 2 – 179
language             string       15 classes
size_bytes           int64        2 – 189k
quality_score        float64      0.5 – 0.95
complexity           float64      0 – 1
documentation_ratio  float64      0 – 1
repository           string       5 classes
stars                int64        0 – 1k
created_date         string date  2023-07-10 19:21:08 – 2025-07-09 19:11:45
license              string       4 classes
is_test              bool         2 classes
file_hash            string       length 32 (fixed)
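Each record below lists these fourteen fields in the order given above; the `content` cell is flattened to a single line with literal `\n` escapes, and binary files (the `.pyc` entries) are reduced to a bare `\n\n` placeholder. As a minimal sketch of how the flattened cells could be regrouped into records, assuming exactly this field order (the `FIELDS` list and `parse_records` helper are hypothetical, not part of the dataset):

```python
# Hypothetical parser for this flattened dump: each record is 14
# consecutive cells in the fixed column order listed above.
FIELDS = [
    "content", "path", "filename", "language", "size_bytes",
    "quality_score", "complexity", "documentation_ratio",
    "repository", "stars", "created_date", "license",
    "is_test", "file_hash",
]

def parse_records(cells):
    """Group a flat list of dump cells into one dict per file record."""
    step = len(FIELDS)
    for i in range(0, len(cells) - step + 1, step):
        row = dict(zip(FIELDS, cells[i:i + step]))
        row["content"] = row["content"].replace("\\n", "\n")  # undo escaping
        row["size_bytes"] = int(row["size_bytes"].replace(",", ""))
        for key in ("quality_score", "complexity", "documentation_ratio"):
            row[key] = float(row[key])
        row["stars"] = int(row["stars"].replace(",", ""))
        row["is_test"] = row["is_test"] == "true"
        yield row
```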
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\draft04.cpython-313.pyc
draft04.cpython-313.pyc
Other
40,900
0.95
0.104839
0.002198
python-kit
525
2024-02-23T11:55:11.873934
BSD-3-Clause
false
ab95f221e23dc102211f628ce7ca4f85
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\draft06.cpython-313.pyc
draft06.cpython-313.pyc
Other
11,542
0.95
0.111111
0
node-utils
309
2024-06-25T21:11:03.804500
GPL-3.0
false
a91556a522b601d73177922a0c336f94
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\draft07.cpython-313.pyc
draft07.cpython-313.pyc
Other
6,377
0.95
0.081395
0.012821
react-lib
94
2024-06-05T04:41:49.067512
MIT
false
0f83e28008dd3497e91849bf0f85aba2
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\exceptions.cpython-313.pyc
exceptions.cpython-313.pyc
Other
2,888
0.95
0.0625
0.241379
react-lib
4
2024-05-10T08:56:00.613124
BSD-3-Clause
false
a12fea985a3c4afd9cb36ed4743f29d6
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\generator.cpython-313.pyc
generator.cpython-313.pyc
Other
16,767
0.95
0.149733
0.023392
node-utils
676
2025-04-03T22:18:16.755424
BSD-3-Clause
false
36fd84cf4377b8edc1fcb9620c97e05b
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\indent.cpython-313.pyc
indent.cpython-313.pyc
Other
1,812
0.8
0.1
0
react-lib
221
2025-07-07T09:05:01.090623
BSD-3-Clause
false
6b04d216a9f24a2f5e62f668c960b127
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\ref_resolver.cpython-313.pyc
ref_resolver.cpython-313.pyc
Other
8,590
0.95
0.02439
0.026549
node-utils
469
2025-02-28T18:25:56.847743
GPL-3.0
false
ff0ff6c23f6de960bc1a3d783f39c426
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\version.cpython-313.pyc
version.cpython-313.pyc
Other
211
0.7
0
0
vue-tools
799
2023-09-30T02:11:50.288959
GPL-3.0
false
4652c6ed4b1b19ce88f2c6d8c100440b
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
9,898
0.95
0.085202
0.063584
react-lib
696
2025-02-25T17:05:56.236782
Apache-2.0
false
e23401f5bb50bf221cc8dbe5b7d5b50a
\n\n
.venv\Lib\site-packages\fastjsonschema\__pycache__\__main__.cpython-313.pyc
__main__.cpython-313.pyc
Other
833
0.7
0
0
react-lib
178
2025-01-06T03:23:36.463955
MIT
false
3d13863232bd553024bf6872871f466b
MAINTAINER\nMichal Hořejšek <horejsekmichal@gmail.com>\n\nCONTRIBUTORS\nanentropic <ego@anentropic.com>\nAntti Jokipii <anttijokipii@gmail.com>\nbcaller <bcaller@users.noreply.github.com>\nFrederik Petersen <fp@abusix.com>\nGuillaume Desvé <guillaume.desve@surycat.com>\nKris Molendyke <krismolendyke@users.noreply.github.com>\nDavid Majda <david@majda.cz>\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\AUTHORS
AUTHORS
Other
350
0.7
0
0
node-utils
332
2025-01-09T23:39:43.182509
MIT
false
10988d13f9af03ca0b4e331ef90d2918
pip\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\INSTALLER
INSTALLER
Other
4
0.5
0
0
awesome-app
220
2023-10-23T09:54:35.519507
BSD-3-Clause
false
365c9bfeb7d89244f2ce01c1de44cb85
Copyright (c) 2018, Michal Horejsek\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification,\nare permitted provided that the following conditions are met:\n\n Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n Redistributions in binary form must reproduce the above copyright notice, this\n list of conditions and the following disclaimer in the documentation and/or\n other materials provided with the distribution.\n\n Neither the name of the {organization} nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\LICENSE
LICENSE
Other
1,518
0.7
0
0
vue-tools
193
2024-06-26T03:10:32.427586
Apache-2.0
false
6ddb48173ae95b92910c0f4d45b6d875
Metadata-Version: 2.1\nName: fastjsonschema\nVersion: 2.21.1\nSummary: Fastest Python implementation of JSON schema\nHome-page: https://github.com/horejsek/python-fastjsonschema\nAuthor: Michal Horejsek\nAuthor-email: fastjsonschema@horejsek.com\nLicense: BSD\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3\nClassifier: Programming Language :: Python :: 3.3\nClassifier: Programming Language :: Python :: 3.4\nClassifier: Programming Language :: Python :: 3.5\nClassifier: Programming Language :: Python :: 3.6\nClassifier: Programming Language :: Python :: 3.7\nClassifier: Programming Language :: Python :: 3.8\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Programming Language :: Python :: Implementation :: CPython\nClassifier: License :: OSI Approved :: BSD License\nClassifier: Operating System :: OS Independent\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: Topic :: Software Development :: Libraries :: Python Modules\nLicense-File: LICENSE\nLicense-File: AUTHORS\nProvides-Extra: devel\nRequires-Dist: colorama; extra == "devel"\nRequires-Dist: jsonschema; extra == "devel"\nRequires-Dist: json-spec; extra == "devel"\nRequires-Dist: pylint; extra == "devel"\nRequires-Dist: pytest; extra == "devel"\nRequires-Dist: pytest-benchmark; extra == "devel"\nRequires-Dist: pytest-cache; extra == "devel"\nRequires-Dist: validictory; extra == "devel"\n\n===========================\nFast JSON schema for Python\n===========================\n\n|PyPI| |Pythons|\n\n.. |PyPI| image:: https://img.shields.io/pypi/v/fastjsonschema.svg\n :alt: PyPI version\n :target: https://pypi.python.org/pypi/fastjsonschema\n\n.. |Pythons| image:: https://img.shields.io/pypi/pyversions/fastjsonschema.svg\n :alt: Supported Python versions\n :target: https://pypi.python.org/pypi/fastjsonschema\n\nSee `documentation <https://horejsek.github.io/python-fastjsonschema/>`_.\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\METADATA
METADATA
Other
2,152
0.8
0.018519
0
awesome-app
573
2024-02-13T00:03:17.527535
GPL-3.0
false
8860c395bf96b232370f93b242159088
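The METADATA cell above is fastjsonschema's own package metadata ("Fastest Python implementation of JSON schema"). For context, a short usage sketch of the library it describes; `fastjsonschema.compile` is the documented entry point, and the schema here is an arbitrary example:

```python
import fastjsonschema

# Compile once: fastjsonschema code-generates a specialized validator.
validate = fastjsonschema.compile({
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
})

validate({"name": "demo"})  # valid data is returned unchanged
try:
    validate({})  # missing required key
except fastjsonschema.JsonSchemaException as exc:
    print(exc.message)  # human-readable description of the violation
```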
fastjsonschema-2.21.1.dist-info/AUTHORS,sha256=DLGgN1TEmM2VoBM4cRn-gklc4HA8jLLPDDCeBD1kGhU,350\nfastjsonschema-2.21.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nfastjsonschema-2.21.1.dist-info/LICENSE,sha256=nM3faes5mKYBSN6-hblMWv7VNpG2R0aS54q8wKDlRPE,1518\nfastjsonschema-2.21.1.dist-info/METADATA,sha256=BCBfPGXzH4nCu_0NeyJ8y37GFhQ96iJm_AaZuA8SrAI,2152\nfastjsonschema-2.21.1.dist-info/RECORD,,\nfastjsonschema-2.21.1.dist-info/WHEEL,sha256=PZUExdf71Ui_so67QXpySuHtCi3-J3wvF4ORK6k_S8U,91\nfastjsonschema-2.21.1.dist-info/top_level.txt,sha256=8RQcPDFXXHZKduTjgzugpPNW3zIjxFT0axTh4UsT6gE,15\nfastjsonschema/__init__.py,sha256=GzCywWlandjQQsJLXaZkHYdnydNcITF6r24Av5gQYgU,10347\nfastjsonschema/__main__.py,sha256=4hfd23przxmQc8VjL0fUsbsrvvA73gJ2HDNPgLLFdAI,312\nfastjsonschema/__pycache__/__init__.cpython-313.pyc,,\nfastjsonschema/__pycache__/__main__.cpython-313.pyc,,\nfastjsonschema/__pycache__/draft04.cpython-313.pyc,,\nfastjsonschema/__pycache__/draft06.cpython-313.pyc,,\nfastjsonschema/__pycache__/draft07.cpython-313.pyc,,\nfastjsonschema/__pycache__/exceptions.cpython-313.pyc,,\nfastjsonschema/__pycache__/generator.cpython-313.pyc,,\nfastjsonschema/__pycache__/indent.cpython-313.pyc,,\nfastjsonschema/__pycache__/ref_resolver.cpython-313.pyc,,\nfastjsonschema/__pycache__/version.cpython-313.pyc,,\nfastjsonschema/draft04.py,sha256=aFhmYp1Rjx6mDZohnBnCfd3gOqUUylpQXCkClAvWKPc,30808\nfastjsonschema/draft06.py,sha256=cSPnflqydr6EV4p02T_gh4VFX7mVVdoKCxnNwnC_PPA,7892\nfastjsonschema/draft07.py,sha256=D4qNNhWcjg0TrEiHQ0BJNwvlyv1Rp8gyEBYgRBmV2b8,4449\nfastjsonschema/exceptions.py,sha256=w749JgqKi8clBFcObdcbZVqsmF4oJ_QByhZ1SGbUFNw,1612\nfastjsonschema/generator.py,sha256=bYZt_QfrCH_v7rJDBMteeJx4UDygEV7XZjOtFL3ikls,13059\nfastjsonschema/indent.py,sha256=juZFW9LSvmDJbPFIUm3GPqdPqJoUnqvM8neHN5rkvzU,920\nfastjsonschema/ref_resolver.py,sha256=PWnu-2MZzWH5cymDvdcvXfx3iOW_Mr6c-xXMYm9FD7Q,5577\nfastjsonschema/version.py,sha256=dNAwyKYTo58dhA957101JXTUnXy1nRqqewytAwxSmEM,19\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\RECORD
RECORD
Other
2,014
0.7
0
0
awesome-app
2
2024-08-01T10:50:03.799411
MIT
false
0f6717d80cc06a549839a03f2acf9fce
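Each RECORD line above pairs an installed path with an `sha256=` digest; per the wheel spec this is the unpadded URL-safe base64 of the file's SHA-256. A sketch of verifying one entry, assuming the `.venv` layout from the path column exists on disk:

```python
import base64
import hashlib
from pathlib import Path

def record_digest(path):
    """Wheel-RECORD digest: unpadded urlsafe base64 of the file's SHA-256."""
    raw = hashlib.sha256(Path(path).read_bytes()).digest()
    return "sha256=" + base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

# Entry copied from the RECORD cell above.
path = ".venv/Lib/site-packages/fastjsonschema-2.21.1.dist-info/top_level.txt"
assert record_digest(path) == "sha256=8RQcPDFXXHZKduTjgzugpPNW3zIjxFT0axTh4UsT6gE"
```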
fastjsonschema\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\top_level.txt
top_level.txt
Other
15
0.5
0
0
awesome-app
940
2024-02-01T22:00:40.759385
BSD-3-Clause
false
9ffeccdf3529c7be7c4a40a585e166b9
Wheel-Version: 1.0\nGenerator: setuptools (75.6.0)\nRoot-Is-Purelib: true\nTag: py3-none-any\n\n
.venv\Lib\site-packages\fastjsonschema-2.21.1.dist-info\WHEEL
WHEEL
Other
91
0.5
0
0
react-lib
866
2025-01-29T22:09:27.244850
BSD-3-Clause
false
75ee1e0d275021ca61f0502e783d543c
# file generated by setuptools-scm\n# don't change, don't track in version control\n\n__all__ = ["__version__", "__version_tuple__", "version", "version_tuple"]\n\nTYPE_CHECKING = False\nif TYPE_CHECKING:\n from typing import Tuple\n from typing import Union\n\n VERSION_TUPLE = Tuple[Union[int, str], ...]\nelse:\n VERSION_TUPLE = object\n\nversion: str\n__version__: str\n__version_tuple__: VERSION_TUPLE\nversion_tuple: VERSION_TUPLE\n\n__version__ = version = '3.18.0'\n__version_tuple__ = version_tuple = (3, 18, 0)\n
.venv\Lib\site-packages\filelock\version.py
version.py
Python
513
0.95
0.047619
0.125
react-lib
902
2024-02-27T08:30:07.185308
MIT
false
0c473d790c2c712f09ef309407597387
from __future__ import annotations\n\nfrom typing import Any\n\n\nclass Timeout(TimeoutError): # noqa: N818\n """Raised when the lock could not be acquired in *timeout* seconds."""\n\n def __init__(self, lock_file: str) -> None:\n super().__init__()\n self._lock_file = lock_file\n\n def __reduce__(self) -> str | tuple[Any, ...]:\n return self.__class__, (self._lock_file,) # Properly pickle the exception\n\n def __str__(self) -> str:\n return f"The file lock '{self._lock_file}' could not be acquired."\n\n def __repr__(self) -> str:\n return f"{self.__class__.__name__}({self.lock_file!r})"\n\n @property\n def lock_file(self) -> str:\n """:return: The path of the file lock."""\n return self._lock_file\n\n\n__all__ = [\n "Timeout",\n]\n
.venv\Lib\site-packages\filelock\_error.py
_error.py
Python
787
0.95
0.2
0
python-kit
5
2024-02-11T00:57:21.445901
MIT
false
65a8e24a4cdee2bdba311013a074d920
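The `__reduce__` override in the `_error.py` cell above exists so a `Timeout` raised in another process (for example, a multiprocessing worker) survives pickling; a quick check of that behaviour:

```python
import pickle
from filelock import Timeout

err = pickle.loads(pickle.dumps(Timeout("/tmp/demo.lock")))
assert err.lock_file == "/tmp/demo.lock"  # __reduce__ rebuilds from lock_file
print(err)  # The file lock '/tmp/demo.lock' could not be acquired.
```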
from __future__ import annotations\n\nimport os\nimport stat\nimport sys\nfrom errno import EACCES, EISDIR\nfrom pathlib import Path\n\n\ndef raise_on_not_writable_file(filename: str) -> None:\n """\n Raise an exception if attempting to open the file for writing would fail.\n\n This is done so files that will never be writable can be separated from files that are writable but currently\n locked.\n\n :param filename: file to check\n :raises OSError: as if the file was opened for writing.\n\n """\n try: # use stat to do exists + can write to check without race condition\n file_stat = os.stat(filename) # noqa: PTH116\n except OSError:\n return # swallow does not exist or other errors\n\n if file_stat.st_mtime != 0: # if os.stat returns but modification is zero that's an invalid os.stat - ignore it\n if not (file_stat.st_mode & stat.S_IWUSR):\n raise PermissionError(EACCES, "Permission denied", filename)\n\n if stat.S_ISDIR(file_stat.st_mode):\n if sys.platform == "win32": # pragma: win32 cover\n # On Windows, this is PermissionError\n raise PermissionError(EACCES, "Permission denied", filename)\n else: # pragma: win32 no cover # noqa: RET506\n # On linux / macOS, this is IsADirectoryError\n raise IsADirectoryError(EISDIR, "Is a directory", filename)\n\n\ndef ensure_directory_exists(filename: Path | str) -> None:\n """\n Ensure the directory containing the file exists (create it if necessary).\n\n :param filename: file.\n\n """\n Path(filename).parent.mkdir(parents=True, exist_ok=True)\n\n\n__all__ = [\n "ensure_directory_exists",\n "raise_on_not_writable_file",\n]\n
.venv\Lib\site-packages\filelock\_util.py
_util.py
Python
1,715
0.95
0.25
0.052632
react-lib
677
2024-02-14T16:49:07.456871
BSD-3-Clause
false
31d7d29f31a014bf3d5577d081fc4166
"""\nA platform independent file lock that supports the with-statement.\n\n.. autodata:: filelock.__version__\n :no-value:\n\n"""\n\nfrom __future__ import annotations\n\nimport sys\nimport warnings\nfrom typing import TYPE_CHECKING\n\nfrom ._api import AcquireReturnProxy, BaseFileLock\nfrom ._error import Timeout\nfrom ._soft import SoftFileLock\nfrom ._unix import UnixFileLock, has_fcntl\nfrom ._windows import WindowsFileLock\nfrom .asyncio import (\n AsyncAcquireReturnProxy,\n AsyncSoftFileLock,\n AsyncUnixFileLock,\n AsyncWindowsFileLock,\n BaseAsyncFileLock,\n)\nfrom .version import version\n\n#: version of the project as a string\n__version__: str = version\n\n\nif sys.platform == "win32": # pragma: win32 cover\n _FileLock: type[BaseFileLock] = WindowsFileLock\n _AsyncFileLock: type[BaseAsyncFileLock] = AsyncWindowsFileLock\nelse: # pragma: win32 no cover # noqa: PLR5501\n if has_fcntl:\n _FileLock: type[BaseFileLock] = UnixFileLock\n _AsyncFileLock: type[BaseAsyncFileLock] = AsyncUnixFileLock\n else:\n _FileLock = SoftFileLock\n _AsyncFileLock = AsyncSoftFileLock\n if warnings is not None:\n warnings.warn("only soft file lock is available", stacklevel=2)\n\nif TYPE_CHECKING:\n FileLock = SoftFileLock\n AsyncFileLock = AsyncSoftFileLock\nelse:\n #: Alias for the lock, which should be used for the current platform.\n FileLock = _FileLock\n AsyncFileLock = _AsyncFileLock\n\n\n__all__ = [\n "AcquireReturnProxy",\n "AsyncAcquireReturnProxy",\n "AsyncFileLock",\n "AsyncSoftFileLock",\n "AsyncUnixFileLock",\n "AsyncWindowsFileLock",\n "BaseAsyncFileLock",\n "BaseFileLock",\n "FileLock",\n "SoftFileLock",\n "Timeout",\n "UnixFileLock",\n "WindowsFileLock",\n "__version__",\n]\n
.venv\Lib\site-packages\filelock\__init__.py
__init__.py
Python
1,769
0.95
0.085714
0.033898
react-lib
899
2024-09-06T19:04:54.762027
GPL-3.0
false
09f8a650c850f8d12f58b7eaf4f0f80d
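The `filelock/__init__.py` cell above picks the platform-appropriate lock class at import time and exposes it as `FileLock`; the typical documented pattern is the context manager below, with `Timeout` raised when the lock cannot be acquired in time:

```python
from filelock import FileLock, Timeout

lock = FileLock("shared.txt.lock")
try:
    with lock.acquire(timeout=1):  # wait at most one second for the lock
        with open("shared.txt", "a") as fh:
            fh.write("exclusive write\n")
except Timeout:
    print("another process is holding shared.txt.lock")
```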
\n\n
.venv\Lib\site-packages\filelock\__pycache__\asyncio.cpython-313.pyc
asyncio.cpython-313.pyc
Other
15,489
0.95
0.105
0.017751
react-lib
475
2024-06-19T23:29:20.806910
BSD-3-Clause
false
34f7b3dd001cca80d8945d52a5308ee9
\n\n
.venv\Lib\site-packages\filelock\__pycache__\version.cpython-313.pyc
version.cpython-313.pyc
Other
654
0.7
0
0
awesome-app
887
2024-07-25T23:06:10.892102
GPL-3.0
false
0c870f61a13de47e9821d1c747912364
\n\n
.venv\Lib\site-packages\filelock\__pycache__\_api.cpython-313.pyc
_api.cpython-313.pyc
Other
16,445
0.95
0.13
0.012195
awesome-app
654
2025-05-16T12:37:32.739914
GPL-3.0
false
e5728d045a8eec167630db6447da9c2d
\n\n
.venv\Lib\site-packages\filelock\__pycache__\_error.cpython-313.pyc
_error.cpython-313.pyc
Other
1,827
0.8
0
0
awesome-app
869
2023-10-01T05:59:18.437882
GPL-3.0
false
c4a638828619ea2f9f127f7caf8976be
\n\n
.venv\Lib\site-packages\filelock\__pycache__\_soft.cpython-313.pyc
_soft.cpython-313.pyc
Other
2,530
0.8
0
0
node-utils
554
2024-08-20T03:22:46.371397
Apache-2.0
false
44306f6862afd57d258284a43397073f
\n\n
.venv\Lib\site-packages\filelock\__pycache__\_unix.cpython-313.pyc
_unix.cpython-313.pyc
Other
3,621
0.8
0
0
vue-tools
895
2024-03-24T20:21:35.098616
MIT
false
b5d3cbd52b4a1f394e62c87ea7d7245b
\n\n
.venv\Lib\site-packages\filelock\__pycache__\_util.cpython-313.pyc
_util.cpython-313.pyc
Other
2,000
0.7
0.172414
0
vue-tools
18
2024-07-02T04:53:58.937812
Apache-2.0
false
f421b67303791b701a62fc579b1667f4
\n\n
.venv\Lib\site-packages\filelock\__pycache__\_windows.cpython-313.pyc
_windows.cpython-313.pyc
Other
3,306
0.95
0.030303
0
react-lib
113
2025-07-02T03:22:41.267528
Apache-2.0
false
5e70e3db927d7a169183b618b4d95739
\n\n
.venv\Lib\site-packages\filelock\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
1,586
0.8
0
0
react-lib
947
2024-06-01T11:52:12.123627
GPL-3.0
false
7cc736809cc587c7f5a9c02a26d3aaea
pip\n
.venv\Lib\site-packages\filelock-3.18.0.dist-info\INSTALLER
INSTALLER
Other
4
0.5
0
0
react-lib
439
2025-06-24T03:11:28.667437
MIT
false
365c9bfeb7d89244f2ce01c1de44cb85
Metadata-Version: 2.4\nName: filelock\nVersion: 3.18.0\nSummary: A platform independent file lock.\nProject-URL: Documentation, https://py-filelock.readthedocs.io\nProject-URL: Homepage, https://github.com/tox-dev/py-filelock\nProject-URL: Source, https://github.com/tox-dev/py-filelock\nProject-URL: Tracker, https://github.com/tox-dev/py-filelock/issues\nMaintainer-email: Bernát Gábor <gaborjbernat@gmail.com>\nLicense-Expression: Unlicense\nLicense-File: LICENSE\nKeywords: application,cache,directory,log,user\nClassifier: Development Status :: 5 - Production/Stable\nClassifier: Intended Audience :: Developers\nClassifier: License :: OSI Approved :: The Unlicense (Unlicense)\nClassifier: Operating System :: OS Independent\nClassifier: Programming Language :: Python\nClassifier: Programming Language :: Python :: 3 :: Only\nClassifier: Programming Language :: Python :: 3.9\nClassifier: Programming Language :: Python :: 3.10\nClassifier: Programming Language :: Python :: 3.11\nClassifier: Programming Language :: Python :: 3.12\nClassifier: Programming Language :: Python :: 3.13\nClassifier: Topic :: Internet\nClassifier: Topic :: Software Development :: Libraries\nClassifier: Topic :: System\nRequires-Python: >=3.9\nProvides-Extra: docs\nRequires-Dist: furo>=2024.8.6; extra == 'docs'\nRequires-Dist: sphinx-autodoc-typehints>=3; extra == 'docs'\nRequires-Dist: sphinx>=8.1.3; extra == 'docs'\nProvides-Extra: testing\nRequires-Dist: covdefaults>=2.3; extra == 'testing'\nRequires-Dist: coverage>=7.6.10; extra == 'testing'\nRequires-Dist: diff-cover>=9.2.1; extra == 'testing'\nRequires-Dist: pytest-asyncio>=0.25.2; extra == 'testing'\nRequires-Dist: pytest-cov>=6; extra == 'testing'\nRequires-Dist: pytest-mock>=3.14; extra == 'testing'\nRequires-Dist: pytest-timeout>=2.3.1; extra == 'testing'\nRequires-Dist: pytest>=8.3.4; extra == 'testing'\nRequires-Dist: virtualenv>=20.28.1; extra == 'testing'\nProvides-Extra: typing\nRequires-Dist: typing-extensions>=4.12.2; (python_version < '3.11') and extra == 'typing'\nDescription-Content-Type: text/markdown\n\n# filelock\n\n[![PyPI](https://img.shields.io/pypi/v/filelock)](https://pypi.org/project/filelock/)\n[![Supported Python\nversions](https://img.shields.io/pypi/pyversions/filelock.svg)](https://pypi.org/project/filelock/)\n[![Documentation\nstatus](https://readthedocs.org/projects/py-filelock/badge/?version=latest)](https://py-filelock.readthedocs.io/en/latest/?badge=latest)\n[![Code style:\nblack](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Downloads](https://static.pepy.tech/badge/filelock/month)](https://pepy.tech/project/filelock)\n[![check](https://github.com/tox-dev/py-filelock/actions/workflows/check.yaml/badge.svg)](https://github.com/tox-dev/py-filelock/actions/workflows/check.yaml)\n\nFor more information checkout the [official documentation](https://py-filelock.readthedocs.io/en/latest/index.html).\n
.venv\Lib\site-packages\filelock-3.18.0.dist-info\METADATA
METADATA
Other
2,897
0.8
0
0.018182
vue-tools
864
2024-07-31T09:47:32.942860
Apache-2.0
false
0e872066ab89c5f39a7d4d1af118ebb9
filelock-3.18.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4\nfilelock-3.18.0.dist-info/METADATA,sha256=bMzrZMIFytIbgg_WaLomH79i_7KEx8ahX0IJBxbx1_I,2897\nfilelock-3.18.0.dist-info/RECORD,,\nfilelock-3.18.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87\nfilelock-3.18.0.dist-info/licenses/LICENSE,sha256=iNm062BXnBkew5HKBMFhMFctfu3EqG2qWL8oxuFMm80,1210\nfilelock/__init__.py,sha256=_t_-OAGXo_qyPa9lNQ1YnzVYEvSW3I0onPqzpomsVVg,1769\nfilelock/__pycache__/__init__.cpython-313.pyc,,\nfilelock/__pycache__/_api.cpython-313.pyc,,\nfilelock/__pycache__/_error.cpython-313.pyc,,\nfilelock/__pycache__/_soft.cpython-313.pyc,,\nfilelock/__pycache__/_unix.cpython-313.pyc,,\nfilelock/__pycache__/_util.cpython-313.pyc,,\nfilelock/__pycache__/_windows.cpython-313.pyc,,\nfilelock/__pycache__/asyncio.cpython-313.pyc,,\nfilelock/__pycache__/version.cpython-313.pyc,,\nfilelock/_api.py,sha256=2aATBeJ3-jtMj5OSm7EE539iNaTBsf13KXtcBMoi8oM,14545\nfilelock/_error.py,sha256=-5jMcjTu60YAvAO1UbqDD1GIEjVkwr8xCFwDBtMeYDg,787\nfilelock/_soft.py,sha256=haqtc_TB_KJbYv2a8iuEAclKuM4fMG1vTcp28sK919c,1711\nfilelock/_unix.py,sha256=eGOs4gDgZ-5fGnJUz-OkJDeZkAMzgvYcD8hVD6XH7e4,2351\nfilelock/_util.py,sha256=QHBoNFIYfbAThhotH3Q8E2acFc84wpG49-T-uu017ZE,1715\nfilelock/_windows.py,sha256=8k4XIBl_zZVfGC2gz0kEr8DZBvpNa8wdU9qeM1YrBb8,2179\nfilelock/asyncio.py,sha256=EZdJVkbMnZMuQwzuPN5IvXD0Ugzt__vOtrMP4-siVeU,12451\nfilelock/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0\nfilelock/version.py,sha256=D9gAiF9PGH4dQFjbe6VcXhU8kyCLpU7-c7_vfZP--Hc,513\n
.venv\Lib\site-packages\filelock-3.18.0.dist-info\RECORD
RECORD
Other
1,586
0.7
0
0
node-utils
31
2024-02-07T00:08:46.620280
Apache-2.0
false
f8d8a7a11ca34afa5c0e0135ac95ad83
Wheel-Version: 1.0\nGenerator: hatchling 1.27.0\nRoot-Is-Purelib: true\nTag: py3-none-any\n
.venv\Lib\site-packages\filelock-3.18.0.dist-info\WHEEL
WHEEL
Other
87
0.5
0
0
awesome-app
435
2024-11-02T11:02:40.764072
BSD-3-Clause
false
e2fcb0ad9ea59332c808928b4b439e7a
This is free and unencumbered software released into the public domain.\n\nAnyone is free to copy, modify, publish, use, compile, sell, or\ndistribute this software, either in source code form or as a compiled\nbinary, for any purpose, commercial or non-commercial, and by any\nmeans.\n\nIn jurisdictions that recognize copyright laws, the author or authors\nof this software dedicate any and all copyright interest in the\nsoftware to the public domain. We make this dedication for the benefit\nof the public at large and to the detriment of our heirs and\nsuccessors. We intend this dedication to be an overt act of\nrelinquishment in perpetuity of all present and future rights to this\nsoftware under copyright law.\n\nTHE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,\nEXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\nIN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR\nOTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,\nARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR\nOTHER DEALINGS IN THE SOFTWARE.\n\nFor more information, please refer to <http://unlicense.org>\n
.venv\Lib\site-packages\filelock-3.18.0.dist-info\licenses\LICENSE
LICENSE
Other
1,210
0.8
0.083333
0
awesome-app
89
2024-10-24T17:45:22.092413
MIT
false
911690f51af322440237a253d695d19f
__all__ = ["FontBuilder"]\n\n"""\nThis module is *experimental*, meaning it still may evolve and change.\n\nThe `FontBuilder` class is a convenient helper to construct working TTF or\nOTF fonts from scratch.\n\nNote that the various setup methods cannot be called in arbitrary order,\ndue to various interdependencies between OpenType tables. Here is an order\nthat works:\n\n fb = FontBuilder(...)\n fb.setupGlyphOrder(...)\n fb.setupCharacterMap(...)\n fb.setupGlyf(...) --or-- fb.setupCFF(...)\n fb.setupHorizontalMetrics(...)\n fb.setupHorizontalHeader()\n fb.setupNameTable(...)\n fb.setupOS2()\n fb.addOpenTypeFeatures(...)\n fb.setupPost()\n fb.save(...)\n\nHere is how to build a minimal TTF:\n\n```python\nfrom fontTools.fontBuilder import FontBuilder\nfrom fontTools.pens.ttGlyphPen import TTGlyphPen\n\n\ndef drawTestGlyph(pen):\n pen.moveTo((100, 100))\n pen.lineTo((100, 1000))\n pen.qCurveTo((200, 900), (400, 900), (500, 1000))\n pen.lineTo((500, 100))\n pen.closePath()\n\n\nfb = FontBuilder(1024, isTTF=True)\nfb.setupGlyphOrder([".notdef", ".null", "space", "A", "a"])\nfb.setupCharacterMap({32: "space", 65: "A", 97: "a"})\nadvanceWidths = {".notdef": 600, "space": 500, "A": 600, "a": 600, ".null": 0}\n\nfamilyName = "HelloTestFont"\nstyleName = "TotallyNormal"\nversion = "0.1"\n\nnameStrings = dict(\n familyName=dict(en=familyName, nl="HalloTestFont"),\n styleName=dict(en=styleName, nl="TotaalNormaal"),\n uniqueFontIdentifier="fontBuilder: " + familyName + "." + styleName,\n fullName=familyName + "-" + styleName,\n psName=familyName + "-" + styleName,\n version="Version " + version,\n)\n\npen = TTGlyphPen(None)\ndrawTestGlyph(pen)\nglyph = pen.glyph()\nglyphs = {".notdef": glyph, "space": glyph, "A": glyph, "a": glyph, ".null": glyph}\nfb.setupGlyf(glyphs)\nmetrics = {}\nglyphTable = fb.font["glyf"]\nfor gn, advanceWidth in advanceWidths.items():\n metrics[gn] = (advanceWidth, glyphTable[gn].xMin)\nfb.setupHorizontalMetrics(metrics)\nfb.setupHorizontalHeader(ascent=824, descent=-200)\nfb.setupNameTable(nameStrings)\nfb.setupOS2(sTypoAscender=824, usWinAscent=824, usWinDescent=200)\nfb.setupPost()\nfb.save("test.ttf")\n```\n\nAnd here's how to build a minimal OTF:\n\n```python\nfrom fontTools.fontBuilder import FontBuilder\nfrom fontTools.pens.t2CharStringPen import T2CharStringPen\n\n\ndef drawTestGlyph(pen):\n pen.moveTo((100, 100))\n pen.lineTo((100, 1000))\n pen.curveTo((200, 900), (400, 900), (500, 1000))\n pen.lineTo((500, 100))\n pen.closePath()\n\n\nfb = FontBuilder(1024, isTTF=False)\nfb.setupGlyphOrder([".notdef", ".null", "space", "A", "a"])\nfb.setupCharacterMap({32: "space", 65: "A", 97: "a"})\nadvanceWidths = {".notdef": 600, "space": 500, "A": 600, "a": 600, ".null": 0}\n\nfamilyName = "HelloTestFont"\nstyleName = "TotallyNormal"\nversion = "0.1"\n\nnameStrings = dict(\n familyName=dict(en=familyName, nl="HalloTestFont"),\n styleName=dict(en=styleName, nl="TotaalNormaal"),\n uniqueFontIdentifier="fontBuilder: " + familyName + "." 
+ styleName,\n fullName=familyName + "-" + styleName,\n psName=familyName + "-" + styleName,\n version="Version " + version,\n)\n\npen = T2CharStringPen(600, None)\ndrawTestGlyph(pen)\ncharString = pen.getCharString()\ncharStrings = {\n ".notdef": charString,\n "space": charString,\n "A": charString,\n "a": charString,\n ".null": charString,\n}\nfb.setupCFF(nameStrings["psName"], {"FullName": nameStrings["psName"]}, charStrings, {})\nlsb = {gn: cs.calcBounds(None)[0] for gn, cs in charStrings.items()}\nmetrics = {}\nfor gn, advanceWidth in advanceWidths.items():\n metrics[gn] = (advanceWidth, lsb[gn])\nfb.setupHorizontalMetrics(metrics)\nfb.setupHorizontalHeader(ascent=824, descent=200)\nfb.setupNameTable(nameStrings)\nfb.setupOS2(sTypoAscender=824, usWinAscent=824, usWinDescent=200)\nfb.setupPost()\nfb.save("test.otf")\n```\n"""\n\nfrom .ttLib import TTFont, newTable\nfrom .ttLib.tables._c_m_a_p import cmap_classes\nfrom .ttLib.tables._g_l_y_f import flagCubic\nfrom .ttLib.tables.O_S_2f_2 import Panose\nfrom .misc.timeTools import timestampNow\nimport struct\nfrom collections import OrderedDict\n\n\n_headDefaults = dict(\n tableVersion=1.0,\n fontRevision=1.0,\n checkSumAdjustment=0,\n magicNumber=0x5F0F3CF5,\n flags=0x0003,\n unitsPerEm=1000,\n created=0,\n modified=0,\n xMin=0,\n yMin=0,\n xMax=0,\n yMax=0,\n macStyle=0,\n lowestRecPPEM=3,\n fontDirectionHint=2,\n indexToLocFormat=0,\n glyphDataFormat=0,\n)\n\n_maxpDefaultsTTF = dict(\n tableVersion=0x00010000,\n numGlyphs=0,\n maxPoints=0,\n maxContours=0,\n maxCompositePoints=0,\n maxCompositeContours=0,\n maxZones=2,\n maxTwilightPoints=0,\n maxStorage=0,\n maxFunctionDefs=0,\n maxInstructionDefs=0,\n maxStackElements=0,\n maxSizeOfInstructions=0,\n maxComponentElements=0,\n maxComponentDepth=0,\n)\n_maxpDefaultsOTF = dict(\n tableVersion=0x00005000,\n numGlyphs=0,\n)\n\n_postDefaults = dict(\n formatType=3.0,\n italicAngle=0,\n underlinePosition=0,\n underlineThickness=0,\n isFixedPitch=0,\n minMemType42=0,\n maxMemType42=0,\n minMemType1=0,\n maxMemType1=0,\n)\n\n_hheaDefaults = dict(\n tableVersion=0x00010000,\n ascent=0,\n descent=0,\n lineGap=0,\n advanceWidthMax=0,\n minLeftSideBearing=0,\n minRightSideBearing=0,\n xMaxExtent=0,\n caretSlopeRise=1,\n caretSlopeRun=0,\n caretOffset=0,\n reserved0=0,\n reserved1=0,\n reserved2=0,\n reserved3=0,\n metricDataFormat=0,\n numberOfHMetrics=0,\n)\n\n_vheaDefaults = dict(\n tableVersion=0x00010000,\n ascent=0,\n descent=0,\n lineGap=0,\n advanceHeightMax=0,\n minTopSideBearing=0,\n minBottomSideBearing=0,\n yMaxExtent=0,\n caretSlopeRise=0,\n caretSlopeRun=0,\n reserved0=0,\n reserved1=0,\n reserved2=0,\n reserved3=0,\n reserved4=0,\n metricDataFormat=0,\n numberOfVMetrics=0,\n)\n\n_nameIDs = dict(\n copyright=0,\n familyName=1,\n styleName=2,\n uniqueFontIdentifier=3,\n fullName=4,\n version=5,\n psName=6,\n trademark=7,\n manufacturer=8,\n designer=9,\n description=10,\n vendorURL=11,\n designerURL=12,\n licenseDescription=13,\n licenseInfoURL=14,\n # reserved = 15,\n typographicFamily=16,\n typographicSubfamily=17,\n compatibleFullName=18,\n sampleText=19,\n postScriptCIDFindfontName=20,\n wwsFamilyName=21,\n wwsSubfamilyName=22,\n lightBackgroundPalette=23,\n darkBackgroundPalette=24,\n variationsPostScriptNamePrefix=25,\n)\n\n# to insert in setupNameTable doc string:\n# print("\n".join(("%s (nameID %s)" % (k, v)) for k, v in sorted(_nameIDs.items(), key=lambda x: x[1])))\n\n_panoseDefaults = Panose()\n\n_OS2Defaults = dict(\n version=3,\n xAvgCharWidth=0,\n usWeightClass=400,\n 
usWidthClass=5,\n fsType=0x0004, # default: Preview & Print embedding\n ySubscriptXSize=0,\n ySubscriptYSize=0,\n ySubscriptXOffset=0,\n ySubscriptYOffset=0,\n ySuperscriptXSize=0,\n ySuperscriptYSize=0,\n ySuperscriptXOffset=0,\n ySuperscriptYOffset=0,\n yStrikeoutSize=0,\n yStrikeoutPosition=0,\n sFamilyClass=0,\n panose=_panoseDefaults,\n ulUnicodeRange1=0,\n ulUnicodeRange2=0,\n ulUnicodeRange3=0,\n ulUnicodeRange4=0,\n achVendID="????",\n fsSelection=0,\n usFirstCharIndex=0,\n usLastCharIndex=0,\n sTypoAscender=0,\n sTypoDescender=0,\n sTypoLineGap=0,\n usWinAscent=0,\n usWinDescent=0,\n ulCodePageRange1=0,\n ulCodePageRange2=0,\n sxHeight=0,\n sCapHeight=0,\n usDefaultChar=0, # .notdef\n usBreakChar=32, # space\n usMaxContext=0,\n usLowerOpticalPointSize=0,\n usUpperOpticalPointSize=0,\n)\n\n\nclass FontBuilder(object):\n def __init__(self, unitsPerEm=None, font=None, isTTF=True, glyphDataFormat=0):\n """Initialize a FontBuilder instance.\n\n If the `font` argument is not given, a new `TTFont` will be\n constructed, and `unitsPerEm` must be given. If `isTTF` is True,\n the font will be a glyf-based TTF; if `isTTF` is False it will be\n a CFF-based OTF.\n\n The `glyphDataFormat` argument corresponds to the `head` table field\n that defines the format of the TrueType `glyf` table (default=0).\n TrueType glyphs historically can only contain quadratic splines and static\n components, but there's a proposal to add support for cubic Bezier curves as well\n as variable composites/components at\n https://github.com/harfbuzz/boring-expansion-spec/blob/main/glyf1.md\n You can experiment with the new features by setting `glyphDataFormat` to 1.\n A ValueError is raised if `glyphDataFormat` is left at 0 but glyphs are added\n that contain cubic splines or varcomposites. This is to prevent accidentally\n creating fonts that are incompatible with existing TrueType implementations.\n\n If `font` is given, it must be a `TTFont` instance and `unitsPerEm`\n must _not_ be given. The `isTTF` and `glyphDataFormat` arguments will be ignored.\n """\n if font is None:\n self.font = TTFont(recalcTimestamp=False)\n self.isTTF = isTTF\n now = timestampNow()\n assert unitsPerEm is not None\n self.setupHead(\n unitsPerEm=unitsPerEm,\n created=now,\n modified=now,\n glyphDataFormat=glyphDataFormat,\n )\n self.setupMaxp()\n else:\n assert unitsPerEm is None\n self.font = font\n self.isTTF = "glyf" in font\n\n def save(self, file):\n """Save the font. 
The 'file' argument can be either a pathname or a\n writable file object.\n """\n self.font.save(file)\n\n def _initTableWithValues(self, tableTag, defaults, values):\n table = self.font[tableTag] = newTable(tableTag)\n for k, v in defaults.items():\n setattr(table, k, v)\n for k, v in values.items():\n setattr(table, k, v)\n return table\n\n def _updateTableWithValues(self, tableTag, values):\n table = self.font[tableTag]\n for k, v in values.items():\n setattr(table, k, v)\n\n def setupHead(self, **values):\n """Create a new `head` table and initialize it with default values,\n which can be overridden by keyword arguments.\n """\n self._initTableWithValues("head", _headDefaults, values)\n\n def updateHead(self, **values):\n """Update the head table with the fields and values passed as\n keyword arguments.\n """\n self._updateTableWithValues("head", values)\n\n def setupGlyphOrder(self, glyphOrder):\n """Set the glyph order for the font."""\n self.font.setGlyphOrder(glyphOrder)\n\n def setupCharacterMap(self, cmapping, uvs=None, allowFallback=False):\n """Build the `cmap` table for the font. The `cmapping` argument should\n be a dict mapping unicode code points as integers to glyph names.\n\n The `uvs` argument, when passed, must be a list of tuples, describing\n Unicode Variation Sequences. These tuples have three elements:\n (unicodeValue, variationSelector, glyphName)\n `unicodeValue` and `variationSelector` are integer code points.\n `glyphName` may be None, to indicate this is the default variation.\n Text processors will then use the cmap to find the glyph name.\n Each Unicode Variation Sequence should be an officially supported\n sequence, but this is not policed.\n """\n subTables = []\n highestUnicode = max(cmapping) if cmapping else 0\n if highestUnicode > 0xFFFF:\n cmapping_3_1 = dict((k, v) for k, v in cmapping.items() if k < 0x10000)\n subTable_3_10 = buildCmapSubTable(cmapping, 12, 3, 10)\n subTables.append(subTable_3_10)\n else:\n cmapping_3_1 = cmapping\n format = 4\n subTable_3_1 = buildCmapSubTable(cmapping_3_1, format, 3, 1)\n try:\n subTable_3_1.compile(self.font)\n except struct.error:\n # format 4 overflowed, fall back to format 12\n if not allowFallback:\n raise ValueError(\n "cmap format 4 subtable overflowed; sort glyph order by unicode to fix."\n )\n format = 12\n subTable_3_1 = buildCmapSubTable(cmapping_3_1, format, 3, 1)\n subTables.append(subTable_3_1)\n subTable_0_3 = buildCmapSubTable(cmapping_3_1, format, 0, 3)\n subTables.append(subTable_0_3)\n\n if uvs is not None:\n uvsDict = {}\n for unicodeValue, variationSelector, glyphName in uvs:\n if cmapping.get(unicodeValue) == glyphName:\n # this is a default variation\n glyphName = None\n if variationSelector not in uvsDict:\n uvsDict[variationSelector] = []\n uvsDict[variationSelector].append((unicodeValue, glyphName))\n uvsSubTable = buildCmapSubTable({}, 14, 0, 5)\n uvsSubTable.uvsDict = uvsDict\n subTables.append(uvsSubTable)\n\n self.font["cmap"] = newTable("cmap")\n self.font["cmap"].tableVersion = 0\n self.font["cmap"].tables = subTables\n\n def setupNameTable(self, nameStrings, windows=True, mac=True):\n """Create the `name` table for the font. The `nameStrings` argument must\n be a dict, mapping nameIDs or descriptive names for the nameIDs to name\n record values. 
A value is either a string, or a dict, mapping language codes\n to strings, to allow localized name table entries.\n\n By default, both Windows (platformID=3) and Macintosh (platformID=1) name\n records are added, unless any of `windows` or `mac` arguments is False.\n\n The following descriptive names are available for nameIDs:\n\n copyright (nameID 0)\n familyName (nameID 1)\n styleName (nameID 2)\n uniqueFontIdentifier (nameID 3)\n fullName (nameID 4)\n version (nameID 5)\n psName (nameID 6)\n trademark (nameID 7)\n manufacturer (nameID 8)\n designer (nameID 9)\n description (nameID 10)\n vendorURL (nameID 11)\n designerURL (nameID 12)\n licenseDescription (nameID 13)\n licenseInfoURL (nameID 14)\n typographicFamily (nameID 16)\n typographicSubfamily (nameID 17)\n compatibleFullName (nameID 18)\n sampleText (nameID 19)\n postScriptCIDFindfontName (nameID 20)\n wwsFamilyName (nameID 21)\n wwsSubfamilyName (nameID 22)\n lightBackgroundPalette (nameID 23)\n darkBackgroundPalette (nameID 24)\n variationsPostScriptNamePrefix (nameID 25)\n """\n nameTable = self.font["name"] = newTable("name")\n nameTable.names = []\n\n for nameName, nameValue in nameStrings.items():\n if isinstance(nameName, int):\n nameID = nameName\n else:\n nameID = _nameIDs[nameName]\n if isinstance(nameValue, str):\n nameValue = dict(en=nameValue)\n nameTable.addMultilingualName(\n nameValue, ttFont=self.font, nameID=nameID, windows=windows, mac=mac\n )\n\n def setupOS2(self, **values):\n """Create a new `OS/2` table and initialize it with default values,\n which can be overridden by keyword arguments.\n """\n self._initTableWithValues("OS/2", _OS2Defaults, values)\n if "xAvgCharWidth" not in values:\n assert (\n "hmtx" in self.font\n ), "the 'hmtx' table must be setup before the 'OS/2' table"\n self.font["OS/2"].recalcAvgCharWidth(self.font)\n if not (\n "ulUnicodeRange1" in values\n or "ulUnicodeRange2" in values\n or "ulUnicodeRange3" in values\n or "ulUnicodeRange3" in values\n ):\n assert (\n "cmap" in self.font\n ), "the 'cmap' table must be setup before the 'OS/2' table"\n self.font["OS/2"].recalcUnicodeRanges(self.font)\n\n def setupCFF(self, psName, fontInfo, charStringsDict, privateDict):\n from .cffLib import (\n CFFFontSet,\n TopDictIndex,\n TopDict,\n CharStrings,\n GlobalSubrsIndex,\n PrivateDict,\n )\n\n assert not self.isTTF\n self.font.sfntVersion = "OTTO"\n fontSet = CFFFontSet()\n fontSet.major = 1\n fontSet.minor = 0\n fontSet.otFont = self.font\n fontSet.fontNames = [psName]\n fontSet.topDictIndex = TopDictIndex()\n\n globalSubrs = GlobalSubrsIndex()\n fontSet.GlobalSubrs = globalSubrs\n private = PrivateDict()\n for key, value in privateDict.items():\n setattr(private, key, value)\n fdSelect = None\n fdArray = None\n\n topDict = TopDict()\n topDict.charset = self.font.getGlyphOrder()\n topDict.Private = private\n topDict.GlobalSubrs = fontSet.GlobalSubrs\n for key, value in fontInfo.items():\n setattr(topDict, key, value)\n if "FontMatrix" not in fontInfo:\n scale = 1 / self.font["head"].unitsPerEm\n topDict.FontMatrix = [scale, 0, 0, scale, 0, 0]\n\n charStrings = CharStrings(\n None, topDict.charset, globalSubrs, private, fdSelect, fdArray\n )\n for glyphName, charString in charStringsDict.items():\n charString.private = private\n charString.globalSubrs = globalSubrs\n charStrings[glyphName] = charString\n topDict.CharStrings = charStrings\n\n fontSet.topDictIndex.append(topDict)\n\n self.font["CFF "] = newTable("CFF ")\n self.font["CFF "].cff = fontSet\n\n def setupCFF2(self, charStringsDict, 
fdArrayList=None, regions=None):\n from .cffLib import (\n CFFFontSet,\n TopDictIndex,\n TopDict,\n CharStrings,\n GlobalSubrsIndex,\n PrivateDict,\n FDArrayIndex,\n FontDict,\n )\n\n assert not self.isTTF\n self.font.sfntVersion = "OTTO"\n fontSet = CFFFontSet()\n fontSet.major = 2\n fontSet.minor = 0\n\n cff2GetGlyphOrder = self.font.getGlyphOrder\n fontSet.topDictIndex = TopDictIndex(None, cff2GetGlyphOrder, None)\n\n globalSubrs = GlobalSubrsIndex()\n fontSet.GlobalSubrs = globalSubrs\n\n if fdArrayList is None:\n fdArrayList = [{}]\n fdSelect = None\n fdArray = FDArrayIndex()\n fdArray.strings = None\n fdArray.GlobalSubrs = globalSubrs\n for privateDict in fdArrayList:\n fontDict = FontDict()\n fontDict.setCFF2(True)\n private = PrivateDict()\n for key, value in privateDict.items():\n setattr(private, key, value)\n fontDict.Private = private\n fdArray.append(fontDict)\n\n topDict = TopDict()\n topDict.cff2GetGlyphOrder = cff2GetGlyphOrder\n topDict.FDArray = fdArray\n scale = 1 / self.font["head"].unitsPerEm\n topDict.FontMatrix = [scale, 0, 0, scale, 0, 0]\n\n private = fdArray[0].Private\n charStrings = CharStrings(None, None, globalSubrs, private, fdSelect, fdArray)\n for glyphName, charString in charStringsDict.items():\n charString.private = private\n charString.globalSubrs = globalSubrs\n charStrings[glyphName] = charString\n topDict.CharStrings = charStrings\n\n fontSet.topDictIndex.append(topDict)\n\n self.font["CFF2"] = newTable("CFF2")\n self.font["CFF2"].cff = fontSet\n\n if regions:\n self.setupCFF2Regions(regions)\n\n def setupCFF2Regions(self, regions):\n from .varLib.builder import buildVarRegionList, buildVarData, buildVarStore\n from .cffLib import VarStoreData\n\n assert "fvar" in self.font, "fvar must to be set up first"\n assert "CFF2" in self.font, "CFF2 must to be set up first"\n axisTags = [a.axisTag for a in self.font["fvar"].axes]\n varRegionList = buildVarRegionList(regions, axisTags)\n varData = buildVarData(list(range(len(regions))), None, optimize=False)\n varStore = buildVarStore(varRegionList, [varData])\n vstore = VarStoreData(otVarStore=varStore)\n topDict = self.font["CFF2"].cff.topDictIndex[0]\n topDict.VarStore = vstore\n for fontDict in topDict.FDArray:\n fontDict.Private.vstore = vstore\n\n def setupGlyf(self, glyphs, calcGlyphBounds=True, validateGlyphFormat=True):\n """Create the `glyf` table from a dict, that maps glyph names\n to `fontTools.ttLib.tables._g_l_y_f.Glyph` objects, for example\n as made by `fontTools.pens.ttGlyphPen.TTGlyphPen`.\n\n If `calcGlyphBounds` is True, the bounds of all glyphs will be\n calculated. 
Only pass False if your glyph objects already have\n their bounding box values set.\n\n If `validateGlyphFormat` is True, raise ValueError if any of the glyphs contains\n cubic curves or is a variable composite but head.glyphDataFormat=0.\n Set it to False to skip the check if you know in advance all the glyphs are\n compatible with the specified glyphDataFormat.\n """\n assert self.isTTF\n\n if validateGlyphFormat and self.font["head"].glyphDataFormat == 0:\n for name, g in glyphs.items():\n if g.numberOfContours > 0 and any(f & flagCubic for f in g.flags):\n raise ValueError(\n f"Glyph {name!r} has cubic Bezier outlines, but glyphDataFormat=0; "\n "either convert to quadratics with cu2qu or set glyphDataFormat=1."\n )\n\n self.font["loca"] = newTable("loca")\n self.font["glyf"] = newTable("glyf")\n self.font["glyf"].glyphs = glyphs\n if hasattr(self.font, "glyphOrder"):\n self.font["glyf"].glyphOrder = self.font.glyphOrder\n if calcGlyphBounds:\n self.calcGlyphBounds()\n\n def setupFvar(self, axes, instances):\n """Adds an font variations table to the font.\n\n Args:\n axes (list): See below.\n instances (list): See below.\n\n ``axes`` should be a list of axes, with each axis either supplied as\n a py:class:`.designspaceLib.AxisDescriptor` object, or a tuple in the\n format ```tupletag, minValue, defaultValue, maxValue, name``.\n The ``name`` is either a string, or a dict, mapping language codes\n to strings, to allow localized name table entries.\n\n ```instances`` should be a list of instances, with each instance either\n supplied as a py:class:`.designspaceLib.InstanceDescriptor` object, or a\n dict with keys ``location`` (mapping of axis tags to float values),\n ``stylename`` and (optionally) ``postscriptfontname``.\n The ``stylename`` is either a string, or a dict, mapping language codes\n to strings, to allow localized name table entries.\n """\n\n addFvar(self.font, axes, instances)\n\n def setupAvar(self, axes, mappings=None):\n """Adds an axis variations table to the font.\n\n Args:\n axes (list): A list of py:class:`.designspaceLib.AxisDescriptor` objects.\n """\n from .varLib import _add_avar\n\n if "fvar" not in self.font:\n raise KeyError("'fvar' table is missing; can't add 'avar'.")\n\n axisTags = [axis.axisTag for axis in self.font["fvar"].axes]\n axes = OrderedDict(enumerate(axes)) # Only values are used\n _add_avar(self.font, axes, mappings, axisTags)\n\n def setupGvar(self, variations):\n gvar = self.font["gvar"] = newTable("gvar")\n gvar.version = 1\n gvar.reserved = 0\n gvar.variations = variations\n\n def setupGVAR(self, variations):\n gvar = self.font["GVAR"] = newTable("GVAR")\n gvar.version = 1\n gvar.reserved = 0\n gvar.variations = variations\n\n def calcGlyphBounds(self):\n """Calculate the bounding boxes of all glyphs in the `glyf` table.\n This is usually not called explicitly by client code.\n """\n glyphTable = self.font["glyf"]\n for glyph in glyphTable.glyphs.values():\n glyph.recalcBounds(glyphTable)\n\n def setupHorizontalMetrics(self, metrics):\n """Create a new `hmtx` table, for horizontal metrics.\n\n The `metrics` argument must be a dict, mapping glyph names to\n `(width, leftSidebearing)` tuples.\n """\n self.setupMetrics("hmtx", metrics)\n\n def setupVerticalMetrics(self, metrics):\n """Create a new `vmtx` table, for horizontal metrics.\n\n The `metrics` argument must be a dict, mapping glyph names to\n `(height, topSidebearing)` tuples.\n """\n self.setupMetrics("vmtx", metrics)\n\n def setupMetrics(self, tableTag, metrics):\n """See 
`setupHorizontalMetrics()` and `setupVerticalMetrics()`."""\n assert tableTag in ("hmtx", "vmtx")\n mtxTable = self.font[tableTag] = newTable(tableTag)\n roundedMetrics = {}\n for gn in metrics:\n w, lsb = metrics[gn]\n roundedMetrics[gn] = int(round(w)), int(round(lsb))\n mtxTable.metrics = roundedMetrics\n\n def setupHorizontalHeader(self, **values):\n """Create a new `hhea` table initialize it with default values,\n which can be overridden by keyword arguments.\n """\n self._initTableWithValues("hhea", _hheaDefaults, values)\n\n def setupVerticalHeader(self, **values):\n """Create a new `vhea` table initialize it with default values,\n which can be overridden by keyword arguments.\n """\n self._initTableWithValues("vhea", _vheaDefaults, values)\n\n def setupVerticalOrigins(self, verticalOrigins, defaultVerticalOrigin=None):\n """Create a new `VORG` table. The `verticalOrigins` argument must be\n a dict, mapping glyph names to vertical origin values.\n\n The `defaultVerticalOrigin` argument should be the most common vertical\n origin value. If omitted, this value will be derived from the actual\n values in the `verticalOrigins` argument.\n """\n if defaultVerticalOrigin is None:\n # find the most frequent vorg value\n bag = {}\n for gn in verticalOrigins:\n vorg = verticalOrigins[gn]\n if vorg not in bag:\n bag[vorg] = 1\n else:\n bag[vorg] += 1\n defaultVerticalOrigin = sorted(\n bag, key=lambda vorg: bag[vorg], reverse=True\n )[0]\n self._initTableWithValues(\n "VORG",\n {},\n dict(VOriginRecords={}, defaultVertOriginY=defaultVerticalOrigin),\n )\n vorgTable = self.font["VORG"]\n vorgTable.majorVersion = 1\n vorgTable.minorVersion = 0\n for gn in verticalOrigins:\n vorgTable[gn] = verticalOrigins[gn]\n\n def setupPost(self, keepGlyphNames=True, **values):\n """Create a new `post` table and initialize it with default values,\n which can be overridden by keyword arguments.\n """\n isCFF2 = "CFF2" in self.font\n postTable = self._initTableWithValues("post", _postDefaults, values)\n if (self.isTTF or isCFF2) and keepGlyphNames:\n postTable.formatType = 2.0\n postTable.extraNames = []\n postTable.mapping = {}\n else:\n postTable.formatType = 3.0\n\n def setupMaxp(self):\n """Create a new `maxp` table. This is called implicitly by FontBuilder\n itself and is usually not called by client code.\n """\n if self.isTTF:\n defaults = _maxpDefaultsTTF\n else:\n defaults = _maxpDefaultsOTF\n self._initTableWithValues("maxp", defaults, {})\n\n def setupDummyDSIG(self):\n """This adds an empty DSIG table to the font to make some MS applications\n happy. This does not properly sign the font.\n """\n values = dict(\n ulVersion=1,\n usFlag=0,\n usNumSigs=0,\n signatureRecords=[],\n )\n self._initTableWithValues("DSIG", {}, values)\n\n def addOpenTypeFeatures(self, features, filename=None, tables=None, debug=False):\n """Add OpenType features to the font from a string containing\n Feature File syntax.\n\n The `filename` argument is used in error messages and to determine\n where to look for "include" files.\n\n The optional `tables` argument can be a list of OTL tables tags to\n build, allowing the caller to only build selected OTL tables. 
See\n `fontTools.feaLib` for details.\n\n The optional `debug` argument controls whether to add source debugging\n information to the font in the `Debg` table.\n """\n from .feaLib.builder import addOpenTypeFeaturesFromString\n\n addOpenTypeFeaturesFromString(\n self.font, features, filename=filename, tables=tables, debug=debug\n )\n\n def addFeatureVariations(self, conditionalSubstitutions, featureTag="rvrn"):\n """Add conditional substitutions to a Variable Font.\n\n See `fontTools.varLib.featureVars.addFeatureVariations`.\n """\n from .varLib import featureVars\n\n if "fvar" not in self.font:\n raise KeyError("'fvar' table is missing; can't add FeatureVariations.")\n\n featureVars.addFeatureVariations(\n self.font, conditionalSubstitutions, featureTag=featureTag\n )\n\n def setupCOLR(\n self,\n colorLayers,\n version=None,\n varStore=None,\n varIndexMap=None,\n clipBoxes=None,\n allowLayerReuse=True,\n ):\n """Build new COLR table using color layers dictionary.\n\n Cf. `fontTools.colorLib.builder.buildCOLR`.\n """\n from fontTools.colorLib.builder import buildCOLR\n\n glyphMap = self.font.getReverseGlyphMap()\n self.font["COLR"] = buildCOLR(\n colorLayers,\n version=version,\n glyphMap=glyphMap,\n varStore=varStore,\n varIndexMap=varIndexMap,\n clipBoxes=clipBoxes,\n allowLayerReuse=allowLayerReuse,\n )\n\n def setupCPAL(\n self,\n palettes,\n paletteTypes=None,\n paletteLabels=None,\n paletteEntryLabels=None,\n ):\n """Build new CPAL table using list of palettes.\n\n Optionally build CPAL v1 table using paletteTypes, paletteLabels and\n paletteEntryLabels.\n\n Cf. `fontTools.colorLib.builder.buildCPAL`.\n """\n from fontTools.colorLib.builder import buildCPAL\n\n self.font["CPAL"] = buildCPAL(\n palettes,\n paletteTypes=paletteTypes,\n paletteLabels=paletteLabels,\n paletteEntryLabels=paletteEntryLabels,\n nameTable=self.font.get("name"),\n )\n\n def setupStat(self, axes, locations=None, elidedFallbackName=2):\n """Build a new 'STAT' table.\n\n See `fontTools.otlLib.builder.buildStatTable` for details about\n the arguments.\n """\n from .otlLib.builder import buildStatTable\n\n assert "name" in self.font, "name must to be set up first"\n\n buildStatTable(\n self.font,\n axes,\n locations,\n elidedFallbackName,\n macNames=any(nr.platformID == 1 for nr in self.font["name"].names),\n )\n\n\ndef buildCmapSubTable(cmapping, format, platformID, platEncID):\n subTable = cmap_classes[format](format)\n subTable.cmap = cmapping\n subTable.platformID = platformID\n subTable.platEncID = platEncID\n subTable.language = 0\n return subTable\n\n\ndef addFvar(font, axes, instances):\n from .ttLib.tables._f_v_a_r import Axis, NamedInstance\n\n assert axes\n\n fvar = newTable("fvar")\n nameTable = font["name"]\n\n # if there are not currently any mac names don't add them here, that's inconsistent\n # https://github.com/fonttools/fonttools/issues/683\n macNames = any(nr.platformID == 1 for nr in getattr(nameTable, "names", ()))\n\n # we have all the best ways to express mac names\n platforms = ((3, 1, 0x409),)\n if macNames:\n platforms = ((1, 0, 0),) + platforms\n\n for axis_def in axes:\n axis = Axis()\n\n if isinstance(axis_def, tuple):\n (\n axis.axisTag,\n axis.minValue,\n axis.defaultValue,\n axis.maxValue,\n name,\n ) = axis_def\n else:\n (axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue, name) = (\n axis_def.tag,\n axis_def.minimum,\n axis_def.default,\n axis_def.maximum,\n axis_def.name,\n )\n if axis_def.hidden:\n axis.flags = 0x0001 # HIDDEN_AXIS\n\n if isinstance(name, str):\n 
name = dict(en=name)\n\n axis.axisNameID = nameTable.addMultilingualName(name, ttFont=font, mac=macNames)\n fvar.axes.append(axis)\n\n for instance in instances:\n if isinstance(instance, dict):\n coordinates = instance["location"]\n name = instance["stylename"]\n psname = instance.get("postscriptfontname")\n else:\n coordinates = instance.location\n name = instance.localisedStyleName or instance.styleName\n psname = instance.postScriptFontName\n\n if isinstance(name, str):\n name = dict(en=name)\n\n inst = NamedInstance()\n inst.subfamilyNameID = nameTable.addMultilingualName(\n name, ttFont=font, mac=macNames\n )\n if psname is not None:\n inst.postscriptNameID = nameTable.addName(psname, platforms=platforms)\n inst.coordinates = coordinates\n fvar.instances.append(inst)\n\n font["fvar"] = fvar\n
.venv\Lib\site-packages\fontTools\fontBuilder.py
fontBuilder.py
Python
35,144
0.95
0.120316
0.010274
python-kit
823
2025-04-24T22:54:53.905095
GPL-3.0
false
dcfc02f78f2fd3414cf5e1c1768faa01
import pkgutil\nimport sys\nimport fontTools\nimport importlib\nimport os\nfrom pathlib import Path\n\n\ndef main():\n """Show this help"""\n path = fontTools.__path__\n descriptions = {}\n for pkg in sorted(\n mod.name\n for mod in pkgutil.walk_packages([fontTools.__path__[0]], prefix="fontTools.")\n ):\n try:\n imports = __import__(pkg, globals(), locals(), ["main"])\n except ImportError as e:\n continue\n try:\n description = imports.main.__doc__\n # Cython modules seem to return "main()" as the docstring\n if description and description != "main()":\n pkg = pkg.replace("fontTools.", "").replace(".__main__", "")\n # show the docstring's first line only\n descriptions[pkg] = description.splitlines()[0]\n except AttributeError as e:\n pass\n for pkg, description in descriptions.items():\n print("fonttools %-25s %s" % (pkg, description), file=sys.stderr)\n\n\nif __name__ == "__main__":\n print("fonttools v%s\n" % fontTools.__version__, file=sys.stderr)\n main()\n
.venv\Lib\site-packages\fontTools\help.py
help.py
Python
1,161
0.95
0.222222
0.0625
node-utils
620
2024-10-30T12:40:10.291749
MIT
false
54bc582ccc7b452435235a9b9dad83a3
"""Module for reading TFM (TeX Font Metrics) files.\n\nThe TFM format is described in the TFtoPL WEB source code, whose typeset form\ncan be found on `CTAN <http://mirrors.ctan.org/info/knuth-pdf/texware/tftopl.pdf>`_.\n\n >>> from fontTools.tfmLib import TFM\n >>> tfm = TFM("Tests/tfmLib/data/cmr10.tfm")\n >>>\n >>> # Accessing an attribute gets you metadata.\n >>> tfm.checksum\n 1274110073\n >>> tfm.designsize\n 10.0\n >>> tfm.codingscheme\n 'TeX text'\n >>> tfm.family\n 'CMR'\n >>> tfm.seven_bit_safe_flag\n False\n >>> tfm.face\n 234\n >>> tfm.extraheader\n {}\n >>> tfm.fontdimens\n {'SLANT': 0.0, 'SPACE': 0.33333396911621094, 'STRETCH': 0.16666698455810547, 'SHRINK': 0.11111164093017578, 'XHEIGHT': 0.4305553436279297, 'QUAD': 1.0000028610229492, 'EXTRASPACE': 0.11111164093017578}\n >>> # Accessing a character gets you its metrics.\n >>> # “width” is always available, other metrics are available only when\n >>> # applicable. All values are relative to “designsize”.\n >>> tfm.chars[ord("g")]\n {'width': 0.5000019073486328, 'height': 0.4305553436279297, 'depth': 0.1944446563720703, 'italic': 0.013888359069824219}\n >>> # Kerning and ligature can be accessed as well.\n >>> tfm.kerning[ord("c")]\n {104: -0.02777862548828125, 107: -0.02777862548828125}\n >>> tfm.ligatures[ord("f")]\n {105: ('LIG', 12), 102: ('LIG', 11), 108: ('LIG', 13)}\n"""\n\nfrom types import SimpleNamespace\n\nfrom fontTools.misc.sstruct import calcsize, unpack, unpack2\n\nSIZES_FORMAT = """\n >\n lf: h # length of the entire file, in words\n lh: h # length of the header data, in words\n bc: h # smallest character code in the font\n ec: h # largest character code in the font\n nw: h # number of words in the width table\n nh: h # number of words in the height table\n nd: h # number of words in the depth table\n ni: h # number of words in the italic correction table\n nl: h # number of words in the ligature/kern table\n nk: h # number of words in the kern table\n ne: h # number of words in the extensible character table\n np: h # number of font parameter words\n"""\n\nSIZES_SIZE = calcsize(SIZES_FORMAT)\n\nFIXED_FORMAT = "12.20F"\n\nHEADER_FORMAT1 = f"""\n >\n checksum: L\n designsize: {FIXED_FORMAT}\n"""\n\nHEADER_FORMAT2 = f"""\n {HEADER_FORMAT1}\n codingscheme: 40p\n"""\n\nHEADER_FORMAT3 = f"""\n {HEADER_FORMAT2}\n family: 20p\n"""\n\nHEADER_FORMAT4 = f"""\n {HEADER_FORMAT3}\n seven_bit_safe_flag: ?\n ignored: x\n ignored: x\n face: B\n"""\n\nHEADER_SIZE1 = calcsize(HEADER_FORMAT1)\nHEADER_SIZE2 = calcsize(HEADER_FORMAT2)\nHEADER_SIZE3 = calcsize(HEADER_FORMAT3)\nHEADER_SIZE4 = calcsize(HEADER_FORMAT4)\n\nLIG_KERN_COMMAND = """\n >\n skip_byte: B\n next_char: B\n op_byte: B\n remainder: B\n"""\n\nBASE_PARAMS = [\n "SLANT",\n "SPACE",\n "STRETCH",\n "SHRINK",\n "XHEIGHT",\n "QUAD",\n "EXTRASPACE",\n]\n\nMATHSY_PARAMS = [\n "NUM1",\n "NUM2",\n "NUM3",\n "DENOM1",\n "DENOM2",\n "SUP1",\n "SUP2",\n "SUP3",\n "SUB1",\n "SUB2",\n "SUPDROP",\n "SUBDROP",\n "DELIM1",\n "DELIM2",\n "AXISHEIGHT",\n]\n\nMATHEX_PARAMS = [\n "DEFAULTRULETHICKNESS",\n "BIGOPSPACING1",\n "BIGOPSPACING2",\n "BIGOPSPACING3",\n "BIGOPSPACING4",\n "BIGOPSPACING5",\n]\n\nVANILLA = 0\nMATHSY = 1\nMATHEX = 2\n\nUNREACHABLE = 0\nPASSTHROUGH = 1\nACCESSABLE = 2\n\nNO_TAG = 0\nLIG_TAG = 1\nLIST_TAG = 2\nEXT_TAG = 3\n\nSTOP_FLAG = 128\nKERN_FLAG = 128\n\n\nclass TFMException(Exception):\n def __init__(self, message):\n super().__init__(message)\n\n\nclass TFM:\n def __init__(self, file):\n self._read(file)\n\n def __repr__(self):\n return (\n f"<TFM"\n f" for 
{self.family}"\n f" in {self.codingscheme}"\n f" at {self.designsize:g}pt>"\n )\n\n def _read(self, file):\n if hasattr(file, "read"):\n data = file.read()\n else:\n with open(file, "rb") as fp:\n data = fp.read()\n\n self._data = data\n\n if len(data) < SIZES_SIZE:\n raise TFMException("Too short input file")\n\n sizes = SimpleNamespace()\n unpack2(SIZES_FORMAT, data, sizes)\n\n # Do some file structure sanity checks.\n # TeX and TFtoPL do additional functional checks and might even correct\n # “errors” in the input file, but we instead try to output the file as\n # it is as long as it is parsable, even if the data make no sense.\n\n if sizes.lf < 0:\n raise TFMException("The file claims to have negative or zero length!")\n\n if len(data) < sizes.lf * 4:\n raise TFMException("The file has fewer bytes than it claims!")\n\n for name, length in vars(sizes).items():\n if length < 0:\n raise TFMException("The subfile size: '{name}' is negative!")\n\n if sizes.lh < 2:\n raise TFMException(f"The header length is only {sizes.lh}!")\n\n if sizes.bc > sizes.ec + 1 or sizes.ec > 255:\n raise TFMException(\n f"The character code range {sizes.bc}..{sizes.ec} is illegal!"\n )\n\n if sizes.nw == 0 or sizes.nh == 0 or sizes.nd == 0 or sizes.ni == 0:\n raise TFMException("Incomplete subfiles for character dimensions!")\n\n if sizes.ne > 256:\n raise TFMException(f"There are {ne} extensible recipes!")\n\n if sizes.lf != (\n 6\n + sizes.lh\n + (sizes.ec - sizes.bc + 1)\n + sizes.nw\n + sizes.nh\n + sizes.nd\n + sizes.ni\n + sizes.nl\n + sizes.nk\n + sizes.ne\n + sizes.np\n ):\n raise TFMException("Subfile sizes don’t add up to the stated total")\n\n # Subfile offsets, used in the helper function below. These all are\n # 32-bit word offsets not 8-bit byte offsets.\n char_base = 6 + sizes.lh - sizes.bc\n width_base = char_base + sizes.ec + 1\n height_base = width_base + sizes.nw\n depth_base = height_base + sizes.nh\n italic_base = depth_base + sizes.nd\n lig_kern_base = italic_base + sizes.ni\n kern_base = lig_kern_base + sizes.nl\n exten_base = kern_base + sizes.nk\n param_base = exten_base + sizes.ne\n\n # Helper functions for accessing individual data. 
If this looks\n # nonidiomatic Python, I blame the effect of reading the literate WEB\n # documentation of TFtoPL.\n def char_info(c):\n return 4 * (char_base + c)\n\n def width_index(c):\n return data[char_info(c)]\n\n def noneexistent(c):\n return c < sizes.bc or c > sizes.ec or width_index(c) == 0\n\n def height_index(c):\n return data[char_info(c) + 1] // 16\n\n def depth_index(c):\n return data[char_info(c) + 1] % 16\n\n def italic_index(c):\n return data[char_info(c) + 2] // 4\n\n def tag(c):\n return data[char_info(c) + 2] % 4\n\n def remainder(c):\n return data[char_info(c) + 3]\n\n def width(c):\n r = 4 * (width_base + width_index(c))\n return read_fixed(r, "v")["v"]\n\n def height(c):\n r = 4 * (height_base + height_index(c))\n return read_fixed(r, "v")["v"]\n\n def depth(c):\n r = 4 * (depth_base + depth_index(c))\n return read_fixed(r, "v")["v"]\n\n def italic(c):\n r = 4 * (italic_base + italic_index(c))\n return read_fixed(r, "v")["v"]\n\n def exten(c):\n return 4 * (exten_base + remainder(c))\n\n def lig_step(i):\n return 4 * (lig_kern_base + i)\n\n def lig_kern_command(i):\n command = SimpleNamespace()\n unpack2(LIG_KERN_COMMAND, data[i:], command)\n return command\n\n def kern(i):\n r = 4 * (kern_base + i)\n return read_fixed(r, "v")["v"]\n\n def param(i):\n return 4 * (param_base + i)\n\n def read_fixed(index, key, obj=None):\n ret = unpack2(f">;{key}:{FIXED_FORMAT}", data[index:], obj)\n return ret[0]\n\n # Set all attributes to empty values regardless of the header size.\n unpack(HEADER_FORMAT4, [0] * HEADER_SIZE4, self)\n\n offset = 24\n length = sizes.lh * 4\n self.extraheader = {}\n if length >= HEADER_SIZE4:\n rest = unpack2(HEADER_FORMAT4, data[offset:], self)[1]\n if self.face < 18:\n s = self.face % 2\n b = self.face // 2\n self.face = "MBL"[b % 3] + "RI"[s] + "RCE"[b // 3]\n for i in range(sizes.lh - HEADER_SIZE4 // 4):\n rest = unpack2(f">;HEADER{i + 18}:l", rest, self.extraheader)[1]\n elif length >= HEADER_SIZE3:\n unpack2(HEADER_FORMAT3, data[offset:], self)\n elif length >= HEADER_SIZE2:\n unpack2(HEADER_FORMAT2, data[offset:], self)\n elif length >= HEADER_SIZE1:\n unpack2(HEADER_FORMAT1, data[offset:], self)\n\n self.fonttype = VANILLA\n scheme = self.codingscheme.upper()\n if scheme.startswith("TEX MATH SY"):\n self.fonttype = MATHSY\n elif scheme.startswith("TEX MATH EX"):\n self.fonttype = MATHEX\n\n self.fontdimens = {}\n for i in range(sizes.np):\n name = f"PARAMETER{i+1}"\n if i <= 6:\n name = BASE_PARAMS[i]\n elif self.fonttype == MATHSY and i <= 21:\n name = MATHSY_PARAMS[i - 7]\n elif self.fonttype == MATHEX and i <= 12:\n name = MATHEX_PARAMS[i - 7]\n read_fixed(param(i), name, self.fontdimens)\n\n lig_kern_map = {}\n self.right_boundary_char = None\n self.left_boundary_char = None\n if sizes.nl > 0:\n cmd = lig_kern_command(lig_step(0))\n if cmd.skip_byte == 255:\n self.right_boundary_char = cmd.next_char\n\n cmd = lig_kern_command(lig_step((sizes.nl - 1)))\n if cmd.skip_byte == 255:\n self.left_boundary_char = 256\n r = 256 * cmd.op_byte + cmd.remainder\n lig_kern_map[self.left_boundary_char] = r\n\n self.chars = {}\n for c in range(sizes.bc, sizes.ec + 1):\n if width_index(c) > 0:\n self.chars[c] = info = {}\n info["width"] = width(c)\n if height_index(c) > 0:\n info["height"] = height(c)\n if depth_index(c) > 0:\n info["depth"] = depth(c)\n if italic_index(c) > 0:\n info["italic"] = italic(c)\n char_tag = tag(c)\n if char_tag == NO_TAG:\n pass\n elif char_tag == LIG_TAG:\n lig_kern_map[c] = remainder(c)\n elif char_tag == LIST_TAG:\n 
info["nextlarger"] = remainder(c)\n elif char_tag == EXT_TAG:\n info["varchar"] = varchar = {}\n for i in range(4):\n part = data[exten(c) + i]\n if i == 3 or part > 0:\n name = "rep"\n if i == 0:\n name = "top"\n elif i == 1:\n name = "mid"\n elif i == 2:\n name = "bot"\n if noneexistent(part):\n varchar[name] = c\n else:\n varchar[name] = part\n\n self.ligatures = {}\n self.kerning = {}\n for c, i in sorted(lig_kern_map.items()):\n cmd = lig_kern_command(lig_step(i))\n if cmd.skip_byte > STOP_FLAG:\n i = 256 * cmd.op_byte + cmd.remainder\n\n while i < sizes.nl:\n cmd = lig_kern_command(lig_step(i))\n if cmd.skip_byte > STOP_FLAG:\n pass\n else:\n if cmd.op_byte >= KERN_FLAG:\n r = 256 * (cmd.op_byte - KERN_FLAG) + cmd.remainder\n self.kerning.setdefault(c, {})[cmd.next_char] = kern(r)\n else:\n r = cmd.op_byte\n if r == 4 or (r > 7 and r != 11):\n # Ligature step with nonstandard code, we output\n # the code verbatim.\n lig = r\n else:\n lig = ""\n if r % 4 > 1:\n lig += "/"\n lig += "LIG"\n if r % 2 != 0:\n lig += "/"\n while r > 3:\n lig += ">"\n r -= 4\n self.ligatures.setdefault(c, {})[cmd.next_char] = (\n lig,\n cmd.remainder,\n )\n\n if cmd.skip_byte >= STOP_FLAG:\n break\n i += cmd.skip_byte + 1\n\n\nif __name__ == "__main__":\n import sys\n\n tfm = TFM(sys.argv[1])\n print(\n "\n".join(\n x\n for x in [\n f"tfm.checksum={tfm.checksum}",\n f"tfm.designsize={tfm.designsize}",\n f"tfm.codingscheme={tfm.codingscheme}",\n f"tfm.fonttype={tfm.fonttype}",\n f"tfm.family={tfm.family}",\n f"tfm.seven_bit_safe_flag={tfm.seven_bit_safe_flag}",\n f"tfm.face={tfm.face}",\n f"tfm.extraheader={tfm.extraheader}",\n f"tfm.fontdimens={tfm.fontdimens}",\n f"tfm.right_boundary_char={tfm.right_boundary_char}",\n f"tfm.left_boundary_char={tfm.left_boundary_char}",\n f"tfm.kerning={tfm.kerning}",\n f"tfm.ligatures={tfm.ligatures}",\n f"tfm.chars={tfm.chars}",\n ]\n )\n )\n print(tfm)\n
.venv\Lib\site-packages\fontTools\tfmLib.py
tfmLib.py
Python
14,730
0.95
0.158696
0.030769
node-utils
981
2025-06-23T18:29:08.782022
BSD-3-Clause
false
7990ec06783464ba721793421b71440a
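A short usage sketch for the tfmLib record above may help; it exercises the attributes the module docstring documents (designsize, chars, kerning). The input path is a placeholder, and since TFM metrics are stored as fractions of the design size, the sketch scales them to points.

```python
from fontTools.tfmLib import TFM

tfm = TFM("cmr10.tfm")  # placeholder path to any TFM file

# Character metrics are relative to the design size; multiply by
# designsize to get absolute point values.
size_pt = tfm.designsize
for code, metrics in sorted(tfm.chars.items()):
    print(f"char {code}: width {metrics['width'] * size_pt:.3f}pt")

# Kerning is a dict of dicts: left char code -> {right char code: value}.
for right, value in tfm.kerning.get(ord("c"), {}).items():
    print(f"kern c+{chr(right)}: {value * size_pt:.3f}pt")
```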
"""\\nusage: ttx [options] inputfile1 [... inputfileN]\n\nTTX -- From OpenType To XML And Back\n\nIf an input file is a TrueType or OpenType font file, it will be\ndecompiled to a TTX file (an XML-based text format).\nIf an input file is a TTX file, it will be compiled to whatever\nformat the data is in, a TrueType or OpenType/CFF font file.\nA special input value of - means read from the standard input.\n\nOutput files are created so they are unique: an existing file is\nnever overwritten.\n\nGeneral options\n===============\n\n-h Help print this message.\n--version show version and exit.\n-d <outputfolder> Specify a directory where the output files are\n to be created.\n-o <outputfile> Specify a file to write the output to. A special\n value of - would use the standard output.\n-f Overwrite existing output file(s), ie. don't append\n numbers.\n-v Verbose: more messages will be written to stdout\n about what is being done.\n-q Quiet: No messages will be written to stdout about\n what is being done.\n-a allow virtual glyphs ID's on compile or decompile.\n\nDump options\n============\n\n-l List table info: instead of dumping to a TTX file, list\n some minimal info about each table.\n-t <table> Specify a table to dump. Multiple -t options\n are allowed. When no -t option is specified, all tables\n will be dumped.\n-x <table> Specify a table to exclude from the dump. Multiple\n -x options are allowed. -t and -x are mutually exclusive.\n-s Split tables: save the TTX data into separate TTX files per\n table and write one small TTX file that contains references\n to the individual table dumps. This file can be used as\n input to ttx, as long as the table files are in the\n same directory.\n-g Split glyf table: Save the glyf data into separate TTX files\n per glyph and write a small TTX for the glyf table which\n contains references to the individual TTGlyph elements.\n NOTE: specifying -g implies -s (no need for -s together\n with -g)\n-i Do NOT disassemble TT instructions: when this option is\n given, all TrueType programs (glyph programs, the font\n program and the pre-program) will be written to the TTX\n file as hex data instead of assembly. This saves some time\n and makes the TTX file smaller.\n-z <format> Specify a bitmap data export option for EBDT:\n {'raw', 'row', 'bitwise', 'extfile'} or for the CBDT:\n {'raw', 'extfile'} Each option does one of the following:\n\n -z raw\n export the bitmap data as a hex dump\n -z row\n export each row as hex data\n -z bitwise\n export each row as binary in an ASCII art style\n -z extfile\n export the data as external files with XML references\n\n If no export format is specified 'raw' format is used.\n-e Don't ignore decompilation errors, but show a full traceback\n and abort.\n-y <number> Select font number for TrueType Collection (.ttc/.otc),\n starting from 0.\n--unicodedata <UnicodeData.txt>\n Use custom database file to write character names in the\n comments of the cmap TTX output.\n--newline <value>\n Control how line endings are written in the XML file. It\n can be 'LF', 'CR', or 'CRLF'. If not specified, the\n default platform-specific line endings are used.\n\nCompile options\n===============\n\n-m Merge with TrueType-input-file: specify a TrueType or\n OpenType font file to be merged with the TTX file. 
This\n option is only valid when at most one TTX file is specified.\n-b Don't recalc glyph bounding boxes: use the values in the\n TTX file as-is.\n--recalc-timestamp\n Set font 'modified' timestamp to current time.\n By default, the modification time of the TTX file will be\n used.\n--no-recalc-timestamp\n Keep the original font 'modified' timestamp.\n--flavor <type>\n Specify flavor of output font file. May be 'woff' or 'woff2'.\n Note that WOFF2 requires the Brotli Python extension,\n available at https://github.com/google/brotli\n--with-zopfli\n Use Zopfli instead of Zlib to compress WOFF. The Python\n extension is available at https://pypi.python.org/pypi/zopfli\n--optimize-font-speed\n Enable optimizations that prioritize speed over file size.\n This mainly affects how glyf table and gvar / VARC tables are\n compiled. The produced fonts will be larger, but rendering\n performance will be improved with HarfBuzz and other text\n layout engines.\n"""\n\nfrom fontTools.ttLib import OPTIMIZE_FONT_SPEED, TTFont, TTLibError\nfrom fontTools.misc.macCreatorType import getMacCreatorAndType\nfrom fontTools.unicode import setUnicodeData\nfrom fontTools.misc.textTools import Tag, tostr\nfrom fontTools.misc.timeTools import timestampSinceEpoch\nfrom fontTools.misc.loggingTools import Timer\nfrom fontTools.misc.cliTools import makeOutputFileName\nimport os\nimport sys\nimport getopt\nimport re\nimport logging\n\n\nlog = logging.getLogger("fontTools.ttx")\n\nopentypeheaderRE = re.compile("""sfntVersion=['"]OTTO["']""")\n\n\nclass Options(object):\n listTables = False\n outputDir = None\n outputFile = None\n overWrite = False\n verbose = False\n quiet = False\n splitTables = False\n splitGlyphs = False\n disassembleInstructions = True\n mergeFile = None\n recalcBBoxes = True\n ignoreDecompileErrors = True\n bitmapGlyphDataFormat = "raw"\n unicodedata = None\n newlinestr = "\n"\n recalcTimestamp = None\n flavor = None\n useZopfli = False\n optimizeFontSpeed = False\n\n def __init__(self, rawOptions, numFiles):\n self.onlyTables = []\n self.skipTables = []\n self.fontNumber = -1\n for option, value in rawOptions:\n # general options\n if option == "-h":\n print(__doc__)\n sys.exit(0)\n elif option == "--version":\n from fontTools import version\n\n print(version)\n sys.exit(0)\n elif option == "-d":\n if not os.path.isdir(value):\n raise getopt.GetoptError(\n "The -d option value must be an existing directory"\n )\n self.outputDir = value\n elif option == "-o":\n self.outputFile = value\n elif option == "-f":\n self.overWrite = True\n elif option == "-v":\n self.verbose = True\n elif option == "-q":\n self.quiet = True\n # dump options\n elif option == "-l":\n self.listTables = True\n elif option == "-t":\n # pad with space if table tag length is less than 4\n value = value.ljust(4)\n self.onlyTables.append(value)\n elif option == "-x":\n # pad with space if table tag length is less than 4\n value = value.ljust(4)\n self.skipTables.append(value)\n elif option == "-s":\n self.splitTables = True\n elif option == "-g":\n # -g implies (and forces) splitTables\n self.splitGlyphs = True\n self.splitTables = True\n elif option == "-i":\n self.disassembleInstructions = False\n elif option == "-z":\n validOptions = ("raw", "row", "bitwise", "extfile")\n if value not in validOptions:\n raise getopt.GetoptError(\n "-z does not allow %s as a format. 
Use %s"\n % (value, validOptions)\n )\n self.bitmapGlyphDataFormat = value\n elif option == "-y":\n self.fontNumber = int(value)\n # compile options\n elif option == "-m":\n self.mergeFile = value\n elif option == "-b":\n self.recalcBBoxes = False\n elif option == "-e":\n self.ignoreDecompileErrors = False\n elif option == "--unicodedata":\n self.unicodedata = value\n elif option == "--newline":\n validOptions = ("LF", "CR", "CRLF")\n if value == "LF":\n self.newlinestr = "\n"\n elif value == "CR":\n self.newlinestr = "\r"\n elif value == "CRLF":\n self.newlinestr = "\r\n"\n else:\n raise getopt.GetoptError(\n "Invalid choice for --newline: %r (choose from %s)"\n % (value, ", ".join(map(repr, validOptions)))\n )\n elif option == "--recalc-timestamp":\n self.recalcTimestamp = True\n elif option == "--no-recalc-timestamp":\n self.recalcTimestamp = False\n elif option == "--flavor":\n self.flavor = value\n elif option == "--with-zopfli":\n self.useZopfli = True\n elif option == "--optimize-font-speed":\n self.optimizeFontSpeed = True\n if self.verbose and self.quiet:\n raise getopt.GetoptError("-q and -v options are mutually exclusive")\n if self.verbose:\n self.logLevel = logging.DEBUG\n elif self.quiet:\n self.logLevel = logging.WARNING\n else:\n self.logLevel = logging.INFO\n if self.mergeFile and self.flavor:\n raise getopt.GetoptError("-m and --flavor options are mutually exclusive")\n if self.onlyTables and self.skipTables:\n raise getopt.GetoptError("-t and -x options are mutually exclusive")\n if self.mergeFile and numFiles > 1:\n raise getopt.GetoptError(\n "Must specify exactly one TTX source file when using -m"\n )\n if self.flavor != "woff" and self.useZopfli:\n raise getopt.GetoptError("--with-zopfli option requires --flavor 'woff'")\n\n\ndef ttList(input, output, options):\n ttf = TTFont(input, fontNumber=options.fontNumber, lazy=True)\n reader = ttf.reader\n tags = sorted(reader.keys())\n print('Listing table info for "%s":' % input)\n format = " %4s %10s %8s %8s"\n print(format % ("tag ", " checksum", " length", " offset"))\n print(format % ("----", "----------", "--------", "--------"))\n for tag in tags:\n entry = reader.tables[tag]\n if ttf.flavor == "woff2":\n # WOFF2 doesn't store table checksums, so they must be calculated\n from fontTools.ttLib.sfnt import calcChecksum\n\n data = entry.loadData(reader.transformBuffer)\n checkSum = calcChecksum(data)\n else:\n checkSum = int(entry.checkSum)\n if checkSum < 0:\n checkSum = checkSum + 0x100000000\n checksum = "0x%08X" % checkSum\n print(format % (tag, checksum, entry.length, entry.offset))\n print()\n ttf.close()\n\n\n@Timer(log, "Done dumping TTX in %(time).3f seconds")\ndef ttDump(input, output, options):\n input_name = input\n if input == "-":\n input, input_name = sys.stdin.buffer, sys.stdin.name\n output_name = output\n if output == "-":\n output, output_name = sys.stdout, sys.stdout.name\n log.info('Dumping "%s" to "%s"...', input_name, output_name)\n if options.unicodedata:\n setUnicodeData(options.unicodedata)\n ttf = TTFont(\n input,\n 0,\n ignoreDecompileErrors=options.ignoreDecompileErrors,\n fontNumber=options.fontNumber,\n )\n ttf.saveXML(\n output,\n tables=options.onlyTables,\n skipTables=options.skipTables,\n splitTables=options.splitTables,\n splitGlyphs=options.splitGlyphs,\n disassembleInstructions=options.disassembleInstructions,\n bitmapGlyphDataFormat=options.bitmapGlyphDataFormat,\n newlinestr=options.newlinestr,\n )\n ttf.close()\n\n\n@Timer(log, "Done compiling TTX in %(time).3f seconds")\ndef 
ttCompile(input, output, options):\n input_name = input\n if input == "-":\n input, input_name = sys.stdin, sys.stdin.name\n output_name = output\n if output == "-":\n output, output_name = sys.stdout.buffer, sys.stdout.name\n log.info('Compiling "%s" to "%s"...', input_name, output_name)\n if options.useZopfli:\n from fontTools.ttLib import sfnt\n\n sfnt.USE_ZOPFLI = True\n ttf = TTFont(\n options.mergeFile,\n flavor=options.flavor,\n recalcBBoxes=options.recalcBBoxes,\n recalcTimestamp=options.recalcTimestamp,\n )\n if options.optimizeFontSpeed:\n ttf.cfg[OPTIMIZE_FONT_SPEED] = options.optimizeFontSpeed\n ttf.importXML(input)\n\n if options.recalcTimestamp is None and "head" in ttf and input is not sys.stdin:\n # use TTX file modification time for head "modified" timestamp\n mtime = os.path.getmtime(input)\n ttf["head"].modified = timestampSinceEpoch(mtime)\n\n ttf.save(output)\n\n\ndef guessFileType(fileName):\n if fileName == "-":\n header = sys.stdin.buffer.peek(256)\n ext = ""\n else:\n base, ext = os.path.splitext(fileName)\n try:\n with open(fileName, "rb") as f:\n header = f.read(256)\n except IOError:\n return None\n\n if header.startswith(b"\xef\xbb\xbf<?xml"):\n header = header.lstrip(b"\xef\xbb\xbf")\n cr, tp = getMacCreatorAndType(fileName)\n if tp in ("sfnt", "FFIL"):\n return "TTF"\n if ext == ".dfont":\n return "TTF"\n head = Tag(header[:4])\n if head == "OTTO":\n return "OTF"\n elif head == "ttcf":\n return "TTC"\n elif head in ("\0\1\0\0", "true"):\n return "TTF"\n elif head == "wOFF":\n return "WOFF"\n elif head == "wOF2":\n return "WOFF2"\n elif head == "<?xm":\n # Use 'latin1' because that can't fail.\n header = tostr(header, "latin1")\n if opentypeheaderRE.search(header):\n return "OTX"\n else:\n return "TTX"\n return None\n\n\ndef parseOptions(args):\n rawOptions, files = getopt.gnu_getopt(\n args,\n "ld:o:fvqht:x:sgim:z:baey:",\n [\n "unicodedata=",\n "recalc-timestamp",\n "no-recalc-timestamp",\n "flavor=",\n "version",\n "with-zopfli",\n "newline=",\n "optimize-font-speed",\n ],\n )\n\n options = Options(rawOptions, len(files))\n jobs = []\n\n if not files:\n raise getopt.GetoptError("Must specify at least one input file")\n\n for input in files:\n if input != "-" and not os.path.isfile(input):\n raise getopt.GetoptError('File not found: "%s"' % input)\n tp = guessFileType(input)\n if tp in ("OTF", "TTF", "TTC", "WOFF", "WOFF2"):\n extension = ".ttx"\n if options.listTables:\n action = ttList\n else:\n action = ttDump\n elif tp == "TTX":\n extension = "." + options.flavor if options.flavor else ".ttf"\n action = ttCompile\n elif tp == "OTX":\n extension = "." 
+ options.flavor if options.flavor else ".otf"\n action = ttCompile\n else:\n raise getopt.GetoptError('Unknown file type: "%s"' % input)\n\n if options.outputFile:\n output = options.outputFile\n else:\n if input == "-":\n raise getopt.GetoptError("Must provide -o when reading from stdin")\n output = makeOutputFileName(\n input, options.outputDir, extension, options.overWrite\n )\n # 'touch' output file to avoid race condition in choosing file names\n if action != ttList:\n open(output, "a").close()\n jobs.append((action, input, output))\n return jobs, options\n\n\ndef process(jobs, options):\n for action, input, output in jobs:\n action(input, output, options)\n\n\ndef main(args=None):\n """Convert OpenType fonts to XML and back"""\n from fontTools import configLogger\n\n if args is None:\n args = sys.argv[1:]\n try:\n jobs, options = parseOptions(args)\n except getopt.GetoptError as e:\n print("%s\nERROR: %s" % (__doc__, e), file=sys.stderr)\n sys.exit(2)\n\n configLogger(level=options.logLevel)\n\n try:\n process(jobs, options)\n except KeyboardInterrupt:\n log.error("(Cancelled.)")\n sys.exit(1)\n except SystemExit:\n raise\n except TTLibError as e:\n log.error(e)\n sys.exit(1)\n except:\n log.exception("Unhandled exception has occurred")\n sys.exit(1)\n\n\nif __name__ == "__main__":\n sys.exit(main())\n
.venv\Lib\site-packages\fontTools\ttx.py
ttx.py
Python
17,756
0.95
0.131524
0.023148
vue-tools
131
2025-05-04T08:43:55.363858
MIT
false
9312eaf3ee070d2f00b4752526ca7f69
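A brief sketch of the module's programmatic surface (file names are placeholders): ttx.main() takes the same argument list as the command line, and a plain table dump is just TTFont.saveXML underneath.

```python
from fontTools import ttx

# Equivalent to running `ttx -t name -o font.ttx font.ttf` in a shell.
ttx.main(["-t", "name", "-o", "font.ttx", "font.ttf"])

# The same dump without the CLI wrapper:
from fontTools.ttLib import TTFont

font = TTFont("font.ttf")
font.saveXML("font.ttx", tables=["name"])
```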
def _makeunicodes(f):\n lines = iter(f.readlines())\n unicodes = {}\n for line in lines:\n if not line:\n continue\n num, name = line.split(";")[:2]\n if name[0] == "<":\n continue # "<control>", etc.\n num = int(num, 16)\n unicodes[num] = name\n return unicodes\n\n\nclass _UnicodeCustom(object):\n def __init__(self, f):\n if isinstance(f, str):\n with open(f) as fd:\n codes = _makeunicodes(fd)\n else:\n codes = _makeunicodes(f)\n self.codes = codes\n\n def __getitem__(self, charCode):\n try:\n return self.codes[charCode]\n except KeyError:\n return "????"\n\n\nclass _UnicodeBuiltin(object):\n def __getitem__(self, charCode):\n try:\n # use unicodedata backport to python2, if available:\n # https://github.com/mikekap/unicodedata2\n import unicodedata2 as unicodedata\n except ImportError:\n import unicodedata\n try:\n return unicodedata.name(chr(charCode))\n except ValueError:\n return "????"\n\n\nUnicode = _UnicodeBuiltin()\n\n\ndef setUnicodeData(f):\n global Unicode\n Unicode = _UnicodeCustom(f)\n
.venv\Lib\site-packages\fontTools\unicode.py
unicode.py
Python
1,287
0.95
0.3
0.04878
react-lib
546
2024-01-27T00:07:51.213694
Apache-2.0
false
1500eb6e4ec75eee15ecb9654a47e2c5
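A small sketch of the two entry points defined above: the built-in unicodedata-backed lookup, and the override hook that ttx's --unicodedata option uses (the database path is a placeholder).

```python
from fontTools.unicode import Unicode, setUnicodeData

print(Unicode[0x0041])  # "LATIN CAPITAL LETTER A"
print(Unicode[0xE000])  # "????" -- private-use code points have no name

# Swap in a custom database; later lookups use names from that file.
# setUnicodeData("UnicodeData.txt")
```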
import logging\nfrom fontTools.misc.loggingTools import configLogger\n\nlog = logging.getLogger(__name__)\n\nversion = __version__ = "4.58.5"\n\n__all__ = ["version", "log", "configLogger"]\n
.venv\Lib\site-packages\fontTools\__init__.py
__init__.py
Python
191
0.85
0
0
vue-tools
498
2025-06-15T17:23:14.455981
BSD-3-Clause
false
8df36827ef81622b6717a0c899f86278
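The package root above is tiny; a typical consumer just configures logging and reads the version string, roughly like this:

```python
from fontTools import configLogger, version

configLogger(level="INFO")  # attach a basic handler to the fontTools loggers
print("fontTools", version)
```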
import sys\n\n\ndef main(args=None):\n if args is None:\n args = sys.argv[1:]\n\n # TODO Handle library-wide options. Eg.:\n # --unicodedata\n # --verbose / other logging stuff\n\n # TODO Allow a way to run arbitrary modules? Useful for setting\n # library-wide options and calling another library. Eg.:\n #\n # $ fonttools --unicodedata=... fontmake ...\n #\n # This allows for a git-like command where thirdparty commands\n # can be added. Should we just try importing the fonttools\n # module first and try without if it fails?\n\n if len(sys.argv) < 2:\n sys.argv.append("help")\n if sys.argv[1] == "-h" or sys.argv[1] == "--help":\n sys.argv[1] = "help"\n mod = "fontTools." + sys.argv[1]\n sys.argv[1] = sys.argv[0] + " " + sys.argv[1]\n del sys.argv[0]\n\n import runpy\n\n runpy.run_module(mod, run_name="__main__")\n\n\nif __name__ == "__main__":\n sys.exit(main())\n
.venv\Lib\site-packages\fontTools\__main__.py
__main__.py
Python
960
0.95
0.285714
0.423077
awesome-app
566
2024-02-06T23:40:20.644661
MIT
false
ebf9813a3f125e1f5bdd5a45a6427c8c
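The dispatcher above just rewrites sys.argv and defers to runpy, so the `fonttools` console script and `python -m fontTools.<tool>` are interchangeable. A minimal restatement of the idea, with "ttx" as an example tool name:

```python
import runpy
import sys

# `fonttools ttx -l font.ttf` is dispatched roughly like this; note that
# the invoked tool's own main() typically exits the process when done.
tool = "ttx"  # sys.argv[1] in the real dispatcher
sys.argv = [f"fonttools {tool}", "-l", "font.ttf"]
runpy.run_module(f"fontTools.{tool}", run_name="__main__")
```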
"""CFF2 to CFF converter."""\n\nfrom fontTools.ttLib import TTFont, newTable\nfrom fontTools.misc.cliTools import makeOutputFileName\nfrom fontTools.cffLib import (\n TopDictIndex,\n buildOrder,\n buildDefaults,\n topDictOperators,\n privateDictOperators,\n)\nfrom .width import optimizeWidths\nfrom collections import defaultdict\nimport logging\n\n\n__all__ = ["convertCFF2ToCFF", "main"]\n\n\nlog = logging.getLogger("fontTools.cffLib")\n\n\ndef _convertCFF2ToCFF(cff, otFont):\n """Converts this object from CFF2 format to CFF format. This conversion\n is done 'in-place'. The conversion cannot be reversed.\n\n The CFF2 font cannot be variable. (TODO Accept those and convert to the\n default instance?)\n\n This assumes a decompiled CFF table. (i.e. that the object has been\n filled via :meth:`decompile` and e.g. not loaded from XML.)"""\n\n cff.major = 1\n\n topDictData = TopDictIndex(None)\n for item in cff.topDictIndex:\n # Iterate over, such that all are decompiled\n item.cff2GetGlyphOrder = None\n topDictData.append(item)\n cff.topDictIndex = topDictData\n topDict = topDictData[0]\n\n if hasattr(topDict, "VarStore"):\n raise ValueError("Variable CFF2 font cannot be converted to CFF format.")\n\n opOrder = buildOrder(topDictOperators)\n topDict.order = opOrder\n for key in topDict.rawDict.keys():\n if key not in opOrder:\n del topDict.rawDict[key]\n if hasattr(topDict, key):\n delattr(topDict, key)\n\n fdArray = topDict.FDArray\n charStrings = topDict.CharStrings\n\n defaults = buildDefaults(privateDictOperators)\n order = buildOrder(privateDictOperators)\n for fd in fdArray:\n fd.setCFF2(False)\n privateDict = fd.Private\n privateDict.order = order\n for key in order:\n if key not in privateDict.rawDict and key in defaults:\n privateDict.rawDict[key] = defaults[key]\n for key in privateDict.rawDict.keys():\n if key not in order:\n del privateDict.rawDict[key]\n if hasattr(privateDict, key):\n delattr(privateDict, key)\n\n for cs in charStrings.values():\n cs.decompile()\n cs.program.append("endchar")\n for subrSets in [cff.GlobalSubrs] + [\n getattr(fd.Private, "Subrs", []) for fd in fdArray\n ]:\n for cs in subrSets:\n cs.program.append("return")\n\n # Add (optimal) width to CharStrings that need it.\n widths = defaultdict(list)\n metrics = otFont["hmtx"].metrics\n for glyphName in charStrings.keys():\n cs, fdIndex = charStrings.getItemAndSelector(glyphName)\n if fdIndex == None:\n fdIndex = 0\n widths[fdIndex].append(metrics[glyphName][0])\n for fdIndex, widthList in widths.items():\n bestDefault, bestNominal = optimizeWidths(widthList)\n private = fdArray[fdIndex].Private\n private.defaultWidthX = bestDefault\n private.nominalWidthX = bestNominal\n for glyphName in charStrings.keys():\n cs, fdIndex = charStrings.getItemAndSelector(glyphName)\n if fdIndex == None:\n fdIndex = 0\n private = fdArray[fdIndex].Private\n width = metrics[glyphName][0]\n if width != private.defaultWidthX:\n cs.program.insert(0, width - private.nominalWidthX)\n\n mapping = {\n name: ("cid" + str(n) if n else ".notdef")\n for n, name in enumerate(topDict.charset)\n }\n topDict.charset = [\n "cid" + str(n) if n else ".notdef" for n in range(len(topDict.charset))\n ]\n charStrings.charStrings = {\n mapping[name]: v for name, v in charStrings.charStrings.items()\n }\n\n # I'm not sure why the following is *not* necessary. 
And it breaks\n # the output if I add it.\n # topDict.ROS = ("Adobe", "Identity", 0)\n\n\ndef convertCFF2ToCFF(font, *, updatePostTable=True):\n cff = font["CFF2"].cff\n _convertCFF2ToCFF(cff, font)\n del font["CFF2"]\n table = font["CFF "] = newTable("CFF ")\n table.cff = cff\n\n if updatePostTable and "post" in font:\n # The only 'post' table version supported for fonts with a CFF table\n # is 0x00030000 (3.0), not 0x00020000 (2.0)\n post = font["post"]\n if post.formatType == 2.0:\n post.formatType = 3.0\n\n\ndef main(args=None):\n """Convert a CFF2 OTF font to a CFF OTF font"""\n if args is None:\n import sys\n\n args = sys.argv[1:]\n\n import argparse\n\n parser = argparse.ArgumentParser(\n "fonttools cffLib.CFF2ToCFF",\n description="Convert a CFF2 font to CFF.",\n )\n parser.add_argument(\n "input", metavar="INPUT.ttf", help="Input OTF file with CFF2 table."\n )\n parser.add_argument(\n "-o",\n "--output",\n metavar="OUTPUT.ttf",\n default=None,\n help="Output OTF file (default: INPUT-CFF.ttf).",\n )\n parser.add_argument(\n "--no-recalc-timestamp",\n dest="recalc_timestamp",\n action="store_false",\n help="Don't set the output font's timestamp to the current time.",\n )\n loggingGroup = parser.add_mutually_exclusive_group(required=False)\n loggingGroup.add_argument(\n "-v", "--verbose", action="store_true", help="Run more verbosely."\n )\n loggingGroup.add_argument(\n "-q", "--quiet", action="store_true", help="Turn verbosity off."\n )\n options = parser.parse_args(args)\n\n from fontTools import configLogger\n\n configLogger(\n level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO")\n )\n\n import os\n\n infile = options.input\n if not os.path.isfile(infile):\n parser.error("No such file '{}'".format(infile))\n\n outfile = (\n makeOutputFileName(infile, overWrite=True, suffix="-CFF")\n if not options.output\n else options.output\n )\n\n font = TTFont(infile, recalcTimestamp=options.recalc_timestamp, recalcBBoxes=False)\n\n convertCFF2ToCFF(font)\n\n log.info(\n "Saving %s",\n outfile,\n )\n font.save(outfile)\n\n\nif __name__ == "__main__":\n import sys\n\n sys.exit(main(sys.argv[1:]))\n
.venv\Lib\site-packages\fontTools\cffLib\CFF2ToCFF.py
CFF2ToCFF.py
Python
6,291
0.95
0.192118
0.036364
react-lib
472
2024-08-31T08:57:54.480985
Apache-2.0
false
776549101e838fc49b258f5cb4098b64
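For completeness, the intended library-level call sequence, with placeholder file names; convertCFF2ToCFF mutates the font in place, replacing the CFF2 table with CFF and, by default, bumping a format 2.0 post table to 3.0:

```python
from fontTools.ttLib import TTFont
from fontTools.cffLib.CFF2ToCFF import convertCFF2ToCFF

font = TTFont("MyFont-CFF2.otf", recalcBBoxes=False)
convertCFF2ToCFF(font)  # in place: deletes font["CFF2"], adds font["CFF "]
font.save("MyFont-CFF.otf")
```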
"""CFF to CFF2 converter."""\n\nfrom fontTools.ttLib import TTFont, newTable\nfrom fontTools.misc.cliTools import makeOutputFileName\nfrom fontTools.misc.psCharStrings import T2WidthExtractor\nfrom fontTools.cffLib import (\n TopDictIndex,\n FDArrayIndex,\n FontDict,\n buildOrder,\n topDictOperators,\n privateDictOperators,\n topDictOperators2,\n privateDictOperators2,\n)\nfrom io import BytesIO\nimport logging\n\n__all__ = ["convertCFFToCFF2", "main"]\n\n\nlog = logging.getLogger("fontTools.cffLib")\n\n\nclass _NominalWidthUsedError(Exception):\n def __add__(self, other):\n raise self\n\n def __radd__(self, other):\n raise self\n\n\ndef _convertCFFToCFF2(cff, otFont):\n """Converts this object from CFF format to CFF2 format. This conversion\n is done 'in-place'. The conversion cannot be reversed.\n\n This assumes a decompiled CFF table. (i.e. that the object has been\n filled via :meth:`decompile` and e.g. not loaded from XML.)"""\n\n # Clean up T2CharStrings\n\n topDict = cff.topDictIndex[0]\n fdArray = topDict.FDArray if hasattr(topDict, "FDArray") else None\n charStrings = topDict.CharStrings\n globalSubrs = cff.GlobalSubrs\n localSubrs = (\n [getattr(fd.Private, "Subrs", []) for fd in fdArray]\n if fdArray\n else (\n [topDict.Private.Subrs]\n if hasattr(topDict, "Private") and hasattr(topDict.Private, "Subrs")\n else []\n )\n )\n\n for glyphName in charStrings.keys():\n cs, fdIndex = charStrings.getItemAndSelector(glyphName)\n cs.decompile()\n\n # Clean up subroutines first\n for subrs in [globalSubrs] + localSubrs:\n for subr in subrs:\n program = subr.program\n i = j = len(program)\n try:\n i = program.index("return")\n except ValueError:\n pass\n try:\n j = program.index("endchar")\n except ValueError:\n pass\n program[min(i, j) :] = []\n\n # Clean up glyph charstrings\n removeUnusedSubrs = False\n nominalWidthXError = _NominalWidthUsedError()\n for glyphName in charStrings.keys():\n cs, fdIndex = charStrings.getItemAndSelector(glyphName)\n program = cs.program\n\n thisLocalSubrs = (\n localSubrs[fdIndex]\n if fdIndex is not None\n else (\n getattr(topDict.Private, "Subrs", [])\n if hasattr(topDict, "Private")\n else []\n )\n )\n\n # Intentionally use custom type for nominalWidthX, such that any\n # CharString that has an explicit width encoded will throw back to us.\n extractor = T2WidthExtractor(\n thisLocalSubrs,\n globalSubrs,\n nominalWidthXError,\n 0,\n )\n try:\n extractor.execute(cs)\n except _NominalWidthUsedError:\n # Program has explicit width. 
We want to drop it, but can't\n # just pop the first number since it may be a subroutine call.\n # Instead, when seeing that, we embed the subroutine and recurse.\n # If this ever happened, we later prune unused subroutines.\n while len(program) >= 2 and program[1] in ["callsubr", "callgsubr"]:\n removeUnusedSubrs = True\n subrNumber = program.pop(0)\n assert isinstance(subrNumber, int), subrNumber\n op = program.pop(0)\n bias = extractor.localBias if op == "callsubr" else extractor.globalBias\n subrNumber += bias\n subrSet = thisLocalSubrs if op == "callsubr" else globalSubrs\n subrProgram = subrSet[subrNumber].program\n program[:0] = subrProgram\n # Now pop the actual width\n assert len(program) >= 1, program\n program.pop(0)\n\n if program and program[-1] == "endchar":\n program.pop()\n\n if removeUnusedSubrs:\n cff.remove_unused_subroutines()\n\n # Upconvert TopDict\n\n cff.major = 2\n cff2GetGlyphOrder = cff.otFont.getGlyphOrder\n topDictData = TopDictIndex(None, cff2GetGlyphOrder)\n for item in cff.topDictIndex:\n # Iterate over, such that all are decompiled\n topDictData.append(item)\n cff.topDictIndex = topDictData\n topDict = topDictData[0]\n if hasattr(topDict, "Private"):\n privateDict = topDict.Private\n else:\n privateDict = None\n opOrder = buildOrder(topDictOperators2)\n topDict.order = opOrder\n topDict.cff2GetGlyphOrder = cff2GetGlyphOrder\n\n if not hasattr(topDict, "FDArray"):\n fdArray = topDict.FDArray = FDArrayIndex()\n fdArray.strings = None\n fdArray.GlobalSubrs = topDict.GlobalSubrs\n topDict.GlobalSubrs.fdArray = fdArray\n charStrings = topDict.CharStrings\n if charStrings.charStringsAreIndexed:\n charStrings.charStringsIndex.fdArray = fdArray\n else:\n charStrings.fdArray = fdArray\n fontDict = FontDict()\n fontDict.setCFF2(True)\n fdArray.append(fontDict)\n fontDict.Private = privateDict\n privateOpOrder = buildOrder(privateDictOperators2)\n if privateDict is not None:\n for entry in privateDictOperators:\n key = entry[1]\n if key not in privateOpOrder:\n if key in privateDict.rawDict:\n # print "Removing private dict", key\n del privateDict.rawDict[key]\n if hasattr(privateDict, key):\n delattr(privateDict, key)\n # print "Removing privateDict attr", key\n else:\n # clean up the PrivateDicts in the fdArray\n fdArray = topDict.FDArray\n privateOpOrder = buildOrder(privateDictOperators2)\n for fontDict in fdArray:\n fontDict.setCFF2(True)\n for key in list(fontDict.rawDict.keys()):\n if key not in fontDict.order:\n del fontDict.rawDict[key]\n if hasattr(fontDict, key):\n delattr(fontDict, key)\n\n privateDict = fontDict.Private\n for entry in privateDictOperators:\n key = entry[1]\n if key not in privateOpOrder:\n if key in list(privateDict.rawDict.keys()):\n # print "Removing private dict", key\n del privateDict.rawDict[key]\n if hasattr(privateDict, key):\n delattr(privateDict, key)\n # print "Removing privateDict attr", key\n\n # Now delete up the deprecated topDict operators from CFF 1.0\n for entry in topDictOperators:\n key = entry[1]\n # We seem to need to keep the charset operator for now,\n # or we fail to compile with some fonts, like AdditionFont.otf.\n # I don't know which kind of CFF font those are. But keeping\n # charset seems to work. 
It will be removed when we save and\n # read the font again.\n #\n # AdditionFont.otf has <Encoding name="StandardEncoding"/>.\n if key == "charset":\n continue\n if key not in opOrder:\n if key in topDict.rawDict:\n del topDict.rawDict[key]\n if hasattr(topDict, key):\n delattr(topDict, key)\n\n # TODO(behdad): What does the following comment even mean? Both CFF and CFF2\n # use the same T2Charstring class. I *think* what it means is that the CharStrings\n # were loaded for CFF1, and we need to reload them for CFF2 to set varstore, etc\n # on them. At least that's what I understand. It's probably safe to remove this\n # and just set vstore where needed.\n #\n # See comment above about charset as well.\n\n # At this point, the Subrs and Charstrings are all still T2Charstring class\n # easiest to fix this by compiling, then decompiling again\n file = BytesIO()\n cff.compile(file, otFont, isCFF2=True)\n file.seek(0)\n cff.decompile(file, otFont, isCFF2=True)\n\n\ndef convertCFFToCFF2(font):\n cff = font["CFF "].cff\n del font["CFF "]\n _convertCFFToCFF2(cff, font)\n table = font["CFF2"] = newTable("CFF2")\n table.cff = cff\n\n\ndef main(args=None):\n """Convert CFF OTF font to CFF2 OTF font"""\n if args is None:\n import sys\n\n args = sys.argv[1:]\n\n import argparse\n\n parser = argparse.ArgumentParser(\n "fonttools cffLib.CFFToCFF2",\n description="Upgrade a CFF font to CFF2.",\n )\n parser.add_argument(\n "input", metavar="INPUT.ttf", help="Input OTF file with CFF table."\n )\n parser.add_argument(\n "-o",\n "--output",\n metavar="OUTPUT.ttf",\n default=None,\n help="Output instance OTF file (default: INPUT-CFF2.ttf).",\n )\n parser.add_argument(\n "--no-recalc-timestamp",\n dest="recalc_timestamp",\n action="store_false",\n help="Don't set the output font's timestamp to the current time.",\n )\n loggingGroup = parser.add_mutually_exclusive_group(required=False)\n loggingGroup.add_argument(\n "-v", "--verbose", action="store_true", help="Run more verbosely."\n )\n loggingGroup.add_argument(\n "-q", "--quiet", action="store_true", help="Turn verbosity off."\n )\n options = parser.parse_args(args)\n\n from fontTools import configLogger\n\n configLogger(\n level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO")\n )\n\n import os\n\n infile = options.input\n if not os.path.isfile(infile):\n parser.error("No such file '{}'".format(infile))\n\n outfile = (\n makeOutputFileName(infile, overWrite=True, suffix="-CFF2")\n if not options.output\n else options.output\n )\n\n font = TTFont(infile, recalcTimestamp=options.recalc_timestamp, recalcBBoxes=False)\n\n convertCFFToCFF2(font)\n\n log.info(\n "Saving %s",\n outfile,\n )\n font.save(outfile)\n\n\nif __name__ == "__main__":\n import sys\n\n sys.exit(main(sys.argv[1:]))\n
.venv\Lib\site-packages\fontTools\cffLib\CFFToCFF2.py
CFFToCFF2.py
Python
10,424
0.95
0.190164
0.130268
vue-tools
158
2025-04-07T03:01:01.994168
MIT
false
cde72653aa06ba9b49ba12990746b52e
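And the reverse direction, mirroring the sketch above (file names again placeholders); here explicit width values are stripped from the charstrings, since CFF2 charstrings carry no width operands:

```python
from fontTools.ttLib import TTFont
from fontTools.cffLib.CFFToCFF2 import convertCFFToCFF2

font = TTFont("MyFont-CFF.otf", recalcBBoxes=False)
convertCFFToCFF2(font)  # in place: deletes font["CFF "], adds font["CFF2"]
font.save("MyFont-CFF2.otf")
```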
# -*- coding: utf-8 -*-\n\n"""T2CharString operator specializer and generalizer.\n\nPostScript glyph drawing operations can be expressed in multiple different\nways. For example, as well as the ``lineto`` operator, there is also a\n``hlineto`` operator which draws a horizontal line, removing the need to\nspecify a ``dx`` coordinate, and a ``vlineto`` operator which draws a\nvertical line, removing the need to specify a ``dy`` coordinate. As well\nas decompiling :class:`fontTools.misc.psCharStrings.T2CharString` objects\ninto lists of operations, this module allows for conversion between general\nand specific forms of the operation.\n\n"""\n\nfrom fontTools.cffLib import maxStackLimit\n\n\ndef stringToProgram(string):\n if isinstance(string, str):\n string = string.split()\n program = []\n for token in string:\n try:\n token = int(token)\n except ValueError:\n try:\n token = float(token)\n except ValueError:\n pass\n program.append(token)\n return program\n\n\ndef programToString(program):\n return " ".join(str(x) for x in program)\n\n\ndef programToCommands(program, getNumRegions=None):\n """Takes a T2CharString program list and returns list of commands.\n Each command is a two-tuple of commandname,arg-list. The commandname might\n be empty string if no commandname shall be emitted (used for glyph width,\n hintmask/cntrmask argument, as well as stray arguments at the end of the\n program (🤷).\n 'getNumRegions' may be None, or a callable object. It must return the\n number of regions. 'getNumRegions' takes a single argument, vsindex. It\n returns the numRegions for the vsindex.\n The Charstring may or may not start with a width value. If the first\n non-blend operator has an odd number of arguments, then the first argument is\n a width, and is popped off. This is complicated with blend operators, as\n there may be more than one before the first hint or moveto operator, and each\n one reduces several arguments to just one list argument. We have to sum the\n number of arguments that are not part of the blend arguments, and all the\n 'numBlends' values. We could instead have said that by definition, if there\n is a blend operator, there is no width value, since CFF2 Charstrings don't\n have width values. 
I discussed this with Behdad, and we are allowing for an\n initial width value in this case because developers may assemble a CFF2\n charstring from CFF Charstrings, which could have width values.\n """\n\n seenWidthOp = False\n vsIndex = 0\n lenBlendStack = 0\n lastBlendIndex = 0\n commands = []\n stack = []\n it = iter(program)\n\n for token in it:\n if not isinstance(token, str):\n stack.append(token)\n continue\n\n if token == "blend":\n assert getNumRegions is not None\n numSourceFonts = 1 + getNumRegions(vsIndex)\n # replace the blend op args on the stack with a single list\n # containing all the blend op args.\n numBlends = stack[-1]\n numBlendArgs = numBlends * numSourceFonts + 1\n # replace first blend op by a list of the blend ops.\n stack[-numBlendArgs:] = [stack[-numBlendArgs:]]\n lenStack = len(stack)\n lenBlendStack += numBlends + lenStack - 1\n lastBlendIndex = lenStack\n # if a blend op exists, this is or will be a CFF2 charstring.\n continue\n\n elif token == "vsindex":\n vsIndex = stack[-1]\n assert type(vsIndex) is int\n\n elif (not seenWidthOp) and token in {\n "hstem",\n "hstemhm",\n "vstem",\n "vstemhm",\n "cntrmask",\n "hintmask",\n "hmoveto",\n "vmoveto",\n "rmoveto",\n "endchar",\n }:\n seenWidthOp = True\n parity = token in {"hmoveto", "vmoveto"}\n if lenBlendStack:\n # lenBlendStack has the number of args represented by the last blend\n # arg and all the preceding args. We need to now add the number of\n # args following the last blend arg.\n numArgs = lenBlendStack + len(stack[lastBlendIndex:])\n else:\n numArgs = len(stack)\n if numArgs and (numArgs % 2) ^ parity:\n width = stack.pop(0)\n commands.append(("", [width]))\n\n if token in {"hintmask", "cntrmask"}:\n if stack:\n commands.append(("", stack))\n commands.append((token, []))\n commands.append(("", [next(it)]))\n else:\n commands.append((token, stack))\n stack = []\n if stack:\n commands.append(("", stack))\n return commands\n\n\ndef _flattenBlendArgs(args):\n token_list = []\n for arg in args:\n if isinstance(arg, list):\n token_list.extend(arg)\n token_list.append("blend")\n else:\n token_list.append(arg)\n return token_list\n\n\ndef commandsToProgram(commands):\n """Takes a commands list as returned by programToCommands() and converts\n it back to a T2CharString program list."""\n program = []\n for op, args in commands:\n if any(isinstance(arg, list) for arg in args):\n args = _flattenBlendArgs(args)\n program.extend(args)\n if op:\n program.append(op)\n return program\n\n\ndef _everyN(el, n):\n """Group the list el into groups of size n"""\n l = len(el)\n if l % n != 0:\n raise ValueError(el)\n for i in range(0, l, n):\n yield el[i : i + n]\n\n\nclass _GeneralizerDecombinerCommandsMap(object):\n @staticmethod\n def rmoveto(args):\n if len(args) != 2:\n raise ValueError(args)\n yield ("rmoveto", args)\n\n @staticmethod\n def hmoveto(args):\n if len(args) != 1:\n raise ValueError(args)\n yield ("rmoveto", [args[0], 0])\n\n @staticmethod\n def vmoveto(args):\n if len(args) != 1:\n raise ValueError(args)\n yield ("rmoveto", [0, args[0]])\n\n @staticmethod\n def rlineto(args):\n if not args:\n raise ValueError(args)\n for args in _everyN(args, 2):\n yield ("rlineto", args)\n\n @staticmethod\n def hlineto(args):\n if not args:\n raise ValueError(args)\n it = iter(args)\n try:\n while True:\n yield ("rlineto", [next(it), 0])\n yield ("rlineto", [0, next(it)])\n except StopIteration:\n pass\n\n @staticmethod\n def vlineto(args):\n if not args:\n raise ValueError(args)\n it = iter(args)\n try:\n while 
True:\n yield ("rlineto", [0, next(it)])\n yield ("rlineto", [next(it), 0])\n except StopIteration:\n pass\n\n @staticmethod\n def rrcurveto(args):\n if not args:\n raise ValueError(args)\n for args in _everyN(args, 6):\n yield ("rrcurveto", args)\n\n @staticmethod\n def hhcurveto(args):\n l = len(args)\n if l < 4 or l % 4 > 1:\n raise ValueError(args)\n if l % 2 == 1:\n yield ("rrcurveto", [args[1], args[0], args[2], args[3], args[4], 0])\n args = args[5:]\n for args in _everyN(args, 4):\n yield ("rrcurveto", [args[0], 0, args[1], args[2], args[3], 0])\n\n @staticmethod\n def vvcurveto(args):\n l = len(args)\n if l < 4 or l % 4 > 1:\n raise ValueError(args)\n if l % 2 == 1:\n yield ("rrcurveto", [args[0], args[1], args[2], args[3], 0, args[4]])\n args = args[5:]\n for args in _everyN(args, 4):\n yield ("rrcurveto", [0, args[0], args[1], args[2], 0, args[3]])\n\n @staticmethod\n def hvcurveto(args):\n l = len(args)\n if l < 4 or l % 8 not in {0, 1, 4, 5}:\n raise ValueError(args)\n last_args = None\n if l % 2 == 1:\n lastStraight = l % 8 == 5\n args, last_args = args[:-5], args[-5:]\n it = _everyN(args, 4)\n try:\n while True:\n args = next(it)\n yield ("rrcurveto", [args[0], 0, args[1], args[2], 0, args[3]])\n args = next(it)\n yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], 0])\n except StopIteration:\n pass\n if last_args:\n args = last_args\n if lastStraight:\n yield ("rrcurveto", [args[0], 0, args[1], args[2], args[4], args[3]])\n else:\n yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], args[4]])\n\n @staticmethod\n def vhcurveto(args):\n l = len(args)\n if l < 4 or l % 8 not in {0, 1, 4, 5}:\n raise ValueError(args)\n last_args = None\n if l % 2 == 1:\n lastStraight = l % 8 == 5\n args, last_args = args[:-5], args[-5:]\n it = _everyN(args, 4)\n try:\n while True:\n args = next(it)\n yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], 0])\n args = next(it)\n yield ("rrcurveto", [args[0], 0, args[1], args[2], 0, args[3]])\n except StopIteration:\n pass\n if last_args:\n args = last_args\n if lastStraight:\n yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], args[4]])\n else:\n yield ("rrcurveto", [args[0], 0, args[1], args[2], args[4], args[3]])\n\n @staticmethod\n def rcurveline(args):\n l = len(args)\n if l < 8 or l % 6 != 2:\n raise ValueError(args)\n args, last_args = args[:-2], args[-2:]\n for args in _everyN(args, 6):\n yield ("rrcurveto", args)\n yield ("rlineto", last_args)\n\n @staticmethod\n def rlinecurve(args):\n l = len(args)\n if l < 8 or l % 2 != 0:\n raise ValueError(args)\n args, last_args = args[:-6], args[-6:]\n for args in _everyN(args, 2):\n yield ("rlineto", args)\n yield ("rrcurveto", last_args)\n\n\ndef _convertBlendOpToArgs(blendList):\n # args is list of blend op args. Since we are supporting\n # recursive blend op calls, some of these args may also\n # be a list of blend op args, and need to be converted before\n # we convert the current list.\n if any([isinstance(arg, list) for arg in blendList]):\n args = [\n i\n for e in blendList\n for i in (_convertBlendOpToArgs(e) if isinstance(e, list) else [e])\n ]\n else:\n args = blendList\n\n # We now know that blendList contains a blend op argument list, even if\n # some of the args are lists that each contain a blend op argument list.\n # Convert from:\n # [default font arg sequence x0,...,xn] + [delta tuple for x0] + ... 
+ [delta tuple for xn]\n # to:\n # [ [x0] + [delta tuple for x0],\n # ...,\n # [xn] + [delta tuple for xn] ]\n numBlends = args[-1]\n # Can't use args.pop() when the args are being used in a nested list\n # comprehension. See calling context\n args = args[:-1]\n\n l = len(args)\n numRegions = l // numBlends - 1\n if not (numBlends * (numRegions + 1) == l):\n raise ValueError(blendList)\n\n defaultArgs = [[arg] for arg in args[:numBlends]]\n deltaArgs = args[numBlends:]\n numDeltaValues = len(deltaArgs)\n deltaList = [\n deltaArgs[i : i + numRegions] for i in range(0, numDeltaValues, numRegions)\n ]\n blend_args = [a + b + [1] for a, b in zip(defaultArgs, deltaList)]\n return blend_args\n\n\ndef generalizeCommands(commands, ignoreErrors=False):\n result = []\n mapping = _GeneralizerDecombinerCommandsMap\n for op, args in commands:\n # First, generalize any blend args in the arg list.\n if any([isinstance(arg, list) for arg in args]):\n try:\n args = [\n n\n for arg in args\n for n in (\n _convertBlendOpToArgs(arg) if isinstance(arg, list) else [arg]\n )\n ]\n except ValueError:\n if ignoreErrors:\n # Store op as data, such that consumers of commands do not have to\n # deal with incorrect number of arguments.\n result.append(("", args))\n result.append(("", [op]))\n else:\n raise\n\n func = getattr(mapping, op, None)\n if func is None:\n result.append((op, args))\n continue\n try:\n for command in func(args):\n result.append(command)\n except ValueError:\n if ignoreErrors:\n # Store op as data, such that consumers of commands do not have to\n # deal with incorrect number of arguments.\n result.append(("", args))\n result.append(("", [op]))\n else:\n raise\n return result\n\n\ndef generalizeProgram(program, getNumRegions=None, **kwargs):\n return commandsToProgram(\n generalizeCommands(programToCommands(program, getNumRegions), **kwargs)\n )\n\n\ndef _categorizeVector(v):\n """\n Takes X,Y vector v and returns one of r, h, v, or 0 depending on which\n of X and/or Y are zero, plus tuple of nonzero ones. If both are zero,\n it returns a single zero still.\n\n >>> _categorizeVector((0,0))\n ('0', (0,))\n >>> _categorizeVector((1,0))\n ('h', (1,))\n >>> _categorizeVector((0,2))\n ('v', (2,))\n >>> _categorizeVector((1,2))\n ('r', (1, 2))\n """\n if not v[0]:\n if not v[1]:\n return "0", v[:1]\n else:\n return "v", v[1:]\n else:\n if not v[1]:\n return "h", v[:1]\n else:\n return "r", v\n\n\ndef _mergeCategories(a, b):\n if a == "0":\n return b\n if b == "0":\n return a\n if a == b:\n return a\n return None\n\n\ndef _negateCategory(a):\n if a == "h":\n return "v"\n if a == "v":\n return "h"\n assert a in "0r"\n return a\n\n\ndef _convertToBlendCmds(args):\n # return a list of blend commands, and\n # the remaining non-blended args, if any.\n num_args = len(args)\n stack_use = 0\n new_args = []\n i = 0\n while i < num_args:\n arg = args[i]\n i += 1\n if not isinstance(arg, list):\n new_args.append(arg)\n stack_use += 1\n else:\n prev_stack_use = stack_use\n # The arg is a tuple of blend values.\n # These are each (master 0,delta 1..delta n, 1)\n # Combine as many successive tuples as we can,\n # up to the max stack limit.\n num_sources = len(arg) - 1\n blendlist = [arg]\n stack_use += 1 + num_sources # 1 for the num_blends arg\n\n # if we are here, max stack is the CFF2 max stack.\n # I use the CFF2 max stack limit here rather than\n # the 'maxstack' chosen by the client, as the default\n # maxstack may have been used unintentionally. 
For all\n # the other operators, this just produces a little less\n # optimization, but here it puts a hard (and low) limit\n # on the number of source fonts that can be used.\n #\n # Make sure the stack depth does not exceed (maxstack - 1), so\n # that subroutinizer can insert subroutine calls at any point.\n while (\n (i < num_args)\n and isinstance(args[i], list)\n and stack_use + num_sources < maxStackLimit\n ):\n blendlist.append(args[i])\n i += 1\n stack_use += num_sources\n # blendList now contains as many single blend tuples as can be\n # combined without exceeding the CFF2 stack limit.\n num_blends = len(blendlist)\n # append the 'num_blends' default font values\n blend_args = []\n for arg in blendlist:\n blend_args.append(arg[0])\n for arg in blendlist:\n assert arg[-1] == 1\n blend_args.extend(arg[1:-1])\n blend_args.append(num_blends)\n new_args.append(blend_args)\n stack_use = prev_stack_use + num_blends\n\n return new_args\n\n\ndef _addArgs(a, b):\n if isinstance(b, list):\n if isinstance(a, list):\n if len(a) != len(b) or a[-1] != b[-1]:\n raise ValueError()\n return [_addArgs(va, vb) for va, vb in zip(a[:-1], b[:-1])] + [a[-1]]\n else:\n a, b = b, a\n if isinstance(a, list):\n assert a[-1] == 1\n return [_addArgs(a[0], b)] + a[1:]\n return a + b\n\n\ndef _argsStackUse(args):\n stackLen = 0\n maxLen = 0\n for arg in args:\n if type(arg) is list:\n # Blended arg\n maxLen = max(maxLen, stackLen + _argsStackUse(arg))\n stackLen += arg[-1]\n else:\n stackLen += 1\n return max(stackLen, maxLen)\n\n\ndef specializeCommands(\n commands,\n ignoreErrors=False,\n generalizeFirst=True,\n preserveTopology=False,\n maxstack=48,\n):\n # We perform several rounds of optimizations. They are carefully ordered and are:\n #\n # 0. Generalize commands.\n # This ensures that they are in our expected simple form, with each line/curve only\n # having arguments for one segment, and using the generic form (rlineto/rrcurveto).\n # If caller is sure the input is in this form, they can turn off generalization to\n # save time.\n #\n # 1. Combine successive rmoveto operations.\n #\n # 2. Specialize rmoveto/rlineto/rrcurveto operators into horizontal/vertical variants.\n # We specialize into some, made-up, variants as well, which simplifies following\n # passes.\n #\n # 3. Merge or delete redundant operations, to the extent requested.\n # OpenType spec declares point numbers in CFF undefined. As such, we happily\n # change topology. If client relies on point numbers (in GPOS anchors, or for\n # hinting purposes(what?)) they can turn this off.\n #\n # 4. Peephole optimization to revert back some of the h/v variants back into their\n # original "relative" operator (rline/rrcurveto) if that saves a byte.\n #\n # 5. Combine adjacent operators when possible, minding not to go over max stack size.\n #\n # 6. Resolve any remaining made-up operators into real operators.\n #\n # I have convinced myself that this produces optimal bytecode (except for, possibly\n # one byte each time maxstack size prohibits combining.) YMMV, but you'd be wrong. :-)\n # A dynamic-programming approach can do the same but would be significantly slower.\n #\n # 7. For any args which are blend lists, convert them to a blend command.\n\n # 0. Generalize commands.\n if generalizeFirst:\n commands = generalizeCommands(commands, ignoreErrors=ignoreErrors)\n else:\n commands = list(commands) # Make copy since we modify in-place later.\n\n # 1. 
Combine successive rmoveto operations.\n for i in range(len(commands) - 1, 0, -1):\n if "rmoveto" == commands[i][0] == commands[i - 1][0]:\n v1, v2 = commands[i - 1][1], commands[i][1]\n commands[i - 1] = (\n "rmoveto",\n [_addArgs(v1[0], v2[0]), _addArgs(v1[1], v2[1])],\n )\n del commands[i]\n\n # 2. Specialize rmoveto/rlineto/rrcurveto operators into horizontal/vertical variants.\n #\n # We, in fact, specialize into more, made-up, variants that special-case when both\n # X and Y components are zero. This simplifies the following optimization passes.\n # This case is rare, but OCD does not let me skip it.\n #\n # After this round, we will have four variants that use the following mnemonics:\n #\n # - 'r' for relative, ie. non-zero X and non-zero Y,\n # - 'h' for horizontal, ie. zero X and non-zero Y,\n # - 'v' for vertical, ie. non-zero X and zero Y,\n # - '0' for zeros, ie. zero X and zero Y.\n #\n # The '0' pseudo-operators are not part of the spec, but help simplify the following\n # optimization rounds. We resolve them at the end. So, after this, we will have four\n # moveto and four lineto variants:\n #\n # - 0moveto, 0lineto\n # - hmoveto, hlineto\n # - vmoveto, vlineto\n # - rmoveto, rlineto\n #\n # and sixteen curveto variants. For example, a '0hcurveto' operator means a curve\n # dx0,dy0,dx1,dy1,dx2,dy2,dx3,dy3 where dx0, dx1, and dy3 are zero but not dx3.\n # An 'rvcurveto' means dx3 is zero but not dx0,dy0,dy3.\n #\n # There are nine different variants of curves without the '0'. Those nine map exactly\n # to the existing curve variants in the spec: rrcurveto, and the four variants hhcurveto,\n # vvcurveto, hvcurveto, and vhcurveto each cover two cases, one with an odd number of\n # arguments and one without. Eg. an hhcurveto with an extra argument (odd number of\n # arguments) is in fact an rhcurveto. The operators in the spec are designed such that\n # all four of rhcurveto, rvcurveto, hrcurveto, and vrcurveto are encodable for one curve.\n #\n # Of the curve types with '0', the 00curveto is equivalent to a lineto variant. The rest\n # of the curve types with a 0 need to be encoded as a h or v variant. Ie. a '0' can be\n # thought of a "don't care" and can be used as either an 'h' or a 'v'. As such, we always\n # encode a number 0 as argument when we use a '0' variant. Later on, we can just substitute\n # the '0' with either 'h' or 'v' and it works.\n #\n # When we get to curve splines however, things become more complicated... XXX finish this.\n # There's one more complexity with splines. If one side of the spline is not horizontal or\n # vertical (or zero), ie. if it's 'r', then it limits which spline types we can encode.\n # Only hhcurveto and vvcurveto operators can encode a spline starting with 'r', and\n # only hvcurveto and vhcurveto operators can encode a spline ending with 'r'.\n # This limits our merge opportunities later.\n #\n for i in range(len(commands)):\n op, args = commands[i]\n\n if op in {"rmoveto", "rlineto"}:\n c, args = _categorizeVector(args)\n commands[i] = c + op[1:], args\n continue\n\n if op == "rrcurveto":\n c1, args1 = _categorizeVector(args[:2])\n c2, args2 = _categorizeVector(args[-2:])\n commands[i] = c1 + c2 + "curveto", args1 + args[2:4] + args2\n continue\n\n # 3. 
Merge or delete redundant operations, to the extent requested.\n #\n # TODO\n # A 0moveto that comes before all other path operations can be removed,\n # though I find conflicting evidence for this.\n #\n # TODO\n # "If hstem and vstem hints are both declared at the beginning of a\n # CharString, and this sequence is followed directly by the hintmask or\n # cntrmask operators, then the vstem hint operator (or, if applicable,\n # the vstemhm operator) need not be included."\n #\n # "The sequence and form of a CFF2 CharString program may be represented as:\n # {hs* vs* cm* hm* mt subpath}? {mt subpath}*"\n #\n # https://www.microsoft.com/typography/otspec/cff2charstr.htm#section3.1\n #\n # For Type2 CharStrings the sequence is:\n # w? {hs* vs* cm* hm* mt subpath}? {mt subpath}* endchar"\n\n # Some other redundancies change topology (point numbers).\n if not preserveTopology:\n for i in range(len(commands) - 1, -1, -1):\n op, args = commands[i]\n\n # A 00curveto is demoted to a (specialized) lineto.\n if op == "00curveto":\n assert len(args) == 4\n c, args = _categorizeVector(args[1:3])\n op = c + "lineto"\n commands[i] = op, args\n # and then...\n\n # A 0lineto can be deleted.\n if op == "0lineto":\n del commands[i]\n continue\n\n # Merge adjacent hlineto's and vlineto's.\n # In CFF2 charstrings from variable fonts, each\n # arg item may be a list of blendable values, one from\n # each source font.\n if i and op in {"hlineto", "vlineto"} and (op == commands[i - 1][0]):\n _, other_args = commands[i - 1]\n assert len(args) == 1 and len(other_args) == 1\n try:\n new_args = [_addArgs(args[0], other_args[0])]\n except ValueError:\n continue\n commands[i - 1] = (op, new_args)\n del commands[i]\n continue\n\n # 4. Peephole optimization to revert some of the h/v variants back into their\n # original "relative" operator (rlineto/rrcurveto) if that saves a byte.\n for i in range(1, len(commands) - 1):\n op, args = commands[i]\n prv, nxt = commands[i - 1][0], commands[i + 1][0]\n\n if op in {"0lineto", "hlineto", "vlineto"} and prv == nxt == "rlineto":\n assert len(args) == 1\n args = [0, args[0]] if op[0] == "v" else [args[0], 0]\n commands[i] = ("rlineto", args)\n continue\n\n if op[2:] == "curveto" and len(args) == 5 and prv == nxt == "rrcurveto":\n assert (op[0] == "r") ^ (op[1] == "r")\n if op[0] == "v":\n pos = 0\n elif op[0] != "r":\n pos = 1\n elif op[1] == "v":\n pos = 4\n else:\n pos = 5\n # Insert, while maintaining the type of args (can be tuple or list).\n args = args[:pos] + type(args)((0,)) + args[pos:]\n commands[i] = ("rrcurveto", args)\n continue\n\n # 5. 
Combine adjacent operators when possible, minding not to go over max stack size.\n stackUse = _argsStackUse(commands[-1][1]) if commands else 0\n for i in range(len(commands) - 1, 0, -1):\n op1, args1 = commands[i - 1]\n op2, args2 = commands[i]\n new_op = None\n\n # Merge logic...\n if {op1, op2} <= {"rlineto", "rrcurveto"}:\n if op1 == op2:\n new_op = op1\n else:\n l = len(args2)\n if op2 == "rrcurveto" and l == 6:\n new_op = "rlinecurve"\n elif l == 2:\n new_op = "rcurveline"\n\n elif (op1, op2) in {("rlineto", "rlinecurve"), ("rrcurveto", "rcurveline")}:\n new_op = op2\n\n elif {op1, op2} == {"vlineto", "hlineto"}:\n new_op = op1\n\n elif "curveto" == op1[2:] == op2[2:]:\n d0, d1 = op1[:2]\n d2, d3 = op2[:2]\n\n if d1 == "r" or d2 == "r" or d0 == d3 == "r":\n continue\n\n d = _mergeCategories(d1, d2)\n if d is None:\n continue\n if d0 == "r":\n d = _mergeCategories(d, d3)\n if d is None:\n continue\n new_op = "r" + d + "curveto"\n elif d3 == "r":\n d0 = _mergeCategories(d0, _negateCategory(d))\n if d0 is None:\n continue\n new_op = d0 + "r" + "curveto"\n else:\n d0 = _mergeCategories(d0, d3)\n if d0 is None:\n continue\n new_op = d0 + d + "curveto"\n\n # Make sure the stack depth does not exceed (maxstack - 1), so\n # that subroutinizer can insert subroutine calls at any point.\n args1StackUse = _argsStackUse(args1)\n combinedStackUse = max(args1StackUse, len(args1) + stackUse)\n if new_op and combinedStackUse < maxstack:\n commands[i - 1] = (new_op, args1 + args2)\n del commands[i]\n stackUse = combinedStackUse\n else:\n stackUse = args1StackUse\n\n # 6. Resolve any remaining made-up operators into real operators.\n for i in range(len(commands)):\n op, args = commands[i]\n\n if op in {"0moveto", "0lineto"}:\n commands[i] = "h" + op[1:], args\n continue\n\n if op[2:] == "curveto" and op[:2] not in {"rr", "hh", "vv", "vh", "hv"}:\n l = len(args)\n\n op0, op1 = op[:2]\n if (op0 == "r") ^ (op1 == "r"):\n assert l % 2 == 1\n if op0 == "0":\n op0 = "h"\n if op1 == "0":\n op1 = "h"\n if op0 == "r":\n op0 = op1\n if op1 == "r":\n op1 = _negateCategory(op0)\n assert {op0, op1} <= {"h", "v"}, (op0, op1)\n\n if l % 2:\n if op0 != op1: # vhcurveto / hvcurveto\n if (op0 == "h") ^ (l % 8 == 1):\n # Swap last two args order\n args = args[:-2] + args[-1:] + args[-2:-1]\n else: # hhcurveto / vvcurveto\n if op0 == "h": # hhcurveto\n # Swap first two args order\n args = args[1:2] + args[:1] + args[2:]\n\n commands[i] = op0 + op1 + "curveto", args\n continue\n\n # 7. 
For any series of args which are blend lists, convert the series to a single blend arg.\n for i in range(len(commands)):\n op, args = commands[i]\n if any(isinstance(arg, list) for arg in args):\n commands[i] = op, _convertToBlendCmds(args)\n\n return commands\n\n\ndef specializeProgram(program, getNumRegions=None, **kwargs):\n return commandsToProgram(\n specializeCommands(programToCommands(program, getNumRegions), **kwargs)\n )\n\n\nif __name__ == "__main__":\n import sys\n\n if len(sys.argv) == 1:\n import doctest\n\n sys.exit(doctest.testmod().failed)\n\n import argparse\n\n parser = argparse.ArgumentParser(\n "fonttools cffLib.specializer",\n description="CFF CharString generalizer/specializer",\n )\n parser.add_argument("program", metavar="command", nargs="*", help="Commands.")\n parser.add_argument(\n "--num-regions",\n metavar="NumRegions",\n nargs="*",\n default=None,\n help="Number of variable-font regions for blend operations.",\n )\n parser.add_argument(\n "--font",\n metavar="FONTFILE",\n default=None,\n help="CFF2 font to specialize.",\n )\n parser.add_argument(\n "-o",\n "--output-file",\n type=str,\n help="Output font file name.",\n )\n\n options = parser.parse_args(sys.argv[1:])\n\n if options.program:\n getNumRegions = (\n None\n if options.num_regions is None\n else lambda vsIndex: int(\n options.num_regions[0 if vsIndex is None else vsIndex]\n )\n )\n\n program = stringToProgram(options.program)\n print("Program:")\n print(programToString(program))\n commands = programToCommands(program, getNumRegions)\n print("Commands:")\n print(commands)\n program2 = commandsToProgram(commands)\n print("Program from commands:")\n print(programToString(program2))\n assert program == program2\n print("Generalized program:")\n print(programToString(generalizeProgram(program, getNumRegions)))\n print("Specialized program:")\n print(programToString(specializeProgram(program, getNumRegions)))\n\n if options.font:\n from fontTools.ttLib import TTFont\n\n font = TTFont(options.font)\n cff2 = font["CFF2"].cff.topDictIndex[0]\n charstrings = cff2.CharStrings\n for glyphName in charstrings.keys():\n charstring = charstrings[glyphName]\n charstring.decompile()\n getNumRegions = charstring.private.getNumRegions\n charstring.program = specializeProgram(\n charstring.program, getNumRegions, maxstack=maxStackLimit\n )\n\n if options.output_file is None:\n from fontTools.misc.cliTools import makeOutputFileName\n\n outfile = makeOutputFileName(\n options.font, overWrite=True, suffix=".specialized"\n )\n else:\n outfile = options.output_file\n if outfile:\n print("Saving", outfile)\n font.save(outfile)\n
.venv\Lib\site-packages\fontTools\cffLib\specializer.py
specializer.py
Python
33,536
0.95
0.228695
0.2
awesome-app
988
2025-02-13T02:08:09.459465
GPL-3.0
false
968d613df01143dfdc4ddbaaaafa1969
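A minimal usage sketch for the specializer module above, using only its public helpers; the charstring program is a made-up example, not taken from any real font:

from fontTools.cffLib.specializer import (
    programToCommands,
    commandsToProgram,
    generalizeProgram,
    specializeProgram,
)

# Hypothetical Type 2 charstring program: a move followed by two
# axis-aligned lines.
program = [100, 100, "rmoveto", 50, 0, "rlineto", 0, 50, "rlineto"]

# The intermediate command representation round-trips losslessly.
commands = programToCommands(program)
assert commandsToProgram(commands) == program

# Specializing categorizes each vector as r/h/v/0 and merges adjacent
# operators, so the two rlinetos collapse into one alternating hlineto.
specialized = specializeProgram(program)
print(specialized)  # expected: [100, 100, 'rmoveto', 50, 50, 'hlineto']

# Generalizing expands everything back to the simple rlineto/rrcurveto form.
assert generalizeProgram(specialized) == program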
from fontTools.misc.psCharStrings import (\n SimpleT2Decompiler,\n T2WidthExtractor,\n calcSubrBias,\n)\n\n\ndef _uniq_sort(l):\n return sorted(set(l))\n\n\nclass StopHintCountEvent(Exception):\n pass\n\n\nclass _DesubroutinizingT2Decompiler(SimpleT2Decompiler):\n stop_hintcount_ops = (\n "op_hintmask",\n "op_cntrmask",\n "op_rmoveto",\n "op_hmoveto",\n "op_vmoveto",\n )\n\n def __init__(self, localSubrs, globalSubrs, private=None):\n SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private)\n\n def execute(self, charString):\n self.need_hintcount = True # until proven otherwise\n for op_name in self.stop_hintcount_ops:\n setattr(self, op_name, self.stop_hint_count)\n\n if hasattr(charString, "_desubroutinized"):\n # If a charstring has already been desubroutinized, we will still\n # need to execute it if we need to count hints in order to\n # compute the byte length for mask arguments, and haven't finished\n # counting hint pairs.\n if self.need_hintcount and self.callingStack:\n try:\n SimpleT2Decompiler.execute(self, charString)\n except StopHintCountEvent:\n del self.callingStack[-1]\n return\n\n charString._patches = []\n SimpleT2Decompiler.execute(self, charString)\n desubroutinized = charString.program[:]\n for idx, expansion in reversed(charString._patches):\n assert idx >= 2\n assert desubroutinized[idx - 1] in [\n "callsubr",\n "callgsubr",\n ], desubroutinized[idx - 1]\n assert type(desubroutinized[idx - 2]) == int\n if expansion[-1] == "return":\n expansion = expansion[:-1]\n desubroutinized[idx - 2 : idx] = expansion\n if not self.private.in_cff2:\n if "endchar" in desubroutinized:\n # Cut off after first endchar\n desubroutinized = desubroutinized[\n : desubroutinized.index("endchar") + 1\n ]\n\n charString._desubroutinized = desubroutinized\n del charString._patches\n\n def op_callsubr(self, index):\n subr = self.localSubrs[self.operandStack[-1] + self.localBias]\n SimpleT2Decompiler.op_callsubr(self, index)\n self.processSubr(index, subr)\n\n def op_callgsubr(self, index):\n subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]\n SimpleT2Decompiler.op_callgsubr(self, index)\n self.processSubr(index, subr)\n\n def stop_hint_count(self, *args):\n self.need_hintcount = False\n for op_name in self.stop_hintcount_ops:\n setattr(self, op_name, None)\n cs = self.callingStack[-1]\n if hasattr(cs, "_desubroutinized"):\n raise StopHintCountEvent()\n\n def op_hintmask(self, index):\n SimpleT2Decompiler.op_hintmask(self, index)\n if self.need_hintcount:\n self.stop_hint_count()\n\n def processSubr(self, index, subr):\n cs = self.callingStack[-1]\n if not hasattr(cs, "_desubroutinized"):\n cs._patches.append((index, subr._desubroutinized))\n\n\ndef desubroutinize(cff):\n for fontName in cff.fontNames:\n font = cff[fontName]\n cs = font.CharStrings\n for c in cs.values():\n c.decompile()\n subrs = getattr(c.private, "Subrs", [])\n decompiler = _DesubroutinizingT2Decompiler(subrs, c.globalSubrs, c.private)\n decompiler.execute(c)\n c.program = c._desubroutinized\n del c._desubroutinized\n # Delete all the local subrs\n if hasattr(font, "FDArray"):\n for fd in font.FDArray:\n pd = fd.Private\n if hasattr(pd, "Subrs"):\n del pd.Subrs\n if "Subrs" in pd.rawDict:\n del pd.rawDict["Subrs"]\n else:\n pd = font.Private\n if hasattr(pd, "Subrs"):\n del pd.Subrs\n if "Subrs" in pd.rawDict:\n del pd.rawDict["Subrs"]\n # as well as the global subrs\n cff.GlobalSubrs.clear()\n\n\nclass _MarkingT2Decompiler(SimpleT2Decompiler):\n def __init__(self, localSubrs, globalSubrs, 
private):\n SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private)\n for subrs in [localSubrs, globalSubrs]:\n if subrs and not hasattr(subrs, "_used"):\n subrs._used = set()\n\n def op_callsubr(self, index):\n self.localSubrs._used.add(self.operandStack[-1] + self.localBias)\n SimpleT2Decompiler.op_callsubr(self, index)\n\n def op_callgsubr(self, index):\n self.globalSubrs._used.add(self.operandStack[-1] + self.globalBias)\n SimpleT2Decompiler.op_callgsubr(self, index)\n\n\nclass _DehintingT2Decompiler(T2WidthExtractor):\n class Hints(object):\n def __init__(self):\n # Whether calling this charstring produces any hint stems\n # Note that if a charstring starts with hintmask, it will\n # have has_hint set to True, because it *might* produce an\n # implicit vstem if called under certain conditions.\n self.has_hint = False\n # Index to start at to drop all hints\n self.last_hint = 0\n # Index up to which we know more hints are possible.\n # Only relevant if status is 0 or 1.\n self.last_checked = 0\n # The status means:\n # 0: after dropping hints, this charstring is empty\n # 1: after dropping hints, there may be more hints\n # continuing after this, or there might be\n # other things. Not clear yet.\n # 2: no more hints possible after this charstring\n self.status = 0\n # Has hintmask instructions; not recursive\n self.has_hintmask = False\n # List of indices of calls to empty subroutines to remove.\n self.deletions = []\n\n pass\n\n def __init__(\n self, css, localSubrs, globalSubrs, nominalWidthX, defaultWidthX, private=None\n ):\n self._css = css\n T2WidthExtractor.__init__(\n self, localSubrs, globalSubrs, nominalWidthX, defaultWidthX\n )\n self.private = private\n\n def execute(self, charString):\n old_hints = charString._hints if hasattr(charString, "_hints") else None\n charString._hints = self.Hints()\n\n T2WidthExtractor.execute(self, charString)\n\n hints = charString._hints\n\n if hints.has_hint or hints.has_hintmask:\n self._css.add(charString)\n\n if hints.status != 2:\n # Check from last_check, make sure we didn't have any operators.\n for i in range(hints.last_checked, len(charString.program) - 1):\n if isinstance(charString.program[i], str):\n hints.status = 2\n break\n else:\n hints.status = 1 # There's *something* here\n hints.last_checked = len(charString.program)\n\n if old_hints:\n assert hints.__dict__ == old_hints.__dict__\n\n def op_callsubr(self, index):\n subr = self.localSubrs[self.operandStack[-1] + self.localBias]\n T2WidthExtractor.op_callsubr(self, index)\n self.processSubr(index, subr)\n\n def op_callgsubr(self, index):\n subr = self.globalSubrs[self.operandStack[-1] + self.globalBias]\n T2WidthExtractor.op_callgsubr(self, index)\n self.processSubr(index, subr)\n\n def op_hstem(self, index):\n T2WidthExtractor.op_hstem(self, index)\n self.processHint(index)\n\n def op_vstem(self, index):\n T2WidthExtractor.op_vstem(self, index)\n self.processHint(index)\n\n def op_hstemhm(self, index):\n T2WidthExtractor.op_hstemhm(self, index)\n self.processHint(index)\n\n def op_vstemhm(self, index):\n T2WidthExtractor.op_vstemhm(self, index)\n self.processHint(index)\n\n def op_hintmask(self, index):\n rv = T2WidthExtractor.op_hintmask(self, index)\n self.processHintmask(index)\n return rv\n\n def op_cntrmask(self, index):\n rv = T2WidthExtractor.op_cntrmask(self, index)\n self.processHintmask(index)\n return rv\n\n def processHintmask(self, index):\n cs = self.callingStack[-1]\n hints = cs._hints\n hints.has_hintmask = True\n if hints.status != 2:\n # Check 
from last_check, see if we may be an implicit vstem\n for i in range(hints.last_checked, index - 1):\n if isinstance(cs.program[i], str):\n hints.status = 2\n break\n else:\n # We are an implicit vstem\n hints.has_hint = True\n hints.last_hint = index + 1\n hints.status = 0\n hints.last_checked = index + 1\n\n def processHint(self, index):\n cs = self.callingStack[-1]\n hints = cs._hints\n hints.has_hint = True\n hints.last_hint = index\n hints.last_checked = index\n\n def processSubr(self, index, subr):\n cs = self.callingStack[-1]\n hints = cs._hints\n subr_hints = subr._hints\n\n # Check from last_check, make sure we didn't have\n # any operators.\n if hints.status != 2:\n for i in range(hints.last_checked, index - 1):\n if isinstance(cs.program[i], str):\n hints.status = 2\n break\n hints.last_checked = index\n\n if hints.status != 2:\n if subr_hints.has_hint:\n hints.has_hint = True\n\n # Decide where to chop off from\n if subr_hints.status == 0:\n hints.last_hint = index\n else:\n hints.last_hint = index - 2 # Leave the subr call in\n\n elif subr_hints.status == 0:\n hints.deletions.append(index)\n\n hints.status = max(hints.status, subr_hints.status)\n\n\ndef _cs_subset_subroutines(charstring, subrs, gsubrs):\n p = charstring.program\n for i in range(1, len(p)):\n if p[i] == "callsubr":\n assert isinstance(p[i - 1], int)\n p[i - 1] = subrs._used.index(p[i - 1] + subrs._old_bias) - subrs._new_bias\n elif p[i] == "callgsubr":\n assert isinstance(p[i - 1], int)\n p[i - 1] = (\n gsubrs._used.index(p[i - 1] + gsubrs._old_bias) - gsubrs._new_bias\n )\n\n\ndef _cs_drop_hints(charstring):\n hints = charstring._hints\n\n if hints.deletions:\n p = charstring.program\n for idx in reversed(hints.deletions):\n del p[idx - 2 : idx]\n\n if hints.has_hint:\n assert not hints.deletions or hints.last_hint <= hints.deletions[0]\n charstring.program = charstring.program[hints.last_hint :]\n if not charstring.program:\n # TODO CFF2 no need for endchar.\n charstring.program.append("endchar")\n if hasattr(charstring, "width"):\n # Insert width back if needed\n if charstring.width != charstring.private.defaultWidthX:\n # For CFF2 charstrings, this should never happen\n assert (\n charstring.private.defaultWidthX is not None\n ), "CFF2 CharStrings must not have an initial width value"\n charstring.program.insert(\n 0, charstring.width - charstring.private.nominalWidthX\n )\n\n if hints.has_hintmask:\n i = 0\n p = charstring.program\n while i < len(p):\n if p[i] in ["hintmask", "cntrmask"]:\n assert i + 1 <= len(p)\n del p[i : i + 2]\n continue\n i += 1\n\n assert len(charstring.program)\n\n del charstring._hints\n\n\ndef remove_hints(cff, *, removeUnusedSubrs: bool = True):\n for fontname in cff.keys():\n font = cff[fontname]\n cs = font.CharStrings\n # This can be tricky, but doesn't have to. What we do is:\n #\n # - Run all used glyph charstrings and recurse into subroutines,\n # - For each charstring (including subroutines), if it has any\n # of the hint stem operators, we mark it as such.\n # Upon returning, for each charstring we note all the\n # subroutine calls it makes that (recursively) contain a stem,\n # - Dropping hinting then consists of the following two ops:\n # * Drop the piece of the program in each charstring before the\n # last call to a stem op or a stem-calling subroutine,\n # * Drop all hintmask operations.\n # - It's trickier... A hintmask right after hints and a few numbers\n # will act as an implicit vstemhm. 
As such, we track whether\n # we have seen any non-hint operators so far and do the right\n # thing, recursively... Good luck understanding that :(\n css = set()\n for c in cs.values():\n c.decompile()\n subrs = getattr(c.private, "Subrs", [])\n decompiler = _DehintingT2Decompiler(\n css,\n subrs,\n c.globalSubrs,\n c.private.nominalWidthX,\n c.private.defaultWidthX,\n c.private,\n )\n decompiler.execute(c)\n c.width = decompiler.width\n for charstring in css:\n _cs_drop_hints(charstring)\n del css\n\n # Drop font-wide hinting values\n all_privs = []\n if hasattr(font, "FDArray"):\n all_privs.extend(fd.Private for fd in font.FDArray)\n else:\n all_privs.append(font.Private)\n for priv in all_privs:\n for k in [\n "BlueValues",\n "OtherBlues",\n "FamilyBlues",\n "FamilyOtherBlues",\n "BlueScale",\n "BlueShift",\n "BlueFuzz",\n "StemSnapH",\n "StemSnapV",\n "StdHW",\n "StdVW",\n "ForceBold",\n "LanguageGroup",\n "ExpansionFactor",\n ]:\n if hasattr(priv, k):\n setattr(priv, k, None)\n if removeUnusedSubrs:\n remove_unused_subroutines(cff)\n\n\ndef _pd_delete_empty_subrs(private_dict):\n if hasattr(private_dict, "Subrs") and not private_dict.Subrs:\n if "Subrs" in private_dict.rawDict:\n del private_dict.rawDict["Subrs"]\n del private_dict.Subrs\n\n\ndef remove_unused_subroutines(cff):\n for fontname in cff.keys():\n font = cff[fontname]\n cs = font.CharStrings\n # Renumber subroutines to remove unused ones\n\n # Mark all used subroutines\n for c in cs.values():\n subrs = getattr(c.private, "Subrs", [])\n decompiler = _MarkingT2Decompiler(subrs, c.globalSubrs, c.private)\n decompiler.execute(c)\n\n all_subrs = [font.GlobalSubrs]\n if hasattr(font, "FDArray"):\n all_subrs.extend(\n fd.Private.Subrs\n for fd in font.FDArray\n if hasattr(fd.Private, "Subrs") and fd.Private.Subrs\n )\n elif hasattr(font.Private, "Subrs") and font.Private.Subrs:\n all_subrs.append(font.Private.Subrs)\n\n subrs = set(subrs) # Remove duplicates\n\n # Prepare\n for subrs in all_subrs:\n if not hasattr(subrs, "_used"):\n subrs._used = set()\n subrs._used = _uniq_sort(subrs._used)\n subrs._old_bias = calcSubrBias(subrs)\n subrs._new_bias = calcSubrBias(subrs._used)\n\n # Renumber glyph charstrings\n for c in cs.values():\n subrs = getattr(c.private, "Subrs", None)\n _cs_subset_subroutines(c, subrs, font.GlobalSubrs)\n\n # Renumber subroutines themselves\n for subrs in all_subrs:\n if subrs == font.GlobalSubrs:\n if not hasattr(font, "FDArray") and hasattr(font.Private, "Subrs"):\n local_subrs = font.Private.Subrs\n elif (\n hasattr(font, "FDArray")\n and len(font.FDArray) == 1\n and hasattr(font.FDArray[0].Private, "Subrs")\n ):\n # Technically we shouldn't do this. But I've run into fonts that do it.\n local_subrs = font.FDArray[0].Private.Subrs\n else:\n local_subrs = None\n else:\n local_subrs = subrs\n\n subrs.items = [subrs.items[i] for i in subrs._used]\n if hasattr(subrs, "file"):\n del subrs.file\n if hasattr(subrs, "offsets"):\n del subrs.offsets\n\n for subr in subrs.items:\n _cs_subset_subroutines(subr, local_subrs, font.GlobalSubrs)\n\n # Delete local SubrsIndex if empty\n if hasattr(font, "FDArray"):\n for fd in font.FDArray:\n _pd_delete_empty_subrs(fd.Private)\n else:\n _pd_delete_empty_subrs(font.Private)\n\n # Cleanup\n for subrs in all_subrs:\n del subrs._used, subrs._old_bias, subrs._new_bias\n
.venv\Lib\site-packages\fontTools\cffLib\transforms.py
transforms.py
Python
17,861
0.95
0.253061
0.132212
node-utils
246
2024-12-02T03:01:50.954349
MIT
false
f60e50bfeff2328ff04f80e627d0636b
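A short sketch of how the transforms above are typically driven; the font paths are hypothetical, and each function mutates the CFFFontSet in place:

from fontTools.ttLib import TTFont
from fontTools.cffLib.transforms import desubroutinize, remove_hints

font = TTFont("MyFont.otf")  # hypothetical CFF-flavoured input
cff = font["CFF "].cff

# Inline every callsubr/callgsubr and delete the local and global
# subroutine indexes.
desubroutinize(cff)

# Strip stem hints and hintmask/cntrmask operators; by default this also
# prunes any subroutines that became unused (removeUnusedSubrs=True).
remove_hints(cff)

font.save("MyFont.dehinted.otf")  # hypothetical output path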
# -*- coding: utf-8 -*-\n\n"""T2CharString glyph width optimizer.\n\nCFF glyphs whose width equals the CFF Private dictionary's ``defaultWidthX``\nvalue do not need to specify their width in their charstring, saving bytes.\nThis module determines the optimum ``defaultWidthX`` and ``nominalWidthX``\nvalues for a font, when provided with a list of glyph widths."""\n\nfrom fontTools.ttLib import TTFont\nfrom collections import defaultdict\nfrom operator import add\nfrom functools import reduce\n\n\n__all__ = ["optimizeWidths", "main"]\n\n\nclass missingdict(dict):\n def __init__(self, missing_func):\n self.missing_func = missing_func\n\n def __missing__(self, v):\n return self.missing_func(v)\n\n\ndef cumSum(f, op=add, start=0, decreasing=False):\n keys = sorted(f.keys())\n minx, maxx = keys[0], keys[-1]\n\n total = reduce(op, f.values(), start)\n\n if decreasing:\n missing = lambda x: start if x > maxx else total\n domain = range(maxx, minx - 1, -1)\n else:\n missing = lambda x: start if x < minx else total\n domain = range(minx, maxx + 1)\n\n out = missingdict(missing)\n\n v = start\n for x in domain:\n v = op(v, f[x])\n out[x] = v\n\n return out\n\n\ndef byteCost(widths, default, nominal):\n if not hasattr(widths, "items"):\n d = defaultdict(int)\n for w in widths:\n d[w] += 1\n widths = d\n\n cost = 0\n for w, freq in widths.items():\n if w == default:\n continue\n diff = abs(w - nominal)\n if diff <= 107:\n cost += freq\n elif diff <= 1131:\n cost += freq * 2\n else:\n cost += freq * 5\n return cost\n\n\ndef optimizeWidthsBruteforce(widths):\n """Bruteforce version. Veeeeeeeeeeeeeeeeery slow. Only works for the smallest of fonts."""\n\n d = defaultdict(int)\n for w in widths:\n d[w] += 1\n\n # Maximum number of bytes using default can possibly save\n maxDefaultAdvantage = 5 * max(d.values())\n\n minw, maxw = min(widths), max(widths)\n domain = list(range(minw, maxw + 1))\n\n bestCostWithoutDefault = min(byteCost(widths, None, nominal) for nominal in domain)\n\n bestCost = len(widths) * 5 + 1\n for nominal in domain:\n if byteCost(widths, None, nominal) > bestCost + maxDefaultAdvantage:\n continue\n for default in domain:\n cost = byteCost(widths, default, nominal)\n if cost < bestCost:\n bestCost = cost\n bestDefault = default\n bestNominal = nominal\n\n return bestDefault, bestNominal\n\n\ndef optimizeWidths(widths):\n """Given a list of glyph widths, or dictionary mapping glyph width to number of\n glyphs having that width, returns a tuple of best CFF default and nominal glyph widths.\n\n This algorithm is linear in UPEM+numGlyphs."""\n\n if not hasattr(widths, "items"):\n d = defaultdict(int)\n for w in widths:\n d[w] += 1\n widths = d\n\n keys = sorted(widths.keys())\n minw, maxw = keys[0], keys[-1]\n domain = list(range(minw, maxw + 1))\n\n # Cumulative sum/max forward/backward.\n cumFrqU = cumSum(widths, op=add)\n cumMaxU = cumSum(widths, op=max)\n cumFrqD = cumSum(widths, op=add, decreasing=True)\n cumMaxD = cumSum(widths, op=max, decreasing=True)\n\n # Cost per nominal choice, without default consideration.\n nomnCostU = missingdict(\n lambda x: cumFrqU[x] + cumFrqU[x - 108] + cumFrqU[x - 1132] * 3\n )\n nomnCostD = missingdict(\n lambda x: cumFrqD[x] + cumFrqD[x + 108] + cumFrqD[x + 1132] * 3\n )\n nomnCost = missingdict(lambda x: nomnCostU[x] + nomnCostD[x] - widths[x])\n\n # Cost-saving per nominal choice, by best default choice.\n dfltCostU = missingdict(\n lambda x: max(cumMaxU[x], cumMaxU[x - 108] * 2, cumMaxU[x - 1132] * 5)\n )\n dfltCostD = missingdict(\n lambda x: max(cumMaxD[x], 
cumMaxD[x + 108] * 2, cumMaxD[x + 1132] * 5)\n )\n dfltCost = missingdict(lambda x: max(dfltCostU[x], dfltCostD[x]))\n\n # Combined cost per nominal choice.\n bestCost = missingdict(lambda x: nomnCost[x] - dfltCost[x])\n\n # Best nominal.\n nominal = min(domain, key=lambda x: bestCost[x])\n\n # Work back the best default.\n bestC = bestCost[nominal]\n dfltC = nomnCost[nominal] - bestCost[nominal]\n ends = []\n if dfltC == dfltCostU[nominal]:\n starts = [nominal, nominal - 108, nominal - 1132]\n for start in starts:\n while cumMaxU[start] and cumMaxU[start] == cumMaxU[start - 1]:\n start -= 1\n ends.append(start)\n else:\n starts = [nominal, nominal + 108, nominal + 1132]\n for start in starts:\n while cumMaxD[start] and cumMaxD[start] == cumMaxD[start + 1]:\n start += 1\n ends.append(start)\n default = min(ends, key=lambda default: byteCost(widths, default, nominal))\n\n return default, nominal\n\n\ndef main(args=None):\n """Calculate optimum defaultWidthX/nominalWidthX values"""\n\n import argparse\n\n parser = argparse.ArgumentParser(\n "fonttools cffLib.width",\n description=main.__doc__,\n )\n parser.add_argument(\n "inputs", metavar="FILE", type=str, nargs="+", help="Input TTF files"\n )\n parser.add_argument(\n "-b",\n "--brute-force",\n dest="brute",\n action="store_true",\n help="Use brute-force approach (VERY slow)",\n )\n\n args = parser.parse_args(args)\n\n for fontfile in args.inputs:\n font = TTFont(fontfile)\n hmtx = font["hmtx"]\n widths = [m[0] for m in hmtx.metrics.values()]\n if args.brute:\n default, nominal = optimizeWidthsBruteforce(widths)\n else:\n default, nominal = optimizeWidths(widths)\n print(\n "glyphs=%d default=%d nominal=%d byteCost=%d"\n % (len(widths), default, nominal, byteCost(widths, default, nominal))\n )\n\n\nif __name__ == "__main__":\n import sys\n\n if len(sys.argv) == 1:\n import doctest\n\n sys.exit(doctest.testmod().failed)\n main()\n
.venv\Lib\site-packages\fontTools\cffLib\width.py
width.py
Python
6,284
0.95
0.17619
0.049383
python-kit
661
2024-06-26T04:21:07.022370
Apache-2.0
false
857d8c8c27f345b3580f0ab86326b47f
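A quick sketch of the optimizer above on synthetic data. The width distribution is invented, but the byte-cost model is the one encoded in byteCost: 0 bytes at the default width, then 1, 2, or 5 bytes depending on the distance from the nominal width (<=107, <=1131, or more):

from fontTools.cffLib.width import optimizeWidths, byteCost

# Invented distribution: most glyphs are 600 units wide, plus some outliers.
widths = [600] * 900 + [300] * 50 + [1200] * 50

default, nominal = optimizeWidths(widths)

# With 90% of glyphs at width 600, defaultWidthX is expected to land on 600
# so those charstrings can omit their width operand entirely; nominalWidthX
# then only has to keep the outliers' deltas cheap to encode.
print(default, nominal, byteCost(widths, default, nominal))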
\n\n
.venv\Lib\site-packages\fontTools\cffLib\__pycache__\CFF2ToCFF.cpython-313.pyc
CFF2ToCFF.cpython-313.pyc
Other
8,235
0.95
0
0
react-lib
664
2024-03-25T09:04:58.522826
Apache-2.0
false
7845ccf5d795e90854a24bf462a49d86
\n\n
.venv\Lib\site-packages\fontTools\cffLib\__pycache__\CFFToCFF2.cpython-313.pyc
CFFToCFF2.cpython-313.pyc
Other
10,449
0.95
0
0
node-utils
862
2023-10-02T21:10:06.787245
Apache-2.0
false
fab9d0efba232339668434ee60c0fa97
\n\n
.venv\Lib\site-packages\fontTools\cffLib\__pycache__\specializer.cpython-313.pyc
specializer.cpython-313.pyc
Other
29,106
0.8
0.029412
0
node-utils
20
2025-02-02T18:24:29.473366
GPL-3.0
false
905c2522b22e196bffb04ac8fb45062a
\n\n
.venv\Lib\site-packages\fontTools\cffLib\__pycache__\transforms.cpython-313.pyc
transforms.cpython-313.pyc
Other
22,224
0.8
0
0.01005
react-lib
468
2025-02-12T07:11:55.021263
BSD-3-Clause
false
06d906b97c2ab7dc47eb7a3629f19947
\n\n
.venv\Lib\site-packages\fontTools\cffLib\__pycache__\width.cpython-313.pyc
width.cpython-313.pyc
Other
9,664
0.8
0.017391
0
awesome-app
908
2023-10-05T23:28:33.402537
BSD-3-Clause
false
002336493a6325fa443ec6d8dc6fd3a9
"""\ncolorLib.builder: Build COLR/CPAL tables from scratch\n\n"""\n\nimport collections\nimport copy\nimport enum\nfrom functools import partial\nfrom math import ceil, log\nfrom typing import (\n Any,\n Dict,\n Generator,\n Iterable,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n Type,\n TypeVar,\n Union,\n)\nfrom fontTools.misc.arrayTools import intRect\nfrom fontTools.misc.fixedTools import fixedToFloat\nfrom fontTools.misc.treeTools import build_n_ary_tree\nfrom fontTools.ttLib.tables import C_O_L_R_\nfrom fontTools.ttLib.tables import C_P_A_L_\nfrom fontTools.ttLib.tables import _n_a_m_e\nfrom fontTools.ttLib.tables import otTables as ot\nfrom fontTools.ttLib.tables.otTables import ExtendMode, CompositeMode\nfrom .errors import ColorLibError\nfrom .geometry import round_start_circle_stable_containment\nfrom .table_builder import BuildCallback, TableBuilder\n\n\n# TODO move type aliases to colorLib.types?\nT = TypeVar("T")\n_Kwargs = Mapping[str, Any]\n_PaintInput = Union[int, _Kwargs, ot.Paint, Tuple[str, "_PaintInput"]]\n_PaintInputList = Sequence[_PaintInput]\n_ColorGlyphsDict = Dict[str, Union[_PaintInputList, _PaintInput]]\n_ColorGlyphsV0Dict = Dict[str, Sequence[Tuple[str, int]]]\n_ClipBoxInput = Union[\n Tuple[int, int, int, int, int], # format 1, variable\n Tuple[int, int, int, int], # format 0, non-variable\n ot.ClipBox,\n]\n\n\nMAX_PAINT_COLR_LAYER_COUNT = 255\n_DEFAULT_ALPHA = 1.0\n_MAX_REUSE_LEN = 32\n\n\ndef _beforeBuildPaintRadialGradient(paint, source):\n x0 = source["x0"]\n y0 = source["y0"]\n r0 = source["r0"]\n x1 = source["x1"]\n y1 = source["y1"]\n r1 = source["r1"]\n\n # TODO apparently no builder_test confirms this works (?)\n\n # avoid abrupt change after rounding when c0 is near c1's perimeter\n c = round_start_circle_stable_containment((x0, y0), r0, (x1, y1), r1)\n x0, y0 = c.centre\n r0 = c.radius\n\n # update source to ensure paint is built with corrected values\n source["x0"] = x0\n source["y0"] = y0\n source["r0"] = r0\n source["x1"] = x1\n source["y1"] = y1\n source["r1"] = r1\n\n return paint, source\n\n\ndef _defaultColorStop():\n colorStop = ot.ColorStop()\n colorStop.Alpha = _DEFAULT_ALPHA\n return colorStop\n\n\ndef _defaultVarColorStop():\n colorStop = ot.VarColorStop()\n colorStop.Alpha = _DEFAULT_ALPHA\n return colorStop\n\n\ndef _defaultColorLine():\n colorLine = ot.ColorLine()\n colorLine.Extend = ExtendMode.PAD\n return colorLine\n\n\ndef _defaultVarColorLine():\n colorLine = ot.VarColorLine()\n colorLine.Extend = ExtendMode.PAD\n return colorLine\n\n\ndef _defaultPaintSolid():\n paint = ot.Paint()\n paint.Alpha = _DEFAULT_ALPHA\n return paint\n\n\ndef _buildPaintCallbacks():\n return {\n (\n BuildCallback.BEFORE_BUILD,\n ot.Paint,\n ot.PaintFormat.PaintRadialGradient,\n ): _beforeBuildPaintRadialGradient,\n (\n BuildCallback.BEFORE_BUILD,\n ot.Paint,\n ot.PaintFormat.PaintVarRadialGradient,\n ): _beforeBuildPaintRadialGradient,\n (BuildCallback.CREATE_DEFAULT, ot.ColorStop): _defaultColorStop,\n (BuildCallback.CREATE_DEFAULT, ot.VarColorStop): _defaultVarColorStop,\n (BuildCallback.CREATE_DEFAULT, ot.ColorLine): _defaultColorLine,\n (BuildCallback.CREATE_DEFAULT, ot.VarColorLine): _defaultVarColorLine,\n (\n BuildCallback.CREATE_DEFAULT,\n ot.Paint,\n ot.PaintFormat.PaintSolid,\n ): _defaultPaintSolid,\n (\n BuildCallback.CREATE_DEFAULT,\n ot.Paint,\n ot.PaintFormat.PaintVarSolid,\n ): _defaultPaintSolid,\n }\n\n\ndef populateCOLRv0(\n table: ot.COLR,\n colorGlyphsV0: _ColorGlyphsV0Dict,\n glyphMap: Optional[Mapping[str, int]] = 
None,\n):\n """Build v0 color layers and add to existing COLR table.\n\n Args:\n table: a raw ``otTables.COLR()`` object (not ttLib's ``table_C_O_L_R_``).\n colorGlyphsV0: map of base glyph names to lists of (layer glyph names,\n color palette index) tuples. Can be empty.\n glyphMap: a map from glyph names to glyph indices, as returned from\n ``TTFont.getReverseGlyphMap()``, to optionally sort base records by GID.\n """\n if glyphMap is not None:\n colorGlyphItems = sorted(\n colorGlyphsV0.items(), key=lambda item: glyphMap[item[0]]\n )\n else:\n colorGlyphItems = colorGlyphsV0.items()\n baseGlyphRecords = []\n layerRecords = []\n for baseGlyph, layers in colorGlyphItems:\n baseRec = ot.BaseGlyphRecord()\n baseRec.BaseGlyph = baseGlyph\n baseRec.FirstLayerIndex = len(layerRecords)\n baseRec.NumLayers = len(layers)\n baseGlyphRecords.append(baseRec)\n\n for layerGlyph, paletteIndex in layers:\n layerRec = ot.LayerRecord()\n layerRec.LayerGlyph = layerGlyph\n layerRec.PaletteIndex = paletteIndex\n layerRecords.append(layerRec)\n\n table.BaseGlyphRecordArray = table.LayerRecordArray = None\n if baseGlyphRecords:\n table.BaseGlyphRecordArray = ot.BaseGlyphRecordArray()\n table.BaseGlyphRecordArray.BaseGlyphRecord = baseGlyphRecords\n if layerRecords:\n table.LayerRecordArray = ot.LayerRecordArray()\n table.LayerRecordArray.LayerRecord = layerRecords\n table.BaseGlyphRecordCount = len(baseGlyphRecords)\n table.LayerRecordCount = len(layerRecords)\n\n\ndef buildCOLR(\n colorGlyphs: _ColorGlyphsDict,\n version: Optional[int] = None,\n *,\n glyphMap: Optional[Mapping[str, int]] = None,\n varStore: Optional[ot.VarStore] = None,\n varIndexMap: Optional[ot.DeltaSetIndexMap] = None,\n clipBoxes: Optional[Dict[str, _ClipBoxInput]] = None,\n allowLayerReuse: bool = True,\n) -> C_O_L_R_.table_C_O_L_R_:\n """Build COLR table from color layers mapping.\n\n Args:\n\n colorGlyphs: map of base glyph name to, either list of (layer glyph name,\n color palette index) tuples for COLRv0; or a single ``Paint`` (dict) or\n list of ``Paint`` for COLRv1.\n version: the version of COLR table. 
If None, the version is determined\n by the presence of COLRv1 paints or variation data (varStore), which\n require version 1; otherwise, if all base glyphs use only simple color\n layers, version 0 is used.\n glyphMap: a map from glyph names to glyph indices, as returned from\n TTFont.getReverseGlyphMap(), to optionally sort base records by GID.\n varStore: Optional ItemVariationStore for deltas associated with v1 layers.\n varIndexMap: Optional DeltaSetIndexMap for deltas associated with v1 layers.\n clipBoxes: Optional map of base glyph name to clip box 4- or 5-tuples:\n (xMin, yMin, xMax, yMax) or (xMin, yMin, xMax, yMax, varIndexBase).\n\n Returns:\n A new COLR table.\n """\n self = C_O_L_R_.table_C_O_L_R_()\n\n if varStore is not None and version == 0:\n raise ValueError("Can't add VarStore to COLRv0")\n\n if version in (None, 0) and not varStore:\n # split color glyphs into v0 and v1 and encode separately\n colorGlyphsV0, colorGlyphsV1 = _split_color_glyphs_by_version(colorGlyphs)\n if version == 0 and colorGlyphsV1:\n raise ValueError("Can't encode COLRv1 glyphs in COLRv0")\n else:\n # unless v1 is explicitly requested or we have variations, in which case\n # we encode all color glyphs as v1\n colorGlyphsV0, colorGlyphsV1 = {}, colorGlyphs\n\n colr = ot.COLR()\n\n populateCOLRv0(colr, colorGlyphsV0, glyphMap)\n\n colr.LayerList, colr.BaseGlyphList = buildColrV1(\n colorGlyphsV1,\n glyphMap,\n allowLayerReuse=allowLayerReuse,\n )\n\n if version is None:\n version = 1 if (varStore or colorGlyphsV1) else 0\n elif version not in (0, 1):\n raise NotImplementedError(version)\n self.version = colr.Version = version\n\n if version == 0:\n self.ColorLayers = self._decompileColorLayersV0(colr)\n else:\n colr.ClipList = buildClipList(clipBoxes) if clipBoxes else None\n colr.VarIndexMap = varIndexMap\n colr.VarStore = varStore\n self.table = colr\n\n return self\n\n\ndef buildClipList(clipBoxes: Dict[str, _ClipBoxInput]) -> ot.ClipList:\n clipList = ot.ClipList()\n clipList.Format = 1\n clipList.clips = {name: buildClipBox(box) for name, box in clipBoxes.items()}\n return clipList\n\n\ndef buildClipBox(clipBox: _ClipBoxInput) -> ot.ClipBox:\n if isinstance(clipBox, ot.ClipBox):\n return clipBox\n n = len(clipBox)\n clip = ot.ClipBox()\n if n not in (4, 5):\n raise ValueError(f"Invalid ClipBox: expected 4 or 5 values, found {n}")\n clip.xMin, clip.yMin, clip.xMax, clip.yMax = intRect(clipBox[:4])\n clip.Format = int(n == 5) + 1\n if n == 5:\n clip.VarIndexBase = int(clipBox[4])\n return clip\n\n\nclass ColorPaletteType(enum.IntFlag):\n USABLE_WITH_LIGHT_BACKGROUND = 0x0001\n USABLE_WITH_DARK_BACKGROUND = 0x0002\n\n @classmethod\n def _missing_(cls, value):\n # enforce reserved bits\n if isinstance(value, int) and (value < 0 or value & 0xFFFC != 0):\n raise ValueError(f"{value} is not a valid {cls.__name__}")\n return super()._missing_(value)\n\n\n# None, 'abc' or {'en': 'abc', 'de': 'xyz'}\n_OptionalLocalizedString = Union[None, str, Dict[str, str]]\n\n\ndef buildPaletteLabels(\n labels: Iterable[_OptionalLocalizedString], nameTable: _n_a_m_e.table__n_a_m_e\n) -> List[Optional[int]]:\n return [\n (\n nameTable.addMultilingualName(l, mac=False)\n if isinstance(l, dict)\n else (\n C_P_A_L_.table_C_P_A_L_.NO_NAME_ID\n if l is None\n else nameTable.addMultilingualName({"en": l}, mac=False)\n )\n )\n for l in labels\n ]\n\n\ndef buildCPAL(\n palettes: Sequence[Sequence[Tuple[float, float, float, float]]],\n paletteTypes: Optional[Sequence[ColorPaletteType]] = None,\n paletteLabels: 
Optional[Sequence[_OptionalLocalizedString]] = None,\n paletteEntryLabels: Optional[Sequence[_OptionalLocalizedString]] = None,\n nameTable: Optional[_n_a_m_e.table__n_a_m_e] = None,\n) -> C_P_A_L_.table_C_P_A_L_:\n """Build CPAL table from list of color palettes.\n\n Args:\n palettes: list of lists of colors encoded as tuples of (R, G, B, A) floats\n in the range [0..1].\n paletteTypes: optional list of ColorPaletteType, one for each palette.\n paletteLabels: optional list of palette labels. Each label can be either:\n None (no label), a string (for default English labels), or a\n localized string (as a dict keyed with BCP47 language codes).\n paletteEntryLabels: optional list of palette entry labels, one for each\n palette entry (see paletteLabels).\n nameTable: optional name table in which to store palette and palette entry\n labels. Required if either paletteLabels or paletteEntryLabels is set.\n\n Returns:\n A new CPAL table: version 0, or version 1 if custom palette types or labels\n are specified.\n """\n if len({len(p) for p in palettes}) != 1:\n raise ColorLibError("color palettes have different lengths")\n\n if (paletteLabels or paletteEntryLabels) and not nameTable:\n raise TypeError(\n "nameTable is required if palette or palette entries have labels"\n )\n\n cpal = C_P_A_L_.table_C_P_A_L_()\n cpal.numPaletteEntries = len(palettes[0])\n\n cpal.palettes = []\n for i, palette in enumerate(palettes):\n colors = []\n for j, color in enumerate(palette):\n if not isinstance(color, tuple) or len(color) != 4:\n raise ColorLibError(\n f"In palette[{i}][{j}]: expected (R, G, B, A) tuple, got {color!r}"\n )\n if any(v > 1 or v < 0 for v in color):\n raise ColorLibError(\n f"palette[{i}][{j}] has invalid out-of-range [0..1] color: {color!r}"\n )\n # input colors are RGBA, CPAL encodes them as BGRA\n red, green, blue, alpha = color\n colors.append(\n C_P_A_L_.Color(*(round(v * 255) for v in (blue, green, red, alpha)))\n )\n cpal.palettes.append(colors)\n\n if any(v is not None for v in (paletteTypes, paletteLabels, paletteEntryLabels)):\n cpal.version = 1\n\n if paletteTypes is not None:\n if len(paletteTypes) != len(palettes):\n raise ColorLibError(\n f"Expected {len(palettes)} paletteTypes, got {len(paletteTypes)}"\n )\n cpal.paletteTypes = [ColorPaletteType(t).value for t in paletteTypes]\n else:\n cpal.paletteTypes = [C_P_A_L_.table_C_P_A_L_.DEFAULT_PALETTE_TYPE] * len(\n palettes\n )\n\n if paletteLabels is not None:\n if len(paletteLabels) != len(palettes):\n raise ColorLibError(\n f"Expected {len(palettes)} paletteLabels, got {len(paletteLabels)}"\n )\n cpal.paletteLabels = buildPaletteLabels(paletteLabels, nameTable)\n else:\n cpal.paletteLabels = [C_P_A_L_.table_C_P_A_L_.NO_NAME_ID] * len(palettes)\n\n if paletteEntryLabels is not None:\n if len(paletteEntryLabels) != cpal.numPaletteEntries:\n raise ColorLibError(\n f"Expected {cpal.numPaletteEntries} paletteEntryLabels, "\n f"got {len(paletteEntryLabels)}"\n )\n cpal.paletteEntryLabels = buildPaletteLabels(paletteEntryLabels, nameTable)\n else:\n cpal.paletteEntryLabels = [\n C_P_A_L_.table_C_P_A_L_.NO_NAME_ID\n ] * cpal.numPaletteEntries\n else:\n cpal.version = 0\n\n return cpal\n\n\n# COLR v1 tables\n# See draft proposal at: https://github.com/googlefonts/colr-gradients-spec\n\n\ndef _is_colrv0_layer(layer: Any) -> bool:\n # Consider as COLRv0 layer any sequence of length 2 (be it tuple or list) in which\n # the first element is a str (the layerGlyph) and the second element is an int\n # (CPAL paletteIndex).\n # 
https://github.com/googlefonts/ufo2ft/issues/426\n try:\n layerGlyph, paletteIndex = layer\n except (TypeError, ValueError):\n return False\n else:\n return isinstance(layerGlyph, str) and isinstance(paletteIndex, int)\n\n\ndef _split_color_glyphs_by_version(\n colorGlyphs: _ColorGlyphsDict,\n) -> Tuple[_ColorGlyphsV0Dict, _ColorGlyphsDict]:\n colorGlyphsV0 = {}\n colorGlyphsV1 = {}\n for baseGlyph, layers in colorGlyphs.items():\n if all(_is_colrv0_layer(l) for l in layers):\n colorGlyphsV0[baseGlyph] = layers\n else:\n colorGlyphsV1[baseGlyph] = layers\n\n # sanity check\n assert set(colorGlyphs) == (set(colorGlyphsV0) | set(colorGlyphsV1))\n\n return colorGlyphsV0, colorGlyphsV1\n\n\ndef _reuse_ranges(num_layers: int) -> Generator[Tuple[int, int], None, None]:\n # TODO feels like something itertools might have already\n for lbound in range(num_layers):\n # Reuse of very large #s of layers is relatively unlikely\n # +2: we want sequences of at least 2\n # otData handles single-record duplication\n for ubound in range(\n lbound + 2, min(num_layers + 1, lbound + 2 + _MAX_REUSE_LEN)\n ):\n yield (lbound, ubound)\n\n\nclass LayerReuseCache:\n reusePool: Mapping[Tuple[Any, ...], int]\n tuples: Mapping[int, Tuple[Any, ...]]\n keepAlive: List[ot.Paint] # we need id to remain valid\n\n def __init__(self):\n self.reusePool = {}\n self.tuples = {}\n self.keepAlive = []\n\n def _paint_tuple(self, paint: ot.Paint):\n # start simple, who even cares about cyclic graphs or interesting field types\n def _tuple_safe(value):\n if isinstance(value, enum.Enum):\n return value\n elif hasattr(value, "__dict__"):\n return tuple(\n (k, _tuple_safe(v)) for k, v in sorted(value.__dict__.items())\n )\n elif isinstance(value, collections.abc.MutableSequence):\n return tuple(_tuple_safe(e) for e in value)\n return value\n\n # Cache the tuples for individual Paint instead of the whole sequence\n # because the seq could be a transient slice\n result = self.tuples.get(id(paint), None)\n if result is None:\n result = _tuple_safe(paint)\n self.tuples[id(paint)] = result\n self.keepAlive.append(paint)\n return result\n\n def _as_tuple(self, paints: Sequence[ot.Paint]) -> Tuple[Any, ...]:\n return tuple(self._paint_tuple(p) for p in paints)\n\n def try_reuse(self, layers: List[ot.Paint]) -> List[ot.Paint]:\n found_reuse = True\n while found_reuse:\n found_reuse = False\n\n ranges = sorted(\n _reuse_ranges(len(layers)),\n key=lambda t: (t[1] - t[0], t[1], t[0]),\n reverse=True,\n )\n for lbound, ubound in ranges:\n reuse_lbound = self.reusePool.get(\n self._as_tuple(layers[lbound:ubound]), -1\n )\n if reuse_lbound == -1:\n continue\n new_slice = ot.Paint()\n new_slice.Format = int(ot.PaintFormat.PaintColrLayers)\n new_slice.NumLayers = ubound - lbound\n new_slice.FirstLayerIndex = reuse_lbound\n layers = layers[:lbound] + [new_slice] + layers[ubound:]\n found_reuse = True\n break\n return layers\n\n def add(self, layers: List[ot.Paint], first_layer_index: int):\n for lbound, ubound in _reuse_ranges(len(layers)):\n self.reusePool[self._as_tuple(layers[lbound:ubound])] = (\n lbound + first_layer_index\n )\n\n\nclass LayerListBuilder:\n layers: List[ot.Paint]\n cache: LayerReuseCache\n allowLayerReuse: bool\n\n def __init__(self, *, allowLayerReuse=True):\n self.layers = []\n if allowLayerReuse:\n self.cache = LayerReuseCache()\n else:\n self.cache = None\n\n # We need to intercept construction of PaintColrLayers\n callbacks = _buildPaintCallbacks()\n callbacks[\n (\n BuildCallback.BEFORE_BUILD,\n ot.Paint,\n 
ot.PaintFormat.PaintColrLayers,\n )\n ] = self._beforeBuildPaintColrLayers\n self.tableBuilder = TableBuilder(callbacks)\n\n # COLR layers is unusual in that it modifies shared state,\n # so we need a callback into an object\n def _beforeBuildPaintColrLayers(self, dest, source):\n # Sketchy gymnastics: a sequence input will have dropped its layers\n # into NumLayers; get it back\n if isinstance(source.get("NumLayers", None), collections.abc.Sequence):\n layers = source["NumLayers"]\n else:\n layers = source["Layers"]\n\n # Convert maps, seqs, or whatever into typed objects\n layers = [self.buildPaint(l) for l in layers]\n\n # No reason to have a colr layers with just one entry\n if len(layers) == 1:\n return layers[0], {}\n\n if self.cache is not None:\n # Look for reuse, with preference to longer sequences\n # This may make the layer list smaller\n layers = self.cache.try_reuse(layers)\n\n # The layer list is now final; if it's too big we need to tree it\n is_tree = len(layers) > MAX_PAINT_COLR_LAYER_COUNT\n layers = build_n_ary_tree(layers, n=MAX_PAINT_COLR_LAYER_COUNT)\n\n # We now have a tree of sequences with Paint leaves.\n # Convert the sequences into PaintColrLayers.\n def listToColrLayers(layer):\n if isinstance(layer, collections.abc.Sequence):\n return self.buildPaint(\n {\n "Format": ot.PaintFormat.PaintColrLayers,\n "Layers": [listToColrLayers(l) for l in layer],\n }\n )\n return layer\n\n layers = [listToColrLayers(l) for l in layers]\n\n # No reason to have a colr layers with just one entry\n if len(layers) == 1:\n return layers[0], {}\n\n paint = ot.Paint()\n paint.Format = int(ot.PaintFormat.PaintColrLayers)\n paint.NumLayers = len(layers)\n paint.FirstLayerIndex = len(self.layers)\n self.layers.extend(layers)\n\n # Register our parts for reuse provided we aren't a tree\n # If we are a tree the leaves registered for reuse and that will suffice\n if self.cache is not None and not is_tree:\n self.cache.add(layers, paint.FirstLayerIndex)\n\n # we've fully built dest; empty source prevents generalized build from kicking in\n return paint, {}\n\n def buildPaint(self, paint: _PaintInput) -> ot.Paint:\n return self.tableBuilder.build(ot.Paint, paint)\n\n def build(self) -> Optional[ot.LayerList]:\n if not self.layers:\n return None\n layers = ot.LayerList()\n layers.LayerCount = len(self.layers)\n layers.Paint = self.layers\n return layers\n\n\ndef buildBaseGlyphPaintRecord(\n baseGlyph: str, layerBuilder: LayerListBuilder, paint: _PaintInput\n) -> ot.BaseGlyphList:\n self = ot.BaseGlyphPaintRecord()\n self.BaseGlyph = baseGlyph\n self.Paint = layerBuilder.buildPaint(paint)\n return self\n\n\ndef _format_glyph_errors(errors: Mapping[str, Exception]) -> str:\n lines = []\n for baseGlyph, error in sorted(errors.items()):\n lines.append(f" {baseGlyph} => {type(error).__name__}: {error}")\n return "\n".join(lines)\n\n\ndef buildColrV1(\n colorGlyphs: _ColorGlyphsDict,\n glyphMap: Optional[Mapping[str, int]] = None,\n *,\n allowLayerReuse: bool = True,\n) -> Tuple[Optional[ot.LayerList], ot.BaseGlyphList]:\n if glyphMap is not None:\n colorGlyphItems = sorted(\n colorGlyphs.items(), key=lambda item: glyphMap[item[0]]\n )\n else:\n colorGlyphItems = colorGlyphs.items()\n\n errors = {}\n baseGlyphs = []\n layerBuilder = LayerListBuilder(allowLayerReuse=allowLayerReuse)\n for baseGlyph, paint in colorGlyphItems:\n try:\n baseGlyphs.append(buildBaseGlyphPaintRecord(baseGlyph, layerBuilder, paint))\n\n except (ColorLibError, OverflowError, ValueError, TypeError) as e:\n errors[baseGlyph] = 
e\n\n if errors:\n failed_glyphs = _format_glyph_errors(errors)\n exc = ColorLibError(f"Failed to build BaseGlyphList:\n{failed_glyphs}")\n exc.errors = errors\n raise exc from next(iter(errors.values()))\n\n layers = layerBuilder.build()\n glyphs = ot.BaseGlyphList()\n glyphs.BaseGlyphCount = len(baseGlyphs)\n glyphs.BaseGlyphPaintRecord = baseGlyphs\n return (layers, glyphs)\n
.venv\Lib\site-packages\fontTools\colorLib\builder.py
builder.py
Python
23,672
0.95
0.182229
0.075949
python-kit
551
2024-10-20T21:20:14.882666
MIT
false
88ee908f0a98de21ff56076277d4e393
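An end-to-end sketch of the two builders above, with made-up glyph names. Plain (layerGlyph, paletteIndex) layers keep buildCOLR at version 0, and buildCPAL takes palettes of (R, G, B, A) floats in [0..1]:

from fontTools.colorLib.builder import buildCOLR, buildCPAL

# Simple two-layer colour glyph: every layer is a (glyphName, paletteIndex)
# pair, so the table stays at COLR version 0.
colr = buildCOLR({"A": [("A.layer0", 0), ("A.layer1", 1)]})
assert colr.version == 0

# Two palettes of equal length; no types or labels, so CPAL stays version 0.
cpal = buildCPAL(
    [
        [(1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)],  # red, blue
        [(0.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0, 1.0)],  # black, white
    ]
)
assert cpal.version == 0 and cpal.numPaletteEntries == 2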
class ColorLibError(Exception):\n pass\n
.venv\Lib\site-packages\fontTools\colorLib\errors.py
errors.py
Python
43
0.65
0.5
0
awesome-app
301
2025-04-17T02:48:34.093909
GPL-3.0
false
f9ec1b7f3838e8fe9843797fba699b3d
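ColorLibError is the exception the colorLib builders raise on invalid input; a small sketch, reusing buildCPAL's mismatched-palette-length check from builder.py above:

from fontTools.colorLib.builder import buildCPAL
from fontTools.colorLib.errors import ColorLibError

try:
    # Palettes must all have the same number of entries; these do not.
    buildCPAL([[(1.0, 0.0, 0.0, 1.0)], []])
except ColorLibError as e:
    print("rejected:", e)  # "color palettes have different lengths"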
"""Helpers for manipulating 2D points and vectors in COLR table."""\n\nfrom math import copysign, cos, hypot, isclose, pi\nfrom fontTools.misc.roundTools import otRound\n\n\ndef _vector_between(origin, target):\n return (target[0] - origin[0], target[1] - origin[1])\n\n\ndef _round_point(pt):\n return (otRound(pt[0]), otRound(pt[1]))\n\n\ndef _unit_vector(vec):\n length = hypot(*vec)\n if length == 0:\n return None\n return (vec[0] / length, vec[1] / length)\n\n\n_CIRCLE_INSIDE_TOLERANCE = 1e-4\n\n\n# The unit vector's X and Y components are respectively\n# U = (cos(α), sin(α))\n# where α is the angle between the unit vector and the positive x axis.\n_UNIT_VECTOR_THRESHOLD = cos(3 / 8 * pi) # == sin(1/8 * pi) == 0.38268343236508984\n\n\ndef _rounding_offset(direction):\n # Return 2-tuple of -/+ 1.0 or 0.0 approximately based on the direction vector.\n # We divide the unit circle in 8 equal slices oriented towards the cardinal\n # (N, E, S, W) and intermediate (NE, SE, SW, NW) directions. To each slice we\n # map one of the possible cases: -1, 0, +1 for either X and Y coordinate.\n # E.g. Return (+1.0, -1.0) if unit vector is oriented towards SE, or\n # (-1.0, 0.0) if it's pointing West, etc.\n uv = _unit_vector(direction)\n if not uv:\n return (0, 0)\n\n result = []\n for uv_component in uv:\n if -_UNIT_VECTOR_THRESHOLD <= uv_component < _UNIT_VECTOR_THRESHOLD:\n # unit vector component near 0: direction almost orthogonal to the\n # direction of the current axis, thus keep coordinate unchanged\n result.append(0)\n else:\n # nudge coord by +/- 1.0 in direction of unit vector\n result.append(copysign(1.0, uv_component))\n return tuple(result)\n\n\nclass Circle:\n def __init__(self, centre, radius):\n self.centre = centre\n self.radius = radius\n\n def __repr__(self):\n return f"Circle(centre={self.centre}, radius={self.radius})"\n\n def round(self):\n return Circle(_round_point(self.centre), otRound(self.radius))\n\n def inside(self, outer_circle, tolerance=_CIRCLE_INSIDE_TOLERANCE):\n dist = self.radius + hypot(*_vector_between(self.centre, outer_circle.centre))\n return (\n isclose(outer_circle.radius, dist, rel_tol=_CIRCLE_INSIDE_TOLERANCE)\n or outer_circle.radius > dist\n )\n\n def concentric(self, other):\n return self.centre == other.centre\n\n def move(self, dx, dy):\n self.centre = (self.centre[0] + dx, self.centre[1] + dy)\n\n\ndef round_start_circle_stable_containment(c0, r0, c1, r1):\n """Round start circle so that it stays inside/outside end circle after rounding.\n\n The rounding of circle coordinates to integers may cause an abrupt change\n if the start circle c0 is so close to the end circle c1's perimiter that\n it ends up falling outside (or inside) as a result of the rounding.\n To keep the gradient unchanged, we nudge it in the right direction.\n\n See:\n https://github.com/googlefonts/colr-gradients-spec/issues/204\n https://github.com/googlefonts/picosvg/issues/158\n """\n start, end = Circle(c0, r0), Circle(c1, r1)\n\n inside_before_round = start.inside(end)\n\n round_start = start.round()\n round_end = end.round()\n inside_after_round = round_start.inside(round_end)\n\n if inside_before_round == inside_after_round:\n return round_start\n elif inside_after_round:\n # start was outside before rounding: we need to push start away from end\n direction = _vector_between(round_end.centre, round_start.centre)\n radius_delta = +1.0\n else:\n # start was inside before rounding: we need to push start towards end\n direction = _vector_between(round_start.centre, 
round_end.centre)\n radius_delta = -1.0\n dx, dy = _rounding_offset(direction)\n\n # At most 2 iterations ought to be enough to converge. Before the loop, we\n # know the start circle didn't keep containment after normal rounding; thus\n # we continue adjusting by -/+ 1.0 until containment is restored.\n # Normal rounding can move each coordinate by at most -/+0.5; in the worst case\n # both the start and end circle's centres and radii will be rounded in opposite\n # directions, e.g. when they move along a 45 degree diagonal:\n # c0 = (1.5, 1.5) ===> (2.0, 2.0)\n # r0 = 0.5 ===> 1.0\n # c1 = (0.499, 0.499) ===> (0.0, 0.0)\n # r1 = 2.499 ===> 2.0\n # In this example, the relative distance between the circles, calculated\n # as r1 - (r0 + distance(c0, c1)) is initially 0.57437 (c0 is inside c1), and\n # -1.82842 after rounding (c0 is now outside c1). Nudging c0 by -1.0 on both\n # x and y axes moves it towards c1 by hypot(-1.0, -1.0) = 1.41421. Two of these\n # moves cover twice that distance, which is enough to restore containment.\n max_attempts = 2\n for _ in range(max_attempts):\n if round_start.concentric(round_end):\n # can't move c0 towards c1 (they are the same), so we change the radius\n round_start.radius += radius_delta\n assert round_start.radius >= 0\n else:\n round_start.move(dx, dy)\n if inside_before_round == round_start.inside(round_end):\n break\n else: # likely a bug\n raise AssertionError(\n f"Rounding circle {start} "\n f"{'inside' if inside_before_round else 'outside'} "\n f"{end} failed after {max_attempts} attempts!"\n )\n\n return round_start\n
.venv\Lib\site-packages\fontTools\colorLib\geometry.py
geometry.py
Python
5,661
0.95
0.181818
0.265487
react-lib
488
2024-08-04T19:38:05.722036
Apache-2.0
false
a386a9521678d3e7df50e69d10892252
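The 45-degree worst case worked through in the comment above, run through the helper; the input values are taken straight from that comment:

from fontTools.colorLib.geometry import round_start_circle_stable_containment

# Before rounding, the start circle lies inside the end circle. Rounding the
# centres and radii independently would push it outside, so the helper nudges
# the rounded start circle back until containment is restored.
c0, r0 = (1.5, 1.5), 0.5
c1, r1 = (0.499, 0.499), 2.499

rounded = round_start_circle_stable_containment(c0, r0, c1, r1)
print(rounded)  # expected: Circle(centre=(0.0, 0.0), radius=1)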
"""\ncolorLib.table_builder: Generic helper for filling in BaseTable derivatives from tuples and maps and such.\n\n"""\n\nimport collections\nimport enum\nfrom fontTools.ttLib.tables.otBase import (\n BaseTable,\n FormatSwitchingBaseTable,\n UInt8FormatSwitchingBaseTable,\n)\nfrom fontTools.ttLib.tables.otConverters import (\n ComputedInt,\n SimpleValue,\n Struct,\n Short,\n UInt8,\n UShort,\n IntValue,\n FloatValue,\n OptionalValue,\n)\nfrom fontTools.misc.roundTools import otRound\n\n\nclass BuildCallback(enum.Enum):\n """Keyed on (BEFORE_BUILD, class[, Format if available]).\n Receives (dest, source).\n Should return (dest, source), which can be new objects.\n """\n\n BEFORE_BUILD = enum.auto()\n\n """Keyed on (AFTER_BUILD, class[, Format if available]).\n Receives (dest).\n Should return dest, which can be a new object.\n """\n AFTER_BUILD = enum.auto()\n\n """Keyed on (CREATE_DEFAULT, class[, Format if available]).\n Receives no arguments.\n Should return a new instance of class.\n """\n CREATE_DEFAULT = enum.auto()\n\n\ndef _assignable(convertersByName):\n return {k: v for k, v in convertersByName.items() if not isinstance(v, ComputedInt)}\n\n\ndef _isNonStrSequence(value):\n return isinstance(value, collections.abc.Sequence) and not isinstance(value, str)\n\n\ndef _split_format(cls, source):\n if _isNonStrSequence(source):\n assert len(source) > 0, f"{cls} needs at least format from {source}"\n fmt, remainder = source[0], source[1:]\n elif isinstance(source, collections.abc.Mapping):\n assert "Format" in source, f"{cls} needs at least Format from {source}"\n remainder = source.copy()\n fmt = remainder.pop("Format")\n else:\n raise ValueError(f"Not sure how to populate {cls} from {source}")\n\n assert isinstance(\n fmt, collections.abc.Hashable\n ), f"{cls} Format is not hashable: {fmt!r}"\n assert fmt in cls.convertersByName, f"{cls} invalid Format: {fmt!r}"\n\n return fmt, remainder\n\n\nclass TableBuilder:\n """\n Helps to populate things derived from BaseTable from maps, tuples, etc.\n\n A table of lifecycle callbacks may be provided to add logic beyond what is possible\n based on otData info for the target class. 
See BuildCallbacks.\n """\n\n def __init__(self, callbackTable=None):\n if callbackTable is None:\n callbackTable = {}\n self._callbackTable = callbackTable\n\n def _convert(self, dest, field, converter, value):\n enumClass = getattr(converter, "enumClass", None)\n\n if enumClass:\n if isinstance(value, enumClass):\n pass\n elif isinstance(value, str):\n try:\n value = getattr(enumClass, value.upper())\n except AttributeError:\n raise ValueError(f"{value} is not a valid {enumClass}")\n else:\n value = enumClass(value)\n\n elif isinstance(converter, IntValue):\n value = otRound(value)\n elif isinstance(converter, FloatValue):\n value = float(value)\n\n elif isinstance(converter, Struct):\n if converter.repeat:\n if _isNonStrSequence(value):\n value = [self.build(converter.tableClass, v) for v in value]\n else:\n value = [self.build(converter.tableClass, value)]\n setattr(dest, converter.repeat, len(value))\n else:\n value = self.build(converter.tableClass, value)\n elif callable(converter):\n value = converter(value)\n\n setattr(dest, field, value)\n\n def build(self, cls, source):\n assert issubclass(cls, BaseTable)\n\n if isinstance(source, cls):\n return source\n\n callbackKey = (cls,)\n fmt = None\n if issubclass(cls, FormatSwitchingBaseTable):\n fmt, source = _split_format(cls, source)\n callbackKey = (cls, fmt)\n\n dest = self._callbackTable.get(\n (BuildCallback.CREATE_DEFAULT,) + callbackKey, lambda: cls()\n )()\n assert isinstance(dest, cls)\n\n convByName = _assignable(cls.convertersByName)\n skippedFields = set()\n\n # For format switchers we need to resolve converters based on format\n if issubclass(cls, FormatSwitchingBaseTable):\n dest.Format = fmt\n convByName = _assignable(convByName[dest.Format])\n skippedFields.add("Format")\n\n # Convert sequence => mapping so before thunk only has to handle one format\n if _isNonStrSequence(source):\n # Sequence (typically list or tuple) assumed to match fields in declaration order\n assert len(source) <= len(\n convByName\n ), f"Sequence of {len(source)} too long for {cls}; expected <= {len(convByName)} values"\n source = dict(zip(convByName.keys(), source))\n\n dest, source = self._callbackTable.get(\n (BuildCallback.BEFORE_BUILD,) + callbackKey, lambda d, s: (d, s)\n )(dest, source)\n\n if isinstance(source, collections.abc.Mapping):\n for field, value in source.items():\n if field in skippedFields:\n continue\n converter = convByName.get(field, None)\n if not converter:\n raise ValueError(\n f"Unrecognized field {field} for {cls}; expected one of {sorted(convByName.keys())}"\n )\n self._convert(dest, field, converter, value)\n else:\n # let's try as a 1-tuple\n dest = self.build(cls, (source,))\n\n for field, conv in convByName.items():\n if not hasattr(dest, field) and isinstance(conv, OptionalValue):\n setattr(dest, field, conv.DEFAULT)\n\n dest = self._callbackTable.get(\n (BuildCallback.AFTER_BUILD,) + callbackKey, lambda d: d\n )(dest)\n\n return dest\n\n\nclass TableUnbuilder:\n def __init__(self, callbackTable=None):\n if callbackTable is None:\n callbackTable = {}\n self._callbackTable = callbackTable\n\n def unbuild(self, table):\n assert isinstance(table, BaseTable)\n\n source = {}\n\n callbackKey = (type(table),)\n if isinstance(table, FormatSwitchingBaseTable):\n source["Format"] = int(table.Format)\n callbackKey += (table.Format,)\n\n for converter in table.getConverters():\n if isinstance(converter, ComputedInt):\n continue\n value = getattr(table, converter.name)\n\n enumClass = getattr(converter, "enumClass", None)\n 
if enumClass:\n                source[converter.name] = value.name.lower()\n            elif isinstance(converter, Struct):\n                if converter.repeat:\n                    source[converter.name] = [self.unbuild(v) for v in value]\n                else:\n                    source[converter.name] = self.unbuild(value)\n            elif isinstance(converter, SimpleValue):\n                # "simple" values (e.g. int, float, str) need no further un-building\n                source[converter.name] = value\n            else:\n                raise NotImplementedError(\n                    f"Don't know how to unbuild {value!r} with {converter!r}"\n                )\n\n        source = self._callbackTable.get(callbackKey, lambda s: s)(source)\n\n        return source\n
.venv\Lib\site-packages\fontTools\colorLib\table_builder.py
table_builder.py
Python
7,692
0.95
0.2287
0.02809
vue-tools
64
2025-06-10T20:01:45.238259
GPL-3.0
false
c83950fb5db9d98e50eac22ca62d7d4c
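To make the Format-splitting convention above concrete, here is a minimal stand-alone sketch (no fontTools classes involved) of what _split_format accepts: a sequence must lead with the format value, while a mapping must carry it under the "Format" key.

def split_format(source):
    # mirrors fontTools.colorLib.table_builder._split_format, minus the
    # converter/hashability checks that need a real BaseTable subclass
    if isinstance(source, (list, tuple)):
        fmt, remainder = source[0], source[1:]
    elif isinstance(source, dict):
        remainder = dict(source)
        fmt = remainder.pop("Format")
    else:
        raise ValueError(f"Not sure how to split a format out of {source!r}")
    return fmt, remainder

print(split_format((4, "a", "b")))          # (4, ('a', 'b'))
print(split_format({"Format": 4, "X": 1}))  # (4, {'X': 1})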
from fontTools.ttLib.tables import otTables as ot\nfrom .table_builder import TableUnbuilder\n\n\ndef unbuildColrV1(layerList, baseGlyphList):\n layers = []\n if layerList:\n layers = layerList.Paint\n unbuilder = LayerListUnbuilder(layers)\n return {\n rec.BaseGlyph: unbuilder.unbuildPaint(rec.Paint)\n for rec in baseGlyphList.BaseGlyphPaintRecord\n }\n\n\ndef _flatten_layers(lst):\n for paint in lst:\n if paint["Format"] == ot.PaintFormat.PaintColrLayers:\n yield from _flatten_layers(paint["Layers"])\n else:\n yield paint\n\n\nclass LayerListUnbuilder:\n def __init__(self, layers):\n self.layers = layers\n\n callbacks = {\n (\n ot.Paint,\n ot.PaintFormat.PaintColrLayers,\n ): self._unbuildPaintColrLayers,\n }\n self.tableUnbuilder = TableUnbuilder(callbacks)\n\n def unbuildPaint(self, paint):\n assert isinstance(paint, ot.Paint)\n return self.tableUnbuilder.unbuild(paint)\n\n def _unbuildPaintColrLayers(self, source):\n assert source["Format"] == ot.PaintFormat.PaintColrLayers\n\n layers = list(\n _flatten_layers(\n [\n self.unbuildPaint(childPaint)\n for childPaint in self.layers[\n source["FirstLayerIndex"] : source["FirstLayerIndex"]\n + source["NumLayers"]\n ]\n ]\n )\n )\n\n if len(layers) == 1:\n return layers[0]\n\n return {"Format": source["Format"], "Layers": layers}\n\n\nif __name__ == "__main__":\n from pprint import pprint\n import sys\n from fontTools.ttLib import TTFont\n\n try:\n fontfile = sys.argv[1]\n except IndexError:\n sys.exit("usage: fonttools colorLib.unbuilder FONTFILE")\n\n font = TTFont(fontfile)\n colr = font["COLR"]\n if colr.version < 1:\n sys.exit(f"error: No COLR table version=1 found in {fontfile}")\n\n colorGlyphs = unbuildColrV1(\n colr.table.LayerList,\n colr.table.BaseGlyphList,\n )\n\n pprint(colorGlyphs)\n
.venv\Lib\site-packages\fontTools\colorLib\unbuilder.py
unbuilder.py
Python
2,223
0.85
0.185185
0
vue-tools
671
2024-04-13T17:05:54.610791
Apache-2.0
false
98f4eb57efaf6c8465e15bda46d69b3b
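The recursion in _flatten_layers is easiest to see on plain dicts. A minimal sketch, using the integer constant COLR_LAYERS as a stand-in for ot.PaintFormat.PaintColrLayers (the real code compares against the enum):

COLR_LAYERS = 1  # stand-in for ot.PaintFormat.PaintColrLayers

def flatten(layers):
    # nested PaintColrLayers dicts collapse into one flat sequence of paints
    for paint in layers:
        if paint["Format"] == COLR_LAYERS:
            yield from flatten(paint["Layers"])
        else:
            yield paint

nested = [
    {"Format": 10},
    {"Format": COLR_LAYERS, "Layers": [{"Format": 14}, {"Format": 6}]},
]
print(list(flatten(nested)))
# [{'Format': 10}, {'Format': 14}, {'Format': 6}]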
\n\n
.venv\Lib\site-packages\fontTools\colorLib\__pycache__\builder.cpython-313.pyc
builder.cpython-313.pyc
Other
30,170
0.95
0.048387
0
react-lib
93
2023-10-14T04:32:25.833066
Apache-2.0
false
0ec994fb8edd851ca978c59a1d992585
\n\n
.venv\Lib\site-packages\fontTools\colorLib\__pycache__\errors.cpython-313.pyc
errors.cpython-313.pyc
Other
439
0.7
0
0
node-utils
335
2023-11-06T20:59:45.641214
MIT
false
8160d744349edad8ecf9f5b9b3f87b63
\n\n
.venv\Lib\site-packages\fontTools\colorLib\__pycache__\geometry.cpython-313.pyc
geometry.cpython-313.pyc
Other
5,424
0.8
0.039216
0.020833
node-utils
996
2023-09-02T22:25:20.924649
BSD-3-Clause
false
2002004ecef276e695dd9004f9113b98
\n\n
.venv\Lib\site-packages\fontTools\colorLib\__pycache__\table_builder.cpython-313.pyc
table_builder.cpython-313.pyc
Other
10,331
0.8
0.067961
0.021505
python-kit
603
2024-06-04T22:06:36.787781
Apache-2.0
false
ee846e8f8de0b1b593210b3f18cbd705
\n\n
.venv\Lib\site-packages\fontTools\colorLib\__pycache__\unbuilder.cpython-313.pyc
unbuilder.cpython-313.pyc
Other
3,771
0.8
0
0
python-kit
345
2023-08-17T18:13:35.179959
BSD-3-Clause
false
81d8b72aaa7a4f702da16518635284d5
\n\n
.venv\Lib\site-packages\fontTools\colorLib\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
193
0.7
0
0
python-kit
570
2024-08-28T10:08:56.687593
BSD-3-Clause
false
1b44337e80a7dbdf7b04ee1c8986bb25
\n\n
.venv\Lib\site-packages\fontTools\config\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
3,759
0.95
0.064103
0.083333
node-utils
424
2023-08-11T12:42:12.173595
Apache-2.0
false
349556acef4618b210060ed4e3fba2ff
"""Benchmark the cu2qu algorithm performance."""\n\nfrom .cu2qu import *\nimport random\nimport timeit\n\nMAX_ERR = 0.05\n\n\ndef generate_curve():\n return [\n tuple(float(random.randint(0, 2048)) for coord in range(2))\n for point in range(4)\n ]\n\n\ndef setup_curve_to_quadratic():\n return generate_curve(), MAX_ERR\n\n\ndef setup_curves_to_quadratic():\n num_curves = 3\n return ([generate_curve() for curve in range(num_curves)], [MAX_ERR] * num_curves)\n\n\ndef run_benchmark(module, function, setup_suffix="", repeat=5, number=1000):\n setup_func = "setup_" + function\n if setup_suffix:\n print("%s with %s:" % (function, setup_suffix), end="")\n setup_func += "_" + setup_suffix\n else:\n print("%s:" % function, end="")\n\n def wrapper(function, setup_func):\n function = globals()[function]\n setup_func = globals()[setup_func]\n\n def wrapped():\n return function(*setup_func())\n\n return wrapped\n\n results = timeit.repeat(wrapper(function, setup_func), repeat=repeat, number=number)\n print("\t%5.1fus" % (min(results) * 1000000.0 / number))\n\n\ndef main():\n run_benchmark("cu2qu", "curve_to_quadratic")\n run_benchmark("cu2qu", "curves_to_quadratic")\n\n\nif __name__ == "__main__":\n random.seed(1)\n main()\n
.venv\Lib\site-packages\fontTools\cu2qu\benchmark.py
benchmark.py
Python
1,350
0.85
0.388889
0
node-utils
4
2023-08-15T13:48:51.556795
MIT
false
bd8417ed22d5a0df65f4b000922eee45
import os\nimport argparse\nimport logging\nimport shutil\nimport multiprocessing as mp\nfrom contextlib import closing\nfrom functools import partial\n\nimport fontTools\nfrom .ufo import font_to_quadratic, fonts_to_quadratic\n\nufo_module = None\ntry:\n    import ufoLib2 as ufo_module\nexcept ImportError:\n    try:\n        import defcon as ufo_module\n    except ImportError:\n        pass\n\n\nlogger = logging.getLogger("fontTools.cu2qu")\n\n\ndef _cpu_count():\n    try:\n        return mp.cpu_count()\n    except NotImplementedError:  # pragma: no cover\n        return 1\n\n\ndef open_ufo(path):\n    if hasattr(ufo_module.Font, "open"):  # ufoLib2\n        return ufo_module.Font.open(path)\n    return ufo_module.Font(path)  # defcon\n\n\ndef _font_to_quadratic(input_path, output_path=None, **kwargs):\n    ufo = open_ufo(input_path)\n    logger.info("Converting curves for %s", input_path)\n    if font_to_quadratic(ufo, **kwargs):\n        logger.info("Saving %s", output_path)\n        if output_path:\n            ufo.save(output_path)\n        else:\n            ufo.save()  # save in-place\n    elif output_path:\n        _copytree(input_path, output_path)\n\n\ndef _samepath(path1, path2):\n    # TODO on python3+, there's os.path.samefile\n    path1 = os.path.normcase(os.path.abspath(os.path.realpath(path1)))\n    path2 = os.path.normcase(os.path.abspath(os.path.realpath(path2)))\n    return path1 == path2\n\n\ndef _copytree(input_path, output_path):\n    if _samepath(input_path, output_path):\n        logger.debug("input and output paths are the same file; skipped copy")\n        return\n    if os.path.exists(output_path):\n        shutil.rmtree(output_path)\n    shutil.copytree(input_path, output_path)\n\n\ndef _main(args=None):\n    """Convert a UFO font from cubic to quadratic curves"""\n    parser = argparse.ArgumentParser(prog="cu2qu")\n    parser.add_argument("--version", action="version", version=fontTools.__version__)\n    parser.add_argument(\n        "infiles",\n        nargs="+",\n        metavar="INPUT",\n        help="one or more input UFO source file(s).",\n    )\n    parser.add_argument("-v", "--verbose", action="count", default=0)\n    parser.add_argument(\n        "-e",\n        "--conversion-error",\n        type=float,\n        metavar="ERROR",\n        default=None,\n        help="maximum approximation error measured in EM (default: 0.001)",\n    )\n    parser.add_argument(\n        "-m",\n        "--mixed",\n        default=False,\n        action="store_true",\n        help="whether to use mixed quadratic and cubic curves",\n    )\n    parser.add_argument(\n        "--keep-direction",\n        dest="reverse_direction",\n        action="store_false",\n        help="do not reverse the contour direction",\n    )\n\n    mode_parser = parser.add_mutually_exclusive_group()\n    mode_parser.add_argument(\n        "-i",\n        "--interpolatable",\n        action="store_true",\n        help="whether curve conversion should keep interpolation compatibility",\n    )\n    mode_parser.add_argument(\n        "-j",\n        "--jobs",\n        type=int,\n        nargs="?",\n        default=1,\n        const=_cpu_count(),\n        metavar="N",\n        help="Convert using N multiple processes (default: %(default)s)",\n    )\n\n    output_parser = parser.add_mutually_exclusive_group()\n    output_parser.add_argument(\n        "-o",\n        "--output-file",\n        default=None,\n        metavar="OUTPUT",\n        help=(\n            "output filename for the converted UFO. By default fonts are "\n            "modified in place. 
This only works with a single input."\n        ),\n    )\n    output_parser.add_argument(\n        "-d",\n        "--output-dir",\n        default=None,\n        metavar="DIRECTORY",\n        help="output directory where the converted UFOs are saved",\n    )\n\n    options = parser.parse_args(args)\n\n    if ufo_module is None:\n        parser.error("Either ufoLib2 or defcon are required to run this script.")\n\n    if not options.verbose:\n        level = "WARNING"\n    elif options.verbose == 1:\n        level = "INFO"\n    else:\n        level = "DEBUG"\n    logging.basicConfig(level=level)\n\n    if len(options.infiles) > 1 and options.output_file:\n        parser.error("-o/--output-file can't be used with multiple inputs")\n\n    if options.output_dir:\n        output_dir = options.output_dir\n        if not os.path.exists(output_dir):\n            os.mkdir(output_dir)\n        elif not os.path.isdir(output_dir):\n            parser.error("'%s' is not a directory" % output_dir)\n        output_paths = [\n            os.path.join(output_dir, os.path.basename(p)) for p in options.infiles\n        ]\n    elif options.output_file:\n        output_paths = [options.output_file]\n    else:\n        # save in-place\n        output_paths = [None] * len(options.infiles)\n\n    kwargs = dict(\n        dump_stats=options.verbose > 0,\n        max_err_em=options.conversion_error,\n        reverse_direction=options.reverse_direction,\n        all_quadratic=not options.mixed,\n    )\n\n    if options.interpolatable:\n        logger.info("Converting curves compatibly")\n        ufos = [open_ufo(infile) for infile in options.infiles]\n        if fonts_to_quadratic(ufos, **kwargs):\n            for ufo, output_path in zip(ufos, output_paths):\n                logger.info("Saving %s", output_path)\n                if output_path:\n                    ufo.save(output_path)\n                else:\n                    ufo.save()\n        else:\n            for input_path, output_path in zip(options.infiles, output_paths):\n                if output_path:\n                    _copytree(input_path, output_path)\n    else:\n        jobs = min(len(options.infiles), options.jobs) if options.jobs > 1 else 1\n        if jobs > 1:\n            func = partial(_font_to_quadratic, **kwargs)\n            logger.info("Running %d parallel processes", jobs)\n            with closing(mp.Pool(jobs)) as pool:\n                pool.starmap(func, zip(options.infiles, output_paths))\n        else:\n            for input_path, output_path in zip(options.infiles, output_paths):\n                _font_to_quadratic(input_path, output_path, **kwargs)\n
.venv\Lib\site-packages\fontTools\cu2qu\cli.py
cli.py
Python
6,274
0.95
0.166667
0.011561
vue-tools
241
2024-05-29T06:45:27.351658
MIT
false
ec1d88a86353cae27d25dec42ee6e000
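A sketch of driving the same code path from Python instead of the cu2qu console script. The UFO path and output directory are hypothetical; _main is the private entry point defined above, which __main__.py re-exports as main:

from fontTools.cu2qu.cli import _main

# equivalent to the command line: cu2qu -e 0.001 -d build MyFont-Regular.ufo
_main(["--conversion-error", "0.001", "--output-dir", "build", "MyFont-Regular.ufo"])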
MZ
.venv\Lib\site-packages\fontTools\cu2qu\cu2qu.cp313-win_amd64.pyd
cu2qu.cp313-win_amd64.pyd
Other
99,840
0.75
0.026631
0.006868
vue-tools
367
2024-08-25T00:26:43.245075
BSD-3-Clause
false
9efb3560cafa45831e24946648a6e8e0
# cython: language_level=3\n# distutils: define_macros=CYTHON_TRACE_NOGIL=1\n\n# Copyright 2015 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\ntry:\n import cython\nexcept (AttributeError, ImportError):\n # if cython not installed, use mock module with no-op decorators and types\n from fontTools.misc import cython\nCOMPILED = cython.compiled\n\nimport math\n\nfrom .errors import Error as Cu2QuError, ApproxNotFoundError\n\n\n__all__ = ["curve_to_quadratic", "curves_to_quadratic"]\n\nMAX_N = 100\n\nNAN = float("NaN")\n\n\n@cython.cfunc\n@cython.inline\n@cython.returns(cython.double)\n@cython.locals(v1=cython.complex, v2=cython.complex)\ndef dot(v1, v2):\n """Return the dot product of two vectors.\n\n Args:\n v1 (complex): First vector.\n v2 (complex): Second vector.\n\n Returns:\n double: Dot product.\n """\n return (v1 * v2.conjugate()).real\n\n\n@cython.cfunc\n@cython.inline\n@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)\n@cython.locals(\n _1=cython.complex, _2=cython.complex, _3=cython.complex, _4=cython.complex\n)\ndef calc_cubic_points(a, b, c, d):\n _1 = d\n _2 = (c / 3.0) + d\n _3 = (b + c) / 3.0 + _2\n _4 = a + d + c + b\n return _1, _2, _3, _4\n\n\n@cython.cfunc\n@cython.inline\n@cython.locals(\n p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex\n)\n@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)\ndef calc_cubic_parameters(p0, p1, p2, p3):\n c = (p1 - p0) * 3.0\n b = (p2 - p1) * 3.0 - c\n d = p0\n a = p3 - d - c - b\n return a, b, c, d\n\n\n@cython.cfunc\n@cython.inline\n@cython.locals(\n p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex\n)\ndef split_cubic_into_n_iter(p0, p1, p2, p3, n):\n """Split a cubic Bezier into n equal parts.\n\n Splits the curve into `n` equal parts by curve time.\n (t=0..1/n, t=1/n..2/n, ...)\n\n Args:\n p0 (complex): Start point of curve.\n p1 (complex): First handle of curve.\n p2 (complex): Second handle of curve.\n p3 (complex): End point of curve.\n\n Returns:\n An iterator yielding the control points (four complex values) of the\n subcurves.\n """\n # Hand-coded special-cases\n if n == 2:\n return iter(split_cubic_into_two(p0, p1, p2, p3))\n if n == 3:\n return iter(split_cubic_into_three(p0, p1, p2, p3))\n if n == 4:\n a, b = split_cubic_into_two(p0, p1, p2, p3)\n return iter(\n split_cubic_into_two(a[0], a[1], a[2], a[3])\n + split_cubic_into_two(b[0], b[1], b[2], b[3])\n )\n if n == 6:\n a, b = split_cubic_into_two(p0, p1, p2, p3)\n return iter(\n split_cubic_into_three(a[0], a[1], a[2], a[3])\n + split_cubic_into_three(b[0], b[1], b[2], b[3])\n )\n\n return _split_cubic_into_n_gen(p0, p1, p2, p3, n)\n\n\n@cython.locals(\n p0=cython.complex,\n p1=cython.complex,\n p2=cython.complex,\n p3=cython.complex,\n n=cython.int,\n)\n@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)\n@cython.locals(\n dt=cython.double, delta_2=cython.double, 
delta_3=cython.double, i=cython.int\n)\n@cython.locals(\n a1=cython.complex, b1=cython.complex, c1=cython.complex, d1=cython.complex\n)\ndef _split_cubic_into_n_gen(p0, p1, p2, p3, n):\n a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3)\n dt = 1 / n\n delta_2 = dt * dt\n delta_3 = dt * delta_2\n for i in range(n):\n t1 = i * dt\n t1_2 = t1 * t1\n # calc new a, b, c and d\n a1 = a * delta_3\n b1 = (3 * a * t1 + b) * delta_2\n c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt\n d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d\n yield calc_cubic_points(a1, b1, c1, d1)\n\n\n@cython.cfunc\n@cython.inline\n@cython.locals(\n p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex\n)\n@cython.locals(mid=cython.complex, deriv3=cython.complex)\ndef split_cubic_into_two(p0, p1, p2, p3):\n """Split a cubic Bezier into two equal parts.\n\n Splits the curve into two equal parts at t = 0.5\n\n Args:\n p0 (complex): Start point of curve.\n p1 (complex): First handle of curve.\n p2 (complex): Second handle of curve.\n p3 (complex): End point of curve.\n\n Returns:\n tuple: Two cubic Beziers (each expressed as a tuple of four complex\n values).\n """\n mid = (p0 + 3 * (p1 + p2) + p3) * 0.125\n deriv3 = (p3 + p2 - p1 - p0) * 0.125\n return (\n (p0, (p0 + p1) * 0.5, mid - deriv3, mid),\n (mid, mid + deriv3, (p2 + p3) * 0.5, p3),\n )\n\n\n@cython.cfunc\n@cython.inline\n@cython.locals(\n p0=cython.complex,\n p1=cython.complex,\n p2=cython.complex,\n p3=cython.complex,\n)\n@cython.locals(\n mid1=cython.complex,\n deriv1=cython.complex,\n mid2=cython.complex,\n deriv2=cython.complex,\n)\ndef split_cubic_into_three(p0, p1, p2, p3):\n """Split a cubic Bezier into three equal parts.\n\n Splits the curve into three equal parts at t = 1/3 and t = 2/3\n\n Args:\n p0 (complex): Start point of curve.\n p1 (complex): First handle of curve.\n p2 (complex): Second handle of curve.\n p3 (complex): End point of curve.\n\n Returns:\n tuple: Three cubic Beziers (each expressed as a tuple of four complex\n values).\n """\n mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27)\n deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27)\n mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27)\n deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27)\n return (\n (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1),\n (mid1, mid1 + deriv1, mid2 - deriv2, mid2),\n (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3),\n )\n\n\n@cython.cfunc\n@cython.inline\n@cython.returns(cython.complex)\n@cython.locals(\n t=cython.double,\n p0=cython.complex,\n p1=cython.complex,\n p2=cython.complex,\n p3=cython.complex,\n)\n@cython.locals(_p1=cython.complex, _p2=cython.complex)\ndef cubic_approx_control(t, p0, p1, p2, p3):\n """Approximate a cubic Bezier using a quadratic one.\n\n Args:\n t (double): Position of control point.\n p0 (complex): Start point of curve.\n p1 (complex): First handle of curve.\n p2 (complex): Second handle of curve.\n p3 (complex): End point of curve.\n\n Returns:\n complex: Location of candidate control point on quadratic curve.\n """\n _p1 = p0 + (p1 - p0) * 1.5\n _p2 = p3 + (p2 - p3) * 1.5\n return _p1 + (_p2 - _p1) * t\n\n\n@cython.cfunc\n@cython.inline\n@cython.returns(cython.complex)\n@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)\n@cython.locals(ab=cython.complex, cd=cython.complex, p=cython.complex, h=cython.double)\ndef calc_intersect(a, b, c, d):\n """Calculate the intersection of two lines.\n\n Args:\n a (complex): Start point of first line.\n b (complex): End point of first line.\n c (complex): Start point of 
second line.\n d (complex): End point of second line.\n\n Returns:\n complex: Location of intersection if one present, ``complex(NaN,NaN)``\n if no intersection was found.\n """\n ab = b - a\n cd = d - c\n p = ab * 1j\n try:\n h = dot(p, a - c) / dot(p, cd)\n except ZeroDivisionError:\n return complex(NAN, NAN)\n return c + cd * h\n\n\n@cython.cfunc\n@cython.returns(cython.int)\n@cython.locals(\n tolerance=cython.double,\n p0=cython.complex,\n p1=cython.complex,\n p2=cython.complex,\n p3=cython.complex,\n)\n@cython.locals(mid=cython.complex, deriv3=cython.complex)\ndef cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance):\n """Check if a cubic Bezier lies within a given distance of the origin.\n\n "Origin" means *the* origin (0,0), not the start of the curve. Note that no\n checks are made on the start and end positions of the curve; this function\n only checks the inside of the curve.\n\n Args:\n p0 (complex): Start point of curve.\n p1 (complex): First handle of curve.\n p2 (complex): Second handle of curve.\n p3 (complex): End point of curve.\n tolerance (double): Distance from origin.\n\n Returns:\n bool: True if the cubic Bezier ``p`` entirely lies within a distance\n ``tolerance`` of the origin, False otherwise.\n """\n # First check p2 then p1, as p2 has higher error early on.\n if abs(p2) <= tolerance and abs(p1) <= tolerance:\n return True\n\n # Split.\n mid = (p0 + 3 * (p1 + p2) + p3) * 0.125\n if abs(mid) > tolerance:\n return False\n deriv3 = (p3 + p2 - p1 - p0) * 0.125\n return cubic_farthest_fit_inside(\n p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance\n ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance)\n\n\n@cython.cfunc\n@cython.inline\n@cython.locals(tolerance=cython.double)\n@cython.locals(\n q1=cython.complex,\n c0=cython.complex,\n c1=cython.complex,\n c2=cython.complex,\n c3=cython.complex,\n)\ndef cubic_approx_quadratic(cubic, tolerance):\n """Approximate a cubic Bezier with a single quadratic within a given tolerance.\n\n Args:\n cubic (sequence): Four complex numbers representing control points of\n the cubic Bezier curve.\n tolerance (double): Permitted deviation from the original curve.\n\n Returns:\n Three complex numbers representing control points of the quadratic\n curve if it fits within the given tolerance, or ``None`` if no suitable\n curve could be calculated.\n """\n\n q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3])\n if math.isnan(q1.imag):\n return None\n c0 = cubic[0]\n c3 = cubic[3]\n c1 = c0 + (q1 - c0) * (2 / 3)\n c2 = c3 + (q1 - c3) * (2 / 3)\n if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance):\n return None\n return c0, q1, c3\n\n\n@cython.cfunc\n@cython.locals(n=cython.int, tolerance=cython.double)\n@cython.locals(i=cython.int)\n@cython.locals(all_quadratic=cython.int)\n@cython.locals(\n c0=cython.complex, c1=cython.complex, c2=cython.complex, c3=cython.complex\n)\n@cython.locals(\n q0=cython.complex,\n q1=cython.complex,\n next_q1=cython.complex,\n q2=cython.complex,\n d1=cython.complex,\n)\ndef cubic_approx_spline(cubic, n, tolerance, all_quadratic):\n """Approximate a cubic Bezier curve with a spline of n quadratics.\n\n Args:\n cubic (sequence): Four complex numbers representing control points of\n the cubic Bezier curve.\n n (int): Number of quadratic Bezier curves in the spline.\n tolerance (double): Permitted deviation from the original curve.\n\n Returns:\n A list of ``n+2`` complex numbers, representing control points of the\n quadratic spline if it fits within 
the given tolerance, or ``None`` if\n    no suitable spline could be calculated.\n    """\n\n    if n == 1:\n        return cubic_approx_quadratic(cubic, tolerance)\n    if n == 2 and not all_quadratic:\n        return cubic\n\n    cubics = split_cubic_into_n_iter(cubic[0], cubic[1], cubic[2], cubic[3], n)\n\n    # calculate the spline of quadratics and check errors at the same time.\n    next_cubic = next(cubics)\n    next_q1 = cubic_approx_control(\n        0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3]\n    )\n    q2 = cubic[0]\n    d1 = 0j\n    spline = [cubic[0], next_q1]\n    for i in range(1, n + 1):\n        # Current cubic to convert\n        c0, c1, c2, c3 = next_cubic\n\n        # Current quadratic approximation of current cubic\n        q0 = q2\n        q1 = next_q1\n        if i < n:\n            next_cubic = next(cubics)\n            next_q1 = cubic_approx_control(\n                i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3]\n            )\n            spline.append(next_q1)\n            q2 = (q1 + next_q1) * 0.5\n        else:\n            q2 = c3\n\n        # End-point deltas\n        d0 = d1\n        d1 = q2 - c3\n\n        if abs(d1) > tolerance or not cubic_farthest_fit_inside(\n            d0,\n            q0 + (q1 - q0) * (2 / 3) - c1,\n            q2 + (q1 - q2) * (2 / 3) - c2,\n            d1,\n            tolerance,\n        ):\n            return None\n    spline.append(cubic[3])\n\n    return spline\n\n\n@cython.locals(max_err=cython.double)\n@cython.locals(n=cython.int)\n@cython.locals(all_quadratic=cython.int)\ndef curve_to_quadratic(curve, max_err, all_quadratic=True):\n    """Approximate a cubic Bezier curve with a spline of n quadratics.\n\n    Args:\n        curve (sequence): Four 2D tuples representing control points of\n            the cubic Bezier curve.\n        max_err (double): Permitted deviation from the original curve.\n        all_quadratic (bool): If True (default) returned value is a\n            quadratic spline. If False, it's either a single quadratic\n            curve or a single cubic curve.\n\n    Returns:\n        If all_quadratic is True: A list of 2D tuples, representing\n        control points of the quadratic spline if it fits within the\n        given tolerance, or ``None`` if no suitable spline could be\n        calculated.\n\n        If all_quadratic is False: Either a quadratic curve (if length\n        of output is 3), or a cubic curve (if length of output is 4).\n    """\n\n    curve = [complex(*p) for p in curve]\n\n    for n in range(1, MAX_N + 1):\n        spline = cubic_approx_spline(curve, n, max_err, all_quadratic)\n        if spline is not None:\n            # done. go home\n            return [(s.real, s.imag) for s in spline]\n\n    raise ApproxNotFoundError(curve)\n\n\n@cython.locals(l=cython.int, last_i=cython.int, i=cython.int)\n@cython.locals(all_quadratic=cython.int)\ndef curves_to_quadratic(curves, max_errors, all_quadratic=True):\n    """Return quadratic Bezier splines approximating the input cubic Beziers.\n\n    Args:\n        curves: A sequence of *n* curves, each curve being a sequence of four\n            2D tuples.\n        max_errors: A sequence of *n* floats representing the maximum permissible\n            deviation from each of the cubic Bezier curves.\n        all_quadratic (bool): If True (default) returned values are a\n            quadratic spline. If False, they are either a single quadratic\n            curve or a single cubic curve.\n\n    Example::\n\n        >>> curves_to_quadratic( [\n        ...   [ (50,50), (100,100), (150,100), (200,50) ],\n        ...   [ (75,50), (120,100), (150,75), (200,60) ]\n        ... ], [1,1] )\n        [[(50.0, 50.0), (75.0, 75.0), (125.0, 91.66666666666666), (175.0, 75.0), (200.0, 50.0)], [(75.0, 50.0), (97.5, 75.0), (135.41666666666666, 82.08333333333333), (175.0, 67.5), (200.0, 60.0)]]\n\n    The returned splines have "implied oncurve points" suitable for use in\n    TrueType ``glif`` outlines - i.e. 
in the first spline returned above,\n the first quadratic segment runs from (50,50) to\n ( (75 + 125)/2 , (120 + 91.666..)/2 ) = (100, 83.333...).\n\n Returns:\n If all_quadratic is True, a list of splines, each spline being a list\n of 2D tuples.\n\n If all_quadratic is False, a list of curves, each curve being a quadratic\n (length 3), or cubic (length 4).\n\n Raises:\n fontTools.cu2qu.Errors.ApproxNotFoundError: if no suitable approximation\n can be found for all curves with the given parameters.\n """\n\n curves = [[complex(*p) for p in curve] for curve in curves]\n assert len(max_errors) == len(curves)\n\n l = len(curves)\n splines = [None] * l\n last_i = i = 0\n n = 1\n while True:\n spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic)\n if spline is None:\n if n == MAX_N:\n break\n n += 1\n last_i = i\n continue\n splines[i] = spline\n i = (i + 1) % l\n if i == last_i:\n # done. go home\n return [[(s.real, s.imag) for s in spline] for spline in splines]\n\n raise ApproxNotFoundError(curves)\n
.venv\Lib\site-packages\fontTools\cu2qu\cu2qu.py
cu2qu.py
Python
16,970
0.95
0.112994
0.058166
react-lib
401
2023-07-20T21:49:53.280703
GPL-3.0
false
52467f78b98f2f6cb70502728451ecf4
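A usage sketch for the two exported helpers, reusing the control points from the curves_to_quadratic doctest above. A spline of n quadratic segments comes back as n + 2 points, with implied on-curve points between segments:

from fontTools.cu2qu import curve_to_quadratic, curves_to_quadratic

cubic = [(50, 50), (100, 100), (150, 100), (200, 50)]
spline = curve_to_quadratic(cubic, 1.0)
print(len(spline) - 2, "quadratic segments")  # 3, matching the doctest above

splines = curves_to_quadratic(
    [cubic, [(75, 50), (120, 100), (150, 75), (200, 60)]], [1, 1]
)
print([len(s) for s in splines])  # [5, 5] -- interpolation-compatible lengths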
# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nclass Error(Exception):\n """Base Cu2Qu exception class for all other errors."""\n\n\nclass ApproxNotFoundError(Error):\n def __init__(self, curve):\n message = "no approximation found: %s" % curve\n super().__init__(message)\n self.curve = curve\n\n\nclass UnequalZipLengthsError(Error):\n pass\n\n\nclass IncompatibleGlyphsError(Error):\n def __init__(self, glyphs):\n assert len(glyphs) > 1\n self.glyphs = glyphs\n names = set(repr(g.name) for g in glyphs)\n if len(names) > 1:\n self.combined_name = "{%s}" % ", ".join(sorted(names))\n else:\n self.combined_name = names.pop()\n\n def __repr__(self):\n return "<%s %s>" % (type(self).__name__, self.combined_name)\n\n\nclass IncompatibleSegmentNumberError(IncompatibleGlyphsError):\n def __str__(self):\n return "Glyphs named %s have different number of segments" % (\n self.combined_name\n )\n\n\nclass IncompatibleSegmentTypesError(IncompatibleGlyphsError):\n def __init__(self, glyphs, segments):\n IncompatibleGlyphsError.__init__(self, glyphs)\n self.segments = segments\n\n def __str__(self):\n lines = []\n ndigits = len(str(max(self.segments)))\n for i, tags in sorted(self.segments.items()):\n lines.append(\n "%s: (%s)" % (str(i).rjust(ndigits), ", ".join(repr(t) for t in tags))\n )\n return "Glyphs named %s have incompatible segment types:\n %s" % (\n self.combined_name,\n "\n ".join(lines),\n )\n\n\nclass IncompatibleFontsError(Error):\n def __init__(self, glyph_errors):\n self.glyph_errors = glyph_errors\n\n def __str__(self):\n return "fonts contains incompatible glyphs: %s" % (\n ", ".join(repr(g) for g in sorted(self.glyph_errors.keys()))\n )\n
.venv\Lib\site-packages\fontTools\cu2qu\errors.py
errors.py
Python
2,518
0.95
0.298701
0.216667
node-utils
776
2024-03-27T19:29:21.212914
Apache-2.0
false
3a65843b2681212d0556bbec1be1de53
# Copyright 2015 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n"""Converts cubic bezier curves to quadratic splines.\n\nConversion is performed such that the quadratic splines keep the same end-curve\ntangents as the original cubics. The approach is iterative, increasing the\nnumber of segments for a spline until the error gets below a bound.\n\nRespective curves from multiple fonts will be converted at once to ensure that\nthe resulting splines are interpolation-compatible.\n"""\n\nimport logging\nfrom fontTools.pens.basePen import AbstractPen\nfrom fontTools.pens.pointPen import PointToSegmentPen\nfrom fontTools.pens.reverseContourPen import ReverseContourPen\n\nfrom . import curves_to_quadratic\nfrom .errors import (\n UnequalZipLengthsError,\n IncompatibleSegmentNumberError,\n IncompatibleSegmentTypesError,\n IncompatibleGlyphsError,\n IncompatibleFontsError,\n)\n\n\n__all__ = ["fonts_to_quadratic", "font_to_quadratic"]\n\n# The default approximation error below is a relative value (1/1000 of the EM square).\n# Later on, we convert it to absolute font units by multiplying it by a font's UPEM\n# (see fonts_to_quadratic).\nDEFAULT_MAX_ERR = 0.001\nCURVE_TYPE_LIB_KEY = "com.github.googlei18n.cu2qu.curve_type"\n\nlogger = logging.getLogger(__name__)\n\n\n_zip = zip\n\n\ndef zip(*args):\n """Ensure each argument to zip has the same length. Also make sure a list is\n returned for python 2/3 compatibility.\n """\n\n if len(set(len(a) for a in args)) != 1:\n raise UnequalZipLengthsError(*args)\n return list(_zip(*args))\n\n\nclass GetSegmentsPen(AbstractPen):\n """Pen to collect segments into lists of points for conversion.\n\n Curves always include their initial on-curve point, so some points are\n duplicated between segments.\n """\n\n def __init__(self):\n self._last_pt = None\n self.segments = []\n\n def _add_segment(self, tag, *args):\n if tag in ["move", "line", "qcurve", "curve"]:\n self._last_pt = args[-1]\n self.segments.append((tag, args))\n\n def moveTo(self, pt):\n self._add_segment("move", pt)\n\n def lineTo(self, pt):\n self._add_segment("line", pt)\n\n def qCurveTo(self, *points):\n self._add_segment("qcurve", self._last_pt, *points)\n\n def curveTo(self, *points):\n self._add_segment("curve", self._last_pt, *points)\n\n def closePath(self):\n self._add_segment("close")\n\n def endPath(self):\n self._add_segment("end")\n\n def addComponent(self, glyphName, transformation):\n pass\n\n\ndef _get_segments(glyph):\n """Get a glyph's segments as extracted by GetSegmentsPen."""\n\n pen = GetSegmentsPen()\n # glyph.draw(pen)\n # We can't simply draw the glyph with the pen, but we must initialize the\n # PointToSegmentPen explicitly with outputImpliedClosingLine=True.\n # By default PointToSegmentPen does not outputImpliedClosingLine -- unless\n # last and first point on closed contour are duplicated. 
Because we are\n    # converting multiple glyphs at the same time, we want to make sure\n    # this function returns the same number of segments, whether or not\n    # the last and first point overlap.\n    # https://github.com/googlefonts/fontmake/issues/572\n    # https://github.com/fonttools/fonttools/pull/1720\n    pointPen = PointToSegmentPen(pen, outputImpliedClosingLine=True)\n    glyph.drawPoints(pointPen)\n    return pen.segments\n\n\ndef _set_segments(glyph, segments, reverse_direction):\n    """Draw segments as extracted by GetSegmentsPen back to a glyph."""\n\n    glyph.clearContours()\n    pen = glyph.getPen()\n    if reverse_direction:\n        pen = ReverseContourPen(pen)\n    for tag, args in segments:\n        if tag == "move":\n            pen.moveTo(*args)\n        elif tag == "line":\n            pen.lineTo(*args)\n        elif tag == "curve":\n            pen.curveTo(*args[1:])\n        elif tag == "qcurve":\n            pen.qCurveTo(*args[1:])\n        elif tag == "close":\n            pen.closePath()\n        elif tag == "end":\n            pen.endPath()\n        else:\n            raise AssertionError('Unhandled segment type "%s"' % tag)\n\n\ndef _segments_to_quadratic(segments, max_err, stats, all_quadratic=True):\n    """Return quadratic approximations of cubic segments."""\n\n    assert all(s[0] == "curve" for s in segments), "Non-cubic given to convert"\n\n    new_points = curves_to_quadratic([s[1] for s in segments], max_err, all_quadratic)\n    n = len(new_points[0])\n    assert all(len(s) == n for s in new_points[1:]), "Converted incompatibly"\n\n    spline_length = str(n - 2)\n    stats[spline_length] = stats.get(spline_length, 0) + 1\n\n    if all_quadratic or n == 3:\n        return [("qcurve", p) for p in new_points]\n    else:\n        return [("curve", p) for p in new_points]\n\n\ndef _glyphs_to_quadratic(glyphs, max_err, reverse_direction, stats, all_quadratic=True):\n    """Do the actual conversion of a set of compatible glyphs, after arguments\n    have been set up.\n\n    Return True if the glyphs were modified, else return False.\n    """\n\n    try:\n        segments_by_location = zip(*[_get_segments(g) for g in glyphs])\n    except UnequalZipLengthsError:\n        raise IncompatibleSegmentNumberError(glyphs)\n    if not any(segments_by_location):\n        return False\n\n    # always modify input glyphs if reverse_direction is True\n    glyphs_modified = reverse_direction\n\n    new_segments_by_location = []\n    incompatible = {}\n    for i, segments in enumerate(segments_by_location):\n        tag = segments[0][0]\n        if not all(s[0] == tag for s in segments[1:]):\n            incompatible[i] = [s[0] for s in segments]\n        elif tag == "curve":\n            new_segments = _segments_to_quadratic(\n                segments, max_err, stats, all_quadratic\n            )\n            if all_quadratic or new_segments != segments:\n                glyphs_modified = True\n            segments = new_segments\n        new_segments_by_location.append(segments)\n\n    if glyphs_modified:\n        new_segments_by_glyph = zip(*new_segments_by_location)\n        for glyph, new_segments in zip(glyphs, new_segments_by_glyph):\n            _set_segments(glyph, new_segments, reverse_direction)\n\n    if incompatible:\n        raise IncompatibleSegmentTypesError(glyphs, segments=incompatible)\n    return glyphs_modified\n\n\ndef glyphs_to_quadratic(\n    glyphs, max_err=None, reverse_direction=False, stats=None, all_quadratic=True\n):\n    """Convert the curves of a set of compatible glyphs to quadratic.\n\n    All curves will be converted to quadratic at once, ensuring interpolation\n    compatibility. 
If this is not required, calling glyphs_to_quadratic with one\n    glyph at a time may yield slightly more optimized results.\n\n    Return True if glyphs were modified, else return False.\n\n    Raises IncompatibleGlyphsError if glyphs have non-interpolatable outlines.\n    """\n    if stats is None:\n        stats = {}\n\n    if not max_err:\n        # assume 1000 is the default UPEM\n        max_err = DEFAULT_MAX_ERR * 1000\n\n    if isinstance(max_err, (list, tuple)):\n        max_errors = max_err\n    else:\n        max_errors = [max_err] * len(glyphs)\n    assert len(max_errors) == len(glyphs)\n\n    return _glyphs_to_quadratic(\n        glyphs, max_errors, reverse_direction, stats, all_quadratic\n    )\n\n\ndef fonts_to_quadratic(\n    fonts,\n    max_err_em=None,\n    max_err=None,\n    reverse_direction=False,\n    stats=None,\n    dump_stats=False,\n    remember_curve_type=True,\n    all_quadratic=True,\n):\n    """Convert the curves of a collection of fonts to quadratic.\n\n    All curves will be converted to quadratic at once, ensuring interpolation\n    compatibility. If this is not required, calling fonts_to_quadratic with one\n    font at a time may yield slightly more optimized results.\n\n    Return the set of modified glyph names if any, else return an empty set.\n\n    By default, cu2qu stores the curve type in the fonts' lib, under a private\n    key "com.github.googlei18n.cu2qu.curve_type", and will not try to convert\n    them again if the curve type is already set to "quadratic".\n    Setting 'remember_curve_type' to False disables this optimization.\n\n    Raises IncompatibleFontsError if same-named glyphs from different fonts\n    have non-interpolatable outlines.\n    """\n\n    if remember_curve_type:\n        curve_types = {f.lib.get(CURVE_TYPE_LIB_KEY, "cubic") for f in fonts}\n        if len(curve_types) == 1:\n            curve_type = next(iter(curve_types))\n            if curve_type in ("quadratic", "mixed"):\n                logger.info("Curves already converted to quadratic")\n                return set()\n            elif curve_type == "cubic":\n                pass  # keep converting\n            else:\n                raise NotImplementedError(curve_type)\n        elif len(curve_types) > 1:\n            # going to crash later if they do differ\n            logger.warning("fonts may contain different curve types")\n\n    if stats is None:\n        stats = {}\n\n    if max_err_em and max_err:\n        raise TypeError("Only one of max_err and max_err_em can be specified.")\n    if not (max_err_em or max_err):\n        max_err_em = DEFAULT_MAX_ERR\n\n    if isinstance(max_err, (list, tuple)):\n        assert len(max_err) == len(fonts)\n        max_errors = max_err\n    elif max_err:\n        max_errors = [max_err] * len(fonts)\n\n    if isinstance(max_err_em, (list, tuple)):\n        assert len(fonts) == len(max_err_em)\n        max_errors = [f.info.unitsPerEm * e for f, e in zip(fonts, max_err_em)]\n    elif max_err_em:\n        max_errors = [f.info.unitsPerEm * max_err_em for f in fonts]\n\n    modified = set()\n    glyph_errors = {}\n    for name in set().union(*(f.keys() for f in fonts)):\n        glyphs = []\n        cur_max_errors = []\n        for font, error in zip(fonts, max_errors):\n            if name in font:\n                glyphs.append(font[name])\n                cur_max_errors.append(error)\n        try:\n            if _glyphs_to_quadratic(\n                glyphs, cur_max_errors, reverse_direction, stats, all_quadratic\n            ):\n                modified.add(name)\n        except IncompatibleGlyphsError as exc:\n            logger.error(exc)\n            glyph_errors[name] = exc\n\n    if glyph_errors:\n        raise IncompatibleFontsError(glyph_errors)\n\n    if modified and dump_stats:\n        spline_lengths = sorted(stats.keys())\n        logger.info(\n            "New spline lengths: %s"\n            % (", ".join("%s: %d" % (l, stats[l]) for l in spline_lengths))\n        )\n\n    if remember_curve_type:\n        for font in fonts:\n            curve_type = font.lib.get(CURVE_TYPE_LIB_KEY, "cubic")\n            new_curve_type = 
"quadratic" if all_quadratic else "mixed"\n if curve_type != new_curve_type:\n font.lib[CURVE_TYPE_LIB_KEY] = new_curve_type\n return modified\n\n\ndef glyph_to_quadratic(glyph, **kwargs):\n """Convenience wrapper around glyphs_to_quadratic, for just one glyph.\n Return True if the glyph was modified, else return False.\n """\n\n return glyphs_to_quadratic([glyph], **kwargs)\n\n\ndef font_to_quadratic(font, **kwargs):\n """Convenience wrapper around fonts_to_quadratic, for just one font.\n Return the set of modified glyph names if any, else return empty set.\n """\n\n return fonts_to_quadratic([font], **kwargs)\n
.venv\Lib\site-packages\fontTools\cu2qu\ufo.py
ufo.py
Python
12,143
0.95
0.249284
0.106227
vue-tools
587
2023-11-05T17:03:41.450218
Apache-2.0
false
0a0f222e638e2aff47fd9da95a4ff3ec
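A sketch of the single-font convenience path, assuming ufoLib2 is installed; "MyFont.ufo" is a hypothetical source, and the call mirrors what cli.py does for each input:

import ufoLib2
from fontTools.cu2qu.ufo import font_to_quadratic

ufo = ufoLib2.Font.open("MyFont.ufo")
stats = {}
if font_to_quadratic(ufo, max_err_em=0.001, stats=stats):
    ufo.save()  # modified in place, as the CLI does without -o/-d
print(stats)  # e.g. {"3": 130, "2": 48} -- spline length (segments) -> count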
# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom .cu2qu import *\n
.venv\Lib\site-packages\fontTools\cu2qu\__init__.py
__init__.py
Python
633
0.95
0.066667
0.928571
vue-tools
776
2024-10-04T07:28:35.514202
MIT
false
ad1d3e14b73e2f817c11b0519df016e6
import sys\nfrom .cli import _main as main\n\n\nif __name__ == "__main__":\n sys.exit(main())\n
.venv\Lib\site-packages\fontTools\cu2qu\__main__.py
__main__.py
Python
98
0.65
0.166667
0
react-lib
446
2024-11-02T17:01:10.732985
BSD-3-Clause
false
e8e2464656e4bfaf29f3151528f9c6b5
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\benchmark.cpython-313.pyc
benchmark.cpython-313.pyc
Other
2,792
0.95
0.028571
0
vue-tools
399
2023-10-12T11:03:13.944403
Apache-2.0
false
dda422cd410bc54147c41387d63869bf
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\cli.cpython-313.pyc
cli.cpython-313.pyc
Other
8,693
0.95
0.022472
0
vue-tools
618
2024-06-02T16:24:29.874254
GPL-3.0
false
3b92a10a01f9e73d89708d93c1997c1b
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\cu2qu.cpython-313.pyc
cu2qu.cpython-313.pyc
Other
20,988
0.95
0.045326
0.003165
awesome-app
106
2024-12-08T17:01:54.614438
GPL-3.0
false
c647c17e355c6e2411b57b07c2b026ea
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\errors.cpython-313.pyc
errors.cpython-313.pyc
Other
5,230
0.95
0.036364
0
react-lib
635
2024-08-07T01:47:28.509954
MIT
false
cfded47231ecc94dac5cc956bcf96bfa
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\ufo.cpython-313.pyc
ufo.cpython-313.pyc
Other
14,531
0.95
0.096552
0
python-kit
491
2024-09-06T21:36:59.870199
MIT
false
7e9f624138f9298c6176f057ab77f726
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
219
0.7
0
0
awesome-app
191
2023-12-25T12:17:41.913549
GPL-3.0
false
55fde3c6049cfbb8c1551b8dcdc2e0a1
\n\n
.venv\Lib\site-packages\fontTools\cu2qu\__pycache__\__main__.cpython-313.pyc
__main__.cpython-313.pyc
Other
368
0.7
0
0
react-lib
650
2024-10-27T03:26:42.292223
BSD-3-Clause
false
2fb9f82be0029962b6b4d2cda788eddf
"""Compute name information for a given location in user-space coordinates\nusing STAT data. This can be used to fill-in automatically the names of an\ninstance:\n\n.. code:: python\n\n instance = doc.instances[0]\n names = getStatNames(doc, instance.getFullUserLocation(doc))\n print(names.styleNames)\n"""\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Dict, Literal, Optional, Tuple, Union\nimport logging\n\nfrom fontTools.designspaceLib import (\n AxisDescriptor,\n AxisLabelDescriptor,\n DesignSpaceDocument,\n DiscreteAxisDescriptor,\n SimpleLocationDict,\n SourceDescriptor,\n)\n\nLOGGER = logging.getLogger(__name__)\n\nRibbiStyleName = Union[\n Literal["regular"],\n Literal["bold"],\n Literal["italic"],\n Literal["bold italic"],\n]\n\nBOLD_ITALIC_TO_RIBBI_STYLE = {\n (False, False): "regular",\n (False, True): "italic",\n (True, False): "bold",\n (True, True): "bold italic",\n}\n\n\n@dataclass\nclass StatNames:\n """Name data generated from the STAT table information."""\n\n familyNames: Dict[str, str]\n styleNames: Dict[str, str]\n postScriptFontName: Optional[str]\n styleMapFamilyNames: Dict[str, str]\n styleMapStyleName: Optional[RibbiStyleName]\n\n\ndef getStatNames(\n doc: DesignSpaceDocument, userLocation: SimpleLocationDict\n) -> StatNames:\n """Compute the family, style, PostScript names of the given ``userLocation``\n using the document's STAT information.\n\n Also computes localizations.\n\n If not enough STAT data is available for a given name, either its dict of\n localized names will be empty (family and style names), or the name will be\n None (PostScript name).\n\n Note: this method does not consider info attached to the instance, like\n family name. The user needs to override all names on an instance that STAT\n information would compute differently than desired.\n\n .. 
versionadded:: 5.0\n """\n familyNames: Dict[str, str] = {}\n defaultSource: Optional[SourceDescriptor] = doc.findDefault()\n if defaultSource is None:\n LOGGER.warning("Cannot determine default source to look up family name.")\n elif defaultSource.familyName is None:\n LOGGER.warning(\n "Cannot look up family name, assign the 'familyname' attribute to the default source."\n )\n else:\n familyNames = {\n "en": defaultSource.familyName,\n **defaultSource.localisedFamilyName,\n }\n\n styleNames: Dict[str, str] = {}\n # If a free-standing label matches the location, use it for name generation.\n label = doc.labelForUserLocation(userLocation)\n if label is not None:\n styleNames = {"en": label.name, **label.labelNames}\n # Otherwise, scour the axis labels for matches.\n else:\n # Gather all languages in which at least one translation is provided\n # Then build names for all these languages, but fallback to English\n # whenever a translation is missing.\n labels = _getAxisLabelsForUserLocation(doc.axes, userLocation)\n if labels:\n languages = set(\n language for label in labels for language in label.labelNames\n )\n languages.add("en")\n for language in languages:\n styleName = " ".join(\n label.labelNames.get(language, label.defaultName)\n for label in labels\n if not label.elidable\n )\n if not styleName and doc.elidedFallbackName is not None:\n styleName = doc.elidedFallbackName\n styleNames[language] = styleName\n\n if "en" not in familyNames or "en" not in styleNames:\n # Not enough information to compute PS names of styleMap names\n return StatNames(\n familyNames=familyNames,\n styleNames=styleNames,\n postScriptFontName=None,\n styleMapFamilyNames={},\n styleMapStyleName=None,\n )\n\n postScriptFontName = f"{familyNames['en']}-{styleNames['en']}".replace(" ", "")\n\n styleMapStyleName, regularUserLocation = _getRibbiStyle(doc, userLocation)\n\n styleNamesForStyleMap = styleNames\n if regularUserLocation != userLocation:\n regularStatNames = getStatNames(doc, regularUserLocation)\n styleNamesForStyleMap = regularStatNames.styleNames\n\n styleMapFamilyNames = {}\n for language in set(familyNames).union(styleNames.keys()):\n familyName = familyNames.get(language, familyNames["en"])\n styleName = styleNamesForStyleMap.get(language, styleNamesForStyleMap["en"])\n styleMapFamilyNames[language] = (familyName + " " + styleName).strip()\n\n return StatNames(\n familyNames=familyNames,\n styleNames=styleNames,\n postScriptFontName=postScriptFontName,\n styleMapFamilyNames=styleMapFamilyNames,\n styleMapStyleName=styleMapStyleName,\n )\n\n\ndef _getSortedAxisLabels(\n axes: list[Union[AxisDescriptor, DiscreteAxisDescriptor]],\n) -> Dict[str, list[AxisLabelDescriptor]]:\n """Returns axis labels sorted by their ordering, with unordered ones appended as\n they are listed."""\n\n # First, get the axis labels with explicit ordering...\n sortedAxes = sorted(\n (axis for axis in axes if axis.axisOrdering is not None),\n key=lambda a: a.axisOrdering,\n )\n sortedLabels: Dict[str, list[AxisLabelDescriptor]] = {\n axis.name: axis.axisLabels for axis in sortedAxes\n }\n\n # ... 
then append the others in the order they appear.\n # NOTE: This relies on Python 3.7+ dict's preserved insertion order.\n for axis in axes:\n if axis.axisOrdering is None:\n sortedLabels[axis.name] = axis.axisLabels\n\n return sortedLabels\n\n\ndef _getAxisLabelsForUserLocation(\n axes: list[Union[AxisDescriptor, DiscreteAxisDescriptor]],\n userLocation: SimpleLocationDict,\n) -> list[AxisLabelDescriptor]:\n labels: list[AxisLabelDescriptor] = []\n\n allAxisLabels = _getSortedAxisLabels(axes)\n if allAxisLabels.keys() != userLocation.keys():\n LOGGER.warning(\n f"Mismatch between user location '{userLocation.keys()}' and available "\n f"labels for '{allAxisLabels.keys()}'."\n )\n\n for axisName, axisLabels in allAxisLabels.items():\n userValue = userLocation[axisName]\n label: Optional[AxisLabelDescriptor] = next(\n (\n l\n for l in axisLabels\n if l.userValue == userValue\n or (\n l.userMinimum is not None\n and l.userMaximum is not None\n and l.userMinimum <= userValue <= l.userMaximum\n )\n ),\n None,\n )\n if label is None:\n LOGGER.debug(\n f"Document needs a label for axis '{axisName}', user value '{userValue}'."\n )\n else:\n labels.append(label)\n\n return labels\n\n\ndef _getRibbiStyle(\n self: DesignSpaceDocument, userLocation: SimpleLocationDict\n) -> Tuple[RibbiStyleName, SimpleLocationDict]:\n """Compute the RIBBI style name of the given user location,\n return the location of the matching Regular in the RIBBI group.\n\n .. versionadded:: 5.0\n """\n regularUserLocation = {}\n axes_by_tag = {axis.tag: axis for axis in self.axes}\n\n bold: bool = False\n italic: bool = False\n\n axis = axes_by_tag.get("wght")\n if axis is not None:\n for regular_label in axis.axisLabels:\n if (\n regular_label.linkedUserValue == userLocation[axis.name]\n # In the "recursive" case where both the Regular has\n # linkedUserValue pointing the Bold, and the Bold has\n # linkedUserValue pointing to the Regular, only consider the\n # first case: Regular (e.g. 400) has linkedUserValue pointing to\n # Bold (e.g. 700, higher than Regular)\n and regular_label.userValue < regular_label.linkedUserValue\n ):\n regularUserLocation[axis.name] = regular_label.userValue\n bold = True\n break\n\n axis = axes_by_tag.get("ital") or axes_by_tag.get("slnt")\n if axis is not None:\n for upright_label in axis.axisLabels:\n if (\n upright_label.linkedUserValue == userLocation[axis.name]\n # In the "recursive" case where both the Upright has\n # linkedUserValue pointing the Italic, and the Italic has\n # linkedUserValue pointing to the Upright, only consider the\n # first case: Upright (e.g. ital=0, slant=0) has\n # linkedUserValue pointing to Italic (e.g ital=1, slant=-12 or\n # slant=12 for backwards italics, in any case higher than\n # Upright in absolute value, hence the abs() below.\n and abs(upright_label.userValue) < abs(upright_label.linkedUserValue)\n ):\n regularUserLocation[axis.name] = upright_label.userValue\n italic = True\n break\n\n return BOLD_ITALIC_TO_RIBBI_STYLE[bold, italic], {\n **userLocation,\n **regularUserLocation,\n }\n
.venv\Lib\site-packages\fontTools\designspaceLib\statNames.py
statNames.py
Python
9,497
0.95
0.161538
0.109589
node-utils
186
2024-08-27T08:43:16.501223
BSD-3-Clause
false
0f041848d80098be79c9d932c192b990
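Expanding the module docstring's example into a runnable sketch; the designspace path is hypothetical, and the document needs STAT axis labels for the computed names to be non-empty:

from fontTools.designspaceLib import DesignSpaceDocument
from fontTools.designspaceLib.statNames import getStatNames

doc = DesignSpaceDocument.fromfile("MyFamily.designspace")
instance = doc.instances[0]
names = getStatNames(doc, instance.getFullUserLocation(doc))
print(names.styleNames)          # e.g. {"en": "Bold Condensed"}
print(names.postScriptFontName)  # None if the STAT data is incomplete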
import sys\nfrom fontTools.designspaceLib import main\n\n\nif __name__ == "__main__":\n sys.exit(main())\n
.venv\Lib\site-packages\fontTools\designspaceLib\__main__.py
__main__.py
Python
109
0.85
0.166667
0
awesome-app
93
2024-06-15T16:40:39.521473
BSD-3-Clause
false
1c45c36104fdb25d665e0c5ddb08229c
\n\n
.venv\Lib\site-packages\fontTools\designspaceLib\__pycache__\split.cpython-313.pyc
split.cpython-313.pyc
Other
18,138
0.8
0.017045
0
node-utils
264
2025-05-06T04:07:28.750489
MIT
false
a2ab8c1ff66cfed593367232a5479752
\n\n
.venv\Lib\site-packages\fontTools\designspaceLib\__pycache__\statNames.cpython-313.pyc
statNames.cpython-313.pyc
Other
9,548
0.8
0.038095
0
node-utils
812
2025-04-26T15:05:38.746552
GPL-3.0
false
cdea2803a34a21f58c71717d8450931f
\n\n
.venv\Lib\site-packages\fontTools\designspaceLib\__pycache__\types.cpython-313.pyc
types.cpython-313.pyc
Other
6,540
0.8
0.04878
0
node-utils
547
2025-06-28T21:39:04.855617
MIT
false
1610fcb8a18647419ad80029e5856a49
\n\n
.venv\Lib\site-packages\fontTools\designspaceLib\__pycache__\__main__.cpython-313.pyc
__main__.cpython-313.pyc
Other
386
0.7
0
0
awesome-app
840
2023-09-19T02:20:41.749898
Apache-2.0
false
d16f99d8e70f0245f1dd8273ff7a4e89
"""Extend the Python codecs module with a few encodings that are used in OpenType (name table)\nbut missing from Python. See https://github.com/fonttools/fonttools/issues/236 for details."""\n\nimport codecs\nimport encodings\n\n\nclass ExtendCodec(codecs.Codec):\n def __init__(self, name, base_encoding, mapping):\n self.name = name\n self.base_encoding = base_encoding\n self.mapping = mapping\n self.reverse = {v: k for k, v in mapping.items()}\n self.max_len = max(len(v) for v in mapping.values())\n self.info = codecs.CodecInfo(\n name=self.name, encode=self.encode, decode=self.decode\n )\n codecs.register_error(name, self.error)\n\n def _map(self, mapper, output_type, exc_type, input, errors):\n base_error_handler = codecs.lookup_error(errors)\n length = len(input)\n out = output_type()\n while input:\n # first try to use self.error as the error handler\n try:\n part = mapper(input, self.base_encoding, errors=self.name)\n out += part\n break # All converted\n except exc_type as e:\n # else convert the correct part, handle error as requested and continue\n out += mapper(input[: e.start], self.base_encoding, self.name)\n replacement, pos = base_error_handler(e)\n out += replacement\n input = input[pos:]\n return out, length\n\n def encode(self, input, errors="strict"):\n return self._map(codecs.encode, bytes, UnicodeEncodeError, input, errors)\n\n def decode(self, input, errors="strict"):\n return self._map(codecs.decode, str, UnicodeDecodeError, input, errors)\n\n def error(self, e):\n if isinstance(e, UnicodeDecodeError):\n for end in range(e.start + 1, e.end + 1):\n s = e.object[e.start : end]\n if s in self.mapping:\n return self.mapping[s], end\n elif isinstance(e, UnicodeEncodeError):\n for end in range(e.start + 1, e.start + self.max_len + 1):\n s = e.object[e.start : end]\n if s in self.reverse:\n return self.reverse[s], end\n e.encoding = self.name\n raise e\n\n\n_extended_encodings = {\n "x_mac_japanese_ttx": (\n "shift_jis",\n {\n b"\xFC": chr(0x007C),\n b"\x7E": chr(0x007E),\n b"\x80": chr(0x005C),\n b"\xA0": chr(0x00A0),\n b"\xFD": chr(0x00A9),\n b"\xFE": chr(0x2122),\n b"\xFF": chr(0x2026),\n },\n ),\n "x_mac_trad_chinese_ttx": (\n "big5",\n {\n b"\x80": chr(0x005C),\n b"\xA0": chr(0x00A0),\n b"\xFD": chr(0x00A9),\n b"\xFE": chr(0x2122),\n b"\xFF": chr(0x2026),\n },\n ),\n "x_mac_korean_ttx": (\n "euc_kr",\n {\n b"\x80": chr(0x00A0),\n b"\x81": chr(0x20A9),\n b"\x82": chr(0x2014),\n b"\x83": chr(0x00A9),\n b"\xFE": chr(0x2122),\n b"\xFF": chr(0x2026),\n },\n ),\n "x_mac_simp_chinese_ttx": (\n "gb2312",\n {\n b"\x80": chr(0x00FC),\n b"\xA0": chr(0x00A0),\n b"\xFD": chr(0x00A9),\n b"\xFE": chr(0x2122),\n b"\xFF": chr(0x2026),\n },\n ),\n}\n\n_cache = {}\n\n\ndef search_function(name):\n name = encodings.normalize_encoding(name) # Rather undocumented...\n if name in _extended_encodings:\n if name not in _cache:\n base_encoding, mapping = _extended_encodings[name]\n assert name[-4:] == "_ttx"\n # Python 2 didn't have any of the encodings that we are implementing\n # in this file. Python 3 added aliases for the East Asian ones, mapping\n # them "temporarily" to the same base encoding as us, with a comment\n # suggesting that full implementation will appear some time later.\n # As such, try the Python version of the x_mac_... first, if that is found,\n # use *that* as our base encoding. 
This would make our encoding upgrade\n # to the full encoding when and if Python finally implements that.\n # http://bugs.python.org/issue24041\n base_encodings = [name[:-4], base_encoding]\n for base_encoding in base_encodings:\n try:\n codecs.lookup(base_encoding)\n except LookupError:\n continue\n _cache[name] = ExtendCodec(name, base_encoding, mapping)\n break\n return _cache[name].info\n\n return None\n\n\ncodecs.register(search_function)\n
.venv\Lib\site-packages\fontTools\encodings\codecs.py
codecs.py
Python
4,856
0.95
0.192593
0.083333
react-lib
445
2024-11-23T23:51:49.047474
Apache-2.0
false
11a360eb864db370ff76053f88bbe9cd
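A minimal usage sketch for the record above, assuming only what its content shows: importing fontTools.encodings.codecs registers search_function as a side effect, after which the x_mac_*_ttx names behave like ordinary Python encodings.

    import fontTools.encodings.codecs  # noqa: F401 -- imported for its codecs.register() side effect

    # 0xFE is not a valid byte in plain shift_jis; the ExtendCodec error
    # handler maps it per the x_mac_japanese_ttx table above.
    assert b"\xfe".decode("x_mac_japanese_ttx") == "\u2122"  # TRADE MARK SIGN
    assert "\u2122".encode("x_mac_japanese_ttx") == b"\xfe"  # round-trips via self.reverse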
MacRoman = [\n "NUL",\n "Eth",\n "eth",\n "Lslash",\n "lslash",\n "Scaron",\n "scaron",\n "Yacute",\n "yacute",\n "HT",\n "LF",\n "Thorn",\n "thorn",\n "CR",\n "Zcaron",\n "zcaron",\n "DLE",\n "DC1",\n "DC2",\n "DC3",\n "DC4",\n "onehalf",\n "onequarter",\n "onesuperior",\n "threequarters",\n "threesuperior",\n "twosuperior",\n "brokenbar",\n "minus",\n "multiply",\n "RS",\n "US",\n "space",\n "exclam",\n "quotedbl",\n "numbersign",\n "dollar",\n "percent",\n "ampersand",\n "quotesingle",\n "parenleft",\n "parenright",\n "asterisk",\n "plus",\n "comma",\n "hyphen",\n "period",\n "slash",\n "zero",\n "one",\n "two",\n "three",\n "four",\n "five",\n "six",\n "seven",\n "eight",\n "nine",\n "colon",\n "semicolon",\n "less",\n "equal",\n "greater",\n "question",\n "at",\n "A",\n "B",\n "C",\n "D",\n "E",\n "F",\n "G",\n "H",\n "I",\n "J",\n "K",\n "L",\n "M",\n "N",\n "O",\n "P",\n "Q",\n "R",\n "S",\n "T",\n "U",\n "V",\n "W",\n "X",\n "Y",\n "Z",\n "bracketleft",\n "backslash",\n "bracketright",\n "asciicircum",\n "underscore",\n "grave",\n "a",\n "b",\n "c",\n "d",\n "e",\n "f",\n "g",\n "h",\n "i",\n "j",\n "k",\n "l",\n "m",\n "n",\n "o",\n "p",\n "q",\n "r",\n "s",\n "t",\n "u",\n "v",\n "w",\n "x",\n "y",\n "z",\n "braceleft",\n "bar",\n "braceright",\n "asciitilde",\n "DEL",\n "Adieresis",\n "Aring",\n "Ccedilla",\n "Eacute",\n "Ntilde",\n "Odieresis",\n "Udieresis",\n "aacute",\n "agrave",\n "acircumflex",\n "adieresis",\n "atilde",\n "aring",\n "ccedilla",\n "eacute",\n "egrave",\n "ecircumflex",\n "edieresis",\n "iacute",\n "igrave",\n "icircumflex",\n "idieresis",\n "ntilde",\n "oacute",\n "ograve",\n "ocircumflex",\n "odieresis",\n "otilde",\n "uacute",\n "ugrave",\n "ucircumflex",\n "udieresis",\n "dagger",\n "degree",\n "cent",\n "sterling",\n "section",\n "bullet",\n "paragraph",\n "germandbls",\n "registered",\n "copyright",\n "trademark",\n "acute",\n "dieresis",\n "notequal",\n "AE",\n "Oslash",\n "infinity",\n "plusminus",\n "lessequal",\n "greaterequal",\n "yen",\n "mu",\n "partialdiff",\n "summation",\n "product",\n "pi",\n "integral",\n "ordfeminine",\n "ordmasculine",\n "Omega",\n "ae",\n "oslash",\n "questiondown",\n "exclamdown",\n "logicalnot",\n "radical",\n "florin",\n "approxequal",\n "Delta",\n "guillemotleft",\n "guillemotright",\n "ellipsis",\n "nbspace",\n "Agrave",\n "Atilde",\n "Otilde",\n "OE",\n "oe",\n "endash",\n "emdash",\n "quotedblleft",\n "quotedblright",\n "quoteleft",\n "quoteright",\n "divide",\n "lozenge",\n "ydieresis",\n "Ydieresis",\n "fraction",\n "currency",\n "guilsinglleft",\n "guilsinglright",\n "fi",\n "fl",\n "daggerdbl",\n "periodcentered",\n "quotesinglbase",\n "quotedblbase",\n "perthousand",\n "Acircumflex",\n "Ecircumflex",\n "Aacute",\n "Edieresis",\n "Egrave",\n "Iacute",\n "Icircumflex",\n "Idieresis",\n "Igrave",\n "Oacute",\n "Ocircumflex",\n "apple",\n "Ograve",\n "Uacute",\n "Ucircumflex",\n "Ugrave",\n "dotlessi",\n "circumflex",\n "tilde",\n "macron",\n "breve",\n "dotaccent",\n "ring",\n "cedilla",\n "hungarumlaut",\n "ogonek",\n "caron",\n]\n
.venv\Lib\site-packages\fontTools\encodings\MacRoman.py
MacRoman.py
Python
3,834
0.7
0
0
react-lib
463
2025-02-07T15:02:50.103076
MIT
false
4ed073560a6c0653fea4a0761d2d7d8b
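The list in the record above is a straight index-to-glyph-name table for the Mac Roman code page, so lookups are plain list indexing. A small sketch:

    from fontTools.encodings.MacRoman import MacRoman

    print(MacRoman[0x41])  # 'A'     -- the ASCII range maps through unchanged
    print(MacRoman[0xF0])  # 'apple' -- the Apple-logo slot specific to Mac Roman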
StandardEncoding = [\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n "space",\n "exclam",\n "quotedbl",\n "numbersign",\n "dollar",\n "percent",\n "ampersand",\n "quoteright",\n "parenleft",\n "parenright",\n "asterisk",\n "plus",\n "comma",\n "hyphen",\n "period",\n "slash",\n "zero",\n "one",\n "two",\n "three",\n "four",\n "five",\n "six",\n "seven",\n "eight",\n "nine",\n "colon",\n "semicolon",\n "less",\n "equal",\n "greater",\n "question",\n "at",\n "A",\n "B",\n "C",\n "D",\n "E",\n "F",\n "G",\n "H",\n "I",\n "J",\n "K",\n "L",\n "M",\n "N",\n "O",\n "P",\n "Q",\n "R",\n "S",\n "T",\n "U",\n "V",\n "W",\n "X",\n "Y",\n "Z",\n "bracketleft",\n "backslash",\n "bracketright",\n "asciicircum",\n "underscore",\n "quoteleft",\n "a",\n "b",\n "c",\n "d",\n "e",\n "f",\n "g",\n "h",\n "i",\n "j",\n "k",\n "l",\n "m",\n "n",\n "o",\n "p",\n "q",\n "r",\n "s",\n "t",\n "u",\n "v",\n "w",\n "x",\n "y",\n "z",\n "braceleft",\n "bar",\n "braceright",\n "asciitilde",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n "exclamdown",\n "cent",\n "sterling",\n "fraction",\n "yen",\n "florin",\n "section",\n "currency",\n "quotesingle",\n "quotedblleft",\n "guillemotleft",\n "guilsinglleft",\n "guilsinglright",\n "fi",\n "fl",\n ".notdef",\n "endash",\n "dagger",\n "daggerdbl",\n "periodcentered",\n ".notdef",\n "paragraph",\n "bullet",\n "quotesinglbase",\n "quotedblbase",\n "quotedblright",\n "guillemotright",\n "ellipsis",\n "perthousand",\n ".notdef",\n "questiondown",\n ".notdef",\n "grave",\n "acute",\n "circumflex",\n "tilde",\n "macron",\n "breve",\n "dotaccent",\n "dieresis",\n ".notdef",\n "ring",\n "cedilla",\n ".notdef",\n "hungarumlaut",\n "ogonek",\n "caron",\n "emdash",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n "AE",\n ".notdef",\n "ordfeminine",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n "Lslash",\n "Oslash",\n "OE",\n "ordmasculine",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n "ae",\n ".notdef",\n ".notdef",\n ".notdef",\n "dotlessi",\n ".notdef",\n ".notdef",\n "lslash",\n "oslash",\n "oe",\n "germandbls",\n ".notdef",\n ".notdef",\n ".notdef",\n ".notdef",\n]\n
.venv\Lib\site-packages\fontTools\encodings\StandardEncoding.py
StandardEncoding.py
Python
3,839
0.7
0
0
awesome-app
186
2024-06-16T01:14:54.319701
GPL-3.0
false
db7fd770b7debe2fa2acbbdfef048a86
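Same shape as MacRoman above: Adobe's StandardEncoding as a 256-entry list of glyph names, with unassigned code points filled by ".notdef". A small sketch:

    from fontTools.encodings.StandardEncoding import StandardEncoding

    print(StandardEncoding[0x20])  # 'space'
    print(StandardEncoding[0x27])  # 'quoteright' -- StandardEncoding diverges from ASCII here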
"""Empty __init__.py file to signal Python this directory is a package."""\n
.venv\Lib\site-packages\fontTools\encodings\__init__.py
__init__.py
Python
76
0.5
0
0
vue-tools
324
2024-08-06T11:17:19.507205
GPL-3.0
false
6d412be7408e8f32685229b58fb23583
\n\n
.venv\Lib\site-packages\fontTools\encodings\__pycache__\codecs.cpython-313.pyc
codecs.cpython-313.pyc
Other
5,842
0.95
0.019608
0
react-lib
290
2024-05-31T21:08:33.931918
BSD-3-Clause
false
be3f3e1e50ca53ced4de601864a41fa1
\n\n
.venv\Lib\site-packages\fontTools\encodings\__pycache__\MacRoman.cpython-313.pyc
MacRoman.cpython-313.pyc
Other
2,247
0.7
0
0
react-lib
286
2024-11-28T07:00:44.318905
BSD-3-Clause
false
e72ce105bccf57481c668f17cdf4c55b
\n\n
.venv\Lib\site-packages\fontTools\encodings\__pycache__\StandardEncoding.cpython-313.pyc
StandardEncoding.cpython-313.pyc
Other
1,837
0.7
0
0
python-kit
324
2025-04-05T13:24:05.917648
Apache-2.0
false
030568884286e3beb371842acc33a5bb
\n\n
.venv\Lib\site-packages\fontTools\encodings\__pycache__\__init__.cpython-313.pyc
__init__.cpython-313.pyc
Other
277
0.7
0
0
react-lib
75
2024-09-05T08:16:37.541196
GPL-3.0
false
4a671bfe61a90d5b1a7f411c0e278e27
class FeatureLibError(Exception):\n def __init__(self, message, location=None):\n Exception.__init__(self, message)\n self.location = location\n\n def __str__(self):\n message = Exception.__str__(self)\n if self.location:\n return f"{self.location}: {message}"\n else:\n return message\n\n\nclass IncludedFeaNotFound(FeatureLibError):\n def __str__(self):\n assert self.location is not None\n\n message = (\n "The following feature file should be included but cannot be found: "\n f"{Exception.__str__(self)}"\n )\n return f"{self.location}: {message}"\n
.venv\Lib\site-packages\fontTools\feaLib\error.py
error.py
Python
670
0.85
0.272727
0
python-kit
832
2024-03-02T06:55:53.654110
GPL-3.0
false
2e1701be1b4de4cfa6b9627455e2a83e
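A small sketch of the location-prefixed formatting in the record above; FeatureLibLocation is the NamedTuple from fontTools.feaLib.location (its record appears further below), and the message and file name here are invented:

    from fontTools.feaLib.error import FeatureLibError
    from fontTools.feaLib.location import FeatureLibLocation

    err = FeatureLibError("unexpected token", FeatureLibLocation("test.fea", 3, 7))
    print(err)  # test.fea:3:7: unexpected token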
MZ
.venv\Lib\site-packages\fontTools\feaLib\lexer.cp313-win_amd64.pyd
lexer.cp313-win_amd64.pyd
Other
118,272
0.75
0.014317
0.012291
python-kit
547
2024-04-05T18:18:41.228670
Apache-2.0
false
414aef07671d3cef7e32a9d47702f151
from fontTools.feaLib.error import FeatureLibError, IncludedFeaNotFound\nfrom fontTools.feaLib.location import FeatureLibLocation\nimport re\nimport os\n\ntry:\n import cython\nexcept ImportError:\n # if cython not installed, use mock module with no-op decorators and types\n from fontTools.misc import cython\n\n\nclass Lexer(object):\n NUMBER = "NUMBER"\n HEXADECIMAL = "HEXADECIMAL"\n OCTAL = "OCTAL"\n NUMBERS = (NUMBER, HEXADECIMAL, OCTAL)\n FLOAT = "FLOAT"\n STRING = "STRING"\n NAME = "NAME"\n FILENAME = "FILENAME"\n GLYPHCLASS = "GLYPHCLASS"\n CID = "CID"\n SYMBOL = "SYMBOL"\n COMMENT = "COMMENT"\n NEWLINE = "NEWLINE"\n ANONYMOUS_BLOCK = "ANONYMOUS_BLOCK"\n\n CHAR_WHITESPACE_ = " \t"\n CHAR_NEWLINE_ = "\r\n"\n CHAR_SYMBOL_ = ",;:-+'{}[]<>()="\n CHAR_DIGIT_ = "0123456789"\n CHAR_HEXDIGIT_ = "0123456789ABCDEFabcdef"\n CHAR_LETTER_ = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"\n CHAR_NAME_START_ = CHAR_LETTER_ + "_+*:.^~!\\"\n CHAR_NAME_CONTINUATION_ = CHAR_LETTER_ + CHAR_DIGIT_ + "_.+*:^~!/-"\n\n RE_GLYPHCLASS = re.compile(r"^[A-Za-z_0-9.\-]+$")\n\n MODE_NORMAL_ = "NORMAL"\n MODE_FILENAME_ = "FILENAME"\n\n def __init__(self, text, filename):\n self.filename_ = filename\n self.line_ = 1\n self.pos_ = 0\n self.line_start_ = 0\n self.text_ = text\n self.text_length_ = len(text)\n self.mode_ = Lexer.MODE_NORMAL_\n\n def __iter__(self):\n return self\n\n def next(self): # Python 2\n return self.__next__()\n\n def __next__(self): # Python 3\n while True:\n token_type, token, location = self.next_()\n if token_type != Lexer.NEWLINE:\n return (token_type, token, location)\n\n def location_(self):\n column = self.pos_ - self.line_start_ + 1\n return FeatureLibLocation(self.filename_ or "<features>", self.line_, column)\n\n def next_(self):\n self.scan_over_(Lexer.CHAR_WHITESPACE_)\n location = self.location_()\n start = self.pos_\n text = self.text_\n limit = len(text)\n if start >= limit:\n raise StopIteration()\n cur_char = text[start]\n next_char = text[start + 1] if start + 1 < limit else None\n\n if cur_char == "\n":\n self.pos_ += 1\n self.line_ += 1\n self.line_start_ = self.pos_\n return (Lexer.NEWLINE, None, location)\n if cur_char == "\r":\n self.pos_ += 2 if next_char == "\n" else 1\n self.line_ += 1\n self.line_start_ = self.pos_\n return (Lexer.NEWLINE, None, location)\n if cur_char == "#":\n self.scan_until_(Lexer.CHAR_NEWLINE_)\n return (Lexer.COMMENT, text[start : self.pos_], location)\n\n if self.mode_ is Lexer.MODE_FILENAME_:\n if cur_char != "(":\n raise FeatureLibError("Expected '(' before file name", location)\n self.scan_until_(")")\n cur_char = text[self.pos_] if self.pos_ < limit else None\n if cur_char != ")":\n raise FeatureLibError("Expected ')' after file name", location)\n self.pos_ += 1\n self.mode_ = Lexer.MODE_NORMAL_\n return (Lexer.FILENAME, text[start + 1 : self.pos_ - 1], location)\n\n if cur_char == "\\" and next_char in Lexer.CHAR_DIGIT_:\n self.pos_ += 1\n self.scan_over_(Lexer.CHAR_DIGIT_)\n return (Lexer.CID, int(text[start + 1 : self.pos_], 10), location)\n if cur_char == "@":\n self.pos_ += 1\n self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_)\n glyphclass = text[start + 1 : self.pos_]\n if len(glyphclass) < 1:\n raise FeatureLibError("Expected glyph class name", location)\n if not Lexer.RE_GLYPHCLASS.match(glyphclass):\n raise FeatureLibError(\n "Glyph class names must consist of letters, digits, "\n "underscore, period or hyphen",\n location,\n )\n return (Lexer.GLYPHCLASS, glyphclass, location)\n if cur_char in Lexer.CHAR_NAME_START_:\n self.pos_ += 1\n self.scan_over_(Lexer.CHAR_NAME_CONTINUATION_)\n token = text[start : self.pos_]\n if token == "include":\n self.mode_ = Lexer.MODE_FILENAME_\n return (Lexer.NAME, token, location)\n if cur_char == "0" and next_char in "xX":\n self.pos_ += 2\n self.scan_over_(Lexer.CHAR_HEXDIGIT_)\n return (Lexer.HEXADECIMAL, int(text[start : self.pos_], 16), location)\n if cur_char == "0" and next_char in Lexer.CHAR_DIGIT_:\n self.scan_over_(Lexer.CHAR_DIGIT_)\n return (Lexer.OCTAL, int(text[start : self.pos_], 8), location)\n if cur_char in Lexer.CHAR_DIGIT_:\n self.scan_over_(Lexer.CHAR_DIGIT_)\n if self.pos_ >= limit or text[self.pos_] != ".":\n return (Lexer.NUMBER, int(text[start : self.pos_], 10), location)\n self.scan_over_(".")\n self.scan_over_(Lexer.CHAR_DIGIT_)\n return (Lexer.FLOAT, float(text[start : self.pos_]), location)\n if cur_char == "-" and next_char in Lexer.CHAR_DIGIT_:\n self.pos_ += 1\n self.scan_over_(Lexer.CHAR_DIGIT_)\n if self.pos_ >= limit or text[self.pos_] != ".":\n return (Lexer.NUMBER, int(text[start : self.pos_], 10), location)\n self.scan_over_(".")\n self.scan_over_(Lexer.CHAR_DIGIT_)\n return (Lexer.FLOAT, float(text[start : self.pos_]), location)\n if cur_char in Lexer.CHAR_SYMBOL_:\n self.pos_ += 1\n return (Lexer.SYMBOL, cur_char, location)\n if cur_char == '"':\n self.pos_ += 1\n self.scan_until_('"')\n if self.pos_ < self.text_length_ and self.text_[self.pos_] == '"':\n self.pos_ += 1\n # strip newlines embedded within a string\n string = re.sub("[\r\n]", "", text[start + 1 : self.pos_ - 1])\n return (Lexer.STRING, string, location)\n else:\n raise FeatureLibError("Expected '\"' to terminate string", location)\n raise FeatureLibError("Unexpected character: %r" % cur_char, location)\n\n def scan_over_(self, valid):\n p = self.pos_\n while p < self.text_length_ and self.text_[p] in valid:\n p += 1\n self.pos_ = p\n\n def scan_until_(self, stop_at):\n p = self.pos_\n while p < self.text_length_ and self.text_[p] not in stop_at:\n p += 1\n self.pos_ = p\n\n def scan_anonymous_block(self, tag):\n location = self.location_()\n tag = tag.strip()\n self.scan_until_(Lexer.CHAR_NEWLINE_)\n self.scan_over_(Lexer.CHAR_NEWLINE_)\n regexp = r"}\s*" + tag + r"\s*;"\n split = re.split(regexp, self.text_[self.pos_ :], maxsplit=1)\n if len(split) != 2:\n raise FeatureLibError(\n "Expected '} %s;' to terminate anonymous block" % tag, location\n )\n self.pos_ += len(split[0])\n return (Lexer.ANONYMOUS_BLOCK, split[0], location)\n\n\nclass IncludingLexer(object):\n """A Lexer that follows include statements.\n\n The OpenType feature file specification states that due to\n historical reasons, relative imports should be resolved in this\n order:\n\n 1. If the source font is UFO format, then relative to the UFO's\n font directory\n 2. relative to the top-level include file\n 3. relative to the parent include file\n\n We only support 1 (via includeDir) and 2.\n """\n\n def __init__(self, featurefile, *, includeDir=None):\n """Initializes an IncludingLexer.\n\n Behavior:\n If includeDir is passed, it will be used to determine the top-level\n include directory to use for all encountered include statements. If it is\n not passed, ``os.path.dirname(featurefile)`` will be considered the\n include directory.\n """\n\n self.lexers_ = [self.make_lexer_(featurefile)]\n self.featurefilepath = self.lexers_[0].filename_\n self.includeDir = includeDir\n\n def __iter__(self):\n return self\n\n def next(self): # Python 2\n return self.__next__()\n\n def __next__(self): # Python 3\n while self.lexers_:\n lexer = self.lexers_[-1]\n try:\n token_type, token, location = next(lexer)\n except StopIteration:\n self.lexers_.pop()\n continue\n if token_type is Lexer.NAME and token == "include":\n fname_type, fname_token, fname_location = lexer.next()\n if fname_type is not Lexer.FILENAME:\n raise FeatureLibError("Expected file name", fname_location)\n # semi_type, semi_token, semi_location = lexer.next()\n # if semi_type is not Lexer.SYMBOL or semi_token != ";":\n # raise FeatureLibError("Expected ';'", semi_location)\n if os.path.isabs(fname_token):\n path = fname_token\n else:\n if self.includeDir is not None:\n curpath = self.includeDir\n elif self.featurefilepath is not None:\n curpath = os.path.dirname(self.featurefilepath)\n else:\n # if the IncludingLexer was initialized from an in-memory\n # file-like stream, it doesn't have a 'name' pointing to\n # its filesystem path, therefore we fall back to using the\n # current working directory to resolve relative includes\n curpath = os.getcwd()\n path = os.path.join(curpath, fname_token)\n if len(self.lexers_) >= 5:\n raise FeatureLibError("Too many recursive includes", fname_location)\n try:\n self.lexers_.append(self.make_lexer_(path))\n except FileNotFoundError as err:\n raise IncludedFeaNotFound(fname_token, fname_location) from err\n else:\n return (token_type, token, location)\n raise StopIteration()\n\n @staticmethod\n def make_lexer_(file_or_path):\n if hasattr(file_or_path, "read"):\n fileobj, closing = file_or_path, False\n else:\n filename, closing = file_or_path, True\n fileobj = open(filename, "r", encoding="utf-8-sig")\n data = fileobj.read()\n filename = getattr(fileobj, "name", None)\n if closing:\n fileobj.close()\n return Lexer(data, filename)\n\n def scan_anonymous_block(self, tag):\n return self.lexers_[-1].scan_anonymous_block(tag)\n\n\nclass NonIncludingLexer(IncludingLexer):\n """Lexer that does not follow `include` statements, emits them as-is."""\n\n def __next__(self): # Python 3\n return next(self.lexers_[0])\n
.venv\Lib\site-packages\fontTools\feaLib\lexer.py
lexer.py
Python
11,408
0.95
0.229965
0.035573
awesome-app
796
2024-07-27T16:21:15.279843
BSD-3-Clause
false
89638bad677e0c287779a876597ec728
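A minimal tokenizing sketch for the Lexer in the record above; the snippet and filename are made up for illustration. NEWLINE tokens are filtered out by __next__, so iteration yields only substantive tokens:

    from fontTools.feaLib.lexer import Lexer

    for token_type, token, location in Lexer("sub f i by f_i;", "<inline>"):
        print(token_type, repr(token), location)
    # NAME 'sub' <inline>:1:1
    # NAME 'f' <inline>:1:5
    # NAME 'i' <inline>:1:7
    # NAME 'by' <inline>:1:9
    # NAME 'f_i' <inline>:1:12
    # SYMBOL ';' <inline>:1:15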
from typing import NamedTuple\n\n\nclass FeatureLibLocation(NamedTuple):\n """A location in a feature file"""\n\n file: str\n line: int\n column: int\n\n def __str__(self):\n return f"{self.file}:{self.line}:{self.column}"\n
.venv\Lib\site-packages\fontTools\feaLib\location.py
location.py
Python
246
0.85
0.166667
0
react-lib
389
2025-04-25T19:06:52.979036
Apache-2.0
false
071d3ddd88fb3e6e1638b177110fa313
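Because FeatureLibLocation is a NamedTuple, it unpacks like a plain tuple, while its __str__ gives the file:line:column form used in error messages; a tiny sketch with a made-up file name:

    from fontTools.feaLib.location import FeatureLibLocation

    loc = FeatureLibLocation("layout.fea", 12, 5)
    file, line, column = loc  # plain tuple unpacking
    print(loc)                # layout.fea:12:5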
from typing import NamedTuple\n\nLOOKUP_DEBUG_INFO_KEY = "com.github.fonttools.feaLib"\nLOOKUP_DEBUG_ENV_VAR = "FONTTOOLS_LOOKUP_DEBUGGING"\n\n\nclass LookupDebugInfo(NamedTuple):\n """Information about where a lookup came from, to be embedded in a font"""\n\n location: str\n name: str\n feature: list\n
.venv\Lib\site-packages\fontTools\feaLib\lookupDebugInfo.py
lookupDebugInfo.py
Python
316
0.85
0.083333
0
react-lib
291
2023-09-12T06:32:22.165164
MIT
false
8ead058843fe6ce0c5c5ef7be8b02c21
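A sketch constructing the NamedTuple from the record above directly; how feaLib serializes it into a compiled font (under LOOKUP_DEBUG_INFO_KEY, gated by the FONTTOOLS_LOOKUP_DEBUGGING environment variable) is outside this sketch, and the field values here are invented:

    from fontTools.feaLib.lookupDebugInfo import LOOKUP_DEBUG_INFO_KEY, LookupDebugInfo

    info = LookupDebugInfo(location="test.fea:10:5", name="lookup_3", feature=["GSUB", "liga", "latn/dflt"])
    print(LOOKUP_DEBUG_INFO_KEY, info)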
from fontTools.varLib.models import VariationModel, normalizeValue, piecewiseLinearMap\n\n\ndef Location(loc):\n return tuple(sorted(loc.items()))\n\n\nclass VariableScalar:\n """A scalar with different values at different points in the designspace."""\n\n def __init__(self, location_value={}):\n self.values = {}\n self.axes = {}\n for location, value in location_value.items():\n self.add_value(location, value)\n\n def __repr__(self):\n items = []\n for location, value in self.values.items():\n loc = ",".join(["%s=%i" % (ax, loc) for ax, loc in location])\n items.append("%s:%i" % (loc, value))\n return "(" + (" ".join(items)) + ")"\n\n @property\n def does_vary(self):\n values = list(self.values.values())\n return any(v != values[0] for v in values[1:])\n\n @property\n def axes_dict(self):\n if not self.axes:\n raise ValueError(\n ".axes must be defined on variable scalar before interpolating"\n )\n return {ax.axisTag: ax for ax in self.axes}\n\n def _normalized_location(self, location):\n location = self.fix_location(location)\n normalized_location = {}\n for axtag in location.keys():\n if axtag not in self.axes_dict:\n raise ValueError("Unknown axis %s in %s" % (axtag, location))\n axis = self.axes_dict[axtag]\n normalized_location[axtag] = normalizeValue(\n location[axtag], (axis.minValue, axis.defaultValue, axis.maxValue)\n )\n\n return Location(normalized_location)\n\n def fix_location(self, location):\n location = dict(location)\n for tag, axis in self.axes_dict.items():\n if tag not in location:\n location[tag] = axis.defaultValue\n return location\n\n def add_value(self, location, value):\n if self.axes:\n location = self.fix_location(location)\n\n self.values[Location(location)] = value\n\n def fix_all_locations(self):\n self.values = {\n Location(self.fix_location(l)): v for l, v in self.values.items()\n }\n\n @property\n def default(self):\n self.fix_all_locations()\n key = Location({ax.axisTag: ax.defaultValue for ax in self.axes})\n if key not in self.values:\n raise ValueError("Default value could not be found")\n # I *guess* we could interpolate one, but I don't know how.\n return self.values[key]\n\n def value_at_location(self, location, model_cache=None, avar=None):\n loc = Location(location)\n if loc in self.values.keys():\n return self.values[loc]\n values = list(self.values.values())\n loc = dict(self._normalized_location(loc))\n return self.model(model_cache, avar).interpolateFromMasters(loc, values)\n\n def model(self, model_cache=None, avar=None):\n if model_cache is not None:\n key = tuple(self.values.keys())\n if key in model_cache:\n return model_cache[key]\n locations = [dict(self._normalized_location(k)) for k in self.values.keys()]\n if avar is not None:\n mapping = avar.segments\n locations = [\n {\n k: piecewiseLinearMap(v, mapping[k]) if k in mapping else v\n for k, v in location.items()\n }\n for location in locations\n ]\n m = VariationModel(locations)\n if model_cache is not None:\n model_cache[key] = m\n return m\n\n def get_deltas_and_supports(self, model_cache=None, avar=None):\n values = list(self.values.values())\n return self.model(model_cache, avar).getDeltasAndSupports(values)\n\n def add_to_variation_store(self, store_builder, model_cache=None, avar=None):\n deltas, supports = self.get_deltas_and_supports(model_cache, avar)\n store_builder.setSupports(supports)\n index = store_builder.storeDeltas(deltas)\n return int(self.default), index\n
.venv\Lib\site-packages\fontTools\feaLib\variableScalar.py
variableScalar.py
Python
4,182
0.95
0.336283
0.010638
awesome-app
576
2024-05-17T17:20:21.998990
GPL-3.0
false
b3a56db78e3be6a0278d2341fc43ea7b
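A minimal sketch of VariableScalar from the record above: does_vary and the repr need no axis metadata, whereas value_at_location and default require .axes to be populated with the designspace axes first:

    from fontTools.feaLib.variableScalar import VariableScalar

    vs = VariableScalar()
    vs.add_value({"wght": 400}, 10)
    vs.add_value({"wght": 700}, 20)
    print(vs.does_vary)  # True
    print(vs)            # (wght=400:10 wght=700:20)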
"""fontTools.feaLib -- a package for dealing with OpenType feature files."""\n\n# The structure of OpenType feature files is defined here:\n# http://www.adobe.com/devnet/opentype/afdko/topic_feature_file_syntax.html\n
.venv\Lib\site-packages\fontTools\feaLib\__init__.py
__init__.py
Python
217
0.8
0.25
0.666667
react-lib
901
2025-04-23T15:14:52.076426
BSD-3-Clause
false
9f6743934160552f78b76e92b3701742