id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
842556969
|
Wrongly Written formula of CSA and TSA of hollow sphere
I would like to correct the formula. It should not be the internal surface area or the external surface area;
in that column the corresponding formulas for CSA and TSA should be written.
@sairish2001 Please assign it to me. I would like to work on it.
Go ahead
Deadline: 30 March 2021 (11:59 PM IST)
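For reference, a sketch of one common textbook convention, assuming R is the external radius and r the internal radius of the hollow sphere (the issue does not spell the formulas out, so both the symbols and the convention are assumptions):

$$\text{External surface area} = 4\pi R^2, \qquad \text{Internal surface area} = 4\pi r^2, \qquad \text{TSA} = 4\pi\left(R^2 + r^2\right)$$

with CSA usually taken to mean the outer curved surface, $4\pi R^2$. Conventions differ between textbooks, so the site's source material should decide which forms belong in the column.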
|
gharchive/issue
| 2021-03-27T16:58:16 |
2025-04-01T06:40:18.973654
|
{
"authors": [
"bitaashna",
"sairish2001"
],
"repo": "sairish2001/makesmatheasy",
"url": "https://github.com/sairish2001/makesmatheasy/issues/567",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2390422066
|
Documentation URL
https://strangedocs.hostz.me/ is not loading for the Documentation
Yes, the CNAME User is cross banned... but I don't know what that means. Can someone fix that?
@saiteja-madha I guess you misconfigured the Cloudflare settings, please fix it.
Done. It should be up now 🆙
|
gharchive/issue
| 2024-07-04T09:38:51 |
2025-04-01T06:40:18.975199
|
{
"authors": [
"JoshuaBilboe96",
"ZwergTuete",
"chethanyadav456",
"saiteja-madha"
],
"repo": "saiteja-madha/discord-js-bot",
"url": "https://github.com/saiteja-madha/discord-js-bot/issues/528",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1736235797
|
S2U-24 Tests & Quizzes: Event Log - Several fixes
Fixes:
- Only English bundle being used for export
- Spanish bundle is Catalan, Catalan bundle is English
- UTF-8 encoding not specified in HTTP header for export
- Errors header not present in export
- IP Address column not present in export
- Datatables column configuration for IP Address column not defined
https://sakaiproject.atlassian.net/browse/S2U-24
Nice and clean!
|
gharchive/pull-request
| 2023-06-01T12:36:57 |
2025-04-01T06:40:18.978758
|
{
"authors": [
"mpellicer",
"stetsche"
],
"repo": "sakaiproject/sakai",
"url": "https://github.com/sakaiproject/sakai/pull/11628",
"license": "ECL-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2522271467
|
SAK-50134 Portal theme property for auto detecting dark mode should match documentation
…tches with documentation
Do I understand you correctly that you would rather change the comment in sakai.properties to:
Enable/disable the OS dark theme auto-detect mode
DEFAULT: true
Yes, many applications these days follow the user's OS setting when it comes to dark mode.
|
gharchive/pull-request
| 2024-09-12T12:32:46 |
2025-04-01T06:40:18.980696
|
{
"authors": [
"ern",
"jkjanetschek"
],
"repo": "sakaiproject/sakai",
"url": "https://github.com/sakaiproject/sakai/pull/12886",
"license": "ECL-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1170694994
|
undefined reference to symbol '_ZN3MPI8Datatype4FreeEv' build error
I'm getting the following error when I run "make" at the command line.
Do you have any suggestions for that?
/usr/bin/ld: CMakeFiles/FpsCpu.dir/src/hdf5.cpp.o: undefined reference to symbol '_ZN3MPI8Datatype4FreeEv'
//usr/lib/x86_64-linux-gnu/libmpi_cxx.so.20: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
CMakeFiles/FpsCpu.dir/build.make:409: recipe for target 'FpsCpu' failed
make[2]: *** [FpsCpu] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/FpsCpu.dir/all' failed
make[1]: *** [CMakeFiles/FpsCpu.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Hi sefa,
Could you please include the details of your environment?
(Distro, plus version numbers of G++, CMake, and HDF5)
Does the cmake script execute without any warnings or issues?
I am closing this for now; please feel free to re-open it if the issue still persists.
|
gharchive/issue
| 2022-03-16T08:28:58 |
2025-04-01T06:40:19.007727
|
{
"authors": [
"salehjg",
"sefakurtipek"
],
"repo": "salehjg/MeshToPointcloudFPS",
"url": "https://github.com/salehjg/MeshToPointcloudFPS/issues/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1667483857
|
How to Finetune BLIP2 Captioning Model on custom dataset?
I'm looking for any resource that will get me the closest to trying this. I've had poor luck finding a notebook and/or tutorial on how to finetune the BLIP2 captioning model, so I thought this was the place to ask.
me too
Hello,
I am currently working on a project that requires fine-tuning BLIP2 image captioning with a custom dataset. Based on my interpretation of the documentation, the process involves modifying the captation_builder.py and coco_captation_dataset.py files to include any special conditions for the new dataset. Following this, we can add different transformations to the vis_processor, especially the blip_processor. Finally, we need to adapt our dataset to conform to the standard COCO caption format, or at least that is my current understanding.
Since I just started reading and understanding the documentation for this custom training project, I would appreciate it if a LAVIS developer could confirm or correct any mistakes in my assumptions. Thank you.
@GMBarra Your understanding is correct. Either you can adapt your dataset format, or you can create a new dataset builder and dataset class; do whichever fits best.
Does finetuning BLIP-2 require the frozen vision model and language model to be on the same GPU as the Q-former?
Hello,
(Quoting the earlier comment in translation:) I am currently working on a project that requires fine-tuning BLIP2 image captioning with a custom dataset. Based on my interpretation of the documentation, the process involves modifying the captation_builder.py and coco_captation_dataset.py files to include any special conditions for the new dataset. Following this, we can add different transformations to the vis_processor, especially the blip_processor. Finally, we need to adapt our dataset to conform to the standard COCO caption format, or at least that is my current understanding.
Since I just started reading and understanding the documentation for this custom training project, I would appreciate it if a LAVIS developer could confirm or correct any mistakes in my assumptions. Thank you.
Hello!
May I ask how I can also use BLIP2 for captioning? How can I achieve this? Later on I will also need to fine-tune it on my own dataset. How can we achieve that?
I need your help, thank you!
(Quoting the original post in translation:) I'm looking for any resource that will get me the closest to trying this. I've had poor luck finding a notebook and/or tutorial on how to fine-tune the BLIP2 captioning model, so I thought this was the place to ask.
Did you succeed in fine-tuning BLIP2? I hope to get some help from you.
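To make the adaptation step concrete, here is a minimal, hypothetical sketch (not LAVIS code; the input shape and key names are assumptions based on the common COCO-caption annotation style) of converting (image, caption) pairs into a COCO-caption-style JSON file:

import json

def to_coco_caption_format(samples, out_path):
    # samples: iterable of (image_filename, caption) pairs -- a hypothetical
    # input shape for this sketch, not a LAVIS interface. Each record follows
    # the flat COCO-caption-style annotation list that caption datasets
    # commonly read.
    annotations = [
        {
            "image": image_name,   # path relative to the image root
            "caption": caption,    # ground-truth caption string
            "image_id": str(idx),  # unique id per sample
        }
        for idx, (image_name, caption) in enumerate(samples)
    ]
    with open(out_path, "w") as f:
        json.dump(annotations, f)

to_coco_caption_format(
    [("imgs/001.jpg", "a cat on a sofa"), ("imgs/002.jpg", "a red bicycle")],
    "train_coco_format.json",
)

Check the dataset classes under lavis/datasets for the exact fields your builder actually reads before relying on these key names.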
|
gharchive/issue
| 2023-04-14T04:17:06 |
2025-04-01T06:40:19.055178
|
{
"authors": [
"ECE-Engineer",
"GMBarra",
"JackCai1206",
"dreamlychina",
"dxli94",
"shams2023"
],
"repo": "salesforce/LAVIS",
"url": "https://github.com/salesforce/LAVIS/issues/256",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
948605202
|
Vertical Navigation: Missing icon and notification
Currently the Vertical Navigation is missing the following from the SLDS design:
- Items with icons
- Items with notifications
I'm proposing adding two optional fields, similar to how Badge is done, in the Item object to define an Icon and a Badge for notifications. For example:
[
  {
    id: 'store',
    label: 'Store',
    items: [
      {
        id: 'credit_store',
        label: 'credit',
        url: 'store/credit',
        icon: <Icon category="utility" name="moneybag" size="xx-small" />,
        notification: <Badge id="badge-base-example" content="423 Credits Available" />,
      },
      { id: 'transactions_store', label: 'Transactions', url: 'store/transactions' },
    ],
  },
];
When I started to implement this, I realised there is some inconsistency in the way icons are handled across components.
For badges the prop is a component:
<Badge
  id="badge-base-example-light"
  color="light"
  content="423 Credits Available"
  icon={<Icon category="utility" name="moneybag" size="xx-small" />}
/>
and for buttons it's passed as the props of the icon with icon as a prefix:
<Button
  assistiveText={{ icon: 'Icon Bare Small' }}
  iconCategory="utility"
  iconName="settings"
  iconSize="small"
  iconVariant="bare"
  onClick={() => {
    console.log('Icon Bare Clicked');
  }}
  variant="icon"
/>
@interactivellama which of these approaches is the current preferred way?
Good question! In short, the Badge way is preferred since it limits "duplicate" props.
See https://github.com/salesforce/design-system-react/blob/master/docs/codebase-overview.md#reuse-existing-component-props-by-using-component-no-button-iconclassname-
Also, thanks for wanting to contribute. I like the proposal. What do you think about notificationBadge instead of notification so that consumers already know what to pass in?
Thanks for the feedback. Yes, notificationBadge makes more sense, since it makes clear what it does.
I've opened an MR for this change. I struggled to get the typings correct and ended up using any, which isn't ideal. If you have a suggestion for getting that right, I'll change it.
@interactivellama are there any further changes needed on the PR?
@gnzzz Thanks for the ping! PR is merged now.
|
gharchive/issue
| 2021-07-20T12:28:22 |
2025-04-01T06:40:19.061976
|
{
"authors": [
"gnzzz",
"interactivellama"
],
"repo": "salesforce/design-system-react",
"url": "https://github.com/salesforce/design-system-react/issues/2934",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1294895567
|
Do context switching for only root POs
This is applicable only to mobile webviews. As per the current implementation, the context is switched to the webview implicitly for all web page objects when they are bootstrapped. This involves multiple commands to get the available contexts, switch to the webview context, and validate the title.
For child page objects, since the context is already set via the root, this is an overhead. This change adds a root PO check, thereby avoiding the repeated context switch and improving the speed of mobile tests involving webviews.
@lizaiv77 @helenren Please review
@rajukamal is the same change coming in JS? Please submit a parallel PR and reference it here.
@rajukamal @helenren let's wait for the JS PR, then I'll approve this one.
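As an illustration of the idea only (this is a Python sketch of the concept, not utam-java code; every name below is hypothetical), the root-only check amounts to something like:

class Driver:
    def __init__(self):
        self.context = "NATIVE_APP"

    def switch_to_webview(self):
        # In a real mobile driver this costs several commands: list the
        # available contexts, switch to the webview context, validate the title.
        self.context = "WEBVIEW"

class PageObject:
    def __init__(self, driver, is_root):
        self.driver = driver
        self.is_root = is_root

    def bootstrap(self):
        # Only a root page object pays the context-switch cost; child page
        # objects reuse the context the root already set.
        if self.is_root and self.driver.context != "WEBVIEW":
            self.driver.switch_to_webview()

driver = Driver()
PageObject(driver, is_root=True).bootstrap()   # switches context once
PageObject(driver, is_root=False).bootstrap()  # no additional driver commands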
|
gharchive/pull-request
| 2022-07-05T23:29:10 |
2025-04-01T06:40:19.069318
|
{
"authors": [
"lizaiv77",
"rajukamal"
],
"repo": "salesforce/utam-java",
"url": "https://github.com/salesforce/utam-java/pull/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2505755856
|
feat: multi-stage output
What does this PR do?
Use the new @oclif/multi-stage-output for:
- project delete source
- project deploy start
- project deploy resume
- project deploy validate
- project deploy report
- project retrieve start
What issues does this PR fix or reference?
[skip-validate-pr]
QA Notes
✅ : quickly played with various options, prior testing already done by Vivek & co
@WillieRuemmele
Is there any way to suppress such verbose output stats, maybe by providing some option?
When we run metadata validation or deploys in our CI/CD system, it creates a tremendously long job-execution log with ~200,000 stats records in it.
It is really hard to open in a browser: it takes a lot of time to load, and then you have to scroll through zillions of stats records to inspect the results.
Am I just missing the point of this feature? I just want my concise logs back.
@avesolovksyy - please post your findings and opinions here
|
gharchive/pull-request
| 2024-09-04T15:57:12 |
2025-04-01T06:40:19.073754
|
{
"authors": [
"WillieRuemmele",
"avesolovksyy",
"mdonnalley"
],
"repo": "salesforcecli/plugin-deploy-retrieve",
"url": "https://github.com/salesforcecli/plugin-deploy-retrieve/pull/1155",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
919517133
|
Please show the method name and the signature name in the method call hint
When a method call is nested inside another method call expression, it can sometimes be hard to tell which method the call hint refers to. So please show the method name in this hint, and place it first.
A method can also have several signatures, and when switching between them it would be good to see the name of the current signature. For example, for the constructor of the "Структура" (Structure) object they are named:
На основании фиксированной структуры (Based on a fixed structure)
По ключам и значениям (By keys and values)
For now I have wired up displaying the method name and signature name through the documentation property. The drawback is that if a parameter description is long enough, the method and signature names passed this way are not visible, because they end up at the very bottom of the hint text.
The hint header is a plain string. I know of no way to make it even multi-line so that the method name could be shown on the first line. I can only offer this option:
Yes, that is better. But not always. Sometimes the method name will be very long, and by and large it is only needed so that in complex expressions you can tell inside which of several method calls the caret is. So I propose:
1. Reduce the default font size of this hint and make the size configurable.
2. Truncate the displayed method name to 20 characters, including the "...", or allow the displayed name to be passed explicitly through the API.
Truncate the displayed method name to 20 characters, including the "...", or allow the displayed name to be passed explicitly through the API.
You already pass label to setCustomSignatures. Instead of this
{
"label": "(Включить [Булево])",
}
you can pass this
{
"label": "ОписаниеТипов(Включить [Булево])",
}
Reduce the default font size of this hint and make the size configurable.
There is no separate option for the font size in the method call hint window. Here you can only play with decorations.css.
On point 2 I agree: the label property is sufficient.
As for changing the font size, I have not yet managed to find the name of the style element. Also, the font used now is monospaced, which is clearly unnecessary for such a hint and therefore simply wasteful. A proportional font would be very desirable.
All my attempts to influence the font (size, family, and so on) through styles have ended in failure. For example, I tried this:
//РедакторHTML.setFontSize(13);
//РедакторHTML.setFontFamily("Lucida Console");
.editor-widget parameter-hints-widget {
  font-size: 10px;
}
Yet the font in this window stubbornly keeps its default settings.
I described how to change the font through styles here: #194
A problem has come up.
I will check.
Reproducing the problem:
setCustomSignatures(`{
  "ЗначениеВСтрокуВнутр": [
    {
      "label": "ЗначениеВСтрокуВнутр(Значение) [Строка]",
      "documentation": "Получает системное строковое представление переданного объекта",
      "parameters": [
        {
          "label": "Значение",
          "documentation": "Значение, представление которого необходимо получить"
        }
      ]
    }
  ]
}`)
The problem also occurs in the very latest version of monaco editor.
Moreover, it happens only with Cyrillic: English function and parameter names do not trigger this glitch.
The problem has been fixed.
|
gharchive/issue
| 2021-06-12T10:23:20 |
2025-04-01T06:40:19.084866
|
{
"authors": [
"salexdv",
"tormozit"
],
"repo": "salexdv/bsl_console",
"url": "https://github.com/salexdv/bsl_console/issues/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
988234703
|
It would be nice to have special highlighting for the name of a variable that is being assigned a value
ф = ф = 2;
VSCode and GitHub already do this for the 1C built-in language: https://github.com/1c-syntax/1c-syntax/issues/330#issuecomment-904436285
It would be nice to have the same here, just without straying too far from the base color.
Although I am not sure it would bring all that much benefit: assignment and equality comparison are rarely confused in the 1C built-in language. So if it is simple to do, it is probably worth doing; if it is hard, it is probably not worth it.
I thought about it some more and decided that the benefit would most likely be very small.
|
gharchive/issue
| 2021-09-04T07:36:11 |
2025-04-01T06:40:19.090345
|
{
"authors": [
"tormozit"
],
"repo": "salexdv/bsl_console",
"url": "https://github.com/salexdv/bsl_console/issues/230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
256718333
|
Added yarn.lock
Ran the dependency installation using $ yarn
I don't think we should add a yarn.lock.
This project does not use yarn, so we would need a new PR for the yarn.lock each time we change/update the dependencies.
Alright then. Closing this PR.
|
gharchive/pull-request
| 2017-09-11T14:15:44 |
2025-04-01T06:40:19.104854
|
{
"authors": [
"jhnferraris",
"pubkey"
],
"repo": "salomonelli/best-resume-ever",
"url": "https://github.com/salomonelli/best-resume-ever/pull/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2017137254
|
🛑 Blog is down
In cb6b5bb, Blog (https://saltbo.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Blog is back up in eef4f59 after 10 minutes.
|
gharchive/issue
| 2023-11-29T17:59:35 |
2025-04-01T06:40:19.109685
|
{
"authors": [
"saltbo"
],
"repo": "saltbo/status",
"url": "https://github.com/saltbo/status/issues/611",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1501044991
|
Tabulator is empty
Does not display the data list; the table is completely empty, even though the DB contains rows.
Hello! @glutamate, can you try this, please?
Hi @gianlucafiore, it definitely works sometimes; I have it working here on the latest version! So there must be something specific to your setup. Could you paste screenshots of your configuration and also check whether there is an entry in the crash log? You will get better error reporting if you turn off Progressive loading under the Content options.
|
gharchive/issue
| 2022-12-17T00:25:49 |
2025-04-01T06:40:19.114259
|
{
"authors": [
"gianlucafiore",
"glutamate"
],
"repo": "saltcorn/tabulator",
"url": "https://github.com/saltcorn/tabulator/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2336874851
|
Setup fails trying to build wheels for immutables multidict lxml
Hi friends!
Trying to kick the tires on some salt-extension development, and I'm at a roadblock that I can't seem to work out on my own.
I followed the Quickstart instructions from this repo, followed by the follow-up instructions from the output of create-salt-extension.
I get stuck at python -m pip install -e '.[dev,tests,docs]' with...
python -m pip install -e .[dev,tests,docs]
Obtaining file:///Users/shea/Developer/sq/saltext_humio
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Collecting salt>=3005 (from saltext.humio==0.1.dev0+d20240605)
Using cached salt-3007.1-py3-none-any.whl
Collecting nox (from saltext.humio==0.1.dev0+d20240605)
Using cached nox-2024.4.15-py3-none-any.whl.metadata (4.7 kB)
Collecting pre-commit>=2.4.0 (from saltext.humio==0.1.dev0+d20240605)
Using cached pre_commit-3.7.1-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting pylint (from saltext.humio==0.1.dev0+d20240605)
Using cached pylint-3.2.2-py3-none-any.whl.metadata (12 kB)
Collecting SaltPyLint (from saltext.humio==0.1.dev0+d20240605)
Using cached SaltPyLint-2024.2.5-py3-none-any.whl.metadata (1.4 kB)
Collecting sphinx (from saltext.humio==0.1.dev0+d20240605)
Using cached sphinx-7.3.7-py3-none-any.whl.metadata (6.0 kB)
Collecting sphinx-material-saltstack (from saltext.humio==0.1.dev0+d20240605)
Using cached sphinx_material_saltstack-1.0.5-py3-none-any.whl.metadata (6.8 kB)
Collecting sphinx-prompt (from saltext.humio==0.1.dev0+d20240605)
Using cached sphinx_prompt-1.8.0-py3-none-any.whl.metadata (3.1 kB)
Collecting sphinxcontrib-spelling (from saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_spelling-8.0.0-py3-none-any.whl.metadata (2.9 kB)
Collecting sphinx-copybutton (from saltext.humio==0.1.dev0+d20240605)
Using cached sphinx_copybutton-0.5.2-py3-none-any.whl.metadata (3.2 kB)
Collecting furo (from saltext.humio==0.1.dev0+d20240605)
Using cached furo-2024.5.6-py3-none-any.whl.metadata (5.9 kB)
Collecting pytest>=7.2.0 (from saltext.humio==0.1.dev0+d20240605)
Using cached pytest-8.2.2-py3-none-any.whl.metadata (7.6 kB)
Collecting pytest-salt-factories>=1.0.0rc28 (from saltext.humio==0.1.dev0+d20240605)
Using cached pytest_salt_factories-1.0.1-py3-none-any.whl.metadata (4.7 kB)
Collecting cfgv>=2.0.0 (from pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached cfgv-3.4.0-py2.py3-none-any.whl.metadata (8.5 kB)
Collecting identify>=1.0.0 (from pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached identify-2.5.36-py2.py3-none-any.whl.metadata (4.4 kB)
Collecting nodeenv>=0.11.1 (from pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached nodeenv-1.9.1-py2.py3-none-any.whl.metadata (21 kB)
Collecting pyyaml>=5.1 (from pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl.metadata (2.1 kB)
Collecting virtualenv>=20.10.0 (from pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached virtualenv-20.26.2-py3-none-any.whl.metadata (4.4 kB)
Collecting iniconfig (from pytest>=7.2.0->saltext.humio==0.1.dev0+d20240605)
Using cached iniconfig-2.0.0-py3-none-any.whl.metadata (2.6 kB)
Collecting packaging (from pytest>=7.2.0->saltext.humio==0.1.dev0+d20240605)
Using cached packaging-24.0-py3-none-any.whl.metadata (3.2 kB)
Collecting pluggy<2.0,>=1.5 (from pytest>=7.2.0->saltext.humio==0.1.dev0+d20240605)
Using cached pluggy-1.5.0-py3-none-any.whl.metadata (4.8 kB)
Collecting attrs>=19.2.0 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB)
Collecting pytest-helpers-namespace>=2021.4.29 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached pytest_helpers_namespace-2021.12.29-py3-none-any.whl.metadata (5.8 kB)
Collecting pytest-skip-markers>=1.1.3 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached pytest_skip_markers-1.5.1-py3-none-any.whl.metadata (5.1 kB)
Collecting pytest-system-statistics>=1.0.2 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached pytest_system_statistics-1.0.2-py3-none-any.whl.metadata (6.3 kB)
Collecting pytest-shell-utilities>=1.4.0 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached pytest_shell_utilities-1.9.0-py3-none-any.whl.metadata (5.9 kB)
Collecting psutil (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached psutil-5.9.8-cp38-abi3-macosx_11_0_arm64.whl.metadata (21 kB)
Collecting pyzmq (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached pyzmq-26.0.3-cp312-cp312-macosx_10_15_universal2.whl.metadata (6.1 kB)
Collecting msgpack>=0.5.2 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached msgpack-1.0.8-cp312-cp312-macosx_11_0_arm64.whl.metadata (9.1 kB)
Collecting aiohttp==3.9.5 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached aiohttp-3.9.5-cp312-cp312-macosx_11_0_arm64.whl.metadata (7.5 kB)
Collecting aiosignal==1.3.1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB)
Collecting annotated-types==0.6.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached annotated_types-0.6.0-py3-none-any.whl.metadata (12 kB)
Collecting autocommand==2.2.2 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached autocommand-2.2.2-py3-none-any.whl.metadata (15 kB)
Collecting certifi==2023.07.22 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached certifi-2023.7.22-py3-none-any.whl.metadata (2.2 kB)
Collecting cffi==1.16.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl.metadata (1.5 kB)
Collecting charset-normalizer==3.2.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached charset_normalizer-3.2.0-py3-none-any.whl.metadata (31 kB)
Collecting cheroot==10.0.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached cheroot-10.0.0-py3-none-any.whl.metadata (6.4 kB)
Collecting cherrypy==18.8.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached CherryPy-18.8.0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting contextvars==2.4 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached contextvars-2.4-py3-none-any.whl
Collecting cryptography==42.0.5 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached cryptography-42.0.5-cp39-abi3-macosx_10_12_universal2.whl.metadata (5.3 kB)
Collecting distro==1.8.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached distro-1.8.0-py3-none-any.whl.metadata (6.9 kB)
Collecting frozenlist==1.4.1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached frozenlist-1.4.1-cp312-cp312-macosx_11_0_arm64.whl.metadata (12 kB)
Collecting idna==3.7 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Collecting immutables==0.15 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached immutables-0.15.tar.gz (44 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting importlib-metadata==6.6.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached importlib_metadata-6.6.0-py3-none-any.whl.metadata (5.0 kB)
Collecting inflect==7.0.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached inflect-7.0.0-py3-none-any.whl.metadata (21 kB)
Collecting jaraco.collections==4.1.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached jaraco.collections-4.1.0-py3-none-any.whl.metadata (4.2 kB)
Collecting jaraco.context==4.3.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached jaraco.context-4.3.0-py3-none-any.whl.metadata (3.0 kB)
Collecting jaraco.functools==3.7.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached jaraco.functools-3.7.0-py3-none-any.whl.metadata (3.1 kB)
Collecting jaraco.text==3.11.1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached jaraco.text-3.11.1-py3-none-any.whl.metadata (4.0 kB)
Collecting jinja2==3.1.4 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting jmespath==1.0.1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached jmespath-1.0.1-py3-none-any.whl.metadata (7.6 kB)
Collecting looseversion==1.3.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached looseversion-1.3.0-py2.py3-none-any.whl.metadata (4.6 kB)
Collecting markupsafe==2.1.3 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_universal2.whl.metadata (2.9 kB)
Collecting more-itertools==8.2.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached more_itertools-8.2.0-py3-none-any.whl.metadata (41 kB)
Collecting msgpack>=0.5.2 (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached msgpack-1.0.7-cp312-cp312-macosx_11_0_arm64.whl.metadata (9.1 kB)
Collecting multidict==6.0.4 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached multidict-6.0.4.tar.gz (51 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting packaging (from pytest>=7.2.0->saltext.humio==0.1.dev0+d20240605)
Using cached packaging-23.1-py3-none-any.whl.metadata (3.1 kB)
Collecting portend==3.1.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached portend-3.1.0-py3-none-any.whl.metadata (3.5 kB)
Collecting psutil (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached psutil-5.9.6-cp38-abi3-macosx_11_0_arm64.whl.metadata (21 kB)
Collecting pycparser==2.21 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached pycparser-2.21-py2.py3-none-any.whl.metadata (1.1 kB)
Collecting pycryptodomex==3.19.1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached pycryptodomex-3.19.1-cp35-abi3-macosx_10_9_universal2.whl.metadata (3.4 kB)
Collecting pydantic-core==2.16.3 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached pydantic_core-2.16.3-cp312-cp312-macosx_11_0_arm64.whl.metadata (6.5 kB)
Collecting pydantic==2.6.4 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached pydantic-2.6.4-py3-none-any.whl.metadata (85 kB)
Collecting pyopenssl==24.0.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached pyOpenSSL-24.0.0-py3-none-any.whl.metadata (12 kB)
Collecting python-dateutil==2.8.2 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl.metadata (8.2 kB)
Collecting python-gnupg==0.5.2 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached python_gnupg-0.5.2-py2.py3-none-any.whl.metadata (1.9 kB)
Collecting pytz==2024.1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached pytz-2024.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting pyzmq (from pytest-salt-factories>=1.0.0rc28->saltext.humio==0.1.dev0+d20240605)
Using cached pyzmq-25.1.2-cp312-cp312-macosx_10_15_universal2.whl.metadata (4.9 kB)
Collecting requests==2.31.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting setproctitle==1.3.2 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached setproctitle-1.3.2-cp312-cp312-macosx_14_0_arm64.whl
Collecting six==1.16.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Collecting tempora==5.3.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached tempora-5.3.0-py3-none-any.whl.metadata (3.4 kB)
Collecting timelib==0.3.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached timelib-0.3.0-cp312-cp312-macosx_14_0_arm64.whl
Collecting tornado==6.3.3 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached tornado-6.3.3-cp38-abi3-macosx_10_9_universal2.whl.metadata (2.5 kB)
Collecting typing-extensions==4.8.0 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached typing_extensions-4.8.0-py3-none-any.whl.metadata (3.0 kB)
Collecting urllib3==1.26.18 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached urllib3-1.26.18-py2.py3-none-any.whl.metadata (48 kB)
Collecting yarl==1.9.4 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached yarl-1.9.4-cp312-cp312-macosx_11_0_arm64.whl.metadata (31 kB)
Collecting zc.lockfile==3.0.post1 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached zc.lockfile-3.0.post1-py3-none-any.whl.metadata (6.2 kB)
Collecting zipp==3.16.2 (from salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached zipp-3.16.2-py3-none-any.whl.metadata (3.7 kB)
Collecting setuptools (from zc.lockfile==3.0.post1->salt>=3005->saltext.humio==0.1.dev0+d20240605)
Using cached setuptools-70.0.0-py3-none-any.whl.metadata (5.9 kB)
Collecting beautifulsoup4 (from furo->saltext.humio==0.1.dev0+d20240605)
Using cached beautifulsoup4-4.12.3-py3-none-any.whl.metadata (3.8 kB)
Collecting sphinx-basic-ng>=1.0.0.beta2 (from furo->saltext.humio==0.1.dev0+d20240605)
Using cached sphinx_basic_ng-1.0.0b2-py3-none-any.whl.metadata (1.5 kB)
Collecting pygments>=2.7 (from furo->saltext.humio==0.1.dev0+d20240605)
Using cached pygments-2.18.0-py3-none-any.whl.metadata (2.5 kB)
Collecting sphinxcontrib-applehelp (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_applehelp-1.0.8-py3-none-any.whl.metadata (2.3 kB)
Collecting sphinxcontrib-devhelp (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_devhelp-1.0.6-py3-none-any.whl.metadata (2.3 kB)
Collecting sphinxcontrib-jsmath (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting sphinxcontrib-htmlhelp>=2.0.0 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_htmlhelp-2.0.5-py3-none-any.whl.metadata (2.3 kB)
Collecting sphinxcontrib-serializinghtml>=1.1.9 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_serializinghtml-1.1.10-py3-none-any.whl.metadata (2.4 kB)
Collecting sphinxcontrib-qthelp (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached sphinxcontrib_qthelp-1.0.7-py3-none-any.whl.metadata (2.2 kB)
Collecting docutils<0.22,>=0.18.1 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached docutils-0.21.2-py3-none-any.whl.metadata (2.8 kB)
Collecting snowballstemmer>=2.0 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached snowballstemmer-2.2.0-py2.py3-none-any.whl.metadata (6.5 kB)
Collecting babel>=2.9 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached Babel-2.15.0-py3-none-any.whl.metadata (1.5 kB)
Collecting alabaster~=0.7.14 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached alabaster-0.7.16-py3-none-any.whl.metadata (2.9 kB)
Collecting imagesize>=1.3 (from sphinx->saltext.humio==0.1.dev0+d20240605)
Using cached imagesize-1.4.1-py2.py3-none-any.whl.metadata (1.5 kB)
Collecting argcomplete<4.0,>=1.9.4 (from nox->saltext.humio==0.1.dev0+d20240605)
Using cached argcomplete-3.3.0-py3-none-any.whl.metadata (16 kB)
Collecting colorlog<7.0.0,>=2.6.1 (from nox->saltext.humio==0.1.dev0+d20240605)
Using cached colorlog-6.8.2-py3-none-any.whl.metadata (10 kB)
Collecting platformdirs>=2.2.0 (from pylint->saltext.humio==0.1.dev0+d20240605)
Using cached platformdirs-4.2.2-py3-none-any.whl.metadata (11 kB)
Collecting astroid<=3.3.0-dev0,>=3.2.2 (from pylint->saltext.humio==0.1.dev0+d20240605)
Using cached astroid-3.2.2-py3-none-any.whl.metadata (4.5 kB)
Collecting isort!=5.13.0,<6,>=4.2.5 (from pylint->saltext.humio==0.1.dev0+d20240605)
Using cached isort-5.13.2-py3-none-any.whl.metadata (12 kB)
Collecting mccabe<0.8,>=0.6 (from pylint->saltext.humio==0.1.dev0+d20240605)
Using cached mccabe-0.7.0-py2.py3-none-any.whl.metadata (5.0 kB)
Collecting tomlkit>=0.10.1 (from pylint->saltext.humio==0.1.dev0+d20240605)
Using cached tomlkit-0.12.5-py3-none-any.whl.metadata (2.7 kB)
Collecting dill>=0.3.6 (from pylint->saltext.humio==0.1.dev0+d20240605)
Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB)
Collecting beautifulsoup4 (from furo->saltext.humio==0.1.dev0+d20240605)
Using cached beautifulsoup4-4.9.1-py3-none-any.whl.metadata (4.1 kB)
Collecting python-slugify==4.0.1 (from python-slugify[unidecode]==4.0.1->sphinx-material-saltstack->saltext.humio==0.1.dev0+d20240605)
Using cached python_slugify-4.0.1-py2.py3-none-any.whl
Collecting css-html-js-minify==2.5.5 (from sphinx-material-saltstack->saltext.humio==0.1.dev0+d20240605)
Using cached css_html_js_minify-2.5.5-py2.py3-none-any.whl.metadata (12 kB)
Collecting lxml==4.5.2 (from sphinx-material-saltstack->saltext.humio==0.1.dev0+d20240605)
Using cached lxml-4.5.2.tar.gz (4.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting soupsieve>1.2 (from beautifulsoup4->furo->saltext.humio==0.1.dev0+d20240605)
Using cached soupsieve-2.5-py3-none-any.whl.metadata (4.7 kB)
Collecting text-unidecode>=1.3 (from python-slugify==4.0.1->python-slugify[unidecode]==4.0.1->sphinx-material-saltstack->saltext.humio==0.1.dev0+d20240605)
Using cached text_unidecode-1.3-py2.py3-none-any.whl.metadata (2.4 kB)
Collecting Unidecode>=1.1.1 (from python-slugify[unidecode]==4.0.1->sphinx-material-saltstack->saltext.humio==0.1.dev0+d20240605)
Using cached Unidecode-1.3.8-py3-none-any.whl.metadata (13 kB)
Collecting PyEnchant>=3.1.1 (from sphinxcontrib-spelling->saltext.humio==0.1.dev0+d20240605)
Using cached pyenchant-3.2.2-py3-none-any.whl.metadata (3.8 kB)
Collecting distlib<1,>=0.3.7 (from virtualenv>=20.10.0->pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached distlib-0.3.8-py2.py3-none-any.whl.metadata (5.1 kB)
Collecting filelock<4,>=3.12.2 (from virtualenv>=20.10.0->pre-commit>=2.4.0->saltext.humio==0.1.dev0+d20240605)
Using cached filelock-3.14.0-py3-none-any.whl.metadata (2.8 kB)
Using cached pre_commit-3.7.1-py2.py3-none-any.whl (204 kB)
Using cached pytest-8.2.2-py3-none-any.whl (339 kB)
Using cached pytest_salt_factories-1.0.1-py3-none-any.whl (94 kB)
Using cached aiohttp-3.9.5-cp312-cp312-macosx_11_0_arm64.whl (392 kB)
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Using cached annotated_types-0.6.0-py3-none-any.whl (12 kB)
Using cached attrs-23.2.0-py3-none-any.whl (60 kB)
Using cached autocommand-2.2.2-py3-none-any.whl (19 kB)
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Using cached cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl (177 kB)
Using cached charset_normalizer-3.2.0-py3-none-any.whl (46 kB)
Using cached cheroot-10.0.0-py3-none-any.whl (101 kB)
Using cached CherryPy-18.8.0-py2.py3-none-any.whl (348 kB)
Using cached cryptography-42.0.5-cp39-abi3-macosx_10_12_universal2.whl (5.9 MB)
Using cached distro-1.8.0-py3-none-any.whl (20 kB)
Using cached frozenlist-1.4.1-cp312-cp312-macosx_11_0_arm64.whl (51 kB)
Using cached idna-3.7-py3-none-any.whl (66 kB)
Using cached importlib_metadata-6.6.0-py3-none-any.whl (22 kB)
Using cached inflect-7.0.0-py3-none-any.whl (34 kB)
Using cached jaraco.collections-4.1.0-py3-none-any.whl (11 kB)
Using cached jaraco.context-4.3.0-py3-none-any.whl (5.3 kB)
Using cached jaraco.functools-3.7.0-py3-none-any.whl (8.1 kB)
Using cached jaraco.text-3.11.1-py3-none-any.whl (11 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached jmespath-1.0.1-py3-none-any.whl (20 kB)
Using cached looseversion-1.3.0-py2.py3-none-any.whl (8.2 kB)
Using cached MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_universal2.whl (17 kB)
Using cached more_itertools-8.2.0-py3-none-any.whl (43 kB)
Using cached msgpack-1.0.7-cp312-cp312-macosx_11_0_arm64.whl (232 kB)
Using cached packaging-23.1-py3-none-any.whl (48 kB)
Using cached portend-3.1.0-py3-none-any.whl (5.3 kB)
Using cached psutil-5.9.6-cp38-abi3-macosx_11_0_arm64.whl (246 kB)
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Using cached pycryptodomex-3.19.1-cp35-abi3-macosx_10_9_universal2.whl (2.4 MB)
Using cached pydantic-2.6.4-py3-none-any.whl (394 kB)
Using cached pydantic_core-2.16.3-cp312-cp312-macosx_11_0_arm64.whl (1.7 MB)
Using cached pyOpenSSL-24.0.0-py3-none-any.whl (58 kB)
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Using cached python_gnupg-0.5.2-py2.py3-none-any.whl (20 kB)
Using cached pytz-2024.1-py2.py3-none-any.whl (505 kB)
Using cached PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl (165 kB)
Using cached pyzmq-25.1.2-cp312-cp312-macosx_10_15_universal2.whl (1.9 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Using cached tempora-5.3.0-py3-none-any.whl (13 kB)
Using cached tornado-6.3.3-cp38-abi3-macosx_10_9_universal2.whl (425 kB)
Using cached typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Using cached urllib3-1.26.18-py2.py3-none-any.whl (143 kB)
Using cached yarl-1.9.4-cp312-cp312-macosx_11_0_arm64.whl (79 kB)
Using cached zc.lockfile-3.0.post1-py3-none-any.whl (9.8 kB)
Using cached zipp-3.16.2-py3-none-any.whl (7.2 kB)
Using cached furo-2024.5.6-py3-none-any.whl (341 kB)
Using cached sphinx-7.3.7-py3-none-any.whl (3.3 MB)
Using cached nox-2024.4.15-py3-none-any.whl (60 kB)
Using cached pylint-3.2.2-py3-none-any.whl (519 kB)
Using cached SaltPyLint-2024.2.5-py3-none-any.whl (16 kB)
Using cached sphinx_copybutton-0.5.2-py3-none-any.whl (13 kB)
Using cached sphinx_material_saltstack-1.0.5-py3-none-any.whl (802 kB)
Using cached beautifulsoup4-4.9.1-py3-none-any.whl (115 kB)
Using cached css_html_js_minify-2.5.5-py2.py3-none-any.whl (40 kB)
Using cached sphinx_prompt-1.8.0-py3-none-any.whl (7.3 kB)
Using cached sphinxcontrib_spelling-8.0.0-py3-none-any.whl (16 kB)
Using cached alabaster-0.7.16-py3-none-any.whl (13 kB)
Using cached argcomplete-3.3.0-py3-none-any.whl (42 kB)
Using cached astroid-3.2.2-py3-none-any.whl (276 kB)
Using cached Babel-2.15.0-py3-none-any.whl (9.6 MB)
Using cached cfgv-3.4.0-py2.py3-none-any.whl (7.2 kB)
Using cached colorlog-6.8.2-py3-none-any.whl (11 kB)
Using cached dill-0.3.8-py3-none-any.whl (116 kB)
Using cached docutils-0.21.2-py3-none-any.whl (587 kB)
Using cached identify-2.5.36-py2.py3-none-any.whl (98 kB)
Using cached imagesize-1.4.1-py2.py3-none-any.whl (8.8 kB)
Using cached isort-5.13.2-py3-none-any.whl (92 kB)
Using cached mccabe-0.7.0-py2.py3-none-any.whl (7.3 kB)
Using cached nodeenv-1.9.1-py2.py3-none-any.whl (22 kB)
Using cached platformdirs-4.2.2-py3-none-any.whl (18 kB)
Using cached pluggy-1.5.0-py3-none-any.whl (20 kB)
Using cached pyenchant-3.2.2-py3-none-any.whl (55 kB)
Using cached pygments-2.18.0-py3-none-any.whl (1.2 MB)
Using cached pytest_helpers_namespace-2021.12.29-py3-none-any.whl (10 kB)
Using cached pytest_shell_utilities-1.9.0-py3-none-any.whl (45 kB)
Using cached pytest_skip_markers-1.5.1-py3-none-any.whl (20 kB)
Using cached pytest_system_statistics-1.0.2-py3-none-any.whl (11 kB)
Using cached snowballstemmer-2.2.0-py2.py3-none-any.whl (93 kB)
Using cached sphinx_basic_ng-1.0.0b2-py3-none-any.whl (22 kB)
Using cached sphinxcontrib_htmlhelp-2.0.5-py3-none-any.whl (99 kB)
Using cached sphinxcontrib_serializinghtml-1.1.10-py3-none-any.whl (92 kB)
Using cached tomlkit-0.12.5-py3-none-any.whl (37 kB)
Using cached virtualenv-20.26.2-py3-none-any.whl (3.9 MB)
Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Using cached sphinxcontrib_applehelp-1.0.8-py3-none-any.whl (120 kB)
Using cached sphinxcontrib_devhelp-1.0.6-py3-none-any.whl (83 kB)
Using cached sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl (5.1 kB)
Using cached sphinxcontrib_qthelp-1.0.7-py3-none-any.whl (89 kB)
Using cached distlib-0.3.8-py2.py3-none-any.whl (468 kB)
Using cached filelock-3.14.0-py3-none-any.whl (12 kB)
Using cached soupsieve-2.5-py3-none-any.whl (36 kB)
Using cached text_unidecode-1.3-py2.py3-none-any.whl (78 kB)
Using cached Unidecode-1.3.8-py3-none-any.whl (235 kB)
Using cached setuptools-70.0.0-py3-none-any.whl (863 kB)
Checking if build backend supports build_editable ... done
Building wheels for collected packages: saltext.humio, immutables, multidict, lxml
Building editable for saltext.humio (pyproject.toml) ... done
Created wheel for saltext.humio: filename=saltext.humio-0.1.dev0+d20240605-0.editable-py2.py3-none-any.whl size=2718 sha256=b4710a3414c2e624427564c44348f2dcf88e5d451139830e57f418f1e4454d89
Stored in directory: /private/var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn/T/pip-ephem-wheel-cache-x3pdsaqr/wheels/04/0b/0f/630b15a31dd2b80771315ce14ae4618f10d39719d0e8257578
Building wheel for immutables (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for immutables (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [80 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-312
creating build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/_testutils.py -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/_version.py -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/map.py -> build/lib.macosx-10.9-universal2-cpython-312/immutables
running egg_info
writing immutables.egg-info/PKG-INFO
writing dependency_links to immutables.egg-info/dependency_links.txt
writing requirements to immutables.egg-info/requires.txt
writing top-level names to immutables.egg-info/top_level.txt
reading manifest file 'immutables.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'immutables.egg-info/SOURCES.txt'
copying immutables/_map.c -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/_map.h -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/_map.pyi -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/py.typed -> build/lib.macosx-10.9-universal2-cpython-312/immutables
copying immutables/pythoncapi_compat.h -> build/lib.macosx-10.9-universal2-cpython-312/immutables
running build_ext
building 'immutables._map' extension
creating build/temp.macosx-10.9-universal2-cpython-312
creating build/temp.macosx-10.9-universal2-cpython-312/immutables
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -DNDEBUG=1 -I/Users/shea/Developer/sq/saltext_humio/.env/include -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c immutables/_map.c -o build/temp.macosx-10.9-universal2-cpython-312/immutables/_map.o -O2 -std=c99 -fsigned-char -Wall -Wsign-compare -Wconversion
immutables/_map.c:535:19: error: too few arguments provided to function-like macro invocation
va_start(vargs);
^
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stdarg.h:17:9: note: macro 'va_start' defined here
#define va_start(ap, param) __builtin_va_start(ap, param)
^
immutables/_map.c:535:5: error: call to undeclared library function 'va_start' with type 'void (__builtin_va_list &, ...)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
va_start(vargs);
^
immutables/_map.c:535:5: note: include the header <stdarg.h> or explicitly provide a declaration for 'va_start'
immutables/_map.c:535:5: warning: expression result unused [-Wunused-value]
va_start(vargs);
^~~~~~~~
immutables/_map.c:1250:5: warning: 'UsingDeprecatedTrashcanMacro' is deprecated [-Wdeprecated-declarations]
Py_TRASHCAN_SAFE_BEGIN(self)
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:551:9: note: expanded from macro 'Py_TRASHCAN_SAFE_BEGIN'
UsingDeprecatedTrashcanMacro cond=1; \
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:548:1: note: 'UsingDeprecatedTrashcanMacro' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) typedef int UsingDeprecatedTrashcanMacro;
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
immutables/_map.c:1667:5: warning: 'UsingDeprecatedTrashcanMacro' is deprecated [-Wdeprecated-declarations]
Py_TRASHCAN_SAFE_BEGIN(self)
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:551:9: note: expanded from macro 'Py_TRASHCAN_SAFE_BEGIN'
UsingDeprecatedTrashcanMacro cond=1; \
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:548:1: note: 'UsingDeprecatedTrashcanMacro' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) typedef int UsingDeprecatedTrashcanMacro;
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
immutables/_map.c:2086:5: warning: 'UsingDeprecatedTrashcanMacro' is deprecated [-Wdeprecated-declarations]
Py_TRASHCAN_SAFE_BEGIN(self)
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:551:9: note: expanded from macro 'Py_TRASHCAN_SAFE_BEGIN'
UsingDeprecatedTrashcanMacro cond=1; \
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:548:1: note: 'UsingDeprecatedTrashcanMacro' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) typedef int UsingDeprecatedTrashcanMacro;
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
4 warnings and 2 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for immutables
Building wheel for multidict (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for multidict (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [200 lines of output]
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-312
creating build/lib.macosx-10.9-universal2-cpython-312/multidict
copying multidict/_multidict_py.py -> build/lib.macosx-10.9-universal2-cpython-312/multidict
copying multidict/_abc.py -> build/lib.macosx-10.9-universal2-cpython-312/multidict
copying multidict/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/multidict
copying multidict/_multidict_base.py -> build/lib.macosx-10.9-universal2-cpython-312/multidict
copying multidict/_compat.py -> build/lib.macosx-10.9-universal2-cpython-312/multidict
running egg_info
writing multidict.egg-info/PKG-INFO
writing dependency_links to multidict.egg-info/dependency_links.txt
writing top-level names to multidict.egg-info/top_level.txt
reading manifest file 'multidict.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files found matching 'multidict/_multidict.html'
warning: no previously-included files found matching 'multidict/*.so'
warning: no previously-included files found matching 'multidict/*.pyd'
warning: no previously-included files found matching 'multidict/*.pyd'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE'
writing manifest file 'multidict.egg-info/SOURCES.txt'
/private/var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn/T/pip-build-env-bc1q_fgd/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'multidict._multilib' is absent from the `packages` configuration.
!!
********************************************************************************
############################
# Package would be ignored #
############################
Python recognizes 'multidict._multilib' as an importable package[^1],
but it is absent from setuptools' `packages` configuration.
This leads to an ambiguous overall configuration. If you want to distribute this
package, please make sure that 'multidict._multilib' is explicitly added
to the `packages` configuration field.
Alternatively, you can also rely on setuptools' discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
If you don't want 'multidict._multilib' to be distributed and are
already explicitly excluding 'multidict._multilib' via
`find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
you can try to use `exclude_package_data`, or `include-package-data=False` in
combination with a more fine grained `package-data` configuration.
You can read more about "package data files" on setuptools documentation page:
- https://setuptools.pypa.io/en/latest/userguide/datafiles.html
[^1]: For Python, any directory (with suitable naming) can be imported,
even if it does not contain any `.py` files.
On the other hand, currently there is no concept of package data
directory, all directories are treated like packages.
********************************************************************************
!!
check.warn(importable)
copying multidict/__init__.pyi -> build/lib.macosx-10.9-universal2-cpython-312/multidict
copying multidict/py.typed -> build/lib.macosx-10.9-universal2-cpython-312/multidict
running build_ext
building 'multidict._multidict' extension
creating build/temp.macosx-10.9-universal2-cpython-312
creating build/temp.macosx-10.9-universal2-cpython-312/multidict
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -I/Users/shea/Developer/sq/saltext_humio/.env/include -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c multidict/_multidict.c -o build/temp.macosx-10.9-universal2-cpython-312/multidict/_multidict.o -O2 -std=c99 -Wall -Wsign-compare -Wconversion -fno-strict-aliasing -pedantic
In file included from multidict/_multidict.c:9:
multidict/_multilib/iter.h:225:20: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
multidict_iter_init()
^
void
In file included from multidict/_multidict.c:10:
multidict/_multilib/views.h:388:21: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
multidict_views_init()
^
void
multidict/_multidict.c:458:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "getall", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:458:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "getall", 0};
^~~~~~~~~
multidict/_multidict.c:458:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[7]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "getall", 0};
^~~~~~~~
multidict/_multidict.c:503:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "getone", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:503:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "getone", 0};
^~~~~~~~~
multidict/_multidict.c:503:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[7]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "getone", 0};
^~~~~~~~
multidict/_multidict.c:538:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "get", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:538:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "get", 0};
^~~~~~~~~
multidict/_multidict.c:538:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[4]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "get", 0};
^~~~~
multidict/_multidict.c:712:5: warning: 'UsingDeprecatedTrashcanMacro' is deprecated [-Wdeprecated-declarations]
Py_TRASHCAN_SAFE_BEGIN(self);
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:551:9: note: expanded from macro 'Py_TRASHCAN_SAFE_BEGIN'
UsingDeprecatedTrashcanMacro cond=1; \
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/object.h:548:1: note: 'UsingDeprecatedTrashcanMacro' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) typedef int UsingDeprecatedTrashcanMacro;
^
/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
multidict/_multidict.c:780:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "add", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:780:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "add", 0};
^~~~~~~~~
multidict/_multidict.c:780:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[4]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "add", 0};
^~~~~
multidict/_multidict.c:839:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "setdefault", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:839:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "setdefault", 0};
^~~~~~~~~
multidict/_multidict.c:839:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[11]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "setdefault", 0};
^~~~~~~~~~~~
multidict/_multidict.c:875:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "popone", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:875:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "popone", 0};
^~~~~~~~~
multidict/_multidict.c:875:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[7]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "popone", 0};
^~~~~~~~
multidict/_multidict.c:922:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "pop", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:922:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "pop", 0};
^~~~~~~~~
multidict/_multidict.c:922:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[4]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "pop", 0};
^~~~~
multidict/_multidict.c:970:37: error: incompatible pointer to integer conversion initializing 'int' with an expression of type 'void *' [-Wint-conversion]
static _PyArg_Parser _parser = {NULL, _keywords, "popall", 0};
^~~~
/Library/Developer/CommandLineTools/usr/lib/clang/15.0.0/include/stddef.h:89:16: note: expanded from macro 'NULL'
# define NULL ((void*)0)
^~~~~~~~~~
multidict/_multidict.c:970:43: warning: incompatible pointer types initializing 'const char *' with an expression of type 'const char *const[3]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "popall", 0};
^~~~~~~~~
multidict/_multidict.c:970:54: warning: incompatible pointer types initializing 'const char *const *' with an expression of type 'char[7]' [-Wincompatible-pointer-types]
static _PyArg_Parser _parser = {NULL, _keywords, "popall", 0};
^~~~~~~~
multidict/_multidict.c:1684:18: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
PyInit__multidict()
^
void
20 warnings and 8 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for multidict
Building wheel for lxml (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for lxml (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [91 lines of output]
<string>:64: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
Building lxml version 4.5.2.
Building without Cython.
Building against libxml2 2.9.13 and libxslt 1.1.35
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-universal2-cpython-312
creating build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/_elementpath.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/sax.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/pyclasslookup.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/builder.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/doctestcompare.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/usedoctest.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/cssselect.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/ElementInclude.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/soupparser.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/defs.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/_setmixin.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/clean.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/_diffcommand.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/html5parser.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/formfill.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/builder.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/ElementSoup.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/_html5builder.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/usedoctest.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
copying src/lxml/html/diff.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/html
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron
copying src/lxml/isoschematron/__init__.py -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron
copying src/lxml/etree.h -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/etree_api.h -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/lxml.etree.h -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/lxml.etree_api.h -> build/lib.macosx-10.9-universal2-cpython-312/lxml
copying src/lxml/includes/xmlerror.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/c14n.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/xmlschema.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/__init__.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/schematron.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/tree.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/uri.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/etreepublic.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/xpath.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/htmlparser.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/xslt.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/config.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/xmlparser.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/xinclude.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/dtdvalid.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/relaxng.pxd -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/lxml-version.h -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
copying src/lxml/includes/etree_defs.h -> build/lib.macosx-10.9-universal2-cpython-312/lxml/includes
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/rng
copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/rng
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl
creating build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.macosx-10.9-universal2-cpython-312/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
creating build/temp.macosx-10.9-universal2-cpython-312
creating build/temp.macosx-10.9-universal2-cpython-312/src
creating build/temp.macosx-10.9-universal2-cpython-312/src/lxml
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -arch arm64 -arch x86_64 -g -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc/lxml/includes -I/Users/shea/Developer/sq/saltext_humio/.env/include -I/Library/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c src/lxml/etree.c -o build/temp.macosx-10.9-universal2-cpython-312/src/lxml/etree.o -w -flat_namespace
src/lxml/etree.c:289:12: fatal error: 'longintrepr.h' file not found
#include "longintrepr.h"
^~~~~~~~~~~~~~~
1 error generated.
Compile failed: command '/usr/bin/clang' failed with exit code 1
creating var
creating var/folders
creating var/folders/8b
creating var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn
creating var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn/T
cc -I/usr/include/libxml2 -c /var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn/T/xmlXPathInitd13ri5wg.c -o var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn/T/xmlXPathInitd13ri5wg.o
cc var/folders/8b/2v5j2m8x5h77d1gp8fl8g4cm0000gn/T/xmlXPathInitd13ri5wg.o -lxml2 -o a.out
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for lxml
Successfully built saltext.humio
Failed to build immutables multidict lxml
ERROR: Could not build wheels for immutables, multidict, lxml, which is required to install pyproject.toml-based projects
I've tried both the python.org and homebrew python 3.12s, as well as adding CFLAGS for the longintrepr.h file, which it strangely can't find.
Even then, failure.
Thanks for your help!
So, sorry for the enormous snippet above.
I was able to get the dev and tests groups to install by
Using the Salt 3007.1 python I already had on my machine to create the virtual env
Creating a virtualenv with another python, copying the activate script to the above venv, and editing it to point to the above venv, since salt python oddly didn't create the activate script?
Restricted my salt extension to Salt 3007.1 or higher
Adding the following to requirements/base.txt
lxml>=5.0.0
immutables>=0.15
multidict>=6.0.4
Commented out the static docs requirements section of the pre-commit config (they say macOS is not really supported yet anyway)
Fixed a couple of syntax errors that I didn't see but pre-commit was complaining about by using an inline list for some args.
The above were oddly trying to build from source on my machine, possibly because python 3.12 wheels don't exist for older versions of these packages which are resolving?
So I think I'm in business but it seems there are some rough edges for getting started with these that I'm happy to help out with if you can direct me.
Salt Project / VMware has ended active development of this project; this repository will no longer be updated.
The community has created and maintained a better alternative to the development of Salt extensions: salt-extensions/salt-extension-copier (Create and maintain Salt extensions using Copier)
|
gharchive/issue
| 2024-06-05T21:35:21 |
2025-04-01T06:40:19.233364
|
{
"authors": [
"ScriptAutomate",
"sheagcraig"
],
"repo": "saltstack/salt-extension",
"url": "https://github.com/saltstack/salt-extension/issues/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2097945053
|
plasma-new-hope: fix id generation for Modal, Popup
Modal
fixed id generation for Modal, Popup
added tests for Modal, Popup
What/why changed
Using ModalBase without an id was broken because the id was passed through incorrectly. We now also use safeUseId instead of useUniqId. Tests were added for web and b2c for both components.
📦 Published PR as canary version: Canary Versions
:sparkles: Test out this PR locally via:
npm install @salutejs/plasma-asdk@0.30.2-canary.1001.7638822667.0
npm install @salutejs/plasma-b2c@1.272.2-canary.1001.7638822667.0
npm install @salutejs/plasma-new-hope@0.36.1-canary.1001.7638822667.0
npm install @salutejs/plasma-web@1.272.2-canary.1001.7638822667.0
# or
yarn add @salutejs/plasma-asdk@0.30.2-canary.1001.7638822667.0
yarn add @salutejs/plasma-b2c@1.272.2-canary.1001.7638822667.0
yarn add @salutejs/plasma-new-hope@0.36.1-canary.1001.7638822667.0
yarn add @salutejs/plasma-web@1.272.2-canary.1001.7638822667.0
Theme Builder app deployed!
http://plasma.sberdevices.ru/pr/plasma-theme-builder-pr-1001/
Documentation preview deployed!
website: http://plasma.sberdevices.ru/pr/pr-1001/
b2c storybook: http://plasma.sberdevices.ru/pr/pr-1001/b2c-storybook/
web storybook: http://plasma.sberdevices.ru/pr/pr-1001/web-storybook/
new-hope storybook: http://plasma.sberdevices.ru/pr/pr-1001/new-hope-storybook/
asdk storybook: http://plasma.sberdevices.ru/pr/pr-1001/asdk-storybook/
🚀 This PR is included in version: @salutejs/plasma-asdk@0.30.2-dev.0, @salutejs/plasma-b2c@1.272.2-dev.0, @salutejs/plasma-new-hope@0.36.1-dev.0, @salutejs/plasma-temple-docs@0.210.2-dev.0, @salutejs/plasma-ui-docs@0.259.2-dev.0, @salutejs/plasma-web-docs@0.226.2-dev.0, @salutejs/plasma-web@1.272.2-dev.0, @salutejs/plasma-website@0.234.2-dev.0 🚀
🚀 This PR is included in version: @salutejs/plasma-asdk@0.35.0, @salutejs/plasma-b2c@1.277.0, @salutejs/plasma-core@1.146.0, @salutejs/plasma-cy-utils@0.78.0, @salutejs/plasma-hope@1.258.0, @salutejs/plasma-icons@1.179.0, @salutejs/plasma-new-hope@0.41.0, @salutejs/plasma-sb-utils@0.144.0, @salutejs/plasma-temple-docs@0.215.0, @salutejs/plasma-temple@1.198.0, @salutejs/plasma-tokens-native@1.21.0, @salutejs/plasma-tokens@1.70.0, @salutejs/plasma-ui-docs@0.264.0, @salutejs/plasma-ui@1.230.0, @salutejs/plasma-web-docs@0.231.0, @salutejs/plasma-web@1.277.0, @salutejs/plasma-website@0.239.0 🚀
|
gharchive/pull-request
| 2024-01-24T10:27:00 |
2025-04-01T06:40:19.375177
|
{
"authors": [
"Salute-Eva",
"kayman233"
],
"repo": "salute-developers/plasma",
"url": "https://github.com/salute-developers/plasma/pull/1001",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2280559968
|
Incorrect pointer to interface
https://github.com/samber/go-type-to-string/blob/810c3951834a110bde702691eddfbdaf18ff581f/converter.go#L31-L32
Why?
package main
import (
"fmt"
typetostring "github.com/samber/go-type-to-string"
)
func main() {
type I interface {}
fmt.Println(typetostring.GetType[I]()) // *main.I
fmt.Println(typetostring.GetType[*I]()) // *main.I
fmt.Println(typetostring.GetType[**I]()) // **main.I
}
Without the getInterfaceType hack, typetostring.GetType[interface{}]() would return *interface {}.
This is a known limitation.
If you have a better way to handle this case, I would be very happy to accept your contribution!!
I meant, why only for any?
What about trimming * for all?
return t[1:]
We cannot disable all *, because we need to differentiate *string and string.
You can disable the getInterfaceType hack and iterate with the existing unit tests.
Not all *, only for interfaces, when reflect.TypeOf(t) == nil
This behavior wasn't planned, was it?
https://play.golang.com/p/RM9X82ZRprN
You are confused here
https://github.com/samber/go-type-to-string/blob/668e29203ca583ef1191bbd3f8f911731520d9b5/README.md?plain=1#L45
After 1.3.0 there will be any
No, try to unpack the interface (not only any)
Yes, this is why I listed this in the "not supported yet" section.
OK. I'll add an explanation of the reason here too. Just
type I interface {
// ...
}
The same type as any other (in the context of this package)
which has its own name.
|
gharchive/issue
| 2024-05-06T10:35:16 |
2025-04-01T06:40:19.406084
|
{
"authors": [
"d-enk",
"samber"
],
"repo": "samber/go-type-to-string",
"url": "https://github.com/samber/go-type-to-string/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
214906587
|
The latest image doesn't have squid.conf
docker inspect sameersbn/squid:3.3.8-23 doesn't show any exposed file to be mounted to the local host
@vinchauhan you can mount a custom squid configuration at /etc/squid3/squid.conf. Unless you mount such a volume, docker inspect will not show this in its output. Please see https://github.com/sameersbn/docker-squid#configuration for more information.
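For example, a typical invocation with a host-side config mounted in (the host path here is purely illustrative):

docker run --name squid -d \
  --publish 3128:3128 \
  --volume /path/to/squid.conf:/etc/squid3/squid.conf \
  sameersbn/squid:3.3.8-23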
|
gharchive/issue
| 2017-03-17T04:43:13 |
2025-04-01T06:40:19.422868
|
{
"authors": [
"sameersbn",
"vinchauhan"
],
"repo": "sameersbn/docker-squid",
"url": "https://github.com/sameersbn/docker-squid/issues/18",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1091242250
|
Update docs at ramda-comparison.md
has a link to fp-ts-std
should be a link to fp-ts
Thanks!
|
gharchive/pull-request
| 2021-12-30T18:11:32 |
2025-04-01T06:40:19.427487
|
{
"authors": [
"SimonAM",
"samhh"
],
"repo": "samhh/fp-ts-std",
"url": "https://github.com/samhh/fp-ts-std/pull/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2260448659
|
Be able to narrow the width of vertical tabs sidebar to just show/fit the icons.
Thank you for the Vertical Tabs extension.
If possible, I think it would be great to be able to make the width of the vertical tabs sidebar smaller.
Currently the minimum imposed width is too big and takes up too much of the screen.
I think a minimum width of 60px (or less), where only the icons would be visible, would be a good candidate for the minimum width.
Thank you.
Hey @catalin-enache, this is controlled by Chrome; you can check more info here https://github.com/samihaddad/vertical-tabs-chrome-extension/issues/42. You can upvote this issue so it can get more visibility from the Chrome team.
@samihaddad, thanks for the quick feedback. I went there and upvoted as suggested.
Cheers !
|
gharchive/issue
| 2024-04-24T06:23:03 |
2025-04-01T06:40:19.429953
|
{
"authors": [
"catalin-enache",
"samihaddad"
],
"repo": "samihaddad/vertical-tabs-chrome-extension",
"url": "https://github.com/samihaddad/vertical-tabs-chrome-extension/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2462661455
|
Algorithm 1 in the paper
In algo 1, the running mean and variance are updated at step 12, but not used anywhere.
Can you elaborate, please?
Hi, it's not fully explicit in the algorithm but section 3.5.2 (link) explains in detail. We use that to normalize each dimension of the random prior's output. That way, if the learned component outputs 0 (which it may do for things you've never seen), the initial bonus is still 1, which is roughly the behavior you want on totally novel observations.
Hope that clears it up, and sorry for the slow response -- I didn't see the comment until just now.
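For anyone skimming the thread, here is a minimal NumPy sketch of the idea described above. The names and shapes are illustrative assumptions, not the repo's actual code: the running statistics updated at step 12 standardise the fixed random prior's output, so a learned network that outputs 0 on a never-seen observation still yields a bonus of roughly 1.

import numpy as np

class RunningStats:
    # Per-dimension running mean/variance, updated as in step 12.
    def __init__(self, dim, eps=1e-4):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.count = eps

    def update(self, batch):
        n = batch.shape[0]
        delta = batch.mean(0) - self.mean
        total = self.count + n
        self.var = (self.var * self.count + batch.var(0) * n
                    + delta ** 2 * self.count * n / total) / total
        self.mean = self.mean + delta * n / total
        self.count = total

def exploration_bonus(prior_out, learned_out, stats):
    # Standardise the random prior target; if learned_out is ~0 on a novel
    # observation, each squared-error term is ~1, so the bonus starts near 1.
    target = (prior_out - stats.mean) / np.sqrt(stats.var + 1e-8)
    return float(np.mean((target - learned_out) ** 2))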
|
gharchive/issue
| 2024-08-13T07:54:56 |
2025-04-01T06:40:19.440578
|
{
"authors": [
"ndvbd",
"samlobel"
],
"repo": "samlobel/CFN",
"url": "https://github.com/samlobel/CFN/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
314048326
|
APIError(code=-1021): Timestamp for this request was 1000ms ahead of the server's time.
Hello - first off great program. when I run the example code (from readme):
# place a test market buy order, to place an actual order use the create_order function
order = client.create_test_order(
symbol='BNBBTC',
side=Client.SIDE_BUY,
type=Client.ORDER_TYPE_MARKET,
quantity=100)
I receive the following error:
line 199, in _handle_response
raise BinanceAPIException(response)
binance.exceptions.BinanceAPIException: APIError(code=-1021): Timestamp for this request was 1000ms ahead of the server's time.
I am able to successfully run the previous example:
# get market depth
depth = client.get_order_book(symbol='BNBBTC')
with no problems. How can I fix this issue? I know it has to do with my computer's clock differing from the Binance server time, but I thought that this was automatically handled by python-binance? Thanks in advance.
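For reference, here is a quick way to see the drift directly, reusing the client object from the snippets above (this only measures the offset, it doesn't fix anything):

import time

server_ms = client.get_server_time()['serverTime']  # Binance's clock, in ms
local_ms = int(time.time() * 1000)                  # this machine's clock
print('local clock ahead of server by', local_ms - server_ms, 'ms')
# Anything much over 1000 ms will trigger APIError(code=-1021).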
I've experienced this issue a lot (specially in docker environment). I solved the problem by updating the system time either manually or using ntp service
I tried using time.clock_settime() but received an error. What's the NTP service? Did you solve it like this (nadir dogan comment): https://github.com/yasinkuyu/binance-trader/issues/63#issuecomment-355857901
Hi @Roibal NTP is a linux service you can google more about it. Basically I will run
ntpdate -s ntp.ubuntu.com to sync the system time.
No, I saw the comment but I didn't succeed with it. I'm developing on a Mac, which has the problem, but under Ubuntu (where my server is) it's totally fine.
Thank you for the info. I'm using Windows, I may try this on a different machine. Have you been able to get a bot working with this library?
I'm not familiar with Windows, but it should have similar service though.
Yes. I'm currently building one with it, have minor issues but overall it's good to me.
From my other comment thread, and thanks to nadir for code:
when I try to run win32api as suggested by nadir's code, I receive following error:
line 19, in <module> win32api.SetSystemTime(tt[0],tt[1],0,tt[2],tt[3],tt[4],tt[5],0) pywintypes.error: (1314, 'SetSystemTime', 'A required privilege is not held by the client.')
I understand this has to do with permissions on my Windows machine.
When I run this code:
gt = client.get_server_time()
print(gt)
print(time.localtime())
aa = str(gt)
bb = aa.replace("{'serverTime': ","")
aa = bb.replace("}","")
gg=int(aa)
ff=gg-10799260
uu=ff/1000
yy=int(uu)
tt=time.localtime(yy)
print(tt)
I receive the following output:
{'serverTime': 1523688379266}
time.struct_time(tm_year=2018, tm_mon=4, tm_mday=14, tm_hour=0, tm_min=46, tm_sec=19, tm_wday=5, tm_yday=104, tm_isdst=1)
time.struct_time(tm_year=2018, tm_mon=4, tm_mday=13, tm_hour=21, tm_min=46, tm_sec=20, tm_wday=4, tm_yday=103, tm_isdst=1)
The first time struct is my local time, the second is the time formatted, which seems to be off by 3 hours and 1 second.
Try manually syncing your computer time to internet time. Details here for Windows: https://github.com/ccxt/ccxt/issues/850#issuecomment-381308373
hi @ShervinDD , thank you for the suggestion. I was able to 'sync' with Nist.gov internet time, and the 1000 ms error went away. Here's how I did it on Windows 10:
Date & Time -> Internet Time (tab) -> sync with nist.gov
it now says it will sync tomorrow at 7:31 A.M.
hi @ShervinDD - this solution worked for me last night, however, when I tried to run the script today I received the same error (1000ms ahead of server time), and I synchronized it 10 minutes previously (automatically). when I re-synced it (manually) the program worked for a few minutes, but once again it is giving same error message. Is there anyway to fix this inside the program, such as setting request time = binance server time?
I'm not sure. If your system time is constantly out of sync with internet time, then there's something wrong with your system. Try to fix that first.
I use Windows myself and haven't been getting the timestamp error in a while. Now today, out of the blue, I can't start my bot. The previous fix was to go into regedit on Windows and change the time update interval so it updates every 5 min instead of the default of once a day. I can't recall the exact registry location, but it's easily found on the internet. Google "changing time interval registry settings" and it should take you where you need to go.
Yet today I restarted my PC AND went into date/time -> internet time and synced, and it's still saying timestamp error. So it's finicky sometimes.
Sorry for posting on a closed issue, but, having awful internet here I am often burdened with getting rid of this issue in order to interact with Binance's private API methods.
This, for posterity, is my solution. I create wrapper around the client object, with an extra function synced.
import time
from binance.client import Client
class Binance:
def __init__(self, public_key = '', secret_key = '', sync = False):
self.time_offset = 0
self.b = Client(public_key, secret_key)
if sync:
self.time_offset = self._get_time_offset()
def _get_time_offset(self):
res = self.b.get_server_time()
return res['serverTime'] - int(time.time() * 1000)
def synced(self, fn_name, **args):
args['timestamp'] = int(time.time() - self.time_offset)
return getattr(self.b, fn_name)(**args)
I then instantiate it and call my private account functions with synced
binance = Binance(public_key = 'my_pub_key', secret_key = 'my_secret_key', sync=True)
binance.synced('order_market_buy', symbol='BNBBTC', quantity=10)
Hope this is of help.
Thanks @ale316. That's a helpful solution.
I've just had a similar problem [also using OSX]. I'm in Europe and my OSX SystemPreferences for Date & Time are already set to use sync via the internet using Apple's Europe time servers. As an experiment, I changed the prefs to use Apple's US time servers instead and ran the script again and the error went away.
Interestingly, I've just reset the prefs back to use Apple's Europe time servers again and the error has [so far!] not returned. Seems, on that basis, to be a problem with Apple's time servers.
from control panel > date and time > internet time
change the server to >>>> time.nist.gov
it worked for me
Can you explain the solution? It seems like it's not working for me.
@ale316 Your idea looks great, but I have added some changes to your code.
class Binance:
def __init__(self, public_key = '', secret_key = '', sync = False):
self.time_offset = 0
self.b = Client(public_key, secret_key)
self.b.API_URL = 'https://testnet.binance.vision/api' # for testnet
if sync:
self.time_offset = self._get_time_offset()
print( "Offset: %s ms" % (self.time_offset) )
def _get_time_offset(self):
res = self.b.get_server_time()
return res['serverTime'] - int(time.time() * 1000)
def synced(self, fn_name, **args):
args['timestamp'] = int(time.time() * 1000 + self.time_offset)
return getattr(self.b, fn_name)(**args)
my_binance = Binance(API_KEY, SECRET_KEY, True)
# my_binance.synced('order_market_buy', symbol='BNBBTC', quantity=10)
acc_info = my_binance.synced('get_account', recvWindow=60000)
print(acc_info)
After adding this change, I am able to bypass this issue by removing this line from the python-binance library code.
For Windows, try this: https://steemit.com/crypto/@biyi/how-to-resolve-binance-s-timestamp-ahead-of-server-s-time-challenge
net stop w32time
w32tm /unregister
w32tm /register
net start w32time
w32tm /resync
In my case, I use Windows and from Settings -> Date & Time, with the switch "Set the time automatically" and "Set the time zone automatically" both on, them pressed the "Sync now" button, and that works for me.
binance.exceptions.BinanceAPIException: APIError(code=-1021): Timestamp for this request is outside of the recvWindow.
I had the same error. I'm using macOS Catalina version 10.15.7. If someone has news about this issue, please share. Thanks!
Press the Windows button -> Date and Time -> Set time automatically + Set date automatically; worked for me perfectly.
On Windows 10, just go to date/time settings and click on "Synchronize now" (or in french "Synchroniser maintenant").
I was running into this issue when running commands out of a docker container. I run windows wsl2 and found that the linux kernel had a time sync bug. I was able to fix my issue by updating the wsl linux kernel. Here are the steps
windows docker fix
fixed in 5.10.16.3 WSL 2 Linux kernel
Shows issue for Clock Sync is resolved
https://devblogs.microsoft.com/commandline/servicing-the-windows-subsystem-for-linux-wsl-2-linux-kernel/
How to update WSL linux kernel
https://winaero.com/how-to-install-linux-kernel-update-for-wsl-2-in-windows-10/
Please go to Date & Time, set everything to automatic, and don't forget to press the sync button.
Simple. You just have to update the time-sync frequency; since it is automatic, it can take a long time to update because that is how it is configured by default. But you can change it in regedit, opened via (Windows + R).
|
gharchive/issue
| 2018-04-13T10:10:36 |
2025-04-01T06:40:19.472231
|
{
"authors": [
"JoshuaFry",
"MrRobotXX",
"RobertoGarridoTrillo",
"Roibal",
"ShaikAnsarBasha",
"ShervinDD",
"Technorocker",
"ale316",
"amitkalo",
"hardkind",
"i3wangyi",
"madranet",
"mujeebishaque",
"neiromendez",
"normanlmfung",
"reuniware",
"wiseinvoker"
],
"repo": "sammchardy/python-binance",
"url": "https://github.com/sammchardy/python-binance/issues/249",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
424619441
|
Add script to test rubyfmt against any Git repo.
This commit adds a script to run all of the Ruby files in some external Git repo through rubyfmt, only outputting errors and the paths to the files where they were encountered.
The purpose of the script is to make it easier to throw a whole bunch of code at rubyfmt, so that we can more easily find bugs and uncovered edge-cases.
The issues and PRs I've opened so far were found by running a bunch of thoughtbot repos through rubyfmt (so far: gitsh, factory_bot, and an internal Rails app called Hub), and I've found enough stuff that it seemed useful to turn the awful shell one-liner I was using into a properly reusable script.
Sure why not.
|
gharchive/pull-request
| 2019-03-24T14:31:05 |
2025-04-01T06:40:19.474804
|
{
"authors": [
"georgebrock",
"samphippen"
],
"repo": "samphippen/rubyfmt",
"url": "https://github.com/samphippen/rubyfmt/pull/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
117893139
|
Check if relationship exists before trying to persist
When saving a model with a null relationship, mirage throws an error. This allows nullable relationships using Mirage.Model. I'm not sure how to test this change.
thank you!
|
gharchive/pull-request
| 2015-11-19T19:47:18 |
2025-04-01T06:40:19.479118
|
{
"authors": [
"samselikoff",
"tim-evans"
],
"repo": "samselikoff/ember-cli-mirage",
"url": "https://github.com/samselikoff/ember-cli-mirage/pull/399",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
818355591
|
iOS Catalyst: navigating to first story from the keyboard does not hide the feeds pane
On first launch on iPad, press the 'j' key; you will see something like this.
Potentially related — if you’re on the second story and use ‘k’ to move to the first story, the feeds pane shows itself even if it had been hidden previously.
|
gharchive/issue
| 2021-03-01T00:23:36 |
2025-04-01T06:40:19.497412
|
{
"authors": [
"nriley"
],
"repo": "samuelclay/NewsBlur",
"url": "https://github.com/samuelclay/NewsBlur/issues/1423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
85484157
|
Android: Social, Social, Social
This patch not only implements the new reply-to-share-without-comment UI, it also fixes a bunch of old bugs related to social (initially triaged in #643), including but not limited to:
Broken button labels for already-shared stories
Inconsistent use of "share" vs. platform "send to"
New comments did not always appear until leaving reading view and performing full refresh
New comment replies did not always appear until leaving reading view and performing full refresh
Friend comments did not appear if public comments were present and enabled
Shares could not be performed offline
Replies could not be performed offline
Creation and removal of favourites could not be performed offline
Comments and replies on stories read via Global Shared Stories leaked into storage
All comment, reply, and favourite actions had long UI lag
All comment, reply, and favourite actions had massive memory churn
Replies other than first reply on a comment never showed
Un-shared stories never disappeared from a user's share list
Additional improvements and bugfixes:
Code cleanup via use of Butterknife to bind views and inject listeners
Fixed unread search order
Improved instrumentation of network latency
Removal of orphaned resources
Crash fixes
Fixed bleeding of sync metadata after account switch (would keep downloading stories/images for old account)
Ready for Beta!
Will release as v4.3.0b3. I think we keep skipping versions, so I want to keep it consistent.
What else did we have wrapped up in 4.3.0? I'm going to write release notes.
Looking at the commit logs:
Memory use reduction
"Read Stories" view
Modern http lib
Many bugfixes
Nice, this is a great release.
I uninstalled and I'm still getting a crash on startup.
06-05 11:48:21.933 7481-7481/com.newsblur E/AndroidRuntime﹕ FATAL EXCEPTION: main
Process: com.newsblur, PID: 7481
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.newsblur/com.newsblur.activity.Login}: java.lang.NullPointerException: Attempt to invoke virtual method 'void android.widget.EditText.setOnEditorActionListener(android.widget.TextView$OnEditorActionListener)' on a null object reference
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2325)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2387)
at android.app.ActivityThread.access$800(ActivityThread.java:151)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5254)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'void android.widget.EditText.setOnEditorActionListener(android.widget.TextView$OnEditorActionListener)' on a null object reference
at com.newsblur.fragment.LoginRegisterFragment.onCreateView(LoginRegisterFragment.java:41)
at android.app.Fragment.performCreateView(Fragment.java:2053)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:894)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1067)
at android.app.BackStackRecord.run(BackStackRecord.java:834)
at android.app.FragmentManagerImpl.execPendingActions(FragmentManager.java:1452)
at android.app.Activity.performStart(Activity.java:6005)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2288)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2387)
at android.app.ActivityThread.access$800(ActivityThread.java:151)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5254)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
Device and OS version?
Also, uninstall and try this one, just to see if builds are going differently:
https://github.com/dosiecki/dosieckihome/blob/master/NewsBlur-debug.apk?raw=true
Happened on both 5.0 and 5.1. I'd rather not side-load an APK. Any idea on that error?
I think your build environment is over-stripping classes. Using Android Studio?
Yep.
I've tried a clean rebuild and uninstalling repeatedly. Still getting a crash on boot, and now logcat doesn't even show any output (no filter).
Yeah, confirmed that Android Studio is botching the build. Trying to figure out what it is up to that the raw tools aren't.
Released v4.3.0 to production.
Looking pretty good so far. Have already patched a few little bugs that were missed in Beta, will let it soak for another few hours and send along a PR. (if only we had more active Beta users!)
|
gharchive/pull-request
| 2015-06-05T09:39:37 |
2025-04-01T06:40:19.507815
|
{
"authors": [
"dosiecki",
"samuelclay"
],
"repo": "samuelclay/NewsBlur",
"url": "https://github.com/samuelclay/NewsBlur/pull/695",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
566398348
|
Question about eel.spawn
Describe the problem
I noticed that eel.spawn only properly creates multiple threads when no arguments are passed. Is this supposed to happen or am I doing something wrong?
Code snippet(s)
def infiniteloop():
    x = 1
    if x == 1:
        print("one")
        eel.sleep(2)
        infiniteloop()

def infiniteloop2():
    x = 1
    if x == 1:
        print("two")
        eel.sleep(3)
        infiniteloop2()

eel.spawn(infiniteloop)
eel.spawn(infiniteloop2)
^^^ This properly prints out "one" and "two"
def infiniteloop(var1):
    x = 1
    if x == 1:
        print(var1)
        eel.sleep(2)
        infiniteloop(var1)

def infiniteloop2(var2):
    x = 1
    if x == 1:
        print(var2)
        eel.sleep(3)
        infiniteloop2(var2)

var1 = 'one'
var2 = 'two'
eel.spawn(infiniteloop(var1))
eel.spawn(infiniteloop2(var2))
^^^ This does not start infiniteloop2
Any ideas why?
Desktop (please complete the following information):
Browser: Chrome
I'm stupid. Just put eel.spawn(infiniteloop, var): passing the function object (rather than calling it) lets spawn run the whole loop in its own green thread.
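For anyone landing here later, a minimal sketch of the working pattern: pass the function object and its arguments separately, and use a while loop instead of recursion so Python's recursion limit is never hit.

import eel

def infiniteloop(word):
    while True:
        print(word)
        eel.sleep(2)   # yields control so other spawned tasks get a turn

def infiniteloop2(word):
    while True:
        print(word)
        eel.sleep(3)

eel.spawn(infiniteloop, 'one')    # note: NOT eel.spawn(infiniteloop('one'))
eel.spawn(infiniteloop2, 'two')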
|
gharchive/issue
| 2020-02-17T16:12:59 |
2025-04-01T06:40:19.515410
|
{
"authors": [
"daniel442li"
],
"repo": "samuelhwilliams/Eel",
"url": "https://github.com/samuelhwilliams/Eel/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2241492590
|
add GATorch
Hi there,
First of all great list, thanks for curating it!
My name is Rover and last year my team and I published a PyTorch library that seamlessly integrates energy measurement hooks that allow users to generate an energy consumption report after training. The main goal of this project is to create more awareness of the energy consumption of model training and give specific insights into the consumption per layer and per pass. This should create additional awareness of architectural problems which can potentially prompt the developer to choose better energy optimisations.
It's not a big project and consists of a few basic features; however, the reason I'm posting it here is that I've not seen any similar projects, and the project recently got some traction on LinkedIn. I'm currently the only active maintainer, but I do plan to continue the work if necessary. What do you think about adding this to the list?
GH: https://github.com/GreenAITorch/GATorch
Blog: https://luiscruz.github.io/course_sustainableSE/2023/p2_hacking_sustainability/g6_GATorch.html
Cheers
Hello @rvandernoort
Thank you for opening up this issue! Really interesting project indeed and GATorch definitely belongs to this list!
I would be very interested in seeing more experiments with this tool; I am kind of curious about the stability of the power measurement with such a small time step. I think CodeCarbon takes punctual measurements every X seconds to reduce the variability of RAPL / NVML; that is probably why you are not able to use it?
|
gharchive/issue
| 2024-04-13T10:19:52 |
2025-04-01T06:40:19.519037
|
{
"authors": [
"rvandernoort",
"samuelrince"
],
"repo": "samuelrince/awesome-green-ai",
"url": "https://github.com/samuelrince/awesome-green-ai/issues/3",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1211567652
|
Fix erroring migrations
Summary
Currently, migrations are failing to run in projects that Bulkrax is used in. This is the error:
PG::DuplicateColumn: ERROR: column "processed_relationships" of relation "bulkrax_importer_runs" already exists
Acceptance Criteria
[ ] Migrations can be run in projects without throwing errors
Implementation Details
Guard statements need to be added to recently added migrations (similar to how they exist in this migration)
https://github.com/samvera-labs/bulkrax/pull/487
|
gharchive/issue
| 2022-04-21T21:24:07 |
2025-04-01T06:40:19.530079
|
{
"authors": [
"ShanaLMoore",
"bkiahstroud"
],
"repo": "samvera-labs/bulkrax",
"url": "https://github.com/samvera-labs/bulkrax/issues/483",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
453611313
|
Add proper macro support
-Added TargetInspector component to read in-game target states.
-Allow for hold transfers.
Controls should behave identical to PS4 version now.
The code for this feature is not the best, but it works well.
I will see if I can improve this over time.
|
gharchive/pull-request
| 2019-06-07T17:08:04 |
2025-04-01T06:40:19.561472
|
{
"authors": [
"crash5band"
],
"repo": "samyuu/TotallyLegitArcadeController",
"url": "https://github.com/samyuu/TotallyLegitArcadeController/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
363372255
|
Typescript Typings
I would like to see if there is any interest or existing effort to add typings for Ember CLI Page Object, to help with TypeScript support and general editor suggestions/autocomplete.
Yea, this would be awesome 😄
Glad to see the ticket.
I've made some progress with definitions recently.
It auto-completes sub-components and attrs:
const a = create({
  scope: '.AComponent',
  b: { ... },
  doSomething: clickable()
});

a.b.isVisible;
a.b.click();
a.doSomething().b.focus();

// it fails:
a.isVisible.b
a.isVisible.click();
a.click().isVisible
There is still lots of work to be done yet.
TS with pojos is problematic. Custom PO methods have this: any by default. Maybe a public typing for components should be provided, like
export { Component } from 'ember-cli-page-object';
const Def = {
  doIt(this: Component) {
  }
};
Will try to come up with some PR.
I'm not a typescript user (yet) but this looks amazing 👏👏
This is incomplete, but this is what I have so far:
// types/ember-cli-page-object/index.d.ts
type Options = {
multiple?: boolean;
};
export function create(page: object): any;
export function collection(selector: string, page: object): any;
export function clickable(selector?: string): any;
export function isVisible(selector?: string, options?: Options): boolean;
export function fillable(selector?: string): any;
export function text(selector?: string): string;
export function count(selector?: string): number;
export function is(selector?: string): boolean;
export function property(selectorOrProperty?: string, selector?: string): string;
export function hasClass(className: string): boolean;
export function isPresent(selector?: string): boolean;
// types/ember-cli-page-object/macros.d.ts
export function getter<T>(fn: () => T): T;
@NullVoxPopuli you might be interested in https://github.com/san650/ember-cli-page-object/pull/458
Ambient types was included to 1.16.0 release, thank you all for the patience!
|
gharchive/issue
| 2018-09-25T00:58:13 |
2025-04-01T06:40:19.569870
|
{
"authors": [
"NullVoxPopuli",
"leondmello",
"ro0gr",
"rtablada",
"san650"
],
"repo": "san650/ember-cli-page-object",
"url": "https://github.com/san650/ember-cli-page-object/issues/426",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2073871837
|
DPL-1043 Change label type for 'LRC Blood Aliquot' to plate
User story
As a user of the scRNA Core Cell Extraction pipeline, I would like the label type for the 'LRC Blood Aliquot' tube to be a plate label rather than a tube label, as we intend to use falcon tubes. It means we only have to have one label printer in the lab rather than two.
Who are the primary contacts for this story
Lesley, Katy
Who is the nominated tester for UAT
Lesley, Abby
Acceptance criteria
To be considered successful the solution must allow:
[ ] 'LRC Blood Aliquot' tube uses a 96-well plate label type.
[ ] Leave a comment somewhere relevant in the code, so that future developers know this intentional behaviour.
[ ] Printing label is tested using the relevant printer
Additional context
This request came out of the training sessions.
I've put the size as medium rather than small because testing label printing stories normally takes a bit of back and forth.
|
gharchive/issue
| 2024-01-10T08:43:52 |
2025-04-01T06:40:19.666813
|
{
"authors": [
"KatyTaylor"
],
"repo": "sanger/limber",
"url": "https://github.com/sanger/limber/issues/1536",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
653091980
|
GPL-560 As a developer I would like to replace the materialised view with a MLWH report
User story
As a developer I would like to replace the materialised view introduced in GPL-544 #117 with a report in the unified warehouse to reduce coupling to the samples_extraction database, increase flexibility of reporting, centralise the report tables, and reduce overall brittleness of the reports.
Who are the primary contacts for this story
James G (Wrote original view)
Laura L
Acceptance criteria
To be considered successful the solution must allow:
[ ] Reporting on all data within the existing view
[ ] Reporting of correct source barcodes for ALL activities
Additional context
The addition of the view was made to fulfil an urgent need to gain insight into Heron samples. However, it has resulted in a fairly complicated query, which is difficult to generalise to all activities. The main difficulty is in ensuring that the report can handle:
Activities in which there are intermediate steps between the input plate and output
Activities where we are dealing with anything other than a single input mapped to a single output
Multiple different asset types (especially tubes vs. plate/racks)
The current view handles requirement 1, and has some support for requirement 3; however, it currently does not support requirement 2. To help restrict this impact it confines itself to ONLY the heron extraction pipelines, which at time of writing have other issues confining them to 1->1 transfers.
Meanwhile we cheat on requirement 1, essentially extracting the source of the earliest transfer associated with the activity; this avoids the need for recursive queries, but prevents us simultaneously delivering requirement 2.
Meanwhile, a report in the application itself will be able to make use of internal application logic, and potentially handle recursive queries programmatically.
The view also runs counter to the general strategy of building reports of the warehouse and isolating application databases within the applications themselves.
Doing a bit of a maintenance pass before I get started, as many of the dependencies are quite out of date now.
Managed to upgrade to ruby 2.6.6. Upgrade to 2.7.2 blocked by google_hash gem
Starting to build messenger. Got most fields sorted, just need to refresh my memory about what was needed for the various source-destination relationships.
|
gharchive/issue
| 2020-07-08T08:18:52 |
2025-04-01T06:40:19.672519
|
{
"authors": [
"JamesGlover"
],
"repo": "sanger/samples_extraction",
"url": "https://github.com/sanger/samples_extraction/issues/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2390900752
|
The Great Prettier Linting
When Prettier was updated to version 3, a config was required to specify any plugins, such as https://github.com/prettier/plugin-ruby, that were used. This PR retrospectively adds this change, along with the subsequent linting.
Changes proposed in this pull request
Specify plugin-ruby in prettier config file
Add Ruby dependencies for plugin-ruby
Lint codebase
In some cases Prettier and Rubocop were fighting over whose formatting rules would win. I tried to defer to Rubocop by telling Prettier (STree) to ignore those lines, but in some cases the ignores failed, so I told Rubocop to ignore them instead. I think it is more important that linting is re-enabled soon than that the style be completely consistent.
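For context, the Prettier v3 change amounts to listing the plugin explicitly in the config, roughly like this (the exact config file name used in this repo is an assumption, e.g. .prettierrc):

{
  "plugins": ["@prettier/plugin-ruby"]
}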
Related
See #4089 for the Rubocop equivalent of this PR.
Instructions for Reviewers
[All PRs] - Confirm PR template filled
[Feature Branches] - Review code
[Production Merges to main]
- Check story numbers included
- Check for debug code
- Check version
Note that GitHub is hiding some of my comments.
|
gharchive/pull-request
| 2024-07-04T13:36:47 |
2025-04-01T06:40:19.677038
|
{
"authors": [
"StephenHulme",
"sdjmchattie"
],
"repo": "sanger/sequencescape",
"url": "https://github.com/sanger/sequencescape/pull/4187",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2509891188
|
The described method of testing the plugin locally does not work as documented.
npm run link-watch will correctly start watching for changes to attempt to republish.
However, even though changes are detected, it will always state "Package content has not changed, skipping publishing", which makes it impossible for your test Sanity instance to receive those changes.
Even if you manually build and publish in the plugin, you then need to completely remove your node_modules folder and reinstall in your test environment to get a single change to come through.
Yalc just doesn't seem to work at all.
The issue was caused by using pnpm.
|
gharchive/issue
| 2024-09-06T08:55:55 |
2025-04-01T06:40:19.692131
|
{
"authors": [
"WarboxLiam"
],
"repo": "sanity-io/plugin-kit",
"url": "https://github.com/sanity-io/plugin-kit/issues/273",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1275017873
|
TELEBOT ERROR
always getting this error anytime i run the program
raceback (most recent call last):
File "/home/christopalace/z-cam/zcam.py", line 3, in
from telebot import types
ModuleNotFoundError: No module named 'telebot'
Please wait for a day, I'll resolve all errors and update to you.
Follow the instructions given (go through readme.md again). You have not installed the telebot library.
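For reference, the telebot module is provided by the pyTelegramBotAPI package on PyPI, so installing the project's requirements (or running the line below) should resolve the ModuleNotFoundError:

pip install pyTelegramBotAPI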
|
gharchive/issue
| 2022-06-17T13:14:21 |
2025-04-01T06:40:19.701832
|
{
"authors": [
"Christopalace01",
"sankethj"
],
"repo": "sankethj/z-cam",
"url": "https://github.com/sankethj/z-cam/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
235798674
|
Remove print
Remove unnecessary print.
It causes a weird issue on prod, btw :) (it looks like it can't access stdout and throws an error)
Is it possible to redeploy 0.9.7 with this fix?
Coverage decreased (-0.01%) to 81.545% when pulling a0a5cff76ad8df72ff566900e81a66729a69cd2f on andreyrusanov:remove_print into b0023832764e96a5649fbda7932687a254ac04a5 on sanoma:develop.
|
gharchive/pull-request
| 2017-06-14T08:19:15 |
2025-04-01T06:40:19.705060
|
{
"authors": [
"andreyrusanov",
"coveralls"
],
"repo": "sanoma/django-arctic",
"url": "https://github.com/sanoma/django-arctic/pull/217",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
198672495
|
[GST-2369] Enable sansible users_and_groups to create system users
We are building some services that have data that persists between instances on an auxiliary EBS volume. When the service has a UID/GID that is mixed in with the regular users, the id can change between instances when new regular users are added.
If the service account is created outside of the user accounts uid/gid range, there is far less chance that the service uid/gid will differ between instances.
LGTM 👍
LGTM :+1:
|
gharchive/pull-request
| 2017-01-04T10:13:41 |
2025-04-01T06:40:19.706734
|
{
"authors": [
"lobsterdore",
"quater",
"sreddel"
],
"repo": "sansible/users_and_groups",
"url": "https://github.com/sansible/users_and_groups/pull/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1223187411
|
[Feature Request] Compare against some faster Python web frameworks
Hi there 👋 ,
It’d be interesting to see Vibora and Japronto in your performance comparisons.
They’re not really maintained, but their performance is in a different class from things like uvicorn/django/flask.
Hi @jordangarside ,
I just saw these projects. I will definitely have a look the comparison. Thank you! :D
Hi @jordangarside ,
I tried installing these projects on my machine but I was unable to get them up and running. I remember trying with 3.10 and 3.9 for sure and maybe even 3.8 .
Which Python version do you use them with?
Yeah it looks like I also had issues with the pypi installs for both,
For Vibora try this:
pip uninstall vibora
pip install Cython
git clone https://github.com/vibora-io/vibora.git
cd vibora
python build.py
python setup.py install
For Japronto try this:
pip install https://github.com/squeaky-pl/japronto/archive/master.zip
Thanks @jordangarside . I will give it a go :D
|
gharchive/issue
| 2022-05-02T18:12:52 |
2025-04-01T06:40:19.714746
|
{
"authors": [
"jordangarside",
"sansyrox"
],
"repo": "sansyrox/robyn",
"url": "https://github.com/sansyrox/robyn/issues/193",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
787777533
|
Pull request from the second account to the first
Update the original repository with the changes made in the second account.
Accept the pull request
|
gharchive/pull-request
| 2021-01-17T19:32:05 |
2025-04-01T06:40:19.715897
|
{
"authors": [
"santi250574"
],
"repo": "santiortells/MOOC_git_mod7-cal_2com",
"url": "https://github.com/santiortells/MOOC_git_mod7-cal_2com/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
232548490
|
Security Key Invalid Error #19
Security Key Invalid errors do not happen anymore, because
aws-es-kibana now supports both the URL and hostname syntaxes
for the cluster endpoint.
Is this PR likely to be merged or is there more work required for a fix?
|
gharchive/pull-request
| 2017-05-31T12:12:41 |
2025-04-01T06:40:19.716996
|
{
"authors": [
"nlv09165",
"rationull"
],
"repo": "santthosh/aws-es-kibana",
"url": "https://github.com/santthosh/aws-es-kibana/pull/23",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
276181797
|
multiplying tags
Every time I uninstall/disable the package, it multiplies the number of wraps that get created. After trying this several times in search of a solution, it does this:
<<<<<p>p</p>>p</<p>p</p>>>p</<<p>p</p>>p</<p>p</p>>>>p</<<<p>p</p>>p</<p>p</p>>>p</<<p>p</p>>p</<p>p</p>>>>>test</<<<<p>p</p>>p</<p>p</p>>>p</<<p>p</p>>p</<p>p</p>>>>p</<<<p>p</p>>p</<p>p</p>>>p</<<p>p</p>>p</<p>p</p>>>>>
Each time I undo it removes the latest one only, so it would take many undos to get rid of all these.
This was triggered by an attempt to change the shortcut keys, which didn't work as expected, so I attempted to uninstall/reinstall to get back to the default settings, which is when this started.
+1 this issue has been happening to me, too. Nearly 6 months now.
Multiple cursors seem to trigger this as well.
|
gharchive/issue
| 2017-11-22T19:30:08 |
2025-04-01T06:40:19.719603
|
{
"authors": [
"danferth",
"jgeibrosch",
"josheche"
],
"repo": "sanusart/atom-wrap-in-tag",
"url": "https://github.com/sanusart/atom-wrap-in-tag/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2467013633
|
Extension does not load on VS Code on Win Server 2022
I have tried both installing through the extension marketplace and building it from Git, but once everything is downloaded and installed, the placeholder for the icon just stays blank with a little blue clock next to it.
I have tried older versions (all of them actually) and they all produce the same issue. Not sure if I am missing another extension or if the server is messing with it, but any help would be greatly appreciated.
Is this only happening for you on Win Server 2022, or are you seeing this in other environments as well?
I only have VS code on the one environment. I'll install it on a different computer tomorrow to try and see if the problem replicates off the server.
This works well on the version of VS Code I spun up on a regular desktop, so it is something server-side that is causing the issue.
I will look to see if I can fix it, but any suggestions would be welcome. Thanks for taking the time to look at this.
It's likely the version of VS Code you were using was outdated; you need to be on at least 1.9 for Claude Dev to work as expected. Closing, but feel free to re-open if it still doesn't work after updating to the latest.
|
gharchive/issue
| 2024-08-15T00:07:41 |
2025-04-01T06:40:19.733235
|
{
"authors": [
"joeyg6393",
"saoudrizwan"
],
"repo": "saoudrizwan/claude-dev",
"url": "https://github.com/saoudrizwan/claude-dev/issues/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
151603713
|
fix issue with hide, bump to 1.0.10
fix issue with hide and bump to 1.0.10
:+1:
|
gharchive/pull-request
| 2016-04-28T10:29:51 |
2025-04-01T06:40:19.734532
|
{
"authors": [
"devinea",
"eouin"
],
"repo": "sapbuild/angular-szn-autocomplete-build",
"url": "https://github.com/sapbuild/angular-szn-autocomplete-build/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
305851532
|
Performance problem after updating the version
Hi Everyone,
I updated the abap2xlsx version via abapGit, but now I have a huge performance issue. Nothing changed; I am still using the BIND_TABLE method with 16K rows and 60 columns.
Last time it was taking 20-35 seconds with the huge file writer, but now it is nearly 150 seconds.
I tried SET_TABLE, same issue.
Any advice?
Can you try to do a trace with se30 to identify the bottleneck?
Above are the SAT results; somehow the GET_ROW method is taking 91% of the total time.
Thanks,
I remember a similar issue in the past; I am trying to recall what the final outcome was.
Thank you :D Please try to remember; I need to transport the requests to the test system very soon. I noticed that after the version update the system is using the object_collection_iteration method to get the next row. Inside the iteration method it uses a standard table; maybe this can cause the performance issue. I didn't see any other strange issue.
OK, I think I found the issue.
First of all, that iterator doesn't look necessary; we could store the rows in a global hash table. Each time the iterator is instantiated, the table keeps moving from one variable to another.
Second, the program keeps doing the same search for every line. Let's say there is already a row at index 10; when it now checks whether there is a value at index 11, it starts from the beginning every time.
So at row number 2000 it does 2000 checks just to see whether there is any value. But if there were a state variable like last_value_cell, it could start the check directly from 2000.
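To make the idea concrete, here is a minimal sketch in Scala (the real code is ABAP; the names here are made up for illustration):
final case class Cell(row: Int, value: String)

class RowScanner(cells: Vector[Cell]) {
  // Remember where the previous lookup stopped instead of rescanning from 0.
  private var cursor = 0

  // Assumes rows are requested in ascending order, as the writer does here.
  def firstCellInRow(row: Int): Option[Cell] = {
    while (cursor < cells.length && cells(cursor).row < row) cursor += 1
    if (cursor < cells.length && cells(cursor).row == row) Some(cells(cursor))
    else None
  }
}
With the cursor, each cell is visited once overall (O(n)) instead of once per requested row (O(n^2)).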
Hi @bilencekic, I'm having the same problem. Did you just change the GET_ROW method with that code?
@bilencekic / @MLDOliveira If you can provide a fix to the project it would be really great!
@bilencekic Could you commit your correction to the project?
@ivanfemia @MLDOliveira Alright, I will commit. I changed 2 classes in total; I will commit ASAP.
@MLDOliveira Have you tested after the latest changes? How is the performance?
Could we close this issue? Thanks.
|
gharchive/issue
| 2018-03-16T09:12:36 |
2025-04-01T06:40:19.745439
|
{
"authors": [
"MLDOliveira",
"bilencekic",
"ivanfemia",
"sandraros"
],
"repo": "sapmentors/abap2xlsx",
"url": "https://github.com/sapmentors/abap2xlsx/issues/527",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
67943518
|
Clarifies pin numbers
Resolves issue #4 by clarifying which pins can be used.
Thanks! :+1:
|
gharchive/pull-request
| 2015-04-12T18:51:46 |
2025-04-01T06:40:19.750007
|
{
"authors": [
"sarfata",
"spuder"
],
"repo": "sarfata/pi-blaster.js",
"url": "https://github.com/sarfata/pi-blaster.js/pull/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
194775196
|
Dependency on lodash.isarray
I saw that 79e86f32ce3661569748164d26e2f2667a79699a introduced a dependency on lodash.isarray among others.
Is that really necessary? Any particular reason to not use the standard Array.isArray()?
Array.isArray() already appears in other parts of the code, and we're getting this message on install:
warning node-sass > lodash.isarray@4.0.0: This package is deprecated. Use Array.isArray.
Array.isArray is not available until Node 4.6.2. We support back to Node 0.10.
My apologies you are correct. Fixed by #1830.
|
gharchive/issue
| 2016-12-10T15:49:49 |
2025-04-01T06:40:19.796331
|
{
"authors": [
"daltones",
"xzyfer"
],
"repo": "sass/node-sass",
"url": "https://github.com/sass/node-sass/issues/1829",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
74057072
|
could not install 3.0 on windows
Got this error on Windows with Node 0.12.2, npm 2.7.4, and msysGit 1.9.5.1.
npm ERR! Cloning into bare repository 'C:\Users\cfunk\AppData\Roaming\npm-cache
_git-remotes\git-github-com-am11-pangyp-git-a953a761'...
npm ERR! Permission denied (publickey).
npm ERR! fatal: Could not read from remote repository.
full trace
C:\Users\cfunk>npm install node-sass@3.0
npm ERR! git -c core.longpaths=true config --get remote.origin.url
npm WARN addRemoteGit Error: Command failed: git -c core.longpaths=true config -
-get remote.origin.url
npm WARN addRemoteGit
npm WARN addRemoteGit at ChildProcess.exithandler (child_process.js:751:12)
npm WARN addRemoteGit at ChildProcess.emit (events.js:110:17)
npm WARN addRemoteGit at maybeClose (child_process.js:1015:16)
npm WARN addRemoteGit at Socket. (child_process.js:1183:11)
npm WARN addRemoteGit at Socket.emit (events.js:107:17)
npm WARN addRemoteGit at Pipe.close (net.js:485:12)
npm WARN addRemoteGit resetting remote C:\Users\cfunk\AppData\Roaming\npm-cache
_git-remotes\git-github-com-am11-pangyp-git-7eb24aaa because of error: { [Error
: Command failed: git -c core.longpaths=true config --get remote.origin.url
npm WARN addRemoteGit ]
npm WARN addRemoteGit killed: false,
npm WARN addRemoteGit code: 1,
npm WARN addRemoteGit signal: null,
npm WARN addRemoteGit cmd: 'git -c core.longpaths=true config --get remote.ori
gin.url' }
npm ERR! git -c core.longpaths=true clone --template=C:\Users\cfunk\AppData\Roam
ing\npm-cache_git-remotes_templates --mirror git://github.com/am11/pangyp.git
C:\Users\cfunk\AppData\Roaming\npm-cache_git-remotes\git-github-com-am11-pangyp
-git-7eb24aaa
npm ERR! git -c core.longpaths=true config --get remote.origin.url
npm WARN addRemoteGit Error: Command failed: git -c core.longpaths=true config -
-get remote.origin.url
npm WARN addRemoteGit
npm WARN addRemoteGit at ChildProcess.exithandler (child_process.js:751:12)
npm WARN addRemoteGit at ChildProcess.emit (events.js:110:17)
npm WARN addRemoteGit at maybeClose (child_process.js:1015:16)
npm WARN addRemoteGit at Process.ChildProcess._handle.onexit (child_process.
js:1087:5)
npm WARN addRemoteGit resetting remote C:\Users\cfunk\AppData\Roaming\npm-cache
_git-remotes\git-github-com-am11-pangyp-git-a953a761 because of error: { [Error
: Command failed: git -c core.longpaths=true config --get remote.origin.url
npm WARN addRemoteGit ]
npm WARN addRemoteGit killed: false,
npm WARN addRemoteGit code: 1,
npm WARN addRemoteGit signal: null,
npm WARN addRemoteGit cmd: 'git -c core.longpaths=true config --get remote.ori
gin.url' }
npm ERR! git -c core.longpaths=true clone --template=C:\Users\cfunk\AppData\Roam
ing\npm-cache_git-remotes_templates --mirror git@github.com:am11/pangyp.git C:
\Users\cfunk\AppData\Roaming\npm-cache_git-remotes\git-github-com-am11-pangyp-g
it-a953a761
npm ERR! git clone --template=C:\Users\cfunk\AppData\Roaming\npm-cache_git-remo
tes_templates --mirror git@github.com:am11/pangyp.git C:\Users\cfunk\AppData\Ro
aming\npm-cache_git-remotes\git-github-com-am11-pangyp-git-a953a761: Cloning in
to bare repository 'C:\Users\cfunk\AppData\Roaming\npm-cache_git-remotes\git-gi
thub-com-am11-pangyp-git-a953a761'...
npm ERR! git clone --template=C:\Users\cfunk\AppData\Roaming\npm-cache_git-remo
tes_templates --mirror git@github.com:am11/pangyp.git C:\Users\cfunk\AppData\Ro
aming\npm-cache_git-remotes\git-github-com-am11-pangyp-git-a953a761: Permission
denied (publickey).
npm ERR! git clone --template=C:\Users\cfunk\AppData\Roaming\npm-cache_git-remo
tes_templates --mirror git@github.com:am11/pangyp.git C:\Users\cfunk\AppData\Ro
aming\npm-cache_git-remotes\git-github-com-am11-pangyp-git-a953a761: fatal: Cou
ld not read from remote repository.
npm ERR! git clone --template=C:\Users\cfunk\AppData\Roaming\npm-cache_git-remo
tes_templates --mirror git@github.com:am11/pangyp.git C:\Users\cfunk\AppData\Ro
aming\npm-cache_git-remotes\git-github-com-am11-pangyp-git-a953a761:
npm ERR! git clone --template=C:\Users\cfunk\AppData\Roaming\npm-cache_git-remo
tes_templates --mirror git@github.com:am11/pangyp.git C:\Users\cfunk\AppData\Ro
aming\npm-cache_git-remotes\git-github-com-am11-pangyp-git-a953a761: Please mak
e sure you have the correct access rights
npm ERR! git clone --template=C:\Users\cfunk\AppData\Roaming\npm-cache_git-remo
tes_templates --mirror git@github.com:am11/pangyp.git C:\Users\cfunk\AppData\Ro
aming\npm-cache_git-remotes\git-github-com-am11-pangyp-git-a953a761: and the re
pository exists.
npm ERR! Windows_NT 6.1.7601
npm ERR! argv "C:\Program Files\nodejs\\node.exe" "C:\Program Files\nodejs
\node_modules\npm\bin\npm-cli.js" "install" "node-sass@3.0"
npm ERR! node v0.12.2
npm ERR! npm v2.7.4
npm ERR! code 128
npm ERR! Command failed: git -c core.longpaths=true clone --template=C:\Users\cf
unk\AppData\Roaming\npm-cache_git-remotes_templates --mirror git@github.com:am
11/pangyp.git C:\Users\cfunk\AppData\Roaming\npm-cache_git-remotes\git-github-c
om-am11-pangyp-git-a953a761
npm ERR! Cloning into bare repository 'C:\Users\cfunk\AppData\Roaming\npm-cache
_git-remotes\git-github-com-am11-pangyp-git-a953a761'...
npm ERR! Permission denied (publickey).
npm ERR! fatal: Could not read from remote repository.
npm ERR!
npm ERR! Please make sure you have the correct access rights
npm ERR! and the repository exists.
npm ERR!
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR! https://github.com/npm/npm/issues
npm ERR! Please include the following file with any support request:
npm ERR! C:\Users\cfunk\npm-debug.log
npm install pangyp works, but npm install of the pangyp@a11.... version you are set to does not.
|
gharchive/issue
| 2015-05-07T17:05:00 |
2025-04-01T06:40:19.817320
|
{
"authors": [
"chrisjfunk"
],
"repo": "sass/node-sass",
"url": "https://github.com/sass/node-sass/issues/934",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
173218365
|
Implement SASS_PATH
This is intended to implement the SASS_PATH environment variable.
Fixes #1678.
Tests
I've added two new API tests under .render:
"should check SASS_PATH in the specified order"
"should prefer include path over SASS_PATH"
If you run mocha test/api.js you should see these tests pass.
Manual checking
You can test it manually using the test fixtures as follows:
(lib/vars contains $color: red, and lib-alternate/vars contains $color: orange)
SASS_PATH is picked up
$ fixdir=`pwd`/test/fixtures/sass-path
$ export SASS_PATH=$fixdir/red
$ bin/node-sass $fixdir/index.scss
body {
background: red; }
Earlier paths are preferred
$ export SASS_PATH=$fixdir/orange:$fixdir/red
$ bin/node-sass $fixdir/index.scss
body {
background: orange; }
Specified include-paths still take precedence
$ bin/node-sass $fixdir/index.scss --include-path $fixdir/red
body {
background: red; }
How can a travis build take quite so long? @nschonni is something up? Is there some way for me to start the travis build again?
Looks like OSX jobs got stuck and couldn't even be started.
here's an update from @travisci : https://www.traviscistatus.com/incidents/4mvp857qx8bw
Thanks @saper, I hadn't seen that. It's all passed now. @nschonni, are you happy that I've addressed all your points?
Looks good to me. Would it be possible to squeeze this into one commit?
@saper sure no problem
Nice work everyone. Added this to the next.minor milestone. I believe we have one other PR to land in 3.9.0 also.
Okay commits squashed into 3788c5d9570f2141ccbd6a574af21fcd57d63110.
That's odd, it's failing on a test which has nothing to do with this commit. Any ideas?
Big thank you for the contribution!
No problem. Thanks for all your help with this @saper et al., it was fun! Do you know when the next minor release will be?
|
gharchive/pull-request
| 2016-08-25T14:27:12 |
2025-04-01T06:40:19.824818
|
{
"authors": [
"nottrobin",
"saper",
"xzyfer"
],
"repo": "sass/node-sass",
"url": "https://github.com/sass/node-sass/pull/1680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1775465654
|
Removal of hello-world service deployment verification
Until now, the pre-install report has deployed a hello-world service into the Kubernetes cluster as an additional verification. This feature is being removed because the publicly available google-sample image it relies on is no longer being maintained.
This issue is addressed in Release 2.0.0.
|
gharchive/issue
| 2023-06-26T19:38:43 |
2025-04-01T06:40:19.827196
|
{
"authors": [
"kevinlinglesas"
],
"repo": "sassoftware/viya4-ark",
"url": "https://github.com/sassoftware/viya4-ark/issues/200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
71309952
|
Unit tests for high level views
Currently, the high-level views (adminsAsDiscovery, discoveryAsContent, partnersAsDiscovery, posAsMap, teachersAsDiscovery, and teacherSearchAsPage) lack unit tests. We should add unit tests for those views.
Do you want us to continue to merge off of rc1?
Yeah, I think we'll stick with rc1 until the official launch
Working branch info (for reference): https://github.com/gscoppino/STEM/tree/STEM_52_unit_tests_for_high_level_views
Hey @sathomas, let me pick your brain for a second regarding a test case I'm working on. In a test case for PoisAsMap, I'm attempting to add a Poi to a Pois collection and expecting the DOM to update with a new marker. However, this test fails, as it is unable to find any markers in the DOM.
Here is the code, with the problem statement highlighted:
https://github.com/gscoppino/STEM/blob/STEM_52_unit_tests_for_high_level_views/test/views/poisAsMap.spec.js#L51
If you see anything immediately wrong with the structure of the test or the test fixtures...otherwise, don't worry about it.
Disregard that last comment, checking the code coverage in the browser test made the problem fairly obvious.
When testing partnersAsDiscovery and adminsAsDiscovery with a simple empty element as the starting point, eg. <article id="admins" class="discovery theme-2"></article> the render() function will fail since it attempts to render the PoisAsMap views when the necessary els do not exist yet. This could be fixed by just moving the PoisAsMap instantiations into the render() function, but I'm not sure that's a good idea. Thoughts?
I'm still recovering from (minor) surgery, so I may not be thinking straight, but
If you need a DOM element, you can insert a $scaffolding container in the page. There are some views that already do this, so that can give you a template.
If it's not too challenging, it would be better to test the view independently of other views. To do that, you could use a sinon stub.
Got it, so I should just provide the elements it expects to see in the scaffolding. Thanks!
Understood, I have been avoiding doing so. I'm considering making more use of mocks as well.
A problem I keep coming back to concerns a listener for an event set:searchQuery (to be emitted from the teachers model, which has a searchQuery attribute). However, I don't see this event in the catalog of built-in Backbone events: http://backbonejs.org/#Events-catalog and the source for teachers.js doesn't emit the event manually. Here's the test source:
it('After render, if the teachers model has its search model reset, the searchForm property of this view should be updated and re-rendered.', function() {
this.TeachersAsDiscovery.remove();
var functionSpy = sinon.spy(this.TeachersAsDiscovery, 'renderSearch');
this.TeachersAsDiscovery.initialize(); // re-bind event handler to use spy.
this.TeachersAsDiscovery.render();
functionSpy.reset(); // render makes a call to renderSearch which we don't care for in this test.
this.TeachersAsDiscovery.model.unset('searchQuery', { silent: true });
var newQuery = new Stem.Models.Search({
label: 'Test Label',
placeholder: 'Test Placeholder'
});
this.TeachersAsDiscovery.model.set('searchQuery', newQuery);
functionSpy.callCount.should.equal(1);
this.TeachersAsDiscovery.searchForm.model.should.equal(newQuery);
functionSpy.restore();
});
Yeah, I think that's a bug in the code itself. Should be 'change:searchQuery' instead of 'set:searchQuery'
The test code looks good BTW
Thanks! Making that change to the event listener fixes the problem, and does not break any other existing tests.
Different problem: when testing spotlights for AdminsAsDiscovery, the template which builds OaeAsSpotlightItem will fail if picture isn't defined. The reason for this is that I put up a fake server which returns a response without a picture property. Should I give it a picture property, or should the possible lack of a source be handled in OaeAsSpotlightItem's template? Here's the test I'm writing, for reference:
it('After render, if the spotlight list is populated, it should be shown.', function() {
var baseUrl = Stem.config.oae.protocol + '//' + Stem.config.oae.host + '/api/group/';
var groupUrl = new RegExp(baseUrl + '\d+');
var subgroupUrl = new RegExp(baseUrl = '\d+/members$');
var server = sinon.fakeServer.create()
server.respondWith("GET", groupUrl, [200, { 'Content-Type': 'application/json'}, '{}']);
server.respondWith("GET", subgroupUrl, [200, { 'Content-Type': 'application/json'}, '{}']);
this.AdminsAsDiscovery.model.get('spotlights').add(new Stem.Models.Group());
var $el = this.AdminsAsDiscovery.render().$el;
$el.find('.spotlight-block').hasClass('util--hide').should.be.false();
server.restore();
});
Problem went away. Probably was actually caused by me returning objects instead of arrays (doh!...). Sorry about that. Got the tests working and pushed. Here is what the above test looks like now:
it('After render, if the spotlight list is populated, it should be shown.', function() {
var baseUrl = Stem.config.oae.protocol + '//' + Stem.config.oae.host + '/api/group/';
var subgroupUrl = new RegExp(baseUrl + '.+/members([?]limit=\d+)?');
var server = sinon.fakeServer.create();
server.respondImmediately = true;
server.respondWith("GET", subgroupUrl, [
200,
{ 'Content-Type': 'application/json' },
JSON.stringify([{"profile":{}, "role": "test"}, {"profile":{ "resourceType": "group" }, "role": "test"}])
]);
/* Reset test fixtures */
this.Discovery = new Stem.Models.Discovery();
server.respond();
this.AdminsAsDiscovery = new Stem.Views.AdminsAsDiscovery({
el: this.$Scaffolding.empty(),
model: this.Discovery.get('admins')
});
var $el = this.AdminsAsDiscovery.render().$el;
$el.find('.spotlight-block').hasClass('util--hide').should.be.false();
server.restore();
});
Stephen: While working on discoveryAsContent, I came across something rather unexpected. First of all, the article tags are not closed in the template, which causes rendering issues (I have fixed this on my branch). discoveryAsContent.ejs
Secondly, the template doesn't include discovery-nav or the landing-page-heading. So, if the template were empty, neither of these would be present on the page. Is this desired or something that should be fixed?
I wouldn't sweat the high-level templates too much. They're really only defined as a convenience for testing. The production app doesn't generate the page de novo from templates. Instead, the initial index.html provides the basic "infrastructure" for the page. The JavaScript code then "fills in" the dynamic content where it's appropriate. This approach allows the page to work even for users that don't have JavaScript. (You can try it by disabling JavaScript in your browser and visiting the site.) Obviously, not all of the functionality is available, but the site is (supposed to be) still usable.
It wouldn't hurt to add the closing tags, but there's no need to add unnecessary elements to the templates.
Hey Stephen:
Both Jordan and I are having trouble with tests which involve triggering radio buttons in the browser. Jordan's problem is DiscoveryAsContent child view switching, while mine is TeacherSearchAsPage main view switching. Triggering DOM events simply does not invoke the view functions that are watching for them. I ran into a similar problem with checkboxes, which I got working by triggering a click and a change event on the necessary elements, but this does not work for the radio buttons. This is the only thing blocking us from bringing all view coverage to 100%. We may just settle for triggering the events on the views/models manually if we can't find a solution. Any feedback is appreciated.
Cleaning up test outputs. I adopted a format for the views you assigned us that looks like this:
Would you mind if I formatted the other existing view tests you wrote to look like this?
Suggest moving this issue to https://github.com/Georgia-STEM-Incubator/STEM if it's still desirable
|
gharchive/issue
| 2015-04-27T15:01:35 |
2025-04-01T06:40:19.865197
|
{
"authors": [
"gscoppino",
"jcarroll2007",
"sathomas"
],
"repo": "sathomas/STEM",
"url": "https://github.com/sathomas/STEM/issues/53",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2407500022
|
refactor: remove redundant role
What
The <article> tag already has an implicit role defined by the HTML specification, so we do not need to add an ARIA role attribute.
Reference:
https://html-validate.org/rules/no-redundant-role.html
https://web.dev/learn/accessibility/aria-html#aria_in_html
Screenshot
After the change:
Thanks again!
|
gharchive/pull-request
| 2024-07-14T16:15:52 |
2025-04-01T06:40:19.949275
|
{
"authors": [
"87xie",
"satnaing"
],
"repo": "satnaing/astro-paper",
"url": "https://github.com/satnaing/astro-paper/pull/323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
276628736
|
cor2.isThreadDead() is true after kaguya::LuaThread cor2 = state.newThread();
kaguya::LuaThread cor = state.newThread();
state("corfun = function(arg)"
"coroutine.yield(arg) "
"coroutine.yield(arg2) "
"coroutine.yield(arg3) "
"return arg*4 "
" end");//define corouine function
kaguya::LuaFunction corfun = state["corfun"];//lua function get
//exec coroutine with function and argment
std::cout << int(cor(corfun, 3)) << std::endl;//3
std::cout << int(cor()) << std::endl;//6
//resume template argument is result type
std::cout << cor.resume() << std::endl;//9
std::cout << int(cor()) << std::endl;//12
kaguya::LuaThread cor2 = state.newThread();
//3,6,9,12,
while(!cor2.isThreadDead()) <=====cor2.isThreadDead() is true
{
std::cout << cor2.resume<int>(corfun, 3) << ",";
}
The coroutine returns "dead" if no function is assigned,
because the two cases cannot be distinguished.
Can you try this?
state("corfun = function(arg)"
"coroutine.yield(arg) "
"coroutine.yield(arg2) "
"coroutine.yield(arg3) "
"return arg*4 "
" end");//define corouine function
kaguya::LuaFunction corfun = state["corfun"];//lua function get
kaguya::LuaThread cor2 = state.newThread(corfun);
//3,6,9,12,
while(!cor2.isThreadDead())
{
std::cout << cor2.resume<int>(3) << ",";
}
Hi, it works.
Could you please help me with an "attempt to yield across C-call boundary" error?
LuaJIT 2.1.0-beta3 + kaguya:
1. Create a coroutine in C by newThread.
2. Run a Lua function with coroutine.yield().
3. Error:
attempt to yield across C-call boundary
|
gharchive/issue
| 2017-11-24T14:14:30 |
2025-04-01T06:40:19.955159
|
{
"authors": [
"guijun",
"satoren"
],
"repo": "satoren/kaguya",
"url": "https://github.com/satoren/kaguya/issues/78",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2012572731
|
Diktat 2.0 doesn't apply diktat-analysis.yml in sub-project
Tested with MAGIC_NUMBER in frontend using gradle plugin
Actually, the configuration is invalid:
- name: MAGIC_NUMBER
enabled: true
# reduces speed of development on the FE
# will remove it by for now
- name: MAGIC_NUMBER
enabled: false
It contains two configurations for the same rule: the first enables it, the second disables it.
|
gharchive/issue
| 2023-11-27T15:47:46 |
2025-04-01T06:40:20.033864
|
{
"authors": [
"nulls"
],
"repo": "saveourtool/diktat",
"url": "https://github.com/saveourtool/diktat/issues/1827",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2651653366
|
The year is 2139?
Please use some other year, and maybe add a month for fun. Let people think and realize that if there was some 2140 prediction, it might have been current at the time it was made, but it is being slightly adjusted with every mined block.
See my always current prediction*.
Search for the word 2139 in this repository and you will find the files in the i18n directory: https://github.com/search?q=repo%3Asaving-satoshi%2Fsaving-satoshi%202139&type=code
* In the top-right corner of my prediction there is a short code.
Hi @carnhofdaki thanks for filing this issue! I notice we have a discrepancy in chapter 10 where we say the year is 2140, but in chapter 1 it's 2139. Is this what this ticket is referring to?
This project was started over two years ago so you are correct that that prediction has since changed :)
|
gharchive/issue
| 2024-11-12T09:53:15 |
2025-04-01T06:40:20.040696
|
{
"authors": [
"carnhofdaki",
"satsie"
],
"repo": "saving-satoshi/saving-satoshi",
"url": "https://github.com/saving-satoshi/saving-satoshi/issues/1172",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2338940008
|
Chapter 3 help pages
Because chapter 3 has little actual user input, I think these help pages will be more resource intensive. Alternatively, we could remove the help pages for this chapter, as I assume it will be considered the "easy" mode, and we can possibly add lesson content that actually requires help pages in the future.
Closing as I am satisfied with the current resources as they exist now.
|
gharchive/issue
| 2024-06-06T18:46:59 |
2025-04-01T06:40:20.042007
|
{
"authors": [
"benalleng"
],
"repo": "saving-satoshi/saving-satoshi",
"url": "https://github.com/saving-satoshi/saving-satoshi/issues/970",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
957178172
|
Missing Icons for some Techs and libs
Describe the bug
In the Choose Your Icon part of the Icon section, when we choose certain techs like Electron or NPM, the icon does not show up in the image.
To Reproduce
Steps to reproduce the behavior:
Go to 'https://slickr.vercel.app/app'.
Click on 'Icon'.
Select 'electron' or 'NPM' from Choose Your Icon Dropdown.
See bottom right of the Image.
Expected behavior
The icon should be added to the image.
Screenshots
Desktop (please complete the following information):
OS: Windows 10 Home 2004
Browser Brave (Chromium)
Version 92
Yeah, I have found why it happens. It is because Slickr uses the devicons library, and certain icons do not have a plain version. That's the reason.
Ok..🙂🙂
|
gharchive/issue
| 2021-07-31T05:52:13 |
2025-04-01T06:40:20.046343
|
{
"authors": [
"Ajay-056",
"saviomartin"
],
"repo": "saviomartin/slickr",
"url": "https://github.com/saviomartin/slickr/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
947637667
|
Proposal: ability to specify subdirectory containing vim plugin
Some color schemes have their vim plugins within a subdirectory of the repo. For instance, vim-plug has the rtp option, which allows you to specify the subdirectory containing the vim plugin.
This seems to be a duplicate of #10.
This seems to be a duplicate of #10.
I read #10, but I found another method for this, maybe also work in Windows?
https://stackoverflow.com/questions/600079/how-do-i-clone-a-subdirectory-only-of-a-git-repository/52269934#52269934
Duplicate of #10
I figured out a workaround for this that works for my use cases.
First, install the main repository with some as = alias, then symlink or copy/install the desired subdirectory to the desired location using build =. Example:
require("paq") {
p {"vlime/vlime", as = "_vlime", build = "ln -fnrs vim ../vlime"}
}
You could use install or rsync here instead of ln, as desired.
(I know that this isn't actually necessary for Vlime anymore, but it's the first example I came up with).
|
gharchive/issue
| 2021-07-19T13:10:28 |
2025-04-01T06:40:20.075648
|
{
"authors": [
"bR3iN",
"gwerbin",
"nanozuki",
"rcoconnor",
"savq"
],
"repo": "savq/paq-nvim",
"url": "https://github.com/savq/paq-nvim/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
485566983
|
Can I unify the style of response in atreugo?
In the restful api project, i try to unify the style of response.
If the middlewares or filters return err, actx.Error() will response the body, i can't modify to json response.
I want
HTTP/1.1 401 Unauthorized
Server: atreugo
Date: Tue, 27 Aug 2019 03:38:40 GMT
Content-Type: application/json
Content-Length: 28
{"code":401,"msg":"Unauthorized"}
But when middlewares or filters return an err, the response is
HTTP/1.1 401 Unauthorized
Server: atreugo
Date: Tue, 27 Aug 2019 03:39:37 GMT
Content-Type: text/plain
Content-Length: 12
Unauthorized
utils.go line 38 ctx.Error(err.Error(), fasthttp.StatusInternalServerError)
func viewToHandler(view View) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
actx := acquireRequestCtx(ctx)
if err := view(actx); err != nil {
ctx.Error(err.Error(), fasthttp.StatusInternalServerError)
}
releaseRequestCtx(actx)
}
}
router.go line 99 actx.Error(err.Error(), statusCode)
if err != nil {
r.log.Error(err)
actx.Error(err.Error(), statusCode)
}
Because actx inherits the methods of ctx, you can use the ctx methods. Is the effect the same?
Because actx inherits the methods of ctx, you can use the ctx methods. Is the effect the same?
Yes, it's explained in the README 😄
And I've just added a custom error view to the configuration, so you could configure it with something like this:
config := &atreugo.Config{
...
ErrorView: func(ctx *atreugo.RequestCtx, err error, statusCode int) {
ctx.JSONResponse(atreugo.JSON{"code": statusCode, "msg": err.Error()}, statusCode)
},
...
}
|
gharchive/issue
| 2019-08-27T03:49:11 |
2025-04-01T06:40:20.079569
|
{
"authors": [
"imxxiv",
"savsgio"
],
"repo": "savsgio/atreugo",
"url": "https://github.com/savsgio/atreugo/issues/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1101907072
|
Bad gateway on /longpolling/poll
Ah, I know why. It's not deployed with workers so we should not redirect /longpolling to 8072 in the ingress.
Resolved in 115d292c9fa084db31bbe24321afbcaf5240bf23
|
gharchive/issue
| 2022-01-13T15:02:44 |
2025-04-01T06:40:20.129042
|
{
"authors": [
"sbidoul"
],
"repo": "sbidoul/runboat",
"url": "https://github.com/sbidoul/runboat/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
807848957
|
doda lane
Issue Fixed #
What was a problem?
How this PR fixes the problem?
Check lists (check x in [ ] of list items)
[ ] Test passed
[ ] Coding style (indentation, etc)
Additional Comments (if any)
comment on the pr
terraform plan
terraform plan
terraform plan
|
gharchive/pull-request
| 2021-02-13T23:18:36 |
2025-04-01T06:40:20.131420
|
{
"authors": [
"sblack4"
],
"repo": "sblack4/learning-terraform-github-actions",
"url": "https://github.com/sblack4/learning-terraform-github-actions/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
217697328
|
Error in version 3.5.8?
When VS 2017 is opened (no solution), the "TSVN Pending Changes" window has this:
Exception details:
System.ArgumentException: The path is not of a legal form.
at System.IO.Path.NormalizePath(String path, Boolean fullCheck, Int32 maxPathLength, Boolean expandShortPaths)
at System.IO.Path.GetDirectoryName(String path)
at SamirBoulema.TSVN.Helpers.CommandHelper.GetRepositoryRoot(String path)
at SamirBoulema.TSVN.Helpers.CommandHelper.GetPendingChanges()
at SamirBoulema.TSVN.TSVNToolWindow.OnToolWindowCreated()
at Microsoft.VisualStudio.Shell.Package.CreateToolWindow(Type toolWindowType, Int32 id, ProvideToolWindowAttribute tool)
at Microsoft.VisualStudio.Shell.Package.FindToolWindow(Type toolWindowType, Int32 id, Boolean create, ProvideToolWindowAttribute tool)
at Microsoft.VisualStudio.Shell.Package.Microsoft.VisualStudio.Shell.Interop.IVsToolWindowFactory.CreateToolWindow(Guid& toolWindowType, UInt32 id)
at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.ConstructContent()
If the window is closed, then the Tsvn/Windows/Pending Changes menu shows a dialog:
This happened on two computers.
I've reverted to v3.4 and am fine for now.
Sorry! Should be fixed in the new 3.6 release.
|
gharchive/issue
| 2017-03-28T21:25:41 |
2025-04-01T06:40:20.137781
|
{
"authors": [
"glittle",
"sboulema"
],
"repo": "sboulema/TSVN",
"url": "https://github.com/sboulema/TSVN/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
249971447
|
Forward port #3397 fix addSbtPlugin to use the correct version of sbt
https://github.com/sbt/sbt/pull/3397
Fixed in https://github.com/sbt/sbt/pull/3442
|
gharchive/issue
| 2017-08-14T09:02:48 |
2025-04-01T06:40:20.154016
|
{
"authors": [
"dwijnand",
"eed3si9n"
],
"repo": "sbt/sbt",
"url": "https://github.com/sbt/sbt/issues/3435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
56047531
|
Added sbt-build-files-watcher
:point_right: https://github.com/tototoshi/sbt-build-files-watcher
Hi @tototoshi,
Thank you for your contribution! We really value the time you've taken to put this together.
We see that you have signed the Typesafe Contributors License Agreement before, however, the CLA has changed since you last signed it.
Please review the new CLA and sign it before we proceed with reviewing this pull request:
http://www.typesafe.com/contribute/cla
|
gharchive/pull-request
| 2015-01-30T15:35:46 |
2025-04-01T06:40:20.156091
|
{
"authors": [
"tototoshi",
"typesafehub-validator"
],
"repo": "sbt/website",
"url": "https://github.com/sbt/website/pull/98",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1206125053
|
Error on MacBook M1
Hello, I tried to run the project on my MacBook M1.
The LSPClient has this error:
Could not find module 'NimbleCore' for target 'x86_64-apple-macos'; found: arm64-apple-macos
Any ideas?
Btw, I'm in love with this project and Scade, looking forward for more!!
Hi @Ruivalim
Thanks for the feedback. We appreciate it.
What Xcode version do you use?
For the first, please try to clean the build folder and run it again.
If it doesn't help try to build Nimble Core with Xcode separately. To do that:
Select “New scheme…” in Product->Scheme->New Scheme…
Select as target “Nimble Core”
Build.
After that try to build the whole project (Nimble target). If you find some problems, please ask us, we will try to help you.
|
gharchive/issue
| 2022-04-16T14:29:41 |
2025-04-01T06:40:20.159838
|
{
"authors": [
"Ruivalim",
"bulantsevajo"
],
"repo": "scade-platform/Nimble",
"url": "https://github.com/scade-platform/Nimble/issues/201",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2041823796
|
Update wagmi to latest version
It looks lke the LedgerConnector (@ledgerhq/connect-kit-loader) has been compromised
Context:
https://twitter.com/wevm_dev/status/1735289737185837303
https://twitter.com/bantg/status/1735279127752540465
We don't use it on SE2 (we just use Rainbow's kit LedgerWallet) so we should be fine.
In any case, this updates wagmi to the latest version (where they removed the dependency): https://github.com/wevm/wagmi/commit/53ca1f7eb411d912e11fcce7e03bd61ed067959c
We should create the NPX back-merge after merging this.
|
gharchive/pull-request
| 2023-12-14T14:31:46 |
2025-04-01T06:40:20.162285
|
{
"authors": [
"carletex"
],
"repo": "scaffold-eth/scaffold-eth-2",
"url": "https://github.com/scaffold-eth/scaffold-eth-2/pull/660",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
263729132
|
Break apart a set of five tests
I was taking these tests for the first time and I found the first question confusing. When the answer I submitted failed, I could not tell whether it was because I didn't understand how .combine() worked at all, or because one of my answers was wrong.
This PR takes the fifth .combine() test and moves it to a separate question. I think this will be fine because the first 4 .combine() tests are so similar.
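For context, here is a minimal sketch of the Semigroup combine the exercises test (imports follow recent cats versions and may differ from the ones the exercise library pins):
import cats.Semigroup
import cats.implicits._

Semigroup[Int].combine(1, 2)                    // 3
Semigroup[List[Int]].combine(List(1), List(2))  // List(1, 2)
Semigroup[Option[Int]].combine(Option(1), None) // Some(1)
If a submitted answer doesn't match these semantics, the whole five-part test fails at once, which is what made the failure hard to diagnose.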
I looked at this repo because I had the same issue and wanted to fix that too :) great PR
|
gharchive/pull-request
| 2017-10-08T16:12:49 |
2025-04-01T06:40:20.165137
|
{
"authors": [
"doub1ejack",
"krasinski"
],
"repo": "scala-exercises/exercises-cats",
"url": "https://github.com/scala-exercises/exercises-cats/pull/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1228545583
|
Please remove the ExecutionContext.global warning
I'm seeing this warning in my project:
[error] ... The global execution context in Scala.js is based on JS Promises (microtasks).
[error] Using it may prevent macrotasks (I/O, timers, UI rendering) from running reliably.
[error]
[error] Unfortunately, there is no way with ECMAScript only to implement a performant
[error] macrotask execution context (and hence Scala.js core does not contain one).
[error]
[error] We recommend you use: https://github.com/scala-js/scala-js-macrotask-executor
[error] Please refer to the README.md of that project for more details regarding
[error] microtask vs. macrotask execution contexts.
[error]
[error] If you do not care about macrotask fairness, you can silence this warning by:
[error] - Adding @nowarn("cat=other") (Scala >= 2.13.x only)
[error] - Setting the -P:scalajs:nowarnGlobalExecutionContext compiler option (Scala < 3.x.y only)
[error] - Using scala.scalajs.concurrent.JSExecutionContext.queue
[error] (the implementation of ExecutionContext.global in Scala.js) directly.
[error]
[error] If you do not care about performance, you can use
[error] scala.scalajs.concurrent.QueueExecutionContext.timeouts().
[error] It is based on setTimeout which makes it fair but slow (due to clamping).
[error]
[error] Future(1).map { x =>
[error] ^
I understand the intent, or why usage of scala.concurrent.ExecutionContext.global may be problematic, however it's a standard import that often gets used for code that cross-compiles to both the JVM and JS, which is one of the primary strengths of Scala.js. Having the official compiler perpetually warn on standard functionality, and suggest a third-party library, isn't good IMO. And why isn't that warning and option available on Scala 3.x?
Personally, I see only 3 possibilities:
Fix global in Scala.js proper;
Deprecate global and remove it completely in a future version;
Leave it as is, and remove the warning;
As it is, removing that warning is a lot of work, especially in a project that compiles for multiple Scala versions. Updating minor Scala.js versions shouldn't be this hard.
Just a suggestion, thanks a lot for your work 🤗
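For anyone else hitting this, a minimal sketch of the warning's recommended alternative (the sbt coordinates and import path are taken from the scala-js-macrotask-executor README and may differ by version):
// build.sbt (version is illustrative)
libraryDependencies += "org.scala-js" %%% "scala-js-macrotask-executor" % "1.1.1"

import org.scalajs.macrotaskexecutor.MacrotaskExecutor.Implicits._
import scala.concurrent.Future

// Futures now run on a fair macrotask executor (setImmediate-style
// scheduling) instead of the microtask-based ExecutionContext.global.
Future(1).map(_ + 1)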
Have you read https://github.com/scala-js/scala-js/issues/4129, which led to this warning? There is a lot of context in there that explains how we got to make this decision. Do you have any new information that would invalidate the reasoning made there?
Hi @sjrd,
I remember that issue, I even added some input at that point — which was that, out of all solutions, continuing with Promise.then is probably the least desirable solution, being non-standard and leaky.
My problem with it is that it's violating the principle of the least surprise, because people that want to use global (or Future), expect fairness guarantees, not performance. Seeing it used in Scala.js was a surprise to me, because in my JavaScript days I've never thought of using it like that.
When importing global, I would expect it to use setTimeout. It's the most obvious implementation for when setImmediate is not available, as that's what people used and still use in the browser. Also, I did some measurements on Node.js, and the clamping on successive setTimeout calls is around 1ms (instead of the usual 4ms, which is what happens in browsers). Not great, but not terrible.
The issue I'm seeing is with the behavior of Future. After the BatchedExecutor optimizations from Scala 2.13.2, the behavior of global would be less relevant. However, AFAIK, after this issue was reported, the global optimizations were reverted, and people are expected to import ExecutionContext.batched instead. For Future, that should be another way to solve performance issues, if global actually used setTimeout.
Speaking of, what sense does ExecutionContext.batched make in Scala.js, given that global is implemented with Promise.then? The two may be equivalent in Scala.js, but they shouldn't be, as trampolining Runnable tasks still makes sense.
In the browser, at least, setTimeout(0) is throttled for UI responsiveness. And I remember that the reason why setImmediate never became a standard was that setTimeout(0) was considered enough; it did not make sense to keep throttling setTimeout while introducing a setImmediate workaround, which would have ended up throttled as well. It's what people used for making their callbacks stack-safe, prior to the introduction of async/await and Promise.
In my opinion, setTimeout is perfectly acceptable.
But if it isn't, due to performance reasons, then own the current implementation, instead of triggering a warning.
If Future really is the equivalent of Promise, then it needs to be usable out of the box, with no warnings.
Triggering a warning on usage of ExecutionContext.global is like providing the user with a button, and then complaining when the button gets pressed. Like, don't provide the button, if the implementation is that terrible. Otherwise own it.
I'm basically complaining about usability here.
I have taken a stab at grouping the discussion here a bit and giving my POV.
Usability
And why isn't that warning and option available on Scala 3.x?
Fair point, but IIUC feature-parity (and compiler option syntax) between Scala 2.x / 3.x are a more general issue.
however it's a standard import that often gets used for code that cross-compiles to both the JVM and JS, which is one of the primary strengths of Scala.js. Having the official compiler perpetually warn on standard functionality, and suggest a third-party library, isn't good IMO.
Absolutely. This isn't good. But IMHO it's the least bad we could come up with. So unless we have a better option (see second section below), I do not know what you want us to do.
Deprecate global and remove it completely in a future version;
That's essentially what the warning is (also see response below). Actually removing it is a bit tricky because it's in the Scalalib, which the Scala.js project doesn't directly control. In any case, a new major Scala.js version is not on the horizon any time soon, so IMHO, no point in figuring out how to remove it right now.
Like, don't provide the button, if the implementation is that terrible.
Fair. But you need to think about this more like a deprecation warning (we cannot remove it due to backwards compatibility guarantees). The reason it isn't directly implemented as a deprecation warning is due to how Scala.js itself cross compiles the scala library.
Alternatives
being non-standard
Could you clarify what you mean by non-standard? IIUC, Promise.then is part of the ECMAScript standard.
and leaky
https://github.com/nodejs/node/issues/6673#issuecomment-599188223
suggests that the issues you point out depend on the exact usage of the API and are not inherent to using Promise.then. Whether or not the Scala.js implementation exposes this leak, I do not know. But if it does, that is a bug and we should fix it.
When importing global, I would expect it to use setTimeout
In my opinion, setTimeout is perfectly acceptable.
See: https://github.com/scala-js/scala-js/issues/4129#issuecomment-733061939
Please address this point when you're arguing for using setTimeout.
If Future really is the equivalent of Promise, then it needs to be usable out of the box, with no warnings.
If Future were the (full) equivalent of js.Promise it wouldn't even try to offer fairness guarantees, just like js.Promise.
Ownership
But if it isn't, due to performance reasons, then own the current implementation, instead of triggering a warning.
I'm not 100% sure what you mean by "owning' here. We maintain (to the best of our abilities) both implementations.
Batched Execution
Speaking of, what sense does ExecutionContext.batched make in Scala.js, given that global is implemented with Promise.then?
I do not know.
IIUC, Promise.then is part of the ECMAScript standard.
The signature, yes, the implementation, no — there are 3 major browser engines with 3 different implementations of Promise.then, with slightly different contracts implemented (last time I checked, maybe that changed, but I seriously doubt it).
If Future were the (full) equivalent of js.Promise it wouldn't even try to offer fairness guarantees, just like js.Promise.
Right. Well, it would also leak in flatMap "tail-recursive" loops, but only on Chrome and Firefox, not Safari.
I'm not 100% sure what you mean by "owning' here. We maintain (to the best of our abilities) both implementations.
"Owning it" as in living with the chosen default, with no regrets 🙂
I think this is a contentious issue for a usability concern, and recommending that project to people is useful enough, so I'm going to backtrack on my suggestion.
Cheers,
|
gharchive/issue
| 2022-05-07T06:39:33 |
2025-04-01T06:40:20.184414
|
{
"authors": [
"alexandru",
"gzm0",
"sjrd"
],
"repo": "scala-js/scala-js",
"url": "https://github.com/scala-js/scala-js/issues/4670",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2619354602
|
Update mill-main to 0.12.1
About this PR
📦 Updates com.lihaoyi:mill-main from 0.11.12 to 0.12.1
📜 GitHub Release Notes - Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.lihaoyi", artifactId = "mill-main" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "com.lihaoyi", artifactId = "mill-main" }
}]
labels: library-update, early-semver-major, semver-spec-minor, commit-count:1
Superseded by #60.
|
gharchive/pull-request
| 2024-10-28T19:29:46 |
2025-04-01T06:40:20.190169
|
{
"authors": [
"scala-steward"
],
"repo": "scala-steward-org/mill-plugin",
"url": "https://github.com/scala-steward-org/mill-plugin/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
115384330
|
add Jawn
green run with this change:
https://scala-ci.typesafe.com/job/scala-2.11.x-jdk8-integrate-community-build/91/
FYI @non
Not bothering to target 2.11.x/JDK6 here, just JDK8.
this will get merged into the 2.12.x community build next time I merge.
2.12.x merge went fine. (I had to disable 2 more support subprojects for now because the required libraries are currently commented out in the 2.12.x build.)
|
gharchive/pull-request
| 2015-11-05T21:57:24 |
2025-04-01T06:40:20.198000
|
{
"authors": [
"SethTisue"
],
"repo": "scala/community-builds",
"url": "https://github.com/scala/community-builds/pull/170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
25467286
|
Can't build a project that depends on pickling
% git clone https://github.com/xeno-by/sbt-example-pickling.git
Cloning into 'sbt-example-pickling'...
remote: Reusing existing pack: 14, done.
remote: Total 14 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (14/14), done.
Checking connectivity... done
% cd sbt-example-pickling
% ~/bin/sbt --version
sbt launcher version 0.13.1
% ~/bin/sbt compile
[info] Set current project to sbt-pickling-example (in build file:/Users/royston/code/sbt-example-pickling/)
[info] Updating {file:/Users/royston/code/sbt-example-pickling/}sbt-example-pickling...
[warn] Binary version (0.8.0-SNAPSHOT) for dependency org.scala-lang#scala-pickling_2.10;0.8.0-SNAPSHOT
[warn] in sbt-pickling-example#sbt-pickling-example_2.10;0.1-SNAPSHOT differs from Scala binary version in project (2.10).
[info] Resolving org.scala-lang#scala-pickling_2.10;0.8.0-SNAPSHOT ...
[warn] module not found: org.scala-lang#scala-pickling_2.10;0.8.0-SNAPSHOT
[warn] ==== local: tried
[warn] /Users/royston/.ivy2/local/org.scala-lang/scala-pickling_2.10/0.8.0-SNAPSHOT/ivys/ivy.xml
[warn] ==== typesafe-ivy-releases: tried
[warn] http://repo.typesafe.com/typesafe/ivy-releases/org.scala-lang/scala-pickling_2.10/0.8.0-SNAPSHOT/ivys/ivy.xml
[warn] ==== public: tried
[warn] http://repo1.maven.org/maven2/org/scala-lang/scala-pickling_2.10/0.8.0-SNAPSHOT/scala-pickling_2.10-0.8.0-SNAPSHOT.pom
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: org.scala-lang#scala-pickling_2.10;0.8.0-SNAPSHOT: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
sbt.ResolveException: unresolved dependency: org.scala-lang#scala-pickling_2.10;0.8.0-SNAPSHOT: not found
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:213)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:122)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:121)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:116)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:116)
at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:104)
at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:51)
at sbt.IvySbt$$anon$3.call(Ivy.scala:60)
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:98)
at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:81)
at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:102)
at xsbt.boot.Using$.withResource(Using.scala:11)
at xsbt.boot.Using$.apply(Using.scala:10)
at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:62)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:52)
at xsbt.boot.Locks$.apply0(Locks.scala:31)
at xsbt.boot.Locks$.apply(Locks.scala:28)
at sbt.IvySbt.withDefaultLogger(Ivy.scala:60)
at sbt.IvySbt.withIvy(Ivy.scala:101)
at sbt.IvySbt.withIvy(Ivy.scala:97)
at sbt.IvySbt$Module.withModule(Ivy.scala:116)
at sbt.IvyActions$.update(IvyActions.scala:121)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1161)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1159)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$73.apply(Defaults.scala:1182)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$73.apply(Defaults.scala:1180)
at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1184)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1179)
at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
at sbt.Classpaths$.cachedUpdate(Defaults.scala:1187)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1152)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1130)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
at sbt.std.Transform$$anon$4.work(System.scala:64)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:244)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
[error] (*:update) sbt.ResolveException: unresolved dependency: org.scala-lang#scala-pickling_2.10;0.8.0-SNAPSHOT: not found
[error] Total time: 2 s, completed Jan 12, 2014 12:31:37 PM
Works for me using Pickling 0.10.0. See https://github.com/xeno-by/sbt-example-pickling/pull/2.
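For anyone landing here later: a sketch of the build change that presumably resolves it (the 0.10.x releases moved under the org.scala-lang.modules group id; treat the exact coordinates as an assumption and verify against Maven Central):

```scala
// build.sbt
libraryDependencies += "org.scala-lang.modules" %% "scala-pickling" % "0.10.0"
```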
|
gharchive/issue
| 2014-01-12T18:34:57 |
2025-04-01T06:40:20.200751
|
{
"authors": [
"eed3si9n",
"scroyston"
],
"repo": "scala/pickling",
"url": "https://github.com/scala/pickling/issues/103",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2064143647
|
Some in a macro reflect-based unapply method can crash the compiler
Compiler version
3.3.1, 3.4.0-RC1-bin-20231223-938d405-NIGHTLY (and presumably other)
Minimized code
UnapplyErrorMain.scala
@main def main() =
val Unapplier(result) = Some(5)
UnapplyErrorMacro.scala
import scala.quoted._
object Unapplier:
inline def unapplySeq(arg: Any): Option[Seq[Any]] = ${unapplyImpl('arg)}
def unapplyImpl(using Quotes)(argExpr: Expr[Any]): Expr[Option[Seq[Any]]] =
import quotes.reflect._
Match(
'{Option.empty[Int]}.asTerm,
List(
CaseDef(Unapply(TypeApply(Select.unique(Ref(Symbol.classSymbol("scala.Some").companionModule), "unapply"), List(TypeTree.of[Int])), Nil, List('{5}.asTerm)), None, '{Some(Seq(0))}.asTerm),
CaseDef(Wildcard(), None, '{Some(Seq(0))}.asTerm)
)
).asExprOf[Option[Seq[Any]]]
Output:
exception while retyping x1.value of class Select # -1
An unhandled exception was thrown in the compiler.
Please file a crash report here:
https://github.com/lampepfl/dotty/issues/new/choose
For non-enriched exceptions, compile with -Yno-enrich-error-messages.
while compiling: /Users/jchyb/Documents/workspace/dotty/UnapplyErrorMain.scala
during phase: MegaPhase{elimErasedValueType, pureStats, vcElideAllocations, etaReduce, arrayApply, elimPolyFunction, tailrec, completeJavaEnums, mixin, lazyVals, memoize, nonLocalReturns, capturedVars}
mode: Mode(ImplicitsEnabled)
library version: version 2.13.12
compiler version: version 3.4.0-RC1-bin-20231223-938d405-NIGHTLY-git-938d405
settings: -classpath /Users/jchyb/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/scala3-library_3/3.4.0-RC1-bin-20231223-938d405-NIGHTLY/scala3-library_3-3.4.0-RC1-bin-20231223-938d405-NIGHTLY.jar:/Users/jchyb/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/scala-library/2.13.12/scala-library-2.13.12.jar -d /Users/jchyb/Documents/workspace/dotty/.scala-build/dotty_3200b05eac-8d90288d6d/classes/main -java-output-version 17 -sourceroot /Users/jchyb/Documents/workspace/dotty
Exception while compiling /Users/jchyb/Documents/workspace/dotty/UnapplyErrorMacro.scala, /Users/jchyb/Documents/workspace/dotty/UnapplyErrorMain.scala
An unhandled exception was thrown in the compiler.
Please file a crash report here:
https://github.com/lampepfl/dotty/issues/new/choose
For non-enriched exceptions, compile with -Yno-enrich-error-messages.
while compiling: <no file>
during phase: parser
mode: Mode()
library version: version 2.13.12
compiler version: version 3.4.0-RC1-bin-20231223-938d405-NIGHTLY-git-938d405
settings: -classpath /Users/jchyb/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/scala3-library_3/3.4.0-RC1-bin-20231223-938d405-NIGHTLY/scala3-library_3-3.4.0-RC1-bin-20231223-938d405-NIGHTLY.jar:/Users/jchyb/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/scala-lang/scala-library/2.13.12/scala-library-2.13.12.jar -d /Users/jchyb/Documents/workspace/dotty/.scala-build/dotty_3200b05eac-8d90288d6d/classes/main -java-output-version 17 -sourceroot /Users/jchyb/Documents/workspace/dotty
Exception in thread "main" java.lang.AssertionError: assertion failed: no owner from <none>/ <none> in x1.value
at scala.runtime.Scala3RunTime$.assertFailed(Scala3RunTime.scala:8)
at dotty.tools.dotc.transform.Erasure$Typer.typedSelect(Erasure.scala:717)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3129)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedExpr(Typer.scala:3437)
at dotty.tools.dotc.transform.Erasure$Typer.$anonfun$7(Erasure.scala:855)
at dotty.tools.dotc.core.Decorators$.zipWithConserve(Decorators.scala:160)
at dotty.tools.dotc.transform.Erasure$Typer.typedApply(Erasure.scala:855)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3160)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedExpr(Typer.scala:3437)
at dotty.tools.dotc.transform.Erasure$Typer.$anonfun$7(Erasure.scala:855)
at dotty.tools.dotc.core.Decorators$.zipWithConserve(Decorators.scala:160)
at dotty.tools.dotc.transform.Erasure$Typer.typedApply(Erasure.scala:855)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3160)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedIf(Typer.scala:1267)
at dotty.tools.dotc.transform.Erasure$Typer.typedIf(Erasure.scala:888)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3169)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.traverse$1(Typer.scala:3374)
at dotty.tools.dotc.typer.Typer.typedStats(Typer.scala:3393)
at dotty.tools.dotc.transform.Erasure$Typer.typedStats(Erasure.scala:1058)
at dotty.tools.dotc.typer.Typer.typedBlockStats(Typer.scala:1193)
at dotty.tools.dotc.typer.Typer.typedBlock(Typer.scala:1197)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3168)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedLabeled(Typer.scala:1991)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3153)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3318)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.transform.Erasure$Typer.typedTyped(Erasure.scala:632)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3165)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3318)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.ReTyper.typedInlined(ReTyper.scala:100)
at dotty.tools.dotc.transform.Erasure$Typer.typedInlined(Erasure.scala:903)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3183)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedExpr(Typer.scala:3437)
at dotty.tools.dotc.typer.Typer.typedValDef(Typer.scala:2539)
at dotty.tools.dotc.transform.Erasure$Typer.typedValDef(Erasure.scala:912)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3133)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.traverse$1(Typer.scala:3347)
at dotty.tools.dotc.typer.Typer.typedStats(Typer.scala:3393)
at dotty.tools.dotc.transform.Erasure$Typer.typedStats(Erasure.scala:1058)
at dotty.tools.dotc.typer.Typer.typedBlockStats(Typer.scala:1193)
at dotty.tools.dotc.typer.Typer.typedBlock(Typer.scala:1197)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3168)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.$anonfun$14(Typer.scala:1294)
at dotty.tools.dotc.typer.Applications.harmonic(Applications.scala:2364)
at dotty.tools.dotc.typer.Applications.harmonic$(Applications.scala:350)
at dotty.tools.dotc.typer.Typer.harmonic(Typer.scala:121)
at dotty.tools.dotc.typer.Typer.typedIf(Typer.scala:1297)
at dotty.tools.dotc.transform.Erasure$Typer.typedIf(Erasure.scala:888)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3169)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.traverse$1(Typer.scala:3374)
at dotty.tools.dotc.typer.Typer.typedStats(Typer.scala:3393)
at dotty.tools.dotc.transform.Erasure$Typer.typedStats(Erasure.scala:1058)
at dotty.tools.dotc.typer.Typer.typedBlockStats(Typer.scala:1193)
at dotty.tools.dotc.typer.Typer.typedBlock(Typer.scala:1197)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3168)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedLabeled(Typer.scala:1991)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3153)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedExpr(Typer.scala:3437)
at dotty.tools.dotc.typer.Typer.typedValDef(Typer.scala:2539)
at dotty.tools.dotc.transform.Erasure$Typer.typedValDef(Erasure.scala:912)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3133)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.traverse$1(Typer.scala:3347)
at dotty.tools.dotc.typer.Typer.typedStats(Typer.scala:3393)
at dotty.tools.dotc.transform.Erasure$Typer.typedStats(Erasure.scala:1058)
at dotty.tools.dotc.typer.Typer.typedBlockStats(Typer.scala:1193)
at dotty.tools.dotc.typer.Typer.typedBlock(Typer.scala:1197)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3168)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedExpr(Typer.scala:3437)
at dotty.tools.dotc.typer.Typer.$anonfun$62(Typer.scala:2602)
at dotty.tools.dotc.inlines.PrepareInlineable$.dropInlineIfError(PrepareInlineable.scala:256)
at dotty.tools.dotc.typer.Typer.typedDefDef(Typer.scala:2602)
at dotty.tools.dotc.transform.Erasure$Typer.typedDefDef(Erasure.scala:959)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3136)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.traverse$1(Typer.scala:3347)
at dotty.tools.dotc.typer.Typer.typedStats(Typer.scala:3393)
at dotty.tools.dotc.transform.Erasure$Typer.typedStats(Erasure.scala:1058)
at dotty.tools.dotc.typer.Typer.typedClassDef(Typer.scala:2789)
at dotty.tools.dotc.transform.Erasure$Typer.typedClassDef(Erasure.scala:1047)
at dotty.tools.dotc.typer.Typer.typedTypeOrClassDef$1(Typer.scala:3148)
at dotty.tools.dotc.typer.Typer.typedNamed$1(Typer.scala:3152)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3243)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.traverse$1(Typer.scala:3347)
at dotty.tools.dotc.typer.Typer.typedStats(Typer.scala:3393)
at dotty.tools.dotc.transform.Erasure$Typer.typedStats(Erasure.scala:1058)
at dotty.tools.dotc.typer.Typer.typedPackageDef(Typer.scala:2922)
at dotty.tools.dotc.typer.Typer.typedUnnamed$1(Typer.scala:3194)
at dotty.tools.dotc.typer.Typer.typedUnadapted(Typer.scala:3244)
at dotty.tools.dotc.typer.ReTyper.typedUnadapted(ReTyper.scala:174)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3321)
at dotty.tools.dotc.typer.Typer.typed(Typer.scala:3325)
at dotty.tools.dotc.typer.Typer.typedExpr(Typer.scala:3437)
at dotty.tools.dotc.transform.Erasure.run(Erasure.scala:143)
at dotty.tools.dotc.core.Phases$Phase.runOn$$anonfun$1(Phases.scala:354)
at scala.runtime.function.JProcedure1.apply(JProcedure1.java:15)
at scala.runtime.function.JProcedure1.apply(JProcedure1.java:10)
at scala.collection.immutable.List.foreach(List.scala:333)
at dotty.tools.dotc.core.Phases$Phase.runOn(Phases.scala:360)
at dotty.tools.dotc.Run.runPhases$1$$anonfun$1(Run.scala:315)
at scala.runtime.function.JProcedure1.apply(JProcedure1.java:15)
at scala.runtime.function.JProcedure1.apply(JProcedure1.java:10)
at scala.collection.ArrayOps$.foreach$extension(ArrayOps.scala:1323)
at dotty.tools.dotc.Run.runPhases$1(Run.scala:337)
at dotty.tools.dotc.Run.compileUnits$$anonfun$1(Run.scala:348)
at dotty.tools.dotc.Run.compileUnits$$anonfun$adapted$1(Run.scala:357)
at dotty.tools.dotc.util.Stats$.maybeMonitored(Stats.scala:69)
at dotty.tools.dotc.Run.compileUnits(Run.scala:357)
at dotty.tools.dotc.Run.compileUnits(Run.scala:267)
at dotty.tools.dotc.Driver.finish(Driver.scala:58)
at dotty.tools.dotc.Driver.doCompile(Driver.scala:38)
at dotty.tools.dotc.Driver.process(Driver.scala:197)
at dotty.tools.dotc.Driver.process(Driver.scala:165)
at dotty.tools.dotc.Driver.process(Driver.scala:177)
at dotty.tools.dotc.Driver.main(Driver.scala:207)
at dotty.tools.dotc.Main.main(Main.scala)
Notes:
- Writing reflect-based Unapply calls works for other classes and objects. It's just Some that is causing the issues.
- The macro call has to be part of the extractor for this to trigger.
- -Xcheck-macros gives no additional hints.
- When Some is used in a quoted expression it compiles (and likewise when converting that Expr to a Term and back), but we cannot match the representation of that Term exactly, as we cannot insert an Unapply into a Typed (Typed requires a Term, and Unapply isn't one).
- This is correct behavior - back then, I did not realise we had the reflect TypedOrTest abstraction over Typed, which does indeed allow us to insert a Tree instead of a Term.
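For completeness, a minimal sketch of that approach inside the macro from the minimization (assuming the TypedOrTest module available in newer quotes.reflect APIs; the other names follow the snippet above):

```scala
// Inside unapplyImpl, after `import quotes.reflect._`:
// TypedOrTest accepts a Tree where Typed would demand a Term,
// so the Unapply node can be wrapped with an expected type.
val someUnapply: Tree =
  Unapply(
    TypeApply(
      Select.unique(Ref(Symbol.classSymbol("scala.Some").companionModule), "unapply"),
      List(TypeTree.of[Int])),
    Nil,
    List('{5}.asTerm))
val wrapped: Tree = TypedOrTest(someUnapply, TypeTree.of[Some[Int]])
```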
|
gharchive/issue
| 2024-01-03T14:36:25 |
2025-04-01T06:40:20.223132
|
{
"authors": [
"jchyb"
],
"repo": "scala/scala3",
"url": "https://github.com/scala/scala3/issues/19362",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
800945339
|
Add cluster autoscaler for eks
Description
https://scalar-labs.atlassian.net/browse/DLT-7887
https://docs.aws.amazon.com/ja_jp/eks/latest/userguide/cluster-autoscaler.html
Add cluster autoscaler for auto scaling in EKS
ref: https://github.com/lablabs/terraform-aws-eks-cluster-autoscaler
Done
Add autoscaler to EKS with helm_release
cluster_auto_scaling defaults to false
📝 Note: on the first deploy, only cluster_endpoint_public_access set to public is supported
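A rough sketch of the shape this takes (resource and variable names are assumptions inferred from the description, not the merged code):

```hcl
variable "cluster_auto_scaling" {
  description = "Enable the cluster autoscaler Helm release"
  default     = false
}

resource "helm_release" "cluster_autoscaler" {
  count      = var.cluster_auto_scaling ? 1 : 0
  name       = "cluster-autoscaler"
  repository = "https://kubernetes.github.io/autoscaler"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"

  set {
    name  = "autoDiscovery.clusterName"
    value = var.eks_cluster_name # hypothetical variable
  }
}
```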
@ymorimo PTAL!
|
gharchive/pull-request
| 2021-02-04T05:29:02 |
2025-04-01T06:40:20.256603
|
{
"authors": [
"feeblefakie",
"tei-k"
],
"repo": "scalar-labs/scalar-terraform",
"url": "https://github.com/scalar-labs/scalar-terraform/pull/276",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
300579291
|
Add option to ignore empty lines for MethodLengthChecker
Fix for #300 and #302
Codecov Report
Merging #301 into master will not change coverage.
The diff coverage is 0%.
```
@@            Coverage Diff            @@
##           master     #301    +/-   ##
=========================================
  Coverage       0%       0%
=========================================
  Files          59       59
  Lines        1464     1470     +6
  Branches      147      152     +5
=========================================
- Misses       1464     1470     +6
```
Impacted files: ...g/scalastyle/scalariform/MethodLengthChecker.scala, Coverage Δ: 0% <0%> (ø)
Great! Thanks!
|
gharchive/pull-request
| 2018-02-27T10:47:50 |
2025-04-01T06:40:20.263679
|
{
"authors": [
"canoztokmak",
"codecov-io",
"matthewfarwell"
],
"repo": "scalastyle/scalastyle",
"url": "https://github.com/scalastyle/scalastyle/pull/301",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1130829091
|
Speed up geometry functions
Speeds up geometry functions for calculating polygon intersection, especially for rectangles.
Nice 👍
Did you happen to profile the runtime of this solution? I'm wondering where we're spending the most time.
Yes, I used cProfile and found that the majority of the time was being spent on numpy array creation. I think at some point I might convert this to a numba program so it compiles jit.
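For reference, a minimal profiling sketch of the kind described (the intersection function here is a toy stand-in, not the library's implementation):

```python
import cProfile
import numpy as np

def rect_intersection_area(a, b):
    # Toy stand-in: axis-aligned boxes given as (xmin, ymin, xmax, ymax).
    lo = np.maximum(a[:2], b[:2])   # each np.maximum call allocates a new array
    hi = np.minimum(a[2:], b[2:])
    wh = np.clip(hi - lo, 0, None)
    return wh[0] * wh[1]

a = np.array([0.0, 0.0, 2.0, 2.0])
b = np.array([1.0, 1.0, 3.0, 3.0])
cProfile.run("for _ in range(100_000): rect_intersection_area(a, b)",
             sort="cumtime")
```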
Fun! LMK if you start trying it out 🙂
|
gharchive/pull-request
| 2022-02-10T20:00:13 |
2025-04-01T06:40:20.266467
|
{
"authors": [
"gatli",
"phil-scale"
],
"repo": "scaleapi/nucleus-python-client",
"url": "https://github.com/scaleapi/nucleus-python-client/pull/217",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
62701501
|
add intermediate method in active_for_authentication? for more flexibility
Small change which provides more flexibility for customizing active_for_authentication?. I needed to add an additional condition and couldn't really do much, as changing invited_to_sign_up? would break the behaviour in other methods, so I ended up aliasing the "original" active_for_authentication?. Having an intermediate method for it solves the problem.
Why overriding active_for_authentication? and calling super is not an option?
I was customizing some parts, so I needed to change active_for_authentication?, and using super (where invited_to_sign_up? was already included) was not an option. I could only modify invited_to_sign_up?, which would cause problems in other methods, or keep a reference to Devise's own active_for_authentication? and add a custom conditional.
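For readers finding this later, a sketch of the intermediate-method pattern being proposed (the method name is illustrative; check the merged code for the actual one):

```ruby
# In the devise_invitable model mixin: the Devise hook stays thin and
# delegates the invitation check to a small, separately overridable predicate.
def active_for_authentication?
  super && block_from_invitation?
end

def block_from_invitation?
  !invited_to_sign_up?
end
```

An application can then override just block_from_invitation? to add extra conditions, without aliasing active_for_authentication? itself.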
|
gharchive/pull-request
| 2015-03-18T14:00:54 |
2025-04-01T06:40:20.330871
|
{
"authors": [
"Azdaroth",
"scambra"
],
"repo": "scambra/devise_invitable",
"url": "https://github.com/scambra/devise_invitable/pull/539",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
975709029
|
Fitbit Sense. No data.
I have a new Sense and loved your clock face. It worked fine for a day or two, but now no data is displayed, just the icons.
Uninstalled, reinstalled, rebooted, no joy. Fitbit help says to ask you.
Can you fix this?
Most likely cause is battery saving preferences on your phone that prevent communication between the watch and the Fitbit app. The Fitbit app needs to be able to run in the background.
Fitbit has run-in-background permission. The SPECTRUM face works fine and several others do as well.
This is now happening on mine too. Fitbit recently released an update, so I'll have to see what they broke. It's likely the weather updating portion.
It seemed that the weather info disappeared first, but I am not sure. It is a great face: lots of data, but concise. Thanks.
Yeah. They are now returning an int where they used to return a string. I'll have to update the watch face. Sorry for the inconvenience.
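Presumably the fix amounts to a small type coercion in the weather handler (a sketch; the object and field names are hypothetical):

```js
// Render correctly whether the companion sends a string or,
// after the Fitbit update, an integer.
const temperatureText = `${weather.temperature}°`;
```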
Not to worry. Please let me know when I should try again. I appreciate the response.
resolved 16747f1ae567390521ad78f8076780978d3625a3
I submitted the update to fitbit for review.
Thank you.
This has been published to the Fitbit gallery.
I just installed the new version. Works great! Thank you.
|
gharchive/issue
| 2021-08-20T15:24:25 |
2025-04-01T06:40:20.360129
|
{
"authors": [
"Tombella",
"schana"
],
"repo": "schana/carim-clock",
"url": "https://github.com/schana/carim-clock/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1099712937
|
It should be clearly documented that meta-data must be at the end of a task's line (no trailing tags, for example)
Expected Behavior
I expected all tasks with a priority above none to be returned.
Current Behavior
Several tasks are missing. If I remove #test from Task 4, it gets included in the result. It's something to do with it being the first item in the list: if I add a tag to Task 1, it gets removed from the result.
Steps to Reproduce
Paste the following into a note:
# Obsidian Tasks Test
## First List Test
- [ ] Task 1 🔼
- [ ] Task 2 #test ⏫
- [ ] Task 3 #test
## Second List Test
- [ ] Task 4 🔼 #test
- [ ] Task 5
- [ ] Task 6 #test
- [ ] Task 7 🔼 #test
## Priority
```tasks
not done
priority is above none
heading includes Test
```
Context (Environment)
Obsidian version: 13.19
Tasks version: 1.4.1
[X] I have tried it with all other plugins disabled and the error still occurs
Thanks for a great plugin btw! Really appreciating it and excited to start using it :) .
Hello @tiktuk.
I think that it may not be mentioned in the documentation, but I believe that -- with the sole exception of an Obsidian block ID -- all contents of a task item (including tags) must precede Tasks' date and priority emojis and their values.
I did look through all the docs to see if it was mentioned before creating the issue. Could be it's just not mentioned. I would hope tags at the end of tasks were supported, it looks more natural to have them there, I think.
Hey @tiktuk, thank you for reaching out. And thank you @therden for your response.
You are correct. Tasks does not support anything except a block link after the meta-data like dates, recurrence, priority, etc. You are also correct that the documentation regarding this is outdated and in the wrong place. It is only mentioned for dates from a time when there were only dates: https://schemar.github.io/obsidian-tasks/getting-started/dates/
You can only put block links (^link-name) after the dates. Anything else will break the parsing of dates and recurrence rules.
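To illustrate with a hypothetical task (📅 being the due-date signifier):

```markdown
- [ ] take out the trash #chore 📅 2022-01-15 ^block-id   <- parses: tag before metadata, block link last
- [ ] take out the trash 📅 2022-01-15 #chore             <- breaks: tag after the date
```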
The documentation should be updated. It is unfortunately unfeasible to support tags after the meta-data.
Thanks for the clarification, @schemar. And it's perfectly fine, actually, I was thinking that I had to add tasks in the beginning of the line like you have in your examples with - [ ] #task take out the trash . But that's not the case, I see :) .
Thanks again for the plugin. And thanks for helping out too, @therden :) .
Thank you for the PR! :heart:
|
gharchive/issue
| 2022-01-11T23:28:58 |
2025-04-01T06:40:20.392727
|
{
"authors": [
"schemar",
"therden",
"tiktuk"
],
"repo": "schemar/obsidian-tasks",
"url": "https://github.com/schemar/obsidian-tasks/issues/484",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
317048336
|
TypeError: Object(...) is not a function
Getting TypeError: Object(...) is not a function when trying to implement this per the demo. Simply adding
import { Provider } from "react-alert";
import AlertTemplate from "react-alert-template-basic";
to the top of my file causes the error
Closing due to inactivity
Uncaught TypeError: Object(...) is not a function
at Provider (react-alert.js:303)
at mountIndeterminateComponent (react-dom.development.js:15425)
at beginWork (react-dom.development.js:15956)
at performUnitOfWork (react-dom.development.js:19102)
at workLoop (react-dom.development.js:19143)
at HTMLUnknownElement.callCallback (react-dom.development.js:147)
at Object.invokeGuardedCallbackDev (react-dom.development.js:196)
at invokeGuardedCallback (react-dom.development.js:250)
at replayUnitOfWork (react-dom.development.js:18350)
at renderRoot (react-dom.development.js:19261)
at performWorkOnRoot (react-dom.development.js:20165)
at performWork (react-dom.development.js:20075)
at performSyncWork (react-dom.development.js:20049)
at requestWork (react-dom.development.js:19904)
at scheduleWork (react-dom.development.js:19711)
at scheduleRootUpdate (react-dom.development.js:20415)
at updateContainerAtExpirationTime (react-dom.development.js:20441)
at updateContainer (react-dom.development.js:20509)
at ReactRoot.push../node_modules/react-dom/cjs/react-dom.development.js.ReactRoot.render (react-dom.development.js:20820)
at react-dom.development.js:20974
at unbatchedUpdates (react-dom.development.js:20292)
at legacyRenderSubtreeIntoContainer (react-dom.development.js:20970)
at render (react-dom.development.js:21037)
at Module../src/index.js (index.js:21)
at __webpack_require__ (bootstrap:782)
at fn (bootstrap:150)
at Object.0 (tarotCard.js:148)
at __webpack_require__ (bootstrap:782)
at checkDeferredModules (bootstrap:45)
at Array.webpackJsonpCallback [as push] (bootstrap:32)
at main.chunk.js:1
I am also getting this error trying to use the basic template:
react-hot-loader.development.js:285 TypeError: Object(...) is not a function
at Provider (react-alert.js:309)
at ProxyFacade (react-hot-loader.development.js:791)
at mountIndeterminateComponent (react-dom.development.js:14811)
at beginWork (react-dom.development.js:15316)
at performUnitOfWork (react-dom.development.js:18150)
at workLoop (react-dom.development.js:18190)
at renderRoot (react-dom.development.js:18276)
at performWorkOnRoot (react-dom.development.js:19165)
at performWork (react-dom.development.js:19077)
at performSyncWork (react-dom.development.js:19051)
Line 309. It's a little obfuscated because of webpack.
var root = Object(react__WEBPACK_IMPORTED_MODULE_0__["useRef"])(null);
My code:
import { transitions, positions, Provider as AlertProvider } from 'react-alert'
import AlertTemplate from 'react-alert-template-basic'
const alertOptions = {
// you can also just use 'bottom center'
position: positions.TOP_RIGHT,
timeout: 5000,
offset: '30px',
// you can also just use 'scale'
transition: transitions.SCALE,
}
const App = props => (
<AlertProvider template={AlertTemplate} {...alertOptions}>
// ...
</AlertProvider>
)
My React was out of date, upgrading has resolved it.
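For anyone else hitting this: the likely mechanism (an assumption based on the stack trace, where Provider calls useRef) is that newer react-alert relies on React hooks, which only exist from React 16.8 onward. On older versions the hook is undefined, hence Object(...) is not a function. A quick sanity check:

```js
import React from "react";
// If this logs "undefined", the installed React predates hooks (16.8).
console.log(React.version, typeof React.useRef);
```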
|
gharchive/issue
| 2018-04-24T02:30:11 |
2025-04-01T06:40:20.402885
|
{
"authors": [
"Santiago8888",
"abecks",
"grounded-warrior",
"schiehll"
],
"repo": "schiehll/react-alert",
"url": "https://github.com/schiehll/react-alert/issues/77",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
63719643
|
Custom SQL table is missing from DB init scripts
I'm having a hard time understanding how the mysql scripts are to be used to deploy the database. Per the documentation during install I have run:
mysql -p << EOF
create database Seccubus;
grant all privileges on Seccubus.* to seccubus@localhost identified by 'seccubus';
flush privileges;
EOF
mysql -u seccubus -pseccubus < /opt/seccubus/var/structure_v6.mysql
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/data_v6.mysql
But after some initial testing on the site I'm getting errors that the customsql table is missing. Running:
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/upgrade_v5_v6.mysql
Created the missing table but also errored out:
ERROR 1062 (23000) at line 97: Duplicate entry '3' for key 'PRIMARY'
Is this a bug in the structure_v6.mysql file? Is it meant to create a full schema at that version and is just missing the table? Or should I have run an earlier structure_vN file and then run the upgrade? The installation actually still says to use the _v4 structure and data files as it runs. Is that the correct approach or is that message outdated?
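(For later readers, the ordered sequence the script names imply would look like the sketch below; the intermediate upgrade file name is an assumption, so check what actually ships in /opt/seccubus/var/.)

```sh
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/structure_v4.mysql
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/data_v4.mysql
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/upgrade_v4_v5.mysql  # hypothetical name
mysql -u seccubus -pseccubus Seccubus < /opt/seccubus/var/upgrade_v5_v6.mysql
```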
Fixed when we implemented DB upgrade unit tests; see #226
|
gharchive/issue
| 2015-03-23T13:10:41 |
2025-04-01T06:40:20.453417
|
{
"authors": [
"seccubus"
],
"repo": "schubergphilis/Seccubus_v2",
"url": "https://github.com/schubergphilis/Seccubus_v2/issues/186",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1072925580
|
Remove fluent asserts from raylib
version (unittest)
{
    import fluent.asserts;
}
Please delete this dead dependency from raymathext.d.
This is not a dead dependency, the unittests use fluent asserts. Though I'm not sure we need it, I'm willing to accept a PR that switches to regular asserts.
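A sketch of what such a switch looks like for a typical raymathext unittest (the specific vectors are illustrative):

```d
unittest
{
    // fluent-asserts style:
    //   (Vector2(1, 2) + Vector2(3, 4)).should.equal(Vector2(4, 6));
    // plain-assert equivalent, with no external dependency:
    assert(Vector2(1, 2) + Vector2(3, 4) == Vector2(4, 6));
}
```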
Done in 4.2.0
|
gharchive/issue
| 2021-12-07T05:18:50 |
2025-04-01T06:40:20.469507
|
{
"authors": [
"crazymonkyyy",
"schveiguy"
],
"repo": "schveiguy/raylib-d",
"url": "https://github.com/schveiguy/raylib-d/issues/12",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
232069602
|
Assets path in stylechecker
As I indicated in the Google Group, the stylechecker log is printing false even though the assets exist.
The project tree is:
.
├── aop.docx
├── aop.md
├── aop.pdf
├── aop.xml
├── errores.txt
└── img
├── img01.jpg
├── img02.jpg
├── img03.jpg
└── img04.jpg
In the XML, the links in each fig are:
<fig id="f01">
...
<graphic xlink:href="img/img01.jpg" />
</fig>
<fig id="f02">
...
<graphic xlink:href="img/img02.jpg" />
</fig>
<fig id="f03">
...
<graphic xlink:href="img/img03.jpg" />
</fig>
<fig id="f04">
...
<graphic xlink:href="img/img04.jpg" />
</fig>
In the parent directory, when I put:
$ stylechecker aop.xml > errores.txt
or:
$ stylechecker --assetsdir img/ aop.xml > errores.txt
or:
$ stylechecker --assetsdir ~/Descargas/scielo/aop/img/ aop.xml > errores.txt
or:
$ stylechecker --assetsdir ~/Descargas/scielo/aop/ aop.xml > errores.txt
or even:
$ stylechecker --assetsdir . aop.xml > errores.txt
I get the same output:
[
{
"_xml": "/home/nika-zhenya/Descargas/scielo/aop/aop.xml",
"assets": [
[
"img/img01.jpg",
false
],
[
"img/img02.jpg",
false
],
[
"img/img03.jpg",
false
],
[
"img/img04.jpg",
false
]
],
"dtd_errors": [],
"is_valid": true,
"style_errors": {}
}
]
I get the same output if in the XML I change the links to:
<fig id="fX">
...
<graphic xlink:href="./img/imgX.jpg" />
</fig>
Where X = each img number.
The stylechecker --sysinfo
{
"libxml_compiled_version": "2.9.3",
"libxml_version": "2.9.3",
"libxslt_compiled_version": "1.1.29",
"libxslt_version": "1.1.29",
"lxml_version": "3.7.3.0",
"packtools_version": "2.0.2",
"python_version": "3.6.1",
"system_path": [
"/usr/bin",
"/usr/lib/python36.zip",
"/usr/lib/python3.6",
"/usr/lib/python3.6/lib-dynload",
"/usr/lib/python3.6/site-packages"
],
"xml_catalog_files": "/usr/lib/python3.6/site-packages/packtools/catalogs/scielo-publishing-schema.xml"
}
Oh, it is because I am putting relative paths (img/imgX.jpg) when I shouldn't use them: https://www.ncbi.nlm.nih.gov/pmc/pmcdoc/tagging-guidelines/article/genprac.html#links
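In other words, the flat form expected there (assuming the images are shipped next to the XML):

```xml
<fig id="f01">
  ...
  <graphic xlink:href="img01.jpg" />
</fig>
```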
|
gharchive/issue
| 2017-05-29T17:25:08 |
2025-04-01T06:40:20.500056
|
{
"authors": [
"NikaZhenya"
],
"repo": "scieloorg/packtools",
"url": "https://github.com/scieloorg/packtools/issues/134",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
169653693
|
Link para "Citado em SciELO"
Nos resultados de busca, "Citado em SciELO" deve ter um link que abre um modal com os artigos citantes contento:
Título
Autor
Periódico
Link
Os dados podem ser obtidos em http://citedby.scielo.org/api/v1/pid/?q=S0100-15742003000100008
onde S0100-15742003000100008 é o sufixo do DOI do artigo.
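A minimal example call (sketch):

```sh
curl "http://citedby.scielo.org/api/v1/pid/?q=S0100-15742003000100008"
```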
@deandr
The value of q must be the SciELO PID, not the DOI suffix.
oops!
@alexxxmendonca
No worries, you don't need to know everything, haha.
Change the Portuguese text to "Citado em SciELO"
The modal box opens but just keeps loading. Is that normal?
OK for item 1.
For item 2, I am not familiar with the service, but I believe this is the normal behavior, since when searching for documents that have citations it returns the list. For example, searching for the exact title (in quotes) "Educação ambiental, cidadania e sustentabilidade", which is the example cited in this ticket's description, returns a list of documents.
@alexxxmendonca
This service's response is indeed slow. We are working to improve its performance, but it is a query processed in real time.
Some respond faster because there is a caching layer. For those that have already been queried, the response will be faster until the cache expires.
@deandr OK, it worked when searching for the article mentioned.
I will close the ticket once the "Citado em SciELO" label is fixed.
@alexxxmendonca
Label fixed and updated in the staging environment.
Tested and approved at http://homolog.search.scielo.org/
|
gharchive/issue
| 2016-08-05T17:11:39 |
2025-04-01T06:40:20.509728
|
{
"authors": [
"alexxxmendonca",
"deandr",
"fabiobatalha"
],
"repo": "scieloorg/search-journals",
"url": "https://github.com/scieloorg/search-journals/issues/281",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
653081178
|
Missing scheme in chart repo URI [skip ci]
The link was broken.
Great catch @pferreir, thanks for spotting it - I'll also rebase the gh-pages branch so it goes live on https://sciencemesh.github.io/charts/
|
gharchive/pull-request
| 2020-07-08T08:02:08 |
2025-04-01T06:40:20.513815
|
{
"authors": [
"SamuAlfageme",
"pferreir"
],
"repo": "sciencemesh/charts",
"url": "https://github.com/sciencemesh/charts/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2557254445
|
Migrate iiif print derivatives to file metadata objects
Summary
Verify derivatives get migrated when converting fedora objects into valkyrie resources.
In this example it looks like the derivatives did not get migrated.
Notes
⚠ Does not require client review
I have confirmed the following:
- when a work is migrated via edit-save, it does not migrate its file sets.
- when a file set is migrated via edit-save, it does create file metadata objects for all of the derivatives as well.
- prior to migrating a file set, the file set dropdown does not show the files to be downloaded (this is likely a Hyrax bug).
- after migrating a file set, the dropdown now includes the derivatives to download.
- the universal viewer was broken before migration, and is still broken for the child work that I migrated. This will be investigated via #838.
The results of migrating file set eb12e0f1-5a91-4607-bb0d-550e92091281 are shown below.
Migration of a fileset
files now appear in list
file metadata objects
[#<Hyrax::FileMetadata id=#<Valkyrie::ID:0x00007f29b8cc5ae0 @id="187a13cd-f455-4fd7-928d-2ea67c94a2dc"> internal_resource="Hyrax::FileMetadata" created_at=Tue, 01 Oct 2024 21:36:22.852141000 UTC +00:00 updated_at=Tue, 01 Oct 2024 21:36:22.852141000 UTC +00:00 new_record=false file_identifier=#<Valkyrie::ID:0x00007f29b8cc3e20 @id="disk:///app/samvera/derivatives/eb/12/e0/f1/-5/a9/1-/46/07/-b/b0/d-/55/0e/92/09/12/81-thumbnail.jpeg"> alternate_ids=[] file_set_id=#<Valkyrie::ID:0x00007f29b8cc4aa0 @id="eb12e0f1-5a91-4607-bb0d-550e92091281"> label=[] original_filename="81-thumbnail.jpeg" mime_type="image/jpeg" pcdm_use=[#<RDF::URI:0xc1c64 URI:http://pcdm.org/use#ThumbnailImage>] format_label=[] recorded_size=[5619] well_formed=[] valid=[] date_created=[] fits_version=[] exif_version=[] checksum=[] frame_rate=[] bit_rate=[] duration=[] sample_rate=[] height=[] width=[] bit_depth=[] channels=[] data_format=[] offset=[] file_title=[] creator=[] page_count=[] language=[] word_count=[] character_count=[] line_count=[] character_set=[] markup_basis=[] markup_language=[] paragraph_count=[] table_count=[] graphics_count=[] byte_order=[] compression=[] color_space=[] profile_name=[] profile_version=[] orientation=[] color_map=[] image_producer=[] capture_device=[] scanning_software=[] gps_timestamp=[] latitude=[] longitude=[] aspect_ratio=[]>,
#<Hyrax::FileMetadata id=#<Valkyrie::ID:0x00007f29b8c6eab0 @id="334cb409-39e8-42d5-9226-dff6bde66db9"> internal_resource="Hyrax::FileMetadata" created_at=Tue, 01 Oct 2024 21:36:29.336899000 UTC +00:00 updated_at=Tue, 01 Oct 2024 21:36:29.336899000 UTC +00:00 new_record=false file_identifier=#<Valkyrie::ID:0x00007f29b8c6ce40 @id="disk:///app/samvera/derivatives/eb/12/e0/f1/-5/a9/1-/46/07/-b/b0/d-/55/0e/92/09/12/81-xml.xml"> alternate_ids=[] file_set_id=#<Valkyrie::ID:0x00007f29b8c6dac0 @id="eb12e0f1-5a91-4607-bb0d-550e92091281"> label=[] original_filename="81-xml.xml" mime_type="application/xml" pcdm_use=[#<RDF::URI:0xc1c78 URI:http://pcdm.org/use#ExtractedText>] format_label=[] recorded_size=[73000] well_formed=[] valid=[] date_created=[] fits_version=[] exif_version=[] checksum=[] frame_rate=[] bit_rate=[] duration=[] sample_rate=[] height=[] width=[] bit_depth=[] channels=[] data_format=[] offset=[] file_title=[] creator=[] page_count=[] language=[] word_count=[] character_count=[] line_count=[] character_set=[] markup_basis=[] markup_language=[] paragraph_count=[] table_count=[] graphics_count=[] byte_order=[] compression=[] color_space=[] profile_name=[] profile_version=[] orientation=[] color_map=[] image_producer=[] capture_device=[] scanning_software=[] gps_timestamp=[] latitude=[] longitude=[] aspect_ratio=[]>,
#<Hyrax::FileMetadata id=#<Valkyrie::ID:0x00007f29b8c67260 @id="439030f2-a900-4afa-94b7-32d1bfc7be85"> internal_resource="Hyrax::FileMetadata" created_at=Tue, 01 Oct 2024 21:36:32.930822000 UTC +00:00 updated_at=Tue, 01 Oct 2024 21:36:35.298206000 UTC +00:00 new_record=false file_identifier=#<Valkyrie::ID:0x00007f29b8c65410 @id="disk:///app/samvera/hyrax-webapp/storage/files/eb/12/e0/eb12e0f15a914607bb0d550e92091281/20088972.ARCHIVAL--page-1.jpg"> alternate_ids=[#<Valkyrie::ID:0x00007f29b8c658c0 @id="eb12e0f1-5a91-4607-bb0d-550e92091281/files/818a5a41-4a4f-4893-8435-52cb8ecf5f14">] file_set_id=#<Valkyrie::ID:0x00007f29b8c66180 @id="eb12e0f1-5a91-4607-bb0d-550e92091281"> label=[] original_filename="20088972.ARCHIVAL--page-1.jpg" mime_type="image/jpeg" pcdm_use=[#<RDF::URI:0xc1c8c URI:http://pcdm.org/use#OriginalFile>] format_label=["JPEG File Interchange Format"] recorded_size=[1516897] well_formed=["true"] valid=[] date_created=[] fits_version=[] exif_version=[] checksum=["3e7d81a2f7e4bb6d89db8da6081e17f2"] frame_rate=[] bit_rate=[] duration=[] sample_rate=[] height=["4672"] width=["3306"] bit_depth=[] channels=[] data_format=[] offset=[] file_title=[] creator=[] page_count=[] language=[] word_count=[] character_count=[] line_count=[] character_set=[] markup_basis=[] markup_language=[] paragraph_count=[] table_count=[] graphics_count=[] byte_order=["big endian"] compression=["JPEG"] color_space=["YCbCr"] profile_name=["Artifex Software sRGB ICC Profile"] profile_version=["2.1.0"] orientation=[] color_map=[] image_producer=[] capture_device=[] scanning_software=[] gps_timestamp=[] latitude=[] longitude=[] aspect_ratio=[]>,
#<Hyrax::FileMetadata id=#<Valkyrie::ID:0x00007f29b8c60140 @id="748eda64-0ec6-464a-bd57-150603f9ef12"> internal_resource="Hyrax::FileMetadata" created_at=Tue, 01 Oct 2024 21:36:19.685388000 UTC +00:00 updated_at=Tue, 01 Oct 2024 21:36:19.685388000 UTC +00:00 new_record=false file_identifier=#<Valkyrie::ID:0x00007f29b8c7e488 @id="disk:///app/samvera/derivatives/eb/12/e0/f1/-5/a9/1-/46/07/-b/b0/d-/55/0e/92/09/12/81-json.json"> alternate_ids=[] file_set_id=#<Valkyrie::ID:0x00007f29b8c7f108 @id="eb12e0f1-5a91-4607-bb0d-550e92091281"> label=[] original_filename="81-json.json" mime_type="application/json" pcdm_use=[#<RDF::URI:0xc1ca0 URI:http://pcdm.org/use#ExtractedText>] format_label=[] recorded_size=[18642] well_formed=[] valid=[] date_created=[] fits_version=[] exif_version=[] checksum=[] frame_rate=[] bit_rate=[] duration=[] sample_rate=[] height=[] width=[] bit_depth=[] channels=[] data_format=[] offset=[] file_title=[] creator=[] page_count=[] language=[] word_count=[] character_count=[] line_count=[] character_set=[] markup_basis=[] markup_language=[] paragraph_count=[] table_count=[] graphics_count=[] byte_order=[] compression=[] color_space=[] profile_name=[] profile_version=[] orientation=[] color_map=[] image_producer=[] capture_device=[] scanning_software=[] gps_timestamp=[] latitude=[] longitude=[] aspect_ratio=[]>,
#<Hyrax::FileMetadata id=#<Valkyrie::ID:0x00007f29b8c791b8 @id="f414cca5-c16a-47b2-b017-e14847458f0f"> internal_resource="Hyrax::FileMetadata" created_at=Tue, 01 Oct 2024 21:36:26.139220000 UTC +00:00 updated_at=Tue, 01 Oct 2024 21:36:26.139220000 UTC +00:00 new_record=false file_identifier=#<Valkyrie::ID:0x00007f29b8c77548 @id="disk:///app/samvera/derivatives/eb/12/e0/f1/-5/a9/1-/46/07/-b/b0/d-/55/0e/92/09/12/81-txt.txt"> alternate_ids=[] file_set_id=#<Valkyrie::ID:0x00007f29b8c781c8 @id="eb12e0f1-5a91-4607-bb0d-550e92091281"> label=[] original_filename="81-txt.txt" mime_type="text/plain" pcdm_use=[#<RDF::URI:0xc1cb4 URI:http://pcdm.org/use#ExtractedText>] format_label=[] recorded_size=[3981] well_formed=[] valid=[] date_created=[] fits_version=[] exif_version=[] checksum=[] frame_rate=[] bit_rate=[] duration=[] sample_rate=[] height=[] width=[] bit_depth=[] channels=[] data_format=[] offset=[] file_title=[] creator=[] page_count=[] language=[] word_count=[] character_count=[] line_count=[] character_set=[] markup_basis=[] markup_language=[] paragraph_count=[] table_count=[] graphics_count=[] byte_order=[] compression=[] color_space=[] profile_name=[] profile_version=[] orientation=[] color_map=[] image_producer=[] capture_device=[] scanning_software=[] gps_timestamp=[] latitude=[] longitude=[] aspect_ratio=[]>]
Closing - no work to do
|
gharchive/issue
| 2024-09-30T17:21:43 |
2025-04-01T06:40:20.521157
|
{
"authors": [
"ShanaLMoore",
"laritakr"
],
"repo": "scientist-softserv/adventist_knapsack",
"url": "https://github.com/scientist-softserv/adventist_knapsack/issues/833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2223965705
|
:gift: Add custom rubocop rule to double combo
Summary
ref: https://assaydepot.slack.com/archives/C0313NKG2DA/p1712176060340309
Notes
https://github.com/samvera/hyrax/pull/6221/files
https://github.com/samvera/hyrax/commit/ef2ffa446fc1fccfa36793d2ba0404931dd35ce8
|
gharchive/issue
| 2024-04-03T21:31:19 |
2025-04-01T06:40:20.523933
|
{
"authors": [
"ShanaLMoore",
"kirkkwang"
],
"repo": "scientist-softserv/hykuup_knapsack",
"url": "https://github.com/scientist-softserv/hykuup_knapsack/issues/199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
524605923
|
linalg.eigh generalized eigenvalue problem call to LAPACK DSYTRD returns error for n >= 32767
Hi all,
for my research project I have to deal with very high dimensional dense generalized eigenvalue problems and try to solve them with scipy.linalg.eigh.
Everytime the dimensions of the matrices exceed 32.766 x 32.766 the function returns an error.
The following example is sufficient to reproduce the error:
import numpy as np
import scipy.sparse
import scipy.linalg
n = 32767  # anything below 32767 works
a = np.random.rand(n, n)
a = a.T.dot(a) + scipy.sparse.identity(n)  # ensure that matrix is sym. pos. def.
b = np.random.rand(n, n)
b = b.T.dot(b) + scipy.sparse.identity(n)
scipy.linalg.eigh(a, b)
Warning: This example uses a LOT of RAM but is the smallest possible error example.
Error message:
** On entry to DSYTRD parameter number 9 had an illegal value
Segmentation fault (core dumped)
Scipy/Numpy/Python version information:
1.3.2 1.17.4 sys.version_info(major=3, minor=6, micro=8, releaselevel='final', serial=0)
Since LAPACK returns that the 9th parameter has an illegal value I suppose that there might be an error in the scipy call to the LAPACK function.
Thank you very much for your efforts in advance!
Best regards
Yes, that's because LWORK is a 32-bit signed integer, so you cannot use more than that to address the workspace. However, your optimal block size is probably more than 2, and hence you get a result that overflows the 32-bit integer. See the LWORK definition here.
Unless you somehow use a LAPACK compiled with 64-bit (long) Fortran integers, you can't get past that value. Unfortunately, there is nothing for us to do on that front.
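A back-of-the-envelope check (a sketch; it assumes eigh's default generalized driver is the divide-and-conquer ?sygvd, whose documented minimum workspace is 1 + 6N + 2N**2):

```python
import numpy as np

n = 32767
lwork = 1 + 6*n + 2*n**2       # minimum LWORK for dsygvd
print(lwork)                   # 2147549181
print(np.iinfo(np.int32).max)  # 2147483647 -> lwork no longer fits in int32
# For n = 32766 the same formula gives 2147418109, which still fits,
# matching the observed 32,766 -> 32,767 boundary.
```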
|
gharchive/issue
| 2019-11-18T20:38:21 |
2025-04-01T06:40:20.719373
|
{
"authors": [
"Schleuss94",
"ilayn"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/11080",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
937308769
|
STY: Maths formatting
This issue is linked to #14330
To be able to use tools (like, but not limited to Black), we need to define how we, as the scientific community and not just SciPy, want mathematical equations to be rendered.
The goal of this issue is to document and establish a strict set of rules to write maths. The rules must be coherent, extensive and opinionated (one way to do something, unambiguous wording) so they can be integrated in a tool (that tool may be Black).
I think such a document is missing from the scientific community and my hope is that we can all agree on something :smiley:
To quickstart things here are some ideas:
Formatting Mathematical Expressions
To format mathematical expressions, the following rules must be followed. These rules respect and complement PEP8 (relevant sections include id20 and id28).
If operators with different priorities are used, add whitespace around the operators with the lowest priority(ies).
There is no space before and after **.
There is no space before and after the operators * and /. The only exception is if the expression consists of a single operator linking two groups.
There is a space before and after - and +, except if: (i) the operator is used to define the sign of a number; (ii) the operator is used in a group to mark higher priority.
When splitting an equation, new lines should start with the operator linking the previous and next logical blocks. A single digit or a bracket alone on a line is forbidden. Use the available horizontal space as much as possible.
# Correct:
i = i + 1
submitted += 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)
dfdx = sign*(-2*x + 2*y + 2)
result = 2 * x**2 + 3 * x**(2/3)
y = 4*x**2 + 2*x + 1
c_i1j = (1./n**2.
* np.prod(0.5*(2.+abs(z_ij[i1, :])
+ abs(z_ij) - abs(z_ij[i1, :]-z_ij)), axis=1))
# Wrong:
i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)
dfdx = sign * (-2 * x + 2 * y + 2)
result = 2 * x ** 2 + 3 * x ** (2 / 3)
y = 4 * x ** 2 + 2 * x + 1
c_i1j = (1.
/ n ** 2.
* np.prod(0.5 * (2. + abs(z_ij[i1, :])
+ abs(z_ij) - abs(z_ij[i1, :] - z_ij)), axis=1))
I am -1 on any such attempt to enforce such strict, extensive, and opinionated rules. PEP8's recommendations are the right level for developer guidelines, IMO. I'm not sure that such algorithmically-complete rules exist that are simultaneously both terse enough to be implementable and also don't create unreadable horrors in specific circumstances.
Now, if you want to develop an auto-formatting algorithm that uses whitespace in mathematical expressions more readably than black, that's great! Develop it somewhere and see if people like it. I might even use it if it's opt-in, especially if I can use it through my editor over the current selection of lines, not the whole file.
@tupui I think the +/- part would benefit from "unary"/"binary" terminology.
And can you explain what you mean by
(ii) the operator is used in a group to mark higher priority.
(in the same place)?
And why in
* np.prod(0.5*(2.+abs(z_ij[i1, :])
there is no whitespace around the binary plus?
And is using 1., 2. a conscious choice or just force of habit? If this style guide takes off, taking a stance on the likes of 1. and .1 might be necessary.
That's an example of a (possibly-useful) ambiguity that can be used to make things more readable insofar as they communicate some subtle high-level semantics. That kind of ambiguity would be unavailable to algorithmic auto-formatters.
If the goal is to define rules that can be used to build an algorithmic auto-formatter, I recommend just going and implementing the algorithms and using the implementation as the object of discussion. Human brains just aren't good at predicting what the algorithm is going to do in all of the important cases just from the human-readable rules. Making a quick implementation gives us something concrete to discuss, we can throw real examples at it in bulk and evaluate the results quickly, and the process of implementation will make manifest all of the ambiguities.
As an outsider in scipy-dev, I resisted commenting on #14330. Here, however, you seem to be aiming to codify how the wider scientific community (probably restricted to "scientific python"?) writes math.
The goal of this issue is to document and establish a strict set of rules to write maths.
Why is a strict set of rules beyond "valid Python" necessary? The goal appears to be aimed at reformatting working Python code that someone wrote, and quite likely someone else reviewed or has read, and likely someone else again modified. Scipy has a lot of contributors - lots of people have read the code. The clarity of the math cannot have been too bad or objectionable. If there are isolated cases where it needs fixing, I'm pretty sure you do not need an "established strict set of rules" to clean up the code.
The rules must be coherent, extensive and opinionated (one way to do something, unambiguous wording)
The Zen of Python uses "should" and "obvious" when talking about "one way to do something". It does not mandate that there can be only one way to do something.
so they can be integrated in a tool (that tool may be Black).
Why would you want to do that? A key feature of Python is that the code is readable, and hard to make impenetrable to a reasonably knowledgable person. Never mind which "strict set of rules" is needed, why is any strict set of rules needed? Why is any code re-formatting tool needed?
When writing Python code with even a modest amount of care, you can be pretty sure that someone else (maybe yourself in 2 years) will be able to read and (at least sort of) understand what it tries to do. This notion that whitespace between operators or mixing of single and double quotes in a codebase will somehow cause cognitive dissonance or start formatting arguments is somewhat hard for me to even comprehend. Do such things actually happen, ever with Python? There were tabs/spaces arguments, are there "1 or 0 whitespaces around '+'" arguments?
Are people confused by single quotes?
Is there evidence that code formatting is a problem? What fraction of scipy, numpy, scikit-xxx PRs have had significant discussions (let alone "controversies") about Python formatting? How many of those are not resolved by "let's be sensible and mostly follow PEP8 when we can"?
I must say that when I first heard of Black I thought it might have been an elaborate hoax. It appears to misread the intent of the namesake quote about automobiles: At the time, there was one choice of color, and the question was whether to expand that choice. The quote was expressing: "don't focus on styling, focus on features and performance". Instead, Black asserts that variation in the formatting of working, valid Python code is a problem that needs fixing, focusing attention on the styling of already highly readable and working code at the expense of features and performance. It creates a problem where none existed.
The intention of Black is that PRs to fix bugs or add features will be held and more work demanded of the contributor in order to meet styling rules. The feature might be accepted, but only if it fits the style. Discrepancies end not in argument but in acquiescence (or perhaps in disgust -- pay attention to the ones who walk away). The intention of Black is that acquiescence ends any debate (was there any?). It enforces uniformity without exception or nuance, expelling non-compliant contributions when necessary. Many of us in the sciences are trying to fully internalize notions of belonging, access, equity, diversity, and inclusion. Would formatting of code submitted by the visually impaired be disadvantaged by these rules? Would it make screen readers more accurate? How does Black improve the community? The approach taken by Black is deliberately and proudly polarizing, basically for the sake of being proud about being polarizing. Let's have a little less of that, please.
If one wanted to follow the engineering wisdom from the Ford quote, they would be careful about formatting new code, try to be consistent and readable, but certainly not fix what ain't broke, and focus on features and performance over styling. They would be sensible. They probably would not even engage in this conversation. My apologies for not being strong enough to hold my tongue.
To format mathematical expressions, the following rules must be followed. These rules respect and complement PEP8 (relevant sections include id20 and id28)
PEP8 is a guide, not a mandate. It says
"If operators with different priorities are used, consider adding whitespace around the operators with the lowest priority(ies). Use your own judgment; however, never use more than one space, and always have the same amount of whitespace on both sides of a binary operator:"
Somehow this got turned into 4 mandatory rules (with one exception!) about how much whitespace there will be around all binary operators. I did not read that as "Use your own judgment as long as you agree with me".
I think my main objection to this comes down to that line that reads "#Wrong" there, just above all the working code. That is code that is "not-PEP8 compliant", it is not "Wrong". The calculated values will not change. Is anyone confused by this code? If you're in there working on or reviewing the code and want to make it a bit more PEP8-ish, sure go ahead. If it looks readable, it is readable. If you decide that whitespace around a '+' sign isn't needed in something like np.sin(array[2*i+1, :]), well, maybe that's OK sometimes.
Sorry for the length.
Thanks @newville, that's an opinionated but balanced take. I am probably the last person to defend Black, but I think you have taken its use and the problem it promises to solve a bit differently than intended. What Black offers is a non-negotiable set of code standards. This becomes particularly effective when many coders have to touch the same codebase frequently. Some come from a Java-like background and don't mind going off to the second screen horizontally, and others come from different backgrounds, all working on the same Python code.
The amount of time wasted on code reviews in that regard, in terms of business hours, is immense: one says "I don't like the PEP8 line length", the others say other things, etc. Here the use of Black is pretty much justified, since instead of bringing your developers to a common understanding, the team delegates the code structuring to Black and agrees not to discuss it. Then everything is Black'ened and whatever comes out of it hopefully makes sense. And quite often it does the sensible thing. Code reviews get saner (as much as they can, I mean we are talking about devs here). Now what we have accomplished is that we have adopted the standard of the core devs of Black and we are done.
However, as many people quickly found out in the past (including us, after using it for about 4 months), this standard is not written by scientific or number-crunching people. And its strict standard often does not go together with, say, the numpy standard or pandas .function(args).function(args) chains. That's a typical complaint and I think it is justified. So I'm not a fan of Black in that regard since it makes arrays wonky and uglier (IMO).
The discussion here is whether we should delegate code formatting to Black and be done with it. However, its choices as mentioned above, especially about ** and operator precedence, are almost always wrong for human eyes. For example, in the correct formatting above, I would instead have written it as
c_i1j = (1./(n ** 2.) * np.prod(0.5 * (2. + abs(z_ij[i1, :])
                                       + abs(z_ij) - abs(z_ij[i1, :] - z_ij)), axis=1))
because it is obvious that it would be a long line, hence at least try to make that obvious by breaking it at a sensible point and not bring in strange staircase formatting. And Black starts to fail more often than not on the unary ops in terms of readability. But in any case you can see what was and what is now in the scikit-learn https://github.com/scikit-learn/scikit-learn/pull/18948 conversion.
Some lines are clearly disgusting to see in the "after" state, but I tend to like the relaxed 88 character line length, since 79 is a bit too much in terms of horizontal space constraint in my opinion; but we won't need Black to have that kind of relaxation.
Thank you @newville for expressing your sentiment on this.
As @ilayn pointed out, the goal is to save everyone endless discussions about styling.
Sure, PEP8 was written as a guide and even starts with A Foolish Consistency is the Hobgoblin of Little Minds. Still, over the decades we have seen that this guide was used as an authoritative way to write Python. And we could argue that it served the Python community well in general.
Having a common way of writing things across projects has an underrated value. Here I am not advocating to change the face of maths in all Python scripts used in science. I am more asking to reflect on how to write maths in large libraries such as SciPy and NumPy. The difference is paramount. For developers having to navigate across these different projects, I think there is great value in having common practices. It enables faster onboarding of new contributors and removes lots of churn. Of course, long-time contributors might not agree, as they have years of experience navigating around these and other projects.
Newcomers, students, and as you rightfully noted, people with disabilities, would greatly benefit from a common ground. Having a unified language helps lower the barriers, and tools can be written to help them. Imagine if Black (or anything else) was used by every single project: you could more easily design tools that could read and write code for visually impaired people. Plus, things like that can/should be linked to pre-commit hooks. So no matter what you do, when your code appears in the PR it will have the expected style without you having to do anything.
Yes, it removes the developer's own style, sensitivity. But I will argue that we should not be able to see its mark in such a large open source project. As developers, we read code all day long, and having to do this contextual change is not free; it can also lead to misreads, bugs. We certainly do not want to have a different developer style for every single file. You can make the parallel with standards in industry or rules in our society. It's not because the big thing that everyone is depending upon is very strict on some aspects that you have to do the same for your own project and are not free anymore.
Lastly, I would also note that we currently have tons of hard rules which involve so much more thinking and manual actions. Things like input validation, proper way to test, documentation, CI, etc. Here we are mostly talking about spacing that a machine would do for us so we don't have to talk about it.
the goal is to save everyone endless discussions about styling.
AFAICT, we largely do not have endless discussions about styling in the actual code reviews. We only have endless discussions about styling when someone proposes to use black.
Newcomers, students, and as you rightfully noted, people with disabilities, would greatly benefit from a common ground.
Citation needed. I have seen no evidence that the level of formatting that black and company provide any measurable benefits in this regard.
I've laid down this marker before, and I think it satisfies all of the evidenced benefits that you want from black: my ideal auto-formatting tool is one that leaves style-conforming code alone and only fixes up code that deviates. Somehow the benefits of some kind of auto-formatter got conflated with requiring a canonicalizing auto-formatter. black is not the only possible solution.
At minimum, a tool like darker can be fruitfully used by contributors to apply auto-formatting just to their contribution. All of the benefits with respect to the easing of writing code apply just as much to darker as to black. I recommend that you implement your preferred math styling rules in a way that can be plugged into that, and we can evaluate the results concretely rather than spinning out more endless discussions about styling in the abstract.
my ideal auto-formatting tool is one that leaves style-conforming code alone and only fixes up code that deviates.
It looks like autopep8 might fit that bill.
Let me jump in here, since apparently there's two things being mixed:
1. do we want/need a code formatter like black?
2. is it possible to come up with consistent guidelines for writing math?
This issue is not about 1, only about 2. No change to any SciPy way of working is proposed.
What black does today for math is bad, really bad. Something like hypot2 = 2 * x + 3 * y ** 2 is code no numerical Python person would write by hand. PEP 8 also falls well short here; it is, for example, completely silent on the power operator. So the question is: is it possible to do better than black and PEP 8. I'm pretty sure the answer is yes, the question is just how much better. Once we have the answer, at least there's something to point tool authors to. Maybe black et al. can implement it, maybe not. If they did, it would be helpful.
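For illustration, here are the two spacings at issue side by side, the second being what black produces today as described above (this is only a sketch of the contrast, not a proposed rule):

# conventional numerical style: tight spacing hints at operator precedence
hypot2 = 2*x + 3*y**2

# what black produces today
hypot2 = 2 * x + 3 * y ** 2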
This issue is linked to #14330
Yes, this is about 1. A change to the SciPy way of working has been proposed. This particular issue is a sub-discussion in response to a specific objection to that proposal.
I agree that there are useful attempts at improving auto-formatting for mathematical expressions. I don't think it's all that helpful to try to hash them out here if we're foreclosing the idea of changing the SciPy way of working. Just go implement it somewhere and ask for feedback on the mailing list.
FWIW, I circulated a project idea among students locally. Will see if anything comes out of this. No definite ETA at the moment, but we'll definitely report back if anything worthwhile comes out of this.
This issue is linked to #14330
Yes, this is about 1. A change to the SciPy way of working has been proposed. This particular issue is a sub-discussion in response to a specific objection to that proposal.
No, it is not about 1; Ralf is correct. I am sorry if I misled you with linking to this issue. But read what I wrote in the description and the further reply. I am only asking to write some guidelines for mathematical equations. It had some conditionals, I just added a bit more.
I agree that there are useful attempts at improving auto-formatting for mathematical expressions. I don't think it's all that helpful to try to hash them out here if we're foreclosing the idea of changing the SciPy way of working. Just go implement it somewhere and ask for feedback on the mailing list.
Sorry but I do not have the expertise (and time. I already followed this path a few times after you suggested this and I just lost time here...) to implement all the ideas that I have. And in this case, there might be existing tools doing the job and they might just need some directions. This is the goal of this issue, to agree on how we should write maths. Then whatever we do with this document is extra. It can start as a general PEP8-ish on our contributing guide up to something used by auto formatting tools.
The description is very focused on defining rules for black-like tools, not only guidelines. If that is no longer the intent, you may wish to amend the wording (preserving the old version for reference, of course).
To be able to use tools (like, but not limited to Black),
a strict set of rules to write maths.
The rules must be coherent, extensive and opinionated (one way to do something, unambiguous wording) so they can be integrated in a tool (that tool may be Black).
Those are worthwhile goals because the state of those tools is pretty pathetic for math expressions. But to formulate rules that work within the constraints of algorithmic auto-formatters, you really need to work from code first. The problem that these auto-formatters face is much more constrained than just a human-readable style guide that we can add to our contributor docs. It seems like these ought to be synergistic goals, that making progress on one will gain you progress on the other whichever order you do them, but I think the similarities are deceptive; there are conflicts due to the different constraints on who/what is performing the style recommendations. So there are two tracks that you can go down: build an auto-formatter that produces output that you like, and writing human-level style guides.
If you want to make progress on the former, I think you have to start with code. There's no benefit to having long discussions here on scipy/scipy about it. Just go build it and ask for feedback from our community on the mailing list. Until you are proposing that scipy use that tool, it's not really on-topic here on the issue tracker.
If you want to make progress on the latter, that's definitely on-topic here, but I think you need to relax the "extensive and opinionated so they can be integrated in a tool" constraints.
@tupui @rgommers
Let me jump in here, since apparently there's two things being mixed:
1. do we want/need a code formatter like black?
2. is it possible to come up with consistent guidelines for writing math?
This issue is not about 1, only about 2. No change to any SciPy way of working is proposed.
@rgommers Um, yes it is. And not only for SciPy but "we, as the scientific community and not just SciPy". The goal is very clearly stated as defining how math is rendered so that tools like Black may be used. It is not isolated to point 2.
This issue is linked to #14330
Yes, this is about 1. A change to the SciPy way of working has been proposed. This particular issue is a sub-discussion in response to a specific objection to that proposal.
No, it is not about 1; Ralf is correct. I am sorry if I misled you with linking to this issue. But read what I wrote in the description and the further reply. I am only asking to write some guidelines for mathematical equations. It had some conditionals, I just added a bit more.
Huh? The message sent to the mailing list on 1 July (https://mail.python.org/pipermail/scipy-dev/2021-July/024924.html) has a subject line of "[SciPy-Dev] Code formatting: black".
Issue #14330 is titled "MAINT/STY: use Black formatting" and opens with "I propose to apply Black on our code base."
This issue begins:
This issue is linked to #14330
To be able to use tools (like, but not limited to Black), we need to define how we, as the scientific community and not just SciPy, want mathematical equations to be rendered.
Now you both say that this is not about using Black to reformat Scipy or other scientific code and apologize if some of the links might mistakenly lead someone to conclude that?
I think that if you are concerned about consistency, there may be somewhere closer to home that may need more attention than code formatting.
So clearly the issue description wasn't as focused as it should have been. @tupui discussed this with me, that's why I knew the goal was (2). I even pre-read what he wrote (but not thoroughly enough), so I'm partly to blame for it being unclear.
From initial discussion on gh-14330 it's clear that many people are -1 on using black because its math formatting is terrible. So that's on hold / rejected, unless math formatting can be fixed. There's no point continuing that discussion. Clearly using black is blocked. I'm happy to comment on the PR saying exactly that. We can also just close that PR.
By the way, I have no clear preference about any of this. I've only ever used black once, and it wasn't ideal. I'm happy to give it a chance though, if and only if all blocking concerns have been resolved.
Since this issue has obviously derailed, I suggest also closing this one and starting fresh. It's cleaner than trying to amend the initiating issue description and resuming an essentially new discussion mid-thread.
I still think it's questionable that the scipy/scipy issue tracker is the best place to have the amended discussion. Maybe the SPEC Discourse is a more appropriate venue?
Since this issue has obviously derailed, I suggest also closing this one and starting fresh. It's cleaner than trying to amend the initiating issue description and resuming an essentially new discussion mid-thread.
Agreed, let's close it.
I still think it's questionable that the scipy/scipy issue tracker is the best place to have the amended discussion. Maybe the SPEC Discourse is a more appropriate venue?
That does sound like a good suggestion. We never had a place like that, but we do now - it'd be good to try and start using it. There's still little traffic on that Discourse, but we can point people to it on the mailing list.
Sounds like a good idea, agreed.
Thank you all for the discussion. In the future, I would hope we could have discussions with a productive outcome and fewer emotions.
This would have been a good transfer to GitHub Discussions, had we enabled it, by the way.
|
gharchive/issue
| 2021-07-05T18:59:12 |
2025-04-01T06:40:20.767553
|
{
"authors": [
"adeak",
"ev-br",
"ilayn",
"newville",
"rgommers",
"rkern",
"tupui"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/14354",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1198889193
|
import scipy.optimize calling
Describe your issue.
Thank you very much for your development of scipy.
If you know a solution, please tell me.
The error occurred when simply importing scipy.optimize on PyPy, where scipy etc. were installed.
$ pypy3
Python 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)
[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>> import scipy.optimize
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): IndentationError: ('unexpected indent', ('<string>', 2, 1, ' class pybind11_static_property(property):\n', 2))
Aborted
root@48933591bab3:~#
PyPy was installed based on the official PyPy docker image.
Env:
MacOS: catalina
Docker: 4.7.0 (77141)
PyPy 7.3.9, 7.3.8 (both)
pybind11 2.9.2
Reproducing Code Example
import scipy.optimize
Error message
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): IndentationError: ('unexpected indent', ('<string>', 2, 1, ' class pybind11_static_property(property):\n', 2))
Aborted
SciPy/NumPy/Python version information
PyPy 7.3.9, Scipy 1.8.0, Numpy 1.22.3
It looks like there may have been a bug in PyPy; see https://github.com/conda-forge/pypy-meta-feedstock/issues/25. Please upgrade PyPy, SciPy (to 1.9.3), and open a new issue with a title that identifies the problem, e.g. "BUG: import scipy.optimize fails on PyPy" . (That said, I'm not sure if we support PyPy right now, so I can't guarantee that it will be addressed.)
|
gharchive/issue
| 2022-04-10T05:43:37 |
2025-04-01T06:40:20.774921
|
{
"authors": [
"HirotsuguMINOWA",
"mdhaber"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/issues/15966",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
614234694
|
ENH: Modifies shapiro to return a named tuple
Changes the return of the shapiro function, which now returns a named tuple, ShapirotestResult, with "statistic" and "pvalue" fields. This change was made to make the function's return similar to other functions like scipy.stats.normaltest and scipy.stats.ttest_ind.
Previously we had to create two objects, something like stats, p = scipy.stats.shapiro(x). Otherwise, we would have to access these values by index (using [0] or [1]), which makes understanding more difficult for someone who does not know exactly the behavior of the function.
With this implementation, we can now write shapiro_test = scipy.stats.shapiro(x) and, for example, get the p-value with shapiro_test.pvalue.
The function description has also been updated.
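A minimal usage sketch of the new return value (both access styles shown; the data here is illustrative):

import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)
x = rng.normal(size=100)

# old style still works: the result unpacks like a plain tuple
statistic, pvalue = stats.shapiro(x)

# new style: access the fields by name
result = stats.shapiro(x)
print(result.statistic, result.pvalue)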
@w-rfrsh Please test your added feature into scipy/stats/tests/test_morestats.py
@w-rfrsh Please test your added feature into scipy/stats/tests/test_morestats.py
Done :D
LGTM now, merged. Thanks @w-rfrsh
|
gharchive/pull-request
| 2020-05-07T17:47:29 |
2025-04-01T06:40:20.778931
|
{
"authors": [
"EverLookNeverSee",
"rgommers",
"w-rfrsh"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/12056",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
793306432
|
ENH: interpolate: add input validation to check input x-y is strictly increasing for fitpack.bispev and fitpack.parder
Reference issue
fix #8565
What does this implement/fix?
Some scipy users are confusing this fitpack error message.
https://github.com/scipy/scipy/blob/1d10a4afe95cdd4bcae80db5f312c466d9921d4e/scipy/interpolate/fitpack2.py#L910
like:
#8565,
https://github.com/cmbant/CAMB/issues/40
python - Unable to use `scipy.interpolate.RectBivariateSpline` with `matplotlib.pyplot,plot_surface` - Stack Overflow,
python: How to pass arrays into Scipy Interpolate RectBivariateSpline?
The reason for this error message is that the input data is invalid, which is validated in fitpack.bispev
https://github.com/scipy/scipy/blob/2a9e4923aa2be5cd54ccf2196fc0da32fe459e76/scipy/interpolate/fitpack/bispev.f#L45-L50
and in fitpack.parder (when derivative is calculated)
https://github.com/scipy/scipy/blob/5f4c4d802e5a56708d86909af6e5685cd95e6e66/scipy/interpolate/fitpack/parder.f#L50-L54
This restriction requires that the input arrays x and y be strictly increasing, but the Python code does not check this and just shows the FORTRAN error code.
Actually, the doc stated that "If grid is True: The arrays must be sorted to increasing order.", but it seems that some users do not notice it.
So, I added input validation for these fitpack functions to check that the input x and y are strictly increasing, along with a test for it.
I think there is no backward compatibility break because the new validation throws ValueError as before.
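A minimal sketch of this kind of validation (the helper name here is hypothetical, for illustration only):

import numpy as np

def _check_strictly_increasing(x, name):
    # hypothetical helper: reject an axis array that is not strictly
    # increasing before it reaches the FORTRAN routines
    x = np.asarray(x)
    if x.size > 1 and not np.all(np.diff(x) > 0.0):
        raise ValueError(f"{name} must be strictly increasing")

_check_strictly_increasing([0.0, 1.0, 2.0], "x")  # passes silently
_check_strictly_increasing([0.0, 2.0, 1.0], "y")  # raises ValueError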
Thanks @AtsushiSakai , @tylerjereddy, merged.
Hey, just wanted to say thank you, I just hit that issue and I'm really happy to see it's already been tackled! :)
|
gharchive/pull-request
| 2021-01-25T11:37:26 |
2025-04-01T06:40:20.785839
|
{
"authors": [
"AtsushiSakai",
"StanczakDominik",
"ev-br"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/13436",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1953176015
|
ENH: stats: add support for masked arrays for circular statistics functions
Reference issue
Towards #14651
What does this implement/fix?
Adds support for masked arrays in stats.circmean, stats.circvar, and stats.circstd
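A sketch of the intended usage, assuming masked entries are simply excluded from the statistic:

import numpy as np
from scipy import stats

angles = np.ma.masked_array([0.1, 0.2, 6.0, 0.15],
                            mask=[False, False, True, False])
print(stats.circmean(angles))  # computed over the unmasked angles only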
It looks like a lot of code that deals with NaNs is now unused and can be removed, right?
Removed the unnecessary code @mdhaber.
There seems to be something in the decorator's handling of masked arrays that promotes the dtype (or at least masked arrays are promoting from 32 to 64 for all three of these functions). Could your next PRs be to address this (if it is a bug in the decorator) and #19350 (comment)?
Yeah, will try to get a PR up tomorrow!
Hi @tirthasheshpatel , can you open that PR? SciPy 1.12 is scheduled to branch in about 2 weeks, and it's important to get that in. It would also be nice if we could get most of the remaining low-hanging fruit in there.
There seems to be something in the decorator's handling of masked arrays that promotes the dtype (or at least masked arrays are promoting from 32 to 64 for all three of these functions).
It doesn't seem to be the decorator's fault:
In [1]: from scipy import stats
In [2]: import numpy as np
In [3]: stats.circmean(np.ones(5, dtype=np.float32), _no_deco=True).dtype
Out[3]: dtype('float64')
Interestingly, that's also because NumPy treats arrays and scalars differently when it comes to dtype promotion:
In [4]: (np.float32(9.) * 2.).dtype
Out[4]: dtype('float64')
In [5]: np.ones(5, dtype=np.float32) * 2.
Out[5]: array([2., 2., 2., 2., 2.], dtype=float32)
Might be an issue for a lot of functions because of this behavior.
Right, I actually found that, too. Glad we came to the same conclusion. If it's not the decorator's fault, don't worry about it for now.
But there is still https://github.com/scipy/scipy/pull/19350#issuecomment-1758711603.
|
gharchive/pull-request
| 2023-10-19T22:51:39 |
2025-04-01T06:40:20.791136
|
{
"authors": [
"mdhaber",
"tirthasheshpatel"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/19412",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1988423448
|
MAINT/DOC: stats: fix lint errors
Reference issue
Towards gh-19490.
What does this implement/fix?
Appeases the linter for all current errors related to stats, to stop potential future CI failures.
Additional information
Alternatively, we could use noqa, or even make the linter ignore these files, if that seems more appropriate.
I don't think this needs a commit ignore. It's a meaningful improvement to not redefine these functions once per iteration.
|
gharchive/pull-request
| 2023-11-10T21:24:34 |
2025-04-01T06:40:20.793086
|
{
"authors": [
"lucascolley",
"mdhaber"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/19507",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2595120435
|
MAINT: signal: lombscargle docstring tweaks
Reference issue
Closes https://github.com/scipy/scipy/issues/2162
Also addresses passing in inputs as lists, per https://github.com/scipy/scipy/issues/8787
What does this implement/fix?
Tweaks to the docstring to make the measurement baseline requirement more explicit, as well as other minor corrections.
Additional information
A few other minor corrections are being made that were noticed while updating the text suggested in the referenced issue.
@DietBru You had suggested in https://github.com/scipy/scipy/pull/21277#discussion_r1694270980 that I could use _rename_parameter() to rename the misleading x parameter to t in the function. Since I'm going for broke on refinements at this point, could I go ahead and commit that change here? Also, if so, does dep_version need to be specified?
@tylerjereddy Not sure when the cutoff date is, but this might also be worthwhile to get into 1.15.0.
Would you be able to tackle https://github.com/scipy/scipy/issues/8787#issuecomment-2421424091 here too?
I think because it's important to get these changes merged, it would be best if renaming of arguments is left to a different PR, as there might be more discussion required for that.
Would you be able to tackle #8787 (comment) here too?
I'll take a look at it today.
I think because it's important to get these changes merged, it would be best if renaming of arguments is left to a different PR, as there might be more discussion required for that.
Understood.
I can reproduce the error in https://github.com/scipy/scipy/issues/8787 , but I haven't yet figured out the solution.
I think this might be related to the "nyquist frequency" (not exactly the same thing for uneven sampling). I'm going to investigate further to see if I can test for this ahead of time instead of just throwing a divide-by-zero error.
Yup. It is, but I don't see any way to test for this ahead of time, short of doing all of the calculations ahead of time.
All of the sample times are multiples of 1000 s, which leads to a "nyquist" frequency of 0.0031415926535897933 rad/s. One of the freqs (freq[0]) is exactly 5x this (0.015707963267948967 rad/s), and causes D to equal zero. However, if you take some smaller or larger multiple of the "nyquist", the D is always very small (< 1e-16), but not zero. So this is just a random numerical fluke.
As long as at least one sample time isn't a multiple of 1000, or as long as none of the freqs are 5x the "nyquist", it won't fail.
Very dumb solution. But it works. Just add these lines:
# at beginning
eps = np.finfo(freqs.dtype).eps
# when calculating D
D = CC * SS - CS * CS + eps
@neutrinoceros @jakevdp @mhvk @dhomeier @pllim Would one of you mind checking if the final commit causes any issues with astropy's tests? I was looking for a robust solution to the inputs given in the referenced issue (https://github.com/scipy/scipy/issues/8787). And this was the smallest, most robust, way to prevent any possible divisions by zero.
Huh, so you were able to reproduce gh-8787; I guess it is platform-dependent. I'd suggest adding the test and running that on CI separately to show that CI was sensitive to the failure to begin with, otherwise the unit test does not really demonstrate the fix.
If you can reproduce that, what about gh-13812? I had the file analyzed for safety before opening it; seemed OK, and did seem to contain just two CSV files.
Looks like there was also some work to avoid a zero-division error in the past (gh-3787) but perhaps that is different?
otherwise the unit test does not really demonstrate the fix.
I haven't added a test specifically for this numerical issue yet.
Thanks for the ping, @adammj . Would https://github.com/astropy/astropy/pull/17211 help?
Re: https://github.com/scipy/scipy/pull/21721#issuecomment-2423246777
Ooops... CI failed to build scipy from source.
p.s. Failed to build locally too on WSL2 via pip install git+https://github.com/adammj/scipy.git@lombscargle
@mdhaber I changed the added test to go back to the values provided in the original issue. Without the previous commit (D=eps) this test will fail. However, it passes now.
p.s. Failed to build locally too on WSL2 via pip install git+https://github.com/adammj/scipy.git@lombscargle
I managed to build locally on macOS-14.7-arm64-arm-64bit-Mach-O against OpenBLAS 0.3.28 and ran our full periodogram test suite successfully, but I don't think we have any tests pushing the precision to its limits.
Here's some results with both float64 and float32, showing the same test data, but with different frequency values.
I can only get it to fail on this one multiple (I haven't exhaustively tested this). But I wanted to show that even when the value of D < eps, the calculations still work. It is only exactly D==0 that is the problem.
Whoops, I saw @dhomeier's comment only after running astropy's test suite myself... anyway, seconded !
@DietBru You had suggested in #21277 (comment) that I could use _rename_parameter() to rename the misleading x parameter to t in the function. Since I'm going for broke on refinements at this point, could I go ahead and commit that change here? Also, if so, does dep_version need to be specified?
It is @j-bowhay, not me, who made the suggestion :smiley:. Hence, I do not have any experience with _rename_parameter().
My 2 cents are to do this in a separate pull request, because reviewing a single change is always a bit easier.
_rename_parameter() to rename the misleading x parameter to t in the function. Since I'm going for broke on refinements at this point, could I go ahead and commit that change here? Also, if so, does dep_version need to be specified?
dep_version is specified if you want to deprecate and eventually stop accepting x. it is more disruptive because users will get a warning that they need to change their code to use t if they are using x, but it will allow you to remove the decorator (and its associated performance overhead) and any mention of x in the future. If this is considered worth the disruption, you would specify 1.15.0 as the version; if you're happy with leaving it in place indefinitely, you can leave it unspecified. Either way, yeah, it would be good to go in a separate PR. You would probably also want to change most existing tests that use the old name to the new name (but you'll always want to have at least one test with each to confirm that the decorator is working). Then you can post a message on the forum justifying the choices and ask for feedback there.
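For reference, a sketch of how the decorator might be applied, assuming it keeps its current home and signature in scipy._lib._util (unverified against this PR):

from scipy._lib._util import _rename_parameter

@_rename_parameter("x", "t", dep_version="1.15.0")
def lombscargle(t, y, freqs, precenter=False, normalize=False):
    ...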
@DietBru here's a suggested change that makes it clear that we're just trying to prevent the divide-by-zero, not attempting to "massage" the equations for any other purpose.
I haven't found a better, more specific exception to catch this, as it's actually emitted as "RuntimeWarning: divide by zero encountered in scalar divide".
# calculate a and b
a_numerator = (YC * SS - YS * CS)
b_numerator = (YS * CC - YC * CS)
try:
    # where: y(w) = a*cos(w) + b*sin(w) + c
    # a bare try/except cannot catch a warning, so np.errstate is used to
    # turn the divide-by-zero into a catchable FloatingPointError
    with np.errstate(divide='raise', invalid='raise'):
        a[i] = a_numerator / D
        b[i] = b_numerator / D
    # c = Y_sum - a * C_sum - b * S_sum  # offset is not useful to return
except FloatingPointError:
    # there can be spurious numerical issues around the "pseudo-Nyquist"
    a[i] = a_numerator / (eps * math.copysign(1.0, D))
    b[i] = b_numerator / (eps * math.copysign(1.0, D))
If I understand what you want to do then this is a 2x2 linear system perturbation
$$
\left(
\begin{bmatrix}
CC & CS \\
CS & SS
\end{bmatrix} + \epsilon I
\right)
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} =
\begin{bmatrix}
y_1 \\
y_2
\end{bmatrix}
$$
with the matrix on the left being rank deficient. But adding eps to the determinant does not achieve this. So it is not doing what you want to do.
Here my small objection is about inverting this array. As the general mantra says, don't invert the matrix; the same applies here. You can eliminate this symmetric 2x2 array (depending on which diagonal entry is larger) and modify the resulting corner if it is exactly 0. Then this eps modification would indeed be a perturbation to the rank deficiency.
@ilayn I think I follow. And, this makes me realize I should’ve gone looking for common acceptable solutions to these numerical edge cases.
I was looking for a way to minimize the number of calculations and conditions that don’t exist in the paper, both to prevent slowing down the loop, but also so that it’s easier for the reader to follow what the code is doing and its relation to the formulas in the paper. Basically, in this very rare case, the solution is probably nonsense. But the values nearby are fine, so I’m trying to find a fix that is “imperceptible”.
I’ll take a look for some more robust numerical solution. But I’m curious if you already have specific code in mind.
You can also solve the system;
$$
\begin{bmatrix}
CC & CS \\
CS & SS
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix} =
\begin{bmatrix}
y_1 \\
y_2
\end{bmatrix}
$$
So if $CS=0$, then it is a diagonal system. If $CC = 0$, then we can solve $y_1 = CS\,x_2$, $y_2 = CS\,x_1 + SS\,x_2$, which is consistent. If $CC \neq 0$, then we "Gaussian eliminate" the second row with $-CS/CC$ and hence solve triangularized systems. This is what LU solvers do anyway, and how they detect exact 0.0s, if any.
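A minimal sketch of that elimination, with exact-zero pivot checks only (an illustration of the idea, not the proposed implementation):

def solve_sym_2x2(CC, CS, SS, y1, y2):
    # solve [[CC, CS], [CS, SS]] @ [x1, x2] = [y1, y2] without inverting
    if CS == 0.0:
        # diagonal system
        return y1 / CC, y2 / SS
    if CC == 0.0:
        # first pivot is exactly zero; get x2 from the first row
        x2 = y1 / CS
        return (y2 - SS * x2) / CS, x2
    # Gaussian-eliminate the second row with multiplier -CS/CC
    m = -CS / CC
    x2 = (y2 + m * y1) / (SS + m * CS)
    return (y1 - CS * x2) / CC, x2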
So that's a problem, there are no zeros in the equation/matrix (for this specific example from the linked issue).
CC = 0.09549150370681751
SS = 0.9045084962931825 (always 1-CC)
CS = -0.29389262737712285
It just works out that D is calculated to be 0 in the equation D = CC * SS - CS * CS. However, if I create a matrix and ask numpy for the determinant, it is not 0 (but smaller than eps).
M = np.array([[CC, CS], [CS, SS]], dtype=a.dtype)
np.linalg.det(M) # returns -1.4257357797566966e-17
Instead of re-inventing the wheel, I can just use scipy's LU solver. But, in this case, it's not doing anything special, per se, it's just that due to numerical differences in the paths (types and order of operations) it works out.
from scipy.linalg import lu_factor, lu_solve

if D != 0:
    a[i] = (YC * SS - YS * CS) / D
    b[i] = (YS * CC - YC * CS) / D
else:
    # If D==0, this is a rare numerical issue that can occur around the
    # "pseudo-Nyquist". Use LU solver.
    lu, piv = lu_factor(np.array([[CC, CS], [CS, SS]], dtype=a.dtype))
    ab = lu_solve((lu, piv), np.array([YC, YS], dtype=a.dtype))
    a[i] = ab[0]
    b[i] = ab[1]
Comparing the results between the two branches for everywhere that D != 0, they are the same within 1.8e-12.
Sorry for leading you in the wrong direction earlier, by not reading carefully :sweat_smile:. If $D$ is too close to zero, a ValueError should be raised.
This can be justified by looking into the derivation from Lomb. The $i$-th measurement equation is
$$
y_i = a\cos(2\pi f t_i) + b\sin(2\pi f t_i) + v_i
$$
and the target function is
$$\begin{aligned}
J &= \frac{1}{2}\sum_i \Big|y_i - a\cos(2\pi f t_i) - b\sin(2\pi f t_i)\Big|^2 \\
&= \frac{1}{2}\sum_i \Big(y_i^2 - 2y_i\cos(2\pi f t_i)a - 2y_i\sin(2\pi f t_i)b + \cos^2(2\pi f t_i)a^2 + \sin^2(2\pi f t_i)b^2 + 2ab\cos(2\pi f t_i)\sin(2\pi f t_i)\Big) \\
&= \frac{1}{2}\left(\sum_i y_i^2 - 2a\,YC - 2b\,YS + 2ab\,CS + a^2\,CC + b^2\,SS \right)
\end{aligned}$$
This lets us write
$$\frac{d}{da}J = a\sum_i \cos^2(2\pi f t_i) + b\sum_i\cos(2\pi f t_i)\sin(2\pi f t_i) - \sum_i y_i\cos(2\pi f t_i) =: a CC + b CS - YC\stackrel{!}{=} 0 $$
$$\frac{d}{db}J = b\sum_i \sin^2(2\pi f t_i) + a\sum_i\cos(2\pi f t_i)\sin(2\pi f t_i) - \sum_i y_i\sin(2\pi f t_i) =: b SS + a CS - YS\stackrel{!}{=} 0 $$
which results in
$$
\begin{bmatrix}
CC & CS \\ CS & SS
\end{bmatrix}\begin{bmatrix}
a \\ b
\end{bmatrix} =
\begin{bmatrix}
YC\\ YS
\end{bmatrix}
$$
of which the symbolic solution for $[a\ b]^T$ is implemented, with $D$ being the determinant of the left-hand matrix. So if $D$ is zero, $[a\ b]^T$ is undetermined.
scipy.linalg.solve could be used for a simple implementation. Perhaps something like this (did not verify if correct):
import numpy as np
from scipy.linalg import solve, LinAlgError, LinAlgWarning

AA, bb = np.array([[CC, CS], [CS, SS]]), np.array([YC, YS])
try:
    # note: LinAlgWarning is a warning, not an exception; it has to be
    # escalated (e.g. warnings.simplefilter("error", LinAlgWarning))
    # before it can be caught here
    xx = solve(AA, bb)
except (LinAlgError, LinAlgWarning):
    raise ValueError("Could not find solution ...")
a[i], b[i] = xx[0], xx[1]
Just to clarify: I checked that the symbolic solution of the vector matrix equation is what is implemented. I am not sure anymore if the derivation by minimizing $J$ is correct...
I think we need to be slightly careful in adding a linalg.solve; I would imagine it's significantly slower than inverting the system by hand.
I think we need to be slightly careful in adding a linalg.solve; I would imagine it's significantly slower than inverting the system by hand.
Good point, it would have to be verified. If the penalty is not too great, it is an elegant way of avoiding thinking about condition numbers.
I think you all are going to hate me, but I think going with the fully vectorized version and using the tau offset (so that I can remove the offending CS variable) makes it much easier to prevent the rare division-by-zero errors. I tested this against the current version in all possible combinations and the numerical differences are minuscule. It also (on my machine) passes all of the tests.
I would potentially consider splitting this PR in two; the handling of lists and docstring changes could probably be merged quickly (and the list handling is needed before the next release).
Done. Reverted this PR to only the docstring and asarray changes. I wasn't sure of the best way to continue with the discussion and code changes that were discussed for 8787, but I assume I'll have to do some work on the other branch once this one gets accepted.
The test failure seems unrelated.
Looks good to me for merging—unless @j-bowhay has a different opinion.
|
gharchive/pull-request
| 2024-10-17T15:46:40 |
2025-04-01T06:40:20.833722
|
{
"authors": [
"DietBru",
"adammj",
"dhomeier",
"ilayn",
"j-bowhay",
"mdhaber",
"neutrinoceros",
"pllim"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/21721",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
247137456
|
BUG: stats: fix nan result from multivariate_normal.cdf (#7669)
This pull request fixes nan results from multivariate_normal.cdf when the distribution is bivariate (Issue #7669). The underlying Fortran code in mvn.mvnun uses a dedicated function to handle the bivariate case, which causes problems when called with negative infinity as the lower bound. As a solution, I replaced mvnun with mvndst and do the preprocessing of mvnun in Python. I also added an additional test for the bivariate case.
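A minimal sketch of the failure mode being fixed (the mean and covariance values here are illustrative, not taken from the issue):

from scipy import stats

# cdf integrates from -inf in each dimension by default; the bivariate
# branch previously returned nan here (gh-7669)
rv = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])
print(rv.cdf([0.0, 0.0]))  # a finite probability, not nan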
Isn't the problem actually that mvnun does not set the infin flags correctly?
Here's a simpler fix doing that: https://github.com/asnelt/scipy/pull/1
It seems to solve the issues without the more complicated Python code changes.
Just some supporting evidence:
I wanted to see if the changes might affect what I have in the statsmodels sandbox, but I was avoiding mvnun (maybe because it didn't work for me) and used mvndst directly with equivalent adjustments to infin, AFAICS
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/sandbox/distributions/extras.py#L1064
In it goes, thanks all!
|
gharchive/pull-request
| 2017-08-01T17:05:07 |
2025-04-01T06:40:20.838853
|
{
"authors": [
"asnelt",
"josef-pkt",
"pv",
"rgommers"
],
"repo": "scipy/scipy",
"url": "https://github.com/scipy/scipy/pull/7698",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1178883500
|
Typos & Bad Links
Typos in "Tutorial"
"Gaubao" should be "Guabao".
"non-egative" should be "non-negative".
We should press \ to input Unicode characters.
Bad Links in "GCL Overview"
The link leads to "https://scmlab.github.io/guabao/pages/pages/4-references.html", but it should lead to "https://scmlab.github.io/guabao/pages/4-references.html".
Thank you for your interest in this project and for spotting these typos! Sorry that it took so long to respond --- we were preparing a paper on Guabao and were too occupied.
I believe that these issues are fixed now.
If you have used Guabao and found bugs/errors, feel free to report them here:
https://github.com/scmlab/gcl
Thank you again!
|
gharchive/issue
| 2022-03-24T03:08:13 |
2025-04-01T06:40:20.847471
|
{
"authors": [
"scmu",
"skylee03"
],
"repo": "scmlab/guabao",
"url": "https://github.com/scmlab/guabao/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
705552576
|
Add support for SRUpy library
see #5
A final few comments and then all that needs doing is making the tests run. For that you just need the EUROPEANA_API_KEY secret in your fork, with your API key.
Sorry, I don't know why this fails. In my fork it validates just fine ...
I see. Very weird. Ah well. In that case I shall merge.
|
gharchive/pull-request
| 2020-09-21T12:28:21 |
2025-04-01T06:40:20.848962
|
{
"authors": [
"alueschow",
"scmmmh"
],
"repo": "scmmmh/polymatheia",
"url": "https://github.com/scmmmh/polymatheia/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
419035687
|
Fully migrate to the Flask backend
[ ] NodeJS codebase should be removed after fully migrating to Flask
@Anmolbansal1 Can we remove the NodeJS codebase now?
|
gharchive/issue
| 2019-03-09T05:19:43 |
2025-04-01T06:40:20.860376
|
{
"authors": [
"ivantha"
],
"repo": "scorelab/fact-Bounty",
"url": "https://github.com/scorelab/fact-Bounty/issues/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
437989489
|
Remember install
Remember the installation status of packages across app restarts
Codecov Report
Merging #25 into master will decrease coverage by 5.51%.
The diff coverage is 46.01%.
@@ Coverage Diff @@
## master #25 +/- ##
==========================================
- Coverage 65.54% 60.03% -5.52%
==========================================
Files 24 24
Lines 595 663 +68
Branches 25 31 +6
==========================================
+ Hits 390 398 +8
- Misses 199 257 +58
- Partials 6 8 +2
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/app/core/services/electron.service.ts | 15.38% <0%> (ø) | :arrow_up: |
| src/app/core/services/mocks.ts | 81.81% <100%> (+1.81%) | :arrow_up: |
| src/app/core/services/thunderstore.service.ts | 50% <33.33%> (-22.1%) | :arrow_down: |
| src/app/core/services/database.service.ts | 59.37% <35%> (-29.52%) | :arrow_down: |
| src/app/core/models/package.model.ts | 43.24% <36.36%> (-3.19%) | :arrow_down: |
| src/app/core/services/package.service.ts | 43.51% <40.54%> (-1.49%) | :arrow_down: |
| ...selection/package-table/package-table.component.ts | 56.79% <46.15%> (-2.12%) | :arrow_down: |
| ...election/package-table/package-table-datasource.ts | 45.71% <72.22%> (-0.19%) | :arrow_down: |
| ...selection/packages-page/packages-page.component.ts | 72.72% <75%> (-15.51%) | :arrow_down: |
| src/app/shared/selection-changeset.ts | 82.6% <0%> (-17.4%) | :arrow_down: |

... and 4 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1d97ef2...4d47495. Read the comment docs.
|
gharchive/pull-request
| 2019-04-27T22:15:57 |
2025-04-01T06:40:20.875146
|
{
"authors": [
"codecov-io",
"scottbot95"
],
"repo": "scottbot95/RoR2ModManager",
"url": "https://github.com/scottbot95/RoR2ModManager/pull/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
114065639
|
Get crop coordinates on original image
Hello! Thank you for making and sharing cropit!!
For a project, I needed to get only the cropping coordinates on the original image. I needed to know the two points (x1, y1) and (x2, y2) on the original image that would give me the selected part of the image. However, I couldn't manage to do this with the methods available in cropit by itself. It would be great to have a method that would give you these values, because otherwise they're kind of tricky to obtain.
This is what I ended up doing (it's not perfect and probably will only work with the settings I'm currently using):
var zoom = imgCropper.cropit('zoom');
var offset = imgCropper.cropit('offset');
var previewSize = imgCropper.cropit('previewSize');

// scale factor from preview coordinates back to original image coordinates
var exportzoom = 1 / zoom;

// top-left corner of the crop on the original image
var xstart = Math.abs(Math.floor(offset.x * exportzoom));
var ystart = Math.abs(Math.floor(offset.y * exportzoom));

// bottom-right corner of the crop on the original image
var xend = Math.floor(exportzoom * previewSize.width) + xstart;
var yend = Math.floor(exportzoom * previewSize.height) + ystart;
Well, I guess that's all, let me know if you'd like to add a feature like this to cropit and I'd be happy to contribute.
This solution works well with the PHP ImageMagick crop method if you require compatibility with older versions of IE.
|
gharchive/issue
| 2015-10-29T14:47:42 |
2025-04-01T06:40:20.877669
|
{
"authors": [
"Naph",
"rafadev"
],
"repo": "scottcheng/cropit",
"url": "https://github.com/scottcheng/cropit/issues/119",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|