id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
249191253
|
Unit tests
#1
Coverage increased (+1.2%) to 52.329% when pulling 9264ab05f5311f9f5ef25720aaa9140cd0791a8b on unit-tests into e1709faaf3fa98d064c9a6c8ebc7f8aa64449d6b on master.
|
gharchive/pull-request
| 2017-08-09T23:31:26 |
2025-04-01T04:35:14.667382
|
{
"authors": [
"coveralls",
"frankgreco"
],
"repo": "northwesternmutual/kanali",
"url": "https://github.com/northwesternmutual/kanali/pull/23",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
169428647
|
I got 99 problems and an RTMP packet is one..
So I wanted to try and see if PING could be used to keep the bot alive, and for the most part it works. Now I don't know if it's related to using PING, but sometimes I would receive a packet that looked like this:
{'msg': 4, 'event_type': 512, 'event_data': '\x07pri'} It does look like a PING response, but the event_type is all fucked up. This caused problems, of course: I would get an assertion error in rtmp_protocol.RtmpReader.next. The thing is, whenever I got this odd packet the bot would become unresponsive, but I could still send messages from the console.
Has anyone else had this issue??
I've had a similar issue but I never noted the details. If it happens again I'll let you know.
@GoelBiju That was my first idea, just ignore it in handle_packet; however, this just moved the issue up to RtmpReader.next() and then I would get assertion errors there instead. I seem to remember that when receiving this odd packet, the header would have the flag full set to True. Also, I can see that you've done a lot of editing to rtmp_protocol too, I might have to "borrow" some of your code.
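For anyone hitting the same wall, here is a minimal sketch of the "skip instead of assert" idea in Python (hypothetical code, not pinylib's actual implementation; read_packet is an assumed helper, and the packet shape is taken from the example above):
# Hypothetical sketch: tolerate malformed user-control packets instead
# of asserting, so one odd PING response does not wedge the read loop.
def next_packet(reader):
    while True:
        packet = reader.read_packet()  # assumed helper returning a dict
        # msg type 4 is a user-control message; event_type values such as
        # 512 fall outside the documented range, so treat them as garbage.
        if packet['msg'] == 4 and packet['event_type'] > 31:
            print('skipping malformed control packet:', packet)
            continue
        return packet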
|
gharchive/issue
| 2016-08-04T17:24:06 |
2025-04-01T04:35:14.691049
|
{
"authors": [
"Technetium1",
"nortxort"
],
"repo": "nortxort/pinylib",
"url": "https://github.com/nortxort/pinylib/issues/30",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2405934753
|
KSU support
Add KSU support, using if/elif/else to detect KSU or Magisk busybox.
Thank you... I only just got a chance to check it, hehe.
|
gharchive/pull-request
| 2024-07-12T16:13:25 |
2025-04-01T04:35:14.696030
|
{
"authors": [
"geeks121",
"nosignals"
],
"repo": "nosignals/magisk-php7-webserver",
"url": "https://github.com/nosignals/magisk-php7-webserver/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1400854501
|
[cqld4 driver] JSON driver config not being applied
When specifying a JSON file in the driverconfig activity parameter of the cqld4 driver (as described in https://github.com/nosqlbench/nosqlbench/blob/main/adapter-cqld4/src/main/resources/cqld4.md), options defined in the file are not applied during the experiment.
To Reproduce
Start a Cassandra cluster and try the following JSON file as driver config:
{
"advanced.metadata.token-map.enabled": false
}
Then create a keyspace with a replication factor of 1 and a simple table. Insert data and execute some SELECT requests randomly. Requests are still always routed to their primary replica.
Specifying the option as an activity parameter (driver.advanced.metadata.token-map.enabled) does not work either.
What was Expected
Requests should no longer be systematically routed to their primary replica, as driver token-awareness has been disabled.
Additional context
> ./nb --version
4.17.27
Workaround
Using a .conf file instead of a .json file seems to work:
datastax-java-driver {
advanced.metadata.token-map.enabled = false
}
This looks like a feature request or a docs clarification issue. We'll take this as a request to support both extensions.
@anthonydugois thank you for reporting this issue.
We will be discussing how to improve our docs to better illustrate driver and nosqlbench alignment.
Is it possible for you to upgrade to nb5? This version has better support for the cqld4 driver.
If not, it looks as if the .conf is a viable workaround.
We will still be researching the root cause with v4.17.27, as that should be working based on the configuration you provided.
@jeffbanks I did use nb5, sorry for the typo. Actually I tested this with the official Docker image, which uses nb5.jar as entrypoint:
> docker run nosqlbench/nosqlbench:nb5preview --version
4.17.27
I confirm that .conf files are working as expected.
We are currently using driverconfig.json so this must be working now.
|
gharchive/issue
| 2022-10-07T09:02:52 |
2025-04-01T04:35:14.724851
|
{
"authors": [
"anthonydugois",
"dave2wave",
"jeffbanks",
"jshook"
],
"repo": "nosqlbench/nosqlbench",
"url": "https://github.com/nosqlbench/nosqlbench/issues/736",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1160748143
|
Dark background on Inputfield MultiLine Content Config
Hi,
Thanks for the great plugin.
I'm testing a multi-line entry on an InputField on iOS, but when the InputField gets focus, the background turns dark (black). When the field loses focus, the background returns to its previous color (white).
In the NativeEditBox config, the checkbox "Switch Between Native and Unity" is enabled.
This issue does not happen with the Single Line config.
thanks.
That is odd, the native text view has a transparent background..
And the unity text field isnt set up with a white background ?
Yes.. it is really weird..
I tested with a "none" background (solid white color) and with different sprites. The result is still the same.
Basically, you can reproduce it by creating a new project, adding only the Native plugin to an InputField, and setting the Multi Line type.
My version is: Unity 2020.2.2f1 (also tested on 2021.2.7f1) iOS
After checking NativeEditBox.mm, I added the property
[text setBackgroundColor:UIColor.whiteColor];
to avoid the black background.
Hi! Thanks for looking in to this ⭐
Could you try using a clear color? And if that works, a pull request would be lovely :)
Thanks!
|
gharchive/issue
| 2022-03-06T23:28:19 |
2025-04-01T04:35:14.730487
|
{
"authors": [
"ghoenicka",
"nostek"
],
"repo": "nostek/UnityNativeEditBox",
"url": "https://github.com/nostek/UnityNativeEditBox/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1234609792
|
Add x509 parsing utilities
Moving x509 cert, key parsing utilities from notation-go to notation-core-go
Will raise a separate PR for refactoring notation-go and notation repos
Signed-off-by: rgnote 5878554+rgnote@users.noreply.github.com
LGTM
Add x509 parsing utilities
Signed-off-by: rgnote 5878554+rgnote@users.noreply.github.com
Co-authored-by: Milind Gokarn milind81@gmail.com
|
gharchive/pull-request
| 2022-05-13T00:31:53 |
2025-04-01T04:35:14.767880
|
{
"authors": [
"NiazFK",
"dtzar",
"rgnote"
],
"repo": "notaryproject/notation-core-go",
"url": "https://github.com/notaryproject/notation-core-go/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2116383800
|
[Question] Does Naver Map support multiple markers?
Hello!
I'm trying to implement showing the locations of registered users as markers!
First question
I succeeded in creating markers and showing them on the map, but assuming there are 100 users, creating 100 markers in code feels like the wrong approach..!
Looking into it, multi-marker support seems to be a commonly provided feature, but I couldn't find anything about multiple markers in your documentation ㅠㅠ
If I have to use single markers, I'm wondering whether I need to repeat the cycle of
creating the markers when the map is zoomed in on a specific area,
and deleting them again when it is zoomed back out!
Second question
Currently I receive a presignedUrl for each user's image from the server, save the photo inside the app, and look up the path to display it.
If there are 1000 users, all 1000 user images would have to be downloaded, and as the number of registered users grows, this seems to put a strain on the app's storage.
I'd like to know whether there is a way to display images using the presignedUrl directly, without saving them in the app,
or to get advice on how this situation is usually handled..!
Hello, thank you for using the library.
Here are answers to your questions.
Correct. As you described, after the camera changes (onCameraIdle), you should refresh the overlays on the map accordingly.
Currently, fromByteArray and fromWidget are cached using hashing and cache storage. This prevents a new bitmap instance from being created every time and keeps memory usage under control. However, to give developers a more flexible API, we will consider adding an API that can be managed directly by key (id), without the hashing and storage options. Something you can use right now is the NOverlayImage.fromFile constructor: if you implement the file save and delete policy yourself, you can make the most of the available storage.
Thank you.
|
gharchive/issue
| 2024-02-03T09:30:40 |
2025-04-01T04:35:14.771532
|
{
"authors": [
"Dale-Sprint",
"note11g"
],
"repo": "note11g/flutter_naver_map",
"url": "https://github.com/note11g/flutter_naver_map/issues/177",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2450301235
|
Formulas in the body text overflow the right margin
Is the crude // approach really the only way to handle body-text formulas that overflow the right margin? Even with it, the alignment is still subtly off.
Also, strangely, some formulas break across lines and some don't? Thanks.
\allowbreak
$\gamma_1$, $\gamma_2$, $\cdots$, $\gamma_n$
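For reference, here is a small sketch combining the two suggestions above (illustrative only):
% \allowbreak permits a line break inside an inline formula; splitting
% the list into separate $...$ groups also lets TeX break at the commas.
$\gamma_1,\allowbreak \gamma_2,\allowbreak \cdots,\allowbreak \gamma_n$
% or equivalently:
$\gamma_1$, $\gamma_2$, $\cdots$, $\gamma_n$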
|
gharchive/issue
| 2024-08-06T08:43:14 |
2025-04-01T04:35:14.773736
|
{
"authors": [
"Saiqin",
"note286",
"sikouhjw"
],
"repo": "note286/xduts",
"url": "https://github.com/note286/xduts/issues/205",
"license": "LPPL-1.3c",
"license_type": "permissive",
"license_source": "github-api"
}
|
1712364960
|
Expose Retry-After value into the exception
Is your feature request related to a problem? Please describe.
Notion exposes a Retry-After header when the rate limit is reached. But the library hides it, forcing the developer to guess when to retry.
Describe the solution you'd like
Ideally, the NotionApiException should expose the value, probably as a TimeSpan?.
Describe alternatives you've considered
We could also consider adding a RateLimitNotionApiException : NotionApiException exception class for this use case.
Additional context
The doc
Related #253
This has been released as part of 4.2.0
|
gharchive/issue
| 2023-05-16T16:37:55 |
2025-04-01T04:35:14.819484
|
{
"authors": [
"KoditkarVedant",
"Poltuu"
],
"repo": "notion-dotnet/notion-sdk-net",
"url": "https://github.com/notion-dotnet/notion-sdk-net/issues/362",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1724936026
|
Nototools installation fails on Windows 11
Here is the error log:
D:\Downloads\nototools>pip install -e .
Obtaining file:///D:/Downloads/nototools
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [32 lines of output]
c:\apps\tools\python\Lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "D:\Downloads\nototools\setup.py", line 11, in <module>
setup(
File "c:\apps\tools\python\Lib\site-packages\setuptools\__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\apps\tools\python\Lib\site-packages\setuptools\_distutils\core.py", line 147, in setup
_setup_distribution = dist = klass(attrs)
^^^^^^^^^^^^
File "c:\apps\tools\python\Lib\site-packages\setuptools\dist.py", line 476, in __init__
_Distribution.__init__(
File "c:\apps\tools\python\Lib\site-packages\setuptools\_distutils\dist.py", line 282, in __init__
self.finalize_options()
File "c:\apps\tools\python\Lib\site-packages\setuptools\dist.py", line 900, in finalize_options
ep(self)
File "c:\apps\tools\python\Lib\site-packages\setuptools\dist.py", line 920, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "d:\downloads\nototools\.eggs\setuptools_scm-7.1.0-py3.11.egg\setuptools_scm\integration.py", line 91, in version_keyword
_assign_version(dist, config)
File "d:\downloads\nototools\.eggs\setuptools_scm-7.1.0-py3.11.egg\setuptools_scm\integration.py", line 63, in _assign_version
_version_missing(config)
File "d:\downloads\nototools\.eggs\setuptools_scm-7.1.0-py3.11.egg\setuptools_scm\__init__.py", line 108, in _version_missing
raise LookupError(
LookupError: setuptools-scm was unable to detect version for D:\Downloads\nototools.
Make sure you're either building from a fully intact git repository or PyPI tarballs. Most other sources (such as GitHub's tarballs, a git checkout without the .git folder) don't contain the necessary metadata and will not work.
For example, if you're using pip, instead of https://github.com/user/proj/archive/master.zip use git+https://github.com/user/proj.git#egg=proj
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Python 3.11.3
In my case, this command worked without any errors.
pip install "git+https://github.com/notofonts/nototools.git"
|
gharchive/issue
| 2023-05-25T01:15:29 |
2025-04-01T04:35:14.838096
|
{
"authors": [
"HyegeunCho",
"oomek"
],
"repo": "notofonts/nototools",
"url": "https://github.com/notofonts/nototools/issues/795",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2422739765
|
8.2.2 Security improvements
– Added support for new runtime upgrade affecting nomination pools
– Added security improvements for the DApp browser – Nova now rejects access to certain top level domains and warns users to be careful when connecting to DApps which are not already in the DApp catalog
– Fixes & Optimizations
Release severity: Normal
Release version: 8.2.2
Release notes:
Support for new runtime upgrades and security improvements are here
Added support for new runtime upgrade affecting nomination pools
Added security improvements for the DApp browser – Nova now rejects access to certain top level domains and warns users to be careful when connecting to DApps which are not already in the DApp catalog
Fixes & Optimizations
|
gharchive/pull-request
| 2024-07-22T12:21:35 |
2025-04-01T04:35:14.850317
|
{
"authors": [
"ERussel"
],
"repo": "novasamatech/nova-wallet-ios",
"url": "https://github.com/novasamatech/nova-wallet-ios/pull/1151",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
274339262
|
Console output from react-static create
Output from the command react-static create. Any suggestions?
? What should we name this project? ExpertSleepCenters
? Select a template below... basic
=> Creating new react-static project...
=> Installing dependencies with: Yarn...
warning react-static > babel-preset-latest@6.24.1: We're super 😸 excited that you're trying to use ES2017+ syntax, but instead of making more yearly presets 😭 , Babel now has a better preset that we recommend you use instead: npm install babel-preset-env --save-dev. preset-env without options will compile ES2015+ down to ES5 just like using all the presets together and thus is more future proof. It also allows you to target specific browsers so that Babel can do less work and you can ship native ES2015+ to user 😎 ! We are also in the process of releasing v7, so please give http://babeljs.io/blog/2017/09/12/planning-for-7.0 a read and help test it out in beta! Thanks so much for using Babel 🙏, please give us a follow on Twitter @babeljs for news on Babel, join slack.babeljs.io for discussion/development and help support the project at opencollective.com/babel
warning react-static > babel-preset-es2015@6.24.1: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
warning react-static > babel-preset-latest > babel-preset-es2017@6.24.1: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
warning react-static > babel-preset-latest > babel-preset-es2016@6.24.1: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
warning "react-static > babel-loader@7.1.2" has unmet peer dependency "babel-core@6 || 7 || ^7.0.0-alpha || ^7.0.0-beta || ^7.0.0-rc".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-class-property@^1.0.6".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-import@^2.7.0".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-jsx-a11y@^5.1.1".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-react@^7.1.0".
warning "eslint-config-react-tools > eslint-config-airbnb@15.1.0" has incorrect peer dependency "eslint-plugin-jsx-a11y@^5.1.1".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-class-property@^1.0.6".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-import@^2.7.0".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-jsx-a11y@^5.1.1".
warning " > eslint-config-react-tools@1.0.10" has unmet peer dependency "eslint-plugin-react@^7.1.0".
warning "eslint-config-react-tools > eslint-config-airbnb@15.1.0" has incorrect peer dependency "eslint-plugin-jsx-a11y@^5.1.1".
warning "react-static > babel-loader@7.1.2" has unmet peer dependency "babel-core@6 || 7 || ^7.0.0-alpha || ^7.0.0-beta || ^7.0.0-rc".
Tanner Linsley answered my question.
|
gharchive/issue
| 2017-11-15T23:16:01 |
2025-04-01T04:35:14.909503
|
{
"authors": [
"wildpow"
],
"repo": "nozzle/react-static",
"url": "https://github.com/nozzle/react-static/issues/177",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
401567889
|
Add switch to the Routes component to disable force update on location change.
This PR adds a switch to the Routes component to disable force update on location change. This is useful for cases where you bring your own router and don't need the Routes component to force an update with url changes.
Description
Adds a new default prop disableUpdateOnLocationChange: false
Changes/Tasks
[x] New default prop disableUpdateOnLocationChange: false
Motivation and Context
The Routes component causes the next page to mount twice when inside a Reach Router. This can lead to bugs when doing work in componentDidMount, like logging analytics.
The router setup for reference
<Router>
<Routes default />
</Router>
Types of changes
[ ] Refactoring/add tests (refactoring or adding test which isn't a fix or add a feature)
[x] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist:
[x] My change requires a change to the documentation.
[x] I have updated the documentation accordingly.
[ ] My changes have tests around them
I believe this is now solved from #1015.
|
gharchive/pull-request
| 2019-01-22T01:58:59 |
2025-04-01T04:35:14.914209
|
{
"authors": [
"georgehenderson",
"tannerlinsley"
],
"repo": "nozzle/react-static",
"url": "https://github.com/nozzle/react-static/pull/983",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1429529120
|
Cloudwatch logs group destroyed with module
Currently the CloudWatch log group is created and destroyed along with the module.
For audit purposes we have a use case not to delete any logs.
It would be best for us to create the cloudwatch logs group outside the module, and manage its lifecycle separately.
A possible solution would be adding a create_cloudwatch_logs_group boolean to the module.
No longer required for our use case, closing the PR and issue.
Someone else can open if needed...
|
gharchive/issue
| 2022-10-31T09:25:29 |
2025-04-01T04:35:14.915808
|
{
"authors": [
"baolsen"
],
"repo": "npalm/terraform-aws-gitlab-runner",
"url": "https://github.com/npalm/terraform-aws-gitlab-runner/issues/559",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
177489254
|
Scaffold PostgreSQL extensions
After redoing the extension migrations as per #101, look into scaffolding PostgreSQL migrations from an existing database.
Blocked on https://github.com/aspnet/EntityFramework/issues/6561
This can be unblocked once PostgreSQL aligns with EF Core 2.0.0-preview2
Thanks @smitpatel, can you point me to an issue/PR/code sample?
Issue: https://github.com/aspnet/EntityFramework/issues/7004
PR: https://github.com/aspnet/EntityFramework/pull/8680
It was a very large change, but scoping it down, this commit introduced hooks for this task: 32434d2d86a75e091e331940384115e447ff33de
Look at the SqlServer clustered index support to see how the information is propagated from an annotation on DatabaseModel to the generated fluent API.
This is done, thanks for the guidance @smitpatel.
Note https://github.com/aspnet/EntityFramework/issues/9121 about a limitation of the scaffolding code generation mechanism.
|
gharchive/issue
| 2016-09-16T18:23:17 |
2025-04-01T04:35:14.924520
|
{
"authors": [
"roji",
"smitpatel"
],
"repo": "npgsql/Npgsql.EntityFrameworkCore.PostgreSQL",
"url": "https://github.com/npgsql/Npgsql.EntityFrameworkCore.PostgreSQL/issues/102",
"license": "PostgreSQL",
"license_type": "permissive",
"license_source": "github-api"
}
|
54252433
|
Fix for #36
fixes #36: endChown will not fail on process.getuid() !== 0
Travis-CI fails to install dependencies...yikes.
Welp, nice catch! Landed as 5b11e8d8a0e1fb1b0d64f19475ddfd29bc3d09e6.
|
gharchive/pull-request
| 2015-01-13T21:43:24 |
2025-04-01T04:35:14.946681
|
{
"authors": [
"nathan7",
"silkentrance"
],
"repo": "npm/fstream",
"url": "https://github.com/npm/fstream/pull/37",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2132399363
|
Draft network aggregation functions
This PR
drafts two functions rnet_aggregate_extensive() and rnet_aggregate_intensive()
moves data-raw/geojson/princes* to inst/extdata to be called from R
On the case soon.
gh pr checkout 30
Coming up!
Tested and have given it some thought. This works and moves us forward. So :+1: and will merge.
Illustration of differences: (image not preserved)
|
gharchive/pull-request
| 2024-02-13T14:03:26 |
2025-04-01T04:35:14.977795
|
{
"authors": [
"JosiahParry",
"Robinlovelace"
],
"repo": "nptscot/rnetmatch",
"url": "https://github.com/nptscot/rnetmatch/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1676413217
|
Suggestion: input validation for chinese2number
My input to chinese2number mostly comes from regular-expression search results, which occasionally include nonstandard cases, e.g. "百八十"人 (a hundred or so people), 十之"八九" (eight or nine times out of ten), "三五十"个 (thirty to fifty of them), and 不管"三七二十一" (regardless of the consequences). These are everyday expressions that occasionally appear in sequences that need converting, and in such cases the quoted characters don't follow the normal way numbers are expressed for conversion. Could an exception be thrown in this situation? Not converting is exactly what I need. Thank you!!
Understood, thank you very much.
|
gharchive/issue
| 2023-04-20T10:03:49 |
2025-04-01T04:35:14.984988
|
{
"authors": [
"qlih"
],
"repo": "nrchan/chinese-number-converter",
"url": "https://github.com/nrchan/chinese-number-converter/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1460210043
|
RTC1 and RTC2 missing from Board
Hi, I noticed that the board struct of the microbit-v2 crate only has one real-time counter field, RTC0. I checked the nRF52833 product specification and it states that three real-time counters are available: RTC0, RTC1, and RTC2 (page 356). The other two RTCs are implemented in the PAC (here and here).
I was wondering if there's a reason why they are not available from the Board struct, or if I'm missing something (I'm quite new to embedded Rust).
I looked into this and added the RTC1 and RTC2 fields to Board here.
I tested it on my microbit and it seems to work fine, if it's ok, I can open a pull request to merge it.
I tested this further and I have experienced no problems with RTC1 and RTC2. I'll open a PR and you can review it in case there are any issues that I may have missed.
|
gharchive/issue
| 2022-11-22T16:40:14 |
2025-04-01T04:35:15.001013
|
{
"authors": [
"videbar"
],
"repo": "nrf-rs/microbit",
"url": "https://github.com/nrf-rs/microbit/issues/99",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
1672900178
|
Merge updates from 2023 CommNet lecture.
For this iteration, we did a lot of small updates, mainly:
the matrix container can now be controlled via docker pause and docker unpause.
the matrix also returns timestamps of when it last ran, and the website can show this appropriately.
the matrix frequency can now be more easily controlled with environment variables.
the generate_configs.py script was reworked to change topology sizes more easily.
support for new (optional) permanent hijacks between stub ASes. These allow testing RPKI without global hijacks affecting everyone.
A few small fixes and improvements here and there.
@KTrel, I tested the basic setup to make sure everything can be set up without issues, and it seems to work for me.
But I'd be thankful if you could try setting the default configuration up yourself and debugging a bit to make sure it is still easily usable.
Update: I just realized some commits are still missing, I'll add them in a moment.
Remaining stuff is committed.
@KTrel I hope your deadline stress is (for the moment) over.
Do you have a moment to maybe do a quick dry-run of the new project?
"It works on my machine", but I'd be much more confident to merge if you could confirm that it works!
Basically just making sure that the default settings result in a stable mini-internet where everything works.
Ping @KTrel
Yu Chen took the time to confirm that everything is working, as @KTrel currently has their hands full.
Everything worked, so I am merging this now.
|
gharchive/pull-request
| 2023-04-18T11:19:40 |
2025-04-01T04:35:15.072363
|
{
"authors": [
"NotSpecial"
],
"repo": "nsg-ethz/mini_internet_project",
"url": "https://github.com/nsg-ethz/mini_internet_project/pull/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2014392499
|
Option to not inject .css in index.html during build
Thanks for a great vite plugin. :clap:
When using cesium.js in a lazy loaded JavaScript bundle, cesium({ rebuildCesium: true }) works well since https://github.com/nshen/vite-plugin-cesium/blob/3eefe025fbe186e7051fee88b30418d948dec43a/src/index.ts#L110-L117 ensures the <script...> tag is not included in the index.html, and therefore also cesium.js (which is quite large in size) is not loaded when visiting the main entrypoint of the application.
I think it would be beneficial to also have the plugin option to not inject the stylesheet link, i.e. https://github.com/nshen/vite-plugin-cesium/blob/3eefe025fbe186e7051fee88b30418d948dec43a/src/index.ts#L100-L108
Use case arguments:
When lazy loading cesium.js, it would be nice to also lazy load the associated cesium CSS file, which is quite large (30 kB) and unhashed (i.e. it can't be cached safely). For projects that want to lazy load the .css, this is easily accomplished by importing the CSS as well: import "cesium/Build/Cesium/Widgets/widgets.css";
in the source code where import * as Cesium from "cesium"; is already imported (maybe adding this associated CSS import can even be done automatically by the vite plugin when rebuildCesium: true, before vite/rollup do their bundling?). The required cesium .css then goes through all the vite machinery for minimizing/lazy-chunking it...
...which then also puts it into a hashed file, which ensures the cesium CSS can also be strongly cached, instead of having to be loaded every time (without a hash, you would have problems when updating cesium later).
The current "workaround" is to remove the injected <link rel="stylesheet" href="/cesium/Widgets/widgets.css"> line in the generated index.html, and instead ensure the .css is added to the vite-generated .css bundle by adding import "cesium/Build/Cesium/Widgets/widgets.css";
Hi @anderskiaer, thanks. I haven't followed cesium for some time; would you mind creating a PR?
|
gharchive/issue
| 2023-11-28T13:03:18 |
2025-04-01T04:35:15.098025
|
{
"authors": [
"anderskiaer",
"nshen"
],
"repo": "nshen/vite-plugin-cesium",
"url": "https://github.com/nshen/vite-plugin-cesium/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2463612271
|
Make PollingBased waiter more flexible
Networks with a short block time can't use 1s as the default poll interval; it's too coarse for them. Ref. https://github.com/nspcc-dev/neofs-node/issues/2864.
OK, tests are failing and we need to adjust Actor correspondingly.
@roman-khimov, regarding Actor integration I consider two options:
One more Actor constructor that accepts PollConfig, but we already have a lot of other Actor constructors, and this config is not related to the Actor directly.
A (*Actor).WithPollConfig(cfg PollConfig) modifier, which seems a bit preferable to me because it's a bit easier to use. At the same time, it's different from Actor's design.
Which one do you prefer? Or may be there's another option.
How about extending actor.Options with waiter.PollConfig?
I thought about it, but it's not really Actor's configuration. Will add it if you're OK with it.
I'm also thinking of waiter.Config in case we want to extend it in future.
I also thought about it, but WSBased does not use any configurable options now, thus I created just a PollConfig. Let's extend it while we're here; it'll be useful.
Linter job is failing, but it's just because golangci-lint version was updated, will be fixed in a separate PR.
@roman-khimov, this one is ready for review. It's not needed for NeoFS, but still it would be nice to have these values customizable.
NeoFS is likely to still use it in some way.
|
gharchive/pull-request
| 2024-08-13T15:17:34 |
2025-04-01T04:35:15.103965
|
{
"authors": [
"AnnaShaleva",
"roman-khimov"
],
"repo": "nspcc-dev/neo-go",
"url": "https://github.com/nspcc-dev/neo-go/pull/3556",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
894138456
|
Config: Can't set up error pages for all possible types of errors
Is your feature request related to a problem? Please describe.
Can't set up error pages for all possible types of errors in the config.
For now, only one error_page can be configured.
Describe the solution you'd like
I expected that this would work like an nginx config.
For example, I can write this in nginx config:
server {
...
error_page 404 /404.html;
error_page 405 /405.html;
error_page 406 /common_error.html;
error_page 407 /common_error.html;
...
}
Describe alternatives you've considered
We could rename the current config parameter error_page to error_404_page.
That would be clearer. And maybe it will be OK for project validation.
Additional context
The INTRA checklist has this check:
setup default error page (try to change the error 404)
Added second 404 error page for switch
|
gharchive/issue
| 2021-05-18T08:49:36 |
2025-04-01T04:35:15.146112
|
{
"authors": [
"Litvinovis",
"nsr888"
],
"repo": "nsr888/webserver",
"url": "https://github.com/nsr888/webserver/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1339896653
|
Fix C4996 when compiled with MSVC
ByteArray.h is an installed header and it triggers MSVC warning C4996 when used in other projects. This PR fixes that by using MSVC-specific alternatives when they are present.
The windows CI test fails. Could you please investigate what went wrong?
I read the log but I honestly have no idea about it. Do you know how to run the tests locally?
My guess is it's picking up on the need for + 1 on the buffer alloc to allow for NUL-termination by sprintf(). Either way I think it would be better if the function wasn't inline.
I tried to run the same command sequence on my local computer. It succeeded.
Before the change vs. after the change: (screenshots not preserved)
So yeah, I think @gitlost is correct. It crashed because there was an overflow when sprintf tried to append the null character.
Should I just +1 to the string size?
Yes please. That is definitely a buffer overflow that I caused there. What is puzzling me is why ASAN is not picking up on it.
To execute the tests locally, just execute ctest.
Thanks for the update. Looking at your change and reading the documentation of sprintf_s, I now understand why ASAN did not complain about the earlier code: there was nothing to complain about, it was perfectly fine. A std::string of size N has a buffer of at least N+1 bytes because it is also terminated by a \0. So my "I've screwed up" conviction was premature. The problem was the wrong second parameter 3 to sprintf_s in your original patch. Could you please revert all the partly misleading comments and changes and simply use the original code with a 4 instead of a 3. Thanks.
OK. Sorry I'm not very familiar with sprintf stuff :/
My "guess" confused everyone, apologies.
|
gharchive/pull-request
| 2022-08-16T06:53:08 |
2025-04-01T04:35:15.269227
|
{
"authors": [
"axxel",
"duanqn",
"gitlost"
],
"repo": "nu-book/zxing-cpp",
"url": "https://github.com/nu-book/zxing-cpp/pull/381",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1499218706
|
buildLabelDesc is missing a closing parenthesis
In file src/utils/flags.ts, buildLabelDesc is missing a closing parenthesis.
Already seems to be fixed in #139
|
gharchive/issue
| 2022-12-15T22:47:29 |
2025-04-01T04:35:15.273206
|
{
"authors": [
"TheTrio",
"grof"
],
"repo": "nuance-communications/mix-cli",
"url": "https://github.com/nuance-communications/mix-cli/issues/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2393735682
|
🛑 Secret Site BIS is down
In 0072036, Secret Site BIS ($SECRET_SITE) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Secret Site BIS is back up in bdaadb9 after 38 minutes.
|
gharchive/issue
| 2024-07-06T21:20:12 |
2025-04-01T04:35:15.294173
|
{
"authors": [
"nueve9"
],
"repo": "nueve9/uptime",
"url": "https://github.com/nueve9/uptime/issues/1449",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1889150461
|
🛑 Difesa is down
In 1dc6c21, Difesa (http://www.difesa.it/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Difesa is back up in 166d9ad after 4 minutes.
|
gharchive/issue
| 2023-09-10T13:40:11 |
2025-04-01T04:35:15.297346
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/4647",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1975388072
|
🛑 Università e ricerca is down
In ddb066e, Università e ricerca (https://www.mur.gov.it/it) was down:
HTTP code: 403
Response time: 587 ms
Resolved: Università e ricerca is back up in e22588c after 12 minutes.
|
gharchive/issue
| 2023-11-03T03:08:06 |
2025-04-01T04:35:15.299645
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/5517",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2034061152
|
🛑 Guardia di Finanza is down
In 9a63931, Guardia di Finanza (https://www.gdf.gov.it) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Guardia di Finanza is back up in 5e5a99a after 18 minutes.
|
gharchive/issue
| 2023-12-09T21:25:36 |
2025-04-01T04:35:15.301967
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/5771",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2165313487
|
🛑 Guardia di Finanza is down
In 917745e, Guardia di Finanza (https://www.gdf.gov.it) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Guardia di Finanza is back up in 0333c37 after 4 minutes.
|
gharchive/issue
| 2024-03-03T11:34:20 |
2025-04-01T04:35:15.304246
|
{
"authors": [
"nuke86"
],
"repo": "nuke86/ransomPing",
"url": "https://github.com/nuke86/ransomPing/issues/6655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
777376665
|
Feature Request: axis argument in np.nan[sum | mean | std | var | max | min | median]
Feature request
Over in pandas, I am trying to implement a windowing aggregation over a 2D array that utilizes numba: https://github.com/pandas-dev/pandas/issues/15095
I would like to utilize np.nansum(window, axis=0), for example, to return a 2D result, but as documented the axis keyword is currently not supported: https://numba.pydata.org/numba-doc/dev/reference/numpysupported.html#reductions
The particular functions I'd be interested in are np.nansum, np.nanmean, np.nanmedian, np.nanstd, np.nanvar, np.nanmax, and np.nanmin
Thanks!
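In the meantime, a manual reduction along the axis works inside a jitted function; here is a minimal sketch of the nansum(..., axis=0) case (my own illustration, assuming a 2D float window):
import numpy as np
from numba import njit

@njit
def nansum_axis0(window):
    # What np.nansum(window, axis=0) would compute if the axis
    # argument were supported: a per-column sum that skips NaNs.
    n_rows, n_cols = window.shape
    out = np.zeros(n_cols)
    for j in range(n_cols):
        for i in range(n_rows):
            v = window[i, j]
            if not np.isnan(v):
                out[j] += v
    return out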
See also #1269
@mroeschke thank you for submitting this. I believe this is a duplicate of #1269 and can be closed and labelled as such.
Not to be pedantic, but these functions aren't mentioned in the duplicate issue.
feel free to add them to the duplicate, thank you!
|
gharchive/issue
| 2021-01-02T00:18:08 |
2025-04-01T04:35:15.317640
|
{
"authors": [
"HPLegion",
"caniko",
"esc",
"mroeschke"
],
"repo": "numba/numba",
"url": "https://github.com/numba/numba/issues/6610",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
483609836
|
Overload np.array to accept arrays
Adds support for calls to np.array(a) inside jitted functions, where a is a numpy array (which closes #4470), as well as support for the copy keyword argument to np.array. Also adds tests for these features.
This overload is essentially a minor generalization of the overload for np.asarray (which is the next function in arraymath.py). Unfortunately, I couldn't manage to just use this more general version to replace the np.asarray overload because of the different number of (optional) arguments, so this violates DRY a bit. If I'm being silly, please let me know how to make it work.
@moble thanks for submitting this, enhancements to Numba are always appreciated! I've labeled this as 'in progress' for the time being as there still seem to be some CI failures. Do let us know if we can help out at all.
@esc @stuartarchibald I've actually scrolled through every failure here, and with one exception noted below(*) the failures are all from one test that doesn't make any sense to me, but I'd like to ask the experts if there's actually a reason this test makes sense, or if it can just be removed now. That test is here, and basically it insists that TypingError("array(float64, 1d, C) not allowed in a homogeneous sequence") must be raised for code like this:
np.array(np.array([1.]))
(It looks like it would be raised from here.) But that's exactly the sort of thing I'm trying to enable with this PR. Is there some reason that shouldn't be allowed? Can I just remove that little piece of this one test?
(*) That one exception is on tests with numpy <= 1.10. In the tests, I used np.shares_memory to ensure that by default or when copy=True is passed, array will actually make a separate copy of the data. That function was actually introduced in numpy 1.11, so I'll just check that the array bases are different or something there. But I'll wait to add that change until we find a way forward on the main failure above.
@moble, thanks for the patch. This will likely be reviewed next week.
RE: the question above about the homogeneous-sequence test:
from a quick look your assessment seems correct, I think it is safe to remove this test/adapt it to test the newly accepted behaviour.
Thanks again.
@stuartarchibald @esc Any help on my comments/questions above?
@moble I think both of the above questions should be answered by the guide to @overloading . Essentially, Numba has to perform type inference on the code so it can compile a specialisation of it, these types need checking with respect to what is actually supported in the implementations available. If you look at the "concrete example" in the above documentation, you'll see in the jit_norm function implementing @overload(scipy.linalg.norm) that most of the code is type checking, which is exactly what is needed in this PR. If it isn't clear what to do from the documentation please shout and more help can be supplied. Thanks for working on this.
Also fixes https://github.com/numba/numba/issues/2806
What you need to do is check the types of the arguments before attempting to return an implementation. A suggestion here is the following. Imagine having a test case that looks like:
def test_array_exceptions(self):
    n = njit(array_dtype)
    n(np.arange(10), dtype="abc")
When running this, you end up with the following exception:
💥 zsh» python -m numba.runtests numba.tests.test_np_functions.TestNPFunctions.test_array_exceptions
E
======================================================================
ERROR: test_array_exceptions (numba.tests.test_np_functions.TestNPFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 147, in propagate
constraint(typeinfer)
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 462, in __call__
self.resolve(typeinfer, typevars, fnty)
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 481, in resolve
sig = typeinfer.resolve_call(fnty, pos_args, kw_args)
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 1329, in resolve_call
return self.context.resolve_function_type(fnty, pos_args, kw_args)
File "/Users/vhaenel/git/numba/numba/typing/context.py", line 216, in resolve_function_type
raise last_exception
File "/Users/vhaenel/git/numba/numba/typing/context.py", line 199, in resolve_function_type
res = self._resolve_user_function_type(func, args, kws)
File "/Users/vhaenel/git/numba/numba/typing/context.py", line 251, in _resolve_user_function_type
return func.get_call_type(self, args, kws)
File "/Users/vhaenel/git/numba/numba/types/functions.py", line 147, in get_call_type
raise errors.TypingError(failures.format())
numba.errors.TypingError: Invalid use of Function(<built-in function array>) with argument(s) of type(s): (array(int64, 1d, C), dtype=unicode_type)
* parameterized
In definition 0:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 1:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 2:
AttributeError: 'UnicodeType' object has no attribute 'dtype'
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3677
In definition 3:
AttributeError: 'UnicodeType' object has no attribute 'dtype'
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3677
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: resolving callee type: Function(<built-in function array>)
[2] During: typing of call at /Users/vhaenel/git/numba/numba/tests/test_np_functions.py (161)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/vhaenel/git/numba/numba/tests/test_np_functions.py", line 2698, in test_array_exceptions
n(np.arange(10), dtype="abc")
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 376, in _compile_for_args
error_rewrite(e, 'typing')
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 341, in error_rewrite
raise e
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 352, in _compile_for_args
return self.compile(tuple(argtypes))
File "/Users/vhaenel/git/numba/numba/compiler_lock.py", line 32, in _acquire_compile_lock
return func(*args, **kwargs)
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 742, in compile
cres = self._compiler.compile(args, return_type)
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 80, in compile
raise retval
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 90, in _compile_cached
retval = self._compile_core(args, return_type)
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 108, in _compile_core
pipeline_class=self.pipeline_class)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 976, in compile_extra
return pipeline.compile_extra(func)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 391, in compile_extra
return self._compile_bytecode()
File "/Users/vhaenel/git/numba/numba/compiler.py", line 907, in _compile_bytecode
return self._compile_core()
File "/Users/vhaenel/git/numba/numba/compiler.py", line 894, in _compile_core
res = pm.run(self.status)
File "/Users/vhaenel/git/numba/numba/compiler_lock.py", line 32, in _acquire_compile_lock
return func(*args, **kwargs)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 267, in run
raise patched_exception
File "/Users/vhaenel/git/numba/numba/compiler.py", line 258, in run
stage()
File "/Users/vhaenel/git/numba/numba/compiler.py", line 517, in stage_nopython_frontend
self.locals)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 1128, in type_inference_stage
infer.propagate()
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 928, in propagate
raise errors[0]
numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Invalid use of Function(<built-in function array>) with argument(s) of type(s): (array(int64, 1d, C), dtype=unicode_type)
* parameterized
In definition 0:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 1:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 2:
AttributeError: 'UnicodeType' object has no attribute 'dtype'
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3677
In definition 3:
AttributeError: 'UnicodeType' object has no attribute 'dtype'
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3677
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: resolving callee type: Function(<built-in function array>)
[2] During: typing of call at /Users/vhaenel/git/numba/numba/tests/test_np_functions.py (161)
File "numba/tests/test_np_functions.py", line 161:
def array_dtype(a, dtype):
return np.array(a, dtype=dtype)
^
----------------------------------------------------------------------
Ran 1 test in 0.089s
FAILED (errors=1)
Then, adding the following check:
if not isinstance(dtype, types.DTypeSpec):
    raise TypingError("'dtype' must be a valid NumPy type object")
Will lead to:
💥 zsh» python -m numba.runtests numba.tests.test_np_functions.TestNPFunctions.test_array_exceptions :(
E
======================================================================
ERROR: test_array_exceptions (numba.tests.test_np_functions.TestNPFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 147, in propagate
constraint(typeinfer)
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 462, in __call__
self.resolve(typeinfer, typevars, fnty)
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 481, in resolve
sig = typeinfer.resolve_call(fnty, pos_args, kw_args)
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 1329, in resolve_call
return self.context.resolve_function_type(fnty, pos_args, kw_args)
File "/Users/vhaenel/git/numba/numba/typing/context.py", line 216, in resolve_function_type
raise last_exception
File "/Users/vhaenel/git/numba/numba/typing/context.py", line 199, in resolve_function_type
res = self._resolve_user_function_type(func, args, kws)
File "/Users/vhaenel/git/numba/numba/typing/context.py", line 251, in _resolve_user_function_type
return func.get_call_type(self, args, kws)
File "/Users/vhaenel/git/numba/numba/types/functions.py", line 147, in get_call_type
raise errors.TypingError(failures.format())
numba.errors.TypingError: Invalid use of Function(<built-in function array>) with argument(s) of type(s): (array(int64, 1d, C), dtype=unicode_type)
* parameterized
In definition 0:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 1:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 2:
TypingError: 'dtype' must be a valid NumPy type object
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3676
In definition 3:
TypingError: 'dtype' must be a valid NumPy type object
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3676
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: resolving callee type: Function(<built-in function array>)
[2] During: typing of call at /Users/vhaenel/git/numba/numba/tests/test_np_functions.py (161)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/vhaenel/git/numba/numba/tests/test_np_functions.py", line 2698, in test_array_exceptions
n(np.arange(10), dtype="abc")
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 376, in _compile_for_args
error_rewrite(e, 'typing')
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 341, in error_rewrite
raise e
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 352, in _compile_for_args
return self.compile(tuple(argtypes))
File "/Users/vhaenel/git/numba/numba/compiler_lock.py", line 32, in _acquire_compile_lock
return func(*args, **kwargs)
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 742, in compile
cres = self._compiler.compile(args, return_type)
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 80, in compile
raise retval
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 90, in _compile_cached
retval = self._compile_core(args, return_type)
File "/Users/vhaenel/git/numba/numba/dispatcher.py", line 108, in _compile_core
pipeline_class=self.pipeline_class)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 976, in compile_extra
return pipeline.compile_extra(func)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 391, in compile_extra
return self._compile_bytecode()
File "/Users/vhaenel/git/numba/numba/compiler.py", line 907, in _compile_bytecode
return self._compile_core()
File "/Users/vhaenel/git/numba/numba/compiler.py", line 894, in _compile_core
res = pm.run(self.status)
File "/Users/vhaenel/git/numba/numba/compiler_lock.py", line 32, in _acquire_compile_lock
return func(*args, **kwargs)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 267, in run
raise patched_exception
File "/Users/vhaenel/git/numba/numba/compiler.py", line 258, in run
stage()
File "/Users/vhaenel/git/numba/numba/compiler.py", line 517, in stage_nopython_frontend
self.locals)
File "/Users/vhaenel/git/numba/numba/compiler.py", line 1128, in type_inference_stage
infer.propagate()
File "/Users/vhaenel/git/numba/numba/typeinfer.py", line 928, in propagate
raise errors[0]
numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Invalid use of Function(<built-in function array>) with argument(s) of type(s): (array(int64, 1d, C), dtype=unicode_type)
* parameterized
In definition 0:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 1:
TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence
raised from /Users/vhaenel/git/numba/numba/typing/npydecl.py:460
In definition 2:
TypingError: 'dtype' must be a valid NumPy type object
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3676
In definition 3:
TypingError: 'dtype' must be a valid NumPy type object
raised from /Users/vhaenel/git/numba/numba/targets/arraymath.py:3676
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: resolving callee type: Function(<built-in function array>)
[2] During: typing of call at /Users/vhaenel/git/numba/numba/tests/test_np_functions.py (161)
File "numba/tests/test_np_functions.py", line 161:
def array_dtype(a, dtype):
return np.array(a, dtype=dtype)
^
----------------------------------------------------------------------
Ran 1 test in 0.071s
FAILED (errors=1)
Using the information from: https://numba.pydata.org/numba-doc/dev/extending/overloading-guide.html
You can then refine this further:
if isinstance(dtype, types.Optional):
    dtype = dtype.type
if not isinstance(dtype, (types.DTypeSpec, types.NoneType)):
    raise TypingError("'dtype' must be a valid NumPy type object")
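Put together, checks like these sit at the top of the overload, before any implementation is returned; here is a minimal sketch of the shape (illustrative only, not the PR's actual code):
import numpy as np
from numba import types
from numba.errors import TypingError  # numba.core.errors on newer versions
from numba.extending import overload

@overload(np.array)
def np_array(a, dtype=None):
    # Validate argument *types* at typing time; returning None makes
    # this overload decline signatures it does not handle.
    if isinstance(dtype, types.Optional):
        dtype = dtype.type
    if not (dtype is None or isinstance(dtype, (types.DTypeSpec, types.NoneType))):
        raise TypingError("'dtype' must be a valid NumPy type object")
    # Sketch: only the copy-an-existing-array, default-dtype case.
    if isinstance(a, types.Array) and (dtype is None or isinstance(dtype, types.NoneType)):
        def impl(a, dtype=None):
            return a.copy()  # np.array copies its input by default
        return impl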
@esc Thanks for the explanation. I had to change your second condition to be
if not isinstance(dtype, types.DTypeSpec) and not _is_nonelike(dtype):
and then similarly generalize the checks for the boolean argument.
@moble no problem, the changes look good but there is still some work to do. The next thing would be to write some test functions.
Also, I am thinking about how to enable code reuse across array and asarray and will let you know if I come up with anything.
@moble just checking in on the status of this PR. It seems like it now has some conflicts with the current master, probably worth resolving those.
@esc Things have slowed down a bit because we've just had our third baby (with twin 2-year-olds), so I don't have a lot of time at the moment. Also, I'm not really clear how to write the tests you wanted. Any help would be appreciated.
@moble congratulations!
Thanks for your work on this so far. If you are happy to do so, I think @esc mentioned that he can continue this PR to get it to the point where it can be merged. How does that sound?
@stuartarchibald @esc That sounds great. I would really appreciate it. Sorry I couldn't push it over the finish line, but thanks to both of you for all your help getting it this far.
@moble No worries. Thanks for wanting to contribute to Numba :)
@esc if you are happy to do so please could you take over? I can review. We also need to make sure that @moble goes into the release notes as a contributor when this is merged too.
Thanks all!
@stuartarchibald I have added it to my list of stuff.
Great, thanks!
What is the ETA on this PR?
Got a notification that this got closed, but probably because of the master -> main rename? (Not sure if it has happened to other pull requests, I'm subscribed to a few of them and only this one got closed)
GitHub assured me that all PRs would be re-targeted - I am very surprised that this was closed.
It seems like I can't re-open it either.
Yeah, this will need to be submitted anew, suspect a GitHub bug/glitch/whatever.
I think the problem is that the source branch or repo for the PR no longer exists, so Github could not retarget it.
This is one of four PRs affected by our renaming of the default branch where GitHub failed to automatically update the target branch. It is now in a state that cannot be reopened. OP, please open a new PR. Sorry for the inconvenience.
I was the OP, but @esc took it over from me.
Oh yes, looks like I dropped it, my bad.
I've put it back on my list, which is very long. If anybody would like to jump on this one, please feel free.
|
gharchive/pull-request
| 2019-08-21T19:28:41 |
2025-04-01T04:35:15.340213
|
{
"authors": [
"astrojuanlu",
"caniko",
"esc",
"moble",
"seibert",
"sklam",
"stuartarchibald"
],
"repo": "numba/numba",
"url": "https://github.com/numba/numba/pull/4475",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2184509608
|
Schedule guvectorize'd functions over dask.distributed
Fix failure to pickle a dask array that embeds a guvectorized ufunc in its graph.
Closes https://github.com/numba/numba/issues/4314
Closes https://github.com/dask/distributed/issues/3450
Closes https://github.com/dask/distributed/issues/7929
@crusaderky, thanks for your contribution. Do you have a test case using Dask where the current code would fail?
import dask.array as da
import distributed
import numba

client = distributed.Client(processes=True, n_workers=1)

@numba.guvectorize(["f8,f8[:]"], "()->()")
def double(x, out):
    out[:] = x * 2

double(da.arange(5)).compute()
@crusaderky, can you add a release notes file? https://numba.readthedocs.io/en/latest/developer/contributing.html#release-notes
Done
|
gharchive/pull-request
| 2024-03-13T17:09:26 |
2025-04-01T04:35:15.344969
|
{
"authors": [
"crusaderky",
"guilhermeleobas"
],
"repo": "numba/numba",
"url": "https://github.com/numba/numba/pull/9495",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1506261875
|
Use shellcheck
I found a few bugs just by using shellcheck: https://github.com/numtide/nixos-remote/pull/8/commits/c5bf76c63efdd1043da8fd93ac7a341ba5cb3745
I think this is a must for this kind of project.
Fixed in master
32ba35c4ee7eaba2bf08507d69133b972e2bccbe
|
gharchive/issue
| 2022-12-21T12:59:48 |
2025-04-01T04:35:15.531881
|
{
"authors": [
"Mic92",
"zimbatm"
],
"repo": "numtide/nixos-remote",
"url": "https://github.com/numtide/nixos-remote/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1859889854
|
[ FEATURE REQUEST ] - ComfyUI
Would love to see if this could be converted to ComfyUI as a node.
Hi
Thanks for your request, i'll think about it 😊
Regards
Can this be installed on ComfyUI or become a custom node?
|
gharchive/issue
| 2023-08-21T18:08:12 |
2025-04-01T04:35:15.546496
|
{
"authors": [
"Pythonpa",
"numz",
"rethink-studios"
],
"repo": "numz/sd-wav2lip-uhq",
"url": "https://github.com/numz/sd-wav2lip-uhq/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1109291649
|
Access to an undefined property
Larastan Version: 1.0.2
--level used: 2
Pull request with failing test:
Description
On a model I get the following error:
------ -----------------------------------------------------------------------------------------------------------
Line Models/UnitConversion.php
------ -----------------------------------------------------------------------------------------------------------
96 Access to an undefined property Illuminate\Database\Eloquent\Builder<App\Models\UnitConversion>::$factor.
------ -----------------------------------------------------------------------------------------------------------
Laravel code where the issue was found
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
/**
* App\Models\UnitConversion
*
* @property int $id
* @property float $factor
* @property int|null $from_unit_id
* @property int|null $to_unit_id
* @property int|null $ingredient_id
* @property \Illuminate\Support\Carbon|null $created_at
* @property \Illuminate\Support\Carbon|null $updated_at
* @property-read \App\Models\Unit|null $fromUnit
* @property-read \App\Models\Ingredient|null $ingredient
* @property-read \App\Models\Unit|null $toUnit
* @method static \Database\Factories\UnitConversionFactory factory(...$parameters)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion newModelQuery()
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion newQuery()
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion query()
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereCreatedAt($value)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereFactor($value)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereFromUnitId($value)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereId($value)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereIngredientId($value)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereToUnitId($value)
* @method static \Illuminate\Database\Eloquent\Builder|UnitConversion whereUpdatedAt($value)
* @mixin \Eloquent
*/
class UnitConversion extends Model
{
use HasFactory;
/**
* @var array|string[]
*/
protected $fillable = [
'factor',
'from_unit_id',
'to_unit_id',
'ingredient_id',
];
/**
* fromUnit
*
* @return BelongsTo
*/
public function fromUnit(): BelongsTo
{
return $this->belongsTo(Unit::class, 'from_unit_id');
}
/**
* toUnit
*
* @return BelongsTo
*/
public function toUnit(): BelongsTo
{
return $this->belongsTo(Unit::class, 'to_unit_id');
}
/**
* ingredient
*
* @return BelongsTo
*/
public function ingredient(): BelongsTo
{
return $this->belongsTo(Ingredient::class, 'ingredient_id');
}
public static function getGram($qty, $from_unit_id, $ingredient_id = null)
{
if ($ingredient_id != null) {
$uc = self::whereFromUnitId($from_unit_id)
->whereToUnitId(Unit::whereName('g')->first()->id)
->whereIngredientId($ingredient_id)->first();
return $qty * $uc->factor;
}
$uc = UnitConversion::whereFromUnitId($from_unit_id)
->whereToUnitId(Unit::whereName('g')->first()->id)
->whereNull('ingredient_id');
if ($uc) {
return $qty * $uc->factor; //<----- Here I get the error
}
return $qty;
}
}
I have found the issue, my mistake, sorry
I have found the issue
It is ide-helper.
well, sort of... but when I got $uc, I was missing ->first()
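For reference, a minimal sketch of the corrected query (the missing ->first() is what turns the builder into a model):

$uc = UnitConversion::whereFromUnitId($from_unit_id)
    ->whereToUnitId(Unit::whereName('g')->first()->id)
    ->whereNull('ingredient_id')
    ->first(); // without this, $uc is an Eloquent Builder, which has no $factor

if ($uc) {
    return $qty * $uc->factor;
}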
|
gharchive/issue
| 2022-01-20T13:17:36 |
2025-04-01T04:35:15.550418
|
{
"authors": [
"rabol",
"szepeviktor"
],
"repo": "nunomaduro/larastan",
"url": "https://github.com/nunomaduro/larastan/issues/1095",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
502998058
|
Model scope call after relation returns Builder
Currently, code like this $this->hasOne(User::class)->active(); results in an error saying Call to an undefined method Illuminate\Database\Eloquent\Relations\HasOne::active(). This PR fixes this.
Relation class in Laravel forwards unknown calls to Builder. And Builder can call scope methods on the model. So it is safe to call model scopes from the relation, like above.
There was one issue that I couldn't solve. As far as my understanding goes for the Reflection and PHPStan, there is no way to know the calling context from the reflection.
So in this implementation, I'm treating every unknown call as a scope method call if the method doesn't also exist in Builder.
I'm open to suggestions if anyone knows how to get the actual scope method reflection from inside the ModelScopeAfterRelations pipe.
Thanks!
@canvural Let me know if you would like to maintain this project.
@nunomaduro I don't have too much experience with writing PHPStan extensions. I don't know if I'd be the right person.
I'm just fixing the errors that I found when running Larastan on my project. And I have some free time over the next 2 or 3 weeks. I'll continue to fix the issues I found. As for the maintaining let's see after this time.
|
gharchive/pull-request
| 2019-10-05T18:20:35 |
2025-04-01T04:35:15.553658
|
{
"authors": [
"canvural",
"nunomaduro"
],
"repo": "nunomaduro/larastan",
"url": "https://github.com/nunomaduro/larastan/pull/319",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
511969964
|
Squelch all resolution exceptions
Keep nibbling away at #338.
Speaking of cleaning up unused imports, @canvural , would you say no to another PR that hooks up Scrutinizer checks for unused imports and cleans up what's already there?
The immediate use case would be to catch omissions like this. The general idea behind the approach is to divide labour between silicon and synapse, and automate what can(should?) be automated.
Can you re-phrase? I didn't understand.
@nunomaduro , I'll try.
The immediate reason is to catch omissions like what I made in this pull req originally.
The longer-term reason is to reduce the effort required on the maintainers' part to review incoming patches - thus making whatever remaining review easier, more timely and more likely to happen. I'm definitely not saying "turn all the Scrutiniser checks on and up to 11 IMMEDIATELY".
@canvural , @nunomaduro , discussions about Scrutiniser config changes aside, is this PR good to merge up?
Yeah - probably I am not getting everything because I am at a conference. But I still don't understand the use case for this. Why do we need this?
The resolve method uses the actual container from the user's application, and resolving some interface through the container may sometimes have side effects. Look at issue #338.
So by catching all the exceptions, we make sure the analysis doesn't crash.
That's not a good solution. We should understand, instead, why we are resolving things from the container that can fail. Note that if resolving things from the container fails at static-analysis time, it may also fail at runtime for the user.
Let's make sure we address this issue properly.
OK so I'll close this and let's discuss it at #338
|
gharchive/pull-request
| 2019-10-24T14:08:11 |
2025-04-01T04:35:15.558841
|
{
"authors": [
"CyberiaResurrection",
"canvural",
"nunomaduro"
],
"repo": "nunomaduro/larastan",
"url": "https://github.com/nunomaduro/larastan/pull/343",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
762836121
|
fix usort causing error with PHP8
This makes the library compatible with PHP8 without any change in how the table commands are shown.
Released as v1.7.1 👍🏻
|
gharchive/pull-request
| 2020-12-11T20:03:33 |
2025-04-01T04:35:15.559922
|
{
"authors": [
"owenvoke",
"yazeed"
],
"repo": "nunomaduro/laravel-console-summary",
"url": "https://github.com/nunomaduro/laravel-console-summary/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
175929707
|
[T4A2][T15-B1]Niveetha Nair
Ready for review
(1) Add an instance-level member int sequenceNumber and a class-level variable int nextSequenceNumber to the Person class
(2) Convert the methods of the Parser class to a class-level methods
@Niveetha
Great work!
|
gharchive/pull-request
| 2016-09-09T05:13:43 |
2025-04-01T04:35:15.576964
|
{
"authors": [
"K1ang",
"Niveetha"
],
"repo": "nus-cs2103-AY1617S1/addressbook-level2",
"url": "https://github.com/nus-cs2103-AY1617S1/addressbook-level2/pull/1654",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
175177155
|
[T4A2][W13-B4] Hee Han Xiang
Add class level member
Ready for review
You need to update the parser test file that uses the parseCommand method.
I see you have commits from an unrelated activity here.
Please avoid accumulating unrelated changes into a single PR.
Tip: If you are currently on branch1 and want to create a new branch2, you need to switch back to master branch before creating branch2. If not, changes in branch1 will appear in the PR created from branch2 later.
@fisherhx
Some comments added. Please ack & close the PR after reading comments.
|
gharchive/pull-request
| 2016-09-06T07:21:39 |
2025-04-01T04:35:15.579983
|
{
"authors": [
"fisherhx",
"okkhoy"
],
"repo": "nus-cs2103-AY1617S1/addressbook-level2",
"url": "https://github.com/nus-cs2103-AY1617S1/addressbook-level2/pull/933",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
809334878
|
[CS2103-W15-2] ClientBook
ClientBook is a desktop app that helps insurance agents efficiently manage contact details of their clients. It is optimized for use via a Command Line Interface (CLI) so that users can use the app by typing commands.
Codecov Report
Merging #44 (37876fb) into master (c36220c) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #44 +/- ##
=========================================
Coverage 72.15% 72.15%
Complexity 399 399
=========================================
Files 70 70
Lines 1232 1232
Branches 125 125
=========================================
Hits 889 889
Misses 311 311
Partials 32 32
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c36220c...37876fb. Read the comment docs.
|
gharchive/pull-request
| 2021-02-16T13:57:39 |
2025-04-01T04:35:15.585135
|
{
"authors": [
"codecov-io",
"jay9645"
],
"repo": "nus-cs2103-AY2021S2/tp",
"url": "https://github.com/nus-cs2103-AY2021S2/tp/pull/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2014612156
|
Added nu_plugin_qr_maker
Added nu_plugin_qr_maker
cool! thanks.
|
gharchive/pull-request
| 2023-11-28T14:45:01 |
2025-04-01T04:35:15.624611
|
{
"authors": [
"FMotalleb",
"fdncred"
],
"repo": "nushell/awesome-nu",
"url": "https://github.com/nushell/awesome-nu/pull/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
529523086
|
fix(list-page): support custom output path
Fix #111 #113
@farnabaz when will this change be released?
@AndrewBogdanovTSS sorry for the delay, I've published a new version.
|
gharchive/pull-request
| 2019-11-27T18:58:25 |
2025-04-01T04:35:15.740626
|
{
"authors": [
"AndrewBogdanovTSS",
"farnabaz"
],
"repo": "nuxt-community/svg-sprite-module",
"url": "https://github.com/nuxt-community/svg-sprite-module/pull/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
890231661
|
Ads showing only on page reload
I have inserted an ad into my page using this module, and the ad works only with a full page reload (for example, when reloading the page in the browser or pasting a direct link into the address field). When I open this page from within my application (by clicking a nuxt-link or after $route.push), the ad doesn't load.
my nuxt-config looks like:
modules: [
['@nuxtjs/google-adsense']
],
'google-adsense': {
id: 'ca-pub-xxxxxx',
test: false,
//onPageLoad: true // I've also tried setting this property to true but it doesn't help
},
and in my *.vue file:
<div class="my-ad-banner">
<adsbygoogle
:ad-style="{display: 'block!important', width: '208px'}"
ad-client="ca-pub-xxxx"
ad-slot="1231312312"
ad-format="auto"
full-width-responsive
></adsbygoogle>
</div>
.my-ad-banner {
min-width: 208px;
margin-bottom: 16px;
}
I have the same problem, did you find a solution?
@kovaletsyurii @thor-n have you tried using no-prefetch on nuxt-link?
The reason this happens is exactly that the page is loaded using nuxt-link/router navigation.
This type of loading doesn't cause the browser to reload, and hence the global window variable used to store the Google ads initialization is never cleared out.
Hence, when you change pages you might see console errors saying something like "slot already assigned" or similar.
The latest version for nuxt 3 seems to already have fixed this
https://github.com/nuxt-modules/google-adsense/blob/master/src/runtime/components-v3/Adsbygoogle.vue
By calling updateAd the complete initialization is re-run causing ads to render
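For older setups, a workaround sketch (an assumption on my part, not the module's documented fix) is to force the component to re-mount on client-side navigation by keying it on the route, which re-runs the initialization:

<adsbygoogle :key="$route.fullPath" ad-client="ca-pub-xxxx" ad-slot="1231312312" />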
This issue can probably be closed
|
gharchive/issue
| 2021-05-12T15:46:49 |
2025-04-01T04:35:15.744578
|
{
"authors": [
"i330z",
"kovaletsyurii",
"modbender",
"thor-n"
],
"repo": "nuxt-modules/google-adsense",
"url": "https://github.com/nuxt-modules/google-adsense/issues/123",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1476517051
|
The plugin for Nuxt 3 seems to be slower than the plugin for Nuxt 2
Versions
nuxt: 3.0.0
@nuxtjs/prismic: 3.0.0-rc.2
node: 16.17
Reproduction
Both projects are here :
https://gitlab.com/maltcommunity/public/singapore-prototypes/-/tree/master/prismic-nuxt-2
https://gitlab.com/maltcommunity/public/singapore-prototypes/-/blob/master/prismic-nuxt-3
Steps to reproduce
We have encountered performance issues since our migration to Nuxt 3, and we suspect the Prismic module.
When running the projects above, we log on both versions the time needed to query a Prismic document.
To reproduce, just check out the projects, run npm install and then npm run dev
What is Expected?
Time should be equivalent in both version
What is actually happening?
Times are quite different.
With Nuxt 3, the time is significantly slower.
I'm not sure this is related to Nuxt, so I'll post an issue on the Prismic client as well.
Hey there, thanks for opening an issue, and thank you so much for the comprehensive reproduction, I was able to reproduce it!
Indeed, with this setup, it appears that @prismicio/client version 5 is *faster* than @prismicio/client version 7. However, when turning on the modern option on Nuxt 2 module, we now see comparable results:
So why the modern option on Nuxt 2?
When not using this option, @prismicio/client version 5 gets inited using an old, deprecated, and not feature-full method that can cause issues with previews namely. The modern option initializes the client in a more modern way/feature-full way.
With all of that in mind, maybe you can try running your Nuxt 2 application with the modern option turned on so that you can maybe confirm if Prismic is indeed your performance bottleneck?
While biased, I think 100 ms is a decent API response time that shouldn't represent a bottleneck in such an application(?)
Our current application is using Nuxt 3, not Nuxt 2, and I'm not looking for the same performance penalty :) I'd like, on the contrary, to have the performance I used to have in the previous app.
In this example, it varies within the range 100/200 ms. But in real use cases with more complex slices the variation is more in the range 500/900 ms. With our old version, it was in the range 30/70 ms. The difference is quite huge.
I don't see the parameter "modern" in the Nuxt 3 plugin https://v3.prismic.nuxtjs.org/configuration
Do you confirm it does not exist anymore?
Our current application is using Nuxt 3, not Nuxt 2, and I'm not looking for the same performance penalty :) I'd like, on the contrary, to have the performance I used to have in the previous app.
Yes, my suggestion was to confirm that turning on modern in the old application resulted in the same performance issues that you experience in the new application.
In this example, it varies within the range 100/200 ms. But in real use cases with more complex slices the variation is more in the range 500/900 ms. With our old version, it was in the range 30/70 ms. The difference is quite huge.
Interesting, are you able to provide a more performance-heavy call example (just the query, I can update it manually in the reproduction). It might help exacerbate the less-performant part of the code for further troubleshooting.
I don't see the parameter "modern" in the Nuxt 3 plugin v3.prismic.nuxtjs.org/configuration
Indeed, the modern option is for Nuxt 2 only (the Nuxt 3 module is only "modern" so to say, not legacy)
To summarize.
Here is the result with the original setup in the repository
Nuxt 3
Nuxt 2
It's approximately a 334% performance degradation.
With Nuxt 2 and modern option activated, indeed we have the same degradation :
In "real" conditions within our application the performance of prismic is quite different as you can see in this flamegraph :
I don't know exactly how to isolate the problem in the public repository because I still don't know what's causing this.
In the flamegraph (made with Datadog) there are no details about Prismic in the green line
We will try to create the same flamegraph on the old application
Currently, I can only time the whole response time from the server
With Nuxt 3
With Nuxt 2
The time is, on average, 2x slower.
The website content is exactly the same; the slices are identical.
Calls to Prismic are already parallelized, as we can see in these flamegraphs; they don't sum up.
We continue to investigate
OK, thanks again for sharing the details! Let me know if you happen to find anything else.
I'll check internally on our end if we can find anything tied to your specific project.
Hi,
For previews, we still use the standard client. We may use webhooks to evict the cache.
About the API, yes I really don't know why I have those differences in both versions. You're right, it's the same API, the same content. It's really weird and I don't understand.
That said, maybe we would have had the same performance in our previous app with modern: true. We tested that on the small project, but we didn't try it on the old app. I'll test it today.
Closing this one as I think it has been overall answered
tl;dr; client seems to be working as expected when it comes to timing between Nuxt 2 and Nuxt 3 versions, or it is an upstream issue.
@hlassiege Hi! We also encountered the same problem. Did you find another solution?
@OlgaLookina we had to create a proxy to cache the responses from Prismic.
We never found a good built-in solution with Prismic itself.
|
gharchive/issue
| 2022-12-05T13:01:13 |
2025-04-01T04:35:15.761210
|
{
"authors": [
"OlgaLookina",
"hlassiege",
"lihbr"
],
"repo": "nuxt-modules/prismic",
"url": "https://github.com/nuxt-modules/prismic/issues/174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1104010965
|
Await concurrent requests generated by the generate property on nuxt.config.js
Is your feature request related to a problem? Please describe.
I am using WordPress as a CMS, and we have more than 900 articles we need to build at build time. The problem is that after the generate function produces all the static routes, they are all fired at once, making more than 200 requests at the same time, hitting the server's concurrent-request limit and causing the server to drop further calls.
Describe the solution you'd like
I would like a way to throttle concurrent requests after generating dynamic routes using the generate function in nuxt.config.js. It may slow down the deployment, but it will avoid hitting the server's concurrent-request limit.
Describe alternatives you've considered
I haven't found any alternative to this problem at the moment yet.
I have tried passing the payload property to each generated route, requesting chunks of 100 articles at build time to avoid hitting the server's concurrent-request limit, but even with that, it still makes a request for each generated route.
Additional context
Nuxt version: v2.15.3
Node version: v14.16
https://nuxtjs.org/docs/configuration-glossary/configuration-generate/#concurrency
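Concretely, a minimal sketch of nuxt.config.js using that option (25 is an arbitrary example; the default is 500):

export default {
  generate: {
    // limit how many routes are rendered at the same time during nuxt generate
    concurrency: 25
  }
}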
|
gharchive/issue
| 2022-01-14T18:55:57 |
2025-04-01T04:35:15.776941
|
{
"authors": [
"ltroya-as"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/10208",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1402208873
|
is it possible to update vue to 2.7.10
There are a lot of new features in 2.7; is it possible to update Nuxt 2 to use the latest release of Vue?
Totally. Just refresh your lockfile. You can try running npm upgrade vue or whatever your package manager's command is.
For a long time Nuxt has used the caret constraint which means ... a huge number of old Nuxt releases will install Vue 2.7.
But is there a way to know from what minor version nuxt 2 started supporting vue 2.7?
What version are you on?
"@nuxtjs/vuetify": "^1.12.3",
That's not a version of nuxt.
I apologize, my brain thought it copied the right one:
"nuxt": "^2.14.12"
That works for Vue 2.7 👍
|
gharchive/issue
| 2022-10-09T09:10:36 |
2025-04-01T04:35:15.780461
|
{
"authors": [
"danielroe",
"mathxlee",
"naquiroz"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/10749",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
470653393
|
Appending scripts and styles to the body not work on initial page load
Version
v2.8.1
Reproduction link
https://codesandbox.io/embed/codesandbox-nuxt-rodbc
Steps to reproduce
Click on the button and then go back. You will see that component inline style and script tags are loaded only on hot reload.
What is expected ?
Styles and scripts that are defined in the head() method should be appended to the body on initial page load & generate.
What is actually happening?
Styles and scripts that are defined in head() method are appended to the DOM only during SPA navigation.
Additional comments?
Thank you for resolving the issue.
This bug report is available on Nuxt community (#c9515)
Unfortunately body: true is only supported for script elements (both by vue-meta and Nuxt), not for style elements. Please create an issue on the vue-meta repo if you think it should be possible to add styles directly to the body on SSR as well.
@pimlie comment
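For illustration, a minimal sketch of what does work today (scripts only; the URL is a placeholder):

export default {
  head() {
    return {
      // body: true is honoured for script tags, on the client and during SSR
      script: [{ src: 'https://example.com/widget.js', body: true }]
      // an equivalent body: true on a style entry is what this issue asks for
    }
  }
}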
@pimlie OK, I see. I thought it was related to Nuxt, as you advised adding styles this way in an earlier comment.
Damn, why'd you have to slap me with my own words like that :smile:
It seems my remark in that comment was not covering all angles (as in SSR). When updating on the client, body: true is always respected, but during SSR this depends on the framework, as the framework needs to inject body styles separately in the template. E.g. for Nuxt, an entry for styles should be added here: https://github.com/nuxt/nuxt.js/blob/dev/packages/vue-renderer/src/renderers/ssr.js#L143-L144
@pimlie Please forgive me!
So it sounds like a valid bug-report.
No problem! Actually it's probably more a feature request than a bug from Nuxt's point of view.
Please feel free to submit a PR for this if you have the time. Should be rather straight forward :+1:
PS. Does setting the loading property to false on the page not work?
|
gharchive/issue
| 2019-07-20T09:01:41 |
2025-04-01T04:35:15.786918
|
{
"authors": [
"hojas",
"pimlie",
"robertpiosik"
],
"repo": "nuxt/nuxt.js",
"url": "https://github.com/nuxt/nuxt.js/issues/6097",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1834820642
|
Build command does not create files in the .output/public/_nuxt directory (with rollup options)
Environment
Operating System: Darwin
Node Version: v16.18.1
Nuxt Version: 3.6.5
Nitro Version: 2.5.2
Package Manager: yarn@1.22.19
Builder: vite
User Config: devtools, app, runtimeConfig, modules, dayjs, colorMode, css, components, vite, build
Runtime Modules: @nuxtjs/eslint-module@4.1.0, @pinia/nuxt@0.4.11, @vueuse/nuxt@10.2.1, nuxt-swiper@1.2.0, dayjs-nuxt@1.1.2, @nuxt/image@1.0.0-rc.1, @nuxtjs/i18n@8.0.0-beta.13, @nuxtjs/color-mode@3.3.0
Build Modules: -
Reproduction
rollup.zip
Describe the bug
Set entryFileNames, chunkFileNames and assetFileNames under vite.build.rollupOptions.output in nuxt.config
Run yarn build
Nothing is emitted to .output/public/_nuxt
Additional context
Logs
No response
@kyejune Can you share your nuxt.config.ts?
here:
You need to append _nuxt/ to the start of those ids. So, for example: _nuxt/[name].js.
If you want to change the name of the directory you can do that with app.buildAssetsDir.
But we do need a directory name for setting appropriate cache headers for the build assets.
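Put together, a hedged sketch of the adjusted config (the file-name patterns are illustrative):

export default defineNuxtConfig({
  vite: {
    build: {
      rollupOptions: {
        output: {
          // keep the _nuxt/ prefix so the build-asset cache headers still apply
          entryFileNames: '_nuxt/[name].js',
          chunkFileNames: '_nuxt/[name].[hash].js',
          assetFileNames: '_nuxt/[name].[hash][extname]',
        },
      },
    },
  },
})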
Thanks to you, it was solved.
The cache is planned to create a specified value and put it in.
|
gharchive/issue
| 2023-08-03T10:54:12 |
2025-04-01T04:35:15.793706
|
{
"authors": [
"Nura-21",
"kyejune"
],
"repo": "nuxt/nuxt",
"url": "https://github.com/nuxt/nuxt/issues/22464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2204148038
|
Cannot restart nuxt: Cannot read properties of undefined (reading 'toast')
Environment
Operating System: Linux
Node Version: v18.18.0
Nuxt Version: 3.11.1
CLI Version: 3.11.1
Nitro Version: 2.9.4
Package Manager: npm@10.2.3
Builder: -
User Config: devtools, modules, toast
Runtime Modules: @nuxtjs/toast@3.3.1
Build Modules: -
Reproduction
https://stackblitz.com/edit/github-7neyu1?file=nuxt.config.ts
Describe the bug
npm run dev and the following error is displayed.
ERROR Cannot read properties of undefined (reading 'toast') 11:31:02 AM
at nuxtToast (node_modules/@nuxtjs/toast/index.js:4:72)
at Module.installModule (node_modules/@nuxt/kit/dist/index.mjs:2499:101)
at async initNuxt (node_modules/nuxt/dist/index.mjs:4188:7)
at async loadNuxt (node_modules/nuxt/dist/index.mjs:4286:5)
at async loadNuxt (node_modules/@nuxt/kit/dist/index.mjs:2654:19)
at async Object.run (node_modules/nuxi/dist/chunks/prepare.mjs:68:18)
at async runCommand$1 (node_modules/nuxi/dist/shared/nuxi.9edf0930.mjs:1678:16)
at async runCommand$1 (node_modules/nuxi/dist/shared/nuxi.9edf0930.mjs:1669:11)
at async runMain$1 (node_modules/nuxi/dist/shared/nuxi.9edf0930.mjs:1807:7)
Additional context
No response
Logs
No response
@nuxtjs/toast doesn't support Nuxt 3; it has been moved to https://github.com/nuxt-community/legacy-modules/tree/master/packages/toast
Thanks.
|
gharchive/issue
| 2024-03-24T02:32:56 |
2025-04-01T04:35:15.798880
|
{
"authors": [
"Zihan-Hu",
"jemdiggity"
],
"repo": "nuxt/nuxt",
"url": "https://github.com/nuxt/nuxt/issues/26463",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2474990902
|
Passing additional logic to the expand function
Description
Like @update:sort, it would be useful to have @update:expand to be able to perform some logic like fetching additional data or something else.
ref: https://github.com/nuxt/ui/pull/803
Additional context
No response
Also, it would be very useful to be able to disable expand for selected rows, i.e. to pass a function that decides whether or not to display expand for a given row.
|
gharchive/issue
| 2024-08-20T08:06:50 |
2025-04-01T04:35:15.800889
|
{
"authors": [
"husayt",
"s1gr1d"
],
"repo": "nuxt/ui",
"url": "https://github.com/nuxt/ui/issues/2062",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977296450
|
Popover doesn't fully respect the mode attribute
Environment
Operating System: Linux
Node Version: v18.0.0
Nuxt Version: 3.7.4
CLI Version: 3.9.0
Nitro Version: 2.6.3
Package Manager: npm@8.6.0
Builder: -
User Config: devtools, modules, pinia, imports, image, colorMode, i18n, runtimeConfig, routeRules, app, typescript
Runtime Modules: @pinia/nuxt@0.4.11, @nuxtjs/i18n@8.0.0-rc.5, @nuxt/ui@2.10.0, @nuxt/image@1.0.0-rc.3
Build Modules: -
Version
2.10.0
Reproduction
https://stackblitz.com/edit/nuxt-ui-cjmvrf?file=app.vue
Description
Issue:
While creating some drag-and-drop elements, I noticed that the UPopover component doesn't truly respect the mode property. I set the mode property to hover and added an open-delay of 5000 ms, so I expected it not to show the popover when, for example, I click the button before the 5000 ms have passed, but it didn't respect the mode property, and that results in the loss of any other mouse events such as dragging.
Source
I traced the issue, and it's an issue with Headless UI, where they chose to prevent the default click event (so in the case of mode="hover" it would prevent dragging or double-clicking events ...)
function handleClick(event: MouseEvent) {
if (props.disabled) return
if (isWithinPanel.value) {
api.closePopover()
dom(api.button)?.focus() // Re-focus the original opening Button
} else {
// These two are the issue
event.preventDefault()
event.stopPropagation()
if (api.popoverState.value === PopoverStates.Closed) closeOthers?.(api.buttonId.value!)
api.togglePopover()
dom(api.button)?.focus()
}
}
and Nuxt UI doesn't handle the click; it only handles the mouseenter and mouseleave events
My Proposition:
I'm not sure if changing core Headless UI functionality is an option, but if it is and you're open to the change, I can create a pull request that makes it respond only to hover/focus in hover mode, or at the very least just allow events to propagate, since Popover is a wrapper component. If not, then thank you for creating an amazing lib; I'm addicted.
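A rough sketch of what I mean (hypothetical; a mode flag would have to be threaded down from Nuxt UI into the Headless UI handler quoted above):

function handleClick(event) {
  if (props.disabled) return
  // hypothetical: when the popover is hover-driven, leave click/drag events alone
  if (props.mode === 'hover') return
  event.preventDefault()
  event.stopPropagation()
  api.togglePopover()
}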
Additional context
No response
Logs
No response
I just forgot to mention that even if you have the open property set to true, it still prevents events, so I can't wrap anything with more functionality in the Popover; I would have to create a custom one.
closed due to inactivity
|
gharchive/issue
| 2023-11-04T11:45:15 |
2025-04-01T04:35:15.808039
|
{
"authors": [
"InerkyJad"
],
"repo": "nuxt/ui",
"url": "https://github.com/nuxt/ui/issues/918",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
875087545
|
2D array operations trigger NotImplementedError: Legate needs support for more than 3 dimensions
Problem
When doing a division that requires a 1D denominator to be broadcasted to a 2D array, and when the shape is larger than a certain size, an exception is raised
Traceback (most recent call last):
File "<removed>/lib/python3.8/site-packages/legion_top.py", line 394, in legion_python_main
run_path(args[start], run_name='__main__')
File "<removed>/lib/python3.8/site-packages/legion_top.py", line 193, in run_path
exec(code, module.__dict__, module.__dict__)
File "./test1.py", line 14, in <module>
c = (a[:, 1:] - a[:, :-1]) / (b[1:] - b[:-1])
File "<removed>/lib/python3.8/site-packages/legate/numpy/array.py", line 776, in __truediv__
return self.internal_truediv(
File "<removed>/lib/python3.8/site-packages/legate/numpy/array.py", line 519, in internal_truediv
return self.perform_binary_op(
File "<removed>/lib/python3.8/site-packages/legate/numpy/array.py", line 2054, in perform_binary_op
out._thunk.binary_op(
File "<removed>/lib/python3.8/site-packages/legate/numpy/deferred.py", line 4876, in binary_op
) = self.runtime.compute_broadcast_transform(
File "<removed>/lib/python3.8/site-packages/legate/numpy/runtime.py", line 2605, in compute_broadcast_transform
raise NotImplementedError(
NotImplementedError: Legate needs support for more than 3 dimensions
To reproduce
step 1: create a test Python script, say, test.py. Its content is:
from legate import numpy
a = numpy.random.random((400, 2001))
b = numpy.random.random(2001)
c = (a[:, 1:] - a[:, :-1]) / (b[1:] - b[:-1])
step 2: run test.py with legate. I'm using this one for my test:
$ legate --cpus 1 ./test.py
Other notes
Using different shapes/sizes to generate a and b seems to also affect the errors. Smaller shapes/sizes do not give any error. For example, (4, 21) and (21,) for a and b respectively do not raise any errors.
Also, using the same shape but with different runtime flags may or may not return errors. For example, using (40, 201) for a and (201,) for b:
Using legate --cpus 0 --omps 1 --ompthreads 1 ./test.py works fine. No error.
Using legate --cpus 0 --omps 1 --ompthreads 1 -ll:okindhack ./test.py returns the error of NotImplementedError: Legate needs support for more than 3 dimensions.
My workaround
If I explicitly do the broadcasting before the division, everything is fine.
Just to be clear, I know the error says this is something not implemented. But what confuses me is: this calculation only involves 2D array operations and broadcasting from 1D to 2D, so why does the error message say something about more than 3 dimensions?
@piyueh This should be fixed by #18. Can you pull and try again? You will only need to reinstall Legate NumPy.
@magnatelee thanks for the fix. It's working now!
|
gharchive/issue
| 2021-05-04T04:16:31 |
2025-04-01T04:35:15.815324
|
{
"authors": [
"magnatelee",
"piyueh"
],
"repo": "nv-legate/legate.numpy",
"url": "https://github.com/nv-legate/legate.numpy/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1626001830
|
[FEA]: Avoid running stubgen in CI #766
Is this a new feature, an improvement, or a change to existing functionality?
Change
How would you describe the priority of this feature request
High
Please provide a clear description of problem this feature solves
Running stubgen in CI requires the Nvidia drivers to be installed in the CI image. This unfortunately locks us to a specific driver version, and ops would like to accelerate adopting new versions.
This is the MRC equiv of https://github.com/nv-morpheus/Morpheus/issues/766
Describe your ideal solution
See if the cuda-toolkit driver stubs work the same as cudatoolkit
If we can load the stubs but not run anything, that would work for us. The current stubs throw an error on load.
This is a long shot but requires the least amount of work
Generate the stubs in a pre-commit hook
Describe any alternatives you have considered
No response
Additional context
No response
Code of Conduct
[X] I agree to follow MRC's Code of Conduct
[X] I have searched the open feature requests and have found no duplicates for this feature request
@mdemoret-nv Should we close this issue now that we've moved stubgen to the test phase of ci?
|
gharchive/issue
| 2023-03-15T17:55:37 |
2025-04-01T04:35:15.819831
|
{
"authors": [
"cwharris",
"dagardner-nv"
],
"repo": "nv-morpheus/MRC",
"url": "https://github.com/nv-morpheus/MRC/issues/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
600510612
|
Convert to dune, clean up warnings
Hi there,
I converted the project to dune and cleaned up some warnings.
Thanks for taking care of this -- looks great!
|
gharchive/pull-request
| 2020-04-15T18:43:37 |
2025-04-01T04:35:15.820842
|
{
"authors": [
"OhadRau",
"zbroyar"
],
"repo": "nv-vn/TelegraML",
"url": "https://github.com/nv-vn/TelegraML/pull/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2083917119
|
mati04 - SP2 dotaz
Hello,
while working on SP2 I ran into a problem with sending notifications. According to the assignment, my application was supposed to have a feature where the user can set a notification for a certain date and time. But I then found out that time-triggered notifications are not fired by the browser itself but by a server.
Could I ask you for a consultation on this topic? Or may I ask whether I could leave this feature out and implement some other feature in its place instead?
My defense is on 19 January starting at 9:15.
Of course this needs a backend, but you don't have to implement one yourself, because there are already plenty of them on the internet, i.e. it's enough to find one and use it. Working with push notifications will then be similar to integrating the application with an external REST API. I recommend e.g. the OneSignal service, where tutorials are also available.
|
gharchive/issue
| 2024-01-16T12:59:28 |
2025-04-01T04:35:15.964835
|
{
"authors": [
"IvanMatys",
"nvbach91"
],
"repo": "nvbach91/4IZ268-2023-2024-ZS",
"url": "https://github.com/nvbach91/4IZ268-2023-2024-ZS/issues/248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1451118871
|
🛑 PufferAI HPC Portal is down
In ccf7a01, PufferAI HPC Portal (https://hpc.lab.novaglobal.com.sg) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PufferAI HPC Portal is back up in efc906d.
|
gharchive/issue
| 2022-11-16T08:26:56 |
2025-04-01T04:35:15.967152
|
{
"authors": [
"nvgsg"
],
"repo": "nvgsg/lab-upptime",
"url": "https://github.com/nvgsg/lab-upptime/issues/3108",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1351296739
|
🛑 Cluster Monitoring UI is down
In 24e9c7d, Cluster Monitoring UI (https://monitor.lab.novaglobal.com.sg) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Cluster Monitoring UI is back up in b7e8e80.
|
gharchive/issue
| 2022-08-25T18:37:24 |
2025-04-01T04:35:15.969399
|
{
"authors": [
"nvgsg"
],
"repo": "nvgsg/lab-upptime",
"url": "https://github.com/nvgsg/lab-upptime/issues/998",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2762405664
|
feat: support installing src.rock and .rock packages
Closes #159.
TODO:
[ ] build::build_packed_rock function
|
gharchive/pull-request
| 2024-12-29T21:06:57 |
2025-04-01T04:35:15.970504
|
{
"authors": [
"mrcjkb"
],
"repo": "nvim-neorocks/rocks",
"url": "https://github.com/nvim-neorocks/rocks/pull/291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1900485466
|
integrate with startup plugin or statusline plugin
Does this feature exist in Emacs orgmode core?
No
Orgmode link
No response
Feature value
No response
Additional context
If it could display whether I have tasks planned for today when I launch Vim, that would be good.
You can use the API for this. This would list the headline lines where the deadline is today when you open up Neovim:
vim.api.nvim_create_autocmd('VimEnter', {
pattern = '*',
callback = function()
local api = require('orgmode.api')
local files = api.load()
local result = {}
for _, file in ipairs(files) do
for _, headline in ipairs(file.headlines) do
if headline.deadline and headline.deadline:is_today() then
table.insert(result, headline.line)
end
end
end
vim.print(result)
end
})
You can play with it to get the results you want.
|
gharchive/issue
| 2023-09-18T09:03:14 |
2025-04-01T04:35:15.972572
|
{
"authors": [
"kristijanhusak",
"yimingwangdell"
],
"repo": "nvim-orgmode/orgmode",
"url": "https://github.com/nvim-orgmode/orgmode/issues/611",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1828083199
|
Select inner/outer line?
Is your feature request related to a problem? Please describe.
I'm looking through everything and I'm trying to figure out how I could add vil and val for visually selecting a line, similar to:
https://github.com/kana/vim-textobj-line
Describe the solution you'd like
I'd love to use this repo, as it seems to do 99% of everything else; maybe it can already do this, but I can't find it.
Describe alternatives you've considered
I used to use the repo above on regular vim, but now I'm doing it the neovim way and this plugin seems to be quite popular.
Treesitter is not the right tool for such an action. Just use the repo mentioned, or, I believe, there is a Neovim various-textobjects plugin that does this.
ok thanks
@alexsmartens I used this:
https://github.com/chrisgrieser/nvim-various-textobjs
you can see my mapping for "lines" here:
{
-- not actually treesitter but so similar it goes here
"chrisgrieser/nvim-various-textobjs",
config = function()
require("various-textobjs").setup({
useDefaultKeymaps = false,
})
end,
keys = {
{
"gG",
function() require("various-textobjs").entireBuffer() end,
mode = { "x", "o" },
desc = "entire buffer",
},
{
"il",
function() require("various-textobjs").lineCharacterwise(true) end,
mode = { "x", "o" },
desc = "inner line",
},
{
"al",
function() require("various-textobjs").lineCharacterwise(false) end,
mode = { "x", "o" },
desc = "a line",
},
{
"iS",
function() require("various-textobjs").subword("inner") end,
mode = { "x", "o" },
desc = "inner subword",
},
{
"aS",
function() require("various-textobjs").subword("outer") end,
mode = { "x", "o" },
desc = "a subword",
},
{
"ik",
function() require("various-textobjs").key("inner") end,
mode = { "x", "o" },
desc = "inner KVP key",
},
{
"ak",
function() require("various-textobjs").key("outer") end,
mode = { "x", "o" },
desc = "a KVP key",
},
{
"iv",
function() require("various-textobjs").value("inner") end,
mode = { "x", "o" },
desc = "inner KVP value",
},
{
"av",
function() require("various-textobjs").value("outer") end,
mode = { "x", "o" },
desc = "a KVP value",
},
{
"ix",
function() require("various-textobjs").htmlAttribute("outer") end,
mode = { "x", "o" },
desc = "inner HTML attribute", -- faking because htmlAttribute("inner") is same as vi"
},
{
"ax",
function() require("various-textobjs").htmlAttribute("outer") end,
mode = { "x", "o" },
desc = "a HTML attribute",
},
},
},
|
gharchive/issue
| 2023-07-30T20:48:43 |
2025-04-01T04:35:15.982551
|
{
"authors": [
"9mm",
"kiyoon"
],
"repo": "nvim-treesitter/nvim-treesitter-textobjects",
"url": "https://github.com/nvim-treesitter/nvim-treesitter-textobjects/issues/480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1842548309
|
feat: support environment specification
Support environment variable spec like how null-ls does
Should simply pass a dict to env in "uv.spawn" call in format.
Maybe implemented like so?
ft('ft')
:fmt('fmt')
:env({ ENVVAR = 'VAL' })
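Under the hood, a hedged sketch of wiring it through (note that libuv expects env as a list of "KEY=VALUE" strings, so a user-supplied dict would need converting):

-- hypothetical glue inside the spawn helper
local env_list = {}
for k, v in pairs(opts.env or {}) do
  table.insert(env_list, k .. "=" .. v)
end
uv.spawn(cmd, { args = args, env = env_list }, on_exit)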
Yup, maybe like that.
|
gharchive/issue
| 2023-08-09T06:03:10 |
2025-04-01T04:35:15.994601
|
{
"authors": [
"barrett-ruth",
"glepnir"
],
"repo": "nvimdev/guard.nvim",
"url": "https://github.com/nvimdev/guard.nvim/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
727736263
|
Updating to 1.2.9 triggers "Fatal error while calling CodeNarc"
My Groovy code is beyond awful, but luckily for me, you wrote this extension! Version 1.2.8 works fine (despite the ERROR message):
[1022/152746.931:ERROR:registration_protocol_win.cc(103)] CreateFile: The system cannot find the file specified. (0x2)
Start analyzing file:///c%3A/Yoyodyne/Skunkr/src/com/yoyodyne/Context.groovy
GroovyLint: Started CodeNarc Server
Completed analyzing file:///c%3A/Yoyodyne/Skunkr/src/com/yoyodyne/Context.groovy in 13606 ms
But unfortunately if I update to 1.2.9, then it fails:
[1022/152947.454:ERROR:registration_protocol_win.cc(103)] CreateFile: The system cannot find the file specified. (0x2)
Start analyzing file:///c%3A/Yoyodyne/Skunkr/src/com/yoyodyne/Context.groovy
Unable to run java command: {"status":1,"stdout":"","stderr":"","childJavaProcess":{"_events":{},"_eventsCount":2,"_closesNeeded":1,"_closesGot":1,"connected":false,"signalCode":null,"exitCode":1,"killed":false,"spawnfile":"java","_handle":null,"spawnargs":["java","-Xms256m","-Xmx2048m","-cp","\"c:\\Users\\kushc\\.vscode\\extensions\\nicolasvuillamy.vscode-groovy-lint-1.2.9\\server\\node_modules\\npm-groovy-lint\\lib\\java\\CodeNarcServer.jar;c:\\Users\\kushc\\.vscode\\extensions\\nicolasvuillamy.vscode-groovy-lint-1.2.9\\server\\node_modules\\npm-groovy-lint\\lib\\java\\*\"","com.nvuillam.CodeNarcServer","--server"],"pid":13716,"stdin":null,"stdout":null,"stderr":null,"stdio":[null,null,null]}}
GroovyLint: Error running CodeNarc:
===========================================================================
===========================================================================
npm-groovy-lint error: Fatal error while calling CodeNarc
Reason: unknown
undefined
If you still have an error, post an issue to get help: https://github.com/nvuillam/vscode-groovy-lint/issues
===========================================================================
===========================================================================
My VS Code version info:
Version: 1.50.1 (user setup)
Commit: d2e414d9e4239a252d1ab117bd7067f125afd80a
Date: 2020-10-13T15:06:15.712Z
Electron: 9.2.1
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Windows_NT x64 10.0.19041
Whoops, my mistake - I should have put my complaint over here.
|
gharchive/issue
| 2020-10-22T21:34:36 |
2025-04-01T04:35:16.008239
|
{
"authors": [
"chriskush"
],
"repo": "nvuillam/npm-groovy-lint",
"url": "https://github.com/nvuillam/npm-groovy-lint/issues/111",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
920516035
|
Skip lib sources when rebuilding opened files
Files that are opened in the editor are automatically rebuilt (to show diagnostics without the need to edit and save them), but this is not really relevant for library sources, especially considering that the compiler currently rebuilds them even if they are unchanged (the rebuild is not actually needed and may unnecessarily trigger file-watching tools). So this checks whether the file path contains .spago or bower_components and skips it.
To be clear, this is just skipping triggering a rebuild on dependency files on open, it will still be triggered if one edits and saves the file? That seems like a good compromise, of course people may occasionally make quick edits to dependencies inside .spago for example and that should still show diagnostics.
Absolutely, this branch only affects opened (activated) files; the next handler, onDidSaveDocument, is responsible for pushing diagnostics when the file is changed and saved.
|
gharchive/pull-request
| 2021-06-14T15:14:17 |
2025-04-01T04:35:16.039681
|
{
"authors": [
"nwolverson",
"wclr"
],
"repo": "nwolverson/purescript-language-server",
"url": "https://github.com/nwolverson/purescript-language-server/pull/140",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
803850070
|
BFG miner unhandled exception
Can't download BFG miner, this is what I get:
See the end of this message for details on invoking
just-in-time (JIT) debugging instead of this dialog box.
************** Exception Text **************
System.ArgumentNullException: Value cannot be null.
Parameter name: uriString
at System.Uri..ctor(String uriString)
at MultiMiner.UX.ViewModels.ApplicationViewModel.InstallBackendMinerLocally(MinerDescriptor miner) in c:\Users\nwool\Documents\Visual Studio 2017\Projects\MultiMiner\MultiMiner.UX\ViewModels\ApplicationViewModel.cs:line 2219
at MultiMiner.Win.Forms.MinerForm.ShowNotInstalledMinerWarning() in c:\Users\nwool\Documents\Visual Studio 2017\Projects\MultiMiner\MultiMiner.Win\Forms\MinerForm.cs:line 2580
at MultiMiner.Win.Forms.MinerForm.ScanHardwareLocally() in c:\Users\nwool\Documents\Visual Studio 2017\Projects\MultiMiner\MultiMiner.Win\Forms\MinerForm.cs:line 2875
at System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e)
at System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e)
at System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e)
at System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e)
at System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea)
at System.Windows.Forms.ToolStripDropDown.OnMouseUp(MouseEventArgs mea)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ToolStrip.WndProc(Message& m)
at System.Windows.Forms.ToolStripDropDown.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
************** Loaded Assemblies **************
mscorlib
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4300.0 built by: NET48REL1LAST_C
CodeBase: file:///C:/Windows/Microsoft.NET/Framework64/v4.0.30319/mscorlib.dll
----------------------------------------
MultiMiner.Win
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Win.exe
----------------------------------------
MultiMiner.UX
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.UX.DLL
----------------------------------------
System.Drawing
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll
----------------------------------------
System
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4300.0 built by: NET48REL1LAST_C
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll
----------------------------------------
MultiMiner.Engine
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Engine.DLL
----------------------------------------
MultiMiner.Utility
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/../AppData/Local/MultiMiner/MultiMiner.Utility.DLL
----------------------------------------
System.Windows.Forms
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4270.0 built by: NET48REL1LAST_C
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll
----------------------------------------
MultiMiner.CoinApi
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.CoinApi.DLL
----------------------------------------
MultiMiner.Xgminer.Api
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Xgminer.Api.DLL
----------------------------------------
MultiMiner.Xgminer
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Xgminer.DLL
----------------------------------------
MultiMiner.MobileMiner
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.MobileMiner.DLL
----------------------------------------
MultiMiner.Discovery
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Discovery.DLL
----------------------------------------
System.Xml
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
System.Management
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Management/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Management.dll
----------------------------------------
System.Core
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4300.0 built by: NET48REL1LAST_C
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Core/v4.0_4.0.0.0__b77a5c561934e089/System.Core.dll
----------------------------------------
System.Configuration
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4190.0 built by: NET48REL1LAST_B
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Configuration/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Configuration.dll
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
Accessibility
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/Accessibility/v4.0_4.0.0.0__b03f5f7f11d50a3a/Accessibility.dll
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
MultiMiner.Remoting
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Remoting.DLL
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
MultiMiner.CoinWarz
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.CoinWarz.DLL
----------------------------------------
MultiMiner.WhatToMine
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.WhatToMine.DLL
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
Newtonsoft.Json
Assembly Version: 6.0.0.0
Win32 Version: 6.0.8.18111
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/Newtonsoft.Json.DLL
----------------------------------------
System.Numerics
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Numerics/v4.0_4.0.0.0__b77a5c561934e089/System.Numerics.dll
----------------------------------------
System.Runtime.Serialization
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4250.0 built by: NET48REL1LAST_C
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Runtime.Serialization/v4.0_4.0.0.0__b77a5c561934e089/System.Runtime.Serialization.dll
----------------------------------------
System.Xml.Linq
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml.Linq/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.Linq.dll
----------------------------------------
System.Data
Assembly Version: 4.0.0.0
Win32 Version: 4.8.4270.0 built by: NET48REL1LAST_C
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_64/System.Data/v4.0_4.0.0.0__b77a5c561934e089/System.Data.dll
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
MultiMiner.Services
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Services.DLL
----------------------------------------
MultiMiner.Blockchain
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Blockchain.DLL
----------------------------------------
MultiMiner.ExchangeApi
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/../AppData/Local/MultiMiner/MultiMiner.ExchangeApi.DLL
----------------------------------------
Microsoft.GeneratedCode
Assembly Version: 1.0.0.0
Win32 Version: 4.8.4084.0 built by: NET48REL1
CodeBase: file:///C:/WINDOWS/Microsoft.Net/assembly/GAC_MSIL/System.Xml/v4.0_4.0.0.0__b77a5c561934e089/System.Xml.dll
----------------------------------------
MultiMiner.Xgminer.Discovery
Assembly Version: 4.3.1.382
Win32 Version: 4.3.1.382
CodeBase: file:///C:/Users/.../AppData/Local/MultiMiner/MultiMiner.Xgminer.Discovery.DLL
----------------------------------------
************** JIT Debugging **************
To enable just-in-time (JIT) debugging, the .config file for this
application or computer (machine.config) must have the
jitDebugging value set in the system.windows.forms section.
The application must also be compiled with debugging
enabled.
For example:
<configuration>
<system.windows.forms jitDebugging="true" />
</configuration>
When JIT debugging is enabled, any unhandled exception
will be sent to the JIT debugger registered on the computer
rather than be handled by this dialog box.
https://github.com/nwoolls/MultiMiner/issues/404#issuecomment-769933167
This workaround does work. I just did it and MultiMiner is working now. Also, I went and grabbed the BFGminer.org zip. I'm not sure if it was different in any way; you may just want to save some time and go with what worked. Make sure to use the x86 (32-bit) zip, even though the directions say 64-bit. I had to do it twice; the second time was the x86.
I get an error when Multiminer tries to download and install BFGminer.
#404 (comment)
Squishy420's quote is where you want to look; he has the edit for the 64-bit. Good luck!!
This finally works. I tinkered around for hours until using the 32-bit just worked; mining now.
|
gharchive/issue
| 2021-02-08T19:18:10 |
2025-04-01T04:35:16.047058
|
{
"authors": [
"CryptikWizard",
"EducatedMF",
"PLARulez",
"Zorro962"
],
"repo": "nwoolls/MultiMiner",
"url": "https://github.com/nwoolls/MultiMiner/issues/409",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1128347919
|
A lot of Maya crashes with nxt maya editor 3.11.1 Graph v1.17 API v0.13.0 (Python 2.7.11)
Hello guy,
I am the Lead Rigging TD at Pixomondo Germany, and we have started to use NXT in our department. During use we have noticed a large number of Maya crashes when we have the nxt editor open. It is a little bit annoying during production at the moment. We would really appreciate your help, please.
At the moment we are using Maya 2018.7.
With best Regards
Johannes Wolz
Johannes,
Can you tell us in what circumstances these crashes are happening? If they're random then there's nothing we can do to hunt them down. If we can make them happen we might be able to fix them.
Closing issue for inactivity
|
gharchive/issue
| 2022-02-09T10:30:09 |
2025-04-01T04:35:16.054568
|
{
"authors": [
"ImLucasBrown",
"MichaelAldrich",
"wolzjohannes"
],
"repo": "nxt-dev/nxt_editor",
"url": "https://github.com/nxt-dev/nxt_editor/issues/240",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1668925068
|
🛑 Harmony Bot Website is down
In aa077cd, Harmony Bot Website ($HARMONY_WEB) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Harmony Bot Website is back up in 0d6f96c.
|
gharchive/issue
| 2023-04-14T20:40:23 |
2025-04-01T04:35:16.056647
|
{
"authors": [
"nxvvvv"
],
"repo": "nxvvvv/uptime",
"url": "https://github.com/nxvvvv/uptime/issues/16765",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1725027874
|
🛑 Harmony Bot Website is down
In adea010, Harmony Bot Website ($HARMONY_WEB) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Harmony Bot Website is back up in 6de0cf0.
|
gharchive/issue
| 2023-05-25T03:41:05 |
2025-04-01T04:35:16.058794
|
{
"authors": [
"nxvvvv"
],
"repo": "nxvvvv/uptime",
"url": "https://github.com/nxvvvv/uptime/issues/17806",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2338682721
|
File names wrong when doing clean installation
Hello,
I performed a clean installation of Nylo, then installed this slate. My Nylo installation was with v5.28.0.
git clone https://github.com/nylo-core/nylo.git testapp
cd testapp
flutter pub get
open -a Simulator --args -CurrentDeviceUDID $(xcrun simctl list devices | grep 'iPhone 15' | awk '{print $NF}' | head -n1)
flutter run
This resulted in the standard boilerplate Nylo demo app showing up in the simulator and working as expected.
Next I performed a clean installation of laravel_auth_slate (v1.0.25 at this time) via:
dart pub add laravel_auth_slate
metro publish:slate laravel_auth_slate
I then performed the two manual corrections noted inside of config/events.dart and routes/router.dart. I also re-ran flutter pub get to make sure I had all dependencies pulled down.
At this point, I reran flutter run. My simulator was still open. I get the following error output:
> flutter run
Launching lib/main.dart on iPhone 15 Pro Max in debug mode...
Running Xcode build...
Xcode build done. 2.8s
Failed to build iOS app
Error (Xcode): lib/config/decoders.dart:3:8: Error: Error when reading
'lib/app/controllers/forgot_password_controller.dart': No such file or
directory
Could not build the application for the simulator.
Error launching application on iPhone 15 Pro Max.
This message is correct, the file named lib/app/controllers/forgot_password_controller.dart is missing. Instead, the file lib/app/controllers/forgot_password_controller_controller.dart exists.
Notice the extra _controller in the filename.
Running git status shows me that several files have suspicious names with duplicated suffixes. Specifically in the controller and page sub-folders.
Manually renaming the files with duplicated suffixes seems to be the correct course of action. It at least changed the error message to one I will ticket out as a separate issue.
Thanks for reporting this @lhilton, I'll get this resolved today.
Hi @lhilton,
This is now resolved in laravel_auth_slate 1.1.1.
Now it won't add duplicated suffixes, can you try this in a fresh Nylo project to confirm?
I will close the ticket once it's definitely resolved 👍
Closing this issue, resolved in 1.1.1.
|
gharchive/issue
| 2024-06-06T16:28:05 |
2025-04-01T04:35:16.080271
|
{
"authors": [
"agordn52",
"lhilton"
],
"repo": "nylo-core/laravel_auth_slate",
"url": "https://github.com/nylo-core/laravel_auth_slate/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
181683367
|
Trouble installing audiogram on CentOS 7
Hi,
I'm having trouble installing audiogram on CentOS 7 after successfully installing all the dependencies.
[krichamo@ln-web-zrh-01 audiogram]$ npm install
> waveform@3.0.1 install /home/krichamo/audiogram/node_modules/waveform
> node-gyp rebuild
make: Entering directory `/home/krichamo/audiogram/node_modules/waveform/build'
CC(target) Release/obj.target/waveform/waveform.o
../waveform.c: In function ‘main’:
../waveform.c:227:5: warning: implicit declaration of function ‘groove_init’ [-Wimplicit-function-declaration]
groove_init();
^
../waveform.c:228:12: error: ‘groove_finish’ undeclared (first use in this function)
atexit(groove_finish);
^
../waveform.c:228:12: note: each undeclared identifier is reported only once for each function it appears in
../waveform.c:230:12: warning: passing argument 1 of ‘groove_file_open’ from incompatible pointer type [enabled by default]
struct GrooveFile *file = groove_file_open(input_filename);
^
In file included from ../waveform.c:4:0:
/usr/local/include/groove/groove.h:391:19: note: expected ‘struct GrooveFile *’ but argument is of type ‘char *’
GROOVE_EXPORT int groove_file_open(struct GrooveFile *file,
^
../waveform.c:230:12: error: too few arguments to function ‘groove_file_open’
struct GrooveFile *file = groove_file_open(input_filename);
^
In file included from ../waveform.c:4:0:
/usr/local/include/groove/groove.h:391:19: note: declared here
GROOVE_EXPORT int groove_file_open(struct GrooveFile *file,
^
../waveform.c:235:12: error: too few arguments to function ‘groove_playlist_create’
struct GroovePlaylist *playlist = groove_playlist_create();
^
In file included from ../waveform.c:4:0:
/usr/local/include/groove/groove.h:429:38: note: declared here
GROOVE_EXPORT struct GroovePlaylist *groove_playlist_create(struct Groove *);
^
../waveform.c:236:45: error: ‘GROOVE_ANY_SINK_FULL’ undeclared (first use in this function)
groove_playlist_set_fill_mode(playlist, GROOVE_ANY_SINK_FULL);
^
../waveform.c:238:12: error: too few arguments to function ‘groove_sink_create’
struct GrooveSink *sink = groove_sink_create();
^
In file included from ../waveform.c:4:0:
/usr/local/include/groove/groove.h:491:34: note: declared here
GROOVE_EXPORT struct GrooveSink *groove_sink_create(struct Groove *);
^
../waveform.c:239:9: error: ‘struct GrooveSink’ has no member named ‘audio_format’
sink->audio_format.sample_rate = 44100;
^
../waveform.c:240:9: error: ‘struct GrooveSink’ has no member named ‘audio_format’
sink->audio_format.channel_layout = GROOVE_CH_LAYOUT_STEREO;
^
../waveform.c:240:41: error: ‘GROOVE_CH_LAYOUT_STEREO’ undeclared (first use in this function)
sink->audio_format.channel_layout = GROOVE_CH_LAYOUT_STEREO;
^
../waveform.c:241:9: error: ‘struct GrooveSink’ has no member named ‘audio_format’
sink->audio_format.sample_fmt = GROOVE_SAMPLE_FMT_S16;
^
../waveform.c:241:37: error: ‘GROOVE_SAMPLE_FMT_S16’ undeclared (first use in this function)
sink->audio_format.sample_fmt = GROOVE_SAMPLE_FMT_S16;
^
../waveform.c:279:9: error: too few arguments to function ‘groove_encoder_create’
encoder = groove_encoder_create();
^
In file included from ../waveform.c:5:0:
/usr/local/include/groove/encoder.h:70:37: note: declared here
GROOVE_EXPORT struct GrooveEncoder *groove_encoder_create(struct Groove *);
^
../waveform.c:295:37: error: ‘struct GrooveAudioFormat’ has no member named ‘channel_layout’
encoder->target_audio_format.channel_layout = GROOVE_CH_LAYOUT_STEREO;
^
../waveform.c:296:37: error: ‘struct GrooveAudioFormat’ has no member named ‘sample_fmt’
encoder->target_audio_format.sample_fmt = GROOVE_SAMPLE_FMT_S16;
^
make: *** [Release/obj.target/waveform/waveform.o] Error 1
make: Leaving directory `/home/krichamo/audiogram/node_modules/waveform/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
gyp ERR! stack at emitTwo (events.js:87:13)
gyp ERR! stack at ChildProcess.emit (events.js:172:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Linux 3.10.0-229.14.1.el7.x86_64
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/krichamo/audiogram/node_modules/waveform
gyp ERR! node -v v4.6.0
gyp ERR! node-gyp -v v3.4.0
gyp ERR! not ok
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.0.14: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm ERR! Linux 3.10.0-229.14.1.el7.x86_64
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install"
npm ERR! node v4.6.0
npm ERR! npm v3.10.8
npm ERR! code ELIFECYCLE
npm ERR! waveform@3.0.1 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the waveform@3.0.1 install script 'node-gyp rebuild'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the waveform package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs waveform
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls waveform
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/krichamo/audiogram/npm-debug.log
npm-debug.txt
Any help would be greatly appreciated.
Thanks,
Mounir
Based on that error, it looks like the version of libgroove installed from the package manager for that OS is broken or won't work on CentOS for some other reason. I see two options (I'd recommend #1):
Switch to the alpha branch and use that instead. It no longer requires libgroove.
Follow the instructions to build libgroove from source.
Thanks for the quick answer. I actually built libgroove from source since I couldn't find any packages for CentOS. I will try the alpha branch.
The alpha branch works perfectly on CentOS 7. I can share the installation steps if somebody is interested. Thanks again for your help.
Hi moundog, could you please share the installation steps for CentOS 7? Many thanks....
Hi wesrcoastradio,
Here are the installation steps for CentOS 7.
As root:
Update your system
yum update
Install nodejs
curl -sL https://rpm.nodesource.com/setup_4.x | bash -
yum install nodejs
Update npm
npm install -g npm
Install node-canvas dependencies
yum install cairo cairo-devel cairomm-devel libjpeg-turbo-devel pango pango-devel pangomm pangomm-devel giflib-devel
Install ffmpeg
rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
yum update
yum install ffmpeg ffmpeg-devel
cd /usr/include/
ln -s ffmpeg/* .
Install node-gyp
npm install -g node-gyp
As a normal user:
Install audiogram
git clone https://github.com/nypublicradio/audiogram.git
cd audiogram
npm install
Note: You don't need the alpha branch anymore as the master branch no longer requires libgroove.
|
gharchive/issue
| 2016-10-07T14:28:30 |
2025-04-01T04:35:16.099642
|
{
"authors": [
"moundog",
"veltman",
"wesrcoastradio"
],
"repo": "nypublicradio/audiogram",
"url": "https://github.com/nypublicradio/audiogram/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
375562187
|
mdoc to tlt converter and python run scripts
Hi Alex,
I am trying to set up standalone protomo and would like to know how you generate a .tlt file from an .mdoc file. Could you share any script you might have to do so.
I would also like to know if you could share python run scripts such as protomo2aligner.py used to run protomo alignment.
Best wishes,
Sagar
Hi Sagar,
Everything you ask and more is explained in the Installation instructions and the tilt-series upload and alignment walkthrough here:
https://github.com/nysbc/appion-protomo#appion-protomo
Best,
-Alex
Hi Alex,
I am not working within the Appion environment!
It is a standalone installation of protomo (not appion-protomo), which requires a .tlt file.
I understand that this is out of your scope as an appion developer, but I realised that you might have these routines already written and would be willing to share!
Best,
Sagar
Hi Sagar,
I don't have any Protomo scripts that work outside of Appion.
You can find protomo2aligner.py in myami-trunk/appion/bin/ here:
https://drive.google.com/open?id=1AJ2sLSgUAk4n-b2S22Ip896P6OL-3J_r
But it has hundreds of parameters and options. It is 100% intended to be used from inside Appion and not supported otherwise.
Appion-Protomo takes almost no effort to install... may I ask why you would want to use native Protomo? Also, how did you learn to use Protomo?
Best,
-Alex
Hi Alex,
Well, we cannot install appion on our cluster! It doesn't support docker installation. We also wanted to see if it could be incorporated with the in-house pipeline at the Baumeister dept. I am still learning, mostly from the user guide written by Hanspeter.
Do you have any template .param file for K2 tomograms?
I also seem to have trouble with .mrc frames! invalid image dimension ERROR.
Did you encounter anything similar during the development of appion protomo.
Best,
Sagar
Hi Sagar,
Oh I see, this makes sense. I've been surprised how many IT departments completely lock down every computer in an institution/university, not even allowing workstation-owners to have access to their own root. This is a lot more common than I ever imagined.
We are currently working on an Amazon EC2 instance of Docker Appion-Protomo. Please contact me directly if you want to try it: anoble [at] nysbc.org
You are free to try learning Protomo on your own, of course. You should know, however, that it took me about 2 years as a graduate student to get comfortable with using Protomo (and that was with Hanspeter available to talk to in person) and 4-5 years to have a complete and practical understanding of Protomo. I'm still learning little things about Protomo and the algorithm. I've been using Protomo and developing Appion-Protomo a few times per week since 2013. Appion-Protomo critically simplifies the entire process, and the Readme here on github encapsulates a lot of this knowledge.
To answer your questions:
-My .param files are all generated automatically through the Appion website.
-I have never seen an invalid dimension error. Check that all of your tilt images have the same dimensions. You should also try running all your images through proc3d without any options to normalize and standardize the headers: proc3d [input.mrc] [output.mrc]
Best,
-Alex
Correction: proc2d [input.mrc] [output.mrc]
|
gharchive/issue
| 2018-10-30T15:51:35 |
2025-04-01T04:35:16.110186
|
{
"authors": [
"alexjnoble",
"sagarbiophysics"
],
"repo": "nysbc/appion-protomo",
"url": "https://github.com/nysbc/appion-protomo/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
728468626
|
How to add AlpineJS? (or replace Vue)
Question
I tried to load alpineJS via the CDN and this did not seem to work.
Is there a recommended way to add alpineJS? or replace vue with it in this boilerplate?
You can just add it in like any other script.
|
gharchive/issue
| 2020-10-23T19:25:49 |
2025-04-01T04:35:16.116754
|
{
"authors": [
"hugo-costa",
"khalwat"
],
"repo": "nystudio107/craft",
"url": "https://github.com/nystudio107/craft/issues/43",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
749466657
|
[BUG] It's not convenient to locally debug/test pkg/webhook logics
As there is no way to run OAM Kubernetes Runtime locally, it's not convenient to debug/test pkg/webhook logic.
I have to deploy the runtime with webhook enabled in a Kubernetes cluster, like a black box (#651), and run the test.
I don't know why you're talking about OAM runtime here? Do you mean vela-core?
As the repo is moving, I think I'd better avoid creating issues/PRs in runtime repo.
@zzxwill even if we merge that repo into vela, we will not talk about oam-runtime here; we will say they're all vela-core
|
gharchive/issue
| 2020-11-24T07:57:39 |
2025-04-01T04:35:16.220710
|
{
"authors": [
"wonderflow",
"zzxwill"
],
"repo": "oam-dev/kubevela",
"url": "https://github.com/oam-dev/kubevela/issues/652",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
730232070
|
change term "component" to "service" in commands
related to #412
This PR changed term "component" in CLI (usage and outputs) to "service".
Signed-off-by: roy wang seiwy2010@gmail.com
LGTM, can you please also refresh the doc
Got it. I will refresh the doc.
|
gharchive/pull-request
| 2020-10-27T08:31:58 |
2025-04-01T04:35:16.222195
|
{
"authors": [
"captainroy-hy"
],
"repo": "oam-dev/kubevela",
"url": "https://github.com/oam-dev/kubevela/pull/449",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1680884104
|
[GLUTEN--1491]Fix failed velox_windows_rank_test
Remove the changed RowNumber.cpp to sparksql
@zhejiangxiaomai Please help to review. Thanks.
|
gharchive/pull-request
| 2023-04-24T09:55:15 |
2025-04-01T04:35:16.223963
|
{
"authors": [
"JkSelf"
],
"repo": "oap-project/velox",
"url": "https://github.com/oap-project/velox/pull/218",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
214737141
|
Saving Trained NBayes
Is there a particular way you recommend saving a nbayes object that has been trained? I have a large training set, and I'd like to avoid retraining the nbayes object every time I need to use it.
There's a #dump method that allows trained models to be stored in YAML:
nbayes.dump("model.yml")
Then you can load the model using the from class method:
nbayes = NBayes::Base.from("model.yml")
That's it!
Thanks @oasic!
|
gharchive/issue
| 2017-03-16T15:10:51 |
2025-04-01T04:35:16.225497
|
{
"authors": [
"michaelcjoseph",
"oasic"
],
"repo": "oasic/nbayes",
"url": "https://github.com/oasic/nbayes/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
240750144
|
Not does not apply to all operators
https://github.com/oasis-open/cti-stix2-json-schemas/blob/master/pattern_grammar/STIXPattern.g4#L45
Not should apply to = and != as well as the ordering operators.
Though their use here does not make sense, there is nothing in the spec that prevents their use. I don't see a good reason to restrict them either.
A good (though not great) reason to restrict them is that the use of NOT in front of =, !=, <, <=, >, or >= amounts to more than one way to do something, which we try to avoid.
I agree that, as written, the spec does not prohibit NOT != (or the others), but in my opinion we should strongly discourage their use. That said, the pattern grammar is somewhat lax, in that it doesn't (on its own) validate correct object types or property names, so there is something to be said for allowing NOT != in the grammar, but raising warnings/errors in the STIX-Validator.
Since this is an open repository, we can't change/influence the specs here; we just need to implement what is specified.
@chisholm @clenk @ikiril01 @treyka What do you think?
I would disagree that we should restrict the use of NOT before any operators permitted by the Patterning specification. It's not two different ways of doing the same thing; that's how boolean logic works.
Which is to say that it's not(not(not))) two different ways of doing the same thing. ;-)
I'd say boolean logic allows multiple ways of doing the same thing. E.g. A=>B <=> !A | B, A <=> !!A <=> !!!!A, !(A|B) <=> !A & !B, etc, etc...
I'd be all for allowing the full range of propositional logic expression. But let's face it, NOT != isn't exactly a standard notation. In fact it's really weird, and surely anyone looking at it would wonder what the author was thinking when he wrote it.
In this case, we have lots of custom operators (i.e. beyond those of propositional logic), and inserting NOT before the operator in some cases actually reads better to humans. A NOT LIKE B reads more like a sentence, so it makes sense in terms of human understandability. A NOT != B is going to make people squint and scratch their heads. I guess that's why I wrote the grammar that way. It was making me squint and scratch my head :-P
So yeah, it's not consistent across all operators, but I guess it made more sense to me to be inconsistent. I guess we gotta be totally spec-compliant, but I do think the spec allows some weird patterns :)
that's just how boolean logic works.
Unless I'm misunderstanding/misremembering the spec, NOT really (pun intended 😁 ). The spec doesn't allow <comparison expression A> AND NOT (<comparison expression B> OR <comparison expression C>). In effect, we can only apply NOT to "leaf nodes", which is equivalent to using the opposite operator in the comparison.
That does raise the issue (which I thought we resolved but now can't find in the spec) of whether an object that is missing a property will ever match a comparison expression using that property. Thus, given a foo object {bar: 42}, the pattern [foo:bar = 42] would match. But what about [foo:qux != 37]? Should that be interpreted differently than [foo:qux NOT = 37]? There's also the issue of incompatible types. [foo:bar > 'tacos'] is False, and [foo:bar <= 'tacos'] is also False, so does that mean that [foo:bar NOT > 'tacos'] is True?
Regardless of the answers to the above questions, I agree that the grammar needs to be updated to support what's in the spec (and the matcher updated accordingly), and if we want to discourage diabolical uses of NOT, we should do that elsewhere.
While I agree that NOT != is not friendly, as is pointed out, you can do similar things in many languages; even python allows not (a == b). We do have a bit more context than other languages, though, due to where we put the NOT.
It is correct that NOT only applies to the results of the comparison. We have not defined what NOT would result in otherwise, and it is invalid.
Re missing properties. I'm pretty sure we had text in the document, but I do not see it. IIRC, not present is somewhat like the empty set, so it'd never be equal, but always !=. And in that case I would expect NOT = and != to return the same.
It is known that we do not have a function to test the presence of a missing property. The real question is, does missingprop MATCHES '.*' return true or false? (and what happens when missingprop is present but not a string?) And we do not have this defined in the spec. I thought we had resolved this too and added text, but it is not present.
you can do similar things in many languages, even python allows not (a == b).
Actually, I was thinking of the customary unary prefix NOT operator as being the contrasting approach, not the similar, comparable one. not (a != b) actually seems more reasonable to me, perhaps because it's negating a whole sub-expression, as opposed to the juxtaposed NOT != which looks like a confusing and unnecessary double-negation. Unary prefix NOT seems like a general purpose tool for logically negating arbitrary sub-expressions, which makes sense to have. NOT != just looks like a clumsy confusing syntax.
As far as missing properties, the philosophy embodied in the matcher is that an object path is basically a selector of values from cyber observable objects, which will participate in the comparison. Having a non-existent property means that no values are selected, which means that no comparison can happen. There is nothing to compare. Therefore, there are no "root" cyber observable objects, and the ramifications flow from that. E.g. if that was the only comparison expression within the observation expression (e.g. within the square brackets), then it won't match the observation; the comparison operator is irrelevant (even if it was '!=').
If the spec were to say the result was true when the operator was '!=' and the property was not found, it would effectively be saying there is a match without a matching cyber observable object. How do you resolve that idea against the common root cyber observable constraint? It seems contradictory. Consider a compound comparison expression:
[foo:missingprop != 1 AND foo:name = 'alice']
The first evals to true, and let's say the second does too, because the prop exists with the given value. Since they're ANDed, one might think the whole thing must eval to true, and match. But what is the common root cyber observable here? The first comparison expression didn't match any. So there can't be a common cyber observable. That suggests that there can be no match. So is it a match or not? If a match, then what do you do with the common root cyber observable constraint?
As far as type mismatches, 4.2.1 does state that if the types are incompatible, '!=' results in true, all other operators result in false. The matcher follows this: if the selected value isn't a string, the MATCHES operator will yield false.
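To make these semantics concrete, here is a minimal Python sketch of one possible reading, in which a missing property selects no values (so nothing matches regardless of the operator, and NOT cannot turn that into a match), and type mismatches follow the 4.2.1 rule. The function names and two-valued results are illustrative only, not the actual matcher's API.
def select_values(obj, prop):
    # A missing property selects nothing; with no values there is
    # nothing to compare, so the expression cannot match.
    return [obj[prop]] if prop in obj else []

def compare(value, op, operand):
    # Section 4.2.1 rule: on a type mismatch, '!=' yields True and
    # every other operator yields False. The isinstance check is a
    # crude stand-in for the spec's type-compatibility rules.
    if not isinstance(value, type(operand)):
        return op == "!="
    if op == "=":
        return value == operand
    if op == "!=":
        return value != operand
    if op == ">":
        return value > operand
    if op == "<=":
        return value <= operand
    raise ValueError("unsupported operator: " + op)

def comparison_matches(obj, prop, op, operand, negated=False):
    values = select_values(obj, prop)
    if not values:
        # No selected values: no match, whatever the operator or NOT.
        return False
    result = any(compare(v, op, operand) for v in values)
    return not result if negated else result

foo = {"bar": 42}
print(comparison_matches(foo, "bar", "=", 42))        # True
print(comparison_matches(foo, "qux", "!=", 37))       # False: missing property
print(comparison_matches(foo, "bar", ">", "tacos"))   # False: type mismatch
print(comparison_matches(foo, "bar", "!=", "tacos"))  # True: mismatch plus '!='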
Fixed by #59
|
gharchive/issue
| 2017-07-05T19:21:26 |
2025-04-01T04:35:16.238368
|
{
"authors": [
"chisholm",
"gtback",
"jmgnc",
"treyka"
],
"repo": "oasis-open/cti-stix2-json-schemas",
"url": "https://github.com/oasis-open/cti-stix2-json-schemas/issues/58",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
995185871
|
add RSDL syntax highlighting for Github markdown code blocks
https://docs.github.com/en/enterprise-server@3.1/github/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks
Seems to be a duplicate of #223.
Also see steps for adding to linguist here: https://github.com/github/linguist/blob/master/CONTRIBUTING.md#adding-a-language, I assume we are a bit short of "200 unique :user/:repo repositories":
Note:
We try only to add languages once they have some usage on GitHub.
In most cases we prefer that each new file extension be in use in at least 200 unique :user/:repo repositories before supporting them in Linguist.
I didn't drill into the details. I was actually expecting there to be a per-repo configuration/override for the languages file.
Linguist allows overrides per repo or file, see https://github.com/github/linguist/blob/master/docs/overrides.md, I used this to get highlighting for the OData ABNF in *.txt files: https://github.com/oasis-tcs/odata-abnf/blob/main/.gitattributes.
I assume this mechanism is limited to officially listed languages.
|
gharchive/issue
| 2021-09-13T18:15:09 |
2025-04-01T04:35:16.242993
|
{
"authors": [
"chrisspre",
"ralfhandl"
],
"repo": "oasis-open/odata-rapid",
"url": "https://github.com/oasis-open/odata-rapid/issues/285",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2489674489
|
Add support for deterministic ROFL App IDs
Add support for deterministic App IDs. Perhaps using the same behavior as in Ethereum (derived from account address+nonce). This would allow deterministic tests for your ROFL in a CI using sapphire-localnet, because you could hardcode the App ID inside its rust code.
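For illustration, here is a minimal Python sketch of the Ethereum-style derivation mentioned above, where the ID is a pure function of (creator address, nonce). Ethereum actually uses keccak256 over the RLP encoding of [sender, nonce]; the sha3_256 hash, the length-prefixed encoding, and the "rofl1" prefix below are stand-ins, not the SDK's actual scheme.
import hashlib

def derive_app_id(creator: bytes, nonce: int) -> str:
    # Deterministic: the ID depends only on (creator, nonce), so a
    # CI test against sapphire-localnet could hardcode the result.
    payload = len(creator).to_bytes(1, "big") + creator + nonce.to_bytes(8, "big")
    digest = hashlib.sha3_256(payload).digest()
    return "rofl1" + digest[-20:].hex()  # made-up prefix/format for the sketch

creator = bytes.fromhex("00" * 20)  # placeholder account address
assert derive_app_id(creator, 0) == derive_app_id(creator, 0)  # stable
assert derive_app_id(creator, 0) != derive_app_id(creator, 1)  # nonce-dependent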
|
gharchive/issue
| 2024-08-27T15:12:20 |
2025-04-01T04:35:16.249408
|
{
"authors": [
"GoldenCaterpie",
"matevz"
],
"repo": "oasisprotocol/oasis-sdk",
"url": "https://github.com/oasisprotocol/oasis-sdk/issues/1955",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
250148555
|
Update README.md
Fix link to RFC6750 standard
@dhritzkiv Could you change the target branch to oauthjs:dev?
Done!
|
gharchive/pull-request
| 2017-08-14T20:43:58 |
2025-04-01T04:35:16.263063
|
{
"authors": [
"dhritzkiv",
"maxtruxa"
],
"repo": "oauthjs/node-oauth2-server",
"url": "https://github.com/oauthjs/node-oauth2-server/pull/425",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2160764114
|
I followed the author's tutorial completely, but the plugin does not appear in the right-click menu
I am on Windows, with a new Typora installation. What should I do?
@Coinsi
Hi. Is there a button group in the bottom right corner of Typora? If there is, it seems like you clicked in the wrong spot: you should be right-clicking in the writing area instead.
@Coinsi
Closing this issue. If you have any questions, feel free to leave a message below or open a new issue.
|
gharchive/issue
| 2024-02-29T09:09:03 |
2025-04-01T04:35:16.266416
|
{
"authors": [
"Coinsi",
"obgnail"
],
"repo": "obgnail/typora_plugin",
"url": "https://github.com/obgnail/typora_plugin/issues/495",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2240379152
|
Feature request
Requirements:
Right-clicking a tab should offer pin/unpin, similar to Obsidian's editor tabs.
Right-clicking a tab should offer renaming the current file.
Right-clicking a tab should offer toggling read-only mode on and off.
These features look simple, but they better match how people actually work. 😜 I hope they can be added. Many thanks! 🏆🎉👍
@zhouxinghong
I haven't used Obsidian; what does pinning a tab do?
Pinning a tab means that after you quit the editor, the tab will still appear in the same position and be open the next time you launch it.
Unpinning means that after you quit the editor, the tab will no longer appear the next time you launch it.
Could a tag feature be added (also similar to Obsidian)? Tag information like "#tag1 #tag2" would be stored in the file header. You can already do this by hand in the note body, for searching and annotation, but it would be great if the plugin could render tags more prominently and help quickly add existing tags. It might require traversing all files to parse tags, though; I'm not sure how that would affect performance.
|
gharchive/issue
| 2024-04-12T15:22:53 |
2025-04-01T04:35:16.269476
|
{
"authors": [
"lidonggui",
"obgnail",
"zhouxinghong"
],
"repo": "obgnail/typora_plugin",
"url": "https://github.com/obgnail/typora_plugin/issues/557",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1348607417
|
Handle combined repr attributes like #[repr(C, u8)] in repr checks
https://rust-lang.github.io/unsafe-code-guidelines/layout/enums.html#explicit-repr-annotation-with-c-compatibility
Currently, I suspect that #[repr(C, u8)] won't get detected as either #[repr(C)] or as #[repr(u8)], which I believe can lead to both false-positives and false-negatives. Due to the false-positives, this is a bug.
H/t https://twitter.com/bitshiftmask/status/1562185690208735233 + a private Twitter account I'm not going to name, you know who you are :) Thank you both!
This was closed by merging #171.
|
gharchive/issue
| 2022-08-23T22:19:18 |
2025-04-01T04:35:16.272352
|
{
"authors": [
"SmolSir",
"obi1kenobi"
],
"repo": "obi1kenobi/cargo-semver-checks",
"url": "https://github.com/obi1kenobi/cargo-semver-checks/issues/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
184082566
|
Batch Request Slowness
I'm using Simple.Odata.Client version 4.23.4 in my web project. I'm trying to make a batch request comprised of multiple GET requests to a SAP Gateway Odata service. The response time using the client API is very slow (10+ seconds) compared to the same request made with Postman or directly on the Gateway server using a client tool available there (500ms).
I'm obviously missing something or coded something incorrectly but haven't found the documentation to point me in the right direction. Any help would be appreciated.
This is my Postman POST request that I'm trying to mimic using the Simple.Odata.Client API.
Post request: http://domain:port/sap/opu/odata/SAP/ZGW_PRICE_SRV/$batch
Headers:
Content-Type: multipart/mixed;boundary=batch
X-Requested-With: X
Request Body:
--batch
Content-Type: application/http
Content-Transfer-Encoding: binary
GET Prices(Material='161084',ShipCond='OT',QtyRequested=1,UnitOfMeasure='EA') HTTP/1.1
Accept-Language: en-US
Accept: application/json
MaxDataServiceVersion: 2.0
DataServiceVersion: 2.0
--batch
Content-Type: application/http
Content-Transfer-Encoding: binary
GET Prices(Material='164538',ShipCond='OT',QtyRequested=1,UnitOfMeasure='EA') HTTP/1.1
Accept-Language: en-US
Accept: application/json
MaxDataServiceVersion: 2.0
DataServiceVersion: 2.0
--batch
Content-Type: application/http
Content-Transfer-Encoding: binary
GET Prices(Material='160134',ShipCond='OT',QtyRequested=1,UnitOfMeasure='EA') HTTP/1.1
Accept-Language: en-US
Accept: application/json
MaxDataServiceVersion: 2.0
DataServiceVersion: 2.0
--batch--
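For comparison, the same $batch payload can be reproduced outside both Postman and the client; below is a rough Python sketch using the requests library (the host and the trimmed sub-request headers are placeholders, and authentication is omitted). Timing this against the Simple.OData.Client call helps separate server latency from client-side parsing overhead.
import time
import requests

BASE = "http://domain:port/sap/opu/odata/SAP/ZGW_PRICE_SRV"  # placeholder host

def get_part(key):
    # One GET sub-request inside the multipart/mixed $batch body.
    return ("--batch\r\n"
            "Content-Type: application/http\r\n"
            "Content-Transfer-Encoding: binary\r\n\r\n"
            "GET Prices(" + key + ") HTTP/1.1\r\n"
            "Accept: application/json\r\n\r\n")

keys = [
    "Material='161084',ShipCond='OT',QtyRequested=1,UnitOfMeasure='EA'",
    "Material='164538',ShipCond='OT',QtyRequested=1,UnitOfMeasure='EA'",
    "Material='160134',ShipCond='OT',QtyRequested=1,UnitOfMeasure='EA'",
]
body = "".join(get_part(k) for k in keys) + "--batch--\r\n"

start = time.monotonic()
resp = requests.post(
    BASE + "/$batch",
    data=body,
    headers={"Content-Type": "multipart/mixed;boundary=batch",
             "X-Requested-With": "X"},
)
print(resp.status_code, round(time.monotonic() - start, 3), "s")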
In my .Net project I'm attempting to return the results of the batch request to a list (responseList). I've seen examples where the results of the GET requests are put into Variable1, Variable2 etc, however the number of results can be 1 to Many in my case.
public async Task<ActionResult> BatchTest()
{
var settings = new ODataClientSettings("http://mydomin:port/sap/opu/odata/SAP/ZGW_PRICE_SRV/");
settings.BeforeRequest += x =>
{
//x.Headers.Add("Accept-Encoding", "gzip");
x.Headers.Add("X-Requested-With", "X");
};
settings.OnApplyClientHandler = handler =>
{
handler.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
};
settings.PayloadFormat = ODataPayloadFormat.Json;
settings.IgnoreResourceNotFoundException = true;
settings.OnTrace = (y, z) => Console.WriteLine(string.Format(y, z));
var client = new ODataClient(settings);
List<IEnumerable<IDictionary<string, object>>> responseList = new List<IEnumerable<IDictionary<string, object>>>();
//Keys Item 1
IDictionary<string, object> itemKey1 = new Entry();
itemKey1.Add("Material", "161084");
itemKey1.Add("ShipCond", "OT");
itemKey1.Add("QtyRequested", 1);
itemKey1.Add("UnitOfMeasure", "EA");
//Keys Item 2
IDictionary<string, object> itemKey2 = new Entry();
itemKey2.Add("Material", "164538");
itemKey2.Add("ShipCond", "OT");
itemKey2.Add("QtyRequested", 1);
itemKey2.Add("UnitOfMeasure", "EA");
//Keys Item 3
IDictionary<string, object> itemKey3 = new Entry();
itemKey3.Add("Material", "160134");
itemKey3.Add("ShipCond", "OT");
itemKey3.Add("QtyRequested", 1);
itemKey3.Add("UnitOfMeasure", "EA");
var batch = new ODataBatch(client);
batch += async c => responseList.Add(await client
.For("Prices")
.Key(itemKey1)
.FindEntriesAsync());
batch += async c => responseList.Add(await client
.For("Prices")
.Key(itemKey2)
.FindEntriesAsync());
batch += async c => responseList.Add(await client
.For("Prices")
.Key(itemKey3)
.FindEntriesAsync());
await batch.ExecuteAsync();
ViewBag.Prices = responseList;
return View();
}
Hi and sorry for the late response. This performance issue can be a result of the current deserialization algorithm the Client uses, which several people have pointed out to be slow. I don't see anything wrong in what you are doing, even though I am puzzled why it has to be so slow. I plan to revisit and possibly revise the serialization in version 5.
Thanks for the update. Let me know if you have any suggestions on how I could improve performance. The GET requests don't have to be in a batch request, however I believed that to be best practice
I had the same issue: not only for batch, but processing a large response is also slow. Using profiling I found that the problem is in the reflection.
For a bigger batch request (~500 objects) the problem was in querying GetCustomAttributes (without caching). I think the worst of it was within the functions GetMappedName/IsNotMapped. When this feature is not used by your project, you can simply comment out that code.
For response processing, the conversion to objects is slow. Consider using FastMember instead of the standard reflection methods.
When I use Simple.OData.Client to fetch data from Dynamics CRM, it is very slow:
just selecting the top 10 product records takes nearly 10 seconds.
But putting the same URL into the address bar, I can get the response in less than 1s.
I don't know why. Am I using it in the wrong way?
ODataClientSettings setting = new ODataClientSettings();
setting.BaseUri = new Uri("http://xxxxxxxxxxxxxxxxxxxxxx");
NetworkCredential nc = new NetworkCredential("xxxxx", "xxxx", "xxxxx");
setting.Credentials = nc;
var client = new ODataClient(setting);
var products = client.For("Products").Top(10).FindEntriesAsync().GetAwaiter().GetResult();
foreach(var item in products)
{
Console.WriteLine(item["productnumber"]);
Console.WriteLine("-------------------------------------------------------");
}
Having the same issue. Pretty slow on dynamics crm.
I think it will load the metadata the first time, so that call is very slow; the second call can run fast.
@homer-zhang I am investigating performance issues, wrote some benchmarks but they appeared to be fast. So I need some help to reproduce the issue. Can you provide the following:
Metadata file for your OData service
Code example that generates response
Response payload (from Postman or similar tool).
I will mock server calls and inject responses to the benchmark to measure its execution time.
The attached file is the metadata file.
My code:
using System;
using Simple.OData.Client;
using System.Net;
using System.Diagnostics;
namespace ConsoleAppCSS
{
class Program
{
static void Main(string[] args)
{
ODataClientSettings setting = new ODataClientSettings();
setting.BaseUri = new Uri("http://crm.xxxx/crm/api/data/v8.2/");
NetworkCredential nc = new NetworkCredential("xxxx", "xxx", "xxx");
setting.Credentials = nc;
var client = new ODataClient(setting);
Stopwatch sw = new Stopwatch();
sw.Start();
var products = client.For("Products").Top(10).FindEntriesAsync().GetAwaiter().GetResult();
foreach (var item in products)
{
Console.WriteLine(item["productnumber"]);
Console.WriteLine("-------------------------------------------------------");
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
sw.Restart();
var products2 = client.For("Products").Top(10).FindEntriesAsync().GetAwaiter().GetResult();
foreach (var item in products2)
{
Console.WriteLine(item["productnumber"]);
Console.WriteLine("-------------------------------------------------------");
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
Console.WriteLine("Hello World!");
Console.ReadKey();
}
}
}
result:
300000
401000
400000
501000
600000
1010020075
1010020076
2100040007
5010000027
5010000028
13447
300000
401000
400000
501000
600000
1010020075
1010020076
2100040007
5010000027
5010000028
140
Hello World!
On the first call it will download the metadata file; that alone takes nearly 10s.
ODataV4Metadata.zip
We are also seeing severe performance issues during OData deserialization using Simple.OData.Client 5.2 and previous versions. Using Simple.OData.Client, an OData call for 15,000 records takes approx. 1 minute. If we watch the execution in Fiddler, the data is returned from the OData call in less than 10 seconds from the server. Most of the time is spent on deserialization. If we use HttpClient and JsonConvert.DeserializeObject, the response is complete within 12 seconds, deserialization included. Just wondering why JsonConvert.DeserializeObject is not used in Simple.OData.Client for deserialization?
public static async Task<List<T>> DeserializeHttpResponseMessage<T>(this Task<HttpResponseMessage> tasKHttpResponseMessage)
{
var httpResponseMessage = await tasKHttpResponseMessage.ConfigureAwait(false);
string content = await httpResponseMessage.Content.ReadAsStringAsync().ConfigureAwait(continueOnCapturedContext: false);
return JsonConvert.DeserializeObject<ODataResponse<List<T>>>(content).Value;
}
public class ODataResponse<T>
{
[JsonProperty("@odata.context")]
public string ODataContext { get; set; }
public T Value { get; set; }
}
The following code takes a good amount of time during deserialization.
private ODataResponse ReadResponse(ODataReader odataReader)
{
ResponseNode rootNode = null;
var nodeStack = new Stack<ResponseNode>();
while (odataReader.Read())
{
if (odataReader.State == ODataReaderState.Completed)
break;
switch (odataReader.State)
{
case ODataReaderState.ResourceSetStart:
StartFeed(nodeStack, CreateAnnotations(odataReader.Item as ODataResourceSet));
break;
case ODataReaderState.ResourceSetEnd:
EndFeed(nodeStack, CreateAnnotations(odataReader.Item as ODataResourceSet), ref rootNode);
break;
case ODataReaderState.ResourceStart:
StartEntry(nodeStack);
break;
case ODataReaderState.ResourceEnd:
EndEntry(nodeStack, ref rootNode, odataReader.Item);
break;
case ODataReaderState.NestedResourceInfoStart:
StartNavigationLink(nodeStack, (odataReader.Item as ODataNestedResourceInfo).Name);
break;
case ODataReaderState.NestedResourceInfoEnd:
EndNavigationLink(nodeStack);
break;
}
}
return ODataResponse.FromNode(rootNode);
}
Thank you for the additional information. This should help.
|
gharchive/issue
| 2016-10-19T21:45:56 |
2025-04-01T04:35:16.294055
|
{
"authors": [
"CBlaze14",
"homer-zhang",
"monczszilard",
"object",
"raviraj1976",
"thomasklammer"
],
"repo": "object/Simple.OData.Client",
"url": "https://github.com/object/Simple.OData.Client/issues/323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1391534410
|
MainTest.java:75-118: Rewrite as OOP-style of applying...
The puzzle 52-0474ee38 from #52 has to be resolved:
https://github.com/objectionary/dejump/blob/7548736b64d5dd706d218133e3f688aa81ec18ac/src/test/java/org/eolang/dejump/MainTest.java#L75-L118
The puzzle was created by @MikhailLipanin on 29-Sep-22.
Estimate: 30 minutes, role: DEV.
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is removed from the source code. Here is more about PDD and about me.
@0pdd 2 puzzles #57, #58 are still not solved.
@0pdd the puzzle #57 is still not solved; solved: #58.
The puzzle 52-0474ee38 has disappeared from the source code, that's why I closed this issue.
|
gharchive/issue
| 2022-09-29T21:38:59 |
2025-04-01T04:35:16.306218
|
{
"authors": [
"0pdd"
],
"repo": "objectionary/dejump",
"url": "https://github.com/objectionary/dejump/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1330972409
|
implement random.pseudo in EO using QQ.sys.call or QQ.sys.win32
Now, random.pseudo is using Java in order to get currentTimeMillis(). It's possible to get rid of Java, after we enable gettimeofday in eo-sys: https://github.com/objectionary/eo-sys/issues/5
@mximp please, help or delegate
@potatmen please have a look
@mximp as I see eo.sys module on the stage of PR. I am interested in implementing pseudo.random using this module!
Btw, I proved that random.pseudo has a uniform distribution with the old implementation, I believe I will need to do it once again after implementation it in EO right?
@potatmen The task is exactly about implementation of Pseudo.random using eo.sys.
I believe I will need to do it once again after implementation it in EO right
Yes, would be good to validate.
@mximp Ok, but still I'm waiting for the merge of your PR to start using this module.
@mximp I have a question about how to use gettimeofday in EO. If I try to use it like this QQ.sys.system.gettimeofday I get an error The attribute Δ of org.eolang.PhLocated has org.eolang.PhPackage instead of org.eolang.Data. If like this QQ.sys.call "gettimeofday" then I get this error EOorg.EOeolang.EOstringν17="Can't #copy() package object 'org.eolang.sys.call'"SF
Code
@potatmen The function is expected to be called like the following:
+alias org.eolang.sys.call
[] > get-time-of-day-test
if. > @
QQ.sys.uname.is-windows
nop
assert-that
QQ.sys.call
"gettimeofday"
$.greater-than 0
@mximp Trying to use it like you said in the proposed code, I get The alias "org.eolang.sys.call" is not used (unused-aliases:26)
@potatmen It's strange.. Can you try to use just call?
Also can you try to examine target/eo folder for interim stages. In which .xmir the error appears?
@mximp Tried to use just call, got this Failed while trying to save to ./target/04-pull/org/eolang/call.eo: https://raw.githubusercontent.com/objectionary/home/85efb6667117cb1f2f003016da99800dec26c5b7/objects/org/eolang/call.eo
@mximp The directory you recommended to check has only .csv files and no .xmir files
@mximp Btw, according to this comment we can't use sys.call because it wasn't added to the Objectionary
@mximp The directory you recommended to check has only .csv files and no .xmir files
@potatmen normally target/eo should have the following structure:
\01-parse\
\02-steps\
\03-optimize\
\04-pull\
\05-pre\
\06-resolve\
\06-transpile\
foreign.csv
placed.csv
@mximp I believe the current version of random written in Java is here. If yes, may I just rewrite this version in EO and show to you?
That is the point of the task: to re-write it in EO. Let's review it as part of the PR.
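For intuition about what the EO implementation needs to do, here is a small Python sketch of a time-seeded linear congruential generator, which is the usual shape of such a pseudo-random object. The Numerical Recipes constants below are illustrative assumptions, not necessarily what eo-math uses.
import time

class PseudoRandom:
    # Linear congruential generator: state = (a * state + c) mod m,
    # seeded from the current time when no seed is given.
    def __init__(self, seed=None):
        self.state = int(time.time() * 1000) if seed is None else seed

    def next_float(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32  # uniform in [0, 1)

rng = PseudoRandom(seed=42)  # a fixed seed makes the sequence reproducible
print([rng.next_float() for _ in range(4)])
A uniformity check like the one mentioned above can then be re-run by bucketing a large sample of next_float() values.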
@mximp https://github.com/objectionary/eo-sys/compare/master...potatmen:eo-sys:master in this file I added random number generator, please check it out. I placed it to the eo-sys repo because it's successfully running inside this repo
@mximp one thing I have noticed about target/eo is that there are no sys subfolder, I mean that there are subfolders of math collections and so sys.
Also, please take a look at this issue https://github.com/objectionary/eo-sys/issues/9, it's said that sys.call is not added to objectionary
@mximp objectionary/eo-sys@master...potatmen:eo-sys:master in this file I added random number generator, please check it out. I placed it to the eo-sys repo because it's successfully running inside this repo
@potatmen As far as I can tell this will look totally different when implemented in correct repo (eo-math). Let's wait for eo-sys is released properly and then solve this task.
@Graur I guess I am able to solve the task
@Graur @levBagryansky It is relevant. Assigned.
@yegor256 the puzzle #98 is still not solved.
|
gharchive/issue
| 2022-08-07T10:14:34 |
2025-04-01T04:35:16.319369
|
{
"authors": [
"0pdd",
"levBagryansky",
"mximp",
"potatmen",
"yegor256"
],
"repo": "objectionary/eo-math",
"url": "https://github.com/objectionary/eo-math/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2714248390
|
feat(#933): use -- instead of empty values
In this PR I change the way we store empty bytes array byte[0] into directives. Now we use the special value --.
Related to #933.
@yegor256 Could you review these changes, please?
@rultor merge
@rultor merge
@volodya-lombrozo OK, I'll try to merge now. You can check the progress of the merge here.
@rultor merge
@volodya-lombrozo Done! FYI, the full log is here (took me 10min).
|
gharchive/pull-request
| 2024-12-03T07:48:18 |
2025-04-01T04:35:16.325065
|
{
"authors": [
"rultor",
"volodya-lombrozo"
],
"repo": "objectionary/jeo-maven-plugin",
"url": "https://github.com/objectionary/jeo-maven-plugin/pull/934",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
243296564
|
Want to show Am/Pm in 24hourclock
Hello! Is there any way to show Am/Pm in the twentyfourhourclock face?
Thanks
Yes, refer this link:
http://flipclockjs.com/faces/twelve-hour-clock
|
gharchive/issue
| 2017-07-17T04:53:30 |
2025-04-01T04:35:16.326235
|
{
"authors": [
"Xsmael",
"naime-hossain"
],
"repo": "objectivehtml/FlipClock",
"url": "https://github.com/objectivehtml/FlipClock/issues/328",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
547937482
|
Bottom spy isn't great with short pages
The bottom spy functionality is great for taller pages where the last element is too short to reach the offset, but for pages where the content always reaches the bottom of the page the last element will always be active. And clicking the other elements in the navlist doesn't do anything; the last element stays active anyway. That doesn't make for great UX. At least make the clicked element active even though it doesn't result in any scrolling (due to the page always reaching the bottom).
Another problem this results in is that when entering the page the last navlist element is automatically active, which doesn't make sense since the user hasn't chosen to be navigated to the last element. The first element should be active if the user hasn't chosen anything else.
It just seems messy to start coding around these problems, but has anyone else had the same problem and solved it in any way?
Worth adding: my pages are dynamically rendered, so some are short and some tall; they vary a lot in structure, which makes it difficult to ugly-hack a solution.
|
gharchive/issue
| 2020-01-10T08:32:47 |
2025-04-01T04:35:16.327937
|
{
"authors": [
"skolverket-jonmar"
],
"repo": "oblador/angular-scroll",
"url": "https://github.com/oblador/angular-scroll/issues/222",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1884102247
|
Adding language option at the end of the command line
Hope this works, since I mostly try to document data in my Spanish language
Thanks for the PR. I'd love it if you could update this to default to en when no language is specified. (Also, it's best to include the docs change in the same PR as the code change)
Was thinking of that too, like a default, but my perl skills are next to nothing!
Will try to! Cheers!
|
gharchive/pull-request
| 2023-09-06T14:10:05 |
2025-04-01T04:35:16.347460
|
{
"authors": [
"TenoTrash",
"obra"
],
"repo": "obra/Youtube2Webpage",
"url": "https://github.com/obra/Youtube2Webpage/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1204997734
|
Add windows tag unittest
Description:
Link to tracking Issue:
Testing:
Documentation:
This isn't necessary, right?
https://pkg.go.dev/cmd/go#hdr-Build_constraints
During a particular build, the following words are satisfied:
- the target operating system, as spelled by runtime.GOOS, set with the
GOOS environment variable.
GOOS = windows here (this action runs on a windows container), so you shouldn't need to set the tag.
If you look in the unit tests for build-and-test-windows, it doesn't show IIS or the windowsperfcounterreceiver running, and I was trying to figure out why that was.
|
gharchive/pull-request
| 2022-04-14T21:19:56 |
2025-04-01T04:35:16.364989
|
{
"authors": [
"BinaryFissionGames",
"Mrod1598"
],
"repo": "observIQ/opentelemetry-collector-contrib",
"url": "https://github.com/observIQ/opentelemetry-collector-contrib/pull/1382",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
297123819
|
using stdlib in custom webapp
I love the programming model and excellent blog post.
Would be great to have the ability to compose a notebook on https://beta.observablehq.com/, but then be able to "export" results into a self-hosted environment, where access to larger datasets and runtime environments is easier.
In this model https://beta.observablehq.com/ is the "exploration and composition" platform, working on sample data, and the self-hosted environment is the runtime.
The specific request here is:
demo / blog / instructions for leveraging stdlib in a self-hosted env.
I also very much like this way of thinking about and writing notebooks using modern native web technologies, and would like to request the possibility of running such notebooks in a 'local' environment, especially within a 'Browser Extension'!
So a serialized version of a notebook and an 'easy to use/install/start' local environment like a 'Browser Extension' would be a fantastic thing.
Yep, serializing/exporting notebooks is high on our list and we've already got a prototype working. I added some more details in https://github.com/observablehq/notebook-stdlib/issues/25 and I'm closing this one to centralize discussion there, but feel free to post there with any other questions.
|
gharchive/issue
| 2018-02-14T14:52:27 |
2025-04-01T04:35:16.369772
|
{
"authors": [
"aabes",
"jbouecke",
"tmcw"
],
"repo": "observablehq/notebook-stdlib",
"url": "https://github.com/observablehq/notebook-stdlib/issues/4",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
522138613
|
Add termination grace period to Thanos pods
This PR adds similar changes to those in https://github.com/thanos-io/kube-thanos/pull/65, since Observatorium runs the master version of Thanos.
Feel free to merge at your own discretion.
These changes should be done upstream with kube-thanos. Any reason not to do that?
@metalmatze Already done; that's in the PR https://github.com/thanos-io/kube-thanos/pull/65 that I've referred to. Until it gets released, this should do the work.
Do we have v0.9.0 deployed?
@brancz Not yet. This is not an urgent one; I'm just doing the grunt work.
Blockers:
CI fails already fixed in another issue, it won't be a problem when we merge it #122, before this.
Only safe to run on Thanos v0.9.0 and above :)
|
gharchive/pull-request
| 2019-11-13T11:17:46 |
2025-04-01T04:35:16.373256
|
{
"authors": [
"brancz",
"kakkoyun",
"metalmatze"
],
"repo": "observatorium/configuration",
"url": "https://github.com/observatorium/configuration/pull/115",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1575892010
|
refactor: Remove repetition in Task modal
Description
I started looking at making the Done date editable in EditTask.svelte and quickly found that there was a lot of duplication of chrono.parseDate() calls, and also that the structure of the HTML block was hard to skim-read.
So I extracted two new methods to localise the different chrono.parseDate() calls, and eventually managed to give all the date-parsing helper functions meaningful names.
There are now 3 date functions that differ only subtly in behaviour, but at least those bodies of code are now only in one place instead of 3 or 4.
Motivation and Context
Make it easier to enable editing of Done date, eventually.
How has this been tested?
By editing tasks in the demo vault.
Screenshots (if appropriate)
Types of changes
Internal changes:
[x] Refactor (prefix: refactor - non-breaking change which only improves the design or structure of existing code, making no changes to its external behaviour)
Checklist
[x] My code follows the code style of this project and passes yarn run lint.
[ ] My change requires a change to the documentation.
[ ] I have updated the documentation accordingly.
[ ] My change has adequate Unit Test coverage.
Terms
[x] My contribution follows this project's contributing guide
[x] I agree to follow this project's Code of Conduct
Rats. I made this change from my fork, and it has pulled in some other changes I had forgotten about.
@BluBloos and @Cito - as the two people who have expressed interest in the Edit Task Modal code, if you have time and are interested, please have a look at the changes to EditTask.svelte in this PR...
I will have to close it, and recreate it on my main repo for a separate PR.
This has been replaced by https://github.com/obsidian-tasks-group/obsidian-tasks/pull/1645.
|
gharchive/pull-request
| 2023-02-08T10:46:10 |
2025-04-01T04:35:16.380102
|
{
"authors": [
"claremacrae"
],
"repo": "obsidian-tasks-group/obsidian-tasks",
"url": "https://github.com/obsidian-tasks-group/obsidian-tasks/pull/1643",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1905749567
|
Deprecate 'Format Importer' core plugin
With this new open source (💜) plugin, the older Format Converter plugin no longer makes sense as a separate option.
The older plugin may only confuse some users, as they both have very similar names. As a little side bonus, the Obsidian codebase might get a bit cleaner :)
To solve this, we can either:
add Markdown as a supported format in this new plugin,
or remove the format option altogether (AFAIK, it only converted some links and images?) - not sure how often it was used
Agreed, see #23
Closing since this will be solved in Obsidian core after #23 is completed
|
gharchive/issue
| 2023-09-20T21:32:55 |
2025-04-01T04:35:16.382165
|
{
"authors": [
"kepano",
"p3rid0t"
],
"repo": "obsidianmd/obsidian-importer",
"url": "https://github.com/obsidianmd/obsidian-importer/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1593365625
|
Suggestions for first-time contributors to k-CAS
The kcas library currently lacks benchmarks, tests, and cool examples, and those can also serve as a good way to understand k-CAS itself. The kcas_data library provides a few data structures, but there is plenty of room for more.
Here are some potential ideas:
Using k-CAS, create examples of solutions to well-known concurrency problems such as
The dining philosophers problem
Implement examples and benchmarks of lock-free data structures:
Stacks — simple list-based stacks already exist (e.g. Stack); what about something using an array?
Queues — a few queue implementations already exist (e.g. Queue), but there are many ways to implement queues
Deques
Linked lists — a doubly linked list example already exists (e.g. Dllist); what about singly linked lists?
Skip lists
Binary search trees — there are many different ways to implement binary search trees
Maps and Sets — possibly using some binary search tree or skip list
Hash tables — a couple of hash table examples already exist (e.g. Hashtbl), but there is definitely room for more
Bags
Priority queues — there is an example of a leftist heap, but there are many other approaches to priority queues
Note that with k-CAS it is possible to straightforwardly translate an imperative data structure into a lock-free data structure by using transactions (see the sketch after this list).
Use k-CAS to implement composable synchronization primitives (mutexes, condition variables, semaphores, barriers, ...), the idea being that one can commit a transaction that e.g. simultaneously acquires multiple mutexes
Use k-CAS e.g. with domainslib to parallelize algorithms that e.g. manipulate non-trivial data structures and are difficult to parallelize otherwise.
Devise tests / benchmarks / examples that perform particularly poorly, e.g.
example that performs poorly with the default commit ~mode:Mode.obstruction_free and significantly better with commit ~mode:Mode.lock_free, or
example that suffers from starvation.
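To make the translation idea above concrete, here is a minimal sketch (mine, not from the library docs) of a list-based stack made lock-free with transactions. It assumes the `Loc`/`Xt` API of recent kcas releases; the `move` helper is illustrative only.

```ocaml
(* A plain list-based stack whose state lives in one kcas location.
   All reads and writes go through the transaction log [~xt], so
   operations compose into larger atomic steps. *)
module Stack = struct
  type 'a t = 'a list Kcas.Loc.t

  let create () : 'a t = Kcas.Loc.make []

  let push ~xt stack x =
    Kcas.Xt.set ~xt stack (x :: Kcas.Xt.get ~xt stack)

  let pop_opt ~xt stack =
    match Kcas.Xt.get ~xt stack with
    | [] -> None
    | x :: xs -> Kcas.Xt.set ~xt stack xs; Some x
end

(* Composability: pop from one stack and push to another as a single
   atomic transaction - impossible with two independent Atomics. *)
let move src dst =
  let tx ~xt =
    match Stack.pop_opt ~xt src with
    | Some x -> Stack.push ~xt dst x
    | None -> ()
  in
  Kcas.Xt.commit { tx }
```

The same pattern extends to the synchronization-primitives point: a transaction that updates several "locked" locations at once either acquires all of them or retries, with no lock-ordering concerns.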
I see there’s already a PR that uses ocurrent’s benchmark tools. Is this for certain?
I have an initial MVP here that uses hyperfine instead. https://github.com/dangdennis/kcas/pull/1
Thanks for the lib! I’ve always been curious how Haskell‘s STM worked but couldn’t ever understand the code much.
Sorry for taking a long time to reply!
I see there’s already a PR that uses ocurrent’s benchmark tools. Is this for certain?
It is work in progress. The benchmarking infrastructure has not previously supported parallel benchmarks and I guess there is still work to do there. I will likely be taking over the integration work.
Currently there are some simple benchmarks developed while working on the library and I have indeed been using hyperfine to run those (you can see results of benchmark runs in various PRs, e.g. here is a recent one). It is very useful to have some benchmarks to check that changes do not cause obvious performance regressions and also to test whether potential improvements are as expected. The set of benchmarks is far from comprehensive, however.
For better or worse, I think that the most wanted kind of benchmarks would be benchmarks that compare performance of kcas and kcas_data data structures against plain atomics, lock-free data structures, and lock based data structures. The goal is to understand the overhead / cost of the composability. Writing such comparative benchmarks tends to be difficult as it is easy to create biased benchmarks. Also, there currently isn't a really good lock implementation for OCaml — one that would be scheduler independent and friendly. There is work in progress towards that, however.
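For a rough sense of what the simplest such comparison could look like, here is a toy single-domain sketch (hypothetical, not the project's benchmarking harness, and deliberately naive: it measures only sequential overhead, not contention; the `Loc`/`Xt` API is assumed as above):

```ocaml
(* Toy overhead comparison: a plain Atomic counter vs. a kcas Loc
   updated transactionally. Single domain, so no contention at all. *)
let time name f =
  let t0 = Sys.time () in
  f ();
  Printf.printf "%-12s %.3fs\n" name (Sys.time () -. t0)

let n = 1_000_000

let () =
  let a = Atomic.make 0 in
  time "Atomic.incr" (fun () -> for _ = 1 to n do Atomic.incr a done);
  let l = Kcas.Loc.make 0 in
  time "Xt commit" (fun () ->
    for _ = 1 to n do
      let tx ~xt = Kcas.Xt.set ~xt l (Kcas.Xt.get ~xt l + 1) in
      Kcas.Xt.commit { tx }
    done)
```

A fair multi-domain version of this is exactly where the bias problems mentioned above creep in, which is why the comparative benchmarks are the hard part.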
|
gharchive/issue
| 2023-02-21T12:07:32 |
2025-04-01T04:35:16.506233
|
{
"authors": [
"dangdennis",
"polytypic"
],
"repo": "ocaml-multicore/kcas",
"url": "https://github.com/ocaml-multicore/kcas/issues/31",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
2302221262
|
[ocaml5-issue] MSVC bytecode timeout in domain_spawntree - with Atomic
While running CI for #458, the MSVC trunk bytecode workflow timed out (deadlocked?) in domain_spawntree - with Atomic after 6h, having run only a few tests:
https://github.com/ocaml-multicore/multicoretests/actions/runs/9115661686/job/25062501460?pr=458#logs
Skipping src/io/lin_internal_tests.exe from the test suite
[...]
Skipping src/neg_tests/lin_internal_tests_effect.exe from the test suite
random seed: 333893351
generated error fail pass / total time test name
[ ] 0 0 0 0 / 100 0.0s Domain.spawn/join - tak work
[ ] 0 0 0 0 / 100 0.0s Domain.spawn/join - tak work (generating)
[ ] 17 0 0 17 / 100 60.6s Domain.spawn/join - tak work
[ ] 37 0 0 37 / 100 122.7s Domain.spawn/join - tak work
[ ] 57 0 0 57 / 100 188.3s Domain.spawn/join - tak work
[ ] 73 0 0 73 / 100 251.9s Domain.spawn/join - tak work
[ ] 88 0 0 88 / 100 312.9s Domain.spawn/join - tak work
[✓] 100 0 0 100 / 100 347.0s Domain.spawn/join - tak work
[ ] 0 0 0 0 / 500 0.0s Domain.spawn/join - atomic
[ ] 95 0 0 95 / 500 26.1s Domain.spawn/join - atomic
[ ] 302 0 0 302 / 500 86.2s Domain.spawn/join - atomic
[✓] 500 0 0 500 / 500 139.1s Domain.spawn/join - atomic
================================================================================
success (ran 2 tests)
random seed: 476318298
generated error fail pass / total time test name
[ ] 0 0 0 0 / 100 0.0s domain_spawntree - with Atomic
[ ] 0 0 0 0 / 100 0.0s domain_spawntree - with Atomic (generating)
Error: The operation was canceled.
Saw this again - but now in MSVC native mode - and causing a crash:
https://github.com/ocaml-multicore/multicoretests/actions/runs/9126100215/job/25093649961?pr=458
random seed: 251075488
generated error fail pass / total time test name
[ ] 0 0 0 0 / 100 0.0s domain_spawntree - with Atomic
File "src/domain/dune", line 14, characters 7-23:
14 | (name domain_spawntree)
^^^^^^^^^^^^^^^^
(cd _build/default/src/domain && ./domain_spawntree.exe --verbose)
Command exited with code -1073741819.
[ ] 0 0 0 0 / 100 0.0s domain_spawntree - with Atomic (generating)
Note to self: Exit code -1073741819 corresponds to c0000005
Printf.sprintf "%lx" (-1073741819l);;
- : string = "c0000005"
which indicates STATUS_ACCESS_VIOLATION, i.e., to Windows correspondent of a segfault:
https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55
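A tiny helper along the same lines (hypothetical, names mine, not part of the test suite) makes the conversion reusable:

```ocaml
(* Render a Windows process exit code as the 32-bit NTSTATUS value
   it encodes, e.g. -1073741819 -> "0xc0000005". *)
let ntstatus_of_exit_code (code : int) : string =
  Printf.sprintf "0x%08lx" (Int32.of_int code)

let () =
  (* STATUS_ACCESS_VIOLATION, i.e. the Windows equivalent of a segfault *)
  assert (ntstatus_of_exit_code (-1073741819) = "0xc0000005")
```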
|
gharchive/issue
| 2024-05-17T09:19:51 |
2025-04-01T04:35:16.510350
|
{
"authors": [
"jmid"
],
"repo": "ocaml-multicore/multicoretests",
"url": "https://github.com/ocaml-multicore/multicoretests/issues/459",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1239141883
|
googletest: optional dependency
ref: #19
@hnwyllmm PTAL
Take a look at my description above.
|
gharchive/pull-request
| 2022-05-17T20:47:51 |
2025-04-01T04:35:16.636953
|
{
"authors": [
"hnwyllmm",
"wangqiim"
],
"repo": "oceanbase/miniob",
"url": "https://github.com/oceanbase/miniob/pull/40",
"license": "MulanPSL-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
420905289
|
ERROR Got error grabbing keeper events: -32000
Expected Behavior
I expect brizo to complete an asset purchase and to find the transaction events for that purchase.
Current Behavior
Brizo fails to detect the transaction during the asset purchase process. The server reports the following error:
{'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
The error message is continually displayed until the brizo server is manually stopped.
Steps to Reproduce the problem
Please provide detailed steps for reproducing the issue.
Start Barge using the Nile network
Register an asset in pleuston
Purchase the asset in pleuston
Possible Solution
Run brizo and the ocean stack on the spree network.
Specifications
Pleuston - v0.2.1, compatible with squid-js v0.2.8
Brizo - v0.1.7
Keeper - v0.5.3
Aquarius - v0.1.5
Failure Logs
2019-03-13 08:08:18 fangtooth root[11166] INFO Found service agreement template 0x044852b2a670ade5407e78fb2863c51de9fcb96542a07186fe3aeda6bb8a116d of type Access deployed in the current keeper network published by 0xEc3850d792Ce51905072b1A2F527830BbC8FE66a.
2019-03-13 08:08:18,718 - ocean - INFO - Squid Ocean instance initialized:
2019-03-13 08:08:18,718 - ocean - INFO - Other accounts: ['0x0011598De1016A350ad719D23586273804076774', '0x00Bd138aBD70e2F00903268F3Db08f2D25677C9e', '0x068Ed00cF0441e4829D9784fCBe7b9e26D4BD8d0', '0x6B0c56d1Ad5144b4d37fa6e27DC9afd5C2435c3B', '0xA99D43d86A0758d5632313b8fA3972B6088A21BB']
2019-03-13 08:08:18,718 - ocean - INFO - aquarius: http://health1a.ocean:5000/api/v1/aquarius/assets/ddo/
2019-03-13 08:08:18,718 - ocean - INFO - DIDRegistry @ 0x8c4a2cC4572B6CD68c58BFc220f04CD1143230a0
2019-03-13 08:08:18,718 - ocean - INFO - SecretStore: url http://127.0.0.1:12001, parity-client http://127.0.0.1:8545, account 0x00bd138abd70e2f00903268f3db08f2d25677c9e
2019-03-13 08:14:59 fangtooth brizo[11166] INFO got initialize request: {'did': 'did:op:808216cc1c2e4c848ea0bb6621dbe2b6814b837b62e449769248334f9bbee147', 'serviceAgreementId': 'cd6bda890ec642f99ea60b5b65585e28a4636329fcd74668b9570b54929aa134', 'serviceDefinitionId': '0', 'signature': '0x31ca5aa144054b3fed7d0fc3916b7b5309c173ba235e1a7fdb08853c3d4cb69a2a31389ee99665861726b45d12af602849a2efb941265fcb7f8952d4196cdea01c', 'consumerAddress': '0xe2DD09d719Da89e5a3D0F2549c7E24566e947260'}
2019-03-13 08:14:59,034 - keeper - DEBUG - got blockNumber 131470 for did 0x808216cc1c2e4c848ea0bb6621dbe2b6814b837b62e449769248334f9bbee147
2019-03-13 08:14:59,040 - keeper - DEBUG - topics [HexBytes('0xfe303194f69c404a4ca19ca3d613a4bbcf419c764a463a930dd5686b5a6ba0f4'), HexBytes('0x808216cc1c2e4c848ea0bb6621dbe2b6814b837b62e449769248334f9bbee147'), HexBytes('0x000000000000000000000000e2dd09d719da89e5a3d0f2549c7e24566e947260'), HexBytes('0x4d65746164617461000000000000000000000000000000000000000000000000')]
2019-03-13 08:14:59,040 - keeper - DEBUG - found did 0x808216cc1c2e4c848ea0bb6621dbe2b6814b837b62e449769248334f9bbee147 -> b'http://health1a.ocean:5000/api/v1/aquarius/assets/metadata/808216cc1c2e4c848ea0bb6621dbe2b6814b837b62e449769248334f9bbee147'
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
2019-03-13 08:15:19 fangtooth root[11166] ERROR Got error grabbing keeper events: {'code': -32000, 'message': 'One of the blocks specified in filter (fromBlock, toBlock or blockHash) cannot be found', 'data': 'latest'}
The log message has been changed to debug level.
The issue here refers to very old versions of the ocean protocol stack.
Let's please do all testing on keeper v0.8.6 and the compatible squid-py (>= v0.5.3) and brizo (>= v0.2.6) versions.
Is this still an issue?
At the moment, yes, since we are still using these older versions in the latest release of pleuston v0.2.1. When a new version of pleuston is available, we can move up, and this issue should be resolved.
This has been fixed in versions > v0.3.5.
|
gharchive/issue
| 2019-03-14T09:14:57 |
2025-04-01T04:35:16.658329
|
{
"authors": [
"billbsing",
"ssallam"
],
"repo": "oceanprotocol/brizo",
"url": "https://github.com/oceanprotocol/brizo/issues/103",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|