id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
---|---|---|---|---|---|
988764888 | Scroll USFM Editor Reference
On click of a verse in the USFM-editor component, the reference USFM-editor should scroll to that verse and highlight it. Try scrolling so that the verse ends up in the center of the section.
Same as #450.
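A rough sketch of the requested behavior follows (the data attribute and class name are assumptions for illustration, not the actual USFM-editor API):
```js
// Illustrative sketch only: the [data-verse] selector and the CSS class are
// hypothetical, not part of the real USFM-editor component.
function scrollReferenceToVerse(referenceEditorEl, verseId) {
  const verseEl = referenceEditorEl.querySelector(`[data-verse="${verseId}"]`);
  if (!verseEl) return;
  // Center the verse in the visible section, then highlight it.
  verseEl.scrollIntoView({ behavior: 'smooth', block: 'center' });
  verseEl.classList.add('highlighted-verse');
}
```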
| gharchive/issue | 2021-09-06T04:54:44 | 2025-04-01T06:38:44.599404 | {
"authors": [
"joelthe1"
],
"repo": "friendsofagape/autographa",
"url": "https://github.com/friendsofagape/autographa/issues/286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
505857431 | 'first_task'
Attempt number one
/home/vsts/work/1/s/lab-tests/01/index.js
1:4 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
2:56 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
3:82 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
4:3 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
5:36 error Trailing spaces not allowed no-trailing-spaces
5:37 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
6:5 error Expected { after 'if' condition curly
6:74 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
7:9 error Expected no linebreak before this statement nonblock-statement-body-position
7:21 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
8:1 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
9:27 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
10:43 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
11:5 error Opening curly brace does not appear on the same line as controlling statement brace-style
11:6 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
12:9 error Expected { after 'if' condition curly
12:56 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
13:13 error Expected no linebreak before this statement nonblock-statement-body-position
13:38 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
14:14 error Expected { after 'if' condition curly
14:18 error Unary word operator 'typeof' must be followed by whitespace space-unary-ops
14:39 error Strings must use singlequote quotes
14:48 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
15:13 error Expected no linebreak before this statement nonblock-statement-body-position
15:25 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
16:6 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
17:1 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
18:9 error 'intNumber' is never reassigned. Use 'const' instead prefer-const
18:21 error Missing radix parameter radix
18:53 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
19:9 error 'res' is never reassigned. Use 'const' instead prefer-const
19:36 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
20:9 error 'arr' is never reassigned. Use 'const' instead prefer-const
20:18 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
21:22 error Expected '===' and instead saw '==' eqeqeq
21:27 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
22:5 error Opening curly brace does not appear on the same line as controlling statement brace-style
22:6 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
23:26 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
24:9 error Expected blank line before this statement padding-line-between-statements
24:20 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
25:6 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
26:1 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
27:5 error Expected { after 'for' condition curly
27:41 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
28:9 error Expected no linebreak before this statement nonblock-statement-body-position
28:18 error Missing radix parameter radix
28:36 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
29:1 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
30:16 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
31:2 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
32:1 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
33:19 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
34:14 error Expected linebreaks to be 'LF' but found 'CRLF' linebreak-style
34:14 error Missing trailing comma comma-dangle
35:2 error Missing semicolon semi
35:2 error Newline required at end of file but not found eol-last
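Most of the errors above come from Windows CRLF line endings plus a handful of style rules. As a hedged illustration (the lab's real ESLint config is not shown in this thread), a classic .eslintrc.js enforcing the reported rule names might look like this:
```js
// Hypothetical .eslintrc.js excerpt matching the rule names reported above;
// the actual course configuration may differ.
module.exports = {
  rules: {
    'linebreak-style': ['error', 'unix'],  // flags CRLF line endings
    'no-trailing-spaces': 'error',
    curly: ['error', 'all'],               // require { } after if/for
    quotes: ['error', 'single'],
    'prefer-const': 'error',
    eqeqeq: 'error',
    radix: 'error',
    semi: ['error', 'always'],
    'eol-last': ['error', 'always'],
  },
};
```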
| gharchive/pull-request | 2019-10-11T13:36:02 | 2025-04-01T06:38:44.632786 | {
"authors": [
"baygulovd",
"frontend-tinkoff-ekb"
],
"repo": "frontend-tinkoff-ekb/rtf-lab-1",
"url": "https://github.com/frontend-tinkoff-ekb/rtf-lab-1/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[São Paulo] "Web Design Responsivo" book - Caelum
Hey everyone,
I have this new book (still in the plastic wrap) that I got at Front in Sampa 2016. If anyone wants it, or knows someone who is just starting their studies, it would be a good read =)
@daniellucindo well, I live in Uberaba-MG... if there were a way to send it via the post office, I would pay the shipping.
Very cool @daniellucindo
I only ask that once you manage to donate the book, you close the issue =D
And congrats @daniellucindo for the goodwill ^^
I'm from São Paulo! I'm interested. Can you deliver it at one of the metro stations?
Hey
I figured it wouldn't be this quick.
Well, whoever wants it can draw lots for it hehehe #kidding
Leticia, do you really want it?
I work at Berrini and live in Parada Inglesa, North Zone, so I can hand it over at any station on those lines.
If you don't want it and no one else (a beginner) shows up, this book's destination will be Uberaba-MG.
=)
How's it going, Daniel?
I do want it!
I'm really interested in Caelum's courses, but the price always makes me hesitate xD
I usually take the green and blue lines; just tell me the day and time that works best for you and I'll arrange to go pick up the book!
Leticia,
Send me your contact info so we can arrange the delivery :)
My cell: 9 5242-5449
Feel free to message me on WhatsApp, I reply quickly
@leticia0805 @daniellucindo can I close the issue?
Yes, you can :)
We've already arranged the book handoff
I was left wondering: shouldn't it be closed only once the book is actually delivered? Or can it be closed once the handoff is arranged?
Good question.
We can reopen it if there's a problem... what do you think?
Sounds good, Feh =)
The book is already in good hands :)
Nice =)
| gharchive/issue | 2016-07-14T16:52:39 | 2025-04-01T06:38:44.647257 | {
"authors": [
"LFeh",
"Mendrone",
"cmpereirasi",
"daniellucindo",
"leticia0805",
"twobanks",
"willianjusten"
],
"repo": "frontendbr/doe-um-livro",
"url": "https://github.com/frontendbr/doe-um-livro/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
602122826 | [website] Remove app-ads.txt Entries from MobFox
Here are the lines to be removed:
mobfox.com,62070,DIRECT,5529a3d1f59865be
appnexus.com,2637,RESELLER,f5ab79cb980f11d1
tremorhub.com, l85rl-uxscq, RESELLER, 1a4e959a1b50034a
video.unrulymedia.com, 3589813809, RESELLER
pocketmath.com, 29, RESELLER
appnexus.com,2850,RESELLER,f5ab79cb980f11d1
openx.com,539249210,RESELLER,6a698e2ec38604c6
pubmatic.com,156451,RESELLER,5d62403b186f2ace
rhythmone.com,3589813809,RESELLER,a670c89d4a324e47
rubiconproject.com,13132,RESELLER,0bfd66d529a55807
rubiconproject.com,17494,RESELLER,0bfd66d529a55807
xad.com,589,RESELLER,81cbf0a75a5e0e9a
aceex.io,6,RESELLER
app-ads.txt.tpl
-aceex.io,6,RESELLER
-appnexus.com,2637,RESELLER,f5ab79cb980f11d1
-appnexus.com,2850,RESELLER,f5ab79cb980f11d1
-mobfox.com,62070,DIRECT,5529a3d1f59865be
-openx.com,539249210,RESELLER,6a698e2ec38604c6
-pocketmath.com,29,RESELLER
-pubmatic.com,156451,RESELLER,5d62403b186f2ace
-rhythmone.com,3589813809,RESELLER,a670c89d4a324e47
-rubiconproject.com,13132,RESELLER,0bfd66d529a55807
-rubiconproject.com,17494,RESELLER,0bfd66d529a55807
-tremorhub.com,l85rl-uxscq,RESELLER,1a4e959a1b50034a
-video.unrulymedia.com,3589813809,RESELLER
-xad.com,589,RESELLER,81cbf0a75a5e0e9a
| gharchive/issue | 2020-04-17T17:21:07 | 2025-04-01T06:38:44.693862 | {
"authors": [
"gubatron",
"kad3mlia"
],
"repo": "frostwire/frostwire",
"url": "https://github.com/frostwire/frostwire/issues/887",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
732414379 | Enable PHP 8 builds
PHP 8 is almost upon us so we should make sure laravel-cors runs properly on it in Github Actions.
In draft until https://github.com/asm89/stack-cors/pull/83 is merged.
I've tagged stack-cors, so converted this PR to ready for review + rerun tests. Might take a few minutes for the tag to be found though.
I can't seem to retrigger the builds myself for some reason..
Oh phpunit is the culprit. Might need to update that.
Hmm https://github.com/laravel/dusk is not yet PHP8 compatible?
@barryvdh trying to get that fixed in a few. Gonna concentrate on the skeleton atm first 👍
Seems to be passing now, thanks!
| gharchive/pull-request | 2020-10-29T15:25:57 | 2025-04-01T06:38:44.715771 | {
"authors": [
"barryvdh",
"driesvints"
],
"repo": "fruitcake/laravel-cors",
"url": "https://github.com/fruitcake/laravel-cors/pull/511",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
867578581 | Update s3 to 2.16.47
Updates software.amazon.awssdk:s3 from 2.16.45 to 2.16.47.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "software.amazon.awssdk", artifactId = "s3" } ]
labels: library-update, semver-patch
Superseded by #380.
| gharchive/pull-request | 2021-04-26T11:03:12 | 2025-04-01T06:38:44.723958 | {
"authors": [
"scala-steward"
],
"repo": "fs2-blobstore/fs2-blobstore",
"url": "https://github.com/fs2-blobstore/fs2-blobstore/pull/375",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
332046496 | Stopped working in Slack Version 3.2.0?
Hi,
I've been using your script for quite some time now and it has always worked fine. But now, since about a week ago, it's just not rendering anymore.
I'm not sure if it's the new Slack version (I have v3.2.0 installed)?
I already tried everything, like downloading the script again from GitHub, re-running the script, and checking with some basic math that should normally work.
Thanks in advance for any help!
Hi Kilian,
Me and another user both have this issue with Slack v3.2.0 (and now 3.2.1) on Ubuntu 18.04. Do you also use Ubuntu 18.04 or do you use another OS?
It doesn't work on Windows 10 with Slack v3.2.0 either
Hi Laurent,
I'm using MacOS High Sierra (latest version).
Hi @Michielskilian and thanks for the bug report. Please try this script: https://github.com/fsavje/math-with-slack/blob/d385db532d6698b16a79c6d06fe1bdcb52732888/math-with-slack.sh Please let me know how it works.
(Working on Windows version at the moment, @KeAWang )
@KeAWang Could you please try whether the updated script works, and report back to here?
Hi @fsavje, the script you attached works!
Thank you for the quick fix!
Unfortunately the newest version is not working for me in Win10, v3.2.0
A few of the paths aren't working even though the .js files are in the directory. The previous release, v0.2.3 runs without a hitch but none of the math is rendering either way...
@fsavje It still doesn't seem to work on Windows 10. The new script gives the following errors
Using Slack installation at: ...\slack\app-3.2.0\resources\app.asar.unpacked\src\static
The system cannot find the path specified.
FINDSTR: Cannot open ...\slack\app-3.2.0\resources\app.asar.unpacked\src\ssb-interop.js
The system cannot find the path specified.
FINDSTR: Cannot open ...\slack\app-3.2.0\resources\app.asar.unpacked\src\ssb-interop.js
Backup already exists: ...\slack\app-3.2.0\resources\app.asar.unpacked\src\static/ssb-interop.js.mwsbak
Press any key to continue . . .
@Servinjesus1 and @KeAWang Thanks for trying it out. I don't have access to a Windows machine atm, so I can't debug this properly. Could you try this script: https://raw.githubusercontent.com/fsavje/math-with-slack/7d7bc39d2723e8c299bff47d4d80d465d044a1af/math-with-slack.bat Thanks!
Works great now, thanks!
Thanks for checking!
| gharchive/issue | 2018-06-13T15:25:23 | 2025-04-01T06:38:44.731976 | {
"authors": [
"KeAWang",
"LaurentHayez",
"Michielskilian",
"Servinjesus1",
"fsavje"
],
"repo": "fsavje/math-with-slack",
"url": "https://github.com/fsavje/math-with-slack/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2575866603 | 🛑 Play Elemental is down
In b2f2de6, Play Elemental (https://playelemental.seesink.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Play Elemental is back up in 9539122 after 28 minutes.
| gharchive/issue | 2024-10-09T12:59:00 | 2025-04-01T06:38:44.734809 | {
"authors": [
"fseesink"
],
"repo": "fseesink/upptime",
"url": "https://github.com/fseesink/upptime/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Unable to connect
I can't connect to the network through ss; below is the output on the server
Aug 20 20:32:27 INFO -----------------------------------------
Aug 20 20:32:27 INFO Multi-User Shadowsocks Server Starting...
Aug 20 20:32:27 INFO Current Server Version: 3.1.1
Aug 20 20:32:27 INFO Now using MultiUser API as the user interface
Aug 20 20:32:27 INFO Now starting manager thread...
Aug 20 20:32:32 INFO Now starting user pulling thread...
Aug 20 20:32:33 INFO Server Added: P[1025], M[rc4-md5], E[xxx@xxx.com]
Aug 20 20:32:33 INFO Server Added: P[13018], M[rc4-md5], E[xxxx@qq.com]
Aug 20 20:32:37 INFO Now starting user pushing thread...
And the server log shows nothing abnormal.
Check things like iptables and security groups to see whether it's being blocked.
The firewall port probably isn't open..
| gharchive/issue | 2016-08-20T12:44:02 | 2025-04-01T06:38:44.738606 | {
"authors": [
"fsgmhoward",
"shenlw66",
"xxsxx"
],
"repo": "fsgmhoward/shadowsocks-py-mu",
"url": "https://github.com/fsgmhoward/shadowsocks-py-mu/issues/22",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
321788424 | Implement GC for Repo, Build Results and Images
Automatically clean up:
repo
build-results
gce images
Determine how many of each makes the most sense.
repo trimmed down to 5 snapshots.
Only keep around the latest release for the mirrors: https://github.com/fspin-k8s/fspin-infrastructure/pull/31
Only keep around 5 builder images: https://github.com/fspin-k8s/fspin-infrastructure/pull/33
| gharchive/issue | 2018-05-10T02:31:43 | 2025-04-01T06:38:44.773780 | {
"authors": [
"damaestro"
],
"repo": "fspin-k8s/fspin-infrastructure",
"url": "https://github.com/fspin-k8s/fspin-infrastructure/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2724758046 | Uses u32 for answers, today's answer required u64.
I wasted a lot of time because of this :(
It's documented in the readme and should error in non-release mode, but maybe we should just use u64 everywhere.
I would accept a PR that changes this to u64 by default.
| gharchive/issue | 2024-12-07T18:22:57 | 2025-04-01T06:38:44.775045 | {
"authors": [
"MaxenceDC",
"fspoettel"
],
"repo": "fspoettel/advent-of-code-rust",
"url": "https://github.com/fspoettel/advent-of-code-rust/issues/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
145308081 | Output to ES6? (Possible? Worth it?)
Would it be possible to output "pure" ES6 code? (Instead of "ES5 with some ES6 classes"?)
I know Babel is supposed to translate ES6 to ES5. But... just being curious.:)
Why?
I'm including the generated .js files in a project written in ES6. And I already have a build pipeline to convert ES6 to ES5. It would be nice to use only one language for everything.
Currently I'm simply importing the Fable-generated files with
import MyModule from '../imports/MyModule';
and it works nicely.
I don't know if this would have any benefit. Just because. ;)
In the next release it will be possible to output ES6 modules to allow tree shaking with tools like WebPack. I wanted to make Fable as easy to use as possible and that's why I'm including the necessary Babel plugins to compile to ES5 by default. But it shouldn't be difficult to add an option to compile to ES6 (ES5 being the default) like TypeScript has, you only need to omit all Babel plugins but the first four (I've found Babel has problems when you don't transform the property mutators):
https://github.com/fsprojects/Fable/blob/master/src/fable-js/index.js#L127-L134
I'll try to do it for the next release :)
Yeah! After I posted this I did read more about WebPack 2 (currently in beta) planned optimizations and improvements to ES6. Great!
And thanks for the pointers about BabelPlugins. It was extremely valuable for learning more about how Fable works! :+1:
I will play with them. Let's see.. ;)
This has also been implemented in the imports branch with the --ecma argument :)
Note: For consistency with --module argument, es2015 must be used instead of es6 (though we may add es6 as an alternative).
| gharchive/issue | 2016-04-01T23:22:10 | 2025-04-01T06:38:44.782662 | {
"authors": [
"alfonsogarciacaro",
"fdcastel"
],
"repo": "fsprojects/Fable",
"url": "https://github.com/fsprojects/Fable/issues/89",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
466995334 | Remove Android designer files
Removing these files since they are automatically generated each time, and they pollute our commits
/azp run full build
| gharchive/pull-request | 2019-07-11T16:34:02 | 2025-04-01T06:38:44.783840 | {
"authors": [
"TimLariviere"
],
"repo": "fsprojects/Fabulous",
"url": "https://github.com/fsprojects/Fabulous/pull/515",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
[Discussion] Magic mode working in a team? Aligning teammate versions?
Description
Hello there,
in the past we've run into issues many times because we had to tell teammates to update the paket CLI
once that was done we had another issue using magic mode (aka #2656)
Expected behavior
Looking for an idea to align the paket version across developers + CI
we have about 30 developers and 10 build agents, and the number of updates per day can be unpredictable at times
I'm wondering if there's a way to use magic mode across multiple different computers, pinned to a specified version rather than always the latest
It could be an ENV_VAR that, for example, points to a:
server url
shared folder
exposed from a Git repo used by the team
...
Actual behavior
Magic mode always gets the latest (if there's a bug we have to fall back manually and communicate fast + change agent configuration)
Known workarounds
Manually change paket.exe from magic mode to "normal mode"; slow and kind of flaky
You can "lock" the version in your paket.dendencies file:
version 5.86.0-alpha005
source http://....
nuget MyPackage
The "Magic"-Mode-Bootstrapper will download the version locked in the depsfile.
well we would have to align 60 repositories
if we decide to change, we have 60 repos to change (we have about 100 repos just for our team, and 40 of them are yet to be migrated to paket)
I do understand it is a step forward but this is still a huge manual change
Yes I agree that it is not perfect. But is there a reason that all repositories need to use the same version?
/cc @forki what do you think about this scale?
It's more like we need to be sure the CI agents will run with the exact same configuration as any developer
We ended up rolling back manually but nobody had the same version
and even CI had an older version (and it is not easy to change CI versions)
We have a process doing a HASH on paket.exe on CI + a server dedicated to maintaining CI integrity
it ensures that all agents are aligned, not only for paket but also the SDK, etc ...
Another way would be to specify another URL for the magic mode
for example, the CI agents do not have firewall rights to access github.com (this site specifically)
So if you could, for example, tell the magic mode
"please don't use github.com/fsproject/paket/releases, use this other feed"
it would solve everything
It can be done by options in dependencies file.
E.g. line version <version> --force-nuget --nuget-source=<feed>
We use a network share as the source for the paket package in our projects.
sooo paket.exe is released as a NuGet package? and nuget-source is a private NuGet feed that provides paket as a package?
this is not a CLI argument, right?
I don't want to confuse people by asking them to add this param every time
Also, magic mode reads a config file automatically. Put paket.exe.config alongside paket.exe. It allows you to pin the version
@forki as said, I want this to be driven by an external feed, avoiding having to edit the .config 60 times, once for each repo
IIRC there was already an url parameter in the config file. If not we would
accept a pr. This url could provide a blessed version.
edit 60 times the .config for each repo
If you need to synchronize the paket version (and its changes) among many repos, you can set a nuget-feed in each dependencies file and not set a version (also set --max-file-age to 0).
To update paket everywhere you can just add a new version of the paket package to your NuGet feed.
will dig into paket code to see that nuget-feed
thx everyone here :)
Ok seems url is not yet in
https://fsprojects.github.io/Paket/bootstrapper.html#In-application-settings
But I bet it's not hard to add. It's even a C# project ;-)
The reason I added version configuration in the paket.dependencies for magic mode is exactly this case : Synchronizing versions between all devs and our CI while still using magic mode.
BTW having the version pinned per-repository also allows you to build old git commits with old paket versions and get the same result. It can also allow teams to update paket at their own pace instead of forcing "big bang" upgrades, but I can understand that upgrading a big number of repos can be painful. (We're mostly in a monorepo so we sidestep that)
As for CI not having access to github.com it's also our case and we use the already cited --force-nuget --nuget-source= in our "version" line in paket.dependencies to point to our local NuGet cache (ProGet)
You can use the following solution to sync paket versions internally:
Add paket.bootstrapper.exe 5.92.2 or later and rename it to paket.exe to your repos.
Add the following paket.exe.config file as well:
<configuration>
<appSettings>
<add key="ForceNuget" value="True"/>
<add key="IgnoreCache" value="True"/>
<add key="NugetSource" value="http://internal-feed" />
</appSettings>
</configuration>
Now all users will pick you that latest NuGet package that you push to your internal feed.
btw the version line in paket.dependencies can accomplish the same
version -f --force-nuget --nuget-source=http://internal-feed
(version being a misleading name in this uncommon case, it's in fact the command line of the bootstrapper)
| gharchive/issue | 2017-08-26T13:22:14 | 2025-04-01T06:38:44.811452 | {
"authors": [
"forki",
"lexarchik",
"matthid",
"tebeco",
"trondd",
"vbfox"
],
"repo": "fsprojects/Paket",
"url": "https://github.com/fsprojects/Paket/issues/2668",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
466858634 | net471 doesn't default to netstandard but a older net framework
When taking a dependency into a net471 project, I would expect a netstandard2.0 dependency to be favoured over an older (pre-standard) framework version.
Since netstandard2.0 is fully compatible with net471, a project maintainer is unable to support newer features in an older framework version (as seen here: https://github.com/EasyNetQ/EasyNetQ/issues/623#issuecomment-510444144)
This means that if a project maintainer is using a multi target switch to support the newer features, and is publishing these as netstandard2.0 and say net451 paket will always default to the net451 version because it's framework.
Short of asking the maintainer to also publish the netstandard2.0 version as net471 (which isn't always practical), would it not make more sense for paket to use netstandard2.0 with the same level of "penalty" as the directly compatible framework version? Or at least give users a preference to use netstandard over older .NET Framework versions.
This has nothing to do with paket. Target framework preferences are spec'd by NuGet and I doubt they will change them. In any case you should open an issue over there.
so paket isn't the one choosing to use the net451 over netstandard2.0 when both are present in the package?
I'm a little confused, since paket is adding the reference to the net471 project and stating to use the net451 version. for example:
<Choose>
<When Condition="$(TargetFrameworkIdentifier) == '.NETFramework' And $(TargetFrameworkVersion) == 'v4.7.1'">
<ItemGroup>
<Reference Include="EasyNetQ">
<HintPath>..\..\packages\EasyNetQ\lib\net451\EasyNetQ.dll</HintPath>
<Private>True</Private>
<Paket>True</Paket>
</Reference>
</ItemGroup>
</When>
</Choose>
on package: https://www.nuget.org/packages/EasyNetQ/ which has net451 and netstandard2.0
The code is in paket, but the spec is on NuGet, we follow the spec closely.
What I mean is: If you use nuget client or dotnet sdk it will/should reference the same dll.
If we diverge here this change will not be in dotnet sdk style projects as there we only forward to NuGet.
As we don't care too much about legacy projects, we won't make such a radical change for old-style projects only. And I don't see this working on dotnet sdk style projects
Ok, I think I understand.
Thanks for clarifying.
I'm going to speak to the project maintainer and see if we can get a net461 version added with the relevant newer framework features!
| gharchive/issue | 2019-07-11T12:20:45 | 2025-04-01T06:38:44.817457 | {
"authors": [
"BlythMeister",
"matthid"
],
"repo": "fsprojects/Paket",
"url": "https://github.com/fsprojects/Paket/issues/3613",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
151545134 | Revert Netstandard to see if build works
don't merge
@matthid I think I reverted to something that already built. I don't really understand that error. Is that a mono issue?
I would say nunit bug, because appveyor fails as well. Can you try to revert that update as well.
Now it fails for different reasons. So looks like nunit is really broken
yep, looks like either paket bug or not updated unit tests to me...
| gharchive/pull-request | 2016-04-28T04:05:43 | 2025-04-01T06:38:44.819394 | {
"authors": [
"forki",
"matthid"
],
"repo": "fsprojects/Paket",
"url": "https://github.com/fsprojects/Paket/pull/1651",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
99739128 | Skip object identifiers when checking for unused symbols
UserVoice request: http://vfpt.uservoice.com/forums/247560-general/suggestions/8793808-do-not-mark-orange-in-the-right-bar-unused-self-re
Graying out object identifiers could be annoying. Since it has little practical value, we skip it while checking for unused references.
I'm afraid I disagree. What's more, that User Voice has only 4 votes. I like how it works now. Maybe we should ask people?
Sure, please ask around on Twitter and Slack.
@cloudroutine wrote on Slack:
I'd prefer another toggle like "Gray out unused member identifiers" and a toggle for whether unused items show up in the scroll bar
I agree, we should add two options:
Syntax coloring
Gray out unused opens
Gray out unused declarations
Gray out unused self identifiers
Show unused symbols on scroll bar
BTW, what do you think about using "Highlight" instead of "Gray out"?
Highlight is more accurate since this is in the context of syntax highlighting and because the color for unused items doesn't have to be grey.
Gray out unused self identifiers: the option is too fine-grained to support a very specific use case. I wouldn't want to go that way.
Show unused symbols on scroll bar: It might be useful, but it doesn't solve the problem here.
What do you think about using "Highlight" instead of "Gray out"?
Yes, it's better with 'Highlight' as @cloudRoutine said.
If you don't like the idea to add a setting for "Gray out unused self identifiers" and not all people like removing the feature (including me), I suggest to drop this PR.
Please do make it optional. I consider the "this" identifier to be used as soon as a member is declared using the member this.membername syntax. I don't like being told that it isn't being used when it is required by the syntax (it feels like returning a false positive).
this is completely redundant. It's just a variable, and if you don't need it, you should claim about it explicitly, like _self or __. I think putting this everywhere is just old C# habit.
This tweet convinced me https://twitter.com/CarstenK_Dev/status/631013723700903936.
Happy to close this and forget it forever.
| gharchive/pull-request | 2015-08-07T22:14:44 | 2025-04-01T06:38:44.826593 | {
"authors": [
"ashtonkj",
"cloudRoutine",
"dungpa",
"vasily-kirichenko"
],
"repo": "fsprojects/VisualFSharpPowerTools",
"url": "https://github.com/fsprojects/VisualFSharpPowerTools/pull/1070",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1415543864 | Fix for lat/lon 1D array in grib2
Fixes #238
Needs a test
I've tested this with the GEFS files, it does the job.
OK, merging this, but finding a suitable test is still TODO. I don't know if we can persuade xarray to write a grib2 file of this sort.
| gharchive/pull-request | 2022-10-19T20:26:54 | 2025-04-01T06:38:44.828102 | {
"authors": [
"lucien-sim",
"martindurant"
],
"repo": "fsspec/kerchunk",
"url": "https://github.com/fsspec/kerchunk/pull/239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
816405132 | verbose logging in grok_exporter?
First, thanks for creating and maintaining this tool.
Forgive my lack of knowledge, but I am wondering if there is an easy way to pass a --verbose flag (or --debug, or --trace) to the tool, which could help me debug my configuration while I am working on the matchers.
thanks in advance!
Unfortunately there is no good verbose option right now.
If you want to debug grok patterns, there are multiple grok pattern test websites on the Internet where you can test a pattern against example log lines.
For debugging within grok_exporter itself, the best way is to look at the grok_exporter_lines_matching_total built-in metric. I am currently working on improving this (branch built-in-metrics-improvement), but that's work in progress.
Thanks for the update @fstab. Looking forward to improved debug/logging abilities later :)
| gharchive/issue | 2021-02-25T12:47:06 | 2025-04-01T06:38:44.830849 | {
"authors": [
"fstab",
"jesperronn"
],
"repo": "fstab/grok_exporter",
"url": "https://github.com/fstab/grok_exporter/issues/145",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2054850765 | just a suggestion: ad module
Is it possible to put the ad module into yt2009, or at least emulate it, even in Flash form?
it can be enabled by checking emulate_ads in the flags
haven't messed with the ad module as I won't be adding ads into yt2009. leaving this open however as a reminder
Any update? I really want to try out ads on YT2009 (mobile and web), which I'm making a new issue on...
| gharchive/issue | 2023-12-23T17:30:11 | 2025-04-01T06:38:44.833335 | {
"authors": [
"ftde0",
"gigigigi53",
"jackhacksren"
],
"repo": "ftde0/yt2009",
"url": "https://github.com/ftde0/yt2009/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Add feature of translating non-English into Korean
Added language pairs supported by Papago
Currently other -> Korean is possible
(For Korean -> other, it will be available once a language-pair settings tab is added in window.js)
@JahunSeo
Oh Jahun, your code looks good!! 👍 I have a few requests, would you give them a try??
setTranslateQuery and the other apis seem to be at different levels. The apis are functions that use the Papago API, so how about separating those out?
After detect, when the language isn't supported for translation, it would be good to exit via reject and handle the error.
It would be nice if the code inside setTranslateQuery where source and target get decided were tidied up a bit!
Before committing code, it's best to remove console.log calls that aren't needed!
I was surprised, your code was better than I expected! Haha, nice work!!
Hehe.. thank you!! Haha, I'll head to a cafe and fix it right away!!
| gharchive/pull-request | 2018-05-13T03:53:04 | 2025-04-01T06:38:44.847309 | {
"authors": [
"JahunSeo",
"the6thm0nth"
],
"repo": "ftto/amigo",
"url": "https://github.com/ftto/amigo/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
198290414 | Projection support
Read and write only what you need.
Added.
Added.
| gharchive/issue | 2017-01-02T01:55:09 | 2025-04-01T06:38:44.852703 | {
"authors": [
"fugroup"
],
"repo": "fugroup/mongocore",
"url": "https://github.com/fugroup/mongocore/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2149967420 | Filter is too greedy when checking for pre-existing nbsp
Using a no-break space before the word preceding the punctuation prevents punctuation from spacing correctly.
In Markdown, typing the text on the left gives that on the right:
test: --> test : (CORRECT)
1 test: --> 1 test : (CORRECT)
1\ test: --> 1 test: (INCORRECT)
Something like 1\ test: would be a sensible thing to type for example when wanting to keep a number with its unit.
| gharchive/issue | 2024-02-22T21:26:50 | 2025-04-01T06:38:44.855475 | {
"authors": [
"tytyvillus"
],
"repo": "fuhrmanator/pandoc-filter-fr-nbsp",
"url": "https://github.com/fuhrmanator/pandoc-filter-fr-nbsp/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition
Title: ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition
Link: https://arxiv.org/abs/2409.10545
Abstract:
The human face is a silent medium of communication that expresses emotions and thoughts through its expressions. With recent advances in computer vision, facial emotion recognition technology has made remarkable progress, allowing machines to read subtle facial expressions. In this work we propose ResEmoteNet, a new deep learning architecture for facial emotion recognition that combines convolutional neural networks, Squeeze-and-Excitation (SE) blocks, and residual networks. Including SE blocks selectively focuses on important features of the human face, strengthening the feature representation and suppressing less relevant features. This helps reduce loss and improve overall model performance. We also integrate the SE blocks with three residual blocks, allowing the network to learn more complex data representations through deeper layers. Evaluating ResEmoteNet on three open-source databases, FER2013, RAF-DB, and AffectNet, we achieve accuracies of 79.79%, 94.76%, and 72.39%, respectively. The proposed network outperforms state-of-the-art models on all three databases. The ResEmoteNet source code is publicly available at https://github.com/ArnabKumarRoy02/ResEmoteNet.
Paper summary
ResEmoteNet: Bridging Accuracy and Loss Reduction in Facial Emotion Recognition - Paper Summary
Objective: develop a deep learning model that recognizes emotions from facial expressions more accurately
Proposed method: ResEmoteNet (an architecture combining convolutional neural networks, SE blocks, and residual networks)
Role of the SE blocks:
Focus on the important features of the human face
Strengthen the feature representation
Suppress less relevant features
As a result, reduce loss and improve overall model performance
Integration with residual blocks: learn more complex data representations through deeper layers
Evaluation: uses three datasets, FER2013, RAF-DB, and AffectNet
Results:
Achieved accuracy surpassing existing state-of-the-art models on all three datasets
FER2013: 79.79%
RAF-DB: 94.76%
AffectNet: 72.39%
Source code: available at https://github.com/ArnabKumarRoy02/ResEmoteNet
@yukihiko-fuyuki proposed and applied the following labels:
image-classification
loss-function
paper-implementation
| gharchive/issue | 2024-09-18T20:18:32 | 2025-04-01T06:38:44.869806 | {
"authors": [
"fulfulggg"
],
"repo": "fulfulggg/Information-gathering",
"url": "https://github.com/fulfulggg/Information-gathering/issues/307",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2437907157 | 🛑 NGR System Status is down
In be1a36a, NGR System Status (https://my.ngr.com.au/api/v2/web_service/service_test) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NGR System Status is back up in 2bc8ba3 after 18 minutes.
| gharchive/issue | 2024-07-30T13:55:33 | 2025-04-01T06:38:44.895391 | {
"authors": [
"SG2019"
],
"repo": "fullprofile/agridigital-status-monitor",
"url": "https://github.com/fullprofile/agridigital-status-monitor/issues/197",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1171728324 | Xero Scheduled Maintenance In Progress
Xero is currently conducting scheduled maintenance. This outage may result in the delayed processing of invoices status.
Xero is currently conducting scheduled maintenance. This outage may result in the delayed processing of invoices status.
Xero status page - Click here
Xero scheduled maintenance has been completed. Xero status page.
Click here
Xero is currently conducting scheduled maintenance. This outage may result in the delayed processing of invoices status.
Xero status page - Click here
| gharchive/issue | 2022-03-17T00:29:21 | 2025-04-01T06:38:44.897875 | {
"authors": [
"SG2019"
],
"repo": "fullprofile/fullprofile-status-monitor",
"url": "https://github.com/fullprofile/fullprofile-status-monitor/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
993786463 | Add Swagger Filter
Added Swagger Filters to TokensController to enable adding tenant Id to header for requests.
there seem to be some errors with the build, could you check it. And also add in a brief description of what all is covered in this commit . Thanks :)
there seem to be some errors with the build, could you check it. And also add in a brief description of what all is covered in this commit . Thanks :)
Sure. Will do
there seem to be some errors with the build, could you check it. And also add in a brief description of what all is covered in this commit . Thanks :)
Build errors resolved.
Edited Comment to include a description.
| gharchive/pull-request | 2021-09-11T08:35:49 | 2025-04-01T06:38:44.902701 | {
"authors": [
"akema-trebla",
"iammukeshm"
],
"repo": "fullstackhero/dotnet-webapi-boilerplate",
"url": "https://github.com/fullstackhero/dotnet-webapi-boilerplate/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1567157100 | ImportError: cannot import name 'ball_query_ext' from partially initialized module 'mmdet3d.ops.ball_query'
I got this error when running
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0 --canbus ./data
(open-mmlab) goroyeh56@Goros-MacBook-Air mmdetection3d % python3 tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0 --canbus ./data
Traceback (most recent call last):
File "tools/create_data.py", line 6, in <module>
from tools.data_converter import kitti_converter as kitti
File "/Users/goroyeh56/mmdetection3d/tools/data_converter/kitti_converter.py", line 8, in <module>
from mmdet3d.core.bbox import box_np_ops
File "/Users/goroyeh56/mmdetection3d/mmdet3d/core/__init__.py", line 3, in <module>
from .bbox import * # noqa: F401, F403
File "/Users/goroyeh56/mmdetection3d/mmdet3d/core/bbox/__init__.py", line 5, in <module>
from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
File "/Users/goroyeh56/mmdetection3d/mmdet3d/core/bbox/iou_calculators/__init__.py", line 2, in <module>
from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D,
File "/Users/goroyeh56/mmdetection3d/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 6, in <module>
from ..structures import get_box_type
File "/Users/goroyeh56/mmdetection3d/mmdet3d/core/bbox/structures/__init__.py", line 2, in <module>
from .base_box3d import BaseInstance3DBoxes
File "/Users/goroyeh56/mmdetection3d/mmdet3d/core/bbox/structures/base_box3d.py", line 6, in <module>
from mmdet3d.ops.iou3d import iou3d_cuda
File "/Users/goroyeh56/mmdetection3d/mmdet3d/ops/__init__.py", line 6, in <module>
from .ball_query import ball_query
File "/Users/goroyeh56/mmdetection3d/mmdet3d/ops/ball_query/__init__.py", line 1, in <module>
from .ball_query import ball_query
File "/Users/goroyeh56/mmdetection3d/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module>
from . import ball_query_ext
ImportError: cannot import name 'ball_query_ext' from partially initialized module 'mmdet3d.ops.ball_query' (most likely due to a circular import) (/Users/goroyeh56/mmdetection3d/mmdet3d/ops/ball_query/__init__.py)
(open-mmlab) goroyeh56@Goros-MacBook-Air mmdetection3d
Anyone knows how to resolve this?
Thank you!
I have the same problem. It's related to the version of mmdet-3d and mmcv
Could you please provide a suggestion?
Thanks
@whai362 Thanks for your awesome work.
I also ran into this problem. I installed mmdetection3d following the official install tutorial [https://github.com/fundamentalvision/BEVFormer/blob/master/docs/install.md],
mmcv-full = 1.4.0 ,
mmdet = 2.14.0
mmsegmentation = 0.14.1
mmdet3d = v0.17.1
I installed all the requirements without error, but I hit the bug, the same as in this issue.
When I run the Prepare nuScenes Data step, I get the error.
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0 -mini --canbus ./data
the error info is as follows.
Traceback (most recent call last): File "tools/create_data.py", line 6, in <module> from data_converter.create_gt_database import create_groundtruth_database File "/home/ubt2t/AL/BEVFormer/tools/data_converter/create_gt_database.py", line 11, in <module> from mmdet3d.core.bbox import box_np_ops as box_np_ops File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/core/__init__.py", line 3, in <module> from .bbox import * # noqa: F401, F403 File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/core/bbox/__init__.py", line 5, in <module> from .iou_calculators import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D, File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/core/bbox/iou_calculators/__init__.py", line 2, in <module> from .iou3d_calculator import (AxisAlignedBboxOverlaps3D, BboxOverlaps3D, File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/core/bbox/iou_calculators/iou3d_calculator.py", line 6, in <module> from ..structures import get_box_type File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/core/bbox/structures/__init__.py", line 2, in <module> from .base_box3d import BaseInstance3DBoxes File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/core/bbox/structures/base_box3d.py", line 6, in <module> from mmdet3d.ops.iou3d import iou3d_cuda File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/ops/__init__.py", line 6, in <module> from .ball_query import ball_query File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/ops/ball_query/__init__.py", line 1, in <module> from .ball_query import ball_query File "/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/ops/ball_query/ball_query.py", line 4, in <module> from . import ball_query_ext ImportError: cannot import name 'ball_query_ext' from partially initialized module 'mmdet3d.ops.ball_query' (most likely due to a circular import) (/home/ubt2t/AL/BEVFormer/mmdetection3d/mmdet3d/ops/ball_query/__init__.py)
Thanks for your reply, sincerely!!
Same problem here!
Same problem
| gharchive/issue | 2023-02-02T01:26:55 | 2025-04-01T06:38:44.938696 | {
"authors": [
"Aiuan",
"GoroYeh56",
"LadissonLai",
"YoushaaMurhij",
"samueleruffino99"
],
"repo": "fundamentalvision/BEVFormer",
"url": "https://github.com/fundamentalvision/BEVFormer/issues/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1458092131 | Hardcoded .output/public causing issues on Netlify
Hey!
I've run into some issues with this module when deploying a statically generated Nuxt app (using nuxt generate) to Netlify.
This module seems to assume that the output dir is always .output/public in the Nuxt source directory, because sitemaps are basically always written to path.join(nuxtInstance.options.srcDir, '.output/public'), but Nitro recognizes Netlify (probably by checking process.env.NETLIFY), sets NITRO_PRESET to netlify, and that changes the output directory to dist instead of .output/public. I tried to fix this in a PR, but I'm still getting some failing tests.
Could you look into this? Consider my PR just an experiment, I'm not a very experienced open source dev :sweat_smile:.
You can mimic Nitro's behavior on Netlify by running NITRO_PRESET=netlify yarn build-module. If ran, the current tests fail too.
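As a rough sketch of the direction a fix could take (not the module's actual code; the Netlify detection below just mirrors what is described above):
```js
// Hedged sketch: resolve the static output dir instead of hardcoding '.output/public'.
// The 'dist' fallback for Netlify mirrors the behavior described in this issue.
const path = require('path')

function resolvePublicDir (nuxtInstance) {
  // Nitro presets (e.g. NITRO_PRESET=netlify, auto-detected via process.env.NETLIFY)
  // change the output directory, so '.output/public' cannot be assumed.
  if (process.env.NITRO_PRESET === 'netlify' || process.env.NETLIFY) {
    return path.join(nuxtInstance.options.srcDir, 'dist')
  }
  return path.join(nuxtInstance.options.srcDir, '.output/public')
}
```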
Thanks! I will have a look
Hey @d3xter-dev, I edited the original post, because I found out it has to do with the NITRO_PRESET environment variable. You can find all Nitro presets here: https://github.com/unjs/nitro/blob/main/src/presets/index.ts. This is the file where Nitro presets are resolved: https://github.com/unjs/nitro/blob/main/src/options.ts.
Changes look good to me. @d3xter-dev Any chance this can be merged / released?
Tbh this probably needs some extra work if some tests are failing. It just needs an extra set of eyes for a few hours.
Any updates?
As a workaround for now, you can use this guide, replace the query with the dynamic routes.
https://content.nuxtjs.org/guide/recipes/sitemap
import { SitemapStream, streamToPromise } from 'sitemap'
export default defineEventHandler(async (event) => {
const config = useRuntimeConfig()
const links = await $fetch('/api/routes')
const sitemap = new SitemapStream({
hostname: config.public.storyblok.siteUrl,
})
for (const link of links) {
sitemap.write({
url: link,
changefreq: 'monthly'
})
}
sitemap.end()
return streamToPromise(sitemap)
})
same issue here with vercel
at least now I know what's the issue 😅
thanks @Anoesj for the fix
@d3xter-dev is there a plan to merge this? otherwise I need to look for another solution, but this one would be the best
Any plans to merge this?
| gharchive/pull-request | 2022-11-21T15:02:27 | 2025-04-01T06:38:44.945100 | {
"authors": [
"Anoesj",
"ErwinAI",
"andrevferreiraa",
"d3xter-dev",
"jankohlbach",
"mariuscdejong",
"offline-first"
],
"repo": "funkenstudio/sitemap-module-nuxt-3",
"url": "https://github.com/funkenstudio/sitemap-module-nuxt-3/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
332236351 | Discord rss
Find a bot (or make one) to post an RSS feed to #socialfeed or #changelogs
Or maybe even a webhook? (Not sure if it's possible)
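A webhook should work; a minimal sketch is below (the webhook URL and the feed-item shape are placeholders, not real values):
```js
// Minimal illustration of posting an RSS item to a Discord webhook.
// WEBHOOK_URL is a placeholder for the real #socialfeed / #changelogs webhook.
const WEBHOOK_URL = 'https://discord.com/api/webhooks/<id>/<token>';

async function postFeedItem(item) {
  await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content: `${item.title}\n${item.link}` }),
  });
}
```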
Added bot. Awaiting rss edit to test
Bot working
Ticket closed
| gharchive/issue | 2018-06-14T03:07:18 | 2025-04-01T06:38:44.953350 | {
"authors": [
"Bencey"
],
"repo": "funkypenguin/geek-cookbook",
"url": "https://github.com/funkypenguin/geek-cookbook/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1734644572 | 🛑 webpoll (legacy) is down
In 2c71726, webpoll (legacy) (http://webpoll.co.kr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: webpoll (legacy) is back up in 316ca3c.
| gharchive/issue | 2023-05-31T17:07:39 | 2025-04-01T06:38:44.958678 | {
"authors": [
"fureweb-com"
],
"repo": "fureweb-com/upptime.fureweb.com",
"url": "https://github.com/fureweb-com/upptime.fureweb.com/issues/143",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2384825008 | 🛑 wiki.fureweb.com is down
In c250804, wiki.fureweb.com (https://wiki.fureweb.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: wiki.fureweb.com is back up in 64976f0 after 8 minutes.
| gharchive/issue | 2024-07-01T22:18:57 | 2025-04-01T06:38:44.961638 | {
"authors": [
"fureweb-com"
],
"repo": "fureweb-com/upptime.fureweb.com",
"url": "https://github.com/fureweb-com/upptime.fureweb.com/issues/202",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
302142692 | Not Found Response
Hi,
I use livereload on angulardart 5.
pub run livereload --spa on
[INFO] Serving a WebSocket server at ws://localhost:4242
[INFO] Entrypoint: Generating build script...
[INFO] Entrypoint: Generating build script completed, took 324ms
...
[INFO] Build: Succeeded after 215ms with 0 outputs
[INFO] `build_runner` starts serving `on` on port 8080
Browse your web app at http://localhost:8000
Accessing localhost:8000 returns 404.
When I run with pub run build_runner serve it displays the page normally.
From the log:
[INFO] `build_runner` starts serving `on` on port 8080
It started serving the directory on instead of web .
I'm sorry, the document is not clear enough.
To enable SPA, you don't need to specify the option because it's the default. If you want to disable it, here's the way.
pub run livereload --no-spa
| gharchive/issue | 2018-03-05T01:08:04 | 2025-04-01T06:38:44.968584 | {
"authors": [
"furrary",
"iambudi"
],
"repo": "furrary/livereload",
"url": "https://github.com/furrary/livereload/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
284143148 | Problems using Node APIs in Electron renderer
I'm attempting to use fuse-box (2.5.0-beta.1) to bundle the renderer side of my Electron app, which makes use of the native Node modules available in the environment, ie, path, fs, etc.
Even after setting target: 'electron' these modules are empty.
I've read through a lot of issues here on this matter, e.g. the serverBundle option - which appears to have been removed now - and looked through the codebase, at the way these modules check FuseBox.isServer, and that FuseBox.isServer = !isBrowser
It seems to me that the most obvious fix for this is to have both FuseBox.isBrowser & FuseBox.isServer set to true for the 'electron' target, unless isServer has some deeper meaning that I've not discovered yet.
If you agree with this course of action I'd be happy to experiment, test, and submit a PR - or any other suggestions?
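To make the proposal concrete, here is a purely illustrative sketch (not fuse-box's actual internals) of what setting both flags for the 'electron' target would mean:
```js
// Illustrative sketch only, not real fuse-box code: for the 'electron' target,
// treat the bundle as both browser and server so Node built-ins (path, fs, ...)
// and browser code can coexist in the renderer.
function applyTargetFlags(FuseBox, target) {
  FuseBox.isBrowser = target === 'browser' || target === 'electron';
  FuseBox.isServer = target === 'server' || target === 'electron';
}
```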
hi @jollytoad
Thanks for submitting ;-). Let's determine first the cause of the issue. Maybe you could create a repository which reproduces the bug? and we could move forward after that.
And please, use the latest fuse-box@next
Hmm, seemed to be a configuration issue, coming from an ejected 'create-react-app', after stripping out old configs and restarting with something more like the fuse-box-electron-seed project i'm having more luck.
@jollytoad there's still an issue with the server polyfill, this will be fixed soon enough ;-)
| gharchive/issue | 2017-12-22T11:42:17 | 2025-04-01T06:38:44.976555 | {
"authors": [
"jollytoad",
"nchanged"
],
"repo": "fuse-box/fuse-box",
"url": "https://github.com/fuse-box/fuse-box/issues/989",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
153056241 | [requires.io] dependency update on master branch
Reviewed 1 of 1 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved.
Comments from Reviewable
| gharchive/pull-request | 2016-05-04T16:24:48 | 2025-04-01T06:38:44.980598 | {
"authors": [
"mithrandi"
],
"repo": "fusionapp/fusion-index",
"url": "https://github.com/fusionapp/fusion-index/pull/45",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
33251536 | OAuth2 Support
I needed to modify dropbox-api to support OAuth2 and figured I'd submit my changes as a PR. Dropbox mentions in their docs that:
OAuth 1.0 continues to be supported for all API requests, but OAuth 2.0 is now preferred.
This change adds a new config setting auth_type which can be either oauth (default) or oauth2:
Dropbox::API::Config.auth_type = "oauth2" # default is oauth
It also slightly changes the way that Access Tokens are requested when OAuth2 is used:
## Manual Access Token retrieval:
consumer = ::Dropbox::API::OAuth2.consumer(:authorize)
authorize_uri = consumer.authorize_url(client_id: APP_KEY, response_type: 'code')
# open authorize_uri in browser, sign in, grant permission, copy code that is displayed
access_token = consumer.auth_code.get_token('code_from_dropbox')
# Browser-based Access Token retrieval:
consumer = ::Dropbox::API::OAuth2.consumer(:authorize)
authorize_uri = consumer.authorize_url(client_id: APP_KEY, response_type: 'code', redirect_uri: 'https://yoursite.com/dropbox_landing', state: 'optional string')
# redirect user to authorize_uri
# upon return to https://yoursite.com/dropbox_landing?code=SOME_CODE&uid=SOME_ID
access_token = consumer.auth_code.get_token('code_from_query_string')
I'm closing this PR because I accidentally submitted it from the master branch of my fork. Will re-open a new PR using a feature branch.
| gharchive/pull-request | 2014-05-10T23:43:24 | 2025-04-01T06:38:44.999163 | {
"authors": [
"disbelief"
],
"repo": "futuresimple/dropbox-api",
"url": "https://github.com/futuresimple/dropbox-api/pull/47",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
308814281 | Is this what I think it is?
This looks really cool. Is it based on the John Hughes paper?
@deusaquilus
| gharchive/issue | 2018-03-27T04:01:00 | 2025-04-01T06:38:45.015348 | {
"authors": [
"deusaquilus",
"fwbrasil"
],
"repo": "fwbrasil/arrows",
"url": "https://github.com/fwbrasil/arrows/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Misses in physical attacks
Right now misses are completely absent; when introducing a miss chance I propose also adding scaling based on dex - con.
The idea is to give the archer and the warrior a way to influence misses (without allowing a situation where the archer can't be hit at all), while a warrior with inflated dex and con also can't "dodge like a 🐍".
It would be good to put here the formula used to calculate the chance.
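As a purely hypothetical illustration of the proposed dex - con scaling (the base chance, scale factor, and clamp bounds below are invented, not the game's actual values):
```js
// Hypothetical sketch only: every constant here is made up for illustration;
// the real formula is still to be decided.
function missChance(attacker, defender) {
  const base = 0.05;                                    // there is always some chance to miss
  const scale = 0.01 * (defender.dex - attacker.con);   // dex helps dodging, con counters it
  // Clamp so an archer can never become impossible to hit,
  // and a warrior with inflated dex/con can't "dodge like a snake".
  return Math.min(0.5, Math.max(0.02, base + scale));
}
```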
| gharchive/issue | 2023-09-19T12:06:00 | 2025-04-01T06:38:45.019152 | {
"authors": [
"catHD",
"kyvg"
],
"repo": "fwo-online/fwo-tg",
"url": "https://github.com/fwo-online/fwo-tg/issues/293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
849989002 | Completely fix libsvg
brew install --build-from-source libsvg completes on my M1 MacBook Pro
brew reinstall --build-from-source libsvg also completes, after removing CarloCab's additional lines.
[ ] Have you followed the guidelines for contributing?
[ ] Have you checked that there aren't other open pull requests for the same formula update/change?
[ ] Have you built your formula locally with brew install --build-from-source <formula>, where <formula> is the name of the formula you're submitting?
[ ] Is your test running fine brew test <formula>, where <formula> is the name of the formula you're submitting?
[ ] Does your build pass brew audit --strict <formula> (after doing brew install <formula>)?
Please open it as a PR to homebrew-core's repo, not on my branch
| gharchive/pull-request | 2021-04-05T00:02:55 | 2025-04-01T06:38:45.048302 | {
"authors": [
"citelao",
"fxcoudert"
],
"repo": "fxcoudert/homebrew-core",
"url": "https://github.com/fxcoudert/homebrew-core/pull/3",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1487688304 | [Request] Commit 1.19 code
I noticed that the Curseforge page has been updated to 1.19, however this repository has not, so I was wondering if you could commit the changes made in 1.19 if you still have them, thanks!
Just updated to 1.19.4 #3
| gharchive/issue | 2022-12-10T00:26:26 | 2025-04-01T06:38:45.056354 | {
"authors": [
"Thanos1716",
"lunathelemon"
],
"repo": "fxys/Super-Secret-Settings",
"url": "https://github.com/fxys/Super-Secret-Settings/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1364506555 | View team advance request revamp
Note
These were Sandeep's changes, we had to revert due to some issues faced during the release
Previously approved PR - https://github.com/fylein/fyle-mobile-app/pull/1082
Changes I've made:
Used the previously done changes
Added a bunch of refactoring
Fixed the bugs which were faced during testing last time
Note - This is a new PR checked out from the latest master, I've manually added the previous changes to this branch
| gharchive/pull-request | 2022-09-07T11:15:50 | 2025-04-01T06:38:45.062314 | {
"authors": [
"Dimple16"
],
"repo": "fylein/fyle-mobile-app",
"url": "https://github.com/fylein/fyle-mobile-app/pull/1325",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2204862320 | Indexing Error with codeqai on Conda Environment: Continuous Indexing Without Completion
While using the codeqai tool within a conda environment, I encountered an issue during the indexing process where it continuously attempts to index without completion. This problem occurred when I tried to utilize codeqai's search functionality in my project directory. Specifically, the error IndexError: list index out of range was thrown, indicating an issue with handling the document vector indexing. Below are the detailed steps to reproduce, along with the specific environment setup.
Steps to Reproduce:
Installed codeqai using pip within a conda environment.
Ran codeqai configure and configured the tool with the following settings:
Selected "y" for using local embedding models.
Chose "Instructor-Large" for the local embedding model.
Selected "N" for using local chat models and chose "OpenAI" with "gpt-4" as the remote LLM.
Attempted to start the codeqai search by navigating to my project directory (2-006), which includes .m, .mat, and .txt files, and running codeqai search in the terminal.
Received a message indicating no vector store was found for 2-006 and that initial indexing may take a few minutes. Shortly after, the indexing process started but then failed with an IndexError: list index out of range.
Expected Behavior:
The indexing process should be completed, allowing for subsequent searches within the codebase using codeqai.
Actual Behavior:
The application failed to complete the indexing process due to an IndexError in the vector indexing step, specifically indicating a problem with handling the document vectors.
Environment:
codeqai version: 0.0.14
langchain-community version: 0.0.17
sentence-transformers version: 2.3.1
Python version: 3.11
Conda version: 4.12.0
Operating System: Windows (with Conda environment)
Full Terminal Output and Error
{GenericDirectory>}conda activate condaqai-env
(condaqai-env) {GenericDirectory>}codeqai search
Not a git repository. Exiting.
(condaqai-env) {GenericDirectory>}ls
'ls' is not recognized as an internal or external command,
operable program or batch file.
(condaqai-env) {GenericDirectory>}cd 2-006
(condaqai-env) {GenericDirectory}\2-006>codeqai search
No vector store found for 2-006. Initial indexing may take a few minutes.
⠋ 💾 Indexing vector store...Traceback (most recent call last):
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\Edge\anaconda3\envs\condaqai-env\Scripts\codeqai.exe\__main__.py", line 7, in <module>
sys.exit(main())
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\site-packages\codeqai\__main__.py", line 5, in main
app.run()
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\site-packages\codeqai\app.py", line 146, in run
vector_store.index_documents(documents)
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\site-packages\codeqai\vector_store.py", line 34, in index_documents
self.db = FAISS.from_documents(documents, self.embeddings)
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\site-packages\langchain_core\vectorstores.py", line 508, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\site-packages\langchain_community\vectorstores\faiss.py", line 960, in from_texts
return cls.__from(
File "C:\Users\Edge\anaconda3\envs\condaqai-env\lib\site-packages\langchain_community\vectorstores\faiss.py", line 919, in __from
index = faiss.IndexFlatL2(len(embeddings[0]))
IndexError: list index out of range
⠴ 💾 Indexing vector store...
Additional Context:
This issue seems to stem from the vector indexing process within the langchain-community package, possibly due to an empty or malformed document set being processed for vectorization. Given the configuration steps and the use of a conda environment, there might be specific dependencies or configurations that contribute to this problem.
Thanks for that detailed report! I think the cause is probably an empty split set for a document, as you also mentioned already.
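A minimal guard for that case could look like the sketch below (the helper name and the assumption that empty page_content is the culprit are mine, not part of codeqai):
def filter_indexable(documents):
    # Drop documents with no usable text; FAISS.from_documents fails with
    # "IndexError: list index out of range" when the embedding list ends up empty.
    kept = [doc for doc in documents if doc.page_content and doc.page_content.strip()]
    if not kept:
        raise ValueError("Nothing to index: all parsed documents were empty")
    return kept
# usage sketch inside index_documents:
# self.db = FAISS.from_documents(filter_indexable(documents), self.embeddings)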
| gharchive/issue | 2024-03-25T04:30:19 | 2025-04-01T06:38:45.124539 | {
"authors": [
"TeomanEgeSelcuk",
"fynnfluegge"
],
"repo": "fynnfluegge/codeqai",
"url": "https://github.com/fynnfluegge/codeqai/issues/38",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
161643746 | Question: Using name and column type guessers
I am not using any ORM libraries. Is it possible to use the name and column type guessers with just the Faker library? Something like $faker->book should guess that it should give a book name (which is some sentence).
Check out https://github.com/nelmio/alice for a complement to Faker.
Other than that, good idea for a feature! Feel free to work on it.
| gharchive/issue | 2016-06-22T10:15:37 | 2025-04-01T06:38:45.126472 | {
"authors": [
"Achilles-96",
"fzaninotto"
],
"repo": "fzaninotto/Faker",
"url": "https://github.com/fzaninotto/Faker/issues/941",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1742587981 | 7 months is a long time, is there a release coming to work with flutter 3.10.x and dart 3 please
thanks so much :)
Sure! Firstly, try using the master branch before the release. Btw, did you know that new features have been added to the master branch ;) Mainly:
run the same test code using the host machine without a simulator, much faster (e.g. 10x) and more stable
a simple yet useful monkey (to be open sourced)
Because of the new features, I hope I can find a bit of time to update the README and then release the new version.
is this project dead?
NO! I use it personally everyday!
v1.3.0 is published :)
| gharchive/issue | 2023-06-05T20:40:38 | 2025-04-01T06:38:45.128963 | {
"authors": [
"ahmdt",
"fzyzcjy",
"neiljaywarner"
],
"repo": "fzyzcjy/flutter_convenient_test",
"url": "https://github.com/fzyzcjy/flutter_convenient_test/issues/341",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
759776590 | Fix tls_certificate_check build errors on OTP 20.1+, when on top of macOS Big Sur
As described under tls_certificate_check's issue #3.
Released under 1.13.1 (published to Hex as well.)
| gharchive/issue | 2020-12-08T20:59:24 | 2025-04-01T06:38:45.130402 | {
"authors": [
"g-andrade"
],
"repo": "g-andrade/locus",
"url": "https://github.com/g-andrade/locus/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1298987393 | 🛑 Hacker News is down
In b60bb8e, Hacker News (https://news.ycombinator.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Hacker News is back up in 276a6fe.
| gharchive/issue | 2022-07-08T12:55:06 | 2025-04-01T06:38:45.150594 | {
"authors": [
"g33k247"
],
"repo": "g33k247/awesome_uptime",
"url": "https://github.com/g33k247/awesome_uptime/issues/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1931423050 | 🛑 Hacker News is down
In ede4033, Hacker News (https://news.ycombinator.com) was down:
HTTP code: 502
Response time: 86 ms
Resolved: Hacker News is back up in 29e9bdf after 9 minutes.
| gharchive/issue | 2023-10-07T16:57:24 | 2025-04-01T06:38:45.152993 | {
"authors": [
"g33k247"
],
"repo": "g33k247/awesome_uptime",
"url": "https://github.com/g33k247/awesome_uptime/issues/317",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2506883708 | Functionality check
The console bot works fine
In the Telegram bot the buttons for some reason do not respond; they hang, and the repeat count does not change. After that, other messages are not handled either, and it sends the buttons again
Can you check?
Checked. It works. I made a clone of the repository and ran it following the instructions. Please check once more.
| gharchive/issue | 2024-09-05T06:28:28 | 2025-04-01T06:38:45.155500 | {
"authors": [
"gKrokod",
"roman-bodavskiy"
],
"repo": "gKrokod/botReborn",
"url": "https://github.com/gKrokod/botReborn/issues/11",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
410552154 | Editable display name for organizations and collections
Feature Request
Desired behaviour
Organizations and Collections would have an editable display name that is separate from the UUID and url. This display name should be unique (given on a first come, first served basis), and allow for special characters and spaces. The display name should be editable on the user interface for organization/collections maintainers.
┆Issue is synchronized with this Jira Story
┆Sprint: Seabright Sprint 3 Electric
┆Issue Number: DOCK-516
Just to clarify, will the display name ever be used to link directly to an organisation?
Also, this was what I had in mind for displaying the display name alongside the main name
Ah Twitter. That's actually a good example. Twitter handles don't change, but you do see the display name change in lists, replies, and the like.
What should we allow? alphanumeric, spaces, and punctuation (\p{Punct})
During review, one issue:
Expected the non-display names (just called "names") to be used in the URLs when browsing around normally (currently using database ids)
Expected display name to be used when browsing (seems working)
What should we allow? alphanumeric, spaces, and punctuation (\p{Punct})
Was this either specified? I'm finding that only some special characters and punctuation are accepted. Good news: consistent between organizations and collections, and also consistent between UI and swagger.
Accepted: &(),'-_
Rejected: `~!@#$%^*+={}[]|.?<>;:"
@Ldcabansay Denis and I decided on the accepted &(),'-_ and not all punctuation. The logic was that it is better to be more restrictive at first and slowly relax constraints than to go the other way. I had updated the constraints in the code but I forgot to update them here in this issue, thanks for looking into it.
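For illustration, the agreed constraint corresponds roughly to a pattern like the following (Python sketch for readability only; the actual validation lives in the Java web service, so the exact rule there may differ):
import re
# alphanumerics, spaces, and the accepted punctuation subset &(),'-_
DISPLAY_NAME = re.compile(r"^[A-Za-z0-9 &(),'_-]+$")
def is_valid_display_name(name):
    return bool(DISPLAY_NAME.fullmatch(name))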
Verified that this works on 1.6.0-beta.3
| gharchive/issue | 2019-02-15T00:25:03 | 2025-04-01T06:38:45.287072 | {
"authors": [
"Ldcabansay",
"agduncan94",
"denis-yuen"
],
"repo": "ga4gh/dockstore",
"url": "https://github.com/ga4gh/dockstore/issues/2119",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
188130243 | Replace call to run_tests in documentation
In http://ga4gh-reference-implementation.readthedocs.io/en/latest/installation.html#installing-the-development-version-on-mac-os-x
Under Test and run, when I ran the script for testing it wouldn't run, and complained IOError: [Errno 2] No such file or directory: u'.travis.yml'
That should probably just say python -m nose tests to run tests instead. That script isn't in the source tree, is it?
The ga4gh_common package should install a executable called ga4gh_run_tests which replaces the scripts/run_tests.py scripts
Closed with https://github.com/ga4gh/ga4gh-server/commit/77d42db46268875549511edfffab2dca22528c5e
| gharchive/issue | 2016-11-08T23:15:10 | 2025-04-01T06:38:45.289905 | {
"authors": [
"achave11",
"david4096",
"dcolligan"
],
"repo": "ga4gh/ga4gh-server",
"url": "https://github.com/ga4gh/ga4gh-server/issues/1463",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
19482587 | Update to yajl 2.0
It would be great if you could update to yajl 2.0.
Released new pod as YAJLO. See new README.
| gharchive/issue | 2013-09-14T00:05:17 | 2025-04-01T06:38:45.325109 | {
"authors": [
"gabriel",
"rdingman"
],
"repo": "gabriel/yajl-objc",
"url": "https://github.com/gabriel/yajl-objc/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
856237479 | sectorTrend and industryTrend not working
The output is empty for TSLA, AAPL, etc.: {"industryTrend":{"maxAge":1,"symbol":null,"estimates":[]}}
Please, please, please, fill out the issue template! It exists for a reason! Please open a new issue with the issue template filled out! Now nobody has any idea about what the problem is!
Quick question, though: How did you open a new issue without a label? You shouldn't be able to.
This is what the API gives us. Go to https://query2.finance.yahoo.com/v10/finance/quoteSummary/AAPL?modules=sectorTrend and you will get the same output. This is not our problem. This is Yahoo's problem. Maybe they removed the submodule. We have no control over it.
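If you want to inspect what Yahoo itself returns for a module, a quick check along these lines works (Python sketch for convenience; the library itself is Node, and Yahoo may require browser-like headers, so treat this as illustrative only):
import requests
url = "https://query2.finance.yahoo.com/v10/finance/quoteSummary/AAPL"
resp = requests.get(url, params={"modules": "sectorTrend"},
                    headers={"User-Agent": "Mozilla/5.0"})
# The payload is wrapped as {"quoteSummary": {"result": [...], "error": ...}}
print(resp.json()["quoteSummary"]["result"])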
Hey @GameGC, thanks for your issue. I believe we found the problem why your issue didn't have any template, so don't worry about that. As @PythonCreator27 said, however, all this library does is make it easier to consume data from Yahoo, and certain data is only available for particular stocks / symbols / markets.
As Yahoo's API isn't a public service, there's no documentation about what we can expect where (but fortunately, this library goes to great lengths to ensure data is always returned in a consistent format). However, if you figure out any patterns in this regard, and feel inclined to add it to the wiki here on this repository, I think many other users could benefit from it too.
Thanks and good luck!
| gharchive/issue | 2021-04-12T18:32:38 | 2025-04-01T06:38:45.341102 | {
"authors": [
"GameGC",
"PythonCreator27",
"gadicc"
],
"repo": "gadicc/node-yahoo-finance2",
"url": "https://github.com/gadicc/node-yahoo-finance2/issues/131",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
965210518 | Failed validation: #/definitions/SearchResult
Validation Error
Minimal Reproduction
y2.search("AMZN").then(d=> d.quotes);
Symbol(s) that it happened for
AMZN
Error Message
> The following result did not validate with schema: #/definitions/SearchResult
[
{
instancePath: '',
schemaPath: '#/additionalProperties',
keyword: 'additionalProperties',
params: { additionalProperty: 'screenerFieldResults' },
message: 'must NOT have additional properties'
},
{
instancePath: '',
schemaPath: '#/additionalProperties',
keyword: 'additionalProperties',
params: { additionalProperty: 'timeTakenForScreenerField' },
message: 'must NOT have additional properties'
}
]
This may happen intermittently and you should catch errors appropriately.
However: 1) if this recently started happening on every request for a symbol
that used to work, Yahoo may have changed their API. 2) If this happens on
every request for a symbol you've never used before, but not for other
symbols, you've found an edge-case. Please see if anyone has reported
this previously:
https://github.com/gadicc/node-yahoo-finance2/issues?q=is%3Aissue+Failed%20validation%3A%20%23%2Fdefinitions%2FSearchResult
or open a new issue (and mention the symbol):
https://github.com/gadicc/node-yahoo-finance2/issues/new?labels=bug%2C+validation&template=validation.md&title=Failed%20validation%3A%20%23%2Fdefinitions%2FSearchResult
For information on how to turn off the above logging or skip these errors,
see https://github.com/gadicc/node-yahoo-finance2/tree/devel/docs/validation.md.
Uncaught FailedYahooValidationError: Failed Yahoo Schema validation
at Object.validate [as default] (/Users/aronrodrigues/workspace/tribie-app/functions/node_modules/yahoo-finance2/dist/cjs/src/lib/validateAndCoerceTypes.js:194:15) {
result: {
explains: [],
count: 10,
quotes: [ [Object], [Object], [Object], [Object], [Object], [Object] ],
news: [ [Object], [Object], [Object], [Object] ],
nav: [],
lists: [],
researchReports: [],
screenerFieldResults: [],
totalTime: 83,
timeTakenForQuotes: 478,
timeTakenForNews: 700,
timeTakenForAlgowatchlist: 400,
timeTakenForPredefinedScreener: 400,
timeTakenForCrunchbase: 400,
timeTakenForNav: 400,
timeTakenForResearchReports: 0,
timeTakenForScreenerField: 0
},
errors: [
{
instancePath: '',
schemaPath: '#/additionalProperties',
keyword: 'additionalProperties',
params: [Object],
message: 'must NOT have additional properties'
},
{
instancePath: '',
schemaPath: '#/additionalProperties',
keyword: 'additionalProperties',
params: [Object],
message: 'must NOT have additional properties'
}
]
}
// Returned quotes:
[
{
exchange: 'NMS',
shortname: 'Amazon.com, Inc.',
quoteType: 'EQUITY',
symbol: 'AMZN',
index: 'quotes',
score: 31857500,
typeDisp: 'Equity',
longname: 'Amazon.com, Inc.',
isYahooFinance: true
},
{
exchange: 'MEX',
shortname: 'AMAZON COM INC',
quoteType: 'EQUITY',
symbol: 'AMZN.MX',
index: 'quotes',
score: 20328,
typeDisp: 'Equity',
longname: 'Amazon.com, Inc.',
isYahooFinance: true
},
{
exchange: 'NEO',
shortname: 'AMAZON.COM CDR (CAD HEDGED)',
quoteType: 'EQUITY',
symbol: 'AMZN.NE',
index: 'quotes',
score: 20152,
typeDisp: 'Equity',
isYahooFinance: true
},
{
exchange: 'BUE',
shortname: 'AMAZON COM INC',
quoteType: 'EQUITY',
symbol: 'AMZN.BA',
index: 'quotes',
score: 20064,
typeDisp: 'Equity',
longname: 'Amazon.com, Inc.',
isYahooFinance: true
},
{
exchange: 'OPR',
shortname: 'AMZN Aug 2021 3300.000 put',
quoteType: 'OPTION',
symbol: 'AMZN210813P03300000',
index: 'quotes',
score: 20026,
typeDisp: 'Option',
isYahooFinance: true
},
{
exchange: 'OPR',
shortname: 'AMZN Aug 2021 3400.000 call',
quoteType: 'OPTION',
symbol: 'AMZN210813C03400000',
index: 'quotes',
score: 20025,
typeDisp: 'Option',
isYahooFinance: true
}
]
Environment
Node
Node version 16.5.0
Npm version: 6.14.13
Library version 1.14.3
Additional Context
Hey @aronrodrigues, thanks for reporting. Looks like Yahoo just added this field. This is fixed in our devel branch, but since it's very new, I'm not releasing yet until we've had a little more experience with it.
In particular, we're accepting { screenerFieldResults?: Array<any>; } since we have no data yet on what this new field is meant to contain.
Hey @aronrodrigues, v1.14.4 partially fixes this, so that it no longer throws an error, but we still don't actually know what this data looks like. Will track further developments in #259, but closing at least this validation error for now. Thanks again for reporting!
| gharchive/issue | 2021-08-10T18:03:30 | 2025-04-01T06:38:45.346839 | {
"authors": [
"aronrodrigues",
"gadicc"
],
"repo": "gadicc/node-yahoo-finance2",
"url": "https://github.com/gadicc/node-yahoo-finance2/issues/255",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
197447269 | How to use Google Light?
I want to use Google Light
{
"plugins": ["prism", "-highlight"]
}
"pluginsConfig": {
"prism": {
"css": [
"syntax-highlighting/assets/css/prism/prism-base16-google.light.css"
]
}
}
But I get the error:
Error: Cannot find module 'syntax-highlighting/assets/css/prism/prism-base16-google.light.css'
Did you npm i atelierbram/syntax-highlighting -D?
I installed atelierbram/syntax-highlighting from GitHub and it works, thank you!
npm install https://github.com/atelierbram/syntax-highlighting/tarball/master
| gharchive/issue | 2016-12-24T04:12:16 | 2025-04-01T06:38:45.356496 | {
"authors": [
"kiwenlau",
"robmcguinness"
],
"repo": "gaearon/gitbook-plugin-prism",
"url": "https://github.com/gaearon/gitbook-plugin-prism/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1679310252 | fix bug 2370
fixes #2370
Can you point me to where to update the docs?
Thanks for the contribution!
Can you point me to where to update the docs?
Docs can be updated in the /// rustdoc above the relevant code items
| gharchive/pull-request | 2023-04-22T03:46:26 | 2025-04-01T06:38:45.390510 | {
"authors": [
"elizabethdinella",
"prestwich"
],
"repo": "gakonst/ethers-rs",
"url": "https://github.com/gakonst/ethers-rs/pull/2371",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
118341976 | Include the package pkg-config on role.
This is necessary for the correct installation of many Galaxy tools, and it is also listed on the https://wiki.galaxyproject.org/Admin/Config/ToolDependenciesList page as a required package.
Thanks.
| gharchive/pull-request | 2015-11-23T09:10:53 | 2025-04-01T06:38:45.400597 | {
"authors": [
"afgane",
"fabiorjvieira"
],
"repo": "galaxyproject/ansible-galaxy-os",
"url": "https://github.com/galaxyproject/ansible-galaxy-os/pull/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
465275656 | Repair of a hand-on box
Repairing a box, Issue #1467
thanks!!
| gharchive/pull-request | 2019-07-08T14:01:36 | 2025-04-01T06:38:45.468543 | {
"authors": [
"Delphine-L",
"shiltemann"
],
"repo": "galaxyproject/training-material",
"url": "https://github.com/galaxyproject/training-material/pull/1503",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
673628570 | 101 for everyone: fix workflow configuration & licensing concerns
The customization of the scatterplot steps that needs to be done by the
user in the workflow run dialog was described incompletely, which caused
problems in trainings when users selected column 4 both for plotting and
for grouping. The updated version explains in more detail the purpose of
the customization and clearly states which columns need to be selected
for plotting and grouping.
In addition, the "4Cs of diamonds" image got removed from the tutorial
(and is now only linked to) because of licensing concerns.
fixes #1972
@annefou @jennaj does this look good to you?
Thanks!
| gharchive/pull-request | 2020-08-05T15:27:16 | 2025-04-01T06:38:45.470707 | {
"authors": [
"bgruening",
"wm75"
],
"repo": "galaxyproject/training-material",
"url": "https://github.com/galaxyproject/training-material/pull/2023",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
381419746 | How to add a coupon in my order?
I tried to use one of the coupons from the list. In this case, I'm using coupon number 9175, which is "Any Large Specialty Pizza". The number is supposed to retrieve the product code; but the add_coupon function of the Order class says missing 1 required positional argument: 'code'
What am I missing here?
@zedin27 Could you share some code that would let me reproduce your issue? Everything you need to do from from pizzapi import * to Order.add_coupon() would be very helpful.
@gamagori sure thing. I realized what my mistake was after noticing I had missed one tiny line in the code. I forgot to have an order variable to store what I'm supposed to order before adding the coupon. However, I encountered another error after trying to add the coupon by doing order.add_coupon(9103), which displays a KeyError: 9013. This is what I have in my Python file (this is my first time playing with Python lol):
from pizzapi import *
def ordertest():
    zeid = Customer('Zeid', 'Tisnes', 'zeidtisnes@gmail.com', '5555555555')
    address = Address('ur address here', 'ur city', 'UR', '00000')
    local_dominos = address.closest_store(zeid)
    menu = local_dominos.get_menu()
    order = Order(local_dominos, zeid, address)
    order.add_coupon(9103) #1 Medium 3 Topping Pizza (here is where it complains)
Error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ztisnes/Desktop/pizzapi/pizzapi/order.py", line 52, in add_coupon
item = self.menu.variants[code]
KeyError: 9118
You can remove ordertest and just write everything I have from that function in the interpreter.
@zedin27 I see what's going on - this is a code issue, not a usage issue.
In the add_coupon() function, the item = self.menu.variants[code] is looking for the coupon ID that you're passing it, but it's looking for it in the 'variants' part of the menu, rather than the 'coupons' part of the menu.
You should be able to fix this locally by changing that line to item = self.menu.coupons[code]. I need to get some other things in place before I can fix and test this properly, but I'll be sure to let you know once it's good to go.
I made a PR for that specifically. Let me know when it is all good :). Thank you!
That still won't work, as menu.coupons just returns a list of the coupon objects...
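Given that, the lookup would need to scan the list instead of indexing a dict. Something like this sketch (the code attribute name on the coupon objects is an assumption on my part):
def find_coupon(menu, code):
    # menu.coupons is assumed to be a list of coupon objects rather than a dict,
    # so scan it for the matching coupon code (e.g. "9103").
    for coupon in menu.coupons:
        if str(getattr(coupon, "code", "")) == str(code):
            return coupon
    raise KeyError(code)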
| gharchive/issue | 2018-11-16T02:25:37 | 2025-04-01T06:38:45.487415 | {
"authors": [
"bryceswarm",
"gamagori",
"zedin27"
],
"repo": "gamagori/pizzapi",
"url": "https://github.com/gamagori/pizzapi/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1152661540 | Receipt: what should we do about the representative's job title and the proviso line? Neither exists in the DB at the moment.
Incidentally, is it fine for the receipt number to be the invoice number?
For now, the invoice number is used provisionally.
The job title is said to be unnecessary.
The proviso line should indeed be made editable. In SohoLife it can be entered freely at print time, and by default the subject line is used.
Using the invoice number as the receipt number is not a problem.
So, for now, we will do the following:
① The proviso line will be the subject line.
(At this point, entering it at print time is not supported.)
② The receipt number is the same as the invoice number.
③ The job title field is removed.
| gharchive/issue | 2022-02-27T02:26:30 | 2025-04-01T06:38:45.489616 | {
"authors": [
"gamasenninn",
"monomonosu"
],
"repo": "gamasenninn/soho-caddie",
"url": "https://github.com/gamasenninn/soho-caddie/issues/557",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
715982079 | Error bars
(With the danger that you take away my github account, I'll continue starting issues, even for minor stuff)
Noticed that we have to think as well about error bars.
Looking at the following plot make me think that there are some missing? Would that be difficult?
Same for effective focal length, d80, ...
I agree.
The difficulty really depends on the specifics of each plot.
In this case, the eff area comes from a fraction of photons detected/simulated. A binomial uncertainty should give what we want.
I'll implement that.
It's important to note that it won't always be so easy nor worth it. In this case, the eff area is basically flat, which makes very small fluctuations look so ugly. If the error calculation here was complicated, I'd say it's not worth it because it's in general very small.
Agree - don't put a huge amount of work into it. I didn't notice the very suppressed y-axis, which amplifies the flucutations.
Errors bars should be wherever they make sense and can be achieved with reasonable effort.
Ok.
I think it's better to implement this ones later. I'll finish going through the documentation issues first.
I can help implementing error bars here if you wish. Also, I am not sure the fluctuations are that small (1-2 m^2 is non-negligible no?).
Orel is actually right - where does the uncertainty of 1 m^2 come from? Although this is a ray-tracing experiment, the result should be almost deterministic. Of course we should see partly the imprint of the structure / beams etc.
You don't see the imprint of the structure because we do not simulate it in sim_telarray. To include it we use the telescope transmission function, but for the LST (which is what I assume is simulated, should add a label to the plot), that function is flat.
However, shouldn't we see the imprint of the camera? That we do simulate I think.
I suggest to close this issue.
The requirement on having uncertainties on all results is a generic task and written down in the requirements and concept document.
| gharchive/issue | 2020-10-06T19:58:27 | 2025-04-01T06:38:45.556513 | {
"authors": [
"GernotMaier",
"RaulRPrado",
"orelgueta"
],
"repo": "gammasim/simtools",
"url": "https://github.com/gammasim/simtools/issues/79",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2645143147 | Gang Huo - Submission of assignment 2 questionnaire design part_a
… for sampling course
What changes are you trying to make? (e.g. Adding or removing code, refactoring existing code, adding reports)
Completed all the tasks in assignment 2 (questionnaire design part a) of the sampling course.
What did you learn from the changes you have made?
Based on scenario #1, I described the survey purpose, target population, sampling frame, and sampling strategy of the survey, and designed some survey questions. I learned how to design survey questions.
Was there another approach you were thinking about making? If so, what approach(es) were you thinking of?
Were there any challenges? If so, what issue(s) did you face? How did you overcome it?
How were these changes tested?
A reference to a related issue in your repository (if applicable)
Checklist
[ ] I can confirm that my changes are working as intended
A2 Observational units missing, please be more specific about sampling units
I don't understand what you mean by observational units missing. Can you please elaborate on that? Is my assignment 2 complete or not? Please confirm. Thanks
A2 Observational units missing, please be more specific about sampling units
Hi Amanda, I have made amendments to my answer based on your comments. Please confirm whether my assignment 2 is complete or not. Looking forward to your early reply. Thanks
| gharchive/pull-request | 2024-11-08T20:56:18 | 2025-04-01T06:38:45.610812 | {
"authors": [
"ganghuo2024"
],
"repo": "ganghuo2024/sampling",
"url": "https://github.com/ganghuo2024/sampling/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
133618107 | improve the compatibility for python 3
For Python 3 (I was using Python 3.4), there are two cases, as follows, where flask-swagger throws exceptions. This commit fixes them.
python code: swagger(app, template=a_template)
command line: flaskswagger -h
Thanks!
| gharchive/pull-request | 2016-02-15T03:58:31 | 2025-04-01T06:38:45.612342 | {
"authors": [
"atlithorn",
"wushaobo"
],
"repo": "gangverk/flask-swagger",
"url": "https://github.com/gangverk/flask-swagger/pull/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
345189091 | Question No.3 - Answer example incorrect
The function defined in the prototype function displayIncreasedSalary() is trying to call the private function increaseSalary() defined in the constructor function. This doesn't work.
I don't think we need that section of code anyway
| gharchive/issue | 2018-07-27T11:19:31 | 2025-04-01T06:38:45.615636 | {
"authors": [
"ganqqwerty",
"mavrik"
],
"repo": "ganqqwerty/123-Essential-JavaScript-Interview-Questions",
"url": "https://github.com/ganqqwerty/123-Essential-JavaScript-Interview-Questions/issues/45",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2355429790 | Suggestion | Feature enhancement: path optimization
Suggestion: when displaying the frontend, the path could hide the "01@" prefix
Suggestion: when displaying the frontend, the path could hide the "01@" prefix
Hmm, the current design is still fairly simple
| gharchive/issue | 2024-06-16T05:12:50 | 2025-04-01T06:38:45.618943 | {
"authors": [
"echs-top",
"gaowei-space"
],
"repo": "gaowei-space/markdown-blog",
"url": "https://github.com/gaowei-space/markdown-blog/issues/69",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
808847076 | Reproducing the hand-written Gauss-Newton PnP solver from ch7 in Python: it fails to converge and falls into a local optimum
Happy New Year, Dr. Gao @gaoxiang12 and everyone else reading this issue. I am trying to hand-write, in Python, the Gauss-Newton PnP solver introduced in the 14 Lectures book, but I have run into some problems: 1. It is very dependent on the initial value of the camera extrinsics: if the extrinsics are initialized to [1,0,0,0;0,1,0,0;0,0,1,0;0,0,0,1], i.e. the default value the sophuspy library uses when constructing an SE3, the iteration cannot converge and the reprojection error grows. 2. If I increase the iteration count and remove the break condition, it does converge, but it falls into a local optimum and differs considerably from the result cv2 computes.
(see the code
R = sp.SO3(R_init)
pose = sp.SE3(R.matrix(), t_init)
# pose = sp.SE3()
# if I use pose = sp.SE3() to initialize pose, the iteration cannot converge)
# If pose = sp.SE3(R.matrix(), t_init) is commented out and # pose = sp.SE3() is uncommented instead, the code is (I believe) identical to the one in Dr. Gao's book / on GitHub.
If the extrinsics (R_init, t_init) solved by cv2 are used as the initial value, the iteration converges normally.
I mainly referenced the function at lines 172~244 of Dr. Gao's https://github.com/gaoxiang12/slambook2/blob/master/ch7/pose_estimation_3d2d.cpp
and used two libraries: sophuspy (https://github.com/craigstar/SophusPy) and numpy.
Although iterative methods are sensitive to the initial value and can fall into local optima, I think what I am seeing is not normal. My reasoning: my solvePnP call sets nothing special, so it should be using the LM method; if cv2 can obtain a correct result, the Gauss-Newton method should not be off by that much.
Dr. Gao and everyone, please help me check whether there is an error in my function. My complete code is as follows:
import sophus as sp
import numpy as np
import math
import cv2
cam1_matrix = np.array(([330.6,0,326.8336],[0,330.6,251.8424],[0,0,1]),dtype=np.double)
object_3d = np.array(([-8.6, 147, 31.2],
[8.6, 147, 31.2],
[8.6, 147, 14],
[-8.6, 147, 14]),
dtype=np.double)
object_2d = np.array(([306.0,167.0],
[381.0,165.0],
[385.0,239.0],
[307.0,241.0]),
dtype=np.double)
dist_coefs1 = np.array([-0.302587121740377,0.068024115878877,0.001645011282553,0.002976587367681,0],dtype=np.double)
found, rvec, tvec = cv2.solvePnP(object_3d, object_2d, cam1_matrix, dist_coefs1)
rotM = cv2.Rodrigues(rvec)[0]
cvresult=sp.SE3(rotM,tvec)
print(cvresult)
def gaussnewtonpnp(points_3d, points_2d, K, R_init, t_init):
    iterations = 20
    cost = 0.0
    last_cost = 0.0
    fx = K[0][0]
    fy = K[1][1]
    cx = K[0][2]
    cy = K[1][2]
    R = sp.SO3(R_init)
    pose = sp.SE3(R.matrix(), t_init)
    # pose = sp.SE3()
    # if I use pose = sp.SE3() to initialize pose, the iteration cannot converge
    iter_num = 0
    while iter_num < iterations:
        H = np.zeros((6, 6), dtype=np.double)
        b = np.zeros((6, 1), dtype=np.double)
        cost = 0
        i = 0
        # compute cost
        while i < (np.size(points_3d) / 3):
            pc = pose * points_3d[i]
            # print(pc)
            inv_z = 1.0 / pc[2]
            inv_z2 = inv_z * inv_z
            proj = np.array((fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy), dtype=np.double)
            # print(proj)
            e = points_2d[i] - proj
            cost += np.linalg.norm(e, 2) * np.linalg.norm(e, 2)
            # print(cost)
            J = np.array(([-fx * inv_z, 0, fx * pc[0] * inv_z2, fx * pc[0] * pc[1] * inv_z2,
                           -fx - fx * pc[0] * pc[0] * inv_z2, fx * pc[1] * inv_z],
                          [0, -fy * inv_z, fy * pc[1] * inv_z2, fy + fy * pc[1] * pc[1] * inv_z2,
                           -fy * pc[0] * pc[1] * inv_z2, -fy * pc[0] * inv_z]
                          ), dtype=np.double)
            H += np.dot(J.T, J)
            b = np.dot(-J.T, e)
            i += 1
        dx = np.zeros((6, 1), dtype=np.double)
        dx = np.linalg.solve(H, b)
        # print(dx)
        # if math.isnan(dx):
        #     print('result is nan')
        #     break
        # if iter_num > 0 and cost >= last_cost:
        #     print('update is not good')
        #     break
        # update estimation
        pose = sp.SE3.exp(dx) * pose
        # print(sp.SE3.exp(dx))
        last_cost = cost
        if np.linalg.norm(dx) < 0.00001:
            print('converged')
            break
        iter_num += 1
    return iter_num, pose
num,pose = gaussnewtonpnp(object_3d,object_2d,cam1_matrix,rotM,tvec)
print(num,pose)
I took another look at OpenCV's iterative method. It does indeed first use the DLT method with an SVD solve to obtain an initial value, and only then calls the LM solver on top of that to reduce the reprojection error. But I am still curious why Dr. Gao's code (below) can compute directly without initializing the extrinsics. Is this example special, or is there something I am misunderstanding?
cout << "calling bundle adjustment by gauss newton" << endl;
Sophus::SE3d pose_gn; // is the extrinsic initialized to the default value here? [1,0,0,0;0,1,0,0;0,0,1,0;0,0,0,1]?
t1 = chrono::steady_clock::now();
bundleAdjustmentGaussNewton(pts_3d_eigen, pts_2d_eigen, K, pose_gn);
t2 = chrono::steady_clock::now();
time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
cout << "solve pnp by gauss newton cost time: " << time_used.count() << " seconds." << endl;
Hello @yorklyb
The example in the book indeed does not take convergence into account either.
But your example should be using Gauss-Newton rather than L-M, right? With G-N there can indeed be cases where the error grows and the iteration cannot continue. You could consider implementing an L-M in Python?
Hello @yorklyb
The example in the book indeed does not take convergence into account either.
But your example should be using Gauss-Newton rather than L-M, right? With G-N there can indeed be cases where the error grows and the iteration cannot continue. You could consider implementing an L-M in Python?
Thank you for the reply, Dr. Gao. It is indeed the G-N method; I wrote it following the example given in your book. I also see that an earlier chapter of your book does mention that the iteration needs a good initial value. I have no further doubts about this now.
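For reference, the L-M variant suggested above only changes the linear solve and adds a damping update. A minimal sketch (the naming and the simple lambda schedule here are illustrative, not from the book):
import numpy as np
def lm_step(H, b, lam):
    # Damped normal equations: (H + lam * I) dx = b
    return np.linalg.solve(H + lam * np.eye(H.shape[0]), b)
# Inside the iteration: if the new cost is lower than last_cost, accept the step
# and shrink lam (e.g. lam *= 0.5); otherwise reject the step and grow lam (e.g. lam *= 2.0).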
| gharchive/issue | 2021-02-15T22:10:41 | 2025-04-01T06:38:45.625267 | {
"authors": [
"gaoxiang12",
"yorklyb"
],
"repo": "gaoxiang12/slambook2",
"url": "https://github.com/gaoxiang12/slambook2/issues/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1801193723 | About the Aria2c unknown error
I managed to reproduce it today. The cause is a leftover aria2c process on the machine; that running process is using the wrong config file. The aria2c process normally started automatically by the app itself looks like this:
/Applications/小白羊云盘.app/Contents/Resources/engine/aria2c --stop-with-process=31288 --conf-path=/Applications/小白羊云盘.app/Contents/Resources/engine/aria2.conf --rpc-listen-port=16800 -D
Solution: kill the leftover process and restart the app.
On mac or linux you can run: ps -ef |grep aria2c to find the process PID, then run kill -9 PID
windows:
tasklist | findstr "aria2c"
taskkill /F /PID pid (replace pid with the PID found by the previous command)
Useful!
About hash scrubbing (洗码): could the selection box under "choose the folder to scrub" be made a bit longer? Sometimes after selecting I want to check the contents, and it is too short to be convenient.
@gaozhangmin Thanks a lot!
@gaozhangmin After I ran tasklist | findstr "aria2c", no leftover process was shown
After running tasklist | findstr "aria2c" no leftover process was shown, but after restarting I still cannot download @gaozhangmin
You can also refer to the solution under this issue: #216
It cannot be shut down, because it keeps restarting; there is a bug, and the PID changes very frequently:
Looking at it with btop I found this:
This sleep-command process cannot be killed, because the PID changes too quickly.
After running tasklist | findstr "aria2c" no leftover process was shown, but after restarting I still cannot download @gaozhangmin
I found a possible solution: in aria2.conf, change disable-ipv6 to true (I had the same problem as you, and this solved it for me)
You can use remote mode.
Regarding the problem of ARIA2C never being able to connect: I tried all the fixes posted on GitHub and none of them worked, including disabling IPv6, which had no effect; it still reported that aria2c failed to start. Later I solved it by turning off the system firewall, so the firewall blocking it is most likely the cause. Earlier versions had no problem; it appeared in version 26, on system build 22621.2361.
What should I do about Aria2c DNS resolution failures?
Why am I still getting the error? I have already tried the methods above.
Is it broken? I have not been able to download anything for the past few days T.T
| gharchive/issue | 2023-07-12T15:10:06 | 2025-04-01T06:38:45.634347 | {
"authors": [
"178065310",
"CZXW",
"Michael-J-ScofieldOVO",
"YangtseSu",
"daivinding",
"gaozhangmin",
"huimeng0812",
"m-dnxbf",
"ogios",
"yetier55",
"yzw20020919",
"zpoyqx"
],
"repo": "gaozhangmin/aliyunpan",
"url": "https://github.com/gaozhangmin/aliyunpan/issues/228",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
695857712 | hassio version 115 issue
The integration will not load due to the missing constant UNIT_PERCENTAGE.
They have removed the UNIT_ prefix from the constant.
To fix it, edit all instances of UNIT_PERCENTAGE to PERCENTAGE in the component's __init__.py file.
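A backwards-compatible way to express that edit (sketch; it assumes only the constant was renamed between core versions) is a guarded import at the top of the component:
try:
    from homeassistant.const import PERCENTAGE
except ImportError:  # Home Assistant cores older than 0.115 only expose the old name
    from homeassistant.const import UNIT_PERCENTAGE as PERCENTAGE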
When replacing all instances from UNIT_PERCENTAGE to PERCENTAGE, I get this error in the logs:
Setup failed for ecowitt: Unable to import component: cannot import name 'PERCENTAGE' from 'homeassistant.const' (/usr/src/homeassistant/homeassistant/const.py)
23:20:11 – setup.py (ERROR)
My mistake, this error appeared before I updated to the 115 beta. It disappears when upgraded from 114. I'm still left with these errors and no created entities:
2020-09-13 23:32:38 ERROR (MainThread) [aiohttp.server] Error handling request
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/aiohttp/web_protocol.py", line 418, in start
resp = await task
File "/usr/local/lib/python3.8/site-packages/pyecowitt/ecowitt.py", line 262, in handler
weather_data = self.convert_units(data_copy)
File "/usr/local/lib/python3.8/site-packages/pyecowitt/ecowitt.py", line 163, in convert_units
data["windspdkmh_avg10m"] = float(data["windspdmph_avg10m"]
TypeError: float expected at most 1 argument, got 2
2020-09-13 23:33:16 WARNING (MainThread) [custom_components.ecowitt] Unhandled sensor type maxdailygustms
``
I can confirm that the issue is solved with changing UNIT_PERCENTAGE to PERCENTAGE in version 0.115.
Confirm - replacing UNIT_PERCENTAGE to PERCENTAGE solved error. It's related to 0.115 as I made change on 0.114.4 prior to update and addon couldn't start.
I just updated to 115.1 and get this notification
and this is the display:
Been working fine up until 115.1
I've created a Pull request to get this solved.
The suggestion above worked for my install as well. Thank you.
I've created a Pull request to get this solved.
Seems like the pull request was not successful:
File "/hacs/custom_components/hacs/repositories/integration.py", line 32, in localpath
return f"{self.hacs.core.config_path}/custom_components/{self.data.domain}"
AttributeError: type object 'HacsCore' has no attribute 'config_path'
@arjen-w, to be honest, I have no idea how to pass the HACS validator.
The changed lines of code are working fine with HASS.IO v0.115 without any issue. If you have experience with the HACS validator, then please share your advice here, and I will look at it. Meanwhile the owner of the integration/repo has not replied to any messages, neither here nor on the Home Assistant community forum.
If we can't get a PR into this repo, can we just create a new fork (until the repo owner wants to get back into it) and move over to that?
What is the etiquette for doing that on Git?
If we can't get a PR into this repo, can we just create a new fork (until the repo owner wants to get back into it) and move over to that?
What is the etiquette for doing that on Git?
Not sure, but as long as you are referring to this project as source in your release notes and code, I personally think it won't be an issue.
On the other hand, for me everything is working after fixing things manually, so for me having a fork isn't necessary at the moment.
Sorry about this.. this is now fixed in 0.3.1 Was very busy for a few months with annoying life stuff.
| gharchive/issue | 2020-09-08T13:11:56 | 2025-04-01T06:38:45.672774 | {
"authors": [
"GSzabados",
"MimbaMonkeyHouse",
"arjen-w",
"cpuks",
"garbled1",
"gilperme",
"mr-sneezy",
"ronjtaylor",
"scooper1"
],
"repo": "garbled1/homeassistant_ecowitt",
"url": "https://github.com/garbled1/homeassistant_ecowitt/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2167049334 | Enable renovate for UI and CLI repositories
/kind enhancement
What this PR does / why we need it:
With this PR the UI and CLI repositories will be watched by renovate 🤖
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Awesome, welcome 🥳
/lgtm
/approve
| gharchive/pull-request | 2024-03-04T14:57:48 | 2025-04-01T06:38:45.676058 | {
"authors": [
"oliver-goetz",
"petersutter"
],
"repo": "gardener/ci-infra",
"url": "https://github.com/gardener/ci-infra/pull/1274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1279900541 | [ci:component:github.com/gardener/autoscaler:v1.21.0->v1.22.0]
Release Notes:
sync the changes till v1.22.0 of upstream autoscaler
IT retries scaling the pod up and down in case of conflicts. A retry of 5 times with an interval of 10 milliseconds is kept.
This needs a manual change; will update it.
The PR making the relevant change: https://github.com/gardener/gardener/pull/6163
| gharchive/pull-request | 2022-06-22T10:11:41 | 2025-04-01T06:38:45.679461 | {
"authors": [
"gardener-robot-ci-3",
"himanshu-kun"
],
"repo": "gardener/gardener",
"url": "https://github.com/gardener/gardener/pull/6160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1619182169 | Provide value gardener.seed.name for controller registration helm charts.
How to categorize this PR?
/area control-plane
/kind enhancement
What this PR does / why we need it:
The value gardener.seed.identity is deprecated, as "identity" implies global uniqueness. Instead it is recommended to use the value gardener.seed.clusterIdentity. For the dns-shoot-service we would prefer to continue with using the seed name as part of the the DNSOwner id, as it has contains the gardener.garden.clusterIdentity to make it unique.
With introducing gardener.seed.name, we can still use the seed name for better readability.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Related to #2851
Release note:
provide value `gardener.seed.name` for controller registration helm charts.
/invite @rfranzke
/test pull-gardener-integration
/test pull-gardener-e2e-kind-upgrade
/test pull-gardener-integration
| gharchive/pull-request | 2023-03-10T15:41:49 | 2025-04-01T06:38:45.684026 | {
"authors": [
"MartinWeindel"
],
"repo": "gardener/gardener",
"url": "https://github.com/gardener/gardener/pull/7624",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1864527074 | Split codegen target and further enhance generate script
How to categorize this PR?
/area dev-productivity
/kind enhancement
What this PR does / why we need it:
This PR splits the codegen target as well for various groups, and also fixes a minor bug to make it run in parallel. Now, MODE is also available for codegen target.
Invalid options for codegen are skipped.
It is possible to pass any folder path for manifests target.
❯ make generate PRINT_HELP=y
# Usage: make generate [WHAT="<targets>"] [MODE="<mode>"] [CODEGEN_GROUPS="<groups>"] [MANIFESTS_FOLDERS="<folders>"]
#
# Options:
# WHAT - Specify the targets to run (e.g., "protobuf codegen manifests logcheck gomegacheck monitoring-docs")
# CODEGEN_GROUPS - Specify which groups to run the 'codegen' target for, not applicable for other targets (e.g., "authentication core extensions resources operator seedmanagement operations settings operatorconfig controllermanager admissioncontroller scheduler gardenlet resourcemanager shoottolerationrestriction shootdnsrewriting provider_local extensions_config")
# MANIFESTS_FOLDERS - Specify which folders to run the 'manifests' target in, not applicable for other targets (Default folders are "charts cmd example extensions imagevector pkg plugin test")
# MODE - Specify the mode for the 'manifests' or 'codegen' target (e.g., "parallel" or "sequential")
#
# Examples:
# make generate
# make generate WHAT="codegen protobuf"
# make generate WHAT="codegen protobuf" MODE="sequential"
# make generate WHAT="manifests" MANIFESTS_FOLDERS="pkg/component plugin" MODE="sequential"
# make generate WHAT="codegen" CODEGEN_GROUPS="core extensions"
# make generate WHAT="codegen manifests" CODEGEN_GROUPS="operator controllermanager" MANIFESTS_FOLDERS="charts example/provider-local"
#
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
/cc @timuthy @seshachalam-yv
Release note:
NONE
/test pull-gardener-unit
/test pull-gardener-e2e-kind-upgrade
/test pull-gardener-e2e-kind-ha-multi-zone
/test pull-gardener-e2e-kind-ha-multi-zone
/hold
/test pull-gardener-unit
@seshachalam-yv I have reworked this PR to allow passing any folder to manifests target. Can you PTAL?
/test pull-gardener-unit
/unhold
ping @seshachalam-yv
/assign
PTAL! @ary1992 @ialidzhikov
| gharchive/pull-request | 2023-08-24T06:55:22 | 2025-04-01T06:38:45.691697 | {
"authors": [
"ialidzhikov",
"rfranzke",
"seshachalam-yv",
"shafeeqes"
],
"repo": "gardener/gardener",
"url": "https://github.com/gardener/gardener/pull/8389",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1610031095 | Deployment
# DEPLOYMENT
## User Story
As a boot camp student,
I want the prework notes to be structured on a webpage,
So that I can easily find and read the information.
# Acceptance Criteria
GIVEN a Prework Study Guide website,
WHEN I visit the website using the URL,
THEN I can access my website from any browser
Deployment completed through GitHub Pages
| gharchive/issue | 2023-03-05T02:48:56 | 2025-04-01T06:38:45.695116 | {
"authors": [
"garfias06"
],
"repo": "garfias06/prework-study-guide",
"url": "https://github.com/garfias06/prework-study-guide/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
708899572 | Update props
Thanks for the library. Excellent. Please see all props and add them to the documentation. For instance - textContainerStyle;
Add the ability to indicate your own icon (an arrow next to the flag);
Ability to change the size of the flag
Thanks! package 5 stars
Thanks for the advice. Kindly check v1.1.0 for new updates.
| gharchive/issue | 2020-09-25T12:32:45 | 2025-04-01T06:38:45.697796 | {
"authors": [
"dev-event",
"garganurag893"
],
"repo": "garganurag893/react-native-phone-number-input",
"url": "https://github.com/garganurag893/react-native-phone-number-input/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2736350573 | Add Tingle Map Shuffle
Shuffles Tingle and his Maps into the Pool.
Adds CAN_USE_PROJECTILE Logic Check to the list.
Adds TWIN_ISLANDS to the Regions List
Renamed Dungeon Map RI's since overworld maps are listed as such.
Build Artifacts
2ship-linux.zip
2ship-mac.zip
2ship-windows.zip
Merged
| gharchive/pull-request | 2024-12-12T16:21:08 | 2025-04-01T06:38:45.705810 | {
"authors": [
"Caladius",
"garrettjoecox"
],
"repo": "garrettjoecox/2ship2harkinian",
"url": "https://github.com/garrettjoecox/2ship2harkinian/pull/60",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
849671039 | Could support for the R3G model be added?
The R3G's hardware is almost the same as the AC2100's; even breed is interchangeable between them. Could this model be added?
Seconded, an R3G firmware please.
| gharchive/issue | 2021-04-03T14:41:12 | 2025-04-01T06:38:45.706581 | {
"authors": [
"Omoinemie",
"youcai2016"
],
"repo": "garypang13/OpenWrt",
"url": "https://github.com/garypang13/OpenWrt/issues/346",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1063761262 | [Netty] Failed in the runtime
After upgrading to version 3.7.1 I catch the following error during launch. The error appears after choosing a simulation and filling in the description.
I tried the following:
changing to different JDKs (8, 11)
getting the demo project and launching it there
Error log:
computerdatabase.BasicSimulation is the only simulation, executing it.
Select run description (optional)
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000133fb11f2, pid=1636, tid=5635
#
# JRE version: OpenJDK Runtime Environment AdoptOpenJDK-11.0.11+9 (11.0.11+9) (build 11.0.11+9)
# Java VM: OpenJDK 64-Bit Server VM AdoptOpenJDK-11.0.11+9 (11.0.11+9, mixed mode, tiered, compressed oops, g1 gc, bsd-amd64)
# Problematic frame:
# C [libnetty_tcnative_osx_x86_6417779003216119739014.dylib+0x1671f2] __isPlatformOrVariantPlatformVersionAtLeast.cold.1+0x152
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/pavelbairov/IdeaProjects/gatling-maven-plugin-demo-scala/hs_err_pid1636.log
#
# If you would like to submit a bug report, please visit:
# https://github.com/AdoptOpenJDK/openjdk-support/issues
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
And here is the full log:
hs_err_pid1636.log
As you can see from the log, the problem is somewhere in the netty library itself...
Weird, I don't have any issue.
% uname -a
Darwin Stephanes-MacBook-Pro 20.5.0 Darwin Kernel Version 20.5.0: Sat May 8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64 x86_64
% java -version
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (Zulu 8.58.0.13-CA-macosx) (build 1.8.0_312-b07)
OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macosx) (build 25.312-b07, mixed mode)
Could you please try a different OpenJDK build than the ones from AdoptOpenJDK, eg Zulu?
If you still experience an issue, please provide your uname -a and your java -version.
No issue either with Java 11:
% java -version
openjdk version "11.0.13" 2021-10-19 LTS
OpenJDK Runtime Environment Zulu11.52+13-CA (build 11.0.13+8-LTS)
OpenJDK 64-Bit Server VM Zulu11.52+13-CA (build 11.0.13+8-LTS, mixed mode)
Nor Java 17:
% java -version
openjdk version "17.0.1" 2021-10-19 LTS
OpenJDK Runtime Environment Zulu17.30+15-CA (build 17.0.1+12-LTS)
OpenJDK 64-Bit Server VM Zulu17.30+15-CA (build 17.0.1+12-LTS, mixed mode, sharing)
uname -a
Darwin bairov.local 19.0.0 Darwin Kernel Version 19.0.0: Thu Oct 17 16:17:15 PDT 2019; root:xnu-6153.41.3~29/RELEASE_X86_64 x86_64
I tried Zulu. Also Corretto, AdoptOpenJDK
Something strange. Ok, I will try JDK 17 and then maybe try to update my OS
Yeah, your MacOS version is pretty old, that could be the issue.
Still, if that fixes your issue, it might be worth reporting it to Netty.
Great!
| gharchive/issue | 2021-11-25T16:12:57 | 2025-04-01T06:38:45.719631 | {
"authors": [
"Amerousful",
"slandelle"
],
"repo": "gatling/gatling",
"url": "https://github.com/gatling/gatling/issues/4170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
231651502 | Fix feeder shuffle description
Problem
Feeder shuffle description had a typo:
shuffle entries, then behave live queue
Solution
Change live -> like
Thanks! I had already fixed on our private branch, but I'll cherry-pick yours instead so you're credited with the commit :)
:-) thanks
cherry-picked
| gharchive/pull-request | 2017-05-26T14:59:39 | 2025-04-01T06:38:45.722004 | {
"authors": [
"joemeszaros",
"slandelle"
],
"repo": "gatling/gatling",
"url": "https://github.com/gatling/gatling/pull/3309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
668492312 | CONFERENCE FAILED: conference.authenticationRequired
It is throwing this error; I didn't find a solution.
<Jitsi
roomName={roomName}
displayName={displayName}
password={password}
config={{ startWithAudioMuted: true, startWithVideoMuted: true }}
interfaceConfig={{ filmStripOnly: false }}
containerStyle={{ width: '100%', height: '100%' }}
onAPILoad={handleAPI}
domain={process.env.REACT_APP_JITSI_SERVER}
/>
i m using react-jitsi library
Sorry for the noise, it was a mistake in my displayName and password.
| gharchive/issue | 2020-07-30T08:33:25 | 2025-04-01T06:38:45.835790 | {
"authors": [
"kishan-tocca"
],
"repo": "gatteo/react-jitsi",
"url": "https://github.com/gatteo/react-jitsi/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1039594249 | ignored gear
Bug Description
The bot ignores already-provided netherite gear and makes iron gear instead, for some reason, in all modes.
Steps to Reproduce (as best as you can)
Have netherite gear, then do a command like @gamer or something.
Expected Behavior
utilize the provided gear
Actual Behavior
completely ignore the provided gear and make iron tools
Crashlogs and Screenshots (if applicable)
Fix Idea (Personal Notes for when I fix this later)
gearSatisfied(Slot) -> returns whether a gear of a certain material or better is in that slot. From now on use that whenever checking for gear equipment.
EquipArmorOrBetterTask -> Keep EquipArmorTask, but have this extra option where it will equip the best armor in our inventory and use gearSatisfied to determine if it has enough gear to equip.
btw this applies to tools also
I believe that this has been applied for tools. Possibly armor.
| gharchive/issue | 2021-10-29T13:54:11 | 2025-04-01T06:38:45.839485 | {
"authors": [
"GaiaB0t",
"JamesGreen31",
"adrisj7"
],
"repo": "gaucho-matrero/altoclef",
"url": "https://github.com/gaucho-matrero/altoclef/issues/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
815222371 | feat: add option to link from top level menu items
From the mailing list
It is possible to add a landing page to the name of the dropdown menu?
It is not possible. Simply because I find it a bit confusing to have such a menu, but my limited capacity shouldn't hinder others from doing so.
This needs more testing and consideration. It is a bit too easy to create nonsense menus.
Mobile support needs thinking through and probably also some guidelines on how to do proper landing pages for such top level links
| gharchive/issue | 2021-02-24T07:56:10 | 2025-04-01T06:38:45.872718 | {
"authors": [
"MortenHofft"
],
"repo": "gbif/hosted-portals",
"url": "https://github.com/gbif/hosted-portals/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
209803966 | Failed to update DOI
IPT version: 2.3.3-r01
When trying to publish a new version of a dataset the following message appears:
Publication log:
An error of type DOI was encountered during publishing: Failed to update doi:10.15472/xcpipy metadata: HTTP 500:
Restored version #2.0 of resource bios_microorg_seaflower_2016 after publishing failure 3:58:38 PM
I've already checked both data and metadata for inconsistencies but they are fine.
Any advice?
Thanks Nestor,
As per DataCite's API documentation when encountering a 500 server internal error, you should "try later and if problem persists please contact us". Can you please try again and report back if it works this time?
Otherwise, I'd recommend turning on debugging mode in your IPT settings and looking for more detailed information in your IPT logs.
By the way, it says your IPT is based on version "2.3.3-r01" but the officially released version is "2.3.3-rdb4ab13". I should raise again the warning in https://github.com/gbif/ipt/issues/1319#issuecomment-280310595 regarding using non-officially released versions in production.
If you are just customizing the style there is no problem of course. Here is one problem with the re-styling that I noticed by the way:
Thanks Kyle, today finally the publication was successful. I'll check the issues with the re-styling.
Wonderful, glad to hear that. Thanks for this feedback.
I will close this issue then.
| gharchive/issue | 2017-02-23T16:01:35 | 2025-04-01T06:38:45.877872 | {
"authors": [
"kbraak",
"nestorjal"
],
"repo": "gbif/ipt",
"url": "https://github.com/gbif/ipt/issues/1323",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1456658436 | Fetch demo: cats count input on:input callback is not working
I used the code from the fetch example.
When running this example locally, nothing changes when I change the number of cats.
However, if I add on:input callback to input's label, then it works:
<label on:input=move |ev| {
let val = event_target_value(&ev).parse::<u32>().unwrap_or(0);
set_cat_count(val);
}>
"How many cats would you like?"
<input type="number"
value={move || cat_count.get().to_string()}
/>
</label>
Removing the label also fixes the callback not firing.
Hm... I'm not able to reproduce this.
I'm going to guess that wrapping "How many cats" in a <span> also fixes it? Is that correct?
Are you running this within the repository, or as a separate example and if so, using what Leptos version? (If you're running locally using leptos = "0.0" is it 0.0.16 or 0.0.17? Is it fixed if you update to 0.0.17 or use a git dependency on the repo, i.e., leptos = { git = "https://github.com/gbj/leptos" } in your Cargo.toml)
I was running it using leptos = "0.0.16". Switching to 0.0.17 actually fixed the issue!
Perfect! Yeah 0.0.16 had a rendering bug unfortunately.
| gharchive/issue | 2022-11-19T21:00:24 | 2025-04-01T06:38:45.930164 | {
"authors": [
"alordash",
"gbj"
],
"repo": "gbj/leptos",
"url": "https://github.com/gbj/leptos/issues/96",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1270389157 | Pharokka gff contig name
pharokka version: 0.1.0
Python version: 3.9
Operating System: MacOS
Description
Small thing, but I noticed that in the .gff output the names for the CDS and the tRNA hits are different; they seem trimmed.
Should be fixed with v0.1.7's removal of hhsuite.
| gharchive/issue | 2022-06-14T07:40:10 | 2025-04-01T06:38:45.932733 | {
"authors": [
"gbouras13",
"ronepz"
],
"repo": "gbouras13/pharokka",
"url": "https://github.com/gbouras13/pharokka/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1619005934 | options.styles.keywords isn't applied to some highlights
In lua/nord/treesitter.lua
-- Keywords
["@keyword"] = vim.tbl_extend("force", { fg = c.frost.artic_water }, options.styles.keywords), -- various keywords
["@keyword.function"] = vim.tbl_extend("force", { fg = c.frost.artic_water }, options.styles.functions), -- keywords that define a function (e.g. `func` in Go, `def` in Python)
["@keyword.operator"] = { fg = c.frost.artic_water }, -- operators that are English words (e.g. `and` / `or`)
["@keyword.return"] = { fg = c.frost.artic_water }, -- keywords like `return` and `yield`
["@conditional"] = { fg = c.frost.artic_water }, -- keywords related to conditionals (e.g. `if` / `else`)
["@repeat"] = { fg = c.frost.artic_water }, -- keywords related to loops (e.g. `for` / `while`)
["@debug"] = { fg = c.snow_storm.origin }, -- keywords related to debugging
["@label"] = { fg = c.frost.polar_water }, -- GOTO and other labels (e.g. `label:` in C)
["@include"] = { fg = c.frost.artic_water }, -- keywords for including modules (e.g. `import` / `from` in Python)
["@exception"] = { fg = c.frost.artic_water }, -- keywords related to exceptions (e.g. `throw` / `catch`)
options.styles.keywords isn't applied to "@keyword.operator" etc.
I wonder whether this is a bug or a feature?
You're right, I'll fix it ;)
Fixed
| gharchive/issue | 2023-03-10T13:46:45 | 2025-04-01T06:38:45.936257 | {
"authors": [
"Isrothy",
"gbprod"
],
"repo": "gbprod/nord.nvim",
"url": "https://github.com/gbprod/nord.nvim/issues/27",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
244830216 | add Distributive functor
This PR provides the notion that is categorically dual to Traversable.
A Distributive Functor is one that you can push any functor inside of.
distribute :: (Functor f, Distributive g) => f (g a) -> g (f a)
Compare this with the corresponding Traversable notion, sequence.
sequence :: (Applicative f, Traversable g) => g (f a) -> f (g a)
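As a concrete illustration (not the fp-ts API, just a TypeScript sketch with made-up names), here is distribute specialised to f = Array and g = Reader, i.e. a plain function from a fixed environment, which is a Distributive functor:
// Reader<R, A> is just a function (r: R) => A
type Reader<R, A> = (r: R) => A
// distribute :: Functor f => f (g a) -> g (f a), with f = Array, g = Reader<R, ->
const distributeArrayReader =
  <R, A>(fga: Array<Reader<R, A>>): Reader<R, Array<A>> =>
  (r) => fga.map((ga) => ga(r))
// usage: push the Array inside the Reader
const readers: Array<Reader<number, number>> = [(n) => n + 1, (n) => n * 2]
console.log(distributeArrayReader(readers)(10)) // [11, 20]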
Is there a reason why Distributive and Traversable don't have corresponding fantasy interfaces?
AFAIK FantasyTraversable does exist. FantasyDistributive seems not possible since the outer type is HKT<G, _>. What do you propose?
The spec for fantasy-land Traversable is here:
https://github.com/fantasyland/fantasy-land#traversable
If I understand correctly, Distributive would just be a reversal of the arrows?
AFAIK FantasyTraversable does exist
I mean that it is already defined in fp-ts https://github.com/gcanti/fp-ts/blob/ee67de4b897cb9a5b87be9d39cee6751a1353251/src/Traversable.ts#L11
The signature
distribute :: (Functor f, Distributive g) => f (g a) -> g (f a)
suggests that adding a method to the prototype of Distributive g is useless, since the value at hand is an f (g a)
Oh my bad, right, it's FantasyFilterable that doesn't exist, I looked at the wrong file.
I'm not sure I understand how your version of Distributive works yet, I'll play around with it a bit tomorrow and look if I can come up with a sensible Fantasy version.
it's FantasyFilterable that doesn't exist
Ah you are right (and Witherable as well). Would you like to send a PR for them? Writing the Fantasy* instances for Filterable and Witherable should make clear why I think it is not possible to do the same for Distributive: both the missing definitions involve an HKT<F, A> value where F is the "main" type parameter of the interfaces, while in Filterable the main type parameter is F but the involved value is G-parametrised. Does it make sense?
Ah, cool, yeah, I was finishing up one of the Immutable collections yesterday and ran into this part of the library. I'll have a look if I can solve the conundrum.
You are correct.
I looked up all the theory behind it (and why a monadic operation such as distribute or traverse only needs respectively a Functor or an Applicative) and I get it now. Without extension methods in TypeScript, you cannot attach the Distributive to anything. You can always write a Fantasy type of a dual, unless that dual forces the focus type into the output slot. Makes total sense. There's no hidden way to express it that would give rise to a Fantasy type.
| gharchive/pull-request | 2017-07-22T07:18:17 | 2025-04-01T06:38:45.955539 | {
"authors": [
"SimonMeskens",
"gcanti"
],
"repo": "gcanti/fp-ts",
"url": "https://github.com/gcanti/fp-ts/pull/167",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
485933858 | Bug report: JWT Verify doesn't require an algorithm
As detailed here, JWT verification functions should require specifying the algorithm that should have been used, in order to prevent an attacker from changing the algorithm to a symmetric algorithm from an asymmetric one and using the public key to sign the token. Probably low priority for this particular app, but it would be good to at least have the option.
Hi Josh,
Correct me if I am wrong or misunderstand you, but isn't the problem you link to a server-side issue where developers simply trust whatever alg the token specifies? Removing "none" from CyberChef won't help the problem. Servers need to interpret the tokens with the alg they write the tokens with.
Having the "none" alg there is a great tool for testing your implementation of JWT verification, to catch the problems mentioned in the link.
I don't mean removing the none algorithm; I mean allowing the user to specify which algorithm is expected and raising an error if the JWT defines it differently. The relevant part of the article is this: https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/#RSA-or-HMAC-. Specifically "If a server is expecting a token signed with RSA, but actually receives a token signed with HMAC, it will think the public key is actually an HMAC secret key." Basically, the solution is, rather than trusting the JWT alg field, to allow the user to define the algorithm. Hope that makes more sense.
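A minimal sketch of what this would look like, assuming the jsonwebtoken npm package (not necessarily what CyberChef uses internally); the point is that the caller pins the allowed algorithms instead of trusting the token's alg header:
import jwt from 'jsonwebtoken'
// Verification fails if the token was signed with anything other than the
// algorithms the caller explicitly allows, e.g. only RS256 here.
function verifyPinned(token: string, publicKey: string) {
  return jwt.verify(token, publicKey, { algorithms: ['RS256'] })
}
// A token whose header claims HS256 (signed with the public key used as an
// HMAC secret) is now rejected instead of being accepted.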
Yeah, that would be good especially since the JWT header isn't shown at all. I recommend using https://jwt.io/ for playing with JWT tokens.
| gharchive/issue | 2019-08-27T17:18:58 | 2025-04-01T06:38:45.985972 | {
"authors": [
"LabanSkollerDefensify",
"joshbarth",
"mifriis"
],
"repo": "gchq/CyberChef",
"url": "https://github.com/gchq/CyberChef/issues/624",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
240869851 | Add example of applying validation to multiple properties in a schema
This should be added to the Schema walkthrough in the dev guide.
Merged into develop.
| gharchive/issue | 2017-07-06T07:38:53 | 2025-04-01T06:38:45.986978 | {
"authors": [
"p013570"
],
"repo": "gchq/Gaffer",
"url": "https://github.com/gchq/Gaffer/issues/1044",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
270973372 | Parquet Store generates too many files for HDFS to handle when importing RDD
There is an issue where importing an RDD with lots of partitions, while wanting to split the data into lots of files per group, will in the worst case cause
(number of partitions) x (number of groups) files to be generated, which HDFS will struggle to handle.
This bug was fixed during the major rewrite of the Parquet store in #1884 .
| gharchive/issue | 2017-11-03T13:13:02 | 2025-04-01T06:38:45.988312 | {
"authors": [
"ac74475",
"gaffer01"
],
"repo": "gchq/Gaffer",
"url": "https://github.com/gchq/Gaffer/issues/1493",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1997097085 | Gh-3098 Improve Testing of GafferEntityGenerator
Improves tests for the GafferEntityGenerator; coverage is now > 80%.
A small tweak to the main class so it checks for null values and uses easier-to-read lambdas.
Related issue
Resolve #3098
You've added null checks for the vertex properties, but this is showing as untested.
I haven't added them; they were in the existing logic. Although, looking at the code a bit more, it is impossible to actually make a GafferPop Edge or Vertex with a property that has a null key, as it is validated when it's added, and the properties are technically stored in a HashMap which also fundamentally does not allow it. Therefore these checks can likely just be removed, as they are adding nothing.
| gharchive/pull-request | 2023-11-16T15:14:51 | 2025-04-01T06:38:45.990504 | {
"authors": [
"tb06904"
],
"repo": "gchq/Gaffer",
"url": "https://github.com/gchq/Gaffer/pull/3099",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
503971006 | PAL 185 - Added .clone() .equals() .hashCode() methods to hr-generator types
While not likely necessary for production, this is required for testing to validate (a lack of) changes to records after applying Rules
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
Held back for a bit by a JUnit bug; be aware of it if you use Theories in the future.
https://github.com/junit-team/junit4/issues/1629#issue-504602826
| gharchive/pull-request | 2019-10-08T10:51:08 | 2025-04-01T06:38:45.993375 | {
"authors": [
"CLAassistant",
"dev930018"
],
"repo": "gchq/Palisade",
"url": "https://github.com/gchq/Palisade/pull/476",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
443252704 | Allow an operation chain to be executed on load of the UI
Create a URL query param 'preQuery' that a user or system could set to invoke an operation chain when the UI first loads. This would allow click-through from other apps and would let queries be shared between users.
Merged into develop
| gharchive/issue | 2019-05-13T07:48:07 | 2025-04-01T06:38:45.994530 | {
"authors": [
"d47853",
"p013570"
],
"repo": "gchq/gaffer-tools",
"url": "https://github.com/gchq/gaffer-tools/issues/728",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
390672163 | SupersededOutputHelper is not initialised
20181213.txt
For some reason the ReferenceDataLoad pipeline is trying to write an output stream. This isn't expected to be the case.
The SupersededOutputHelper will no longer check that output is superseded for pipelines that exist outside of normal processing.
Change will be available in v6.0-beta.21 onwards
| gharchive/issue | 2018-12-13T13:10:54 | 2025-04-01T06:38:45.996066 | {
"authors": [
"stroomdev10",
"stroomdev66"
],
"repo": "gchq/stroom",
"url": "https://github.com/gchq/stroom/issues/1024",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1995523296 | 🛑 Burotic is down
In 3b332c2, Burotic ($BUROTIC) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Burotic is back up in a95cc0c after 13 minutes.
| gharchive/issue | 2023-11-15T20:33:37 | 2025-04-01T06:38:46.004597 | {
"authors": [
"louisgcom"
],
"repo": "gcommeuneidee/upptime",
"url": "https://github.com/gcommeuneidee/upptime/issues/683",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2547701311 | How to check for alarm stops when stopping from notification stop button? and onTap of Notification.
Alarm plugin version
4.0.0-dev.3
How do I check whether my alarm was stopped from the notification stop button?
In my app I use Alarm.stop(id); to stop the alarm, so after I call it I can do something in my code,
but what about stopping the alarm from the notification stop button? Here I need a function to be called when I stop the alarm from the notification.
In my opinion, you could provide an onStop method that fires when the alarm is stopped from the notification stop button.
If there were a method like this, everyone could perform some task when the alarm is stopped from the notification's stop button.
We also need an onTap for the notification.
If there is an onTap for the notification, the user can perform a task accordingly.
Additional context
For my app I have to show a non-dismissible notification when the user stops the alarm. This is possible when I use Alarm.stop(id);, but I also need it to happen from the notification stop button. I used flutter_local_notifications for the non-dismissible notification.
I also want to open the Alarm screen when tapping on the notification, so onTap would be useful for that.
Hi @dhavalmnjtech
Thanks for your interest in the package.
I'll consider adding an onStop callback for when the alarm is stopped from the notification and an onTap for when the app is opened from the notification. I'll keep you updated here.
Hello @gdelataillade
I was working on it but faced a technical limitation on Android. If someone with Kotlin experience could help me with this PR #275, it would be great!
| gharchive/issue | 2024-09-25T11:16:49 | 2025-04-01T06:38:46.020925 | {
"authors": [
"dhavalmnjtech",
"gdelataillade"
],
"repo": "gdelataillade/alarm",
"url": "https://github.com/gdelataillade/alarm/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2214808041 | Bug fix
When I type a space (the whitespace produced by pressing the spacebar),
node:_http_outgoing:652
throw new ERR_HTTP_HEADERS_SENT('set');
^
Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
at ServerResponse.setHeader (node:_http_outgoing:652:11)
at ServerResponse.header (D:\20220904-backup\Documents\imitated-seed-2-master\node_modules\express\lib\response.js:794:10)
at ServerResponse.send (D:\20220904-backup\Documents\imitated-seed-2-master\node_modules\express\lib\response.js:174:12)
at IncomingMessage.eval (eval at (D:\20220904-backup\Documents\imitated-seed-2-master\routes\router.js:27:5), :120:8) {
code: 'ERR_HTTP_HEADERS_SENT'
}
Node.js v20.10.0
this error appears.
Where did you type the space?
Here
But on other wikis, search works fine, and when I click "go to document" it takes me to the main page.
Is the engine the latest version?
Yes
| gharchive/issue | 2024-03-29T07:13:52 | 2025-04-01T06:38:46.054905 | {
"authors": [
"NOAH01112",
"wikiengine",
"yoojinwoo12"
],
"repo": "gdl-blue/imitated-seed-2",
"url": "https://github.com/gdl-blue/imitated-seed-2/issues/258",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
IPv6 port forwarding issue
Network setup: the main router is a hardware router that handles the dial-up (IPv6 Native, firewall disabled); the side router runs iStoreOS 21.02.3 2023052616 (a LAN6 interface was added and its firewall was put into the LAN zone)
The side router's admin page can be reached via the /64-prefix public IPv6 address
Lucky: v1.10.8 installed via iStore, not Docker
Port forwarding configuration (to access the main router's web admin page)
Visiting the corresponding port of the /64-prefix public IPv6 address shows 502 Bad Gateway nginx/1.12.2
Port forwarding rule log
2023-05-29 15:24:25 {"ExtInfo":{},"level":"info","msg":"[端口转发][开启][tcp6@:30000 ===> [192.168.31.1]:80]"}
2023-05-29 15:25:03 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70:45d8:8424:afc4:7839]:60962]安全检查通过"}
2023-05-29 15:25:03 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70:45d8:8424:afc4:7839]:60963]安全检查通过"}
2023-05-29 15:25:03 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12401]安全检查通过"}
2023-05-29 15:25:03 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12403]安全检查通过"}
2023-05-29 15:25:10 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12405]安全检查通过"}
2023-05-29 15:25:10 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12407]安全检查通过"}
2023-05-29 15:25:10 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12409]安全检查通过"}
2023-05-29 15:25:10 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12411]安全检查通过"}
2023-05-29 15:25:10 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12413]安全检查通过"}
2023-05-29 15:25:10 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12415]安全检查通过"}
2023-05-29 15:25:13 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12417]安全检查通过"}
2023-05-29 15:25:13 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12419]安全检查通过"}
2023-05-29 15:25:13 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12421]安全检查通过"}
2023-05-29 15:25:16 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12423]安全检查通过"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"error","msg":"tcp6@:30000 ===> [192.168.31.1]:80 error:dial tcp 192.168.31.1:80: i/o timeout"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12423 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12421 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12419 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12417 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12415 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12413 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12411 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12409 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12407 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12405 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12403 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12401 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70:45d8:8424:afc4:7839]:60962 断开连接"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12427]安全检查通过"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12429]安全检查通过"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12431]安全检查通过"}
2023-05-29 15:25:21 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12433]安全检查通过"}
2023-05-29 15:25:22 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12435]安全检查通过"}
2023-05-29 15:25:22 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12437]安全检查通过"}
2023-05-29 15:25:22 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000]新连接[[240e:398:45b2:8c70::1]:12439]安全检查通过"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70:45d8:8424:afc4:7839]:60963 断开连接"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12427 断开连接"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12429 断开连接"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12431 断开连接"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12433 断开连接"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12435 断开连接"}
2023-05-29 15:25:23 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12437 断开连接"}
2023-05-29 15:25:27 {"ExtInfo":{},"level":"error","msg":"tcp6@:30000 ===> [192.168.31.1]:80 error:dial tcp 192.168.31.1:80: i/o timeout"}
2023-05-29 15:25:27 {"ExtInfo":{},"level":"info","msg":"[tcp6@:30000][240e:398:45b2:8c70::1]:12439 断开连接"}
- Program log
2023/05/29 14:58:59 Set LoalTimeZone CST-8 [+8.00]
2023/05/29 14:58:59 可以将下面指令加入cron计划任务,每分钟定时检测lukcy进程防止奔溃影响使用。
2023/05/29 14:58:59 */1 * * * * test -z "$(pidof lucky)" && /usr/bin/lucky -c /etc/lucky/lucky.conf -p 16601 >/dev/null 2>&1
2023/05/29 14:58:59 RunMode:prod
2023/05/29 14:58:59 version:1.10.8 commit 65c9d93e6491301115fe027096f6e9b188fdd8f8, built at 2023-04-23T11:13:16Z
2023/05/29 14:58:59 后台http端口[16601]来自启动命令指定
2023/05/29 14:58:59 AdminWeb(Http) listen on :16601
2023/05/29 14:58:59 后台登录网址: http://[IP]:16601
2023/05/29 14:58:59 模块[动态域名]已启用
2023/05/29 14:58:59 模块[端口转发]已启用
2023/05/29 14:58:59 模块[Web服务]已启用
2023/05/29 14:58:59 模块[STUN内网穿透]已启用
2023/05/29 14:58:59 模块[WebDAV]已启用
2023/05/29 14:58:59 模块[网络唤醒]已启用
2023/05/29 14:58:59 模块[ACME]已启用
2023/05/29 14:58:59 模块[计划任务]已启用
2023/05/29 14:59:08 [192.168.31.185]后台登录成功
2023/05/29 14:59:19 执行首次NTP同步时间
I remember Xiaomi routers can't forward port 80 directly; try port 8080.
I remember Xiaomi routers can't forward port 80 directly; try port 8080.
Switching to port 8080 worked, thank you very much. I also tried TrueNAS's port 80 and it forwards fine as well. (I should have tried that earlier)
| gharchive/issue | 2023-05-29T07:32:21 | 2025-04-01T06:38:46.081126 | {
"authors": [
"gdy666",
"ramondsq"
],
"repo": "gdy666/lucky",
"url": "https://github.com/gdy666/lucky/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
542609411 | update composer.json to fix install in site/plugins
update composer.json to fix install in site/plugins
Thank you very much for fixing this issue. This will make the plugin even more convenient 👍
| gharchive/pull-request | 2019-12-26T17:10:54 | 2025-04-01T06:38:46.097616 | {
"authors": [
"felixhaeberle",
"gearsdigital"
],
"repo": "gearsdigital/kirby-reporter",
"url": "https://github.com/gearsdigital/kirby-reporter/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2296398 | Fix for a MatchData issue
I was getting an exception using this formatter:
rspec-formatter-webkit/lib/rspec/core/formatters/webkit.rb:176:in `expand_path': can't convert nil into String (TypeError)
from rspec-formatter-webkit/lib/rspec/core/formatters/webkit.rb:176:in `block in backtrace_line'
The issue (for me at least) was that gsub wasn't matching the line appropriately and wasn't generating the correct MatchData variables ($1, $2).
I fixed the .gsub syntax that was not parsing lines correctly and switched to .match instead, which cleared up my issue.
nevermind, I was dumb and didn't see what all this is actually doing. I've lost some functionality in this commit. I'll rework it, sorry about that.
So weird, I can't reproduce the error now. Ah well, guess I was jumping at shadows :)
| gharchive/issue | 2011-11-20T18:16:00 | 2025-04-01T06:38:46.099414 | {
"authors": [
"joshsz"
],
"repo": "ged/rspec-formatter-webkit",
"url": "https://github.com/ged/rspec-formatter-webkit/issues/1",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2190747774 | Generated Python code for Game of 2048 largely inconsistent
Bug description
I tried to create the Game of 2048 as instructed in the Intro section and I used the OpenAI model "gpt-4-turbo-preview". I set up MetaGPT using conda on my macOS system, using Python 3.9. I initialized the config to use OpenAI with a valid API-KEY and the best model available (see above).
Unfortunately, the generated code is not self-consistent. MetaGPT created multiple Python files and the generated code looked superficially good. But when trying to run the main.py code, many errors were raised. Most errors are the result of references to Python constants, methods, or constructors in the generated code that did not exist or were referenced with an incorrect name.
The errors are too numerous to fix. I have attached the generated Python code for further details. I renamed all Python files to use the filename extension '.txt', since an upload of Python files seems to be prohibited by Github.
main.txt
game.txt
ui.txt
logic.txt
constants.txt
Let me add that I tried the --code-review as well as the --no-code-review option. Moreover, the --run-tests option does not seem to work. Here I would have expected that MetaGPT recognizes that there are errors and tries to incrementally fix them through revised code-generation trials.
Regarding the generated code not being self-consistent: the LLM does not always produce a good design and then write good code. I suggest you retry if it failed.
Regarding the --run-tests option not working: the value of --n-round is too small. --n-round is a safety valve that terminates the inference if the number of inference rounds exceeds it. Unit testing will not start until the code has been successfully written; QA not running indicates that the code has not been successfully written.
Thanks for the specific suggestions on how to get better results with MetaGPT. I will certainly try the suggestions.
| gharchive/issue | 2024-03-17T16:19:07 | 2025-04-01T06:38:46.137387 | {
"authors": [
"crjaensch",
"iorisa"
],
"repo": "geekan/MetaGPT",
"url": "https://github.com/geekan/MetaGPT/issues/1022",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |