id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
2104278273 | update to handlebars v5
updates handlebars to 5.1.1
version 5 simplified some lifetimes and introduced the use of RenderErrorReason to create RenderError. RenderError::new and other functions have been deprecated.
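For illustration only, a minimal sketch of building an error the new way; the RenderErrorReason::Other variant and the Into conversion are assumptions to check against the handlebars 5 docs, not code from this PR:
use handlebars::{RenderError, RenderErrorReason};
// Hypothetical helper error: built from a reason instead of the deprecated RenderError::new.
fn missing_param_error() -> RenderError {
    RenderErrorReason::Other("helper expects one string parameter".to_string()).into()
}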
Looks like GHA failed at the build dev image step
| gharchive/pull-request | 2024-01-28T17:58:13 | 2025-04-01T06:39:13.868354 | {
"authors": [
"campeis"
],
"repo": "josh-project/josh",
"url": "https://github.com/josh-project/josh/pull/1310",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1605245103 | Test RFID
Use test code for RFID
Wire RFID to raspberry pi
Test if the RFID reader and tag are working
The RFID code has been made and tested.
The output: the RFID reader and tag work
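As an aside, a typical minimal test for this kind of setup looks like the sketch below; it assumes an MFRC522-style reader wired over SPI and the mfrc522 Python package, which may differ from the code used in this project:
from mfrc522 import SimpleMFRC522   # assumed library; adjust to the reader actually used
import RPi.GPIO as GPIO
reader = SimpleMFRC522()
try:
    print("Hold a tag near the reader...")
    tag_id, text = reader.read()   # blocks until a tag is detected
    print("Tag ID:", tag_id)
    print("Stored text:", text)
finally:
    GPIO.cleanup()                 # release the GPIO pins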
| gharchive/issue | 2023-03-01T16:02:30 | 2025-04-01T06:39:13.871077 | {
"authors": [
"irbahanifah"
],
"repo": "joshakbar14/Smart-ParkingLot",
"url": "https://github.com/joshakbar14/Smart-ParkingLot/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1152664751 | GSD Request
--- GSD JSON ---
{
"vendor_name": "Linux",
"product_name": "Kernel",
"product_version": "versions from to before v4.19.231",
"vulnerability_type": "unspecified",
"affected_component": "unspecified",
"attack_vector": "unspecified",
"impact": "unspecified",
"credit": "",
"references": [
"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=6312f6a53fd3ea38125dcaca5e3c9aa7d8a60cf7"
],
"extended_references": [
{
"type": "commit",
"value": "6312f6a53fd3ea38125dcaca5e3c9aa7d8a60cf7",
"note": "fixed"
}
],
"reporter": "joshbressers",
"reporter_id": 1692786,
"notes": "",
"description": "net: ieee802154: at86rf230: Stop leaking skb's\n\nThis is an automated ID intended to aid in discovery of potential security vulnerabilities. The actual impact and attack plausibility have not yet been proven.\nThis ID is fixed in Linux Kernel version v4.19.231 by commit 6312f6a53fd3ea38125dcaca5e3c9aa7d8a60cf7. For more details please see the references link."
}
--- GSD JSON ---
This issue has been assigned GSD-2022-1000325
| gharchive/issue | 2022-02-27T02:29:52 | 2025-04-01T06:39:13.872506 | {
"authors": [
"joshbressers"
],
"repo": "joshbressers/gsd-database",
"url": "https://github.com/joshbressers/gsd-database/issues/565",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
277842261 | ActionType mismatch
Hi!
I've been fiddling a bit with cycle.js and cycle-fire recently and I couldn't seem to get the first example (sourcing and sinking a number) in your readme to work! The sourcing went fine, and manual changes to the database through the Firebase console got reflected immediately. Sinking, on the other hand, didn't propagate to Firebase. The number did not change, and no errors showed up.
Pardon my debugging, since my TypeScript skills are next to none. What I did was to augment both of the addListener calls in driver.js (yes, the generated JS file) like this (and I'm guessing there's a much better way of doing this...):
....addListener({
complete: function () {},
error: function (x) { console.error('ERROR', x) },
next: function () {},
});
When then clicking on the shuffle button, to trigger the set action, I got a message like the following:
{
code: "auth/argument-error",
message: "confirmPasswordReset failed: First argument "code" must be a valid string."
}
Looking at the enums and the ActionType declaration, I found that they share number designations:
AuthActionType[AuthActionType["ApplyActionCode"] = 0] = "ApplyActionCode";
AuthActionType[AuthActionType["CheckActionCode"] = 1] = "CheckActionCode";
AuthActionType[AuthActionType["ConfirmPasswordReset"] = 2] = "ConfirmPasswordReset";
...
ReferenceActionType[ReferenceActionType["Push"] = 0] = "Push";
ReferenceActionType[ReferenceActionType["Remove"] = 1] = "Remove";
ReferenceActionType[ReferenceActionType["Set"] = 2] = "Set";
Just changing these manually so that each number is unique fixes the issue and everything works as expected! Like I said, my TypeScript foo sucks, so I'm unable to provide you with a PR, but hopefully this will guide you in the right direction.
Looking forward to hearing from you!
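For illustration, the fix amounts to giving every action a unique value, for instance by collapsing the two enums into one; the sketch below uses made-up names, not cycle-fire's actual code:
// Hypothetical combined enum: each action now has a distinct numeric value,
// so a ReferenceActionType can no longer be confused with an AuthActionType.
enum ActionType {
  ApplyActionCode,      // 0
  CheckActionCode,      // 1
  ConfirmPasswordReset, // 2
  Push,                 // 3
  Remove,               // 4
  Set,                  // 5
}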
Hey, thanks a ton for the investigation. I've gone back and forth with how to type the actions properly, and made some assumptions about how TS transpiles union enums which have turned out to be incorrect. You stumbled upon the problem in a way I probably should have.
Anyway, I'll be trying some conversions and see which method actually works as I've intended and give updates here. Thanks again.
Alright, so I just renamed the base TS ActionTypes and collapsed them into a single enum to fix the problem in v2.3.0. Arguably a tad messier, but it fixes the problem.
Thanks again for the research and issue.
Sweet, works now!
| gharchive/issue | 2017-11-29T17:30:47 | 2025-04-01T06:39:13.885786 | {
"authors": [
"joshforisha",
"khueue"
],
"repo": "joshforisha/cycle-fire",
"url": "https://github.com/joshforisha/cycle-fire/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
439357440 | Add a removeAll method to the ToastManager to allow dismissing all active toasts
This is useful for things like top-level navigation or a big change in what is on screen, where toasts that might still be up pending a timer no longer make sense. If you save a resource and then navigate away to a totally different section of the app, the toast might be on top of something very different and weird. So, it's handy to have an imperative way to just dismiss everything to make sure users don't get that dissonance.
ping @jossmac any chance we could get this in & released?
ping @jossmac ?
@airhorns good idea, seems like it should live in "consumer land" though. I think this will be easier now with hooks:
const { removeToast, toasts } = useToasts()
const removeAllToasts = () => toasts.forEach(({ id }) => removeToast(id))
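A short usage sketch of that suggestion (component and handler names are made up, not part of the library):
import { useToasts } from 'react-toast-notifications';
function NavigateAwayButton({ onNavigate }) {
  const { removeToast, toasts } = useToasts();
  const handleClick = () => {
    // Dismiss every toast that is still on screen before changing context.
    toasts.forEach(({ id }) => removeToast(id));
    onNavigate();
  };
  return <button onClick={handleClick}>Leave this section</button>;
}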
| gharchive/pull-request | 2019-05-01T22:47:32 | 2025-04-01T06:39:13.936330 | {
"authors": [
"airhorns",
"jossmac"
],
"repo": "jossmac/react-toast-notifications",
"url": "https://github.com/jossmac/react-toast-notifications/pull/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1315907863 | Fix some problems in current implementation of public API
Code repetition is reduced
Access control is lifted for /v1/comments/x/y. Other routes are still restricted. That's nice.
(the embed still works cross-domain)
With this implementation, we will have to do 2 queries for each GET request. For each POST request, we also query the same document twice (the page, once to check whether it exists and once to get the auto-approve configuration). Quite wasteful.
For CORS, it would be better if we use dynamic origin, by attaching the origin after we query the page to check whether it exists. Not very sure whether this is necessary though.
For CORS, it would be better if we use dynamic origin, by attaching the origin after we query the page to check whether it exists. Not very sure whether this is necessary though.
Since the API is intended to be used everywhere, I think allowing all origins in all cases should be preferred.
The (old) boilerplate implementation only executes 1 query on each request to the public API, but I have to copy the code, which is not a good practice.
I will probably fix that using higher-order functions, for example:
const createComment = createCommentWith((t, pageRef) => getDocumentInTransaction<Page>(t, pageRef));
const publicCreateComment = createCommentWith((t, pageRef, siteId) => getPageInTransactionWithSiteId(t, pageRef, siteId));
@VietAnh1010 Wait, but doesn’t that mean /api/pages/:pid/comments (from which this code is written) is inefficient too? It is used as the GET route in the embed page so it will be used a lot more than this public API. I suggest fixing that too.
@joulev I should have clarified that "query" here means the number of requests we send to Firebase.
/api/pages/:pid/comments is efficient, as we only send 1 request to Firebase to get the comments.
For the public API, the current implementation sends 2 requests: one to check whether the page exists and one to get the comments. If I use a Transaction, I can group the two read operations into 1 request, which should improve the API's performance.
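A rough sketch of that transaction idea; the collection names and helper below are assumptions for illustration, not the actual ezkomment code:
import { firestore } from "firebase-admin";
async function getPageWithComments(db: firestore.Firestore, pageId: string) {
    return db.runTransaction(async t => {
        const pageRef = db.collection("pages").doc(pageId);
        // Both reads are grouped in one transaction instead of two independent requests.
        const pageSnap = await t.get(pageRef);
        if (!pageSnap.exists) throw new Error("page not found");
        const commentsSnap = await t.get(pageRef.collection("comments"));
        return { page: pageSnap.data(), comments: commentsSnap.docs.map(d => d.data()) };
    });
}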
| gharchive/pull-request | 2022-07-24T13:27:27 | 2025-04-01T06:39:13.948330 | {
"authors": [
"VietAnh1010",
"joulev"
],
"repo": "joulev/ezkomment",
"url": "https://github.com/joulev/ezkomment/pull/215",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
293560495 | Keep getting "'final_response' must be set" when testing Google Assistant
I got through the Hello World tutorial just fine, and thought I'd give the AdventureGame one a try. I'm using Google Assistant only (not Alexa) and have everything set up (at least well enough for the HelloWorld example to work!) but now whenever I try to use the Google Assistant simulator, I just get the response "My test app isn't responding right now. Try again soon."
and error message (under Validation Errors):
'final_response' must be set
Am I missing something? I have set up my BlueDoorIntent and RedDoorIntent as instructed, have the NodeJS server up and running, ngrok proxy URL put into the Fulfillment section on DialogFlow, set Fulfillment to "use webhook" on each Intent, etc.
Hi! Could you enable logging in the app configuration and tell me what the logs say? Also, is there any stack trace you could share with us? Thanks!
I'm not sure how to enable logging in the app configuration?
There is nothing going on in the app itself (the NodeJS part). Everything works as expected, including responding when I type in the "Try it now" section of DialogFlow. It's only when in Google Assistant simulator that I get this problem.
Hmm, yes, I've sometimes run into this issue where the Simulator isn't working as expected. It might be, for example, that another Action is still enabled for testing. You could also try to fix it by setting an invocation name for your app by filling out the draft information (it shows an error if you don't fill out all the information, but it should still save the invocation name).
Ahhhh... The invocation naming worked perfectly.
Perfect!
Thank you!
No problem! Feel free to join our community, if you have more questions: www.jovo.tech/slack
I'm getting the same error. Can you tell me how to set an invocation name?
| gharchive/issue | 2018-02-01T14:57:27 | 2025-04-01T06:39:13.952802 | {
"authors": [
"anselanza",
"jankoenig",
"reeju10"
],
"repo": "jovotech/jovo-framework-nodejs",
"url": "https://github.com/jovotech/jovo-framework-nodejs/issues/74",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1861526445 | Bravo! 👍
Hello,
A big bravo for this plugin! 👍 It's exactly what we were missing to work around the lack of compatibility between the (very) bad Aldes Connect app and HomeKit...
One question all the same... :) I dug through the files but couldn't see whether there is a way to switch the thermostats OFF via the Aldes API. It seems to me that the OFF option of a thermostat in the Home app shuts down the T.One entirely.
It would be great to then be able to link it to a window contact sensor, for example to cut the air conditioning/heating while airing out the rooms.
So to work around this "gap", I modified the thermostats so that they can be set up to 31, for example. That way I can raise a thermostat to 31 to cut the air conditioning in a room (I'll just have to remember to change the automation in winter, haha).
However, if I want to link it to a contact sensor, raising the thermostat when a window opens and thus cutting the air conditioning works, but I have no memory of the previously programmed temperature to restore the air conditioning when the window closes.
If a solution to that ever exists, I'm 100% interested.
And once again, thank you for this plugin, which is very useful to me every day!
Have a good day!
Sylvain
Thanks for the feedback 😁 glad to see it's useful
The off action on the thermostat in Home triggers mode A, which turns off the T.One's heating/air conditioning
Same via the "power strip" I also added, which allows finer control of the modes (cool boost, comfort, prog ABCD, ...) that correspond to the different heating and air-conditioning programs
At home I have a contact sensor on the sliding doors, and when they are open I trigger the button labelled off on the power strip, which turns off the heating or air conditioning
Hello Jean-Philippe,
To allow a range of 16 to 30 degrees on the thermostats (to cut the air conditioning in a room by setting the thermostat to 30), I had modified the file dist/ThermostatAccessory.js, line 25, changing the "maxValue" value to 30.
Unfortunately, this setting no longer works today. I modified the file again, but the thermostat's max value in the Home app stays at 26.
I did restart the RPi and the Homebridge service so that the change would be taken into account.
I must have forgotten something, but I can't see what.
Do you have any idea?
Thank you very much and have a good day!
Sylvain
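For reference, widening the HomeKit range described above is normally done through HAP-NodeJS characteristic props; this is only an illustrative sketch, not the plugin's actual code:
// Hypothetical excerpt: allow target temperatures from 16 to 30 °C.
this.service
  .getCharacteristic(Characteristic.TargetTemperature)
  .setProps({ minValue: 16, maxValue: 30, minStep: 0.5 });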
| gharchive/issue | 2023-08-22T13:58:11 | 2025-04-01T06:39:13.965612 | {
"authors": [
"jp-gouin",
"sylvain640"
],
"repo": "jp-gouin/aldes-homebridge-plugin",
"url": "https://github.com/jp-gouin/aldes-homebridge-plugin/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1369146295 | HTML
User Story
As a bootcamp student
I want the prework notes to be structured on a webpage
so that I can easily find and read information
## Acceptance Criteria
GIVEN a Prework Study Guide website
WHEN I visit the website in my browser
THEN I see four boxes titled HTML, CSS, GIT, and JavaScript with associated notes listed
Able to merge updated section of code.
| gharchive/issue | 2022-09-12T01:29:57 | 2025-04-01T06:39:13.967455 | {
"authors": [
"jpace2022"
],
"repo": "jpace2022/prework-study-guide",
"url": "https://github.com/jpace2022/prework-study-guide/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2190124548 | Add support for Umami Analytics
This commit adds support for Umami Analytics :sparkles:
As discussed and approved in #829 I'm sending you a small modification to the layouts/partials/analytics.html file that introduces support for Umami Analytics.
Umami Analytics ( https://umami.is ) is another analytics service that allows you to manage up to three domains and up to 10,000 hits per month for free.
To enable this feature, you have to add this parameter to your config file:
[umamiAnalytics]
site = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
region = "eu" # can be either "eu" or "us"
(Usually, the file you need to modify is located under config/_default/params.toml)
I will edit Congo documentation to add further information about using this feature by Sunday evening, March 16th.
Thanks for the PR, @fmaida! I'm happy to merge this in.
| gharchive/pull-request | 2024-03-16T16:30:26 | 2025-04-01T06:39:13.979767 | {
"authors": [
"fmaida",
"jpanther"
],
"repo": "jpanther/congo",
"url": "https://github.com/jpanther/congo/pull/832",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
129462898 | Ability to push static DNS servers to containers?
Much like how the IP and GW are able to be defined, can the same be accomplished with DNS servers?
I am running pipework and a container on top of unRAID and use it to provide static IPs to my containers. However, these containers also acquire their DNS servers from br0 (bridge on unRAID) which I have configured to use Google's DNS servers.
I would like a handful of my containers to use my VPN provider's DNS servers instead, otherwise I am vulnerable to DNS leaks. My workaround for now is to have unRAID use my VPN provider's DNS servers, but I would prefer not to do this.
NM...can do this with the Docker run line.
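For anyone else landing here, Docker's own --dns flag covers this on the run line; the addresses and names below are placeholders:
docker run -d --dns 10.8.0.1 --dns 10.8.0.2 --name mycontainer myimage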
| gharchive/issue | 2016-01-28T14:11:04 | 2025-04-01T06:39:14.006047 | {
"authors": [
"johnodon"
],
"repo": "jpetazzo/pipework",
"url": "https://github.com/jpetazzo/pipework/issues/188",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
144608111 | What values would you suggest to match randomColor.js?
Hi, this looks like a cool library - I found it after searching for Ruby alternatives to randomColour.js, and think it could just hit the spot for what I need.. although I'm looking for a little guidance.
The readme says I can specify the saturation and lightness I want for generating the colours, and looking at what the randomColour.js readme says:
...randomColor produces bright colors with a reasonably high saturation.
...suggests I should be able to get similar results from this library if I use the correct values - but I'm not sure which values I should be using when I instantiate the object.
I'm very new to playing with colours, so am actually a little confused by some of the terms used, so apologies if I'm asking silly questions.
Thank you.
The values go from 0 to 1, so the highest saturation is 1.0. It's worth learning more about colorspaces, especially HSV and HSL. In HSV, if you set :value to 1.0 you may still get a dark color (e.g. pure blue #0000ff is not light). So, I recommend using HSL with :lightness.
Hi, thank you for your help. I spent a bit of time playing with it last night, and was quite happy with the following:
ColorGenerator.new(saturation: 0.75, lightness: 0.4)
| gharchive/issue | 2016-03-30T14:22:04 | 2025-04-01T06:39:14.014230 | {
"authors": [
"LimeBlast",
"jpmckinney"
],
"repo": "jpmckinney/color-generator",
"url": "https://github.com/jpmckinney/color-generator/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1196634534 | DN0032-BG: Optimize process order
Optimized the flow of the procedure.
For now, this is fine.
| gharchive/issue | 2022-04-07T22:34:31 | 2025-04-01T06:39:14.051195 | {
"authors": [
"jpskenn"
],
"repo": "jpskenn/Nora",
"url": "https://github.com/jpskenn/Nora/issues/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
332872013 | make error
Hi,
I got the following error when executing make. Looks like the thread directory does not exist in Kaldi any more. Could you fix it? Thanks.
make -C bin
make[1]: Entering directory `/home/ubuntu/kaldi-decoders/bin'
g++ -std=c++11 -I.. -I/home/ubuntu/kaldi/tools/openfst/include -Wno-sign-compare -Wno-unused-variable -I/home/ubuntu/kaldi/src -Wall -Wno-sign-compare -Wno-unused-local-typedefs -Wno-deprecated-declarations -Winit-self -DKALDI_DOUBLEPRECISION=0 -DHAVE_EXECINFO_H=1 -DHAVE_CXXABI_H -DHAVE_ATLAS -I/home/ubuntu/kaldi/tools/ATLAS_headers/include -msse -msse2 -pthread -g -fPIC -DHAVE_CUDA -I/usr/local/cuda/include -DKALDI_NO_EXPF -c -o decode-lazylm-faster-mapped.o decode-lazylm-faster-mapped.cc
g++ -Wl,-rpath=/home/ubuntu/kaldi/tools/openfst/lib -rdynamic -Wl,-rpath=/home/ubuntu/kaldi/src/lib decode-lazylm-faster-mapped.o /home/ubuntu/kaldi/src/decoder/libkaldi-decoder.so /home/ubuntu/kaldi/src/lat/libkaldi-lat.so /home/ubuntu/kaldi/src/fstext/libkaldi-fstext.so /home/ubuntu/kaldi/src/hmm/libkaldi-hmm.so /home/ubuntu/kaldi/src/tree/libkaldi-tree.so /home/ubuntu/kaldi/src/util/libkaldi-util.so /home/ubuntu/kaldi/src/matrix/libkaldi-matrix.so /home/ubuntu/kaldi/src/base/libkaldi-base.so /home/ubuntu/kaldi/tools/openfst/lib/libfst.so /usr/lib/libatlas.so.3 /usr/lib/libf77blas.so.3 /usr/lib/libcblas.so.3 /usr/lib/liblapack_atlas.so.3 -lm -lpthread -ldl -o decode-lazylm-faster-mapped
g++ -std=c++11 -I.. -I/home/ubuntu/kaldi/tools/openfst/include -Wno-sign-compare -Wno-unused-variable -I/home/ubuntu/kaldi/src -Wall -Wno-sign-compare -Wno-unused-local-typedefs -Wno-deprecated-declarations -Winit-self -DKALDI_DOUBLEPRECISION=0 -DHAVE_EXECINFO_H=1 -DHAVE_CXXABI_H -DHAVE_ATLAS -I/home/ubuntu/kaldi/tools/ATLAS_headers/include -msse -msse2 -pthread -g -fPIC -DHAVE_CUDA -I/usr/local/cuda/include -DKALDI_NO_EXPF -c -o latgen-lazylm-faster-mapped.o latgen-lazylm-faster-mapped.cc
g++ -Wl,-rpath=/home/ubuntu/kaldi/tools/openfst/lib -rdynamic -Wl,-rpath=/home/ubuntu/kaldi/src/lib latgen-lazylm-faster-mapped.o /home/ubuntu/kaldi/src/decoder/libkaldi-decoder.so /home/ubuntu/kaldi/src/lat/libkaldi-lat.so /home/ubuntu/kaldi/src/fstext/libkaldi-fstext.so /home/ubuntu/kaldi/src/hmm/libkaldi-hmm.so /home/ubuntu/kaldi/src/tree/libkaldi-tree.so /home/ubuntu/kaldi/src/util/libkaldi-util.so /home/ubuntu/kaldi/src/matrix/libkaldi-matrix.so /home/ubuntu/kaldi/src/base/libkaldi-base.so /home/ubuntu/kaldi/tools/openfst/lib/libfst.so /usr/lib/libatlas.so.3 /usr/lib/libf77blas.so.3 /usr/lib/libcblas.so.3 /usr/lib/liblapack_atlas.so.3 -lm -lpthread -ldl -o latgen-lazylm-faster-mapped
make[1]: Leaving directory `/home/ubuntu/kaldi-decoders/bin'
make -C gmmbin
make[1]: Entering directory `/home/ubuntu/kaldi-decoders/gmmbin'
g++ -std=c++11 -I.. -I/home/ubuntu/kaldi/tools/openfst/include -Wno-sign-compare -Wno-unused-variable -I/home/ubuntu/kaldi/src -Wall -Wno-sign-compare -Wno-unused-local-typedefs -Wno-deprecated-declarations -Winit-self -DKALDI_DOUBLEPRECISION=0 -DHAVE_EXECINFO_H=1 -DHAVE_CXXABI_H -DHAVE_ATLAS -I/home/ubuntu/kaldi/tools/ATLAS_headers/include -msse -msse2 -pthread -g -fPIC -DHAVE_CUDA -I/usr/local/cuda/include -DKALDI_NO_EXPF -c -o gmm-decode-lazylm-faster.o gmm-decode-lazylm-faster.cc
g++ -Wl,-rpath=/home/ubuntu/kaldi/tools/openfst/lib -rdynamic -Wl,-rpath=/home/ubuntu/kaldi/src/lib gmm-decode-lazylm-faster.o /home/ubuntu/kaldi/src/decoder/libkaldi-decoder.so /home/ubuntu/kaldi/src/lat/libkaldi-lat.so /home/ubuntu/kaldi/src/fstext/libkaldi-fstext.so /home/ubuntu/kaldi/src/hmm/libkaldi-hmm.so /home/ubuntu/kaldi/src/feat/libkaldi-feat.so /home/ubuntu/kaldi/src/transform/libkaldi-transform.so /home/ubuntu/kaldi/src/gmm/libkaldi-gmm.so /home/ubuntu/kaldi/src/tree/libkaldi-tree.so /home/ubuntu/kaldi/src/util/libkaldi-util.so /home/ubuntu/kaldi/src/matrix/libkaldi-matrix.so /home/ubuntu/kaldi/src/thread/libkaldi-thread.so /home/ubuntu/kaldi/src/base/libkaldi-base.so /home/ubuntu/kaldi/tools/openfst/lib/libfst.so /usr/lib/libatlas.so.3 /usr/lib/libf77blas.so.3 /usr/lib/libcblas.so.3 /usr/lib/liblapack_atlas.so.3 -lm -lpthread -ldl -o gmm-decode-lazylm-faster
g++: error: /home/ubuntu/kaldi/src/thread/libkaldi-thread.so: No such file or directory
make[1]: *** [gmm-decode-lazylm-faster] Error 1
make[1]: Leaving directory `/home/ubuntu/kaldi-decoders/gmmbin'
make: *** [gmmbin] Error 2
Commit 088a6a9 should fix this. Thanks for reporting!
| gharchive/issue | 2018-06-15T18:23:01 | 2025-04-01T06:39:14.054388 | {
"authors": [
"jpuigcerver",
"yanyufish"
],
"repo": "jpuigcerver/kaldi-decoders",
"url": "https://github.com/jpuigcerver/kaldi-decoders/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
474938900 | Android import error
import io.dcloud.common.DHInterface.IWebview;
import io.dcloud.common.DHInterface.StandardFeature;
import io.dcloud.common.util.JSUtil;
These three classes fail to import in HBuilder.
These three classes are HBuilder classes belonging to libs/pdr.jar; please check whether it was imported successfully. Closing since this is not a JPush issue.
| gharchive/issue | 2019-07-31T05:07:04 | 2025-04-01T06:39:14.055706 | {
"authors": [
"JoshLipan",
"yoyo0926"
],
"repo": "jpush/jpush-hbuilder-demo",
"url": "https://github.com/jpush/jpush-hbuilder-demo/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
9331205 | Uppercase Text measures wrong
Plugin doesn't work properly when using uppercase text.
Actually, he is right. I am using uppercase text (css-transformed) and I run into the same issue. If I turn off the uppercase style it fits perfectly. If I don't, it doesn't fit.
| gharchive/issue | 2012-12-17T12:16:10 | 2025-04-01T06:39:14.095961 | {
"authors": [
"BabyDead",
"renestalder"
],
"repo": "jquery-textfill/jquery-textfill",
"url": "https://github.com/jquery-textfill/jquery-textfill/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
261685671 | "Required" message persists for date field
Checklist for this issue
If someone forgets to put in the date and then clicks submit, we get "Date of Visit is required" as expected, but after the date is populated the "required" message doesn't disappear as it does with every other field. I can still click submit and it works fine, but the project owner feels this may confuse people and wants it to disappear as it is supposed to when the field is filled out properly. I have 1.16 of jQuery.Validation in my NuGet packages; there are no updates.
The form validates and continues on even though this warning is there.
Subject of the issue
Required text does not disappear after field has been validated.
Your environment
1.16 (the latest that is in NuGet)
I can duplicate it on IE 11 and Chrome
Steps to reproduce
In my view I have this:
<div class="form-group">
@Html.LabelFor(m => m.DateOfVisit, new { @class = "form-label" })
@Html.TextBoxFor(m => m.DateOfVisit, new { @class = "form-control", type = "Date", placeholder = "MM/DD/YYYY", @Value = "" })
@Html.ValidationMessageFor(m => m.DateOfVisit, string.Empty, new { @class = "text-danger" })
</div>
In my model I have this
[Required(ErrorMessage = "Date of Visit is required")]
[DataType(DataType.Date)]
[Display(Name = "Date of Visit")]
public DateTime DateOfVisit { get; set; }
Expected behaviour
I expect after I type in the value that the "required" message goes away like it does with every other field.
Actual behaviour
The "required" message persists even after the field is filled.
This appears to be with the HTML5 date field. When using standard text type this validation works.
In Firefox, no matter what you enter, it always says "Please enter a valid date.",
even though I have no required or date: true rule applied to the field.
Thank you xiano8494, I kind of guessed that was the case after I did my workaround. I just added in some jQuery to hide the empty warning if the field is not empty.
If I change it to a text field instead of date does jquery validate have some sort of date picker or would we have to get a separate component for that?
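For completeness, the workaround mentioned above looks roughly like this sketch; it assumes the default ASP.NET MVC validation markup (data-valmsg-for), so adjust the selectors to your own form:
// Clear the stale "required" message once the HTML5 date field has a value.
$('input[name="DateOfVisit"]').on('change input', function () {
  if (this.value) {
    $('span[data-valmsg-for="DateOfVisit"]').empty();
  }
});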
In Firefox, it was a "date" class on the input field that was triggering the date rule.
| gharchive/issue | 2017-09-29T15:49:30 | 2025-04-01T06:39:14.103892 | {
"authors": [
"luckyali55",
"shellwe",
"xiano8494"
],
"repo": "jquery-validation/jquery-validation",
"url": "https://github.com/jquery-validation/jquery-validation/issues/2084",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
310000408 | No option to disable default error labels
There is no option to disable default error labels and use the existing ones present in forms.
Issue 1:
If I change errorElement to span, it does not pick existing error elements that are already there in form. For example: If I already have a span error label like below:
<input type="text" name="someName" />
<span class="error"></span>
In validate configuration:
$(form).validate({
...
errorElement: 'span'
});
It only works if errorElement is <label> and the one present in DOM has a for= attribute.
<input type="text" name="someName" />
<label for="someName" class="error"></label>
In validate configuration:
$(form).validate({
...
errorElement: 'label' // <-- This is default in plugin and works as expected
});
Issue 2
Alternatively, I would like to disable the default error labels feature and toggle existing labels with the help of the highlight and unhighlight callbacks. The only option that worked for me was to pass an errorPlacement option with an empty function, which again didn't work properly when the existing error elements were <label>s with a for= attribute.
$(form).validate({
...
errorPlacement: function () {} // Seems like a hack to me
});
I hope I have made the descriptions clear to understand.
Thanks in Advance!
Sachin
Can someone please address this thread?
Hi @scssyworks,
Sorry about that. I will reopen the issue and come back to you when I get some free time.
| gharchive/issue | 2018-03-30T07:46:39 | 2025-04-01T06:39:14.108878 | {
"authors": [
"Arkni",
"scssyworks"
],
"repo": "jquery-validation/jquery-validation",
"url": "https://github.com/jquery-validation/jquery-validation/issues/2157",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
10204738 | Doesn't push to Boxcar
I compiled and installed the module on my vps (centos 5) and configured it.
I got a notification in boxcar once that znc was added to my services.
However mentions / private messages don't get pushed to boxcar.
I didn't set "secret" because I couldn't find it on the Boxcar website, and the readme says to use subscribe instead. Do I have to set this too?
Hello,
I am trying to get this working with boxcar and I'm running into an issue. The steps I used were:
/msg *push set debug on
/msg *push set service boxcar
/msg *push set username myemail@domain.tld
/msg *push subscribe
When I try to subscribe I get a 403 Forbidden error.
I have tried both the email I signed up for boxcar with, as well as the notification email from boxcar settings and neither worked.
What am I missing?
Thanks!
It's possible that Boxcar has changed their API. I don't have a good way to test this service though, so I can't really help track down the solution. Sorry.
I left the 'username' and 'secret' field blank for BoxCar. But when I try to issue /msg *push subscribe it tells me "Error: username not set". What username am I supposed to put in? In BoxCar's app there is no username for me to fill in.
http://help.boxcar.io/support/solutions/articles/6000004813-how-to-send-a-notification-to-boxcar-for-ios-users
jreese that is their API guide. I guess that you have to use the API key from the app to auth now. Should not be so hard to implement this change?
| gharchive/issue | 2013-01-22T19:02:19 | 2025-04-01T06:39:14.147395 | {
"authors": [
"Bensge",
"DataRepository",
"bhaap",
"jreese",
"jruels"
],
"repo": "jreese/znc-push",
"url": "https://github.com/jreese/znc-push/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
366620801 | Add Zip extension method
#31 Add Zip method.
Hi @jreina,
Please review !
Your repo is quite fun and interesting ! I will keep contributing.
Thank you! Looking at this implementation, it looks like you've implemented the normal zip the way it is implemented in Lodash, but the System.Linq implementation implements it as Zip<T, U, V>(vals: Array<U>, ((x: T, y: U) => V)): Array<V>; which is more like _.zipWith in Lodash.
Can you implement the System.Linq version? It would be nice to have the current implementation as an overload if possible.
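For reference, the System.Linq-style signature described above boils down to something like this sketch (not the library's actual implementation):
function zip<T, U, V>(first: T[], second: U[], selector: (x: T, y: U) => V): V[] {
  // Stop at the shorter of the two sequences, like System.Linq's Enumerable.Zip.
  const length = Math.min(first.length, second.length);
  const result: V[] = [];
  for (let i = 0; i < length; i++) {
    result.push(selector(first[i], second[i]));
  }
  return result;
}
// Example: zip([1, 2, 3], ['a', 'b'], (n, s) => n + s) -> ['1a', '2b']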
@mahasak Any word on this? If you would like to implement it per the spec I can be patient, otherwise, I would like to free this up for other potential contributors.
| gharchive/pull-request | 2018-10-04T04:47:39 | 2025-04-01T06:39:14.149829 | {
"authors": [
"jreina",
"mahasak"
],
"repo": "jreina/ShittyLINQ.js",
"url": "https://github.com/jreina/ShittyLINQ.js/pull/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1214249634 | Access denied in windows 11
I want to update to the new version on my PC.
I downloaded it with wget (Windows), then extracted it into the previous directory.
PS C:\Portables\Tools> .\checkip.exe
Program 'checkip.exe' failed to run: Access is deniedAt line:1 char:1
+ .\checkip.exe
+ ~~~~~~~~~~~~~.
At line:1 char:1
+ .\checkip.exe
+ ~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
I searched for a clue about which file is denied, but no luck.
I have .checkip.yaml in the same directory; I also copied it to my user path: c:\users\yunus\.checkip.yaml
Do you have any idea how I can run it?
Hi @cdprf. I don't have access to Windows (I use Linux + Mac) but a quick googling gave me this https://stackoverflow.com/questions/69671055/program-a-exe-failed-to-run-access-is-deniedat-line1-char1. Are you maybe running some antivirus program?
Thanks for the info.
Yes, I guess it's related to some kind of security issue. I am using Windows Defender, and that folder was in the exclusion list.
Somehow it's still blocking it. Even if I disable the AV, I still cannot run it.
I installed McAfee, then scanned my PC, then removed it. I also tried resetting the Windows Defender settings completely, but no luck atm.
The current state of my console looks like this:
PS C:\Users\yunus> checkip.exe 172.64.146.***
C:\Portables\Tools\checkip.exe: Ping: socket: The requested protocol has not been configured into the system, or no implementation for it exists.
C:\Portables\Tools\checkip.exe: mkdir C:\Users\yunus\.checkip: Cannot create a file when that file already exists.
C:\Portables\Tools\checkip.exe: mkdir C:\Users\yunus\.checkip: Cannot create a file when that file already exists.
C:\Portables\Tools\checkip.exe: mkdir C:\Users\yunus\.checkip: Cannot create a file when that file already exists.
C:\Portables\Tools\checkip.exe: Tls: remote error: tls: handshake failure
db-ip.com Toronto, Canada iptoasn.com CLOUDFLARENET malicious 0% (0/2) ✅
Still not working, but I found the problem.
Microsoft's new 365 Defender stuff has an "Attack surface reduction" feature that blocks PowerShell windows, and that then blocks checkip.exe too.
Now I am trying to leave that Microsoft 365 thingy.
https://docs.microsoft.com/en-us/mem/intune/protect/endpoint-protection-windows-10#attack-surface-reduction-rules
thanks for the help.
| gharchive/issue | 2022-04-25T09:48:29 | 2025-04-01T06:39:14.154943 | {
"authors": [
"cdprf",
"jreisinger"
],
"repo": "jreisinger/checkip",
"url": "https://github.com/jreisinger/checkip/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
957755352 | [docker] Ability to add extra file to a distribution/repo
Use case: I'm trying to configure the docker publisher, but I'd like to add a README and an action.yml (to make it a GitHub Action docker repo).
I tried adding src/jreleaser/distributions/jbang/docker/README.tpl but still only got a Dockerfile.
Is there a way to add additional files for the distributions that update/modify a repo?
Note: I see there are things like ${docker.templateDirectory}/assembly, but they don't seem to support using templates?
Also, in the case of docker these files are not meant for the image but for the repo.
Any files placed under src/jreleaser/distributions/<jbang>/docker/ will be automatically copied to the prepare and package directories found in out/jreleaser[prepare|package]/distributions/<name>/docker/. These files are not added to the generated image. For example
src/
├── jreleaser
│ ├── distributions
│ │ └── app
│ │ └── docker
│ │ ├── README.adoc
│ │ └── README.md.tpl
Results in
If src/jreleaser/distributions/<jbang>/docker/assembly exists then any files placed there will be added to the generated image.
Templates will be evaluated for all files, which is something I find a bit troublesome specifically with binary files (such as images). Templates should be evaluated only for files ending with .tpl IMHO.
| gharchive/issue | 2021-08-02T05:15:15 | 2025-04-01T06:39:14.158666 | {
"authors": [
"aalmiray",
"maxandersen"
],
"repo": "jreleaser/jreleaser",
"url": "https://github.com/jreleaser/jreleaser/issues/330",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1359071766 | [core] Expose project copyright and description as flags in release auto-config
Follow up to #918
Properties to be added:
copyright
description
inceptionYear
stereotype
authors
Released in v1.3.0 -> https://github.com/jreleaser/jreleaser/releases/tag/v1.3.0
| gharchive/issue | 2022-09-01T15:55:39 | 2025-04-01T06:39:14.160917 | {
"authors": [
"aalmiray"
],
"repo": "jreleaser/jreleaser",
"url": "https://github.com/jreleaser/jreleaser/issues/937",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
171566368 | Latest commit to paperclip master breaks gem
https://github.com/thoughtbot/paperclip/commit/09d6bb7865c06de9e4b90328f507977ae6146814
@jeffblake Thanks. This should be now fixed on master
| gharchive/issue | 2016-08-17T03:27:40 | 2025-04-01T06:39:14.168329 | {
"authors": [
"jeffblake",
"morgoth"
],
"repo": "jrgifford/delayed_paperclip",
"url": "https://github.com/jrgifford/delayed_paperclip/issues/191",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1304242410 | Bit length, useful for ordinary numbers?
I wrote this in the explainer:
In contrast to other operations – such as abs and sqrt from the BigInt Math proposal – the new bit-length API would probably be useful only on BigInts and not on ordinary numbers.
Is this really true? One potential use case for ordinary-number bit lengths might be conveniently determining whether an ordinary number integer will fit within e.g. an Int16Array.
Of course, this would be complicated by what to do with non-integer numbers (something to do with IEEE floating-point representation? truncate? throw a TypeError?).
Either way, before or even during Stage 1, it is worth explicitly considering bit lengths for ordinary numbers.
If I needed it for an integer number, i'd do BigInt(number).bitLength. I don't think there's a sensible answer for non-integer Numbers, and it'd be very weird to have an accessor that throws on some Numbers and not others.
I agree that “properties should not throw” is a good principle, that .bitLength would probably be best as a property, and that BigInt(number).bitLength probably is the best general solution. However, bit length is not necessarily a property; it could still be a static function like Math.bitLength() that throws or truncates on non-integer numbers. For example:
Math.bitLength(255) // 1?
Math.bitLength(255.5) // ???
Math.bitLength(255n) // 1
Basically, I’m trying to think about how consistent we should be between this proposal, proposal-popcount (which I would definitely use on 8-, 16-, and 32-bit integer numbers), as well as maybe the existing Math.clz32:
Function | Useful for non-ints? | Useful for int numbers? | Useful for BigInts? | Non-int behavior
---|---|---|---|---
Bit length | No | ??? | Yes | ???
Math.popcnt()* | No | Yes | Yes | Truncation?
Math.clz32() | No | Yes | Yes | Truncation with ToUint32
* Might be renamed Math.popCount, Math.bitCount, Math.nonzeroBitCount, Math.countOnes, etc. Uncertain whether bit length would be explicitly specified in the method name, e.g., Math.popcnt32.
Is there a fundamental difference between bit length, popcount, and clz32 such that that we should make popcount and clz32 apply to integral ordinary numbers but not to bit length? (I’m thinking probably yes.) If so, then we should give bit length a special API that is unlike popcount or clz32.
In general, when JavaScript developers use the bits of integral ordinary numbers (e.g., from Int8Array), they already know the bit length that they are dealing with, (e.g., 8, 16, 32). If they do not know the narrowest word width that fits certain values, they can just coerce to a BigInt like you suggest (i.e., BigInt(i).bitLength) or use < with an exponent of 2 (e.g. i < 2 ** 8, i < 2 ** 16, i < 2 ** 32).
(It may be useful being able to determine whether a non-integer number fits in a 32- versus 64-bit IEEE floating-point representation, but I don’t really see a way to fit that into the same API as integers’ bit lengths.)
So I’m inclined to agree to keep bit lengths specific to BigInts and use BigInt(i).bitLength as necessary, but it’s probably still worth exploring in these early stages whether elegantly fitting in ordinary numbers is possible. Having said that, the answer is probably, “No, there’s no elegant way to fit ordinary numbers in”.
Either way, the committee has been very clear that they do not want things that throw on some numbers and not others, or else number/bigint mixtures would have worked much more intuitively.
Unless there's a reasonable answer for every number, then it doesn't belong there.
Some method like frexp from the C library can be useful for the Number type; it returns floor(log2(value)) + 1 (if I am not mistaken), which is something similar to bitLength.
The use case is reimplementing Math.sin/Math.exp/... in JS itself :-) using algorithms from fdlibm - you need to extract the exponent for this.
| gharchive/issue | 2022-07-14T04:36:20 | 2025-04-01T06:39:14.227876 | {
"authors": [
"Yaffle",
"js-choi",
"ljharb"
],
"repo": "js-choi/proposal-bigint-bit-length",
"url": "https://github.com/js-choi/proposal-bigint-bit-length/issues/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1717915124 | 🛑 Piwigo is down
In bda8907, Piwigo ($SERVER_BASE/piwigo/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Piwigo is back up in 9860d9e.
| gharchive/issue | 2023-05-19T22:49:16 | 2025-04-01T06:39:14.255061 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/1538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1867789765 | 🛑 API is down
In 46493c1, API ($SERVER_BASE/api/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API is back up in 81be0e3 after 547 days, 5 hours, 42 minutes.
| gharchive/issue | 2023-08-25T23:47:32 | 2025-04-01T06:39:14.257371 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/2239",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1868359511 | 🛑 Piwigo is down
In e4cbd07, Piwigo ($SERVER_BASE/piwigo/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Piwigo is back up in 49479b8 after 548 days, 12 hours, 14 minutes.
| gharchive/issue | 2023-08-27T05:09:41 | 2025-04-01T06:39:14.259519 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/2365",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2071372540 | 🛑 Tandoor is down
In be6fce4, Tandoor ($SERVER_BASE/recipes/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tandoor is back up in 07162b6 after 5 minutes.
| gharchive/issue | 2024-01-08T23:13:39 | 2025-04-01T06:39:14.261679 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/3960",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2072391974 | 🛑 Grafana is down
In 111f893, Grafana ($SERVER_BASE/grafana/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Grafana is back up in 0467887 after 10 minutes.
| gharchive/issue | 2024-01-09T13:40:35 | 2025-04-01T06:39:14.263794 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/3985",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2080507688 | 🛑 Linus-Wordpress is down
In e5ba5e3, Linus-Wordpress ($SERVER_BASE/wordpress/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Linus-Wordpress is back up in fb5cb9f after 11 minutes.
| gharchive/issue | 2024-01-13T23:13:49 | 2025-04-01T06:39:14.265963 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/4308",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
59319044 | Request: disallow spaces when indexing into array or object
Maybe there's an existing way to do this, that I've missed. But I would like to ensure that you can't do arr[ 0 ] or obj[ prop ]. However - I do have requireSpacesInsideArrayBrackets: all, so it shouldn't conflict with that
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/9035978-request-disallow-spaces-when-indexing-into-array-or-object?utm_campaign=plugin&utm_content=tracker%2F281640&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F281640&utm_medium=issues&utm_source=github).
This seems related to #923 which was closed by 2df4009. But it doesn't look like the feature was added as described.
So I think the desire here is to have a separate rule for array literals/instantiation from array access. That way spaces can be required (for instance) when creating array literals, but disallowed when accessing/indexing. Which doesn't seem to have been added by 2df4009
I think this is the same as #875 if I'm not mistaken?
| gharchive/issue | 2015-02-28T01:51:06 | 2025-04-01T06:39:14.270072 | {
"authors": [
"dmitrig01",
"hzoo",
"jasonkarns"
],
"repo": "jscs-dev/node-jscs",
"url": "https://github.com/jscs-dev/node-jscs/issues/1122",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
779502055 | 🧤 Gloves: provide an easier alternative to writing harnesses
As discussed here https://github.com/jscutlery/test-utils/issues/4 and here https://github.com/angular/components/issues/20871, component harnesses are easy to use but hard to implement.
It would be nice to provide another approach with the following constraints:
[ ] It should be easier than writing a harness,
[ ] It should be easier than hacking the DOM,
[ ] It should be framework agnostic,
[ ] 🤔 Should it be able to interact with the state?
I understand that the goal is:
to provide a way of using existing harnesses with Cypress
One of the key features of harnesses is the test environment abstraction and harness reuse through environments (TestBed, Protractor etc...) but if I am using TestBed and Cypress and if I can't reuse my harnesses with Cypress then it somewhat defeats the purpose of harnesses.
This means that what this library can control is:
What is in the test spec
Loading a harness
custom commands to make more readable tests and enable them to have proper types.
base abstractions for new test harnesses, e.g. default implementation of static with and shared functions like selecting elements
In regards to the below constraints and the goal:
It should be easier than writing a harness - we need to write harnesses in a cdk way and the only thing we can do to make it easier is abstractions and shared functions
It should be easier than hacking the DOM - to allow harnesses to be used outside of cypress they need to exclusively use the TestElement, i.e. the harness cannot use cy internally.
It should be framework agnostic - the harness class should be framework agnostic, but the instantiation of it should be framework specific, i.e. getHarness in this library is specifically getting a cypress version of the harness. Maybe there could also be a lighter weight instantiation for when you don't need the TestBed, e.g. in storybook and E2E tests.
Should it be able to interact with the state? What do you mean by state. In general, I think the state should only be changed by interacting with the UI as that is what you are testing.
Can you define what you think the library should have in terms of the:
base ComponentTestHarness class for abstractions and shared functions
custom commands to make more readable tests and enable them to have proper types
harness instantiation
| gharchive/issue | 2021-01-05T20:02:03 | 2025-04-01T06:39:14.280780 | {
"authors": [
"srlee309",
"yjaaidi"
],
"repo": "jscutlery/test-utils",
"url": "https://github.com/jscutlery/test-utils/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
101725166 | Possible tu use with Electron?
Possible to use webdriver-sync with Electron & chromedriver binary?
Usage of selenium-webdriver with Electron is basically the same as upstream, except that you have to manually specify how to connect the chrome driver and where to find Electron's binary:
selenium-webdriver provided a Node package for Electron with web driver.
An example
var webdriver = require('selenium-webdriver');
var driver = new webdriver.Builder()
// The "9515" is the port opened by chrome driver.
.usingServer('http://localhost:9515')
.withCapabilities({chromeOptions: {
// Here is the path to your Electron binary.
binary: '/Path-to-Your-App.app/Contents/MacOS/Atom'}})
.forBrowser('electron')
.build();
driver.get('http://www.google.com');
driver.findElement(webdriver.By.name('q')).sendKeys('webdriver');
driver.findElement(webdriver.By.name('btnG')).click();
driver.wait(function() {
return driver.getTitle().then(function(title) {
return title === 'webdriver - Google Search';
});
}, 1000);
driver.quit();
I'm not very familiar with Electron. Is that like an editor?
No. Electron is a framework that lets you write cross-platform desktop applications using JavaScript, HTML and CSS. It is based on io.js and Chromium. Electron uses web pages as its GUI, so you could also see it as a minimal Chromium browser, controlled by JavaScript.
If you can view the project in a browser, then you should be able to test it with webdriver-sync.
| gharchive/issue | 2015-08-18T18:52:00 | 2025-04-01T06:39:14.284286 | {
"authors": [
"LeMoussel",
"jsdevel"
],
"repo": "jsdevel/webdriver-sync",
"url": "https://github.com/jsdevel/webdriver-sync/issues/124",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2029336906 | Animated: useNativeDriver was not specified.
Receiving a warning when I swipe my listitem. Animated: useNativeDriver was not specified. This is a required option and must be explicitly set to true or false. This occurs when you are using an animation and not setting the native driver to true or false. Can you please update?
For example:
Animated.timing(animationState, { toValue: 1, duration: 250, useNativeDriver: true }).start()
I solved this by patching the following line in react-native-swipeable/lib/index.js:
useNativeDriver : true
| gharchive/issue | 2023-12-06T20:33:09 | 2025-04-01T06:39:14.301420 | {
"authors": [
"Ankur-Float",
"chakafasano88"
],
"repo": "jshanson7/react-native-swipeable",
"url": "https://github.com/jshanson7/react-native-swipeable/issues/136",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
11982226 | Option to assume strict mode (for nodejs)
Now that nodejs is es5 compliant it can be run in strict mode by default without including (function () { "use strict"; /* code here */ }());.
Will you add an option implicitstrict so that it's not required to have the boilerplate code?
Nearly two years since this was opened, is this actually implemented?
It's not yet implemented, a patch was refused because it was mixed with other unrelated stuffs.
Please, it would be nice to implement what @valueof had in mind: strict multivalue option
Any word on when we will have a release with this in it? I've been waiting for it for a while, and it looks like it landed in master.
@jugglinmike can tell you, I think one is due soon.
This gets very annoying with Babel, as it puts 'use strict'; in front of everything and explicit 'use strict' declarations are not necessary anymore. @lukeapage, could we have a new 2.8.1 release?
@jugglinmike is working hard to get the next release out. It has a lot of
work in it, and a few regressions were identified which slowed things down.
We hope it will be soon.
| gharchive/issue | 2013-03-13T16:44:48 | 2025-04-01T06:39:14.311370 | {
"authors": [
"adriengibrat",
"coolaj86",
"englercj",
"lukeapage",
"mik01aj"
],
"repo": "jshint/jshint",
"url": "https://github.com/jshint/jshint/issues/924",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
532609766 | Expose get_rates() function
This is necessary where we want to make sure we get the rates to be saved to DB without requesting them directly through the store itself (and therefore possibly get different data).
Coverage remained the same at 100.0% when pulling dfe88b52d81688331d6a74abe57f759c1adaa8b1 on strzibny:expose-rates into 543c624c355e948b9215057c17197151a1f94552 on jshmrtn:master.
@strzibny Released as v1.0.0-alpha.2.
| gharchive/pull-request | 2019-12-04T11:00:34 | 2025-04-01T06:39:14.313664 | {
"authors": [
"coveralls",
"maennchen",
"strzibny"
],
"repo": "jshmrtn/currency-conversion",
"url": "https://github.com/jshmrtn/currency-conversion/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1433721344 | Allow for jump through for bastion host before install
Hi there, loving hashi-up, just wondering if it has the ability to install via a jump host. If so, is there any documentation regarding it?
Thanks
Hi @octdanb, thanks for your interest in hashi-up
Installation via a jump host is not supported, but I have a workaround to achieve this:
I'm sure the ssh client will use your SSH config, so you should be able to create a connection with local port forwarding, eg:
My SSH config:
Host 192.168.5.101
ProxyJump ubuntu@192.168.5.1
First bring the SSH service of the target to your local machine:
ssh -L 2222:localhost:22 ubuntu@192.168.5.101
Now you can install nomad with hashi-up, using the local port:
hashi-up install nomad --ssh-target-addr localhost:2222 ...
| gharchive/issue | 2022-11-02T20:08:05 | 2025-04-01T06:39:14.316945 | {
"authors": [
"jsiebens",
"octdanb"
],
"repo": "jsiebens/hashi-up",
"url": "https://github.com/jsiebens/hashi-up/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
183586288 | Filter expression returns an array; can not use [index] to get an item inside
e.g. I want to get the price for the book "Sayings of the Century"
👍 $.store.book[?(@.title=='Sayings of the Century')] will return a book array
👍 $.store.book[?(@.title=='Sayings of the Century')].price will return a price array
😂 $.store.book[?(@.title=='Sayings of the Century')][0] will return an empty array
😂 $.store.book[?(@.title=='Sayings of the Century')].price[0] will return an empty array
I think $.store.book[?(@.title=='Sayings of the Century')][0] should return a book
$.store.book[?(@.title=='Sayings of the Century')].price[0] should return a price
Somehow I have a similar issue here: https://github.com/openhab/openhab/issues/4768
Is there any way to address the item after a filtering has been done?
I found out a possible solution:
$.store.book[?(@.title=='Sayings of the Century')].price.min()
would take the one value from the list...
A path must point to something in the document. That is the case for:
'''$.store.book[?(@.title=='Sayings of the Century')]'''
but not with:
'''$.store.book[?(@.title=='Sayings of the Century')][0]'''
where the [0] actually is expected to be applied to the result of the path evaluation. I agree that this would useful in many situations but it should not be confused with the actual path.
My 2 cents.
This is a workaround using the read method on the result of the filter:
String filterResult = JsonPath.read(fullJson, "$.store.book[?(@.title == 'Sayings of the Century')]").toJSONString();
Double price = JsonPath.read(filterResult, "$[0].price");
Hope it could help until we will be able to do sort of:
$.store.book[?(@.title=='Sayings of the Century')][0].price
Any chance of getting a way to support this?
@jochenberger whats your thoughts on this?
That's a tough one. It's apparently not part of the original JsonPath spec and is not supported on any of the implementations.
I have a similar use case in the project I use JsonPath for and I have decided to create helper methods findAll(object, path) and findFirst(object, path) where the latter calls JsonPath.parse(object).limit(1).read(path) and returns an appropriate response.
I'd say we should stick to the spec and not support this, but it's not a very strong opinion.
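A minimal sketch of such findAll/findFirst helpers, assuming the Jayway API as quoted above (the limit(1) call is the one mentioned in that comment; return types are simplified to Object rather than generics):
import com.jayway.jsonpath.JsonPath;
import java.util.List;

public final class JsonPathHelpers {

    // Indefinite paths (e.g. ones containing a filter) always evaluate to a list.
    public static List<Object> findAll(Object json, String path) {
        return JsonPath.parse(json).read(path);
    }

    // Read at most one match and unwrap it, or return null when nothing matched.
    public static Object findFirst(Object json, String path) {
        List<Object> matches = JsonPath.parse(json).limit(1).read(path);
        return matches == null || matches.isEmpty() ? null : matches.get(0);
    }
}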
I would agree that going off spec is not the best idea, because then you have an excuse to just add anything even if it deviates from the spec. Maybe custom functions or some sort of extension capability which is separate from the base project (which is pure spec).
What are the contribution guidelines in terms of "accepting any terms" or processes? I thought I might try experimenting.
I don't think there are any terms. Adding tests is a good way to get PRs merged, so is not breaking existing ones. ;-)
#243 seems related btw.
And there's also #191 and #197.
Struggling with the same, and @kallestenflo, I have a hard time understanding your argument:
A path must point to something in the document. That is the case for:
$.store.book[?(@.title=='Sayings of the Century')]
but not with:
$.store.book[?(@.title=='Sayings of the Century')][0]
where the [0] actually is expected to be applied to the result of the path evaluation. I agree that this would useful in many situations but it should not be confused with the actual path.
The result of the path operation after the filter is a JSONArray, so at that point the document is a JSON array, e.g. with 1 element.
So now this is a new document and [0] points into the first element of this new intermediate document.
So why disallow such functionality? Where is it not in line with 'the JSON path spec'?
After each '.' in a JSON path a new intermediate document results, and the next operator applies to that, at least that is how I understand the concept of a 'path operation'.
Compare with XPath: it works exactly like that, with some added magic to deal with resulting arrays.
For example I'd expect
$.*[?(@.name=='MyMongoDb')][0].credentials.hostname
to return the same as
$.*[?(@.name=='MyMongoDb')].credentials.hostname[0]
or $.*[?(@.name=='MyMongoDb')].credentials[0].hostname
This seems like such essential path-operation logic that I cannot understand why all related cases get closed quickly and people are building their own experimental forks to work around the lack of functionality.
@jochenberger is there any support on this? Filtering should not lead to a non-accessible array.
The problem is getting old and a solution has not been found yet in which this is handled within the JSONPath call itself, without additional script functions.
Good to see I am not the only one struggling with this. I would expect such a filter to return whatever the content type is, not forced into a single-result array.
My workaround is to parse the one result to a string and either add the following to the assert somewhere:
expectedResult = "[\"" + expectedResult + "\"]"
or strip the last and first two characters from the String returned but somehow I feel that is worse.
Is it because you stay with the content type list and filter inside that? In which case I would have to say I see why you went with a single entry list and I will alter my approach to use something like what I found at @fhoeben 's commit and use result.get(0)
Yeah, it all makes a lot more sense now.
Still would like it to return the object type of the actual object referenced to.
How to get the array name?
My json data is as below
{
"store": {
"book": [
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95
},
{
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99
},
{
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99
},
{
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99
}
],
"bicycle": {
"color": "red",
"price": 19.95
}
},
"expensive": 10
}
I am looking for the list of store keys only, i.e. in the above case the output should be just
"book"
"bicycle"
For this, what should the JSON path be?
Also, I would like to apply a filter in it, e.g. price should be greater than 1.
I know that this is not the same as being able to select any given element, but I think a lot of people end up here because they are looking for a way to select the first (or maybe last) element of the resulting array from an applied filter. Therefore I don't think we should necessarily go for:
$.store.book[?(@.title=='Sayings of the Century')][0]
And expect a specific element of the array, because square brackets, for JSON path, is either square bracket notation or applying a filter (and there's nothing in the spec that specifically covers this use case where we want to mix both).
I would argue the closest way to adhere to the spec is to add some more methods, called after the filter, the same way we do for .min() and .max() - except they are not tied to the values of the array, but instead the elements, namely: .first() and .last().
It's not as powerful but could be easier to implement and resolve a number of people's issues?
I'm having problems understanding how come a filtered array is not an array, that is without any knowledge of library internals.
Any update on this issue?
Hey!
What is workaround to get this work in a single path selector?
Apparently there is no workaround in a single path selector. The workaround is to read in 2 stages.
I had to build my own parser because of this issue, it was surprisingly easy with ANTLR4
@zakjan any chance you published that parser? Maybe others could benefit also.
Yeah, I'll try to extract it and share
Hi @zakjan I'd be interested in seeing this too if you don't mind sharing? Thanks in advance.
I like @apocheau's workaround suggestion (it's less hacky) but I think this might also be a good one:
List<Map<String, Object>> filterResult = JsonPath.read(fullJson, "$.store.book[?(@.title == 'Sayings of the Century')]");
then
Double price = (Double) filterResult.get(0).get("price");
My workaround for kotlin applications is to extend DocumentContext with a function to read a String directly, like this:
private fun DocumentContext.readString(path: String): String =
    this.read<List<String>>(path)[0]
Then it can be used like this:
val singleValue: String = JsonPath.parse(myJsonString).readString("$.sampleArray[0]")
@fhoeben @bhreinb Sorry for late response. My parser is already published at https://github.com/zakjan/objectpath . It supports more advanced cases, might be too complex for general use cases. Feel free to use it as a reference for building your own parser.
Almost 5 years people have been struggling with this and unfortunately there is no progress here :( Too sad :(
This is really an issue for us also.
We migrated a project from an older version of the library "json-path-0.8.1.jar" to "json-path-2.4.0.jar".
In 0.8.1 the result was not wrapped in [ ]. Why is it now??
I know that this is not the same as being able to select any given element, but I think a lot of people end up here because they are looking for a way to select the first (or maybe last) element of the resulting array from an applied filter. Therefore I don't think we should necessarily go for:
$.store.book[?(@.title=='Sayings of the Century')][0]
And expect a specific element of the array, because square brackets, for JSON path, is either square bracket notation or applying a filter (and there's nothing in the spec that specifically covers this use case where we want to mix both).
I would argue the closest way to adhere to the spec is to add some more methods, called after the filter, the same way we do for .min() and .max() - except they are not tied to the values of the array, but instead the elements, namely: .first() and .last().
It's not as powerful but could be easier to implement and resolve a number of people's issues?
Of all the solutions mentioned, I like this solution the best as it doesn't break the original spec
just a suggestion on the jsonpath syntax:
$.store.book[?(@.title=='Sayings of the Century')] will return a book array
$.store.book[?(@.title=='Sayings of the Century')[0]] will return the first book item
$.store.book[?(@.title=='Sayings of the Century')].price will return a price array
$.store.book[?(@.title=='Sayings of the Century')[0]].price will return the first price
it should not break compatibility to existing syntax.
Another workaround to get arrays, objects or values as string:
JsonPath.parse(BOOKS).read("$.store.book[?(@.title=='Sayings of the Century')]", List.class).get(0).toString();
I've just come across this issue as well. I'd assumed that JSONPath was the JSON version of XPath for XML documents.
Not being able to index the result of a filter is quite a pain. This ticket has been open for 7 years now. Any prospect of it happening?
I encountered exactly the same issue as discussed on this page. Because the order in my response array under test is pretty complex, I want to test the individual entries for the correct values based on the unique "token" field of each response array element.
Of course, after filtering I expect a single element (if zero or more than one are found, the test should fail) and then verify some fields based on this single element. But as discussed in this post, the filter actually results in a single-element array. There is no way to access this by using [0] or a firstElement function.
I solved it by also putting the value to test against in an array, using the Java List.of() function. It does not look very nice but it is straightforward and does not need any scripts or custom functions:
Result after filtering:
[
{
"token": "6064364892108791641",
"productId": 390403,
"importStatus": "SUCCESS",
"cardPrintDate": "2022-10-05T21:46:07.270792",
"expirationDate": null,
"importErrorCode": "",
"importErrorMessage": ""
}
]
And in the unit test:
.andExpect(jsonPath("$[?(@.token == '6064364892108791641')].importStatus", equalTo(List.of("SUCCESS"))))
And this works for now.
I would argue the closest way to adhere to the spec is to add some more methods, called after the filter, the same way we do for .min() and .max() - except they are not tied to the values of the array, but instead the elements, namely: .first() and .last().
It's not as powerful but could be easier to implement and resolve a number of people's issues?
It would not help to paginate the array, would it? Custom functions are evil. The array from the filter expression should work like any other array. I don't see anything in the spec contradicting it and saying "filter expression can not be composed with other operations".
did you find any workaround?
This issue is still open. 8 years, incredible!
| gharchive/issue | 2016-10-18T04:13:32 | 2025-04-01T06:39:14.377264 | {
"authors": [
"AntonioDell",
"CMoH",
"DinoChiesa",
"GrayedFox",
"Klaas68",
"RamakrishnanArun",
"amiduai",
"apocheau",
"bhreinb",
"consultantleon",
"fhoeben",
"gauravphoenix",
"genezx",
"gideonaina",
"gitdode",
"gsambasiva",
"gyk001",
"ivan-kleshnin",
"jhlweb",
"jochenberger",
"kallestenflo",
"keetron",
"kohlsalem",
"mredeker",
"nanonull",
"okainov",
"pahaderajesh",
"v-mwalk",
"zakjan",
"zhavir"
],
"repo": "json-path/JsonPath",
"url": "https://github.com/json-path/JsonPath/issues/272",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1853863127 | readthedocs: specify build.os
readthedocs deprecated build.image. We need to specify the OS used by the build these days.
Related-to: https://blog.readthedocs.com/use-build-os-config/
I don't have a good way to test this locally so I'm kinda editing this blind. Hopefully this works ;-)
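For reference, a minimal sketch of the new-style config this change points at (the exact OS and tool versions here are illustrative, not necessarily what this repo uses):
# .readthedocs.yaml
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
sphinx:
  configuration: docs/conf.py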
LGTM!
| gharchive/pull-request | 2023-08-16T20:29:59 | 2025-04-01T06:39:14.400075 | {
"authors": [
"Theelx",
"davvid"
],
"repo": "jsonpickle/jsonpickle",
"url": "https://github.com/jsonpickle/jsonpickle/pull/462",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
77702640 | Remove unused files
Removing a few files that don't seem to be used anymore.
Gonna re-run the build once I've merged #59 which should fix a few things.
:+1: nice work
| gharchive/pull-request | 2015-05-18T15:36:01 | 2025-04-01T06:39:14.401081 | {
"authors": [
"PeterDaveHello",
"ThomWright"
],
"repo": "jsonresume/registry-server",
"url": "https://github.com/jsonresume/registry-server/pull/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
66987596 | 2015-06-10 Practical Open Linked Data Mega Meetup
Last year there was a really good talk at UXOxford about the importance of linked data. It would be great to follow that up with talks around the practicalities of consuming and presenting open linked data sources.
This theme definitely lends itself to being a super mega meetup. OxRug are onboard (whoop), @LuRsT would you guys be keen to get pythonistas involved?
Date
Wednesday 10th June 2015, 7:30
Speakers
TBA, Leigh Dodds from ODI is potentially keen, @spikeheap will touch base and see if these dates work.
We could do with another (preferably local) speaker too.
Core actions
[ ] Confirm sponsorship for snacks/drinks (@spikeheap)
[ ] Confirm venue (@benfoxall, as it's super-mega, we might want to consider a bigger venue for this one...)
[ ] Organise speakers
Event organisation
[ ] Ensure someone has carried out the actions above
[ ] Add to MeetUp.com (just prior to preceding event)
[ ] Announce event
We sure are!
:+1:, with unicorns and rainbows and such.
I've added the draft Meetup.com event: http://www.meetup.com/JSOxford/events/221686847/
Happy to fill the gap if needed.
"Tom Talks About Trains Again" - running draft title.
This happened.
| gharchive/issue | 2015-04-07T20:50:55 | 2025-04-01T06:39:14.407140 | {
"authors": [
"LuRsT",
"spikeheap",
"tomlane"
],
"repo": "jsoxford/jsoxford.github.com",
"url": "https://github.com/jsoxford/jsoxford.github.com/issues/94",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
784907922 | fix: columns header title attribute use wrong attr
https://github.com/jspreadsheet/jexcel/commit/0a3c6a82ac7ba092106559393804d146751bae28#comments
It has been merged. Thanks
| gharchive/pull-request | 2021-01-13T08:45:17 | 2025-04-01T06:39:14.417982 | {
"authors": [
"hodeware",
"klren0312"
],
"repo": "jspreadsheet/ce",
"url": "https://github.com/jspreadsheet/ce/pull/1287",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
894372610 | My updates
Hello all,
I have been busy coding and I must say that I really enjoy writing with jsPsych. It is easy and fun.
I have put together scripts for delivering questionnaires. I have a setup file for each questionnaire that contains the questions and scoring info. The only thing that needs to be changed in the HTML is which setup file to import. The scoring of the questionnaire is also performed and saved in the output file. I have completed the CFI, CFS, PANAS, STAI and AES. These are easy to make so I will add more later. By having the text and the "guts" separate it will be "easy" to have the language translated since someone does not have to dig through any code to get to the text.
I have the Baddeley's grammatical reasoning test completed.
I have an attentional network task done (This is like a complicated flankers task).
I have my own staircase procedure task, I know this will be at least the third version available.
I am also working on a verbal delayed match to sample task.
I have first drafts completed on these so far. Once I do some more internal testing I will make a pull request. They are all in my fork if anyone is interested right now.
Take care,
Jason
Hi Jason.
Sounds great. A few comments:
Is there any way you can do separate pull requests for each of these?
Is the attentional network task a Posner cueing task, or do you mean something else by that? I have a student working on the Posner cueing task is why I ask.
Best,
Peter
Is the staircase procedure just your version of using a staircase to measure thresholds?
oh and it would be great if you could raise issues labeled "DemoTaskDevelopment" for each of these, just so we are following the procedure we are trying to enforce.
Yes, I will try my best to do separate pull requests for these once I feel they are good enough.
The ANT task is the one discussed here: Fan, J., McCandliss, B. D., Sommer, T., Raz, A., & Posner, M. I. (2002). Testing the efficiency and independence of attentional networks. Journal Cognitive Neuroscience, 14(3), 340-7.
The staircase procedure is for measuring thresholds. I tried to use: https://github.com/hadrienj/StaircaseJS but I could not figure out how to use it. I did get to learn how to use objects in javascript by making my own. So that is good.
I cannot see how I can assign a label to an issue, sorry.
well, raise the issues and I will assign the labels.
I think any type of staircase experiment would be good for this library.
I wonder if we could somehow merge different versions of the Posner cueing experiment into one experiment on here?
Fine with me. If your student wants to take what I have and make it work best for you, that is fine. I have all the functionality complete along with the coding of the trial types for the results file. All that is left is setting the trial/ITI timing so that it matches the original article. It was a pain finding the best arrows and asterisks to use. What I have also been doing for different tasks is using tables to align multiple items on the screen. I make functions like "PutStuffIntoTable" that take some items and arrange them in a table. And if the margins and layout of the tables are set correctly then things show up in the same spot on the screen for every trial. That I think has been one of the most challenging things.
@steffejr you may want to explore a combination of relative and absolute CSS positioning to get items in particular locations on the screen.
Here's an example posner task stimulus that I made:
{
type: 'html-keyboard-response',
stimulus: `<div style="position: relative; width:100vw; height: 100vh;">
<div style="font-size: 100px; position: absolute; top: 50%; left: 25%; transform: translate(-50%, -50%); width: 200px; height: 200px; border: 1px solid #555;">
</div>
<div style="font-size: 100px; position: absolute; top: 50%; left: 75%; transform: translate(-50%, -50%); width: 200px; height: 200px; border: 1px solid #555;">
<p style="line-height: 200px; margin: 0;">🔵</p>
</div>
<div style="position: absolute; bottom: 10%; width: 100%;">
<p>If the circle appears in the right box, press P.</p>
<p>Press P to continue.</p>
</div>
</div>`,
choices: ['p']
},
My full experiment is here:
https://github.com/jodeleeuw/219-eyetrack-example-full
| gharchive/issue | 2021-05-18T13:18:00 | 2025-04-01T06:39:14.426854 | {
"authors": [
"jodeleeuw",
"pjkohler",
"steffejr"
],
"repo": "jspsych/experiment-demos",
"url": "https://github.com/jspsych/experiment-demos/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
817794506 | jsPsych.pluginAPI.compareKeys handling null keys
Hi, just trying out jsPsych 6.3.0 (great update btw), and noticed that jsPsych.pluginAPI.compareKeys fails when one of the keys is null. Would you consider allowing null inputs? So the function would return false if any one input is null and true if both are null.
I'm handling null responses now as follows:
if (data.response != null) {
data.hit = jsPsych.pluginAPI.compareKeys(data.response, data.correct_response);
} else {
data.hit = false;
}
Hi @klanderson, I've now changed this on the master branch version of jspsych.js, so jsPsych.pluginAPI.compareKeys will handle null values in the next release.
As of right now there haven't been any other changes to the master branch jspsych.js file since the 6.3.0 release on Feb 21. This means you could replace your v6.3.0 jspsych.js file with the master branch version to get this new functionality, without risking any compatibility issues. If you do this, it might be a good idea to double-check the jspsych.js commit history before downloading, to see if there have been any other changes made in the meantime.
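With that change, the workaround above should reduce to a single call (sketch based on the behaviour described in this thread):
// compareKeys on master handles null responses itself, returning false when only one side is null
data.hit = jsPsych.pluginAPI.compareKeys(data.response, data.correct_response);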
Thank you!
| gharchive/issue | 2021-02-27T02:08:08 | 2025-04-01T06:39:14.430650 | {
"authors": [
"becky-gilbert",
"klanderson"
],
"repo": "jspsych/jsPsych",
"url": "https://github.com/jspsych/jsPsych/issues/1577",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
261353660 | push git://github.com/jstrachan-testing/ngx-base.git master
UpdateBot pushed version changes from the source code in repository: git://github.com/jstrachan-testing/ngx-base.git ref: master
UpdateBot commands:
updatebot push --ref master git://github.com/jstrachan-testing/ngx-base.git
| gharchive/pull-request | 2017-09-28T15:07:15 | 2025-04-01T06:39:14.439105 | {
"authors": [
"jstrachan-testing"
],
"repo": "jstrachan-testing/ngx-fabric8-wit",
"url": "https://github.com/jstrachan-testing/ngx-fabric8-wit/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1416014801 | [Feature]: Add loader to pages
Feature Title
Add loader
Description
Whenever we switch between the pages it takes time, so I want to add a loader.
Motivation
Whenever we switch between the pages it takes time, so I want to add a loader.
Issue Type
Frontend
Screenshot of the feature if done ?
No response
Are you taking this fetaure implementaion
yes
hey @jsvigneshkanna Can I add loader ?
Sure @kuldeepmangla , great idea
You can work on this issue
closing as done @kuldeepmangla ,
Check other issues like https://github.com/jsvigneshkanna/tailwind_ui_components/issues/40 https://github.com/jsvigneshkanna/tailwind_ui_components/issues/53 https://github.com/jsvigneshkanna/tailwind_ui_components/issues/54
| gharchive/issue | 2022-10-20T06:02:48 | 2025-04-01T06:39:14.442946 | {
"authors": [
"jsvigneshkanna",
"kuldeepmangla"
],
"repo": "jsvigneshkanna/tailwind_ui_components",
"url": "https://github.com/jsvigneshkanna/tailwind_ui_components/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1773319693 | [Bug]: semver dep security vulnerability
Is there an existing issue for this?
[X] I have searched the existing issues and my issue is unique
[X] My issue appears in the command-line and not only in the text editor
Description Overview
When installing package using npm, audit fails with:
$ npm audit
# npm audit report
semver <7.5.2
Severity: moderate
semver vulnerable to Regular Expression Denial of Service - https://github.com/advisories/GHSA-c2qf-rxjj-qqgw
fix available via `npm audit fix --force`
Will install eslint-plugin-react@7.25.3, which is a breaking change
node_modules/semver
eslint-plugin-react 7.19.0 || >=7.26.0
Depends on vulnerable versions of semver
node_modules/eslint-plugin-react
2 moderate severity vulnerabilities
To address all issues (including breaking changes), run:
npm audit fix --force
Running npm audit fix --force downgrades to eslint-plugin-react@7.25.3 :eyes:
Expected Behavior
No security vulnerabilities.
eslint-plugin-react version
7.32.2
eslint version
8.43.0
node version
18.16.1
It’s not a vulnerability here - like most transitive dep CVEs, it’s a false positive - and we can’t upgrade because v7 drops support for engines we need to support.
The babel team has backported the fix to semver v6, can we use that? https://github.com/babel/babel/pull/15742
I’d really rather not use a fork if we can avoid it.
| gharchive/issue | 2023-06-25T14:32:03 | 2025-04-01T06:39:14.447061 | {
"authors": [
"1EDExg0ffyXfTEqdIUAYNZGnCeajIxMWd2vaQeP",
"AviVahl",
"ljharb"
],
"repo": "jsx-eslint/eslint-plugin-react",
"url": "https://github.com/jsx-eslint/eslint-plugin-react/issues/3589",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
841593776 | NtCreateThreadEx crashes the process
I have spent quite a few hours debugging and finally nailed down why process injection is failing. The call to NtCreateThreadEx using SysWhispers2 isn't really working for me. The process crashes as soon as the thread is injected.
My code is very simple: I open the process using the PID, allocate virtual memory, and then create the remote thread with NtCreateThreadEx. I have ported all the calls from high-level APIs to SysWhispers, but when I call NtCreateThreadEx, the process crashes. When I just call CreateRemoteThread directly, it works fine.
This is how I am calling the function
NtCreateThreadEx(&hThread, GENERIC_EXECUTE, NULL, process_handle, pointer_after_allocated, pointer_after_allocated, FALSE, NULL, NULL, NULL, NULL);
I am trying to follow this tutorial but in my code, I take PID to inject.
https://sevrosecurity.com/2020/04/08/process-injection-part-1-createremotethread/
replace this with the following.
def _get_function_asm_code(self, function_name: str) -> str:
function_hash = self._get_function_hash(function_name)
# Generate 64-bit ASM code.
code = ''
code += f'{function_name} PROC\n'
code += '\tmov [rsp +8], rcx ; Save registers.\n'
code += '\tmov [rsp+16], rdx\n'
code += '\tmov [rsp+24], r8\n'
code += '\tmov [rsp+32], r9\n'
code += '\tsub rsp, 28h\n'
code += f'\tmov ecx, 0{function_hash:08X}h ; Load function hash into ECX.\n'
code += '\tcall SW2_GetSyscallNumber ; Resolve function hash into syscall number.\n'
code += '\tadd rsp, 28h\n'
code += '\tmov rcx, [rsp +8] ; Restore registers.\n'
code += '\tmov rdx, [rsp+16]\n'
code += '\tmov r8, [rsp+24]\n'
code += '\tmov r9, [rsp+32]\n'
code += '\tmov r10, rcx\n'
code += '\tsyscall ; Invoke system call.\n'
code += '\tret\n'
code += f'{function_name} ENDP\n'
return code
Do you have a working VS project? I have tried the new code, and it seems like the syscalls are failing. For example, I use the high-level API to allocate memory and copy the code, but call the SysWhispers equivalent of CreateRemoteThread to create a thread, and I can see in the debugger that the thread is not created. I am not sure why these syscalls aren't working for me. I am not even sure whether the NtWriteVirtualMemory or NtAllocateVirtualMemory syscalls are working for me.
Can you upload some code for reviewing? It works for me. However, it may be a misaligned data structure or stack.
void
InjectDll(const HANDLE hProcess, const char* dllPath)
{
HANDLE hThread = NULL;
LPVOID lpAllocationStart = NULL;
SIZE_T szAllocationSize = lstrlenA(dllPath);
LPVOID lpStartAddress = GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");
NTSTATUS Status;
Status = NtAllocateVirtualMemory(
hProcess,
&lpAllocationStart,
0,
&szAllocationSize,
MEM_COMMIT | MEM_RESERVE,
PAGE_EXECUTE_READWRITE
);
if(!NT_SUCCESS(Status)) {
printf("NtAllocateVirtualMemory failed : %08lX\n", Status);
goto Cleanup;
}
Status = NtWriteVirtualMemory(
hProcess,
lpAllocationStart,
(PVOID)dllPath,
lstrlenA(dllPath),
NULL
);
if(!NT_SUCCESS(Status)) {
printf("NtWriteVirtualMemory failed : %08lX\n", Status);
goto Cleanup;
}
Status = NtCreateThreadEx(
&hThread,
THREAD_ALL_ACCESS,
NULL,
hProcess,
lpStartAddress,
lpAllocationStart,
FALSE,
0,
0,
0,
NULL
);
if(!NT_SUCCESS(Status)) {
printf("NtCreateThreadEx failed : %08lX\n", Status);
}
Cleanup:
if(lpAllocationStart) {
NtFreeVirtualMemory(
hProcess,
&lpAllocationStart,
0,
MEM_RELEASE
);
}
printf("Leaving. Status : %s\n",
NT_SUCCESS(Status) ? "OK" : "FAILED");
}
Thanks, I also found the issue: pointer_after_allocated has to be assigned nullptr. I guess it became a dangling pointer during testing and therefore was causing the issue.
| gharchive/issue | 2021-03-26T05:08:24 | 2025-04-01T06:39:14.488035 | {
"authors": [
"odzhan",
"philross88"
],
"repo": "jthuraisamy/SysWhispers2",
"url": "https://github.com/jthuraisamy/SysWhispers2/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1332488616 | read tar file by requested block
Problem
A .tar file is entirely read during BlockReader::read_block_FileTar.
This may cause problems for very large compressed files (the s4 program will have the entire unarchived file in memory; it will use too much memory).
This is due to design of the tar crate. The crate does not provide a method to store tar::Archive<File> instance and tar::Entry<'a, R: 'a + Read> instance due to inter-instance references and explicit lifetimes. (or is prohibitively complex; I made many attempts using various strategies involving references, lifetimes, pointers, etc.)
A tar::Entry holds a reference to data within the tar::Archive<File>. I found it impossible to store both related instances during new() or read_block_FileTar() and then later, during another call to read_block_FileTar(), utilize the same tar::Entry.
A new tar::Entry could be created per call to read_block_FileTar(). But then to read the requested BlockOffset, the entire .tar file entry would have to re-read. This means reading an entire file entry within a .tar file would be an O(n^2) algorithm.
Solution
Read an .tar file per block request, as done for normal files.
Similar problem as Issue #12.
Current code https://github.com/jtmoon79/super-speedy-syslog-searcher/blob/0.0.32/src/readers/blockreader.rs#L2012-L2032
Uses https://github.com/alexcrichton/tar-rs/releases/tag/0.4.38
See solution idea in https://github.com/jtmoon79/super-speedy-syslog-searcher/issues/12#issuecomment-2016681186
Not relevant after #283
What is wanted is to create a tar::Entry and later make calls to Entry.read_exact.
Creating a tar::Entry requires creating tar::Archive<File>, tar::Entries<File>, tar::Entry<'a, File>. But this is greatly complicated in that tar::Entry is borrowing tar::Archive and also the tar::Entry is derived from the tar::Entries. So AFAICT, a later call to Entry.read_exact requires that all three instances remain in existence.
Here are the permutation of technical approaches I have tried:
storing a Box<...>
storing a Pin<Box<...>>; this was to avoid an error of the Archive becoming overwritten
storing the same but using unsafe blocks to read from Entry; Entry instance became corrupted
using thread_local! (lazy_static! requires Sync and Send to be implemented)
forcibly allocating on the heap with the help of the copyless crate; this was to attempt to avoid lifetime problem of Archive
storing Archive, Entries, Entry in a struct and annotating with ouroboros::self_referencing; ouroboros macros did not like the < symbol in the T
trying the same with self_cell::self_cell
trying Serialize, Deserialize from crate serde; tar::Archive does not support Sync and Send
Again, I tried many permutations of all of the prior.
The closest I got was
use std::env;
use std::io::prelude::*;
use std::fs::File;
use std::cell::RefCell;
use std::ops::DerefMut;
use std::pin::Pin;
use ::copyless;
use ::tar::Archive;
use ::tar::Entries;
use ::tar::Entry;
std::thread_local! {
static MyArchive4: RefCell<Option<Box<Archive<File>>>> = {
eprintln!("thread_local! MyArchive4");
RefCell::new(None)
};
static MyEntry4: RefCell<Option<Box<Entry<'static, File>>>> = {
eprintln!("thread_local! MyEntry4");
RefCell::new(None)
};
static MyEntries4: RefCell<Option<Box<Entries<'static, File>>>> = {
eprintln!("thread_local! MyEntries4");
RefCell::new(None)
};
}
fn main() {
let args: Vec<String> = env::args().collect();
let filename = &args[1];
MyArchive4.with(|rca| {
let file: File = File::open(filename).unwrap();
unsafe {
// https://stackoverflow.com/a/59368947/471376
// forcibly allocate on the heap
let mut bx = <Box<Archive<File>> as copyless::BoxHelper<Archive<File>>>::alloc();
rca.borrow_mut().replace(
copyless::BoxAllocation::init(
bx,
Archive::<File>::new(file)
)
);
}
MyEntries4.with(|rces| {
unsafe {
let mut bx = <Box<Entries<'_, File>> as copyless::BoxHelper<Entries<'_, File>>>::alloc();
MyEntry4.with(|rce| {
let mut bx = <Box<Entry<'_, File>> as copyless::BoxHelper<Entry<'_, File>>>::alloc();
rce.borrow_mut().replace(
copyless::BoxAllocation::init(
bx,
rca.borrow_mut().as_mut().unwrap().entries().unwrap().nth(0).unwrap().unwrap()
)
);
});
}
});
});
}
but this results in error
298 | MyArchive4.with(|rca| {
| ---
| |
| `rca` is a reference that is only valid in the closure body
| has type `&'1 RefCell<Option<Box<Archive<std::fs::File>>>>`
...
333 | rca.borrow_mut().as_mut().unwrap().entries().unwrap().nth(0).unwrap().unwrap()
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| `rca` escapes the closure body here
| argument requires that `'1` must outlive `'static`
| gharchive/issue | 2022-08-08T22:50:45 | 2025-04-01T06:39:14.501833 | {
"authors": [
"jtmoon79"
],
"repo": "jtmoon79/super-speedy-syslog-searcher",
"url": "https://github.com/jtmoon79/super-speedy-syslog-searcher/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1275276329 | Segfault when running headscale nodes list --output json when backend is not running.
When running headscale nodes list --output json against a server that is offline, a segfault is encountered. headscale nodes list fails gracefully with Could not connect..... This segfault is problematic for frontends that consume json output exclusively.
$ headscale nodes list --output json
panic: runtime error: invalid memory address or nil pointer dereference
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xb1feab]
goroutine 1 [running]:
google.golang.org/grpc.(*ClientConn).Close(0x0)
google.golang.org/grpc@v1.46.0/clientconn.go:998 +0x4b
panic({0x1124f80, 0x1c52bc0})
runtime/panic.go:838 +0x207
google.golang.org/grpc.(*ClientConn).Invoke(0x80203cf18?, {0x1464490?, 0xc00054ec60?}, {0x1292a76?, 0x0?}, {0x118afc0?, 0xc00054c040?}, {0x118b080?, 0xc00054c180?}, {0x0, ...})
google.golang.org/grpc@v1.46.0/call.go:32 +0x5e
github.com/juanfont/headscale/gen/go/headscale/v1.(*headscaleServiceClient).ListMachines(0xc00007a780, {0x1464490, 0xc00054ec60}, 0xc0002f4420?, {0x0, 0x0, 0x0})
github.com/juanfont/headscale/gen/go/headscale/v1/headscale_grpc.pb.go:195 +0xce
github.com/juanfont/headscale/cmd/headscale/cli.glob..func16(0x1c666e0?, {0x125c23c?, 0x2?, 0x2?})
github.com/juanfont/headscale/cmd/headscale/cli/nodes.go:168 +0x2d0
github.com/spf13/cobra.(*Command).execute(0x1c666e0, {0xc000550980, 0x2, 0x2})
github.com/spf13/cobra@v1.4.0/command.go:860 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x1c68260)
github.com/spf13/cobra@v1.4.0/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.4.0/command.go:902
github.com/juanfont/headscale/cmd/headscale/cli.Execute()
github.com/juanfont/headscale/cmd/headscale/cli/root.go:85 +0x25
main.main()
github.com/juanfont/headscale/cmd/headscale/headscale.go:42 +0x235
To Reproduce
Invoke headscale with --output json against a backend that is not running.
Context info
Version of headscale used: v0.16.0-beta4 (and v0.15.0)
Version of tailscale client: N/A
OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version: FreeBSD 13.1-RELEASE
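From the trace, grpc's (*ClientConn).Close is being invoked on a nil connection, presumably via a defer or cleanup that runs even though the connection was never established. As a sketch only (not headscale's actual code), the usual guard looks like:
package cli

import (
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// newConn never lets a failed dial reach Close()/Invoke(); the caller can then
// report the error in the requested output format (e.g. JSON) instead of panicking.
func newConn(address string) (*grpc.ClientConn, error) {
	conn, err := grpc.Dial(address, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, fmt.Errorf("could not connect to headscale: %w", err)
	}
	return conn, nil
}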
@theonemcdonald Does this mean headscale is coming someday to pfSense? 👀
| gharchive/issue | 2022-06-17T17:04:32 | 2025-04-01T06:39:14.516234 | {
"authors": [
"luckman212",
"theonemcdonald"
],
"repo": "juanfont/headscale",
"url": "https://github.com/juanfont/headscale/issues/652",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1349978792 | bugfix: Added base key validity period of 60 minutes.
[x] read the CONTRIBUTING guidelines
[x] raised a GitHub issue or discussed it on the projects chat beforehand
[ ] added unit tests
[ ] added integration tests
[x] updated documentation if needed
[x] updated CHANGELOG.md
Fixes #764
PR Description,
Currently headscale initializes machine key's expiry time with &time.Time{}. This is formatted to, 0001-01-01T00:00:00Z(in RFC3339 format).
This means, The keys expiry time will always be in the past and clients will be stuck in a loop renewing the key.
This PR addresses this problem by adding a minimum validity of 60 minutes to the generated keys.
This issue has not been fixed completely.
Android clients are working fine and are getting the correct expiry time but Linux clients are still getting,
expiry | 0001-01-01 05:53:28+05:53:28
Fixed.
The OIDC renewal code mentions that the function accepts the current time but then passes &time.Time{}. This has been changed to pass time.Now().Add(DefaultKeyExpireTime).
In protocol_common.go, the refresh operation was not renewing the key; the expire field in the database was being overwritten to 0001-01-01T00:00:00Z.
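In code terms the change described above amounts to roughly this sketch (the constant name follows the PR description; field names and the actual diff may differ):
package headscale

import "time"

const DefaultKeyExpireTime = 60 * time.Minute

// newKeyExpiry replaces the old &time.Time{} zero value (0001-01-01T00:00:00Z, always in
// the past) with a minimum validity window for freshly registered keys.
func newKeyExpiry() time.Time {
	return time.Now().Add(DefaultKeyExpireTime)
}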
Digging into this, upstream only seems to say about key expiration this (in RegisterRequest)
// Expiry optionally specifies the requested key expiry.
// The server policy may override.
// As a special case, if Expiry is in the past and NodeKey is
// the node's current key, the key is expired.
Perhaps DefaultKeyExpireTime could be made configurable, with a default of 60 minutes...
Hi! as part of #1473, we have reorganised a lot of the code.
To clear PRs that need to be rebased or redone, we are closing open PRs that will require significant code change to be merged.
In addition, the issue behind the PR might in some cases have been fixed, changed, or no longer be relevant, so it would be great if this is considered as well.
Thank you for your contribution!
If it is still relevant and the PR is reopened, we will aim at getting the changes into the next release after the reorg if accepted.
| gharchive/pull-request | 2022-08-24T20:18:48 | 2025-04-01T06:39:14.523103 | {
"authors": [
"ishanjain28",
"juanfont",
"kradalby"
],
"repo": "juanfont/headscale",
"url": "https://github.com/juanfont/headscale/pull/765",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1360003803 | using tomogram generated by AreTomo
Hi Tim-Oliver and Thorsten,
This is more a question rather than an issue, but I thought it would be helpful to write it here for others as well.
I am now using AreTomo to generate tomograms, and the AreTomo output files don't include all the etomo output files listed in your previous Python script for generating even/odd frame tomograms.
Now, my question is whether it's okay to generate EVN/ODD tomograms using AreTomo with the same parameters I used for the full-frame tomogram. Otherwise, what would you suggest? I don't have access to Warp yet because we don't have a Windows workstation.
Cheers,
Joy
Hi Joy,
In my workflow, I am also using AreTomo-aligned tomograms for cryoCARE denoising. The routine is MotionCor2 alignment with -Splitsum 1 -> AreTomo alignment of the "main" tiltseries -> applying the same transformations (saved in the .aln output) to the EVN/ODD stacks with the flag -AlnFile ts.aln -> reconstruction and trimming using imod's tilt.
In my experience, this works quite nicely. I have a Python script which makes this all relatively convenient. Let me know if this would help you.
Hope this helps,
Benedikt
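For orientation, the routine above as a command-line sketch (file names are placeholders and the exact flags are assumptions based on the tools' usual options, so double-check against your versions):
# 1. motion correction, also writing even/odd half-sums
MotionCor2 ... -Splitsum 1
# 2. align the main tilt series; the transforms are saved to ts.aln
AreTomo -InMrc ts.mrc -OutMrc ts_ali.mrc -AngFile ts.rawtlt ...
# 3. apply the same transforms to the half-set stacks
AreTomo -InMrc ts_EVN.mrc -OutMrc ts_EVN_ali.mrc -AlnFile ts.aln ...
AreTomo -InMrc ts_ODD.mrc -OutMrc ts_ODD_ali.mrc -AlnFile ts.aln ...
# 4. reconstruct and trim the EVN/ODD volumes with IMOD's tilt, as described above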
I would be interested :-) Please share it.
| gharchive/issue | 2022-09-02T10:27:39 | 2025-04-01T06:39:14.548563 | {
"authors": [
"bwmr",
"jychoi0616",
"thorstenwagner"
],
"repo": "juglab/cryoCARE_pip",
"url": "https://github.com/juglab/cryoCARE_pip/issues/26",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2056760172 | [OPTIMIZATION] System namespace filter is unnecessary in system pod page
What happened:
From the JuiceFS CSI deployment perspective, if my understanding is correct, there is only one system namespace, so there is no need to enable the system namespace filter on the system pod page.
What you expected to happen:
Disable the system namespace search on the system pod page.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?
Environment:
JuiceFS CSI Driver version (which image tag did your CSI Driver use): latest
Kubernetes version (e.g. kubectl version):
Object storage (cloud provider and region):
Metadata engine info (version, cloud provider managed or self maintained):
Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
Others:
closed by https://github.com/juicedata/juicefs-csi-driver/pull/841
| gharchive/issue | 2023-12-26T23:18:38 | 2025-04-01T06:39:14.553635 | {
"authors": [
"showjason",
"zwwhdls"
],
"repo": "juicedata/juicefs-csi-driver",
"url": "https://github.com/juicedata/juicefs-csi-driver/issues/840",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1493425819 | fix oracle ep regex parsing error
#3073
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. neo.chen seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2022-12-13T05:40:52 | 2025-04-01T06:39:14.556409 | {
"authors": [
"CLAassistant",
"neocxf"
],
"repo": "juicedata/juicefs",
"url": "https://github.com/juicedata/juicefs/pull/3074",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
182878628 | Incrementing the revision for kubeapi-load-balancer and kubernetes-worker.
Version bump with our work today.
bump bump bump it up 👍
| gharchive/pull-request | 2016-10-13T19:29:27 | 2025-04-01T06:39:14.557270 | {
"authors": [
"chuckbutler",
"mbruzek"
],
"repo": "juju-solutions/bundle-canonical-kubernetes",
"url": "https://github.com/juju-solutions/bundle-canonical-kubernetes/pull/91",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1554823838 | Error: internal error, Client(5) claimed to return *client.Client but returned *client.ClientV5
Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform Version
terraform -v
Terraform v1.3.7
on linux_amd64
+ provider registry.terraform.io/juju/juju v0.3.1
Affected Resource(s)
Please list the resources as a list, for example:
juju_application
Terraform Configuration Files
https://gist.github.com/VariableDeclared/4822b6acebf0a90815fffb03a82512c8
Debug Output
https://gist.github.com/VariableDeclared/bc38016916cb014ef42aac786bd801d5
Expected Behavior
Juju model is added and charms are deployed to this model
Actual Behavior
Terraform failed to provision the applications
Steps to Reproduce
place the k8s tf file
terraform init
terraform apply (accept/enter yes)
Important Factoids
Nothing out of the ordinary as far as I know. Running juju 2.9.38
References
None
It seems you're using an old version of the provider. Could you try with the latest 0.4.3?
thanks! @juanmanuel-tirado that was it :)
| gharchive/issue | 2023-01-24T11:40:37 | 2025-04-01T06:39:14.589726 | {
"authors": [
"VariableDeclared",
"juanmanuel-tirado"
],
"repo": "juju/terraform-provider-juju",
"url": "https://github.com/juju/terraform-provider-juju/issues/128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2436858919 | Update writing-tests.md - Shown test code is wrong
Change errorf to Errorf in the shown test code.
Playing a bit with the code reveals that in order to pass the test GetPi needs to return 3.14159265358979323846264338327950288419716939937510582097494459.
However, outln cannot print out the full number, only 3.14159.
Is this by design?
Thanks
Playing a bit with the code reveals that in order to pass the test GetPi needs to return 3.14159265358979323846264338327950288419716939937510582097494459.
However, outln cannot print out the full number, only 3.14159. Is this by design?
In many languages, floating-point values will not be printed with full precision, and the precision varies from implementation to implementation.
In Jule, the built-in outln function uses the API's conversion algorithms. Naturally, std::conv is not implemented for the API, therefore the C++ API uses the conversion algorithm of the C++ STL by default. And the STL converts it as 3.14159. But if you are using std::fmt to print values, it will use the std::conv package to convert to a string. The default format call is conv::FmtFloat(f64(arg), 'g', -1, 64) for f64 types, quoted from the implementation source code. And it will be converted as 3.141592653589793.
For example:
use math for std::math
use fmt for std::fmt
fn main() {
outln(math::Pi)
fmt::Println(math::Pi)
}
If you need more precision or any custom format, call the relevant function of std::conv package.
For example:
use math for std::math
use conv for std::conv
fn main() {
outln(conv::FmtFloat(math::Pi, 'f', 5, 64))
outln(conv::FmtFloat(math::Pi, 'g', -1, 64))
outln(conv::FmtFloat(math::Pi, 'f', 48, 64))
outln(conv::FmtFloat(math::Pi, 'e', 48, 64))
outln(conv::FmtFloat(math::Pi, 'f', 20, 64))
outln(conv::FmtFloat(math::Pi, 'f', 8, 64))
}
BTW: I just realized that there is not only the errorf in the code, but also a second one in the text, which I overlooked. Sorry.
No problem. I'll update the relevant outdated code.
Thanks for your contribution.
Thank you for the thorough explanation about the print functions.
Maybe it's a good idea to copy and paste what you explained here into the manual, as I'm sure I'm not the only one to find this explanation excellent.
| gharchive/pull-request | 2024-07-30T04:22:23 | 2025-04-01T06:39:14.596921 | {
"authors": [
"mertcandav",
"sparkylein"
],
"repo": "julelang/manual",
"url": "https://github.com/julelang/manual/pull/14",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
187166438 | Resources implementation refactoring
We've been eliminating a lot of duplicated code by storing them in the module XClarityClient::ManagementMixin which is included in management classes
@juliancheal it is ready for merge
| gharchive/pull-request | 2016-11-03T19:25:16 | 2025-04-01T06:39:14.598858 | {
"authors": [
"terciodemelo"
],
"repo": "juliancheal/xclarity_client",
"url": "https://github.com/juliancheal/xclarity_client/pull/15",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
38381283 | Ordering of transform properties is not preserved
I want
rotateX(abc) rotateY(cde) and then translateZ(asdf)
but sometimes
I get rotateX(abc) translateZ(asdf) and then rotateY(cde)
Because of that I am unable to achieve the desired effect.
I implemented a solution for this. Basically, there is an options argument which is by default an empty array. It can be filled with an ordered list of transform properties. It will sort the transformCache before applying it. 7fc7bdd815c01639df96a24728f607d81f0b84f2.
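To make the proposal concrete, a call might look like the sketch below (the transformOrder option name is made up here purely for illustration; the real option name is whatever the linked commits use):
// hypothetical usage of the ordered-transform option described above
$("#element").velocity({
    rotateX: "45deg",
    rotateY: "45deg",
    translateZ: "100px"
}, {
    duration: 400,
    // hypothetical option: an ordered list used to sort the transform cache before applying it
    transformOrder: ["rotateX", "rotateY", "translateZ"]
});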
Interesting......................... You should clear the whitespace change commits and open a new GH issue with your changes and a copy of the description you just wrote. I really like this.
Yeah I just updated it, check 643cd0b9ef6dd62af51c9c49f22b5b8895a479da
| gharchive/issue | 2014-07-22T09:04:54 | 2025-04-01T06:39:14.628766 | {
"authors": [
"julianshapiro",
"parin2092",
"samuelhorwitz"
],
"repo": "julianshapiro/velocity",
"url": "https://github.com/julianshapiro/velocity/issues/197",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1998671109 | QR code not scanning, onDetect not calling
The onDetect callback is not being called. I tried downgrading to 3.2.0 and upgrading to 3.5.2. Two weeks ago it worked with 3.2.0, but now the QR code is not being detected on either Android or iOS.
I tried the unbundled version too.
W/OnBackInvokedCallback(13664): Set 'android:enableOnBackInvokedCallback="true"' in the application manifest. D/TransportRuntime.SQLiteEventStore(13664): Storing event with priority=DEFAULT, name=FIREBASE_ML_SDK for destination cct D/DeferrableSurface(13664): Surface created[total_surfaces=1, used_surfaces=0](androidx.camera.core.SurfaceRequest$2@bc92979} D/CameraOrientationUtil(13664): getRelativeImageRotation: destRotationDegrees=0, sourceRotationDegrees=90, isOppositeFacing=true, result=90 D/TransportRuntime.JobInfoScheduler(13664): Upload for context TransportContext(cct, DEFAULT, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning... D/CameraOrientationUtil(13664): getRelativeImageRotation: destRotationDegrees=0, sourceRotationDegrees=90, isOppositeFacing=true, result=90 D/CameraOrientationUtil(13664): getRelativeImageRotation: destRotationDegrees=0, sourceRotationDegrees=90, isOppositeFacing=true, result=90 D/DeferrableSurface(13664): Surface created[total_surfaces=2, used_surfaces=0](androidx.camera.core.impl.ImmediateSurface@6c8566c} D/CameraOrientationUtil(13664): getRelativeImageRotation: destRotationDegrees=0, sourceRotationDegrees=90, isOppositeFacing=true, result=90 D/CameraOrientationUtil(13664): getRelativeImageRotation: destRotationDegrees=0, sourceRotationDegrees=90, isOppositeFacing=true, result=90 E/FileUtils(13664): err write to mi_exception_log D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Use case androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548 ACTIVE D/UseCaseAttachState(13664): Active and attached use case: [] for camera: 0 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Use case androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837 ACTIVE D/UseCaseAttachState(13664): Active and attached use case: [] for camera: 0 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Use cases [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] now ATTACHED D/UseCaseAttachState(13664): All use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 D/UseCaseAttachState(13664): Active and attached use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Resetting Capture Session D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Releasing session in state INITIALIZED D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Attempting to force open the camera. D/CameraStateRegistry(13664): tryOpenCamera(Camera@ed0ef0[id=0]) [Available Cameras: 1, Already Open: false (Previous state: CLOSED)] --> SUCCESS D/CameraStateRegistry(13664): Recalculating open cameras: D/CameraStateRegistry(13664): Camera State D/CameraStateRegistry(13664): ------------------------------------------------------------------- D/CameraStateRegistry(13664): Camera@f0633fa[id=1] UNKNOWN D/CameraStateRegistry(13664): Camera@ed0ef0[id=0] OPENING D/CameraStateRegistry(13664): ------------------------------------------------------------------- D/CameraStateRegistry(13664): Open count: 1 (Max allowed: 1) D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Opening camera. 
D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Transitioning camera internal state: INITIALIZED --> OPENING D/CameraStateMachine(13664): New public camera state CameraState{type=OPENING, error=null} from OPENING and null D/CameraStateMachine(13664): Publishing new public camera state CameraState{type=OPENING, error=null} D/UseCaseAttachState(13664): All use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 W/libc (13664): Access denied finding property "persist.vendor.camera.privapp.list" D/CameraExtImplXiaoMi(13664): initCameraDevice: 0 W/libc (13664): Access denied finding property "vendor.camera.aux.packagelist" W/CameraManagerGlobal(13664): ignore the torch status update of camera: 3 W/libc (13664): Access denied finding property "vendor.camera.aux.packagelist" W/CameraManagerGlobal(13664): ignore the torch status update of camera: 4 W/libc (13664): Access denied finding property "vendor.camera.aux.packagelist" W/CameraManagerGlobal(13664): ignore the torch status update of camera: 5 W/libc (13664): Access denied finding property "vendor.camera.aux.packagelist" W/CameraManagerGlobal(13664): ignore the torch status update of camera: 6 W/libc (13664): Access denied finding property "vendor.camera.aux.packagelist" W/CameraManagerGlobal(13664): ignore the torch status update of camera: 7 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Use case androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548 ACTIVE D/UseCaseAttachState(13664): Active and attached use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Use case androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837 ACTIVE D/UseCaseAttachState(13664): Active and attached use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Issue capture request D/UseCaseAttachState(13664): Active and attached use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} CameraDevice.onOpened() D/Camera2CameraImpl(13664): {Camera@ed0ef0[id=0]} Transitioning camera internal state: OPENING --> OPENED D/CameraStateRegistry(13664): Recalculating open cameras: D/CameraStateRegistry(13664): Camera State D/CameraStateRegistry(13664): ------------------------------------------------------------------- D/CameraStateRegistry(13664): Camera@f0633fa[id=1] UNKNOWN D/CameraStateRegistry(13664): Camera@ed0ef0[id=0] OPEN D/CameraStateRegistry(13664): ------------------------------------------------------------------- D/CameraStateRegistry(13664): Open count: 1 (Max allowed: 1) D/CameraStateMachine(13664): New public camera state CameraState{type=OPEN, error=null} from OPEN and null D/CameraStateMachine(13664): Publishing new public camera state CameraState{type=OPEN, error=null} D/UseCaseAttachState(13664): All use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for 
camera: 0 D/UseCaseAttachState(13664): Active and attached use case: [androidx.camera.core.Preview-8753efdb-cbe1-44de-af96-a040e4be711b83214548, androidx.camera.core.ImageAnalysis-2417b284-bf34-4955-a969-d3fc925d2e6689907837] for camera: 0 D/SyncCaptureSessionBase(13664): [androidx.camera.camera2.internal.SynchronizedCaptureSessionBaseImpl@6375a46] getSurface...done D/CaptureSession(13664): Opening capture session. D/DeferrableSurface(13664): New surface in use[total_surfaces=2, used_surfaces=1](androidx.camera.core.SurfaceRequest$2@bc92979} D/DeferrableSurface(13664): use count+1, useCount=1 androidx.camera.core.SurfaceRequest$2@bc92979 D/DeferrableSurface(13664): New surface in use[total_surfaces=2, used_surfaces=2](androidx.camera.core.impl.ImmediateSurface@6c8566c} D/DeferrableSurface(13664): use count+1, useCount=1 androidx.camera.core.impl.ImmediateSurface@6c8566c D/CameraDevice-JV-0(13664): waitUntilIdle: E. id = 0 D/CameraDevice-JV-0(13664): waitUntilIdle: X D/CaptureSession(13664): Attempting to send capture request onConfigured D/CaptureSession(13664): Issuing request for session. D/Camera2CaptureRequestBuilder(13664): createCaptureRequest D/CaptureSession(13664): Issuing capture request. D/Camera2CaptureRequestBuilder(13664): createCaptureRequest D/CaptureSession(13664): CameraCaptureSession.onConfigured() mState=OPENED D/CaptureSession(13664): CameraCaptureSession.onReady() OPENED I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed I/olutions.festiv(13664): createIfNeeded: Recreate new EGLImage since dataspace changed D/TransportRuntime.SQLiteEventStore(13664): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct D/TransportRuntime.JobInfoScheduler(13664): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning... D/TransportRuntime.SQLiteEventStore(13664): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct D/TransportRuntime.JobInfoScheduler(13664): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning... D/TransportRuntime.SQLiteEventStore(13664): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct D/TransportRuntime.JobInfoScheduler(13664): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning... D/TransportRuntime.SQLiteEventStore(13664): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct D/TransportRuntime.JobInfoScheduler(13664): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning... 
W/System (13664): A resource failed to call Surface.release. I/BpBinder(13664): onLastStrongRef automatically unlinking death recipients: <uncached descriptor>
The Camera is still open and the controller configuration is:
MobileScannerController cameraController = MobileScannerController(
  autoStart: true,
  facing: CameraFacing.back,
  formats: [BarcodeFormat.qrCode],
  torchEnabled: false,
  detectionSpeed: DetectionSpeed.noDuplicates,
);
Any solutions or workarounds?
Hi, it's happening to me as well. Are there any solutions yet?
Found the solution: you need to widen the scanWindow if you have declared a scanWindow.
Still having the same issue, I'm not using the scanWindow
| gharchive/issue | 2023-11-17T09:28:00 | 2025-04-01T06:39:14.636396 | {
"authors": [
"CollinsMunene",
"KaiExa",
"mahendravijay"
],
"repo": "juliansteenbakker/mobile_scanner",
"url": "https://github.com/juliansteenbakker/mobile_scanner/issues/866",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1618955572 | GitHub Actions: use MacOS runner
macOS hosted runners have more cores and more RAM, and allow graphics HW acceleration.
See: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources
Despite the extra cores and RAM, the build takes far longer on the macOS runner. Closing this.
| gharchive/pull-request | 2023-03-10T13:09:01 | 2025-04-01T06:39:14.646993 | {
"authors": [
"julioromano"
],
"repo": "julioromano/tmdb-client-android",
"url": "https://github.com/julioromano/tmdb-client-android/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2034466690 | Can the example rfv command use a starting directory? (Help request)
In the fzf advanced examples, there is a command defined (rfv):
#!/usr/bin/env bash
# 1. Search for text in files using Ripgrep
# 2. Interactively restart Ripgrep with reload action
# 3. Open the file in Vim
RG_PREFIX="rg --column --line-number --no-heading --color=always --smart-case "
INITIAL_QUERY="${*:-}"
: | fzf --ansi --disabled --query "$INITIAL_QUERY" \
--bind "start:reload:$RG_PREFIX {q}" \
--bind "change:reload:sleep 0.1; $RG_PREFIX {q} || true" \
--delimiter : \
--preview 'bat --color=always {1} --highlight-line {2}' \
--preview-window 'up,60%,border-bottom,+{2}+3/3,~3' \
--bind 'enter:become(vim {1} +{2})'
This is very useful and allows me to type:
rfv <search>
How can this be extended to start searching from a different directory? This would search subdirectories too. I'd like to have:
rfv <search> <start directory>
At the moment, I'm having to change directory first.
Ideally, I'd like to get the same working in the Switching between Ripgrep mode and fzf mode script as well.
Many thanks.
It's possible to use getopts to allow for flags like -l (location), -i (ignore case), or other tailored customizations.
rfv() {
local case="--case-sensitive"
local location="$PWD"
while getopts il: cmd_arg; do
case "$cmd_arg" in
i) case="--ignore-case" ;;
l) location="${OPTARG}" ;;
*)
echo "Invalid flag provided"
return 1
;;
esac
done
shift "$((OPTIND - 1))"
RG_PREFIX="rg --column --line-number --no-heading --color=always $case "
INITIAL_QUERY="${*:-}"
: | fzf --ansi --disabled --query "$INITIAL_QUERY" \
--bind "start:reload:$RG_PREFIX {q} -- $location" \
--bind "change:reload:sleep 0.1; $RG_PREFIX {q} -- $location || true" \
--header "Search Directory: $location" \
--delimiter : \
--preview 'bat --color=always {1} --highlight-line {2}' \
--preview-window 'up,60%,border-bottom,+{2}+3/3,~3' \
--bind 'enter:become(vim {1} +{2})'
}
@LangLangBart Perfect! Thank you very much for taking the time to answer and demonstrate the solution.
| gharchive/issue | 2023-12-10T17:43:45 | 2025-04-01T06:39:14.666313 | {
"authors": [
"LangLangBart",
"Praful"
],
"repo": "junegunn/fzf",
"url": "https://github.com/junegunn/fzf/issues/3535",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1931540643 | Add --builtin-filter-dirs
fzf's builtin filesystem walker is faster than overriding via FZF_DEFAULT_COMMAND, especially on Windows
If this switch is specified, fzf will read directories instead of files if FZF_DEFAULT_COMMAND is not specified and there's no input from tty
I'm open to ideas, but using FZF_DEFAULT_COMMAND or passing through stdin is way slower than using the default walker in fzf.
I agree that design-wise it makes sense, but pragmatically it's just faster to walk the filesystem in native Go.
Here's an example comparing the built-in Go walker vs Windows' dir built-in command:
Thanks. Is the performance difference due to the performance limit of dir, or is it a fundamental limitation of the pipe mechanism in Windows? I mean, can we achieve better performance by using another program that is faster than dir?
Closed in #3649.
| gharchive/pull-request | 2023-10-07T23:08:24 | 2025-04-01T06:39:14.669830 | {
"authors": [
"junegunn",
"kelleyma49"
],
"repo": "junegunn/fzf",
"url": "https://github.com/junegunn/fzf/pull/3464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
524313203 | Prevent panic on unsupported char for MultiCell
Currently, if an unsupported char is inserted (like 😀), the library panics.
I think it would be better to return an error.
Thanks @oliverpool -- this is a real improvement.
| gharchive/pull-request | 2019-11-18T11:43:12 | 2025-04-01T06:39:14.673540 | {
"authors": [
"jung-kurt",
"oliverpool"
],
"repo": "jung-kurt/gofpdf",
"url": "https://github.com/jung-kurt/gofpdf/pull/337",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
963394303 | docs(typo) : modify Compile-time Type Checking Section
Before : - useColor('#00ffzz')
After : - useColor('#00ffzaz')
Fixes the typo.
Hi, @GwonHeeJun! Thanks for contributing. ☺️
I really had no idea about that typo and wouldn't have found it without your help.
Here is my feedback:
Can you change the comment, not the code? The intended example (#00ffzz) is meant to show that it detects wrong digits like z, even when the value has the ordinary length. I'm hoping you will change that content in your commit as well.
This is a somewhat minor request about our project's commit convention. Maybe it'd be nicer if you removed the spaces before the parentheses?
Codecov Report
Merging #4 (1a07e13) into main (2c892c3) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #4 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 9 9
Lines 89 89
Branches 22 23 +1
=========================================
Hits 89 89
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2c892c3...1a07e13. Read the comment docs.
Can I change it like this?
- useColor('#00ffzz')
// Argument of type '"#00ffzz"' is not assignable to parameter of type '...'.ts(2345)
Can I change it like this? @junhoyeo
Yup. That will be wonderful 🎉
I fixed it! Check it out plz XD @junhoyeo
| gharchive/pull-request | 2021-08-08T09:43:45 | 2025-04-01T06:39:14.686656 | {
"authors": [
"GwonHeeJun",
"codecov-commenter",
"junhoyeo"
],
"repo": "junhoyeo/use-color",
"url": "https://github.com/junhoyeo/use-color/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2666586795 | Add a plugin tutorial
This adds a tutorial for creating basic plugins with Jupyter Book. It covers creating and registering plugins, creating a directive, and creating a transform.
It also cleans up some navigation and adds two how-tos related to plugins.
This is a result of some self-learning I had to do in creating some plugins for my website, so hopefully this makes things easier to learn for future readers!
Follow ups
A few things I opened along the way:
https://github.com/jupyter-book/mystmd/issues/1651
https://github.com/jupyter-book/mystmd/issues/1649
https://github.com/jupyter-book/mystmd/issues/1652
https://github.com/jupyter-book/mystmd/issues/1654
https://github.com/jupyter-book/mystmd/issues/1655
Awesome, thx @choldgraf!! Pinging both @minrk and @ryanlovett who have been thinking about this as well, they may have input/suggestions.
I added a brief update to show off how to use the ctx command and put it in a howto as well. @agoose77 feel free to take a pass and merge whenever you like!
I've added another little how-to that has boilerplate for how to create a role, directive, and transform. It doesn't really add any new content, just gives quick boilerplate users can copy into their own plugins.
I also added a warning to make clear that ctx.parseMyst is experimental and might change.
I'll plan to merge this one in and we can iterate on the content from there - I'm happy to make changes if folks spot something we shouldn't have put up there!
| gharchive/pull-request | 2024-11-17T21:56:55 | 2025-04-01T06:39:14.766749 | {
"authors": [
"choldgraf",
"fperez"
],
"repo": "jupyter-book/jupyter-book",
"url": "https://github.com/jupyter-book/jupyter-book/pull/2264",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1559775825 | Update to protocol 2.1.0 (echo_update)
Add comments around the update protocol
Unconditionally send echo_update on front_end change
Update protocol version
@SylvainCorlay @martinRenou @maartenbreddels @jasongrout this is a PR to add the "echo_update" feature to XWidgets.
If you have a moment, would you mind checking that it looks correct?
This is still missing the mechanism to deactivate "echo_update" on specific properties, but that might require some bigger changes to the way we handle properties.
| gharchive/pull-request | 2023-01-27T13:30:16 | 2025-04-01T06:39:14.784301 | {
"authors": [
"AntoinePrv"
],
"repo": "jupyter-xeus/xwidgets",
"url": "https://github.com/jupyter-xeus/xwidgets/pull/251",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
268583350 | adding repo name parsing
this is an attempt at adding more parseable image names to the pushed image. I'm not sure if this is the right location to do this (maybe better in binderhub?) but LMK thoughts
Closing this and we'll figure it out another way!
| gharchive/pull-request | 2017-10-25T23:31:48 | 2025-04-01T06:39:14.819675 | {
"authors": [
"choldgraf"
],
"repo": "jupyter/repo2docker",
"url": "https://github.com/jupyter/repo2docker/pull/124",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
610757810 | UI to show resource usage (e.g. CPU, memory) if limits are set
Proposed change
Include a component in the default notebook UI that shows when excessive resources are being used.
This came to mind after a question in the UKRSE slack, where it turned out a notebook wouldn't run because it had hit the mybinder memory limit. I think it'd be nice if there was a way to see current resource usage along with the limits.
Alternative options
Who would use this feature?
New users to mybinder who aren't aware of the resource limits.
Existing users who may find it interesting to see the resource consumption of their notebooks
(Optional): Suggest a solution
It might be enough to include https://github.com/yuvipanda/nbresuse in repo2docker though I'll need to check whether it will correctly show the K8S resource limits. It's also undergoing heavy redevelopment at the moment, especially with regards to jupyterlab.
Sounds like a good idea. nbresuse should "just work" (I've used it on z2jh deployments). Do you know if it works with lab as well?
Do you know if it works with lab as well?
Lab shows the memory usage in the status bar if the nbresuse data is available:
There are also some alternative frontends using nbresuse but displaying the data differently:
https://github.com/jtpio/jupyterlab-system-monitor
https://github.com/NERSC/jupyterlab-cpustatus
| gharchive/issue | 2020-05-01T13:43:34 | 2025-04-01T06:39:14.824291 | {
"authors": [
"betatim",
"jtpio",
"manics"
],
"repo": "jupyterhub/binderhub",
"url": "https://github.com/jupyterhub/binderhub/issues/1097",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1727336818 | Source file of docs/source/_static/images/architecture.png
@consideRatio do you have the SVG file of docs/source/_static/images/architecture.png?
It's made in Google Slides: https://docs.google.com/presentation/d/1t5W4Rnez6xBRz4YxCxWYAx8t4KRfUosbCjS4Z1or7rM/
| gharchive/issue | 2023-05-26T10:21:31 | 2025-04-01T06:39:14.825960 | {
"authors": [
"minrk",
"rgaiacs"
],
"repo": "jupyterhub/binderhub",
"url": "https://github.com/jupyterhub/binderhub/issues/1706",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2324188569 | Rewrite the frontend
/* If this file gets over 200 lines of code long (not counting docs / comments), start using a framework
This was the first line I wrote when I started writing the existing frontend, and of course that line is still there.
This PR cleans up and rewrites the entire frontend, to 'start using a framework'. There's no functional change to the UI itself, so it should be treated as a pure upgrade / refactor.
Demo
Main page
https://github.com/jupyterhub/binderhub/assets/30430/4c629efb-2210-4f43-aeb3-3b3a0a4d65b8
Loading page
https://github.com/jupyterhub/binderhub/assets/30430/da6eefbe-8044-42aa-8562-03f7a79520ed
Functional Status
The following functional pieces need to be completed:
[x] Landing page link generator
[x] Loading page
[x] Actually launching servers correctly
[x] Progress bars for launching
[x] Stream logs
[x] Favicon switching to indicate progress
[x] About page
[x] Extra scripts at the bottom (for google analytics / similar)
[x] nbviewer in loading page
[x] Help text in the main page
[x] Banner on top
[x] Donation button on top
[x] Badge generator for markdown + rst
[x] view raw logs
[x] social cards
[x] error page
[x] fix copy button icon
[x] Faithfully replicate layout
[x] Fix markdown and rST icons
Technology changes
[x] Upgrades to Bootstrap 5, latest version. Nicely matches JupyterHub upgrade in version 5.
[x] Use react
[x] Use JSDoc type annotations rather than typescript. It feels to me this gives me a good balance between the positives of optional type checking without the extra community investment needed for full typescript. This PR does switch the compiler we use from babel to tsc, but type checking is not enforced. We may choose to do so later, but not now.
[x] Use react-router for URL parsing. This makes the BinderHub UI a SPA, which may be split into its own package separately in the future if so desired.
[x] Move the _config endpoint to a refactored /api/repoproviders. This is with an eye on allowing us to implement #844 eventually, as well as being able to implement the correct frontend bits in other users of the binderhub API (like https://github.com/yuvipanda/jupyterhub-fancy-profiles)
[x] Deprecate direct google analytics functionality, where we embedded GA code into our source. Instead, extra_footer_scripts can continue to be used - that's what we use for matomo.
Functionality changes
[x] Progress bar is now also shown in the loading page.
[x] Learning from experience with nbgitpuller, the link generator now prefers outputting only urlpath - both when the user enters a file to open or a URL to open. This prevents the issue we had when we tried to change the default app that was going to open from classic notebook to lab, and broke a lot of people's stuff. By only outputting urlpath, URLs will always have this information encoded in them. The older filepath and labpath are still accepted as input, because Cool URIs don't break (a rough sketch of this is shown just after this list).
[x] Slightly better validation for the link generator, but most of this should be instead implemented as part of #844
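A rough, illustrative-only sketch of the urlpath-only link generation described above; the function name and the lab/tree prefix are assumptions for illustration, not the actual BinderHub code:
// Sketch: always emit ?urlpath=..., never filepath/labpath (assumed mapping).
function buildLaunchQuery(opts: { urlToOpen?: string; fileToOpen?: string }): string {
  let urlpath: string | undefined;
  if (opts.urlToOpen) {
    // The user already entered a server-relative URL; pass it through as-is.
    urlpath = opts.urlToOpen.replace(/^\//, "");
  } else if (opts.fileToOpen) {
    // Open the file in JupyterLab; "lab/tree/<path>" is an assumed prefix here.
    urlpath = `lab/tree/${opts.fileToOpen}`;
  }
  return urlpath ? `?urlpath=${encodeURIComponent(urlpath)}` : "";
}
Because the URL itself carries the urlpath, later changes to the default app cannot silently break existing links, which is the motivation stated above.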
Maintenance changes
One primary goal here is to make the frontend safer to change, so it's less fragile and brittle.
[x] Add some frontend tests. This should be much easier now thanks to all the componentization.
[ ] Provide thorough jsdoc inline documentation for everything
[x] Refactor whatever is in binderhub-client package to make sure it only contains api-client related functionality - all UI stuff should be in a separate package.
[ ] Unify the current 'split' between the repo's root js/ and the binderhub/static/js sources of JS files. Pick up best practices from other Jupyterish projects for what to do here.
Timeline
My hope is to slowly but consistently work on this, and get it fully complete before end of June. I have also asked for some frontend review help from @batpad (either directly or via someone else), as he has significant experience in this kinda frontend work (even though he is less experienced in the JupyterHub community itself).
Fixes https://github.com/jupyterhub/binderhub/issues/774
I've added functionality for the top banner here, and poked around to make sure that the existing banner can display well. It needs to be redone to use bootstrap 5 utility classes, but here it is:
<div class="container-fluid position-relative" >
<div>
Thanks to <a href="https://www.ovh.com/">OVH</a>, <a href="https://notebooks.gesis.org">GESIS Notebooks</a> and <a href="https://curvenote.com">Curvenote</a> for supporting us! 🎉
<br />
mybinder.org has updated the base image to Ubuntu 22.04! See the <a href="https://repo2docker.readthedocs.io/en/latest/howto/breaking_changes.html">upgrade guide</a> for details.
</div>
<div class="top-0 end-0 position-absolute">
<a class="btn" style="width:fit-content;height:fit-content;padding:10px;background-color:#e66581;color:white;font-weight:bold;"
onmouseover="this.style.backgroundColor='#d15b75'" onmouseout="this.style.backgroundColor='#e66581'"
href="https://numfocus.salsalabs.org/donate-to-binder" target="_blank">
🤍 Donate to mybinder.org!
</a>
</div>
</div>
This works well!
Have asked @oliverroick to do a review here since his React eyes are much, much sharper than mine.
Getting closer and closer!
We've finally got some dedicated time from @oliverroick so hopefully with that we can push this one to completion
Will need to be rebased on https://github.com/jupyterhub/binderhub/pull/1891 but otherwise pretty close!
I've merged #1891 here, and the failing tests are all actually genuine failures! \o/
Looking at the build token and seeing how that is generated:
build_token = jwt.encode(
{
"exp": int(time.time()) + self.settings["build_token_expires_seconds"],
"aud": provider_spec,
"origin": self.token_origin(),
},
key=self.settings["build_token_secret"],
algorithm="HS256",
)
It contains:
Host
Provider spec
While host is still available in the backend, as the PR is written, provider_spec is not available in the backend code. There are two paths to fixing this:
Remove provider_spec from the jwt. This means the token will now be valid for all providers in the origin
Change the handler so we do know what provider_spec is in the backend, and don't change the build token generator algorithm.
Reading the original PR that introduced it (https://github.com/jupyterhub/binderhub/pull/1309), my understanding is that this means that launching from the home page will be rate limited by IP, but launching from a link will not be rate limited by IP. I don't think I fully understand the considerations here, and will poke and ask for help!
I don't recall super well, but the main thing was to limit cross-site launches, e.g. from thebe or API requests, but not from regular browser visits. I think you're right that the scheme may not be properly affecting home page builds, which I think isn't right, but also home page builds are a tiny fraction of launches in the current setup.
I think perhaps we may be able to get to a better, simpler result today using sec-fetch headers. The federation makes this complicated, but perhaps still manageable.
Oh, I forgot to comment on the removal of jinja template support. I think most of the changes are AOK by me, but GESIS currently relies on this, but only to add some outer material. As I understand it, that should still work today. So instead of removing all documentation/support of templates, maybe we can update it to clarify that you can still re-skin page.html to some degree, but not reach into the 'App' itself, which I don't think anybody actually wants to do, and probalby doesn't really work today, anyway. WDYT? Or do you want to write new docs for how to accomplish these same goals in The New Way, e.g. with extra_header, extra_footer, extra_css (or theme_ etc.)?
Looks like we've kinda uncovered a rat's nest of 'when do we expect repo to be encoded'? For example, with zenodo, we don't expect it to be encoded - localhost:8000/services/binder/v2/zenodo/10.5281/zenodo.3242074/ is a valid URI, but 10.5281/zenodo.3242074/ is the repo, even though it looks like repo/ref.
The only times repo is actually uriEncoded is:
When using gl provider (due to explicit check)
When using git provider (as repo is a full http URL, and it matches the :// check)
When using ckan provider (same :// check)
Everything else is unencoded, including when / are present (as it is for DOIs).
The existing route is (r"/v2/([^/]+)/(.+)", ParameterizedMainHandler),. Basically I think we lucked out that repoproviders that aren't listed in https://github.com/jupyterhub/binderhub/pull/1856#issuecomment-2529602076 happen to have / in the repo part because DOIs have / in them! I don't think that was intentional.
CC @rgaiacs https://github.com/jupyterhub/binderhub/pull/1856#issuecomment-2527354774
I think in most cases we don't want to double-encode the spec unless we really have to (i.e. the spec is invalid as a URL path, as in git, or there is ambiguity as in gitlab), and in most cases we don't have to, because the spec is already defined by the repoprovider as a URL fragment. But you're right, we can only do that for repoproviders where the spec is a valid URL path fragment, either guaranteeing a certain number of / (github) or not needing to at all (DOI). For frontend purposes, it should be an explicit property of the provider whether the input to the resource / repo / etc. needs encoding or not.
The existing route is (r"/v2/([^/]+)/(.+)", ParameterizedMainHandler),. Basically I think we lucked out that repoproviders that aren't listed in https://github.com/jupyterhub/binderhub/pull/1856#issuecomment-2529602076 happen to have / in the repo part because DOIs have / in them! I don't think that was intentional.
I don't think so. The first group (([^/]+)) matches the repoprovider name (figshare or dataverse or gh), while the (.+) matches the full spec to pass to the provider, which may have zero or more slashes.
For frontend purposes, it should be an explicit property of the provider whether the input to the resource / repo / etc. needs encoding or not.
Yep, this is now my implementation plan!
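For illustration, a minimal TypeScript-style sketch of that plan; the names (RepoProviderInfo, repoNeedsEncoding, buildSpecPath) and the per-provider flags are assumptions, not the actual BinderHub implementation:
// Sketch: make "does the repo part need URL-encoding?" an explicit provider property.
interface RepoProviderInfo {
  id: string;                  // e.g. "gh", "gl", "git", "zenodo"
  repoNeedsEncoding: boolean;  // true when "/" or "://" in the repo would be ambiguous
}

const providers: Record<string, RepoProviderInfo> = {
  gh: { id: "gh", repoNeedsEncoding: false },         // owner/repo is already a valid path fragment
  gl: { id: "gl", repoNeedsEncoding: true },          // nested groups make "/" ambiguous with the ref
  git: { id: "git", repoNeedsEncoding: true },        // full URLs contain "://"
  zenodo: { id: "zenodo", repoNeedsEncoding: false }, // DOI slashes can stay as-is
};

function buildSpecPath(providerId: string, repo: string, ref?: string): string {
  const provider = providers[providerId];
  const repoPart = provider.repoNeedsEncoding ? encodeURIComponent(repo) : repo;
  return `/v2/${provider.id}/${repoPart}${ref ? "/" + encodeURIComponent(ref) : ""}`;
}

// buildSpecPath("zenodo", "10.5281/zenodo.3242074") -> "/v2/zenodo/10.5281/zenodo.3242074"
// buildSpecPath("git", "https://example.com/a/b.git", "main") -> "/v2/git/https%3A%2F%2Fexample.com%2Fa%2Fb.git/main"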
yay! all tests pass!
All this work is awesome! @arnim and I are planning to dedicate January 2025 to mybinder.org. We hope to make huge progress and have things in a better state.
@rgaiacs yay!
if we merge this as is, would that be ok with you? Or would we need to add some more customization features before we do that so we don't break your installs?
@yuvipanda can we delay merging this until Tuesday? I'm finishing my day and I will not be able to provide support until Tuesday. Thanks for understanding.
@rgaiacs oh yeah, no problem at all!
if we merge this as is, would that be ok with you?
@yuvipanda this should be OK. But I would prefer if this could be merged on Tuesday, so I can be ready to act if things stop working on the GESIS side.
Or would we need to add some more customization features before we do that so we don't break your installs?
@arnim and I need to do a frontend update on our installation because of GESIS's change of corporate design. We are planning to do this in January and we will catch up with these changes.
@rgaiacs ah excellent.
This still needs more review, so it won't be merged before Tuesday, I think!
The new interface is a lot narrower than the existing one, and the form background is lighter which I find makes it difficult to distinguish the input fields from the text labels, especially since the input fields have placeholder text that looks very similar to the label text. Do you think you could change the styling to make the input fields more obvious?
@manics the color contrast changes are the result of upgrading bootstrap. I do agree with you though, it could be better. I'll see what I can do.
@manics this is how it looks now. what do you think?
While the form controls didn't flag in the WCAG Contrast Checker, some other things did. I'll poke at those.
| gharchive/pull-request | 2024-05-29T20:25:19 | 2025-04-01T06:39:14.862361 | {
"authors": [
"arnim",
"batpad",
"manics",
"minrk",
"rgaiacs",
"yuvipanda"
],
"repo": "jupyterhub/binderhub",
"url": "https://github.com/jupyterhub/binderhub/pull/1856",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
204082732 | Plugin doessn't recognize existing cluster and creates a duplicate
When I delete the jupyterhub container for a user, and then attempt to start the server, instead of recreating the container, it tries to create a new cluster. It is not finding the existing cluster and reusing it.
[E 2017-01-30 17:47:37.596 JupyterHub web:1548] Uncaught exception GET /jupyter/hub/user/carolynvs (108.166.30.187)
Traceback (most recent call last):
File "/opt/conda/lib/python3.5/site-packages/tornado/web.py", line 1469, in _execute
result = yield result
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 484, in get
yield self.spawn_single_user(current_user)
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 306, in spawn_single_user
yield gen.with_timeout(timedelta(seconds=self.slow_spawn_timeout), f)
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 245, in spawn
raise e
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 226, in spawn
yield gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
File "/srv/jupyterhub/src/jupyterhub-carina/jupyterhub_carina/CarinaSpawner.py", line 151, in start
cluster = yield self.create_cluster()
File "/srv/jupyterhub/src/jupyterhub-carina/jupyterhub_carina/CarinaSpawner.py", line 169, in create_cluster
return (yield self.carina_client.create_cluster(self.cluster_name))
File "/srv/jupyterhub/src/jupyterhub-carina/jupyterhub_carina/CarinaOAuthClient.py", line 120, in create_cluster
response = yield self.execute_oauth_request(request)
File "/srv/jupyterhub/src/jupyterhub-carina/jupyterhub_carina/CarinaOAuthClient.py", line 244, in execute_oauth_request
return (yield self.execute_request(request, raise_error))
File "/srv/jupyterhub/src/jupyterhub-carina/jupyterhub_carina/CarinaOAuthClient.py", line 275, in execute_request
return (yield http_client.fetch(request, raise_error=raise_error))
tornado.httpclient.HTTPError: HTTP 413: Payload Too Large
The Carina API in this case is returning 413 Payload Too Large, indicating that I've hit my 3 cluster quota. If I had had fewer clusters, it would have made a duplicate cluster with the same name.
Oops, I should have realized this would be a problem! The old Carina would ignore requests to create a new cluster, if one already existed with the same name. Now that you can create multiple clusters with the same name, I need to check for it and reuse the cluster if found.
| gharchive/issue | 2017-01-30T17:51:52 | 2025-04-01T06:39:14.866251 | {
"authors": [
"carolynvs"
],
"repo": "jupyterhub/jupyterhub-carina",
"url": "https://github.com/jupyterhub/jupyterhub-carina/issues/7",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1994560990 | refactor(connector): [Square] change error message from NotSupported to NotImplemented
Type of Change
[ ] Bugfix
[ ] New feature
[ ] Enhancement
[x] Refactoring
[ ] Dependency updates
[ ] Documentation
[ ] CI/CD
Description
Consistent error messages for not-implemented payment methods.
Motivation and Context
Resolves #2861
Checklist
[x] I formatted the code cargo +nightly fmt --all
[x] I addressed lints thrown by cargo clippy
[x] I reviewed the submitted code
[ ] I added unit tests for my changes where possible
[ ] I added a CHANGELOG entry if applicable
Hey @nain-F49FF806, thanks for the PR.
But we'd prefer that contributors comment on an issue before opening a PR for it, so that other contributors are aware that you are working on it. This would also help reduce duplicate effort towards solving the same issue. Please ensure you follow this moving forward.
@swangi-kumari Noted.
Saw no one seemed assigned, so just went ahead to have a look at the code before making a comment.
I didn't know if I was going to take it, until I had already finished. Sorry. :sweat_smile:
On a more fortunate note, I only laid eyes on this today, So hopefully not much time wasted. :crossed_fingers:
Regards.
Hey @nain-F49FF806,
It's fine if you missed it. But can you go to the respective issue for which you raised this PR and comment on that issue, so that I can assign it to you?
Hey @nain-F49FF806 ,
Please resolve the conflicts from your branch.
@swangi-kumari Have rebased onto the most recent main.
Please have a look and let me know if there's anything else needed.
| gharchive/pull-request | 2023-11-15T11:06:58 | 2025-04-01T06:39:15.003331 | {
"authors": [
"nain-F49FF806",
"swangi-kumari"
],
"repo": "juspay/hyperswitch",
"url": "https://github.com/juspay/hyperswitch/pull/2875",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2407313669 | [Client] Add support for zig client library
Description
Adding support for a Zig client for Superposition. The steps to work with the client are mentioned in the README.md file in the client.
@NishantJoshi00 can we remove the merge commit? It breaks semantic guidelines here. Please rebase if possible
@NishantJoshi00 can we remove the merge commit? It breaks semantic guidelines here. Please rebase if possible
I have done that 👍 .
But, I am curious. What merge policy do you use?
rebase and merge/squash and merge, depending on the use-case
squash and merge
In this case, my commit messages shouldn't matter, only the PR title would be of significance, right?
@NishantJoshi00 yup
@juspay/sdk-backend can we start reviewing this?
| gharchive/pull-request | 2024-07-14T07:51:53 | 2025-04-01T06:39:15.007072 | {
"authors": [
"Datron",
"NishantJoshi00"
],
"repo": "juspay/superposition",
"url": "https://github.com/juspay/superposition/pull/160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
382906604 | Face detection don't work on AWS Lambda due to node-canvas not working on node 8.10
Hi there!
I am trying to get this up and running on AWS Lambda but are having trouble as it seem the 'canvas' package is broken on node 8.10. I am trying to get some face detection going.
https://github.com/Automattic/node-canvas/issues/1252#issuecomment-437598572
It looks like its kind of a pain to try and run any other version of node on AWS Lambda, so I started trying to use it without the canvas package. As you mention in the README and I also noticed that the faceapi.locateFaces() function also takes a Tensor4D class as input. I have never used Tensorflow and I am a little confused as how to turn a ArrayBuffer from axios into a correctly shaped Tensor4D object.
I am fetching a jpeg image using axios.
I found the tf.tensor4d function but not sure what shape and dtype it should be.
Do you have any idea?
My code so far:
const { data: imageBuffer } = await axios.get(url, {
responseType: 'arraybuffer'
})
const imageTensor = tf.tensor4d(imageBuffer, [?, ?, ? ,?])
const faces = await detector.locateFaces(imageTensor)
Error messages look similar to this one:
Error: Based on the provided shape, [1,2,3,4], and dtype float32, the tensor should have 24 values but has 68695
Any help is greatly appreciated!
Hey man, how's it going?
Ohh that's a pity, node-canvas is obviously the easiest way to use face-api.js with node.
Well you can either pass in tf.Tensor3Ds or tf.Tensor4Ds with a batch size of 1:
const imageTensor = tf.tensor3d(imageBuffer, [height, width, channels]), where channels should be 3 and in RGB order.
However, the imageBuffer you receive from axios is most likely jpeg or png encoded right? You have to create the image tensor from the raw image data, so you will have to decode the data first. There are probably npm packages, which can do that for you. For example I have seen people using ffmpeg for this.
Thanks!
Tried the get-pixels npm package and it works!
get-pixels returns a 4-channel array all the time, even for JPEGs 🤷, so I had to remove the alpha channel.
const pixels = await new Promise<Pixels>(resolve => {
getPixels(url, (err, pixels: Pixels) => {
if (err) {
throw err
}
console.log('pixels:', pixels)
resolve(pixels)
})
})
// remove alpha channel
const RGBValues = []
pixels.data.forEach((px, i) => {
if ((i + 1) % 4 != 0) {
RGBValues.push(px)
}
})
const imageTensor = tf.tensor3d(RGBValues, [pixels.shape[1], pixels.shape[0], 3]) as any
const faces = await detector.locateFaces(imageTensor)
Now to the task of getting it up on Lambda without breaking their code size limit, wish me luck!
Thanks, that helped me. But it's quite slow (1000ms) compared to canvas (< 10ms).
tensorflow team is working on a solution: https://github.com/tensorflow/tfjs/issues/298#issuecomment-442263569
Nice!
It wasn't easy, but I finally got it running in a Lambda on AWS.
The tfjs-node package includes a native library and has to be built using the correct environment. I achieved this using Docker and the lambci/lambda:nodejs8.10 image from the https://github.com/lambci/docker-lambda project. This image seems to be one of the best images to simulate the Lambda environment.
The built tfjs-node module includes a 110mb binary file.
@tensorflow/tfjs-node/deps/lib/libtensorflow.so
The Lambda size limit is 250mb (uncompressed).
Well, with 140mb left for other modules it should be fine right?
I tried deploying the package using serverless deploy resulting in this cryptic error message.
ENOENT: no such file or directory, open '/Users/bobmoff/Projects/picular/serverless-face-api/node_modules/@tensorflow/tfjs-node/build/Release/libtensorflow.so'
The file exists alright, but its a symlink that points to
libtensorflow.so -> /var/task/node_modules/@tensorflow/tfjs-node/deps/lib/libtensorflow.so
Ahh, the symlink was created inside the docker container and it is not a relative link. OK, so I deleted the symlink and created a new one that is relative.
libtensorflow.so -> ../../deps/lib/libtensorflow.so
But zipping up the package, using serverless deploy results in a zip that when uncompressed is 343mb ?? Using Disk Inventory X on the uncompressed folder I saw this:
Why are there 2 large binaries? Shouldn't one of them just be a symlink? The symlink is gone, and replaced with a copy of the binary. Hmm.
After a bit of research I learned that the default behaviour when zipping symlinks is that they get "resolved" (the link is followed and the original content is copied). OK. But there is a --symlinks flag for the zip program that can modify this behaviour to keep symlinks as they are, instead of resolving them. This solved the size issue, but now I realised that I depend on a local project that is linked and actually needs to be resolved when packaged. I couldn't find any way to specify which folders/files should be resolved or not when packaging through serverless. There is a way to exclude files/folders from the package though. I excluded the symlink.
package:
exclude:
- node_modules/@tensorflow/tfjs-node/build/Release/libtensorflow.so
So using the following commands I first package the project (excluding the symlink) and then manually include the symlink again into the zip archive. (-y is the shorthand for --symlinks)
sls package --package my-artifacts
zip -y my-artifacts/serverless-face-api.zip node_modules/@tensorflow/tfjs-node/build/Release/libtensorflow.so
Now I have a package that is 233mb! Yeah, 17mb to spare! Hehe. Of course there are more size reductions that can be made by excluding more stuff, like excluding unused weights/models from face-api etc. But I was just happy to get below the limit.
Happy as a unicorn on Christmas, I deployed my neat "little" package to AWS.
sls deploy --package my-artifacts
.. and lived happily ever after.
libtensorflow.so appears to be 183mb now! don't suppose you could share your package.json for version numbers?
Sure, here are my deps, haven't upgraded in a while :)
{
"@tensorflow/tfjs-node": "^0.2.1",
"axios": "^0.18.0",
"face-api.js": "^0.17.1",
"get-pixels": "^3.3.2",
"lodash": "^4.17.11",
"module-alias": "^2.1.0",
"moment": "^2.23.0",
"monk": "^6.0.6",
"url-join": "^4.0.0"
}
The @tensorflow package (version 1.2.3) is now 294MB! libtensorflow.so.1.14.0 is now 216MB, and libtensorflow_framework.so.1.14.0 is another 35MB. Pretty sure there is no way to get this package into Lambda.
@MatthewAlner did you manage to get it working with a newer version? Or do I need to roll all the way back to the versions used by @bobmoff
Perhaps you could try to deploy using https://www.openfaas.com/
I didn't in the end 😞
Thanks for all the helpful info in this issue, and of course thanks again to @justadudewhohacks . I did eventually manage to get it working on Lambda, but it wasn't easy. I ran into all the same problems that @bobmoff did, and if I'd read his post more carefully at the start I would have saved myself a lot of time (in particular the libtensorflow.so linking issue). I'm using the Node 10.x runtime.
The newer versions of tfjs-node just get bigger and bigger, especially their bundled .so files. There is no way I could find to make the newer versions fit within Lambda's 250MB limit, so I had to roll way back to v0.2.1. You also need to roll face-api.js back to a similarly old version, as it seems face-api.js is coupled to a specific version of tfjs-node. Below are my deps:
"dependencies": {
"@tensorflow/tfjs-node": "^0.2.1",
"face-api.js": "^0.17.1",
"get-pixels": "^3.3.2"
}
If you really wanted to use the latest version of Tensorflow and Face-api, I suspect you could use a technique like this https://github.com/lucleray/tensorflow-lambda, which basically uploads a zipped copy of the dependencies, and then unzips them into /tmp before running them. There is obviously overhead for unzipping the files, but if your lambdas instances are often reused then that initial overhead fades away.
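For context, a minimal sketch of that general pattern (ship the heavy dependencies as a zip, extract them into /tmp on cold start, then load them from there); it assumes a bundled deps.zip and the adm-zip package, and it is not the actual tensorflow-lambda API:
// Cold-start sketch: extract bundled deps into /tmp once, reuse while the instance stays warm.
import * as fs from "fs";
import AdmZip from "adm-zip";

const DEPS_ZIP = "/var/task/deps.zip"; // shipped inside the function package (assumed name)
const DEPS_DIR = "/tmp/deps";          // /tmp persists across warm invocations

let faceapi: any;

function loadDeps(): void {
  if (!fs.existsSync(DEPS_DIR)) {
    new AdmZip(DEPS_ZIP).extractAllTo(DEPS_DIR, true); // one-time cost per cold start
  }
  faceapi = require(`${DEPS_DIR}/node_modules/face-api.js`);
}

export const handler = async (event: any) => {
  if (!faceapi) loadDeps(); // warm invocations skip the unzip entirely
  // ... run face detection with faceapi here ...
  return { statusCode: 200 };
};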
Hey @bobmoff, just wondering if you had any issues with memory leaks in your setup? My face detection lambdas get reused most of the time because they get called so often, and I've noticed the memory used slowly grows with each invocation, until it finally breaks the Lambda memory limit and starts again. No idea if it's get-pixels, tfjs-node or face-api.js that is the issue.
Howdy @henrydawson. No, sorry. I haven't experienced that, but that is probably due to the fact that I haven't had the kind of pressure you seem to have, so my containers/lambdas have probably been recycled before that happens.
Hi @henrydawson and all,
just some feedback on canvas running on lambda and also a question.
I've managed to make canvas run on Lambda by using https://github.com/lambci/docker-lambda to run some tests. Using the "build" image, I ran a bash shell in the Docker build image to:
1/ run npm install
2/ copy 3 libraries from the docker build image /lib64 to the root of my project: libblkid.so.1, libmount.so.1, libuuid.so.1
I'm now trying to use https://github.com/lucleray/tensorflow-lambda for tensorflow but with no luck.
My little project is based on one of the examples from the repository. It uses the common directory as in the examples directory, and I've tried to substitute the import '@tensorflow/tfjs-node'; in the env.ts file with import "tensorflow-lambda", but it does not work.
It looks like tfjs-node has a mechanism to supersede the tfjs-core functions, but this is not happening with tensorflow-lambda. Any hint on how I could make this happen?
Thanks in advance !
@bobmoff I am new to this, and wanted to ask if you've got any example code to get started with face-api.js and AWS Lambda?
Sorry. Not using this any more. But Lambda has upped the limit to 10GB for each function, so size should not be a problem any longer.
@bobmoff I really wanna move this facial recognition to lambda. Any help would be highly appreciated. I tried with this version also
"dependencies": {
"@tensorflow/tfjs-node": "^0.2.1",
"face-api.js": "^0.17.1",
"get-pixels": "^3.3.2"
}
but I'm still exceeding the limit.
Are you reaching the max limit of 10gb ?
The issue I had was getting below 250mb.
I'm going above the deployment package unzipped size limit (250mb).
If you switch to using containers you can use 10 GB.
https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
Hi, I did get this working on Lambda a long time ago, but that was with older versions, and I ran into a lot of issues (especially the hard-to-resolve memory issues mentioned above). Eventually, I gave up on Lambda and just run it on EC2 using spot instances, which was cheaper and better for my workload. It was also simpler as I didn't have to mess around with the file size issues.
That said, I recently used EFS with Lambda for some image processing that required extra space, and it was really easy to use. I imagine you could use EFS to get around all the size limits.
https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications/
@bobmoff @henrydawson thank you both for pointing me in the right direction.
@bobmoff I have successfully deployed in Lambda using containers. But I'm facing one issue. I'm just doing faceapi.detectSingleFace().withFaceLandmarks().withFaceDescriptor() in Lambda to detect a single face and returning that output as a response via API Gateway. I can't return it without doing JSON.stringify(result), and if I do stringify the result, that result doesn't work in the final comparison. I don't know what I'm doing wrong.
| gharchive/issue | 2018-11-21T00:38:45 | 2025-04-01T06:39:15.054626 | {
"authors": [
"MatthewAlner",
"bobmoff",
"henrydawson",
"justadudewhohacks",
"lpsBetty",
"nasr18",
"truff77",
"wangsijie"
],
"repo": "justadudewhohacks/face-api.js",
"url": "https://github.com/justadudewhohacks/face-api.js/issues/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
318669466 | Add iubenda.com
Added iubenda.com entry to sites.json
Hi @MrHeliox, I also made a PR, but it seems this repo isn't maintained anymore. Luckily an active fork can be found here: https://github.com/jdm-contrib/jdm
Ok thanks !
| gharchive/pull-request | 2018-04-28T21:19:15 | 2025-04-01T06:39:15.070926 | {
"authors": [
"MrHeliox",
"mheesters"
],
"repo": "justdeleteme/justdelete.me",
"url": "https://github.com/justdeleteme/justdelete.me/pull/770",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
381515384 | Fix analyser issues on testing framework code
Summarise the changes this Pull Request makes.
Fix various analyser issues on testing framework code
Please include a reference to a GitHub issue if appropriate.
#401
I wanted to enable most analyser rules on the code, but it seems that we aren't ready for that yet. Getting a clean bill of health from the analyser on e.g. JustSaying.csproj is when it starts analysing JustSaying.TestingFramework, and when that's clear then the 250+ messages come in from the unit tests.
It might be possible soon to enable analysers with only a few rules disabled, on the non-test projects.
There's also a level of pragmatism to apply to analyzers with regards to the test code. For example, requiring ConfigureAwait(...) just adds visual noise. I think it's fine for the test infrastructure parts, but in the tests themselves it's not necessary.
In this lib I turned off that rule, plus the ban on underscores and something else that I can't remember what it is off the top of my head.
https://github.com/justeat/httpclient-interception/blob/6756febbd23879077bea037c24a5b7014c8b4fc9/tests/HttpClientInterception.Tests/JustEat.HttpClientInterception.Tests.csproj#L4
Right, what I did here Is to use a second, more forgiving ruleset for the tests.
I find one rule set with overrides per-project via NoWarn easier to maintain than files which are 90% the same, but whichever.
Suggest running Build.ps1 locally before committing to check it actually compiles, rather than relying on the CI to do it.
| gharchive/pull-request | 2018-11-16T09:27:48 | 2025-04-01T06:39:15.075596 | {
"authors": [
"AnthonySteele",
"martincostello"
],
"repo": "justeat/JustSaying",
"url": "https://github.com/justeat/JustSaying/pull/434",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1994420131 | 🛑 Misskey is down
In e7322df, Misskey (https://mkacg.social) was down:
HTTP code: 530
Response time: 156 ms
Resolved: Misskey is back up in a5e4408 after 10 minutes.
| gharchive/issue | 2023-11-15T09:48:29 | 2025-04-01T06:39:15.078162 | {
"authors": [
"justforlxz"
],
"repo": "justforlxz/status.mkacg.com",
"url": "https://github.com/justforlxz/status.mkacg.com/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
936272930 | Add JustList: {JustList name}
Before submission deletes this line:
THIS IS NOT A TOKEN LISTING REQUEST FORM. IF YOU DO NOT FOLLOW THE FORMAT OR MAKE A GENERIC TOKEN REQUEST YOUR ISSUE WILL BE DELETED WITHOUT COMMENT
YOUR JUSTLIST MUST FOLLOW THE JSON SPECIFICATION
https://github.com/justswaporg/justlists/example.justlists.ts
Checklist
[x] I understand that this is not the place to request a token listing.
[x] I have tested that my JustList is compatible by pasting the URL into the add a list UI at justswap.org.
[x] I understand that filing an issue or adding liquidity does not guarantee addition to the justlists website.
Please provide the following information for your token.
JustList URL must be HTTPS.
JustList URL:
JustList Name:
Link to the official homepage of the JustList manager:
Sorry, your issue will be closed as you did not submit your information in the correct format.
| gharchive/issue | 2021-07-03T15:09:38 | 2025-04-01T06:39:15.098943 | {
"authors": [
"Magenta-1",
"jeffwami"
],
"repo": "justswaporg/justlists",
"url": "https://github.com/justswaporg/justlists/issues/1511",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1144737710 | 🛑 Main Website is down
In e115651, Main Website (https://tecnoscientifica.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Main Website is back up in ef82ab8.
| gharchive/issue | 2022-02-19T14:51:24 | 2025-04-01T06:39:15.109198 | {
"authors": [
"justudin"
],
"repo": "justudin/tsp-status",
"url": "https://github.com/justudin/tsp-status/issues/216",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
138238645 | initial code
Some notes
Borrowed some code from https://github.com/atom/markdown-preview because we want a similar behavior w.r.t. to opening a new tab in the right pane when running a juttle or switching to an existing tab if the juttle has already been run.
I couldn't get the styles from juttle-client-library to load properly using an import so copied them from node_modules into styles and checked it in.
@mnibecker
Before merge:
I would use 0.7.0 jcl instead, deconstruct your view object on close, and also check for inputs in a program, and also fix the annoying dropdown css things
@mnibecker addressed your feedback, take a look? (will squash afterwards)
+1
| gharchive/pull-request | 2016-03-03T17:07:28 | 2025-04-01T06:39:15.112370 | {
"authors": [
"go-oleg",
"mnibecker"
],
"repo": "juttle/atom-juttle-viewer",
"url": "https://github.com/juttle/atom-juttle-viewer/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
126536626 | authors: add initial AUTHORS.md
This includes the list of current project contributors as well as
all of the people who contributed to the project before the project
became open source.
@dmehra
+1
| gharchive/pull-request | 2016-01-13T23:00:24 | 2025-04-01T06:39:15.114098 | {
"authors": [
"demmer",
"dmehra"
],
"repo": "juttle/juttle",
"url": "https://github.com/juttle/juttle/pull/169",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1515164155 | QueenParseTreeVisitor.java: Finish implementing reference...
The puzzle 1964704442 originating from #49 has to be resolved:
https://github.com/jvmqueen/queen-of-java/blob/a57821a29b40406177b70a9b243fccf2a35eb56a/src/main/java/org/queenlang/transpiler/QueenParseTreeVisitor.java#L864-L864
"Finish implementing reference types with array types.".
The puzzle was created by amihaiemil at 2022-12-31 19:11:52 +0200.
Estimation is 60 minutes.
If you have any technical questions, don't ask me, I won't be able to help. Open new issues instead.
@amihaiemil this is your task now, please go ahead. Deadline (when this ticket should be closed) is 2023-01-10T17:20:34.813658.
Estimation here is 60 minutes, that's how much you will be paid.
Remember, you don't have to solve everything in this ticket - you can solve it partially and leave todo markers in the code, which will become future tasks.
If you have any questions don't ask me, I'm not a technical person. Open new tickets instead.
If you don't have time or simply don't want to work on this, you can always resign.
@amihaiemil Don't forget to close this ticket before the deadline (2023-01-10T17:20:35). You are past the first half of the allowed period.
Puzzle disappeared from the code, that's why I closed this ticket.
@amihaiemil thank you for resolving this ticket. I've just added it to your active invoice. You can always check all your invoices and more on the Contributor Dashboard.
| gharchive/issue | 2022-12-31T17:12:14 | 2025-04-01T06:39:15.128356 | {
"authors": [
"zoeself"
],
"repo": "jvmqueen/queen-of-java",
"url": "https://github.com/jvmqueen/queen-of-java/issues/57",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
257220195 | Subdomain 502 Bad gateway
Using any subdomain gives me a 502 Bad gateway screen. I'm new to using this tool, but I have checked the certs folder, and it looks like all certs for my domain (and new subdomains) use the same key. Any tips on getting LetsEncrypt to work with subdomains?
I'm having the same issue. The wildcard crt and key are readable in the container, it redirects from http to https, and then gives:
nginx.1 | 2017/09/15 01:55:25 [error] 37#37: *1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 47.180.[redacted].[redacted], server: jira.my.network, request: "GET /list HTTP/2.0", upstream: "https://172.20.0.4:8080/list", host: "jira.my.network"
The certs are included (from /etc/nginx/conf.d/default.conf):
server {
server_name jira.my.network;
listen 443 ssl http2 ;
...
ssl_certificate /etc/nginx/certs/my.network.crt;
ssl_certificate_key /etc/nginx/certs/my.network.key;
...
Look at my issue https://github.com/jwilder/nginx-proxy/issues/930. One of your issues is likely that proxy_pass is pulling from your service via https and not http.
I've been intermittently having the same problem and have found this workaround which resolves the issue, until it surfaces again:
stop nginx, nginx-gen and ngingx-letsencrypt,
delete the volume for /etc/nginx/conf.d/ (keep all other volumes including certs).
bring everything up again
Voila (at least for my setup). I wish I knew how to reproduce this issue, but have yet to figure out what triggers it.
My docker-compose.yml file is based on https://github.com/jwilder/nginx-proxy/blob/master/docker-compose-separate-containers.yml, the only addition is the letsencrypt service:
nginx-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
container_name: nginx-letsencrypt
restart: always
volumes:
- nginx-certs:/etc/nginx/certs/:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes_from:
- nginx
environment:
NGINX_DOCKER_GEN_CONTAINER: 'nginx-gen'
This bug was actually getting so annoying that I created a small script which does the above 3 steps for me:
#!/bin/bash
# name of current directory
project_name=${PWD##*/}
# remove hyphen
project_name=${project_name/[-]}
# name of the volume which holds nginx configuration files
volume_to_delete=${project_name}_nginx-conf
# Stop proxy services (nginx, nginx-gen, nginx-letsencrypt)
$(docker-compose stop| tee /dev/tty)
# Delete the `nginx-conf` volume, this should force `nginx-gen` to create a new one from scratch, and
# that seems to fix the problem, until it surfaces again...
docker volume rm $volume_to_delete
# Bring proxy back up
$(docker-compose up -d| tee /dev/tty)
Hello everyone. I need your help. I tried all the solutions proposed in the nginx-proxy Docker issues and alternative solutions on Google; nothing changes.
How do I fix this problem?
I have 2 servers:
S1: Docker CE:
Nginx-proxy
Letsencryp
Emby
Ombi (I had problems to validate it)
Deluge (web access impossible functional daemon)
(I did not manage to install organizr, and/or heimdall)
OS: Ubuntu 18.04 lts
S2: Docker CE:
Nginx-proxy
Letsencryp
Subsonic
Transmission
(I was unable to install organizr and/or heimdall)
OS: Ubuntu 16.04 lts
`nginx.1 | 2018/12/12 10:46:12 [error] 221#221: *8607 no live upstreams while connecting to upstream, client: 77.147.234.230, server: heimdall.nomdedomain.com, request: "GET / HTTP/2.0", upstream: "http://heimdall.nomdedomaine.com/", host: "heimdall.nomdedomaine.com"
nginx.1 | heimdall.nomdedomaine.com 77.147.234.230 - - [12/Dec/2018:10:46:12 +0000] "GET / HTTP/2.0" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0"
nginx.1 | 2018/12/12 10:46:13 [error] 221#221: *8607 no live upstreams while connecting to upstream, client: 77.147.234.230, server: heimdall.nomdedomaine.com, request: "GET /favicon.ico HTTP/2.0", upstream: "http://heimdall.nomdedomaine.com/favicon.ico", host: "heimdall.nomdedomaine.com"
nginx.1 | heimdall.nomdedomaine.com 77.147.234.230 - - [12/Dec/2018:10:46:13 +0000] "GET /favicon.ico HTTP/2.0" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0"
nginx.1 | ombi.nomdedomaine.com 77.147.234.230 - - [12/Dec/2018:10:46:13 +0000] "GET /favicon.ico HTTP/2.0" 404 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0"
nginx.1 | 2018/12/12 10:49:27 [error] 221#221: *8612 upstream prematurely closed connection while reading response header from upstream, client: 77.147.234.230, server: deluge.nomdedomaine.com, request: "GET / HTTP/2.0", upstream: "http://172.17.0.4:8112/", host: "deluge.nomdedomaine.com"
nginx.1 | deluge.nomdedomaine.com 77.147.234.230 - - [12/Dec/2018:10:49:27 +0000] "GET / HTTP/2.0" 502 173 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0"
nginx.1 | 2018/12/12 10:49:27 [error] 221#221: *8612 upstream prematurely closed connection while reading response header from upstream, client: 77.147.234.230, server: deluge.nomdedomaine.com, request: "GET /favicon.ico HTTP/2.0", upstream: "http://172.17.0.4:8112/favicon.ico", host: "deluge.nomdedomaine.com"
my docker
docker run -d -p 80:80 -p 443:443 \
--name nginx-proxy\
--restart always \
-e DEFAULT_HOST=nomdedomain.com \
-v /home/user/docker/nginx-proxy/certs:/etc/nginx/certs:ro \
-v /home/user/docker/nginx-proxy/dhparam:/etc/nginx/dhparam \
-v /etc/nginx/vhost.d \
-v /etc/nginx/conf.d \
-v /usr/share/nginx/html \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
--label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
jwilder/nginx-proxy:latest
I have tested as much as I can; the containers are OK and the ports are OK (I think).
I tried this tutorial, but I could not get the configuration file to be read when the container loads:
https://wiki.ssdt-ohio.org/display/rtd/Adjusting+nginx-proxy+Timeout+Configuration
Any help is welcome. Thank you all.
| gharchive/issue | 2017-09-13T00:09:59 | 2025-04-01T06:39:15.164798 | {
"authors": [
"dasmedium",
"geirgp",
"mabushey",
"renfyld"
],
"repo": "jwilder/nginx-proxy",
"url": "https://github.com/jwilder/nginx-proxy/issues/925",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
270385894 | Varnish Cache and SSL termination
I would like to use nginx-proxy with a Varnish cache layer in front of it. In my normal nginx configuration I listen to port 443 and simply proxy to port 80 where Varnish will listen.
Could you implement this feature?
In my normal nginx configuration I listen to port 443 and simply proxy to port 80 where Varnish will listen.
In this scenario, you can replace nginx with nginx-proxy. When you start your Varnish container to listen on port 80, as long as nginx-proxy has the SSL certificate for the given hostname, it will listen on 443 and send the requests back to Varnish on 80 by default.
I would like to use nginx-proxy with a Varnish cache layer in front of it.
This is the opposite of the other sentence. If you want Varnish in front of nginx-proxy, just add it as a backend in VCL.
Varnish is the last component I am trying to put together in my WordPress stack.
I ran Varnish with:
nginx-proxy:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  restart: always
  ports:
    - "443:443"
  volumes:
    - conf:/etc/nginx/conf.d
    - vhost:/etc/nginx/vhost.d
    - html:/usr/share/nginx/html
    - dhparam:/etc/nginx/dhparam
    - ./certs:/etc/nginx/certs:ro
    - /var/run/docker.sock:/tmp/docker.sock:ro
  networks:
    - nginx-proxy

varnish:
  image: varnish
  container_name: varnish
  # depends_on:
  #   - nginx-proxy
  restart: always
  volumes:
    - ./default.vcl:/etc/varnish/default.vcl
  ports:
    - "80:80"
  expose:
    - 80
    - 443
  networks:
    - nginx-proxy
    - internal
./default.vcl
vcl 4.0;

backend sentrytire {
    .host = "172.19.0.3";
    .port = "80";
}

sub vcl_recv {
}

sub vcl_backend_response {
    set beresp.ttl = 10s;
    set beresp.grace = 1h;
}

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "cached";
    } else {
        set resp.http.X-Cache = "uncached";
    }
}
I can confirm that when checking with curl -IL http://example.com, the Varnish cache is working fine:
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 14 Sep 2019 00:35:52 GMT
Content-Type: text/html; charset=UTF-8
Vary: Accept-Encoding
X-Powered-By: PHP/7.3.6
Link: http://example.com/wp-json/; rel="https://api.w.org/"
Link: http://example.com/; rel=shortlink
X-Varnish: 65540 32777
Age: 1
Via: 1.1 varnish (Varnish/6.2)
X-Cache: cached
Accept-Ranges: bytes
Content-Length: 168575
Connection: keep-alive
Now here comes the hard part: setting up jwilder/nginx-proxy to reverse proxy port 443 to Varnish on port 80.
By default, its nginx.tmpl generates /etc/nginx/conf.d/default.conf. I followed some tutorials and edited the server section for example.com to:
location / {
    # NETWORK nginx-proxy: IP of the Varnish container is 172.18.0.4
    proxy_pass http://172.18.0.4:80;
}
But doing service nginx restart resets /etc/nginx/conf.d/default.conf back to what nginx.tmpl generates.
So I am thinking of editing nginx.tmpl, but I have no idea where the file resides. I would also like to run multiple containers with multiple domains, all cached through one Varnish container; I still have question marks on that. If anyone has successfully set up this stack, I would really appreciate your input and answers on this.
Following. I'm also trying to get this scenario working, where I have one Varnish container to cache all my WordPress sites and send traffic back to nginx for HTTPS to the end user.
| gharchive/issue | 2017-11-01T17:26:30 | 2025-04-01T06:39:15.179800 | {
"authors": [
"Bram-Zijp",
"jfrancais",
"kamermans",
"kimlonglecis"
],
"repo": "jwilder/nginx-proxy",
"url": "https://github.com/jwilder/nginx-proxy/issues/965",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
544191033 | Placeholder not updating
I am trying to update the placeholder but it is not working. Any solution, please?
"dependencies": { "@react-native-community/async-storage": "^1.6.1", "@react-native-community/netinfo": "^4.2.1", "axios": "^0.19.0", "jetifier": "^1.6.4", "jsonwebtoken": "^8.5.1", "jwt-decode": "^2.2.0", "native-base": "^2.13.8", "pubnub-react": "^1.3.2", "query-string": "^6.8.3", "react": "16.8.6", "react-native": "0.60.4", "react-native-calendars": "^1.212.0", "react-native-camera": "git+https://git@github.com/react-native-community/react-native-camera.git", "react-native-chart-kit": "^3.3.1", "react-native-check-box": "^2.1.7", "react-native-datepicker": "^1.7.2", "react-native-document-picker": "^3.2.4", "react-native-gesture-handler": "^1.4.1", "react-native-gifted-chat": "^0.11.3", "react-native-image-picker": "^1.1.0", "react-native-linear-gradient": "^2.5.6", "react-native-progress": "^3.6.0", "react-native-push-notification": "^3.1.9", "react-native-qrcode-svg": "^5.2.0", "react-native-select-multiple": "^2.1.0", "react-native-svg": "^9.9.3", "react-native-table-component": "^1.2.1", "react-native-tag-input": "0.0.21", "react-native-vector-icons": "^6.6.0", "react-native-video": "^5.0.2", "react-native-video-controls": "^2.2.3", "react-native-webview": "^7.0.5", "react-navigation": "^3.12.1", "react-redux": "^7.1.1", "redux": "^4.0.4", "redux-logger": "^3.0.6", "redux-promise": "^0.6.0", "redux-thunk": "^2.3.0", "socket.io-client": "^2.3.0", "uuid": "^3.3.3" },
The placeholder is set through the inputProps param. I haven't tried updating the placeholder before, but you'd need to make sure the inputProps param itself changes, since TagInput is a PureComponent. If you replace the inputProps parameter with a different one that contains a different placeholder param, and the placeholder doesn't change, it could be a bug in TextInput.
Unfortunately I won't be able to help you further unless I can reproduce the issue you're describing. If you can provide an MCVE that reproduces the issue, in the form of an Expo Snack or a react-native init'd repo hosted on GitHub, I can try and investigate it further.
I found the solution to change the placeholder thanks for the reply.
| gharchive/issue | 2019-12-31T13:36:51 | 2025-04-01T06:39:15.186165 | {
"authors": [
"Ashoat",
"gauravsbagul"
],
"repo": "jwohlfert23/react-native-tag-input",
"url": "https://github.com/jwohlfert23/react-native-tag-input/issues/80",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1306120157 | Fix NoMethodError when a 2 segment token is missing 'alg' header
This PR fixes an issue that was introduced at https://github.com/jwt/ruby-jwt/pull/425
Currently, if one passes a token of 2 segments without an alg header, the decode method raises NoMethodError, because algorithm is nil but casecmp is called on it here:
https://github.com/jwt/ruby-jwt/blob/master/lib/jwt/decode.rb#L116
The expected result would be to raise an exception that is inherited from JWT::DecodeError instead of a generic exception like NoMethodError.
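For readers who want the shape of the fix without reading the Ruby diff: the idea is simply to guard the possibly-missing header value before calling string methods on it, and to raise the library's own error type. A rough, purely illustrative Python rendering of that pattern (not the gem's actual code) might look like:

class DecodeError(Exception):
    """Stand-in for the library's base decoding error."""

class MissingAlgorithmError(DecodeError):
    """Raised when the token header carries no 'alg' value."""

def resolve_algorithm(header, allowed_algorithms):
    alg = header.get("alg")
    if alg is None:
        # Without this guard, alg.lower() below would blow up with an
        # unrelated error (the Python analogue of calling casecmp on nil).
        raise MissingAlgorithmError("token is missing the 'alg' header")
    matches = [a for a in allowed_algorithms if a.lower() == alg.lower()]
    if not matches:
        raise DecodeError(f"algorithm {alg!r} is not allowed")
    return matches[0]

Callers that already rescue the base decoding error then handle the missing-header case for free.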
Great catch, and nice to have the additional tests to ensure this behaviour in the future.
Added a comment about the added condition. Would suggest making the check more generic to avoid checking for a certain method.
@anakinj updated
Looks great. Would you be so kind as to add a changelog entry to the CHANGELOG.md under the fixes?
@anakinj done
Super! Thank you for the effort fixing this!❤️
| gharchive/pull-request | 2022-07-15T14:41:19 | 2025-04-01T06:39:15.197093 | {
"authors": [
"anakinj",
"cmrd-senya"
],
"repo": "jwt/ruby-jwt",
"url": "https://github.com/jwt/ruby-jwt/pull/502",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
467811758 | how to choose cuda for compiling in pytorch-1.0 branch?
Hi, thanks for your work.
Would you please tell me how to change my CUDA_PATH for compiling the pytorch-1.0 branch?
I have two CUDA versions (9.0 and 9.2), but the one that I want to use is not installed in /usr/local/cuda/.
What should be modified in setup.py to change the CUDA_PATH? It always uses the CUDA in /usr/local/cuda/ by default.
export LD_LIBRARY_PATH=/path/cuda/cuda-10.0/cuda/lib64:/path/cuda/cuda-10.0/cudnn/v7.5.0/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=/path/cuda/cuda-10.0/cuda
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
thanks
could you please tell me where to set these words?
in faster-rcnn.pytorch/lib/make.sh
Thank you, but I am running pytorch-1.0.
There is no make.sh file here.
I run with pytorch-1.0, there is no make.sh here >-<
Maybe you can write and run a shell script yourself, like this:
export XXXX
python setup.py build develop
Or just try to export these variables on the command line before running python setup.py build develop.
emmm, actually I don't really know how to do that.
| gharchive/issue | 2019-07-14T08:49:10 | 2025-04-01T06:39:15.212318 | {
"authors": [
"Songtingt",
"XuYunqiu",
"junqiangwu"
],
"repo": "jwyang/faster-rcnn.pytorch",
"url": "https://github.com/jwyang/faster-rcnn.pytorch/issues/605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160203366 | Activate on Enter
Most comment documentation tools activate the comment block once the user presses Enter, instead of when they type ///.
For me, I like making comment blocks that look like this:
///////////////////////////////////////////////////////////////////////////
/// Comment block that sticks out!
///////////////////////////////////////////////////////////////////////////
When I do so, it attempts to make a documentation block.
So if you could make it activate when you press /// followed by Enter, instead of the current way, that would be awesome!
v.0.0.6
settings.json (Code > Preferences > Workspace Settings)
{
"docomment.activateOnEnter": true
}
| gharchive/issue | 2016-06-14T14:57:00 | 2025-04-01T06:39:15.270293 | {
"authors": [
"TheColorRed",
"k--kato"
],
"repo": "k--kato/vscode-docomment",
"url": "https://github.com/k--kato/vscode-docomment/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1363441787 | PR to Support Netbox 3.3
This is my first attempt at a contribution to this plugin. We are deploying ISIS/MPLS-SR currently, and this plugin is very helpful in documenting our network.
So here goes...
Netbox recently refactored the API Serializers. The BGP Plugin needs these updated to function.
I was able to test this in my development instance here. With a version bump in the __init__.py, the plugin seems to work after this change.
Hi, we have recently upgraded to NetBox 3.3 and the BGP plugin prevents NetBox from starting up.
I would be grateful if this issue could be investigated and resolved as soon as possible.
Hello everyone, we have the same problem. I would appreciate it if you could resolve it as soon as possible.
FWIW, you can take the changes in this PR and add them to your production instance, it should get it running again. YMMV.
At a glance it looks like this change should also be backwards compatible with version 3.2.x although I haven't verified yet.
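If it had turned out not to be backwards compatible, the usual escape hatch for a plugin is a guarded import that tries the new location first and falls back to the old one. A rough sketch of that pattern follows; the module paths are placeholders rather than the real NetBox ones, and WritableNestedSerializer is only used as an example of a serializer base class a plugin might import:

try:
    # Hypothetical post-refactor location (NetBox >= 3.3); replace with the real path.
    from netbox_new_api_location.serializers import WritableNestedSerializer
except ImportError:
    # Hypothetical pre-refactor location (NetBox <= 3.2); replace with the real path.
    from netbox_old_api_location import WritableNestedSerializer

In this case the fallback was not needed, since both 3.2.x and 3.3.x reportedly work with the updated imports.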
Thanks for raising this PR, I was going to raise the same PR.
You have missed a couple of things;
The version number in the README: https://github.com/k01ek/netbox-bgp/blob/main/README.md?plain=1#L20
The version number in the Makefile: https://github.com/k01ek/netbox-bgp/blob/main/Makefile#L2
I have run this against NetBox 3.3.2 and 3.2.9 and both are working for me.
Made those updates, wasn't sure the next version number was for me to decide, since I am not the owner of this.
#106
| gharchive/pull-request | 2022-09-06T15:12:14 | 2025-04-01T06:39:15.278010 | {
"authors": [
"Omripresent",
"fahimeh2010",
"jwbensley",
"k01ek",
"kvedder-amplex",
"wtayyeb"
],
"repo": "k01ek/netbox-bgp",
"url": "https://github.com/k01ek/netbox-bgp/pull/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1713703138 | Panic may occur during load testing
Although I have not found a reliable way to reproduce it, the following error occurred in some cases.
$ runn loadt --load-concurrent 3 --duration 20s tests/**/*.yml
panic: runtime error: index out of range [7] with length 7
goroutine 26 [running]:
github.com/k1LoW/runn.(*operator).recordAsMapped(...)
/Users/k2tzumi/go/pkg/mod/github.com/k1!lo!w/runn@v0.69.1/operator.go:374
github.com/k1LoW/runn.(*operator).recordNotRun(0x140aa154780, 0x103f8fee0?)
/Users/k2tzumi/go/pkg/mod/github.com/k1!lo!w/runn@v0.69.1/operator.go:343 +0x370
github.com/k1LoW/runn.(*operator).runInternal(0x140aa154780, {0x103fb0778, 0x14002caa000})
/Users/k2tzumi/go/pkg/mod/github.com/k1!lo!w/runn@v0.69.1/operator.go:962 +0x53c
github.com/k1LoW/runn.(*operator).run(0x140aa154780, {0x103fb0778?, 0x14002caa000})
/Users/k2tzumi/go/pkg/mod/github.com/k1!lo!w/runn@v0.69.1/operator.go:808 +0x144
github.com/k1LoW/runn.(*operators).runN.func1()
/Users/k2tzumi/go/pkg/mod/github.com/k1!lo!w/runn@v0.69.1/operator.go:1283 +0x138
github.com/k1LoW/concgroup.(*Group).Go.func1()
/Users/k2tzumi/go/pkg/mod/github.com/k1!lo!w/concgroup@v1.0.0/concgroup.go:37 +0xc8
golang.org/x/sync/errgroup.(*Group).Go.func1()
/Users/k2tzumi/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75 +0x5c
created by golang.org/x/sync/errgroup.(*Group).Go
/Users/k2tzumi/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:72 +0xa4
Thank you for your report!
We are looking for code to reproduce this.
If we can reproduce it in past versions, we will try to verify it in the latest version.
The following command also caused errors with runn version 0.70.1:
$ runn run --verbose --shard-n 4 --shard-index 0 tests/**/*.yml
.. snip..
panic: runtime error: index out of range [2] with length 2
goroutine 1245 [running]:
github.com/k1LoW/runn.(*operator).recordAsMapped(...)
/Users/runner/work/runn/runn/operator.go:374
github.com/k1LoW/runn.(*operator).recordNotRun(0xc034f00d20, 0xc00047c7a0?)
/Users/runner/work/runn/runn/operator.go:343 +0x325
github.com/k1LoW/runn.(*operator).runInternal(0xc034f00d20, {0x1db5e78, 0xc0352521e0})
/Users/runner/work/runn/runn/operator.go:973 +0x74c
github.com/k1LoW/runn.(*operator).run(0xc034f00d20, {0x1db5e78?, 0xc0352521e0})
/Users/runner/work/runn/runn/operator.go:808 +0x23e
github.com/k1LoW/runn.(*operators).runN.func1()
/Users/runner/work/runn/runn/operator.go:1283 +0x26d
github.com/k1LoW/concgroup.(*Group).Go.func1()
/Users/runner/go/pkg/mod/github.com/k1!lo!w/concgroup@v1.0.0/concgroup.go:37 +0x96
golang.org/x/sync/errgroup.(*Group).Go.func1()
/Users/runner/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
/Users/runner/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:72 +0xa5
Error: Process completed with exit code 2.
If an error occurs during execution of the loop portion, it seems to end up as this issue's panic, and the original error may be hidden?
| gharchive/issue | 2023-05-17T11:33:52 | 2025-04-01T06:39:15.299354 | {
"authors": [
"k1LoW",
"k2tzumi"
],
"repo": "k1LoW/runn",
"url": "https://github.com/k1LoW/runn/issues/513",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1261028431 | [WIP] Pruned transducer stateless5 recipe for AISHELL4
I am trying to build a pruned-transducer-stateless5 recipe for AISHELL4.
Here is a problem that I want to ask for some suggestions on.
In aishell4, the text includes the unit <sil>, such as:
呃<sil>一个惊喜吧呃对合影惊喜自己当作是留念然后
那就可以让部长申两个活动,一个是<sil>联欢会的一个加分儿的项目,还有一个就是<sil>
那由D同学做一个P P T吧,他是<sil>学计算机的,然后就让他
So, I need to process these texts which include <sil>.
The first choice for me is to remove the <sil> when generating text. And also I need to remove it in the testing text.
The second choice is to treat it as a single modeling unit, being the same as a char unit.
I am not sure which is better, or maybe there are any other better good ideas for this case.
Are there recipes in other frameworks? What are others doing about it?
A good idea.
Do we accept perl codes for processing text?
I think maybe it's better if I change it to python format.
Do we accept perl codes for processing text?
If there are only several lines of perl script that can be embedded in prepare.sh, that would be fine.
Otherwise, I recommend you to use Python.
With num_workers=0 it has to be slow, that's expected.
When I run again with num-workers=2 on one GPU, here are some training logs:
(k2-python) luomingshuang@de-74279-k2-train-2-0602201035-5fb6d86964-mclm7:~/codes/icefall-pruned-rnnt5-aishell4/egs/aishell4/ASR$ CUDA_VISIBLE_DEVICES='4' python pruned_transducer_stateless5/train.py --max-duration 220 --num-workers 2 --exp-dir pruned_transducer_stateless5/exp-one-gpu
2022-06-08 07:55:57,171 INFO [train.py:877] Training started
2022-06-08 07:55:57.403222: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /ceph-sh1/fangjun/software/cuda-10.2.89/lib:/ceph-sh1/fangjun/software/cuda-10.2.89/lib64:
2022-06-08 07:55:57.403275: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-06-08 07:56:00,409 INFO [train.py:887] Device: cuda:0
2022-06-08 07:56:00,476 INFO [lexicon.py:176] Loading pre-compiled data/lang_char/Linv.pt
2022-06-08 07:56:00,486 INFO [train.py:898] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 1, 'reset_interval': 100, 'valid_interval': 200, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 50, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+git.4198446.dirty', 'torch-version': '1.11.0', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'icefall-pruned-rnnt5-aishell4', 'icefall-git-sha1': '296303a-dirty', 'icefall-git-date': 'Tue Jun 7 22:36:20 2022', 'icefall-path': '/ceph-meixu/luomingshuang/icefall', 'k2-path': '/ceph-ms/luomingshuang/k2_latest/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-meixu/luomingshuang/anaconda3/envs/k2-python/lib/python3.8/site-packages/lhotse-1.3.0.dev0+git.4198446.dirty-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'world_size': 1, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp-one-gpu'), 'lang_dir': 'data/lang_char', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 4000, 'keep_last_k': 30, 'average_period': 100, 'use_fp16': False, 'num_encoder_layers': 24, 'dim_feedforward': 1536, 'nhead': 8, 'encoder_dim': 384, 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 220, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'training_subset': 'L', 'blank_id': 0, 'vocab_size': 3284}
2022-06-08 07:56:00,487 INFO [train.py:900] About to create model
2022-06-08 07:56:01,081 INFO [train.py:904] Number of model parameters: 94337552
2022-06-08 07:56:04,849 INFO [asr_datamodule.py:429] About to get train cuts
2022-06-08 07:56:04,853 INFO [asr_datamodule.py:231] About to get Musan cuts
2022-06-08 07:56:04,854 INFO [asr_datamodule.py:238] Enable MUSAN
2022-06-08 07:56:04,962 INFO [asr_datamodule.py:263] Enable SpecAugment
2022-06-08 07:56:04,962 INFO [asr_datamodule.py:264] Time warp factor: 80
2022-06-08 07:56:04,962 INFO [asr_datamodule.py:276] Num frame mask: 10
2022-06-08 07:56:04,962 INFO [asr_datamodule.py:289] About to create train dataset
2022-06-08 07:56:04,963 INFO [asr_datamodule.py:318] Using DynamicBucketingSampler.
2022-06-08 07:56:09,181 INFO [asr_datamodule.py:334] About to create train dataloader
2022-06-08 07:56:09,181 INFO [asr_datamodule.py:437] About to get dev cuts
2022-06-08 07:56:09,183 INFO [asr_datamodule.py:365] About to create dev dataset
2022-06-08 07:56:09,772 INFO [asr_datamodule.py:384] About to create dev dataloader
2022-06-08 07:56:09,773 INFO [train.py:1054] Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2022-06-08 08:00:27,984 INFO [train.py:815] Epoch 1, batch 0, loss[loss=0.9558, simple_loss=1.912, pruned_loss=9.052, over 5285.00 frames.], tot_loss[loss=0.9558, simple_loss=1.912, pruned_loss=9.052, over 5285.00 frames.], batch size: 23, lr: 3.00e-03
2022-06-08 08:00:30,086 INFO [train.py:815] Epoch 1, batch 1, loss[loss=0.7711, simple_loss=1.542, pruned_loss=9.094, over 5263.00 frames.], tot_loss[loss=0.8632, simple_loss=1.726, pruned_loss=9.073, over 10495.15 frames.], batch size: 21, lr: 3.00e-03
2022-06-08 08:01:10,457 INFO [train.py:815] Epoch 1, batch 2, loss[loss=0.8648, simple_loss=1.73, pruned_loss=9.108, over 5434.00 frames.], tot_loss[loss=0.8637, simple_loss=1.727, pruned_loss=9.085, over 15824.20 frames.], batch size: 99, lr: 3.00e-03
2022-06-08 08:01:11,278 INFO [train.py:815] Epoch 1, batch 3, loss[loss=0.6862, simple_loss=1.372, pruned_loss=9.249, over 5430.00 frames.], tot_loss[loss=0.818, simple_loss=1.636, pruned_loss=9.127, over 21095.96 frames.], batch size: 33, lr: 3.00e-03
2022-06-08 08:01:21,158 INFO [train.py:815] Epoch 1, batch 4, loss[loss=0.5969, simple_loss=1.194, pruned_loss=9.26, over 5197.00 frames.], tot_loss[loss=0.774, simple_loss=1.548, pruned_loss=9.154, over 26082.00 frames.], batch size: 16, lr: 3.00e-03
2022-06-08 08:01:22,072 INFO [train.py:815] Epoch 1, batch 5, loss[loss=0.6028, simple_loss=1.206, pruned_loss=9.481, over 5392.00 frames.], tot_loss[loss=0.7444, simple_loss=1.489, pruned_loss=9.21, over 31213.18 frames.], batch size: 18, lr: 3.00e-03
2022-06-08 08:01:27,888 INFO [train.py:815] Epoch 1, batch 6, loss[loss=0.5526, simple_loss=1.105, pruned_loss=9.342, over 5306.00 frames.], tot_loss[loss=0.7163, simple_loss=1.433, pruned_loss=9.23, over 36207.05 frames.], batch size: 22, lr: 3.00e-03
2022-06-08 08:01:28,707 INFO [train.py:815] Epoch 1, batch 7, loss[loss=0.5148, simple_loss=1.03, pruned_loss=9.262, over 5371.00 frames.], tot_loss[loss=0.69, simple_loss=1.38, pruned_loss=9.234, over 41215.97 frames.], batch size: 29, lr: 3.00e-03
2022-06-08 08:01:42,786 INFO [train.py:815] Epoch 1, batch 8, loss[loss=0.4969, simple_loss=0.9939, pruned_loss=9.257, over 5310.00 frames.], tot_loss[loss=0.6678, simple_loss=1.336, pruned_loss=9.236, over 46113.82 frames.], batch size: 18, lr: 3.00e-03
2022-06-08 08:01:43,608 INFO [train.py:815] Epoch 1, batch 9, loss[loss=0.5297, simple_loss=1.059, pruned_loss=9.424, over 5346.00 frames.], tot_loss[loss=0.6533, simple_loss=1.307, pruned_loss=9.256, over 50998.68 frames.], batch size: 25, lr: 3.00e-03
2022-06-08 08:02:01,751 INFO [train.py:815] Epoch 1, batch 10, loss[loss=0.554, simple_loss=1.108, pruned_loss=9.458, over 5376.00 frames.], tot_loss[loss=0.6438, simple_loss=1.288, pruned_loss=9.275, over 55864.69 frames.], batch size: 37, lr: 3.00e-03
2022-06-08 08:02:02,639 INFO [train.py:815] Epoch 1, batch 11, loss[loss=0.4267, simple_loss=0.8533, pruned_loss=9.177, over 5476.00 frames.], tot_loss[loss=0.6242, simple_loss=1.248, pruned_loss=9.267, over 60782.04 frames.], batch size: 17, lr: 3.00e-03
2022-06-08 08:02:24,529 INFO [train.py:815] Epoch 1, batch 12, loss[loss=0.6245, simple_loss=1.249, pruned_loss=9.511, over 5390.00 frames.], tot_loss[loss=0.6242, simple_loss=1.248, pruned_loss=9.287, over 65564.22 frames.], batch size: 49, lr: 3.00e-03
2022-06-08 08:02:25,393 INFO [train.py:815] Epoch 1, batch 13, loss[loss=0.5387, simple_loss=1.077, pruned_loss=9.375, over 5440.00 frames.], tot_loss[loss=0.6176, simple_loss=1.235, pruned_loss=9.294, over 70348.58 frames.], batch size: 31, lr: 3.00e-03
2022-06-08 08:02:41,581 INFO [train.py:815] Epoch 1, batch 14, loss[loss=0.5919, simple_loss=1.184, pruned_loss=9.385, over 5390.00 frames.], tot_loss[loss=0.6158, simple_loss=1.232, pruned_loss=9.3, over 75035.09 frames.], batch size: 49, lr: 3.00e-03
2022-06-08 08:02:42,388 INFO [train.py:815] Epoch 1, batch 15, loss[loss=0.5398, simple_loss=1.08, pruned_loss=9.256, over 5431.00 frames.], tot_loss[loss=0.6106, simple_loss=1.221, pruned_loss=9.297, over 79715.74 frames.], batch size: 62, lr: 3.00e-03
2022-06-08 08:02:55,783 INFO [train.py:815] Epoch 1, batch 16, loss[loss=0.5844, simple_loss=1.169, pruned_loss=9.465, over 5343.00 frames.], tot_loss[loss=0.6089, simple_loss=1.218, pruned_loss=9.308, over 84261.59 frames.], batch size: 33, lr: 3.00e-03
2022-06-08 08:02:56,663 INFO [train.py:815] Epoch 1, batch 17, loss[loss=0.4624, simple_loss=0.9247, pruned_loss=9.312, over 5159.00 frames.], tot_loss[loss=0.6004, simple_loss=1.201, pruned_loss=9.308, over 88577.97 frames.], batch size: 14, lr: 3.00e-03
2022-06-08 08:03:14,726 INFO [train.py:815] Epoch 1, batch 18, loss[loss=0.5623, simple_loss=1.125, pruned_loss=9.351, over 5383.00 frames.], tot_loss[loss=0.5982, simple_loss=1.196, pruned_loss=9.31, over 93075.19 frames.], batch size: 47, lr: 3.00e-03
2022-06-08 08:03:15,588 INFO [train.py:815] Epoch 1, batch 19, loss[loss=0.4958, simple_loss=0.9917, pruned_loss=9.32, over 5322.00 frames.], tot_loss[loss=0.5926, simple_loss=1.185, pruned_loss=9.311, over 97466.44 frames.], batch size: 28, lr: 3.00e-03
2022-06-08 08:03:44,387 INFO [train.py:815] Epoch 1, batch 20, loss[loss=0.5633, simple_loss=1.127, pruned_loss=9.368, over 5469.00 frames.], tot_loss[loss=0.591, simple_loss=1.182, pruned_loss=9.314, over 101960.77 frames.], batch size: 61, lr: 3.00e-03
2022-06-08 08:03:45,262 INFO [train.py:815] Epoch 1, batch 21, loss[loss=0.5749, simple_loss=1.15, pruned_loss=9.378, over 5434.00 frames.], tot_loss[loss=0.5902, simple_loss=1.18, pruned_loss=9.317, over 106375.17 frames.], batch size: 72, lr: 3.00e-03
2022-06-08 08:03:51,698 INFO [train.py:815] Epoch 1, batch 22, loss[loss=0.4955, simple_loss=0.991, pruned_loss=9.392, over 5433.00 frames.], tot_loss[loss=0.5856, simple_loss=1.171, pruned_loss=9.321, over 110744.42 frames.], batch size: 27, lr: 3.00e-03
2022-06-08 08:03:52,595 INFO [train.py:815] Epoch 1, batch 23, loss[loss=0.5053, simple_loss=1.011, pruned_loss=9.519, over 5275.00 frames.], tot_loss[loss=0.5819, simple_loss=1.164, pruned_loss=9.33, over 114911.97 frames.], batch size: 18, lr: 3.00e-03
2022-06-08 08:04:01,234 INFO [train.py:815] Epoch 1, batch 24, loss[loss=0.4291, simple_loss=0.8583, pruned_loss=9.412, over 5441.00 frames.], tot_loss[loss=0.5749, simple_loss=1.15, pruned_loss=9.334, over 119203.85 frames.], batch size: 12, lr: 3.00e-03
2022-06-08 08:04:17,872 INFO [train.py:815] Epoch 1, batch 25, loss[loss=0.5338, simple_loss=1.068, pruned_loss=9.364, over 5457.00 frames.], tot_loss[loss=0.5731, simple_loss=1.146, pruned_loss=9.335, over 123468.81 frames.], batch size: 60, lr: 3.00e-03
2022-06-08 08:04:18,744 INFO [train.py:815] Epoch 1, batch 26, loss[loss=0.4739, simple_loss=0.9479, pruned_loss=9.445, over 5341.00 frames.], tot_loss[loss=0.5689, simple_loss=1.138, pruned_loss=9.34, over 127575.12 frames.], batch size: 25, lr: 3.00e-03
2022-06-08 08:04:32,789 INFO [train.py:815] Epoch 1, batch 27, loss[loss=0.4645, simple_loss=0.929, pruned_loss=9.407, over 5286.00 frames.], tot_loss[loss=0.5647, simple_loss=1.129, pruned_loss=9.342, over 131585.37 frames.], batch size: 20, lr: 3.00e-03
2022-06-08 08:04:33,656 INFO [train.py:815] Epoch 1, batch 28, loss[loss=0.4937, simple_loss=0.9875, pruned_loss=9.484, over 5393.00 frames.], tot_loss[loss=0.5619, simple_loss=1.124, pruned_loss=9.348, over 135662.52 frames.], batch size: 28, lr: 3.00e-03
2022-06-08 08:04:46,499 INFO [train.py:815] Epoch 1, batch 29, loss[loss=0.4878, simple_loss=0.9755, pruned_loss=9.323, over 5469.00 frames.], tot_loss[loss=0.559, simple_loss=1.118, pruned_loss=9.347, over 139774.89 frames.], batch size: 39, lr: 3.00e-03
2022-06-08 08:04:47,382 INFO [train.py:815] Epoch 1, batch 30, loss[loss=0.4376, simple_loss=0.8751, pruned_loss=9.239, over 5087.00 frames.], tot_loss[loss=0.5547, simple_loss=1.109, pruned_loss=9.343, over 143464.15 frames.], batch size: 12, lr: 3.00e-03
When I run again with num-workers=4 on one GPU, here are some training logs:
(k2-python) luomingshuang@de-74279-k2-train-2-0602201035-5fb6d86964-mclm7:~/codes/icefall-pruned-rnnt5-aishell4/egs/aishell4/ASR$ CUDA_VISIBLE_DEVICES='4' python pruned_transducer_stateless5/train.py --max-duration 220 --num-workers 4 --exp-dir pruned_transducer_stateless5/exp-one-gpu
2022-06-08 08:05:43,112 INFO [train.py:877] Training started
2022-06-08 08:05:43.287827: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /ceph-sh1/fangjun/software/cuda-10.2.89/lib:/ceph-sh1/fangjun/software/cuda-10.2.89/lib64:
2022-06-08 08:05:43.287867: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-06-08 08:05:45,859 INFO [train.py:887] Device: cuda:0
2022-06-08 08:05:45,916 INFO [lexicon.py:176] Loading pre-compiled data/lang_char/Linv.pt
2022-06-08 08:05:45,924 INFO [train.py:898] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 1, 'reset_interval': 100, 'valid_interval': 200, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 50, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+git.4198446.dirty', 'torch-version': '1.11.0', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'icefall-pruned-rnnt5-aishell4', 'icefall-git-sha1': '296303a-dirty', 'icefall-git-date': 'Tue Jun 7 22:36:20 2022', 'icefall-path': '/ceph-meixu/luomingshuang/icefall', 'k2-path': '/ceph-ms/luomingshuang/k2_latest/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-meixu/luomingshuang/anaconda3/envs/k2-python/lib/python3.8/site-packages/lhotse-1.3.0.dev0+git.4198446.dirty-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0602201035-5fb6d86964-mclm7', 'IP address': '10.177.74.202'}, 'world_size': 1, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp-one-gpu'), 'lang_dir': 'data/lang_char', 'initial_lr': 0.003, 'lr_batches': 5000, 'lr_epochs': 6, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'save_every_n': 4000, 'keep_last_k': 30, 'average_period': 100, 'use_fp16': False, 'num_encoder_layers': 24, 'dim_feedforward': 1536, 'nhead': 8, 'encoder_dim': 384, 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 220, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 4, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'training_subset': 'L', 'blank_id': 0, 'vocab_size': 3284}
2022-06-08 08:05:45,925 INFO [train.py:900] About to create model
2022-06-08 08:05:46,400 INFO [train.py:904] Number of model parameters: 94337552
2022-06-08 08:05:50,115 INFO [asr_datamodule.py:429] About to get train cuts
2022-06-08 08:05:50,118 INFO [asr_datamodule.py:231] About to get Musan cuts
2022-06-08 08:05:50,120 INFO [asr_datamodule.py:238] Enable MUSAN
2022-06-08 08:05:50,228 INFO [asr_datamodule.py:263] Enable SpecAugment
2022-06-08 08:05:50,228 INFO [asr_datamodule.py:264] Time warp factor: 80
2022-06-08 08:05:50,229 INFO [asr_datamodule.py:276] Num frame mask: 10
2022-06-08 08:05:50,229 INFO [asr_datamodule.py:289] About to create train dataset
2022-06-08 08:05:50,229 INFO [asr_datamodule.py:318] Using DynamicBucketingSampler.
2022-06-08 08:05:54,631 INFO [asr_datamodule.py:334] About to create train dataloader
2022-06-08 08:05:54,631 INFO [asr_datamodule.py:437] About to get dev cuts
2022-06-08 08:05:54,633 INFO [asr_datamodule.py:365] About to create dev dataset
2022-06-08 08:05:55,577 INFO [asr_datamodule.py:384] About to create dev dataloader
2022-06-08 08:05:55,578 INFO [train.py:1054] Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2022-06-08 08:10:30,286 INFO [train.py:815] Epoch 1, batch 0, loss[loss=0.9558, simple_loss=1.912, pruned_loss=9.052, over 5285.00 frames.], tot_loss[loss=0.9558, simple_loss=1.912, pruned_loss=9.052, over 5285.00 frames.], batch size: 23, lr: 3.00e-03
2022-06-08 08:10:31,157 INFO [train.py:815] Epoch 1, batch 1, loss[loss=0.7711, simple_loss=1.542, pruned_loss=9.094, over 5263.00 frames.], tot_loss[loss=0.8632, simple_loss=1.726, pruned_loss=9.073, over 10495.15 frames.], batch size: 21, lr: 3.00e-03
2022-06-08 08:10:47,571 INFO [train.py:815] Epoch 1, batch 2, loss[loss=0.8648, simple_loss=1.73, pruned_loss=9.109, over 5434.00 frames.], tot_loss[loss=0.8637, simple_loss=1.727, pruned_loss=9.085, over 15824.20 frames.], batch size: 99, lr: 3.00e-03
2022-06-08 08:10:48,376 INFO [train.py:815] Epoch 1, batch 3, loss[loss=0.6862, simple_loss=1.372, pruned_loss=9.248, over 5430.00 frames.], tot_loss[loss=0.818, simple_loss=1.636, pruned_loss=9.127, over 21095.96 frames.], batch size: 33, lr: 3.00e-03
2022-06-08 08:10:49,258 INFO [train.py:815] Epoch 1, batch 4, loss[loss=0.5969, simple_loss=1.194, pruned_loss=9.258, over 5197.00 frames.], tot_loss[loss=0.774, simple_loss=1.548, pruned_loss=9.153, over 26082.00 frames.], batch size: 16, lr: 3.00e-03
2022-06-08 08:10:50,107 INFO [train.py:815] Epoch 1, batch 5, loss[loss=0.6028, simple_loss=1.206, pruned_loss=9.48, over 5392.00 frames.], tot_loss[loss=0.7444, simple_loss=1.489, pruned_loss=9.21, over 31213.18 frames.], batch size: 18, lr: 3.00e-03
2022-06-08 08:10:57,468 INFO [train.py:815] Epoch 1, batch 6, loss[loss=0.5526, simple_loss=1.105, pruned_loss=9.341, over 5306.00 frames.], tot_loss[loss=0.7163, simple_loss=1.433, pruned_loss=9.229, over 36207.05 frames.], batch size: 22, lr: 3.00e-03
2022-06-08 08:10:58,273 INFO [train.py:815] Epoch 1, batch 7, loss[loss=0.5148, simple_loss=1.03, pruned_loss=9.262, over 5371.00 frames.], tot_loss[loss=0.69, simple_loss=1.38, pruned_loss=9.234, over 41215.97 frames.], batch size: 29, lr: 3.00e-03
2022-06-08 08:11:01,953 INFO [train.py:815] Epoch 1, batch 8, loss[loss=0.4969, simple_loss=0.9939, pruned_loss=9.256, over 5310.00 frames.], tot_loss[loss=0.6678, simple_loss=1.336, pruned_loss=9.236, over 46113.82 frames.], batch size: 18, lr: 3.00e-03
2022-06-08 08:11:02,764 INFO [train.py:815] Epoch 1, batch 9, loss[loss=0.5297, simple_loss=1.059, pruned_loss=9.423, over 5346.00 frames.], tot_loss[loss=0.6533, simple_loss=1.307, pruned_loss=9.256, over 50998.68 frames.], batch size: 25, lr: 3.00e-03
2022-06-08 08:11:08,665 INFO [train.py:815] Epoch 1, batch 10, loss[loss=0.5538, simple_loss=1.108, pruned_loss=9.457, over 5376.00 frames.], tot_loss[loss=0.6437, simple_loss=1.287, pruned_loss=9.275, over 55864.69 frames.], batch size: 37, lr: 3.00e-03
2022-06-08 08:11:09,524 INFO [train.py:815] Epoch 1, batch 11, loss[loss=0.4266, simple_loss=0.8533, pruned_loss=9.177, over 5476.00 frames.], tot_loss[loss=0.6242, simple_loss=1.248, pruned_loss=9.266, over 60782.04 frames.], batch size: 17, lr: 3.00e-03
2022-06-08 08:11:15,845 INFO [train.py:815] Epoch 1, batch 12, loss[loss=0.6245, simple_loss=1.249, pruned_loss=9.511, over 5390.00 frames.], tot_loss[loss=0.6242, simple_loss=1.248, pruned_loss=9.286, over 65564.22 frames.], batch size: 49, lr: 3.00e-03
2022-06-08 08:11:16,639 INFO [train.py:815] Epoch 1, batch 13, loss[loss=0.5386, simple_loss=1.077, pruned_loss=9.375, over 5440.00 frames.], tot_loss[loss=0.6176, simple_loss=1.235, pruned_loss=9.293, over 70348.58 frames.], batch size: 31, lr: 3.00e-03
2022-06-08 08:11:29,614 INFO [train.py:815] Epoch 1, batch 14, loss[loss=0.5918, simple_loss=1.184, pruned_loss=9.385, over 5390.00 frames.], tot_loss[loss=0.6157, simple_loss=1.231, pruned_loss=9.3, over 75035.09 frames.], batch size: 49, lr: 3.00e-03
2022-06-08 08:11:30,393 INFO [train.py:815] Epoch 1, batch 15, loss[loss=0.5396, simple_loss=1.079, pruned_loss=9.256, over 5431.00 frames.], tot_loss[loss=0.6106, simple_loss=1.221, pruned_loss=9.297, over 79715.74 frames.], batch size: 62, lr: 3.00e-03
2022-06-08 08:11:31,175 INFO [train.py:815] Epoch 1, batch 16, loss[loss=0.5844, simple_loss=1.169, pruned_loss=9.466, over 5343.00 frames.], tot_loss[loss=0.6089, simple_loss=1.218, pruned_loss=9.308, over 84261.59 frames.], batch size: 33, lr: 3.00e-03
2022-06-08 08:11:32,022 INFO [train.py:815] Epoch 1, batch 17, loss[loss=0.4623, simple_loss=0.9246, pruned_loss=9.311, over 5159.00 frames.], tot_loss[loss=0.6004, simple_loss=1.201, pruned_loss=9.308, over 88577.97 frames.], batch size: 14, lr: 3.00e-03
2022-06-08 08:11:48,858 INFO [train.py:815] Epoch 1, batch 18, loss[loss=0.5627, simple_loss=1.125, pruned_loss=9.35, over 5383.00 frames.], tot_loss[loss=0.5982, simple_loss=1.196, pruned_loss=9.31, over 93075.19 frames.], batch size: 47, lr: 3.00e-03
2022-06-08 08:11:49,684 INFO [train.py:815] Epoch 1, batch 19, loss[loss=0.4959, simple_loss=0.9917, pruned_loss=9.319, over 5322.00 frames.], tot_loss[loss=0.5926, simple_loss=1.185, pruned_loss=9.311, over 97466.44 frames.], batch size: 28, lr: 3.00e-03
2022-06-08 08:11:50,537 INFO [train.py:815] Epoch 1, batch 20, loss[loss=0.563, simple_loss=1.126, pruned_loss=9.368, over 5469.00 frames.], tot_loss[loss=0.591, simple_loss=1.182, pruned_loss=9.314, over 101960.77 frames.], batch size: 61, lr: 3.00e-03
2022-06-08 08:11:51,418 INFO [train.py:815] Epoch 1, batch 21, loss[loss=0.575, simple_loss=1.15, pruned_loss=9.378, over 5434.00 frames.], tot_loss[loss=0.5902, simple_loss=1.18, pruned_loss=9.317, over 106375.17 frames.], batch size: 72, lr: 3.00e-03
2022-06-08 08:12:00,784 INFO [train.py:815] Epoch 1, batch 22, loss[loss=0.4955, simple_loss=0.991, pruned_loss=9.393, over 5433.00 frames.], tot_loss[loss=0.5855, simple_loss=1.171, pruned_loss=9.321, over 110744.42 frames.], batch size: 27, lr: 3.00e-03
2022-06-08 08:12:01,681 INFO [train.py:815] Epoch 1, batch 23, loss[loss=0.5052, simple_loss=1.01, pruned_loss=9.521, over 5275.00 frames.], tot_loss[loss=0.5819, simple_loss=1.164, pruned_loss=9.33, over 114911.97 frames.], batch size: 18, lr: 3.00e-03
2022-06-08 08:12:04,138 INFO [train.py:815] Epoch 1, batch 24, loss[loss=0.4291, simple_loss=0.8582, pruned_loss=9.416, over 5441.00 frames.], tot_loss[loss=0.5749, simple_loss=1.15, pruned_loss=9.334, over 119203.85 frames.], batch size: 12, lr: 3.00e-03
2022-06-08 08:12:05,003 INFO [train.py:815] Epoch 1, batch 25, loss[loss=0.5337, simple_loss=1.067, pruned_loss=9.363, over 5457.00 frames.], tot_loss[loss=0.5731, simple_loss=1.146, pruned_loss=9.335, over 123468.81 frames.], batch size: 60, lr: 3.00e-03
2022-06-08 08:12:12,781 INFO [train.py:815] Epoch 1, batch 26, loss[loss=0.4739, simple_loss=0.9478, pruned_loss=9.444, over 5341.00 frames.], tot_loss[loss=0.5689, simple_loss=1.138, pruned_loss=9.34, over 127575.12 frames.], batch size: 25, lr: 3.00e-03
2022-06-08 08:12:13,682 INFO [train.py:815] Epoch 1, batch 27, loss[loss=0.4643, simple_loss=0.9287, pruned_loss=9.404, over 5286.00 frames.], tot_loss[loss=0.5647, simple_loss=1.129, pruned_loss=9.342, over 131585.37 frames.], batch size: 20, lr: 3.00e-03
2022-06-08 08:12:17,462 INFO [train.py:815] Epoch 1, batch 28, loss[loss=0.4937, simple_loss=0.9874, pruned_loss=9.483, over 5393.00 frames.], tot_loss[loss=0.5619, simple_loss=1.124, pruned_loss=9.348, over 135662.52 frames.], batch size: 28, lr: 3.00e-03
2022-06-08 08:12:20,571 INFO [train.py:815] Epoch 1, batch 29, loss[loss=0.4876, simple_loss=0.9752, pruned_loss=9.322, over 5469.00 frames.], tot_loss[loss=0.559, simple_loss=1.118, pruned_loss=9.347, over 139774.89 frames.], batch size: 39, lr: 3.00e-03
2022-06-08 08:12:27,676 INFO [train.py:815] Epoch 1, batch 30, loss[loss=0.4398, simple_loss=0.8795, pruned_loss=9.228, over 5087.00 frames.], tot_loss[loss=0.5548, simple_loss=1.11, pruned_loss=9.343, over 143464.15 frames.], batch size: 12, lr: 3.00e-03
I also have a question: when I use 4 GPUs to run this recipe, 3 of the GPUs are at 100% utilization while the other GPU's utilization is 0% most of the time. The situation is shown in the following picture:
It seems 4 workers is still not enough to saturate the GPUs. You have a few seconds spike of latency every 4 batches. Try increasing it further if your system resources allow.
If 1 GPU is idle, it actually means that all GPUs are idle; the others are just waiting on a spin-lock inside some kernel, waiting to be able to sync the data via nccl.
I'm a little surprised to see (from your screenshot) that it is unzipping data. Can you tell me what kind of archive you are dealing with here, or what format? I didn't realize we had any setup where unzipping data is something that the data loaders would do.
Yes, it is the jsonl.gz format.
When I create the WebDataset for testing, I find that this process is very slow. (The following picture shows that it will take many hours to finish. The following video shows the webdataset-creation process as I inspect it with py-spy.) Are there any methods or suggestions to improve the speed?
https://user-images.githubusercontent.com/37799481/172527657-75f0cd88-943b-40d2-9974-a196980b11f7.mp4
So it's the decoding process that is slow?
From the stack traces in the video, it appears to be decoding the audio rather than just loading the features. Is this expected?
I think if you have just one CPU thread responsible for decoding encoded audio, it's not surprising that that would be slower than actual decoding on a GPU.
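For what it's worth, if a single Python process is doing all of that audio decoding, spreading the CPU-bound part over a pool of worker processes usually helps. A rough sketch of the idea; decode_one is a hypothetical helper standing in for whatever per-cut work the tarball-creation script actually does, and the writing itself stays sequential in the parent process:

from concurrent.futures import ProcessPoolExecutor

def decode_one(cut):
    # Hypothetical stand-in: load and decode the audio (or features) for a
    # single cut and return the bytes to be stored in the shard.
    ...

def build_shards(cuts, writer, num_workers=8):
    cuts = list(cuts)  # cuts is iterated twice below (zip and pool.map), so materialize it
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        for cut, payload in zip(cuts, pool.map(decode_one, cuts, chunksize=16)):
            writer.write(cut.id, payload)  # 'writer' is whatever sink the script already uses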
... I am also a bit confused about what decode.py does. Is that ASR decoding? Why is it creating a tarball?
Based on the current decode.py in this PR (I removed the WebDataset part), it decodes quickly.
In my video, it has not reached the decoding process yet, just the process of creating the WebDataset.
About this pruned_transducer_stateless5 recipe trained with AISHELL4, my best results for testing set are as follows:
epoch  avg  decoding-method        CER(%)
30     25   greedy_search          30.05
30     25   modified_beam_search   29.16
30     25   fast_beam_search       29.20
A baseline result for the test set from wenet (https://github.com/wenet-e2e/wenet/tree/main/examples/aishell4/s0) is 32.58% (CER). Our results are better than theirs.
@luomingshuang
Could you also upload a torchscript model to https://huggingface.co/luomingshuang/icefall_asr_aishell4_pruned_transducer_stateless5/tree/main/exp ?
I will add it to https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition after you upload it.
| gharchive/pull-request | 2022-06-05T11:40:53 | 2025-04-01T06:39:15.319805 | {
"authors": [
"csukuangfj",
"danpovey",
"luomingshuang",
"pzelasko"
],
"repo": "k2-fsa/icefall",
"url": "https://github.com/k2-fsa/icefall/pull/399",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
717539635 | Master failing to join cluster - TLS handshake error: bad certificate
K3s Version:
k3s version v1.17.2+k3s1 (cdab19b0)
Node(s) CPU architecture, OS, and Version:
Linux 3.10.0-1062.12.1.el7.x86_64 #1 SMP Thu Dec 12 06:44:49 EST 2019 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
4 masters, 8 agents
Describe the bug:
Master node dropped from the cluster and is unable to rejoin due to certificate issues.
Steps To Reproduce:
Running airgap installation with the below service file
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
EnvironmentFile=/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/opt/app/mdx/MdxCache1/k3s/bin/k3s \
server \
'--data-dir' \
'/opt/app/mdx/MdxCache1/k3s-data' \
'--datastore-endpoint' \
'https://...:2379,https://...:2379,https://...:2379,https://....:2379,https://....:2379' \
'--datastore-cafile' \
'/opt/app/mdx/MdxCache1/k3s/etcd/ca.crt' \
'--datastore-certfile' \
'/opt/app/mdx/MdxCache1/k3s/etcd/etcd-server.crt' \
'--datastore-keyfile' \
'/opt/app/mdx/MdxCache1/k3s/etcd/etcd-server.key' \
'--tls-san' \
'KUBERNETES-CA' \
Installed K3s:
Expected behavior:
Master node should join the cluster
Actual behavior:
Master failed to join the cluster with http: TLS handshake error from 127.0.0.1:40404: remote error: tls: bad certificate
Additional context / logs:
Oct 06 17:06:00 lonrs13760 k3s[12866]: http: TLS handshake error from 127.0.0.1:40396: remote error: tls: bad certificate
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.382530597+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.382558967+01:00" level=info msg="Run: k3s kubectl"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.382571906+01:00" level=info msg="k3s is up and running"
Oct 06 17:06:00 lonrs13760 systemd[1]: Started Lightweight Kubernetes.
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.382810224+01:00" level=info msg="module overlay was already loaded"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.382836321+01:00" level=info msg="module nf_conntrack was already loaded"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.382851414+01:00" level=info msg="module br_netfilter was already loaded"
Oct 06 17:06:00 lonrs13760 k3s[12866]: I1006 17:06:00.385210 12866 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Oct 06 17:06:00 lonrs13760 k3s[12866]: http: TLS handshake error from 127.0.0.1:40404: remote error: tls: bad certificate
Oct 06 17:06:00 lonrs13760 k3s[12866]: http: TLS handshake error from 127.0.0.1:40410: remote error: tls: bad certificate
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.418355513+01:00" level=info msg="Using registry config file at /etc/rancher/k3s/registries.yaml"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.421453594+01:00" level=info msg="Logging containerd to /opt/app/mdx/MdxCache1/k3s-data/agent/containerd/containerd.log"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.421624206+01:00" level=info msg="Running containerd -c /opt/app/mdx/MdxCache1/k3s-data/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /opt/app/mdx/MdxCache1/k3s-data/agent/containerd"
Oct 06 17:06:00 lonrs13760 k3s[12866]: time="2020-10-06T17:06:00.421784727+01:00" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused\""
Have tried removing the certs from /opt/app/mdx/MdxCache1/k3s-data/server/tls and deleting /api/v1/namespaces/kube-system/secrets/k3s-serving secret to regenerate new certificates and restarting but no improvement.
You don't appear to be supplying the --token to joining servers - are you doing this via the environment file, or some other means?
Yes, but you still need a token as it is used to encrypt/decrypt the bootstrap data, which includes the CA certificates:
https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#cluster-options
https://github.com/rancher/k3s/blob/master/pkg/cluster/storage.go#L31
Just to clarify, this is a master node which disconnected from the cluster and is attempting to rejoin. We've never used the token when installing the master nodes, since they connect to the data store to get the shared secret.
Will try passing in the token at service start.
I've attempted to remove all the certificates in k3s-data/server/tls folder so it would regenerate new ones on starting the service with no luck. Have also forced regeneration of the k3s-serving secret but that didn't help either.
I tried adding the token when spinning up the master but had no luck with that.
Tried removing the node from the cluster via kubectl delete node and reinstalling, which still left the node joining the cluster with certificate issues.
In the end I ended up deleting everything under k3s-data/server/cred/ and k3s-data/server/tls, which I saw was not being cleaned up by the uninstall script.
Can I get a bit more insight into what exactly is in the cred folder, more specifically the files below?
ipsec.psk
node-passwd
passwd
I understand that the k3s-serving secret is used to create everything in the tls folder, and those certificates are then used in the kubeconfig for each of the services. But I'm not entirely sure how the above files fit into this.
I have the same issue when trying to connect the k3s server to etcd.
It's worth noting that the CA, certificate and key are correct, because I'm able to connect to etcd using wget and openssl:
wget \
--ca-certificate=etcd_ca_bundle.crt \
--certificate=master1_client.crt \
--private-key=master1_client.key \
-O - https://etcd1:2379/metrics
openssl s_client -showcerts \
-connect etcd1:2379 \
-CAfile etcd_client_ca_bundle.crt \
-cert master1_client.crt \
-key master1_client.key
Both of those work fine for me.
My CA chain is like this: Root CA -> Client CA -> etcd1 & master1 certificates. The etcd_client_ca_bundle.crt consists of Root CA and Client CA.
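An equivalent mutual-TLS check can also be scripted with Python's standard ssl module, which makes it easy to point at exactly the same CA bundle, client certificate and key that k3s is given. A sketch using the host and file names from this thread (adjust as needed):

import socket
import ssl

ETCD_HOST, ETCD_PORT = "etcd1", 2379

ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="etcd_ca_bundle.crt")
ctx.load_cert_chain(certfile="master1_client.crt", keyfile="master1_client.key")

with socket.create_connection((ETCD_HOST, ETCD_PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=ETCD_HOST) as tls:
        # If the handshake (including client-cert verification on the etcd side)
        # succeeds, these lines print; otherwise an ssl.SSLError is raised.
        print("negotiated:", tls.version(), tls.cipher())
        print("server cert subject:", dict(x[0] for x in tls.getpeercert()["subject"]))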
K3s does not use the Kubernetes verbosity flags - can you try k3s --debug server or setting the K3S_DEBUG=1 environment variable?
@brandond Yes, I'm sure, because there are no issues when using wget and OpenSSL. I have also tried invalid certificates just to test what happens if I supply the wrong CA or certificate; then I get an error from wget and OpenSSL, as I should. Both of those tests confirm that the certificate chain is correct. The only thing I can currently think of is that I'm using ECC certificates and k3s fails to deal with that.
I'm already using the --debug flag. Commands are as follows:
curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START="true" sh -s -
sudo /usr/local/bin/k3s \
--debug \
server \
-v=3 \
--log=/srv/k3s/log/k3s.log \
--data-dir=/srv/k3s/data/ \
--alsologtostderr \
--node-taint CriticalAddonsOnly=true:NoExecute \
--tls-san="api.kube" \
--datastore-endpoint="https://etcd1:2379,https://etcd2:2379" \
--datastore-cafile="/srv/k3s/ssl/etcd_client_ca_bundle.crt" \
--datastore-certfile="/srv/k3s/ssl/master1_client.crt" \
--datastore-keyfile="/srv/k3s/ssl/master1_client.key"
Output from terminal:
INFO[0000] Preparing data dir /srv/k3s/data/data/688c8ca42a6cd0c042322efea271d6f3849d3de17c850739b0da2461f6c69ee8
DEBU[0002] Running /srv/k3s/data/data/688c8ca42a6cd0c042322efea271d6f3849d3de17c850739b0da2461f6c69ee8/bin/k3s-server [/usr/local/bin/k3s --debug server -v=3 --log=/srv/k3s/log/k3s.log --data-dir=/srv/k3s/data/ --alsologtostderr --node-taint CriticalAddonsOnly=true:NoExecute --tls-san=api.kube --datastore-endpoint=https://etcd1:2379,https://etcd2:2379 --datastore-cafile=/srv/k3s/ssl/etcd_client_ca_bundle.crt --datastore-certfile=/srv/k3s/ssl/master1_client.crt --datastore-keyfile=/srv/k3s/ssl/master1_client.key]
time="2020-10-15T12:10:49.439802664+03:00" level=info msg="Starting k3s v1.18.9+k3s1 (630bebf9)"
And then it just goes to etcd connection loop. From etcd side:
2020-10-15T12:11:04.470810+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:11:04.470+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47246","server-name":"etcd1","error":"remote error: tls: bad certificate"}
2020-10-15T12:11:13.182322+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:11:13.181+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47250","server-name":"etcd1","error":"remote error: tls: bad certificate"}
2020-10-15T12:11:28.817367+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:11:28.816+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47254","server-name":"etcd1","error":"remote error: tls: bad certificate"}
2020-10-15T12:11:59.561427+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:11:59.560+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47272","server-name":"etcd1","error":"remote error: tls: bad certificate"}
2020-10-15T12:12:46.740392+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:12:46.739+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47302","server-name":"etcd1","error":"remote error: tls: bad certificate"}
2020-10-15T12:14:03.222264+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:14:03.221+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47346","server-name":"etcd1","error":"remote error: tls: bad certificate"}
2020-10-15T12:16:00.722905+03:00 etcd1 etcd: {"level":"warn","ts":"2020-10-15T12:16:00.721+0300","caller":"embed/config_logging.go:279","msg":"rejected connection","remote-addr":"master1:47400","server-name":"etcd1","error":"remote error: tls: bad certificate"}
Can you share the actual output of the openssl s_client command? Does it work properly if you use rsa instead of ecdsa?
@brandond, it seems to me that the intermediate certificate (Client CA) causes the issue for k3s (the full chain isn't being recursively checked?). If I change the chain to Root CA > etcd1 & master1, then everything works fine with both RSA and ECC.
I also changed the etcd1 configuration so it only has Client CA as the client CA (not the full chain/bundle like previously); the etcd1 cert is issued by Client CA. Both wget and openssl still work fine when specifying the correct Root CA or the Root CA & Client CA bundle. Authentication also works fine when specifying the master1 cert (also issued by Client CA) and key.
Output is below, though I had to redact it slightly. As you can see, verification succeeds.
$ openssl s_client -showcerts \
> -connect etcd1:2379 \
> -CAfile root_ca.crt \
> -cert master1.crt \
> -key master1.key
CONNECTED(00000003)
depth=2 CN = Root CA
verify return:1
depth=1 CN = Client CA
verify return:1
depth=0 CN = etcd1
verify return:1
---
Certificate chain
0 s:/CN=etcd1
i:/CN=Client CA
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
1 s:/CN=Client CA
i:/CN=Root CA
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=etcd1
issuer=/CN=Client CA
---
Acceptable client certificate CA names
/CN=Client CA
Client Certificate Types: RSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA512:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA512:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2430 bytes and written 1208 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-ECDSA-AES256-GCM-SHA384
Server public key is 384 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-ECDSA-AES256-GCM-SHA384
Session-ID: ...
Session-ID-ctx:
Master-Key: ...
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
TLS session ticket:
...
Start Time: ...
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad Request
closed
If I don't specify the authentication cert & key, I can still connect and receive a certificate error from the server, as I should, but the chain verification still succeeds:
$ openssl s_client -showcerts \
> -connect etcd1:2379 \
> -CAfile root_ca.crt
CONNECTED(00000003)
depth=2 CN = Root CA
verify return:1
depth=1 CN = Client CA
verify return:1
depth=0 CN = etcd1
verify return:1
139800371771280:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42
139800371771280:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
---
Certificate chain
0 s:/CN=etcd1
i:/CN=Client CA
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
1 s:/CN=Client CA
i:/CN=Root CA
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=etcd1
issuer=/CN=Client CA
---
Acceptable client certificate CA names
/CN=Client CA
Client Certificate Types: RSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA512:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA512:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 1585 bytes and written 138 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-ECDSA-AES256-GCM-SHA384
Server public key is 384 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-ECDSA-AES256-GCM-SHA384
Session-ID:
Session-ID-ctx:
Master-Key: ...
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: ...
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
In that case you'll probably need to concatenate the root and intermediate (or root and client) CAs into a single file, and pass that as the --datastore-cafile.
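A minimal sketch of that, reusing the file names and flags from earlier in this thread (paths are assumptions):

# build a bundle containing both CAs, then point the datastore flags at it
cat root_ca.crt client_ca.crt > /srv/k3s/ssl/etcd_client_ca_bundle.crt

sudo /usr/local/bin/k3s server \
  --datastore-endpoint="https://etcd1:2379,https://etcd2:2379" \
  --datastore-cafile="/srv/k3s/ssl/etcd_client_ca_bundle.crt" \
  --datastore-certfile="/srv/k3s/ssl/master1_client.crt" \
  --datastore-keyfile="/srv/k3s/ssl/master1_client.key"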
@brandond I have also tried both the concatenated (bundle) and non-concatenated approaches; no luck with k3s. Both worked fine with OpenSSL and wget.
Also, as far as I know, the bundle (Root CA + Intermediate CA) only needs to be on the server side, since the full certificate chain is sent from server to client anyway. The client should only need the Root CA as the trust anchor of the chain in order to verify it.
Have you tried with etcdctl? That might be a more relevant test, since it would use the same Go crypto routines as k3s.
Same when using etcdctl from master1. I think I just figured out the issue. Sadly (or luckily), the issue is between the keyboard and chair. I completely ignored the fact that checking the CN is not mandatory when a SAN field is present, so I just assumed that the etcdctl crypto routines behave like "everyone else" and still check the CN as a last-resort measure. Obviously that's not the case, and I need to add DNS: etcd1 to the X509v3 Subject Alternative Name as well... The sad thing is that wget (and many others) completely ignore that part of the RFC to maintain backwards compatibility. Also, the scripts I used to generate the certificates weren't smart enough to put the domain into the subjectAltName as well.
So, issue solved for me. Thank you for the help, @brandond. Running etcdctl from master1 really helped me figure this out.
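For anyone hitting the same thing, here is a rough sketch of issuing the etcd server certificate with the DNS name in the SAN. This is a hypothetical reconstruction, not the exact commands used in this thread, and -addext needs OpenSSL 1.1.1 or later:

# key + CSR carrying the DNS SAN
openssl ecparam -name secp384r1 -genkey -noout -out etcd1.key
openssl req -new -key etcd1.key -subj "/CN=etcd1" \
  -addext "subjectAltName=DNS:etcd1" -out etcd1.csr

# sign with the intermediate (Client CA); the SAN has to be supplied again at
# signing time, because openssl x509 does not copy CSR extensions by default
# (the <( ) construct is bash process substitution)
openssl x509 -req -in etcd1.csr \
  -CA client_ca.crt -CAkey client_ca.key -CAcreateserial \
  -days 365 -sha384 \
  -extfile <(printf "subjectAltName=DNS:etcd1") \
  -out etcd1.crt

Afterwards, openssl x509 -noout -text -in etcd1.crt should show DNS:etcd1 under X509v3 Subject Alternative Name, which is what the Go-based etcdctl/k3s verification checks.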
Yeah, I think that the CN fallback was removed from the Go crypto routines in 1.15.
https://golang.org/doc/go1.15#commonname
It can be temporarily re-enabled by adding the value x509ignoreCN=0 to the GODEBUG environment variable.
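For example, assuming the k3s binary in question was built with Go 1.15 or later:

# temporarily restore the legacy CommonName fallback for the k3s process
GODEBUG=x509ignoreCN=0 k3s server ...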
Closing as per https://github.com/k3s-io/k3s/issues/2369#issuecomment-713555350
maybe do not set hostname
| gharchive/issue | 2020-10-08T17:42:16 | 2025-04-01T06:39:15.350179 | {
"authors": [
"BeardyC",
"brandond",
"daenney",
"gynter",
"oppshorer"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/2369",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1142430798 | insecure network interface
I have multiple network interfaces on the host. One of them is connected to an insecure network, so I have existing iptables rules to block INPUT traffic on that interface.
K3s seems to open NodePort access on that insecure interface by default. How can I reorder things and make sure the k3s iptables rules and my existing/manual iptables rules won't step on each other?
ens192 is the insecure interface. If I manually insert the reject rule at the top, clients on the insecure network can no longer access the NodePort:
iptables -I FORWARD -i ens192 -j REJECT
BTW, before I installed k3s on the host, there was only one reject rule in the FORWARD chain:
sudo iptables -S FORWARD
-P FORWARD ACCEPT
-A FORWARD -i ens192 -j REJECT --reject-with icmp-port-unreachable
After installing k3s, it somehow got moved to the 4th position,
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -i ens192 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -s 10.42.0.0/16 -j ACCEPT
-A FORWARD -d 10.42.0.0/16 -j ACCEPT
so the reject is no longer taking effect. The KUBE-FORWARD chain installed by k3s lets through the insecure traffic.
When you mention nodePort access, do you mean the port created by servicelb? By default it opens 443 and 80 for traefik, right?
I was talking about NodePorts (those high ports on the node), but the Ingress/Traefik ports (80/443) have the same issue. My pre-existing iptables rules that block inbound access to any ports on the node stopped working; those NodePorts and LB ports are all open on the insecure interface.
The nodePort is, by default, open for all interfaces.
There is one thing I don't understand. After deploying k3s, when you run iptables -I FORWARD -i ens192 -j REJECT, your rule will be on top, so it will be the first one to be evaluated. Doesn't that solve your problem?
Are you suggesting the issue I reported (k8s/flannel moves my custom iptables rule) only happens when deploying k3s? If I add custom rules afterwards, k3s will no longer change the iptables rule order during its lifecycle?
From my experience, yes. The network policy chain (KUBE-ROUTER-FORWARD) might move before it but the rest should stay below (KUBE-FORWARD included). Can you please give it a try?
Yeah, I had my k3s cluster running (idle) for a few days and don't see the rules moving. Since I don't know exactly which action triggers the move, it is hard to test.
The nodePort is, by default, open for all interfaces.
@manuelbuil Would it be possible to configure which interface NodePorts are exposed on?
Sorry for the late reply, I was on parental leave @laszlocph . Yes! https://kubernetes.io/docs/concepts/services-networking/service/#service-nodeport-custom-listen-address
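On the k3s side that could look roughly like the following; the CIDR is an assumption, substitute the subnet of your trusted interface:

# make kube-proxy only serve NodePorts on addresses in the trusted subnet
k3s server --kube-proxy-arg=nodeport-addresses=10.0.0.0/24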
Hope you had a good one! And thanks @manuelbuil
| gharchive/issue | 2022-02-18T06:02:31 | 2025-04-01T06:39:15.359287 | {
"authors": [
"gfrankliu",
"laszlocph",
"manuelbuil"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/5142",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
150609815 | Subject appear twice
Expected behaviour
In Settings / Global settings / Display, I've enabled "Correspondent above subject".
Actual behaviour
The correspondent disappears, and the subject appears twice.
Steps to reproduce
Environment
K-9 Mail version:
5.010
Android version:
5.1
Account type (IMAP, POP3, WebDAV/Exchange):
Gmail, IMAP
UP: sorry for the last issue being in French, I wrote it too quickly and was thinking of French devs...
Thanks for understanding, nothing against you.
Here's a screenshot with "Correspondent above subject" enabled:
And here's one with the setting disabled:
As you can see, with the setting disabled, the subject is just repeated at the start of the preview, where I'd now expect to see the correspondent appear.
This only happens when switching the setting and returning to the message list. The list will be displayed properly if the message list screen is closed and reopened.
This should be fixed in recent beta versions (5.7xx).
| gharchive/issue | 2016-04-24T00:20:38 | 2025-04-01T06:39:15.380704 | {
"authors": [
"cketti",
"frederiiiic",
"madduck"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/1326",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
246157154 | Play store version stuck at 5.206
Please search to check for an existing issue (including closed issues, for which the fix may not have yet been released) before opening a new issue: https://github.com/k9mail/k-9/issues?q=is%3Aissue
Expected behavior
The Google Play store version should be the latest - 5.207.
Actual behavior
The Google Play store version appears to be stuck at 5.206 for some users.
Steps to reproduce
Open https://play.google.com/store/apps/details?id=com.fsck.k9
Look in additional info section
Version is listed as 5.206. This is the same version installed on my Android devices and I am unable to force an upgrade to 5.207 through the Play Store.
Environment
K-9 Mail version: 5.206
Android version: 7.0
A quick IRC chat with Khaytsus indicates that they are seeing 5.207. However, I am not the only person seeing the version stuck at 5.206. See https://android.stackexchange.com/questions/174070/google-play-store-dont-show-latest-update-for-k-9-mail.
Here is a copy-and-paste from https://play.google.com/store/apps/details?id=com.fsck.k9 on my Mac:
ADDITIONAL INFORMATION
Updated
22 March 2017
Installs
5,000,000 - 10,000,000
Current Version
5.206
Thanks. 5.207 was only released for alpha testers. I pushed that version to the release channel just now.
Fix confirmed. Google Play just notified me of the update and I've upgraded to 5.207 through the Play Store.
Thanks!
| gharchive/issue | 2017-07-27T20:19:39 | 2025-04-01T06:39:15.386861 | {
"authors": [
"cketti",
"n-6"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/2646",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
285241432 | Feature: add option to couple images in inbox to email identities instead of contact photos
I would love to have a clear indication in my inbox of which of my 10+ identities an email was sent to. I have no use for these big images of each contact. I would much rather display a picture or icon of the receiving identity, rather than the sending contact.
This could be an optional setting.
So instead of those linked to the name of the contact, I would like them linked to the name of an identity.
(Btw, I am sure there are loads of people who do like it as is; I am merely suggesting an optional alternative to it.)
Environment
K-9 Mail version:
5.400
Android version:
7.0
Account type (IMAP, POP3, WebDAV/Exchange):
IMAP
pulled that image off the interwebs btw, was too lazy to screencap my own inbox
| gharchive/issue | 2017-12-31T05:45:27 | 2025-04-01T06:39:15.389632 | {
"authors": [
"Meteor0id"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/3014",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
646763678 | A strange German translation inconsistency
Hi there,
I'm currently using K-9 Mail v5.600 (installed from F-Droid) on three devices: a bq Aquaris M4.5 smartphone (Android 5.1), a Cubot Quest smartphone (Android 9) and a Samsung Galaxy Tab A 2016 tablet (Android 8.1). The language is set to German on each device. The inbox screen shows a translation inconsistency: The case of the first letter of the German word "in" depends on the device (or the Android version?)
See the screenshots:
**Cubot smartphone and Samsung tablet:**
bq smartphone:
Strange, isn't it? The correct spelling is shown on the latter screenshot with the lower-case "i". What's going on here?
The cycling cat
We were appending the output of DateUtils.getRelativeTimeSpanString() to "Next poll", or in your case "Nächster Abruf". Looks like that output is different depending on the Android version.
But this text has been removed and is no longer part of the app. Current beta versions are not affected by this. Closing this issue.
| gharchive/issue | 2020-06-27T20:57:53 | 2025-04-01T06:39:15.393000 | {
"authors": [
"cketti",
"cyclingcat"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/4858",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
534651598 | Docker pull fails for images during appsody init for newly built collection
Describe the bug
A clear and concise description of what the bug is.
After building a collection and pushing the images to an internal registry, I added the new collection hub to appsody, performed an appsody init, and received the error Error response from daemon: manifest for default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3 not found: manifest unknown: manifest unknown
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Click on '....'
Scroll down to '....'
See error
Expected behavior
A clear and concise description of what you expected to happen.
Expected that the pull would work as it has in the past for similar collection builds
Actual behaviour
What is the actual behaviour.
The docker pull for the image failed....
Checking stack requirements...
Skipping Appsody - No requirements set.
Skipping Buildah - No requirements set.
Skipping Docker - No requirements set.
Running appsody init...
Downloading java-spring-boot2 template project from http://kabanero-collections.apps.obliges.os.fyre.ibm.com/incubator.java-spring-boot2.v0.3.17.templates.default.tar.gz
Download complete. Extracting files from /Users/barbarafjones/kabcoltest/400/newsb2/java-spring-boot2.tar.gz
Setting up the development environment
Your Appsody project name has been set to newsb2
Pulling docker image default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3
Running command: docker pull default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3
Error response from daemon: manifest for default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3 not found: manifest unknown: manifest unknown
[Warning] Docker image pull failed: exit status 1
[Warning] The stack init script failed: Could not find the image either in docker hub or locally: default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3
[Warning] Your local IDE may not build properly, but the Appsody container should still work.
[Warning] To try again, resolve the issue then run `appsody init` with no arguments.
Running the docker pull command standalone also failed, but if the tag was modified from 0.3 to 0.3.17 (e.g. docker pull default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3.17) that works.
Noted that in the internal registry only one tag was present: 0.3.17.
Environment Details (please complete the following information):
OS: [e.g. iOS]
Browser [e.g. chrome, safari]
Version [e.g. 22]
If applicable please specify:
CLI version:
Collection you are using:
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.
I believe this was introduced with the following change: https://github.com/kabanero-io/collections/commit/a8b50ebe5e024e58ec2e83fd6fd592c6975f503b. With this change (by default), the stack images are published with only the three-part version tag. However, in https://github.com/kabanero-io/collections/blob/release-0.3/ci/package.sh#L144, the .appsody-config.yaml is only updated with either a one- or two-part version.
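A hypothetical stop-gap until the scripts are fixed would be to publish the missing two-part tag yourself (registry and stack names taken from the report above):

docker pull default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3.17
docker tag  default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3.17 \
            default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3
docker push default-route-openshift-image-registry.apps.obliges.os.fyre.ibm.com/kabanero/java-spring-boot2:0.3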
@jarek This seems to be caused by the commit you mentioned. This was a workaround for releasing a fixpack to Kabanero 0.3 when Kabanero 0.4 had already been released. As part of this change we hadn't taken into consideration the case where a new collection was being created.
As a workaround for ICP4A, or for someone just working on Kabanero 0.3, it would be easiest to set the LATEST_RELEASE environment variable to true on the command line before calling the build.sh script.
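Roughly, with the script path assumed from the repo layout:

export LATEST_RELEASE=true
./ci/build.sh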
We are working on updating the build scripts to detect when specific changes have been made to stacks and to only release / push the stacks that have changed. This will be available in upcoming releases.
Adding stop-ship per discussion with @jgawor and @bconey
PR merged. New release will be cut if required.
Fix went into https://github.com/kabanero-io/collections/releases/tag/0.3.5
Verified.
| gharchive/issue | 2019-12-09T02:21:00 | 2025-04-01T06:39:15.429732 | {
"authors": [
"bconey",
"gecock",
"groeges",
"jgawor",
"marikaj123"
],
"repo": "kabanero-io/collections",
"url": "https://github.com/kabanero-io/collections/issues/213",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |