497063146
Added logo change for blog site. Currently on the blog site (https://kubernetes.io/blog/), if you scroll down the page, the logo (top left corner) disappears because the background changes to white. On the main page this is fixed by switching the logo to the blue version on scroll down, but it wasn't working on the blog site. Added the same behaviour. /retest /approve /lgtm
gharchive/pull-request
2019-09-23T12:19:56
2025-04-01T06:39:20.517696
{ "authors": [ "DavidZisky", "steveperry-53" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/16514", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1048727079
[ko] Update outdated in dev-1.21-ko.8 (M10-M12) Ref #29253 Task: M10 - M12 M10. content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md | 3(+XS) 3(-) M11. content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md | 3(+XS) 3(-) M12. content/en/docs/concepts/services-networking/ingress-controllers.md | 1(+XS) 0(-) /assign @pjhwa Thank you for the review! I updated the PR to reflect the review comments (with slight modifications). /lgtm /approve
gharchive/pull-request
2021-11-09T15:16:13
2025-04-01T06:39:20.520169
{ "authors": [ "ClaudiaJKang", "pjhwa", "seokho-son" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/30417", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1711259801
removed hyperlink in restrict a container Removed an invalid link in "Restrict a container", under "Securing a Pod", in the "Upgrade path to GA" note. There is no valid link that it can be replaced with. This PR resolves issue #40865 /lgtm /lgtm Please mind our policy on trivial edits @AmarNathChary This is OK to accept; a review of the whole page would be even more helpful @sftim I understand the importance of the policy and I will make sure to comply. Thanks for the approval.
gharchive/pull-request
2023-05-16T05:24:35
2025-04-01T06:39:20.523194
{ "authors": [ "AmarNathChary", "tengqm" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/41163", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2207322201
Add kirkonru to sig-docs-ru Since we have @kirkonru in the organisation, we can add him to the reviewers and owners list for the Russian localisation :tada: Related PR: https://github.com/kubernetes/org/pull/4846 Waiting for confirmation from Russian localization approvers Currently, our approvers' list has me & @Arhell only (that's why we need Kirill so much). Ihor, please assist :pray: As an ru approver, @Arhell confirmed /approve
gharchive/pull-request
2024-03-26T05:53:47
2025-04-01T06:39:20.525288
{ "authors": [ "reylejano", "sftim", "shurup" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/45672", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
308802441
Fix NodePodSelector annotation name Resubmitted to master branch. I can't tell whether I'm nitpicking or catching an issue here. Should the annotation key be the same wherever you use it? If so, should it be scheduler.alpha.kubernetes.io/node-selector? Or scheduler.alpha.kubernetes.io/nodeSelector? /assign @tengqm 👋 looks like this needs a rebase, and I didn't see any answer to @Bradamant3's question about the content. /assign Closing because the master branch already has this fix.
gharchive/pull-request
2018-03-27T02:45:08
2025-04-01T06:39:20.527396
{ "authors": [ "Bradamant3", "heckj", "tengqm" ], "repo": "kubernetes/website", "url": "https://github.com/kubernetes/website/pull/7865", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1365834838
panic: runtime error: index out of range [3] with length 3 Using the version installed today using brew on a MAC M1 (seems that brew version is not the latest) kubescape scan . [info] ARMO security scanner starting [warning] current version 'v2.0.166' is not updated to the latest release: 'v2.0.170' panic: runtime error: index out of range [3] with length 3 goroutine 1 [running]: github.com/armosec/go-git-url/gitlabparser/v1.(*GitLabURL).Parse(0x140003fc1c0, {0x1400005bc20?, 0x1046a5ce0?}) github.com/armosec/go-git-url@v0.0.15/gitlabparser/v1/parser.go:89 +0x33c github.com/armosec/go-git-url/gitlabparser/v1.NewGitLabParserWithURL({0x1400005bc20, 0x3e}) github.com/armosec/go-git-url@v0.0.15/gitlabparser/v1/parser.go:28 +0x98 github.com/armosec/go-git-url.NewGitURL({0x1400005bc20, 0x3e}) github.com/armosec/go-git-url@v0.0.15/init.go:28 +0x1b0 github.com/armosec/kubescape/v2/core/cautils.metadataGitLocal({0x16d837657?, 0x3?}) github.com/armosec/kubescape/v2/core/cautils/scaninfo.go:405 +0xe8 github.com/armosec/kubescape/v2/core/cautils.setContextMetadata(0x14000c3d3f0, {0x16d837657, 0x3}) github.com/armosec/kubescape/v2/core/cautils/scaninfo.go:358 +0x364 github.com/armosec/kubescape/v2/core/cautils.scanInfoToScanMetadata(0x140002fc4e0) github.com/armosec/kubescape/v2/core/cautils/scaninfo.go:289 +0x328 github.com/armosec/kubescape/v2/core/cautils.NewOPASessionObj({0x0, 0x0, 0x0}, 0x0, 0x140002fc4e0) github.com/armosec/kubescape/v2/core/cautils/datastructures.go:43 +0x5c github.com/armosec/kubescape/v2/core/pkg/policyhandler.(*PolicyHandler).CollectResources(0x14000b9b808, {0x14000611500, 0x5, 0x8}, 0x140002fc4e0) github.com/armosec/kubescape/v2/core/pkg/policyhandler/handlenotification.go:26 +0x40 github.com/armosec/kubescape/v2/core/core.(*Kubescape).Scan(0x103f05b60?, 0x140002fc4e0) github.com/armosec/kubescape/v2/core/core/scan.go:142 +0x618 github.com/armosec/kubescape/v2/cmd/scan.getFrameworkCmd.func2(0x1045bd120?, {0x14000427680, 0x2, 0x140006b1560?}) 
github.com/armosec/kubescape/v2/cmd/scan/framework.go:102 +0x3ac github.com/armosec/kubescape/v2/cmd/scan.GetScanCommand.func1(0x140000ff680?, {0x140006b1560, 0x1, 0x1?}) github.com/armosec/kubescape/v2/cmd/scan/scan.go:45 +0x180 github.com/spf13/cobra.(*Command).ValidateArgs(...) github.com/spf13/cobra@v1.5.0/command.go:1018 github.com/spf13/cobra.(*Command).execute(0x140000ff680?, {0x140006b1540?, 0x1?, 0x1?}) github.com/spf13/cobra@v1.5.0/command.go:841 +0x3a4 github.com/spf13/cobra.(*Command).ExecuteC(0x140000ff400) github.com/spf13/cobra@v1.5.0/command.go:990 +0x354 github.com/spf13/cobra.(*Command).Execute(...) github.com/spf13/cobra@v1.5.0/command.go:918 github.com/armosec/kubescape/v2/cmd.Execute() github.com/armosec/kubescape/v2/cmd/root.go:84 +0x34 main.main() github.com/armosec/kubescape/v2/main.go:9 +0x1c This also happens with the latest v2.0.170 release as well. origin git@gitlab.com:foobar/machine-learning/cluster-manifests.git (fetch) origin git@gitlab.com:foobar/machine-learning/cluster-manifests.git (push) This bug occurs on GitLab with versions v2.0.165-172. Version v2.0.164 seems to work, so until patch is ready will resort to use that. @dazzag24 Have you done here any progress? @dazzag24 Have you done here any progress? I think Aman123lug volunteered to work on this. @Aman123lug Any updates? @dwertent hello. I've looked at it briefly and the offending line is: https://github.com/kubescape/go-git-url/blame/master/gitlabparser/v1/parser.go#L86 Wrapping it with a check on the "-" particular path part is enough to avoid a panic: if splittedRepo[index] == "-" { index += 1 // skip "-" symbol in URL } However, I am not really sure about the consistency of the eventual result. This parser obviously is not designed in the first place to support general URLs. @Aman123lug Any updates? 
I think @dazzag24 is working on it. Also I've detected that there are 2 versions of this go-git-url repo being used: one under the kubescape owner, one under the armosec owner: while the armosec version is still being pulled as a direct dependency, the other version is pulled indirectly... I have a small patch ready if you guys are interested. This assumes a few things. I'd need a piece of advice to be sure I am heading in the right direction. I wouldn't want to interfere with some other people's work. Let me know if you want a PR (actually 2 since 2 repos are concerned). kubescape/go-git-url: added the simple check above. In the case of such "scp-like" git URLs, it no longer panics if the input is not exactly as expected. However, the OP-provided remote `git@gitlab.com:foobar/machine-learning/cluster-manifests.git` won't really work as expected: in this example, "foobar" is considered the owner, "machine-learning" the repo and "cluster..." the branch. With a correct origin like "git@gitlab.com:gitlab-tests/sample-project.git", the owner & repo are inferred correctly. I am not sure that this is acceptable behavior. The parsed "pseudo-URL" results in: === RUN TestFred fred_test.go:26: remote: git@gitlab.com:foobar/gitlab-tests/sample-project.git (*v1.GitLabURL)(0xc00030a380)({ host: (string) (len=10) "gitlab.com", owner: (string) (len=6) "foobar", repo: (string) (len=12) "gitlab-tests", project: (string) "", branch: (string) (len=18) "sample-project.git", path: (string) "", token: (string) "" }) --- PASS: TestFred (0.00s) PASS kubescape/kubescape: replaced deps on github.com/armosec/go-git-url by github.com/kubescape/go-git-url (both repos are the same right now). I assume this is the right repo to look into. Will need to upgrade the dep on go-git-url. @dwertent here is a proposal for a fix. @dazzag24, @Aman123lug feel free to discard this if you've already started something better.
For sure, the fix is not perfect: just assuring that wrong/unexpected input doesn't panic the CLI.
gharchive/issue
2022-09-08T08:59:24
2025-04-01T06:39:20.537553
{ "authors": [ "Aman123lug", "dazzag24", "dwertent", "fredbi", "operon-io" ], "repo": "kubescape/kubescape", "url": "https://github.com/kubescape/kubescape/issues/789", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
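The guarded skip of GitLab's "-" URL separator described in the thread above can be illustrated with a small, self-contained parser for scp-like remotes. This is a hypothetical sketch: the function and variable names are illustrative and do not match the actual go-git-url code.

```go
package main

import (
	"fmt"
	"strings"
)

// parseScpLike splits an scp-like remote such as
// "git@gitlab.com:owner/group/repo.git" into host and path segments,
// skipping GitLab's "-" URL separator only when it is actually present.
func parseScpLike(remote string) (host string, segments []string) {
	rest := strings.TrimPrefix(remote, "git@")
	host, path, _ := strings.Cut(rest, ":")
	path = strings.TrimSuffix(path, ".git")
	parts := strings.Split(path, "/")

	index := 0
	// Guard the index before dereferencing: the absence of such a bounds
	// check is what produced the "index out of range" panic reported above.
	if index < len(parts) && parts[index] == "-" {
		index++ // skip "-" symbol in URL
	}
	return host, parts[index:]
}

func main() {
	host, segs := parseScpLike("git@gitlab.com:foobar/machine-learning/cluster-manifests.git")
	fmt.Println(host, segs)
}
```

As the thread notes, bounds safety alone does not make multi-group GitLab paths parse correctly; it only ensures unexpected input no longer panics.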
1053739999
Create separate Form editor for common metadata properties Since every Resource Kind has a metadata section with the same properties, we can put these in a separate Form in the Form Editor - this form would be in a collapsible "Resource Metadata" section at the top of the panel - with the unique resource properties shown in a separate Form below as discussed - let's move this into a separate tab instead of a section on top of the existing form editor https://user-images.githubusercontent.com/20525304/151196553-2818d44a-eb41-42d3-aa3b-ae53c765a838.mp4
gharchive/issue
2021-11-15T14:38:26
2025-04-01T06:39:20.539829
{ "authors": [ "mortada-codes", "olensmar" ], "repo": "kubeshop/monokle", "url": "https://github.com/kubeshop/monokle/issues/666", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1284979569
Docs: add instructions about chartmuseum Signed-off-by: Charlie Chiang charlie_c_0129@outlook.com Updated Build your Own Registry with ChartMuseum and addon push command Use addon init command in Build Your Own Addon Corresponding feature PR: https://github.com/kubevela/kubevela/pull/4261 does this doc for 1.5 only? Yes, 1.5 or later
gharchive/pull-request
2022-06-26T17:11:08
2025-04-01T06:39:20.581860
{ "authors": [ "charlie0129" ], "repo": "kubevela/kubevela.io", "url": "https://github.com/kubevela/kubevela.io/pull/792", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1639547861
Bug 2158550: Display MigrationPolicy page after renaming correctly 📝 Description Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2158550 Display the MigrationPolicy details page or MigrationPolicies list correctly with no error after changing the name of some MigrationPolicy resource (depending on the actual location while renaming the policy). More details: The core of the problem was here, where the original name of the policy was searched for in the WHOLE URL and not at its end, where it belongs, so it was only logical that the first occurrence of "a" was found earlier in the URL string than expected and replaced by the new name. As a result, "migraations" appeared in the URL, which led to the error, as no such page was found. 🎥 Screenshots Before: Error after renaming a MigrationPolicy and incorrect URL, especially if the MigrationPolicy had a very simple name like "a": URL: /k8s/cluster/migraations.kubevirt.io~v1alpha1~MigrationPolicy/a After: No error after renaming a MigrationPolicy (to 'aa'), page rendered correctly: URL: /k8s/cluster/migrations.kubevirt.io~v1alpha1~MigrationPolicy/aa /lgtm /retest /retest /retest
gharchive/pull-request
2023-03-24T14:55:35
2025-04-01T06:39:20.587225
{ "authors": [ "hstastna", "metalice", "pcbailey" ], "repo": "kubevirt-ui/kubevirt-plugin", "url": "https://github.com/kubevirt-ui/kubevirt-plugin/pull/1189", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1376423565
Make overview page namespace aware 📝 Description This PR adds the namespace bar to the overview page and modifies its extension to make it namespace aware. 🎥 Screenshot @avivtur @hstastna @metalice @upalatucci @vojtechszocs PTAL /lgtm
gharchive/pull-request
2022-09-16T20:11:15
2025-04-01T06:39:20.589099
{ "authors": [ "pcbailey", "upalatucci" ], "repo": "kubevirt-ui/kubevirt-plugin", "url": "https://github.com/kubevirt-ui/kubevirt-plugin/pull/880", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2118274696
node-labeller: Remove obsolete functionalities What this PR does Before this PR: node labeller supported deprecated annotations (since the v0.40 release). After this PR: node labeller will not support these annotations, as kubevirt does not support upgrades from this release anymore. Fixes # Why we need it and why it was done in this way The following tradeoffs were made: The following alternatives were considered: Links to places where the discussion took place: Special notes for your reviewer Checklist This checklist is not enforced, but it's a reminder of items that could be relevant to every PR. Approvers are expected to review this list. [x] Design: A design document was considered and is present (link) or not required [x] PR: The PR description is expressive enough and will help future contributors [x] Code: Write code that humans can understand and Keep it simple [x] Refactor: You have left the code cleaner than you found it (Boy Scout Rule) [x] Upgrade: Impact of this change on upgrade flows was considered and addressed if required [x] Testing: New code requires new unit tests. New features and bug fixes require at least one e2e test [x] Documentation: A user-guide update was considered and is present (link) or not required. You want a user-guide update if it's a user-facing feature / API change. [x] Community: Announcement to kubevirt-dev was considered Release note node-labeller: Remove obsolete functionalities It's strange that the unit test lane failed, as I didn't add/remove any unit tests in this PR. /test pull-kubevirt-unit-test-arm64 Hey, I have a question ^^ Thanks for asking! Please see answer /test pull-kubevirt-e2e-arm64 /hold cancel
gharchive/pull-request
2024-02-05T11:16:43
2025-04-01T06:39:20.618861
{ "authors": [ "RamLavi", "enp0s3" ], "repo": "kubevirt/kubevirt", "url": "https://github.com/kubevirt/kubevirt/pull/11146", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1276081262
test: move execute functions from utils Move execute command on pod functions from utils to a new package, and also move CopyFromPod from utils to imageupload to make utils shorter. Signed-off-by: Ben Oukhanov boukhanov@redhat.com Release note: NONE /sig code-quality /retest-required It's ready for review, and the previous discussion was resolved. @dankenigsberg Please let me know if we can remove the hold label. Hey @codingben! Good job! I like the direction this PR is going in, but still have a few concerns. I don't really understand why we need 2 (or 3 in the current PR implementation) different functions to execute a command on a Pod. Let me explain why: First of all, the only difference between ExecuteCommandOnPod() and ExecuteCommandOnPodV2() is that the first one returns stdout only, while V2 returns stdout and stderr. Even if we set aside the horrible naming for these functions, I don't really see what the motivation is for having the two. Performance-wise there is no difference at all, since "V1" calls V2 and simply does not return the stderr part (or more accurately returns it as an error). Secondly, ExecCommandOnPod() was a private helper function, now it is public. I can't see why we need it to be public, or even need it at all. What I would do is keep one function, ExecuteCommandOnPod(). This function should have V2's signature, or IOW, should return stdout, stderr and an error. This is how it would be used: // If we need stderr stdout, stderr, err := ExecuteCommandOnPod(virtCli, pod, containerName, command) // If we don't need stderr stdout, _, err := ExecuteCommandOnPod(virtCli, pod, containerName, command) Please note again that under the hood nothing is really changed, since stderr is fetched either way (whether you use V1 or V2). Implementation-wise, this new function needs to have the code of the current ExecCommandOnPod() inside it. So eventually we end up with only one unified function.
Another small note: please squash the last commit with spaces only. It would make life difficult for rebases / backports, etc. Another small note: please squash the last commit with spaces only. It would make life difficult for rebases / backports, etc. Sorry, do you mean to not have that style commit and make it part of the previous commit? Yes, exactly. By the way, I'd choose the squash commits option when merging this PR. Is there any reason why kubevirt-bot isn't doing it? It's a popular approach in other open source projects, for example in Angular. Not sure who made the decision for Kubevirt, but tbh I don't like squash commits :) I think it's valuable to be able to view commit history as it originally was. For many PRs, if all of their commits were squashed, the changes would be very difficult to grasp when looking through history. Secondly, ExecCommandOnPod() was a private helper function, now it is public. I can't see why we need it to be public, or even need it at all. It's public because it's used here, and it passes custom options to ExecuteCommandOnPodWithOptions. In your example, there's no options parameter. WDYT? Secondly, ExecCommandOnPod() was a private helper function, now it is public. I can't see why we need it to be public, or even need it at all. It's public because it's used here, and it passes custom options to ExecuteCommandOnPodWithOptions. In your example, there's no options parameter. WDYT? We can have ExecuteCommandOnPod and ExecuteCommandOnPodWithOptions. ExecuteCommandOnPod will be merged into ExecuteCommandOnPodV2. Aha! Got you :) Sounds good to me, I also like your naming. Also, if that's the case, I guess ExecuteCommandOnPod would internally call ExecuteCommandOnPodWithOptions, providing default options. /test pull-kubevirt-e2e-k8s-1.22-sig-compute /test pull-kubevirt-e2e-k8s-1.22-sig-compute @codingben why is it DRAFT? Are you still experimenting?
If the design is final and you expect review, please change from draft to final with the use of "Ready for Review". It looks final to me. It looks good. I like it now, after you implemented @iholder-redhat's suggestions. @brybacki Hi, yes, it's the final design. It's on Draft to not trigger all CI tests. I tried to run pull-kubevirt-e2e-k8s-1.22-sig-compute twice here and it's failing - I tried to run locally and it's failing on this error: Delete "https://127.0.0.1:49178/api/v1/namespaces/kubevirt-test-default1/serviceaccounts/kubevirt-subresource-test-sa": dial tcp 127.0.0.1:49178: connect: connection refused I think I didn't set something up properly. I'd like to try to execute another test to make sure some local tests pass before we execute all CI tests. I'm going to run make cluster-up :) I'll remove Draft once I verify the tests locally. @codingben I think you need to rebase, then change the PR to ready for review, so all the tests run and we can finalize the review process. I'll open another PR to just move the execute functions from utils without refactoring them.
gharchive/pull-request
2022-06-19T14:38:43
2025-04-01T06:39:20.632315
{ "authors": [ "brybacki", "codingben", "iholder-redhat" ], "repo": "kubevirt/kubevirt", "url": "https://github.com/kubevirt/kubevirt/pull/7944", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
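The unified signature the review converged on — one ExecuteCommandOnPod returning stdout, stderr and an error, delegating to an options variant with defaults — could look roughly like this. The types and bodies are simplified stand-ins (a real implementation would stream the Kubernetes exec subresource), so treat this as a structural sketch only.

```go
package main

import "fmt"

// Pod is a minimal stand-in for the real Kubernetes Pod type.
type Pod struct{ Name string }

// execOptions is a hypothetical options struct for the low-level variant.
type execOptions struct {
	Container string
	Command   []string
}

// executeCommandOnPodWithOptions is the single low-level helper;
// every other entry point delegates to it.
func executeCommandOnPodWithOptions(pod *Pod, opts execOptions) (stdout, stderr string, err error) {
	// A real implementation would call the exec API here; this stub just
	// echoes the command so the wiring can be demonstrated.
	return fmt.Sprintf("ran %v in %s/%s", opts.Command, pod.Name, opts.Container), "", nil
}

// ExecuteCommandOnPod provides default options, mirroring the review
// suggestion that the convenience wrapper call the options variant.
func ExecuteCommandOnPod(pod *Pod, container string, command []string) (string, string, error) {
	return executeCommandOnPodWithOptions(pod, execOptions{Container: container, Command: command})
}

func main() {
	// Callers that don't care about stderr simply discard it.
	stdout, _, err := ExecuteCommandOnPod(&Pod{Name: "virt-launcher"}, "compute", []string{"ls"})
	fmt.Println(stdout, err)
}
```

The design point is that nothing is duplicated: stderr is always collected by the one low-level function, and callers choose whether to look at it.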
1673524672
api: Move the core API storage version to v1 and deprecate v1alpha3 What this PR does / why we need it: This change moves the storage version for all core API CRDs to v1. This does not impact existing objects that will continue to be stored using their original v1alpha3 version while being served as v1alpha3 or v1, as was the case previously. Work will be required in the future to ensure all stored v1alpha3 objects are read and updated to v1 but for the time being this isn't required as part of this PR. For more context please review the following k8s documentation: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning Additionally the following KubeVirt dev ML thread covers this topic: https://groups.google.com/g/kubevirt-dev/c/bSayedthHmY Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes # Special notes for your reviewer: Interested in this PR? Then you will love https://github.com/kubevirt/kubevirt/pull/9575 ! Release note: * The `kubevirt.io/v1` `apiVersion` is now the default storage version for newly created objects * The `kubevirt.io/v1alpha3` `apiVersion` is now deprecated and will be removed in a future release /test pull-kubevirt-apidocs /test pull-kubevirt-generate /test pull-kubevirt-e2e-k8s-1.26-sig-compute /retest-required /retest-required /retest /retest /retest /retest-required /hold Need to create the runbook for the alert first @sradco Moved https://github.com/kubevirt/kubevirt/pull/9724/commits/91a2b7a002c957995036e5543f2b92a674324e7b to separate PR https://github.com/kubevirt/kubevirt/pull/9724 /unhold
gharchive/pull-request
2023-04-18T17:04:19
2025-04-01T06:39:20.640428
{ "authors": [ "0xFelix", "lyarwood" ], "repo": "kubevirt/kubevirt", "url": "https://github.com/kubevirt/kubevirt/pull/9628", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1695576810
Removes dependency on go-ps What this PR does / why we need it: What it does: $subject Why we need: 1) the library hasn't been updated in a while, 2) KubeVirt needs only the UNIX part of the code from the original repo Which issue(s) this PR fixes: Fixes #9671 Special notes for your reviewer: Release note: NONE /cc @alicefr /ok-to-test @dharmit it would be nice to have a couple of unit tests checking the new functions @dharmit it would be nice to have a couple of unit tests checking the new functions Sure, I'll add them and ping back. Please, if you copied the functions from the original library, write a comment with a reference to it Please, if you copied the functions from the original library, write a comment with a reference to it I'd like to take a step back and confirm whether it's OK to copy from the original library. Or do you recommend I implement it afresh? I don't mind doing the latter (and would even prefer going that route, as that would help me learn). The original library is under the MIT license. Under the license it is possible to take and modify the code. The only thing is that we need to include the copyright notice. Hence, you could put a comment before the code adding the link to the library and a note about the license Please, if you copied the functions from the original library, write a comment with a reference to it I'd like to take a step back and confirm whether it's OK to copy from the original library. Or do you recommend I implement it afresh? I don't mind doing the latter (and would even prefer going that route, as that would help me learn). The original library is under the MIT license.
Under the license it is possible to take and modify the code. The only thing is that we need to include the copyright notice. Hence, you could put a comment before the code adding the link to the library and a note about the license What this PR does / why we need it: $subject The subject does not state why we need this. Please take a few seconds to write a sentence about what problem this PR is solving. Thank you What this PR does / why we need it: $subject The subject does not state why we need this. Please take a few seconds to write a sentence about what problem this PR is solving. Thank you Thanks @jean-edouard. I've updated it. What this PR does / why we need it: What it does: $subject Why we need: the library hasn't been updated in a while Does it need updating? Can't we submit pull requests to them for what needs to be updated? We can, I think. At least, I can't think of why we can't. :) KubeVirt needs only the UNIX part of the code from the original repo It is normal not to use every aspect of the things we import; we could probably make similar statements about virtually every other library we use! 👍🏾 I take it back. Importing a bunch of code into KubeVirt increases the maintenance burden/cost, which I guess is fine if there's a good reason for it, but I don't see it here... @alicefr can you PTAL? @jean-edouard This was my suggestion, there are a couple of Go libraries that implement this.
We could fix it in their repo, but since they are just a couple of functions, I thought it would be better to get rid of the dependency. At least, it was my thought about it. Please, let me know what you think No fixes are needed so far. It was a suggestion for refactoring. If we want to keep it, then we can open fixes there I'm closing this since it looks like we have an agreement to open PRs on the original repo, if need be. Thanks for your help @alicefr @0xFelix, and for the discussion @jean-edouard. :) /close
gharchive/pull-request
2023-05-04T08:59:54
2025-04-01T06:39:20.654392
{ "authors": [ "alicefr", "dharmit", "jean-edouard" ], "repo": "kubevirt/kubevirt", "url": "https://github.com/kubevirt/kubevirt/pull/9696", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
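For context on how little of go-ps was actually needed: on Linux, enumerating processes is essentially a scan of the numeric entries under /proc. This is a minimal sketch under the assumption of a Linux procfs; it is not the code that was proposed in the closed PR.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// listPids returns the PIDs of all running processes by scanning /proc
// for directories whose names are purely numeric (Linux only).
func listPids() ([]int, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return nil, err
	}
	var pids []int
	for _, e := range entries {
		if !e.IsDir() {
			continue // skip regular files such as /proc/cpuinfo
		}
		// Only numeric directory names correspond to processes.
		if pid, err := strconv.Atoi(e.Name()); err == nil {
			pids = append(pids, pid)
		}
	}
	return pids, nil
}

func main() {
	pids, err := listPids()
	if err != nil {
		fmt.Println("cannot read /proc:", err)
		return
	}
	fmt.Println("found", len(pids), "processes")
}
```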
1652158209
Allow running vm-console-proxy on Kubernetes Is this a BUG REPORT or FEATURE REQUEST?: Uncomment only one, leave it on its own line: /kind bug /kind enhancement What happened: With the currently supplied configuration, vm-console-proxy is only able to run on OKD / OpenShift. What you expected to happen: I expected vm-console-proxy to be available on upstream Kubernetes too, e.g. with at least documentation on how to run it without OpenShift. How to reproduce it: Run make deploy. /remove-lifecycle stale
gharchive/issue
2023-04-03T14:12:19
2025-04-01T06:39:20.665159
{ "authors": [ "0xFelix", "akrejcir" ], "repo": "kubevirt/vm-console-proxy", "url": "https://github.com/kubevirt/vm-console-proxy/issues/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1845452956
feat: policy optimizer This PR introduces a new CLI tool called policy-optimizer. This binary will download the policies and optimize them. The goal is to implement what is described inside of this RFC. Currently the code doesn't do any download/optimization. It just leverages the Lease primitive declared by Kubernetes to ensure only one process can have write access to the directory where the optimized policies are going to be written. The actual code simulates the download & optimize work with a simple sleep. The change is pretty invasive as you can see, so I don't want to merge it into the main branch yet. Closing in favor of https://github.com/kubewarden/policy-server/pull/519, which is open against a feature branch of policy-server
gharchive/pull-request
2023-08-10T15:45:29
2025-04-01T06:39:20.667744
{ "authors": [ "flavio" ], "repo": "kubewarden/policy-server", "url": "https://github.com/kubewarden/policy-server/pull/518", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1222580459
Fix policy metadata The policy metadata was broken: The free-form key/value section requires the values to be strings, but the policy was trying to use an array. There was also an indexing error inside of the description. I think we should just tag 0.1.1 once this is merged Reopening, the metadata in hub doesn't seem to be updated: https://github.com/kubewarden/policy-hub/blob/main/web/policies/kubewarden:verify-image-signatures.json Fixed, it's all good now
gharchive/pull-request
2022-05-02T07:35:16
2025-04-01T06:39:20.673537
{ "authors": [ "flavio", "viccuad" ], "repo": "kubewarden/verify-image-signatures", "url": "https://github.com/kubewarden/verify-image-signatures/pull/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1717940294
Not running on MacOs I have used code from both the macOS branch and master branch, and can run the initial setup and download GUI, however when I try to run the program, I get the following error: "System is not supported" - when running code from master branch or "ERROR:aoimage.AoImage:System is not supported" when running from macOS branch. Should have mentioned that I'm using an Intel based Mac, not M1. Also happens on M1 macs: ~/Projects/autoortho on  main! ⌚ 9:37:00 $ build/autoortho.pyz Mac OS Version is 13.3.1 and patch enabled so applying the patch Applyting Mac OS 12.3+ Alpha Channel fix. Your default Alpha Channel is now 0.99 Config file found /Users/aaron/.autoortho reading... Saving config ... Wrote config file: /Users/aaron/.autoortho INFO:downloader:Looking for regions ... INFO:downloader:Last release refresh time: 2023-05-24 09:35:38.501631 INFO:downloader:Using cache ... INFO:downloader:Using scenery dir /Users/aaron/X-Plane 12/Custom Scenery INFO:downloader:Found region eur version 0.0.50 INFO:downloader: ... eur not setup yet INFO:downloader:Found region na version 0.0.49 INFO:downloader: ... na not setup yet INFO:downloader:Found region sa version 0.0.46-1 INFO:downloader: ... sa not setup yet INFO:downloader:Found region afr version 0.0.45-1 INFO:downloader: ... afr not setup yet INFO:downloader:Found region asi version 0.0.44-1 INFO:downloader: ... asi not setup yet INFO:downloader:Found region aus_pac version 0.0.42-1 INFO:downloader: ... aus_pac not setup yet Saving config ... INFO:aoconfig:Wrote config file: /Users/aaron/.autoortho Wrote config file: /Users/aaron/.autoortho Config file found /Users/aaron/.autoortho reading... INFO:aoconfig:Config file found /Users/aaron/.autoortho reading... Setting download dir to /Users/aaron/.autoortho-data/downloads INFO:downloader:Download na INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_00.zip 100.00% 30.38 MBpsINFO:downloader: DONE! 
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_01.zip 100.00% 52.70 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_02.zip 100.00% 22.25 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_03.zip 100.00% 51.62 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_04.zip 100.00% 25.42 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_05.zip 100.00% 52.05 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_06.zip 100.00% 53.14 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_07.zip 100.00% 31.36 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_08.zip 100.00% 33.05 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_09.zip 100.00% 34.26 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_10.zip 100.00% 36.03 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_11.zip 100.00% 29.08 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_12.zip 100.00% 42.20 MBpsINFO:downloader: DONE! 
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_13.zip 100.00% 41.08 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_14.zip 100.00% 24.73 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_15.zip 100.00% 31.99 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_16.zip 100.00% 28.30 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_17.zip 100.00% 53.04 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_18.zip 100.00% 55.66 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_19.zip 100.00% 46.10 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_20.zip 100.00% 30.74 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_21.zip 100.00% 41.91 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_22.zip 100.00% 47.21 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_23.zip 100.00% 43.49 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_24.zip 100.00% 46.83 MBpsINFO:downloader: DONE! 
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_25.zip 100.00% 37.20 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_26.zip 100.00% 33.81 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_27.zip 100.00% 33.85 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_28.zip 100.00% 44.56 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_29.zip 100.00% 45.99 MBpsINFO:downloader: DONE! INFO:downloader:ORTHOS DOWNLOADED INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/y_na_overlays.zip.00 100.00% 30.18 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/y_na_overlays.zip.01 100.00% 27.81 MBpsINFO:downloader: DONE! INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/y_na_overlays.zip.02 100.00% 47.31 MBpsINFO:downloader: DONE! INFO:downloader:OVERLAYS DOWNLOADED Setting extract dir to /Users/aaron/X-Plane 12/Custom Scenery INFO:downloader: ... na not setup yet INFO:downloader:Ready to extract archives for na v0.0.49! 
INFO:downloader:Split zip detected for ('/Users/aaron/.autoortho-data/downloads/y_na_overlays.zip',) INFO:downloader:ZIPNAME /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip INFO:downloader:Split zip detected for ('/Users/aaron/.autoortho-data/downloads/y_na_overlays.zip',) INFO:downloader:ZIPNAME /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip INFO:downloader:Split zip detected for ('/Users/aaron/.autoortho-data/downloads/y_na_overlays.zip',) INFO:downloader:ZIPNAME /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_00.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_01.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_02.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_03.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_04.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_05.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_06.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_07.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_08.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_09.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_10.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_11.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_12.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_13.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_14.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_15.zip... 
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_16.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_17.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_18.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_19.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_20.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_21.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_22.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_23.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_24.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_25.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_26.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_27.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_28.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_29.zip... INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip... 
INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_00.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_01.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_02.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_03.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_04.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_05.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_06.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_07.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_08.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_09.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_10.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_11.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_12.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_13.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_14.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_15.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_16.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_17.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_18.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_19.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_20.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_21.zip INFO:downloader:Cleaning up parts for 
/Users/aaron/.autoortho-data/downloads/z_na_22.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_23.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_24.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_25.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_26.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_27.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_28.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/z_na_29.zip INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_21 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_19 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_26 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_10 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_28 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_17 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_29 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_16 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_11 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_18 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_27 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_20 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_02 INFO:downloader:Setting up directories ... 
/Users/aaron/X-Plane 12/Custom Scenery/z_na_05 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_04 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_03 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_25 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_22 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_14 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_13 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_12 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_15 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_23 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_24 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_06 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_01 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_08 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_09 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_00 INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_07 INFO:downloader:Copy /Users/aaron/X-Plane 12/Custom Scenery/z_ao_na/textures to /Users/aaron/X-Plane 12/Custom Scenery/z_autoortho/_textures INFO:downloader:Done with extract INFO:downloader: ... na up to date and validated. Updating config. Saving config ... INFO:aoconfig:Wrote config file: /Users/aaron/.autoortho Wrote config file: /Users/aaron/.autoortho Config file found /Users/aaron/.autoortho reading... INFO:aoconfig:Config file found /Users/aaron/.autoortho reading... 
SectionParser(# x-plane custom scenery path='', scenery_path='/Users/aaron/X-Plane 12/Custom Scenery', # directory where satellite images are cached='', cache_dir='/Users/aaron/.autoortho-data/cache', # set directory for temporary downloading of scenery and other support files='', download_dir='/Users/aaron/.autoortho-data/downloads', # changing log_file dir is currently not supported='', log_file='/Users/aaron/.autoortho-data/logs/autoortho.log') Updating config. Saving config ... INFO:aoconfig:Wrote config file: /Users/aaron/.autoortho Wrote config file: /Users/aaron/.autoortho Config file found /Users/aaron/.autoortho reading... INFO:aoconfig:Config file found /Users/aaron/.autoortho reading... Exiting ... root: /Users/aaron/X-Plane 12/Custom Scenery/z_autoortho/_textures mountpoint: /Users/aaron/X-Plane 12/Custom Scenery/z_autoortho/textures INFO:aostats:Creating stats object INFO:autoortho:Running in multi-threaded mode. INFO:autoortho:Running in FUSE mode. ERROR:aoimage.AoImage:System is not supported Hey there! I was wondering if anyone has had success using Autoortho on a Mac. I was able to get it up and running on my PC, but I'm feeling a bit lost as to where to begin on my Mac. Any guidance you can offer would be greatly appreciated. Thank you! Currently Mac isn't supported yet; it's possible it will be in the future. What's needed is mostly related to compiling several binary dependencies.
Thank you so much for your quick reply. I'm really excited for the Mac version to come out! If you need any help testing it, please don't hesitate to let me know. I'm more than happy to lend a hand. Will do, definitely will need some help testing once that happens. btw, I also attempted to build the "macos" branch of autoortho and did get it building, but I ran into the same error that the macos_prmerge branch ended with: the tile building doesn't produce the DDS files needed by X-Plane. kubilus1: I would gladly help in debugging the builds for macOS, as I have the two branches building (macos and macos_prmerge), but the data flow in each is not producing the correct scenery files needed. Just let me know if you can afford a few minutes to walk through the data flow, so I can help in debugging the macOS builds.
gharchive/issue
2023-05-19T23:40:12
2025-04-01T06:39:20.687816
{ "authors": [ "acehoss", "avroliner780", "donks", "jimblair", "kubilus1" ], "repo": "kubilus1/autoortho", "url": "https://github.com/kubilus1/autoortho/issues/145", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
460496362
kafka: upgrade kafka to 2.2.1 and enable metrics This PR updates the Kafka framework to use a Kafka 2.2.1 Docker image, enables metrics for Kafka, adds an option to enable advertised listeners for clients (rather than being limited to localhost), and makes log.dirs configurable via an environment variable. Let's merge this before we start migrating packages. Thank you @alenkacz
gharchive/pull-request
2019-06-25T15:33:10
2025-04-01T06:39:20.690794
{ "authors": [ "alenkacz", "zmalik" ], "repo": "kudobuilder/frameworks", "url": "https://github.com/kudobuilder/frameworks/pull/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
434901159
Adding the kep for the previous values in update plans What type of PR is this? /kind kep What this PR does / why we need it: Knowing what the previous values were will help someone who develops a KUDO framework define the logic that needs to take place when an update occurs. Which issue(s) this PR fixes: Fixes # Special notes for your reviewer: Does this PR introduce a user-facing change?: Closed as stale. We will re-open if we can sanely re-evaluate this, but let's re-evaluate this once server-side apply lands (Kubernetes 1.15), because we may end up with more previous state than we thought once that's in.
gharchive/pull-request
2019-04-18T18:30:03
2025-04-01T06:39:20.694190
{ "authors": [ "djannot", "gerred" ], "repo": "kudobuilder/kudo", "url": "https://github.com/kudobuilder/kudo/pull/199", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
992201534
docs(dns) add extra documentation on the DNS CoreDNS wasn't mentioned anywhere; explain how DPP DNS works and how to update the template. Signed-off-by: Charly Molter charly.molter@konghq.com I'm unsure about your suggestion. I feel like you don't need to understand it to set it up. You need it for advanced use cases, which is why it's lower down in the docs. Otherwise I did all the updates you suggested, and sorry about the future tense (bad habits die hard!) Heh, future tense is all over the place and I'm not consistent about correcting it either. Iterate, iterate ... Point of clarification (I might revisit in a separate PR) -- I made the suggestion to move the explanation of how DNS works up the file to avoid duplication, not to explain something users don't really need :D. So I'll go back to the file as a whole and rethink organization generally (this is a common issue throughout the docs, not limited to this page). I'm also not seeing commits for some of the suggestions I made, but they aren't a big deal -- we can revisit another time ... I took the suggestions in the general commit where other suggestions were added.
gharchive/pull-request
2021-09-09T13:06:09
2025-04-01T06:39:20.717472
{ "authors": [ "Bradamant3", "lahabana" ], "repo": "kumahq/kuma-website", "url": "https://github.com/kumahq/kuma-website/pull/530", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
233560000
support for ionic angular 2 Is there any demo that works with Ionic 2 / Angular 2? Thanks. Here is what I did to make it work in Ionic 2. By way of example, here it is added to a simple Ionic 2 app:
ionic start ionictest blank   // Builds a simple Hello World style app.
cd ionictest
ionic cordova platform add android   // Only works on Android or iOS
ionic cordova run android
At this point I got an error about Gradle not being in the path. This seems to be a recent bug. This line will fix it (at some point in the future this won't be necessary); then run again:
cordova platform update android@6.1.1
ionic cordova run android
So far, we haven't touched the plugin. Assuming all is good, let's add the plugin:
ionic cordova plugin add https://github.com/kunder-lab/cl.kunder.webview.git
Now edit pages/home/home.html to add a button (after the "If you get lost ..." paragraph):
<!-- Any URL will do -->
<button ion-button (click)="launch('http://MichaelTague.com')"> Simple Web Site </button>
And then in pages/home/home.ts, add this right after the imports:
declare var webview: any;
and then inside the "export class HomePage" block, put this:
launch(url: string) {
  webview.Show(url);
}
The "declare" tells TypeScript that it is OK to refer to "webview" without it otherwise being imported or instantiated. Note: there is no import of the plugin. The webview.Show(...) call will open this URL in a second webview. It seems to have no trouble loading the HTML and related files such as an image. However, if you touch a link, the webview will open a browser. As for how to get it to talk to an existing cordova plugin, I'm still working on that! Maybe someone else will comment. Good luck, Michael Tague (tague@win.net). Thanks @MichaelTague for your tutorial!
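One caveat worth adding, based on general Cordova behavior rather than anything stated in this thread: plugin globals such as webview only exist after Cordova fires its deviceready event, so calling webview.Show(...) too early throws a reference error. A small guard helper avoids that (the name safeShow is hypothetical, not part of the plugin):

```javascript
// Hypothetical guard around the plugin call: only forward to the plugin if
// the global 'webview' object has been injected (i.e. after deviceready).
// Returns true if the call was forwarded, false otherwise.
function safeShow(url) {
  var w = typeof webview !== 'undefined' ? webview : undefined;
  if (w && typeof w.Show === 'function') {
    w.Show(url);
    return true;
  }
  console.warn('cl.kunder.webview not available yet (wait for deviceready)');
  return false;
}
```

Calling this from the button handler instead of webview.Show directly also lets the page degrade gracefully when it is opened in a plain browser during development.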
gharchive/issue
2017-06-05T11:19:42
2025-04-01T06:39:20.728822
{ "authors": [ "MichaelTague", "arturokunder", "frndxyz" ], "repo": "kunder-lab/cl.kunder.webview", "url": "https://github.com/kunder-lab/cl.kunder.webview/issues/26", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1721846124
[New game]: Tic Tac Toe 🎮 Game Request Players take turns placing a mark in one of the cells of the grid. The goal of the game is for players to position their marks so that they make a continuous line of three cells vertically, horizontally, or diagonally. A player can prevent a win by blocking the completion of the opponent's line. It will use HTML, CSS, and JavaScript. Point down the features The game is played on a grid that's 3 squares by 3 squares. You are X; your friend (or the computer in this case) is O. Players take turns putting their marks in empty squares. The first player to get 3 of her marks in a row (up, down, across, or diagonally) is the winner. When all 9 squares are full, the game is over. If no player has 3 marks in a row, the game ends in a tie. Select program in which you are contributing GSSoC23 Code of Conduct [X] I follow CONTRIBUTING GUIDELINE of this project. Hey @lmalkam! We already have a similar game request in #245 👀 Make sure you come up with a cool unique idea 😀 Waiting for your new game idea 💗. Hey @lmalkam ! Thank you so much for raising the issue💗 It's all yours, you can come anytime again and make some contributions! 🚀 Alone, we can do little, but together we can do so much! 😇
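The win/tie rules listed above map directly to a small check. Here is a minimal sketch (the names winner and LINES are illustrative, not from the issue), with the board as a flat array of 9 cells each holding 'X', 'O', or null:

```javascript
// The eight possible three-in-a-row lines on a 3x3 board,
// as indexes into a flat 9-cell array.
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

// Returns 'X' or 'O' if that player has a completed line,
// 'tie' if all 9 cells are full with no line, null if the game is still running.
function winner(board) {
  for (const [a, b, c] of LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) {
      return board[a];
    }
  }
  return board.every(Boolean) ? 'tie' : null;
}
```

Running this check after each move is enough to drive both the "winner" and "tie" end states described in the features list.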
gharchive/issue
2023-05-23T11:18:28
2025-04-01T06:39:20.733014
{ "authors": [ "kunjgit", "lmalkam" ], "repo": "kunjgit/GameZone", "url": "https://github.com/kunjgit/GameZone/issues/305", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1738416543
[New game]: 3d planet game 🎮 Game Request The player needs to avoid space junk and destroy asteroids. Point down the features Game points will be awarded on the basis of tokens collected and missions completed. Select program in which you are contributing GSSoC23 Code of Conduct [X] I follow CONTRIBUTING GUIDELINE of this project. Hey @S-ishita ! Thank you for raising an issue 💗 You can self-assign the issue by commenting /assign 😀 Make sure you follow CODE OF CONDUCT and CONTRIBUTING GUIDELINES 🚀 Don't forget to ⭐ our GameZone🎮 Make sure you join our Discord🕹️ Hey @S-ishita! We already have a similar game request in #730 👀 Make sure you come up with a cool unique idea 😀 Waiting for your new game idea 💗. Hey @S-ishita ! Thank you so much for raising the issue💗 It's all yours, you can come anytime again and make some contributions! 🚀 Alone, we can do little, but together we can do so much! 😇
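The scoring rule stated in the features (points from tokens collected and missions completed) could be sketched like this; the function name and the point weights are made-up placeholders, not values from the issue:

```javascript
// Hypothetical scoring sketch: total score is a weighted sum of
// tokens collected and missions completed. Weights are placeholders.
function score(tokensCollected, missionsCompleted,
               tokenPoints = 10, missionPoints = 100) {
  return tokensCollected * tokenPoints + missionsCompleted * missionPoints;
}
```

Keeping the weights as parameters makes it easy to rebalance the game later without touching the rest of the scoring logic.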
gharchive/issue
2023-06-02T15:34:29
2025-04-01T06:39:20.738379
{ "authors": [ "S-ishita", "kunjgit" ], "repo": "kunjgit/GameZone", "url": "https://github.com/kunjgit/GameZone/issues/731", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
690649026
I would like to run multiple kcp instances for load balancing. Is there a solution? Just copy and modify the init script yourself, then enable it. @sqliuchang could you share your modified init script? I made a simple change to KCPTUN=kcptun and CONFIG_FOLDER=/var/etc/$KCPTUN at the beginning, plus the corresponding config, but it still doesn't work.

```sh
#!/bin/sh /etc/rc.common
#
# Copyright 2016-2019 Xingwang Liao kuoruan@gmail.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

START=99
USE_PROCD=1

KCP0=kcp0
CONFIG_FOLDER=/var/etc/$KCP0

if [ -r /usr/share/libubox/jshn.sh ]; then
	. /usr/share/libubox/jshn.sh
elif [ -r /lib/functions/jshn.sh ]; then
	. /lib/functions/jshn.sh
else
	logger -p daemon.err -t "$KCP0" "Package required: jshn."
	exit 1
fi

_log() {
	local level="$1"
	local msg="$2"

	logger -p "daemon.${level}" -t "$KCP0" "$msg"
}

gen_client_config_file() {
	local config_file="$1"

	json_init
	json_add_string "remoteaddr" "${server_addr}:${server_port}"
	json_add_string "localaddr" "${listen_addr}:${listen_port}"

	add_configs() {
		local type="$1"; shift
		local k v
		for k in "$@"; do
			v="$(eval echo "\$$k")"
			if [ -n "$v" ]; then
				if [ "$type" = "string" ]; then
					json_add_string "$k" "$v"
				elif [ "$type" = "int" ]; then
					json_add_int "$k" "$v"
				elif [ "$type" = "boolean" ]; then
					if [ "$v" = "true" ]; then
						json_add_boolean "$k" "1"
					else
						json_add_boolean "$k" "0"
					fi
				fi
			fi
		done
	}

	add_configs "string" key crypt mode
	add_configs "int" conn autoexpire mtu sndwnd rcvwnd datashard parityshard dscp \
		nodelay interval resend nc sockbuf smuxver smuxbuf streambuf keepalive scavengettl snmpperiod
	add_configs "boolean" nocomp acknodelay quiet tcp

	if [ -n "$log_file" ]; then
		json_add_string "log" "$log_file"
	fi

	json_close_object
	json_dump -i >"$config_file"
}

add_iptables_rule() {
	local port="$1"

	iptables-restore --noflush <<-EOF 2>/dev/null
		*nat
		:KCP0 -
		-A KCP0 -p tcp --dport $port -j ACCEPT
		-A INPUT -p tcp -j KCP0
		COMMIT
	EOF
}

clear_iptables_rule() {
	iptables-save --counters | grep -vi "KCP0" | iptables-restore --counters
}

validate_config_section() {
	uci_validate_section "$KCP0" general "$1" \
		'server:uciname' \
		'client_file:string' \
		'daemon_user:string:root' \
		'enable_logging:bool:0' \
		'log_folder:directory:/var/log/kcp0'
}

validate_server_section() {
	uci_validate_section "$KCP0" servers "$1" \
		'server_addr:host' \
		'server_port:port:29900' \
		'listen_addr:host:0.0.0.0' \
		'listen_port:port:12948' \
		'key:string' \
		'crypt:string:aes' \
		'mode:or("normal","fast","fast2","fast3","manual"):fast' \
		'conn:min(1)' \
		'autoexpire:uinteger' \
		'scavengettl:min(-1)' \
		'mtu:range(64,9200)' \
		'sndwnd:min(1)' \
		'rcvwnd:min(1)' \
		'datashard:uinteger' \
		'parityshard:uinteger' \
		'dscp:uinteger' \
		'nocomp:or("true", "false")' \
		'quiet:or("true", "false")' \
		'tcp:or("true", "false")' \
		'nodelay:bool' \
		'interval:uinteger' \
		'resend:range(0,2)' \
		'nc:bool' \
		'acknodelay:or("true", "false")' \
		'sockbuf:uinteger' \
		'smuxver:or("1", "2")' \
		'smuxbuf:uinteger' \
		'streambuf:uinteger' \
		'keepalive:uinteger' \
		'snmpperiod:min(1)'
}

validate_client_file() {
	local file="$1"

	if [ ! -f "$file" ]; then
		return 1
	fi

	test -x "$file" || chmod 755 "$file"

	( $file -v 2>/dev/null | grep -q "kcp" )
}

start_kcptun_instance() {
	local section="$1"

	if ! validate_config_section "$section" ; then
		_log "err" "Config validate failed."
		return 1
	fi

	if [ -z "$server" ] || [ "$server" = "nil" ]; then
		_log "info" "No server selected, Client will stop."
		return 0
	elif ! validate_server_section "$server"; then
		_log "err" "Server config validation failed."
		return 1
	elif [ -z "$server_addr" ] || [ -z "$listen_port" ]; then
		_log "err" "Server config validation failed."
		return 1
	fi

	if [ -z "$client_file" ]; then
		_log "err" "Please set client file path, or use auto download."
		return 1;
	elif ! validate_client_file "$client_file"; then
		_log "err" "Client file validation failed."
		return 1
	fi

	is_ipv6_address() {
		echo "$1" | grep -q ":"
	}

	is_ipv6_address "$server_addr" && server_addr="[${server_addr}]"
	is_ipv6_address "$listen_addr" && listen_addr="[${listen_addr}]"

	test -d "$CONFIG_FOLDER" || mkdir -p "$CONFIG_FOLDER"

	log_file=""
	if [ "x$enable_logging" = "x1" ]; then
		mkdir -p "$log_folder"
		chown -R "$daemon_user" "$log_folder"
		log_file="${log_folder}/client.${section}.log"
	fi

	local config_file="${CONFIG_FOLDER}/client.${section}.json"

	if ! ( gen_client_config_file "$config_file" ); then
		_log "err" "Can't create config file."
		return 1
	fi

	procd_open_instance
	procd_set_param command "$client_file"
	procd_append_param command -c "$config_file"
	procd_set_param respawn
	procd_set_param user "$daemon_user"
	procd_set_param file "$config_file"
	procd_close_instance
}

service_triggers() {
	procd_add_reload_trigger "$KCP0"
}

start_service() {
	sleep 1
	config_load "$KCP0"
	config_foreach start_kcptun_instance "general"
}
```

Mine is an older version of the script that hasn't been updated, and I removed the part that opens the firewall port. Compare it with the original to see what I changed; you can adapt it yourself. @sqliuchang your template is very close to (1.5.2-1), probably the same version. I also created /var/etc/kcp0/client.general.json and gave the init script execute (x) permission, but with your template it still won't start. Is there some step I'm missing? kcp0.zip @GTGraphics3 it's probably your config file that isn't set up properly. Configure kcptun in LuCI first, then stop kcptun, copy the config file /etc/config/kcptun to /etc/config/kcp0, and enable kcp0 to try it. If it still doesn't work, you can debug with sh -x. @sqliuchang thank you very much! After configuring /etc/config/kcp0, running multiple instances works!
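For anyone following along, here is a rough sketch of what the duplicated config at /etc/config/kcp0 ends up lookingking like. This is my assumption based on the option names the validate functions in the script expect; the address, port, key, and client path are placeholders, not values from this thread:

```
config general
	option server 'server1'
	option client_file '/usr/bin/kcptun-client'
	option daemon_user 'root'
	option enable_logging '0'

config servers 'server1'
	option server_addr '198.51.100.1'
	option server_port '29900'
	option listen_addr '0.0.0.0'
	option listen_port '12948'
	option key 'placeholder-key'
	option crypt 'aes'
	option mode 'fast'
```

Each additional instance (kcp0, kcp1, ...) gets its own copy of this file plus its own init script, with a distinct listen_port so the clients don't collide.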
gharchive/issue
2020-09-02T01:40:54
2025-04-01T06:39:20.757723
{ "authors": [ "GTGraphics3", "JoveYu", "sqliuchang" ], "repo": "kuoruan/luci-app-kcptun", "url": "https://github.com/kuoruan/luci-app-kcptun/issues/56", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
3126924
Documentation is quite outdated It's really hard to follow the readme when 90% of carpool-specific calls are no longer working. I am trying to read through the source code, but it would be nice if the examples in the readme were updated. As you can see, the last commit for this was back in 2010. You should maybe look at using something like OAuth (which is what we moved on to), as it's pretty easy to implement an OAuth provider with something like Devise/OmniAuth. I'm not sure what's changed since the readme was updated, but if you want to update any of it, I'll definitely merge in any pull requests.
gharchive/issue
2012-02-07T16:46:12
2025-04-01T06:39:20.759546
{ "authors": [ "anlek", "brentkirby" ], "repo": "kurbmedia/carpool", "url": "https://github.com/kurbmedia/carpool/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1964570453
fix: builder args incorrectly configured as new args were added to geth the hardcoded indexes are not configured properly. This makes cmd args for builder more robust. Thanks for the quick fix, totally missed it!
gharchive/pull-request
2023-10-27T00:37:00
2025-04-01T06:39:20.760479
{ "authors": [ "avalonche", "barnabasbusa" ], "repo": "kurtosis-tech/ethereum-package", "url": "https://github.com/kurtosis-tech/ethereum-package/pull/343", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2553613870
chore(main): release 0.3.2 :robot: I have created a release beep boop 0.3.2 (2024-09-28) Bug Fixes add the --service flag in the kardinal flow telepresence intercept command (#259) (5d22282) fix broken website CSS by refactoring styled-components SSR logic (#257) (505e885) This PR was generated with Release Please. See documentation. :robot: Release is at https://github.com/kurtosis-tech/kardinal/releases/tag/0.3.2 :sunflower:
gharchive/pull-request
2024-09-27T19:37:25
2025-04-01T06:39:20.764434
{ "authors": [ "lostbean" ], "repo": "kurtosis-tech/kardinal", "url": "https://github.com/kurtosis-tech/kardinal/pull/258", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2578501165
Add release process documentation Closes #18 Hm ideally we should also add a sentence to the readme to draw users' attention to the fact that binary releases are available?
gharchive/pull-request
2024-10-10T11:07:06
2025-04-01T06:39:20.765337
{ "authors": [ "kuruczgy" ], "repo": "kuruczgy/x1e-nixos-config", "url": "https://github.com/kuruczgy/x1e-nixos-config/pull/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1716055130
Certain metrics should be gauges not counters
As I evaluate certain metrics in Grafana, it seems likely that some counter metrics actually originated as gauges, such as logstash_stats_pipeline_queue_events_count. This needs some careful testing to verify, which I'll do as I find time. Logstash docs as a reference: https://www.elastic.co/guide/en/logstash/current/node-stats-api.html (though it doesn't answer this question). Here's a code pointer as well: https://github.com/elastic/logstash/blob/main/logstash-core/lib/logstash/api/commands/stats.rb#L38-L41C49 Though the use of += makes me less sure. Again, this needs some smoke testing to validate.
I'll close this, as it does not seem to be the case. Indeed these metrics are monotonic counters.
gharchive/issue
2023-05-18T18:08:52
2025-04-01T06:39:20.767758
{ "authors": [ "excalq" ], "repo": "kuskoman/logstash-exporter", "url": "https://github.com/kuskoman/logstash-exporter/issues/121", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1665935626
add jv_kind_of_jv Since jv and jv_kind are different, string_of_jv_kind cannot be applied to values of type jv. I added a function jv_kind_of_jv that converts a value of type jv to the corresponding value of type jv_kind. close as Json.clasify_jv added in https://github.com/kxcteam/kxclib-ocaml/commit/df8c805ba1eeb3ee159cd71bf797f6d49a9a4b76 (🙏 @kxc-wraikny )
gharchive/pull-request
2023-04-13T08:10:16
2025-04-01T06:39:20.821862
{ "authors": [ "haochenx", "kxc-wraikny" ], "repo": "kxcteam/kxclib-ocaml", "url": "https://github.com/kxcteam/kxclib-ocaml/pull/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
306052549
Refactor docs This PR unifies the docs and adds extensions for automatic rendering in GitHub. I'm not sure if this matters anymore (maybe this should just work on GitHub and that's fine) but I think I went with .rst only because PyPi only supports it (and not Markdown). Does anybody care about the docs being rendered in HTML on PyPi? Ah, yes. FWIW, the conversion is simple enough, but perhaps needless. I can make the README an rst instead since GitHub can render either. Yeah, let's do that instead! One last small request: for context, can you rewrite the sentence from the old README.md to read something like: "Note that the GitHub repository is a (primarily read-only) mirror to enable bug reports and outside contributions." On Fri, Mar 16, 2018 at 3:48 PM, Elijah Rippeth notifications@github.com wrote: Ah, yes. FWIW, the conversion is simple enough, but perhaps needless. I can make the README an rst instead since GitHub can render either. — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/kylebgorman/Pynini/pull/2#issuecomment-373825819, or mute the thread https://github.com/notifications/unsubscribe-auth/AAJuOYjxmtWot1nVDhpL3tamGlzgHRZ7ks5tfBcngaJpZM4SuVP4 . It's here. Eep, don't merge yet... rst isn't rendering quite correctly... OK, at last... I've made the changes so rst can render. What a silly markup. :-)
gharchive/pull-request
2018-03-16T19:36:25
2025-04-01T06:39:20.834256
{ "authors": [ "erip", "kylebgorman" ], "repo": "kylebgorman/Pynini", "url": "https://github.com/kylebgorman/Pynini/pull/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
664103642
Handle MySQL renames https://github.com/kyleconroy/sqlc/issues/610 This change is  I've merged #608. Could you update this to return the error? Thanks
gharchive/pull-request
2020-07-22T23:11:17
2025-04-01T06:39:20.846171
{ "authors": [ "alecbz", "kyleconroy" ], "repo": "kyleconroy/sqlc", "url": "https://github.com/kyleconroy/sqlc/pull/613", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
9133833
Exclude some columns from being sorted. Is there any way I can specify a column that should not be sorted?
Quite old, but may be useful. I just set the class of the column to "column-unsortable" and did the following:
$('table').tablesort();
$(".column-unsortable").unbind();
This functionality is now included in jquery-tablesort (0.0.3). To prevent a column from being sortable, just add the no-sort class to your th:
<th class="no-sort">Photo</th>
Try the "Photo" column in the demo: https://dl.dropboxusercontent.com/u/780754/tablesort/index.html
gharchive/issue
2012-12-10T10:04:17
2025-04-01T06:39:20.851322
{ "authors": [ "Gelbotron", "kasimbadami", "kylefox" ], "repo": "kylefox/jquery-tablesort", "url": "https://github.com/kylefox/jquery-tablesort/issues/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
801722287
Can't route all traffic through VPN
I want to migrate my existing OpenVPN install to use this Docker container, but I'm having some trouble finding the right settings so that it can route all Internet traffic through the VPN. I'm trying to set up an OpenVPN instance with the following setup:

Using TCP
VPN internal network should be 10.9.0.0/24
VPN clients should be able to reach machines in the host network 192.168.1.0/24
VPN clients should be able to tunnel Internet traffic through the VPN

In order to build a config for this, I configured the following service in my docker-compose.yml file:

openvpn:
  image: kylemanna/openvpn
  container_name: openvpn
  restart: unless-stopped
  cap_add:
    - NET_ADMIN
  volumes:
    - $MY_HOST_CONF_DIR:/etc/openvpn
  ports:
    - 1194:1194

And I ran the following commands:

$ docker-compose run --rm openvpn ovpn_genconfig -N -d -u tcp://$MY_DNS -s 10.9.0.0/24 -p "route 192.168.1.0 255.255.255.0"
$ docker-compose run --rm openvpn ovpn_initpki
$ docker-compose run --rm openvpn easyrsa build-client-full $MY_CLIENT nopass
$ docker-compose run --rm openvpn ovpn_getclient $MY_CLIENT > $MY_CLIENT.ovpn

I'm now trying to connect with Tunnelblick. If I connect with the "Route all IPv4 traffic through the VPN" option I can reach neither 192.168.9.0/24 addresses nor Internet addresses. If I connect without this option I can access 192.168.9.0/24 addresses.
I'm not an expert in networking or OpenVPN configuration, so I may be missing something obvious. What am I doing wrong?
I'm facing the same issue, did you find any solution?
Unfortunately not, I've made no progress so far. Documentation seems to assume that all traffic is routed through the VPN by default, but I can't get it to work even with the default config. Maybe one of the maintainers can help with this?
I tried adding this to the /etc/docker/daemon.json file:

{
  "iptables": true
}

and it worked.
That didn't work for me unfortunately, and it's surprising that it worked for you, given that iptables should be true by default. Can you share the exact config you used (minus public IPs and other sensitive info)?
@ruippeixotog If you are running this on GCP or other cloud services, make sure your VM has "IP Forwarding" enabled.
gharchive/issue
2021-02-04T23:49:18
2025-04-01T06:39:20.859074
{ "authors": [ "batesenergy", "ivanNieto13", "ruippeixotog" ], "repo": "kylemanna/docker-openvpn", "url": "https://github.com/kylemanna/docker-openvpn/issues/638", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1400914282
Allow to retrigger suspension of expired instance Description Changes proposed in this pull request: ... ... ... Related issue(s) /hold fixes: https://github.tools.sap/kyma/backlog/issues/3038 /unhold
gharchive/pull-request
2022-10-07T09:49:58
2025-04-01T06:39:20.894905
{ "authors": [ "piotrmiskiewicz" ], "repo": "kyma-project/control-plane", "url": "https://github.com/kyma-project/control-plane/pull/2124", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2239269439
Extend GH Workflows to support experimental functionality

Description
Extend the Istio module build and release process to support experimental functionality. GH Actions need to include building an experimental image that can later be used in testing. The release process also needs to be extended to produce experimental artefacts that can later be used to roll out the experimental offering.

ACs:
- [ ] experimental image build (prow)
- [ ] Update CI/CD documentation
- [x] experimental release artefacts exist
- [ ] experimental release notes created
- [ ] release documentation updated
- [x] execute experimental tests

Reasons
Support the experimental offering.

DoD:
- [ ] Provide unit and integration tests.
- [ ] Provide documentation.
- [ ] Verify if the solution works for both open-source Kyma and SAP BTP, Kyma runtime.
- [ ] If you changed the resource limits, explain why it was needed.
- [ ] Verify that your contributions don't decrease code coverage. If they do, explain why this is the case.
- [ ] Add release notes.

Attachments
Prow build doesn't support building a different tag for pull request builds. The decision was made to have a new section for experimental features in the release notes template and to not have experimental builds for PRs for now.

PRs:
https://github.com/kyma-project/istio/pull/731
https://github.com/kyma-project/test-infra/pull/10408
https://github.tools.sap/kyma/documentation/pull/550

Issue for support of a custom tag for PR builds is created: https://github.com/kyma-project/test-infra/issues/10415
After merge of https://github.com/kyma-project/istio/pull/731, we need to update the links to the newly added jobs in the CI/CD documentation.
gharchive/issue
2024-04-10T06:28:55
2025-04-01T06:39:20.902479
{ "authors": [ "strekm", "triffer" ], "repo": "kyma-project/istio", "url": "https://github.com/kyma-project/istio/issues/732", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2102326976
bump k8s version used with envtest Description Changes proposed in this pull request: bump k8s version used with envtest to 1.27 series Related issue(s) /retest
gharchive/pull-request
2024-01-26T14:40:20
2025-04-01T06:39:20.938072
{ "authors": [ "halamix2" ], "repo": "kyma-project/serverless", "url": "https://github.com/kyma-project/serverless/pull/666", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2112377020
docs: Add OTLP Logs PoC documentation

Description
Changes proposed in this pull request (what was done and why):
- Add documentation about the OpenTelemetry logging PoC

Changes refer to particular issues, PRs or documents:
https://github.com/kyma-project/telemetry-manager/issues/720

Traceability
- [ ] The PR is linked to a GitHub issue.
- [ ] New features have a milestone set.
- [ ] New features have defined acceptance criteria in a corresponding GitHub Issue, and all criteria are satisfied with this PR.
- [ ] The corresponding GitHub issue has a respective area and kind label.
- [ ] The follow-up issues (if any) are linked in the Related Issues section.
- [ ] Adjusted the documentation if the change is user-facing.
- [ ] The feature is unit-tested
- [ ] The feature is e2e-tested

/unhold
gharchive/pull-request
2024-02-01T12:16:03
2025-04-01T06:39:20.942153
{ "authors": [ "chrkl" ], "repo": "kyma-project/telemetry-manager", "url": "https://github.com/kyma-project/telemetry-manager/pull/762", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1857333181
🛑 senoshaidodelasmanos.love is down In 1e8bc36, senoshaidodelasmanos.love (https://senoshaidodelasmanos.love) was down: HTTP code: 0 Response time: 0 ms Resolved: senoshaidodelasmanos.love is back up in 3dfcd08.
gharchive/issue
2023-08-18T21:58:53
2025-04-01T06:39:20.950797
{ "authors": [ "kyryl-bogach" ], "repo": "kyryl-bogach/upptime", "url": "https://github.com/kyryl-bogach/upptime/issues/247", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1842174831
Sliced large number of flows

Closes #164

Summary
Added a function to slice a large number of flows into lists of 200.

Local Tests
Reproduced the test on the issue.

Let's include this on 2023.1; since it ended up being tagged late, let's take the opportunity.
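The batching described in the summary can be sketched as follows. Note that `chunked` is a hypothetical helper name for illustration, not the actual flow_manager function:

```python
def chunked(flows, size=200):
    """Yield successive slices of at most `size` items from `flows`.

    Sketch of slicing a large flow list into batches of 200, as the PR
    summary describes; the real implementation lives in flow_manager.
    """
    for i in range(0, len(flows), size):
        yield flows[i:i + size]


batches = list(chunked(list(range(450))))
print([len(b) for b in batches])  # [200, 200, 50]
```

Sending flows in bounded batches like this keeps individual flow-mod requests from growing arbitrarily large.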
gharchive/pull-request
2023-08-08T22:17:38
2025-04-01T06:39:20.955192
{ "authors": [ "Alopalao", "viniarck" ], "repo": "kytos-ng/flow_manager", "url": "https://github.com/kytos-ng/flow_manager/pull/167", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1980243423
Added vlan_range support

Closes #18
Closes #309

Summary
This PR needs kytos PR. Added support for the vlan_range epic. Currently only when both UNIs have the same list of tags.

Local Tests
Created, updated and deleted circuits. Restarted kytos to check that consistency works. Partial updates for lists of tags were also tested, for example if both UNIs have [[20, 30]] and they are being updated to [[25, 30]]. Added and updated tests.

End-To-End Tests
============================= test session starts ==============================
platform linux -- Python 3.9.2, pytest-7.2.0, pluggy-1.3.0
rootdir: /tests
plugins: timeout-2.1.0, rerunfailures-10.2, anyio-3.6.2
collected 244 items

tests/test_e2e_01_kytos_startup.py .. [ 0%]
tests/test_e2e_05_topology.py .................. [ 8%]
tests/test_e2e_10_mef_eline.py ..........ss.....x.....x................ [ 24%]
tests/test_e2e_11_mef_eline.py ...... [ 27%]
tests/test_e2e_12_mef_eline.py .....Xx. [ 30%]
tests/test_e2e_13_mef_eline.py ....Xs.s.....Xs.s.XXxX.xxxx..X........... [ 47%]
. [ 47%]
tests/test_e2e_14_mef_eline.py x [ 47%]
tests/test_e2e_15_mef_eline.py .... [ 49%]
tests/test_e2e_20_flow_manager.py ..................... [ 58%]
tests/test_e2e_21_flow_manager.py ... [ 59%]
tests/test_e2e_22_flow_manager.py ............... [ 65%]
tests/test_e2e_23_flow_manager.py .............. [ 71%]
tests/test_e2e_30_of_lldp.py .... [ 72%]
tests/test_e2e_31_of_lldp.py ... [ 74%]
tests/test_e2e_32_of_lldp.py ... [ 75%]
tests/test_e2e_40_sdntrace.py ............. [ 80%]
tests/test_e2e_41_kytos_auth.py ........ [ 84%]
tests/test_e2e_42_sdntrace.py .. [ 84%]
tests/test_e2e_50_maintenance.py ........................ [ 94%]
tests/test_e2e_60_of_multi_table.py ..... [ 96%]
tests/test_e2e_70_kytos_stats.py ........ [100%]

Last commit is more stable. Tested with this script. This script works with the latest updates from the kytos, topology and of_lldp PRs. It runs as python3 evcs.py 5. It sets tag ranges on the "01:1" and "02:1" interfaces and creates a set number of circuits.
The result should be an empty available_tags["vlan"] for the "01:1" and "02:1" interfaces.
Changelog also hasn't been updated.
Bypassing checking of tags for use_tags() and make_tags_available() since these tags are not managed by the user. Commit a29489619e26a6ebea63ad69ce6c386992644dfc
Closing this since Aldo's PR #407 has landed. Nicely done, Aldo.
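The tag-range bookkeeping the tests above exercise comes down to interval arithmetic over inclusive [lo, hi] ranges. Here is a small illustrative sketch; `remove_tags` is a hypothetical helper for this example, not the mef_eline implementation:

```python
def remove_tags(available, used_lo, used_hi):
    """Remove the inclusive range [used_lo, used_hi] from a list of
    inclusive [lo, hi] ranges, returning the remaining ranges."""
    remaining = []
    for lo, hi in available:
        if used_hi < lo or used_lo > hi:
            # No overlap: keep the range untouched.
            remaining.append([lo, hi])
            continue
        if lo < used_lo:
            remaining.append([lo, used_lo - 1])
        if hi > used_hi:
            remaining.append([used_hi + 1, hi])
    return remaining


# Using tags [25, 30] out of an available [[20, 30]] leaves [[20, 24]].
print(remove_tags([[20, 30]], 25, 30))  # [[20, 24]]
```

Creating enough circuits to drain the whole pool, as the script described above does, should leave this available list empty.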
gharchive/pull-request
2023-11-06T23:11:23
2025-04-01T06:39:20.961442
{ "authors": [ "Alopalao", "viniarck" ], "repo": "kytos-ng/mef_eline", "url": "https://github.com/kytos-ng/mef_eline/pull/396", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
516808519
Avoid crashes on comments before the first move. Example PGN:

[Event "?"]
[White "Lelax IV"]
[Black "Autaxx"]
[FEN "3x2o/7/7/7/2o4/7/6x o 3 2"]
[Adjudicated "Engine crashed"]
[Result "1-0"]

{ engine crashed } 1-0

Thanks for the fix.
gharchive/pull-request
2019-11-03T10:32:36
2025-04-01T06:39:21.022880
{ "authors": [ "gcp", "kz04px" ], "repo": "kz04px/python-ataxx", "url": "https://github.com/kz04px/python-ataxx/pull/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2073790218
🛑 LR Store is down In dcfd99d, LR Store (https://la-razon.ventas.com.bo) was down: HTTP code: 0 Response time: 0 ms Resolved: LR Store is back up in 499eb19 after 10 minutes.
gharchive/issue
2024-01-10T07:48:49
2025-04-01T06:39:21.044506
{ "authors": [ "la-razonbo" ], "repo": "la-razonbo/udweb", "url": "https://github.com/la-razonbo/udweb/issues/149", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1217166816
Update iam.tf
Updating IAM permissions to fix the https://github.com/kubernetes/autoscaler/issues/3216 error
@saurabh-paystack thank you!
gharchive/pull-request
2022-04-27T10:41:07
2025-04-01T06:39:21.047821
{ "authors": [ "dojci", "saurabh-paystack" ], "repo": "lablabs/terraform-aws-eks-cluster-autoscaler", "url": "https://github.com/lablabs/terraform-aws-eks-cluster-autoscaler/pull/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
197515623
cuda works @lachs0r, from https://mpv.srsfckn.biz/changes/2016-12-25/ : but if you’re interested, try to use it and report back so I can update this note. Just tried the latest stable build, and I can confirm that CUDA hwdec works fine 👍 Thanks.
gharchive/issue
2016-12-25T21:22:48
2025-04-01T06:39:21.076379
{ "authors": [ "lachs0r", "pavelxdd" ], "repo": "lachs0r/mingw-w64-cmake", "url": "https://github.com/lachs0r/mingw-w64-cmake/issues/27", "license": "isc", "license_type": "permissive", "license_source": "bigquery" }
170151196
Lagotto fails Elsevier cookie test
This is nothing new, but things may have changed recently. This is causing deposits not to process.
Elsevier's DOIs have a long and complicated series of redirects with cookies, presumably to prevent people from following their DOIs in an automated way. Lagotto falls into the trap. e.g. http://doi.org/10.1016/j.dld.2006.06.008 has the following redirects using the current Networkable module:

http://doi.org/10.1016/j.dld.2006.06.008
http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556
/retrieve/articleSelectPrefsTemp?Redirect=http%3A%2F%2Fwww.dldjournalonline.com%2Fretrieve%2Fpii%2FS1590865806002556&key=71835a2ddc744fbddf6d9a5a9003a4aced4b81a1
http://www.dldjournalonline.com/retrieve/pii/S1590865806002556
https://secure.jbs.elsevierhealth.com/action/getSharedSiteSession?redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&rc=0&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=1&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=2&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=3&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=4&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=5&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=6&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=7&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=8&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=9&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=10&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site
http://secure.jbs.elsevierhealth.com:443/action/cookieAbsent

Before failing. The cookies set at each step:

Current cookies: {}, Redirect to: http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556

Current cookies: {}, Redirect to: /retrieve/articleSelectPrefsTemp?Redirect=http%3A%2F%2Fwww.dldjournalonline.com%2Fretrieve%2Fpii%2FS1590865806002556&key=71835a2ddc744fbddf6d9a5a9003a4aced4b81a1

Current cookies: { "linkinghub.elsevier.com"=>{ "/retrieve/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="linkinghub.elsevier.com", for_domain=false, path="/retrieve/", secure=false, httponly=true, expires=nil, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556>}, "/"=>{ "visitorId"=>#<HTTP::Cookie:name="visitorId", value="anh7JKf87rBcdaY5Muus", domain="linkinghub.elsevier.com", for_domain=false, path="/", secure=false, httponly=false, expires=2084-08-27 12:46:09 UTC, max_age=nil, created_at=2016-08-09 09:25:24
+0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556>}}}, Redirect to: http://www.dldjournalonline.com/retrieve/pii/S1590865806002556 Current cookies: { "linkinghub.elsevier.com"=>{ "/retrieve/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="linkinghub.elsevier.com", for_domain=false, path="/retrieve/", secure=false, httponly=true, expires=nil, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556>}, "/"=>{"visitorId"=>#<HTTP::Cookie:name="visitorId", value="anh7JKf87rBcdaY5Muus", domain="linkinghub.elsevier.com", for_domain=false, path="/", secure=false, httponly=false, expires=2084-08-27 12:46:09 UTC, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/articleSelectPrefsTemp?Redirect=http%3A%2F%2Fwww.dldjournalonline.com%2Fretrieve%2Fpii%2FS1590865806002556&key=71835a2ddc744fbddf6d9a5a9003a4aced4b81a1>}}}, Redirect to: https://secure.jbs.elsevierhealth.com/action/getSharedSiteSession?redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&rc=0&code=ydld-site Current cookies: {"linkinghub.elsevier.com"=>{ "/retrieve/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="linkinghub.elsevier.com", for_domain=false, path="/retrieve/", secure=false, httponly=true, expires=nil, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556>}, "/"=>{"visitorId"=>#<HTTP::Cookie:name="visitorId", value="anh7JKf87rBcdaY5Muus", domain="linkinghub.elsevier.com", for_domain=false, path="/", secure=false, httponly=false, expires=2084-08-27 12:46:09 UTC, max_age=nil, 
created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/articleSelectPrefsTemp?Redirect=http%3A%2F%2Fwww.dldjournalonline.com%2Fretrieve%2Fpii%2FS1590865806002556&key=71835a2ddc744fbddf6d9a5a9003a4aced4b81a1>}}, "www.dldjournalonline.com"=>{ "/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="www.dldjournalonline.com", for_domain=true, path="/", secure=false, httponly=false, expires=1994-12-01 16:00:00 UTC, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://www.dldjournalonline.com/retrieve/pii/S1590865806002556>}}}, Redirect to: https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=1&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site Current cookies: {"linkinghub.elsevier.com"=>{ "/retrieve/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="linkinghub.elsevier.com", for_domain=false, path="/retrieve/", secure=false, httponly=true, expires=nil, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556>}, "/"=>{"visitorId"=>#<HTTP::Cookie:name="visitorId", value="anh7JKf87rBcdaY5Muus", domain="linkinghub.elsevier.com", for_domain=false, path="/", secure=false, httponly=false, expires=2084-08-27 12:46:09 UTC, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://linkinghub.elsevier.com/retrieve/articleSelectPrefsTemp?Redirect=http%3A%2F%2Fwww.dldjournalonline.com%2Fretrieve%2Fpii%2FS1590865806002556&key=71835a2ddc744fbddf6d9a5a9003a4aced4b81a1>}}, "www.dldjournalonline.com"=>{ "/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", 
value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="www.dldjournalonline.com", for_domain=true, path="/", secure=false, httponly=false, expires=1994-12-01 16:00:00 UTC, max_age=nil, created_at=2016-08-09 09:25:24 +0000, accessed_at=2016-08-09 09:25:24 +0000 origin=http://www.dldjournalonline.com/retrieve/pii/S1590865806002556>}}, "secure.jbs.elsevierhealth.com"=>{ "/"=>{"SERVER"=>#<HTTP::Cookie:name="SERVER", value="WZ6myaEXBLEIcey8uceZQQ==", domain="secure.jbs.elsevierhealth.com", for_domain=false, path="/", secure=false, httponly=false, expires=nil, max_age=nil, created_at=2016-08-09 09:25:25 +0000, accessed_at=2016-08-09 09:25:25 +0000 origin=https://secure.jbs.elsevierhealth.com/action/getSharedSiteSession?redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&rc=0&code=ydld-site>, "MAID"=>#<HTTP::Cookie:name="MAID", value="XWnDoS4xJFnOjiBZ7C1h+A==", domain="secure.jbs.elsevierhealth.com", for_domain=false, path="/", secure=false, httponly=false, expires=2017-06-05 09:32:03 UTC, max_age=nil, created_at=2016-08-09 09:25:25 +0000, accessed_at=2016-08-09 09:25:25 +0000 origin=https://secure.jbs.elsevierhealth.com/action/getSharedSiteSession?redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&rc=0&code=ydld-site>, "JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="aaaQG3oCs1xzae42BZJzv", domain="secure.jbs.elsevierhealth.com", for_domain=false, path="/", secure=false, httponly=false, expires=nil, max_age=nil, created_at=2016-08-09 09:25:25 +0000, accessed_at=2016-08-09 09:25:25 +0000 origin=https://secure.jbs.elsevierhealth.com/action/getSharedSiteSession?redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&rc=0&code=ydld-site>}}}, Redirect to: https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=2&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site Current cookies: 
{"linkinghub.elsevier.com"=>{
  "/retrieve/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="linkinghub.elsevier.com", for_domain=false, path="/retrieve/", secure=false, httponly=true, expires=nil, created_at=2016-08-09 09:25:24 +0000, origin=http://linkinghub.elsevier.com/retrieve/pii/S1590865806002556>},
  "/"=>{"visitorId"=>#<HTTP::Cookie:name="visitorId", value="anh7JKf87rBcdaY5Muus", domain="linkinghub.elsevier.com", for_domain=false, path="/", expires=2084-08-27 12:46:09 UTC, created_at=2016-08-09 09:25:24 +0000>}},
 "www.dldjournalonline.com"=>{
  "/"=>{"JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="5A3119DDF9CB5B9D9505D7391EC4BF2B.sJpWfCVoJNGP6fbRpTDcA", domain="www.dldjournalonline.com", for_domain=true, path="/", expires=1994-12-01 16:00:00 UTC, created_at=2016-08-09 09:25:24 +0000>}},
 "secure.jbs.elsevierhealth.com"=>{
  "/"=>{"SERVER"=>#<HTTP::Cookie:name="SERVER", value="WZ6myaEXBLEIcey8uceZQQ==", domain="secure.jbs.elsevierhealth.com", path="/", expires=nil, created_at=2016-08-09 09:25:25 +0000>,
        "MAID"=>#<HTTP::Cookie:name="MAID", value="XWnDoS4xJFnOjiBZ7C1h+A==", domain="secure.jbs.elsevierhealth.com", path="/", expires=2017-06-05 09:32:03 UTC, created_at=2016-08-09 09:25:25 +0000>,
        "JSESSIONID"=>#<HTTP::Cookie:name="JSESSIONID", value="aaaQG3oCs1xzae42BZJzv", domain="secure.jbs.elsevierhealth.com", path="/", expires=nil, created_at=2016-08-09 09:25:25 +0000>}}},

Redirect to: https://secure.jbs.elsevierhealth.com:443/action/getSharedSiteSession?rc=3&redirect=http%3A%2F%2Fwww.dldjournalonline.com%3A80%2Fretrieve%2Fpii%2FS1590865806002556&code=ydld-site

The same cookie jar is reprinted before every hop while the rc= counter climbs through rc=4, rc=5, ... rc=10; from rc=4 onwards, secure.jbs.elsevierhealth.com and jbs.elsevierhealth.com re-set SERVER and JSESSIONID with expires=1994-12-01 16:00:00 UTC, so those cookies are discarded immediately, and the chain finally ends with:

Redirect to: http://secure.jbs.elsevierhealth.com:443/action/cookieAbsent

Note the dates in the past. To demonstrate:

$ a = Facebook.new
$ a.get_canonical_url("http://doi.org/10.1016/j.dld.2006.06.008")
=> {:error=>"end of file reached (EOFError) for http://doi.org/10.1016/j.dld.2006.06.008", :status=>400}

I have tried this with the standard cookie jar and with https://github.com/sparklemotion/http-cookie . Using phantomjs works, so there is a possible route if we can't get it working.

There are two issues: how we approach this in general, and whether we should fix how we follow redirects. It is possible, for example, that the number of redirects or the size of the cookie file becomes too large.

Based on another discussion I am willing to consider resolving DOIs using a client library that understands javascript. Background for example here: https://developers.google.com/webmasters/ajax-crawling/docs/learn-more

I am actually currently doing R&D on a project to do this. I collected 1 million DOIs over the weekend! I'll let you know when there is something to share.

I'm quite far with an agent to do this in a centralised manner.
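A minimal, hypothetical sketch of the redirect-capping idea raised here (names like `fetch` and `resolve` are stand-ins, not lagotto's API): cap the hop count, and treat hops that differ only in the rc= counter as the same URL so the loop is detected early rather than after the budget is exhausted.

```python
# Sketch: follow Location headers with a fixed budget, detecting loops
# where only a retry counter (rc=) changes between hops.
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

MAX_REDIRECTS = 10

def _normalize(url):
    # Drop the rc= counter so rc=3, rc=4, ... count as the same hop.
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "rc"]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))

def resolve(url, fetch):
    """fetch(url) is a stand-in HTTP client returning the Location
    header of the response, or None when there is no redirect."""
    seen = set()
    for _ in range(MAX_REDIRECTS):
        key = _normalize(url)
        if key in seen:
            raise RuntimeError("redirect loop at %s" % url)
        seen.add(key)
        nxt = fetch(url)
        if nxt is None:
            return url  # no more redirects: resolved
        url = nxt
    raise RuntimeError("too many redirects")
```

With this shape, the getSharedSiteSession chain above would be rejected on its second hop instead of looping until rc=10.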
gharchive/issue
2016-08-09T12:15:13
2025-04-01T06:39:21.134415
{ "authors": [ "afandian", "mfenner" ], "repo": "lagotto/lagotto", "url": "https://github.com/lagotto/lagotto/issues/631", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2017203126
Revert initial gas check fix

Description
Description of the pull request changes and motivation.

Checklist
[ ] Linked to Github Issue
[ ] Unit tests added
[ ] Integration tests added.
[ ] This change requires new documentation.
[ ] Documentation has been added/updated.

Codecov Report
Attention: 181 lines in your changes are missing coverage. Please review.
Comparison is base (87d0c98) 74.27% compared to head (b4e273b) 73.81%. Report is 1 commit behind head on main.

Files                             Patch %   Lines
src/bin/cairo-native-compile.rs     0.00%   134 Missing :warning:
src/libfuncs/stark_net.rs           0.00%    32 Missing :warning:
src/ffi.rs                         88.46%    15 Missing :warning:

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #356      +/-   ##
==========================================
- Coverage   74.27%   73.81%    -0.46%
==========================================
  Files          96       97        +1
  Lines       21889    22189      +300
==========================================
+ Hits        16257    16379      +122
- Misses       5632     5810      +178
gharchive/pull-request
2023-11-29T18:43:49
2025-04-01T06:39:21.151292
{ "authors": [ "azteca1998", "codecov-commenter" ], "repo": "lambdaclass/cairo_native", "url": "https://github.com/lambdaclass/cairo_native/pull/356", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1779468407
Fix coverage workflow

Description
Fixes coverage workflow by installing nightly

Codecov Report
Merging #704 (8ab690a) into main (708dc65) will increase coverage by 0.00%. The diff coverage is 94.73%.

@@           Coverage Diff           @@
##             main     #704   +/-   ##
=======================================
  Coverage   91.97%   91.97%
=======================================
  Files          52       52
  Lines       11335    11341     +6
=======================================
+ Hits        10425    10431     +6
  Misses        910      910

Impacted Files                                           Coverage Δ
src/definitions/block_context.rs                         100.00% <ø> (ø)
crates/starknet-contract-class/src/lib.rs                80.20% <90.00%> (+0.63%) :arrow_up:
.../api/contract_classes/deprecated_contract_class.rs    96.92% <100.00%> (+0.07%) :arrow_up:
src/storage/errors/storage_errors.rs                     100.00% <100.00%> (ø)
gharchive/pull-request
2023-06-28T18:20:15
2025-04-01T06:39:21.159921
{ "authors": [ "codecov-commenter", "matias-gonz" ], "repo": "lambdaclass/starknet_in_rust", "url": "https://github.com/lambdaclass/starknet_in_rust/pull/704", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1013878352
SGB Palette/Border Support?

Would it be possible to add a way to toggle the border and palettes for SGB titles that make use of them? Would be very nice to have this sorta feature!

I was planning to at least have some pre-packaged borders in the major version. I'll see how much effort it would take to implement SGB features, but I probably won't implement all of them, since doing it correctly would require emulating the Super Nintendo sound chip and possibly the CPU, and I don't want to commit to that much effort. If implementing partial features is viable I will consider it.

Yeah. Borders, and possibly the palettes, are the only two I can see being supported. Trying to do things like what Donkey Kong and one other title did (using the SNES hardware itself) is virtually impossible.

Loving your project. I was also hoping to see SGB border support. Don't much care for emulating the rest of the features. But if there's a simple way to pull the border file out and apply that around the game (maybe as one of the zoom options), I'd love to see it.
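For what it's worth, the compositing step itself is simple. A rough sketch of centering the Game Boy frame inside a standard 256x224 SGB border (the sizes and offsets are the usual SGB dimensions; the pixel-list representation is just for illustration, real code would blit textures):

```python
# Center the 160x144 Game Boy frame inside a 256x224 SGB border image.
# Images are plain row-major lists of pixel values.

GB_W, GB_H = 160, 144
SGB_W, SGB_H = 256, 224

def composite(border, frame):
    x0 = (SGB_W - GB_W) // 2   # 48
    y0 = (SGB_H - GB_H) // 2   # 40
    out = [row[:] for row in border]   # copy so the border stays intact
    for y in range(GB_H):
        out[y0 + y][x0:x0 + GB_W] = frame[y]
    return out
```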
gharchive/issue
2021-10-02T01:38:52
2025-04-01T06:39:21.162040
{ "authors": [ "KevDoy", "Zeldaboy14", "lambertjamesd" ], "repo": "lambertjamesd/gb64", "url": "https://github.com/lambertjamesd/gb64/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1899913788
Potential speed boost from using 4-bit textures

If I understand everything correctly, this demo uses 16-bit 32x32 tiles for rendering. The 16-bit tiles could be replaced with 4-bit tiles with individual palettes, reducing the memory load approximately 4 times. 16 colors actually should be enough, considering that the size of each tile is only 32x32 pixels. With a proper palette generation algorithm, everything should look fine. As I remember, pngquant even had an option to quantize colors to RGB555, which should provide a good palette within N64 rendering limits.

That would actually work really well for more toonish textures. It would also help reduce the size of the ROM, which is the real limitation of the technique. I don't think I will be doing any more work on this any time soon, but I'll keep this issue open.

I'm pretty sure it should look good enough even on realistic textures given the tiles' resolution, but this should be tested.

I'm thinking of an FPS partially running on Portal64's map work, utilizing the megatextures, shaders, and shadows.
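For intuition, a back-of-the-envelope sketch of the savings, assuming RGBA5551 texels (2 bytes per palette entry) and the 32x32 tile size discussed above:

```python
# Rough memory math for switching from 16-bit tiles to 4-bit indexed
# tiles with a per-tile 16-color palette.

TILE_W = TILE_H = 32

def tile_bytes_16bpp():
    # Each texel is a 2-byte RGBA5551 value.
    return TILE_W * TILE_H * 2

def tile_bytes_4bpp(palette_colors=16):
    # Two 4-bit color indices pack into one byte, plus the palette.
    pixel_data = TILE_W * TILE_H // 2
    palette = palette_colors * 2
    return pixel_data + palette

if __name__ == "__main__":
    print(tile_bytes_16bpp(), tile_bytes_4bpp())  # 2048 544
```

So 2048 bytes per tile drops to 544: roughly a 3.8x reduction rather than a clean 4x, because each tile carries its own 16-color palette.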
gharchive/issue
2023-09-17T20:48:37
2025-04-01T06:39:21.164063
{ "authors": [ "MesaBlack", "lambertjamesd", "rmn20" ], "repo": "lambertjamesd/n64brew2023", "url": "https://github.com/lambertjamesd/n64brew2023/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2319918209
Added two more things Command relevant to my server and the landfallcord link cool. merged
gharchive/pull-request
2024-05-28T00:22:59
2025-04-01T06:39:21.272681
{ "authors": [ "SuperAgentAlex", "ZorroSvardendahl" ], "repo": "landfallgames/tabg-word-list", "url": "https://github.com/landfallgames/tabg-word-list/pull/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2163170925
🛑 vipgifts.net is down In ca2c9d2, vipgifts.net (https://vipgifts.net) was down: HTTP code: 0 Response time: 0 ms Resolved: vipgifts.net is back up in 5b6303f after 7 minutes.
gharchive/issue
2024-03-01T11:09:42
2025-04-01T06:39:21.284970
{ "authors": [ "lanen" ], "repo": "lanen/bs-site", "url": "https://github.com/lanen/bs-site/issues/10130", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2368283040
🛑 vipgifts.net is down In ec468bd, vipgifts.net (https://vipgifts.net) was down: HTTP code: 567 Response time: 1091 ms Resolved: vipgifts.net is back up in 7a67cfc after 25 minutes.
gharchive/issue
2024-06-23T05:26:51
2025-04-01T06:39:21.288012
{ "authors": [ "lanen" ], "repo": "lanen/bs-site", "url": "https://github.com/lanen/bs-site/issues/11956", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1857693206
🛑 vipgifts.net is down In f389bcc, vipgifts.net (https://vipgifts.net) was down: HTTP code: 567 Response time: 964 ms Resolved: vipgifts.net is back up in 370137c after 610 days, 13 hours, 44 minutes.
gharchive/issue
2023-08-19T12:16:34
2025-04-01T06:39:21.291646
{ "authors": [ "lanen" ], "repo": "lanen/bs-site", "url": "https://github.com/lanen/bs-site/issues/2922", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1760594667
🛑 Plex is down In bf6e17d, Plex (http://langasg1.ddns.net:32400/web/index.html) was down: HTTP code: 0 Response time: 0 ms Resolved: Plex is back up in 432fdc2.
gharchive/issue
2023-06-16T12:58:36
2025-04-01T06:39:21.294022
{ "authors": [ "langasg" ], "repo": "langasg/upptime", "url": "https://github.com/langasg/upptime/issues/234", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2748529425
ChatAnthropicVertex prompt caching support

Hello,

As of recently, prompt caching is supposedly in preview in Vertex AI. Can you add support for it to ChatAnthropicVertex?

Thanks!

I just want to bump this, and clarify that the "regular" method I use for prompt caching in the standard ChatAnthropic causes an error:

    from langchain_core.messages import SystemMessage
    from langchain_core.prompts import ChatPromptTemplate

    content = [{
        "text": "Do something or other...",
        "type": "text",
        "cache_control": {"type": "ephemeral"}
    }]

    prompt = ChatPromptTemplate.from_messages(
        [
            SystemMessage(content=content),
            ("placeholder", "{messages}"),
        ]
    )

This method fails when giving this prompt to ChatAnthropicVertex with the error:

    File ".../python3.11/site-packages/langchain_google_vertexai/_anthropic_utils.py", line 143, in _format_messages_anthropic
      raise ValueError(
    ValueError: System message must be a string, instead was: <class 'list'>

So simply modifying it to support a list rather than a string would be enough to allow caching. Could be a quick fix.
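For illustration only, the accommodation asked for here could look something like the helper below; `normalize_system` is a hypothetical stand-in, not langchain-google-vertexai's actual internals, and it simply accepts either a plain string or a list of content blocks:

```python
# Hypothetical normalizer: accept a string or a list of Anthropic-style
# content blocks as the system prompt, preserving cache_control entries.
def normalize_system(content):
    if isinstance(content, str):
        return [{"type": "text", "text": content}]
    if isinstance(content, list):
        return content  # already content blocks; pass through unchanged
    raise ValueError(
        f"System message must be a string or list, got {type(content)}"
    )
```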
gharchive/issue
2024-12-18T18:53:14
2025-04-01T06:39:21.296958
{ "authors": [ "ShaharZivanOnvego", "jthack" ], "repo": "langchain-ai/langchain-google", "url": "https://github.com/langchain-ai/langchain-google/issues/651", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2134532581
convert_to_openai_function drops some (nested?) properties

Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangChain rather than my code.

Example Code

    from typing import Set, Literal

    from pydantic import BaseModel

    from langchain_core.utils.function_calling import convert_to_openai_function


    class UserInfos(BaseModel):
        "general information about a user"
        gender: Literal["male", "female", "other"]
        preferences: Set[Literal["games", "books"]]

Error Message and Stack Trace (if applicable)
No response

Description
The resulting function is not well defined and is missing some properties.

Output

    {
        "name": "UserInfos",
        "description": "general information about a user",
        "parameters": {
            "type": "object",
            "properties": {
                "gender": {
                    "enum": ["male", "female", "other"],
                    "type": "string"
                }
            },
            "required": ["gender", "preferences"]
        }
    }

Expected
NOTE: This is produced by the deprecated convert_pydantic_to_openai_function function.
{ "name": "UserInfos", "description": "general information about a user", "parameters": { "properties": { "gender": { "enum": [ "male", "female", "other" ], "type": "string" }, "preferences": { "items": { "enum": [ "games", "books" ], "type": "string" }, "type": "array", "uniqueItems": true } }, "required":[ "gender", "preferences" ], "type":"object" } } System Info System Information OS: Linux OS Version: #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2 Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] Package Information langchain_core: 0.1.23 langchain: 0.1.7 langchain_community: 0.0.20 langsmith: 0.0.87 langchain_openai: 0.0.6 Packages not installed (Not Necessarily a Problem) The following packages were not found: langgraph langserve Related: https://github.com/langchain-ai/langchain/issues/14899 @francisc0garcia I believe this happens if you have pydantic v2 installed and aren't using langchain_core.pydantic_v1. If you change your pydantic imports from langchain_core.pydantic_v1, should work: @baskaryan I can confirm this point, but that's a bit problematic in my case because I use some features of v2. Are there plans to upgrade pydantic to v2 soon? In the meantime I can use the convert_pydantic_to_openai_function function. @baskaryan I can confirm this point, but that's a bit problematic in my case because I use some features of v2. Are there plans to upgrade pydantic to v2 soon? In the meantime I can use the convert_pydantic_to_openai_function function. Note you can have pydantic v2 installed and use langchain_core.pydantic_v1, but yea under the hood it'll use pydantic.v1 classes so it won't have all the pydantic v2 features. A lot of our community still runs on pydantic v1 so we definitely want to continue supporting it for the moment. Hard to estimate when we'll fully switch to v2 since that depends on factors outside of our control (ie what % of our users need pydantic v1 support).
gharchive/issue
2024-02-14T14:53:30
2025-04-01T06:39:21.307606
{ "authors": [ "baskaryan", "francisc0garcia", "yoch" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/issues/17531", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2155814048
Langchain Expression Language (LCEL) pass-through does not work with two consecutive chains Checked other resources [X] I added a very descriptive title to this issue. [X] I searched the LangChain documentation with the integrated search. [X] I used the GitHub search to find a similar question and didn't find it. [X] I am sure that this is a bug in LangChain rather than my code. Example Code There are cases where a user needs to pass variables through more than one chain for later use, but the current implementation doesn't support this. A reproducible example, following the RAG LangChain Expression Language example from https://python.langchain.com/docs/expression_language/cookbook/retrieval:

from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    ### Only line added to the example
    | {'context': itemgetter('context'), "question": itemgetter('question')}
    | prompt
    | model
    | StrOutputParser()
)

chain.invoke("where did harrison work?")

Error Message and Stack Trace (if applicable) Traceback (most recent call last): File "/Users/zhengisamazing/1.python_dir/vigyan-llm-api/dev/langchain_playground.py", line 110, in chain.invoke("where did harrison work?") File
"/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in invoke output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in output = {key: future.result() for key, future in zip(steps, futures)} ^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 456, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3504, in invoke return self._call_with_config( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1243, in _call_with_config context.run( File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3378, in _invoke output = call_func_with_variable_args( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", 
line 326, in call_func_with_variable_args return func(input, **kwargs) # type: ignore[call-arg] ^^^^^^^^^^^^^^^^^^^^^ TypeError: string indices must be integers, not 'str' Description There are cases where user needs to pass through variables for more than one chains for later use, but current implementation doesnt support this. Provided reproducible example following on RAG langchain expression language example from https://python.langchain.com/docs/expression_language/cookbook/retrieval System Info langchain==0.1.7 langchain-cli==0.0.21 langchain-community==0.0.20 langchain-core==0.1.27 langchain-google-genai==0.0.9 langchain-openai==0.0.6 platform: mac python version:3.11.7 The reason this doesn't work is because python parses the AST in left to right order. You are piping two stdlib python dicts together before it touches any langchain code, which implicitly deletes the first dict. See for yourself: {"context": retriever, "question": RunnablePassthrough()} | { "context": itemgetter("context"), "question": itemgetter("question"), } Results in output: {'context': operator.itemgetter('context'), 'question': operator.itemgetter('question')} To fix, explicitly create the langchain object using RunnableParallel: RunnableParallel({"context": retriever, "question": RunnablePassthrough()}) ### Only line added to the example | {'context': itemgetter('context'), "question": itemgetter('question')} | prompt | model | StrOutputParser() ) Then the first function in the sequence is a langchain object, which can be composed with dicts, runnables, etc. as intended. I ran into a similar issue, and it took me a while to figure out that I needed to replace dicts with RunnableParallel. I would suggest either: Updating the docs and examples to make this behavior clear and explicit (particularly the "TIP" section on https://python.langchain.com/docs/expression_language/primitives/parallel/, which states that a dict and a RunnableParallel are equivalent. In this case they aren't.) 
Finding a way to make this syntax work. It is easy to run into this as soon as you want to implement a somewhat complex chain, and it's not intuitive for a non-Python expert that a dict would work for one step but not for a second one.
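The left-to-right parsing point is easy to verify without LangChain at all: on Python 3.9+, `|` between two plain dicts is dict union (PEP 584), and the right-hand operand wins for every duplicate key, so the first mapping (with the retriever) is silently discarded before LCEL ever sees it. The placeholders below stand in for the real runnables in the issue:

```python
from operator import itemgetter

# stand-ins for the real retriever / RunnablePassthrough in the issue
first = {"context": "retriever-stand-in", "question": "passthrough-stand-in"}
second = {"context": itemgetter("context"), "question": itemgetter("question")}

# dict | dict merges the two mappings; right-hand values win on key clashes,
# so the retriever stand-in vanishes before any LangChain code runs
merged = first | second

assert merged == second
assert merged["context"] is second["context"]
assert "retriever-stand-in" not in merged.values()
```

Wrapping the first mapping in RunnableParallel, as suggested above, makes the left operand a Runnable, so `|` dispatches to LangChain's composition instead of dict union.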
gharchive/issue
2024-02-27T06:57:56
2025-04-01T06:39:21.327490
{ "authors": [ "SoulEvill", "hinthornw", "yfontana" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/issues/18173", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2287481626
DOC: No example of usage implementation is provided for the langchain.chains.query_constructor.base.load_query_constructor_runnable function Checklist [X] I added a very descriptive title to this issue. [X] I included a link to the documentation page I am referring to (if applicable). Issue with current documentation: Description: Currently, the load_query_constructor_runnable function documentation lacks usage examples or scenarios, making it challenging for developers to understand. URL to the documentation: https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.base.load_query_constructor_runnable.html#langchain.chains.query_constructor.base.load_query_constructor_runnable Idea or request for content: I tried running the function and below is the complete code and output:

from langchain.chains.query_constructor.base import load_query_constructor_runnable
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain_openai import ChatOpenAI
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
    StructuredQuery,
)

# Define your document contents and attribute information
document_contents = """
product_name: Widget, price: $20
product_name: Gadget, price: $35
product_name: Gizmo, price: $50
"""

attribute_info: AttributeInfo = [
    {"name": "product_name", "type": "string"},
    {"name": "price", "type": "number"},
]

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5)

# Create a runnable for constructing queries
runnable = load_query_constructor_runnable(
    llm=model,
    document_contents=document_contents,
    attribute_info=attribute_info,
    allowed_comparators=[Comparator.EQ, Comparator.LT, Comparator.GT],
    allowed_operators=[Operator.AND, Operator.NOT, Operator.OR],
    enable_limit=True,
    schema_prompt="Describe the query schema using allowed comparators and operators.",
    fix_invalid=True,
)

# Now you can use the runnable to construct queries based on user input
user_input
= "Show me products with price less than 30" query = runnable.middle[0].invoke(user_input).content print(f"Constructed query: {query}") Output: Constructed query: 1. Wireless Bluetooth Earbuds - $29.99 2. Portable Phone Charger - $24.99 3. Travel Makeup Bag - $19.99 4. Insulated Water Bottle - $15.99 5. LED Desk Lamp - $27.99 6. Resistance Bands Set - $12.99 7. Stainless Steel Mixing Bowls - $19.99 8. Yoga Mat - $24.99 9. Essential Oil Diffuser - $28.99 10. Electric Handheld Milk Frother - $14.99 However the output is wrong and is not providing the references to the original documents provided. Needed usage implementation. run you code, and client will send prompt as following: Your goal is to structure the user\'s query to match the request schema provided below. Describe the query schema using allowed comparators and operators. << Example 1. >> Data Source: '''json { "content": "Lyrics of a song", "attributes": { "artist": { "type": "string", "description": "Name of the song artist" }, "length": { "type": "integer", "description": "Length of the song in seconds" }, "genre": { "type": "string", "description": "The song genre, one of "pop", "rock" or "rap"" } } } ''' User Query: What are songs by Taylor Swift or Katy Perry about teenage romance under 3 minutes long in the dance pop genre Structured Request: '''json { "query": "teenager love", "filter": "and(or(eq(\\"artist\\", \\"Taylor Swift\\"), eq(\\"artist\\", \\"Katy Perry\\")), lt(\\"length\\", 180), eq(\\"genre\\", \\"pop\\"))" } ''' << Example 2. 
>> Data Source: '''json { "content": "Lyrics of a song", "attributes": { "artist": { "type": "string", "description": "Name of the song artist" }, "length": { "type": "integer", "description": "Length of the song in seconds" }, "genre": { "type": "string", "description": "The song genre, one of "pop", "rock" or "rap"" } } } ''' User Query: What are songs that were not published on Spotify Structured Request: '''json { "query": "", "filter": "NO_FILTER" } ''' << Example 3. >> Data Source: '''json { "content": "Lyrics of a song", "attributes": { "artist": { "type": "string", "description": "Name of the song artist" }, "length": { "type": "integer", "description": "Length of the song in seconds" }, "genre": { "type": "string", "description": "The song genre, one of "pop", "rock" or "rap"" } } } ''' User Query: What are three songs about love Structured Request: '''json { "query": "love", "filter": "NO_FILTER", "limit": 2 } ''' << Example 4. >> Data Source: '''json { "content": "Hardware Products Price List", "attributes": { "product_name": { "type": "string" }, "price": { "type": "number" } } } ''' User Query: Show me products with price less than 30 Structured Request: so I change the document_contents content, and get the correct answer. 
# Define your document contents and attribute information
document_contents = "Hardware Products Price List"

attribute_info: AttributeInfo = [
    {"name": "product_name", "type": "string"},
    {"name": "price", "type": "number"},
]

# Create a runnable for constructing queries
runnable = load_query_constructor_runnable(
    llm=llm,
    document_contents=document_contents,
    attribute_info=attribute_info,
    allowed_comparators=[Comparator.EQ, Comparator.LT, Comparator.GT],
    allowed_operators=[Operator.AND, Operator.NOT, Operator.OR],
    enable_limit=True,
    schema_prompt="Describe the query schema using allowed comparators and operators.",
    fix_invalid=True,
)

# Now you can use the runnable to construct queries based on user input
user_input = "What are products that price less than 30"
query = runnable.invoke(user_input)
print(f"Constructed query: {query}")

You can try it; query will be a StructuredQuery object.
gharchive/issue
2024-05-09T11:22:20
2025-04-01T06:39:21.335936
{ "authors": [ "moneebullah25", "wood001" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/issues/21478", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1958187003
updated integrations/providers/microsoft Added several missing tools, utilities, and toolkits to the Microsoft page. amazing, thanks @leo-gan!
gharchive/pull-request
2023-10-23T23:19:59
2025-04-01T06:39:21.337315
{ "authors": [ "baskaryan", "leo-gan" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/pull/12177", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2737997302
docs: added region parameter for awsBedrockParamsOrDefault in ChatModelTabs.js Thank you for contributing to LangChain! [X] PR title: "package: description" Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes. Example: "community: add foobar LLM" [X] PR message: **Description:** This PR changes the docs in ChatModelTabs.js. In the default parameters of awsBedrockParamsOrDefault, region should be mandatory for ChatBedrock; without region you get a validation error, so region should be there. Twitter handle: https://twitter.com/BhargavPrince18 Additional guidelines: Make sure optional dependencies are imported within a function. Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests. Most PRs should not touch more than one package. Changes should be backwards compatible. If you are adding something to community, do not re-import it in langchain. If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17. in general the docs recommend setting this in an environment variable (similar to access creds), so will close this instead of adding to that block! Everyone does that, but people need to know that they have to use the region parameter inside ChatBedrock. I was following the docs and got the error; it took me almost 20 minutes to understand that I had to pass the region parameter inside awsBedrockParams. Let's at least include the region parameter: remove the us-east-1 value and users will give whatever they want. Got it. We can consider linking the api ref for the overall classes in the tabs, but in general these tabs aren't for documenting the end-to-end use of the provider - they're just showing how the chat models are used in each. So can I link API references to the model classes in each tab?
gharchive/pull-request
2024-12-13T10:00:42
2025-04-01T06:39:21.344168
{ "authors": [ "Bhargav2525", "efriis" ], "repo": "langchain-ai/langchain", "url": "https://github.com/langchain-ai/langchain/pull/28706", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1910299017
Retry logic for OpenAI timeouts I'm seeing the following error in prod: Error [TimeoutError]: Request timed out. at wrapOpenAIClientError (file:///app/node_modules/langchain/dist/util/openai.js:6:17) at file:///app/node_modules/langchain/dist/chat_models/openai.js:518:31 at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async RetryOperation._fn (/app/node_modules/p-retry/index.js:50:12) { attemptNumber: 1, retriesLeft: 6 } It's getting captured in my catch block, so I'm fairly sure the retries aren't happening, unless the first attempt is the one that gets re-thrown or something confusing like that. Is it possible this doesn't meet the criteria for retryable? Could this be addressed using the FailedAttemptHandler interface? I've not set a timeout for the LLM. I'm having a hard time figuring out the default value. /** * Custom handler to handle failed attempts. Takes the originally thrown * error object as input, and should itself throw an error if the input * error is not retryable. */ onFailedAttempt?: FailedAttemptHandler; The default failure handler looks like the culprit: const STATUS_NO_RETRY = [ 400, // Bad Request 401, // Unauthorized 402, // Payment Required 403, // Forbidden 404, // Not Found 405, // Method Not Allowed 406, // Not Acceptable 407, // Proxy Authentication Required 408, // Request Timeout // <<<<<<<<<<<<<<<<<<<< 409, // Conflict ]; const defaultFailedAttemptHandler = (error: any) => { if ( error.message.startsWith("Cancel") || error.message.startsWith("TimeoutError") || // <<<<<<<<<< error.name === "TimeoutError" || error.message.startsWith("AbortError") || error.name === "AbortError" ) { throw error; } // eslint-disable-next-line @typescript-eslint/no-explicit-any if ((error as any)?.code === "ECONNABORTED") { throw error; } const status = // eslint-disable-next-line @typescript-eslint/no-explicit-any (error as any)?.response?.status ?? 
(error as any)?.status; if (status && STATUS_NO_RETRY.includes(+status)) { throw error; } // eslint-disable-next-line @typescript-eslint/no-explicit-any if ((error as any)?.error?.code === "insufficient_quota") { const err = new Error(error?.message); err.name = "InsufficientQuotaError"; throw err; } }; However, reviewing OpenAI's documentation: A `Timeout` error indicates that your request took too long to complete and our server closed the connection. This could be due to a network issue, a heavy load on our services, or a complex request that requires more processing time. If you encounter a Timeout error, please try the following steps: **Wait a few seconds and retry your request.** Sometimes, the network congestion or the load on our services may be reduced and your request may succeed on the second attempt. Check your network settings and make sure you have a stable and fast internet connection. You may need to switch to a different network, use a wired connection, or reduce the number of devices or applications using your bandwidth. If the issue persists, check out our persistent errors next steps section. It sounds like this should be retryable to me. Going to try this out in my service, will make a PR if it solves my problem ^ Some more details here: https://github.com/openai/openai-node/blob/master/README.md#retries Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default. You can use the maxRetries option to configure or disable this: Maybe this is all we need, to increase maxRetries on the OAI client? @wcummings Thanks for hunting this down. Did that change solve the problem? If so, will you submit a PR? 
I just found out that retries are not happening at all, despite setting maxRetries to a valid value in the langchain ChatOpenAI object. This sort of bug remaining unsolved for so long makes me doubt whether anyone uses langchainjs at all. Can the maintainers fix this? +1 would like to see this implemented I can confirm I am seeing this same issue where no retry is happening even when I pass a maxRetries param. Will have to switch to openAI's native lib which has this working until it is fixed. For anyone else following this issue, the change made in https://github.com/langchain-ai/langchainjs/issues/2706#issuecomment-1734422202 was to remove "TimeoutError" from the list of things not to retry. I can confirm that does fix the issue. That said, we can't just make that change to fix this because this handler is used for more than just openai. I notice there are many places where we explicitly set maxRetries to 0 in calls to the native openai lib. Perhaps the best route would be to change that to use the maxRetries value for the langchain openai model? @codenameakshay what is the status of this PR that is supposed to fix maxRetries param not working? @codenameakshay what is the status of this PR that is supposed to fix maxRetries param not working? The PR doesn't actually fix the bug as I discussed with @jacoblee93. It is still an open issue. See https://github.com/langchain-ai/langchainjs/pull/3370#discussion_r1402660699 I do still really want to get to the bottom of it 😕 but yeah we need to differentiate between user defined timeouts, which probably shouldn't be retried as the user expects some resolution in a timeframe, vs OpenAI default timeouts. I am not following. Why would the timeouts be different? Seems like we are just dealing with a timeout value that has a default if the user doesn’t supply one. Since OpenAI already handles retrying timeouts why do we need langchain to try and handle retries on timeouts as well? 
Couldn’t we just pass the user timeout value to OpenAI? Or maybe it is less about timeouts and more about retries. Since OpenAI now handles retries in the library natively, seems like we should just let it do its thing rather than use a separate mechanism outside the library. If that was their desire won't they just setup retries to 0? Yeah but now maxRetries param is not working as expected so I think this is necessary change. I'm surprised to see that the bot's suggestion above (https://github.com/langchain-ai/langchainjs/issues/2706#issuecomment-1732617130) was disliked that much. It actually inspired us to come up with a working solution: extend LangchainLLMChain into a class that calls withRetry() (owned by one of its ancestor class: Runnable) every time we call invoke. class LLMChainWithRetry extends LangchainLLMChain { async call(values, config = undefined) { const runnableChain = super.withRetry({ stopAfterAttempt: 3, onFailedAttempt: (error) => { if (error.name === 'TimeoutError') { console.log(`[LLMChainWithRetry] Attempt ${error.attemptNumber} failed. There are ${error.retriesLeft} retries left.`); } else { throw error; } } }); return await runnableChain.invoke(values, config); } } Oh, you should also just pass onFailedAttempt where you'd be able to pass maxRetries and it would work as well. I don't think you'd need to subclass. Closing for now given the above. I don't understand why this is being closed. Seems to me that there is still a clear bug here even if there is a workaround. Because it's not clear what should be happening IMO Does anyone know how this is handled in the python version? Could that perhaps guide us to a solution? IMHO it is fine to make a breaking change and make this retryable. The timeout is not the same as a deadline, and if a request is retried for any other reason the total request time will exceed the timeout anyway. 
Here's a draft of what I did in my codebase: https://github.com/langchain-ai/langchainjs/pull/4633 Thanks for all your patience and especially @wcummings for the PR! New behavior will be live in the next core release (probably today).
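For readers skimming the thread, the no-retry classification quoted at the top can be condensed into a small sketch (Python here for illustration; it mirrors the decision made by the TypeScript defaultFailedAttemptHandler but is not the library's code):

```python
from typing import Optional

# statuses the handler refuses to retry, copied from the thread above
STATUS_NO_RETRY = {400, 401, 402, 403, 404, 405, 406, 407, 408, 409}

def should_retry(name: str, message: str, status: Optional[int] = None) -> bool:
    """Return False for the cases the default handler re-throws (i.e. no retry)."""
    if message.startswith(("Cancel", "TimeoutError", "AbortError")):
        return False
    if name in ("TimeoutError", "AbortError"):
        return False
    if status is not None and status in STATUS_NO_RETRY:
        return False
    return True

# the prod error from the top of the thread has name "TimeoutError",
# so it is never retried; this is the behavior later relaxed for
# OpenAI default timeouts
assert should_retry("TimeoutError", "Request timed out.") is False
assert should_retry("APIError", "server error", status=500) is True
```

The fix discussed above amounts to carving OpenAI's own request timeouts out of this blanket "never retry a TimeoutError" rule.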
gharchive/issue
2023-09-24T16:37:52
2025-04-01T06:39:21.358598
{ "authors": [ "SagefulAI", "adrienjoly", "codenameakshay", "dhruv-anand-aintech", "drewB", "getcreatr", "icelic", "jacoblee93", "stevenmilstein", "wcummings" ], "repo": "langchain-ai/langchainjs", "url": "https://github.com/langchain-ai/langchainjs/issues/2706", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1987000241
Can "custom domain blocking" add an "advanced search mode"? When searching for technical articles, I need to block Baidu Wenku and CSDN, but after blocking, the page is usually left with only one or two results, or even none at all (everything got blocked). Essentially, this is because custom domain blocking deletes (or hides) the matching search-result elements directly, and results on the next page that are not blacklisted do not move up to the previous page. Could you add an option that, instead of blocking matching results, uses the advanced search feature to automatically append the blocked sites to the search keywords? That way the search engine's native functionality filters out unwanted results, and it also avoids the blank pages caused by static blocking. I have considered this; the results were not great, it is not really worth implementing, and it would easily lead many novice users into mis-operations and all sorts of problems. Thanks for the feedback.
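The "advanced search" approach the issue proposes is the engines' own exclusion operator: the script would append `-site:` terms to the query instead of hiding result elements. An illustrative query (the domains here are just examples of what the user wants to block):

```
original query -site:wenku.baidu.com -site:blog.csdn.net
```

Because the engine itself drops the excluded sites, each results page stays full instead of going blank after client-side filtering.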
gharchive/issue
2023-11-10T06:28:30
2025-04-01T06:39:21.367341
{ "authors": [ "Sallee1", "langren1353" ], "repo": "langren1353/GM_script", "url": "https://github.com/langren1353/GM_script/issues/656", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2329544323
In single-column mode, some of Bing search's buttons are badly misaligned, and a custom background image does not display properly In single-column mode, Bing search is badly misaligned and the custom background image I set does not display properly; Baidu is fine. Background image link: https://i0.hdslb.com/bfs/new_dyn/602a9994ec3ce962f9eb0cf5a75be0e2233114659.jpg The misalignment issue is a separate matter. The image is not visible; please switch to another image host, for example: https://imgur.la/
gharchive/issue
2024-06-02T08:54:59
2025-04-01T06:39:21.369715
{ "authors": [ "gugulu18", "langren1353" ], "repo": "langren1353/GM_script", "url": "https://github.com/langren1353/GM_script/issues/687", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2466785888
[trivial][bugfix] add include guard PR Summary In PR #330 I missed an include guard in the new tests. Here the guard is added. Resolves #405 . PR Checklist [ ] Adds a test for any bugs fixed. Adds tests for new features. [x] Format your changes by using the make format command after configuring with cmake. [ ] Document any new features, update documentation for changes made. [ ] Make sure the copyright notice on any files you modified is up to date. [ ] After creating a pull request, note it in the CHANGELOG.md file. [ ] LANL employees: make sure tests pass both on the github CI and on the Darwin CI If preparing for a new release, in addition please check the following: [ ] Update the version in cmake. [ ] Move the changes in the CHANGELOG.md file under a new header for the new release, and reset the categories. [ ] Ensure that any when='@main' dependencies are updated to the release version in the package.py As a note for the future (or an additional one line change to add to this MR) would we maybe want to consider removing the spiner build from the minimal tests on github? https://github.com/lanl/singularity-eos/blob/09cf65cd06eb249ed6f7de736f8c1f4165020a78/.github/workflows/tests_minimal.yml#L31 As a note for the future (or an additional one line change to add to this MR) would we maybe want to consider removing the spiner build from the minimal tests on github? https://github.com/lanl/singularity-eos/blob/09cf65cd06eb249ed6f7de736f8c1f4165020a78/.github/workflows/tests_minimal.yml#L31 Good suggestion. :+1: Done.
gharchive/pull-request
2024-08-14T20:38:24
2025-04-01T06:39:21.386868
{ "authors": [ "Yurlungur", "jhp-lanl" ], "repo": "lanl/singularity-eos", "url": "https://github.com/lanl/singularity-eos/pull/406", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1708653356
cannot resolve 3jehr.log.xxx.online: Unknown host ping log.xxx.online resolves, but ping 3jehr.log.xxx.online reports an unknown host. Also, the DNS configuration section in the README is too sketchy and a bit messy; people configuring this for the first time find it very confusing. Solved it by following other tutorials. How did you solve it? I ran into the same problem.
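The symptom (the apex name resolves, but a random subdomain does not) usually means the wildcard delegation to the DNSlog server is missing. A typical two-record setup, sketched with illustrative names and IP (this is an assumption about a standard DNSlog deployment, not taken from this project's README):

```
; at the registrar / zone for xxx.online, assuming the DNSlog-GO
; server listens on UDP port 53 at the public IP 1.2.3.4
ns1.xxx.online.   A    1.2.3.4          ; glue: name server host -> DNSlog server
log.xxx.online.   NS   ns1.xxx.online.  ; delegate *.log.xxx.online to it
```

After delegation, queries for any label under log.xxx.online (such as 3jehr.log.xxx.online) are forwarded to the DNSlog server, which answers them and records the hit.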
gharchive/issue
2023-05-13T15:21:17
2025-04-01T06:39:21.396930
{ "authors": [ "dead5nd", "ybdt" ], "repo": "lanyi1998/DNSlog-GO", "url": "https://github.com/lanyi1998/DNSlog-GO/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1251876379
Added 500ms debounce to key value field inputs @danharrin As we discussed in the discord channel key value field is now pretty hard to type when used with ->reactive(). Until we can debounce $entangle statement this should do the trick. Thanks
gharchive/pull-request
2022-05-29T12:30:15
2025-04-01T06:39:21.409665
{ "authors": [ "alperenersoy", "danharrin" ], "repo": "laravel-filament/filament", "url": "https://github.com/laravel-filament/filament/pull/2598", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
217251213
Artisan commands will not show line numbers of errors on PHP 7+ Laravel Version: 5.4.16 (and previous) PHP Version: 7+, 7.1+ Description: When running an artisan command on Windows that contains an error, the error message is shown without a line number on PHP 7+. On PHP 5.6 the error is shown with the line number. This does not happen on all files; for instance, adding an error to the actual artisan file will show the line number, but adding an error to MigrateCommand.php will not. Steps To Reproduce: Windows with PHP 7+ Create a fresh install of Laravel. Add an error to \artisan and run php artisan - the line number will be shown with the error Remove the above error and add one to \vendor\laravel\framework\src\Illuminate\Database\Console\Migrations\MigrateCommand.php - the error will be shown without the line number PHP 5.6 PHP 7 @SirriC can you try running a PHP script that throws an error? Do you see line numbers? @Dylan-DPC Yes, if I have a simple PHP file with an error it'll show the line number. I also get line numbers up to a point in Laravel, so I can edit some files and see them, but then they stop showing. I think it might be when the errors are handled by Symfony - see the bottom screenshot in my original post. The first time I run artisan it finds the error on line 16. On the second run the error is much deeper into the application and no line number is shown. It turns out this is not just on Windows. My colleague was actually running PHP 5.6. I have also tried PHP 7.1 on two Linux machines and neither shows line numbers for errors. Try running with the -v option Use --verbose to get more information about the error if you want. This is still a serious issue, and -v or --verbose has no effect in my install. Why is this closed? Agreed, having the same issue here.... I'm having the same issue. Using Windows 10, PHP 5.6, Laravel 5.4.
--verbose flag doesn't work: For those experiencing a similar issue, I found that it's logging the error more verbosely in storage/logs/laravel.log. Seems to be an issue with the php.ini shipped with PHP >= 7.0. Since Xdebug 2.4 there's a new option available, xdebug.show_error_trace; this should be set to 1 in your Xdebug configuration (or just your php.ini). Note that this should be the php.ini for CLI, not FPM. I.e., if using Homestead just put: xdebug.show_error_trace=1 somewhere in /etc/php/7.1/cli/php.ini and it should work. Discussion on Twitter about this: https://twitter.com/mattiasgeniar/status/905450118152953857
gharchive/issue
2017-03-27T13:46:40
2025-04-01T06:39:21.423418
{ "authors": [ "Dylan-DPC", "SirriC", "austinjherman", "marathonstudios", "ntzm", "olssonm", "themsaid", "websanova" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/18515", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
276600169
Translator case insensitivity on Windows Laravel Version: 5.5.21 PHP Version: 7.0.10 Database Driver & Version: Irrelevant Description: The translator class looks for the translation file in a case-insensitive manner on Windows, causing it to look up a PHP translation file instead of looking for a string in the JSON file. Expected result: a string from the JSON translation file (e.g. resources/lang/en.json) Actual result: the content from a PHP translation file (e.g. resources/lang/en/faq.php) __('faq') should return the contents of the file resources/lang/en/faq.php. __('FAQ') (or any other non-lowercase variant) should return a string from resources/lang/en.json. Steps To Reproduce: Create a file named faq.php in resources/lang/en and make it return an array: <?php return [ 'key' => 'value', ]; Make sure resources/lang/en.json does not exist or does not contain the key FAQ. Call __('FAQ') or app('translator')->getFromJson('FAQ') from a controller. The quick fix for this issue is to add the keys causing problems to resources/lang/en.json. Yeah you need to watch out for this edge case if you're using both translation methods in your project. @themsaid This is not very clear from reading the documentation page. Also, I assume this is not an issue on Unix-like systems, where file names are case-sensitive. I haven't tested it yet. Laravel will first check for a JSON translation string before trying to find a PHP file, so yes, including a JSON translation line for that key would fix the issue.
gharchive/issue
2017-11-24T12:13:13
2025-04-01T06:39:21.429709
{ "authors": [ "royvanv", "themsaid" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/22196", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
697235452
Comma at the end of the method call is invalid https://github.com/laravel/framework/blob/a4c365f9a964f1d9346a662b426476dc1bffc660/src/Illuminate/Bus/BusServiceProvider.php#L50 Error is thrown when booting the app require '/vendor/autoload.php'; $app = require('/bootstrap/app.php'); $app->make(\Illuminate\Contracts\Console\Kernel::class)->bootstrap(); ParseError syntax error, unexpected ')' at /vendor/laravel/framework/src/Illuminate/Bus/BusServiceProvider.php:51 47▕ return new DatabaseBatchRepository( 48▕ $app->make(BatchFactory::class), 49▕ $app->make('db')->connection(config('queue.batching.database')), 50▕ config('queue.batching.table', 'job_batches'), ➜ 51▕ ); 52▕ }); 53▕ } 54▕ 55▕ /** This is valid syntax in PHP 7.3, which is the minimum supported version for Laravel 8. https://wiki.php.net/rfc/trailing-comma-function-calls Make sure that your PHP-FPM is running PHP 7.3+. Thanks @X-Coder264, I'm dumb... was running my test on 7.2
gharchive/issue
2020-09-09T23:30:15
2025-04-01T06:39:21.434053
{ "authors": [ "X-Coder264", "keepanitreel" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/34242", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
777625440
routes with 'group' as binding does not work Laravel Version: 8.20.1 PHP Version: 7.4 Database Driver & Version: Description: After upgrading from version 6 to the latest, all routes having a 'group' binding fail to work. Changing it to anything else works perfectly. The behavior is reproducible with and without using explicit binding. All other 100+ bindings are totally fine. Steps To Reproduce: create route in following format Route::get('groups/{group}', 'GroupsController@show'); 1.1 update RouteServiceProvider to use default namespace for above to work create controller and model php artisan make:controller --api --model=Group GroupsController create new table and fill with few rows or set any existing table to be used with model make a call to route above with id existing in table above exception "Target class [Group] does not exist." thrown even with explicit binding being set like that Route::model('group', \App\Models\Group::class); Unable to recreate. Works for me.
gharchive/issue
2021-01-03T12:08:48
2025-04-01T06:39:21.438394
{ "authors": [ "danikp", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/35767", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1075972723
Deprecated warning Laravel Version: 8.75.0 PHP Version: 8.1.0 Description: I'm getting several deprecation warnings when running tests in phpunit; the same message repeats throughout the run, for example: Deprecated: explode(): Passing null to parameter #2 ($string) of type string is deprecated in /var/www/html/vendor/laravel/framework/src/Illuminate/Collections/Arr.php on line 276 Try adding this line to your logging.php file: https://github.com/laravel/laravel/pull/5711/files#diff-9ba917899def7f26cdd2c34da749863fbb2797a40aed5e41a1166a022b10d7dfR33
gharchive/issue
2021-12-09T20:20:14
2025-04-01T06:39:21.441267
{ "authors": [ "driesvints", "yurih567" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/39964", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
56938545
Registering middleware conditionally In L4, we used to be able to load service providers based on the environment using append_config. The recommended way now in L5 is to instead load it conditionally in AppServiceProvider: if ($this->app->environment('local')) { $this->app->register('LocalOnlyServiceProvider'); } The same can't be said for middleware. There seems to currently be no way to do the same for middleware. The only way I was able to accomplish conditional middleware loading was by extending the app kernel's constructor and loading it there. Ugh :cry: Another possible solution might be to create my own ConditionalMiddleware that always runs, and then pass the request through to various additional middleware conditionally. Again, ugh! For reference: slack discussion I think there should be a built-in way to do this, but I'm not sure of the exact approach to take. Ideas? Perhaps add the middleware option back to App, like before? That was also proposed to simplify middleware in packages: $this->app->middleware('MyConditionalMiddleware'); Doesn't the order of middlewares matter? This partially duplicates https://github.com/laravel/framework/issues/6211. Closing this. Taylor says to just do it in the kernel's handle method, like this.
gharchive/issue
2015-02-08T04:29:26
2025-04-01T06:39:21.445840
{ "authors": [ "Arrilot", "GrahamCampbell", "JosephSilber", "barryvdh" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/issues/7331", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
274620450
[5.6] Allow different collection classes to be returned Just an idea -- this PR would allow different collection classes to be returned by setting a property instead of the newCollection method. Would rather people just override the method.
gharchive/pull-request
2017-11-16T18:39:51
2025-04-01T06:39:21.446856
{ "authors": [ "mateusjatenee", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/22104", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
707363632
[8.x] Allow dynamic factory methods to obey newFactory method on model PR is for issue #34490 Benefit is that models that have to define a factory via the newFactory method can utilise the magic factory methods for has and for. What if the class isn't using that trait?
gharchive/pull-request
2020-09-23T13:23:39
2025-04-01T06:39:21.448065
{ "authors": [ "btaskew", "taylorotwell" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/34492", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1222656101
[9.x] Add to_action helper After replacing the route redirects with the to_route helper, it makes sense to use the same naming for action redirects. I appreciate the consistency, but I'm not a huge fan of action based routing. Controllers can be moved / renamed, etc. which makes action routing a bit brittle compared to named routes.
gharchive/pull-request
2022-05-02T09:08:44
2025-04-01T06:39:21.449388
{ "authors": [ "taylorotwell", "usernotnull" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/42214", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1985382793
[10.x] ExpectsTable fails if new table prompt method is used This PR tries to fix the following bug: <?php namespace App\Console\Commands; use Illuminate\Console\Command; use function Laravel\Prompts\table; class DemoCommand extends Command { protected $signature = 'run-demo-command'; protected $description = 'Command description'; public function handle(): void { table(['name', 'email'], [['joe doe', 'joe.doe@example.com']]); } } // test file namespace Tests\Feature; use Tests\TestCase; class ExampleTest extends TestCase { /** * A basic test example. */ public function test_demo_command(): void { $this->artisan('run-demo-command') ->expectsTable(['name', 'email'], [['joe doe', 'joe.doe@example.com']]); } } // There was 1 failure: // 1) Tests\Feature\ExampleTest::test_demo_command // Output "+---------+---------------------+" was not printed Based on my understanding, when a table is rendered, the method 'write' from the OutputStyle class is called and not 'doWrite' from BufferedOutput. That's why the expectation fails. I tried to fix it, but I don't know what to pass to the mock here for the $table variable: $mock->shouldReceive('write') ->once() ->ordered() ->with($table, Mockery::any()) ->andReturnUsing(function () use ($i) { unset($this->test->expectedTables[$i]); }); Feel free to resend this once you have time.
gharchive/pull-request
2023-11-09T11:03:35
2025-04-01T06:39:21.451532
{ "authors": [ "driesvints", "jcergolj" ], "repo": "laravel/framework", "url": "https://github.com/laravel/framework/pull/48955", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1651901908
Unable to install new dependencies when using Sail Laravel Version: 10.5.1 PHP Version: 8.2.4 Database Driver & Version: mysql 8.0 OS: Ubuntu 20.04, 22.04 Description: I installed Laravel with Sail using WSL2 and whenever I tried to install new packages, it gave the following error from ZipDownloader.php: The archive may contain identical file names with different capitalization (which fails on case insensitive filesystems): ZipArchive::extractTo(/var/www/html/vendor/composer/e0174f41/symfony-psr-http-message-bridge-a125b93/ArgumentValueResolver/PsrServerRequestResolver.php): Operation failed: Operation not permitted ZipArchive::extractTo(/var/www/html/vendor/composer/e0174f41/symfony-psr-http-message-bridge-a125b93/ArgumentValueResolver/PsrServerRequestResolver.php): Operation failed: Operation not permitted Steps To Reproduce: Download Ubuntu (any version, I used 20.04, 22.04) from Microsoft Store. Open Ubuntu and start a fresh installation of Laravel Sail. Execute up command with: sail up. Install specific package like laravel/passport: sail composer require laravel/passport. Observe the error as per screenshot. I've made a search in Laracasts; a few people reporting this issue ran "composer clearcache" to address this issue. Can you try it, and let me know how it goes? No worries, if the issue persists, I will re-open this issue. I think this can be closed @nunomaduro since I have found the issue. It is probably related to the mounting operation between Windows and WSL2, not to Laravel Sail itself. I added a wsl.conf like this and my problem is solved `[boot] systemd=true [automount] enabled = true options = "metadata" mountFsTab = false [user] default=tuannpa` I found some helpful info with the links below: https://stackoverflow.com/questions/66620301/laravel-sail-on-wsl2-wrong-permissions Fstab: https://superuser.com/questions/1710001/how-do-you-configure-windows-subsystem-for-linux-2-wsl2-to-use-fstab-to-automa
gharchive/issue
2023-04-03T10:15:44
2025-04-01T06:39:21.469458
{ "authors": [ "nunomaduro", "tuannpa" ], "repo": "laravel/sail", "url": "https://github.com/laravel/sail/issues/570", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1192768106
how to use tree_templates? is there a clearer document? I want to convert a grammar to another grammar, and I can't understand the example in the document (py3 to py2) because I don't know about tree_templates, so how can I learn to use tree_templates? Thank you! try this https://lark-parser.readthedocs.io/en/latest/examples/advanced/py3to2.html
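Independent of lark's actual tree_templates API (the linked py3to2 example shows the real usage), the underlying idea is simple: match a source tree against a pattern containing variables, then substitute the captured subtrees into a target pattern. The toy sketch below — trees as nested tuples, `$`-prefixed strings as variables — only illustrates that idea and is not lark code:

```python
def match(pattern, tree, bindings=None):
    """Match `tree` against `pattern`; strings starting with '$'
    are variables that capture whole subtrees.
    (Toy: a repeated variable just overwrites its earlier capture.)"""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith("$"):
        bindings[pattern] = tree
        return bindings
    if isinstance(pattern, tuple) and isinstance(tree, tuple) \
            and len(pattern) == len(tree):
        for p, t in zip(pattern, tree):
            if match(p, t, bindings) is None:
                return None
        return bindings
    return bindings if pattern == tree else None

def substitute(template, bindings):
    """Rebuild `template`, replacing variables with captured subtrees."""
    if isinstance(template, str) and template.startswith("$"):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

# Rewriting a py3-style print(x) call into a py2-style print statement:
source = ("call", "print", ("arg", "x"))
b = match(("call", "print", "$a"), source)
target = substitute(("print_stmt", "$a"), b)
```

lark's tree_templates module plays the same match-then-substitute role over `Tree` objects parsed from template strings, which is why the py3to2 example reads as a list of source/target template pairs.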
gharchive/issue
2022-04-05T07:55:42
2025-04-01T06:39:21.482940
{ "authors": [ "HendricksRichard", "notghettolenny" ], "repo": "lark-parser/lark", "url": "https://github.com/lark-parser/lark/issues/1134", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1202812952
lex - When an error happens, how can I display all tokens matched so far? When an error happens (lark.exceptions.UnexpectedCharacters), there is usually some "Previous tokens" information such as this: Previous tokens: Token('__ANON_0', 'CL79') That only seems to contain the token immediately preceding the error, but not the ones before. Am I doing something wrong, or is there a way to display all tokens matched so far? It's possible to collect all the tokens by writing a postlexer. Another way is to parse using the interactive parser. Mind if I ask what you need it for? @erezsh I was trying to see what tokens were being matched so I could debug the lexer rules. Can I still use the postlexer in my case, where an exception is thrown (so the lexing process isn't yet complete)? Yes, the postlexer gets the tokens one by one, so if you save them somewhere (like in a global list, or inside the postlexer instance), you will have the latest list. Lark doesn't save those tokens, because we want to support memory-efficient streaming. But perhaps we could do it when debug=True. Lark doesn't save those tokens, because we want to support memory-efficient streaming. But perhaps we could do it when debug=True. That would be wonderful for ease of development
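A sketch of the postlexer approach suggested above. As far as the protocol goes (treat the details as assumptions about lark's API), a post-lexer is an object with a `process` method that receives the token stream and yields tokens onward, plus an `always_accept` attribute; passing an instance as `postlex=` to `Lark(..., parser='lalr')` lets it record everything the lexer produced before the error:

```python
class TokenRecorder:
    """Pass-through post-lexer that remembers every token it forwards.

    Assumed lark protocol: an object with process(stream) and an
    always_accept attribute can be passed as postlex=.
    """
    always_accept = ()  # no extra terminal types requested

    def __init__(self):
        self.seen = []   # inspect this after an UnexpectedCharacters error

    def process(self, stream):
        # stream is the iterator of tokens produced by the lexer
        for token in stream:
            self.seen.append(token)
            yield token
```

When `parser.parse(text)` raises, `recorder.seen` then holds every token the lexer matched before the failure — not just the single "Previous tokens" entry shown in the exception message.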
gharchive/issue
2022-04-13T06:17:44
2025-04-01T06:39:21.486754
{ "authors": [ "erezsh", "mbBRCM" ], "repo": "lark-parser/lark", "url": "https://github.com/lark-parser/lark/issues/1135", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
532636619
question: is incremental parsing possible? I have the use case where I need to parse a text line by line. At the first line, the parser can start at its start position. At the end of the line, the parser should store its state (e.g. in which context is is) in a hashable object. When parsing any line that is not the first line, the state of the end of the previous line is restored and the current line is parsed. (This is how QSyntaxHighlighter (from Qt) is organized. When the user types in a certain line, that line is re-highlighted. And when its state at the end is different than it was beforehand, the next line is also re-highlighted, and so on.) Would this be possible using the lark parser? From my own experimentation and docs reading I can only parse a full text (or file) in one go. (I wrote the simple module slexer.py that can do this years ago, but maintaining large grammars with that module is becoming quite tedious, and I would really like to use lark instead of my own module.) Lark currently doesn't support such a feature. But, it shouldn't be too hard to implement, at least for the LALR algorithm. Thanks for your quick reply :-) I was already trying to use the serialize() method but that complained about a missing '__serialize_fields__' attribute. Serialize would work, but it would be very inefficient, because it would reconstruct all the instances. Still, if you want to use it, see the correct usage here: https://github.com/lark-parser/lark/blob/master/lark/tools/standalone.py#L105 @erezsh This is an interesting use case for everything we are currently talking about. Yes, I agree. It should be possible to let the parser save its state at certain points, using a "puppet" (or a capsule, or however we name it), and then allow the user to resume from either one of the save points. It will affect performance, but it might be good enough for the purpose of an IDE. I'm still not sure what's the best interface to specify these save-points. :+1: for this issue. 
I could use a much simpler API even - example: with open(my_file) as lines: parser = Lark(...) for data in parser.parse_incrementally(lines): my_process(data) Give it an iterator of lines - the result is an iterator of data. The typical example is reading a long file containing a large number of small JSON or JSON-like data. I want to feed in each line and have the results come out as they are finished, not have to read the whole file into memory before the first one comes out! Or it might be a long-running socket connection.... The problem is: what is data? E.g., which element of the grammar is it? @rec I think you misunderstood what incremental parsing means. This issue is about storing "savepoints" while parsing, and being able to restart the run from these savepoints, for example if the input has changed. What I think you're asking for, which is to parse a large file without having to load all of it into memory, is something that Lark already does. Python automatically buffers files as they are being read. Lark supports the transformer=... option, which applies a transformer as the text is being parsed, rule by rule, instead of building a whole tree first. You don't have to create a tree, or store the parsed data, if you choose to. All you have to do is something like this: parser = Lark.open("my_grammar.lark", parser="lalr", transformer=MyTransformer()) with open("my_input.json") as f: result = parser.parse(f) If for some reason that doesn't work, open a new issue and we'll fix it. Gosh, you guys are responsive. :-) Thanks! MegaIng: The "data" is the top-level terminal in the grammar. erezsh: Brilliant answer! I was somewhat aware that my "incremental" parsing wasn't what was described here. The literature doesn't really have any consistent name for what I was describing and some people call it incremental parsing. The solution you present seems good except that the code has to re-read my_grammar.lark for each transaction. 
If I knew that the previous parser had just successfully parsed a top-level terminal, then I could just re-use the previous parser, but it might have failed, or I might be reading two separate streams in parallel. I need to deliver a proof-of-concept, though, so this is just not a big deal! Another hit, another home run(*) from the lark team. Thanks again! (* - or insert favorite sport here!) And we will be even more responsive :-P. The "data" is the top-level terminal in the grammar. Although erezsh did correctly spot the misunderstanding, this might not work since the top-level structure is not completely finished after one line. And if we parse multiple lines we have no benefits over just parsing everything in one action. The solution you present seems good except that the code has to re-read my_grammar.lark for each transaction. No. The Lark object should be completely safe to reuse, even in a multi-threaded environment. Contradictory behavior is a bug. @rec I don't think I fully understood your use-case. But please, let's move this to a new issue (feel free to open one, and express exactly what you're after), or continue on gitter. I think I understand what he wants. (and I feel like I want the same thing). I think the example would be that ... if you had a 60GB json file, which is just a list of requests (each being very small); is there a way to iterate over the file and have the parser return one request at a time, without parsing/lexing all of them at once. In other words, if the text being parsed is just repetitions of a single element type, is there a way to have the parser work as a generator for such elements? Do you have any idea on how this could be done? It would be great to have something like ... 
parser = Lark(grammar, transformer=MyTransformer(), iter='dict') # Specify what rules should be yielded with open('massive_file.json', 'r') as f: for my_entry in parser.iter_parse(): some_random_process(my_entry) I am so far loving the package and I can see how this could be handy. Thank you so much for the hard work! Sorry for not replying before, I suddenly started a new job. Yes, you have exactly my use case - a large file with a large number of small items. By the way, we are now using Lark in that new job, and like always, it worked flawlessly and I didn't even have to get involved to help! The existing iterative parser can be used to create a small wrapper that does this. from queue import Queue from lark import Discard, Lark json_grammar = r""" ?start: "[" [command ("," command)*] "]" command: value ?value: object | array | string | SIGNED_NUMBER -> number | "true" -> true | "false" -> false | "null" -> null array : "[" [value ("," value)*] "]" object : "{" [pair ("," pair)*] "}" pair : string ":" value string : ESCAPED_STRING %import common.ESCAPED_STRING %import common.SIGNED_NUMBER %import common.WS %ignore WS """ class Transformer: def __init__(self, callback): self.callback = callback def command(self, children): self.callback(children[0]) return Discard def iter_parser(*args, transformer, **kwargs): queue = Queue() if not kwargs.setdefault("parser", "lalr") == "lalr": raise ValueError("The lalr parser is required") kwargs['transformer'] = transformer(queue.put) parser = Lark(*args, **kwargs) def parse(text, start=None): interactive = parser.parse_interactive(text, start) token = None for token in interactive.iter_parse(): while not queue.empty(): yield queue.get() interactive.feed_eof(token) while not queue.empty(): yield queue.get() return parse p = iter_parser(json_grammar, parser="lalr", transformer=Transformer) test_text = """ [ {"command": "print", "args": ["argument", 0, {"some": "object"}]}, {"command": "input", "args": ["some prompt"]} ] """ 
for c in p(test_text): print("got", c) Super, mega-cool! I'm not sure if I am more impressed by the response time or the actual solution ... This is amazing! I definitely feel like this could be part of the tutorials. Right now I do not have time to write it myself and submit a PR, but if there is interest I can give it a go at a later point.
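The iter_parser wrapper above relies on one reusable pattern worth isolating: a parser that pushes completed items into a callback can be exposed as a generator that pulls them, by parking the items in a queue and draining it between parsing steps. Stripped of all lark specifics (`fake_parse_steps` below is a stand-in driver, not lark's `InteractiveParser.iter_parse`), the pattern is:

```python
from queue import Queue

def pull_items(run_steps):
    """Convert push-style callbacks into a pull-style generator.

    run_steps(emit) must return a generator that performs one unit of
    work per iteration and may call emit(item) any number of times.
    """
    queue = Queue()
    stepper = run_steps(queue.put)
    for _ in stepper:             # advance one parsing step at a time
        while not queue.empty():  # drain whatever that step completed
            yield queue.get()
    while not queue.empty():      # items emitted during the final step
        yield queue.get()

def fake_parse_steps(emit):
    # Stand-in for a real parser loop: emits an item on every other step.
    for i in range(4):
        if i % 2 == 1:
            emit(f"item-{i}")
        yield
```

In the real wrapper, the transformer's rule callback plays the role of `emit`, and each iteration of the interactive parser's token loop plays the role of one step — which is why the queue never grows beyond the items completed by a single token.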
gharchive/issue
2019-12-04T11:54:09
2025-04-01T06:39:21.503096
{ "authors": [ "MegaIng", "erezsh", "jspaezp", "rec", "wbsoft" ], "repo": "lark-parser/lark", "url": "https://github.com/lark-parser/lark/issues/488", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1580936509
Testfix/webdriverio v8 upgrade Hi! Now, this is the equivalent code for the Husky implementation in Klassi JS. IMPORTANT: the need to use yarn install --network-concurrency 1 in projects using Klassi JS as a dependency if they have CI runs might prove a breaking change, so maybe we wouldn't want to do this. Cheers! closing as no longer needed
gharchive/pull-request
2023-02-11T17:15:40
2025-04-01T06:39:21.526711
{ "authors": [ "carlosbermejop", "larryg01" ], "repo": "larryg01/klassi-js", "url": "https://github.com/larryg01/klassi-js/pull/115", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1191086098
Update cats-effect to 3.3.10 Updates org.typelevel:cats-effect from 3.3.9 to 3.3.10. GitHub Release Notes - Version Diff I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "org.typelevel", artifactId = "cats-effect" } ] labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1 Codecov Report Merging #634 (287a342) into master (21be08f) will not change coverage. The diff coverage is n/a. @@ Coverage Diff @@ ## master #634 +/- ## ======================================= Coverage 62.54% 62.54% ======================================= Files 39 39 Lines 1303 1303 Branches 7 7 ======================================= Hits 815 815 Misses 488 488
gharchive/pull-request
2022-04-03T23:58:09
2025-04-01T06:39:21.552962
{ "authors": [ "codecov-commenter", "scala-steward" ], "repo": "laserdisc-io/laserdisc", "url": "https://github.com/laserdisc-io/laserdisc/pull/634", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
438587127
Should fontspec support square brackets? Should this work? \documentclass{article} \usepackage{unicode-math} \setmainfont{[texgyrepagella-regular]} \begin{document} hello \emph{hello} \end{document} It currently does but only by coincidence (without the square brackets it doesn't work, but with a literal .otf appended fontspec parses that out and does the right thing). I should decide whether this should (continue to) work or not and include it in the docs. I guess for backwards compatibility I should keep it working, but parse out the [ and ] chars from the name and then automatically call the Path feature. Let's not promulgate shorthand syntax where it doesn't belong. I'll close this and assume any accidental support for [] doesn't catch on…
gharchive/issue
2019-04-30T03:09:51
2025-04-01T06:39:21.595924
{ "authors": [ "wspr" ], "repo": "latex3/fontspec", "url": "https://github.com/latex3/fontspec/issues/363", "license": "LPPL-1.3c", "license_type": "permissive", "license_source": "github-api" }
590088048
[Postgres] Add support for INTERVAL [ ] Encode support for std::time::Duration, chrono::Duration, and time::Duration ( the second and third under chrono and time features ) [ ] Encode and Decode support for https://crates.io/crates/pg_interval ( under an interval feature ) Thank you for this project. Is there a way to work around this issue using the query macro (like falling back to a String while type checking the INTERVAL type)? Other than that, I would be willing to help on this issue. However, I'm not very familiar with sqlx currently.
gharchive/issue
2020-03-30T08:20:29
2025-04-01T06:39:21.601403
{ "authors": [ "dimtion", "mehcode" ], "repo": "launchbadge/sqlx", "url": "https://github.com/launchbadge/sqlx/issues/197", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2595408826
Fix: Cannot query Postgres INTERVAL[] error: unsupported type INTERVAL[] of column #2 ("event_offsets") Hello, I'm querying an array of intervals, but sqlx gives that error. This PR fixes it. @abonander hello, can you review please?
gharchive/pull-request
2024-10-17T18:07:37
2025-04-01T06:39:21.602592
{ "authors": [ "Ddystopia" ], "repo": "launchbadge/sqlx", "url": "https://github.com/launchbadge/sqlx/pull/3566", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
421962362
Add filter to show only public commits A lot of the top users have many commits in private repos, some with suspiciously high counts. In general, it is hard to verify private commits, which may allow people to "game" the top list. Please consider adding a filter to show top users with the most public, verifiable commits. I’ll need to check the APIs but this may be impossible, as the current contribution counts don’t differentiate between public and private. This thing started as a fun little hobby project so it’s interesting to see that people would actually start to sacrifice their profiles in order to get to the top. 🤔 This would definitely be a really nice feature. Some people (myself included) use Github for work, so naturally I have a lot of commits. What would be interesting to see is how I rank up against others on my open-source contributions (public). Good news! This'll soon be possible. There is indeed a big difference between public and public+private lists. My current plan is to make public contributions the default and showing private contributions an option. This is now in production
gharchive/issue
2019-03-17T19:24:41
2025-04-01T06:39:21.640088
{ "authors": [ "MatissJanis", "brylie", "lauripiispanen" ], "repo": "lauripiispanen/most-active-github-users-counter", "url": "https://github.com/lauripiispanen/most-active-github-users-counter/issues/37", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
309785145
How to get the canvas element? Hey, I need a reference to the canvas element or at least its container. I need to call getBoundingClientRect upon it. How do you do it? ref.stage.getStage() returns the JavaScript Konva object; how do I get the DOM node? Sorry, it was in the konva docs, it's ref.stage.getStage().container()
gharchive/issue
2018-03-29T14:48:38
2025-04-01T06:39:21.642847
{ "authors": [ "marcofugaro" ], "repo": "lavrton/react-konva", "url": "https://github.com/lavrton/react-konva/issues/187", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1079871544
Missing that last release. Since we missed the last release for the new websockets version (which pushed the next release until February), is it possible to publish the websockets version to HACS so that we can have a larger beta adoption to test things out? I would like to see how things are operating so I can begin working on adding in additional devices (locks and sensors), which may also need a larger discussion. Thanks again for providing the time and energy to the library; I know more than anyone else that it can be really tough to keep up. Is it ready at all? Seems to still be in PR https://github.com/home-assistant/core/pull/60465 Looks to have been idle for a while, I assume they have been busy. Hi everyone! Thanks for your continued interest in this project! I have been testing the code in https://github.com/home-assistant/core/pull/60465 for a week now, but there's still one big thing that needs to be fixed: The director auth token doesn't get updated once it expires every 24h, since it used to be refreshed when we would poll the Control4 system. Since we are using WebSockets instead of polling, we need to find some other way to refresh the token every 24h/if it gets revoked for some other reason. When the director auth token expires, commands cannot be sent to the Control4 system, but the Websockets updates keep coming. Does anyone have an idea as to how we can get Home Assistant to update our token every 24h (like on a schedule/before a certain expiry date)? Hi @mellis @Xorso, Please try adding this repo https://github.com/lawtancool/hass-control4 to your HACS as a custom repository. See https://hacs.xyz/docs/faq/custom_repositories for instructions on how to do that. I have added code to automatically update the director token before it expires, please test and let me know if you can still turn on/off lights 24h after restarting Home Assistant. @lawtancool Thank you! 
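One answer to the "refresh before a certain expiry date" question above is to skip scheduling entirely and check the token's age on every use, renewing once it is inside a safety margin. This is a library-agnostic sketch — the class name, the injected `fetch` callable, and the 24h/1h numbers are illustrative assumptions, not pyControl4's or Home Assistant's API:

```python
import time

TOKEN_LIFETIME = 24 * 60 * 60     # assumed 24h director-token lifetime
REFRESH_MARGIN = 60 * 60          # renew 1h before expiry

class DirectorToken:
    def __init__(self, fetch, clock=time.monotonic):
        self._fetch = fetch       # callable that logs in and returns a token
        self._clock = clock       # injectable clock, for testing
        self._token = None
        self._expires_at = 0.0

    def get(self):
        """Return a valid token, transparently re-fetching near expiry."""
        if self._token is None or \
                self._clock() >= self._expires_at - REFRESH_MARGIN:
            self._token = self._fetch()
            self._expires_at = self._clock() + TOKEN_LIFETIME
        return self._token
```

Every command path calls `token.get()`; a WebSocket listener can do the same when it reconnects, which also covers the "restart the connection with the new token" half of the problem discussed later in this thread.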
I thought I had that worked that out on a websocket disconnect to retrieve the director token. I will definitely get this installed and start testing. I would love to see it be a part of the core. Thanks again for the time and effort. I installed HACS and the integration from the repository. Now all of the devices from the Control4 integration have the status "The device is disabled by Config entry.". Am I missing some necessary yaml configuration? @Xorso @mellis Could you both update the integration through HACS? I just pushed a fix for the token refresh, as it wasn't properly implemented earlier. @lawtancool downloaded the update 👍. Should I disable the option to poll devices now as well? @lawtancool downloaded the update 👍. Should I disable the option to poll devices now as well? There's no need to change that setting - I honestly don't even know what that would do, since we don't use polling at all anymore. 😅 Just updated. Yeah polling will need to be stripped out (unless we want to use it as a fallback at some point). I am hoping this is going to make it easier to get other entities in places as well (real time sensor data, motion detection, and maybe even some energy usage from the newer switches) Should this method also expose non-light relays that are part of the EA3, for example garage door switch, and reed switches? @Xesyliad I hope to be able to work on this now that we have it running out of HACS. It should allow for better testing and additional entity types. @lawtancool I have been running past 24 hours and things are running smooth. I need to kick up my logging into debug but I feel like the tokens are refreshing. Are you still seeing issues? I also haven’t seen any token issues here. After a few more days of continuous testing, I have discovered that Home Assistant stops receiving WebSockets updates after around 48 hours since the last restart. 
This is because, while we are refreshing the tokens every 24h, we aren't restarting the WebSocket connection to use the new tokens, and eventually the Control4 Director sends a BadToken error to us over WebSocket. I'll have to find some time to figure out how to restart the WebSocket connection when the tokens are refreshed. It might not be easy/elegant, since the current code design would require the callbacks for each entity to be re-registered, essentially forcing a full re-setup of the Home Assistant integration and entities every 24h. @mellis @Xorso I've updated the HACS integration to fix the Websockets token refresh, please update and let me know if Home Assistant continues to receive state updates without logging errors after 24-48hours. @lawtancool i updated yesterday, will let you know if I notice anything. So far things have been running smooth. I am past the 48 hour mark. Are you guys seeing anything? Things are all fine here, I also don't see any errors related to the c4 integration in the log. Not experiencing any issues so far. Hi everyone! I've updated the HACS integration again. Please update and let me know how it goes! It will take a while for the Home Assistant maintainers to merge my code into the core, but I think the HACS integration is pretty much feature complete now. Changes: The integration will now automatically recover if the network connection is temporarily dropped and reconnected. Entities will become unavailable when the connection is lost, and will become available again with correct states once the connection is restored. The integration will now automatically create a Home Assistant notification if the Control4 login credentials become invalid, allowing the user to re-enter their information. Does this mean that development may soon begin on adding features, for example relay support? @lawtancool Thanks so much for all the work! Maybe we should chat in another thread to see what integration to tackle next? 
@Xorso Yes, let's create another issue to discuss further integration work with different devices. @Xesyliad The problem with relay support is that my Control4 system doesn't have any relays in it, making it impossible for me to test a relay integration. If you/someone else has a relay and is willing to develop and test the integration, they can always open a PR with the Home Assistant core directly. I would be glad to review such a PR, but I wouldn't be able to verify the actual functionality.
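The refresh-before-expiry pattern discussed in this thread (renew the director token on a schedule, then restart the WebSocket consumer so it authenticates with the fresh value) can be sketched with plain asyncio. This is an illustrative sketch only, not the actual pyControl4 or Home Assistant code; fetch_token and restart_websocket are hypothetical stand-ins.

```python
import asyncio

class TokenRefresher:
    """Renew a credential before it expires and notify the consumer.

    fetch_token: async callable returning (token, lifetime_seconds).
    on_new_token: async callable invoked with each fresh token, e.g. to
    restart a WebSocket connection so it authenticates with the new value.
    """

    def __init__(self, fetch_token, on_new_token, margin=0.1):
        self._fetch = fetch_token
        self._notify = on_new_token
        self._margin = margin  # refresh this fraction of the lifetime early
        self.token = None

    async def run(self, cycles):
        for _ in range(cycles):
            self.token, lifetime = await self._fetch()
            await self._notify(self.token)
            # Sleep until shortly before expiry, then loop and refresh.
            await asyncio.sleep(lifetime * (1 - self._margin))

async def demo():
    issued = []

    async def fetch_token():
        # Stand-in for the real director-token request; 24h becomes 0.01s here.
        return f"token-{len(issued)}", 0.01

    async def restart_websocket(token):
        issued.append(token)  # a real impl would reconnect using `token`

    await TokenRefresher(fetch_token, restart_websocket).run(cycles=3)
    return issued

print(asyncio.run(demo()))  # → ['token-0', 'token-1', 'token-2']
```

In a real integration, fetch_token would request a new token from the Control4 director and restart_websocket would tear down and re-open the WebSocket connection, which is exactly the step the thread identifies as missing.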
gharchive/issue
2021-12-14T15:06:42
2025-04-01T06:39:21.687677
{ "authors": [ "Xesyliad", "Xorso", "lawtancool", "mellis" ], "repo": "lawtancool/pyControl4", "url": "https://github.com/lawtancool/pyControl4/issues/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2628191009
[Build] Warning shown during site build: WARN found no layout file for "sitemap" for kind "home"

Current Behavior: When executing make site, the following log is shown in terminal output.

Expected Behavior: No build warnings.

Screenshots/Logs:

WARN found no layout file for "sitemap" for kind "home": You should create a template file which matches Hugo Layouts Lookup Rules for this combination.

Environment: Host OS: Mac

Contributor Guide and Resources:
- 📚 Instructions for contributing to documentation
- Layer5 documentation site and source
- 🎨 Wireframes and designs for Layer5 site in Figma (open invite)
- 🙋🏾🙋🏼 Questions: Layer5 Discussion Forum and Layer5 Community Slack

@leecalcote I would like to work on this issue. Could you please assign it to me?

Shall I work on this issue?
gharchive/issue
2024-11-01T02:47:31
2025-04-01T06:39:21.693398
{ "authors": [ "aakankshabhende", "leecalcote", "shivankurchavan" ], "repo": "layer5io/docs", "url": "https://github.com/layer5io/docs/issues/405", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
796649060
[mesheryctl] [epic] provide support for platform: kubernetes

Prologue: Getting Meshery up and running locally on a Docker-enabled system is easy with Meshery's command line interface, mesheryctl. The same ease by which Meshery is deployed to a Docker host should be afforded for deployments to Kubernetes clusters, including different types of Kubernetes clusters.

Current Scenario: While the Meshery contexts contained within config.yaml offer configuration of the type of platform to deploy Meshery, currently only platform: docker is supported.

Desired Scenario: Also support platform: kubernetes across all mesheryctl commands.

Epic Acceptance Tests:
- Each system command considers the context's platform type "kubernetes".
- Support platform: kubernetes in mesheryctl context (in config.yaml) and across all mesheryctl system commands.

Future:
- Consider supporting specific Kubernetes platforms like AKS, EKS, GKE, OpenShift, Minikube, Docker Desktop, and so on. Example: platform: eks.
- Review mesheryctl system config in consideration for being implicitly executed as part of mesheryctl system context or system start.

Child Issues:
- [x] Issue #2308 [mesheryctl] [child] platform support for Kubernetes in system start (platform: kubernetes)
- [x] Issue #2309 [mesheryctl] [child] platform support during "bash script" installation (platform: kubernetes)
- [x] Issue #2310 [mesheryctl] [child] make system reset platform aware (platform: kubernetes)
- [x] Issue #2311 [mesheryctl] [child] make system stop platform aware (platform: kubernetes)
- [x] Issue #2312 [mesheryctl] [child] make system logs platform aware (platform: kubernetes)
- [x] Issue #2514 [mesheryctl] [child] make system status platform aware (platform: kubernetes)
- [x] Issue #2561 [mesheryctl] [child] make system update platform aware (platform: kubernetes)

Wow. This is excellent.

All the child issues mentioned in this epic are done. Closing this issue.
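As a sketch of what the desired context configuration could look like in mesheryctl's config.yaml: the platform key is the subject of this epic, while the surrounding field names and layout are illustrative assumptions, not the authoritative schema.

```yaml
# Hypothetical sketch of a mesheryctl config.yaml with two contexts.
# Only the `platform:` key is what this epic tracks; the other fields
# and the overall layout are assumptions for illustration.
contexts:
  local:
    platform: docker      # supported today
  cluster:
    platform: kubernetes  # desired: honored by all `mesheryctl system` commands
current-context: cluster
```

With a layout like this, switching current-context would flip every system command (start, stop, reset, logs, status, update) between the Docker and Kubernetes deployment paths.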
gharchive/issue
2021-01-29T07:23:08
2025-04-01T06:39:21.720476
{ "authors": [ "leecalcote", "navendu-pottekkat" ], "repo": "layer5io/meshery", "url": "https://github.com/layer5io/meshery/issues/2307", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
497944769
icon for load generators

Signed-off-by: Lee Calcote leecalcote@gmail.com

Related to #213

@agarwalrohit2503 what do you think? Do you approve of the addition of this icon? If you approve, be sure to add your review to this PR.

@agarwalrohit2503 I'll go ahead and move forward with this review. I hope the short video I created helped. @subhamkrai will be adding that video to the CONTRIBUTING.md for future reference.
gharchive/pull-request
2019-09-24T21:58:48
2025-04-01T06:39:21.723009
{ "authors": [ "leecalcote" ], "repo": "layer5io/meshery", "url": "https://github.com/layer5io/meshery/pull/292", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
970507024
[BUG] /root/shell.sh: No such file or directory

Description: Currently when we start any of the labs we get greeted with this error:

$ /root/shell.sh
-bash: /root/shell.sh: No such file or directory

Expected Behavior: Lab starts without any issues.

Environment:
- OS: Elementary OS 6
- Browser: Brave
- Version: 1.28
- Device: Laptop

Closing because the issue got resolved by itself.

@iamsdas is this still an issue?

I tried out the labs a few times and could not reproduce the issue. So I think it is safe to assume that it has been fixed. 🚀

Awesome. Thank you, @iamsdas. By the way, we are about to start building a new Meshery adapter for Cilium service mesh. Please check in Slack if you are interested in participating.
gharchive/issue
2021-08-13T15:31:28
2025-04-01T06:39:21.727457
{ "authors": [ "iamsdas", "leecalcote" ], "repo": "layer5io/service-mesh-labs", "url": "https://github.com/layer5io/service-mesh-labs/issues/27", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1758781795
added pull request template #29

Added a pull request template.

@ShivangRawat30 Please remove the issue template; I have added it via GitHub. Let's keep the pretty PR template.
gharchive/pull-request
2023-06-15T13:02:02
2025-04-01T06:39:21.728474
{ "authors": [ "Abhishek-kumar09", "ShivangRawat30" ], "repo": "layer5labs/meshmap-snapshot", "url": "https://github.com/layer5labs/meshmap-snapshot/pull/30", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2406641002
Radio input elements whose name attribute value contains a space turn radios into multi-select [pure HTML does not have this problem]

Issue checklist:
- [X] I confirm that I have checked the official documentation (https://layui.dev) but did not find a relevant solution.
- [X] I confirm that I have searched the Issues for similar problems but did not find a relevant solution.
- [X] I have carefully read: 🍀 the Layui Issue contribution guide

Issue type: Suspected BUG

Version used: v2.9.14

Description:

Following the plain radio example from the official layui documentation (https://layui.dev/docs/2/form/radio.html):

<div class="layui-form"> <input type="radio" name="AAA" value="1" title="default"> <input type="radio" name="AAA" value="2" title="selected" checked> <input type="radio" name="AAA" value="3" title="disabled" disabled> </div> <script src="//unpkg.com/layui@2.9.14/dist/layui.js"></script>

Click "edit" in the top-right corner and change the name attribute of each input element to name="AA A" (with a space added):

<div class="layui-form"> <input type="radio" name="AA A" value="1" title="default"> <input type="radio" name="AA A" value="2" title="selected" checked> <input type="radio" name="AA A" value="3" title="disabled" disabled> </div> <script src="//unpkg.com/layui@2.9.14/dist/layui.js"></script>

Run it and click another option: you will find that both the first and second options become selected, so single selection fails (these radio buttons are no longer treated as one group).

Pure HTML alone does not have this problem:

<div> <input name="a b" type="radio" value="1" checked> <input name="a b" type="radio" value="2"> <input name="a b" type="radio" value="3"> <input name="a b" type="radio" value="4" disabled> </div>

Summary: once layui is loaded, the name of a radio input (<input type="radio">) must not contain a space. If name values with spaces are intentionally unsupported here, please document that (maybe I was the only one idly adding a space; normally one would write a_b), even though HTML5 accepts this.

Code:

<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Demo</title> <!-- Do not reference this layui.css address in a production environment --> <link href="//unpkg.com/layui@2.9.14/dist/css/layui.css" rel="stylesheet"> </head> <body class="layui-padding-3"> <div class="layui-form"> <input type="radio" name="AA A" value="1" title="default"> <input type="radio" name="AA A" value="2" title="selected" checked> <input type="radio" name="AA A" value="3" title="disabled" disabled> </div> <!-- Do not reference this layui.js address in a production environment --> <script src="//unpkg.com/layui@2.9.14/dist/layui.js"></script> </body> </html>

Screenshots:

Browser: Microsoft Edge, version 126.0.2592.102 (official build) (64-bit)

Demo URL: No response

Friendly pledge:
- [X] I promise to communicate with mutual respect, understanding, and friendliness, and to help maintain a good community atmosphere for Layui.

A name value should generally not contain spaces; name="AA A" is a very odd way to write it. It is recommended to replace the space with _ or -.
gharchive/issue
2024-07-13T02:30:17
2025-04-01T06:39:21.739302
{ "authors": [ "cmk271314", "sentsim" ], "repo": "layui/layui", "url": "https://github.com/layui/layui/issues/2098", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
665944796
Should we require real names?

In favor of real names:
- Helps tie things back to being a real conference with real people
- Makes it easier for us to tie CoC violations back to real people
- Feels more "grown-up" than just handles/usernames

Against real names:
- Some people, who are not trolls, might prefer to be pseudonymous, and that's totally valid.
- Needs to be worded correctly to make it clear we're not asking for a legal name.
- Yet another thing to ask people. Less info is better!
- UI/design complications: we want to let people say "show me names instead of usernames"

I'm currently mildly against.

I am also mildly against, particularly since I don't use my wallet name in this community. To 'tie CoC violations back to real people', perhaps we can encourage/require an email field that matches the email of the Eventbrite ticket - or even a field where you have to enter your eTicket confirmation number?

Just for the sake of argument, I want to emphasize that "real name" != wallet/legal name. The assumption would be the same as Slack: e.g. in my case, my username would be "lazerwalker" but my "real name" would be "Em Lazer-Walker" despite that not being my legal name.

Maybe we can copy Slack here and call it "Display Name". In the chat, if a user has a display name, we render that instead of the username. I also think "Display Name" makes it more obvious that this doesn't have to be your legal name.

Yeah, the overwhelming feedback I got on Twitter is "Real Name" is definitely not correct 😂 I think I've also realized that "get your full name for CoC violations" is the wrong approach — the main benefit of this is to be able to help humanize people and make them seem less like just chat handles. I want to play around with having part of the profile edit screen be literally making a "Hello My Name Is"-style name badge, and using that metaphor to help make it clear what should go in 'name'.

I think our current solution is fine!
gharchive/issue
2020-07-27T03:19:43
2025-04-01T06:39:21.744442
{ "authors": [ "annie", "kawa-kitsuragi", "lazerwalker" ], "repo": "lazerwalker/azure-mud", "url": "https://github.com/lazerwalker/azure-mud/issues/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
944582461
pre-roll ads fall out

- [ ] some ads can be clicked and open in a new tab, and the ad pauses - there's no way to resume or skip (see Slack)
- [ ] do not load any ad stuff when signed in (https://tag.targeting.unrulymedia.com/rmp/216276/0/vast2?vastfw=vpaid&w=300&h=500&url=https://odysee.com/@Kona_and_Suba_Guinea_Pig_Adventures:c/this-is-peanut-the-guinea-pig:1 is called)
- [ ] cut off at the bottom a bit and have a top black bar (https://lbryians.slack.com/archives/C81FGKR51/p1625678103453600?thread_ts=1625646978.442100&cid=C81FGKR51)
- [ ] ads must have a skip button (Josh mentioned this)

Regression:
- [ ] The issue of videos not switching correctly (previously fixed by restoring the double src call) is back again.
- [ ] The "Retry" button is gone, so we are back to having to do a full reload or re-enter the page. A retry almost always loads for me. Either fix to prevent this scenario, auto-retry, or put back the button?
- [ ] Minor: It doesn't make sense to keep spinning when it has already failed (makes the user wonder whether to wait or not). This was previously fixed.

We are getting more reports of adblockers blocking various calls/media on the page after we pushed this change, even when logged in. It may be related to the ad calls still getting made when logged in, but not sure. Reached out to the adblocker vendor.

Confirmed that my findings are causing issues for signed-in users with pop-up blockers, due to the call still being made while signed in. This prevents the video from playing (probably just from this error, and potentially not from our domain being blacklisted).

Some others:
- [ ] don't load for signed-in users: https://imasdk.googleapis.com/js/sdkloader/ima3.js
- [ ] double HEAD request causes double master playlist call

@mayeaux this is an older ads issue; if it's not useful, please close it.
gharchive/issue
2021-07-14T16:05:17
2025-04-01T06:39:21.754726
{ "authors": [ "infinite-persistence", "kauffj", "tzarebczan" ], "repo": "lbryio/lbry-desktop", "url": "https://github.com/lbryio/lbry-desktop/issues/6477", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1364816311
wip: add initial support for streaming torrent files

- [x] wait for piece
- [x] stream piece to browser
- [ ] prioritize from streaming position
- [ ] improve testing

needs libtorrent 2.0.6

Coverage decreased (-0.1%) to 57.739% when pulling 0c8c0a0140394ae69b73f3f28dd6949c18820f51 on torrent_stream into 5c543cb3744ed616f4e237d08dbf14d94eeef250 on master.
gharchive/pull-request
2022-09-07T15:08:18
2025-04-01T06:39:21.757622
{ "authors": [ "coveralls", "shyba" ], "repo": "lbryio/lbry-sdk", "url": "https://github.com/lbryio/lbry-sdk/pull/3657", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1290653979
Question about MQTT topic structure and JSON packet schema

Dear Lee, it is fun to see what you are working on over here. It looks like an excellent project. Kudos!

May I humbly ask you a specific question, as I see you are using MQTT within your architecture. In the README, it says:

> Keg Cop will report pours to Raspberry Pints via MQTT or a web endpoint with a generic JSON packet.

Can you outline the typical JSON message(s) emitted by this appliance, and the corresponding MQTT topic structure? Is it some measurement values?

I am asking because I am thinking that Kotori may be a sweet complement for your system. It is easy to write adapters for individual device families, and if that would fit together in any way, I will be happy to provide a corresponding adapter for Keg Cop.

With kind regards, Andreas.

Hi again, I see that I asked too quickly before dedicating some time to browse the API documentation ^1 and the documentation of the JSON models ^2. Apologies. ^3 and ^4 would be the payloads which describe the measurement values emitted by the system, right?

I am now also seeing that the Controller-Initiated Communication part of the API would probably be the right thing I was looking for. Specifically:

> The Target URL Report provides a holistic picture of the system to a custom/third-party endpoint. It is a timer-based POST; a change of state does not trigger it. As with all target system configurations within Keg Cop, it will post to HTTP only. -- https://docs.kegcop.com/en/latest/api/#url

I think this would be the right choice for integrating with Kotori, if that would make actual sense. By chance, on the screenshots you provided on the Operations and Configuration pages ^5, I haven't seen any about displaying graphs of measurement values over time. Maybe I am missing them. On the other hand, if such a feature is not implemented within Keg Cop yet, but you think it would be nice to have, then I will be happy to support.

With kind regards, Andreas.

The intention was always that this would be part of an extensible system of systems. @thorrak has been "going to finish" KegScreen for a while now. Maybe if I get enough people to help me shame him, he will. 😉 Anyway, so if such a picture over time were desired, the "upstream" should handle that. My main target was to be the "physical" connection and own those measurements, where KegScreen would build the fancy tap list.

All right. Maybe @thorrak is interested as well; otherwise I will not step on anyone's toes. I am looking at eye candy like the weather dashboards we are operating over at https://weather.hiveeyes.org/, for example ^1. It is sweet to determine long-term trends and get a different grip on the telemetry data which is already emitted by the system. As mentioned above, I don't even know if that would be an actual benefit to your community at all. Probably it would be a better fit for a brewing rig?

You are absolutely not out of line, I'm all ears. That's the best part of Open Source: everyone's ideas count. I'll poke John to read this and see what he thinks. It's possible there's complementary work to what he wants to do.

Hey @amotl, I am going to close this as not an issue - but if you are interested in this project and you'd like to see additional support in MQTT, for instance, let me know. Right now, I think the webhook functionality could send JSON anywhere you like and be ingested as JSON. I had another question about a complete (and standards-based) MQTT setup, so it's rolling around in my head for use.
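To make the "generic JSON packet over MQTT" idea discussed in this thread concrete, here is a minimal sketch of flattening one report into a measurement record, e.g. for ingestion by Kotori or any other telemetry store. The topic layout (kegcop/&lt;device&gt;/pour) and every field name below are assumptions for illustration only; the authoritative payloads are the ones in the Keg Cop API and JSON-models documentation referenced above.

```python
import json

def parse_pour_report(topic: str, payload: bytes) -> dict:
    """Flatten one MQTT pour report into a measurement record.

    The topic layout and field names here are illustrative only; Keg Cop's
    actual schema is defined in its JSON-models documentation.
    """
    # e.g. "kegcop/kegerator-1/pour" -> device "kegerator-1", kind "pour"
    _, device, kind = topic.split("/", 2)
    doc = json.loads(payload)
    record = {"device": device, "kind": kind}
    record.update(doc)  # merge the measurement fields as-is
    return record

example = parse_pour_report(
    "kegcop/kegerator-1/pour",
    b'{"tap": 2, "volume_ml": 355.0}',
)
print(example)
# → {'device': 'kegerator-1', 'kind': 'pour', 'tap': 2, 'volume_ml': 355.0}
```

A real consumer would call a function like this from a paho-mqtt on_message callback, or feed the same records from an HTTP endpoint, matching the Target URL Report path discussed above; that wiring is omitted here to keep the sketch self-contained.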
gharchive/issue
2022-06-30T21:22:46
2025-04-01T06:39:21.916472
{ "authors": [ "amotl", "lbussy" ], "repo": "lbussy/keg-cop", "url": "https://github.com/lbussy/keg-cop/issues/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }