id (string) | text (string) | source (2 classes) | created (timestamp) | added (timestamp) | metadata (dict)
---|---|---|---|---|---
2304980930
|
Added "should evict workload due workload deactivated on pods ready timeout" test.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Added an integration test for the pod integration that exercises the case of not sending replacement pods.
Which issue(s) this PR fixes:
Fixes #2222
Special notes for your reviewer:
Does this PR introduce a user-facing change?
NONE
/assign @trasc
/assign @alculquicondor @mimowo
/approve
Please add release notes for bug fixes
/kind cleanup
|
gharchive/pull-request
| 2024-05-20T03:10:39 |
2025-04-01T04:34:48.255851
|
{
"authors": [
"alculquicondor",
"mbobrovskyi",
"mimowo"
],
"repo": "kubernetes-sigs/kueue",
"url": "https://github.com/kubernetes-sigs/kueue/pull/2230",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1552848142
|
Switch to gcr.io/kubebuilder/kube-rbac-proxy
What type of PR is this?
/kind documentation
What this PR does / why we need it:
The old location does not seem to be the official one, so we switch over to the new one and adapt the dependency verification accordingly.
Which issue(s) this PR fixes:
Fixes https://github.com/kubernetes-sigs/security-profiles-operator/issues/1423
Does this PR have test?
Yes
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
Switched to `gcr.io/kubebuilder/kube-rbac-proxy` from `quay.io/brancz`.
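For illustration, the change amounts to swapping the image reference wherever the proxy sidecar is declared, roughly like this (a sketch; the manifest layout and version tag are assumptions, not the PR's exact diff):

```yaml
containers:
  - name: kube-rbac-proxy
    # old: quay.io/brancz/kube-rbac-proxy:v0.13.1 (tag illustrative)
    image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
```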
Codecov Report
Merging #1429 (31bdbb1) into main (3e42cf0) will not change coverage.
The diff coverage is n/a.
Additional details and impacted files
@@ Coverage Diff @@
## main #1429 +/- ##
=======================================
Coverage 44.13% 44.13%
=======================================
Files 50 50
Lines 5651 5651
=======================================
Hits 2494 2494
Misses 3037 3037
Partials 120 120
/lgtm
/retest
/test pull-security-profiles-operator-test-e2e
/lgtm
|
gharchive/pull-request
| 2023-01-23T10:25:47 |
2025-04-01T04:34:48.272918
|
{
"authors": [
"ccojocar",
"codecov-commenter",
"saschagrunert"
],
"repo": "kubernetes-sigs/security-profiles-operator",
"url": "https://github.com/kubernetes-sigs/security-profiles-operator/pull/1429",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
382672621
|
Manila: Add StorageClass parameter to specify NFS share client
The NFS share created by the Manila provisioner is by default readable and writable by the world. This is not always welcome behaviour. I have added a new StorageClass option that allows specifying the NFS clients that should have access to the share.
Moreover, the default value ('0.0.0.0/0') doesn't work with the Ganesha NFS server, which doesn't seem to like the '/0' part, so I changed the default to '0.0.0.0'.
The Manila provisioner recognizes a new StorageClass option to specify the allowed NFS share clients.
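For illustration, the sort of StorageClass this enables might look like the following (a sketch; the provisioner and parameter names here are assumptions for illustration, not necessarily the exact names added by this PR):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manila-nfs
provisioner: externalstorage.k8s.io/manila   # hypothetical provisioner name
parameters:
  # hypothetical parameter: restricts which NFS clients may access the share
  nfsShareClient: "10.0.0.0/8"
```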
/lgtm
cc @dims
/approve
/lgtm
|
gharchive/pull-request
| 2018-11-20T13:51:52 |
2025-04-01T04:34:48.296139
|
{
"authors": [
"adisky",
"dims",
"gman0",
"tsmetana"
],
"repo": "kubernetes/cloud-provider-openstack",
"url": "https://github.com/kubernetes/cloud-provider-openstack/pull/370",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
192393628
|
GLBC Ingress - allow using existing SSL Certificates
The GCE Ingress requires the actual SSL private key and certificate be defined inside of Kubernetes [via secrets], which the controller then applies to the GCE L7 load balancer. In order to protect the private key, we would like to instead refer to an existing certificate in the GCP Project by name in the Ingress definition.
Example configuration:
kind: Ingress
spec:
tls:
- certName: my-cert
Or an annotation, like the one for AWS, is fine: http://kubernetes.io/docs/user-guide/services/#ssl-support-on-aws
GCP-equivalent example:
gcloud compute ssl-certificates create my-cert \
--certificate my.crt --private-key my.pem
gcloud compute target-https-proxies create my-proxy \
--ssl-certificate my-cert --url-map my-map
It appears you are already storing certs in the Project and referring to them by name: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/loadbalancers/loadbalancers.go#L354 -- we would just like to specify the name instead.
This allows the separation of roles: a certificate administrator can upload the key/cert into the GCP Project, where the key is not accessible to users, then a k8s/GKE administrator can use it without needing to have that very valuable secret.
Thanks.
A reasonable request; it just needs to be implemented. Today, if you don't have a tls section, the controller assumes you want HTTP.
Should this issue be cross-posted in https://github.com/kubernetes/ingress (and closed here)?
Sure, @porridge volunteered to mass move bugs over anyway
https://github.com/kubernetes/ingress/issues/45
Thank you.
|
gharchive/issue
| 2016-11-29T20:41:50 |
2025-04-01T04:34:48.325594
|
{
"authors": [
"bprashanth",
"jrynyt",
"tonglil"
],
"repo": "kubernetes/contrib",
"url": "https://github.com/kubernetes/contrib/issues/2095",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
345398334
|
Kubelet Device Plugin Registration
Feature Description
One-line feature description (can be used as a release note): Kubelet should have a standard way to discover local plugins (CSI, GPU, etc.).
Primary contact (assignee): @vikaschoudhary16
Responsible SIGs: @kubernetes/sig-node-feature-requests @kubernetes/sig-storage-feature-requests
Design proposal link (community repo):
https://github.com/kubernetes/community/pull/2369
Google Doc
Link to e2e and/or unit tests:
Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @RenaudWasTaken @vishh @jiayingz
Approver (likely from SIG/area to which feature belongs): @dchen1107 @saad-ali
Feature target (which target equals to which milestone):
Alpha release target (x.y): 1.11
Beta release target (x.y): 1.12
Stable release target (x.y): 1.13
Introduced as alpha in v1.11.
See PR https://github.com/kubernetes/kubernetes/pull/63328
See issue: https://github.com/kubernetes/kubernetes/issues/56944
Partial list of work for beta:
Recursive detection of UDS in sub directories
Unregistration of drivers when UDS is deleted
Thanks for the update! This has been added to the 1.12 Tracking sheet.
Who's the primary SIG on this? SIG Node or SIG Storage?
/stage beta
The umbrella issue tracker is here: https://github.com/kubernetes/kubernetes/issues/65773
@saad-ali design proposal PR (migration from original google doc): https://github.com/kubernetes/community/pull/2369
Hey there! @saad-ali I'm the wrangler for the Docs this release. Is there any chance I could have you open up a docs PR against the release-1.12 branch as a placeholder? That gives us more confidence in the feature shipping in this release and gives me something to work with when we start doing reviews/edits. Thanks! If this feature does not require docs, could you please update the features tracking spreadsheet to reflect it?
Sounds perfect! Thanks! I'll mark that in the feature tracking spreadsheet
@RenaudWasTaken -- please be careful using Fix[es] or Closes in a commit as it may cause us to fall into a close loop when the bots pick up a commit.
/reopen
I think it'll take effect on every branch transaction that the commit gets pulled into. I'd say that you could do it by amending the commit, but it might be less effort to just reopen this issue every time :/
Sorry, I didn't see that GitHub converted it into a "fix". I will pay more attention to that in the future.
Hi folks,
Kubernetes 1.13 is going to be a 'stable' release since the cycle is only 10 weeks. We encourage no big alpha features and only consider adding this feature if you have a high level of confidence it will make code slush by 11/09. Are there plans for this enhancement to graduate to stable within the 1.13 release cycle? If not, can you please remove it from the 1.12 milestone or add it to 1.13?
We are also now encouraging that every new enhancement aligns with a KEP. If a KEP has been created, please link to it in the original post. Please take the opportunity to develop a KEP.
Hello @ameukam !
This feature graduated to beta in 1.12 and there are no plans to graduate it to stable in 1.13.
/milestone clear
@saad-ali I've added this to the tracking sheet but putting it At Risk because I feel this has not had enough time to bake. I believe there will be concern about promoting a feature so quickly. @AishSundar @spiffxp
@saad-ali what work is left for this in 1.13 to be able to go to Stable? Similarly how confident are we of CSI making it to Stable in 1.13 and if [CSI] slips what are the plans for this feature then?
@AishSundar
I have discussed this with the author (and other interested parties). He feels confident that the code is OK and stable, given its adoption. This feature has been used by several plugin implementers and users (the NVIDIA device plugin, several CSI volume plugins) since beta. Even if CSI slips, this feature is adopted by other plugin providers who are OK with v1.
@vladimirvivien thanks for the update. Could you plz point us to a list of pending PRs (code, tests and docs) for this feature?
@AishSundar
PR for dev doc - https://github.com/kubernetes/kubernetes/pull/68562
Pending issue to be handled this quarter - https://github.com/kubernetes/kubernetes/issues/69015
That's all.
@vladimirvivien can I plz know the latest status on this for 1.13? I see quite a few PRs pending in kubernetes/kubernetes#69015. With code freeze approaching this Friday 11/16, I am afraid it's too little time for all the changes. Could you plz provide a list of pending PRs tracked for 1.13?
Hello @AishSundar !
There are only two PRs pending:
https://github.com/kubernetes/kubernetes/pull/70559
Which is only missing approval, I believe we'll bring this up in Tuesday's sig-node
https://github.com/kubernetes/kubernetes/pull/70494
Which only needs us to choose an option
Whatever option is chosen, code changes will be minimal
In both cases I believe the deadline of Friday 11/16 will be respected.
Thanks for looking into this!
@AishSundar as @RenaudWasTaken mentioned we have an approved PR (which needs rebase) and an additional PR which should get resolved early this week. All other PR's have been resolved or merged. We are tracking this to be in by code freeze this Friday. Thank you for addressing this.
closed https://github.com/kubernetes/kubernetes/issues/70484 via https://github.com/kubernetes/kubernetes/pull/70559
@vladimirvivien @RenaudWasTaken looks like there is still some active discussion in the PR kubernetes/kubernetes#70494. Are we still on track for Code freeze tomorrow?
Discussing with @msau42: pending PR https://github.com/kubernetes/kubernetes/pull/70494 is an optimization and not really a blocker for this enhancement in 1.13. As much as we will try to get it in today, this can also go as a 1.13.1 patch if the merge gets delayed.
Stop please
|
gharchive/issue
| 2018-07-28T00:04:47 |
2025-04-01T04:34:48.412260
|
{
"authors": [
"AishSundar",
"Kymb3rl33",
"RenaudWasTaken",
"ameukam",
"justaugustus",
"kacole2",
"saad-ali",
"vikaschoudhary16",
"vladimirvivien",
"zparnold"
],
"repo": "kubernetes/features",
"url": "https://github.com/kubernetes/features/issues/595",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
334565817
|
helm3: was Release.Namespace removed from helm3 build in objects on purpose?
How else can we inject the namespace that a chart is being installed into?
I got a regression when trying to install the chart stable/nginx-ingress chart:
$ helm3 install --name jxing stable/nginx-ingress --namespace kube-system --set rbac.create=true
ClusterRoleBinding.rbac.authorization.k8s.io "jxing-nginx-ingress" is invalid: subjects[0].namespace: Required value
as this value is no longer substituted:
https://github.com/kubernetes/charts/blob/master/stable/nginx-ingress/templates/rolebinding.yaml#L18
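For context, the referenced rolebinding template relies on Helm's built-in `.Release.Namespace` value, roughly like this (a sketch, not the chart's exact contents; the helper template name is illustrative):

```yaml
kind: ClusterRoleBinding
subjects:
  - kind: ServiceAccount
    name: {{ template "nginx-ingress.serviceAccountName" . }}
    # empty when Release.Namespace is not populated, hence the "Required value" error
    namespace: {{ .Release.Namespace }}
```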
It will be added back. The goal for now is to have a template rendering step that can create resources to add to any namespace
I am getting a similar error in helm 2 as well.
helm install -n jxing jenkins-x/nginx-ingress --values jxing-nginx-ingress-values.yaml --namespace kube-system --version 0.21.0
Error: release jxing failed: ClusterRoleBinding.rbac.authorization.k8s.io "jxing-nginx-ingress" is invalid: subjects[0].namespace: Required value
helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
@ysaakpr I'd follow up with the Jenkins X team maintaining that chart directly.
CC @jstrachan ^ :)
|
gharchive/issue
| 2018-06-21T16:21:50 |
2025-04-01T04:34:48.418638
|
{
"authors": [
"adamreese",
"bacongobbler",
"jstrachan",
"ysaakpr"
],
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/issues/4255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
175978016
|
fixes #1169 to use the charts url from the repository index.yaml if available
This change is
those test failures seem to be glide/bootstrap failures; the tests work fine here honestly ;)
Any progress on this?
we can close this now - looks like the URL field is now read from index.yaml
|
gharchive/pull-request
| 2016-09-09T10:29:30 |
2025-04-01T04:34:48.420650
|
{
"authors": [
"jstrachan",
"technosophos"
],
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/pull/1170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
217414627
|
fix(helm): local path in requirements.yaml relative to working dir
closes bug: #2103
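For context, this is the kind of dependency entry affected (a sketch; chart names are illustrative): with a `file://` repository, the relative path should resolve against the chart's own directory rather than the process working directory.

```yaml
# requirements.yaml (Helm v2)
dependencies:
  - name: mychart
    version: 0.1.0
    repository: "file://../mychart"
```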
@qwangrepos Thanks for the refactor! I am going to do one more round of testing as a sanity check tonight and then merge it
|
gharchive/pull-request
| 2017-03-28T00:34:00 |
2025-04-01T04:34:48.421865
|
{
"authors": [
"qwangrepos",
"thomastaylor312"
],
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/pull/2194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
145117673
|
Expand Readme to show e2e manual test
Fixed a few helm references and added a complete example.
I think it is good to show a quick walkthrough of helm in the main readme; that way people don't have to click through the tree to find a quick start guide or examples...
@michelleN thanks for the review, I addressed your comments
|
gharchive/pull-request
| 2016-04-01T08:18:07 |
2025-04-01T04:34:48.422884
|
{
"authors": [
"runseb"
],
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/pull/545",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
319111656
|
fix: change "名字空间" to "命名空间"
Line 15: throughout this translation, namespace is consistently rendered as 命名空间.
@yulng Due to changes in the documentation structure, this PR needs to be closed. Apologies for the inconvenience; you are welcome to resubmit. Thank you.
|
gharchive/pull-request
| 2018-05-01T04:22:02 |
2025-04-01T04:34:48.548716
|
{
"authors": [
"markthink",
"yulng"
],
"repo": "kubernetes/kubernetes-docs-zh",
"url": "https://github.com/kubernetes/kubernetes-docs-zh/pull/467",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
258581536
|
Update /docs/setup/independent/create-cluster-kubeadm.md for 1.8.
This introduction needed a couple of small tweaks to cover the --discovery-token-ca-cert-hash flag added in https://github.com/kubernetes/kubernetes/pull/49520.
cc @luxas @kubernetes/sig-cluster-lifecycle-misc
This change is
@mattmoyer, Please rebase your branch so that it picks up 99fbc2b. That will fix the deploy/netlify failure. Thanks.
@luxas updated to address your comments, PTAL.
@steveperry-53 thanks for the tip, rebased.
Ping @mattmoyer
@steveperry-53 apologies, I've been afk most of this week and didn't get to update this until just now.
PTAL, I think I've addressed everything for now.
|
gharchive/pull-request
| 2017-09-18T18:45:35 |
2025-04-01T04:34:48.552080
|
{
"authors": [
"mattmoyer",
"steveperry-53"
],
"repo": "kubernetes/kubernetes.github.io",
"url": "https://github.com/kubernetes/kubernetes.github.io/pull/5524",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
411815825
|
fix negative slice index error in keymutex
xref: https://github.com/kubernetes/kubernetes/issues/73858
The keymutex's slice index could become negative, which should never happen.
This PR aims to make the index always non-negative.
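A minimal Go sketch of the pattern involved (an assumption about the shape of the code, not the library's exact implementation): converting the 32-bit hash to a signed int can overflow to a negative value, so doing the modulo in unsigned arithmetic keeps the index in range.

```go
package keymutexsketch

import "hash/fnv"

// pickIndex hashes a key to select a mutex from a slice of length n (n > 0 assumed).
func pickIndex(id string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(id))
	// Buggy form: int(h.Sum32()) % n can be negative if the conversion to int
	// overflows (e.g. on 32-bit platforms). Unsigned modulo stays in [0, n).
	return int(h.Sum32() % uint32(n))
}
```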
/assign @thockin @dims
weird, why cla check failed ?
@danielqsj can you please sign the CLA? See https://github.com/kubernetes/community/blob/master/CLA.md for instructions and trouble shooting steps
@dims I signed the CLA for CNCF a long time ago, and I have just checked again.
I'm also a member of Kubernetes, so is there some issue with the check here? My PRs in k/k pass the CLA check.
@danielqsj you used a different email id :)
https://patch-diff.githubusercontent.com/raw/kubernetes/kubernetes/pull/72336.patch
https://patch-diff.githubusercontent.com/raw/kubernetes/utils/pull/84.patch
@dims oops, thank you so much.
@danielqsj is there some kind of test we could add?
/lgtm
/approve
|
gharchive/pull-request
| 2019-02-19T09:05:23 |
2025-04-01T04:34:49.033005
|
{
"authors": [
"apelisse",
"danielqsj",
"dims"
],
"repo": "kubernetes/utils",
"url": "https://github.com/kubernetes/utils/pull/84",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
359103757
|
Documentation should not mention ingress controller specific annotations [File: /content/en/docs/concepts/services-networking/ingress.md]
This is a...
[x] Feature Request
[ ] Bug Report
Problem:
The ingress resource documentation describes examples that contain an annotation that will not work with other ingress controllers besides nginx: nginx.ingress.kubernetes.io/rewrite-target
Proposed Solution:
While this may help some users, there is also a generic rewrite-target annotation called ingress.kubernetes.io/rewrite-target. This annotation will also work with traefik and other ingress controllers.
The documentation should be kept as implementation-agnostic as possible, so the reference to nginx should be dropped.
Page to Update:
https://kubernetes.io/docs/concepts/services-networking/ingress/
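For illustration, the two annotation forms differ only in the prefix (a sketch fragment; as the discussion below notes, support for the generic form varies by controller):

```yaml
metadata:
  annotations:
    # nginx-specific form currently shown in the docs:
    nginx.ingress.kubernetes.io/rewrite-target: /
    # generic form supported by some controllers:
    ingress.kubernetes.io/rewrite-target: /
```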
Working on it
Cool, thank you.
I did a little bit of research, and it seems that behaviour differs a bit between different ingress controllers, but at least ingress-nginx and Træfik have supported the generic annotation for a while.
I don't know about others.
Apparently, what I said is not quite true, as can be seen in this comment: https://github.com/kubernetes/ingress-nginx/issues/3109#issuecomment-422802472
But there's a somewhat more pragmatic discussion for Træfik: https://github.com/containous/traefik/pull/1723
However, I think it would make a lot of sense to have a common syntax for different ingress controllers. The ingress.kubernetes.io/rewrite-target annotation seems to work well in many cases, but a controller-specific prefix may be needed. The documentation should reflect this, or a design decision may be needed (well out of scope for this bug report).
Here is the documentation for Træfik: https://docs.traefik.io/configuration/backends/kubernetes/#general-annotations
And for nginx: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md
|
gharchive/issue
| 2018-09-11T15:36:39 |
2025-04-01T04:34:49.039694
|
{
"authors": [
"anlunas",
"onitake"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/10272",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
407565343
|
Issue with k8s.io/docs/tutorials/services/source-ip/
This is a...
[ ] Feature Request
[x] Bug Report
Problem:
If the client pod and server pod are on the same node, the client_address is the client pod's IP address. However, if the client pod and server pod are on different nodes, the client_address is the flannel IP address of the client pod's node.
From Cluster Networking, all containers can communicate with all other containers without NAT.
In this respect, the client_address should always be the client pod's IP address, not the node's flannel IP address, even when the client pod and server pod are on different nodes.
I have tested this on on-premises clusters (flannel / weave net), AKS, and Aliyun with 1.12. All backed my conclusion.
Proposed Solution:
The client_address is the client pod's IP address, whether or not the client pod and server pod are on the same node.
Page to Update:
https://kubernetes.io/docs/tutorials/services/source-ip/
1.12
I am interested in working on this issue. Shall I submit a pull request for it?
Thanks
Close issue because PR has been merged.
|
gharchive/issue
| 2019-02-07T06:59:26 |
2025-04-01T04:34:49.044728
|
{
"authors": [
"SupriyaSirbi",
"zerda"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/12533",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
510608899
|
Issue with k8s.io/docs/reference/
This is a Bug Report
Problem:
Broken link for federation-apiserver and federation-controller-manager
Proposed Solution:
Fix broken links.
Page to Update:
k8s.io/docs/reference/
/assign
@thecrudge the documents these links point to were removed by this PR: https://github.com/kubernetes/website/pull/14140
federation-apiserver > content/en/docs/reference/command-line-tools-reference/federation-apiserver.md
federation-controller-manager > content/en/docs/reference/command-line-tools-reference/federation-controller-manager.md
I believe this was part of the federation v1 remediation. Shall I just delete the references on this page, or should they point to some other page?
/kind bug
/priority important-soon
|
gharchive/issue
| 2019-10-22T12:01:59 |
2025-04-01T04:34:49.048957
|
{
"authors": [
"aimeeu",
"miteshskj"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/17113",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
581963141
|
Translate concepts/storage/volume-snapshots in Korean
This is a Feature Request
What would you like to be added
Translate concepts/storage/volume-snapshots in Korean.
Why is this needed
No translation with concepts/storage/volume-snapshots in Korean.
Comments
Page to Update:
https://kubernetes.io/docs/concepts/storage/volume-snapshots/
/language ko
/assign
/close
|
gharchive/issue
| 2020-03-16T03:05:40 |
2025-04-01T04:34:49.051751
|
{
"authors": [
"sunminjeon"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/19649",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
623386859
|
Migrate built-in node label list into Reference section
This is a cleanup request
Problem:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels is reference-type information that appears inside an already long concept page.
Proposed Solution:
Migrate https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels (the heading and the following section) to be within
https://kubernetes.io/docs/reference/
Page to Update:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels
/language en
/priority awaiting-more-evidence
/kind cleanup
/remove-lifecycle stale
Still seems useful to do
/remove-lifecycle stale
Also relevant: #26989
Done
/close
|
gharchive/issue
| 2020-05-22T17:53:17 |
2025-04-01T04:34:49.056137
|
{
"authors": [
"sftim"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/21130",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1424787747
|
[AR] localization umbrella issue
Arabic localization
General workflow
Select an available file from the list and inform the community by commenting on this issue
Fork the main branch of the kubernetes/website repo.
Translate the English file.
In your fork, create a corresponding Arabic file (in /content/ar/…)
Translate the text and keep the layout and code samples intact.
Create a PR named [ar] Translation path/file.md and add the language/ar label to it.
Select dev-1.25-ar.1 as the target branch.
Progress track for dev-1.25-ar.1
:green_circle: File localization done
:yellow_circle: File localization in progress
Home
[ ] :yellow_circle: home/_index.md
[ ] :yellow_circle: home/supported-doc-versions.md
Setup
[ ] setup/_index.md
[ ] setup/best-practices/_index.md
[ ] setup/best-practices/certificates.md
[ ] setup/best-practices/cluster-large.md
[ ] setup/best-practices/enforcing-pod-security-standards.md
[ ] setup/best-practices/multiple-zones.md
[ ] setup/best-practices/node-conformance.md
[ ] setup/learning-environment/_index.md
[ ] setup/production-environment/_index.md
[ ] setup/production-environment/container-runtimes.md
[ ] setup/production-environment/tools/_index.md
[ ] setup/production-environment/tools/kops.md
[ ] production-environment/tools/kubeadm/_index.md
[ ] production-environment/tools/kubeadm/control-plane-flags.md
[ ] production-environment/tools/kubeadm/create-cluster-kubeadm.md
[ ] production-environment/tools/kubeadm/dual-stack-support.md
[ ] production-environment/tools/kubeadm/ha-topology.md
[ ] production-environment/tools/kubeadm/high-availability.md
[ ] production-environment/tools/kubeadm/install-kubeadm.md
[ ] production-environment/tools/kubeadm/kubelet-integration.md
[ ] production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm.md
[ ] production-environment/tools/kubeadm/troubleshooting-kubeadm.md
[ ] production-environment/tools/kubespray.md
[ ] production-environment/turnkey-solutions.md
Tutorials
[ ] _index.md
[ ] kubernetes-basics/create-cluster/_index.md
[ ] kubernetes-basics/create-cluster/cluster-interactive.html
[ ] kubernetes-basics/create-cluster/cluster-intro.html
[ ] kubernetes-basics/deploy-app/_index.md
[ ] kubernetes-basics/deploy-app/deploy-interactive.html
[ ] kubernetes-basics/deploy-app/deploy-intro.html
[ ] kubernetes-basics/explore/_index.md
[ ] kubernetes-basics/explore/explore-interactive.html
[ ] kubernetes-basics/explore/explore-intro.html
[ ] kubernetes-basics/expose/_index.md
[ ] kubernetes-basics/expose/expose-interactive.html
[ ] kubernetes-basics/expose/expose-intro.html
[ ] kubernetes-basics/_index.html
[ ] kubernetes-basics/scale/_index.md
[ ] kubernetes-basics/scale/scale-interactive.html
[ ] kubernetes-basics/scale/scale-intro.html
[ ] kubernetes-basics/update/_index.md
[ ] kubernetes-basics/update/update-interactive.html
[ ] kubernetes-basics/update/update-intro.html
[ ] :yellow_circle: hello-minikube.md
Site strings
[ ] :yellow_circle: ar.toml
Releases
[ ] _index.md
[ ] download.md
[ ] notes.md
[ ] patch-releases.md
[ ] release-managers.md
[ ] release.md
[ ] version-skew-policy.md
/language ar
I picked up /home/_index.md; here is the PR: https://github.com/kubernetes/website/pull/37547
/triage accepted
I picked up the : Tutorials/hello-minikube.md, data/i18n/ar/ar.toml and README-ar.md.
Picked setup/_index.md here: https://github.com/kubernetes/website/pull/44289
Picked home/supported-doc-versions.md here: #44117
I picked up /home/_index.md; here is the PR: #37547
I have new PR here https://github.com/kubernetes/website/pull/45031
Picked home/supported-doc-versions.md here: #44117
Thanks @adowair! Our first file, home/supported-doc-versions.md, has been merged 🥇: https://github.com/kubernetes/website/pull/44698
I picked Tutorials/*: https://github.com/kubernetes/website/pull/45047
#44682 Tracks the broader effort to launch this localization, which includes some administrative and process work.
/remove-lifecycle stale
this file setup/best-practices/_index.md was much easier than I thought so I am picking another one 😆
https://github.com/kubernetes/website/pull/45572
Picking setup/production-environment/tools/_index.md next.
/assign @adowair
@mboukhalfa will you please change the issue title to "Translate Minimum Required Documentation Pages to Arabic"? This will differentiate it better from https://github.com/kubernetes/website/issues/44682.
/retitle Translate Minimum Required Documentation Pages to Arabic
I am picking multiple tiny files at once:
https://github.com/kubernetes/website/blob/release-1.29/content/en/search.md
https://github.com/kubernetes/website/blob/release-1.29/content/en/docs/setup/learning-environment/_index.md
https://github.com/kubernetes/website/blob/release-1.29/content/en/docs/setup/production-environment/tools/kubeadm/_index.md
https://github.com/kubernetes/website/blob/release-1.29/content/en/docs/setup/production-environment/turnkey-solutions.md
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/create-cluster/_index.md
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/deploy-app/_index.md
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/explore/_index.md
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/scale/_index.md
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/update/_index.md
https://github.com/kubernetes/website/tree/release-1.29/content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html
https://github.com/kubernetes/website/tree/release-1.29/content/en/releases/notes.md
Picking up : https://github.com/kubernetes/website/blob/release-1.29/content/en/docs/setup/best-practices/certificates.md
Picked :
content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html
AND
content/en/docs/tutorials/kubernetes-basics/update/update-intro.html
I will work on content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
I will work on content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html
Picked up content/en/docs/setup/production-environment/_index.md here: #47819.
/area localization
|
gharchive/issue
| 2022-10-26T22:49:56 |
2025-04-01T04:34:49.087237
|
{
"authors": [
"AbdelatifAitBara",
"RA489",
"adowair",
"essamgouda97",
"mboukhalfa",
"seifrajhi",
"selaamimech",
"tengqm",
"vaibhav2107"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/37546",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1433828582
|
Improvement for k8s.io/docs/concepts/storage/dynamic-provisioning/
Provide sample usage where a pod is created that needs dynamically provisioned storage.
That is, provide the whole recipe, not just the part about creating a PVC to create a volume.
This should probably also be cross-referenced with the same setup using a ReplicaSet, where multiple volumes may be needed and manually creating individual ones is not practical. However, even the creation of a single volume for a pod to use dynamically provisioned storage should be explained, as this just makes the volume "appear" as needed (see the sketch below).
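A minimal sketch of the requested recipe (assuming a StorageClass named `standard` already exists): the PVC triggers dynamic provisioning, and the pod simply mounts the claim, so the volume "appears" as needed.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```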
/sig storage
Rather than add an example into https://k8s.io/docs/concepts/storage/dynamic-provisioning/, I recommend that we add a tutorial for this (perhaps based on an existing tutorial), and change https://k8s.io/docs/concepts/storage/dynamic-provisioning/ to link to that tutorial.
This is a substantial piece of work.
We should be wary of adding too much tutorial type information into a concept guide. Doing that can really make the whole topic hard to take in.
/language en
|
gharchive/issue
| 2022-11-02T21:50:19 |
2025-04-01T04:34:49.090330
|
{
"authors": [
"Kartik494",
"sftim",
"sjmudd"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/37677",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2358521607
|
[StorageVersionMigrator] Update docs with feature gates and runtime config required to use SVM
Ref :- https://kubernetes.slack.com/archives/C06S7LHB06B/p1718503343709779?thread_ts=1718299215.504679&cid=C06S7LHB06B
/assign
/sig api-machinary
/triage accepted
/sig api-machinery
What's SVM @nilekhc? Contributors might need more of a clue about what work needs doing.
What's SVM @nilekhc? Contributors might need more of a clue about what work needs doing.
It's storage version migrator
|
gharchive/issue
| 2024-06-17T23:59:21 |
2025-04-01T04:34:49.092733
|
{
"authors": [
"nilekhc",
"sftim"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/46862",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
374198394
|
Update pod-security-policy.md
Fix a typo:
Line 219: Service Acount->Service Account
/assign @pweil-
/lgtm
/approve
|
gharchive/pull-request
| 2018-10-26T02:03:59 |
2025-04-01T04:34:49.094387
|
{
"authors": [
"AdamDang",
"stewart-yu",
"xiangpengzhao"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/10740",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
379496107
|
Fix spell error in docs
Signed-off-by: Maxwell csuhp007@gmail.com
Fix spell error in docs
/assign @mistyhacks
|
gharchive/pull-request
| 2018-11-11T06:54:56 |
2025-04-01T04:34:49.097386
|
{
"authors": [
"huangqg"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/10951",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
384602196
|
Translate the pick-right-solution page into Korean.
Reference: #10716
This PR translates the pick-right-solution page into Korean.
/assign @gochist
@gochist Thanks for the review. I've applied your review comments and updated the PR.
|
gharchive/pull-request
| 2018-11-27T03:16:05 |
2025-04-01T04:34:49.099060
|
{
"authors": [
"ClaudiaJKang"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/11340",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
626571314
|
[WIP] Add MP Interactive Tutorial to Stateless Section
This provides the previously discussed/socialized and demoed Interactive Tutorial for configuring Java microservices in Kubernetes.
There will be two PRs of which only one should be accepted.
The first one which places the tutorial under the "Configuration" (/configuration) section in the ToC.
The second PR (this one) has the exact same content, but places it under "Stateless Applications" (/stateless-application) section in the ToC.
Please note that (in both PRs) the Katacoda tutorial is hosted at data-katacoda-id="jamiecoleman/kubeconfig" and not at kubernetes-bootcamp so that will likely need an action to be moved/migrated in Katacoda, and then the PR updated to point at the location in kubernetes-bootcamp
/assign @makoscafee
I followed what the bot instructed and assigned @xiangpengzhao to this PR, and @makoscafee to the second PR, which has the exact same content (as per the description in the first comment) but is just in a different location. Likely only one person should be assigned to both PRs (and only one PR should be approved).
I reviewed #21268
/assign
Hi @mbroz2 .
Just following up on this PR. Are you actively working on these changes?
Hello @mbroz2 .
I am closing this pull request.
At any time, you can reopen this PR to resume work.
/close
Hi, yes, was planning on having an update before the end of this week, just coordinating the changes to both the interactive tutorial (katacoda) and the text for the site. I'll reopen the PR when I have those updates ready.
Thank you for taking the time to provide the detailed feedback, we've been working on incorporating all of it into the Katacoda tutorial and PR for the tutorial page.
The Katacoda environment has been updated to reflect all the comments, with the exception of the HTTP basic auth being used. HTTP was purposefully used as it is not uncommon (especially in private cloud/kubernetes deployments) to terminate TLS at the proxy to reduce processing overhead; it also simplifies the example in the tutorial and avoids complexities around trusting/managing self-signed certs. In other words, we were avoiding the additional complexity and overhead of HTTPS. However, if you believe this is an issue and needs to be addressed, then we can change the environment to support HTTPS.
The tutorial topic page has also been changed to better match the other content under the "Configuration" (and Stateless/Stateful Applications) sections, for example "Configuring Redis using a ConfigMap" and "Example: Deploying PHP Guestbook application with Redis", with the main difference that instead of the instructions being placed on the page, the page links out to the Katacoda Interactive Tutorial environment. Like the other examples, a light overview of the concepts is provided first, followed by the example section where the user goes more in-depth with the subject, in a hands-on approach. The Kubernetes topics have been moved to the beginning of the section and expanded on, while the Java/MP topics have been moved down and shortened.
Regarding the above comment "At any time, you can reopen this PR to resume work." is that possible, or do I need to open a new PR?
Yes:
/reopen
This also looks like an unintentional /reopen. ☝️
/close
Hi @zacharysarah it was meant to be reopened and waiting for a review and approval/feedback, but I missed the bots instruction to assign @sftim. Reopening and assigning sftim, please see my previous comment for details.
/reopen
/assign @sftim
@mbroz2 this is marked as work in progress; if that's not what you want, you can edit the PR title to remove the “[WIP]”
/nudge
Hi @mbroz2 .
I don't think it makes sense to keep two PRs open for the same tutorial.
I left several comments in PR #21268.
Please select one PR to close.
Thanks!
Hi @kbhawkey,
I'm happy to close this PR and just continue the conversation in PR #21268. The reason 2 were open was to get feedback whether this guide should go into the Config section (#21268) or the Stateless App section (this PR), and each PR hosts the content under each of those categories.
If it's instead decided that the Stateless App section is a better place for this content then the Config section, then I'll just reopen this PR and close the other one.
|
gharchive/pull-request
| 2020-05-28T14:45:01 |
2025-04-01T04:34:49.108313
|
{
"authors": [
"kbhawkey",
"mbroz2",
"sftim",
"zacharysarah"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/21269",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
760209326
|
Fix formatting error in kubectl debug release note
A missing newline after the last list item caused an important note about plugins being hidden to be appended to an unrelated note about debugging nodes.
Page preview:
https://deploy-preview-25515--kubernetes-io-master-staging.netlify.app/blog/2020/12/08/kubernetes-1-20-release-announcement/#kubectl-debug-graduates-to-beta
Thanks @verb .
/lgtm
/approve
|
gharchive/pull-request
| 2020-12-09T10:35:35 |
2025-04-01T04:34:49.110557
|
{
"authors": [
"castrojo",
"kbhawkey",
"verb"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/25515",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
842646430
|
Fix line separation in concepts/architecture/nodes
https://github.com/kubernetes/website/pull/27232
Page still in sync.
/lgtm
/approve
|
gharchive/pull-request
| 2021-03-28T02:42:26 |
2025-04-01T04:34:49.111728
|
{
"authors": [
"npu21",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/27261",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
981754499
|
[ko] fix broken links in install kubectl windows page
It seems the same issue as #29570 exists in this localization as well.
/language ko
Deploy preview of the modified page https://deploy-preview-29574--kubernetes-io-main-staging.netlify.app/ko/docs/tasks/tools/install-kubectl-windows/
/lgtm
[As-is]
https://kubernetes.io/ko/docs/tasks/tools/install-kubectl-windows/
curl -LO https://dl.k8s.io/release//bin/windows/amd64/kubectl.exe
[Preview]
https://deploy-preview-29574--kubernetes-io-main-staging.netlify.app/ko/docs/tasks/tools/install-kubectl-windows/
curl -LO "https://dl.k8s.io/release/v1.22.0/bin/windows/amd64/kubectl.exe"
Nice fix! 😊
/lgtm
Thanks !
/approve
|
gharchive/pull-request
| 2021-08-28T06:48:16 |
2025-04-01T04:34:49.115331
|
{
"authors": [
"jihoon-seo",
"niteshseram",
"seokho-son",
"yoonian"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/29574",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1013496666
|
PLACEHOLDER: Document kubectl debug profiles for 1.23
KEP-1441: kubectl debug
Blocking PRs
kubernetes/kubernetes#105008
/assign
/assign
I'll be watching over this PR from the k8s release docs team.
/hold
this feature won't merge by code freeze, punting to next release.
|
gharchive/pull-request
| 2021-10-01T15:44:33 |
2025-04-01T04:34:49.117695
|
{
"authors": [
"chrisnegus",
"verb"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/29876",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1080548687
|
Note that subPathExpr uses round brackets.
I was silly and spent far too long using ${ENV_VAR} whereas the correct syntax is $(ENV_VAR), and wasn't able to understand what I was doing wrong. The syntax/issue makes sense once you know it, but is very hard to spot as a typo (imo/ime).
Adding the extra detail felt like it may be helpful for anyone else who may be confused in the same way.
Absolutely no stress if you think this is unnecessary, or that there is a better way to document it - just thought it might be helpful for others :-)
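For illustration, a pod spec using the correct round-bracket syntax might look like this (a minimal sketch based on the documented pattern; names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: container1
      image: busybox
      command: ["sh", "-c", "while true; do sleep 3600; done"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: workdir1
          mountPath: /logs
          # $(POD_NAME) is expanded; ${POD_NAME} would be treated literally
          subPathExpr: $(POD_NAME)
  volumes:
    - name: workdir1
      hostPath:
        path: /var/log/pods
```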
@bglimepoint Thanks for the small yet important tip. Please sign the CLA in order for the PR to be reviewed.
Thanks @tengqm, just chasing it now. It might drag until next week, as the person who can approve it is out of the office this week.
@bglimepoint Thanks for the small yet important tip. Please sign the CLA in order for the PR to be reviewed.
@tengqm, the CLA is now done :-)
/check-cla
I just realised that there are two CLAs. I'll chase down the second and post for a recheck; apologies for the noise/confusion.
Thanks @bglimepoint
Did you get a chance to look at the other CLA? I'm afraid we're mid-migration.
Thanks @bglimepoint
Did you get a chance to look at the other CLA? I'm afraid we're mid-migration.
Makes sense :-)
In theory, I've now done the second CLA too, although it might need a recheck?
/lgtm
/approve
Nice improvement
|
gharchive/pull-request
| 2021-12-15T04:22:16 |
2025-04-01T04:34:49.122086
|
{
"authors": [
"annajung",
"bglimepoint",
"sftim",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/30959",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1274819482
|
Fix api-reference shortcode for localized pages. (v1.22 backport)
This PR fixes issue #34178, where the api-reference shortcode does not work on localized pages.
It is a backport of PR #34272 to v1.22.
/area web-development
/kind bug
Thanks.
/lgtm
/approve
|
gharchive/pull-request
| 2022-06-17T09:46:15 |
2025-04-01T04:34:49.124005
|
{
"authors": [
"s-kawamura-w664",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/34354",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2007897920
|
Update trouble shooting to include the issue of etcd upgrade
For the issue reported recently (https://github.com/kubernetes/kubeadm/issues/2957), we'd better provide some tips to work around this known issue.
related: https://github.com/kubernetes/kubeadm/issues/2957
cc @sftim @neolit123
/sig cluster-lifecycle
this was based on the dev-1.29 branch (https://github.com/kubernetes/website/pull/43903), and @sftim suggested basing it on the main branch.
We're not using dev-1.28 any more as v1.28 is released. This PR should target main.
|
gharchive/pull-request
| 2023-11-23T10:22:19 |
2025-04-01T04:34:49.126888
|
{
"authors": [
"chendave",
"neolit123"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/44058",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2421980657
|
KEP-4191: Split Image Filesystem promotion to Beta
One-line PR description: Promote KEP-4191 "Split Image Filesystem" to Beta, update references to the KubeletSeparateDiskGC feature gate.
Issue link: https://github.com/kubernetes/enhancements/issues/4191
Other comments:
Starting from the upcoming Kubernetes release v1.31, the feature will be enabled by default, allowing users to deploy split image and container filesystems and benefit from the new functionality without enabling the feature gate.
Thus, update the documentation that references the KubeletSeparateDiskGC feature gate, ensuring that its state after the Beta promotion is correctly reflected.
Related:
https://github.com/kubernetes/website/pull/46951
https://github.com/kubernetes/kubernetes/pull/126205
/assign kwilczynski
/sig node
Explicit hold until this PR only covers one language and excludes changes to auto-generated pages.
(Ok to unhold once that's done)
/hold
Explicit hold until this PR only covers one language and excludes changes to auto-generated pages. (Ok to unhold once that's done)
@dipesh-rawat, done. Hopefully.
@kwilczynski It appears that this PR includes changes to files across multiple languages English and Chinese. However, docs follows different processes for each localization, and we typically don't accept pull requests that impact multiple languages simultaneously.
[...]
@dipesh-rawat, good to know! I had no idea... Hence, I went with every file in which I could find a reference to the feature gate.
@dipesh-rawat, please have a look again, thank you!
/remove-language zh
/remove-area localization
/unhold
PR now covers only single language.
/hold
Explicit hold to avoid accidental merges to main branch.
(Ok to unhold once this is targetted to dev-1.31)
/hold
Explicit hold to avoid accidental merges to main branch. (Ok to unhold once this is targetted to dev-1.31)
@dipesh-rawat, done. Moved to the dev-1.31 branch.
/cc kannon92
/cc mrunalp
/approve
/unhold
PR now targets dev-1.31
@dipesh-rawat The upstream PR has not been merged ... the hold was not supposed to be lifted.
https://github.com/kubernetes/kubernetes/pull/126205
The upstream PR has not been merged ... the hold was not supposed to be lifted.
Apologies for the oversight on my part. I will raise a PR to revert the changes from dev-1.31 that were merged prematurely.
BTW, there's an option to document the behavior more, by editing:
https://kubernetes.io/docs/concepts/architecture/garbage-collection/#image-maximum-age-gc
Note that it's enabled by default
https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals
|
gharchive/pull-request
| 2024-07-22T05:33:55 |
2025-04-01T04:34:49.138662
|
{
"authors": [
"dipesh-rawat",
"kwilczynski",
"sftim",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/47228",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
322514016
|
Hugo: Fix lists
Fix display of lists, add empty lines before them so that hugo formats
them correctly as lists and not as paragraphs.
/assign @steveperry-53
|
gharchive/pull-request
| 2018-05-12T14:04:49 |
2025-04-01T04:34:49.139862
|
{
"authors": [
"ajaeger"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/8511",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
326340815
|
[mod] Client library link is dead
The client library link is dead.
NG: https://kubernetes.io/docs/reference/client-libraries/
OK: https://kubernetes.io/docs/reference/using-api/client-libraries/
For example, this page contains the dead link.
https://kubernetes.io/docs/reference/#api-client-libraries
The old page is this, so the change is correct.
https://v1-9.docs.kubernetes.io/docs/reference/client-libraries/
Note:
The Chinese page doesn't have this link.
Hello bradamant3.
Nice to meet you. Please approve :)
I have already checked my changes on the deployed staging site:
https://deploy-preview-8734--kubernetes-io-master-staging.netlify.com
/assign @bradamant3
I heard from mistyhacks that you are reviewing the Hugo migration.
Thank you for your help!
/assign @steveperry-53
/assign @Bradamant3
The issue is registered:
https://github.com/kubernetes/website/issues/8768
/lgtm
/lgtm
/approve
|
gharchive/pull-request
| 2018-05-25T00:21:31 |
2025-04-01T04:34:49.146474
|
{
"authors": [
"Bradamant3",
"MasayaAoyama",
"erictune"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/8734",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1685277468
|
feat(): cluster auto deregister
Through this PR we are supporting cluster auto-deregistration by:
Adding a worker finalizer to the cluster CR
Having the cluster reconciler detect the finalizer and deletion timestamp
Creating a job to uninstall the helm chart
Notifying the controller about the registration status
Removing the worker finalizer after the cleanup job succeeds
merging for @Rahul-D78
@richiesebastian can you please merge this PR, since we are not able to merge because of this error:
The base branch requires all commits to be signed
|
gharchive/pull-request
| 2023-04-26T15:42:28 |
2025-04-01T04:34:49.151839
|
{
"authors": [
"Rahul-D78",
"richiesebastian"
],
"repo": "kubeslice/worker-operator",
"url": "https://github.com/kubeslice/worker-operator/pull/238",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2125498608
|
feature: design and document procedure for making KS releases and snapshots
Feature Description
We need to be able to make KubeStellar snapshots and releases, and do so without breaking users of previous ones. Other repositories in the kubestellar org should also have their own release processes.
What is a snapshot? Maybe it is a release that we do not advertise widely because it does not have all the user-visible features that we want in the next release. Maybe it is a git tag and an equal "image" tag (published container and published Helm chart(s)), with instructions.
One part of the problem is just defining the criteria that a release process should meet.
Proposed Solution
I do not have a proposal yet.
Want to contribute?
[x] I would like to work on this issue.
Additional Context
No response
Here is one thing giving us grief: IF we distribute a Helm chart through an OCI "image" repository then the OCI "image" tag has to equal the "version" in the Chart.yaml (https://helm.sh/docs/topics/registries/#oci-feature-deprecation-and-behavior-changes-with-v370). This means, for example, that we can not tell users to refer to such an "image" using a branch release tag UNLESS the chart also declares the branch release to be its "version" --- which would mean that as we advance along a release branch (make new patch releases) the chart gets overwritten.
Here is an initial draft of the criteria for a release process. Note that this set of criteria is not internally consistent.
MUST HAVE: A user of a given snapshot or release does not get perturbed by our work on later snapshots or releases. Not even if they restart something. Not even if they tear their environment all the way down and re-create it using the instructions for the old release that they are using. This includes having stable instructions and other documentation on the web somewhere.
STRONGLY DESIRED: The contents of a release or snapshot come from one git commit, tagged with a git tag that is identical to the release identifier.
STRONGLY DESIRED: A git tag is immutable: once it is associated with a given git commit, the tag is not later re-associated with a different commit.
STRONGLY DESIRED: Use semantic versions for release identifiers.
DESIRED: have a concept of release branch. That is a series of patch releases that share the same major.minor version number.
DESIRED: The user-visible container image references (and possibly others) use a tag that identifies a release branch (e.g., release-0.20) rather than one specific release. This makes it easier to deliver bug fixes, for example. This does NOT mean that there are no additional container image tags; indeed it would probably be positively good to have additional tags that identify specific builds, for example.
The above draft is not internally consistent: sometimes it speaks about a user locking onto an individual release, sometimes it speaks of a user using a release branch. I think the latter is more important. Hence the second draft below.
This draft focuses only on releases, supposing that a snapshot is either not-highly-advertised release or something that needs a distinct set of its own goals.
MUST HAVE: have a concept of release branch. That is a series of releases that share the same major.minor version number.
MUST HAVE: A user of a given release branch does not get perturbed by our work on other release branches. Not even if the user restarts something. Not even if they tear their environment all the way down and re-create it using the same instructions that they have been using. This includes having stable instructions and other documentation on the web somewhere.
MUST HAVE: the contents of main always work, including having accurate instructions for users and developers. This does not mean that main documents all release branches; quite the opposite, I think.
STRONGLY DESIRED: The contents of a particular release come from one git commit, tagged with a git tag that is identical to the release identifier.
STRONGLY DESIRED: A git tag is immutable: once it is associated with a given git commit, the tag is not later re-associated with a different commit.
STRONGLY DESIRED: Use semantic versions for release identifiers.
DESIRED: The user-visible container image references (and possibly others) use a tag that identifies a release branch (e.g., release-0.20) rather than one specific release. This does NOT mean that there are no additional container image tags; indeed it would probably be positively good to have additional tags that identify specific builds, for example.
DESIRED: the process of adding a new patch release to a release branch includes testing in-place migration from an earlier release in that branch to the new release.
There is useful background information in docs/content/v0.20/packaging.md, and an expansion of it in #1723.
I think that "reliable" snapshots could be useful for delivering advanced function to selected users. But there are practical issues: all the work necessary to have a fully working and correctly documented snapshot, and to create all the snapshot artifacts. Perhaps if creating artifacts could be ~fully automated with only limited reliability, e.g. no guarantees that all documentation is correct, the effort of creating snapshots could be small enough.
A similar alternative that other projects use is to simply update artifacts tagged "latest" and be discriminating about when these artifacts are updated.
Ideally the release process would require little manual effort so that releases can be created as needed.
Comments regarding the list started in https://github.com/kubestellar/kubestellar/issues/1732#issuecomment-1934680370
MUST HAVE: have a concept of release branch. That is a series of releases that share the same major.minor version number.
=> meaning: Minimal effort to create patch releases?
MUST HAVE: A user of a given release branch does not get perturbed by our work on other release branches...
=> meaning: All release artifacts required for a running system are immutable, and all 3rd party artifacts are referenced by version?
MUST HAVE: the contents of main always work, ...
=> I agree that this is invalid, i.e. NOT REQUIRED
STRONGLY DESIRED: The contents of a particular release come from one git commit ...
=> why isn't this MUST HAVE?
STRONGLY DESIRED: A git tag is immutable: once it is associated with a given git commit ...
=> agree but only after the associated release is made
STRONGLY DESIRED: Use semantic versions for release identifiers.
=> meaning major.minor.patch?
DESIRED: we can test a release before making it an actual release.
=> should be MUST HAVE
DESIRED: The user-visible container image references ...
=> no can grok
DESIRED: the process of adding a new patch release to a release branch includes testing in-place migration ...
=> does "in-place migration" mean support for updating a running system?
@eaepstein:
I would say that keeping down the effort needed to create a release is a criterion on its own. I should have written it down as such.
The main part of my thinking behind wanting a concept of release branch is so that we can both (a) say that a release is an immutable thing and (b) get bug fixes into users' hands without making a new minor release (that is, fix bugs by making patch releases). With users latched onto release branches rather than particular releases, that threads this needle.
All release artifacts required for a running system are immutable, and all 3rd party artifacts are referenced by version?
That would be my preference, but there is a problem with achieving that and also letting a user focus on a release branch. For a Helm chart being published through an OCI registry, a user wanting to use that chart has to fully specify the exact release to use, not a release branch.
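To illustrate, with the Helm CLI an OCI chart install must pin an exact version (registry path hypothetical):
# Per the point above, a branch-like version such as 0.20 cannot be named here.
helm install kubestellar oci://ghcr.io/kubestellar/core-chart --version 0.20.1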
the contents of main always work ... NOT REQUIRED
Why not?
Regarding semantic versions: they are described at semver.org , which allows more stuff after the patch.
Fans of goreleaser are going to be disappointed by saying the git tag is applied after making the release, because goreleaser requires applying the git tag first. I do not consider using goreleaser to be a requirement.
@MikeSpreitzer
the contents of main always work ... NOT REQUIRED
Why not?
Because of the potential extra burden of any manual testing required. Agree that it is a good idea if comprehensive release testing can be automatically run on PRs.
in-place migration
Now this is an important subject! Does OCM support in-place migration? What support does OCM have for managed cluster migration? Or even OCM support for managed clusters running different versions of k8s?
For goreleaser I think you can create a fake tag to initiate the release process and then re-tag. Also, what is the issue in tagging first (and removing the tag if the release creation fails)?
Helm charts are particularly difficult with respect to versioning, because each version of a chart is distinct. You can not have two versions (e.g., 0.20.1 and 0.20) that refer to the same chart. That is because inside the chart, in Chart.yaml, is an assertion of the chart's version!
Helm charts simply do not support a concept of a release branch that is a series of immutable things. You either use immutable un-aliased chart versions, or a mutable "version".
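To make that concrete, here is a minimal illustrative Chart.yaml (names hypothetical):
apiVersion: v2
name: kubestellar-core
version: 0.20.1  # the version is asserted inside the artifact itself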
@eaepstein , regarding "DESIRED: The user-visible container image references (and possibly others) use a tag that identifies a release branch (e.g., release-0.20) rather than one specific release". What I am trying to talk about there is instructions for a user about how to use a release branch. Such instructions need to refer a user to a git and/or container image and/or published Helm chart version, and the thought is that we would direct the user at a release branch rather than an individual release. The verbiage about the tags appearing in such instructions not being the only tags is so that there can also be tags for individual releases.
Sadly, as noted above, Helm does not play nice here.
(a) instructions for how to use the latest stable release, and (b) instructions for how to use the current contents of main.
Contributors may also try to validate function with private applications, and they rely on current usage instructions. But is it OK to make a change that invalidates current usage instructions? Is the same OK for architecture documentation?
What I am trying to talk about there is instructions for a user about how to use a release branch.
I thought a release branch is a mechanism to apply bug fixes to a previous release as a patch release. A patch could be to fix a performance problem that was affecting latency or utilization. If a user wants to reproduce a previous result they cannot depend on an artifact tagged 'latest'.
I agree that the tag latest is just an invitation to a bunch of pain.
I was thinking that a tag like release-0.20 could be useful for a user to use. Even though individual releases in that branch would be identified as 0.20.0, 0.20.1, and so on --- and the next release branch would be release-0.21.
I was thinking that a tag like release-0.20 could be useful for a user to use.
Agreed, except for example, someone doing performance tests on things peripheral to KS and doesn't want changes due to a new patch level to affect measurements.
I was also confused by the reference to a branch rather than a semantic version, as runtime usage accesses release artifacts not source branches.
But assuming the team adheres to the expectation that all minor versions > 0.20 are backwards compatible, then specifying only major version 0 should be more useful than pinning to a specific minor version. This also assumes that the artifacts tagged with just major version=0 are always updated to the latest released version 0.x.
Major version 0 is special in that it explicitly denies the usual backward-compatibility in its progression of minor versions.
I agree that a user should be able to lock onto a specific release. That would mean that picking up a bug fix involves explicitly moving to another release (presumably a later patch release of the same major.minor version). This is a bit of change in the goals, let me try to compose a new set.
Latest draft of criteria.
MUST HAVE: A user selects an individual release to use.
STRONGLY DESIRED: Use semantic versions for release identifiers.
MUST HAVE: A user of a given release does not get perturbed by our work on other releases. Not even if the user restarts something. Not even if they tear their environment all the way down and re-create it using the same instructions that they have been using. This includes having stable instructions and other documentation on the web somewhere.
MUST HAVE: have a concept of release branch. That is a series of releases that share the same major.minor version number. This means that contributors can contribute bug fixes to existing release branches, and new patch releases in those branches can be made.
MUST HAVE: the contents of main always work. This includes passing CI tests. This means that the instructions in main are always accurate. This does not mean that main documents all release branches; quite the opposite, I think. The instructions in main might document how a contributor works with main. The instructions in main might document what is the latest stable release and how a user can use it.
STRONGLY DESIRED: The contents of a particular release come from one git commit, tagged with a git tag that is "v" + the release identifier.
STRONGLY DESIRED: A git tag is immutable: once it is associated with a given git commit, the tag is not later re-associated with a different commit.
DESIRED: we can test a release before making it an actual release.
Let me illustrate the self-reference difficulties we have been giving ourselves. Look at the picture in https://github.com/kubestellar/kubestellar/blob/main/docs/content/v0.20/packaging.md#outline-of-publishing and the release process goals above. To be fully concrete, one of the things I would expect is that the contents of release 1.2.3 are built from the git commit tagged v1.2.3.
Suppose we have a commit XXX... that we want to make into a release.
Build container image(s) from commit XXX... and tag them 1.2.3 or v1.2.3.
Test them. Oops, what test refers to images tagged 1.2.3? None. OK, suppose we somehow have tests that take the tag as a parameter, somebody/something somehow is responsible for carefully invoking the tests with the right tag parameter.
Supposing the container images pass their tests, now we need to update the Helm chart(s) so that their defaults for image references use tag 1.2.3. Oops, now we made a new git commit, it is now YYY... rather than XXX....
Supposing that somehow the previous "oops" is not a problem, test the new Helm chart(s). What tests the chart(s) with version 1.2.3? Nothing in commit XXX... or YYY... unless those tests take the chart version as a parameter. Suppose we do that, and something/somebody is somehow responsible for invoking these tests correctly.
Update the user-facing instructions to refer to Helm chart(s) version 1.2.3. Oops, now we made another change in git, creating commit ZZZ...
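One way to dodge the first "oops" is the tag-as-parameter idea above; a minimal sketch (script name and flags hypothetical):
# Nothing in the commit under test hard-codes 1.2.3; the invoker passes it in.
IMAGE_TAG=1.2.3 CHART_VERSION=1.2.3 ./test/e2e/run.sh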
Suppose we have some document in this git repository that says "the latest stable release is X.Y.Z". Well, suppose that is what the document says in the git branch named main. What does that document say in a release branch? Suppose it is a release branch for a major.minor that has not yet been tested enough for us to know whether or not to call it "stable". We want to continue testing that release branch. Thus I conclude that testing of a release can not be directed by a file that declares the latest stable release.
FYI, here is a way to find the latest tag applied to any commit in a given branch.
mspreitz@mjs13 kubestellar % git checkout main #demonstrate querying different branch
Switched to branch 'main'
Your branch is up to date with 'upstream/main'.
mspreitz@mjs13 kubestellar % git for-each-ref --count=1 --sort=-creatordate 'refs/tags/*' --format='%(refname)' --merged=release-0.14
refs/tags/v0.14.2
Suppose it is a release branch for a major.minor that has not yet been tested enough ...
One approach: assume the tag for the new release has been chosen, and that the image and helm chart artifacts will include that tag info in their identifier. Release testing uses artifacts with the correct identifier. If an error is found and git must be updated, a new git tag is created, and affected artifacts with correct identifiers are created and used to overwrite the previous ones. This iterates until testing succeeds, at which point a git tag with the desired ID is associated with the commit that the working tag points to.
If there is another approach, the steps should be describable in a similar fashion.
@eaepstein: If I understand you correctly, you are suggesting that for some git tags, the associated commit changes over time. I would prefer to avoid that. I think that we can avoid that with no comparative disadvantage. I think your suggestion involves creating a release before testing it, and the alternatives do too. As long as we can identify a release with an identifier like 0.20.0-rc1 and then, if testing is successful, add the identifier 0.20.0. I think the textual format of those tags makes it immediately obvious what the quality and implications of them are.
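To make the naming concrete, a sketch of the git side (commit hash is a placeholder):
git tag v0.20.0-rc1 abc1234   # candidate: publish and test it
git tag v0.20.0 abc1234       # same commit, anointed once testing succeeds
git push upstream v0.20.0-rc1 v0.20.0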
@MikeSpreitzer OK, that sounds fine. What about the artifacts? For pre-release testing, what container image and helm chart names are used? Are they pointing at artifacts with 0.20.1-rc1?
This also connects to the website, since that also has specific technology for release branches.
@eaepstein : #1758 proposes naming test releases $something-rc$number. I think the meaning of that is immediately clear to everyone.
@MikeSpreitzer Sorry I was not clear on the issue. For example, release testing will use release helm charts that reference container images. The helm charts are in the git repo. If the charts reference test image tags, how will they be changed to use the official release image tags without updating git and requiring a new git tag?
I'm confused about the ordering of steps: git release tag, image building, release testing, updating helm charts, etc.
@eaepstein : Let's pursue this in #1758 . I think that you should already find there an answer about how the published Helm chart of release 1.2.3 references the published image of release 1.2.3. I will review and may add some emphasis. As for step-by-step instructions, it is simple: add a git tag --- that is all that a contributor needs to do in order to trigger publishing of release artifacts.
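For illustration, the tag-triggered automation can be as small as this (a sketch assuming GitHub Actions; details hypothetical):
# .github/workflows/release.yml
on:
  push:
    tags: ['v*']  # any v-prefixed tag kicks off publishing of release artifacts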
@eaepstein : I have updated #1758
Another part of the problem is divergence/convergence between main and a new release branch as that new minor release is first being created and debugged. Generally there will be bugs found and fixed during the release process (-rc1 will have bugs, fixed in -rc2, and so on). How do these fixes appear in main too? The simplest would be to only create the new release branch after release testing and bug fixing have reached a successful conclusion with creation of new release X.Y.Z (sans -rc$N). Another approach would involve creating the release branch earlier, but (somehow; manually?) keep it pointing to the same commit as main until after X.Y.Z is anointed. Yet another approach, with even more manual chores, would be to cherry-pick from one branch to the other. This last is what has to happen anyway with fixes made after X.Y.Z is anointed --- but we have historically had less of these than fixes done during the test/fix cycle leading up to release 0.Y.0.
|
gharchive/issue
| 2024-02-08T15:47:34 |
2025-04-01T04:34:49.204457
|
{
"authors": [
"MikeSpreitzer",
"eaepstein",
"ezrasilvera"
],
"repo": "kubestellar/kubestellar",
"url": "https://github.com/kubestellar/kubestellar/issues/1732",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2086849764
|
:lady_beetle: Set page section variant light
Issue:
in console 4.15 changed some css classes, one of them is setting default background to gray
Fix:
Set background implicitly to "light", so it will be white instead of gray
Screenshots:
Before:
addressed by: https://github.com/kubev2v/forklift-console-plugin/pull/831
|
gharchive/issue
| 2024-01-17T19:35:48 |
2025-04-01T04:34:49.208585
|
{
"authors": [
"yaacov"
],
"repo": "kubev2v/forklift-console-plugin",
"url": "https://github.com/kubev2v/forklift-console-plugin/issues/832",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
525003602
|
Fix typos
Fix typos in text
/lgtm
/approve
|
gharchive/pull-request
| 2019-11-19T13:17:30 |
2025-04-01T04:34:49.218873
|
{
"authors": [
"iranzo",
"ptrnull"
],
"repo": "kubevirt/katacoda-scenarios",
"url": "https://github.com/kubevirt/katacoda-scenarios/pull/8",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1791991786
|
Switch to bochs display for EFI VMs
What this PR does / why we need it:
Bochs is more powerful than VGA emulation, and OVMF (our UEFI firmware) has support for it, making it transparent to guests.
Switching to Bochs for UEFI guests just makes sense.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Release note:
UEFI guests now use Bochs display instead of VGA emulation
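As a rough sketch of the selection logic (function name and exact checks are illustrative, not necessarily this PR's code):
import v1 "kubevirt.io/api/core/v1"

// videoModelFor picks the video device model based on the VMI's firmware.
func videoModelFor(vmi *v1.VirtualMachineInstance) string {
	fw := vmi.Spec.Domain.Firmware
	if fw != nil && fw.Bootloader != nil && fw.Bootloader.EFI != nil {
		return "bochs" // OVMF understands the Bochs display device
	}
	return "vga" // legacy BIOS guests keep VGA emulation
}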
/retest
Are we okay with the change in behaviour for existing VirtualMachines by unconditionally switching to bochs on restart?
I know we don't target the VDI use case but I'm just wondering if we still have any VM ABI guarantees to keep for existing VirtualMachines?
/cc
Are we okay with the change in behaviour for existing VirtualMachines by unconditionally switching to bochs on restart?
I know we don't target the VDI use case but I'm just wondering if we still have any VM ABI guarantees to keep for existing VirtualMachines?
Sorry, what's "the VDI use case"?
I don't think we guarantee that the domain XML will stay the same across upgrades, for the life of a VM. Or if we do I'm not aware of it.
Either way, the real life impact on VMs is minor, graphics will just slightly improve.
Let's wait and see if other reviewer have more insight on this. Thank you for the review!
Are we okay with the change in behaviour for existing VirtualMachines by unconditionally switching to bochs on restart?
I know we don't target the VDI use case but I'm just wondering if we still have any VM ABI guarantees to keep for existing VirtualMachines?
Sorry, what's "the VDI use case"?
Virtual Desktop Infrastructure, think oVirt with the SPICE protocol etc.
I don't think we guarantee that the domain XML will stay the same across upgrades, for the life of a VM. Or if we do I'm not aware of it. Either way, the real life impact on VMs is minor, graphics will just slightly improve. Let's wait and see if other reviewer have more insight on this. Thank you for the review!
Okay, appreciate it likely doesn't matter in this situation but it does really matter in others IMHO so I was wondering if we had a general policy in place. If we did we'd need to make this somehow opt-in for existing VirtualMachines.
/cc
@jean-edouard @victortoso I think an alternative solution to enhancing the API would be using hook sidecars, if its only for testing. WDYT?
I think an alternative solution to enhancing the API would be using hook sidecars, if its only for testing. WDYT?
Using sidecars for testing is fine but I'm not sure the intention of the PR was only testing. I added a comment, more worried about changing the default itself considering different guest OSes. All in all, I support customization that could enhance VDI solutions. I'm not knowledgeable about bochs but it does seem to be widely supported.
@jean-edouard @victortoso @lyarwood I am not sure what the impact on the project would be if we drop VGA compatibility in favor of using a reduced code-base with a smaller attack surface. This change seems to be non-backward-compatible in terms of legacy VGA support. Thus it sounds reasonable to initiate a deprecation procedure for that, or to expose it via API and set the bochs mode via VirtualMachinePreference
/cc @vladikr @davidvossel @rmohr
Thoughts?
Yeah, adding an API field would allow people to easily test both and report on performance.
If/when we deprecate legacy BIOS, we'll just deprecate this new field along with it.
If we are concerned that it breaks something but we eventually want to switch to it as a default, how about a featuregate?
/retest
/retest
I think the approach is valid but this is missing both unit and functional tests if you have some time to add these?
Absolutely yes, I was waiting to get feedback on the approach so I didn't write tests for nothing :)
So I guess this is my cue! Thank you
I think the approach is valid but this is missing both unit and functional tests if you have some time to add these?
Absolutely yes, I was waiting to get feedback on the approach so I didn't write tests for nothing :) So I guess this is my cue! Thank you
Added unit test coverage. Not sure much can be functested here. PTAL!
Added unit test coverage. Not sure much can be functested here.
Perhaps showing that bochs driver was loaded? along the line of $(lsmod | grep -i bochs | wc -l) > 0) or identifying the device with lspci, e.g: In my fedora 37 pet VM
(toso)$ lspci -vv
...
00:01.0 Display controller: Device 1234:1111 (rev 02)
Subsystem: Red Hat, Inc. Device 1100
Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Region 0: Memory at fc000000 (32-bit, prefetchable) [size=16M]
Region 2: Memory at fea0c000 (32-bit, non-prefetchable) [size=4K]
Expansion ROM at fea00000 [disabled] [size=32K]
Capabilities: <access denied>
Kernel driver in use: bochs-drm
Kernel modules: bochs
...
So, checking for 1234:1111 should be fine. lspci -d 1234:1111
Just a suggestion :)
Cheers,
Added unit test coverage. Not sure much can be functested here.
Perhaps showing that bochs driver was loaded? ... So, checking for 1234:1111 should be fine. lspci -d 1234:1111
That should be doable, but it feels like it's outside the scope of KubeVirt... I feel like it would test libvirt/qemu/Fedora, but not so much this project.
Any other opinion on that is more than welcome, I just don't want to add unnecessary burden to the e2e lanes.
/lgtm
Added unit test coverage. Not sure much can be functested here.
Perhaps showing that bochs driver was loaded? ... So, checking for 1234:1111 should be fine. lspci -d 1234:1111
That should be doable, but it feels like it's outside the scope of KubeVirt... I feel like it would test libvirt/qemu/Fedora, but not so much this project. Any other opinion on that is more than welcome, I just don't want to add unnecessary burden to the e2e lanes.
I agree with @jean-edouard here, that'd be testing libvirt rather than KubeVirt, I don't think it's really necessary, we should just make sure we craft the right XML for libvirt to consume.
/retest
|
gharchive/pull-request
| 2023-07-06T17:54:49 |
2025-04-01T04:34:49.239844
|
{
"authors": [
"acardace",
"enp0s3",
"jean-edouard",
"lyarwood",
"rmohr",
"victortoso",
"xpivarc"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/10056",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1995038122
|
[release-1.0] Manual cherry-pick of 10058
Manual backport of https://github.com/kubevirt/kubevirt/pull/10058
What this PR does / why we need it:
For correctly deploying Windows Shared Cluster Filesystem, we need to be able to configure the error policy to report.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2249846
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Release note:
Add field errorPolicy for disks
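For illustration, the new field sits on each disk (a minimal sketch; disk name and bus are placeholders):
spec:
  domain:
    devices:
      disks:
        - name: shared-disk
          errorPolicy: report  # value taken from the PR description
          disk:
            bus: scsi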
/cc @akalenyu @mhenriks
looks good, any special conflicts we should pay attention to? or was this the only pain point? https://github.com/kubevirt/kubevirt/compare/e43f686fb602d732e02940a4dfb6d4c179cfacfe..2475468b37e8a79f1557a95b80098e5f6348d8b3
looks good, any special conflicts we should pay attention to? or was this the only pain point? https://github.com/kubevirt/kubevirt/compare/e43f686fb602d732e02940a4dfb6d4c179cfacfe..2475468b37e8a79f1557a95b80098e5f6348d8b3
Not really, only that function was refactored between this feature and the 1.0 release.
/hold
we need to clarify if this is an acceptable backport
/test pull-kubevirt-e2e-kind-1.25-vgpu
/test pull-kubevirt-e2e-kind-1.25-vgpu-1.0
From the discussion https://github.com/kubevirt/kubevirt/pull/10736#discussion_r1395635043, this seems the preferable way to proceed with setting the error policy for Windows; I'm unholding as a consequence of the discussion
/unhold
BTW, as we now have sig-api regular meetings, maybe makes sense to bring this up there?
Follow-up of the discussion: #10736 (comment)
This backport PR changes the API. However, it does not break compatibility, as it only brings new fields and constants. The alternative solution to later backport #10736 introduces a change in the default behavior.
From my perspective, this PR is a better alternative. The only issue I see is the downgrade path. If we want to support downgrades e.g. between patch versions (e.g. 1.1.1 -> 1.1.0) the code here may break migration of running workloads. If this is not a big concern for the project, I think we shall go with it. Other thoughts?
We officially don't support downgrades. There are no tests for it as of now and I did not hear an interest in supporting it for now.
/approve
+1 fwiw :)
Thanks everyone for the quick help!
|
gharchive/pull-request
| 2023-11-15T15:45:53 |
2025-04-01T04:34:49.249755
|
{
"authors": [
"akalenyu",
"alicefr",
"mhenriks",
"vasiliy-ul",
"vladikr",
"xpivarc"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/10730",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
447113276
|
q35 type is more common machine type
What this PR does / why we need it:
RHEL images would not have pc-q35-3.0 as machine types rather it would have pc-q35-rhel7.6.0 , pc-q35-rhel8.0.0, e.t.c
We encountered this issue while running these tests on CNV builds.
qemu-kvm: -machine pc-q35-3.0,accel=kvm,usb=off,dump-guest-core=off: unsupported machine type
Use -machine help to list supported machines
The supported machine types on RHEL virt-launcher pods are as below:
sh-4.4# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.0 (Ootpa)
sh-4.4# /usr/libexec/qemu-kvm -machine help
Supported machines are:
pc RHEL 7.6.0 PC (i440FX + PIIX, 1996) (alias of pc-i440fx-rhel7.6.0)
pc-i440fx-rhel7.6.0 RHEL 7.6.0 PC (i440FX + PIIX, 1996) (default)
pc-i440fx-rhel7.5.0 RHEL 7.5.0 PC (i440FX + PIIX, 1996)
pc-i440fx-rhel7.4.0 RHEL 7.4.0 PC (i440FX + PIIX, 1996)
pc-i440fx-rhel7.3.0 RHEL 7.3.0 PC (i440FX + PIIX, 1996)
pc-i440fx-rhel7.2.0 RHEL 7.2.0 PC (i440FX + PIIX, 1996)
pc-i440fx-rhel7.1.0 RHEL 7.1.0 PC (i440FX + PIIX, 1996)
pc-i440fx-rhel7.0.0 RHEL 7.0.0 PC (i440FX + PIIX, 1996)
q35 RHEL-8.0.0 PC (Q35 + ICH9, 2009) (alias of pc-q35-rhel8.0.0)
pc-q35-rhel8.0.0 RHEL-8.0.0 PC (Q35 + ICH9, 2009)
pc-q35-rhel7.6.0 RHEL-7.6.0 PC (Q35 + ICH9, 2009)
pc-q35-rhel7.5.0 RHEL-7.5.0 PC (Q35 + ICH9, 2009)
pc-q35-rhel7.4.0 RHEL-7.4.0 PC (Q35 + ICH9, 2009)
pc-q35-rhel7.3.0 RHEL-7.3.0 PC (Q35 + ICH9, 2009)
none empty machine
So, thinking we could make use of just q35 which is common to both RHEL and non-RHEL virt images.
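For illustration, the domain XML then only names the alias, and the local QEMU resolves it to its own q35 build (minimal fragment):
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
</os>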
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Release note:
NONE
My bad, will fix and push it again.
|
gharchive/pull-request
| 2019-05-22T12:41:13 |
2025-04-01T04:34:49.253879
|
{
"authors": [
"kbidarkar"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/2308",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
958141237
|
use latest release-tool image from quay.io
The release-tool was still being pulled from Docker Hub. New release-tool builds are now in Quay.
NONE
/lgtm
/approve
/retest
|
gharchive/pull-request
| 2021-08-02T13:22:15 |
2025-04-01T04:34:49.255536
|
{
"authors": [
"davidvossel",
"mhenriks"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/6175",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1316883671
|
Add scale OWNERS
SIG-scale maintains tooling in /hack, /tools, and /tests/performance, and some code in /pkg/monitoring/perfscale. Add approvers and reviewers for this code to the owners file.
NONE
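A minimal sketch of such an OWNERS file (membership illustrative):
# tests/performance/OWNERS
approvers:
  - rthallisey
reviewers:
  - rthallisey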
/retest
/retest
@xpivarc @vladikr would either of you feel comfortable with adding a lgtm to this?
Absolutely, great work @rthallisey , and I am sure we will see more.
/approve
/unhold
thanks @xpivarc @davidvossel
/remove-kind release-note-label-needed
/remove-do-not-merge release-note-label-needed
/test pull-kubevirt-fossa
/test pull-kubevirt-fossa
/refresh
|
gharchive/pull-request
| 2022-07-25T13:59:12 |
2025-04-01T04:34:49.259075
|
{
"authors": [
"alaypatel07",
"rthallisey",
"xpivarc"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/8174",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
429242821
|
ClusterOverview: Improve prometheus query for CPU Capacity
and fix Capacity tests.
Pull Request Test Coverage Report for Build 1013
1 of 1 (100.0%) changed or added relevant line in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.2%) to 87.448%
Totals
Change from base Build 1010:
0.2%
Covered Lines:
3072
Relevant Lines:
3365
💛 - Coveralls
Pull Request Test Coverage Report for Build 1234
1 of 1 (100.0%) changed or added relevant line in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.2%) to 87.448%
Totals
Change from base Build 1010:
0.2%
Covered Lines:
3072
Relevant Lines:
3365
💛 - Coveralls
@mareklibra is there any PR for web-ui which changes CPU query to get CPU usage in % ? Or maybe you want to use cpuUtilization instead of cpuUsed ?
@rawagner , I forgot to push :-) https://github.com/kubevirt/web-ui/pull/263
|
gharchive/pull-request
| 2019-04-04T12:09:43 |
2025-04-01T04:34:49.271528
|
{
"authors": [
"coveralls",
"mareklibra",
"rawagner"
],
"repo": "kubevirt/web-ui-components",
"url": "https://github.com/kubevirt/web-ui-components/pull/308",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1032146090
|
Handle Docker Hub
It is true that right now Docker Hub doesn't support OCI artifacts, however... our code would fail when a user attempts to interact with the docker hub under certain circumstances.
This would happen because in many places of our code we rely on the url crate to understand whether a policy is located on a http(s), file or registry location.
See the following gist about how things can go wrong under certain circumstances: https://gist.github.com/flavio/7f60b12791b3de111f942896756fa24c
What do you think is the best approach?
In the context of one of the provided examples, registry://flavio/awesome-stuff:v0.1.0, flavio is a valid host and I think it is reasonable to treat it as such. I don't think we should provide any special logic to be smart about the Docker Hub.
In the future, when the Docker Hub supports OCI artifacts, I think it's reasonable to ask the user to refer to them as registry://registry.hub.docker.com/... or a host+path that will serve this purpose.
I understand the defaulting logic at the docker+podman image container level, but I don't think we should implement anything like that.
Unless I misunderstood something I would close this issue as invalid.
I don't think we should allow registry://busybox:v0.1.0, this is not a well formed URL because v0.1.0 is not a valid port.
I think we should only allow valid URL's (always with a host, and optionally with a port).
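A minimal sketch with the url crate showing the distinction:
use url::Url; // url = "2"

fn main() {
    // "v0.1.0" sits where a numeric port must go, so parsing fails.
    assert!(Url::parse("registry://busybox:v0.1.0").is_err());
    // A host plus an optional numeric port parses fine.
    assert!(Url::parse("registry://registry.example.com:5000/policy:v0.1.0").is_ok());
}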
The problem is that docker pull busybox:v0.1.0 is a valid reference. This is a bit extreme, I know... because I don't expect wasm policies to be stored as top level objects into the docker hub.
Because of that I think we can close it
|
gharchive/issue
| 2021-10-21T07:19:13 |
2025-04-01T04:34:49.275646
|
{
"authors": [
"ereslibre",
"flavio"
],
"repo": "kubewarden/policy-fetcher",
"url": "https://github.com/kubewarden/policy-fetcher/issues/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
277797069
|
bug fix
the test was not done well
Yes, I have just implemented the test as the comments say it should be done.
Nobody saw it because they use an appId
|
gharchive/pull-request
| 2017-11-29T15:29:44 |
2025-04-01T04:34:49.278373
|
{
"authors": [
"bubbatls"
],
"repo": "kudago/smart-app-banner",
"url": "https://github.com/kudago/smart-app-banner/pull/96",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
658099034
|
kudo init --dry-run -o yaml should work even without valid cluster connection
What happened:
./kubectl-kudo_0.15.0_darwin_x86_64 init -o yaml --dry-run
Errors
failed to detect any valid cert-manager CRDs. Make sure cert-manager is installed.
Error: failed to verify installation requirements
What you expected to happen:
Dry run to pass
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
duplicate of #1590
|
gharchive/issue
| 2020-07-16T11:12:14 |
2025-04-01T04:34:49.280608
|
{
"authors": [
"alenkacz"
],
"repo": "kudobuilder/kudo",
"url": "https://github.com/kudobuilder/kudo/issues/1607",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
144011332
|
Tapping result does not autofill Safari
A selected result does not fill the input on Safari. I've tested this on multiple types of iPhone as well as chrome's inspector in iPhone mode.
+1
I also have a similar issue when selecting a certain address from the auto suggestion.
|
gharchive/issue
| 2016-03-28T15:57:14 |
2025-04-01T04:34:49.288030
|
{
"authors": [
"shankarregmi",
"traviskindred"
],
"repo": "kuhnza/angular-google-places-autocomplete",
"url": "https://github.com/kuhnza/angular-google-places-autocomplete/issues/87",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2432051806
|
Issue with port config on zone-ingress page
What happened?
https://kuma.io/docs/2.8.x/production/cp-deployment/zone-ingress/#zone-ingress
The port configs 10000 and 10001 in the [Universal] tab are problematic.
triage: it's the example in the docs we should change
|
gharchive/issue
| 2024-07-26T11:40:35 |
2025-04-01T04:34:49.294530
|
{
"authors": [
"bartsmykla",
"johncowen"
],
"repo": "kumahq/kuma-website",
"url": "https://github.com/kumahq/kuma-website/issues/1861",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2305620783
|
[Enhancement]: Master typing -start/restart button functions need to upgrade
Do you want to have the enhancement of existing game ? 😀 Describe yourself..
The letters should pop up only after pressing the start button, and the restart button should be named reset because it clears the timer as well as the game's state.
Describe the solution you'd like
[ ] - Launch the game application.
[ ] - Navigate to the main menu or game interface where the start/restart button is located.
[ ] - Attempt to click on the start or restart button.
Select program in which you are contributing
GSSoC24
Code of Conduct
[X] I follow CONTRIBUTING GUIDELINE of this project.
/assign
Hey @Tasnuva12 !
Thank you for raising an issue 💗
You can self assign the issue by commenting /assign in comment 😀
Make sure you follow CODE OF CONDUCT and CONTRIBUTING GUIDELINES 🚀
Don’t Forget to ⭐ our GameZone🎮
Make sure you join our Discord🕹️
Hey @Tasnuva12 ! Thank you so much for your raising the issue💗
It’s all yours, you can come anytime again and make some contributions! 🚀
Alone, we can do little, but together we can do so much! 😇
|
gharchive/issue
| 2024-05-20T10:05:57 |
2025-04-01T04:34:49.327436
|
{
"authors": [
"Tasnuva12",
"kunjgit"
],
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/issues/3537",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1727851736
|
[New game]: Catch the fruit
🎮 Game Request
Game Logic:
The game is a simple catch-the-falling-fruit game where the player controls a character to catch falling fruits.
The player can move horizontally within the game container using the mouse.
Fruits fall from the top of the game container and the player needs to position themselves to catch the falling fruits.
When a fruit is caught, the player's score increases.
If a fruit reaches the bottom without being caught, it resets to the top, and the game continues.
The game continues indefinitely, allowing the player to try to achieve the highest score possible.
Point down the features
Features of the Game:
Player Control: The player can move horizontally within the game container by moving the mouse.
Falling Fruits: Fruits fall from the top of the game container and the player needs to catch them.
Score Tracking: The player's score is displayed at the top of the game container and increases each time a fruit is caught.
Continuous Gameplay: The game continues indefinitely, allowing the player to catch as many fruits as possible and try to achieve a high score.
Dynamic Difficulty: The speed at which the fruits fall can be adjusted to increase the difficulty as the game progresses.
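As a minimal sketch of the player-control feature (element IDs hypothetical):
const container = document.getElementById('game-container');
const player = document.getElementById('player');
container.addEventListener('mousemove', (e) => {
  // Follow the mouse horizontally, clamped inside the container.
  const rect = container.getBoundingClientRect();
  const x = Math.min(Math.max(e.clientX - rect.left, 0), rect.width - player.offsetWidth);
  player.style.left = x + 'px';
});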
Select program in which you are contributing
GSSoC23
Code of Conduct
[X] I follow CONTRIBUTING GUIDELINE of this project.
Hey @sudarshan-hub !
Thank you for raising an issue 💗
You can self assign the issue by commenting /assign in comment 😀
Make sure you follow CODE OF CONDUCT and CONTRIBUTING GUIDELINES 🚀
Don’t Forget to ⭐ our GameZone🎮
Make sure you join our Discord🕹️
Hello @sudarshan-hub, Time's Uppp!⏰
Sorry for closing your issue!
But it's more than a week since we haven't received anything from your side 😢 .
Come up with new ideas, create a new issue and make sure you finish it within a week! 🔥
All the best! 🚀
Happy Hacking! 💗
Hey @sudarshan-hub ! Thank you so much for your raising the issue💗
It’s all yours, you can come anytime again and make some contributions! 🚀
Alone, we can do little, but together we can do so much! 😇
|
gharchive/issue
| 2023-05-26T15:40:39 |
2025-04-01T04:34:49.335819
|
{
"authors": [
"kunjgit",
"sudarshan-hub"
],
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/issues/450",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1837781598
|
Edited buttons and fonts.
PR Description 📜
Please include summary related to the issue you have fixed and describe your PR in brief over here by specifying the issue number on which you were working below
Fixes # <your_issue_number>
Mark the task you have completed ✅
[ ] I follow CONTRIBUTING GUIDELINE & CODE OF CONDUCT of this project.
[ ] I have performed a self-review of my own code or work.
[ ] I have commented my code, particularly in hard-to-understand areas.
[ ] My changes generates no new warnings.
[ ] I have followed proper naming convention showed in CONTRIBUTING GUIDELINE
[ ] I have added screenshot for website preview in assets/images
[ ] I have added entries for my game in main README.md
[ ] I have added README.md in my folder
[ ] I have added working video of the game in README.md (optional)
[ ] I have specified the respective issue number for which I have requested the new game.
Add your screenshots(Optional) 📸
Thank you soo much for contributing to our repository 💗
Thank you @bhar1gitr ,for creating the PR and contributing to our GameZone 💗
Review team will review the PR and will reach out to you soon! 😇
Make sure that you have marked all the tasks that you are done with ✅.
Thank you for your patience! 😀
Hey @bhar1gitr,
Please make sure to link the relevant issue using the appropriate syntax, such as "#issueNumber" 👀.
Follow the proper guideline and make a new PR again 😀.
Happy Hacking 💗
Thank you @bhar1gitr , for your valuable time and contribution in our GameZone 💗.
It’s our GameZone, so Let’s build this GameZone altogether !!🤝
Hoping to see you soon with another PR again 😇
Wishing you all the best for your journey into Open Source🚀
|
gharchive/pull-request
| 2023-08-05T14:14:20 |
2025-04-01T04:34:49.344178
|
{
"authors": [
"bhar1gitr",
"kunjgit"
],
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/pull/2776",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1875451743
|
How to prepare input data?
Hi,
What's the format of the input data? I have 10X UMI data and loom files from Velocyto. The PhyloVelo website does not describe the input data in detail. For example, "sd = pv.scData(count=xx, phylo_tree=xx)". Could you give an example for human 10X scRNA-seq?
Hi, thank you for your interest in PhyloVelo. PhyloVelo requires lineage tracing data to estimate the velocity. The input format for PhyloVelo is that you need to provide a UMI count matrix X (pandas.DataFrame) and a phylogenetic tree T (biopython.Phylo.BaseTree) as arguments to the pv.scData function, such as sd=pv.scData(count=X, phylo_tree=T). 10X scRNA-seq alone is not suitable for PhyloVelo analysis. I hope this helps. Please let me know if you have any other questions or feedback.
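A minimal sketch of assembling the two inputs (file names hypothetical; the import alias pv follows the thread):
import pandas as pd
from Bio import Phylo
import phylovelo as pv

counts = pd.read_csv('umi_counts.csv', index_col=0)  # UMI count matrix as a DataFrame
tree = Phylo.read('lineage_tree.nwk', 'newick')      # phylogenetic tree from lineage tracing
sd = pv.scData(count=counts, phylo_tree=tree)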
Hi, thank you for your interest in PhyloVelo. PhyloVelo requires lineage tracing data to estimate the velocity. ...
I'm very interested in this new tool and working on it. Because I am new to the analysis of scRNA-seq data, I don't know how to generate a phylogenetic tree or obtain Newick data. Could you suggest any suitable tools for creating a phylogenetic tree? Perhaps URD or Monocle might be suitable? Thank you in advance for your guidance.
Hi @iceautumn,
Thank you for your interest in our PhyloVelo work. To reconstruct a phylogenetic tree, you need to have DNA-seq data as the input. There are different algorithms that can be used for this task, such as maximum parsimony, neighbor joining or maximum likelihood. In our PhyloVelo work, we used the following tools to implement these algorithms:
R package phangorn for maximum parsimony
R package ape for neighbor joining
IQ-TREE for maximum likelihood
Please note that these methods require DNA sequences as the input, and scRNA-seq data alone cannot be used for this purpose.
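If you prefer to stay in Python, Biopython can also build a neighbor-joining tree from aligned DNA sequences (a minimal sketch; file name hypothetical):
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read('barcodes.fasta', 'fasta')          # aligned DNA sequences
dm = DistanceCalculator('identity').get_distance(aln)  # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)                # neighbor-joining tree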
I hope this helps you understand our PhyloVelo work better.
Best regards,
Kun
|
gharchive/issue
| 2023-08-31T12:40:48 |
2025-04-01T04:34:49.350646
|
{
"authors": [
"Junjie-Hu",
"iceautumn",
"kunwang34"
],
"repo": "kunwang34/PhyloVelo",
"url": "https://github.com/kunwang34/PhyloVelo/issues/10",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1335427462
|
Integrate with argoCD to support continuous delivery
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #51
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
/lgtm
|
gharchive/pull-request
| 2022-08-11T03:33:31 |
2025-04-01T04:34:49.353265
|
{
"authors": [
"hzxuzhonghu",
"zirain"
],
"repo": "kurator-dev/kurator",
"url": "https://github.com/kurator-dev/kurator/pull/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1918633485
|
Support optional fields for allOf field object
EDIT: See below comment for clarification
Fields by default are optional in OpenAPI 3, unless mentioned in the required section of the object.
Currently, optional fields of objects defined inline in an allOf field are mandatory in the generated code. I had to manually make the fields optional by changing the struct fields to Option<T>.
Can you share where you're seeing this? Please provide an example OpenAPI spec, the generated libninja code, and point out the errors.
I'd need to look up examples, but from what I've seen in real world specs, I believe many specs have always-present data, at the very least in responses, even though the fields aren't technically marked as required, so it's absurdly pedantic to make all those fields optional. Perhaps libninja could have a strict mode that would generate them as option.
Actually, my bad: most fields are marked optional unless they are marked required. I encountered an edge case where I use an allOf and specify an object as one of the object definitions (rather than referencing it).
I have created an example here: https://github.com/prabhpreet/libninja-allof-example/ with the spec https://github.com/prabhpreet/libninja-allof-example/blob/main/petstore.yaml
Consider the PetTag object in the spec below:
https://github.com/prabhpreet/libninja-allof-example/blob/28fbbfeefc25b0b5653d4b36fba7f32e2d0ecd3e/petstore.yaml#L85-L119
The field weight in the object is not required but is not wrapped in an Option enum in the generated code. However, if I reference an entire object (eg. here PetHealth's optional field neutered here), the entire object does wrap the optional fields correctly.
https://github.com/prabhpreet/libninja-allof-example/blob/28fbbfeefc25b0b5653d4b36fba7f32e2d0ecd3e/src/model/pet_tag.rs#L5-L13
https://github.com/prabhpreet/libninja-allof-example/blob/28fbbfeefc25b0b5653d4b36fba7f32e2d0ecd3e/src/model/pet_health.rs#L4-L8
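In other words, the expected generated struct would look roughly like this (a sketch; the required sibling field is a placeholder and the numeric type is assumed):
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct PetTag {
    pub name: String,        // placeholder for a field listed under `required`
    pub weight: Option<i64>, // not in `required`, so it should be Option-wrapped
}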
Got it. Just to confirm, is there an error here? To my eye, those structs look like they conform correctly to that OpenAPI spec.
I think this is an error since it is making an optional field in the spec into required in the struct.
Just to recap, specifically the error is when one of the object types in an allOf spec has an optional field (not mentioned in the required section of the object type). This field is not mapped as an Option in the struct, making it a required field.
Thank you for clarifying. Fixed with regression tests, and bumped version to 0.1.10.
Thanks for the prompt fix!
|
gharchive/issue
| 2023-09-29T05:47:23 |
2025-04-01T04:34:49.361601
|
{
"authors": [
"kurtbuilds",
"prabhpreet"
],
"repo": "kurtbuilds/libninja",
"url": "https://github.com/kurtbuilds/libninja/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2072833508
|
Incorrect Struct Insertion
The following statements gives wrong output
-STATEMENT create node table person (x SERIAL, y STRUCT(a INT64, b INT64), PRIMARY KEY(x));
---- ok
-STATEMENT CREATE (:person {y: {a: 1, b: 2}}), (:person {y: {a: 3, b: 4}});
---- ok
-STATEMENT MATCH (a:person) RETURN a.*;
---- 1
0|{a: 1, b: 2}
1|{a: 3, b: 4}
Fixed in #2645
|
gharchive/issue
| 2024-01-09T17:27:14 |
2025-04-01T04:34:49.368349
|
{
"authors": [
"andyfengHKU"
],
"repo": "kuzudb/kuzu",
"url": "https://github.com/kuzudb/kuzu/issues/2643",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2133169052
|
Add IF NOT EXISTS option to node and edge table creation.
SQL databases provide the convenient clause IF NOT EXISTS when creating tables to avoid the user having to handle errors when a table with the same name already exists.
Creating a node table in Kùzu is a bit tedious in comparison. Currently the user has to perform custom exception handling (e.g., in Python, using if/else statements or try/except blocks to handle failure when the table already exists), which slows down experimentation and requires more boilerplate.
It would be great to have a feature that allows users to create tables using this syntax:
CREATE NODE TABLE IF NOT EXISTS
Person(
person_id INT64,
name STRING,
age INT64,
PRIMARY KEY (person_id)
);
CREATE REL TABLE IF NOT EXISTS
Follows(
FROM Person TO Person
);
Per the discussion with Semih, this might be on hold till we have evidence that it's a useful feature.
slows down experimentation and requires more boilerplate
Indeed, I came across Kuzu about 30 minutes ago and already stumbled upon this issue. I wanted to re-run the script that created the DB:
conn.execute(
"CREATE NODE TABLE Person(name STRING, PRIMARY KEY(name))")
conn.execute(
"CREATE REL TABLE Parent(FROM Person TO Person)"
)
...but it failed saying:
Traceback (most recent call last):
File "./venv/lib/python3.12/site-packages/marimo/_runtime/cell_runner.py", line 238, in run
return_value = execute_cell(cell, self.glbls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.12/site-packages/marimo/_ast/cell.py", line 444, in execute_cell
exec(cell.body, glbls)
File "/var/folders/61/p7f15sln0gxd7lwbr58cy6f00000gn/T/marimo_8105/__marimo__cell_vblA_.py", line 1, in <module>
conn.execute(
File "./venv/lib/python3.12/site-packages/kuzu/connection.py", line 92, in execute
_query_result = self._connection.execute(prepared_statement._prepared_statement, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Binder exception: Person already exists in catalog.
Makes sense, but what can I do to get rid of this? Should I just catch the RuntimeError? What if it's a different kind of RuntimeError? I want exactly the one related to attempting to create a NODE TABLE that already exists.
How should I handle this?
Hi @ForceBru, welcome, and hope you enjoy using Kùzu!
There's a relatively simple way to handle this in Python that requires just a couple extra lines of code:
import kuzu
import shutil

db_path = './my_kuzu_db'
shutil.rmtree(db_path, ignore_errors=True)  # wipe any previous database directory
db = kuzu.Database(db_path)  # the next run starts from a fresh database
You can basically overwrite the database directory by using shutil.rmtree() - that way the next time you run the script, it will recreate the node/rel tables.
Note that this isn't recommended in any pipeline that runs in production, as you may inadvertently delete data when it was not intended. This approach shown only makes sense in early stages of experimentation, and you'd ideally remove the shutil.rmtree() line when productionizing a workflow to properly catch and handle exceptions for cases where the tables already exist. Hope this helps!
To answer your other question about "different RuntimeError" cases: we don't have a separate error type for node/rel tables that already exist. But at that stage of the pipeline, the "table already exists" error is the most likely error one might encounter, so if you come across any cases where you need more fine-grained or explicit error handling, let us know.
Okay, got it, delete directory with the DB or handle the RuntimeError.
It might also be useful (mainly for early experimentation and new user onboarding) to support in-memory databases like in sqlite, such that one could write db = kuzu.Database(":memory:") and mess with it without polluting the filesystem.
Yup, in-memory by default is one of the main areas where Kùzu is different from systems like DuckDB - see #1816 for the existing issue and discussion on this. It's on the longer term roadmap but requires a lot more changes to the core as Kùzu's query engine was built to run on on-disk storage, so requires additional effort from the system internals perspective.
I've just run into this issue, and ended up going with a rather un-pythonic "ask permission" approach:
if not conn.execute("CALL SHOW_TABLES() WHERE name = '<table-name-here>' RETURN name;").has_next():
conn.execute("CREATE NODE TABLE <table-name-here>(<fields-go-here>)")
Yes please! I'm creating migration scripts now for longer living dbs, so it'll be extremely useful.
This feature is added in #3601 and should be available in the nightly build tmr.
Thank you 🙏
@andyfengHKU
I just tried this:
conn.execute("CREATE NODE TABLE IF NOT EXISTS Person(name STRING, age INT64, PRIMARY KEY (name))")
Any idea why I get this error back:
ERROR - An error occurred while loading data into the database: Failed to create table in Kuzu database: Parser exception: mismatched input 'NOT' expecting '(' (line: 1, offset: 21)
"CREATE NODE TABLE IF NOT EXISTS Person(name STRING, age INT64, PRIMARY KEY (name))"
Hi @stugorf did you get this error after installing the latest nightly build? Could you confirm which dev release you're working with?
Hi Prashanth,
I have v0.4.2:latest running.
Regards,
David
Ah, I don't believe this is supported in 0.4.2 yet. We will be releasing 0.5.0 very soon and it will be supported there - till then could you run on the dev version and let us know if that works?
Sure! What tag should I pull?
pip install --pre kuzu
Hi Prashanth,
I am using Poetry so poetry add kuzu --allow-prereleases worked and installed kuzu = {version = "^0.4.3.dev47", allow-prereleases = true}. My code now correctly executes conn.execute("CREATE NODE TABLE IF NOT EXISTS Person(name STRING, age INT64, PRIMARY KEY (name))")
Thank you for the help!
Great, glad that worked!
Prashanth,
Will something like IF EXISTS be implemented for DROP so that we can execute a command like DROP IF EXISTS Person?
Regards,
David
Hi @stugorf, we haven't prioritized this yet as we normally wait to see if there's need from the community on specific keywords that require changes to our grammar. Though it seems like the DROP TABLE [IF EXISTS] syntax exists in Postgres SQL.
I'll create an issue for this, and someone can pick it up in due course. However, in the interim (assuming you're working in Python), you can easily work around this with just one additional line of code:
# Open kuzu connection
import kuzu
db = kuzu.Database("mydb")
conn = kuzu.Connection(db)
# Drop node table
node_table_name = "MyNodeTable"
if node_table_name in conn._get_node_table_names():
conn.execute("DROP MyNodeTable")
# Drop rel table
rel_table_name = "MyRelTable"
if rel_table_name in conn._get_rel_table_names():
conn.execute("DROP MyRelTable")
Not ideal, but it works well if you're using Python. You can also apply the not operator to negate the above check, and perform specific actions only if the table doesn't exist. Hope this helps!
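If you need this in several places, the same check can be wrapped in a small helper. This is just a sketch building on the private helpers used above - they are not a stable API and may change between releases:
import kuzu

def drop_table_if_exists(conn: kuzu.Connection, name: str) -> None:
    # _get_node_table_names() / _get_rel_table_names() are the private
    # helpers from the workaround above; both return lists of table names.
    existing = conn._get_node_table_names() + conn._get_rel_table_names()
    if name in existing:
        conn.execute(f"DROP TABLE {name}")

db = kuzu.Database("mydb")
conn = kuzu.Connection(db)
drop_table_if_exists(conn, "MyNodeTable")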
Thank you for the workaround in the meantime. With the -dev version of kuzu
does the explorer work? I am getting this error:
[21:26:57.994] INFO (1): Access mode: READ_WRITE
[21:26:58.019] ERROR (1): Error getting version of Kùzu: Error:
std::bad_alloc
This happens most likely because the DB version you created is different from the DB version that's running in Kuzu Explorer. So if you created the db with a nightly build, then try launching the nightly Explorer. For example, use this docker-compose.yml:
services:
explorer:
image: kuzudb/explorer:dev
restart: unless-stopped
environment:
- MODE=READ_ONLY
ports:
- 8000:8000
volumes:
- ./ex_db_kuzu:/database
and then do:
docker compose pull
docker compose up
Perfect; I was just reading in the repo to use the 'dev' tag. Thank you.
Yes, if this is indeed the case, let's open an issue to give a better error message. I'll ask @mewim to take a look.
Explorer itself has the correct infrastructure to handle a storage version mismatch, as the screenshot attached to this comment showed.
However, I think the issue was that the dev builds do not actively maintain the storage version number. The storage version number is defined at: https://github.com/kuzudb/kuzu/blob/9df0a994535829df598dbfdf135795513554850c/src/include/storage/storage_version_info.h#L15-L20
For each stable release, we manually update this dictionary to add the version number of kuzu and storage. But since each dev build is automatically built and deployed, it does not get its own storage version number. Instead, it simply assumes that it has the latest storage version number in the dictionary.
|
gharchive/issue
| 2024-02-13T21:36:32 |
2025-04-01T04:34:49.431020
|
{
"authors": [
"ForceBru",
"ShravanSunder",
"alanmeeson",
"andyfengHKU",
"mewim",
"prrao87",
"semihsalihoglu-uw",
"stugorf"
],
"repo": "kuzudb/kuzu",
"url": "https://github.com/kuzudb/kuzu/issues/2878",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2008679335
|
cmake: don't build python by default
We shouldn't build anything beyond the core shared and static kuzu libraries by default, and we definitely shouldn't build the Python API by default when no other APIs are.
This changes the default values for these switches to false.
Running a dry run on this as well https://github.com/kuzudb/kuzu/actions/runs/6973481388
After some discussion with @acquamarin , we should build the shell by default since many users use it, but not the Python API. Commit message and title updated.
Docs will need to be updated. They currently say that the python bindings are built by default: https://kuzudb.com/docusaurus/development/building-kuzu/#build-language-bindings.
|
gharchive/pull-request
| 2023-11-23T18:43:03 |
2025-04-01T04:34:49.434606
|
{
"authors": [
"Riolku",
"benjaminwinger"
],
"repo": "kuzudb/kuzu",
"url": "https://github.com/kuzudb/kuzu/pull/2491",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1151722265
|
How did you port Psych Engine to android?
What is your question?
How did you port Psych Engine to android?
I also wanna know how to port FNF mods to android
|
gharchive/issue
| 2022-02-26T11:47:07 |
2025-04-01T04:34:49.442913
|
{
"authors": [
"Opheebop1234"
],
"repo": "kviks/Psych-Engine-Android",
"url": "https://github.com/kviks/Psych-Engine-Android/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
679681024
|
Query values converted to strings
Creating a board using this honeycombio_query:
data "honeycombio_query" "db_heatmap" {
calculation {
op = "HEATMAP"
column = "db.duration"
}
filter {
column = "db.duration"
op = ">"
value = 500
}
}
ends up yielding this query:
(Note that the duration is quoted)
This results in no data being shown, while data is returned if the value is converted to a numeric value.
The provider casts filter.value to a string, before assigning it: https://github.com/kvrhdn/terraform-provider-honeycombio/blob/main/honeycombio/data_source_query.go#L173
In the schema it's also declared as TypeString.
But in go-honeycombio filter value is a interface{}: https://github.com/kvrhdn/go-honeycombio/blob/main/query_spec.go#L64
So, we'll have to figure out whether it's possible to have a dynamic type in Terraform.
Does it make sense to add mutually exclusive value_string and value_numeric fields?
Feels kind of gross, but probably the most straightforward option.
Yeah, I'd like to avoid having to do that :confused: but it might be the easiest way...
I'll have to investigate the Terraform type system a bit more first.
From a quick look around it doesn't look like there's a way to specify that a field is a string or numeric.
My other thought was we could maybe have the go client convert it based on the operator.
This would probably work for comparison operators, but would fail on =/!= (those could be used with either string or numeric values).
The best option would probably be on the honeycomb side (we always send a string, honeycomb converts to an appropriate value for the field), but I think explicit value/numeric fields might be the best option for now.
But yeah, if you find a better option in the terraform type system that works too.
After playing a bit with the UI, it seems that the UI interprets the filter value based upon the type of the column. There are four types (string, integer, float, boolean) which can be set in the dataset settings.
For example, the result of entering 0.5 with the various types:
type of the column | what you type | what the UI shows after pressing enter
string | 0.5 | "0.5"
integer | 0.5 | 0
float | 0.5 | 0.5
boolean | 0.5 | input is not accepted
While this feels intuitive, it is impossible to recreate in the Terraform provider:
we don't have access to the column type using the API, so it's impossible to determine which conversion is appropriate
additionally, the API ignores the column type and always uses the type of the value in the JSON payload. If you send a string value, the query will use a string as filter value, even if the column has type float (as shown in the first comment).
The type system of Terraform is also a limiting factor: filter.value is declared as TypeString so even if you set value = 0.5 the Terraform SDK will always return a string. It's not possible to determine whether the user originally used a string or a number.
There is some work in the pipeline to improve the Terraform type system, starting with the v2 SDK (which dropped support for Terraform 0.11). The DynamicPseudoType (https://github.com/hashicorp/terraform-plugin-sdk/issues/248) might fix this issue, but it is expected in a later release of the SDK v2.x.
Solutions:
The best IMO: the Honeycomb API interprets the filter value using the same rules as the UI. I.e. if you send a string "0.5" but the type is float, the API should be able to convert this to a float 0.5.
Intuitive but also a bit risky: the Terraform provider tries to interpret the value by converting it to a float or integer. I.e. if you enter "0.5" we try to convert this as a float and if it succeeds, we send a float to the API. This might cause weird behavior if the type is not always clear, for instance version = "0.1" should probably not be interpreted as a float.
Add a property value_type (which defaults to string):
filter {
column = "app.tenant"
op = "="
value = "SpecialTenant" // no value_type since string is the default
}
filter {
column = "duration_ms"
op = ">"
value = 1000
value_type = "int"
Provide a version of value for every possible type:
filter {
column = "app.tenant"
op = "="
value_string = "SpecialTenant"
}
filter {
column = "duration_ms"
op = ">"
value_int = 1000
}
Created a new topic on the Hashicorp forum https://discuss.hashicorp.com/t/dynamic-type-in-schema/12919
Definitely agree on #1 being the best option, not a fan of #2 due to ambiguity.
I think I like 4 slightly more due to it being more explicit/harder to miss, but don't have strong objections to 3.
I would say go for 3 or 4 for now, put in a request for API support for 1 and plan on a major release/breaking change once API support in place?
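To illustrate why option 2 is risky, here is a minimal Python sketch of that coercion heuristic - not provider code, just a demonstration of the ambiguity:
def coerce(value: str):
    # Try int first, then float; fall back to the raw string.
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

print(coerce("1000"))  # 1000 (int) - what we want for duration_ms > 1000
print(coerce("0.5"))   # 0.5 (float)
print(coerce("0.1"))   # 0.1 (float) - but version = "0.1" should stay a string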
|
gharchive/issue
| 2020-08-16T03:32:52 |
2025-04-01T04:34:49.458528
|
{
"authors": [
"fitzoh",
"kvrhdn"
],
"repo": "kvrhdn/terraform-provider-honeycombio",
"url": "https://github.com/kvrhdn/terraform-provider-honeycombio/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2571853586
|
Changed how the PortWatcher plots data to avoid plotting data for ports that have 0 data points stored in their loggers
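A minimal, self-contained sketch of the guard this PR describes - the logger and port names below are illustrative stand-ins, not the actual brom_drake or Drake API:
class FakeLogger:
    """Stand-in for a Drake-style signal logger attached to a port."""
    def __init__(self, samples):
        self._samples = samples

    def num_samples(self) -> int:
        return len(self._samples)

def ports_to_plot(loggers: dict) -> list:
    # Skip ports whose loggers stored 0 data points, so no empty plots are made.
    return [name for name, logger in loggers.items() if logger.num_samples() > 0]

loggers = {"state": FakeLogger([0.0, 0.1, 0.2]), "torque": FakeLogger([])}
assert ports_to_plot(loggers) == ["state"]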
Codecov Report
Attention: Patch coverage is 82.05128% with 7 lines in your changes missing coverage. Please review.
Project coverage is 90.17%. Comparing base (28c7bbe) to head (1236ab2).
Report is 1 commits behind head on main.
Files with missing lines | Patch % | Lines
src/brom_drake/PortWatcher/PortWatcher.py | 76.66% | 7 Missing :warning:
Additional details and impacted files
@@ Coverage Diff @@
## main #13 +/- ##
===========================================
+ Coverage 79.15% 90.17% +11.01%
===========================================
Files 41 51 +10
Lines 1166 1333 +167
===========================================
+ Hits 923 1202 +279
+ Misses 243 131 -112
Flag | Coverage Δ
? |
Flags with carried forward coverage won't be shown.
|
gharchive/pull-request
| 2024-10-08T01:48:21 |
2025-04-01T04:34:49.482104
|
{
"authors": [
"codecov-commenter",
"kwesiRutledge"
],
"repo": "kwesiRutledge/brom_drake-py",
"url": "https://github.com/kwesiRutledge/brom_drake-py/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
203866852
|
Relaxed stride checking in numpy 1.12 causes bottleneck unit test failures
Hi, this is what reproduces the problem on my machine (linux mint):
In [1]: import numpy as np
In [2]: import bottleneck as bn
In [3]: print('numpy:', np.__version__)
numpy: 1.12.0
In [4]: print('bottleneck:', bn.__version__)
bottleneck: 1.2.0
In [5]: a = np.ones((3, 2))
In [6]: np.sum(a[:, [1]]), bn.nansum(a[:, [1]])
Out[6]: (3.0, 1.0)
In [7]: np.sum(a[:, [1]].copy()), bn.nansum(a[:, [1]].copy())
Out[7]: (3.0, 3.0)
This has been hitting the xarray test suite on debian.
Thanks!
It works fine with numpy 1.11 which is what bottleneck 1.2 supports. So my guess is that you are using numpy 1.12. My second guess is that the unit test failures with numpy 1.12 is due to the change in numpy's relaxed stride checking. My third guess is that I don't understand what relaxed stride checking is. (That last one is not a guess.)
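Until a fixed release is available, a defensive workaround (a sketch, not the maintainer's recommendation) is to hand bottleneck a normalized copy, as the repro above already hints:
import numpy as np
import bottleneck as bn

a = np.ones((3, 2))
view = a[:, [1]]                      # carries "relaxed" strides on numpy 1.12
assert np.sum(view) == 3.0            # numpy itself reads it correctly
assert bn.nansum(view.copy()) == 3.0  # .copy() normalizes strides for bn 1.2.0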
I've added your example as a unit test.
OK, I made the fix in master.
That was fast! Thanks a lot.
Well, you rang the fire alarm. Thanks for reporting and thanks for a simple example.
Consider issuing a bug fix release including this fix? I don't think it would break any NumPy 1.11 users...
A bug in numpy 1.12.0 prevented me from making a bug fix release. But it should be possible now that 1.12.1 is out.
Ugh. The current blocker is here: #166.
See #168
|
gharchive/issue
| 2017-01-29T12:25:34 |
2025-04-01T04:34:49.488239
|
{
"authors": [
"fmaussion",
"kwgoodman",
"shoyer"
],
"repo": "kwgoodman/bottleneck",
"url": "https://github.com/kwgoodman/bottleneck/issues/161",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1403085247
|
Programmatically Calling The Instrumenter Fails
I am calling the instrumenter programmatically as one element of a transformation chain. Calling the chain from a test fixture makes it panic when looking up source map information:
pub fn istanbul_transformer(mapper: Arc<impl SourceMapper>) -> impl Fold + VisitMut {
let visitor = swc_coverage_instrument::create_coverage_instrumentation_visitor(
mapper,
NoopComments {},
InstrumentOptions {
coverage_variable: "__coverage__".to_string(),
compact: false,
report_logic: false,
ignore_class_methods: Default::default(),
input_source_map: Option::None,
instrument_log: Default::default(),
debug_initial_coverage_comment: false,
},
String::from("Hello.js"),
);
as_folder(visitor)
}
The stack trace looks as follows:
0: rust_begin_unwind
at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/a55dd71d5fb0ec5a6a3a9e8c27b2127ba491ce52/library/core/src/panicking.rs:142:14
2: swc_common::source_map::SourceMap::lookup_source_file
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc_common-0.29.5/src/source_map.rs:1042:17
3: swc_common::source_map::SourceMap::lookup_char_pos
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc_common-0.29.5/src/source_map.rs:271:18
4: <swc_common::source_map::SourceMap as swc_common::errors::SourceMapper>::lookup_char_pos
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc_common-0.29.5/src/source_map.rs:1276:9
5: swc_coverage_instrument::utils::lookup_range::get_range_from_span
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc-coverage-instrument-0.0.13/src/utils/lookup_range.rs:17:23
6: swc_coverage_instrument::visitors::coverage_visitor::CoverageVisitor<C,S>::create_stmt_increase_counter_expr
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc-coverage-instrument-0.0.13/src/visitors/coverage_visitor.rs:43:5
7: swc_coverage_instrument::visitors::coverage_visitor::CoverageVisitor<C,S>::mark_prepend_stmt_counter
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc-coverage-instrument-0.0.13/src/visitors/coverage_visitor.rs:43:5
8: swc_coverage_instrument::visitors::coverage_visitor::CoverageVisitor<C,S>::cover_statement
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc-coverage-instrument-0.0.13/src/macros/instrumentation_counter_helper.rs:247:17
9: <swc_coverage_instrument::visitors::coverage_visitor::CoverageVisitor<C,S> as swc_ecma_visit::VisitMut>::visit_mut_var_declarator
at /home/stahlbau/.cargo/registry/src/github.com-1ecc6299db9ec823/swc-coverage-instrument-0.0.13/src/visitors/coverage_visitor.rs:110:5
10: swc_ecma_visit::visit_mut_var_declarators::{{closure}}
This is related to the discussion https://github.com/swc-project/swc/discussions/6073 that I started on the SWC project.
Note that I am passing a SourceMapper but no input_source_map. Why are both needed?
Closing this issue for now since it might be caused by an empty source mapper created when preparing the test fixture.
|
gharchive/issue
| 2022-10-10T12:20:45 |
2025-04-01T04:34:49.520728
|
{
"authors": [
"stahlbauer"
],
"repo": "kwonoj/swc-plugin-coverage-instrument",
"url": "https://github.com/kwonoj/swc-plugin-coverage-instrument/issues/181",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1792512095
|
Can't install
:~/LongNet$ pip install -r requirements.txt
Requirement already satisfied: torch in /home/straughterguthrie/robust/lib/python3.10/site-packages (from -r requirements.txt (line 1)) (2.0.1)
Collecting einops
Using cached einops-0.6.1-py3-none-any.whl (42 kB)
Collecting flash_attn
Using cached flash_attn-1.0.8.tar.gz (2.0 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
Traceback (most recent call last):
File "/home/straughterguthrie/robust/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in
main()
File "/home/straughterguthrie/robust/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/straughterguthrie/robust/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 130, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-tjgu9b0f/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-tjgu9b0f/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-tjgu9b0f/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "", line 13, in
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
(robust) straughterguthrie@straughterguthrie-OMEN-by-HP-Obelisk-Desktop-875-1xxx:~/LongNet$
It looks like the issue might be with the flash_attn package itself or with its requirements during the build process. Since your environment seems to satisfy all required dependencies (including PyTorch), this might be something the creators of flash_attn need to correct on their end.
You could raise an issue on the project's GitHub repository to see if there's a known resolution. This kind of error could be due to many factors, but common ones are:
The package might not be compatible with your specific configuration.
"ModuleNotFoundError: No module named 'torch'" suggests the building process is looking for torch and not finding it. This could be an issue on the project build's end.
In the meantime, if it is not critical to your work, you might want to look for alternative packages that have similar functionality to flash_attn.
Lastly, you can fork the repository and fix the problem, but this could be time-consuming and requires a deep understanding of the codebase.
I want to assure you that the problem seems not to be on your side but with the package build system itself. You should definitely write an issue on the original flash_attn repository describing the problem and also showing that torch is indeed installed and available by running and showing this output:
python -c "import torch; print(torch.__version__)"
python -c "import torch; print(torch.version)"
2.0.1+cu117
Right. After reading requirements.txt decided to
pip install git+https://github.com/HazyResearch/flash-attention.git
And it is stuck...15 minutes building wheels for collected packages: flash-attn and counting.
@josedandrade @jmanhype Hey 👋 please try pip installing again, I put the wrong flash attention pip name
It doesn't work.
pip install flash-attn -i https://pypi.org/simple/ --no-cache-dir
Looking in indexes: https://pypi.org/simple/
Collecting flash-attn
Downloading flash_attn-1.0.8.tar.gz (2.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 4.4 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
Traceback (most recent call last):
File "/home/wywzxxz/miniconda3/envs/privateGPT/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/wywzxxz/miniconda3/envs/privateGPT/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wywzxxz/miniconda3/envs/privateGPT/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-uh_xq5kl/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-uh_xq5kl/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-uh_xq5kl/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 13, in <module>
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
Still not working, please help.
Tried on windows and linux
The problem is from flash-attn
See
https://github.com/HazyResearch/flash-attention/issues/258
and
https://github.com/HazyResearch/flash-attention/issues/246
pip install flash-attn==1.0.5 should fix this
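Based on what worked in this thread, installing torch first and then the pinned flash-attn should get past the error (versions taken from this thread; adjust to your CUDA setup):
pip install torch==2.0.1
pip install flash-attn==1.0.5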
I cannot find the folder... flash_attn
Prepare flash_attn library
cd flash_attn
python setup.py install
cd ..
@AK51 hey, we removed the flash_attn repo; we're now using our flash implementation in LongNet/attend
Hi, thanks for your reply.
I cloned again and used the original LongNet env, but there is another error from LongNet.torchscale
...
I really want to try LongNet... Thanks
LongNet$ pip install -r requirements.txt
Requirement already satisfied: torch in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 1)) (2.0.1)
Requirement already satisfied: einops in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 2)) (0.6.1)
Requirement already satisfied: accelerate in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 3)) (0.20.3)
Requirement already satisfied: bitsandbytes in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 4)) (0.39.1)
Requirement already satisfied: fairscale in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 5)) (0.4.0)
Requirement already satisfied: timm in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 6)) (0.6.13)
Requirement already satisfied: ninja in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 7)) (1.11.1)
Requirement already satisfied: packaging in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from -r requirements.txt (line 8)) (23.1)
Requirement already satisfied: transformers in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/transformers-4.30.2-py3.9.egg (from -r requirements.txt (line 9)) (4.30.2)
ERROR: Could not find a version that satisfies the requirement unittest (from versions: none)
ERROR: No matching distribution found for unittest
python -m unittest
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so
/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/home/ak/anaconda3/envs/LongNet/lib/libcudart.so'), PosixPath('/home/ak/anaconda3/envs/LongNet/lib/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA SETUP: CUDA runtime path found: /home/ak/anaconda3/envs/LongNet/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
E
======================================================================
ERROR: LongNet (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: LongNet
Traceback (most recent call last):
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/unittest/loader.py", line 470, in _find_test_path
package = self._get_module_from_name(name)
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/media/ak/HD/LongNet/LongNet/__init__.py", line 2, in <module>
from LongNet.model import LongNetTokenizer, LongNet, DecoderConfig, Decoder, DilatedLongNet
File "/media/ak/HD/LongNet/LongNet/model.py", line 3, in <module>
import bitsandbytes
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/__init__.py", line 6, in <module>
from . import cuda_setup, utils, research
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
from . import nn
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
from .modules import LinearFP8Mixed, LinearFP8Global
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
from bitsandbytes.optim import GlobalOptimManager
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/optim/__init__.py", line 8, in <module>
from .adagrad import Adagrad, Adagrad8bit, Adagrad32bit
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/optim/adagrad.py", line 5, in <module>
from bitsandbytes.optim.optimizer import Optimizer1State
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/optim/optimizer.py", line 12, in <module>
import bitsandbytes.functional as F
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/functional.py", line 12, in <module>
from scipy.stats import norm
ModuleNotFoundError: No module named 'scipy'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
(LongNet) ak@ak-MS-7D99:/media/ak/HD/LongNet$ pip install scipy
Collecting scipy
Downloading scipy-1.11.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (36.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.5/36.5 MB 5.7 MB/s eta 0:00:00
Requirement already satisfied: numpy<1.28.0,>=1.21.6 in /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages (from scipy) (1.25.0)
Installing collected packages: scipy
Successfully installed scipy-1.11.1
(LongNet) ak@ak-MS-7D99:/media/ak/HD/LongNet$ python -m unittest
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so
/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/home/ak/anaconda3/envs/LongNet/lib/libcudart.so.11.0'), PosixPath('/home/ak/anaconda3/envs/LongNet/lib/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA SETUP: CUDA runtime path found: /home/ak/anaconda3/envs/LongNet/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
E
======================================================================
ERROR: LongNet.torchscale (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: LongNet.torchscale
Traceback (most recent call last):
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/unittest/loader.py", line 470, in _find_test_path
package = self._get_module_from_name(name)
File "/home/ak/anaconda3/envs/LongNet/lib/python3.9/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/media/ak/HD/LongNet/LongNet/torchscale/__init__.py", line 3, in <module>
from torchscale.architecture.decoder import DecoderConfig, Decoder
ImportError: cannot import name 'DecoderConfig' from 'torchscale.architecture.decoder' (/home/ak/anaconda3/envs/LongNet/lib/python3.9/site-packages/torchscale-0.2.0-py3.9.egg/torchscale/architecture/decoder.py)
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
|
gharchive/issue
| 2023-07-07T00:39:10 |
2025-04-01T04:34:49.546101
|
{
"authors": [
"AK51",
"jmanhype",
"josedandrade",
"kyegomez",
"wywzxxz"
],
"repo": "kyegomez/LongNet",
"url": "https://github.com/kyegomez/LongNet/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1334389586
|
Revert Only patch new istio webhook(#1139)
Bring back support for older clusters with only istio-sidecar-injector mutatingwebhook
/retest
/retest
|
gharchive/pull-request
| 2022-08-10T10:09:47 |
2025-04-01T04:34:49.554760
|
{
"authors": [
"cnvergence"
],
"repo": "kyma-incubator/reconciler",
"url": "https://github.com/kyma-incubator/reconciler/pull/1152",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1861403766
|
API Gateway module pipelines
Description
API Gateway is a Kyma component that will become a Kyma module in the future. Both the API Gateway controller and manager will stay in this repository. There are existing prow-based pipelines for the api-gateway controller, including:
building image
unit testing
integration testing
release
There's one test that is still part of kyma: the upgrade test from the fast integration tests. Moving this test is not part of this task.
As part of modularisation, additional pipelines are needed:
building manager image
linting
unit testing
integration testing
generating module
releasing
compatibility testing
upgrade testing
Pipelines should be based on Prow and GH Actions. Where possible, goats team infrastructure should be used (GCP, domain). Pipelines should invoke makefile targets; in some cases these can be mocked (return true). Mocked pipeline targets will be implemented later on with the module implementation.
ACs:
[ ] existing api-gateway controller pipelines are not affected
[ ] module required pipelines implemented
[ ] documentation created
Reasons
Modularisation
DoD:
- [ ] provide documentation
  - [ ] release notes and What's New updates for Kyma customers
- [ ] provide unit tests
- [ ] provide integration tests
- [ ] test on production-like environment
- [ ] verify resource limits
- [ ] followup issue
- [ ] create release and bump in kyma
- [ ] PR reviewer will verify code coverage and evaluate if it is acceptable
Attachments
part of https://github.com/kyma-project/api-gateway/issues/130
Istio module pipelines as reference
https://github.com/kyma-project/keda-manager/blob/main/docs/contributor/04-10-ci-cd.md
Building manager image (along with binary image) for Pull Requests on mod-dev branch introduced by https://github.com/kyma-project/test-infra/pull/8859.
Two jobs were defined:
pull-api-gateway-manager-build builds binary image and pushes it to registry europe-docker.pkg.dev/kyma-project/dev with the image name being api-gateway-manager:<PR-name>
pull-api-gateway-module-build builds module OCI image and pushes it to registry europe-docker.pkg.dev/kyma-project/dev/unsigned, the image name is kyma-project.io/module/api-gateway:v0.0.1-<PR-name>
Unit testing and linting introduced with #510.
Added GH workflow triggered by pull requests on mod-dev branch. The workflow runs two jobs:
lint
unit test with coverage checking
Release process:
GitHub workflow introduced with #523
Prow post and rel jobs creating api-gateway-manager binary and module template - https://github.com/kyma-project/test-infra/pull/8930
Build:
Prow post job which creates manager container image on mod-dev: https://github.com/kyma-project/test-infra/pull/8924
|
gharchive/issue
| 2023-08-22T12:53:17 |
2025-04-01T04:34:49.567167
|
{
"authors": [
"jaroslaw-pieszka",
"kolodziejczak",
"strekm"
],
"repo": "kyma-project/api-gateway",
"url": "https://github.com/kyma-project/api-gateway/issues/467",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1503068241
|
Gardener login
Description
Changes proposed in this pull request:
Add Gardener login feature. To be tested with your Gardener kubeconfigs.
We need to have login with Gardener as a feature
This feature is toggled off by default
The feature accepts as configuration the kubeconfig file that is needed to log in the user - this kubeconfig file uses OIDC
the kubeconfig file does not contain a namespace, so we load all projects that the user has access to
we don't yet have an intermediary step where the user can select one of the available clusters - if there is time for this, it would be nice to have
Related issue(s)
/retest
|
gharchive/pull-request
| 2022-12-19T14:37:16 |
2025-04-01T04:34:49.571232
|
{
"authors": [
"Wawrzyn321",
"dariadomagala-sap"
],
"repo": "kyma-project/busola",
"url": "https://github.com/kyma-project/busola/pull/2203",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2168655906
|
runtimes: Add nodejs20 image
Description
Changes proposed in this pull request:
Add nodejs20 image
Related issue(s)
#745
/retest
Waits for https://github.com/kyma-project/test-infra/pull/10010
|
gharchive/pull-request
| 2024-03-05T08:57:39 |
2025-04-01T04:34:49.602815
|
{
"authors": [
"halamix2"
],
"repo": "kyma-project/serverless",
"url": "https://github.com/kyma-project/serverless/pull/792",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1851184749
|
reuse compliance
Fixes #106 and #105. Please add your email ID in the upstream contact section.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
@ajinkyapatil8190 pls sign the CLA via the link
|
gharchive/pull-request
| 2023-08-15T09:54:03 |
2025-04-01T04:34:49.605234
|
{
"authors": [
"CLAassistant",
"ajinkyapatil8190",
"kwiatekus"
],
"repo": "kyma-project/warden",
"url": "https://github.com/kyma-project/warden/pull/107",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
527729713
|
Callback for addResponsiveText
Currently, there's no way to get the text size from the addResponsiveText method; this PR aims to fix that with a callback function that receives the newly calculated font size.
Oops, just realized this is completely wrong; I'll PR later after I've tested and gotten it fixed.
|
gharchive/pull-request
| 2019-11-24T17:28:46 |
2025-04-01T04:34:49.660772
|
{
"authors": [
"Jo3-L"
],
"repo": "kyranet/canvasConstructor",
"url": "https://github.com/kyranet/canvasConstructor/pull/263",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
516788810
|
Conflict with ClothConfig's save method
https://github.com/shedaniel/RoughlyEnoughItems/issues/181
May also break other mods' config, haven't had a chance to test yet.
Edit: clicking on this mod's config makes it crash. The creator of REI said "cmd keybinds [is] mixin into [the] cloth config save method".
---- Minecraft Crash Report ----
// Hi. I'm Minecraft, and I'm a crashaholic.
Time: 3.11.19 16:52
Description: mouseClicked event handler
org.spongepowered.asm.mixin.transformer.throwables.MixinTransformerError: An unexpected critical error was encountered
at org.spongepowered.asm.mixin.transformer.MixinTransformer.transformClassBytes(MixinTransformer.java:521)
at net.fabricmc.loader.launch.knot.KnotClassDelegate.loadClassData(KnotClassDelegate.java:180)
at net.fabricmc.loader.launch.knot.KnotClassLoader.loadClass(KnotClassLoader.java:143)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at me.shedaniel.clothconfig2.api.ConfigBuilder.create(ConfigBuilder.java:14)
at net.kyrptonaught.cmdkeybind.config.ModMenuIntegration.buildScreen(ModMenuIntegration.java:34)
at net.kyrptonaught.cmdkeybind.config.ModMenuIntegration.lambda$getConfigScreen$0(ModMenuIntegration.java:29)
at java.util.Optional.map(Optional.java:215)
at io.github.prospector.modmenu.api.ModMenuApi.lambda$getConfigScreenFactory$0(ModMenuApi.java:50)
at io.github.prospector.modmenu.ModMenu.getConfigScreen(ModMenu.java:38)
at io.github.prospector.modmenu.gui.ModListScreen.lambda$init$1(ModListScreen.java:94)
at net.minecraft.class_4185.onPress(class_4185.java:18)
at net.minecraft.class_4264.onClick(class_4264.java:15)
at net.minecraft.class_339.mouseClicked(class_339.java:154)
at net.minecraft.class_4069.mouseClicked(class_4069.java:27)
at net.minecraft.class_312.method_1611(class_312.java:86)
at net.minecraft.class_437.wrapScreenError(class_437.java:441)
at net.minecraft.class_312.method_1601(class_312.java:86)
at org.lwjgl.glfw.GLFWMouseButtonCallbackI.callback(GLFWMouseButtonCallbackI.java:36)
at org.lwjgl.system.JNI.invokeV(Native Method)
at org.lwjgl.glfw.GLFW.glfwPollEvents(GLFW.java:3101)
at net.minecraft.class_1041.method_16001(class_1041.java:503)
at net.minecraft.class_1041.method_15998(class_1041.java:342)
at net.minecraft.class_310.method_15994(class_310.java:1023)
at net.minecraft.class_310.method_1523(class_310.java:976)
at net.minecraft.class_310.method_1514(class_310.java:410)
at net.minecraft.client.main.Main.main(Main.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at net.fabricmc.loader.game.MinecraftGameProvider.launch(MinecraftGameProvider.java:188)
at net.fabricmc.loader.launch.knot.Knot.init(Knot.java:131)
at net.fabricmc.loader.launch.knot.KnotClient.main(KnotClient.java:26)
Caused by: org.spongepowered.asm.mixin.throwables.MixinApplyError: Mixin [net.kyrptonaught.cmdkeybind.json:MixinClothConfigScreen] from phase [DEFAULT] in config [net.kyrptonaught.cmdkeybind.json] FAILED during APPLY
at org.spongepowered.asm.mixin.transformer.MixinTransformer.handleMixinError(MixinTransformer.java:779)
at org.spongepowered.asm.mixin.transformer.MixinTransformer.handleMixinApplyError(MixinTransformer.java:732)
at org.spongepowered.asm.mixin.transformer.MixinTransformer.transformClassBytes(MixinTransformer.java:513)
... 33 more
Caused by: org.spongepowered.asm.mixin.transformer.throwables.InvalidMixinException: @Shadow method onSave in net.kyrptonaught.cmdkeybind.json:MixinClothConfigScreen was not located in the target class me.shedaniel.clothconfig2.gui.ClothConfigScreen. Using refmap cmdkeybinds-refmap.json
at org.spongepowered.asm.mixin.transformer.MixinPreProcessorStandard.attachSpecialMethod(MixinPreProcessorStandard.java:387)
at org.spongepowered.asm.mixin.transformer.MixinPreProcessorStandard.attachShadowMethod(MixinPreProcessorStandard.java:363)
at org.spongepowered.asm.mixin.transformer.MixinPreProcessorStandard.attachMethods(MixinPreProcessorStandard.java:296)
at org.spongepowered.asm.mixin.transformer.MixinPreProcessorStandard.attach(MixinPreProcessorStandard.java:264)
at org.spongepowered.asm.mixin.transformer.MixinPreProcessorStandard.createContextFor(MixinPreProcessorStandard.java:244)
at org.spongepowered.asm.mixin.transformer.MixinInfo.createContextFor(MixinInfo.java:1145)
at org.spongepowered.asm.mixin.transformer.MixinApplicatorStandard.apply(MixinApplicatorStandard.java:268)
at org.spongepowered.asm.mixin.transformer.TargetClassContext.applyMixins(TargetClassContext.java:353)
at org.spongepowered.asm.mixin.transformer.MixinTransformer.apply(MixinTransformer.java:724)
at org.spongepowered.asm.mixin.transformer.MixinTransformer.applyMixins(MixinTransformer.java:703)
at org.spongepowered.asm.mixin.transformer.MixinTransformer.transformClassBytes(MixinTransformer.java:509)
... 33 more
A detailed walkthrough of the error, its code path and all known details is as follows:
---------------------------------------------------------------------------------------
-- Head --
Thread: Client thread
Stacktrace:
at org.spongepowered.asm.mixin.transformer.MixinTransformer.transformClassBytes(MixinTransformer.java:521)
at net.fabricmc.loader.launch.knot.KnotClassDelegate.loadClassData(KnotClassDelegate.java:180)
at net.fabricmc.loader.launch.knot.KnotClassLoader.loadClass(KnotClassLoader.java:143)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at me.shedaniel.clothconfig2.api.ConfigBuilder.create(ConfigBuilder.java:14)
at net.kyrptonaught.cmdkeybind.config.ModMenuIntegration.buildScreen(ModMenuIntegration.java:34)
at net.kyrptonaught.cmdkeybind.config.ModMenuIntegration.lambda$getConfigScreen$0(ModMenuIntegration.java:29)
at java.util.Optional.map(Optional.java:215)
at io.github.prospector.modmenu.api.ModMenuApi.lambda$getConfigScreenFactory$0(ModMenuApi.java:50)
at io.github.prospector.modmenu.ModMenu.getConfigScreen(ModMenu.java:38)
at io.github.prospector.modmenu.gui.ModListScreen.lambda$init$1(ModListScreen.java:94)
at net.minecraft.class_4185.onPress(class_4185.java:18)
at net.minecraft.class_4264.onClick(class_4264.java:15)
at net.minecraft.class_339.mouseClicked(class_339.java:154)
at net.minecraft.class_4069.mouseClicked(class_4069.java:27)
at net.minecraft.class_312.method_1611(class_312.java:86)
-- Affected screen --
Details:
Screen name: io.github.prospector.modmenu.gui.ModListScreen
Stacktrace:
at net.minecraft.class_437.wrapScreenError(class_437.java:441)
at net.minecraft.class_312.method_1601(class_312.java:86)
at org.lwjgl.glfw.GLFWMouseButtonCallbackI.callback(GLFWMouseButtonCallbackI.java:36)
at org.lwjgl.system.JNI.invokeV(Native Method)
at org.lwjgl.glfw.GLFW.glfwPollEvents(GLFW.java:3101)
at net.minecraft.class_1041.method_16001(class_1041.java:503)
at net.minecraft.class_1041.method_15998(class_1041.java:342)
-- Affected level --
Details:
All players: 1 total; [class_746['robotkoer'/198, l='MpServer', x=674.92, y=55.00, z=165.88]]
Chunk stats: Client Chunk Cache: 729, 441
Level dimension: minecraft:overworld
Level name: MpServer
Level seed: 0
Level generator: ID 00 - default, ver 1. Features enabled: false
Level generator options: {}
Level spawn location: World: (0,66,240), Chunk: (at 0,4,0 in 0,15; contains blocks 0,0,240 to 15,255,255), Region: (0,0; contains chunks 0,0 to 31,31, blocks 0,0,0 to 511,255,511)
Level time: 180401 game time, 33355 day time
Level storage version: 0x00000 - Unknown?
Level weather: Rain time: 0 (now: true), thunder time: 0 (now: false)
Level game mode: Game mode: creative (ID 1). Hardcore: false. Cheats: false
Server brand: fabric
Server type: Integrated singleplayer server
Stacktrace:
at net.minecraft.class_638.method_8538(class_638.java:574)
at net.minecraft.class_310.method_1587(class_310.java:1923)
at net.minecraft.class_310.method_1514(class_310.java:425)
at net.minecraft.client.main.Main.main(Main.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at net.fabricmc.loader.game.MinecraftGameProvider.launch(MinecraftGameProvider.java:188)
at net.fabricmc.loader.launch.knot.Knot.init(Knot.java:131)
at net.fabricmc.loader.launch.knot.KnotClient.main(KnotClient.java:26)
-- System Details --
Details:
Minecraft Version: 1.14.4
Minecraft Version ID: 1.14.4
Operating System: Linux (amd64) version 4.19.80-1-MANJARO
Java Version: 1.8.0_232, Oracle Corporation
Java VM Version: OpenJDK 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 2957591488 bytes (2820 MB) / 4496293888 bytes (4288 MB) up to 8589934592 bytes (8192 MB)
CPUs: 4
JVM Flags: 8 total; -Xss1M -Xmx8G -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:G1NewSizePercent=20 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=50 -XX:G1HeapRegionSize=32M
Fabric Mods:
amecs: Amecs 1.2.5+1.14.4
appleskin: AppleSkin 1.0.7
autoconfig1: Auto Config v1 1.2.0+mc1.14.4
autofish: Autofish 0.8.4
blue_endless_jankson: jankson +
blur: Blur 1.0.5
cloth: Cloth Events 0.6.0
cloth-config: Cloth Config 0.2.5
cloth-config2: Cloth Config v2 1.7.3
cmdkeybind: Command Macros 1.2.1
consolehud: ConsoleHUD 1.0.3+build.6
fabric: Fabric API 0.4.1+build.245-1.14
fabric-api-base: Fabric API Base 0.1.1+2ac73e7242
fabric-biomes-v1: Fabric Biomes (v1) 0.1.0+591e97ae42
fabric-commands-v0: Fabric Commands (v0) 0.1.1+591e97ae42
fabric-containers-v0: Fabric Containers (v0) 0.1.2+591e97ae42
fabric-content-registries-v0: Fabric Content Registries (v0) 0.1.1+591e97ae42
fabric-crash-report-info-v1: Fabric Crash Report Info (v1) 0.1.1+591e97ae42
fabric-dimensions-v1: fabric-dimensions-v1 0.1.0+369ab22e42
fabric-events-interaction-v0: fabric-events-interaction-v0 0.1.2+27da48aa46
fabric-events-lifecycle-v0: Fabric Events Lifecycle (v0) 0.1.1+591e97ae42
fabric-item-groups-v0: Fabric Item Groups (v0) 0.1.0+591e97ae42
fabric-keybindings-v0: Fabric Key Bindings (v0) 0.1.1+591e97ae42
fabric-loot-tables-v1: Fabric Loot Tables (v1) 0.1.0+591e97ae42
fabric-mining-levels-v0: fabric-mining-levels-v0 0.1.0+59147463
fabric-models-v0: Fabric Models (v0) 0.1.0+591e97ae42
fabric-networking-blockentity-v0: Fabric Networking Block Entity (v0) 0.2.0+c877038942
fabric-networking-v0: Fabric Networking (v0) 0.1.3+591e97ae42
fabric-object-builders-v0: Fabric Object Builders (v0) 0.1.1+591e97ae42
fabric-particles-v1: fabric-particles-v1 0.1.1+c877038942
fabric-registry-sync-v0: Fabric Registry Sync (v0) 0.2.2+591e97ae42
fabric-renderer-api-v1: Fabric Renderer API (v1) 0.1.1+591e97ae42
fabric-renderer-indigo: Fabric Renderer - Indigo 0.1.13+591e97ae42
fabric-rendering-data-attachment-v1: Fabric Rendering Data Attachment (v1) 0.1.1+c877038942
fabric-rendering-fluids-v1: Fabric Rendering Fluids (v1) 0.1.2+36f27aa342
fabric-rendering-v0: Fabric Rendering (v0) 0.1.1+591e97ae42
fabric-resource-loader-v0: Fabric Resource Loader (v0) 0.1.3+591e97ae42
fabric-tag-extensions-v0: Fabric Tag Extensions (v0) 0.1.1+591e97ae42
fabric-textures-v0: Fabric Textures (v0) 0.1.4+591e97ae42
fabricloader: Fabric Loader 0.6.3+build.168
fiber2cloth: Fiber To Cloth 1.2.1
lightoverlay: Light Overlay 3.5
me_zeroeightsix_fiber: fiber 0.6.0-7
minecraft: Minecraft 1.14.4
mm: Manningham Mills 1.6
modmenu: Mod Menu 1.7.14.1.14.4+build.126
mousewheelie: Mouse Wheelie 1.3.4+1.14.4
net_fabricmc_stitch: stitch 0.2.1.61
optifabric: Optifabric 0.5.2
org_slf4j_slf4j-api: slf4j-api 1.7.26
org_slf4j_slf4j-simple: slf4j-simple 1.7.26
org_zeroturnaround_zt-zip: zt-zip 1.13
overheadhp: Over Head HP 0.1.2
roughlyenoughitems: Roughly Enough Items 3.2.2+build.45
shulkerboxtooltip: Shulker Box Tootip 1.3.1+1.14.4
tweed: Tweed API 2.2.7
voxelmap: VoxelMap 1.9.13
Launched Version: fabric-loader-0.6.3+build.168-1.14.4
LWJGL: 3.2.2 build 10
OpenGL: Mesa DRI Intel(R) Ivybridge Mobile GL version 3.0 Mesa 19.2.2, Intel Open Source Technology Center
GL Caps: Using GL 1.3 multitexturing.
Using GL 1.3 texture combiners.
Using framebuffer objects because OpenGL 3.0 is supported and separate blending is supported.
Shaders are available because OpenGL 2.1 is supported.
VBOs are available because OpenGL 1.5 is supported.
Using VBOs: Yes
Is Modded: Definitely; Client brand changed to 'fabric'
Type: Client (map_client.txt)
Resource Packs: vanilla, file/Stevens Traditional 64x64 [1.14.4] (Patch 1).zip, file/ST Customized Pack.zip, file/[1.11.2]+R3D+CRAFT+128x+(v0.3.1), file/Material+2.13.4.zip, file/Material HUD 2.2.1.zip, file/roboto-bold.zip, file/Anti-obtrusive, file/Disable menu music.zip, file/LowerShields.zip (incompatible), file/VanillaTweaks_r187101.zip
Current Language: Eesti keel (Eesti)
CPU: 4x Intel(R) Core(TM) i5-3360M CPU @ 2.80GHz
OptiFine Version: OptiFine_1.14.4_HD_U_F4
OptiFine Build: 20191025-153543
Render Distance Chunks: 12
Mipmaps: 4
Anisotropic Filtering: 1
Antialiasing: 0
Multitexture: false
Shaders: null
OpenGlVersion: 3.0 Mesa 19.2.2
OpenGlRenderer: Mesa DRI Intel(R) Ivybridge Mobile
OpenGlVendor: Intel Open Source Technology Center
CpuCount: 4
Thank you for making me aware of this, fixed in 1.3.2
https://www.curseforge.com/minecraft/mc-mods/command-macros/files/2821459
|
gharchive/issue
| 2019-11-03T06:47:11 |
2025-04-01T04:34:49.666174
|
{
"authors": [
"Madis0",
"kyrptonaught"
],
"repo": "kyrptonaught/CMDKeybinds",
"url": "https://github.com/kyrptonaught/CMDKeybinds/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1875756429
|
🛑 senoshaidodelasmanos.love is down
In f4b8c5f, senoshaidodelasmanos.love (https://senoshaidodelasmanos.love) was down:
HTTP code: 0
Response time: 0 ms
Resolved: senoshaidodelasmanos.love is back up in 523fd27 after 1 hour, 2 minutes.
|
gharchive/issue
| 2023-08-31T15:24:09 |
2025-04-01T04:34:49.669631
|
{
"authors": [
"kyryl-bogach"
],
"repo": "kyryl-bogach/upptime",
"url": "https://github.com/kyryl-bogach/upptime/issues/555",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1899096910
|
🛑 senoshaidodelasmanos.love is down
In aab756b, senoshaidodelasmanos.love (https://senoshaidodelasmanos.love) was down:
HTTP code: 0
Response time: 0 ms
Resolved: senoshaidodelasmanos.love is back up in 58b0da0 after 12 minutes.
|
gharchive/issue
| 2023-09-15T21:25:27 |
2025-04-01T04:34:49.672009
|
{
"authors": [
"kyryl-bogach"
],
"repo": "kyryl-bogach/upptime",
"url": "https://github.com/kyryl-bogach/upptime/issues/900",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2092974137
|
UUID doesn't seem to work for inserts (Postgres)
This doesn't work, but I feel like it should... what am I missing? I tried two ways (see the comments in the snippet), but both result in the same error.
import { Kysely } from "kysely";
import { parse as uuidParse } from 'uuid';
/**
* @param db {Kysely<any>}
*/
export async function up(db) {
await db.insertInto('documents').values([
{
id: uuidParse('b4d99282-e0e2-4407-a5c0-f63d830c9f62'),
// also tried:
// id: 'b4d99282-e0e2-4407-a5c0-f63d830c9f62',
textract: '',
created_at: new Date(),
updated_at: new Date()
}
]).execute();
}
Returns this error:
Error: ERROR: column "id" is of type uuid but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Position: 80; SQLState: 42804
I'm running postgres and the type of the id column is uuid.
Kysely doesn't touch the data types. It simply passes things to the underlying driver as parameters. https://kysely.dev/docs/recipes/data-types
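For reference, one minimal sketch of a fix is to cast the bound parameter explicitly with Kysely's sql template tag (the tag is real Kysely API; the cast assumes the driver is sending the string parameter as varchar):
import { Kysely, sql } from 'kysely';
/**
 * @param db {Kysely<any>}
 */
export async function up(db) {
  await db.insertInto('documents').values([
    {
      // cast the bound string to uuid on the Postgres side
      id: sql`${'b4d99282-e0e2-4407-a5c0-f63d830c9f62'}::uuid`,
      textract: '',
      created_at: new Date(),
      updated_at: new Date()
    }
  ]).execute();
}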
|
gharchive/issue
| 2024-01-22T02:59:41 |
2025-04-01T04:34:49.674218
|
{
"authors": [
"ee99ee",
"koskimas"
],
"repo": "kysely-org/kysely",
"url": "https://github.com/kysely-org/kysely/issues/853",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
483565412
|
chore(cxx): switch to using Abseil flags library over gflags
This does necessitate turning off gflags support the glog for the time being, due to conflicting exported macros between glog and absl. The extant flag-based functionality can also be achieved via environment variables, however.
In theory, there will be absl/logging at some point, but I don't know what the timeline on that is.
It looks like there are some build errors
It looks like there are some build errors
Oh the joy of undeclared dependencies.
I cannot reproduce the clang-format issue from arc lint locally.
|
gharchive/pull-request
| 2019-08-21T17:58:37 |
2025-04-01T04:34:49.676088
|
{
"authors": [
"salguarnieri",
"shahms"
],
"repo": "kythe/kythe",
"url": "https://github.com/kythe/kythe/pull/4011",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2201222213
|
How can I test output commands using helm and keeping the file ordination during execution?
Describe your question
Hello everyone! Thanks for chainsaw, it is a great tool! 🥳
I'm attempting to test the output of several Helm commands. I'm conducting these tests separately for each file. However, my last file verifies the output for uninstalling the application. This has posed a challenge because the uninstallation should be the final test command, but it's being executed among the other tests and disrupting the pipeline.
How can I test this while keeping the files separate?
Below are my current directories. The expected behavior would be to run 9-uninstall-zora as the last test.
├── 0-helm-install-test
│ └── chainsaw-test.yaml
├── 1-misc-test
│ └── chainsaw-test.yaml
....
├── 8-set-timout-for-scan
│ └── chainsaw-test.yaml
├── 9-uninstall-zora
│ └── chainsaw-test.yaml
├── config.yaml
My config file contains:
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Configuration
metadata:
name: custom-config
spec:
timeouts:
apply: 45s
assert: 20s
cleanup: 45s
delete: 25s
error: 10s
exec: 45s
skipDelete: false
failFast: true
parallel: 1
and my test for helm uninstall:
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
name: delete-zora
spec:
steps:
- try:
- script:
#You can uninstall Zora and its components by uninstalling the Helm chart installed.
content: helm uninstall zora -n zora-system
check:
# This check ensures that zora has been uninstalled correctly.
($stdout): |-
release "zora" uninstalled
Thank you very much!
chainsaw version Version
v0.1.8
Additional context
I'm using this part of the documentation as a reference:
https://kyverno.github.io/chainsaw/latest/examples/test-output/
@lucasjct tests are independent from each other, not sure how this is supposed to work 🤔
Can you elaborate on what you are doing?
@eddycharly Thank you for the response.
What I've been doing is running tests to validate the output of helm commands. I start with the helm install and validate the program's output, then test some helm parameter passes and also validate the output. Finally, I run the helm uninstall and check if the resources were correctly removed.
The tests are executed independently, but they do not respect the ordering of the files I planned in the directory. So, instead of executing the file containing helm uninstall last, it ends up being executed in the middle of the tests, breaking the expected flow.
To work around this problem, we created a directory to execute only the uninstall script, and I run the tests on GitHub Actions by first calling all the tests and then calling a final step just for uninstalling.
My directories are structured as follows:
├── tests
│ ├── 0-helm-install-test
│ │ └── chainsaw-test.yaml
│ ├── 1-misc-test
│ │ └── chainsaw-test.yaml
│ ├── 2-vuln-test
│ │ └── chainsaw-test.yaml
│ ├── 3-change-schedule
│ │ └── chainsaw-test.yaml
│ ├── 4-custom-check
│ │ ├── chainsaw-test.yaml
│ │ └── check.yaml
│ ├── 5-computer-resources
│ │ └── chainsaw-test.yaml
│ ├── 6-retain-issues
│ │ └── chainsaw-test.yaml
│ ├── 7-large-vulnerability-reports
│ │ └── chainsaw-test.yaml
│ ├── 8-set-timout-for-scan
│ │ └── chainsaw-test.yaml
│ └── 9-expansive-scan
│ └── chainsaw-test.yaml
├── uninstall-zora
│ └── chainsaw-test.yaml
└── wait-scan.sh
@lucasjct this is expected behaviour, all tests are independent from each other.
From what I understand, it should be a single test with different steps.
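For illustration, a single-Test layout along those lines might look like the sketch below (the undistro/zora chart reference is a guess; adjust it to your setup):
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: zora-lifecycle
spec:
  steps:
    # steps run in order inside a single test, so uninstall is guaranteed to be last
    - try:
        - script:
            # hypothetical install command; adjust the chart reference to your setup
            content: helm install zora undistro/zora -n zora-system
    - try:
        - script:
            content: helm uninstall zora -n zora-system
            check:
              ($stdout): |-
                release "zora" uninstalled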
Hello, thank you for the response.
My intention, even though the tests are independent of each other, was for them to be executed following the numerical index within the files in the test directory (following different steps). However, when I run them, the order I structured in the directory is not being followed.
But I managed to create a second test directory, which is only used to terminate the test by executing the Helm Uninstall command. Therefore, it runs the chainsaw test twice in the pipeline, but we managed to work around the initial problem.
I will close this issue with the solution we used. Thank you for your attention! Much appreciated!
How we are executing:
- name: Run tests
run: |
cd tests && \
chainsaw test --config ../config/config.yaml
- name: Validate helm uninstall
run: |
cd uninstall-zora && \
chainsaw test --config ../config/config.yaml
|
gharchive/issue
| 2024-03-21T21:26:54 |
2025-04-01T04:34:49.690243
|
{
"authors": [
"eddycharly",
"lucasjct"
],
"repo": "kyverno/chainsaw",
"url": "https://github.com/kyverno/chainsaw/issues/1133",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
950389588
|
add note about wildcard support on conditions
See https://github.com/kyverno/kyverno/issues/2165 for reference.
Thanks @MarcusNoble!
Thank you for adding this, @MarcusNoble !
|
gharchive/pull-request
| 2021-07-22T07:45:13 |
2025-04-01T04:34:49.722739
|
{
"authors": [
"JimBugwadia",
"MarcusNoble",
"chipzoller"
],
"repo": "kyverno/website",
"url": "https://github.com/kyverno/website/pull/218",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1158242513
|
Deprecated dependencies
running npm i earljs on an empty project yields
npm WARN deprecated source-map-url@0.4.1: See https://github.com/lydell/source-map-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated source-map-resolve@0.5.3: See https://github.com/lydell/source-map-resolve#deprecated
npm WARN deprecated sane@4.1.0: some dependency vulnerabilities fixed, support for node < 10 dropped, and newer ECMAScript syntax/features added
those packages come from
$ npm ls source-map-url urix resolve-url source-map-resolve sane
└─┬ earljs@0.2.1
└─┬ jest-snapshot@26.6.2
└─┬ jest-haste-map@26.6.2
└─┬ sane@4.1.0
└─┬ micromatch@3.1.10
└─┬ snapdragon@0.8.2
└─┬ source-map-resolve@0.5.3
├── resolve-url@0.2.1
├── source-map-url@0.4.1
└── urix@0.1.0
you should at least update jest-snapshot to ^27 (released 9 months ago) (if you won't address #106 )
then preferably set up some dependency bot like Renovate and/or Dependabot
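For reference, a minimal Dependabot config is only a few lines (standard GitHub syntax, nothing earljs-specific):
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"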
The newest development version completely removes jest-snapshot.
The changes are going to be released soon!
|
gharchive/issue
| 2022-03-03T10:36:40 |
2025-04-01T04:34:49.950435
|
{
"authors": [
"m-ronchi",
"sz-piotr"
],
"repo": "l2beat/earl",
"url": "https://github.com/l2beat/earl/issues/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
245094569
|
Failed to compile Linux QT wallet
g++: error: /home/exp/wallets/GirlsToken-2.0/src/leveldb/libleveldb.a: No such file or directory
g++: error: /home/exp/wallets/GirlsToken-2.0/src/leveldb/libmemenv.a: No such file or directory
Makefile:320: recipe for target 'GirlsToken2.0.1' failed
make: *** [GirlsToken2.0.1] Error 1
Will look into it immediately; however, I am currently on vacation using a 3G connection, so it may take me a short while to fix this. Thank you for reporting it to me 😊
OK, I have pushed a small fix; however, I am about 300 km away from my build machine. If you go to your GirlsToken-2.0 folder, do a "git pull", and then try building again, all should work.
Please let me know 😊
Thanks. Better, but still not OK:
/usr/bin/ld: cannot find -lboost_system-mgw49-mt-s-1_55
/usr/bin/ld: cannot find -lboost_filesystem-mgw49-mt-s-1_55
/usr/bin/ld: cannot find -lboost_program_options-mgw49-mt-s-1_55
/usr/bin/ld: cannot find -lboost_thread-mgw49-mt-s-1_55
collect2: error: ld returned 1 exit status
make: *** [GirlsToken2.0.1] Error 1
I have Boost v1.58 and no problem compiling other wallets.
OK, if you look at that error, it is trying to use Boost 1.55, which is what I used to compile. If you go into the girlstoken-qt.pro file you will see the dependencies that I used. You need to change the numbers to match the versions you are using.
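For example, a sketch of the change in girlstoken-qt.pro, assuming a system-installed Boost 1.58 (the exact library names depend on how your Boost was built):
# sketch: replace the hard-coded names such as -lboost_system-mgw49-mt-s-1_55
# with the libraries actually present on your system:
LIBS += -lboost_system -lboost_filesystem -lboost_program_options -lboost_thread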
I deleted it and ran git clone again:
qmake Linux-qt.pro
/media/SSD/Workspace/GirlsToken-2.0/Linux-qt.pro:141: Parse Error (' �')
/media/SSD/Workspace/GirlsToken-2.0/Linux-qt.pro:141: Unterminated conditional block at end of file
Error processing project file: Linux-qt.pro
Ok, so I have uploaded a pre-built linux binary on the "releases" page ->
https://github.com/l8nit3tr0ubl3/GirlsToken-2.0/releases/download/2.0.1.0/GirlsToken2.0.1
However if you would like to build your own please follow the steps below:
ensure all dependencies are satisfied:
sudo apt-get install libdb5.1-dev libdb5.1++-dev libboost-all-dev libqrencode-dev qt4-qmake libqt4-dev build-essential libboost-dev libboost-system-dev libboost-filesystem-dev libboost-program-options-dev libboost-thread-dev libssl-dev automake
enter 'src' and build daemon to ensure leveldb is built correctly:
cd src && make -f makefile.unix
move back to main directory and build qt wallet:
cd ../ && qmake-qt4 linux-qt.pro && make
Hope this helps :)
Your fixes are good.
Daemon compilation and Qt-wallet compilation are OK now.
Thanks :)
Glad I could get it fixed for you 😊 Sorry for the delay.
|
gharchive/issue
| 2017-07-24T14:15:34 |
2025-04-01T04:34:49.964668
|
{
"authors": [
"anemol",
"l8nit3tr0ubl3"
],
"repo": "l8nit3tr0ubl3/GirlsToken-2.0",
"url": "https://github.com/l8nit3tr0ubl3/GirlsToken-2.0/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1070822621
|
Daniel: Welcome to JS. Weeks 7-8
🥚 Three Audiences: You can explain how a single file of code is used to communicate with 3 different audiences:
[x] Developers: You can explain how code formatting, comments, logs and variable names make it easier (or harder!) for a developer to understand a program.
[x] Computers: You can explain how a computer follows your instructions to store, read and modify data in program memory.
[ ] Users: You can explain how the computer creates a dynamic user experience by following different execution paths depending on user input.
🥚 Listening and Reading:
[ ] You can read code out loud, and understand your classmates when they read code to you. You don't need to understand how a program works to master these learning objectives!
[ ] Listening: You can exactly re-write a program that a classmate has read to you, without seeing the program they are reading.
[ ] Reading: You can read a program out loud and guide your classmates to re-write exactly the same code without them seeing the program (every indentation, semi-colon, comment and spelling must be identical).
🥚 Static vs. Dynamic Analysis: You can explain and use these two ways of studying a program, each can help you understand different aspects of your code. To help understand this concept, the options panel in Study Lenses is organized into static and dynamic study options:
[x] Static: Studying the text in a code file without running it. Some static study methods are creating a flowchart, analyzing variables, filling out a trace table, and drawing on code.
[x] Dynamic: Running code and studying the computer's behavior. Some dynamic study methods are running code and reading console logs, using the trace button, and stepping through in the debugger or JS Tutor.
🥚 Analyzing Variables: You can list all the variables in a program, and answer these 5 questions for each variable:
[x] Where is the variable declared?
[x] What is the variable's scope?
[x] Is the variable initialized with a value?
[x] How many times is its value used (read) in the program?
[x] How many times is the variable assigned a new value?
[x] What types are assigned to this variable during the program's execution?
[ ] 🐣 Imperative Programming: You can explain what the Imperative Programming paradigm is, and can explain how you know the programs in Welcome to JS are Imperative.
[ ] 🐣 Tracing Execution: You can complete a "steps" trace table and correct your table using console output from the "trace" button.
[ ] 🐣 Logging: You can trace specific aspects of a program's execution and log them to the console.
[ ] 🐣 Completing Programs: You can successfully fill in blanks for a program when the missing words are provided, including distractors.
[ ] 🐣 Describing Programs: You can read a program and describe it with comments using the methodology from /describing-programs: zooming out -> zooming in -> connections -> goals
🐣 Naming Variables: You can analyze how a variable is used in a program and give it two names:
[ ] Generic: You can give a generic name to a variable based on how it is used in the program.
[ ] Specific: You can give a specific name to a variable based on how it's used and the program's domain (the program's specific data and use-case).
[ ] 🐥 Constructing Programs: You can reconstruct a program's lines and indentation, successfully ignoring distractor lines.
[ ] 🐥 Modifying Programs: You can make small changes in a program to change its behavior without breaking it.
[ ] 🐔 Stepping Through: You can pause a script in a step debugger, arrange the debugger, collapse extra panels, and step through a script written with Just Enough JS. At each point in execution you can make a prediction of the next line before executing, and can check your prediction using the scopes panel.
[ ] 🐔 Authoring Programs: Given starter code with labeled goals, you can write a small program to match specs (user stories + test cases).
Week 7
[ ] I have pushed my progress to my practice .js file
Check-In
I Need Help With:
Nothing so far. Just need time and repetition to get things down properly.
What went well?
I have a decent general understanding of what is going on so far, and I am trying to see where we are heading.
What went less well?
A lot of information and things to learn/memorize, so it's easy to rush through things.
Lessons Learned
Types of data, types of elements, basic syntax, basic assignment operators, getting comfortable with the console and VSCode.
Sunday Prep Work
Go over everything one more time and learn some more if possible.
Week 8
[ ] I have pushed my progress to my practice .js file
Check-In
I Need Help With:
I am learning as much JS as possible (slowly), but I can't see very well yet how to incorporate that knowledge into a website. I am sure we'll get there soon.
What went well?
Being patient and taking the time to truly understand the basics before moving on.
What went less well?
I think I need to start thinking of problems in a different way. Basically, before I can write a program to solve a problem, I need to deconstruct that problem into a series of logical propositions and then create a flow where those propositions fit together.
Lessons Learned
I guess the previous paragraph answers this question 🙈
Sunday Prep Work
Practicing JS with different types of problems and resources to keep building my confidence and understanding.
I am learning as much JS as possible (slowly), but I can't see very well yet how to incorporate that knowledge into a website. I am sure we'll get there soon.
Yes, the chapter on separation of concerns will handle this topic 🙂
I think I need to start thinking of problems in a different way. Basically, before I can write a program to solve a problem, I need to deconstruct that problem into a series of logical propositions and then create a flow where those propositions fit together.
Writing down what the program should do on paper or as comments might help. I always try to break down the problem into mini-problems that are easy to solve.
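For instance, a tiny comment-first sketch of that workflow (the names are invented for illustration):
// goal: greet the user, shouting back if they typed in all caps
// 1. read the input
const name = 'ADA';
// 2. decide: did they shout?
const shouted = name === name.toUpperCase();
// 3. build the reply
const reply = shouted ? 'HELLO, ' + name + '!' : 'Hello, ' + name + '.';
// 4. show it
console.log(reply);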
Keep up the good work and don't hesitate to ask questions!
|
gharchive/issue
| 2021-12-03T17:50:29 |
2025-04-01T04:34:49.980879
|
{
"authors": [
"arnochauveau",
"denrique-alvarez"
],
"repo": "lab-antwerp-1/home",
"url": "https://github.com/lab-antwerp-1/home/issues/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1171479505
|
created bio
Checklists
General Checks
[ ] the branch is up to date with main/master
[ ] the code works when pulled and run locally
[ ] All CI checks pass
[ ] all conflicts are resolved (if any)
[ ] PR has a descriptive title
[ ] PR has appropriate labels and milestones for easy identification
[ ] PR it is assigned to the owner
[ ] reviewers are assigned
[ ] the PR contributes only one focused change
[ ] It is in the appropriate column in the project board (if necessary)
[ ] has short and clear description
[ ] is linked to an issue (if it is related)
[ ] feedback is addressed (if any and if it is appropriate feedback.)
Markdown
[ ] the markdown source is formatted
[ ] spelling and grammar is correct in all text
[ ] The markdown looks correct when you preview the file
[ ] all links and images work
Hi @SWAPNACHEMBOTH! There is a lot of HTML in your MD file; try using exclusively Markdown for this task and upload the changes :)
|
gharchive/pull-request
| 2022-03-16T19:49:03 |
2025-04-01T04:34:49.986237
|
{
"authors": [
"Alexander-Segovia",
"SWAPNACHEMBOTH"
],
"repo": "lab-brussels-1/group2hw",
"url": "https://github.com/lab-brussels-1/group2hw/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
115202076
|
Not usable with client browsers of Firefox, Internet Explorer
When I try to access the scalabrad-web page using non-Chrome browsers (tried Firefox, Internet Explorer) with version -M1 from bintray, I get just a blank page. No login screen, nothing.
We have not worked at all to achieve cross-browser compatibility. Any reason you can't use chrome?
Just that I have a very strong preference toward Firefox (I like the easy access to Bookmarks, personalization).
PRs welcome :) My guess is that the problem is with html imports, but if you can provide some more info from firebug or the dev console that would help.
@joshmutus FWIW easy access to bookmarks is a configuration option away in Chrome. Same thing with personalization. Not saying you should switch, just that the impediments you've listed so far can be fixed with a couple Google searches.
Just tried playing with this a bit, and it looks like the problem is due to html imports being polyfilled in firefox. Here's what I see in the network log when running in prod mode:
GET http://localhost:9000/ [HTTP/1.1 304 Not Modified 2ms]
GET http://localhost:9000/styles/main.css [HTTP/1.1 304 Not Modified 3ms]
GET http://localhost:9000/bower_components/webcomponentsjs/webcomponents-lite.js [HTTP/1.1 304 Not Modified 2ms]
GET http://localhost:9000/scripts/bundle.js [HTTP/1.1 304 Not Modified 2ms]
mutating the [[Prototype]] of an object will cause your code to run very slowly; instead create the object with the correct initial [[Prototype]] value using Object.create bundle.js:1221:5
ReferenceError: polymer is not defined bundle.js:16798:1
GET XHR http://localhost:9000/elements/elements.vulcanized.html [HTTP/1.1 304 Not Modified 2ms]
GET https://fonts.googleapis.com/css [HTTP/2.0 200 OK 0ms]
GET https://fonts.googleapis.com/css [HTTP/2.0 200 OK 0ms]
The problem seems to be that elements.vulcanized.html is being loaded asynchronously (via XHR) and so it gets parsed after the browser loads and runs bundle.js; since the polymer library is included in the html imports, it is not defined when bundle.js gets run, and the app fails to start.
I tried adding a simple html file so that both bundle.js and elements.html are loaded via html imports. This fixes the loading because the polyfill takes care of getting the loading order correct for both. However, there are a bunch of rendering bugs in firefox that will have to be looked into separately. The big one is that polymer dialogs seem not to work well. I'll make a PR with the loading changes and we can go from there.
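The loading change described might look roughly like this (the file name app.html is hypothetical; the actual PR has the details):
<!-- index.html: a single import, so the polyfill controls everything -->
<link rel="import" href="app.html">
<!-- app.html: both resources behind HTML imports, loaded in order -->
<link rel="import" href="elements/elements.vulcanized.html">
<script src="scripts/bundle.js"></script>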
+1 for firefox support.
If the only thing blocking cross-browser compatibility is use of HTML imports, fixing #331 should fix this as well.
|
gharchive/issue
| 2015-11-05T03:33:05 |
2025-04-01T04:34:50.011258
|
{
"authors": [
"DanielSank",
"btchiaro",
"gschaffner",
"jwenner",
"maffoo"
],
"repo": "labrad/scalabrad-web",
"url": "https://github.com/labrad/scalabrad-web/issues/93",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1727986326
|
perf: compress default desktop background size.
I have no idea why such a large image was uploaded in a web application; it's too large for normal users, so I compressed it.
Preview
https://github.com/labring/sealos/assets/6964737/c5c6b58d-4a3e-4e04-ae3e-90c6facc0df2
Changed
size change: 5 MB -> 963 KB (still quite large, but I don't want to change the image dimensions)
Great work!
|
gharchive/pull-request
| 2023-05-26T17:23:09 |
2025-04-01T04:34:50.013865
|
{
"authors": [
"moonrailgun",
"zzjin"
],
"repo": "labring/sealos",
"url": "https://github.com/labring/sealos/pull/3149",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
456021676
|
Show all and delete vacations
Pulls all vacations for a user and gives the user the ability to delete them.
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
Change Status
[x] Complete, tested, ready to review and merge
[ ] Complete, but not tested (may need new tests)
[ ] Incomplete/work-in-progress, PR is for discussion/feedback
How Has This Been Tested?
[ ] Test A
[ ] Test B
Checklist
[x] My code follows the style guidelines of this project
[ ] I have performed a self-review of my own code
[x] My code has been reviewed by at least one peer
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[x] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
[x] There are no merge conflicts
Don't approve this yet
|
gharchive/pull-request
| 2019-06-14T01:34:03 |
2025-04-01T04:34:50.019409
|
{
"authors": [
"brianmgre"
],
"repo": "labsce1-ptbot/backend",
"url": "https://github.com/labsce1-ptbot/backend/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
500397865
|
Feature Request - being able to control when the list modal opens and closes
It would be useful to programmatically trigger when tribute opens and closes using this library. For example, hitting "tab" on a list option would cause an @mention but leave tribute open, while hitting "enter" would cause an @mention and close tribute.
Thanks for the suggestion. This is rather something to ask for in the native library though, not in the Angular wrapper.
|
gharchive/issue
| 2019-09-30T16:54:01 |
2025-04-01T04:34:50.066086
|
{
"authors": [
"ConnorWin",
"agarbund"
],
"repo": "ladderio/ngx-tribute",
"url": "https://github.com/ladderio/ngx-tribute/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1384743756
|
Create hand scene
#2zqg53q
Task linked: CU-2zqg53q create hand scene
|
gharchive/pull-request
| 2022-09-24T16:58:13 |
2025-04-01T04:34:50.091772
|
{
"authors": [
"sredna43"
],
"repo": "lagbagstudios/Whist",
"url": "https://github.com/lagbagstudios/Whist/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
77940084
|
Release 0.1.2
Fix some bugs and support the new version of DPDK.
Note: This breaks compatibility with DPDK v1.6.0
LGTM.
Thanks!
|
gharchive/pull-request
| 2015-05-19T05:24:43 |
2025-04-01T04:34:50.103600
|
{
"authors": [
"hibitomo",
"ynkjm"
],
"repo": "lagopus/lagopus",
"url": "https://github.com/lagopus/lagopus/pull/36",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
285188954
|
Sharing feedback and asking two questions: when printing, can the header and page numbers be omitted? Also, can printing be done directly without a preview?
Thank you very much for sharing this print component you made. I now want to use it to print receipts, but when printing, the time is output in the top-left corner, the URL in the bottom-right corner, and there are also page numbers. Can these be hidden? And can printing be done directly without a preview?
@woshiyifadaguangtou First question: you can only set that yourself on the print page the browser brings up. Second question: this is currently not implemented.
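For the first question, a commonly used browser-side trick (not a feature of e-ngx-print itself) is to zero the page margins in print styles, which leaves the browser no room to draw its default header and footer:
@media print {
  @page { margin: 0; }   /* no margin: the browser's header/footer disappear */
  body  { margin: 1cm; } /* re-add breathing room inside the page */
}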
|
gharchive/issue
| 2017-12-30T07:21:13 |
2025-04-01T04:34:50.124616
|
{
"authors": [
"laixiangran",
"woshiyifadaguangtou"
],
"repo": "laixiangran/e-ngx-print",
"url": "https://github.com/laixiangran/e-ngx-print/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2074349992
|
test succeeds even when config cannot be read properly
Why the config is not read properly I haven't found out yet, but either way I think the situation below should not result in a successful test.
Either the parse error should be raised completely, or an empty array should not result in a success?
==Starting gh-actions HTTP Status==
jq: parse error: Expected another array element at line 1, column 120
HTTP Status result: Success
==END==
I think what happened in this case is that the steps.deploy.outputs.url var did not exist.
- name: Confirm deployed HTTP status code
uses: "lakuapik/gh-actions-http-status@v1"
with:
sites: '["${{ steps.deploy.outputs.url }}"]'
expected: "[200]"
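One defensive pattern, assuming the root cause really is an empty steps.deploy.outputs.url, is to skip the step when the output is missing:
- name: Confirm deployed HTTP status code
  if: steps.deploy.outputs.url != ''
  uses: "lakuapik/gh-actions-http-status@v1"
  with:
    sites: '["${{ steps.deploy.outputs.url }}"]'
    expected: "[200]"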
|
gharchive/issue
| 2024-01-10T13:11:47 |
2025-04-01T04:34:50.129767
|
{
"authors": [
"gosuto-inzasheru"
],
"repo": "lakuapik/gh-actions-http-status",
"url": "https://github.com/lakuapik/gh-actions-http-status/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
151033875
|
File storage for Circles
This PR adds documents to a Circle through an extensible FileUpload model for file storage.
All files stored in AWS S3
Application level permissions via the Ability class
Files can be cached by browsers for 1 hour, but not through intermediary proxies (e.g. Cloudflare CDN, ISPs, etc.)
All files are encrypted
128bit AES GCM encryption
Unique encryption keys per file (which are stored encrypted in PostgreSQL)
Encryption in Rails to prevent S3 data breaches
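A minimal sketch of that per-file envelope scheme (method and variable names are illustrative, not the app's actual API):
require 'openssl'

def encrypt_for_s3(plaintext)
  cipher = OpenSSL::Cipher.new('aes-128-gcm').encrypt
  key = cipher.random_key               # unique key per file
  iv  = cipher.random_iv
  ciphertext = cipher.update(plaintext) + cipher.final
  # ciphertext goes to S3; key (itself encrypted), iv and auth tag go to PostgreSQL
  { ciphertext: ciphertext, key: key, iv: iv, tag: cipher.auth_tag }
end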
Issue #53
Shipping first pass for now!
|
gharchive/pull-request
| 2016-04-26T03:48:24 |
2025-04-01T04:34:50.132377
|
{
"authors": [
"phil-monroe"
],
"repo": "lale-help/lale-help",
"url": "https://github.com/lale-help/lale-help/pull/305",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1801831025
|
initial commit
Fixes # (issue)
Changes
Please provide a brief description of the changes here.
For significant contributions please make sure you have completed the following items:
[ ] CHANGELOG.md updated for non-trivial changes
[ ] Unit tests have been added
[ ] Changes in public API reviewed
Codecov Report
Merging #258 (8837d1d) into main (cfcda57) will not change coverage.
The diff coverage is n/a.
:exclamation: Current head 8837d1d differs from pull request most recent head 7f16b2d. Consider uploading reports for the commit 7f16b2d to get more accurate results
:exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the Github App Integration for your organization. Read more.
Additional details and impacted files
@@ Coverage Diff @@
## main #258 +/- ##
=======================================
Coverage 87.53% 87.53%
=======================================
Files 169 169
Lines 4888 4888
=======================================
Hits 4278 4278
Misses 610 610
|
gharchive/pull-request
| 2023-07-12T22:12:24 |
2025-04-01T04:34:50.138142
|
{
"authors": [
"codecov-commenter",
"lalitb"
],
"repo": "lalitb/opentelemetry-cpp",
"url": "https://github.com/lalitb/opentelemetry-cpp/pull/258",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1240855714
|
Regressors failing for some kinds of data
For some data sets training is failing. Given the MethodError thrown, this looks like a bug to me:
julia> using MLJBase, PartialLeastSquaresRegressor
julia> X, y = @load_boston;
julia> machine(PartialLeastSquaresRegressor.PLSRegressor(), X, y) |> fit!
[ Info: Training machine(PLSRegressor(n_factors = 1), …).
┌ Error: Problem fitting the machine machine(PLSRegressor(n_factors = 1), …).
└ @ MLJBase ~/.julia/packages/MLJBase/wnJff/src/machines.jl:617
[ Info: Running type checks...
[ Info: Type checks okay.
ERROR: MethodError: no method matching check_constant_cols(::SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true})
Closest candidates are:
check_constant_cols(::Matrix{T}) where T<:AbstractFloat at /Users/anthony/.julia/packages/PartialLeastSquaresRegressor/OrIoJ/src/utils.jl:26
check_constant_cols(::Vector{T}) where T<:AbstractFloat at /Users/anthony/.julia/packages/PartialLeastSquaresRegressor/OrIoJ/src/utils.jl:27
Stacktrace:
[1] fit(m::PartialLeastSquaresRegressor.PLSRegressor, verbosity::Int64, X::NamedTuple{(:Crim, :Zn, :Indus, :NOx, :Rm, :Age, :Dis, :Rad, :Tax, :PTRatio, :Black, :LStat), NTuple{12, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}}}, Y::SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true})
@ PartialLeastSquaresRegressor ~/.julia/packages/PartialLeastSquaresRegressor/OrIoJ/src/mlj_interface.jl:65
[2] fit_only!(mach::Machine{PartialLeastSquaresRegressor.PLSRegressor, true}; rows::Nothing, verbosity::Int64, force::Bool)
@ MLJBase ~/.julia/packages/MLJBase/wnJff/src/machines.jl:615
[3] fit_only!
@ ~/.julia/packages/MLJBase/wnJff/src/machines.jl:568 [inlined]
[4] #fit!#52
@ ~/.julia/packages/MLJBase/wnJff/src/machines.jl:683 [inlined]
[5] fit!
@ ~/.julia/packages/MLJBase/wnJff/src/machines.jl:681 [inlined]
[6] |>(x::Machine{PartialLeastSquaresRegressor.PLSRegressor, true}, f::typeof(fit!))
@ Base ./operators.jl:858
[7] top-level scope
@ REPL[162]:1
[8] top-level scope
@ ~/.julia/packages/CUDA/fAEDi/src/initialization.jl:52
Probably check_constant_cols(::Matrix{T}) and check_constant_cols(::Vector{T}) just need to be made generic:
check_constant_cols(::AbstractMatrix{T})
check_constant_cols(::AbstractVector{T})
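A minimal sketch of the widened methods (the body here is hypothetical; the real one lives in src/utils.jl):
function check_constant_cols(X::AbstractMatrix{T}) where {T<:AbstractFloat}
    for j in axes(X, 2)
        col = view(X, :, j)
        all(x -> x == first(col), col) && error("column $j is constant")
    end
    return true
end

check_constant_cols(v::AbstractVector{T}) where {T<:AbstractFloat} =
    check_constant_cols(reshape(v, :, 1))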
|
gharchive/issue
| 2022-05-19T00:49:12 |
2025-04-01T04:34:50.141196
|
{
"authors": [
"ablaom"
],
"repo": "lalvim/PartialLeastSquaresRegressor.jl",
"url": "https://github.com/lalvim/PartialLeastSquaresRegressor.jl/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1809756485
|
Wasm cairo integration
Because most of the code in "Cairo by Example" is pure Cairo code, adding an online playground for Cairo makes some sense.
Of course, this playground uses https://github.com/lambdaclass/cairo-vm.
You can see how it works.
or see this video.
Can you review this PR, sir?
@unbalancedparentheses @MegaRedHand @juanbono @klaus993 @SantiagoPittella
Instead of having the binary committed, we should add a step in the deployment that runs the compilation (we could do that if you want).
Yes, let me make a script for this.
Saw that you are using the version 2.0.0-rc2 of the compiler. Can you upgrade to latest?
It is Compiler v2.0.1 I think. I will update it to 2.0.2.
Also, for programs it works awesome. But with contracts it still tries to run the main entrypoint. Can you change that? Basically, change the compiler for contracts and don't run any entrypoint; just check if it compiles.
Surely we can! I will update for this too.
And can you tell me how to set up github pages for hugo.yml? I set my fork repo like this but .css and .js files are not there.
https://cryptonerdcn.github.io/cairo-by-example/index.html
Surely we can! It already has a function for this purpose. All we need is just something like "Compiled success." or "Compiled failed." in the result dialog?
Yes! That would be amazing.
BTW: Can you tell me how to set up github pages for hugo.yml? I set my fork repo like this but .css and .js files are not there.
I'm hooking you up with someone that can answer this. He should be answering soon.
Hey there! Thinking about what could be going wrong, I see that the extra stuff in your fork is the assets/ directory. Does this work out of the box with hugo build/serve when testing locally? Maybe we need to modify the hugo.yml to correctly deploy this new code/directory to GitHub Pages. Maybe this doesn't end up in public/, which is what's uploaded to GitHub for the deploy. This is just my gut feeling, I can run some tests and get back to you, as it could be an entirely different issue, but let's see.
Thanks for the reply @klaus993 ! In the local machine, it works perfectly.
In the Upload artifact step of my repo's github action, it seems everything was uploaded.
https://github.com/cryptonerdcn/cairo-by-example/actions/runs/5587110247/jobs/10212023055
|
gharchive/pull-request
| 2023-07-18T11:41:07 |
2025-04-01T04:34:50.162651
|
{
"authors": [
"SantiagoPittella",
"cryptonerdcn"
],
"repo": "lambdaclass/cairo-by-example",
"url": "https://github.com/lambdaclass/cairo-by-example/pull/101",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
672531460
|
In Laminar, the margin trading home page shows My Positions / Orders records, but the specific trading-pair page shows no data under My Positions / Orders Open
The margin trading home page in Laminar can show My Positions / Orders records, but after entering a specific trading-pair page, My Positions / Orders Open shows no data. When I click into the sub-page, it opens https://flow.laminar.one/margin/0/EURUSD, and no trade data is visible there.
5CccLc64WB2hwWorqg92A4S5y9fwAiAdbAKLRY5obDskA26N
There are two pools, laminar and fx; your trades are in the fx pool, not in the laminar one.
|
gharchive/issue
| 2020-08-04T06:09:13 |
2025-04-01T04:34:50.165771
|
{
"authors": [
"frighter518",
"xlc"
],
"repo": "laminar-protocol/flow-exchange",
"url": "https://github.com/laminar-protocol/flow-exchange/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
494090883
|
QNX 6.6 Port Client httplib_connect_websocket_client null
Hi Mr. Bies, I was porting this to QNX 6.6 and 7.0... the websocket server functionality works great! Thanks!
(I am able to communicate via the browser as a client), but I would like a websocket client in a process.
I modified the websocket client code (examples\websocket_client)
to work with the latest (Latest commit bb25779 on Aug 15)
However, using the client code specifically to create a connection always returns null, and although the server's logs/error handler is enabled in full debugging mode, I get no error on the server side indicating what the trouble is.
(Note: I removed the websocket server embedded in the example (examples\websocket_client), and have the libHTTP server running, listening on port 1339.)
Here are the arguments for httplib_connect_websocket_client:
newconn1 = httplib_connect_websocket_client(ctx,"127.0.0.1",atoi("1339"),0,"/websocket",NULL,websocket_client_data_handler,websocket_client_close_handler,&client1_data);
if (newconn1 == NULL) {
printf("newconn1 httplib_connect_websocket_client() unknown:%s\n");
return 1;
}
Is there anything else I'm missing? Does ctx have to be initialized with httplib_create_client_context?
Thanks
I am also unable to verify the client functionality. Could you please let me know of any solution for this?
QNX will not be supported. They refuse to answer my requests for a free license to port my open source libraries to QNX. And I am not willing to pay 5000 dollar for a license to develop something which benefits them but doesn't bring me any money.
QNX will only be considered again if someone is willing to pay a perpetual license for me.
I feel that
|
gharchive/issue
| 2019-09-16T14:35:18 |
2025-04-01T04:34:50.182622
|
{
"authors": [
"evilsalvo",
"lammertb",
"mvsrivas"
],
"repo": "lammertb/libhttp",
"url": "https://github.com/lammertb/libhttp/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1994998774
|
feat: add internal debugging apis for manifest and txn
Add APIs for loading and print manifest and txn files.
These are strictly meant for internal use and are kept under lance._internal
We now have some debug print functions introduced in #2202. Do we still want to pursue this?
nope, closing
|
gharchive/pull-request
| 2023-11-15T15:24:42 |
2025-04-01T04:34:50.202463
|
{
"authors": [
"chebbyChefNEQ",
"wjones127"
],
"repo": "lancedb/lance",
"url": "https://github.com/lancedb/lance/pull/1604",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1964674995
|
Toybox is broken on MacOS now
I am seeing the same errors I'm getting in the CI logs. I think a recent push broke something Mac-side
warning: using unfinished code from toys/pending
Library probe
generated/{Config.in,newtoys.h,flags.h,globals.h,tags.h,help.h}
Compile toybox
.........................................................................................toys/posix/getconf.c:110:3: error: use of undeclared identifier '_SC_LEVEL1_ICACHE_SIZE'
CONF(LEVEL1_ICACHE_SIZE), CONF(LEVEL1_ICACHE_ASSOC),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:73:1: note: expanded from here
_SC_LEVEL1_ICACHE_SIZE
^
toys/posix/getconf.c:110:29: error: use of undeclared identifier '_SC_LEVEL1_ICACHE_ASSOC'
CONF(LEVEL1_ICACHE_SIZE), CONF(LEVEL1_ICACHE_ASSOC),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:75:1: note: expanded from here
_SC_LEVEL1_ICACHE_ASSOC
^
toys/posix/getconf.c:111:3: error: use of undeclared identifier '_SC_LEVEL1_ICACHE_LINESIZE'
CONF(LEVEL1_ICACHE_LINESIZE), CONF(LEVEL1_DCACHE_SIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:77:1: note: expanded from here
_SC_LEVEL1_ICACHE_LINESIZE
^
toys/posix/getconf.c:111:33: error: use of undeclared identifier '_SC_LEVEL1_DCACHE_SIZE'
CONF(LEVEL1_ICACHE_LINESIZE), CONF(LEVEL1_DCACHE_SIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:79:1: note: expanded from here
_SC_LEVEL1_DCACHE_SIZE
^
.toys/posix/getconf.c:112:3: error: use of undeclared identifier '_SC_LEVEL1_DCACHE_ASSOC'
CONF(LEVEL1_DCACHE_ASSOC), CONF(LEVEL1_DCACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:81:1: note: expanded from here
_SC_LEVEL1_DCACHE_ASSOC
^
toys/posix/getconf.c:112:30: error: use of undeclared identifier '_SC_LEVEL1_DCACHE_LINESIZE'
CONF(LEVEL1_DCACHE_ASSOC), CONF(LEVEL1_DCACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:83:1: note: expanded from here
_SC_LEVEL1_DCACHE_LINESIZE
^
toys/posix/getconf.c:113:3: error: use of undeclared identifier '_SC_LEVEL2_CACHE_SIZE'
CONF(LEVEL2_CACHE_SIZE),CONF(LEVEL2_CACHE_ASSOC),CONF(LEVEL2_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:85:1: note: expanded from here
_SC_LEVEL2_CACHE_SIZE
^
toys/posix/getconf.c:113:27: error: use of undeclared identifier '_SC_LEVEL2_CACHE_ASSOC'
CONF(LEVEL2_CACHE_SIZE),CONF(LEVEL2_CACHE_ASSOC),CONF(LEVEL2_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:87:1: note: expanded from here
_SC_LEVEL2_CACHE_ASSOC
^
toys/posix/getconf.c:113:52: error: use of undeclared identifier '_SC_LEVEL2_CACHE_LINESIZE'
CONF(LEVEL2_CACHE_SIZE),CONF(LEVEL2_CACHE_ASSOC),CONF(LEVEL2_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:89:1: note: expanded from here
_SC_LEVEL2_CACHE_LINESIZE
^
.toys/posix/getconf.c:114:3: error: use of undeclared identifier '_SC_LEVEL3_CACHE_SIZE'
CONF(LEVEL3_CACHE_SIZE),CONF(LEVEL3_CACHE_ASSOC),CONF(LEVEL3_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:91:1: note: expanded from here
_SC_LEVEL3_CACHE_SIZE
^
toys/posix/getconf.c:114:27: error: use of undeclared identifier '_SC_LEVEL3_CACHE_ASSOC'
CONF(LEVEL3_CACHE_SIZE),CONF(LEVEL3_CACHE_ASSOC),CONF(LEVEL3_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:93:1: note: expanded from here
_SC_LEVEL3_CACHE_ASSOC
^
toys/posix/getconf.c:114:52: error: use of undeclared identifier '_SC_LEVEL3_CACHE_LINESIZE'
CONF(LEVEL3_CACHE_SIZE),CONF(LEVEL3_CACHE_ASSOC),CONF(LEVEL3_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:95:1: note: expanded from here
_SC_LEVEL3_CACHE_LINESIZE
^
toys/posix/getconf.c:115:3: error: use of undeclared identifier '_SC_LEVEL4_CACHE_SIZE'
CONF(LEVEL4_CACHE_SIZE),CONF(LEVEL4_CACHE_ASSOC),CONF(LEVEL4_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:97:1: note: expanded from here
_SC_LEVEL4_CACHE_SIZE
^
toys/posix/getconf.c:115:27: error: use of undeclared identifier '_SC_LEVEL4_CACHE_ASSOC'
CONF(LEVEL4_CACHE_SIZE),CONF(LEVEL4_CACHE_ASSOC),CONF(LEVEL4_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:99:1: note: expanded from here
_SC_LEVEL4_CACHE_ASSOC
^
toys/posix/getconf.c:115:52: error: use of undeclared identifier '_SC_LEVEL4_CACHE_LINESIZE'
CONF(LEVEL4_CACHE_SIZE),CONF(LEVEL4_CACHE_ASSOC),CONF(LEVEL4_CACHE_LINESIZE),
^
toys/posix/getconf.c:96:21: note: expanded from macro 'CONF'
#define CONF(n) {#n,_SC_ ## n}
^
<scratch space>:101:1: note: expanded from here
_SC_LEVEL4_CACHE_LINESIZE
^
toys/posix/getconf.c:204:23: error: invalid application of 'sizeof' to an incomplete type 'struct config[]'
int i, j, lens[] = {ARRAY_LEN(sysconfs), ARRAY_LEN(pathconfs),
^~~~~~~~~~~~~~~~~~~
./toys.h:136:33: note: expanded from macro 'ARRAY_LEN'
#define ARRAY_LEN(array) (sizeof(array)/sizeof(*array))
^~~~~~~
16 errors generated.
make: *** [toybox] Error 1
I have the same issue on Ubuntu 22.04
Sorry, reversed ifdef. Try commit 5e9d2fa14895
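For context, the portable guard pattern involved looks like this standalone example (illustrative only; the actual patch is the commit above):
/* glibc defines _SC_LEVEL1_ICACHE_SIZE and friends; macOS libc does not,
   so the names must be guarded rather than used unconditionally. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
#ifdef _SC_LEVEL1_ICACHE_SIZE
  printf("L1 icache: %ld\n", sysconf(_SC_LEVEL1_ICACHE_SIZE));
#else
  printf("_SC_LEVEL1_ICACHE_SIZE not available on this libc\n");
#endif
  return 0;
}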
Can I close this now?
|
gharchive/issue
| 2023-10-27T02:58:10 |
2025-04-01T04:34:50.236243
|
{
"authors": [
"Christoffer-Svenningsson",
"cfossace",
"landley"
],
"repo": "landley/toybox",
"url": "https://github.com/landley/toybox/issues/462",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
}
|
2350615909
|
🛑 vipgifts.net is down
In 16f3377, vipgifts.net (https://vipgifts.net) was down:
HTTP code: 567
Response time: 724 ms
Resolved: vipgifts.net is back up in 5272c58 after 23 minutes.
|
gharchive/issue
| 2024-06-13T09:12:10 |
2025-04-01T04:34:50.245287
|
{
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/11593",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1838648799
|
🛑 trixmall.com is down
In e28263f, trixmall.com (https://www.trixmall.com/) was down:
HTTP code: 567
Response time: 540 ms
Resolved: trixmall.com is back up in e9342c5.
|
gharchive/issue
| 2023-08-07T04:23:52 |
2025-04-01T04:34:50.248276
|
{
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/2354",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1903478044
|
🛑 rexingsports.cn is down
In 7b4b989, rexingsports.cn (https://www.rexingsports.cn) was down:
HTTP code: 567
Response time: 988 ms
Resolved: rexingsports.cn is back up in 497146b after 8 minutes.
|
gharchive/issue
| 2023-09-19T17:57:51 |
2025-04-01T04:34:50.251206
|
{
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/4565",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.