id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1583746027
|
Use custom service account for metrics deployment
Local run:
[bnr@bnr pipeline-service]$ k -n openshift-pipelines get pods
NAME READY STATUS RESTARTS AGE
pipeline-metrics-exporter-7c87c975bc-t2r5r 1/1 Running 0 33s
# HELP pipeline_service_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which pipeline_service_exporter was built.
# TYPE pipeline_service_exporter_build_info gauge
pipeline_service_exporter_build_info{branch="",goversion="go1.19.5",revision="",version=""} 1
# HELP pipelinerun_duration_completed_seconds Duration in seconds for a PipelineRun to complete.
# TYPE pipelinerun_duration_completed_seconds gauge
pipelinerun_duration_completed_seconds{name="pipelinerun-echo-greetings",uid="da686962-80de-413e-8ee8-34987b60c0ac"} 10
# HELP pipelinerun_duration_scheduled_seconds Duration in seconds for a PipelineRun to be scheduled.
# TYPE pipelinerun_duration_scheduled_seconds gauge
pipelinerun_duration_scheduled_seconds{name="pipelinerun-echo-greetings",uid="da686962-80de-413e-8ee8-34987b60c0ac"} 0
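For context, here is a minimal client_golang sketch of how gauges like the ones above are typically registered and served; this is an illustration under assumed names, not the exporter's actual source.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Gauge mirroring the pipelinerun_duration_completed_seconds output above.
var pipelineRunDuration = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "pipelinerun_duration_completed_seconds",
		Help: "Duration in seconds for a PipelineRun to complete.",
	},
	[]string{"name", "uid"},
)

func main() {
	prometheus.MustRegister(pipelineRunDuration)
	// Record one completed PipelineRun (label values taken from the output above).
	pipelineRunDuration.WithLabelValues(
		"pipelinerun-echo-greetings",
		"da686962-80de-413e-8ee8-34987b60c0ac",
	).Set(10)
	// Serve the /metrics endpoint scraped in the local run above.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9117", nil)
}
```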
/lgtm
|
gharchive/pull-request
| 2023-02-14T08:50:17 |
2025-04-01T06:39:53.789933
|
{
"authors": [
"bnallapeta",
"xinredhat"
],
"repo": "openshift-pipelines/pipeline-service",
"url": "https://github.com/openshift-pipelines/pipeline-service/pull/492",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2039429842
|
Refine Dependency Checking and Improve Error Messaging in Development Environment Script
Changes
The pull request introduces enhancements to the development environment script, focusing on improving the accuracy of dependency checks and providing informative error messages for a smoother onboarding experience.
Submitter Checklist
[ ] 📝 A good commit message is important for other reviewers to understand the context of your change. Please refer to How to Write a Git Commit Message for more details on how to write beautiful commit messages. We would rather have the context in the PR body and the commit message than on an external website.
[ ] ♽ Run make test before submitting a PR (i.e. with pre-commit, there is no need to waste CPU cycles on CI), or even better, install pre-commit and run pre-commit install in the root of this repo.
[ ] ✨ We heavily rely on linters to keep our code clean and consistent; please ensure that you have run make lint before submitting a PR. Markdownlint errors can usually be fixed by running make fix-markdownlint (make sure it's installed first).
[ ] 📖 If you are adding a user-facing feature or changing existing behavior, please verify that you have documented it.
[ ] 🧪 100% coverage is not a target, but most of the time we would rather have a unit test if you make a code change.
[ ] 🎁 If it's possible, please check whether we can add an e2e test.
[ ] 🔎 If there is flakiness in the CI tests, don't necessarily ignore it; it's better to get the flakiness fixed before merging, or, if that's not possible, to have a good reason to bypass it (token rate limiting may be a good reason to skip).
Fixes #1531
/cc @chmouel @vdemeester
/ok-to-test
This looks good, just a comment! Since we use this in our e2e tests, which are run by GitHub Actions, we should test this; I don't know why GHA doesn't let me run this for external contributors via an approval process like PAC's /ok-to-test (maybe I have tightened down the security too much there).
It works for me locally on my laptop (TM).
Let me know if you want to address the suggestions, but I am fine to merge this as is.
|
gharchive/pull-request
| 2023-12-13T10:41:57 |
2025-04-01T06:39:53.796817
|
{
"authors": [
"chmouel",
"roman-kiselenko"
],
"repo": "openshift-pipelines/pipelines-as-code",
"url": "https://github.com/openshift-pipelines/pipelines-as-code/pull/1532",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
324598651
|
Switch from go-logging to logrus
Switching from go-logging to logrus, which will match bundle-lib and make our logs a bit more consistent. The one thing that doesn't quite work is colored log output.
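As a rough illustration (assumed code, not this PR's diff), the `time="..." level=... msg="..."` lines below come from a logrus TextFormatter with colors disabled. Note that logrus's Info is Println-style rather than Printf-style, which would explain the literal %v in lines like "OpenShift version: %vv3.10.0..." in the captures below.

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// Plain-text formatter with full timestamps and colors disabled,
	// matching the `time="..." level=info msg="..."` lines below.
	log.SetFormatter(&log.TextFormatter{
		FullTimestamp:   true,
		TimestampFormat: "2006-01-02T15:04:05Z",
		DisableColors:   true,
	})
	log.SetLevel(log.DebugLevel)

	log.Info("Initializing clients...")
	log.Debug("Connecting to Cluster")
	// log.Info is not a format function; a verb like %v passed to it is
	// printed literally. log.Infof is the formatting variant.
	log.Infof("OpenShift version: %v", "v3.10.0-alpha.0+18dfae9-1209")
}
```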
OLD log file, showing the broker logs using go-logging and the logs from bundle-lib, which uses logrus:
Using config file mounted to /etc/ansible-service-broker/config.yaml
============================================================
== Creating Ansible Service Broker... ==
============================================================
[2018-05-18T22:36:39.37Z] [NOTICE] - Initializing clients...
[2018-05-18T22:36:39.371Z] [DEBUG] - Connecting to Cluster
time="2018-05-18T22:36:39Z" level=info msg="OpenShift version: %vv3.10.0-alpha.0+18dfae9-1209"
time="2018-05-18T22:36:39Z" level=info msg="unable to retrieve the network plugin, defaulting to not joining networks - clusternetworks.network.openshift.io \"default\" not found"
time="2018-05-18T22:36:39Z" level=info msg="Kubernetes version: %vv1.10.0+b81c8f8"
time="2018-05-18T22:36:39Z" level=info msg="== REGISTRY CX == "
time="2018-05-18T22:36:39Z" level=info msg="Name: dh"
time="2018-05-18T22:36:39Z" level=info msg="Type: dockerhub"
time="2018-05-18T22:36:39Z" level=info msg="Url: docker.io"
time="2018-05-18T22:36:39Z" level=info msg="== REGISTRY CX == "
time="2018-05-18T22:36:39Z" level=info msg="Name: localregistry"
time="2018-05-18T22:36:39Z" level=info msg="Type: local_openshift"
time="2018-05-18T22:36:39Z" level=info msg="Url: "
[2018-05-18T22:36:39.38Z] [DEBUG] - Connecting Dao
[2018-05-18T22:36:39.38Z] [DEBUG] - Connecting Registry
[2018-05-18T22:36:39.381Z] [DEBUG] - Initializing WorkEngine
[2018-05-18T22:36:39.382Z] [DEBUG] - Creating AnsibleBroker
============================================================
== Starting Ansible Service Broker... ==
============================================================
[2018-05-18T22:36:39.383Z] [INFO] - Initiating Recovery Process
[2018-05-18T22:36:39.408Z] [INFO] - Recovery complete
[2018-05-18T22:36:39.409Z] [NOTICE] - recover called
[2018-05-18T22:36:39.409Z] [INFO] - Broker configured to bootstrap on startup
[2018-05-18T22:36:39.409Z] [INFO] - Attempting bootstrap...
[2018-05-18T22:36:39.409Z] [INFO] - AnsibleBroker::Bootstrap
[2018-05-18T22:36:39.409Z] [DEBUG] - Dao::BatchGetSpecs
logrus at INFO level only, no color
Using config file mounted to /etc/ansible-service-broker/config.yaml
============================================================
== Creating Ansible Service Broker... ==
============================================================
time="2018-05-18T22:15:49Z" level=info msg="Initializing clients..."
time="2018-05-18T22:15:49Z" level=info msg="OpenShift version: %vv3.10.0-alpha.0+18dfae9-1209"
time="2018-05-18T22:15:49Z" level=info msg="unable to retrieve the network plugin, defaulting to not joining networks - clusternetworks.network.openshift.io \"default\" not found"
time="2018-05-18T22:15:49Z" level=info msg="Kubernetes version: %vv1.10.0+b81c8f8"
time="2018-05-18T22:15:49Z" level=info msg="== REGISTRY CX == "
time="2018-05-18T22:15:49Z" level=info msg="Name: dh"
time="2018-05-18T22:15:49Z" level=info msg="Type: dockerhub"
============================================================
time="2018-05-18T22:15:49Z" level=info msg="Url: docker.io"
time="2018-05-18T22:15:49Z" level=info msg="== REGISTRY CX == "
time="2018-05-18T22:15:49Z" level=info msg="Name: localregistry"
time="2018-05-18T22:15:49Z" level=info msg="Type: local_openshift"
time="2018-05-18T22:15:49Z" level=info msg="Url: "
time="2018-05-18T22:15:49Z" level=info msg="Initiating Recovery Process"
== Starting Ansible Service Broker... ==
============================================================
time="2018-05-18T22:15:49Z" level=info msg="Recovery complete"
time="2018-05-18T22:15:49Z" level=info msg="recover called"
time="2018-05-18T22:15:49Z" level=info msg="Broker configured to bootstrap on startup"
time="2018-05-18T22:15:49Z" level=info msg="Attempting bootstrap..."
time="2018-05-18T22:15:49Z" level=info msg="AnsibleBroker::Bootstrap"
You are a hero.
logrus at DEBUG level for BOTH broker and bundle-lib, no color
Using config file mounted to /etc/ansible-service-broker/config.yaml
============================================================
== Creating Ansible Service Broker... ==
============================================================
time="2018-05-19T03:16:59Z" level=info msg="Initializing clients..."
time="2018-05-19T03:16:59Z" level=debug msg="Connecting to Cluster"
time="2018-05-19T03:16:59Z" level=info msg="OpenShift version: %vv3.10.0-alpha.0+18dfae9-1209"
time="2018-05-19T03:16:59Z" level=debug msg="plugin for the network - "
time="2018-05-19T03:16:59Z" level=info msg="unable to retrieve the network plugin, defaulting to not joining networks - clusternetworks.network.openshift.io \"default\" not found"
time="2018-05-19T03:16:59Z" level=info msg="Kubernetes version: %vv1.10.0+b81c8f8"
time="2018-05-19T03:16:59Z" level=debug msg="Connecting Dao"
time="2018-05-19T03:16:59Z" level=debug msg="Connecting Registry"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get user from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get pass from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get images from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get namespaces from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get fail_on_error from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get black_list from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get auth_type from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get auth_name from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get runner from config"
time="2018-05-19T03:16:59Z" level=info msg="== REGISTRY CX == "
time="2018-05-19T03:16:59Z" level=info msg="Name: dh"
time="2018-05-19T03:16:59Z" level=info msg="Type: dockerhub"
time="2018-05-19T03:16:59Z" level=info msg="Url: docker.io"
time="2018-05-19T03:16:59Z" level=debug msg="Creating filter for registry: %sdh"
time="2018-05-19T03:16:59Z" level=debug msg="whitelist: %v[.*-apb$]"
time="2018-05-19T03:16:59Z" level=debug msg="blacklist: %v[]"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get url from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get user from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get pass from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get org from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get tag from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get images from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get fail_on_error from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get black_list from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get auth_type from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get auth_name from config"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get runner from config"
time="2018-05-19T03:16:59Z" level=info msg="== REGISTRY CX == "
time="2018-05-19T03:16:59Z" level=info msg="Name: localregistry"
time="2018-05-19T03:16:59Z" level=info msg="Type: local_openshift"
time="2018-05-19T03:16:59Z" level=info msg="Url: "
time="2018-05-19T03:16:59Z" level=debug msg="Creating filter for registry: %slocalregistry"
time="2018-05-19T03:16:59Z" level=debug msg="whitelist: %v[.*]"
time="2018-05-19T03:16:59Z" level=debug msg="blacklist: %v[]"
time="2018-05-19T03:16:59Z" level=debug msg="Initializing WorkEngine"
time="2018-05-19T03:16:59Z" level=debug msg="Unable to get secrets from config"
time="2018-05-19T03:16:59Z" level=debug msg="Creating AnsibleBroker"
============================================================
== Starting Ansible Service Broker... ==
============================================================
time="2018-05-19T03:16:59Z" level=info msg="Initiating Recovery Process"
time="2018-05-19T03:16:59Z" level=info msg="Recovery complete"
time="2018-05-19T03:16:59Z" level=info msg="recover called"
time="2018-05-19T03:16:59Z" level=info msg="Broker configured to bootstrap on startup"
time="2018-05-19T03:16:59Z" level=info msg="Attempting bootstrap..."
time="2018-05-19T03:16:59Z" level=info msg="AnsibleBroker::Bootstrap"
logrus with color (NOTE: I disabled this because the formatting needs work)
Using config file mounted to /etc/ansible-service-broker/config.yaml
============================================================
== Creating Ansible Service Broker... ==
============================================================
INFO[0000] Initializing clients...
DEBU[0000] Connecting to Cluster
INFO[0000] OpenShift version: %vv3.10.0-alpha.0+18dfae9-1209
DEBU[0000] plugin for the network -
INFO[0000] unable to retrieve the network plugin, defaulting to not joining networks - clusternetworks.network.openshift.io "default" not found
INFO[0000] Kubernetes version: %vv1.10.0+b81c8f8
DEBU[0000] Connecting Dao
DEBU[0000] Connecting Registry
DEBU[0000] Unable to get user from config
DEBU[0000] Unable to get pass from config
DEBU[0000] Unable to get images from config
DEBU[0000] Unable to get namespaces from config
DEBU[0000] Unable to get fail_on_error from config
DEBU[0000] Unable to get black_list from config
DEBU[0000] Unable to get auth_type from config
DEBU[0000] Unable to get auth_name from config
DEBU[0000] Unable to get runner from config
INFO[0000] == REGISTRY CX ==
INFO[0000] Name: dh
INFO[0000] Type: dockerhub
INFO[0000] Url: docker.io
DEBU[0000] Creating filter for registry: %sdh
DEBU[0000] whitelist: %v[.*-apb$]
DEBU[0000] blacklist: %v[]
DEBU[0000] Unable to get url from config
DEBU[0000] Unable to get user from config
DEBU[0000] Unable to get pass from config
DEBU[0000] Unable to get org from config
DEBU[0000] Unable to get tag from config
DEBU[0000] Unable to get images from config
DEBU[0000] Unable to get fail_on_error from config
DEBU[0000] Unable to get black_list from config
DEBU[0000] Unable to get auth_type from config
DEBU[0000] Unable to get auth_name from config
DEBU[0000] Unable to get runner from config
INFO[0000] == REGISTRY CX ==
INFO[0000] Name: localregistry
INFO[0000] Type: local_openshift
INFO[0000] Url:
DEBU[0000] Creating filter for registry: %slocalregistry
DEBU[0000] whitelist: %v[.*]
DEBU[0000] blacklist: %v[]
DEBU[0000] Initializing WorkEngine
DEBU[0000] Unable to get secrets from config
DEBU[0000] Creating AnsibleBroker
============================================================
== Starting Ansible Service Broker... ==
============================================================
INFO[0000] Initiating Recovery Process
INFO[0000] Recovery complete
INFO[0000] recover called
INFO[0000] Broker configured to bootstrap on startup
INFO[0000] Attempting bootstrap...
INFO[0000] AnsibleBroker::Bootstrap
COLOR output in terminal
RAW log file
Using config file mounted to /etc/ansible-service-broker/config.yaml
============================================================
== Creating Ansible Service Broker... ==
============================================================
[36mINFO[0m[0000] Initializing clients...
[37mDEBU[0m[0000] Connecting to Cluster
[36mINFO[0m[0000] OpenShift version: %vv3.10.0-alpha.0+18dfae9-1209
[37mDEBU[0m[0000] plugin for the network -
[36mINFO[0m[0000] unable to retrieve the network plugin, defaulting to not joining networks - clusternetworks.network.openshift.io "default" not found
[36mINFO[0m[0000] Kubernetes version: %vv1.10.0+b81c8f8
[37mDEBU[0m[0000] Connecting Dao
[37mDEBU[0m[0000] Connecting Registry
[37mDEBU[0m[0000] Unable to get user from config
[37mDEBU[0m[0000] Unable to get pass from config
[37mDEBU[0m[0000] Unable to get images from config
[37mDEBU[0m[0000] Unable to get namespaces from config
[37mDEBU[0m[0000] Unable to get fail_on_error from config
[37mDEBU[0m[0000] Unable to get black_list from config
[37mDEBU[0m[0000] Unable to get auth_type from config
[37mDEBU[0m[0000] Unable to get auth_name from config
[37mDEBU[0m[0000] Unable to get runner from config
[36mINFO[0m[0000] == REGISTRY CX ==
[36mINFO[0m[0000] Name: dh
[36mINFO[0m[0000] Type: dockerhub
[36mINFO[0m[0000] Url: docker.io
[37mDEBU[0m[0000] Creating filter for registry: %sdh
[37mDEBU[0m[0000] whitelist: %v[.*-apb$]
[37mDEBU[0m[0000] blacklist: %v[]
[37mDEBU[0m[0000] Unable to get url from config
[37mDEBU[0m[0000] Unable to get user from config
[37mDEBU[0m[0000] Unable to get pass from config
[37mDEBU[0m[0000] Unable to get org from config
[37mDEBU[0m[0000] Unable to get tag from config
[37mDEBU[0m[0000] Unable to get images from config
[37mDEBU[0m[0000] Unable to get fail_on_error from config
[37mDEBU[0m[0000] Unable to get black_list from config
[37mDEBU[0m[0000] Unable to get auth_type from config
[37mDEBU[0m[0000] Unable to get auth_name from config
[37mDEBU[0m[0000] Unable to get runner from config
[36mINFO[0m[0000] == REGISTRY CX ==
[36mINFO[0m[0000] Name: localregistry
[36mINFO[0m[0000] Type: local_openshift
[36mINFO[0m[0000] Url:
[37mDEBU[0m[0000] Creating filter for registry: %slocalregistry
[37mDEBU[0m[0000] whitelist: %v[.*]
[37mDEBU[0m[0000] blacklist: %v[]
[37mDEBU[0m[0000] Initializing WorkEngine
[37mDEBU[0m[0000] Unable to get secrets from config
[37mDEBU[0m[0000] Creating AnsibleBroker
============================================================
== Starting Ansible Service Broker... ==
============================================================
[36mINFO[0m[0000] Initiating Recovery Process
[36mINFO[0m[0000] Recovery complete
[36mINFO[0m[0000] recover called
[36mINFO[0m[0000] Broker configured to bootstrap on startup
[36mINFO[0m[0000] Attempting bootstrap...
[36mINFO[0m[0000] AnsibleBroker::Bootstrap
@mhrivnak meant to put you on this as a reviewer
|
gharchive/pull-request
| 2018-05-19T04:01:44 |
2025-04-01T06:39:53.808174
|
{
"authors": [
"jmrodri",
"mhrivnak"
],
"repo": "openshift/ansible-service-broker",
"url": "https://github.com/openshift/ansible-service-broker/pull/961",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
939711976
|
Add a default flag for ConsoleYamlSample
Suggested API change here. The operator framework is planning a move to using embedded YAML CRs instead of a CSV and this would remove the need for the alm-examples annotation on CSVs. Once this is in the API, I would plan to make a PR to the console that would use this flag to populate the initial YAML editor when creating new CRs via the UI.
This is partly me working out how this API is updated - I would expect to write a proposal to be discussed somewhere before actually PRing the API repo, but didn't see any contributor guidance, so please feel free to point me at a proper process!
The alternative to this could be to annotate a ConsoleYamlSample to say it is the default, in the same style as annotating a cluster default storage class.
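For illustration only, a hypothetical sketch of the first option (a flag on the spec); the field name, shape, and abbreviated surrounding fields are assumptions, not the actual openshift/api definition.

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ConsoleYamlSampleSpec, abbreviated; the Default field is the
// hypothetical addition under discussion, everything else sketches
// the existing shape.
type ConsoleYamlSampleSpec struct {
	// targetResource identifies the group/version/kind the sample applies to.
	TargetResource metav1.TypeMeta `json:"targetResource"`
	Title          string          `json:"title"`
	Description    string          `json:"description"`
	YAML           string          `json:"yaml"`
	// default, when true, would mark this sample as the one used to
	// pre-populate the console's YAML editor for new CRs.
	// +optional
	Default bool `json:"default,omitempty"`
}
```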
@jhadvig @spadgett - our Red Hat TAM assures me you are the right people to have this discussion with. Can you tell me how to go about discussing this API?
We're looking to move away from alm-examples within OLM towards using ConsoleYAMLSamples because they allow users to select appropriate templates, and because the CSV within the operator bundle is targeted for removal.
The ALM examples are attached to an operator CSV, so when the operator is installed, the examples for that operator are installed with it. If operator version 1.0.0 is installed in namespace op-100, and operator version 1.0.1 is installed in namespace op-101, then each namespace will have its own samples that match the capabilities of that operator, including features that might not have affected the CRD schema.
With ConsoleYamlSamples, the sample is associated with the CRD schema version. If operator 1.0.0 and 1.0.1 use the same schema (v1beta1), then the same samples will be used for each operator, regardless of the operator capabilities. If we go one step further and say operator 1.0.2 includes a new version of the schema (v1beta2), then the samples could include new schema fields that imply to the user those fields are supported, even when that resource is going to be controlled by the 1.0.0 operator, which does not understand the new fields.
I'd like to have a discussion about the implications, and how to handle them. At the moment the OpenShift console can imply things to our customers that don't work. I know this message is outside the scope of the PR (which only addresses part of the concern), but I'm not sure where else to go :)
/remove-lifecycle stale We're still interested in this discussion.
/remove-lifecycle stale
We're still interested in this discussion
/remove-lifecycle stale
We're still interested in this discussion
|
gharchive/pull-request
| 2021-07-08T10:27:23 |
2025-04-01T06:39:53.814668
|
{
"authors": [
"Jamstah"
],
"repo": "openshift/api",
"url": "https://github.com/openshift/api/pull/963",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1879016708
|
MGMT-15673: set core pass on installation step
Added a service for setting the core user's password (when specified). This is required to ensure the password is set after reboot (i.e. as opposed to setting it via the Ignition config).
/retest
|
gharchive/pull-request
| 2023-09-03T09:34:35 |
2025-04-01T06:39:53.816006
|
{
"authors": [
"danielerez"
],
"repo": "openshift/appliance",
"url": "https://github.com/openshift/appliance/pull/139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1664801815
|
Make sure to add federated role label to existing accounts if not present
Small miss from https://github.com/openshift/aws-account-operator/pull/751.
Catches an edge case where the awsFederatedRole label would not be applied to existing awsfederatedaccountaccess CRs.
/hold
I need to investigate the impact of my changes to JoinLabelMaps more.
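For reference, a label-merge helper in the spirit of JoinLabelMaps; this is a sketch of the general pattern, not the operator's actual implementation.

```go
// JoinLabelMaps returns a new map containing every key from base,
// overlaid with every key from overlay. Neither input is mutated, so
// labels already present on a CR are preserved while missing ones
// (e.g. awsFederatedRole on older CRs) are filled in.
func JoinLabelMaps(base, overlay map[string]string) map[string]string {
	out := make(map[string]string, len(base)+len(overlay))
	for k, v := range base {
		out[k] = v
	}
	for k, v := range overlay {
		out[k] = v
	}
	return out
}
```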
/label tide/merge-method-squash
/unhold
Was able to test this by doing the following:
Create the FederatedRoles.
Create an account.
Create a FederatedAccountAccess
Delete the awsFederatedRoleName label out of the FederatedAccountAccess
Checkout this PR
Start the operator
Validate that the label is applied to the FederatedAccountAccess CR.
/lgtm
/hold
Unhold this in the morning.
/hold cancel
|
gharchive/pull-request
| 2023-04-12T15:38:01 |
2025-04-01T06:39:53.856148
|
{
"authors": [
"AlexVulaj",
"fahlmant",
"iamkirkbater"
],
"repo": "openshift/aws-account-operator",
"url": "https://github.com/openshift/aws-account-operator/pull/757",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
568442500
|
[release-4.3] Bug 1802894: bump(github.com/mtrmac/gpgme): v0.1.2
Fixes CVE-2020-8945
/lgtm
/bugzilla refresh
|
gharchive/pull-request
| 2020-02-20T17:00:57 |
2025-04-01T06:39:53.857852
|
{
"authors": [
"adambkaplan",
"gabemontero"
],
"repo": "openshift/builder",
"url": "https://github.com/openshift/builder/pull/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1075754668
|
Library go bump
Bumping the library-go dependency to latest; this should take advantage of leader election changes for SNO clusters proposed in this library-go PR and of performance improvements in this PR.
Changes:
updated library-go to use latest master branch
updated handler structs to use the new explicit handler structs
/retest-required
Is it part of a Jira story or a BZ?
TestLeaderElection needs to be adjusted; apparently the new leader election is slower and the test needs to wait until these pods join the election process.
Is it part of a Jira story or a BZ?
The Jira story this is tied to is CNF-3684
TestLeaderElection needs to be adjusted; apparently the new leader election is slower and the test needs to wait until these pods join the election process.
That's interesting; for non-SNO clusters the timing of 137/107/26 should not have changed. Looking into it
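Those three numbers read like client-go leader-election timings in seconds (lease duration / renew deadline / retry period). A minimal sketch of how such values are wired, with the surrounding lock construction assumed rather than shown:

```go
import (
	"context"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection wires 137/107/26-second timings into client-go's
// leader election. Building the resource lock is elided; see
// resourcelock.New for the real entry point.
func runWithLeaderElection(ctx context.Context, lock resourcelock.Interface, run func(context.Context)) {
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // how long a lease is valid
		RenewDeadline: 107 * time.Second, // leader must renew before this elapses
		RetryPeriod:   26 * time.Second,  // wait between acquisition attempts
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,
			OnStoppedLeading: func() {},
		},
	})
}
```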
/retest-required
/retest-required
/retest-required
/retest-required
/retest-required
/assign @dmage
/retest-required
/approve
/lgtm
|
gharchive/pull-request
| 2021-12-09T16:17:30 |
2025-04-01T06:39:53.883968
|
{
"authors": [
"dmage",
"eggfoobar",
"ggiguash"
],
"repo": "openshift/cluster-image-registry-operator",
"url": "https://github.com/openshift/cluster-image-registry-operator/pull/736",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
656068192
|
Bug 1852964: account for nil DaemonSet returned from library-go
See https://github.com/openshift/library-go/blob/master/pkg/operator/resource/resourceapply/apps.go#L125
Just going to account for nil returns from library-go in ocm-o
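The guard pattern in question, sketched with assumed names (library-go's resourceapply helpers return an object, a modified flag, and an error, and the object can be nil even when the error is nil):

```go
import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// syncDaemonSet illustrates the nil guard: check the returned object
// before reading fields like Generation, instead of panicking.
func syncDaemonSet(apply func() (*appsv1.DaemonSet, bool, error)) error {
	ds, _, err := apply()
	if err != nil {
		return err
	}
	if ds == nil {
		return nil // nothing applied; skip the status update
	}
	fmt.Printf("observed generation %d\n", ds.Generation)
	return nil
}
```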
/assign @adambkaplan
update to no longer use goto pushed @sttts - thanks
/approve
@gabemontero nit on the name of the return function, otherwise looks good.
update of method name pushed @adambkaplan
/cherrypick release-4.5
|
gharchive/pull-request
| 2020-07-13T18:57:40 |
2025-04-01T06:39:53.888951
|
{
"authors": [
"adambkaplan",
"gabemontero"
],
"repo": "openshift/cluster-openshift-controller-manager-operator",
"url": "https://github.com/openshift/cluster-openshift-controller-manager-operator/pull/163",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2662484918
|
BUILD-1168: Remove Operator-owned RBAC
With the fixes in BUILD-1171 [1], cluster admins no longer need to create ClusterRole/ClusterRoleBindings for the Shared Resource CSI Driver. The updated instructions mostly require the creation of Role and RoleBinding objects, with the exception of creating the ClusterRole for the Shared Resource object.
[1] https://issues.redhat.com/browse/BUILD-1171
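For a sense of what the namespaced objects might look like, a sketch of a Role granting use of a shared resource; names, resource names, and rules here are illustrative assumptions, not the repo's actual manifests.

```go
import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newSharedSecretRole sketches a namespaced Role allowing pods in ns to
// use one named SharedSecret (hypothetical names throughout).
func newSharedSecretRole(ns string) *rbacv1.Role {
	return &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-resource-user", Namespace: ns},
		Rules: []rbacv1.PolicyRule{{
			APIGroups:     []string{"sharedresource.openshift.io"},
			Resources:     []string{"sharedsecrets"},
			ResourceNames: []string{"my-shared-secret"},
			Verbs:         []string{"use"},
		}},
	}
}
```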
/assign @sayan-biswas
/cc @shivanisathe25 @ayushsatyam146 @avinal
|
gharchive/pull-request
| 2024-11-15T16:27:04 |
2025-04-01T06:39:53.915499
|
{
"authors": [
"adambkaplan"
],
"repo": "openshift/csi-driver-shared-resource",
"url": "https://github.com/openshift/csi-driver-shared-resource/pull/247",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1558791393
|
Publish lib utils 1.0.3 for API discovery redux action helpers
Publishes https://github.com/openshift/dynamic-plugin-sdk/pull/191
Codecov Report
Base: 38.41% // Head: 38.41% // No change to project coverage :thumbsup:
Coverage data is based on head (bb03044) compared to base (364a2d7).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## main #194 +/- ##
=======================================
Coverage 38.41% 38.41%
=======================================
Files 64 64
Lines 1588 1588
Branches 353 353
=======================================
Hits 610 610
Misses 943 943
Partials 35 35
:umbrella: View full report at Codecov.
/lgtm
|
gharchive/pull-request
| 2023-01-26T21:18:09 |
2025-04-01T06:39:53.921225
|
{
"authors": [
"codecov-commenter",
"florkbr",
"vojtechszocs"
],
"repo": "openshift/dynamic-plugin-sdk",
"url": "https://github.com/openshift/dynamic-plugin-sdk/pull/194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2123646503
|
OCPBUGS-26924: Add node registration healthcheck
Fixes OCPBUGS-26924
/jira refresh
/lgtm
|
gharchive/pull-request
| 2024-02-07T18:39:15 |
2025-04-01T06:39:53.922363
|
{
"authors": [
"gnufied",
"mpatlasov"
],
"repo": "openshift/gcp-pd-csi-driver-operator",
"url": "https://github.com/openshift/gcp-pd-csi-driver-operator/pull/118",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422722989
|
reduce konnectivity-agent log verbosity
konnectivity-agent is overly verbose as currently configured. Dropped the loglevel from 4 to 3, which removes lines such as the following (the gating mechanism is sketched below the list):
received DATA
write to remote
close connection
received DIAL_REQ
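A minimal sketch of the gating (konnectivity uses klog-style verbosity; the exact call sites are assumed): messages logged at V(4) disappear once the agent runs at -v=3.

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog registers its -v flag here; running with -v=3 silences
	// everything gated at V(4), like the chatty lines listed above.
	klog.InitFlags(nil)
	flag.Parse()

	klog.V(4).InfoS("received DATA", "connectionID", 1) // dropped at -v=3
	klog.V(3).InfoS("connected to proxy server")        // still emitted
	klog.Flush()
}
```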
/lgtm
/retest-required
|
gharchive/pull-request
| 2022-10-25T16:05:34 |
2025-04-01T06:39:53.923986
|
{
"authors": [
"jparrill",
"sjenning"
],
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/1828",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1685304995
|
ACM-5173 [backport 4.12] get pull secret instead of dockerconfigjson from mce credentials
Backport of https://github.com/openshift/hypershift/commit/142d91e6b81858adbfd50907674c59d9f016c447
Issue: https://issues.redhat.com/browse/ACM-5173
/area hypershift-operator
/ok-to-test
/retest-required
@bryan-cox is this an issue with the PR or the test? The failure doesn't provide much info
@o-farag - I believe someone pushed a fix for that issue. Let me issue a retest.
/retest-required
/test ho-4.13-e2e-aws
/lgtm
/test ho-4.13-e2e-aws
/test ho-4.13-e2e-aws
/approve
|
gharchive/pull-request
| 2023-04-26T16:00:03 |
2025-04-01T06:39:53.927829
|
{
"authors": [
"bryan-cox",
"csrwng",
"o-farag"
],
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/2486",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1791710456
|
Revert "Merge pull request #2770 from dharaneeshvrd/upgrade-capi-ibmcloud
https://github.com/openshift/hypershift/pull/2770 took out CI with installer failing in the root CI cluster
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-hypershift-release-4.14-periodics-e2e-aws-ovn/1676933960199835648/artifacts/e2e-aws-ovn/dump-management-cluster/artifacts/namespaces/hypershift/core/pods/logs/installer-59cfd4cff9-9xdgj-installer-previous.log
/override ci/prow/e2e-aws
/override ci/prow/e2e-kubevirt-aws-ovn
/lgtm
@dharaneeshvrd
This is because we didn't account for transitions from existing HC/HOs.
See https://github.com/openshift/hypershift/pull/2000
https://github.com/openshift/hypershift/blob/main/api/v1beta1/capi_types.go#L5-L6
Also I'm not sure how this could work at all given you are pointing to v1 here https://github.com/openshift/hypershift/blob/main/cmd/install/assets/assets.go#L37-L41
If you don't want any kind of backward compatibility, the hypershift install would need to account for removing existing CRDs first so the apply command stops complaining about the stored version.
@enxebre
Apologies for the inconvenience caused and thanks for the input.
I have reworked this and created a PR here: https://github.com/openshift/hypershift/pull/2831
Tested that the upgrade scenarios work fine.
Please review this PR
|
gharchive/pull-request
| 2023-07-06T14:45:44 |
2025-04-01T06:39:53.932750
|
{
"authors": [
"dharaneeshvrd",
"enxebre",
"muraee",
"sjenning"
],
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/2776",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
921688147
|
skip e2e binary during image builds
make was changed to build the e2e test binary after we broke it once and realized that the e2e binary build was not covered by CI.
However, we do not need the e2e binary in the image builds for hypershift.
This PR switches to a make target that doesn't build e2e.
@ironcladlou @enxebre
This shouldn't affect e2e but let's be sure:
/test-e2e
/test e2e-aws
/lgtm
|
gharchive/pull-request
| 2021-06-15T18:37:24 |
2025-04-01T06:39:53.934917
|
{
"authors": [
"ironcladlou",
"sjenning"
],
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/296",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2436722126
|
HOSTEDCP-1569: test: e2e: skip unknown conditions instead of erroring
Rehearsal failure trying to use main HO with 4.16 release
https://prow.ci.openshift.org/view/gs/test-platform-results/pr-logs/pull/openshift_release/52306/rehearse-52306-periodic-ci-openshift-hypershift-release-4.16-periodics-e2e-aws-ovn/1818008709683482624
nodepool_test.go:289: correct condition: wanted ValidReleaseImage=True, got ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-0vx638ni/release@sha256:6f727aeaa9a820058419e293027d0d1c006d5776b32dbebd23d9180cc4b7e245)
nodepool_test.go:298: Failed to validate NodePool conditions in 0s: unknown condition ValidArchPlatform
--- FAIL: TestNodePool/Main/TestNodepoolMachineconfigGetsRolledout (1380.22s)
HO in main adds a condition, ValidArchPlatform, that is not in the set of known conditions in 4.16. The condition does, in fact, exist in 4.16, so we could add it to the set of known/validated conditions. However, the HO could add HC or NP conditions in the future that do not exist in 4.16, and we should just skip over unknown conditions instead of erroring on them.
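The skip-instead-of-error shape, sketched with assumed names (the e2e helper in the PR is the authoritative version):

```go
import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// validateConditions checks only the conditions this test binary knows
// about; anything newer (e.g. ValidArchPlatform added by a newer HO) is
// skipped rather than failing the run.
func validateConditions(conds []metav1.Condition, known map[string]metav1.ConditionStatus) error {
	for _, c := range conds {
		want, ok := known[c.Type]
		if !ok {
			continue // unknown condition: skip instead of erroring
		}
		if c.Status != want {
			return fmt.Errorf("condition %s: wanted %s, got %s", c.Type, want, c.Status)
		}
	}
	return nil
}
```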
cc @muraee
/test e2e-aws
|
gharchive/pull-request
| 2024-07-30T02:04:51 |
2025-04-01T06:39:53.937444
|
{
"authors": [
"sjenning"
],
"repo": "openshift/hypershift",
"url": "https://github.com/openshift/hypershift/pull/4440",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
197481413
|
Handle volume layering
Any Dockerfile with a VOLUME must be two layers - one for the base
contents and one for the volume definitions. Add support for this and a
test.
First part of fixing #17
Important note - if Docker sees:
ADD file /var
VOLUME /var
ADD file2 /var
the final output will be:
/var/file
/var/file2
It's only RUN commands that don't get added. This PR adds the framework and testing without actually changing existing behavior, but sets up a bunch of preparation for that PR.
|
gharchive/pull-request
| 2016-12-24T23:46:50 |
2025-04-01T06:39:53.939524
|
{
"authors": [
"smarterclayton"
],
"repo": "openshift/imagebuilder",
"url": "https://github.com/openshift/imagebuilder/pull/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2338749234
|
csi: Add Deployment, DaemonSet hooks for config map hash annotations
We already have these for secrets. Add them for config maps.
This is needed for the OpenStack Cinder CSI Driver Operator. See https://github.com/openshift/openstack-cinder-csi-driver-operator/pull/168/ for more information.
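The underlying idea, sketched with assumed names (library-go's real hooks differ in detail): hash the ConfigMap data and stamp it on the pod template, so a data change rolls the pods.

```go
import (
	"crypto/sha256"
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// setConfigMapHashAnnotation stamps a hash of the ConfigMap's data onto
// the Deployment's pod template: when the data changes, the annotation
// changes, and a rollout follows. The annotation key is assumed.
func setConfigMapHashAnnotation(d *appsv1.Deployment, cm *corev1.ConfigMap) error {
	raw, err := json.Marshal(cm.Data)
	if err != nil {
		return err
	}
	if d.Spec.Template.Annotations == nil {
		d.Spec.Template.Annotations = map[string]string{}
	}
	key := fmt.Sprintf("operator.openshift.io/dep-configmap-%s", cm.Name)
	d.Spec.Template.Annotations[key] = fmt.Sprintf("%x", sha256.Sum256(raw))
	return nil
}
```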
/retest-required
CI failures appear to be due to timeouts, not anything to do with this change.
non-binding lgtm
this looks ok, but would we be better off by refactoring addObjectHash to take a metav1.Object instead of a daemonset or deployment and then collapse the cases?
/approve
delegated to @dusk125
this looks ok, but would we be better off by refactoring addObjectHash to take a metav1.Object instead of a daemonset or deployment and then collapse the cases?
Happy to do this here or in a follow-up PR, whatever works.
Happy to do this here or in a follow-up PR, whatever works.
I think it's fine to do that here. Thanks!
this looks ok, but would we be better off by refactoring addObjectHash to take a metav1.Object instead of a daemonset or deployment and then collapse the cases?
Actually, looking into this more, metav1.Object is an interface meaning I can't access e.g. .Spec.Template.Annotations without a cast...which kind of defeats the whole point of DRYing things up. We'd also need to move the function to a common location and make it public. I'm not sure if we want to do that either.
Have I missed something?
this looks ok, but would we be better off by refactoring addObjectHash to take a metav1.Object instead of a daemonset or deployment and then collapse the cases?
Actually, looking into this more, metav1.Object is an interface meaning I can't access e.g. .Spec.Template.Annotations without a cast...which kind of defeats the whole point of DRYing things up. We'd also need to move the function to a common location and make it public. I'm not sure if we want to do that either.
Have I missed something?
Edit: We could add two calls to the new AddObjectHash, one for deployment and one for deployment.Spec.Template (ditto for daemonset), but again, that's not all that DRY.
Right, we'd need to use type assertion, like we do in the static resource controller here. Since the code duplication hasn't been introduced by this PR, I personally think it's fine doing this refactoring in a separate PR.
/lgtm
|
gharchive/pull-request
| 2024-06-06T16:58:59 |
2025-04-01T06:39:54.002219
|
{
"authors": [
"bertinatto",
"deads2k",
"dusk125",
"stephenfin"
],
"repo": "openshift/library-go",
"url": "https://github.com/openshift/library-go/pull/1745",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2535049972
|
update release version to 0.1.6
Description
update release version to 0.1.6
Type of change
[ ] Refactor
[ ] New feature
[ ] Bug fix
[ ] CVE fix
[ ] Optimization
[ ] Documentation Update
[ ] Configuration Update
[ ] Bump-up dependent library
Related Tickets & Documents
Related Issue #
Closes #
Checklist before requesting a review
[ ] I have performed a self-review of my code.
[ ] PR has passed all pre-merge test jobs.
[ ] If it is a core feature, I have added thorough tests.
Testing
Please provide detailed steps to perform tests related to this code change.
How were the fix/results from this change verified? Please provide relevant screenshots or results.
/override "Red Hat Konflux / ols-enterprise-contract / bundle"
/override "Red Hat Konflux / ols-enterprise-contract / test-bundle"
/override "Red Hat Konflux / ols-enterprise-contract / bundle"
/override "Red Hat Konflux / ols-enterprise-contract / test-bundle"
/approve
|
gharchive/pull-request
| 2024-09-19T01:18:19 |
2025-04-01T06:39:54.007712
|
{
"authors": [
"raptorsun",
"xrajesh"
],
"repo": "openshift/lightspeed-operator",
"url": "https://github.com/openshift/lightspeed-operator/pull/417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2112741853
|
OLS-233: Use Optional type instead of | operator
Description
Seems we are using Optional instead of something | None.
Type of change
[X] Refactor
/lgtm
/approve
I had been pushing for the use of the | syntax in PRs I reviewed because my understanding is that it is the newer/more modern syntax.
https://peps.python.org/pep-0655/ states:
Optional[] is too ubiquitous to deprecate, although use of it may fade over time in favor of the T|None notation specified by [PEP 604](https://peps.python.org/pep-0604/).
|
gharchive/pull-request
| 2024-02-01T15:03:03 |
2025-04-01T06:39:54.010289
|
{
"authors": [
"bparees",
"onmete",
"tisnik"
],
"repo": "openshift/lightspeed-service",
"url": "https://github.com/openshift/lightspeed-service/pull/326",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1822106627
|
OCPBUGS-15576: fix: ensure panic safety in PVC controller for non set storageClassName
Otherwise, if a PVC is created without a storageClassName, the LVM Operator will crash with a panic.
Also removes the unnecessary API Reader in the PVC controller, cleans up logging, and introduces test cases for the ignore paths.
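The crash-avoidance shape, sketched with assumed names (the controller in the PR is authoritative): spec.storageClassName is a *string, so it must be nil-checked before dereferencing.

```go
import (
	corev1 "k8s.io/api/core/v1"
)

// shouldHandle guards against the panic: PVCs created without a
// storageClassName carry a nil pointer here, so dereference only after
// the nil check.
func shouldHandle(pvc *corev1.PersistentVolumeClaim, managedClass string) bool {
	if pvc.Spec.StorageClassName == nil {
		return false // not ours; dereferencing here is what used to panic
	}
	return *pvc.Spec.StorageClassName == managedClass
}
```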
/jira refresh
/jira refresh
Codecov Report
Merging #369 (eda99c3) into main (f444ae9) will increase coverage by 2.16%.
Report is 9 commits behind head on main.
The diff coverage is 80.00%.
Additional details and impacted files
@@ Coverage Diff @@
## main #369 +/- ##
==========================================
+ Coverage 14.38% 16.55% +2.16%
==========================================
Files 23 24 +1
Lines 1932 2060 +128
==========================================
+ Hits 278 341 +63
- Misses 1630 1693 +63
- Partials 24 26 +2
Files Changed | Coverage Δ
controllers/persistent-volume-claim/controller.go | 39.70% <80.00%> (ø)
... and 3 files with indirect coverage changes
/hold for one more review
/cc @brandisher @qJkee
@jeff-roche e2e test is still flaky right? Should we wait for them to be fixed or do you want to force this merge in?
@jeff-roche e2e test is still flaky right? Should we wait for them to be fixed or do you want to force this merge in?
I will run the e2e tests manually and report back
Manually tested
/override ci/prow/lvm-operator-bundle-e2e-aws
Manually tested
/override ci/prow/lvm-operator-bundle-e2e-aws
|
gharchive/pull-request
| 2023-07-26T10:34:33 |
2025-04-01T06:39:54.021200
|
{
"authors": [
"codecov-commenter",
"jakobmoellerdev",
"jeff-roche"
],
"repo": "openshift/lvm-operator",
"url": "https://github.com/openshift/lvm-operator/pull/369",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1538295411
|
OCPBUGS-5613: Bump goutils dependency from v1.1.0 to v1.1.1 for CVE-2021-4238
Description of the change:
Bump goutils dependency from v1.1.0 to v1.1.1
Motivation for the change:
https://nvd.nist.gov/vuln/detail/CVE-2021-4238
Checklist
If the pull request includes user-facing changes, extra documentation is required:
[ ] Add a new changelog fragment in changelog/fragments (see changelog/fragments/00-template.yaml)
[ ] Add or update relevant sections of the docs website in website/content/en/docs
@rashmigottipati the change in the PR looks correct. Nicely done.
/jira refresh
/label backport-risk-assessed
/label qe-approved
/label cherry-pick-approved
I added it manually. Thanks folks!
|
gharchive/pull-request
| 2023-01-18T16:19:45 |
2025-04-01T06:39:54.040872
|
{
"authors": [
"emmajiafan",
"everettraven",
"jmrodri",
"joelanford",
"oceanc80",
"rashmigottipati"
],
"repo": "openshift/ocp-release-operator-sdk",
"url": "https://github.com/openshift/ocp-release-operator-sdk/pull/298",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
752693130
|
4.6.0-0.okd-2020-11-27-200126: NetworkManager is not checking reverse lookups when setting a hostname
Describe the bug
On installation, the bootstrap process begins, but fails after the masters have restarted. I'm noticing that despite the "core" user being created on initial boot of each master, after the reboots, the "fedora" user is listed at the prompt. Attempts to ssh into the master nodes also fails, despite using the same key for creation which works to get into the bootstrap as expected. Is the Core OS install getting hosed on reboot? Since I can't get into the masters, I'm unable to get their logs.
The process I'm following is the same scripted process that has worked previously. In fact, I built a 4.5 two days ago using the process. Nothing has changed on this end that I can see.
Version
OKD: 4.6.0-0.okd-2020-11-27-200126
FCOS: fedora-coreos-32.20201104.3.0-vmware.x86_64.ova
How reproducible
Every time.
Log bundle
4.6.0-0.okd-2020-11-27-200126-Magiera.tar.gz
Are you sure it's the "fedora" user, not the hostname? Do you have DHCP in the setup?
Ahh, yeah. You're right. It was the hostname. Sorry, I have "teething baby lack of sleep brain" at the moment. Yes, there is DHCP. It is working, as the first boot always has the correct hostname. A quick ping test shows all the nodes have the expected forward and reverse. Here's some quick logging on another install attempt...
==================== Before Installer ============================
[jaimelm1@lsa-linux-dev bin]$ ./pingMasters.sh -c spiritus -w 5 -m 4
Pinging nodes for cluster: spiritus
PING master-0.spiritus.my.company.edu (10.103.2.100) 56(84) bytes of data.
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=1 ttl=63 time=0.329 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=2 ttl=63 time=0.458 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=3 ttl=63 time=0.485 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=4 ttl=63 time=0.444 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=5 ttl=63 time=0.472 ms
--- master-0.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 103ms
rtt min/avg/max/mdev = 0.329/0.437/0.485/0.060 ms
PING master-1.spiritus.my.company.edu (10.103.2.101) 56(84) bytes of data.
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=1 ttl=63 time=0.365 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=2 ttl=63 time=0.461 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=3 ttl=63 time=0.396 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=4 ttl=63 time=0.430 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=5 ttl=63 time=0.556 ms
--- master-1.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 91ms
rtt min/avg/max/mdev = 0.365/0.441/0.556/0.069 ms
PING master-2.spiritus.my.company.edu (10.103.2.102) 56(84) bytes of data.
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=1 ttl=63 time=0.357 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=2 ttl=63 time=0.552 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=3 ttl=63 time=0.469 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=4 ttl=63 time=0.452 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=5 ttl=63 time=0.508 ms
--- master-2.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 91ms
rtt min/avg/max/mdev = 0.357/0.467/0.552/0.069 ms
PING master-3.spiritus.my.company.edu (10.103.2.103) 56(84) bytes of data.
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=1 ttl=63 time=0.413 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=2 ttl=63 time=0.491 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=3 ttl=63 time=0.530 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=4 ttl=63 time=0.909 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=5 ttl=63 time=0.455 ms
--- master-3.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 68ms
rtt min/avg/max/mdev = 0.413/0.559/0.909/0.180 ms
======================================================================
Oddly, after the reboot, their hostnames are wrong, but the nodes are still ping-able at the correct address. (I can check the DHCP server logs if need be but this looks legit)
============================== During Installer ==============================
[jmagiera@lsa-linux-dev bin]$ ./pingMasters.sh -c spiritus -w 5 -m 4
Pinging nodes for cluster: spiritus
PING master-0.spiritus.my.company.edu (10.103.2.100) 56(84) bytes of data.
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=1 ttl=63 time=0.391 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=2 ttl=63 time=0.432 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=3 ttl=63 time=0.474 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=4 ttl=63 time=0.327 ms
64 bytes from master-0.spiritus.my.company.edu (10.103.2.100): icmp_seq=5 ttl=63 time=0.455 ms
--- master-0.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 113ms
rtt min/avg/max/mdev = 0.327/0.415/0.474/0.058 ms
PING master-1.spiritus.my.company.edu (10.103.2.101) 56(84) bytes of data.
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=1 ttl=63 time=0.554 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=2 ttl=63 time=0.385 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=3 ttl=63 time=0.366 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=4 ttl=63 time=0.382 ms
64 bytes from master-1.spiritus.my.company.edu (10.103.2.101): icmp_seq=5 ttl=63 time=0.377 ms
--- master-1.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 91ms
rtt min/avg/max/mdev = 0.366/0.412/0.554/0.075 ms
PING master-2.spiritus.my.company.edu (10.103.2.102) 56(84) bytes of data.
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=1 ttl=63 time=0.411 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=2 ttl=63 time=0.470 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=3 ttl=63 time=0.491 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=4 ttl=63 time=0.434 ms
64 bytes from master-2.spiritus.my.company.edu (10.103.2.102): icmp_seq=5 ttl=63 time=0.404 ms
--- master-2.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 92ms
rtt min/avg/max/mdev = 0.404/0.442/0.491/0.033 ms
PING master-3.spiritus.my.company.edu (10.103.2.103) 56(84) bytes of data.
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=1 ttl=63 time=0.406 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=2 ttl=63 time=0.424 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=3 ttl=63 time=0.459 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=4 ttl=63 time=0.475 ms
64 bytes from master-3.spiritus.my.company.edu (10.103.2.103): icmp_seq=5 ttl=63 time=0.446 ms
--- master-3.spiritus.my.company.edu ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 90ms
rtt min/avg/max/mdev = 0.406/0.442/0.475/0.024 ms
==========================
Install fails with...
ERROR Cluster operator network Degraded is True with BootstrapError: Internal error while reconciling platform networking resources: Unable to bootstrap OVN, expected amount of control plane nodes (4) do not match found (1): timed out waiting for the condition
INFO Use the following commands to gather logs from the cluster
INFO openshift-install gather bootstrap --help
FATAL failed to wait for bootstrapping to complete: timed out waiting for the condition
=========================
So, hostname is changed on reboot and nodes become unavailable to the installer. I've had limited time this morning but can dig into the logs deeper if nothing stands out to you in what I shared earlier.
hostname changes are not rendered in the log bundle. Check NetworkManager logs
After a period, I'm able to log into the nodes. Immediately after sudo'ing, I'm presented with...
[core@fedora ~]$ sudo -s
[systemd]
Failed Units: 1
gcp-hostname.service
Digging deeper...
[root@fedora core]# systemctl status gcp-hostname.service > gcp-hostname.service.log
[root@fedora core]# cat gcp-hostname.service.log
● gcp-hostname.service - Set GCP Transient Hostname
Loaded: loaded (/etc/systemd/system/gcp-hostname.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2020-11-28 17:29:00 UTC; 6h ago
Process: 972 ExecStartPre=/usr/bin/afterburn --provider gcp --hostname=/run/afterburn.hostname (code=exited, status=1/FAILURE)
CPU: 55ms
Nov 28 17:29:00 fedora afterburn[972]: Caused by: writing hostname
Nov 28 17:29:00 fedora afterburn[972]: Caused by: maximum number of retries (10) reached
Nov 28 17:29:00 fedora afterburn[972]: Caused by: failed to fetch
Nov 28 17:29:00 fedora afterburn[972]: Caused by: error sending request for url (http://metadata.google.internal/computeMetadata/v1/instance/hostname): error trying to connect: dns error: failed to lookup address information: Name or service not known
Nov 28 17:29:00 fedora afterburn[972]: Caused by: error trying to connect: dns error: failed to lookup address information: Name or service not known
Nov 28 17:29:00 fedora afterburn[972]: Caused by: dns error: failed to lookup address information: Name or service not known
Nov 28 17:29:00 fedora afterburn[972]: Caused by: failed to lookup address information: Name or service not known
Nov 28 17:29:00 fedora systemd[1]: gcp-hostname.service: Control process exited, code=exited, status=1/FAILURE
Nov 28 17:29:00 fedora systemd[1]: gcp-hostname.service: Failed with result 'exit-code'.
Nov 28 17:29:00 fedora systemd[1]: Failed to start Set GCP Transient Hostname.
============
It's as if the installer or FCOS thinks this is a GCP node. I'll keep poking around.
Perhaps a red herring, but #393 looks awfully similar to what I'm seeing.
After a period, I'm able to log into the nodes. Immediately after sudo'ing, I'm presented with...
[core@fedora ~]$ sudo -s
[systemd]
Failed Units: 1
gcp-hostname.service
That's a minor issue - https://github.com/openshift/okd/issues/396
I am seeing the same. I commented on another issue, but I think this is more relevant.
https://github.com/openshift/okd/issues/153#issuecomment-735314786
`I have a similar thing happening @vrutkovs
Release - 4.6.0-0.okd-2020-11-27-200126
FCOS image - 32.20201104.3.0 stable
I have a dhcp server setup to give out hostnames. Bootstrap as well as the 3 node-cluster nodes lose hostname after reboot. Bootkube service goes into restart loop on bootstrap after a while, workers kubelet error message says node "fedora" not found.`
After installing a 4.5 cluster and then upgrading to 4.6, all the nodes' hostnames changed to "fedora" after reboot, so I had to manually set hostnames, even though DHCP was handing them out.
I have a dhcp server setup to give out hostnames
...
kubelet error message says node "fedora" not found
kubelet is using hostname as identifier. Did the node receive an expected hostname?
Looking at the NetworkManager log, I don't see anything out of the ordinary, just typical DORA with the DHCP server. I'm curious why the nodes were not immediately accessible after reboot. It seemed to take some time.
OKD-ISSUE394-NetworkManager.log
On one of my nodes I also have
Nov 29 11:01:23 neptr.vrutkovs.eu NetworkManager[900]: <info> [1606647683.4489] hostname: hostname: using hostnamed
Nov 29 11:01:23 neptr.vrutkovs.eu NetworkManager[900]: <info> [1606647683.4490] hostname: hostname changed from (none) to "neptr.vrutkovs.eu"
@vrutkovs nope, only stuck with fedora, and the likely red herring failed gcp-hostname.service which i disabled
As an added note, it only happens after the part where the nodes' FCOS version is updated
Indeed. Looking at the systemd-hostnamed logs, the first call to it on the initial boot sets the hostname as expected. However, on subsequent launches it's not reset. This would lead me to believe that either systemd-hostnamed is not getting a pertinent piece of information to trigger a reset, and/or another service is changing the hostname. I think that depends on whether FCOS hosts are expected to maintain their hostname after an update-triggered reboot, or whether they default to "fedora" again unless/until DHCP provides a hostname again or a static IP/hostname is implemented.
[root@fedora core]# journalctl -u systemd-hostnamed
-- Logs begin at Sat 2020-11-28 17:14:37 UTC, end at Mon 2020-11-30 01:48:48 UTC. --
Nov 28 17:21:01 localhost systemd[1]: Starting Hostname Service...
Nov 28 17:21:01 localhost systemd[1]: Started Hostname Service.
Nov 28 17:21:02 master-1.spiritus.lsa.umich.edu systemd-hostnamed[2528]: Changed host name to 'master-1.spiritus.my.company.edu'
Nov 28 17:21:32 master-1.spiritus.my.company.edu systemd[1]: systemd-hostnamed.service: Succeeded.
-- Reboot --
Nov 28 17:28:16 fedora systemd[1]: Starting Hostname Service...
Nov 28 17:28:17 fedora systemd[1]: Started Hostname Service.
Nov 28 17:28:47 fedora systemd[1]: systemd-hostnamed.service: Succeeded.
Nov 28 23:57:18 fedora systemd[1]: Starting Hostname Service...
Nov 28 23:57:18 fedora systemd[1]: Started Hostname Service.
Nov 28 23:57:48 fedora systemd[1]: systemd-hostnamed.service: Succeeded.
Nov 30 01:44:46 fedora systemd[1]: Starting Hostname Service...
Nov 30 01:44:46 fedora systemd[1]: Started Hostname Service.
Nov 30 01:45:16 fedora systemd[1]: systemd-hostnamed.service: Succeeded.
same here
4.5 upgrade to 4.6, it only happens after the part where the nodes' FCOS version is updated
❯ oc get node
NAME STATUS ROLES AGE VERSION
manage-dev-gzrpk-master-0 Ready master 6d21h v1.19.0-rc.2+9f84db3-1075
manage-dev-gzrpk-master-1 Ready master 6d21h v1.18.3
manage-dev-gzrpk-master-2 Ready master 6d21h v1.18.3
manage-dev-gzrpk-worker-9vhmd Ready worker 6d21h v1.18.3
manage-dev-gzrpk-worker-n59h2 Ready worker 6d21h v1.18.3
manage-dev-gzrpk-worker-t8b6m Ready worker 6d21h v1.19.0-rc.2+9f84db3-1075
master01.manage-dev.oc4.forchange.cn NotReady,SchedulingDisabled master 6d22h v1.18.3
master02.manage-dev.oc4.forchange.cn Ready master 6d22h v1.18.3
master03.manage-dev.oc4.forchange.cn Ready master 6d22h v1.18.3
worker01.manage-dev.oc4.forchange.cn Ready worker 6d21h v1.18.3
worker02.manage-dev.oc4.forchange.cn Ready worker 6d21h v1.18.3
worker03.manage-dev.oc4.forchange.cn Ready worker 6d21h v1.18.3
worker04.manage-dev.oc4.forchange.cn Ready worker 6d21h v1.18.3
worker05.manage-dev.oc4.forchange.cn NotReady,SchedulingDisabled worker 6d21h v1.18.3
worker06.manage-dev.oc4.forchange.cn Ready worker 6d21h v1.18.3
[root@fedora ~]# hostname
fedora
[root@fedora ~]# journalctl -u systemd-hostnamed
-- Logs begin at Mon 2020-11-23 07:26:03 UTC, end at Mon 2020-11-30 06:13:29 UTC. --
11月 23 07:50:13 localhost systemd[1]: Starting Hostname Service...
11月 23 07:50:13 localhost systemd[1]: Started Hostname Service.
11月 23 07:50:13 worker05.manage-dev.oc4.forchange.cn systemd-hostnamed[1042]: Changed host name to 'worker05.manage-dev.oc4.forchange.cn'
11月 23 07:50:36 worker05.manage-dev.oc4.forchange.cn systemd[1]: systemd-hostnamed.service: Succeeded.
-- Reboot --
11月 23 08:00:09 localhost systemd[1]: Starting Hostname Service...
11月 23 08:00:09 localhost systemd[1]: Started Hostname Service.
11月 23 08:00:09 worker05.manage-dev.oc4.forchange.cn systemd-hostnamed[742]: Changed host name to 'worker05.manage-dev.oc4.forchange.cn'
11月 23 08:00:40 worker05.manage-dev.oc4.forchange.cn systemd[1]: systemd-hostnamed.service: Succeeded.
-- Reboot --
11月 30 04:35:29 fedora systemd[1]: Starting Hostname Service...
11月 30 04:35:29 fedora systemd[1]: Started Hostname Service.
11月 30 04:35:52 fedora systemd[1]: systemd-hostnamed.service: Succeeded.
11月 30 06:04:19 fedora systemd[1]: Starting Hostname Service...
11月 30 06:04:19 fedora systemd[1]: Started Hostname Service.
11月 30 06:04:52 fedora systemd[1]: systemd-hostnamed.service: Succeeded.
Resume normal work after manually set the hostname
It seems the ones that fail are FQDN hosts...
Looking at the NetworkManager log, I don't see anything out of the ordinary, just typical DORA with the DHCP server. I'm curious why the nodes were not immediately accessible after reboot. It seemed to take some time. Edit: And even after requesting and accepting the IP and the hostname, it doesn't update the hostname.
OKD-ISSUE394-NetworkManager.log
In your NetworkManager logs I don't see where the DHCP response included the host name. I would have expected something like:
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option dhcp_lease_time => '3600'
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option domain_name => 'ec2.internal'
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option domain_name_servers => '10.0.0.2'
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option expiry => '1606176868'
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option host_name => 'ip-10-0-1-155'
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option interface_mtu => '9001'
NetworkManager[629]: <info> [1606173268.0436] dhcp4 (eth0): option ip_address => '10.0.1.155'
I imagine you're relying on reverse DNS lookups. I just did some digging and yeah, this used to work. See https://github.com/coreos/fedora-coreos-tracker/issues/649#issuecomment-736104003
For folks following along in their hymnals, Dusty asked me to try bringing up a straight-up FCOS 33 (no OKD) node. Sure enough, the failure remained.
====================================
[jaimelm1@lsa-linux-dev fcos]$ host fcos-33-test.my.company.edu
fcos-33-test.my.company.edu has address xxx.xxx.xxx.xxx
[jaimelm1@lsa-linux-dev fcos]$ ssh -i ~/.ssh/coreadmin_rsa core@fcos-33-test.my.company.edu
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/
Last login: Mon Nov 30 22:53:36 2020 from 141.211.211.32
[core@fedora ~]$ hostname
fedora
===========================================================
Following all the threads, it's an issue of Network Manager not falling back onto the reverse lookup due to the ordering of precedence. "Is it something other than localhost?" comes before reverse look-ups – and since the Fedora desktop folks in their infinite wisdom wanted to brand workstations by setting hostname to "fedora" when the hostname wasn't explicitly defined, here we are.
I hope the re-order makes it into Network Manager. There are a lot of services, both internal and external, that rely on proper reverse lookups. It seems to me that reverse should be higher up in the precedence.
Thanks for all the legwork on this Dusty.
As an aside, I wrote a quick script to reset hostnames on nodes that were changed to "fedora". Nothing fancy, but it simplifies the process if you have a lot of nodes to fix.
https://github.com/JaimeMagiera/oct/blob/master/repairClusterHostnames.sh
Following all the threads, it's an issue of Network Manager not falling back onto the reverse lookup due to the ordering of precedence. "Is it something other than localhost?" comes before reverse look-ups – and since the Fedora desktop folks in their infinite wisdom wanted to brand workstations by setting hostname to "fedora" when the hostname wasn't explicitly defined, here we are.
Seems NetworkManager expects localhost to be transient and doesn't fall back to reverse lookup? Do we have a Fedora bug filed?
Update: I've successfully gotten past the installation stage by adding hostname entries into my DHCP configuration (DHCP client option code 12). We use BlueCat here. So, it involved a lot of clicks and was quite tedious. This isn't really a way forward for production environments.
Vadim, there is this...
https://bugzilla.redhat.com/show_bug.cgi?id=1892235#c9
There is a patch pending that reorders the precedence of hostname checks, putting reverse DNS first. If we could get that moved along somehow. Another bug report would just be marked as duplicate, maybe?
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/commit/09c8387
Gotcha, so we need NM 1.29.2+ to test the commit
@JaimeMagiera - I got a build of FCOS with an updated NM (based off of the CI builds done in copr). Do you mind trying it by itself similar to how you did in https://github.com/openshift/okd/issues/394#issuecomment-736170526 and see if the reverse DNS bits work now?
https://dustymabe.fedorapeople.org/fedora-coreos-33.20201201.dev.1-vmware.x86_64.ova
@dustymabe You sir, are a scholar and a gentleman. Thank you.
[jaimelm1@lsa-linux-dev fcos]$ ssh -t core@fcos-33-test.my.company.edu "sudo hostnamectl status"
Static hostname: n/a
Transient hostname: fcos-33-test.my.company.edu
Icon name: computer-vm
Chassis: vm
Machine ID: 228cab6cf9d34ad6afb405a6c14abaa9
Boot ID: 9645fbc52fce4b988d7c6d9bd81f4e67
Virtualization: vmware
Operating System: Fedora CoreOS 33.20201201.dev.1
CPE OS Name: cpe:/o:fedoraproject:fedora:33
Kernel: Linux 5.9.11-200.fc33.x86_64
Architecture: x86-64
@JaimeMagiera - just checking, was that system getting the hostname from DHCP too, or did you revert those changes on the DHCP server so we could verify the reverse DNS lookups were working?
The DHCP server changes that I mentioned above have to be implemented on a per-host basis (which is why its so tedious). I did not implement that change on this testing host, only on my OKD cluster nodes. So, indeed, the success of this testing VM is due to your inclusion of the updated NM.
Do you mind sharing the NM logs from that boot? Something like journalctl -b 0 ?
This is maybe unrelated, but when I install 4.6 IPI on VMware I also have the hostname set to fedora. I am using DHCP but not assigning hostnames. The hostnames are (or should be) assigned from the VMware VM name via vsphere-hostname.service,
which calls /usr/local/bin/vsphere-hostname.sh, which only exists after vmtools is installed.
#!/usr/bin/env bash
set -e
if [ $(hostname -s) = "localhost" ]; then
    if hostname=$(/bin/vmtoolsd --cmd 'info-get guestinfo.hostname'); then
        /usr/bin/hostnamectl --transient --static set-hostname ${hostname}
    fi
fi
Note that it only gets changed if the current hostname is localhost
Here is the result after the node comes up and reboots:
journalctl -u vsphere-hostname.service
-- Logs begin at Wed 2020-12-02 20:30:44 UTC, end at Wed 2020-12-02 20:44:52 UTC. --
Dec 02 20:33:51 localhost systemd[1]: Started vSphere hostname.
Dec 02 20:33:51 localhost vsphere-hostname.sh[1717]: /usr/local/bin/vsphere-hostname.sh: line 5: /bin/vmtoolsd: No such file or directory
Dec 02 20:33:51 localhost systemd[1]: vsphere-hostname.service: Succeeded.
-- Reboot --
Dec 02 20:41:19 fedora systemd[1]: Started vSphere hostname.
Dec 02 20:41:19 fedora systemd[1]: vsphere-hostname.service: Succeeded.
When /usr/local/bin/vsphere-hostname.sh runs after the reboot, the hostname has already been changed to fedora and so the test fails and the hostname does not get changed...
Thanks for the info. I added that datapoint to the existing bug: https://bugzilla.redhat.com/show_bug.cgi?id=1892235#c11
So in your journalctl output, the first run is from before FCOS was upgraded to F33 where the hostname was localhost. In that case it did pass the if statement conditional logic but failed to set the hostname because there is no file at /bin/vmtoolsd. You might want to fix the script because it won't work even if the localhost versus fedora issue didn't exist.
You might want to fix the script because it won't work even if the localhost versus fedora issue didn't exist.
I'm not sure who owns the vsphere-hostname.service and /usr/local/bin/vsphere-hostname.sh.
Looks like /bin/vmtoolsd comes with vmtools (?) package
not part of FCOS, maybe OKD is layering it in. A good indicator is the fact that the file is in /usr/local/bin/.
not part of FCOS, maybe OKD is layering it in. A good indicator is the fact that the file is in /usr/local/bin/.
That could be true... I'm not sure... I'll check the service file and see what it says
Yes, It's added during ignition:
[root@fedora etc]# journalctl -u ignition-files.service | grep vsphere-hostname
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: createFilesystemsFiles: createFiles: op(1f): [started] writing file "/sysroot/var/usrlocal/bin/vsphere-hostname.sh"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: createFilesystemsFiles: createFiles: op(1f): [finished] writing file "/sysroot/var/usrlocal/bin/vsphere-hostname.sh"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: op(4b): [started] processing unit "vsphere-hostname.service"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: op(4b): op(4c): [started] writing unit "vsphere-hostname.service" at "/sysroot/etc/systemd/system/vsphere-hostname.service"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: op(4b): op(4c): [finished] writing unit "vsphere-hostname.service" at "/sysroot/etc/systemd/system/vsphere-hostname.service"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: op(4b): [finished] processing unit "vsphere-hostname.service"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: op(52): [started] setting preset to enabled for "vsphere-hostname.service"
Dec 02 21:38:59 localhost ignition[1430]: INFO : files: op(52): [finished] setting preset to enabled for "vsphere-hostname.service"
https://github.com/openshift/okd/issues/407#issuecomment-737520891
@dustymabe Actually, it's not working. When I went to collect the NM log (journalctl -u NetworkManager), I noticed there were "option host_name" responses. Sure enough, when I went to BlueCat (DHCP and DNS system), the client DHCP option was set further up in configuration (will have to investigate how that happens). I just did a fresh vm with new IP and MAC address. The build fails to get the hostname. So, it looks like we're not there yet.
I've had the problem with false hostnames since first installing OKD 4 (4.3) due to a funny behaviour of my KVM provider. They always show up in DHCP with something like server1234433.myprovider.com. I've fixed the reverse DNS resolution and everything else, but FCOS is not using the right name.
Here's my solution for the problem:
Generate an ignition for every node and add the following config at the end:
storage:
  files:
    - path: /etc/hostname
      mode: 0420
      overwrite: true
      contents:
        inline: mynodename.should.be.filled.in.here
Replace mynodename... with your real nodename
Generate the ignition with fcct
Be sure to use the right ignition for every node (either in bootp or static)
This is not a solution, but more a workaround for the problem, but it works for me.
:peter
Great find @fortinj66.
My concern with the vsphere-hostname.service, and the upstream changes to NetworkManager which started this debacle, is that for decades an incorrect hostname has been a telltale sign of network problems and/or misconfigured DNS. When you open a shell or glance at the console and see "localhost", or "xyz-some-department-that-no-longer-exists", you know that the host is either not on the network or that DNS is misconfigured/stale. Changing that functionality removes a helpful signpost.
My concern with the vsphere-hostname.service, and the upstream changes to NetworkManager which started this debacle,
Slight correction. NetworkManager didn't change. There are a few things that happened in Fedora 33 that are causing some growing pains. In Fedora the systemd team changed to use fedora as the fallback hostname instead of localhost. At the same time there was also a system wide change to use systemd-resolved, and there was a subsequent change made to nsswitch.conf to add in resolve, which also lowered the priority of dns to later than myhostname. Unfortunately it's several changes that are compounding.
Fixed.
@dustymabe Actually, it's not working. When I went to collect the NM log (journalctl -u NetworkManager), I noticed there were "option host_name" responses. Sure enough, when I went to BlueCat (DHCP and DNS system), the client DHCP option was set further up in configuration, attached to the IP as opposed to the MAC address where I usually do it (will have to investigate how that happens). I just did a fresh vm with new IP and MAC address. The build fails to get the hostname. So, it looks like we're not there yet.
On that fresh VM can you give me the output of:
grep hosts /etc/nsswitch.conf
systemctl is-active systemd-resolved
systemctl is-enabled systemd-resolved
I destroyed it last night while writing a testing system of this issue. (Script that downloads an ova from URL, installs the template in vSphere, spins up a host with predesigned .ign, and runs the hostname tests. If it fails, I get zipped files of the various logs) I'll add your items to my script and get that for you.
I wonder if we then still need this PR: https://github.com/openshift/machine-config-operator/pull/2282
I wonder if we then still need this PR: openshift/machine-config-operator#2282
I don't think the NM changes fix the issues in my case where we are not using DHCP for Hostnames... I get the hostnames directly from vSphere and this will fail without the check for the fedora hostname change...
It does seem that it changed to fedora; however, my hostnames are set correctly after reboot, but that might be too late, as I have the same issues.
@dustymabe Today was quite busy. Sorry for the delay.
Timestamp: 12-03-20-225253
FQDN: fcos-33-test.lsait.lsa.umich.edu
Transient Name: fedora
Transient hostname does not match DNS name
========== Grabbing hosts from nswitch ==========
Valid databases are: aliases, ethers, group, gshadow, hosts,
hosts: files resolve [!UNAVAIL=return] myhostname dns
========== Checking active status of systemd-resolved ==========
active
========== Checking enabled status of systemd-resolved ==========
enabled
========== Checking rpm-ostree status ==========
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/testing-devel
Version: 33.20201201.dev.1 (2020-12-01T17:14:54Z)
Commit: 05261ad8abe86bce155ec1b47dcc3c20b7d127a7af4d3382150c726c948303dc
GPGSignature: (unsigned)
If anyone else wants to test this issue quickly as new builds are made, they can grab this script. It outputs to file and console for easy pasting in here.
https://github.com/JaimeMagiera/oct/blob/master/test-hostname.sh
This issue appears to be addressed for vSphere UPI installs.
|
gharchive/issue
| 2020-11-28T15:00:58 |
2025-04-01T06:39:54.136888
|
{
"authors": [
"JaimeMagiera",
"LorbusChris",
"alexanderniebuhr",
"dustymabe",
"fortinj66",
"klzsysy",
"pflaeging",
"rajinator",
"vrutkovs"
],
"repo": "openshift/okd",
"url": "https://github.com/openshift/okd/issues/394",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
260041122
|
Adding the option to use 'stack_state' to allow for easy de-provisioning
What does this PR do?
Submitting a role to allow for easy openstack/heat stack deletion
How should this be manually tested?
Read README + use the test.yml playbook in the tests directory
Is there a relevant Issue open for this?
N/A
Who would you like to review this?
cc: @tomassedovic @bogdando
@etsauer yeah, we can do that, but that means restructuring the main.yml to avoid a messy set of checks around generating the heat templates, etc. I figured it would be cleaner to keep it separate. Let me work it and see how it goes.
@bogdando @cooktheryan @tomassedovic Please let us know what we need to do to get this one merged. We'd like to cut another release to have a stable starting point, but would like to include this one + PR https://github.com/openshift/openshift-ansible-contrib/pull/769 in this release.
Sorry @oybed! I've started the end to end test, feel free to merge this when it passes (or I'll do it tomorrow)
All checks have passed. Going to go ahead and merge
|
gharchive/pull-request
| 2017-09-23T21:52:53 |
2025-04-01T06:39:54.142818
|
{
"authors": [
"etsauer",
"oybed",
"tomassedovic"
],
"repo": "openshift/openshift-ansible-contrib",
"url": "https://github.com/openshift/openshift-ansible-contrib/pull/754",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1797152548
|
OADP-1057 add support for aws sts creds for registry
Todo:
[ ] read secret from bsl spec.
Highlights
https://github.com/openshift/openshift-velero-plugin/blob/1bbb24f5bd39bc601406b37618198d3103a3f80a/go.mod#L11
https://github.com/openshift/openshift-velero-plugin/blob/1bbb24f5bd39bc601406b37618198d3103a3f80a/go.mod#L242-L244
Signed-off-by: Tiger Kaovilai tkaovila@redhat.com
replaced by https://github.com/openshift/openshift-velero-plugin/pull/199
|
gharchive/pull-request
| 2023-07-10T16:48:23 |
2025-04-01T06:39:54.204823
|
{
"authors": [
"kaovilai"
],
"repo": "openshift/openshift-velero-plugin",
"url": "https://github.com/openshift/openshift-velero-plugin/pull/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1164525142
|
chore(gomod): bump deps
related to
CVE-2021-42576
CVE-2015-3627
these packages were transitive deps of some libraries we use. GitHub suggested updating them.
this will resolve CVEs we don't currently have, but it's better than waiting on a PR for a third party IMO (it can happen in parallel but still)
ran go mod tidy now so it would be clearer what happened
so this doesn't solve it like I hoped (now that I am looking into the go.sum)
so I will close and file an issue to the packages that introduced it
|
gharchive/pull-request
| 2022-03-09T22:50:07 |
2025-04-01T06:39:54.280891
|
{
"authors": [
"georgettica"
],
"repo": "openshift/osdctl",
"url": "https://github.com/openshift/osdctl/pull/195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1123960717
|
Remove NFD runtime dependency
Remove the runtime dependency from SRO on NFD labels in nodes. Information is now taken from the node's nodeInfo struct in status.
Now NFD dependency comes from the recipes, not SRO.
cc @qbarrand @yevgeny-shnaidman
/cc @qbarrand @yevgeny-shnaidman
Hold until the downstream CI is merged.
/hold
/ok-to-test
/lgtm
Just realized a bug, dont remove the hold yet
Just realized a bug, dont remove the hold yet
what is the bug?
Just realized a bug, dont remove the hold yet
what is the bug?
The regex was not supporting double-digit OCP versions, like 4.10 for example.
/retest
/unhold
d/s CI is operational
PTAL @qbarrand @pmtk @yevgeny-shnaidman @ybettan @enriquebelarte
/lgtm
🥳
|
gharchive/pull-request
| 2022-02-04T08:56:26 |
2025-04-01T06:39:54.370672
|
{
"authors": [
"pacevedom",
"qbarrand",
"ybettan",
"yevgeny-shnaidman"
],
"repo": "openshift/special-resource-operator",
"url": "https://github.com/openshift/special-resource-operator/pull/100",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
242292287
|
Fixed indentation in HttpApplication
Used the automatic indentation functionality in JBDS.
merged: https://github.com/openshiftio/appdev-documentation/commit/0e1ad7e875042484f41b7baa0c309679c11cc6fe
|
gharchive/pull-request
| 2017-07-12T08:00:36 |
2025-04-01T06:39:54.384577
|
{
"authors": [
"rhoads-zach",
"tradej"
],
"repo": "openshiftio/appdev-documentation",
"url": "https://github.com/openshiftio/appdev-documentation/pull/341",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
202194883
|
Retrieving Data
How to change the names of the categories and their children
For example, I want to change Accessories to Electronics. How can I change it?
Categories are retrieved from a service. You should have your own service on your server and change the URL in the application to use it. After that you can have whatever you want in the categories list.
I don't have a service. How should I get a service, and how do I change the URLs? Please explain the steps to me, with screenshots if possible, and if there is any link regarding this please let me know... Waiting for your reply.
Thank You..
|
gharchive/issue
| 2017-01-20T17:16:39 |
2025-04-01T06:39:54.416533
|
{
"authors": [
"AshokKnv",
"tugrulkarakaya"
],
"repo": "openshopio/openshop.io-android",
"url": "https://github.com/openshopio/openshop.io-android/issues/31",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
588866731
|
Remove SmoothSphereHalfSpaceForce from Moco.
Brief summary of changes
SmoothSphereHalfSpaceForce is now in opensim-core; this PR removes this class from Moco.
CHANGELOG.md (choose one)
[x] updated
This change is
It's cool that the contact is now directly in opensim-core! I left one comment, and have just one other note:
While comparing example2Dwalking.m with the one on master, I noticed that the optional GRF tracking is gone from this version but didn't show up in the diff. Is this intentional?
Moco/Bindings/Java/Matlab/Examples/example2DWalking/example2DWalking.m, line 200 at r1 (raw file):
Previously, chrisdembia (Christopher Dembia) wrote…
Thanks for catching this! Done.
(oops; I didn't push the code yet)
Thanks @carmichaelong . You're up, @nickbianco !
@nickbianco I think we should include this change for the Moco paper. I'm happy to update the models in the mocopaper repository once this is merged.
Thanks; I'll merge this after we send our draft to Jen.
|
gharchive/pull-request
| 2020-03-27T03:47:13 |
2025-04-01T06:39:54.425494
|
{
"authors": [
"carmichaelong",
"chrisdembia"
],
"repo": "opensim-org/opensim-moco",
"url": "https://github.com/opensim-org/opensim-moco/pull/608",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2438676173
|
Devcon 2024 talk proposal
What is it?
Due today
https://devcon.org/en/speaker-applications/
Submitted
https://docs.google.com/document/d/1DVuh4BenGd_88FCiB6NnApk_I42in0WAOMKLis3KGEY/edit
|
gharchive/issue
| 2024-07-30T21:03:42 |
2025-04-01T06:39:54.446827
|
{
"authors": [
"ryscheng"
],
"repo": "opensource-observer/oso",
"url": "https://github.com/opensource-observer/oso/issues/1886",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2556427898
|
Debug developer activity metrics
Which area(s) are affected? (leave empty if unsure)
Indexer
To Reproduce
Here is a query in Clickhouse:
select
m.metric_id,
m.metric_name,
t.sample_date,
t.amount
from metrics.timeseries_metrics_by_project_v0 as t
join metrics.metrics_v0 as m
on t.metric_id = m.metric_id
join default.projects_v1 as p
on t.project_id = p.project_id
where
p.project_name = 'opensource-observer'
and t.sample_date = '2024-09-22'
order by m.metric_name
Describe the Bug
We get back the following, which shows 1s for most fields.
Expected Behavior
The calculations are not correct (should not be 1s across the board). Moreover, it is not clear how active_developers and developer_active_days relate to each other.
https://github.com/opensource-observer/oso/pull/2270/files
|
gharchive/issue
| 2024-09-30T11:55:35 |
2025-04-01T06:39:54.449862
|
{
"authors": [
"ccerv1",
"ryscheng"
],
"repo": "opensource-observer/oso",
"url": "https://github.com/opensource-observer/oso/issues/2274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
925599799
|
index-based access of elements using .loc on Series
we have this for DataFrame here, but it's missing for Series.
I just can't find a way to access an element at an index
@hemasunder Great request, I'm surprised we didn't add this. I'll fix this.
Fixed in #235
|
gharchive/issue
| 2021-06-20T13:27:55 |
2025-04-01T06:39:54.476959
|
{
"authors": [
"hemasunder",
"risenW"
],
"repo": "opensource9ja/danfojs",
"url": "https://github.com/opensource9ja/danfojs/issues/229",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1706308813
|
🛑 Weblate is down
In 33ec89c, Weblate (https://translate.opensourcepos.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Weblate is back up in 50eeef3.
|
gharchive/issue
| 2023-05-11T18:14:27 |
2025-04-01T06:39:54.500066
|
{
"authors": [
"jekkos"
],
"repo": "opensourcepos/upptime",
"url": "https://github.com/opensourcepos/upptime/issues/597",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1705701149
|
[ServerSide] Sync the OpenSRP FHIR Gateway repo with the upstream Google FHIR Gateway repo
Reason
Latest updates on the upstream added the support for gzip
https://github.com/google/fhir-gateway/pull/147
Gzip support is needed for the security audit that has been requested for the codebase that we'll be deploying.
Implementation
[ ] Sync the current OpenSRP FHIR Gateway with the upstream Google FHIR Gateway.
Acceptance Criteria
[ ] The OpenSRP FHIR Gateway should only be ahead of the Google FHIR Gateway because of the changes we have added to the plugins and deployments.
[ ] All the current permission checking and sync filtering functionality should work as expected.
FYI @dubdabasoduba @ndegwamartin @f-odhiambo
Closing this since all the PRs are merged
|
gharchive/issue
| 2023-05-11T12:10:12 |
2025-04-01T06:39:54.503898
|
{
"authors": [
"dubdabasoduba",
"rehammuzzamil"
],
"repo": "opensrp/fhircore",
"url": "https://github.com/opensrp/fhircore/issues/2340",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2574959794
|
Configure submit anyway button SDK API
Describe the feature request.
Configure the Submit anyway button visibility by leveraging the SDK's Questionnaire builder API.
Visual
SDK's Questionnaire builder API to show/hide the Submit anyway button
https://github.com/google/android-fhir/blob/92da65c313ed992e8ed3de24675ff9600a8bc46a/datacapture/src/main/java/com/google/android/fhir/datacapture/QuestionnaireFragment.kt#L422-L425
@f-odhiambo
|
gharchive/issue
| 2024-10-09T06:52:50 |
2025-04-01T06:39:54.506319
|
{
"authors": [
"FikriMilano"
],
"repo": "opensrp/fhircore",
"url": "https://github.com/opensrp/fhircore/issues/3549",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1771390036
|
PNC Condition closure
IMPORTANT: Where possible all PRs must be linked to a Github issue
Partial fix for [Quest/eCHIS] - Event Management/PNC
Engineer Checklist
[ ] I have written Unit tests for any new feature(s) and edge cases for bug fixes
[ ] I have added any strings visible on UI components to the strings.xml file
[ ] I have updated the CHANGELOG.md file for any notable changes to the codebase
[ ] I have run ./gradlew spotlessApply and ./gradlew spotlessCheck to check my code follows the project's style guide
[ ] I have built and run the FHIRCore app to verify my change fixes the issue and/or does not break the app
[ ] I have checked that this PR does NOT introduce breaking changes that require an update to Content and/or Configs? If it does add a sample here or a link to exactly what changes need to be made to the content.
Code Reviewer Checklist
[ ] I have verified Unit tests have been written for any new feature(s) and edge cases
[ ] I have verified any strings visible on UI components are in the strings.xml file
[ ] I have verified the CHANGELOG.md file has any notable changes to the codebase
[ ] I have verified the solution has been implemented in a configurable and generic way for reuseable components
[ ] I have built and run the FHIRCore app to verify the change fixes the issue and/or does not break the app
We can merge this in and track the TD under #2488 cc @Rkareko
|
gharchive/pull-request
| 2023-06-23T12:38:33 |
2025-04-01T06:39:54.512677
|
{
"authors": [
"Rkareko",
"ndegwamartin"
],
"repo": "opensrp/fhircore",
"url": "https://github.com/opensrp/fhircore/pull/2485",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1223459464
|
OSSL_LIB_CTX_load_config is not thread safe
When OSSL_LIB_CTX_load_config is called simultaneously from multiple threads, the program aborts with the following trace:
(gdb) where
#0 0x00007ffff7623387 in raise () from /lib64/libc.so.6
#1 0x00007ffff7624a78 in abort () from /lib64/libc.so.6
#2 0x00007ffff7665ed7 in __libc_message () from /lib64/libc.so.6
#3 0x00007ffff766c3e4 in malloc_printerr () from /lib64/libc.so.6
#4 0x00007ffff76716e0 in _int_realloc () from /lib64/libc.so.6
#5 0x00007ffff7672d82 in realloc () from /lib64/libc.so.6
#6 0x00000000004fa265 in CRYPTO_realloc (str=0x7ffff0006390, num=104, file=0x734de8 "crypto/stack/stack.c", line=210) at crypto/mem.c:214
#7 0x000000000052e1b8 in sk_reserve (st=0x7ffff0009570, n=1, exact=0) at crypto/stack/stack.c:210
#8 0x000000000052e2f4 in OPENSSL_sk_insert (st=0x7ffff0009570, data=0x7fffe8005ba0, loc=9) at crypto/stack/stack.c:254
#9 0x000000000052e79b in OPENSSL_sk_push (st=0x7ffff0009570, data=0x7fffe8005ba0) at crypto/stack/stack.c:368
#10 0x0000000000496625 in sk_CONF_IMODULE_push (sk=0x7ffff0009570, ptr=0x7fffe8005ba0) at crypto/conf/conf_mod.c:27
#11 0x00000000004970b6 in module_init (pmod=0x7ffff0002c10, name=0x7fffe80037e0 "providers", value=0x7fffe8003820 "provider_sect", cnf=0x7fffe8009770) at crypto/conf/conf_mod.c:390
#12 0x0000000000496b64 in module_run (cnf=0x7fffe8009770, name=0x7fffe80037e0 "providers", value=0x7fffe8003820 "provider_sect", flags=0) at crypto/conf/conf_mod.c:239
#13 0x0000000000496837 in CONF_modules_load (cnf=0x7fffe8009770, appname=0x0, flags=0) at crypto/conf/conf_mod.c:138
#14 0x00000000004969b0 in CONF_modules_load_file_ex (libctx=0x7fffe8009d00, filename=0x70ee50 "/tmp/openssl.cnf", appname=0x0, flags=0)
at crypto/conf/conf_mod.c:181
#15 0x00000000004f52f8 in OSSL_LIB_CTX_load_config (ctx=0x7fffe8009d00, config_file=0x70ee50 "/tmp/openssl.cnf") at crypto/context.c:234
Each thread allocates its own library context and loads the config.
Please see the sample code below, which reproduces the issue. With a mutex held while OSSL_LIB_CTX_load_config() is called, we don't see the issue.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <pthread.h>
#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/ssl.h>

pthread_mutex_t lock;

void *libctx_init(void *arg)
{
    int rv;
    int i;
    int num_loops = (int)(intptr_t)arg;   /* avoid pointer truncation */
    OSSL_LIB_CTX *libctx = NULL;
    const char *path = "/ade/mmiyashi_network_openssl/oracle/crypto/lib/openssl.cnf";

    for (i = 0; i < num_loops; i++)
    {
        printf("[%lu][%d] LoadConfig called\n", (unsigned long)pthread_self(), i);
        libctx = OSSL_LIB_CTX_new();
        if (libctx == NULL)
        {
            printf("[%lu] Failed to allocate LIB_CTX\n", (unsigned long)pthread_self());
        }
        /* Holding this mutex around the call makes the abort go away: */
        /* pthread_mutex_lock(&lock); */
        rv = OSSL_LIB_CTX_load_config(libctx, path);
        /* pthread_mutex_unlock(&lock); */
        if (rv != 1)
        {
            printf("[%lu] Failed to load config\n", (unsigned long)pthread_self());
        }
        printf("[%lu][%d] LoadConfig successful\n", (unsigned long)pthread_self(), i);
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    int rv;
    int i;
    int num_loops = 1;
    int num_threads = 5;
    pthread_t threads[1024];

    if (argc >= 2)
    {
        num_threads = atoi(argv[1]);
        if (num_threads > 1024)
        {
            printf("num_threads[%d] too large. Reset to 1024\n", num_threads);
            num_threads = 1024;
        }
    }
    if (argc >= 3)
    {
        num_loops = atoi(argv[2]);
    }
    if (pthread_mutex_init(&lock, NULL) != 0) {
        printf("mutex init has failed\n");
        return 1;
    }

    SSL_load_error_strings();
    SSL_library_init();
    fprintf(stderr, "Initialize libctx\n");

    for (i = 0; i < num_threads; i++)
    {
        rv = pthread_create(&threads[i], NULL, libctx_init,
                            (void *)(intptr_t)num_loops);
        if (rv != 0) {
            printf("[%d]: pthread_create failed\n", i);
        }
    }
    for (i = 0; i < num_threads; i++)
    {
        pthread_join(threads[i], NULL);
    }
    return 0;
}
Is this a multi-threading issue in OpenSSL, or an issue in the test code?
The issue is seen with OpenSSL 3.0.0.
Thank you,
-- misaki
Seems like a bug. I'd have expected this to work.
Long time no see!
Thanks for reviewing and evaluating, Pauli!
Yes, long time no see! :-)
The module_init function (called in the stack you gave) seems to access a global variable:
https://github.com/openssl/openssl/blob/cac250755efd0c40cc6127a0e4baceb8d226c7e3/crypto/conf/conf_mod.c#L382-L392
That looks like the culprit.
Possibly that should be held in the libctx instead
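A hypothetical sketch of that direction (the names here are invented for illustration, not the actual fix): hang the module list off each OSSL_LIB_CTX instead of the file-scope global, so that concurrent contexts stop sharing and corrupting one stack.

/* conf_mod.c effectively has a file-scope
 *     static STACK_OF(CONF_IMODULE) *initialized_modules;
 * Per-context state could instead look something like: */
struct conf_module_state_st {
    STACK_OF(CONF_IMODULE) *initialized_modules;
};
/* ...stored in each OSSL_LIB_CTX; the public CONF_modules_finish()
 * and CONF_modules_unload() would then act on the default context. */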
@mattcaswell CONF_modules_finish and CONF_modules_unload are public APIs. What would we do with these if we make initialized_modules libctx-specific?
Make them apply to the default libctx?
That would seem reasonable.
|
gharchive/issue
| 2022-05-02T23:23:57 |
2025-04-01T06:39:54.519715
|
{
"authors": [
"hlandau",
"mattcaswell",
"mmiyashi",
"paulidale"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/18226",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1231116701
|
FIPS Mode is not getting enabled in Tomcat9 using Openssl 3.0.2 post successful FIPS module installation in windows
Good Evening,
I have an issue while enabling FIPS mode in Tomcat9 for Windows, where it throws the error "Failed to enter fips mode". Below is a detailed explanation. Sorry for the length, but I am trying to provide all of the relevant details in hopes that the solution to this issue will be easily identifiable.
I have installed OpenSSL (3.0.2) along with the FIPS module as per the steps mentioned in the wiki (https://wiki.openssl.org/index.php/OpenSSL_3.0#Installation_and_Compilation_of_OpenSSL_3.0).
The openssl 3.0.2 and fips module got installed successfully.
OpenSSL version:
I am using OpenSSL 3.0.2 to enable FIPS mode in Tomcat9 only, on Windows. As per the steps mentioned in the wiki referenced above, I have executed the steps under the section for using the FIPS module in applications and have made the changes to the openssl.cnf file. Since I need only the Tomcat9 application to be FIPS enabled, I have also followed the next step to set the environment variable along with the application name.
Post these steps, I tried enabling FIPS mode in Tomcat9. For that I have performed:
Added the FIPSMode="on" for the APR listener:
Restarted the Tomcat server and checked the catalina.log, but I get an error which states "failed to enter fips mode".
Any help here would be greatly appreciated.
Thank you,
Rupesh P
Two things:
First, the value you are setting the OPENSSL_CONF environment variable to is not correct. It should just be the path to the openssl config file - it should not include the Tomcat9.exe part (probably you are confused by the bash syntax shown on the wiki, which sets the OPENSSL_CONF environment variable and runs "myapplication" all on the same line).
Whatever process your application is running in needs to have access to that environment variable.
Second, according to this page:
https://tomcat.apache.org/tomcat-9.0-doc/apr.html
the APR component depends on:
OpenSSL 1.0.2+ development headers (libssl-dev package)
So it looks like APR is written for OpenSSL 1.0.2 and will therefore use the old FIPS module not the new OpenSSL 3 one. Probably APR will need changes to support the OpenSSL 3 FIPS module.
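For illustration, here is the rough difference APR would have to absorb (a hedged sketch, not code taken from APR):

#include <openssl/opensslv.h>
#include <openssl/crypto.h>
#if OPENSSL_VERSION_NUMBER >= 0x30000000L
# include <openssl/provider.h>
#endif

/* Returns 1 on success, 0 on failure. */
static int enter_fips_mode(void)
{
#if OPENSSL_VERSION_NUMBER < 0x30000000L
    /* Old FIPS module: flip the process-global FIPS flag. */
    return FIPS_mode_set(1);
#else
    /* OpenSSL 3: load the fips provider instead (requires the installed
     * fipsmodule.cnf to be referenced from the OpenSSL config). */
    return OSSL_PROVIDER_load(NULL, "fips") != NULL;
#endif
}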
Please do share your thoughts on how to make APR support the OpenSSL 3.0 FIPS module for Tomcat 9.
This will almost certainly require some modification of the APR code. Unfortunately I know nothing about APR or Tomcat so I can't provide guidance as to specifically what would need to change.
|
gharchive/issue
| 2022-05-10T12:58:14 |
2025-04-01T06:39:54.528941
|
{
"authors": [
"Rupeshkryz",
"mattcaswell"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/18281",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
192961562
|
Double lock deadlock in fips_drbg_rand [1.0.2j]
When a thread locking callback is installed, OpenSSL 1.0.2j deadlocks at fips_drbg_rand.c:124 when the RNG is reseeded. I found the bug with Python's test suite. The deadlock only occurs in FIPS mode. I used OPENSSL_FORCE_FIPS_MODE=1 and /etc/system-fips to enable FIPS mode. My platform is Fedora 24 on x86_64 with openssl-1.0.2j-1.fc24.x86_64.
Python 2.7
(gdb) bt
#0 0x00007f0d9f8470c7 in do_futex_wait.constprop () from /lib64/libpthread.so.0
#1 0x00007f0d9f847174 in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2 0x00007f0d9f84721a in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3 0x000000000051e013 in PyThread_acquire_lock (lock=0x27433a0, waitflag=1) at Python/thread_pthread.h:324
#4 0x00007f0d937b0dce in _ssl_thread_locking_function (mode=5, n=18, file=0x7f0d96e417ac "fips_drbg_rand.c", line=124)
at /home/heimes/dev/python/2.7/Modules/_ssl.c:4000
#5 0x00007f0d96df1d7e in fips_drbg_status () from /lib64/libcrypto.so.10
#6 0x00007f0d96d75b0e in drbg_rand_add () from /lib64/libcrypto.so.10
#7 0x00007f0d96d76645 in RAND_poll () from /lib64/libcrypto.so.10
#8 0x00007f0d96d75237 in ssleay_rand_bytes () from /lib64/libcrypto.so.10
#9 0x00007f0d96d75c33 in drbg_get_entropy () from /lib64/libcrypto.so.10
#10 0x00007f0d96df10c8 in fips_get_entropy () from /lib64/libcrypto.so.10
#11 0x00007f0d96df1226 in drbg_reseed () from /lib64/libcrypto.so.10
#12 0x00007f0d96d75bb8 in drbg_rand_seed () from /lib64/libcrypto.so.10
#13 0x00007f0d96d5da84 in ECDSA_sign_ex () from /lib64/libcrypto.so.10
#14 0x00007f0d96d5db00 in ECDSA_sign () from /lib64/libcrypto.so.10
#15 0x00007f0d96d3bc10 in pkey_ec_sign () from /lib64/libcrypto.so.10
#16 0x00007f0d96d81359 in EVP_SignFinal () from /lib64/libcrypto.so.10
#17 0x00007f0d96def373 in fips_pkey_signature_test () from /lib64/libcrypto.so.10
#18 0x00007f0d96d38fe0 in EC_KEY_generate_key () from /lib64/libcrypto.so.10
#19 0x00007f0d970d7d7c in ssl3_ctx_ctrl () from /lib64/libssl.so.10
#20 0x00007f0d937af934 in set_ecdh_curve (self=0x7f0d92a23f78, name='prime256v1') at /home/heimes/dev/python/2.7/Modules/_ssl.c:3110
Python 3.x
#0 0x00007f2be93dc0c7 in do_futex_wait.constprop () from /lib64/libpthread.so.0
#1 0x00007f2be93dc174 in __new_sem_wait_slow.constprop.0 () from /lib64/libpthread.so.0
#2 0x00007f2be93dc21a in sem_wait@@GLIBC_2.2.5 () from /lib64/libpthread.so.0
#3 0x0000000000433e5c in PyThread_acquire_lock_timed (lock=0x17c0b40, microseconds=-1, intr_flag=intr_flag@entry=0)
at Python/thread_pthread.h:352
#4 0x0000000000433f5e in PyThread_acquire_lock (lock=<optimized out>, waitflag=waitflag@entry=1) at Python/thread_pthread.h:556
#5 0x00007f2be0870945 in _ssl_thread_locking_function (mode=<optimized out>, n=<optimized out>, file=<optimized out>,
line=<optimized out>) at /home/heimes/dev/python/cpython/Modules/_ssl.c:5069
#6 0x00007f2be02f4d7e in fips_drbg_status () from /lib64/libcrypto.so.10
#7 0x00007f2be0278b0e in drbg_rand_add () from /lib64/libcrypto.so.10
#8 0x00007f2be0279645 in RAND_poll () from /lib64/libcrypto.so.10
#9 0x00007f2be0278237 in ssleay_rand_bytes () from /lib64/libcrypto.so.10
#10 0x00007f2be0278c33 in drbg_get_entropy () from /lib64/libcrypto.so.10
#11 0x00007f2be02f40c8 in fips_get_entropy () from /lib64/libcrypto.so.10
#12 0x00007f2be02f4226 in drbg_reseed () from /lib64/libcrypto.so.10
#13 0x00007f2be0278b39 in drbg_rand_add () from /lib64/libcrypto.so.10
#14 0x00007f2be0868870 in _ssl_RAND_add_impl (module=module@entry=<module at remote 0x7f2be0aa1a58>, view=view@entry=0x7ffd106c1420,
entropy=75) at /home/heimes/dev/python/cpython/Modules/_ssl.c:4499
Lock trace
I added a small trace helper to Modules/_ssl.c:_ssl_thread_locking_function
if (n == CRYPTO_LOCK_RAND) {
    fprintf(stderr, "%s%s %i %s:%i\n",
            (mode & CRYPTO_READ) ? "R" : "W",
            (mode & CRYPTO_LOCK) ? "LCK" : "UNL",
            n, file, line);
}
and got this result for CRYPTO_LOCK_RAND. It looks like fips_drbg_rand.c:124 tries to acquire a lock that has been locked already:
WLCK 18 fips_drbg_rand.c:80
WUNL 18 fips_drbg_rand.c:109
WLCK 18 fips_drbg_rand.c:80
WUNL 18 fips_drbg_rand.c:109
WLCK 18 md_rand.c:230
WUNL 18 md_rand.c:262
WLCK 18 md_rand.c:311
WUNL 18 md_rand.c:324
RLCK 18 fips_drbg_rand.c:124
RUNL 18 fips_drbg_rand.c:126
WLCK 18 rand_lib.c:240
RLCK 18 fips_drbg_rand.c:124
Python bug report is https://bugs.python.org/issue28854
fips_drbg_rand.c? don't build the FIPS version...
It's a downstream bug in Fedora. Tomas Mraz took care of it, https://bugzilla.redhat.com/show_bug.cgi?id=1400922
|
gharchive/issue
| 2016-12-01T21:03:11 |
2025-04-01T06:39:54.534659
|
{
"authors": [
"richsalz",
"tiran"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/2019",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
311683501
|
Import wycheproof tests
Google's project wycheproof has a large collection of test vectors, in json. We should look at building a tool to import and convert them.
https://github.com/google/wycheproof
So all we need is a conf Parser that groks json, the rest seems to be a job for evptest.
... I meant stanza reader.
Closing since #7714 has more discussion.
|
gharchive/issue
| 2018-04-05T16:32:59 |
2025-04-01T06:39:54.537491
|
{
"authors": [
"levitte",
"richsalz"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/5885",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
439957509
|
generating .so for OpenSSL_1_0_2-stable branch
Hi,
I want .so files generated for the OpenSSL_1_0_2-stable branch code. Could anyone kindly help me solve this issue?
When configuring, add the argument shared
|
gharchive/issue
| 2019-05-03T09:15:40 |
2025-04-01T06:39:54.538998
|
{
"authors": [
"deekshith-elear",
"levitte"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/8870",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
681278025
|
TEST: separate out NIST ECC tests from non-NIST
ECC keys with non-NIST group names aren't supported when running with
the FIPS provider.
Keys with such groups that are included in evp_test stanza files
aren't even possible to decode if provider side decoders are used,
since those depend on available EVP_KEYMGMT implementations and what
they support.
Those keys could only be decoded because the legacy decoders were
used.
To make these tests future proof, we separate out the stanzas having
keys with NIST-approved group names into separate files, and adjust
the file lists in test/recipes/30-test_evp.t accordingly.
#12587 will depend on this
This pull request is ready to merge
Merged
e6ed04a9dcc2ead94e35c4a7400b9c998b5ad9ac TEST: separate out NIST ECC tests from non-NIST
|
gharchive/pull-request
| 2020-08-18T19:27:09 |
2025-04-01T06:39:54.541758
|
{
"authors": [
"levitte",
"openssl-machine"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/12672",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
829112884
|
Make EVP_PKEY_missing_parameters work properly on provided RSA keys
This requires changing the semantics of the keymgmt_has()
function a little, in the sense that it now returns 1
if the selection has no meaning for the key type. It
was already doing so for ECX keys, for example.
The semantics of the keymgmt_validate function are changed
similarly, to allow validation to pass on the same
selection for which the key returns 1.
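As a minimal sketch, the new has() semantics in a provider keymgmt implementation look roughly like this (the selection mask and the simplified body are assumptions for illustration, not the exact code in this PR):

#include <openssl/core_dispatch.h>
#include <openssl/rsa.h>

#define RSA_POSSIBLE_SELECTIONS \
    (OSSL_KEYMGMT_SELECT_KEYPAIR | OSSL_KEYMGMT_SELECT_OTHER_PARAMETERS)

static int rsa_has(const void *keydata, int selection)
{
    const RSA *rsa = keydata;

    /* New semantics: a selection that has no meaning for this key type
     * (e.g. domain parameters for RSA) reports success... */
    if ((selection & RSA_POSSIBLE_SELECTIONS) == 0)
        return 1;
    /* ...otherwise the key material itself is checked (simplified). */
    return rsa != NULL;
}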
Fixes #14509
Checklist
[x] documentation is added or updated
[x] tests are added or updated
Ping for review
fixup pushed
Rebased to fix trivial conflict in providers/implementations/keymgmt/rsa_kmgmt.c. No other changes. @mattcaswell still OK?
Merged to master. Thank you for the reviews.
|
gharchive/pull-request
| 2021-03-11T12:37:55 |
2025-04-01T06:39:54.544840
|
{
"authors": [
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/14511",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
841543956
|
Remove redundant header crypto/types.h
The Oracle Linux build was not compiling due to duplicate typedefs for
ECX_KEY.
crypto/types.h was just a subset of openssl/types.h
ECX_KEY is typedefed in include/crypto/ecx.h
The error was:
In file included from ../include/crypto/evp.h:17,
from ../crypto/asn1/a_sign.c:23:
../include/crypto/ecx.h:79: error: redefinition of typedef 'ECX_KEY'
../include/crypto/types.h:22: note: previous declaration of 'ECX_KEY' was here
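The PR fixes this by deleting the duplicate header outright. For strict C90 compilers like this one, which reject repeated typedefs, a generic alternative would have been a typedef guard, sketched here with a hypothetical macro name:

/* Hypothetical guard, not the fix chosen in this PR: */
#ifndef OSSL_ECX_KEY_TYPEDEF_DONE
# define OSSL_ECX_KEY_TYPEDEF_DONE
typedef struct ecx_key_st ECX_KEY;
#endif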
Checklist
[ ] documentation is added or updated
[ ] tests are added or updated
Now that the deprecated test has failed - I see why this was added..
I just looked back at the git history, and I've obviously failed to document the intent with that header. Is a comment in that header enough, or should I create a doc/internal/man7/crypto-types.h.pod?
A comment is probably fine... It was my bad..
|
gharchive/pull-request
| 2021-03-26T02:57:44 |
2025-04-01T06:39:54.548292
|
{
"authors": [
"levitte",
"slontis"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/14690",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
890600804
|
ci: include cmd-nits in the documentation checks
Fixes #15250
[ ] documentation is added or updated
[ ] tests are added or updated
There is a known cmd-nit, so the test will fail for the moment.
CI failure is the known cmd-nits problem.
CI failure is the known cmd-nits problem.
Yes - this is going to be solved by #15053.
I'd strongly prefer to merge this PR after fixing these known failures.
I have no issue delaying the merge of this one.
24 hours has passed since 'approval: done' was set, but as this PR has been updated in that time the label 'approval: ready to merge' is not being automatically set. Please review the updates and set the label manually.
I've made the change, it's not a quick test anymore. Over four minutes instead of forty seconds.
I've made the change, it's not a quick test anymore. Over four minutes instead of forty seconds.
Doing cmd-nits was never fast. However, if make doc-nits fails, that action will still fail fast. In my view, this is the best compromise for now
I've made the change, it's not a quick test anymore. Over four minutes instead of forty seconds.
I would have expected less, but luckily this is in parallel with the other CI tests, so does not significantly prolong the total CI run time.
And when make cmd-nits is invoked locally, make apps (or equivalents) will likely be done anyway.
Why didn't you combine it with doc-nits, as done originally?
cmd-nits needs the build done to run. doc-nits can fail faster if it is done separately before the build.
The run time will become more important once we start running into parallel job limits.
|
gharchive/pull-request
| 2021-05-13T00:52:20 |
2025-04-01T06:39:54.553944
|
{
"authors": [
"DDvO",
"beldmit",
"levitte",
"openssl-machine",
"paulidale"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/15257",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
178717064
|
Prevents OPENSSL_gmtime from incorrectly signalling success if gmtime_r fails, and prevents the possibly uninitialized content of the struct tm * result from being used
Example of gmtime_r failing:
#include "crypto/o_time.h"
int main(void)
{
    struct tm _tm;
    time_t t = 0xFF0F3D5300F0C2AC;

    OPENSSL_gmtime(&t, &_tm);
    return 0;
}
With regards to the preceding comment that says gmtime_r doesn't always return a pointer: if this is still true then it would be worth the effort to at least integrate this patch for systems where it does work and handle other systems with ifdefs. The use of this function extends into OCSP-related functions and it'd be a shame if things malfunctioned due to an uninitialized struct tm (that could have security ramifications as well if the purported time variables ever leave the system..).
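With the patch applied, callers can rely on the NULL return instead of consuming a possibly uninitialized struct tm. A minimal sketch of the intended calling pattern (the wrapper function is hypothetical):

#include <time.h>
#include "crypto/o_time.h"

static int year_of(time_t t, int *year)
{
    struct tm tm;

    if (OPENSSL_gmtime(&t, &tm) == NULL)
        return 0;                /* gmtime_r failed; don't touch tm */
    *year = tm.tm_year + 1900;
    return 1;
}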
It took me a while, but here it is.
I would apply this on all branches
In 1.0.2, b70dc3a; in 1.1.0 07bc93f; and in master, 7c12035. thanks!
hi, @richsalz
On VxWorks 5.5, I discovered that the return value of gmtime_r differs between kernel mode and user mode. In kernel mode, gmtime_r always returns 0, and in user mode it returns a pointer. Have you tested this?
How can I modify it? Is there a macro to distinguish between kernel mode and user mode?
https://github.com/openssl/openssl/blob/8c00f267b8df1a8c70eff8198de40aa561299e48/crypto/o_time.c#L40-L43
I am only replying because you at'ed me. :) To answer your questions: I do not know.
@levitte hi,
https://github.com/openssl/openssl/pull/1613#issuecomment-516793955 Regarding this issue, can you help me answer the above question? I would like to modify my code on VxWorks 5.5. Do you have a better idea?
#elif defined(OPENSSL_SYS_VXWORKS)
    gmtime_r(timer, result);
    ts = result;
#elif defined(OPENSSL_THREADS) && !defined(OPENSSL_SYS_WIN32) && !defined(OPENSSL_SYS_MACOSX)
    if (gmtime_r(timer, result) == NULL)
        return NULL;
    ts = result;
#elif defined(OPENSSL_SYS_WINDOWS) && defined(_MSC_VER) && _MSC_VER >= 1400
    if (gmtime_s(result, timer))
        return NULL;
    ts = result;
#else
    ts = gmtime(timer);
    if (ts == NULL)
        return NULL;
|
gharchive/pull-request
| 2016-09-22T20:50:50 |
2025-04-01T06:39:54.559816
|
{
"authors": [
"guidovranken",
"levitte",
"richsalz",
"yangyangtiantianlonglong"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/1613",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1226321102
|
Exclude IPv6 code using OPENSSL_USE_IPV6 instead of AF_INET6
This replaces the usage of #ifdef AF_INET6 in bio/* with #if OPENSSL_USE_IPV6. By default this still uses AF_INET6, but it allows the exclusion of all IPv6 related code by setting OPENSSL_USE_IPV6=0.
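Illustration of the pattern being swapped (the guarded body is a placeholder):

/* Before: compiled whenever the platform headers define AF_INET6 */
#ifdef AF_INET6
    /* IPv6-specific BIO address handling */
#endif

/* After: defaults to IPv6 support, but can be excluded at build time
 * with -DOPENSSL_USE_IPV6=0 */
#if OPENSSL_USE_IPV6
    /* IPv6-specific BIO address handling */
#endif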
Checklist
For this to be merged into our source, you either need to add a trailer CLA: trivial to your commit message (make sure to separate it from the rest of the message with an empty line), or submit a signed CLA
I did just submit a signed CLA. I would like to make the same change in openssl 1.1.1. Should I open a separate PR for this, or are changes between the 1.1.1 and master branch synced in some way?
I would like to make the same change in openssl 1.1.1. Should I open a separate PR for this, or are changes between the 1.1.1 and master branch synced in some way?
If it cherry-picks cleanly, we'll do it. If not, then another PR will be necessary
Ah okay thanks for pointing that out.
Merged, thanks for the contribution.
|
gharchive/pull-request
| 2022-05-05T08:00:18 |
2025-04-01T06:39:54.564290
|
{
"authors": [
"bernd-edlinger",
"levitte",
"maxbachmann",
"paulidale"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/18250",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1454396093
|
Support all five EdDSA instances from RFC 8032
Fixes #6277
Adds support for all five EdDSA instances from RFC 8032: Ed25519, Ed25519ctx, Ed25519ph, Ed448, Ed448ph.
Only Ed25519 was already fully supported (via the EVP APIs)
All instances, except for Ed25519, allow context strings as input. Context strings can now be passed via an OSSL_PARAM.
The desired EdDSA instance can also be specified via an OSSL_PARAM.
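A rough sketch of how an application might select an instance and pass a context string through the EVP API; the parameter names "instance" and "context-string" are taken from this PR's description and should be treated as assumptions here:

#include <openssl/evp.h>
#include <openssl/params.h>

static int sign_ed25519ctx(EVP_PKEY *key,
                           const unsigned char *msg, size_t msglen,
                           unsigned char *sig, size_t *siglen)
{
    EVP_MD_CTX *mctx = EVP_MD_CTX_new();
    unsigned char ctxstr[] = { 'f', 'o', 'o' };
    OSSL_PARAM params[3];
    int ret = 0;

    params[0] = OSSL_PARAM_construct_utf8_string("instance",
                                                 (char *)"Ed25519ctx", 0);
    params[1] = OSSL_PARAM_construct_octet_string("context-string",
                                                  ctxstr, sizeof(ctxstr));
    params[2] = OSSL_PARAM_construct_end();

    if (mctx != NULL
            && EVP_DigestSignInit_ex(mctx, NULL, NULL, NULL, NULL,
                                     key, params)
            && EVP_DigestSign(mctx, sig, siglen, msg, msglen))
        ret = 1;
    EVP_MD_CTX_free(mctx);
    return ret;
}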
Checklist
[ y ] documentation is added or updated
[ y ] tests are added or updated
@paulidale: could you take a look at the changes so far?
@slontis @mattcaswell : thank-you for taking a look!
JFYI.
https://hdevalence.ca/blog/2020-10-04-its-25519am
@beldmit : was there anything in particular from that blog that you wanted to point out?
Note that I do not intend to modify any curve arithmetic. In particular, for edwards25519, the strict verification equation is used currently and I will not change that in this PR.
All five EdDSA instances are working now. The HashEdDSA instances are supported only in "one-shot" mode.
This PR is close to being ready, but I still need to make some documentation updates and relocate my dev test.
@slontis @mattcaswell : this PR is ready (i.e. it is no longer work-in-progress).
I believe I have addressed all your previous comments.
Please let me know if you have any additional concerns.
@hlandau : thanks for your comments.
I believe I have addressed everything you raised.
Please let me know if you have other suggestions.
Strangely, the test buildbot/master:unix-macos11-x86_64 failed:
15-test_rsaoaep.t .................. ok
15-test_rsapss.t ................... ok
15-test_sha.t ...................... ok
20-test_app.t ...................... ok
20-test_cli_fips.t ................. skipped: Test only supported in a fips build with security checks
20-test_dgst.t ..................... ok
20-test_dhparam.t .................. ok
command timed out: 1200 seconds without output running [b'make', b'test'], attempting to kill
process killed by signal 9
program finished with exit code -1
elapsedTime=1952.749765
https://ci.buildbot.openssl.org/#/builders/16/builds/1736
I don't understand why this particular platform failed.
is this a test infrastructure issue?
buildbot runners seem to be unreliable currently.
@mattcaswell @hlandau: thanks for the reviews you left on 18 Nov and 30 Nov. Each of you requested changes, but I believe my updates have addressed all your comments. Would you be able to update your reviews?
needs a rebase also.
Here is an example of converted test data for test vector https://github.com/openssl/openssl/pull/21 (using your .h file) that I appended to test/recipes/30-test_evp_data/evppkey_ecx.txt to test this works.
Thanks for providing this example. I will give it a try.
I've added all 21 test vectors to evppkey_ecx.txt and confirmed that they are being exercised. I've dropped the file rfc8032-test-vectors.h, as well as my changes to evp_extra_test.c.
You will need to add
FIPSversion = >=3.2.0 to each of the new tests in the .txt file..
We run the tests in master against older fips providers, so this new functionality needs to be skipped when this happens.
@slontis : thanks for applying the "tests:present" label and the detailed comments you provided previously.
I am not sure what else I can do to move this PR forward.
Would you be willing to give an approval?
@hlandau will need to reapprove,
thanks, slontis, for approving these changes.
hlandau's previous approval appears to still be valid.
@mattcaswell : do you have some time to revisit this PR? You requested changes on 18 Nov 2022. I believe all your comments have been addressed.
This pull request is ready to merge
Merged to master. Thank you.
|
gharchive/pull-request
| 2022-11-18T03:57:22 |
2025-04-01T06:39:54.575990
|
{
"authors": [
"beldmit",
"hlandau",
"jamuir",
"openssl-machine",
"slontis"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/19705",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1909894472
|
Add include of e_os2.h in quictestlib.c to allow symbol definition consistency.
Fixes: #22178
Signed-off-by: Randall S. Becker randall.becker@nexbridge.ca
This should be applicable to other branches.
I have made this change NonStop-specific but I think it might apply to any c99 builds because the unspecified type, timeval, is rejected by the c99 compiler, not the platform. Please review and let me know if the include needs to move and what ifdef wrapping it otherwise needs.
This passed builds on our platform.
I originally called this "Move" instead of "Add" because it was in comparison to the proposed change by @mattcaswell. This just moves where the #include is done compared to commit b01151dfbd107bce6e88de64acc541178e520fa0 to make c99 happy.
FYI: This change, commit 71c7128, did pass build/test on NonStop with c99.
Pushed.
|
gharchive/pull-request
| 2023-09-23T14:40:56 |
2025-04-01T06:39:54.579705
|
{
"authors": [
"mattcaswell",
"rsbeckerca"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/22179",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1986608169
|
Add fixed output length SHAKE algorithms.
By adding fixed output algorithms such as SHAKE256-192 (which is 24 bytes), the SHAKE algorithm can then be used just like any other fixed-size digest, without needing to specify additional parameters. Some algorithms such as HSS/XMSS use fixed-size outputs, and this avoids code that would otherwise need to special-case SHAKE. These fixed-size variants fail if you try to change the xoflen.
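For intuition, here is a minimal sketch of the same idea using Python's hashlib (an analogy only, not the OpenSSL API): pinning the XOF output length makes SHAKE behave like an ordinary fixed-size digest.

import hashlib

# SHAKE256-192 yields 192 bits = 24 bytes; fixing the length lets the
# XOF be used wherever a fixed-size digest is expected.
digest = hashlib.shake_256(b"message").digest(24)
assert len(digest) == 24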
Checklist
[x] documentation is added or updated
[x] tests are added or updated
Looks like something broke on merge. Will have to fix next week.
Rebased.
ping
@t8m I have added SHAKE-128/256 also.
Looks like these will also be useful in other places.
e.g. For signing with ECDSA and RSA-PSS (which use the default sizes of 32 and 64 for SHAKE 128 and 256).
I have changed the names section so that the OIDs are associated with these fixed functions (since this is most likely the most common way SHAKE would be used for DER encoding)
ping
@t8m I have added SHAKE-128/256 also.
I do not see it in the PR?
Pushed :)
This PR is in a state where it requires action by @openssl/committers but the last update was 30 days ago
|
gharchive/pull-request
| 2023-11-10T00:10:36 |
2025-04-01T06:39:54.587209
|
{
"authors": [
"openssl-machine",
"slontis",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/22684",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
210787793
|
Remove bf_cbc.c
It is never built and the code is duplicated in bf_enc.c.
+1 for dropping the #ifndef BF_DEFAULT_OPTIONS
Pushed. Thanks. I also took Andy's +1 and added a commit to remove the pointless "#ifndef"
|
gharchive/pull-request
| 2017-02-28T13:33:08 |
2025-04-01T06:39:54.588836
|
{
"authors": [
"dot-asm",
"mattcaswell"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/2778",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
322616269
|
apps/speed: fix possible OOB access in some EC arrays (1.1.0)
Cherry-picked from commit 5c6a69f539.
Partial Back-port of #6133 to 1.1.0
Checklist
[X] documentation is added or updated
[X] tests are added or updated
As #6133 doesn't cherry-pick cleanly, I had to make some adjustments ;)
ping @dot-asm , @richsalz :)
The red cross from Travis CI is unrelated: /usr/bin/ld: unrecognized option '--push-state--no-as-needed
Merged. Thanks!
|
gharchive/pull-request
| 2018-05-13T18:26:33 |
2025-04-01T06:39:54.591300
|
{
"authors": [
"FdaSilvaYY",
"dot-asm"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/6245",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
493811278
|
Use "PARAMETERS" in the documentation headings.
For consistency.
[x] documentation is added or updated
[ ] tests are added or updated
Merged, thanks.
|
gharchive/pull-request
| 2019-09-16T01:16:37 |
2025-04-01T06:39:54.592578
|
{
"authors": [
"paulidale"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/9906",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2400517441
|
Change iscsid login timeout in computes
In a previous patch we changed the iSCSI login timeout for the control plane (OpenShift nodes).
In this patch we add a new hook to do the same thing for the edpm nodes, because the default timeout of 2 minutes is too high for some test scenarios.
It is necessary to do it this way because edpm-ansible doesn't currently have a mechanism to change the iscsid.conf file. Once that feature is added this patch can be reverted.
This patch changes the edpm iscsid timeout default to 3 retries of 5 seconds each (15 seconds in total), which is convenient for testing: any healthy deployment and backend should be able to log in to the backend in that amount of time, and if there is a broken path it will not take 2 minutes to give up, just around 15 seconds.
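A minimal sketch of what such a hook effectively does to /etc/iscsi/iscsid.conf (the two option names are assumed from open-iscsi defaults; the real hook is implemented in Ansible):

import re
from pathlib import Path

# Assumed iscsid.conf keys: 5 seconds per login attempt, 3 retries (~15 s total).
settings = {
    "node.conn[0].timeo.login_timeout": "5",
    "node.session.initial_login_retry_max": "3",
}

conf = Path("/etc/iscsi/iscsid.conf")
text = conf.read_text()
for key, value in settings.items():
    # Rewrite the existing (possibly commented-out) line with the new value.
    text = re.sub(rf"^#?\s*{re.escape(key)}\s*=.*$", f"{key} = {value}", text, flags=re.M)
conf.write_text(text)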
Related-Jira: https://issues.redhat.com/browse/OSPRH-7614
As a pull request owner and reviewers, I checked that:
[x] Appropriate testing is done and actually running
[x] Appropriate documentation exists and/or is up-to-date:
[x] README in the role
[x] Content of the docs/source is reflecting the changes
/lgtm
Internally tested
/approve
|
gharchive/pull-request
| 2024-07-10T11:32:23 |
2025-04-01T06:39:54.610866
|
{
"authors": [
"Akrog",
"pablintino",
"tosky"
],
"repo": "openstack-k8s-operators/ci-framework",
"url": "https://github.com/openstack-k8s-operators/ci-framework/pull/2060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1923591719
|
Modify libvirt_manager role for proper reproducer capabilities
This patch extends the libvirt_manager role in order to create the full
reproducer job layout.
In such a case, there are some extended needs compared to the simple "deploy a layout" use case: we need to inject some keys, create network layout files/configuration and so on.
This patch, in conjunction with some others coming later, will allow
users to reproduce a 1:1 CI job, with the layout and networking.
As a pull request owner and reviewers, I checked that:
[X] Appropriate testing is done and actually running
[X] Appropriate documentation exists and/or is up-to-date:
[X] README in the role
@pablintino @raukadah so, things are really, really inter-connected with the reproducer (see #574). It's hard to make that whole thing completely independent - but I didn't want to push everything in one single PR...
/approve
@pablintino @raukadah so, things are really, really inter-connected with the reproducer (see #574). It's hard to make that whole thing completely independent - but I didn't want to push everything in one single PR...
Let's get this in!
/lgtm
|
gharchive/pull-request
| 2023-10-03T08:25:22 |
2025-04-01T06:39:54.614861
|
{
"authors": [
"cjeanner",
"pablintino",
"raukadah"
],
"repo": "openstack-k8s-operators/ci-framework",
"url": "https://github.com/openstack-k8s-operators/ci-framework/pull/625",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
902679026
|
Remove RolesFile parameter
This removes the RolesFile parameter and documents that a custom
roles file must be added to the tarball under the required name roles_data.yaml.
/lgtm
|
gharchive/pull-request
| 2021-05-26T16:52:51 |
2025-04-01T06:39:54.617920
|
{
"authors": [
"dprince",
"stuggi"
],
"repo": "openstack-k8s-operators/osp-director-operator",
"url": "https://github.com/openstack-k8s-operators/osp-director-operator/pull/248",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
432815164
|
Parking alongside streets (parking:lane)
This ticket is about the parking:lane tag on highway.
ATM it's more of a collection of references and thoughts to start a discussion.
I see a push in the community to add these details to maps. I wanted to start a conversation about how to add this to iD.
Articles about curb data
https://geoawesomeness.com/curbs-openstreetmap-sideline-center/ is an article about the richness of data that is between the car-lane and the building and a call to mapping to add those data. I think the article mixes the concept of the kerb-tag with the area of the "curb" which has information that OSM stores in multiple tags (like the parking tags on highway); see also https://twitter.com/mapillary/status/1095997095352373248.
https://medium.com/coord/behind-the-curbs-building-the-curb-map-883bab281feb builds a curb map without the specific kerb-tags on the map
https://www.coord.co/surveyor is building a business case around the curb and its data
A great micro-editor for parking-tagging
https://zlant.github.io/parking-lanes/#18/52.48050/13.44234
https://github.com/zlant/parking-lanes
Those are the things I like the most about this editor:
a. It combines the visualization of data as one mode with an editing mode. Which pulls you right into editing, since you see what is there and what is missing.
b. It highlights the curb-side by color (edit mode) which is an easy reference to the input sidebar.
c. The sidebar only shows the parking-related tags, so nothing else needs to be understood; and also the UI can be smart for just this use case.
Some Screenshots: (Location/Link)
Visualization mode:
Editing mode:
Editing UI:
Possible OSM Keys to support
https://wiki.openstreetmap.org/wiki/Key:parking:lane
parking:lane:{left/right/both} = parallel, diagonal, perpendicular, …
parking:lane:{left/right/both}:{parallel/diagonal/perpendicular} = on_street, half_on_kerb, on_kerb, …
parking:condition:{left/right/both} = free, customers, private, …
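For example, a street with free, parallel on-street parking on its right side would combine these as parking:lane:right=parallel, parking:lane:right:parallel=on_street and parking:condition:right=free (values taken from the lists above; the combination is only illustrative).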
Solution in iD
ATM, I don't see an easy way to add those presets.
With the current preset interface the sidebar would be too cluttered after adding all those (empty) fields. Also, the highlighting (left curb, right curb) is non-standard ATM, and the tags interact with each other a lot.
A first step could be to have presets that only show, once some info is there, like with the cycleway fields referenced in https://github.com/openstreetmap/iD/issues/1762#issuecomment-242025240.
Another take could be to duplicate the turn-lane UI for such usecases. So there is one box for turn tagging, one for parking, one for sidewalks …
Another take could be to duplicate the turn-lane UI for such usecases. So there is one box for turn tagging, one for parking, one for sidewalks …
The turn lane UI is still being discussed in #387. Some of the proposed designs don’t really lend themselves to details other than turn lanes.
There are some similarities between parking:lane/parking:condition and cycleway. iD already has a field that differentiates between cycleway:left and cycleway:right:
This field is far from ideal; mental gymnastics are required to tell which side is left and which is right. But a winning design for #387 need not block a very rudimentary field for parking:lane and parking:condition.
With the current preset interface the sidebar would be too cluttered after adding all those (empty) fields.
A field can be marked “optional”, which hides it by default unless the field has a value or the user explicitly adds it using the “Add field” dropdown.
Another take could be to duplicate the turn-lane UI for such usecases. So there is one box for turn tagging, one for parking, one for sidewalks …
The turn lane UI is still being discussed in #387. Some of the proposed designs don’t really lend themselves to details other than turn lanes.
To clarify, since I might be using the wrong terms here: I am not referring to #387, but to this UI:
I agree that #387 should do one thing great, which is handling the (car-)lanes. Parking, Sidewalks, maybe even bike lanes should go into dedicated UIs.
Oh yeah, what @1ec5 said is correct. We ideally want to consolidate all of the "stuff tagged on a street" into a single place. I don't really want to build separate fields for each thing.
|
gharchive/issue
| 2019-04-13T06:46:33 |
2025-04-01T06:39:54.690133
|
{
"authors": [
"1ec5",
"bhousel",
"tordans"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/6178",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
465447902
|
Wants to add extra tag when upgrading to bike+foot path
When editing a Cycleway with foot=designated, iD thinks it's an 'incomplete' Cycle & Foot Path and wants to upgrade it by adding bicycle=designated. However, bicycle=designated is already assumed on Cycleways, so this tag is extraneous.
@rivermont This is intentional, see #6172. I know the tag isn't critical but neither does it hurt anything.
|
gharchive/issue
| 2019-07-08T20:35:18 |
2025-04-01T06:39:54.692635
|
{
"authors": [
"quincylvania",
"rivermont"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/6635",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
305254582
|
Use remove-flow-types directly
Referencing this external issue https://github.com/leebyron/rollup-plugin-flow/issues/5
I have stolen the workaround from mapbox-gl-js, thanks to @anandthakker.
(closes https://github.com/openstreetmap/iD/issues/4874)
Thanks @kepta! I'm out today but feel free to merge if it fixes the issue 👍
I just tried it and it works great. Thanks again @kepta and @anandthakker - I am incredibly grateful that you tracked down the root cause and fixed this 👏
|
gharchive/pull-request
| 2018-03-14T17:21:50 |
2025-04-01T06:39:54.695283
|
{
"authors": [
"bhousel",
"kepta"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/pull/4885",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1093983603
|
fix: optimize extension treeview init logic
Types
[x] 🐛 Bug Fixes
Background or solution
Previously, data was loaded even for views that were collapsed or hidden; in this scenario, for example, the first activation of the extension created 760 treeNode instances across all treeviews combined.
After the optimization, data is only fetched on first activation when the view is both expanded and visible; in the same scenario, only 49 treeNodes are created at first, and subsequent collapsing/expanding of the view refreshes and re-fetches the data (consistent with VS Code's behavior).
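A language-agnostic sketch of that guard in Python (names are hypothetical; the actual change lives in main.thread.treeview.ts):

class LazyTreeView:
    """Fetch tree data only once the view is both expanded and visible."""

    def __init__(self, fetch_children):
        self._fetch_children = fetch_children  # round-trip to the extension host
        self.expanded = False
        self.visible = False

    def on_state_changed(self):
        # No eager load for collapsed/hidden views; each expand while visible
        # refreshes and re-fetches, matching VS Code behaviour.
        if self.expanded and self.visible:
            self._fetch_children()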
Changelog
Optimized the extension treeview initialization logic to reduce redundant loading.
Codecov Report
Merging #283 (8519c7e) into main (eea3af4) will decrease coverage by 0.00%.
The diff coverage is 43.75%.
@@ Coverage Diff @@
## main #283 +/- ##
==========================================
- Coverage 59.31% 59.31% -0.01%
==========================================
Files 1182 1182
Lines 72674 72688 +14
Branches 15061 15065 +4
==========================================
+ Hits 43110 43117 +7
- Misses 26941 26946 +5
- Partials 2623 2625 +2
Impacted files (coverage Δ):
- packages/main-layout/src/browser/layout.service.ts: 66.16% <0.00%> (-1.27% :arrow_down:)
- ...s/main-layout/src/common/main-layout.defination.ts: 100.00% <ø> (ø)
- ...ion/src/browser/vscode/api/main.thread.treeview.ts: 45.90% <63.63%> (+0.36% :arrow_up:)
- packages/core-common/src/node/port.ts: 46.96% <0.00%> (+3.03% :arrow_up:)
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
/publish
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2022-01-05T05:01:59 |
2025-04-01T06:39:54.731876
|
{
"authors": [
"Aaaaash",
"CLAassistant",
"codecov-commenter"
],
"repo": "opensumi/core",
"url": "https://github.com/opensumi/core/pull/283",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1197006896
|
chore: remove useless code
Types
[ ] 🎉 New Features
[ ] 🐛 Bug Fixes
[ ] 📚 Documentation Changes
[ ] 💄 Code Style Changes
[ ] 💄 Style Changes
[ ] 🪚 Refactors
[ ] 🚀 Performance Improvements
[ ] 🏗️ Build System
[ ] ⏱ Tests
[x] 🧹 Chores
[ ] Other Changes
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
gharchive/pull-request
| 2022-04-08T08:27:24 |
2025-04-01T06:39:54.736868
|
{
"authors": [
"CLAassistant",
"vagusX"
],
"repo": "opensumi/core",
"url": "https://github.com/opensumi/core/pull/798",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1511455201
|
PHP Fatal error: Uncaught Error: Class "Swoole\Database\PDOPool" not found
After updating to the latest version I get this error:
PHP Fatal error: Uncaught Error: Class "Swoole\Database\PDOPool" not found in
An example of using PDOPool in v22 has been added: https://github.com/openswoole/openswoole/blob/master/example/src/Coroutine/MySQLClientPool.php
|
gharchive/issue
| 2022-12-27T05:40:53 |
2025-04-01T06:39:54.738463
|
{
"authors": [
"doubaokun",
"siamakdals"
],
"repo": "openswoole/openswoole",
"url": "https://github.com/openswoole/openswoole/issues/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
244610863
|
Integrate check for http(s) scheme into filters
Since 9b2fa8bc985d5b68f66eb8325b2fab370c4d145d, only http(s) links are
permitted during general crawls. This is good, but the scheme check is
done inside FilterSpider, whereas it could be naturally handled by the
filtering system alongside other checks against requests. Remove the
conditional from FilterSpider and add a scheme filter to
make_filters.
Based on the approach described by @HarrisonGregg here.
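A minimal sketch of such a filter (the function name and its registration in make_filters are hypothetical; only the urlparse usage is standard library):

from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def scheme_filter(url):
    # Accept a request only if its scheme is http or https.
    return urlparse(url).scheme.lower() in ALLOWED_SCHEMES

assert scheme_filter("https://example.edu/syllabus")
assert not scheme_filter("mailto:someone@example.edu")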
This was done deliberately, both for efficiency purposes and because the bogus:// URLs cause unnecessary errors in the pipelines. It's really no different from only selecting hrefs instead of all linked objects.
Only thing this would let us do is add new schemes via filters & I can't see doing that often enough (or ever, really).
|
gharchive/pull-request
| 2017-07-21T09:15:56 |
2025-04-01T06:39:54.741108
|
{
"authors": [
"jodizzle",
"wearpants"
],
"repo": "opensyllabus/osp-scraper",
"url": "https://github.com/opensyllabus/osp-scraper/pull/136",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
791894189
|
Add cross-references to drug profile page
The GraphQL drug endpoint will soon expose cross-references with IDs that we can use to construct links to other websites.
Query for CHEMBL122
Based on a review of the data provided by ChEMBL, we will use the following cross-references to build external links:
DrugBank
ChEBI
DailyMed
DrugCentral
Wikipedia
Can we please add cross-references to the existing section displaying the ChEMBL ID? The UI style would follow that of cross-references found in the target profile page - source followed by ID (with hyperlink) followed by single pipe to separate each source.
The table below lists the source string provided by the GraphQL API and the associated link structure for implementation.
- Source "drugbank": URL is https://identifiers.org/drugbank: + crossReferences.reference[0]; example: DrugBank: DB05553 (ID for QC test: CHEMBL1162175 - regrelor)
- Source "chEBI": URL is https://identifiers.org/CHEBI: + crossReferences.reference[0]; example: ChEBI: 46195 (ID for QC test: CHEMBL122 - acetaminophen)
- Source "DailyMed": URL is https://dailymed.nlm.nih.gov/dailymed/search.cfm?labeltype=all&query= + crossReferences.reference[0]; example: DailyMed: atezolizumab (ID for QC test: CHEMBL3707227 - atezolizumab)
- Source "DrugCentral": URL is https://drugcentral.org/drugcard/ + crossReferences.reference[0]; example: DrugCentral: 4924 (ID for QC test: CHEMBL3137343 - pembrolizumab)
- Source "Wikipedia": URL is https://en.wikipedia.org/wiki/ + crossReferences.reference[0]; example: Wikipedia: Tamoxifen (ID for QC test: CHEMBL83 - tamoxifen)
Please note that all five selected cross-references will not be available for every drug. For example, CHEMBL122 only has DrugBank, ChEBI, and Wikipedia cross-references whereas CHEMBL3137343 only has DailyMed and DrugCentral cross-references.
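A small sketch of the mapping, derived directly from the table above (the helper name is hypothetical); note that references are percent-encoded, which also covers the DailyMed %20 issue raised below:

from urllib.parse import quote

URL_TEMPLATES = {
    "drugbank": "https://identifiers.org/drugbank:{}",
    "chEBI": "https://identifiers.org/CHEBI:{}",
    "DailyMed": "https://dailymed.nlm.nih.gov/dailymed/search.cfm?labeltype=all&query={}",
    "DrugCentral": "https://drugcentral.org/drugcard/{}",
    "Wikipedia": "https://en.wikipedia.org/wiki/{}",
}

def build_link(source, reference):
    template = URL_TEMPLATES.get(source)
    # quote() turns spaces into %20 so multi-word references stay valid URLs.
    return template.format(quote(reference)) if template else None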
Can we please account for the %20 encoding of the DailyMed links in the FE?
https://beta--platform-app.netlify.app/drug/CHEMBL3353410
@d0choa DailyMed links have been fixed!
|
gharchive/issue
| 2021-01-22T10:36:59 |
2025-04-01T06:39:54.754616
|
{
"authors": [
"andrewhercules",
"d0choa",
"mirandaio"
],
"repo": "opentargets/platform",
"url": "https://github.com/opentargets/platform/issues/1356",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
370981625
|
Pipeline QC: Automated QC distribution of scores
As part of the automated QC we should report a very roughly bucketed distribution of scores for the val, evidence, and association steps. For example, 5 buckets of width 0.2.
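A quick-and-dirty sketch of that bucketing, consistent with the 5-buckets-of-width-0.2 example (the helper name is hypothetical):

def bucket_scores(scores, width=0.2):
    n = round(1 / width)                            # 5 buckets for width 0.2
    counts = [0] * n
    for s in scores:                                # scores assumed in [0, 1]
        counts[min(int(s / width), n - 1)] += 1     # 1.0 lands in the last bucket
    return counts

assert bucket_scores([0.05, 0.5, 0.99, 1.0]) == [1, 0, 1, 0, 2]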
This ticket needs more detail about how this will be implemented and how it will be reported, i.e., a data file, automatic generation of plots?
This could be done after the pipeline has run successfully and could be part of a bigger effort to also compare different releases, i.e., a summary of which diseases, targets and associations are new or were dropped.
This ticket is for part of the automated QC, and therefore would follow that reporting pathway - currently, a tsv file similar to https://docs.google.com/spreadsheets/d/1CeAk4LlnrTFFgzSJJyRuLr09wIj2bDbQuU2PuNFubOI/edit?usp=sharing
Generating pretty plots, or other changes QC process, are outside the scope of this issue. This issue is very specifically for a quick and dirty first pass approximation within the existing automated QC framework.
|
gharchive/issue
| 2018-10-17T09:31:56 |
2025-04-01T06:39:54.757201
|
{
"authors": [
"MichaelaEBI",
"afaulconbridge"
],
"repo": "opentargets/platform",
"url": "https://github.com/opentargets/platform/issues/194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
932046016
|
Use predefined machine types
We need to add the possibility of specifying a predefined machine type to use for VM instances.
Machine type specification takes precedence over custom geometry, with the latter being a fallback mechanism for those situations where no machine type has been specified.
No longer needed
|
gharchive/issue
| 2021-06-28T22:33:07 |
2025-04-01T06:39:54.758569
|
{
"authors": [
"mbdebian"
],
"repo": "opentargets/terraform-google-opentargets-platform",
"url": "https://github.com/opentargets/terraform-google-opentargets-platform/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1514857698
|
reduce reliance on containers for increased code portability
I suggest developer reliance on containers results in software portability issues. This is ironic because containers are designed to help with dependency issues.
But, in a developer context, containers mask portability issues. It is too easy to avoid actually resolving portability issues because the software "works" in... the approved container.
I know this runs counter to prevailing practice. But it seems inefficient to having to use multiple images / containers to do development. For an example, see:
https://devzone.nordicsemi.com/f/nordic-q-a/95270/please-modernize-to-the-latest-raspios-openthread-otbr-matter
So... build on multiple different platforms. Use explicitly sized types. Judiciously use wrappers or #ifs for platforms and library version dependencies. But, hide these details from the clients of your code.
Thanks for consideration,
Jonathan Hess
@jahess , thanks for providing your thoughts.
Build on multiple different platforms.
We leverage GitHub Actions to implement build checks across a variety of platforms including different architectures, toolchains, and OSes - see .github/workflows/build.yml as an example.
Use explicitly sized types. Judiciously use wrappers or #ifs for platforms and library version dependencies. But, hide these details from the clients of your code.
The STYLE_GUIDE.md includes the following text:
Standard, scalar data types defined in stdint.h (C) or cstdint (C++) should be used for basic signed and unsigned integer types, especially when size and serialization to non-volatile storage or across a network is concerned. Examples of these are: uint8_t, int8_t, etc.
Judiciously use wrappers or #ifs for platforms and library version dependencies
OpenThread defines a platform abstraction minimize platform dependencies within core code.
Of course, there is always room to improve. We welcome any contributions that help improve on these areas. Thanks!
|
gharchive/issue
| 2022-12-30T21:36:25 |
2025-04-01T06:39:54.778621
|
{
"authors": [
"jahess",
"jwhui"
],
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/issues/8597",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
231131117
|
Only send one Child Update Request when restoring rx-on-when-idle child.
Resolves #1818.
LGTM
|
gharchive/pull-request
| 2017-05-24T18:29:42 |
2025-04-01T06:39:54.779632
|
{
"authors": [
"jwhui",
"wbober"
],
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/pull/1821",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1296361473
|
Build OTBR agent for Thread 1.2
I was so far working with a RCP using Thread 1.1.1 and I was building the OTBR agent with:
INFRA_IF_NAME=wlan0 BACKBONE_ROUTER=0 BORDER_ROUTING=1 OTBR_OPTIONS="-DOT_THREAD_VERSION=1.1" ./script/setup
I'm now moving to a RCP using Thread 1.2. I'm wondering if I should simply use :
INFRA_IF_NAME=wlan0 ./script/setup
or if I still need the other parameters (BACKBONE_ROUTER=0 BORDER_ROUTING=1 OTBR_OPTIONS="-DOT_THREAD_VERSION=1.2").
Can you please tell me?
Thank you
It depends what git commit you are using.
The latest OpenThread main branch default for OT_THREAD_VERSION is 1.3 - see https://github.com/openthread/openthread/blob/dd631b2a1914b60223d527aafbca50b964cd9b7d/CMakeLists.txt#L105
I'm using the latest version of openthread/ot-br-posix's main branch. So if I understand correctly, I should add OTBR_OPTIONS="-DOT_THREAD_VERSION=1.2" to force the usage of 1.2.
Do I still need BACKBONE_ROUTER=0 BORDER_ROUTING=1 or was it specific to 1.1?
Thanks
Ok thanks.
I have looked at examples/platforms/raspbian/default and I can see that BORDER_ROUTING and BACKBONE_ROUTER are enabled so I will build with :
INFRA_IF_NAME=wlan0 ./script/setup
Yes, Thread 1.2 includes low-power enhancements that require an update to the RCP. I suggest updating your RCP to match the latest main branch as well.
Ok, thank you Jonathan.
@jwhui
Hi Jonathan,
Our RCP is not yet ready to include the low-power enhancements so I will have to build it with the flags to force the version 1.1 (this is what I was using so far).
A quick question about SRP: Will it work if my OTBR is configured for Thread 1.1?
Thank you
|
gharchive/issue
| 2022-07-06T20:15:59 |
2025-04-01T06:39:54.785529
|
{
"authors": [
"OlivierGre",
"jwhui"
],
"repo": "openthread/ot-br-posix",
"url": "https://github.com/openthread/ot-br-posix/issues/1449",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
620316163
|
Add streamId to widget stats
What is this PR doing?
Add streamId to widget stats
How should this be manually tested?
When publishing, StreamId field should be present on the bottom of the publisher stats widget.
When subscribing, StreamId field should be present above the origin server field in the subscriber stats widget
What are the relevant tickets?
n/a
thanks @v-kpheng, these errors look like the latest Chrome installers are using xz-compressed packages, which do not work on the Travis trusty snapshot (https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/1730627). As a workaround I have added an apt-get update so we can get the fixed version of dpkg on trusty.
Now Travis is passing https://travis-ci.org/github/opentok/opentok-meet/builds/688801176
wdyt?
@albertoacn, the job is green but the logs show the tests are failing.
For example:
Thanks for the workaround, btw! The next time we touch this, we may need to use a different distribution since "trusty" is being EOLed.
thanks @v-kpheng, that isn't part of the test since we are only running unit tests here. That error comes from npm start, which is done here https://github.com/opentok/opentok-meet/blob/master/.travis.yml#L9 as a before-script step, and the exact failing line is https://github.com/opentok/opentok-meet/blob/master/app.js#L55
This is because api_secret is not available on forks' PRs, since it is a protected env var.
Running the app could be a legacy step from when we used to run integration tests.
|
gharchive/pull-request
| 2020-05-18T15:48:46 |
2025-04-01T06:39:54.794663
|
{
"authors": [
"albertoacn",
"v-kpheng"
],
"repo": "opentok/opentok-meet",
"url": "https://github.com/opentok/opentok-meet/pull/87",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1847129041
|
🛑 [CN] OpenUPM API /packages/extra is down
In be1c0fd, [CN] OpenUPM API /packages/extra (https://api.openupm.cn/packages/extra) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [CN] OpenUPM API /packages/extra is back up in 094bbed.
|
gharchive/issue
| 2023-08-11T16:41:06 |
2025-04-01T06:39:55.130133
|
{
"authors": [
"favoyang"
],
"repo": "openupm/upptime-openupmcn",
"url": "https://github.com/openupm/upptime-openupmcn/issues/1101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1816480202
|
🛑 [CN] OpenUPM Website is down
In 838c3d0, [CN] OpenUPM Website (https://openupm.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [CN] OpenUPM Website is back up in 53f2647.
|
gharchive/issue
| 2023-07-21T23:41:15 |
2025-04-01T06:39:55.132763
|
{
"authors": [
"favoyang"
],
"repo": "openupm/upptime-openupmcn",
"url": "https://github.com/openupm/upptime-openupmcn/issues/608",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1594939564
|
🛑 [CN] OpenUPM API /site/info is down
In 7067edc, [CN] OpenUPM API /site/info (https://api.openupm.cn/site/info) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [CN] OpenUPM API /site/info is back up in 4b3a36f.
|
gharchive/issue
| 2023-02-22T11:28:35 |
2025-04-01T06:39:55.135260
|
{
"authors": [
"favoyang"
],
"repo": "openupm/upptime",
"url": "https://github.com/openupm/upptime/issues/1020",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2009248912
|
[Bug]: Efficient_AD with OpenVINO takes too much time to predict
Describe the bug
I am using OpenVINO inference to predict anomalies in custom datasets.
However, it takes too much time compared to other models like Padim or FastFlow.
At the same width/height, Padim takes 0.33 s and FastFlow takes 0.8 s, whereas Efficient_AD takes 15 s.
Is it normal that OpenVINO export + EfficientAD takes more time to predict compared to Padim or FastFlow?
Dataset
Folder
Model
Other (please specify in the field below)
Steps to reproduce the behavior
Export by Openvino option
Load Efficient_AD
predict anomalies
OS information
OS information:
OS: [windows10]
Python version: [3.8.10]
Any other relevant information: [I'm using a custom dataset]
Expected behavior
I expected Efficient_AD to be faster than Padim and FastFlow.
Also, with 512x512, Efficient_AD with the OpenVINO inferencer takes 1.2 s, while Padim takes 0.2 s.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
dataset:
name: belts
root: ./datasets
format : folder
task: classification
normal_dir : ok
abnormal_dir : ng
mask : null
normal_test_dir : test
extensions : ".jpg"
train_batch_size: 1
eval_batch_size: 16
num_workers: 8
image_size: 512 # dimensions to which images are resized (mandatory)
center_crop: null # dimensions to which images are center-cropped after resizing (optional)
normalization: none # data distribution to which the images will be normalized: [none, imagenet]
transform_config:
train: null
eval: null
test_split_mode: from_dir # options: [from_dir, synthetic]
test_split_ratio: 0.2 # fraction of train images held out testing (usage depends on test_split_mode)
val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
model:
name: efficientad
teacher_out_channels: 384
model_size: small # options: [small, medium]
lr: 0.0001
weight_decay: 0.00001
padding: false
pad_maps: true # relevant for "padding: false", see EfficientAD in lightning_model.py
early_stopping:
patience: 2
metric: image_AUROC
mode: max
normalization_method: min_max # options: [null, min_max, cdf]
metrics:
image:
- F1Score
- AUROC
pixel:
- F1Score
- AUROC
threshold:
method: adaptive #options: [adaptive, manual]
manual_image: null
manual_pixel: null
visualization:
show_images: False # show images on the screen
save_images: False # save images to the file system
log_images: False # log images to the available loggers (if any)
image_save_path: null # path to which images will be saved
mode: full # options: ["full", "simple"]
project:
seed: 42
path: ./results/efficient_test_model_v3
logging:
logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
log_graph: false # Logs the model graph to respective logger.
optimization:
export_mode: openvino # options: torch, onnx, openvino
# PL Trainer Args. Don't add extra parameter here.
trainer:
enable_checkpointing: true
default_root_dir: null
gradient_clip_val: 0
gradient_clip_algorithm: norm
num_nodes: 1
devices: 1
enable_progress_bar: true
overfit_batches: 0.0
track_grad_norm: -1
check_val_every_n_epoch: 1
fast_dev_run: false
accumulate_grad_batches: 1
max_epochs: 10
min_epochs: null
max_steps: 70000
min_steps: null
max_time: null
limit_train_batches: 1.0
limit_val_batches: 1.0
limit_test_batches: 1.0
limit_predict_batches: 1.0
val_check_interval: 1.0
log_every_n_steps: 50
accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
strategy: null
sync_batchnorm: false
precision: 32
enable_model_summary: true
num_sanity_val_steps: 0
profiler: null
benchmark: false
deterministic: false
reload_dataloaders_every_n_epochs: 0
auto_lr_find: false
replace_sampler_ddp: true
detect_anomaly: false
auto_scale_batch_size: false
plugins: null
move_metrics_to_cpu: false
multiple_trainloader_mode: max_size_cycle
Logs
Not much logs to put in
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Hello, this certainly isn't expected behavior. Just to clarify, in your config you use 512x512 resolution which you say takes 1.2s to execute, which resolution takes 15s?
In any case, EfficientAD should be fast, but it does use neural networks, so if you want the execution to be fast, you will need to run it on a GPU. I suspect this is the main culprit. Also make sure you are timing only the inference (predict time), not the model instantiation (and other parts), as that would falsely inflate the duration (unless your task involves this).
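A minimal timing sketch along those lines (inferencer construction is omitted because anomalib's API differs across versions; only the loop and time.perf_counter are standard):

import time

def time_predict(inferencer, image, warmup=1, runs=10):
    for _ in range(warmup):
        inferencer.predict(image)                  # warm-up, not timed
    start = time.perf_counter()
    for _ in range(runs):
        inferencer.predict(image)                  # measure only prediction
    return (time.perf_counter() - start) / runs    # seconds per prediction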
Hi, thanks for the help. When I use 700x1200 it takes 15 s to get a prediction. I only time the inference, so model instantiation is not the reason for the long duration. Is it normal that it takes much more time with OpenVINO? Other than EfficientAD, it usually takes less time to get predictions with the OpenVINO inferencer.
Okay, as you mentioned, it may be due to running OpenVINO on CPU. So I tested the TorchInferencer, and now it is really fast: with 700x1200, I get 0.6 s per prediction. But I'm still not sure why it is slow with the OpenVINO inferencer.
Did you also try openvino with gpu inference? You can pass device parameter and set it to GPU by using --device GPU
@blaz-r, do you think this is a bug or a specific use case?
@samet-akcay I don't think this is a bug, but it surely is a bit unusual for EfficientAD to take so much time. Maybe the reason is inference on the CPU or it could also be something with OpenVINO but I can't say for sure.
@samet-akcay Hi. I recently switched the inference module to TorchInferencer and it seems the issue has been addressed. However, I still don't know why it gets extremely slow when I use the OpenVINOInferencer (it gives the same results: no accuracy drop or incorrect predictions, just prediction speed 10x slower). Maybe it is due to the huge image size (since I am using 800x1600)?
|
gharchive/issue
| 2023-11-24T07:37:30 |
2025-04-01T06:39:55.145379
|
{
"authors": [
"blaz-r",
"papago2355",
"samet-akcay"
],
"repo": "openvinotoolkit/anomalib",
"url": "https://github.com/openvinotoolkit/anomalib/issues/1499",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1017481507
|
Add LFW format
Motivation and context
Added LFW format.
How has this been tested?
Checklist
[ ] I submit my changes into the develop branch
[x] I have added description of my changes into CHANGELOG file
[x] I have updated the documentation accordingly
[x] I have added tests to cover my changes
[ ] I have linked related issues (read github docs)
[ ] I have increased versions of npm packages if it is necessary (cvat-canvas,
cvat-core, cvat-data and cvat-ui)
License
[x] I submit my code changes under the same MIT License that covers the project.
Feel free to contact the maintainers if that's a concern.
[x] I have updated the license header for each file (see an example below)
# Copyright (C) 2021 Intel Corporation
#
# SPDX-License-Identifier: MIT
The main question for me now: should we allow exporting tasks in this format if they don't correspond to the expected format? For example, our tasks don't have the corresponding attributes. I have created a task with pets (cat, dog) without the necessary attributes and exported the dataset in LFW. Did I get something useful?
The main question for me now: should we allow exporting tasks in this format if they don't correspond to the expected format? For example, our tasks don't have the corresponding attributes. I have created a task with pets (cat, dog) without the necessary attributes and exported the dataset in LFW. Did I get something useful?
@nmanovic I think that even without attributes this format can still be useful for image/face classification tasks.
|
gharchive/pull-request
| 2021-10-06T05:13:34 |
2025-04-01T06:39:55.153100
|
{
"authors": [
"kirill-sizov",
"nmanovic"
],
"repo": "openvinotoolkit/cvat",
"url": "https://github.com/openvinotoolkit/cvat/pull/3770",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1713237085
|
Add COCO Roboflow section
Summary
Add documentation part of #976
How to test
Checklist
[ ] I have added unit tests to cover my changes.
[ ] I have added integration tests to cover my changes.
[x] I have added the description of my changes into CHANGELOG.
[x] I have updated the documentation accordingly
License
[ ] I submit my code changes under the same MIT License that covers the project.
Feel free to contact the maintainers if that's a concern.
[ ] I have updated the license header for each file (see an example below).
# Copyright (C) 2023 Intel Corporation
#
# SPDX-License-Identifier: MIT
Codecov Report
Patch coverage has no change and project coverage change: -0.03 :warning:
Comparison is base (332879d) 78.53% compared to head (62ee21e) 78.51%.
Additional details and impacted files
@@ Coverage Diff @@
## develop #1000 +/- ##
===========================================
- Coverage 78.53% 78.51% -0.03%
===========================================
Files 233 233
Lines 26749 26757 +8
Branches 5320 5323 +3
===========================================
Hits 21007 21007
- Misses 4497 4498 +1
- Partials 1245 1252 +7
Flag coverage:
- macos-11_Python-3.8: ?
- ubuntu-20.04_Python-3.8: 78.51% <ø> (-0.01% :arrow_down:)
- windows-2019_Python-3.8: ?
Flags with carried forward coverage won't be shown.
see 8 files with indirect coverage changes
|
gharchive/pull-request
| 2023-05-17T06:47:08 |
2025-04-01T06:39:55.163611
|
{
"authors": [
"codecov-commenter",
"vinnamkim"
],
"repo": "openvinotoolkit/datumaro",
"url": "https://github.com/openvinotoolkit/datumaro/pull/1000",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
930489956
|
Error audio sample app - Got EOS from element "pipeline0"
Hello,
I was able to build and run the DL streamer audio sample app and I'm getting this message and no output is generated.
I tried this on localhost with OpenVINO 2021.3, and also on the openvino/ubuntu18_data_runtime:latest and openvino/ubuntu20_data_runtime:latest docker containers.
Any ideas?
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:00.052957385
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Hi @antoniomtz,
What was the input file? Can you share your full pipeline command?
The other useful tool for investigation is to set GST_DEBUG=3 at least to see the error messages:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html?gi-language=c
Mark
Hello @mliu2020 ,
I'm running the sample audio file from openVINO which includes an audio file.
This is the command:
gst-launch-1.0 filesrc location=how_are_you_doing.wav ! decodebin ! audioresample ! audioconvert ! audio/x-raw, channels=1,format=S16LE,rate=16000 ! audiomixer output-buffer-duration=100000000 ! gvaaudiodetect model=/home/antonio/intel/dl_streamer/models/audio_models/aclnet/FP32/aclnet.xml model-proc=./model_proc/aclnet.json sliding-window=0.2 ! gvametaconvert ! gvametapublish file-format=json-lines ! fakesink
Hi @antoniomtz ,
Could you verify that model_proc file for aclnet contains the following fix? https://github.com/openvinotoolkit/dlstreamer_gst/commit/4a8b37a81b4f8362899d3521bcabe5ecaec97014#diff-690de75ea210e7d36ad143ef356375f3931f2653dc520112d57b8cdc49baea7e
@antoniomtz,
Can you confirm on @adranit's suggested patch?
You might either try the same script on the latest 2021.4 release and see if this was fixed.
Mark
Hi @antoniomtz,
If you are no long looking into this issue, would you mind if I close this issue?
Mark
@mliu2020 @adranit it worked. However, the output from the sample audio app is not accurate. The sample audio app from OpenVINO uses an audio file with the speech "How are you doing".
gst-launch-1.0 filesrc location=how_are_you_doing.wav ! decodebin ! audioresample ! audioconvert ! audio/x-raw, channels=1,format=S16LE,rate=16000 ! audiomixer output-buffer-duration=100000000 ! gvaaudiodetect model=/home/antonio/intel/dl_streamer/models/audio_models/aclnet/FP32/aclnet.xml model-proc=./model_proc/aclnet.json sliding-window=0.2 ! gvametaconvert ! gvametapublish file-format=json-lines ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
{"channels":1,"events":[{"detection":{"confidence":0.7,"label":"Can opening","label_id":35,"segment":{"end_timestamp":1000000000,"start_timestamp":0}},"end_timestamp":1000000000,"event_type":"Can opening","start_timestamp":0}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":0.67,"label":"Cow","label_id":4,"segment":{"end_timestamp":1200000000,"start_timestamp":200000000}},"end_timestamp":1200000000,"event_type":"Cow","start_timestamp":200000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":0.99,"label":"Speech","label_id":53,"segment":{"end_timestamp":1400000000,"start_timestamp":400000000}},"end_timestamp":1400000000,"event_type":"Speech","start_timestamp":400000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":1600000000,"start_timestamp":600000000}},"end_timestamp":1600000000,"event_type":"Speech","start_timestamp":600000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":1800000000,"start_timestamp":800000000}},"end_timestamp":1800000000,"event_type":"Speech","start_timestamp":800000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":2000000000,"start_timestamp":1000000000}},"end_timestamp":2000000000,"event_type":"Speech","start_timestamp":1000000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":2200000000,"start_timestamp":1200000000}},"end_timestamp":2200000000,"event_type":"Speech","start_timestamp":1200000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":2400000000,"start_timestamp":1400000000}},"end_timestamp":2400000000,"event_type":"Speech","start_timestamp":1400000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":2600000000,"start_timestamp":1600000000}},"end_timestamp":2600000000,"event_type":"Speech","start_timestamp":1600000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":2800000000,"start_timestamp":1800000000}},"end_timestamp":2800000000,"event_type":"Speech","start_timestamp":1800000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":1.0,"label":"Speech","label_id":53,"segment":{"end_timestamp":3000000000,"start_timestamp":2000000000}},"end_timestamp":3000000000,"event_type":"Speech","start_timestamp":2000000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":0.99,"label":"Speech","label_id":53,"segment":{"end_timestamp":3200000000,"start_timestamp":2200000000}},"end_timestamp":3200000000,"event_type":"Speech","start_timestamp":2200000000}],"rate":16000}
{"channels":1,"events":[{"detection":{"confidence":0.99,"label":"Speech","label_id":53,"segment":{"end_timestamp":3400000000,"start_timestamp":2400000000}},"end_timestamp":3400000000,"event_type":"Speech","start_timestamp":2400000000}],"rate":16000}
Got EOS from element "pipeline0".
Execution ended after 0:00:00.138420280
Can you guide me on how to translate this JSON message into the actual speech?
{"channels":1,"events":[{"detection":{"confidence":0.99,"label":"Speech","label_id":53,"segment":{"end_timestamp":1400000000,"start_timestamp":400000000}},"end_timestamp":1400000000,"event_type":"Speech","start_timestamp":400000000}],"rate":16000}
@antoniomtz,
This output is expected. You misunderstood the purpose of the model: it is used to detect the kind of sound, not to generate a transcript of the speech.
Mark Liu
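For reference, a small sketch for consuming the json-lines output shown above (the file name is hypothetical): it extracts the detected sound class per segment, which is all this model produces.

import json

with open("output.jsonl") as f:                    # gvametapublish json-lines output
    for line in f:
        msg = json.loads(line)
        for event in msg.get("events", []):
            det = event["detection"]
            seg = det["segment"]
            # e.g. "Speech 0.99 400000000-1400000000 ns"
            print(det["label"], det["confidence"],
                  f'{seg["start_timestamp"]}-{seg["end_timestamp"]} ns')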
That's what I assumed afterwards.
However, the sample hello wav file confused me a little bit. I was expecting to see "How are you doing" as the output.
Other than that, the ticket can be closed now. Thank you.
Yes, I agree.
I have the same experience; we should either supply a new sound track or describe it explicitly.
Mark
|
gharchive/issue
| 2021-06-25T20:42:01 |
2025-04-01T06:39:55.188505
|
{
"authors": [
"adranit",
"antoniomtz",
"mliu2020"
],
"repo": "openvinotoolkit/dlstreamer_gst",
"url": "https://github.com/openvinotoolkit/dlstreamer_gst/issues/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1879813845
|
Add heuristic logic to unify the oc block to optimize the peak memory consumption for LLM.
Description
Add heuristic logic to unify the oc block to optimize the peak memory consumption for LLM.
OV PR:https://github.com/openvinotoolkit/openvino/pull/19575
|
gharchive/pull-request
| 2023-09-04T08:51:54 |
2025-04-01T06:39:55.190454
|
{
"authors": [
"luweizhou2016"
],
"repo": "openvinotoolkit/oneDNN",
"url": "https://github.com/openvinotoolkit/oneDNN/pull/210",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2417982802
|
rm .github/ISSUE_TEMPLATE
GenAI issues found by the community tend to be created using that template, which isn't correct because they usually expect us to address them.
Not the end. I was going to use one of the issues as a template for a new GFI. I was trying to remember how issue assignment is implemented and just stumbled upon the template because I forgot about your initial PR.
As for the general template, we don't have much to specify. It's not even reasonable to ask users to provide the version they used, because versions change rapidly and at the same time some of the problems have been here for a long time.
I need your GitHub button approval because Jenkins is unlikely to start its builds without an explicit approval.
|
gharchive/pull-request
| 2024-07-19T05:43:16 |
2025-04-01T06:39:55.191950
|
{
"authors": [
"Wovchena"
],
"repo": "openvinotoolkit/openvino.genai",
"url": "https://github.com/openvinotoolkit/openvino.genai/pull/646",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2593679419
|
Remove italic on page titles
Font changes have been updated
|
gharchive/issue
| 2024-10-17T05:45:17 |
2025-04-01T06:39:55.214129
|
{
"authors": [
"Vaisakh-mv",
"deepakkumarnd"
],
"repo": "openvitae-tech/blackboard-lms",
"url": "https://github.com/openvitae-tech/blackboard-lms/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
189817057
|
Set up continuous integration
Need to set up a continuous integration pipeline for running unit tests, integration tests, builds, etc. Currently experimenting with Concourse in this branch. Alternatives are Travis CI, Jenkins, etc.
Concourse server is set up at http://9.42.89.107:8080/
Concourse pipeline with unit tests set up and tested. Example PR using Concourse: https://github.com/openwhisk/apigateway/pull/23
|
gharchive/issue
| 2016-11-16T19:29:37 |
2025-04-01T06:39:55.232986
|
{
"authors": [
"alexsong93"
],
"repo": "openwhisk/apigateway",
"url": "https://github.com/openwhisk/apigateway/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
568753537
|
luci-app-yggdrasil: fix listen uri type and remove tap for v3.13
Tested against yggdrasil v3.12-develop (v3.13) / APU2 x86_64 (next release expected tomorrow)
uri is required; using address was omitting the listen address for: ygguci get
tap interface support is removed in v3.13
do we need to update the translation maps file now?
Updating translation templates is only strictly necessary if you introduce new strings. For removals, changed locations etc. we do occasional tree-wide translation syncs which will take care of such changes.
|
gharchive/pull-request
| 2020-02-21T05:59:44 |
2025-04-01T06:39:55.242325
|
{
"authors": [
"jow-",
"wfleurant"
],
"repo": "openwrt/luci",
"url": "https://github.com/openwrt/luci/pull/3659",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
810721002
|
luci-{app,proto}-wireguard: remove kmod-wireguard
Prepares for 5.10 migration. wireguard-tools will bring in the correct
wireguard kernel module dependency - either kmod-wireguard or
kmod-wireguard-oot.
Depends on openwrt/openwrt#3885
I'm going to close this for now, there has been quite a bit of discussion on how to proceed with WireGuard going forward, but I think the idea is to limit disruption to user space packages that depend on kmod-wireguard. So, no changes should be needed here.
I wonder if this PR is relevant again now that https://github.com/openwrt/openwrt/commit/cbcddc9f318607881799e329b327a68c4e76d5cb has been merged
Yeah I think it's worth removing the unnecessary dependency.
|
gharchive/pull-request
| 2021-02-18T03:19:46 |
2025-04-01T06:39:55.244692
|
{
"authors": [
"hnyman",
"lipnitsk"
],
"repo": "openwrt/luci",
"url": "https://github.com/openwrt/luci/pull/4819",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1931840757
|
luci-proto-cni: protocol update
Updated luci support for updated netifd cni support openwrt/packages#22341
maintainer: me
build/test platform: x86_64, latest git
Hold until openwrt/packages#22341 is merged.
PR openwrt/packages#22341 is merged. This is good to go.
Thanks!
|
gharchive/pull-request
| 2023-10-08T15:02:38 |
2025-04-01T06:39:55.246555
|
{
"authors": [
"feckert",
"oskarirauta"
],
"repo": "openwrt/luci",
"url": "https://github.com/openwrt/luci/pull/6626",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1687879195
|
Enable roundeven expansion pattern
Re-enables more tests from #12778.
Could we keep the approximate math patterns guarded by a flag, even if the flag is on by default?
Good idea, done.
|
gharchive/pull-request
| 2023-04-28T04:29:47 |
2025-04-01T06:39:55.393895
|
{
"authors": [
"jpienaar"
],
"repo": "openxla/iree",
"url": "https://github.com/openxla/iree/pull/13329",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2087408578
|
[CPU] Improve vector tile sizes for sub-byte matmuls on Aarch64
This PR introduces a simple heuristic to make sure that we at least fill one vector register for the smallest data type used in the matmul. For example, given a 128-bit vector and an i32 <- i4, i4 matmul, we used a tile size of 16 for the main vector dimension (16x4 = 64 bits, half a vector). With this PR we use 32 (32x4 = 128 bits, a full vector).
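A toy illustration of the arithmetic behind the heuristic (a hypothetical helper, not the actual pass code):

def main_dim_tile(vector_bits, smallest_elem_bits):
    # 128-bit vector with i4 operands: 128 // 4 = 32 elements fill one register,
    # versus the previous tile of 16, which filled only half of it.
    return vector_bits // smallest_elem_bits

assert main_dim_tile(128, 4) == 32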
RISC-V ISel crashes... I think it's probably too ambitious to enable this for all targets. Let me apply this only to aarch64 for now and add a TODO about generalizing it.
|
gharchive/pull-request
| 2024-01-18T03:27:19 |
2025-04-01T06:39:55.395370
|
{
"authors": [
"dcaballe"
],
"repo": "openxla/iree",
"url": "https://github.com/openxla/iree/pull/16143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2009365282
|
fix dns host using configmap name
/lgtm
|
gharchive/pull-request
| 2023-11-24T09:10:57 |
2025-04-01T06:39:55.398803
|
{
"authors": [
"River-sh",
"rambohe-ch"
],
"repo": "openyurtio/openyurt",
"url": "https://github.com/openyurtio/openyurt/pull/1827",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1398137281
|
support / give feedback / generate button asks for root without explanation
The app asks for root escalation to generate the feedback package, but doesn't say precisely what it's going to collect or change and doesn't offer any alternatives. It would go a long way toward inspiring confidence if the text near the button made it perfectly clear what files would be collected and how the feedback package will be handled. Is it going to immediately upload to someone's cloud, or will a file be generated that I can first inspect before sharing? A welcome alternative would be steps to generate the package after an opportunity to inspect the script. The password prompt asks for permission to run "bash", which is slightly alarming. I decided not to proceed when I saw that, for the reasons above.
Linux actually doesn't have a service to generate that package, so I hid this in the latest commit until one gets added
|
gharchive/issue
| 2022-10-05T17:36:11 |
2025-04-01T06:39:55.411408
|
{
"authors": [
"JeremyTellier",
"qrkourier"
],
"repo": "openziti/desktop-edge-ui",
"url": "https://github.com/openziti/desktop-edge-ui/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
773691699
|
Oudated info in Packaging an operator for OLM
Bug Report
What did you do?
I was checking the doc https://olm.operatorframework.io/docs/tasks/packaging-an-operator/
What did you expect to see?
It should link to the Memcached sample with the 1.0+ SDK layout as an example: https://github.com/operator-framework/operator-sdk/tree/master/testdata/go/v3/memcached-operator/config
The description should point out the files in the config/ dir instead of the legacy deploy/ one.
What did you see instead? Under which circumstances?
@camilamacedo86 Thanks for the report - It looks like the doc link you provided no longer exists, so I'm going to close this out. If we're still referencing out-of-date memcached samples, let's re-visit this by opening an issue in the olm-docs repository so we have a centralized place that tracks all the upstream OLM documentation.
|
gharchive/issue
| 2020-12-23T11:38:36 |
2025-04-01T06:39:55.422122
|
{
"authors": [
"camilamacedo86",
"timflannagan"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/issues/1920",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
457601509
|
Add webhook proposal
This is my current view of how webhooks should be added to OLM. Feedback/discussion encouraged.
/retest
/test e2e-aws-console-olm
/retest
/test e2e-aws
Regarding certificate management, have you thought about relying on cert-manager or OpenShift's service-ca?
|
gharchive/pull-request
| 2019-06-18T17:16:24 |
2025-04-01T06:39:55.423892
|
{
"authors": [
"ecordell",
"jpeeler",
"rcernich"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/pull/913",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
484902087
|
[Helm] Random password regenerated indefinitely
Bug Report
What did you do?
As part of a hack day, we created a Jenkins operator from the stable/jenkins chart, with a couple of modifications. After installing the operator, we noticed that the Jenkins admin password, which is defined to be generated randomly, is regenerated indefinitely.
Repository can be found here: https://github.com/jenkins-operator/jenkins-operator
Line in template causing the problem: https://github.com/jenkins-operator/jenkins-operator/blob/master/helm-charts/jenkins/templates/secret.yaml#L18
Currently the repository contains a hard coded admin password (set to "admin" of course :1st_place_medal:).
What did you expect to see?
We expected the password to be generated once and remain consistent.
What did you see instead? Under which circumstances?
The password was changing every few moments, probably due to a reconciliation loop.
Environment
operator-sdk version:
operator-sdk version: v0.8.2, commit: 28bd2b0d4fd25aa68e15d928ae09d3c18c3b51da
go version:
go version go1.11.5 linux/amd64
Kubernetes version information:
minishift v1.33.0+ba29431
Kubernetes cluster kind: MiniShift
Are you writing your operator in ansible, helm, or go?
Helm
Thank you very much!
@maorfr This sounds like a duplicate of #1106 and #1291.
The helm-operator reconciler upgrade logic works by running a dry-run upgrade and comparing the manifest with the currently deployed release. If they differ, an upgrade is performed. When using random functions in Helm charts, the dry-run upgrade will always produce a different manifest, resulting in a never-ending upgrade/reconcile loop.
In this case, the solution would be to set .master.adminPassword in your CR spec so that the template uses the provided value instead of the random function.
That would mean storing the password in source control in many cases, which is suboptimal imo.
Can the reconciler check if the resource already exists and take that into consideration? If it exists, and it is a randomly generated password, ignore it? Or something similar?
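For illustration, a minimal Python sketch of the dry-run comparison described above; this is simplified pseudocode with invented names, not the actual helm-operator source:

```python
import secrets

def render_secret_manifest(admin_password=None):
    # Stand-in for templating secret.yaml: with no explicit password,
    # the chart's random function yields a new value on every render.
    password = admin_password or secrets.token_hex(8)
    return f"adminPassword: {password}"

deployed = render_secret_manifest()
# The reconciler's dry-run render differs from the deployed manifest
# every time, so it keeps "upgrading" the release forever...
assert render_secret_manifest() != deployed
# ...while pinning the value in the CR spec makes renders stable.
assert render_secret_manifest("s3cret") == render_secret_manifest("s3cret")
```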
|
gharchive/issue
| 2019-08-25T07:49:34 |
2025-04-01T06:39:55.431018
|
{
"authors": [
"joelanford",
"maorfr"
],
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/issues/1862",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1376078603
|
storage: structure protobuf definition in folders
proto/frontend
proto/middlend
proto/backend
We were thinking the same; see #119.
with #144 we now have storage/v1/backend_nvme_tcp.proto
Can/should we do more, for example: storage/v1/frontend/nvme.proto, storage/v1/frontend/virtio-blk.proto, and storage/v1/backend/nvme_tcp.proto?
|
gharchive/issue
| 2022-09-16T15:04:06 |
2025-04-01T06:39:55.443336
|
{
"authors": [
"alanwhite",
"glimchb"
],
"repo": "opiproject/opi-api",
"url": "https://github.com/opiproject/opi-api/issues/120",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
728885885
|
Added rtadvd configuration and service calls
Added routines to create the rtadvd config file; there seem to be some options that do not exist in rtadvd, so we'll need to see if that makes it inoperable. Using the background launch option fixed the issue with no response to RS when launched using mwexecf.
This is a first draft, so obviously there will need to be some fixes! :)
Need to look at the logging output of rtadvd: in verbose mode it appears to flood the log, yet not all data appears in it; in intermediate mode it doesn't say much, except to tell you it has exited gracefully; in low mode it says sod all.
Modded rtadvd to give more output at debug level 1; it now shows RS received etc. I'll PR that shortly.
rtadvd PR https://github.com/opnsense/src/pull/82
I hope for a clean replacement patch so we can see what changes when we migrate vs. adding duplication and overrides.
Totally agree, but at present, would we not be better to check it out thoroughly first? As I said, there are certain options missing in rtadvd, possibly we don't need them, possibly we do.
Much the same as when we implemented dpinger, we allowed the option until we were happy there were no issues. Unfortunately in this case we do not have a case where pfs was already running rtadvd, so we have no history in that context.
Should not compare this to dpinger. It worked very differently compared to apinger. rtadvd is mostly a drop-in replacement with possible caveats, and if it works we'll switch the service right away. For this we need to be able to review changes and features lost (worst case). Especially the second worries me. And I'm half-sure there will be new bugs later on, but we'll have to see when we get there...
I can issue a separate PR which does just replace radvd with rtadvd. I was also going to change the ctrl function so it just forces a config re-read if the daemon is already running. Another issue is the command-line addition of the interfaces: should we add ALL the interfaces that are not WAN, even if they are down?
We give rtadvd no interfaces on the command line and then switch them on and off with rtadvctl, as the authors seem to have intended.
Yeah, I could not make that work, but it might have been the daemon launch issue tripping me up; I'll try it again without the command-line interfaces.
Yes, that must have been the issue; it works fine using rtadvctl enable ...interfaces. So: launch the daemon if not running, wait for it, then enable the interfaces?
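For illustration, a rough Python sketch of that launch-then-enable sequence; the binary path, the fixed sleep, and the interface names are assumptions, while the rtadvctl invocation follows the usage described above:

```python
import subprocess
import time

def launch_and_enable(interfaces):
    # Start rtadvd with no interfaces on the command line (as discussed
    # above), give the daemon a moment to come up, then switch the
    # interfaces on via rtadvctl.
    subprocess.run(["/usr/sbin/rtadvd"], check=True)
    time.sleep(1)  # crude wait; polling for the daemon would be better
    for ifname in interfaces:
        subprocess.run(["rtadvctl", "enable", ifname], check=True)

launch_and_enable(["igb0", "igb1"])  # placeholder interface names
```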
|
gharchive/pull-request
| 2020-10-24T21:59:09 |
2025-04-01T06:39:55.449372
|
{
"authors": [
"fichtner",
"marjohn56"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/pull/4431",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
50907600
|
Daemon started twice
On Ubuntu 14.04 with memcached cookbook v1.7.2. I'm using the cookbook like this:
memcached_instance 'my_memcached'
.. with these attributes:
default['memcached']['memory'] = 64
default['memcached']['logfilename'] = 'memcached.log'
default['memcached']['port'] = 11211
default['memcached']['user'] = 'memcache'
After converging, I end up with two lots of memcached running. It's like the default Ubuntu installer is also starting up its own init script?
$ ps aux | grep memcached
memcache 1619 0.0 0.0 325396 1184 ? Sl 23:01 0:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1
root 1707 0.0 0.0 168 4 ? Ss 23:02 0:00 runsv memcached-my_memcached
memcache 1718 0.0 0.0 54820 1024 ? Sl 23:02 0:00 /usr/bin/memcached -v -m 64 -U -p 11211 -u memcache -l 0.0.0.0 -c 1024
root 1745 0.0 0.0 184 4 ? S 23:02 0:00 svlogd -tt /var/log/memcached-my_memcached
This should be addressed by #47. Please reopen if incorrect.
|
gharchive/issue
| 2014-12-03T23:22:34 |
2025-04-01T06:39:55.523513
|
{
"authors": [
"jonathanhoskin",
"stonith"
],
"repo": "opscode-cookbooks/memcached",
"url": "https://github.com/opscode-cookbooks/memcached/issues/43",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
978614913
|
🛑 API Server is down
In fc2ad2a, API Server (https://api.optbp.com/hangfire) was down:
HTTP code: 0
Response time: 0 ms
Resolved: API Server is back up in a461801.
|
gharchive/issue
| 2021-08-25T01:28:06 |
2025-04-01T06:39:55.576329
|
{
"authors": [
"optbp-monitor"
],
"repo": "optbp-monitor/optbp-monitor",
"url": "https://github.com/optbp-monitor/optbp-monitor/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
893990634
|
Multiselect doesn't find value (MenuItemSelectType)
=> return ['id' => (string)$key, 'label' => $value].
Heya! I merged the PR by @mariuskli and released version 5.3.3. It should be fixed now. :) Good luck!
|
gharchive/issue
| 2021-05-18T05:58:34 |
2025-04-01T06:39:55.580462
|
{
"authors": [
"Tarpsvo",
"ttungbmt"
],
"repo": "optimistdigital/nova-menu-builder",
"url": "https://github.com/optimistdigital/nova-menu-builder/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
494343095
|
Error when creating menu
I get this error when creating a menu on a fresh installation of Nova v2.3.0 and nova-menu-builder v1.4.0
SQLSTATE[22P02]: Invalid text representation: 7 ERROR: invalid input syntax for integer: "{{resourceId}}" (SQL: select count(*) as aggregate from "menus" where "slug" = yooo and "id" <> {{resourceId}} and "locale" = )
I narrowed it down to the validation rules in MenuResource.php:
->rules('required', 'max:255', 'unique:menus,slug,{{resourceId}},id,locale,' . $request->locale),
I changed the rules like so:
->rules('required', 'max:255', 'unique:menus,slug')
->updateRules('required', 'max:255', 'unique:menus,slug,{{resourceId}},id,locale,' . $request->locale),
And now it works.
However, it seems weird to me that I had to do that. Is there anything wrong with my setup?
Hey! Sorry for the late reply. You're indeed correct about the validation thing: creation rules cannot have {{resourceId}} in them, because it doesn't exist yet. It's weird that it has presumably worked so far, though. Maybe it's something to do with the operating system or SQL version being used.
Anyhow, I fixed it in version 1.4.1 in the same way you described, please give it a try. Thanks!
|
gharchive/issue
| 2019-09-17T00:41:00 |
2025-04-01T06:39:55.583317
|
{
"authors": [
"Tarpsvo",
"zippoxer"
],
"repo": "optimistdigital/nova-menu-builder",
"url": "https://github.com/optimistdigital/nova-menu-builder/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333807177
|
fixing changelog to reflect 2.0.0 rollback and 2.0.0-beta2 release
Simple changes to the changelog, which wasn't updated when we did the 2.0.0 rollback and 2.0.0-beta2 release
build
|
gharchive/pull-request
| 2018-06-19T19:28:09 |
2025-04-01T06:39:55.585713
|
{
"authors": [
"ceimaj",
"mikeng13"
],
"repo": "optimizely/android-sdk",
"url": "https://github.com/optimizely/android-sdk/pull/204",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|