id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
779226573
|
Adding dependent test images
Signed-off-by: Matthias Wessendorf mwessend@redhat.com
the real change against HEAD.... (see: https://github.com/openshift-knative/eventing-kafka/pull/27)
/lgtm
/lgtm
|
gharchive/pull-request
| 2021-01-05T16:05:49 |
2025-04-01T04:35:24.007791
|
{
"authors": [
"lberk",
"matzew"
],
"repo": "openshift-knative/eventing-kafka",
"url": "https://github.com/openshift-knative/eventing-kafka/pull/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1121772969
|
[release-v1.2] [WIP] [Test] Add an optional forward-compatibility test
Testing so compatibility, do not merge
/hold
/retest
/retest
|
gharchive/pull-request
| 2022-02-02T10:51:59 |
2025-04-01T04:35:24.008907
|
{
"authors": [
"devguyio"
],
"repo": "openshift-knative/eventing-kafka",
"url": "https://github.com/openshift-knative/eventing-kafka/pull/542",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2683257988
|
CNF-14955: Adding postgres resources to controller
This adds the ability to manage the Kubernetes resources related to the Postgres database. This is a first iteration which does not include the management of the PV/PVC for the persistent data. It handles only the establishment of the deployment and the user credentials. PV/PVC handling will come in a later commit.
/cc @mlguerrero12
/hold
having an issue with the bundle-run testing
/hold cancel
...false alarm, problem between keyboard and chair.
/lgtm
/lgtm
/approve
|
gharchive/pull-request
| 2024-11-22T13:42:00 |
2025-04-01T04:35:24.026854
|
{
"authors": [
"alegacy",
"mlguerrero12",
"pixelsoccupied"
],
"repo": "openshift-kni/oran-o2ims",
"url": "https://github.com/openshift-kni/oran-o2ims/pull/336",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1703805689
|
OCPBUGS-10979: Display stderr when operator version check fails
To validate the ACM and MCE versions specified, an oc-mirror command is executed to ensure the version specified is listed in the catalog. In the case where the catalog is unreachable, such as an authentication failure due to missing certificates, the stderr of the oc-mirror command is now displayed to provide more information to the user.
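A minimal Go sketch of the approach described above, capturing stderr from the external command and surfacing it in the returned error; the package and function names are illustrative, not the tool's actual code:
package precache

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runOcMirror runs oc-mirror and, on failure, returns an error that includes
// the command's stderr so the user can see the underlying cause (for example
// an authentication failure due to missing certificates).
func runOcMirror(args ...string) error {
	var stderr bytes.Buffer
	cmd := exec.Command("oc-mirror", args...)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("oc-mirror failed: %v: %s", err, stderr.String())
	}
	return nil
}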
/cc @alosadagrande
/hold until https://github.com/openshift/release/pull/39426 merges
|
gharchive/pull-request
| 2023-05-10T12:20:45 |
2025-04-01T04:35:24.028632
|
{
"authors": [
"donpenney"
],
"repo": "openshift-kni/telco-ran-tools",
"url": "https://github.com/openshift-kni/telco-ran-tools/pull/80",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
431613287
|
Installation fails with a machine-config timeout
Using:
commit 489d790e1819e2e1ce489bbac87a4d5bbd5ac47e
Merge: c9bbf54 98ba9fb
Author: Steven Hardy <shardy@redhat.com>
Date: Wed Apr 10 15:22:10 2019 +0100
Merge pull request #312 from zaneb/fix_certs
Run fix_certs script automatically
Virtual environment:
"level=fatal msg=\"failed to initialize the cluster: Cluster operator machine-config is reporting a failure: Failed to resync 4.0.0-0.alpha-2019-04-10-154442 because: [timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true), pool master has not progressed to latest configuration: configuration for pool master is empty, retrying]: timed out waiting for the condition\"
We need to rebase on openshift/installer.
These two PRs are needed: https://github.com/openshift-metalkube/kni-installer/pull/42 and https://github.com/openshift-metalkube/dev-scripts/pull/327
Could you try again with the latest dev-scripts/kni-installer?
This is now working fine as reported by several users and CI so lets close this out.
|
gharchive/issue
| 2019-04-10T17:02:43 |
2025-04-01T04:35:24.043875
|
{
"authors": [
"e-minguez",
"hardys",
"stbenjam"
],
"repo": "openshift-metalkube/dev-scripts",
"url": "https://github.com/openshift-metalkube/dev-scripts/issues/328",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1786188047
|
switch to JSON encoder for logging
This PR is for aligning Exporter logs with ADR-6.
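For context, a minimal sketch of wiring a controller-runtime program to the zap JSON (production) encoder; this assumes the standard sigs.k8s.io/controller-runtime zap helper rather than the exporter's exact flags:
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// UseDevMode(false) selects the production configuration, which emits
	// structured JSON lines like the sample output shown below.
	logger := zap.New(zap.UseDevMode(false))
	ctrl.SetLogger(logger)
	logger.Info("Starting pipeline_service_exporter")
}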
Here's a sample output after the switch to the JSON encoder:
$ go run main.go
{"level":"info","ts":1688390488.3012707,"logger":"main","msg":"Starting pipeline_service_exporter","version":"(version=, branch=, revision=unknown)"}
{"level":"info","ts":1688390488.301288,"logger":"main","msg":"Build context","build":"(go=go1.20.5, platform=linux/amd64, user=, date=)"}
{"level":"info","ts":1688390488.301291,"logger":"main","msg":"Starting Server: ","listen_address":":9117"}
{"level":"info","ts":1688390499.6877043,"logger":"controller","msg":"get of pipelinerun CRD returned successfully"}
I0703 18:51:41.558453 369507 request.go:690] Waited for 1.046846234s due to client-side throttling, not priority and fairness, request: GET:https://a105312d9c2654c8c9f4319765d25350-726d418b983b2e6c.elb.us-east-1.amazonaws.com:6443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
{"level":"info","ts":1688390506.781838,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":9117"}
{"level":"info","ts":1688390506.7819881,"logger":"main","msg":"Starting controller-runtime manager"}
{"level":"info","ts":1688390506.7820272,"msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":1688390506.7820387,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:9117"}
{"level":"info","ts":1688390506.7820816,"msg":"Starting EventSource","controller":"pipelinerun","controllerGroup":"tekton.dev","controllerKind":"PipelineRun","source":"kind source: *v1beta1.PipelineRun"}
{"level":"info","ts":1688390506.78209,"msg":"Starting Controller","controller":"pipelinerun","controllerGroup":"tekton.dev","controllerKind":"PipelineRun"}
{"level":"info","ts":1688390507.1824684,"msg":"Starting workers","controller":"pipelinerun","controllerGroup":"tekton.dev","controllerKind":"PipelineRun","worker count":1}
^C{"level":"info","ts":1688390534.8751037,"msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":1688390534.875172,"msg":"Stopping and waiting for leader election runnables"}
{"level":"info","ts":1688390534.8751929,"msg":"Shutdown signal received, waiting for all workers to finish","controller":"pipelinerun","controllerGroup":"tekton.dev","controllerKind":"PipelineRun"}
{"level":"info","ts":1688390534.8752408,"msg":"All workers finished","controller":"pipelinerun","controllerGroup":"tekton.dev","controllerKind":"PipelineRun"}
{"level":"info","ts":1688390534.8752677,"msg":"Stopping and waiting for caches"}
{"level":"info","ts":1688390534.8753486,"msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":1688390534.875391,"msg":"Wait completed, proceeding to shutdown the manager"}
/lgtm
|
gharchive/pull-request
| 2023-07-03T13:28:03 |
2025-04-01T04:35:24.047700
|
{
"authors": [
"enarha",
"ramessesii2"
],
"repo": "openshift-pipelines/pipeline-service-exporter",
"url": "https://github.com/openshift-pipelines/pipeline-service-exporter/pull/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1319701962
|
bugfix: access_setup
Check that ArgoCD has been installed on the compute cluster
Do not change workspace if we're already in the right one
Signed-off-by: Romain Arnaud rarnaud@redhat.com
As discussed in the chat channel, I think that the PR brings two useful things:
check that Argo CD has been installed
build the kcp plugin (till the kcp team makes it available with the release)
I am merging it as you have confirmed that you successfully validated it.
|
gharchive/pull-request
| 2022-07-27T15:05:09 |
2025-04-01T04:35:24.049965
|
{
"authors": [
"Roming22",
"fgiloux"
],
"repo": "openshift-pipelines/pipeline-service",
"url": "https://github.com/openshift-pipelines/pipeline-service/pull/171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
253202950
|
ignore the broker only at the root
The ag search tool respects the .gitignore file, but having broker alone was too aggressive for the tool. Adding the leading / tells git to ignore broker only at the project root.
Before:
$ /usr/bin/ag --ignore-dir vendor "^\/\/[A-Za-z]"
pkg/handler/handler.go
419://printRequest - will print the request with the body.
pkg/app/app.go
36://CreateApp - Creates the application
pkg/apb/types.go
13://SpecManifest - Spec ID to Spec manifest
After
$ /usr/bin/ag --ignore-dir vendor "^\/\/[A-Za-z]"
pkg/handler/handler.go
419://printRequest - will print the request with the body.
pkg/app/app.go
36://CreateApp - Creates the application
pkg/apb/types.go
13://SpecManifest - Spec ID to Spec manifest
pkg/broker/types.go
183://BindResponse - Response for a bind
pkg/broker/broker.go
120://Login - Will login the openshift user.
870://AddSpec - adding the spec to the catalog for local development
With the previous .gitignore, ag would never look in the broker packages for anything. Now it does. You can also still add files to the broker directory without them being ignored, which was the reason for the !pkg entry before.
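For reference, a sketch of the change in .gitignore terms (the surrounding entries are illustrative):
# before: any path named "broker" is ignored, so ag skipped pkg/broker entirely
broker
!pkg

# after: only the top-level broker directory is ignored
/broker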
|
gharchive/pull-request
| 2017-08-28T00:57:44 |
2025-04-01T04:35:24.053565
|
{
"authors": [
"jmrodri"
],
"repo": "openshift/ansible-service-broker",
"url": "https://github.com/openshift/ansible-service-broker/pull/404",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1900776563
|
TestJob to use base64 for library import
What type of PR is this?
bug
What this PR does / Why we need it?
For a testjob with a library import, the current workflow wraps the library file into a string, uses a cat command to write the library content to lib.sh inline, and then replaces source /path/to/lib.sh with source ./lib.sh
In some cases, the cat << EOF may cause strange issues if the library content has special characters or special formats. For example, the lib in https://github.com/openshift/managed-scripts/pull/137 may result in an "unbound variable" error.
This PR encodes the library file to base64 and decodes it inline, so that we don't need to worry about the formatting inside the library.
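A rough Go sketch of that idea; the function and variable names are illustrative, not the actual backplane-cli implementation:
package testjob

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// inlineLibrary base64-encodes the library content and emits an inline decode
// command, so special characters in the library no longer break the generated
// script the way cat << EOF could.
func inlineLibrary(script string, libContent []byte) string {
	encoded := base64.StdEncoding.EncodeToString(libContent)
	decode := fmt.Sprintf("echo %s | base64 -d > ./lib.sh\n", encoded)
	script = strings.ReplaceAll(script, "source /path/to/lib.sh", "source ./lib.sh")
	return decode + script
}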
Special notes for your reviewer
We can use the existing lib-sourcer example for testing, the behavior remains unchanged.
[managed-scripts/scripts/examples/lib-sourcer] $ ocm.stg backplane testjob logs openshift-job-dev-p9jjt
This is an imported library being used in a managed script.
Pre-checks (if applicable)
[x] Ran unit tests locally
[x] Validated the changes in a cluster
[ ] Included documentation changes with PR
Codecov Report
Merging #200 (d446cf5) into main (691eb6f) will increase coverage by 0.42%.
Report is 19 commits behind head on main.
The diff coverage is 100.00%.
Additional details and impacted files
@@ Coverage Diff @@
## main #200 +/- ##
==========================================
+ Coverage 51.11% 51.53% +0.42%
==========================================
Files 51 51
Lines 3365 3415 +50
==========================================
+ Hits 1720 1760 +40
Misses 1364 1364
- Partials 281 291 +10
Files Changed
Coverage Δ
cmd/ocm-backplane/testJob/createTestJob.go
69.62% <100.00%> (ø)
... and 6 files with indirect coverage changes
/lgtm
|
gharchive/pull-request
| 2023-09-18T11:57:12 |
2025-04-01T04:35:24.082533
|
{
"authors": [
"codecov-commenter",
"feichashao",
"hectorakemp"
],
"repo": "openshift/backplane-cli",
"url": "https://github.com/openshift/backplane-cli/pull/200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1643016043
|
Improve test coverage for cloud credentials command
What type of PR is this?
(cleanup)
What this PR does / Why we need it?
An example implementation of improving a file to the recommended codebase-wide coverage level of 80%.
Which Jira/Github issue(s) does this PR fix?
Resolves
https://issues.redhat.com/browse/OSD-15441
Special notes for your reviewer
Pre-checks (if applicable)
[x] Ran unit tests locally
[ ] Validated the changes in a cluster
[ ] Included documentation changes with PR
Codecov Report
Merging #56 (b3aad15) into main (2b2cfe7) will increase coverage by 3.13%.
The diff coverage is 88.23%.
Additional details and impacted files
@@ Coverage Diff @@
## main #56 +/- ##
==========================================
+ Coverage 42.94% 46.07% +3.13%
==========================================
Files 37 37
Lines 2140 2142 +2
==========================================
+ Hits 919 987 +68
+ Misses 1078 1008 -70
- Partials 143 147 +4
Impacted Files
Coverage Δ
cmd/ocm-backplane/cloud/credentials.go
83.48% <88.23%> (+61.99%)
:arrow_up:
/lgtm
|
gharchive/pull-request
| 2023-03-28T00:26:39 |
2025-04-01T04:35:24.090480
|
{
"authors": [
"codecov-commenter",
"hectorakemp",
"samanthajayasinghe"
],
"repo": "openshift/backplane-cli",
"url": "https://github.com/openshift/backplane-cli/pull/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
834594343
|
Entitled builds broken
Commit 7901cb3961215fcdc5a1bb8572de7aa5f02f56cb breaks entitled builds, because entitlement certificates are no longer passed through to the buildah process.
Host entitlements are linked in /usr/share/rhel/secrets on the build host, which is mounted as /run/secrets (defined in /usr/share/containers/mounts.conf) in the build container. With the commit above only the rhsm portion is copied to the buildah process; the entitlement certificates are not, which results in failed entitled builds.
You are correct - we missed the etc-pki-entitlement directory, as can be observed on my Fedora 33 machine:
$ ls -al /usr/share/rhel/secrets/
total 12
drwxr-xr-x. 1 root root 68 Dec 31 1969 .
drwxr-xr-x. 1 root root 14 Dec 31 1969 ..
lrwxrwxrwx. 4 root root 20 Oct 30 07:28 etc-pki-entitlement -> /etc/pki/entitlement
lrwxrwxrwx. 4 root root 28 Oct 30 07:28 redhat.repo -> /etc/yum.repos.d/redhat.repo
lrwxrwxrwx. 4 root root 9 Oct 30 07:28 rhsm -> /etc/rhsm
I will open up a BZ.
Filed bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1940488
I will close this GitHub issue - work will be tracked there.
/close
|
gharchive/issue
| 2021-03-18T09:52:54 |
2025-04-01T04:35:24.093638
|
{
"authors": [
"adambkaplan",
"nimasamii"
],
"repo": "openshift/builder",
"url": "https://github.com/openshift/builder/issues/227",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
497153955
|
OpenStack: quote passwords
OpenStack passwords can contain special characters and therefore must be quoted to prevent yaml parsing errors.
Fixes: https://github.com/openshift/installer/issues/2392
/hold
The registry parses environment variables as YAML. Most of the time YAML treats a random sequence of symbols as a string. But sometimes it doesn't. So if we have a user-provided string and we want to use it as a value for the REGISTRY_ environment, we should yaml-encode it.
It's better to encode values in ConfigEnv and Secrets (these values should be used only as EnvVarSource).
/cc @coreydaley @ricardomaraschini
But sometimes it doesn't. So if we have a user-provided string and we want to use it as a value for the REGISTRY_ environment, we should yaml-encode it.
Particularly for YAML that is all-numeric or a boolean equivalent (true, false)
I'd suggest to change ConfigEnv to return map[string]interface{} and encode values in the PodTemplateSpec generator. @Fedosin do you want me to do it?
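A hedged sketch of the yaml-encoding idea (the helper name is hypothetical, not the operator's actual ConfigEnv code):
package registryenv

import (
	"strings"

	"gopkg.in/yaml.v2"
)

// yamlEncode renders a user-provided string as a YAML scalar so that values
// such as "true", "0123" or passwords with special characters survive the
// registry's YAML parsing of REGISTRY_* environment variables.
func yamlEncode(value string) (string, error) {
	out, err := yaml.Marshal(value)
	if err != nil {
		return "", err
	}
	// yaml.Marshal appends a trailing newline; strip it before use as an env value.
	return strings.TrimSuffix(string(out), "\n"), nil
}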
|
gharchive/pull-request
| 2019-09-23T15:01:37 |
2025-04-01T04:35:24.109149
|
{
"authors": [
"Fedosin",
"adambkaplan",
"dmage"
],
"repo": "openshift/cluster-image-registry-operator",
"url": "https://github.com/openshift/cluster-image-registry-operator/pull/391",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2549339563
|
honor "operatorLogLevel" from the operator spec
Right now the spec.operatorLogLevel and spec.logLevel values in the operator spec are ignored.
This PR adds a logging controller that ensures that at least the operator log level is dynamically updated based on the value of spec.operatorLogLevel
NOTE: handling spec.logLevel will take a bit more time, as it will likely require some coordination with catalogd and operator-controller maintainers.
@gallettilance, hopefully this makes your debugging at least a little bit easier. It'll definitely be nice for support associates and customers.
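As a rough illustration only (the types and mapping are simplified, not the cluster-olm-operator code), the core of such a logging controller is translating spec.operatorLogLevel into a concrete verbosity and applying it at runtime:
package logging

import "flag"

// verbosityFor maps the operator-style log levels onto klog verbosity values.
func verbosityFor(level string) string {
	switch level {
	case "Debug":
		return "4"
	case "Trace":
		return "6"
	case "TraceAll":
		return "8"
	default: // Normal or unset
		return "2"
	}
}

// apply updates the "v" flag dynamically when the spec changes; this assumes
// klog's flags are registered on the default FlagSet via klog.InitFlags.
func apply(level string) error {
	return flag.Set("v", verbosityFor(level))
}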
|
gharchive/pull-request
| 2024-09-26T02:28:01 |
2025-04-01T04:35:24.131946
|
{
"authors": [
"joelanford"
],
"repo": "openshift/cluster-olm-operator",
"url": "https://github.com/openshift/cluster-olm-operator/pull/70",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2723183650
|
feat(OSD-27020): Tekton eventlistener to terminate TLS
We need this to move CAD to a private cluster:
LB service directly pointing at tekton event listener
event listener is responsible for TLS termination
added the certificate needed by the event listener (ClusterIssuer should already be there)
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 39.75%. Comparing base (4c76c01) to head (5d53cf2).
Additional details and impacted files
@@ Coverage Diff @@
## main #322 +/- ##
=======================================
Coverage 39.75% 39.75%
=======================================
Files 22 22
Lines 1738 1738
=======================================
Hits 691 691
Misses 999 999
Partials 48 48
/hold until cad promotion
/unhold
/lgtm
|
gharchive/pull-request
| 2024-12-06T14:24:01 |
2025-04-01T04:35:24.138289
|
{
"authors": [
"RaphaelBut",
"codecov-commenter",
"typeid"
],
"repo": "openshift/configuration-anomaly-detection",
"url": "https://github.com/openshift/configuration-anomaly-detection/pull/322",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1668278555
|
Revert "Migrate Azure to out of tree cloud provider"
This reverts commit 3706d32a00d0cb0ed02d92f1789960440903dd4a.
Revert of https://github.com/openshift/library-go/pull/1484 to compare impact related to OCPBUGS-11308
/hold
/approve
/lgtm
We are reverting the Azure CCM until we can promote the entire feature in a single PR, need to get this merged to introduce the new individual Azure/GCP feature gates
/hold cancel
We need to bring the library-go master branch back to the desired stable state, which currently doesn't include Azure CCM being promoted. Only then can we start resolving the issues that led to the need to revert.
|
gharchive/pull-request
| 2023-04-14T13:45:57 |
2025-04-01T04:35:24.255579
|
{
"authors": [
"JoelSpeed",
"neisw"
],
"repo": "openshift/library-go",
"url": "https://github.com/openshift/library-go/pull/1499",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2352290539
|
Add support for fake llm provider in operator as well
Description
Adding fake_provider as one of the supported providers, which will be helpful for load testing.
Type of change
[x] Refactor
[x] New feature
[ ] Bug fix
[ ] CVE fix
[ ] Optimization
[ ] Documentation Update
[ ] Configuration Update
[ ] Bump-up dependent library
Related Tickets & Documents
Closes # https://issues.redhat.com/browse/OLS-687?filter=-1
Checklist before requesting a review
[x] I have performed a self-review of my code.
[x] PR has passed all pre-merge test jobs.
[ ] If it is a core feature, I have added thorough tests.
Testing
Tested and verified in local and CI tests went through.
/lgtm
@xrajesh @raptorsun seems straightforward to me, anything missing?
We need to update the bundle artifacts to keep the CSV file in sync @vishnuchalla. Run make update-bundle-catalog and include the bundle artifacts.
/lgtm
/approve
/retest
/retest
|
gharchive/pull-request
| 2024-06-14T01:08:56 |
2025-04-01T04:35:24.261099
|
{
"authors": [
"bparees",
"raptorsun",
"vbelouso",
"vishnuchalla",
"xrajesh"
],
"repo": "openshift/lightspeed-operator",
"url": "https://github.com/openshift/lightspeed-operator/pull/161",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2150245061
|
OLS-369: Reduce the size of the lightspeed-service-api container image
Description
This PR reduces the size of the lightspeed-service-api container image by:
installing the CPU flavor of torch, without the NVIDIA dependency
not installing dev dependencies
not creating a venv
not chowning
resulting in removal of about 7GB or 77% of the original container image size:
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/ols after a23530f28392 2 seconds ago 2.11 GB
quay.io/openshift/lightspeed-service-api latest e837b8189a9e 3 hours ago 9.04 GB
These sizes do not include RAG content.
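As an illustration only (the base image, file layout and dependency manager below are assumptions, not the project's actual Containerfile), the kinds of changes listed above look roughly like this:
FROM registry.access.redhat.com/ubi9/python-311
WORKDIR /app
COPY . .
# CPU-only torch wheel instead of the default CUDA-enabled build
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu
# runtime dependencies only, installed into the system site-packages (no venv, no chown)
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "runner.py"]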
Type of change
[ ] Refactor
[ ] New feature
[ ] Bug fix
[ ] CVE fix
[x] Optimization
[ ] Documentation Update
[ ] Configuration Update
[ ] Bump-up dependent library
Related Tickets & Documents
Related Issue #
Closes # https://issues.redhat.com/browse/OLS-369
Checklist before requesting a review
[x] I have performed a self-review of my code.
[x] PR has passed all pre-merge test jobs.
[ ] If it is a core feature, I have added thorough tests.
Testing
Please provide detailed steps to perform tests related to this code change.
How were the fix/results from this change verified? Please provide relevant screenshots or results.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 96.50%. Comparing base (8ef2294) to head (0c5f67e).
Report is 6 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #485 +/- ##
=======================================
Coverage 96.50% 96.50%
=======================================
Files 45 45
Lines 1402 1402
=======================================
Hits 1353 1353
Misses 49 49
see 4 files with indirect coverage changes
@tisnik could you PTAL at this?
@syedriko while you are updating this Containerfile, can you also fix it so the image is launched using our runner.py instead of uvicorn?
https://github.com/openshift/lightspeed-service/blob/1651629ca7a97edfc8a9438b42d3b0db7f2e6dd8/examples/openshift-lightspeed-tls.yaml#L43 will also need updating.
@bparees This now launches OLS with the new launcher and runs fine via the operator. PTAL.
/lgtm
/approve
|
gharchive/pull-request
| 2024-02-23T02:03:27 |
2025-04-01T04:35:24.271853
|
{
"authors": [
"bparees",
"codecov-commenter",
"syedriko"
],
"repo": "openshift/lightspeed-service",
"url": "https://github.com/openshift/lightspeed-service/pull/485",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2210069903
|
(Trivial) Correct provider names
Description
(Trivial) Correct provider names
Type of change
[x] Refactor
[ ] New feature
[ ] Bug fix
[ ] CVE fix
[ ] Optimization
[ ] Documentation Update
[ ] Configuration Update
[ ] Bump-up dependent library
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 96.39%. Comparing base (ba653e4) to head (b7bface).
Report is 5 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #660 +/- ##
==========================================
- Coverage 96.39% 96.39% -0.01%
==========================================
Files 53 53
Lines 1830 1829 -1
==========================================
- Hits 1764 1763 -1
Misses 66 66
/lgtm
/approve
|
gharchive/pull-request
| 2024-03-27T07:28:53 |
2025-04-01T04:35:24.278009
|
{
"authors": [
"codecov-commenter",
"onmete",
"tisnik"
],
"repo": "openshift/lightspeed-service",
"url": "https://github.com/openshift/lightspeed-service/pull/660",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1335389976
|
[RFE] Replacing obfuscating FileContents type with 2 new content types to allow specific rules depending on config and log content
Bug Report
Recently worked on a case where the customer was a bank and had to follow PCI-DSS compliance.
One of the main focuses of PCI-DSS is to ensure that there is no credit card or related information stored in logs.
This mainly includes the credit card number (16 digits) and the CVV (3 digits), but also account numbers and any other such fields.
There are currently 3 Target types available that can be obfuscated:
-FilePath
-FileContents
-All
Would it be achievable to split the FileContents type into 2 sub-categories:
LogContents: including only the PODs' logs (<filename.log>)
ConfigContents: excluding the log files (cf Possible Solution section for further details)
What did you do?
Created some objects with 16 digits numbers (route.metadata.name & route.spec.hostname), run some curl command against my HTTP page to get some 16 digits numbers in the logs.
When running:
$ grep -iEr "[0-9]{16}" inspect.local.3606219103811510166/
inspect.local.3606219103811510166//namespaces/vlours-httpd/route.openshift.io/routes.yaml: name: "1234567890123456"
inspect.local.3606219103811510166//namespaces/vlours-httpd/route.openshift.io/routes.yaml: host: 6543210987654321.apps.sharedocp4upi49.lab.upshift.rdu2.redhat.com
inspect.local.3606219103811510166//namespaces/vlours-httpd/route.openshift.io/routes.yaml: host: 6543210987654321.apps.sharedocp4upi49.lab.upshift.rdu2.redhat.com
inspect.local.3606219103811510166//namespaces/vlours-httpd/pods/httpd-689fb8c5d4-7tll7/httpd/httpd/logs/current.log:2022-08-11T00:44:56.668879296Z 10.131.0.1 - - [11/Aug/2022:00:44:56 +0000] "GET /index.html?credit-card=1234567890123456 HTTP/1.1" 200 2 "-" "curl/7.29.0"
inspect.local.3606219103811510166//namespaces/vlours-httpd/pods/httpd-689fb8c5d4-7tll7/httpd/httpd/logs/current.log:2022-08-11T00:46:17.682812627Z 10.131.0.1 - - [11/Aug/2022:00:46:17 +0000] "GET /index.html?credit-card=4500123400009876 HTTP/1.1" 200 2 "-" "curl/7.29.0"
$ cat must-gather-config.yaml
config:
obfuscate:
- type: Regex
target: FileContents
regex: "[0-9]{16}"
$ must-gather-clean -c must-gather-config.yaml -i inspect.local.3606219103811510166 -o inspect.local.mgc
What did you expect to see?
With the LogContents type, only the log files should have been obfuscated.
What did you see instead? Under which circumstances?
As currently expected all file contents have been updated, including the YAML files.
$ grep -iEr "[0-9]{16}|x{16}" inspect.local.3606219103811510166
inspect.local.3606219103811510166/namespaces/vlours-httpd/route.openshift.io/routes.yaml: name: "1234567890123456"
inspect.local.3606219103811510166/namespaces/vlours-httpd/route.openshift.io/routes.yaml: host: 6543210987654321.apps.sharedocp4upi49.lab.upshift.rdu2.redhat.com
inspect.local.3606219103811510166/namespaces/vlours-httpd/route.openshift.io/routes.yaml: host: 6543210987654321.apps.sharedocp4upi49.lab.upshift.rdu2.redhat.com
inspect.local.3606219103811510166/namespaces/vlours-httpd/pods/httpd-689fb8c5d4-7tll7/httpd/httpd/logs/current.log:2022-08-11T00:44:56.668879296Z 10.131.0.1 - - [11/Aug/2022:00:44:56 +0000] "GET /index.html?credit-card=1234567890123456 HTTP/1.1" 200 2 "-" "curl/7.29.0"
inspect.local.3606219103811510166/namespaces/vlours-httpd/pods/httpd-689fb8c5d4-7tll7/httpd/httpd/logs/current.log:2022-08-11T00:46:17.682812627Z 10.131.0.1 - - [11/Aug/2022:00:46:17 +0000] "GET /index.html?credit-card=4500123400009876 HTTP/1.1" 200 2 "-" "curl/7.29.0"
$ grep -iEr "[0-9]{16}|x{16}" inspect.local.mgc
inspect.local.mgc/namespaces/vlours-httpd/route.openshift.io/routes.yaml: name: "xxxxxxxxxxxxxxxx"
inspect.local.mgc/namespaces/vlours-httpd/route.openshift.io/routes.yaml: host: xxxxxxxxxxxxxxxx.apps.sharedocp4upi49.lab.upshift.rdu2.redhat.com
inspect.local.mgc/namespaces/vlours-httpd/route.openshift.io/routes.yaml: host: xxxxxxxxxxxxxxxx.apps.sharedocp4upi49.lab.upshift.rdu2.redhat.com
inspect.local.mgc/namespaces/vlours-httpd/pods/httpd-689fb8c5d4-7tll7/httpd/httpd/logs/current.log:2022-08-11T00:44:56.668879296Z 10.131.0.1 - - [11/Aug/2022:00:44:56 +0000] "GET /index.html?credit-card=xxxxxxxxxxxxxxxx HTTP/1.1" 200 2 "-" "curl/7.29.0"
inspect.local.mgc/namespaces/vlours-httpd/pods/httpd-689fb8c5d4-7tll7/httpd/httpd/logs/current.log:2022-08-11T00:46:17.682812627Z 10.131.0.1 - - [11/Aug/2022:00:46:17 +0000] "GET /index.html?credit-card=xxxxxxxxxxxxxxxx HTTP/1.1" 200 2 "-" "curl/7.29.0"
Environment
must-gather-clean version:
must-gather-clean version
Version: pkg.Version{Version:"v0.0.1-0-gd02283f", GitCommit:"d02283f", BuildDate:"2021-10-01T09:44:36Z", GoOs:"darwin", GoArch:"amd64"}
OpenShift Version:
Any OCP version, as this is mostly about Application namespace clean-up.
Possible Solution
Replace the current FileContents type with 2 sub-types, ensuring that the All type includes them both.
Looking into must-gather and inspect content, most of the files are yaml or logs.
In some cases, there are some additional file extensions:
.json
.config
.stderr
.log.gz (especially audit MG)
files without extensions
Having the type LogContents focus only on ".log" (and ".stderr") files and the type ConfigContents include everything else will cover the current behaviour.
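With that split, the reporter's config could then target only logs, for example (hypothetical, since the LogContents target does not exist today):
config:
  obfuscate:
  - type: Regex
    target: LogContents
    regex: "[0-9]{16}"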
Additional context
Easily reproducible. I tried to attach my inspect logs, but GitHub blocked it, even if the extension is supported.
$ ls -l inspect.local.3606219103811510166.tgz
-rw-r--r-- 1 vlours staff 35347 11 Aug 10:49 inspect.local.3606219103811510166.tgz
$ shasum -a 256 inspect.local.3606219103811510166.tgz
4362179f6486df3a3709fe579fe3b3ad143cf39896e11131d5caf088c6fa96ea inspect.local.3606219103811510166.tgz
/remove-lifecycle stale
I'm usually available in Slack or Gchat if required during early APAC hours (Brisbane AEST TZ).
Feel free to reach out if you want to discuss it.
Cheers,
|
gharchive/issue
| 2022-08-11T02:17:04 |
2025-04-01T04:35:24.302681
|
{
"authors": [
"vlours"
],
"repo": "openshift/must-gather-clean",
"url": "https://github.com/openshift/must-gather-clean/issues/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
535502678
|
Should "odo list" not fail when not logged into the OpenShift cluster?
User Story
As a user or (ide plugin) I would like to list all components in a given path and I should not be required to be connected to the cluster.
Current behaviour
# not logged into OpenShift (CRC) yet
$ odo list --path ~/src
APP NAME TYPE SOURCE STATE CONTEXT
app backend java target/wildwest-1.0.jar Not Pushed /home/dshah/src/Wild-West-Backend
app frontend nodejs ./ Not Pushed /home/dshah/src/Wild-West-Frontend
app node nodejs ./ Not Pushed /home/dshah/src/nodejs-ex
# let's login
$ odo login -u developer
Connecting to the OpenShift cluster
Authentication required for https://api.crc.testing:6443 (openshift)
Username: developer
Password:
Login successful.
You have one project on this server: "myproject"
Using project "myproject".
# now it shows the components' STATE as Pushed!
$ odo list --path ~/src
APP NAME TYPE SOURCE STATE CONTEXT
app backend java target/wildwest-1.0.jar Pushed /home/dshah/src/Wild-West-Backend
app frontend nodejs ./ Pushed /home/dshah/src/Wild-West-Frontend
app node nodejs ./ Not Pushed /home/dshah/src/nodejs-ex
Acceptance Criteria
odo list --path should work without odo being logged or connected to the cluster
if odo list --path command can't determine the correct state (Pushed, Not Pushed), because it is not connected to the cluster, the component should have an Unknown state.
Example: $ odo list --path ~/src
APP NAME TYPE SOURCE STATE CONTEXT
app backend java target/wildwest-1.0.jar Unknown /home/dshah/src/Wild-West-Backend
app frontend nodejs ./ Unknown /home/dshah/src/Wild-West-Frontend
app node nodejs ./ Unknown /home/dshah/src/nodejs-ex
Should it not fail? This gives a false indication that the components are not pushed, and the user might try to do odo push instead.
The command should NOT fail. The intention of odo list --path ~/src is to find components, and a cluster connection shouldn't be required.
But the current behavior is also not correct and can be misleading.
If we can't verify the state of the component for some reason, we should show unknown state.
$ odo list --path ~/src
APP NAME TYPE SOURCE STATE CONTEXT
app backend java target/wildwest-1.0.jar Unknown /home/dshah/src/Wild-West-Backend
app frontend nodejs ./ Unknown /home/dshah/src/Wild-West-Frontend
app node nodejs ./ Unknown /home/dshah/src/nodejs-ex
This issue seems well defined in terms of what work needs to be done. So no further analysis needed in my opinion.
/triage ready
/assign
|
gharchive/issue
| 2019-12-10T05:27:29 |
2025-04-01T04:35:24.312922
|
{
"authors": [
"adisky",
"dharmit",
"girishramnani",
"kadel"
],
"repo": "openshift/odo",
"url": "https://github.com/openshift/odo/issues/2444",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
847250771
|
OSD-6853: Use REPO_DIGEST for CatalogSource
Use REPO_DIGEST (e.g. quay.io/app-sre/osd-metrics-exporter-registry@sha256:abc1234...) instead of the tag-based URI (e.g. quay.io/app-sre/osd-metrics-exporter-registry:production-0f3da87) to reference the CatalogSource image.
OSD-6853
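For illustration (the manifest below is a sketch, not the repo's actual file), the CatalogSource would then reference the image by digest:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osd-metrics-exporter-registry
spec:
  sourceType: grpc
  image: quay.io/app-sre/osd-metrics-exporter-registry@sha256:abc1234...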
/hold cancel
|
gharchive/pull-request
| 2021-03-31T20:08:03 |
2025-04-01T04:35:24.419850
|
{
"authors": [
"2uasimojo",
"jharrington22"
],
"repo": "openshift/osd-metrics-exporter",
"url": "https://github.com/openshift/osd-metrics-exporter/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1487342778
|
Adding the ability to re-import tags when they are wedged
To force the re-import of tags of the ocp/4.13-art-latest imagestream, you can run:
$ ./release-tool.py import 4.13-art-latest
To force the re-import of tags of the ocp-s390x/4.13-art-latest-s390x imagestream, you can run:
$ ./release-tool.py -a s390x import 4.13-art-latest
/lgtm
|
gharchive/pull-request
| 2022-12-09T19:35:15 |
2025-04-01T04:35:24.427838
|
{
"authors": [
"bradmwilliams",
"jupierce"
],
"repo": "openshift/release-controller",
"url": "https://github.com/openshift/release-controller/pull/476",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
190877543
|
panic: runtime error: invalid memory address or nil pointer dereference on Linux x86_64
using release v1.1.3...
s2i build https://github.com/pmorie/simple-ruby.git openshift/ruby-20-centos7 test-ruby-app
I get...
error: Unable to load docker config
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x4ef3a0]
goroutine 1 [running]:
panic(0x7c26e0, 0xc420012160)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/openshift/source-to-image/pkg/docker.GetImageRegistryAuth(0x0, 0x7fff89f4c978, 0x19, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/home/rsoares/Downloads/gocode/src/github.com/openshift/source-to-image/_output/local/go/src/github.com/openshift/source-to-image/pkg/docker/util.go:67 +0x250
main.newCmdBuild.func1(0xc420097200, 0xc4202a41e0, 0x3, 0x3)
/home/rsoares/Downloads/gocode/src/github.com/openshift/source-to-image/_output/local/go/src/github.com/openshift/source-to-image/cmd/s2i/main.go:118 +0x1003
github.com/openshift/source-to-image/vendor/github.com/spf13/cobra.(*Command).execute(0xc420097200, 0xc4202a4150, 0x3, 0x3, 0xc420097200, 0xc4202a4150)
/home/rsoares/Downloads/gocode/src/github.com/openshift/source-to-image/_output/local/go/src/github.com/openshift/source-to-image/vendor/github.com/spf13/cobra/command.go:603 +0x439
github.com/openshift/source-to-image/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420096d80, 0xc420183eb0, 0xc4202593d0, 0xc420097b00)
/home/rsoares/Downloads/gocode/src/github.com/openshift/source-to-image/_output/local/go/src/github.com/openshift/source-to-image/vendor/github.com/spf13/cobra/command.go:689 +0x367
github.com/openshift/source-to-image/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420096d80, 0xc420183ea8, 0x1)
/home/rsoares/Downloads/gocode/src/github.com/openshift/source-to-image/_output/local/go/src/github.com/openshift/source-to-image/vendor/github.com/spf13/cobra/command.go:648 +0x2b
main.main()
/home/rsoares/Downloads/gocode/src/github.com/openshift/source-to-image/_output/local/go/src/github.com/openshift/source-to-image/cmd/s2i/main.go:415 +0x660
My env is:
uname -a
Linux rsoares 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: RedHatEnterpriseWorkstation
Description: Red Hat Enterprise Linux Workstation release 7.3 (Maipo)
Release: 7.3
Codename: Maipo
docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
I tested with the Linux s2i binary and also tried compiling manually. Both fail.
can you share your docker config file, obscuring any credentials/tokens?
My docker config has only this:
cat ~/.docker/config.json
{
"auths": {}
}
The docker unit on Systemd:
cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --insecure-registry 172.30.0.0/16 --bip=172.17.42.1/16 -g /docker-storage
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
It looks like something doesn't like that you have no auths at all in your docker config.json. As a workaround you can do a docker login to dockerhub (assuming you have a dockerhub account).
(I was able to recreate this locally).
yeah! After logging in to hub.docker.com it worked fine!
|
gharchive/issue
| 2016-11-22T00:43:43 |
2025-04-01T04:35:24.493062
|
{
"authors": [
"bparees",
"rafaeltuelho"
],
"repo": "openshift/source-to-image",
"url": "https://github.com/openshift/source-to-image/issues/640",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
261527405
|
http://77.93.198.186/v1.2/4/shops returns 500 server error
Since 9/27, http://77.93.198.186/v1.2/4/shops started to return 500 server error. Can someone please help, thanks.
http://77.93.198.186/v1.2/4/shops returns HTTP status code: 500 Internal Server Error !!
Please can you share the API source code? screenshot
Me too,
Sorry for the inconvenience. The latest release is connected to the apiary mock server providing the basic functionality.
|
gharchive/issue
| 2017-09-29T04:14:19 |
2025-04-01T04:35:24.514843
|
{
"authors": [
"KaneNguyen",
"Skornos",
"fengshao-auryc",
"kassemitani"
],
"repo": "openshopio/openshop.io-ios",
"url": "https://github.com/openshopio/openshop.io-ios/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
345002679
|
Enable undo
Fixes issue #512
Brief summary of changes
Enablement of the undo button is handled by the NetBeans platform based on what windows (TopComponents) are "active" and whether they provide an undo support class. This PR unifies all editing to make one window active and provides a singleton undo support class across application windows.
Testing I've completed
Edits in Properties window, context menus of the Navigator and graphically dragging PathPoints are all undoable and the undo button is enabled.
CHANGELOG.md (choose one)
no need to update because it is a bugfix
@chrisdembia Not sure if review would make sense since these are bits and pieces of a larger context. @jimmyDunne , @tkuchida or @aseth1 if you can test the artifact of this PR on Windows and give me feedback that would be great. Thank you.
Testing with arm26 on OpenSim a6d9c27e-2018-07-27:
Below are the testing steps I took, trying a number of use cases. I have highlighted some issues I ran into and marked them with (Needs discussion) or (Needs fix) or some combination of both when I wasn't sure which was more appropriate.
Hiding bodies
a) Bodies -> Display -> Hide.
b) Bodies, r_humerus -> Display -> Hide.
(Needs discussion) Similar results: Undo works, but needs multiple clicks to work as might be expected. For instance, there are 4 objects within r_humerus, and 4 undos are needed to go back to the original state.
Changing color
a) Bodies, r_humerus, r_humerus_geom_1: changing color
Undo returns back to the original color.
b) r_humerus_geom_1: change color 4 times
Undo steps through each color change.
Show axes or show mass center
a) r_humerus: Show axes
b) r_humerus: Show mass center
c) Joints, r_shoulder, base_offset: Show axes
(Needs discussion): These act like toggles as previously, but these changes do not trigger an undo.
d) r_humerus -> show axes, then r_humerus_geom_1 -> Hide
Undo will do the "unhide" step and does not affect the axes shown.
Muscle properties
All tests done by changing the value in the properties window, hitting the undo button, and seeing the property change back.
a) Tested one at a time changes (floats: max_isometric_force, optimal_fiber_length, tendon_slack_length, max_contraction_velocity, max_pennation_angle
b) Tested one at a time changes (bool): appliesForce
Correctly reset to the original value in the model
Marker location
Change r_acromion location from (-0.01256 0.04 0.17) to (-0.01256 0.1 0.17). DO NOT CLICK ANYWHERE ELSE.
Result: Property updates correctly and moves in visualizer.
Hit Undo.
(Needs fix) Result: Visualizer updates, but properties window does not update. This is distinctly different than what is observed with muscle properties.
Click in the Navigator.
Result: The value in the property updates.
Undo/Redo button behavior when working with any property
Bodies -> base. Change the mass_center to (0 1 0).
Result: mass_center updates, and the Undo button lights up.
Click anywhere in the properties window:
a) On the top bar (i.e. anywhere here):
b) On a property
(Needs discussion/fix) Result: Undo/Redo buttons grey out and cannot be used.
Click anywhere out of the properties window
Result: Undo/Redo buttons come back and can be used
Moving a path point graphically:
TRIlat -> Show Only
Ctrl-click a path point and move it.
Hit Undo
(Needs fix) Result: Muscle path is quite contorted.
Joint offset translation/rotation
Joints -> r_shoulder -> base_offset
Change Translation or Rotation
(Needs fix): Undo does not come up.
Thanks for testing @carmichaelong Let's be clear on the scope of the fix: Collapsing multiple undos, or commands that have not been undoable in the past (COM, Axes that do not correspond to properties) do indeed need discussion if no issues are open already but are unrelated to the enabling of undo button.
As I explained in some earlier meeting, the undo behavior has two components:
What operations to perform when undo/redo is actually chosen by user.
Enable the undo/redo button
This PR deals only with 2. Individual issues would need to be opened for 1 as needed.
With that in mind, I believe the only sequence you reported that falls under 2. is editing multiple Properties while focus stays in the Properties window. I appreciate if you'd confirm and I'll investigate. Thank you :+1:
If you hit enter or tab (used across the application to indicate finish edit) after a property edit then you get the expected undo enabled.
While testing yesterday, this was true when I was testing all muscle properties. For some reason when testing today, this behavior is not true for any property at all (including muscle properties). I have to click back on the Navigator window to see the Undo option, and does work as expected in this PR at that point.
Separately, if I understand the scope of this PR, then it LGTM for merging, and some subset of the testing cases here should be moved to current or new issues. If we merge, then the test case should be something like:
Open up any model.
2a. Change any property (float or bool).
2b. Move a muscle path point graphically.
2c. Hide a body.
Click on the Navigator window.
Ensure that the Undo button shows up.
5a. Undoing a property panel change should be done correctly,
5b. Undoing a muscle path change will change the path, but it won't update back to the original path
5c. Undoing a "Hide" will only undo "one-by-one" but should work.
@aymanhab Let me know if this makes sense? I'm happy to discuss further.
Perfect, thank you @carmichaelong much appreciated.
I repeated the test that @carmichaelong mentioned on Mac, OpenSim 95345a0-2018-08-02. So, this issue is Verified.
However, multiple property edits cannot be redone. I've opened a separate issue for that (#938).
|
gharchive/pull-request
| 2018-07-26T20:57:08 |
2025-04-01T04:35:24.539600
|
{
"authors": [
"aymanhab",
"carmichaelong",
"chrisdembia"
],
"repo": "opensim-org/opensim-gui",
"url": "https://github.com/opensim-org/opensim-gui/pull/909",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1702253067
|
Rename master branch to main
Rename master branch to main and make sure that the CI and other dependent stuff still work.
Completed. Hence, closing this ticket.
|
gharchive/issue
| 2023-05-09T15:16:10 |
2025-04-01T04:35:24.541150
|
{
"authors": [
"aj3sh",
"sugat009"
],
"repo": "opensource-nepal/py-nepali",
"url": "https://github.com/opensource-nepal/py-nepali/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1328244943
|
Written guidance on how to make a job on OSD successful
We don't have guidance docs about how a job poster can make their job successful.
Let's start a thread here on how to write an article - which could then be used in comms related to posting a job about how to make the post successful.
Ideas:
Add a hello and contact info to your forum post when a job is approved
Check the forum post at least once per month/when able to
Answer questions for the designers, typical questions include etc....
Hi, I was thinking about working on this issue, but before doing that, I just wanted to clarify something. Should the person working on this issue provide a documented approach and some tips on how to make a job post successful?
…working on this issue provide a documented approach and some tips on how to make a job post successful?
Yes!
@Erioldoesdesign mentioned a few best practices, and I think it would also be good to provide a faux job example and why it is good. (Clear task, goals for the task, target group defined, contact person clear…)
Maybe start on an etherpad or gdoc or the like, in case you want to get quick feedback?
Hi, I was thinking about working on this issue, but before doing that, I just wanted to clarify something. Should the person working on this issue provide a documented approach and some tips on how to make a job post successful?
Agreed with @jdittrich sounds like a perfect way to start a doc with some mock examples and the rest of the OSD community can help out by reviewing :)
hi, if this issue is still open i would love to contribute
hi, if this issue is still open i would love to contribute
Go for it this is a open issue that will serve as the basis for a document that can be openly collaborated on
Is this along the lines of what you're looking for? https://docs.google.com/document/d/1TvMIyV4AMG2y5y7Yy9WvdrgxEUEuA7366FZzaWIpz0c/edit?usp=sharing - and could I make any pull requests to push this into any repos?
is this ok, i sent an email also
:))
@subz3r0o0 can you give me access to your google document please - I sent a request for access via my email accounts or you can open up the document for public viewing so I can take a read and make any suggestion :)
Hi, I changed the access rights - is it ok now?
Thanks
After finalising the writing, I plan to move it to a Canva document for aesthetic purposes :)
@subz3r0o0 Amazing work - i'm blown away at how detailed and thoughtful your work on this issue is 👏 🚀
I've added some extra examples and edited some words - feel free to accept or reject those in the google document as you see fit :) (I will not be upset if you don't like my writing 😄 )
I would recommend this be added both as a .md file in the jobs repository and also an article on the website: https://opensourcedesign.net/articles/
I would also think it would be great to add as a forum post. If you want to make a visually exciting version in Canva do go ahead and we can upload the open doc files or .pdf to the repo for people to see but also embed in the article page :)))))))
I made this on Canva, what do you think? :))
https://www.canva.com/design/DAGEZuwNXPM/ApoXbQOLvP_ep6veR7fulw/edit?utm_content=DAGEZuwNXPM&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton
I made this on Canva, what do you think? :)) https://www.canva.com/design/DAGEZuwNXPM/ApoXbQOLvP_ep6veR7fulw/edit?utm_content=DAGEZuwNXPM&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton
Love it! looks great @subz3r0o0
So What i'd suggest next in order to contribute this to the website + wider community is:
Make a Pull Request in the '_posts' section of this repo (there's a guide on how to add an article in that directory) - that's the folder that populates the Articles section of the website. I would recommend just copy-pasting the text into a page and adding formatting if you can.
Make a Pull Request for a .pdf (or other open format document) from your Canva file in the top level directory of the jobs repo. It would also be good to have it as a .md (markdown) file there too, in case people going to the repo are wary of .pdf files (e.g. .pdf files can have malware in them :) ). You can then link this file in the article and we can also add a link to this guide on the jobs page.
Lastly, it'd be great if you can show this document on the forum: https://discourse.opensourcedesign.net/ we have a related thread here and I'm sure lots of community members will be excited!
Done! I made a pull request.
Closing this issue as it was done 😄 https://github.com/opensourcedesign/opensourcedesign.github.io/pull/445/files
|
gharchive/issue
| 2022-08-04T08:16:44 |
2025-04-01T04:35:24.556946
|
{
"authors": [
"Dibyajyoti2002",
"Erioldoesdesign",
"jdittrich",
"subz3r0o0"
],
"repo": "opensourcedesign/opensourcedesign.github.io",
"url": "https://github.com/opensourcedesign/opensourcedesign.github.io/issues/393",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2757385524
|
Ssh dockerfile
Add SSH identity verification for BigFiles.
Starting gate checks; this can be merged once the gate checks pass.
Check | Status | Log
---|---|---
DT coverage | 58.7% | view log
Sensitive information scan | ✅ | view log
Open-source vulnerability scan | ✅ | view log
Code check | ✅ | view log
Secure coding scan | ✅ | view log
Pipeline link | | click to open
Starting gate checks; this can be merged once the gate checks pass.
Check | Status | Log
---|---|---
DT coverage | 62.4% | view log
Sensitive information scan | ❌ | view log
Open-source vulnerability scan | ✅ | view log
Code check | ✅ | view log
Secure coding scan | ✅ | view log
Pipeline link | | click to open
Starting gate checks; this can be merged once the gate checks pass.
Check | Status | Log
---|---|---
DT coverage | 62.4% | view log
Sensitive information scan | ✅ | view log
Open-source vulnerability scan | ✅ | view log
Code check | ✅ | view log
Secure coding scan | ✅ | view log
Pipeline link | | click to open
|
gharchive/pull-request
| 2024-12-24T08:10:42 |
2025-04-01T04:35:24.572886
|
{
"authors": [
"Zherphy",
"shishupei"
],
"repo": "opensourceways/BigFiles",
"url": "https://github.com/opensourceways/BigFiles/pull/48",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2566489584
|
Implement collapsible columns for XLSX
Overview
This PR introduces the ability to collapse selected columns in XLSX documents.
Usage
$columns = [2, 3];
$sheet->setColumnAttributes(1, false, true, ...$columns);
It will group the 2nd and 3rd columns under the 1st outline level.
Challenges
I'm still unsure about the best way to implement this and would really appreciate your feedback and suggestions. If you think a different approach might be better, please let me know. I'm open to refactoring it if we can agree on a solid solution.
Initially, I implemented this by adding extra arguments to the setColumnWidth() and setColumnWidthForRange() methods, but then I realized that this might not be the best idea, as every user would need to update their codebase to adapt to the new method structure.
I also considered creating extra method(s) like setOutlineLevel() and setCollapsed() directly in the AbstractOptions class. However, implementing this may be challenging, because we would need to add options to the existing ColumnWidth objects in $this->COLUMN_WIDTHS.
Next Steps
[ ] Update the documentation
[ ] Implement it for AbstractOptions.php
[ ] Add more tests
[ ] Refactor code to follow all code standards
Hey guys, I created a draft pull request so I could ask you about the best possible implementation of this feature. I've put my thoughts on it in the PR description in the Challenges section.
@Slamdunk Hey! 👋 We started using OpenSpout in our project. Being able to collapse columns (in XLSX files) is an important feature for us, and I think it is useful for other OpenSpout users as well.
We're currently using our fork of OpenSpout with this feature included. However, we'd prefer to get it merged, so it will be part of the official release and we don't need to keep using our own fork.
This current PR is not completely ready, because we were not sure what would be the best approach. My colleague @kamilrzany explained the challenges and some ideas in this PR's description.
Would you be able to take a look and let us know if you're open to merging this feature, and which approach would be preferred? We're happy to work a bit more on the PR, to make it nice according to the chosen approach.
I've sent a contribution through GitHub Sponsors to make up for your time 🙂
Hello, thank you for your sponsorship and the time you took to improve this library.
I researched the topic a bit more and found that the <col> attribute in XLSX has been architected on the false assumption that all the attributes are related, when in fact they aren't.
This means for us that we can safely implement the properties unrelated one from each other, and only merge them once we need to write the <col> tag.
So the approach I'd take is the following:
Write different classes: ColumnWidth (already done), ColumnHidden, ColumnOutlineLevel and ColumnCollapsed
Expand the Sheet API to allow setting them separately
Merge them in the <col> attribute with distinct column ranges. This means that if the user asks for A+B to be hidden and B+C with OutlineLevel>1, we need 3 <col> tags (A, B and C), but instead for A+B hidden and A+B+C OutlineLevel>1 only two <col> are needed (A+B and C)
Completely skip the Options class: it has been a mistake by me to consider the ColumnWidth for the Workbook and it will be removed from the next major release. All the attributes belong only to the Worksheet
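To illustrate the merging rule with the A+B hidden / B+C outline example above (attribute values are illustrative), the worksheet writer would emit distinct <col> ranges such as:
<col min="1" max="1" hidden="1"/>
<col min="2" max="2" hidden="1" outlineLevel="1"/>
<col min="3" max="3" outlineLevel="1"/>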
@Slamdunk Thanks so much for your investigation and clear reply! I'll review this with @kamilrzany and we'll schedule this refactor in our planning. We'll update the PR when it's ready.
|
gharchive/pull-request
| 2024-10-04T14:13:37 |
2025-04-01T04:35:24.652316
|
{
"authors": [
"Slamdunk",
"jhogervorst",
"kamilrzany"
],
"repo": "openspout/openspout",
"url": "https://github.com/openspout/openspout/pull/275",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2021066526
|
Fetch PractitionerId, PractitionerLocation, PractitionerCareTeam, PractitionerOrganization from Shared Preferences
Extract PractitionerId, PractitionerLocation, PractitionerCareTeam, PractitionerOrganization
* Create new enum class called PractitionerKey
* Create a function called extractSharedPrefValues in RulesFactory
IMPORTANT: Where possible all PRs must be linked to a Github issue
Fixes #2900
Depends on PR #2873 (Which saves practitioner details on shared preferences)
Engineer Checklist
[x] I have written Unit tests for any new feature(s) and edge cases for bug fixes
[x] I have added any strings visible on UI components to the strings.xml file
[ ] I have updated the CHANGELOG.md file for any notable changes to the codebase
[ ] I have run ./gradlew spotlessApply and ./gradlew spotlessCheck to check my code follows the project's style guide
[ ] I have built and run the FHIRCore app to verify my change fixes the issue and/or does not break the app
[ ] I have checked that this PR does NOT introduce breaking changes that require an update to Content and/or Configs? If it does add a sample here or a link to exactly what changes need to be made to the content.
Code Reviewer Checklist
[ ] I have verified Unit tests have been written for any new feature(s) and edge cases
[ ] I have verified any strings visible on UI components are in the strings.xml file
[ ] I have verifed the CHANGELOG.md file has any notable changes to the codebase
[ ] I have verified the solution has been implemented in a configurable and generic way for reuseable components
[ ] I have built and run the FHIRCore app to verify the change fixes the issue and/or does not break the app
@Raynafs it would be nice if you added documentation on how to use this particular rule
@Raynafs it would be nice if you added documentation on how to use this particular rule
Documentation is part of the implementation plan for the issue
|
gharchive/pull-request
| 2023-12-01T15:06:30 |
2025-04-01T04:35:24.661733
|
{
"authors": [
"Raynafs",
"Rkareko",
"SebaMutuku"
],
"repo": "opensrp/fhircore",
"url": "https://github.com/opensrp/fhircore/pull/2903",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1013346660
|
568 | Quest Patient List View with configurable views
Fixes #568 - Depends on #586
Type
Choose one: Feature
Checklist
[x] I have written Unit tests for any new feature(s) and edge cases for bug fixes
[x] I have added any strings visible on UI components to the strings.xml file
[x] I have updated the CHANGELOG.md file for any notable changes to the codebase
[x] I have run ./gradlew spotlessApply and ./gradlew spotlessCheck to check my code follows the project's style guide
[x] I have built and run the fhircore app to verify my change fixes the issue and/or does not break the app
Is it also possible to replace the png icons with svg variants?
Update the register button design and text
Done! Incorporated the feedback
@ndegwamartin Unit tests written for new code in Quest do not reflect in CI. I tried writing the tests in engine, but most of the classes need a shadow application and implementations for base classes. Kindly look into it so that I can see the missing lines after the Quest tests are analyzed
|
gharchive/pull-request
| 2021-10-01T13:25:27 |
2025-04-01T04:35:24.665946
|
{
"authors": [
"ellykits",
"maimoonak"
],
"repo": "opensrp/fhircore",
"url": "https://github.com/opensrp/fhircore/pull/595",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
175586963
|
SSL_CTX_use_PrivateKey() fails to report mismatch with certificate
Quoth the man page:
SSL_CTX_use_PrivateKey() adds pkey as private key to ctx.
SSL_CTX_use_RSAPrivateKey() adds the private key rsa of type RSA to
ctx. SSL_use_PrivateKey() adds pkey as private key to ssl;
SSL_use_RSAPrivateKey() adds rsa as private key of type RSA to ssl. If
a certificate has already been set and the private does not belong to
the certificate an error is returned. To change a certificate, private
key pair the new certificate needs to be set with SSL_use_certificate()
or SSL_CTX_use_certificate() before setting the private key with
SSL_CTX_use_PrivateKey() or SSL_use_PrivateKey().
So if I add a certificate, and a private key that doesn't belong to the certificate, I'm supposed to get an error. Simple...
Except it doesn't seem to work like that. My torture test suite contains RSA, DSA and EC keys (and certs). If I load one cert and a different key, it seems to silently accept that, then just neglect to do any client cert auth on the wire without ever telling me.
It looks like applications need to explicitly call SSL_CTX_check_private_key() to check for the classes of mismatch which SSL_CTX_use_PrivateKey() doesn't report, if they want to behave well and report errors coherently to their users? I've fixed my application accordingly, but the documentation still wants fixing.
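A minimal sketch of that explicit check (file names are placeholders and error handling is trimmed; an illustration, not the exact code from my application):

#include <openssl/ssl.h>
#include <openssl/err.h>

static int load_client_cert_and_key(SSL_CTX *ctx)
{
    if (SSL_CTX_use_certificate_file(ctx, "client-cert.pem", SSL_FILETYPE_PEM) != 1)
        return 0;
    if (SSL_CTX_use_PrivateKey_file(ctx, "client-key.pem", SSL_FILETYPE_PEM) != 1)
        return 0;
    /* The extra step: this call does report a key/certificate mismatch. */
    if (SSL_CTX_check_private_key(ctx) != 1) {
        ERR_print_errors_fp(stderr);
        return 0;
    }
    return 1;
}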
This is not a new bug in 1.1.1, so removing that milestone.
@t8m In the current master branch, this seems to work as expected—the public key part is correctly compared with the one from the certificate. Additionally, the documentation accurately reflects the actual behavior of the function calls. Either I’m missing something, or this issue is now obsolete.
Yeah, I am closing this now.
|
gharchive/issue
| 2016-09-07T19:36:58 |
2025-04-01T04:35:24.669772
|
{
"authors": [
"dwmw2",
"erbsland-dev",
"richsalz",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/1549",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
946844923
|
makefile:56: *** missing separator. Stop.
I am trying to build openssl-1.1.1k.
Everything is OK until I type nmake; then the error "makefile:56: *** missing separator. Stop." happens.
I am a newcomer and tried to solve it, but I lack knowledge of make rules. Can anyone help?
(My system is Windows; I use Strawberry Perl to configure.)
The commands I typed:
perl Configure VC-WIN32
(Configuring OpenSSL version 1.1.1k (0x101010bfL) for VC-WIN32
Using os-specific seed configuration
Creating configdata.pm
Creating makefile
**********************************************************************
*** ***
*** OpenSSL has been successfully configured ***
*** ***
*** If you encounter a problem while building, please open an ***
*** issue on GitHub <https://github.com/openssl/openssl/issues> ***
*** and include the output from the following command: ***
*** ***
*** perl configdata.pm --dump ***
*** ***
*** (If you are new to OpenSSL, you might want to consult the ***
*** 'Troubleshooting' section in the INSTALL file first) ***
*** ***
**********************************************************************)
nmake
(makefile:56: *** missing separator. Stop.)
perl configdata.pm --dump
(Command line (with current working directory = .):
C:\Strawberry\perl\bin\perl.exe Configure VC-WIN32
Perl information:
C:\Strawberry\perl\bin\perl.exe
5.32.1 for MSWin32-x64-multi-thread
Enabled features:
aria
asm
async
autoalginit
autoerrinit
autoload-config
bf
blake2
camellia
capieng
cast
chacha
cmac
cms
comp
ct
deprecated
des
dgram
dh
dsa
dso
dtls
dynamic-engine
ec
ec2m
ecdh
ecdsa
engine
err
filenames
gost
hw(-.+)?
idea
makedepend
md4
mdc2
multiblock
nextprotoneg
pinshared
ocb
ocsp
pic
poly1305
posix-io
psk
rc2
rc4
rdrand
rfc3779
rmd160
scrypt
seed
shared
siphash
sm2
sm3
sm4
sock
srp
srtp
sse2
ssl
static-engine
stdio
tests
threads
tls
ts
ui-console
whirlpool
tls1
tls1-method
tls1_1
tls1_1-method
tls1_2
tls1_2-method
tls1_3
dtls1
dtls1-method
dtls1_2
dtls1_2-method
Disabled features:
afalgeng [not-linux] OPENSSL_NO_AFALGENG
asan [default] OPENSSL_NO_ASAN
buildtest-c++ [default]
crypto-mdebug [default] OPENSSL_NO_CRYPTO_MDEBUG
crypto-mdebug-backtrace [default] OPENSSL_NO_CRYPTO_MDEBUG_BACKTRACE
devcryptoeng [default] OPENSSL_NO_DEVCRYPTOENG
ec_nistp_64_gcc_128 [default] OPENSSL_NO_EC_NISTP_64_GCC_128
egd [default] OPENSSL_NO_EGD
external-tests [default] OPENSSL_NO_EXTERNAL_TESTS
fuzz-libfuzzer [default] OPENSSL_NO_FUZZ_LIBFUZZER
fuzz-afl [default] OPENSSL_NO_FUZZ_AFL
heartbeats [default] OPENSSL_NO_HEARTBEATS
md2 [default] OPENSSL_NO_MD2 (skip crypto\md2)
msan [default] OPENSSL_NO_MSAN
rc5 [default] OPENSSL_NO_RC5 (skip crypto\rc5)
sctp [default] OPENSSL_NO_SCTP
ssl-trace [default] OPENSSL_NO_SSL_TRACE
ubsan [default] OPENSSL_NO_UBSAN
unit-test [default] OPENSSL_NO_UNIT_TEST
weak-ssl-ciphers [default] OPENSSL_NO_WEAK_SSL_CIPHERS
zlib [default]
zlib-dynamic [default]
ssl3 [default] OPENSSL_NO_SSL3
ssl3-method [default] OPENSSL_NO_SSL3_METHOD
Config target attributes:
AR => "lib",
ARFLAGS => "/nologo",
AS => "nasm",
ASFLAGS => "",
CC => "cl",
CFLAGS => "/W3 /wd4090 /nologo /O2",
CPP => "\$(CC) /EP /C",
HASHBANGPERL => "/usr/bin/env perl",
LD => "link",
LDFLAGS => "/nologo /debug",
MT => "mt",
MTFLAGS => "-nologo",
RANLIB => "CODE(0x2badef8)",
RC => "rc",
aes_asm_src => "aes_core.c aes_cbc.c vpaes-x86.s aesni-x86.s",
aes_obj => "aes_core.o aes_cbc.o vpaes-x86.o aesni-x86.o",
apps_aux_src => "win32_init.c",
apps_init_src => "../ms/applink.c",
apps_obj => "win32_init.o",
aroutflag => "/out:",
asflags => "-f win32",
asoutflag => "-o ",
bf_asm_src => "bf-586.s",
bf_obj => "bf-586.o",
bin_cflags => "/Zi /Fdapp.pdb",
bin_lflags => "/subsystem:console /opt:ref",
bn_asm_src => "bn-586.s co-586.s x86-mont.s x86-gf2m.s",
bn_obj => "bn-586.o co-586.o x86-mont.o x86-gf2m.o",
bn_ops => "EXPORT_VAR_AS_FN BN_LLONG",
build_file => "makefile",
build_scheme => [ "unified", "windows", "VC-common", "VC-WOW" ],
cast_asm_src => "c_enc.c",
cast_obj => "c_enc.o",
cflags => "/Gs0 /GF /Gy /MD",
chacha_asm_src => "chacha-x86.s",
chacha_obj => "chacha-x86.o",
cmll_asm_src => "cmll-x86.s",
cmll_obj => "cmll-x86.o",
coutflag => "/Fo",
cppflags => "",
cpuid_asm_src => "x86cpuid.s",
cpuid_obj => "x86cpuid.o",
defines => [ "OPENSSL_SYS_WIN32", "WIN32_LEAN_AND_MEAN", "UNICODE", "_UNICODE", "_CRT_SECURE_NO_DEPRECATE", "_WINSOCK_DEPRECATED_NO_WARNINGS", "OPENSSL_USE_APPLINK" ],
des_asm_src => "des-586.s crypt586.s",
des_obj => "des-586.o crypt586.o",
disable => [ ],
dso_cflags => "/Zi /Fddso.pdb",
dso_extension => ".dll",
dso_scheme => "win32",
ec_asm_src => "ecp_nistz256.c ecp_nistz256-x86.s",
ec_obj => "ecp_nistz256.o ecp_nistz256-x86.o",
enable => [ ],
ex_libs => "ws2_32.lib gdi32.lib advapi32.lib crypt32.lib user32.lib",
exe_extension => "",
includes => [ ],
keccak1600_asm_src => "keccak1600.c",
keccak1600_obj => "keccak1600.o",
ldoutflag => "/out:",
lflags => "",
lib_cflags => "/Zi /Fdossl_static.pdb",
lib_cppflags => "",
lib_defines => [ "L_ENDIAN" ],
md5_asm_src => "md5-586.s",
md5_obj => "md5-586.o",
modes_asm_src => "ghash-x86.s",
modes_obj => "ghash-x86.o",
module_cflags => "",
module_cxxflags => "",
module_ldflags => "/dll",
mtinflag => "-manifest ",
mtoutflag => "-outputresource:",
padlock_asm_src => "e_padlock-x86.s",
padlock_obj => "e_padlock-x86.o",
perlasm_scheme => "win32n",
poly1305_asm_src => "poly1305-x86.s",
poly1305_obj => "poly1305-x86.o",
rc4_asm_src => "rc4-586.s",
rc4_obj => "rc4-586.o",
rc5_asm_src => "rc5-586.s",
rc5_obj => "rc5-586.o",
rcoutflag => "/fo",
rmd160_asm_src => "rmd-586.s",
rmd160_obj => "rmd-586.o",
sha1_asm_src => "sha1-586.s sha256-586.s sha512-586.s",
sha1_obj => "sha1-586.o sha256-586.o sha512-586.o",
shared_cflag => "",
shared_defines => [ ],
shared_extension => ".dll",
shared_extension_simple => ".dll",
shared_ldflag => "/dll",
shared_rcflag => "",
shared_target => "win-shared",
sys_id => "WIN32",
thread_defines => [ ],
thread_scheme => "winthreads",
unistd => "<unistd.h>",
uplink_aux_src => "../ms/uplink.c",
uplink_obj => "../ms/uplink.o",
wp_asm_src => "wp_block.c wp-mmx.s",
wp_obj => "wp_block.o wp-mmx.o",
Recorded environment:
AR =
ARFLAGS =
AS =
ASFLAGS =
BUILDFILE =
CC =
CFLAGS =
CPP =
CPPDEFINES =
CPPFLAGS =
CPPINCLUDES =
CROSS_COMPILE =
CXX =
CXXFLAGS =
HASHBANGPERL =
LD =
LDFLAGS =
LDLIBS =
MT =
MTFLAGS =
OPENSSL_LOCAL_CONFIG_DIR =
PERL =
RANLIB =
RC =
RCFLAGS =
RM =
WINDRES =
__CNF_CFLAGS =
__CNF_CPPDEFINES =
__CNF_CPPFLAGS =
__CNF_CPPINCLUDES =
__CNF_CXXFLAGS =
__CNF_LDFLAGS =
__CNF_LDLIBS =
Makevars:
AR = lib
ARFLAGS = /nologo
AS = nasm
CC = cl
CFLAGS = /W3 /wd4090 /nologo /O2
CPP = $(CC) /EP /C
CPPDEFINES =
CPPFLAGS =
CPPINCLUDES =
CXXFLAGS =
HASHBANGPERL = /usr/bin/env perl
LD = link
LDFLAGS = /nologo /debug
LDLIBS =
MT = mt
MTFLAGS = -nologo
PERL = C:\Strawberry\perl\bin\perl.exe
RANLIB = ranlib
RC = rc
RCFLAGS =
NOTE: These variables only represent the configuration view. The build file
template may have processed these variables further, please have a look at the
build file for more exact data:
makefile
build file:
makefile
build file templates:
Configurations\common0.tmpl
Configurations\windows-makefile.tmpl
Configurations\common.tmpl)
That sounds like something didn't go well when generating the makefile. Would you mind attaching it here?
That sounds like something didn't go well when generating the makefile. Would you mind attaching it here?
thanks for your reply, here is the makefile:
(https://github.com/openssl/openssl/files/6834918/makefile.txt)
Uhmmmm, this is line 56 in that makefile:
!IF "$(DESTDIR)" != ""
This is a normal nmake directive...
I don't know what I missed...
Errrr, your version of nmake (a Microsoft make tool) appears to actually be "GNU Make 4.2.1"!!!!! These two "make" tools are not compatible with each other. To build the VC-WIN32 target you need to have the Microsoft Visual Studio tools on your %PATH% - usually achieved by starting a developer command prompt:
https://docs.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2019
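For reference, a typical VC-WIN32 build run from such a developer command prompt (assuming Strawberry Perl and NASM are on the PATH) looks roughly like this:

perl Configure VC-WIN32
nmake
nmake test
nmake install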
Oh!! I was a fool! I got it, it works! Big thanks!
|
gharchive/issue
| 2021-07-17T15:27:44 |
2025-04-01T04:35:24.679672
|
{
"authors": [
"PenNameLuXun",
"levitte",
"mattcaswell"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/16104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1330979456
|
Plesk panel is down after uninstalling OpenSSL version 3
Related to this post, to solve the problem I tried to downgrade OpenSSL 3 to OpenSSL 2, so over SSH I ran the 'sudo apt remove openssl' command. The result is that the Plesk panel is down. Connecting over SSH and reinstalling OpenSSL 3 again did not solve the problem. The server runs Ubuntu 22.04, and the Plesk panel is still down. Does anyone have advice about this problem?
This seems more like a plesk issue than something OpenSSL can advise you on. You would be better off asking in some plesk forum.
|
gharchive/issue
| 2022-08-07T10:46:57 |
2025-04-01T04:35:24.682088
|
{
"authors": [
"mattcaswell",
"mustafa-can"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/18961",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1621063177
|
EVP_PKEY_(get_)id not returning an OID (for provider PKEYs)
The master documentation for EVP_PKEY_get_id (and thus for the old API EVP_PKEY_id) says it returns an OID:
EVP_PKEY_get_id() returns the actual OID associated with pkey.
Is this really correct? It rather seems to return a NID (or doesn't "OID" in this context refer to an X.509 OID but rather to an "OpenSSL internal object ID"?). In that case some clarification wrt "OID" may be in order (Apologies if I missed this somewhere).
The result is always used as a NID anyway. The one use that causes acute problems is here:
The issue is that a provider-based PKEY always causes EVP_PKEY(_get)_id to return -1 -- which is rather "suboptimal" for code interested in outputting the (SN) name of the algorithm.
Or asked another way: How should an OpenSSL consumer (like OpenVPN) obtain the name (and/or NID) of the algorithm underlying a PKEY if that PKEY is provider-based? Ideally, it shouldn't know/have to worry about this "OpenSSL internal implementation detail", right?
In the OpenVPN use case, the problem is made worse by OpenSSL setting an error state due to the API call sequence EVP_PKEY_get_id-> OBJ_nid2sn: This causes the whole OpenVPN system to fail as OpenSSL reported an error:
OpenSSL: error:04000065:object identifier routines::unknown nid
but only when logging is set too high (trying to obtain the algorithm name as per the above). Really surprising to users. Does an error state have to be set if a "provider-based" PKEY is examined?
There is also EVP_PKEY_get0_type_name()
No, it's a NID.
OK, then I'd suggest a documentation update (will do so if no-one jumps at this). I'd also add the correction that the retval from that function may actually be -1 (not just NID_undef or positive NID as currently documented) thus indicating a provided key. Would it be sensible to then also directly point to EVP_PKEY_get0_type_name for that case?
does EVP_PKEY_get0_description do the trick?
No, that also returns NULL (but that might arguably be changed by the provider implementation). But it would not solve the issue for OpenVPN: Are you saying the code there must be updated to support providers? Adding a call to EVP_PKEY_get0_type_name would be sufficient if EVP_PKEY_(get_)id returns -1.
If so, do you recommend a "standard way" to utilize OpenSSLv3 APIs (such as EVP_PKEY_get0_type_name) in "consumer code" (like OpenVPN)? Something like #if OPENSSL_VERSION_NUMBER >= 0x30000000L?
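For illustration, a minimal sketch of that idea (the helper name is made up; it assumes EVP_PKEY_get0_type_name() as the provider-aware fallback):

#include <openssl/opensslv.h>
#include <openssl/evp.h>
#include <openssl/objects.h>

/* Hypothetical consumer-side helper: printable algorithm name for a key. */
static const char *pkey_algo_name(const EVP_PKEY *pkey)
{
#if OPENSSL_VERSION_NUMBER >= 0x30000000L
    int nid = EVP_PKEY_get_id(pkey);
    if (nid <= 0)                              /* provider-based key */
        return EVP_PKEY_get0_type_name(pkey);  /* may still be NULL */
    return OBJ_nid2sn(nid);
#else
    return OBJ_nid2sn(EVP_PKEY_id(pkey));
#endif
}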
That is really unfortunate. Arguably that's a bug. We should either fix it, or fix the documentation. I worry whether it might be too late to actually fix it though??? Maybe applications already rely on this.
Their fault if they do. It's clearly documented. I'd suggest fixing the code instead. It would also solve the "downstream" OpenVPN problem: The whole code causing the problem would not trigger again if the retval of EVP_PKEY_get_id were 0 (or a real NID): https://github.com/OpenVPN/openvpn/blob/838474145933199a62d1f59fbc2df14e4fbd57f3/src/openvpn/ssl_openssl.c#L2083-L2107
As simple as
diff --git a/crypto/evp/p_lib.c b/crypto/evp/p_lib.c
index 554fad927c..6477594017 100644
--- a/crypto/evp/p_lib.c
+++ b/crypto/evp/p_lib.c
@@ -982,7 +982,7 @@ int EVP_PKEY_type(int type)
int EVP_PKEY_get_id(const EVP_PKEY *pkey)
{
- return pkey->type;
+ return pkey->type>0?pkey->type:NID_undef;
}
OK to PR?
Let's have a PR for it. We can gather more opinion there.
|
gharchive/issue
| 2023-03-13T09:21:30 |
2025-04-01T04:35:24.690849
|
{
"authors": [
"baentsch",
"mattcaswell"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/20497",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1648220983
|
How to fix OSSL_DECODER_from_bio returning 0 as failure on OpenSSL 3 FIPS mode?
I thought carefully about whether it would be better to ask on the openssl-users@openssl.org mailing list or to open this issue ticket on GitHub. As my question is about the OpenSSL API used in the OpenSSL Ruby bindings, I thought it is closer to developing OpenSSL than to using OpenSSL. But let me know if you think my case should go to the mailing list; I am happy to post it there. Sorry for that.
I am debugging the OpenSSL Ruby bindings to fix a bug. Please let me know what's wrong in the code. Perhaps the APIs are called incorrectly?
You can reproduce this bug by cloning my forked repository branch: https://github.com/junaruga/openssl/tree/wip/fips-read-report, which includes some debugging commits on top of the master branch. However, the reproducing steps are a bit complicated, so please let me know if there are commands you want me to run to find additional info.
Reproducing steps
Environment
My local environment is Fedora 37. However, I was able to reproduce this issue on Ubuntu (ubuntu-latest) on GitHub Actions too. The issue also happens in both cases: OpenSSL built from the source code without any patch files, and the OpenSSL RPM package on RHEL 9.1.
In the reproducing steps below, the OpenSSL version used is OpenSSL 3.0.8 compiled from the source without any patch files. LD_LIBRARY_PATH is used to load this OpenSSL.
$ cat /etc/fedora-release
Fedora release 37 (Thirty Seven)
$ rpm -q gcc
gcc-12.2.1-4.fc37.x86_64
$ gcc --version
gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
~/.local/openssl-3.0.8-fips-debug/bin/openssl version
OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
1. Install OpenSSL with FIPS mode option.
I compiled OpenSSL with FIPS mode and debug flags (-O0 -g3 -ggdb3 -gdwarf-5) because I wanted to debug. But this issue also happens with OpenSSL compiled without the debug flags.
$ ./Configure --prefix=$HOME/.local/openssl-3.0.8-fips-debug --libdir=lib shared linux-x86_64 enable-fips -O0 -g3 -ggdb3 -gdwarf-5
$ make -j4
$ make install
And here is the OpenSSL config file used in the later process.
$ cat ~/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf
config_diagnostics = 1
openssl_conf = openssl_init
.include /home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/fipsmodule.cnf
#.include ./fipsmodule.cnf
[openssl_init]
providers = provider_sect
alg_section = algorithm_sect
[provider_sect]
fips = fips_sect
base = base_sect
[base_sect]
activate = 1
[algorithm_sect]
default_properties = fips=yes
Then I used this program to check whether FIPS mode is available.
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
~/git/openssl-test/fips_mode
FIPS mode provider available: 1
FIPS mode enabled: 1
2. Compile OpenSSL Ruby bindings.
Below are the steps to compile the Ruby OpenSSL bindings with the latest stable Ruby 3.2. I am compiling with the -O0 -g3 -ggdb3 -gdwarf-5 flags. You can skip this section.
If you want to compile with the branch on my forked repository to reproduce this issue ticket:
$ git clone -b wip/fips-read-report https://github.com/junaruga/openssl.git
or if you want to compile with the original repository:
$ git clone https://github.com/ruby/openssl.git
Then I installed the dependency RubyGems packages.
$ cd openssl
$ pwd
/home/jaruga/git/ruby/openssl
$ which ruby
/usr/local/ruby-3.2.1/bin/ruby
$ which bundle
/usr/local/ruby-3.2.1/bin/bundle
$ ruby -v
ruby 3.2.1 (2023-02-08 revision 31819e82c8) [x86_64-linux]
$ bundle exec install --standalone
I compiled the OpenSSL Ruby bindings.
$ bundle exec rake compile
If you want to clean up so that you can compile again with bundle exec rake compile, you can run the command below.
$ rm -rf tmp/ lib/openssl.so
3. Run the command that raises the error.
I created a testing pem file.
$ openssl genrsa -out key.pem 4096
Then I ran the OpenSSL Ruby binding to read the PEM file. In the result of the command, you can see the error message "Could not parse PKey (OpenSSL::PKey::PKeyError)" coming from the OpenSSL Ruby binding; it is caused by the following OSSL_DECODER_from_bio(dctx, bio) call returning 0. See below.
As for the other parts of the output, the [DEBUG] ... lines are my printf debugging log, and the ... Input type: ... lines are printed by ERR_print_errors_fp(stdout). The ossl_pkey_read_generic function is called 2 times, and the OSSL_DECODER_from_bio function is called 3 times in each ossl_pkey_read_generic call.
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
003C0D92E17F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
003C0D92E17F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
003C0D92E17F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 3.
-e:1:in `read': Could not parse PKey (OpenSSL::PKey::PKeyError)
from -e:1:in `<main>'
$ echo $?
1
The error comes from OSSL_DECODER_from_bio returning 0.
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L149
ext/openssl/ossl_pkey.c#L149
145 OSSL_BIO_reset(bio);
146 OSSL_DECODER_CTX_set_selection(dctx, 0);
147 while (1) {
148 printf("[DEBUG] Calling OSSL_DECODER_from_bio 3.\n");
149 if (OSSL_DECODER_from_bio(dctx, bio) == 1) /* <= This OSSL_DECODER_from_bio returns 0! */
150 goto out;
151 ERR_print_errors_fp(stdout);
152 if (BIO_eof(bio))
153 break;
154 pos2 = BIO_tell(bio);
155 if (pos2 < 0 || pos2 <= pos)
156 break;
157 ossl_clear_error();
158 pos = pos2;
159 }
Debugging
ltrace
First, I captured the ltrace log with the command below, because I think the ltrace log is a good way to see how the OpenSSL APIs are called in the process. You can see that OSSL_DECODER_from_bio is called 6 times in total in the ltrace, so the 6th call of OSSL_DECODER_from_bio fails and causes the error. Note that, unfortunately, the log is from ltrace in the first part and from ruby in the second part.
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
ltrace -ttt -f -l openssl.so -l libssl.so.3 -l libcrypto.so.3 \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))" >& ltrace_ttt.log
GDB
Debug around the OSSL_DECODER_from_bio
I debugged with GDB using the command below. The reason why I set LD_LIBRARY_PATH inside the gdb prompt is that the system OpenSSL is a dependency of the gdb command itself; gdb fails when the system OpenSSL is hidden by pointing LD_LIBRARY_PATH at the manually installed OpenSSL.
$ OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-fips-debug/lib/
After some steps, below is the state just after the 6th call of OSSL_DECODER_from_bio returned 0 as an error, together with the values of the input arguments *dctx and *bio.
(gdb) b ossl_pkey_read_generic
(gdb) r
(gdb) c
(gdb) n
(gdb) f
#0 ossl_pkey_read_generic (bio=0x7be610, pass=4) at ../../../../ext/openssl/ossl_pkey.c:151
151 ERR_print_errors_fp(stdout);
(gdb) p *dctx
$4 = {start_input_type = 0x7fffe57d5489 "PEM", input_structure = 0x0, selection = 0, decoder_insts = 0x7bf030, construct = 0x7fffe51e5640 <decoder_construct_pkey>,
cleanup = 0x7fffe51e5984 <decoder_clean_pkey_construct_arg>, construct_data = 0x69ed30, pwdata = {type = is_pem_password, _ = {expl_passphrase = {
passphrase_copy = 0x7fffe5792cdd <ossl_pem_passwd_cb> "UH\211\345SH\203\354HH\211}ȉuĉU\300H\211M\270H\213E\270H\211E\350H\213E\350H\211\307\350q\361\377\377\204\300\017\204", <incomplete sequence \356>, passphrase_len = 4}, pem_password = {password_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>, password_cbarg = 0x4}, ossl_passphrase = {passphrase_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>, passphrase_cbarg = 0x4},
ui_method = {ui_method = 0x7fffe5792cdd <ossl_pem_passwd_cb>, ui_method_data = 0x4}}, flag_cache_passphrase = 1, cached_passphrase = 0x0, cached_passphrase_len = 0}}
(gdb) p *bio
$5 = {libctx = 0x0, method = 0x7fffe54c90a0 <mem_method>, callback = 0x0, callback_ex = 0x0, cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0, num = 0, ptr = 0x7c1c30, next_bio = 0x0,
prev_bio = 0x0, references = 1, num_read = 1504, num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c7730}
And here is the backtrace.
(gdb) bt
#0 ossl_pkey_read_generic (bio=0x7be610, pass=4) at ../../../../ext/openssl/ossl_pkey.c:151
#1 0x00007fffe57ad75c in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048,
self=140737035361920) at ../../../../ext/openssl/ossl_pkey.c:222
#2 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90,
calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#3 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>,
block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>,
ec=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#4 vm_exec_core (ec=0x0, initial=initial@entry=0) at /home/jaruga/src/ruby-3.2.1/insns.def:820
#5 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true)
at vm.c:2383
#6 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#7 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0, n=n@entry=0x7ffff7e7bab8)
at eval.c:289
#8 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7bab8) at eval.c:330
#9 0x0000000000401102 in rb_main (argv=0x7fffffffda48, argc=5) at ./main.c:38
#10 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
Debug deeply in the OSSL_DECODER_from_bio
As a reference, I stepped into OSSL_DECODER_from_bio. Running GDB from the start again, here is the part that causes the error in OSSL_DECODER_from_bio: ok is 0, and decoder_process returns 0.
(gdb) f
#0 decoder_process (params=0x7fffffffd100, arg=0x7fffffffd2b0)
at crypto/encode_decode/decoder_lib.c:747
747 ok = (rv > 0);
(gdb) p rv
$6 = 0
Here are the input arguments of the decoder_process function and the local variables at the same point, crypto/encode_decode/decoder_lib.c:747.
(gdb) p *params
$8 = {key = 0x7fffe53ffb0b "data-structure", data_type = 4, data = 0x7fffe53ffb5e,
data_size = 14, return_size = 18446744073709551615}
(gdb) p *data
$9 = {ctx = 0x7be7c0, bio = 0x0, current_decoder_inst_index = 36, recursion = 1,
flag_next_level_called = 1, flag_construct_called = 1, flag_input_structure_checked = 0}
(gdb) i lo
rv = 0
p = 0x7fffe53ffb1f
trace_data_structure = 0x7fffffffd178 ""
data = 0x7fffffffd2b0
ctx = 0x7be7c0
decoder_inst = 0x7c0460
decoder = 0x7b9b20
cbio = 0x0
bio = 0x0
loc = 140737039563551
i = 1
ok = 0
new_data = {ctx = 0x7be7c0, bio = 0x0, current_decoder_inst_index = 0, recursion = 2,
flag_next_level_called = 0, flag_construct_called = 0, flag_input_structure_checked = 0}
data_type = 0x0
data_structure = 0x0
__func__ = "decoder_process"
Here is the backtrace.
(gdb) bt
#0 decoder_process (params=0x7fffffffd100, arg=0x7fffffffd2b0)
at crypto/encode_decode/decoder_lib.c:747
#1 0x00007fffe5363268 in pem2der_decode (vctx=0x7c0440, cin=0x7c0b30, selection=0,
data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffd2b0,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x7be7f8)
at providers/implementations/encode_decode/decode_pem2der.c:204
#2 0x00007fffe51e3d6e in decoder_process (params=0x0, arg=0x7fffffffd3e0)
at crypto/encode_decode/decoder_lib.c:962
#3 0x00007fffe51e248a in OSSL_DECODER_from_bio (ctx=0x7be7c0, in=0x7bdea0)
at crypto/encode_decode/decoder_lib.c:81
#4 0x00007fffe57ad64a in ossl_pkey_read_generic (bio=0x7bdea0, pass=4)
at ../../../../ext/openssl/ossl_pkey.c:149
#5 0x00007fffe57ad75c in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048,
self=140737035361920) at ../../../../ext/openssl/ossl_pkey.c:222
#6 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90,
calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#7 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>,
block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>,
ec=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#8 vm_exec_core (ec=0x7fffe53d6ef4, initial=140737039563551, initial@entry=0)
at /home/jaruga/src/ruby-3.2.1/insns.def:820
#9 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true)
at vm.c:2383
#10 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#11 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0, n=n@entry=0x7ffff7e7bab8)
at eval.c:289
#12 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7bab8) at eval.c:330
#13 0x0000000000401102 in rb_main (argv=0x7fffffffda48, argc=5) at ./main.c:38
#14 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
Please let me know if you want to see additional information. I am happy to help with that! Thank you for reading this, and thank you for your help.
@junaruga It may be a Red Hat specific problem. In RHEL the FIPS provider is auto-activated when the system is in FIPS mode and the OpenSSL configuration file is present and loaded. So I'd look at the OpenSSL initialization.
Guys, thank you for your responses!
I see this issue not only with the OpenSSL 3.0 RPM package on RHEL 9 in FIPS mode, but also with OpenSSL 3.0 compiled directly from the OpenSSL source without any patch files. The reproducing steps above use the OpenSSL 3.0 compiled from the source directly.
Why do you think this may be a Red Hat specific problem?
If this is a universal problem then I'd anyway check via strace whether the openssl config and providers are loaded.
Thanks!
I suppose this is a universal problem, because I can reproduce this issue on the GitHub Actions Ubuntu runner with my forked branch wip/fips-read-test-report of https://github.com/ruby/openssl. The CI log is here.
My concern is that openssl genrsa -out key.pem 4096 failed to create the key.pem in the Ubuntu case on GitHub Actions. So I added the key.pem file I created on my local Fedora to the test. The CI log is here.
The log in Ubuntu case on the GitHub Actions:
$HOME/.openssl/openssl-3.0.8/bin/openssl genrsa -out key.pem 4096
Error initializing RSA context
4047441F347F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:../crypto/evp/evp_fetch.c:349:Global default library context, Algorithm (rsaEncryption : 104), Properties (<null>)
Error: Process completed with exit code 1.
I just prepared the strace log and ltrace -ttt -S log files, including system calls, in the repository below for your convenience.
https://github.com/junaruga/report-openssl-fips-read-error
Sorry, maybe my assumption about where the problematic code is was wrong. I said the problem is the 6th call of OSSL_DECODER_from_bio(dctx, bio) returning 0 after printf("[DEBUG] Calling OSSL_DECODER_from_bio 3.\n");. But I think the actual problematic code is the 5th call of OSSL_DECODER_from_bio(dctx, bio) after printf("[DEBUG] Calling OSSL_DECODER_from_bio 2.\n");.
This is because I have now compared the results between OpenSSL 3.0.8 with FIPS enabled and OpenSSL with FIPS disabled.
non-FIPS mode (FIPS mode disabled)
The steps below are for OpenSSL 3.0.8 in non-FIPS mode, compiled from the source without any patch files (not the RPM package).
I compiled OpenSSL 3.0.8 in non-FIPS mode with the debug flags from the source with the commands below.
./Configure --prefix=$HOME/.local/openssl-3.0.8-debug --libdir=lib shared linux-x86_64 -O0 -g3 -ggdb3 -gdwarf-5
make -j4
make install
Then I confirmed the version, and that the FIPS provider is not available and FIPS mode is not enabled, as expected.
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-debug/lib/ \
~/.local/openssl-3.0.8-debug/bin/openssl version
OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-debug/lib/ \
~/git/openssl-test/fips_mode
FIPS mode provider available: 0
FIPS mode enabled: 0
Then below is the result of running the OpenSSL Ruby binding. You can see that OSSL_DECODER_from_bio is called 5 times in total in the non-FIPS mode case, while it is called 6 times in total in the FIPS mode case above in the first comment. That means the 5th call of OSSL_DECODER_from_bio (after [DEBUG] Calling OSSL_DECODER_from_bio 2.) returns 1 in the non-FIPS mode, while it returns 0 in the FIPS mode.
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-debug/lib/ \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
00CCC6DB7C7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
00CCC6DB7C7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
00CCC6DB7C7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
$ echo $?
0
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L133
ext/openssl/ossl_pkey.c#L133
130 OSSL_DECODER_CTX_set_selection(dctx, EVP_PKEY_KEYPAIR);
131 while (1) {
132 printf("[DEBUG] Calling OSSL_DECODER_from_bio 2.\n");
133 if (OSSL_DECODER_from_bio(dctx, bio) == 1) /* <= This OSSL_DECODER_from_bio returns 1 in the non-FIPS mode case, but it returns 0 in the FIPS mode case! */
134 goto out;
135 ERR_print_errors_fp(stdout);
136 if (BIO_eof(bio))
137 break;
138 pos2 = BIO_tell(bio);
139 if (pos2 < 0 || pos2 <= pos)
140 break;
141 ossl_clear_error();
142 pos = pos2;
143 }
And I debugged with GDB again for the non-FIPS mode.
$ gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-debug/lib/
(gdb) b ossl_pkey_read_generic
(gdb) r
(gdb) c
(gdb) n
Here is the line just after the 5th call of OSSL_DECODER_from_bio(dctx, bio), and I printed the values of the input arguments dctx and bio.
(gdb) f
#0 ossl_pkey_read_generic (bio=0x79a180, pass=4) at ../../../../ext/openssl/ossl_pkey.c:134
134 goto out;
(gdb) p *dctx
$1 = {start_input_type = 0x7fffe57d5489 "PEM", input_structure = 0x0, selection = 135,
decoder_insts = 0x79ac80, construct = 0x7fffe51e5640 <decoder_construct_pkey>,
cleanup = 0x7fffe51e5984 <decoder_clean_pkey_construct_arg>, construct_data = 0x6a18f0,
pwdata = {type = is_pem_password, _ = {expl_passphrase = {
passphrase_copy = 0x7fffe5792cdd <ossl_pem_passwd_cb> "UH\211\345SH\203\354HH\211}ȉuĉU\300H\211M\270H\213E\270H\211E\350H\213E\350H\211\307\350q\361\377\377\204\300\017\204", <incomplete sequence \356>, passphrase_len = 4}, pem_password = {
password_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>, password_cbarg = 0x4},
ossl_passphrase = {passphrase_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>,
passphrase_cbarg = 0x4}, ui_method = {
ui_method = 0x7fffe5792cdd <ossl_pem_passwd_cb>, ui_method_data = 0x4}},
flag_cache_passphrase = 1, cached_passphrase = 0x0, cached_passphrase_len = 0}}
(gdb) p *bio
$2 = {libctx = 0x0, method = 0x7fffe54c80a0 <mem_method>, callback = 0x0, callback_ex = 0x0,
cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0, num = 0,
ptr = 0x79d880, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 1598,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c3300}
Here is the backtrace.
(gdb) bt
#0 ossl_pkey_read_generic (bio=0x79a180, pass=4) at ../../../../ext/openssl/ossl_pkey.c:134
#1 0x00007fffe57ad75c in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048,
self=140737042308920) at ../../../../ext/openssl/ossl_pkey.c:222
#2 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90,
calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#3 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>,
block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>,
ec=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#4 vm_exec_core (ec=0x0, initial=initial@entry=0)
at /home/jaruga/src/ruby-3.2.1/insns.def:820
#5 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true)
at vm.c:2383
#6 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#7 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0, n=n@entry=0x7ffff7e7ba90)
at eval.c:289
#8 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7ba90) at eval.c:330
#9 0x0000000000401102 in rb_main (argv=0x7fffffffdaa8, argc=5) at ./main.c:38
#10 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
I updated the repository: https://github.com/junaruga/report-openssl-fips-read-error adding the non-FIPS mode ltrace and strace log files.
I compared the input arguments dctx and bio just before the 5th call of OSSL_DECODER_from_bio between FIPS mode and non-FIPS mode, that is, at ext/openssl/ossl_pkey.c:133.
I found one difference between the 2 cases: the value of bio->num_read was different. The other items look the same except for the pointer addresses.
FIPS-mode
$ OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-fips-debug/lib/
(gdb) f
#0 ossl_pkey_read_generic (bio=0x7be610, pass=4) at ../../../../ext/openssl/ossl_pkey.c:133
133 if (OSSL_DECODER_from_bio(dctx, bio) == 1)
(gdb) p *dctx
$5 = {start_input_type = 0x7fffe57d5489 "PEM", input_structure = 0x0, selection = 135,
decoder_insts = 0x7bf030, construct = 0x7fffe51e5640 <decoder_construct_pkey>,
cleanup = 0x7fffe51e5984 <decoder_clean_pkey_construct_arg>, construct_data = 0x69ed30,
pwdata = {type = is_pem_password, _ = {expl_passphrase = {
passphrase_copy = 0x7fffe5792cdd <ossl_pem_passwd_cb> "UH\211\345SH\203\354HH\211}ȉuĉU\300H\211M\270H\213E\270H\211E\350H\213E\350H\211\307\350q\361\377\377\204\300\017\204", <incomplete sequence \356>, passphrase_len = 4}, pem_password = {
password_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>, password_cbarg = 0x4},
ossl_passphrase = {passphrase_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>,
passphrase_cbarg = 0x4}, ui_method = {
ui_method = 0x7fffe5792cdd <ossl_pem_passwd_cb>, ui_method_data = 0x4}},
flag_cache_passphrase = 1, cached_passphrase = 0x0, cached_passphrase_len = 0}}
(gdb) p *bio
$6 = {libctx = 0x0, method = 0x7fffe54c90a0 <mem_method>, callback = 0x0, callback_ex = 0x0,
cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0, num = 0,
ptr = 0x7c1c30, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 1504,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c7730}
(gdb) p bio->num_read
$7 = 1504
Non-FIPS-mode
$ gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-debug/lib/
(gdb) f
#0 ossl_pkey_read_generic (bio=0x79a220, pass=4) at ../../../../ext/openssl/ossl_pkey.c:133
133 if (OSSL_DECODER_from_bio(dctx, bio) == 1)
(gdb) p *dctx
$1 = {start_input_type = 0x7fffe57d5489 "PEM", input_structure = 0x0, selection = 135,
decoder_insts = 0x79ad20, construct = 0x7fffe51e5640 <decoder_construct_pkey>,
cleanup = 0x7fffe51e5984 <decoder_clean_pkey_construct_arg>, construct_data = 0x67c7f0,
pwdata = {type = is_pem_password, _ = {expl_passphrase = {
passphrase_copy = 0x7fffe5792cdd <ossl_pem_passwd_cb> "UH\211\345SH\203\354HH\211}ȉuĉU\300H\211M\270H\213E\270H\211E\350H\213E\350H\211\307\350q\361\377\377\204\300\017\204", <incomplete sequence \356>, passphrase_len = 4}, pem_password = {
password_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>, password_cbarg = 0x4},
ossl_passphrase = {passphrase_cb = 0x7fffe5792cdd <ossl_pem_passwd_cb>,
passphrase_cbarg = 0x4}, ui_method = {
ui_method = 0x7fffe5792cdd <ossl_pem_passwd_cb>, ui_method_data = 0x4}},
flag_cache_passphrase = 1, cached_passphrase = 0x0, cached_passphrase_len = 0}}
(gdb) p *bio
$2 = {libctx = 0x0, method = 0x7fffe54c80a0 <mem_method>, callback = 0x0, callback_ex = 0x0,
cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0, num = 0,
ptr = 0x79d920, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 1598,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c8280}
(gdb) p bio->num_read
$3 = 1598
I just added the new section "Reproducing steps - 2. Install Ruby and Compile OpenSSL Ruby bindings. - Install Ruby" to the first comment above, as I thought you may want to reproduce this issue in your environment.
I'm looking at all this, but can't really see directly what's happening. The BIO that's passed to OSSL_DECODER_from_bio() is a memory BIO, if I understand correctly, which is formed by this line:
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L220
Can you confirm that the contents that this BIO handles are what they are supposed to be? Also, does ossl_obj2bio() simply set up a straight BIO_s_mem() or does it do some sort of adaptation of its own?
The reason I'm asking all this is that I've started to suspect that the BIO_reset() calls that are done in ossl_pkey_read_generic() might not work as expected... I've run into trouble with that before, but my memory on this is admittedly a bit vague (it's been a few years)
Sure! Let me confirm it, and I will let you know here.
I'm looking at all this, but can't really see directly what's happening. The BIO that's passed to OSSL_DECODER_from_bio() is a memory BIO, if I understand correctly, which is formed by this line:
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L220
I think so. The bio variable is created by ossl_obj2bio, and it seems that ossl_obj2bio only sets up a BIO_s_mem internally.
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_bio.c#L21
So, below is the state of the bio at this point (num_read = 0), running the program in FIPS mode.
$ OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-fips-debug/lib/
(gdb) f
#0 ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048, self=140737035361920)
at ../../../../ext/openssl/ossl_pkey.c:221
221 printf("[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.\n");
(gdb) p *bio
$24 = {libctx = 0x0, method = 0x7fffe54c90a0 <mem_method>, callback = 0x0,
callback_ex = 0x0, cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0,
num = 0, ptr = 0x7c1c00, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 0,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c7730}
(gdb) p bio->num_read
$25 = 0
Then, after calling the first OSSL_DECODER_from_bio, the bio variable changes. The num_read changes from 0 to 1504.
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L101
(gdb) n
[DEBUG] Calling OSSL_DECODER_from_bio 1.
101 if (OSSL_DECODER_from_bio(dctx, bio) == 1)
(gdb) p *bio
$26 = {libctx = 0x0, method = 0x7fffe54c90a0 <mem_method>, callback = 0x0,
callback_ex = 0x0, cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0,
num = 0, ptr = 0x7c1c00, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 0,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c7730}
(gdb) n
103 ERR_print_errors_fp(stdout);
(gdb) p *bio
$27 = {libctx = 0x0, method = 0x7fffe54c90a0 <mem_method>, callback = 0x0,
callback_ex = 0x0, cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0,
num = 0, ptr = 0x7c1c00, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 1504,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c7730}
Then, just after calling OSSL_BIO_reset, num_read is still 1504. I expected OSSL_BIO_reset to change num_read back to 0 again. Is that right?
(gdb) n
00DCE8F7FF7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
104 OSSL_BIO_reset(bio);
(gdb) n
107 if (OSSL_DECODER_CTX_set_input_type(dctx, "PEM") != 1)
(gdb) p *bio
$28 = {libctx = 0x0, method = 0x7fffe54c90a0 <mem_method>, callback = 0x0,
callback_ex = 0x0, cb_arg = 0x0, init = 1, shutdown = 1, flags = 512, retry_reason = 0,
num = 0, ptr = 0x7c1c00, next_bio = 0x0, prev_bio = 0x0, references = 1, num_read = 1504,
num_write = 0, ex_data = {ctx = 0x0, sk = 0x0}, lock = 0x6c7730}
When OSSL_BIO_reset is called at https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L104 , it calls the function BIO_ctrl, not BIO_reset. Is that right?
(gdb) bt
#0 BIO_ctrl (b=0x7bec20, cmd=1, larg=0, parg=0x0) at crypto/bio/bio_lib.c:567
#1 0x00007fffe57ad529 in ossl_pkey_read_generic (bio=0x7bec20, pass=4)
at ../../../../ext/openssl/ossl_pkey.c:104
#2 0x00007fffe57ad75c in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048,-
self=140737035361920) at ../../../../ext/openssl/ossl_pkey.c:222
#3 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90,-
calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#4 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>,-
block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>,-
ec=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#5 vm_exec_core (ec=0x7bec20, initial=1, initial@entry=0)
at /home/jaruga/src/ruby-3.2.1/insns.def:820
#6 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true)
at vm.c:2383
#7 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#8 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0, n=n@entry=0x7ffff7e7bab8)
at eval.c:289
#9 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7bab8) at eval.c:330
#10 0x0000000000401102 in rb_main (argv=0x7fffffffda48, argc=5) at ./main.c:38
#11 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
Reference: BIO_reset and BIO_ctrl and other related functions:
https://www.openssl.org/docs/man3.1/man3/BIO_reset.html
Guys, is there any other info you want to see? I am happy to provide it. Thanks.
When checking the OSSL_BIO_reset(bio) after the 4th call of the OSSL_DECODER_from_bio(dctx, bio),
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L104
The OSSL_BIO_reset(bio) is calling BIO_ctrl(b,BIO_CTRL_RESET,0,NULL)
https://github.com/openssl/openssl/blob/openssl-3.0.8/include/openssl/bio.h.in#L532
# define BIO_reset(b) (int)BIO_ctrl(b,BIO_CTRL_RESET,0,NULL)
Then it reaches the following line in the function mem_ctrl, which is called from BIO_ctrl.
https://github.com/openssl/openssl/blob/openssl-3.0.8/crypto/bio/bss_mem.c#L272
271 /* For read only case just reset to the start again */
272 *bbm->buf = *bbm->readp;
(gdb) bt
#0 mem_ctrl (b=0x7be370, cmd=1, num=0, ptr=0x0) at crypto/bio/bss_mem.c:272
#1 0x00007fffe50f4502 in BIO_ctrl (b=0x7be370, cmd=1, larg=0, parg=0x0)
at crypto/bio/bio_lib.c:580
#2 0x00007fffe57ad529 in ossl_pkey_read_generic (bio=0x7be370, pass=4)
at ../../../../ext/openssl/ossl_pkey.c:104
#3 0x00007fffe57ad75c in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048,
self=140737035361760) at ../../../../ext/openssl/ossl_pkey.c:222
#4 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90,
calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#5 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>,
block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>,
ec=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#6 vm_exec_core (ec=0x7be370, initial=1, initial@entry=0)
at /home/jaruga/src/ruby-3.2.1/insns.def:820
#7 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true)
at vm.c:2383
#8 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#9 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0,
n=n@entry=0x7ffff7e7b9c8) at eval.c:289
#10 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7b9c8) at eval.c:330
#11 0x0000000000401102 in rb_main (argv=0x7fffffffda48, argc=5) at ./main.c:38
#12 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
It is processed as read only.
(gdb) p b->flags & BIO_FLAGS_MEM_RDONLY
$58 = 512
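To summarize that finding as a small sketch (pem_data/pem_len are placeholders; this mirrors how the read-only memory BIO wrapping the key data behaves):

#include <openssl/bio.h>

static int rewind_demo(const void *pem_data, int pem_len)
{
    BIO *bio = BIO_new_mem_buf(pem_data, pem_len); /* read-only mem BIO */
    char buf[256];

    if (bio == NULL)
        return 0;
    while (BIO_read(bio, buf, sizeof(buf)) > 0)
        ;                       /* consume to EOF, as a decoder pass does */
    /* BIO_reset() expands to BIO_ctrl(bio, BIO_CTRL_RESET, 0, NULL); for a
     * read-only mem BIO this rewinds the read pointer so the data can be
     * read again from the start. num_read is a cumulative counter and, as
     * seen in the gdb output above, is not cleared by the reset. */
    BIO_reset(bio);
    BIO_free(bio);
    return 1;
}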
So these calls work fine if run with default provider but they fail with fips+base providers? If you call OSSL_PROVIDER_available("fips") and OSSL_PROVIDER_available("base") do these calls return 1?
Yes. right.
So these calls work fine if run with default provider but they fail with fips+base providers?
Yes, right.
As I commented at https://github.com/openssl/openssl/issues/20657#issuecomment-1492698823, the call of the OSSL_DECODER_from_bio at the ext/openssl/ossl_pkey.c#L133 below fails with fips+base providers. But it works fine with the default provider.
https://github.com/junaruga/openssl/blob/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c#L133
133 if (OSSL_DECODER_from_bio(dctx, bio) == 1) /* <= This OSSL_DECODER_from_bio returns 1 in the non-FIPS mode case, but it returns 0 in the FIPS mode case! */
134 goto out;
If you call OSSL_PROVIDER_available("fips") and OSSL_PROVIDER_available("base") do these calls return 1?
Yes, both OSSL_PROVIDER_available("fips") and OSSL_PROVIDER_available("base") return 1 on the OpenSSL 3.0.8 with FIPS mode enabled installed from the source (/home/jaruga/.local/openssl-3.0.8-fips-debug).
I tested my testing OpenSSLs /home/jaruga/.local/openssl-3.0.8-fips-debug and /home/jaruga/.local/openssl-3.0.8-debug with the following small program, https://github.com/junaruga/openssl-test/blob/8a3e508f679a0b92186dc9ef8c7f17f0a925423d/fips_mode.c .
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
~/git/openssl-test/fips_mode
Base provider available: 1
FIPS provider available: 1
FIPS mode enabled: 1
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-debug/lib/ \
~/git/openssl-test/fips_mode
Base provider available: 0
FIPS provider available: 0
FIPS mode enabled: 0
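The check program is essentially something along these lines (a sketch; the real fips_mode.c is at the URL above):

#include <stdio.h>
#include <openssl/provider.h>
#include <openssl/evp.h>

int main(void)
{
    /* NULL = default library context; the config file named in OPENSSL_CONF
     * determines which providers get activated on first use. */
    printf("Base provider available: %d\n", OSSL_PROVIDER_available(NULL, "base"));
    printf("FIPS provider available: %d\n", OSSL_PROVIDER_available(NULL, "fips"));
    printf("FIPS mode enabled: %d\n", EVP_default_properties_is_fips_enabled(NULL));
    return 0;
}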
I updated my testing branch on the forked repository, https://github.com/junaruga/openssl/commits/wip/fips-read-report, by adding one additional commit with more debug logs that print the base/fips provider availability, the FIPS-enabled status, and the result of the 2nd OSSL_DECODER_from_bio call. I hope this makes it easier to see the difference between the FIPS and non-FIPS modes. The result is below.
FIPS mode
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
003C66CC977F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 failed.
003C66CC977F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Base provider available: 1
[DEBUG] FIPS provider available: 1
[DEBUG] FIPS mode enabled: 1
[DEBUG] Calling OSSL_DECODER_from_bio 1.
003C66CC977F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 failed.
[DEBUG] Calling OSSL_DECODER_from_bio 3.
-e:1:in `read': Could not parse PKey (OpenSSL::PKey::PKeyError)
from -e:1:in `<main>'
$ echo $?
1
Non-FIPS mode
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-debug/lib/ \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
004CCA7C327F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 failed.
004CCA7C327F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Base provider available: 0
[DEBUG] FIPS provider available: 0
[DEBUG] FIPS mode enabled: 0
[DEBUG] Calling OSSL_DECODER_from_bio 1.
004CCA7C327F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 ok.
$ echo $?
0
I do not think the problem is related to the bio reset call. It is fairly strange, because the decoders in the default and base providers are identical implementations, and if the fips and base providers are properly loaded there should be really no difference when importing a 4096-bit RSA key in unencrypted PEM format.
Could you please put breakpoint to rsa_d2i_PKCS8 in the debugger to see if it is called in the non-fips and fips cases and if there are any differences in the return value?
Could you please put breakpoint to rsa_d2i_PKCS8 in the debugger to see if it is called in the non-fips and fips cases and if there are any differences in the return value?
Sure. I checked it. The rsa_d2i_PKCS8 is called from the 5th call of OSSL_DECODER_from_bio, soon after printing "[DEBUG] Calling OSSL_DECODER_from_bio 2.", in both the FIPS and non-FIPS cases. And the rsa_d2i_PKCS8 call (key = ctx->desc->d2i_PKCS8(NULL, &derp, der_len, ctx)) returns NULL in both the FIPS and non-FIPS mode cases at providers/implementations/encode_decode/decode_der2key.c:214.
Below is the working log on GDB.
FIPS mode
$ OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-fips-debug/lib/
(gdb) b rsa_d2i_PKCS8
Breakpoint 1 at 0x7fffe53615b8: file providers/implementations/encode_decode/decode_der2key.c, line 506.
Below is the line where the rsa_d2i_PKCS8 is called.
(gdb) bt
#0 rsa_d2i_PKCS8 (key=0x0, der=0x7fffffffcf10, der_len=2375, ctx=0x7c2610) at providers/implementations/encode_decode/decode_der2key.c:506
#1 0x00007fffe5360ec7 in der2key_decode (vctx=0x7c2610, cin=0x7c0a30, selection=135, data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffcf80,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x7bec98) at providers/implementations/encode_decode/decode_der2key.c:213
#2 0x00007fffe51e3d6e in decoder_process (params=0x7fffffffd100, arg=0x7fffffffd2b0) at crypto/encode_decode/decoder_lib.c:962
#3 0x00007fffe5363268 in pem2der_decode (vctx=0x7c08e0, cin=0x7c09b0, selection=135, data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffd2b0,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x7bec98) at providers/implementations/encode_decode/decode_pem2der.c:204
#4 0x00007fffe51e3d6e in decoder_process (params=0x0, arg=0x7fffffffd3e0) at crypto/encode_decode/decoder_lib.c:962
#5 0x00007fffe51e248a in OSSL_DECODER_from_bio (ctx=0x7bec60, in=0x7be340) at crypto/encode_decode/decoder_lib.c:81
#6 0x00007fffe57ad5b0 in ossl_pkey_read_generic (bio=0x7be340, pass=4) at ../../../../ext/openssl/ossl_pkey.c:133
#7 0x00007fffe57ad7b2 in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048, self=140737035361920) at ../../../../ext/openssl/ossl_pkey.c:226
#8 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90, calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#9 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>, block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>, ec=<optimized out>)
at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#10 vm_exec_core (ec=0x0, initial=140737488342800, initial@entry=0) at /home/jaruga/src/ruby-3.2.1/insns.def:820
#11 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true) at vm.c:2383
#12 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#13 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0, n=n@entry=0x7ffff7e7bab8) at eval.c:289
#14 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7bab8) at eval.c:330
#15 0x0000000000401102 in rb_main (argv=0x7fffffffda48, argc=5) at ./main.c:38
#16 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
Then, after stepping forward, decode_der2key.c:214 is where I can print the return value key of ctx->desc->d2i_PKCS8(NULL, &derp, der_len, ctx), which calls rsa_d2i_PKCS8.
(gdb) f
#0 der2key_decode (vctx=0x7c2610, cin=0x7c0a30, selection=135,
data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffcf80,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x7bec98)
at providers/implementations/encode_decode/decode_der2key.c:214
214 if (ctx->flag_fatal) {
(gdb) l
209 ERR_set_mark();
210 if ((selection & OSSL_KEYMGMT_SELECT_PRIVATE_KEY) != 0) {
211 derp = der;
212 if (ctx->desc->d2i_PKCS8 != NULL) {
213 key = ctx->desc->d2i_PKCS8(NULL, &derp, der_len, ctx);
214 if (ctx->flag_fatal) {
215 ERR_clear_last_mark();
216 goto end;
217 }
218 } else if (ctx->desc->d2i_private_key != NULL) {
The return value is NULL. I also printed the function's input values and the local variables.
(gdb) p key
$1 = (void *) 0x0
(gdb) i lo
ctx = 0x7c2610
der = 0x7c3b10 "0\202\tC\002\001"
derp = 0x7c4457 ""
der_len = 2375
key = 0x0
ok = 0
__func__ = "der2key_decode"
(gdb) p *derp
$2 = 0 '\000'
(gdb) p der_len
$3 = 2375
(gdb) p *ctx
$4 = {provctx = 0x6ff1f0, desc = 0x7fffe54e4280 <PrivateKeyInfo_rsapss_desc>,
selection = 135, flag_fatal = 0}
Non-FIPS mode
I did the same thing as in the FIPS mode case to get the return value.
$ gdb --args ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
(gdb) set environment LD_LIBRARY_PATH /home/jaruga/.local/openssl-3.0.8-debug/lib/
(gdb) b rsa_d2i_PKCS8
Breakpoint 2 at 0x7fffe53615b8: file providers/implementations/encode_decode/decode_der2key.c, line 506.
Below is the line where the rsa_d2i_PKCS8 is called.
(gdb) bt
#0 rsa_d2i_PKCS8 (key=0x0, der=0x7fffffffcf70, der_len=2375, ctx=0x79e5f0)
at providers/implementations/encode_decode/decode_der2key.c:506
#1 0x00007fffe5360ec7 in der2key_decode (vctx=0x79e5f0, cin=0x79c920, selection=135,
data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffcfe0,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x79ac58)
at providers/implementations/encode_decode/decode_der2key.c:213
#2 0x00007fffe51e3d6e in decoder_process (params=0x7fffffffd160, arg=0x7fffffffd310)
at crypto/encode_decode/decoder_lib.c:962
#3 0x00007fffe5363268 in pem2der_decode (vctx=0x79c890, cin=0x79c9e0, selection=135,
data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffd310,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x79ac58)
at providers/implementations/encode_decode/decode_pem2der.c:204
#4 0x00007fffe51e3d6e in decoder_process (params=0x0, arg=0x7fffffffd440)
at crypto/encode_decode/decoder_lib.c:962
#5 0x00007fffe51e248a in OSSL_DECODER_from_bio (ctx=0x79ac20, in=0x79a220)
at crypto/encode_decode/decoder_lib.c:81
#6 0x00007fffe57ad5b0 in ossl_pkey_read_generic (bio=0x79a220, pass=4)
at ../../../../ext/openssl/ossl_pkey.c:133
#7 0x00007fffe57ad7b2 in ossl_pkey_new_from_data (argc=1, argv=0x7ffff7443048,
self=140737042308920) at ../../../../ext/openssl/ossl_pkey.c:226
#8 0x00007ffff7b309f7 in vm_call_cfunc_with_frame (ec=0x40a0c0, reg_cfp=0x7ffff7542f90,
calling=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_insnhelper.c:3268
#9 0x00007ffff7b35d44 in vm_sendish (method_explorer=<optimized out>,
block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>,
ec=<optimized out>) at /home/jaruga/src/ruby-3.2.1/vm_callinfo.h:367
#10 vm_exec_core (ec=0x0, initial=140737488342896, initial@entry=0)
at /home/jaruga/src/ruby-3.2.1/insns.def:820
#11 0x00007ffff7b3bdf9 in rb_vm_exec (ec=0x40a0c0, jit_enable_p=jit_enable_p@entry=true)
at vm.c:2383
#12 0x00007ffff7b3cde8 in rb_iseq_eval_main (iseq=<optimized out>) at vm.c:2633
#13 0x00007ffff7951755 in rb_ec_exec_node (ec=ec@entry=0x40a0c0,
n=n@entry=0x7ffff7e7ba90) at eval.c:289
#14 0x00007ffff7957c7b in ruby_run_node (n=0x7ffff7e7ba90) at eval.c:330
#15 0x0000000000401102 in rb_main (argv=0x7fffffffdaa8, argc=5) at ./main.c:38
#16 main (argc=<optimized out>, argv=<optimized out>) at ./main.c:57
(gdb) f
#0 der2key_decode (vctx=0x79e5f0, cin=0x79c920, selection=135,
data_cb=0x7fffe51e36e9 <decoder_process>, data_cbarg=0x7fffffffcfe0,
pw_cb=0x7fffe525bc84 <ossl_pw_passphrase_callback_dec>, pw_cbarg=0x79ac58)
at providers/implementations/encode_decode/decode_der2key.c:214
214 if (ctx->flag_fatal) {
(gdb) l
209 ERR_set_mark();
210 if ((selection & OSSL_KEYMGMT_SELECT_PRIVATE_KEY) != 0) {
211 derp = der;
212 if (ctx->desc->d2i_PKCS8 != NULL) {
213 key = ctx->desc->d2i_PKCS8(NULL, &derp, der_len, ctx);
214 if (ctx->flag_fatal) {
215 ERR_clear_last_mark();
216 goto end;
217 }
218 } else if (ctx->desc->d2i_private_key != NULL) {
The return value is NULL, just as in the FIPS mode case.
(gdb) p key
$1 = (void *) 0x0
(gdb) i lo
ctx = 0x79e5f0
der = 0x79fad0 "0\202\tC\002\001"
derp = 0x7a0417 ""
der_len = 2375
key = 0x0
ok = 0
__func__ = "der2key_decode"
(gdb) p *derp
$3 = 0 '\000'
(gdb) p der_len
$4 = 2375
(gdb) p *ctx
$5 = {provctx = 0x78b010, desc = 0x7fffe54e3280 <PrivateKeyInfo_rsapss_desc>,
selection = 135, flag_fatal = 0}
Just to avoid having to deal with Ruby, I made a test program that essentially does what your extension does, but limits itself to the problem domain. I can confirm seeing the same problem in my runs.
https://gist.github.com/levitte/7a27cebdb9537ff0a59641c9a5bed53d
With an OpenSSL built with enable-trace, I added these lines to my program:
BIO *trace_bio = BIO_new_fp(stderr, BIO_NOCLOSE | BIO_FP_TEXT);
OSSL_trace_set_channel(OSSL_TRACE_CATEGORY_DECODER, trace_bio);
That's a lot of output, but one line that I think tells a bit of the story is this (where {n} is really 0 or 1):
(ctx 0x...) >> Running constructor => {n}
When running with the FIPS module, the last such line has {n} being 0, while with the default module, it's 1. That gives me an indication where to look:
https://github.com/openssl/openssl/blob/40f4884990a1717755df366e2aa06d01a1affd63/crypto/encode_decode/decoder_pkey.c#L68-L70
Wow, thank you for the Ruby-free test program and your investigation! I also learned about the enable-trace option of the Configure script from you!
Do note that said small patch to ossl_pkey_read_generic() is a viable workaround, BTW. You might want to apply it to your code, @junaruga .
Thank you!! I am still reading your comment to understand it. And I will apply your patch above to my OpenSSL Ruby binding code!
I tested the OpenSSL Ruby bindings with your patch. The result was an error in the 1st call of ossl_pkey_read_generic, called from ossl_dh_initialize, before the problematic step. As you know, ossl_pkey_read_generic is called 2 times in the process, from ossl_dh_initialize and from ossl_pkey_new_from_data. I am checking why.
$ git diff
diff --git a/ext/openssl/ossl_pkey.c b/ext/openssl/ossl_pkey.c
index 00a7a9c..6bec437 100644
--- a/ext/openssl/ossl_pkey.c
+++ b/ext/openssl/ossl_pkey.c
@@ -90,7 +90,8 @@ ossl_pkey_read_generic(BIO *bio, VALUE pass)
EVP_PKEY *pkey = NULL;
int pos = 0, pos2;
- dctx = OSSL_DECODER_CTX_new_for_pkey(&pkey, "DER", NULL, NULL, 0, NULL, NULL);
+ dctx = OSSL_DECODER_CTX_new_for_pkey(&pkey, "DER", NULL, NULL,
+ EVP_PKEY_KEYPAIR, NULL, NULL);
if (!dctx)
goto out;
if (OSSL_DECODER_CTX_set_pem_password_cb(dctx, ossl_pem_passwd_cb, ppass) != 1)
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
000C50DE7C7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 failed.
000C50DE7C7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
000C50DE7C7F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
/home/jaruga/var/git/ruby/openssl/lib/openssl/pkey.rb:132:in `initialize': could not parse pkey (OpenSSL::PKey::DHError)
from /home/jaruga/var/git/ruby/openssl/lib/openssl/pkey.rb:132:in `new'
from /home/jaruga/var/git/ruby/openssl/lib/openssl/pkey.rb:132:in `new'
from /home/jaruga/var/git/ruby/openssl/lib/openssl/ssl.rb:37:in `<class:SSLContext>'
from /home/jaruga/var/git/ruby/openssl/lib/openssl/ssl.rb:23:in `<module:SSL>'
from /home/jaruga/var/git/ruby/openssl/lib/openssl/ssl.rb:22:in `<module:OpenSSL>'
from /home/jaruga/var/git/ruby/openssl/lib/openssl/ssl.rb:21:in `<top (required)>'
from /home/jaruga/var/git/ruby/openssl/lib/openssl.rb:21:in `require_relative'
from /home/jaruga/var/git/ruby/openssl/lib/openssl.rb:21:in `<top (required)>'
from <internal:/usr/local/ruby-3.2.1/lib/ruby/3.2.0/rubygems/core_ext/kernel_require.rb>:88:in `require'
from <internal:/usr/local/ruby-3.2.1/lib/ruby/3.2.0/rubygems/core_ext/kernel_require.rb>:88:in `require'
from -e:1:in `<main>'
My guess about the cause of the error above is that the selection value EVP_PKEY_KEYPAIR, once set, is kept for the lifetime of the OSSL_DECODER_CTX. Line 150 then sets the selection to 0, but it seems that value is not actually applied.
Perhaps the workaround is that we need to call OSSL_DECODER_CTX_new_for_pkey each time we want to try a different selection?
https://github.com/junaruga/openssl/blob/9921a42ed8cc6fbf7168fd99c9a740da77ee728a/ext/openssl/ossl_pkey.c#L148-L162
ext/openssl/ossl_pkey.c
149 OSSL_BIO_reset(bio);
150 OSSL_DECODER_CTX_set_selection(dctx, 0);
151 while (1) {
152 printf("[DEBUG] Calling OSSL_DECODER_from_bio 3.\n");
153 if (OSSL_DECODER_from_bio(dctx, bio) == 1)
154 goto out;
155 ERR_print_errors_fp(stdout);
156 if (BIO_eof(bio))
157 break;
158 pos2 = BIO_tell(bio);
159 if (pos2 < 0 || pos2 <= pos)
160 break;
161 ossl_clear_error();
162 pos = pos2;
163 }
I checked the possible values of the selection and couldn't find 0 among them. I am not sure where the value 0 comes from.
On the document:
https://github.com/openssl/openssl/blob/master/doc/man3/EVP_PKEY_fromdata.pod#selections
The following constants can be used for selection:
* EVP_PKEY_KEY_PARAMETERS: Only key parameters will be selected.
* EVP_PKEY_PUBLIC_KEY: Only public key components will be selected. This includes optional key parameters.
* EVP_PKEY_KEYPAIR: Any keypair components will be selected. This includes the private key, public key and key parameters.
On the current master branch:
https://github.com/openssl/openssl/blob/40f4884990a1717755df366e2aa06d01a1affd63/include/openssl/evp.h#L85-L91
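For reference, those constants are built up from the OSSL_KEYMGMT_SELECT_* bits roughly like this (paraphrased, not a verbatim copy; the linked evp.h lines are authoritative):

/* paraphrased sketch of the selection constants */
# define EVP_PKEY_KEY_PARAMETERS  (OSSL_KEYMGMT_SELECT_ALL_PARAMETERS)
# define EVP_PKEY_PUBLIC_KEY      (EVP_PKEY_KEY_PARAMETERS | OSSL_KEYMGMT_SELECT_PUBLIC_KEY)
# define EVP_PKEY_KEYPAIR         (EVP_PKEY_PUBLIC_KEY | OSSL_KEYMGMT_SELECT_PRIVATE_KEY)

If I add the bits up, EVP_PKEY_KEYPAIR comes out as 0x87, which matches the selection=135 visible in the backtraces above.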
I created the patch below. It works.
diff --git a/ext/openssl/ossl_pkey.c b/ext/openssl/ossl_pkey.c
index 00a7a9c..50d6e0d 100644
--- a/ext/openssl/ossl_pkey.c
+++ b/ext/openssl/ossl_pkey.c
@@ -103,10 +103,9 @@ ossl_pkey_read_generic(BIO *bio, VALUE pass)
ERR_print_errors_fp(stdout);
OSSL_BIO_reset(bio);
- /* Then check PEM; multiple OSSL_DECODER_from_bio() calls may be needed */
- if (OSSL_DECODER_CTX_set_input_type(dctx, "PEM") != 1)
- goto out;
/*
+ * Then check PEM; multiple OSSL_DECODER_from_bio() calls may be needed.
+ *
* First check for private key formats. This is to keep compatibility with
* ruby/openssl < 3.0 which decoded the following as a private key.
*
@@ -127,7 +126,12 @@ ossl_pkey_read_generic(BIO *bio, VALUE pass)
* Note that normally, the input is supposed to contain a single decodable
* PEM block only, so this special handling should not create a new problem.
*/
- OSSL_DECODER_CTX_set_selection(dctx, EVP_PKEY_KEYPAIR);
+ OSSL_DECODER_CTX_free(dctx);
+ dctx = NULL;
+ dctx = OSSL_DECODER_CTX_new_for_pkey(&pkey, "PEM", NULL, NULL,
+ EVP_PKEY_KEYPAIR, NULL, NULL);
+ if (!dctx)
+ goto out;
while (1) {
printf("[DEBUG] Calling OSSL_DECODER_from_bio 2.\n");
if (OSSL_DECODER_from_bio(dctx, bio) == 1) {
@@ -146,7 +150,11 @@ ossl_pkey_read_generic(BIO *bio, VALUE pass)
}
OSSL_BIO_reset(bio);
- OSSL_DECODER_CTX_set_selection(dctx, 0);
+ OSSL_DECODER_CTX_free(dctx);
+ dctx = NULL;
+ dctx = OSSL_DECODER_CTX_new_for_pkey(&pkey, "PEM", NULL, NULL, 0, NULL, NULL);
+ if (!dctx)
+ goto out;
while (1) {
printf("[DEBUG] Calling OSSL_DECODER_from_bio 3.\n");
if (OSSL_DECODER_from_bio(dctx, bio) == 1)
FIPS mode
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug/lib/ \
OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug/ssl/openssl_fips.cnf \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
007CC55D067F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 failed.
007CC55D067F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Base provider available: 1
[DEBUG] FIPS provider available: 1
[DEBUG] FIPS mode enabled: 1
[DEBUG] Calling OSSL_DECODER_from_bio 1.
007CC55D067F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 ok.
$ echo $?
0
Non-FIPS mode
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-debug/lib/ \
ruby -I lib -e "require 'openssl'; OpenSSL::PKey.read(File.read('key.pem'))"
[DEBUG] Calling ossl_pkey_read_generic from ossl_dh_initialize.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
008CA772937F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 failed.
008CA772937F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: PEM
[DEBUG] Calling OSSL_DECODER_from_bio 3.
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Base provider available: 0
[DEBUG] FIPS provider available: 0
[DEBUG] FIPS mode enabled: 0
[DEBUG] Calling OSSL_DECODER_from_bio 1.
008CA772937F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 2 ok.
$ echo $?
0
Yeah, your workaround is correct.
As for the 0 for selection - that's slightly underdocumented but citing the OSSL_DECODER_CTX_new_for_pkey() manpage:
The search of decoder implementations can also be limited with I<keytype>
and I<selection>, which specifies the expected resulting keytype and contents.
NULL and zero are valid and signify that the decoder implementations will
find out the keytype and key contents on their own from the input they get.
As for the 0 for selection - that's slightly underdocumented but citing the OSSL_DECODER_CTX_new_for_pkey() manpage:
The search of decoder implementations can also be limited with I<keytype>
and I<selection>, which specifies the expected resulting keytype and contents.
NULL and zero are valid and signify that the decoder implementations will
find out the keytype and key contents on their own from the input they get.
Yeah, but this also demands the cooperation of surrounding code... and that cooperation is lacking for the moment. That's an implementation detail, however, and is therefore not quite suitable for that particular manual.
I do have some possible ideas (yup, an old branch I have lying around)... it needs quite a bit of testing, though
I changed this issue ticket's title to "OSSL_DECODER_CTX_set_selection don't set the selection value properly on OpenSSL 3 FIPS mode.". I hope it describes this issue better.
@t8m I see that you renamed this issue ticket's title to "OSSL_DECODER_CTX_set_selection doesn't apply the selection value properly", removing the "on OpenSSL 3 FIPS mode". In my understanding, this issue only happens in the FIPS mode case. Is that right?
@junaruga, the issue is more generic than that. The way the DECODER functionality works, the provider implementations for it may live in a different provider than the one handling the keys themselves, and that's the combination where we have this issue. For you, it ended up being triggered by the combination of FIPS and base providers (the latter being where the DECODER implementations are), but it might as well have been the combination of the OQS and base providers.
... also, OSSL_DECODER_CTX_set_selection() doesn't do its job right, that's undeniable.
@levitte thanks for your explanation! OK. I understood it.
In my case there are 2 providers, "base" and "fips". The "base" provider handles the DECODER implementation, and the "fips" provider handles (holds) the key. The issue happened as one of the cases where the provider handling the decoder is different from the provider handling the key.
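As a side note, the same two-provider split can also be set up programmatically instead of through openssl_fips.cnf. A minimal sketch (this is not the reproducer used above, and loading the fips provider this way still assumes a fipsmodule.cnf generated by openssl fipsinstall is reachable through the default configuration):

#include <openssl/provider.h>
#include <openssl/evp.h>

static int setup_fips_plus_base(void)
{
    /* "fips" supplies the key management, "base" supplies the decoders */
    OSSL_PROVIDER *fips = OSSL_PROVIDER_load(NULL, "fips");
    OSSL_PROVIDER *base = OSSL_PROVIDER_load(NULL, "base");

    if (fips == NULL || base == NULL)
        return 0;
    /* prefer FIPS-approved implementations for algorithm fetches */
    return EVP_default_properties_enable_fips(NULL, 1);
}

Either way, the important part is that the decoder implementations and the key management live in different providers.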
With an OpenSSL built with enable-trace, I added these lines to my program:
@levitte I tried your debugging approach with the enable-trace option above.
$ ./Configure \
--prefix=$HOME/.local/openssl-3.0.8-fips-debug-trace \
shared \
enable-fips \
enable-trace \
-O0 -g3 -ggdb3 -gdwarf-5
$ make -j$(nproc)
$ make install
$ LD_LIBRARY_PATH=/home/jaruga/.local/openssl-3.0.8-fips-debug-trace/lib64/ \
/home/jaruga/.local/openssl-3.0.8-fips-debug-trace/bin/openssl version
OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
I added the following lines to your reproducing program.
diff --git a/test/20657.c b/test/20657.c
index bb4446a..6f9d6d7 100644
--- a/test/20657.c
+++ b/test/20657.c
@@ -5,6 +5,7 @@
#include <openssl/pem.h>
#include <openssl/bio.h>
#include <openssl/provider.h>
+#include <openssl/trace.h>
/* BEGIN COPY */
/* The following is extracted from https://github.com/junaruga/openssl/raw/41bc792df2cf54660264bd6fc6368044f2877e99/ext/openssl/ossl_pkey.c and modified to get rid of Ruby specific stuff */
@@ -13,6 +14,10 @@
EVP_PKEY *
ossl_pkey_read_generic(BIO *bio, char *pass)
{
+ /* Trace */
+ BIO *trace_bio = BIO_new_fp(stderr, BIO_NOCLOSE | BIO_FP_TEXT);
+ OSSL_trace_set_channel(OSSL_TRACE_CATEGORY_DECODER, trace_bio);
+
OSSL_DECODER_CTX *dctx;
EVP_PKEY *pkey = NULL;
int pos = 0, pos2;
$ gcc -o 20657 20657.c -lcrypto
Then I expected a trace log on stderr. However, I didn't see one. Do you know what's wrong in my code?
$ OPENSSL_CONF=/home/jaruga/.local/openssl-3.0.8-fips-debug-trace/ssl/openssl_fips.cnf \
OPENSSL_CONF_INCLUDE=/home/jaruga/.local/openssl-3.0.8-fips-debug-trace/ssl \
OPENSSL_MODULES=/home/jaruga/.local/openssl-3.0.8-fips-debug-trace/lib64/ossl-modules \
/home/jaruga/git/report-openssl-fips-read-error/test/20657 key.pem
Loaded providers:
fips
base
[DEBUG] Calling ossl_pkey_read_generic from ossl_pkey_new_from_data.
[DEBUG] Calling OSSL_DECODER_from_bio 1.
40779C04277F0000:error:1E08010C:DECODER routines:OSSL_DECODER_from_bio:unsupported:crypto/encode_decode/decoder_lib.c:101:No supported data to decode. Input type: DER
[DEBUG] Calling OSSL_DECODER_from_bio 2.
[DEBUG] Calling OSSL_DECODER_from_bio 3.
40779C04277F0000:error:00800000:unknown library:ossl_pkey_new_from_data:unknown library:20657.c:130:Could not parse PKey
$ gcc -o 20657 20657.c -lcrypto
This links the program to the default libcrypto, not your build. To link with your build, you must tell gcc where it is:
$ gcc -I /home/jaruga/.local/openssl-3.0.8-fips-debug-trace/include -L /home/jaruga/.local/openssl-3.0.8-fips-debug-trace/lib64 -o 20657 20657.c -lcrypto
Also, when you run the program, remember to set LD_LIBRARY_PATH, the same way you did when running /home/jaruga/.local/openssl-3.0.8-fips-debug-trace/bin/openssl version
Incorrectly closed, see https://github.com/openssl/openssl/issues/20657#issuecomment-1505161924
What is the progress on this issue?
Now I understand why the issue happens, with the help of https://github.com/openssl/openssl/issues/20657#issuecomment-1505161924. It's not new information, but let me explain.
Cause
I used my updated reproducer.c with the tracing logs to check it myself. You can also see the tracing logs log/trace_non_fips.log and log/trace_fips.log on this repository.
In the non-FIPS case, the log is below. 1 is ok. 0 is not ok.
$ grep 'Running constructor' log/trace_non_fips.log
(ctx 0x22efba0) >> Running constructor
(ctx 0x22efba0) >> Running constructor => 0
(ctx 0x22efba0) >>> Running constructor
(ctx 0x22efba0) >>> Running constructor => 1
In the FIPS case, the log is below.
$ grep 'Running constructor' log/trace_fips.log
(ctx 0x1310780) >> Running constructor
(ctx 0x1310780) >> Running constructor => 0
(ctx 0x1310780) >>> Running constructor
(ctx 0x1310780) >>> Running constructor => 0
The source code is below.
https://github.com/openssl/openssl/blob/06a0d40322e96dbba816b35f82226871f635ec5a/crypto/encode_decode/decoder_lib.c#L768-L780
The ctx->construct(decoder_inst, params, ctx->construct_data); call ends up in the decoder_construct_pkey function in crypto/encode_decode/decoder_pkey.c, reached through a function pointer. I checked it in GDB.
https://github.com/openssl/openssl/blob/06a0d40322e96dbba816b35f82226871f635ec5a/crypto/encode_decode/decoder_pkey.c#L71-L73
In the decoder_construct_pkey function, the non-FIPS case (only the single "default" provider) takes the if (keymgmt_prov == decoder_prov) branch, while the FIPS case ("base" provider plus "fips" provider) takes the else branch.
https://github.com/openssl/openssl/blob/06a0d40322e96dbba816b35f82226871f635ec5a/crypto/encode_decode/decoder_pkey.c#L151-L170
In that else branch, import_data.selection = data->selection; is used, but data->selection is never updated by OSSL_DECODER_CTX_set_selection.
How to fix?
The data->selection above comes from struct decoder_pkey_data_st *data = construct_data in decoder_construct_pkey in decoder_pkey.c.
So my guess is that we need to update the selection value held in construct_data (a struct decoder_pkey_data_st *) inside the ((OSSL_DECODER_CTX *)ctx).
But the problem is that ctx->construct_data is a (void *) in decoder_lib.c, where OSSL_DECODER_CTX_set_selection lives. The OSSL_DECODER_CTX is struct ossl_decoder_ctx_st.
https://github.com/openssl/openssl/blob/06a0d40322e96dbba816b35f82226871f635ec5a/crypto/encode_decode/encoder_local.h#L155
In crypto/encode_decode/decoder_pkey.c, struct decoder_pkey_data_st is defined and used to access the members of construct_data. But we cannot use struct decoder_pkey_data_st inside OSSL_DECODER_CTX_set_selection to reach construct_data->selection. That is the challenge in fixing this issue.
https://github.com/openssl/openssl/blob/06a0d40322e96dbba816b35f82226871f635ec5a/crypto/encode_decode/decoder_pkey.c#L61-L69
I am experimenting with one idea to fix this issue on the latest master branch, 7a2bb2101be4f4dfd9f437ebe1d7fd5dbc14b894, which includes the commits from https://github.com/openssl/openssl/pull/21519 related to this issue. I am working on this branch on my forked repository; the latest commit on the branch implements the idea.
The idea is to change the member selection in struct decoder_pkey_data_st to selectionp (a pointer to ctx->selection), so that the value set by OSSL_DECODER_CTX_set_selection can be accessed. The change is below.
diff --git a/crypto/encode_decode/decoder_pkey.c b/crypto/encode_decode/decoder_pkey.c
index e3aaa44902..01958e3487 100644
--- a/crypto/encode_decode/decoder_pkey.c
+++ b/crypto/encode_decode/decoder_pkey.c
@@ -61,7 +61,7 @@ DEFINE_STACK_OF(EVP_KEYMGMT)
struct decoder_pkey_data_st {
OSSL_LIB_CTX *libctx;
char *propq;
- int selection;
+ int *selectionp; /* A pointer of the selection */
STACK_OF(EVP_KEYMGMT) *keymgmts;
char *object_type; /* recorded object data type, may be NULL */
@@ -155,11 +155,11 @@ static int decoder_construct_pkey(OSSL_DECODER_INSTANCE *decoder_inst,
import_data.keymgmt = keymgmt;
import_data.keydata = NULL;
- if (data->selection == 0)
+ if (*(data->selectionp) == 0)
/* import/export functions do not tolerate 0 selection */
import_data.selection = OSSL_KEYMGMT_SELECT_ALL;
else
- import_data.selection = data->selection;
+ import_data.selection = *(data->selectionp);
/*
* No need to check for errors here, the value of
@@ -417,7 +417,7 @@ static int ossl_decoder_ctx_setup_for_pkey(OSSL_DECODER_CTX *ctx,
process_data->object = NULL;
process_data->libctx = libctx;
- process_data->selection = ctx->selection;
+ process_data->selectionp = &(ctx->selection);
process_data->keymgmts = keymgmts;
/*
@@ -561,7 +561,7 @@ ossl_decoder_ctx_for_pkey_dup(OSSL_DECODER_CTX *src,
process_data_dest->object = (void **)pkey;
process_data_dest->libctx = process_data_src->libctx;
- process_data_dest->selection = process_data_src->selection;
+ process_data_dest->selectionp = process_data_src->selectionp;
if (!OSSL_DECODER_CTX_set_construct_data(dest, process_data_dest)) {
ERR_raise(ERR_LIB_OSSL_DECODER, ERR_R_OSSL_DECODER_LIB);
goto err;
However, the value set by OSSL_DECODER_CTX_set_selection is not propagated to construct_data->selectionp in decoder_construct_pkey.
I am debugging this with my reproducer.
At the following step, where process_data is set up initially (with selection 0), the value of process_data->selectionp is 0x5391e0.
In ossl_decoder_ctx_setup_for_pkey in crypto/encode_decode/decoder_pkey.c
418 process_data->object = NULL;
419 process_data->libctx = libctx;
420 process_data->selectionp = &(ctx->selection);
421 process_data->keymgmts = keymgmts;
(gdb) f
#0 ossl_decoder_ctx_setup_for_pkey (ctx=0x5391d0, keytype=0x0, libctx=0x0, propquery=0x0)
at crypto/encode_decode/decoder_pkey.c:421
421 process_data->keymgmts = keymgmts;
(gdb) p process_data->selectionp
$1 = (int *) 0x5391e0
(gdb) p &(ctx->selection)
$2 = (int *) 0x5391e0
Then, when OSSL_DECODER_CTX_set_selection(dctx, EVP_PKEY_KEYPAIR); is called in reproducer.c, the code is below. Here &(ctx->selection) is 0x551be0 and ctx->selection is now 135. So far so good.
(gdb) f
#0 OSSL_DECODER_CTX_set_selection (ctx=0x551bd0, selection=135)
at crypto/encode_decode/decoder_lib.c:178
178 return 1;
(gdb) p ctx->selection
$4 = 135
(gdb) p &(ctx->selection)
$5 = (int *) 0x551be0
But then, at the following step where data->selectionp is read, its value is 0x5391e0, which is a different pointer address from the &(ctx->selection) seen in the step above (0x551be0). This is not good: *(data->selectionp) is 0, not 135. Do you know why the pointer address in this step is different from the steps above? What do you think about this idea to fix this issue?
In decoder_construct_pkey in crypto/encode_decode/decoder_pkey.c
158 if (*(data->selectionp) == 0)
159 /* import/export functions do not tolerate 0 selection */
160 import_data.selection = OSSL_KEYMGMT_SELECT_ALL;
161 else
162 import_data.selection = *(data->selectionp);
(gdb) f
#0 decoder_construct_pkey (decoder_inst=0x551c40, params=0x7fffffffd140,
construct_data=0x5527d0) at crypto/encode_decode/decoder_pkey.c:158
158 if (*(data->selectionp) == 0)
(gdb) p data->selectionp
$6 = (int *) 0x5391e0
(gdb) p *(data->selectionp)
$7 = 0
What is the status of this issue?
|
gharchive/issue
| 2023-03-30T19:52:38 |
2025-04-01T04:35:24.784233
|
{
"authors": [
"beldmit",
"junaruga",
"levitte",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/20657",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
204991190
|
[rt.openssl.org #4359] Duplicate n2l, etc., macros
Migrated from rt.openssl.org#4359 (status was 'new')
Requestors:
rsalz@akamai.com
From rsalz@akamai.com on 2016-02-29 21:34:37:
From a discussion in GH 664 with Rob Percival, the issue of repeated macros came up.
Thanks. I've just looked at merging all of the various definitions of those macros and it's not pretty - not all of the definitions match. There's a bug in some of the definitions in ssl_locl.h ('c' is not bracketed) and some of the definitions in idea_lcl.h appear to have blatantly dishonest comments above them:
/* NOTE - c is not incremented as per n2l */
#define n2ln(c,l1,l2,n) { \
c+=n; \
...
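For comparison, a properly parenthesized shared pair would look roughly like this (a sketch only, not the actual consolidated code):

/*
 * Read/write a 32-bit big-endian value, advancing the byte pointer.
 * Note that (c) is parenthesized at every use, which is the bug called
 * out above for some of the ssl_locl.h copies.
 */
# define n2l(c,l) (l =((unsigned long)(*((c)++)))<<24, \
                   l|=((unsigned long)(*((c)++)))<<16, \
                   l|=((unsigned long)(*((c)++)))<< 8, \
                   l|=((unsigned long)(*((c)++))))
# define l2n(l,c) (*((c)++)=(unsigned char)(((l)>>24)&0xff), \
                   *((c)++)=(unsigned char)(((l)>>16)&0xff), \
                   *((c)++)=(unsigned char)(((l)>> 8)&0xff), \
                   *((c)++)=(unsigned char)(((l)    )&0xff))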
Marking as inactive to be closed at the end of 3.4 dev barring further input
Here's code if you want to do it: https://github.com/quictls/quictls/pull/190
|
gharchive/issue
| 2017-02-02T20:44:38 |
2025-04-01T04:35:24.792255
|
{
"authors": [
"nhorman",
"richsalz"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/2480",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
308965462
|
Please adjust the security levels to be more practical
This issue is meant to start a discussion on how to improve the usefulness of the @SECLEVEL setting. The proposal below can be adjusted according to that discussion.
The current meaning of the security levels is not that practical for real-world use. Given the current use of the parameters affected by @SECLEVEL, it is not particularly useful to set anything higher than level 2, which on the other hand does not disable things that are no longer widely used and whose security is already problematic.
The setting could be more practical if the following or a similar adjustment were made:
Level 1: Disable also SSL3.
Level 2: Disable also 3DES and TLS 1.0
Level 3: Disable also TLS 1.1 and DTLS 1.0
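For context, this is how an application opts into a level today (standard API, shown only to make the discussion concrete):

#include <openssl/ssl.h>

static void pick_level(SSL_CTX *ctx)
{
    /* either via the cipher string ... */
    SSL_CTX_set_cipher_list(ctx, "DEFAULT:@SECLEVEL=2");
    /* ... or programmatically */
    SSL_CTX_set_security_level(ctx, 2);
}

The proposal above only changes what each numeric level excludes, not how a level is selected.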
Probably this is not going to happen for 1.1.1. Moving to a post 1.1.1 milestone.
Sure, I have no problem with that.
This is obsolete now.
|
gharchive/issue
| 2018-03-27T13:19:26 |
2025-04-01T04:35:24.795355
|
{
"authors": [
"mattcaswell",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/issues/5760",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
867956288
|
Rename some globals, add ossl prefix.
Fixes: #13526
24 hours has passed since 'approval: done' was set, but this PR has failing CI tests. Once the tests pass it will get moved to 'approval: ready to merge' automatically, alternatively please review and set the label manually.
Merged to master, thanks for the namespace hoovering.
|
gharchive/pull-request
| 2021-04-26T17:37:00 |
2025-04-01T04:35:24.796827
|
{
"authors": [
"openssl-machine",
"paulidale",
"richsalz"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/15035",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
899810900
|
Deprecate old style BIO callback calls
A new-style BIO_debug_callback_ex() function is added to provide a replacement for BIO_debug_callback().
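A rough sketch of what using the new-style callback looks like (see BIO_set_callback_ex(3) for the authoritative prototype; shown only as an illustration):

#include <openssl/bio.h>

/*
 * New-style callback: the length is a size_t and the amount processed is
 * reported through *processed instead of being squeezed into the return
 * value, which is where the old style breaks down.
 */
static long trace_cb(BIO *b, int oper, const char *argp, size_t len,
                     int argi, long argl, int ret, size_t *processed)
{
    /* ... log or measure the operation here ... */
    return ret;
}

static void install_cb(BIO *b)
{
    BIO_set_callback_ex(b, trace_cb);
    /* or simply use the ready-made replacement: */
    BIO_set_callback_ex(b, BIO_debug_callback_ex);
}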
Checklist
[x] documentation is added or updated
[x] tests are added or updated
It would be really nice to deprecate this as the old style callbacks are fairly broken in regards to length type handling and we should remove them as soon as possible.
Added a note in migration guide. Rebased to fix trivial conflict in libcrypto.num.
As the fixup just added additional testing, I'm merging.
Merged. Thank you for the reviews.
|
gharchive/pull-request
| 2021-05-24T16:18:40 |
2025-04-01T04:35:24.799129
|
{
"authors": [
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/15440",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
192052910
|
Add the STORE module take 2, for loaded certs, keys and more given a URI
Checklist
[x] documentation is added or updated
[x] tests are added or updated
[x] CLA is signed
Description of change
This is a redesign of #1962
This STORE module adds the following functionality:
Functions STORE_open(), STORE_load() and STORE_close() that access a URI and help load the supported objects (PKEYs, CERTs and CRLs for the moment) from it (a usage sketch is shown below).
An opaque type STORE_INFO that holds information on each loaded object.
A few functions to retrieve desired data from a STORE_INFO reference.
Functions to register and unregister loaders for different URI schemes. This enables dynamic addition of loaders from applications or from engines.
Also includes a loader for the "file" scheme. The goal is to have it load PEM files and raw DER files alike, transparently.
Fixes #1958, #1959
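To make the intended use concrete, the consumer side is meant to look roughly like this (a sketch only - the context type and the accessor names here are illustrative and may differ from the final code):

#include <openssl/store.h>

static int dump_objects(const char *uri)
{
    STORE_CTX *ctx = STORE_open(uri);         /* context type name assumed */
    STORE_INFO *info;

    if (ctx == NULL)
        return 0;
    while (!STORE_eof(ctx) && (info = STORE_load(ctx)) != NULL) {
        switch (STORE_INFO_get_type(info)) {   /* accessor names assumed */
        case STORE_INFO_PKEY:
            /* STORE_INFO_get0_PKEY(info) ... */
            break;
        case STORE_INFO_CERT:
            /* STORE_INFO_get0_CERT(info) ... */
            break;
        default:
            break;
        }
        STORE_INFO_free(info);
    }
    STORE_close(ctx);
    return 1;
}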
For the curious, I have a merge of the store2 branch (this PR) and the uri branch (#1961) being built here
This seems very much based on the concept that you can open a 'STORE', read it sequentially, and close it again. As such, it doesn't seem to apply to storage like PKCS#11. Is that something we care about?
This seems very much based on the concept that you can open a 'STORE', read it sequentially, and close it again. As such, it doesn't seem to apply to storage like PKCS#11. Is that something we care about?
For the file: scheme, you're entirely correct. That's the only scheme that's built in, for obvious reasons. However, the design is also to allow anyone to add their own loader for whatever scheme they see fit, and the only requirement is that this loader can simulate a sequential store of objects, given URI parameters. The backend data doesn't have to be organised in any specific way as long as the loader can simulate sequential semantics.
I haven't looked too much into the pkcs11: scheme, but I have the impression that a URI usually refers to exactly one object, so it should be fairly simple to create a loader that supports that.
Note: this branch now uses commits from #1961 and from #2027. If someone approves this one without looking at those, I'll consider it an approval of them as well.
I cleaned this up quite a bit, squashed what made sense to squash, and rebased on top of fresh master
I just did a huge rework of this PR.
Major news: the public file scheme handlers are gone. There are a few left for the different kinds of data, but no more separate handler for each key type. This is instead handled through already existing OpenSSL functionality (most notably, EVP_PKEY_ASN1_METHODs).
Time to take this one out of WIP. Comments appreciated!
Note: this PR only contains the base STORE functionality
No adaptations of other parts of OpenSSL are made here; those will appear in other PRs after this one comes through reviews. (wishlist appreciated)
FYI, I've some thoughts on extending the STORE with search attribute capabilities. To be implemented in a separate PR when this one has been merged. Such capabilities will strongly help to rewrite by_dir to use the STORE functionality.
Adapted to use the extended URI decoding recently added in #1961
I've made some changes after internal feedback from other members of the team. No URI decomposition any more, the URI is just passed along to the scheme handler, to be dealt with by the handler.
I am trying to implement the store2 functionality in an engine, and I wonder who is responsible for defining the structure store_loader_st?
Could I assume that each STORE_LOADER is responsible for defining its own structure?
Could I assume that each STORE_LOADER is responsible for defining its own structure?
I think you're mixing things up. The STORE_LOADER is an OpenSSL type to keep track of each loader, much like ENGINE is an OpenSSL type to keep track of engines. Maybe you're thinking of STORE_LOADER_CTX? That's a private structure that each loader is responsible for. For the 'file:' loader, you can see the definition here
But anyhow, to add a new loader, you must create it with STORE_LOADER_new and manipulate its internals with the other STORE_LOADER_ functions, and finally register it with STORE_register_loader. That's when OpenSSL can start using it.
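In sketch form, registering a loader could look roughly like this (error handling and the remaining handlers omitted; the setter names follow doc/man3/STORE_LOADER.pod as it stands in this PR, so treat it as a shape, not a drop-in example):

#include <openssl/store.h>

struct store_loader_ctx_st { int dummy; /* whatever your backend needs */ };

static STORE_LOADER_CTX *my_open(const char *uri)
{
    /* parse the uri and set up backend access, return a new context */
    return NULL; /* placeholder */
}

static int register_my_loader(void)
{
    STORE_LOADER *loader = STORE_LOADER_new();

    if (loader == NULL
        || !STORE_LOADER_set0_scheme(loader, "myscheme")
        || !STORE_LOADER_set_open(loader, my_open)
        /* ... set the load/eof/close handlers with the corresponding setters ... */
        || !STORE_register_loader(loader))
        return 0;
    return 1;
}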
Did this answer your question? May I guess that you're working on e_nss?
@petrovr, your reviews come disconnected from the lines you're commenting on, making it impossible for me to know what you're talking about...
Richard Levitte wrote:
@petrovr, your reviews come disconnected from the lines you're commenting on, making it impossible for me to know what you're talking about...
I noticed the same too late. Sorry for the confusion.
doc/man3/STORE_LOADER.pod
a) please add blank lines between function groups related to
STORE_LOADER_CTX - open, ctrl, load, etc.
b) STORE_LOADER_free - free is void
doc/man1/storeutl.pod
typo in description section ...The B command => s/rsautl/storeutl/ ?
Richard Levitte wrote:
I think you're mixing things up.
Sorry, I mean store loader context : typedef struct store_loader_ctx_st
STORE_LOADER_CTX
But anyhow, to add a new loader[SNIP]... For the 'file:' loader,[SNIP]
I noted what is implemented in the file loader.
If a module is responsible for defining the structure, the typedef above limits the scope - one source file, one store context.
It is fine with me because I plan to implement only one loader.
If someone would like to define more than one loader in a single source file, each with its own context, the current declaration is blocking.
May I guess that you're working on e_nss?
yes
doc/man3/STORE_LOADER.pod
a) please add blank lines between function groups related to
STORE_LOADER_CTX - open, ctrl, load, etc.
Do you mean something like this?
diff --git a/doc/man3/STORE_LOADER.pod b/doc/man3/STORE_LOADER.pod
index 2f12846e3f..e036a0bc4b 100644
--- a/doc/man3/STORE_LOADER.pod
+++ b/doc/man3/STORE_LOADER.pod
@@ -14,10 +14,13 @@ unregister STORE loaders for different URI schemes
#include <openssl/store.h>
typedef struct store_loader_st STORE_LOADER;
+
STORE_LOADER *STORE_LOADER_new(void);
int STORE_LOADER_set0_scheme(STORE_LOADER *store_loader, const char *scheme);
/* struct store_loader_st is defined differently by each loader */
+
typedef struct store_loader_ctx_st STORE_LOADER_CTX;
+
typedef STORE_LOADER_CTX *(*STORE_open_fn)(const char *uri);
int STORE_LOADER_set_open(STORE_LOADER *store_loader,
STORE_open_fn store_open_function);
b) STORE_LOADER_free - free is void
Good point, will fix.
doc/man1/storeutl.pod
typo in description section ...The B command => s/rsautl/storeutl/ ?
Oh! Good point! Will fix.
Richard Levitte wrote:
I think you're mixing things up.
Sorry, I mean store loader context : typedef struct store_loader_ctx_st STORE_LOADER_CTX
But anyhow, to add a new loader[SNIP]... For the 'file:' loader,[SNIP]
I noted what is implemented in the file loader.
If a module is responsible for defining the structure, the typedef above limits the scope - one source file, one store context.
It is fine with me because I plan to implement only one loader.
If someone would like to define more than one loader in a single source file, each with its own context, the current declaration is blocking.
Yup, you're right, that is a limitation. I actually gave it some thought but couldn't figure out a way that would maintain some level of type safety. I really don't want to just pass void *.
On the other hand, one loader per source file isn't a severe limitation, I think most can live with that. If they really want to, nothing stops them from defining the structure like this:
struct store_loader_ctx_st {
union {
STORE_LOADER_CTX_1 ctx1;
STORE_LOADER_CTX_2 ctx2;
STORE_LOADER_CTX_3 ctx3;
} _;
};
And make sure that each loader uses the correct sub-context.
Richard Levitte wrote:
doc/man3/STORE_LOADER.pod
a) please add blank lines between function groups related to
STORE_LOADER_CTX - open, ctrl, load, etc.
Do you mean something like this?
diff --git a/doc/man3/STORE_LOADER.pod b/doc/man3/STORE_LOADER.pod
index 2f12846e3f..e036a0bc4b 100644
--- a/doc/man3/STORE_LOADER.pod
+++ b/doc/man3/STORE_LOADER.pod
@@ -14,10 +14,13 @@ unregister STORE loaders for different URI schemes
#include <openssl/store.h>
typedef struct store_loader_st STORE_LOADER;
+
ok
STORE_LOADER *STORE_LOADER_new(void);
blank here
int STORE_LOADER_set0_scheme(STORE_LOADER *store_loader, const char *scheme);
blank here
/* struct store_loader_st is defined differently by each loader */
+
No, not at this point.
struct store_loader_st (typo) is actually store_loader_ctx_st(!) and the comment is for the next line
typedef struct store_loader_ctx_st STORE_LOADER_CTX;
+
ok
typedef STORE_LOADER_CTX *(*STORE_open_fn)(const char *uri);
int STORE_LOADER_set_open(STORE_LOADER *store_loader,
STORE_open_fn store_open_function);
exactly, both lines are for open
[SNIP]
I did a test implementation for testing purposes (only to list all certificates) -
https://gitlab.com/e_nss/e_nss/blob/store2/engines/e_nss_store.c
Test command: openssl storeutl -engine e_nss --text 'nss:xx' - only the scheme is used, the rest of the URI is ignored, and the command will list all certificates.
I have some remarks about openssl store2:
eof is called too early, after open but before load. At this point the function can return a result only if the data is not protected. Load takes a UI method and could prompt for a password (PIN), unlike the open function.
open vs load method
'open' takes only the URI as an argument, which makes it difficult to initialize an external store (device). Initialization may require access to the engine and a UI method.
'load' takes the context (which allows access to the engine) and a UI method, and it seems to me this is the only place to initialize an external device on the first call.
So far so good but documentation is opposite to function declarations.
Maybe an open function that takes the store loader and UI in addition to the URI is a compromise solution.
If open could initialize the external device, it would know whether there is data, and in that case a call to eof between open and load would make sense.
I have some remarks about openssl store2:
eof is called too early, after open but before load. At this point the function can return a result only if the data is not protected. Load takes a UI method and could prompt for a password (PIN), unlike the open function.
The very simple answer is this change:
--- e_nss_store.c~ 2017-03-12 21:18:50.316760863 +0100
+++ e_nss_store.c 2017-03-12 21:19:38.973236773 +0100
@@ -144,7 +144,7 @@
/*TODO: swich URI cases*/
- if (ctx->ndx == -2) return 0; /*TODO function is called too early !*/
+ if (ctx->ndx == -2) return 1; /* called early => eof not reached yet */
if (ctx->certs == NULL) return 1;
if (ctx->ndx >= sk_X509_num(ctx->certs)) return 1;
The idea is taken from feof(), which only returns a EOF indicator, which in some cases won't be set before fread() has reached EOF. In other words, if you don't know you have reached EOF, then assume you haven't.
This does mean that the user may have to deal with STORE_INFO_UNSPECIFIED, which also indicates that EOF has been reached... as a matter of fact, I've been wondering if I should just remove STORE_eof() entirely because of this, but haven't decided yet.
However...
open vs load method
'open' takes only the URI as an argument, which makes it difficult to initialize an external store (device). Initialization may require access to the engine and a UI method.
'load' takes the context (which allows access to the engine) and a UI method, and it seems to me this is the only place to initialize an external device on the first call.
What I hear is that the open method should also get the UI method and data, and yes, I think that's a good idea, precisely for the reason you mention, that a device may just need an initial authentication. I will fix that (and I suppose that also gives you another way to deal with the EOF issue ;-) ).
So far so good but documentation is opposite to function declarations.
Hmm? Not sure I understand. However, I'm just noticing that I haven't documented how the STORE_LOADER method functions are supposed to work, at all (except a little in store.h). That needs fixing too, thanks for having me think of it.
STORE_open_fn now takes a ui_method and a ui_data
I've also added documentation in doc/man3/STORE_LOADER.pod, I hope that it's understandable enough.
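So the typedef now reads roughly like this (see the updated STORE_LOADER.pod for the exact text):

typedef STORE_LOADER_CTX *(*STORE_open_fn)(const char *uri,
                                           const UI_METHOD *ui_method,
                                           void *ui_data);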
Richard Levitte wrote:
I have some remarks about openssl store2:
eof is called too early, after open but before load. At this point the function can return a result only if the data is not protected. Load takes a UI method and could prompt for a password (PIN), unlike the open function.
The very simple answer is this change:
--- e_nss_store.c~ 2017-03-12 21:18:50.316760863 +0100
+++ e_nss_store.c 2017-03-12 21:19:38.973236773 +0100
@@ -144,7 +144,7 @@
/*TODO: swich URI cases*/
- if (ctx->ndx == -2) return 0; /*TODO function is called too early !*/
+ if (ctx->ndx == -2) return 1; /* called early => eof not reached yet */
if (ctx->certs == NULL) return 1;
if (ctx->ndx >= sk_X509_num(ctx->certs)) return 1;
The idea is taken from feof(), which only returns a EOF indicator, which in some cases won't be set before fread() has reached EOF. In other words, if you don't know you have reached EOF, then assume you haven't.
If eof() returns 1 before load, the next method called is close().
engine "e_nss" set.
...TRACE_E_NSS nss_store_open(uri="nss:bongo"): ...
...TRACE_E_NSS nss_store_open(): ctx=0x1b71200->name=":bongo"
...TRACE_E_NSS nss_store_eof(ctx=0x1b71200, ...): ...
Total found: 0
...TRACE_E_NSS nss_store_close(ctx=0x1b71200): ...
...TRACE_E_NSS nss_finish()
Maybe the logic is inverted. Currently zero means 'more data'.
[SNIP]
Maybe the logic is inverted. Currently zero means 'more data'.
argh, no... it was my brain farting, never mind, zero is the correct return.
Richard Levitte wrote:
STORE_open_fn now takes a ui_method and a ui_data
10x
Now the engine test implementation is updated with new, more intuitive logic for open, eof, load and close.
The engine supports one command that lists X.509 certificates:
LIST_CERTS: List certificates (1=User, 2=CA, 3=All)
(input flags): NUMERIC
For each certificate it outputs the certificate 'nickname' and subject (distinguished name) information. In this context a 'nickname' is like an identifier used to find a certificate or key - a "key id".
I updated the command to use the store API and I have some design questions.
Store info supports several types. The name type is one of them, but the model does not allow a name to be used together with other types in one STORE_INFO. I mean that when load returns a key or certificate value, there is no way to return the URI of the returned data in the same instance.
I wonder why name is a separate type?
I did the next step: unregistering the store loader on engine destroy. Unfortunately, the e_nss regression test with a sign/verify operation starts to fail. A quick check (trace) shows that the crash is in STORE_unregister_loader.
A test without the sign/verify operation passes, but the unregister function returns NULL.
I'm willing to take a closer look at e_nss for this. What branch or commit should I go for?
I did the next step: unregistering the store loader on engine destroy. Unfortunately, the e_nss regression test with a sign/verify operation starts to fail. A quick check (trace) shows that the crash is in STORE_unregister_loader.
A test without the sign/verify operation passes, but the unregister function returns NULL.
I'm willing to take a closer look at e_nss for this. What branch or commit should I go for?
Duh, never mind... the store2 branch, of course ;-)
@petrovr:
Store info supports several types. The name type is one of them, but the model does not allow a name to be used together with other types in one STORE_INFO. I mean that when load returns a key or certificate value, there is no way to return the URI of the returned data in the same instance.
I wonder why name is a separate type?
It's to provide "directory listing" functionality that the caller can do what they wish with. The "name" must be a new URI that can be used with STORE_open.
So basically, nss_store_load in e_nss_store.c isn't quite right, all those NSS_QUERY_LIST* should return a STORE_INFO_NAME type object with a URI that can be used to get each certificate.
Compare this to the file: scheme... STORE_open("file:/foo/bar/", ...) will return a list of names, like this, for example "file://foo/bar/ca.pem" and "file:/foo/bar/user.pem", and then, the user or application can use whichever they choose of those to get the object (or objects, as the case may be) they want.
Does this clarify things for you?
The other possibility is that you were thinking that, for example, nss:list=ca would work a bit like a PEM file in the file: scheme, and simply return all objects in there. That's absolutely fine, but in that case, nss:list=ca works like a container, and is the URI to get them, all in one go. That's how the file: scheme treats PEM files and PKCS#12 files, for example.
This gives you a choice. If you truly want to list a series of URIs that can be used to pick individual certificates, it will have to be a two step process. If you'd rather want to just return the objects, you will have to accept that the listing URI is the way to get them. Either way, the application already knows the URI to get them, there's no point returning the name together with the object.
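In sketch form, the two-step variant is simply (names as above; the NAME accessor is assumed):

/* expand a "directory" URI into the individual objects it names */
while (!STORE_eof(ctx) && (info = STORE_load(ctx)) != NULL) {
    if (STORE_INFO_get_type(info) == STORE_INFO_NAME) {
        const char *sub_uri = STORE_INFO_get0_NAME(info); /* assumed accessor */
        /* ... STORE_open(sub_uri) to fetch that particular object ... */
    }
    STORE_INFO_free(info);
}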
I did the next step: unregistering the store loader on engine destroy. Unfortunately, the e_nss regression test with a sign/verify operation starts to fail. A quick check (trace) shows that the crash is in STORE_unregister_loader.
A test without the sign/verify operation passes, but the unregister function returns NULL.
I'm willing to take a closer look at e_nss for this. What branch or commit should I go for?
I figured it out. I'm calling destroy_loaders_int much too early. Fixing it now
I also hacked e_nss_store.c to provide a list of nickname URIs when list URIs are used, and to parse that same nickname URI when given.
Hi,
Richard Levitte wrote:
This gives you a choice. If you truly want to list a series of URIs that can be used to pick individual certificates, it will have to be a two step process. If you'd rather want to just return the objects, you will have to accept that the listing URI is the way to get them. Either way, the application already knows the URI to get them, there's no point returning the name together with the object.
Thinking more about the different solutions, I finally decided that the simple solution is enough.
From a functional point of view the proposed API is complete.
P.S.
May be store "open" method could be declared with "store loader" as
first argument :
typedef STORE_LOADER_CTX *(*STORE_open_fn)(
const STORE_LOADER *store_loader
const char *uri,
const UI_METHOD *ui_method,
void *ui_data);
, i.e. pseudo object oriented style.
Roumen Petrov wrote:
Hi,
Richard Levitte wrote:
This gives you a choice. If you truly want to list a series of URIs
that can be used to pick individual certificates, it will have to be
a two step process. If you'd rather want to just return the objects,
you will have to accept that the listing URI is the way to get them.
Either way, the application already knows the URI to get them,
there's no point returning the name together with the object.
Thinking more about the different solutions, I finally decided that the simple solution is enough.
From a functional point of view the proposed API is complete.
P.S.
May be store "open" method could be declared with "store loader" as
first argument :
typedef STORE_LOADER_CTX *(*STORE_open_fn)(
const STORE_LOADER *store_loader
const char *uri,
const UI_METHOD *ui_method,
void *ui_data);
, i.e. pseudo object oriented style.
I don't think so. It might be pseudo OO style, but it's not very much OpenSSL style. Also, that would probably make things more confusing.
From a functional point of view the proposed API is complete.
Cool, glad you approve.
If you take a look at the STORE project, you can see that there are a few more PRs with additional functionality.
Ok, I've addressed all stuff I said I'd address.
Also, I've renamed STORE_LOADER_set0_scheme to STORE_LOADER_set_scheme, so we don't get tangled in the set0/set1 semantics, which the current implementation simply doesn't meet.
I'm also pondering if the scheme shouldn't rather be given through STORE_LOADER_new, considering how important it is that it's set...
Ok. I'll try and take another pass through tomorrow.
Second pass done. Aside from these items I think the main sticking points we need to resolve are 1) comment style 2) multi-thread issue 3) OPENSSL_assert 4) OPENSSL_init_crypto 5) OpenConnect licence issue.
Addressed most (all?) added comments. 5) now resolved.
Looking good; thanks. If I can find time I'll be doing two things::
• Work up a PKCS#11 module proof-of-concept to validate that we're not missing anything needed for that.
• Modify app(s) to use it. My success criterion here is that I can ditch the hundreds (thousands?) of lines of code I currently have for loading certs, and replace it with a single call to a function which will do the right thing.
Regarding the OpenConnect reference: I don't mind about attribution per se, but it might be useful to have some comment about the origin, just to help us remember to keep in sync when new forms are added. And you have inherited my TODO item of ensuring that you handle non-ASCII passwords correctly in the test suite. Try a password which is U+0102 U+017B :)
Looking good; thanks. If I can find time I'll be doing two things:
• Work up a PKCS#11 module proof-of-concept to validate that we're not missing anything needed for that.
• Modify app(s) to use it. My success criterion here is that I can ditch the hundreds (thousands?) of lines of code I currently have for loading certs, and replace it with a single call to a function which will do the right thing.
Cool. Although, it will take a little more than one line... but not hundreds or thousands! ;-)
Regarding the OpenConnect reference: I don't mind about attribution per se, but it might be useful to have some comment about the origin, just to help us remember to keep in sync when new forms are added.
"someone" could simply make a PR with an update of 90-test_store.t a little now and then ;-)
And you have inherited my TODO item of ensuring that you handle non-ASCII passwords correctly in the test suite. Try a password which is U+0102 U+017B :)
That's a bit outside of the scope of this effort... is there a clear consensus that all passwords should be, what, taken as is (with the risk of failure on machines with a different default encoding)? Converted to utf-8 before use? Converted to unicode (wchar_t) before use? That's a subject for a whole different PR.
Second pass done. Aside from these items I think the main sticking points we need to resolve are 1) comment style 2) multi-thread issue ~3) OPENSSL_assert~ ~4) OPENSSL_init_crypto~ ~5) OpenConnect licence issue~.
all OPENSSL_assert have now been changed to assert
clarified
Second pass done. Aside from these items I think the main sticking points we need to resolve are 1) comment style ~2) multi-thread issue~ ~3) OPENSSL_assert~ ~4) OPENSSL_init_crypto~ ~5) OpenConnect licence issue~.
I've added thread safety around the loader registry bits.
Remains the comment style, which is pretty much unanswered on the team list...
Another issue I thought of. Are we at liberty to claim the STORE_* namespace? It seems quite likely that we could get symbol clashes with some other application somewhere. Should it be OSSL_STORE_* or similar?
I would rather not rename the namespace, but it's a good question. I've tried more than once to do a sweep over SDK docs I could find to see if I could find a clash somewhere, but have found none to date.
Ask on openssl-dev?
It sounds like something we should have a general policy on. I'd raise it with the team and/or openssl-dev.
Not if the new name is an opaque pointer.
So @levitte you don't like the "base type" idea because there is nothing common in base? I suggest at least putting an int for a "type identifier". Then you can check it in the per-file modules if you want. It's more future-proof (imagine a future version that has locking in a common place). Please think about it, over your weekend :)
Why should any loader share anything with another loader, dictated by the core? The core has its own context anyway, if there would be a common lock, that's where I would put it.
shrug. ok.
@richsalz, what I hear from you is that the STORE loaders should receive a STORE_CTX pointer, i.e. that the core context structure should be passed around everywhere, a bit like a EVP_CIPHER_CTX pointer is passed down to the individual symmetric algo implementations and those get to extract their algo-specific data pointer from that common structure...
I do understand why the passing around of a common EVP_CIPHER_CTX pointer came to be, but for the STORE, I want a stronger separation between the core library and the loaders, clear line in the sand kind of thing. I won't change my mind on that one.
@levitte, about the CTX pointer, I'm fine. I made a point, you considered it and disagree, it's okay with me.
If you're adding a store API which takes passwords for the objects being retrieved then please don't say that the charset encoding is the subject of a further PR. It is a fundamental part of this new API, surely? Get it right in the first place before anyone makes assumptions around current behaviour; don't make it an afterthought.
Imagine reviewing a PR with an API which takes a void * string but neglects to specify if it's char or wchar_t, or even whether it's NUL-terminated or Pascal-style {Len,Chars[]}.
That expression on your face... and that "Hell No!" you type in the review... that's what I want to see for any suggestion of a password API which doesn't specify the charset expectations. Don't do it;
If you're adding a store API which takes passwords for the objects being retrieved then please don't say that the charset encoding is the subject of a further PR. It is a fundamental part of this new API, surely? Get it right in the first place before anyone makes assumptions around current behaviour; don't make it an afterthought.
No, it's not a fundamental part of this API. This API is merely a user of whatever PBE APIs there are in other parts of OpenSSL (more specifically, it uses PKCS12_verify_mac and PKCS12_pbe_crypt).
Imagine reviewing a PR with an API which takes a void * string but neglects to specify if it's char or wchar_t, or even whether it's NUL-terminated or Pascal-style {Len,Chars[]}.
Our UI library (which is what STORE uses to get passwords) deals with NUL-terminated char arrays, that should clarify your question. This means that we really only need to deal with the encoding (i.e. is this string UTF-8 or ISO-LATIN-{x}?).
That expression on your face... and that "Hell No!" you type in the review... that's what I want to see for any suggestion of a password API which doesn't specify the charset expectations. Don't do it;
I agree it should be dealt with, but not as part of this PR.
@dwmw2, the best way to make sure the password encoding issue doesn't get lost is to create an issue. Please feel free to do so.
Comments reformatted to better fit our style guide
DECODED renamed to EMBEDDED
Copyright years corrected or amended as needed
store_local.h renamed to store_locl.h
That should address almost everything to date, no?
The question of STORE_INFO_new_ENDOFDATA vs a ferror like function.
No, it's not a fundamental part of this API. This API is merely a user of whatever PBE APIs there are in other parts of OpenSSL (more specifically, it uses PKCS12_verify_mac and PKCS12_pbe_crypt).
In this context, that isn't a valid observation. The STORE API needs its documentation to stand alone and describe how to use it. Its use of other functions internally is purely an implementation detail.
Our UI library (which is what STORE uses to get passwords) deals with NUL-terminated char arrays, that should clarify your question. This means that we really only need to deal with the encoding (i.e. is this string UTF-8 or ISO-LATIN-{x}?).
What I'm saying is that a failure to clearly define the encoding is AS BAD AS not knowing if it's NUL-terminated chars or something else. For the new STORE API, please at least document precisely what is expected to be passed in.
And if you can't clearly specify it, with http://david.woodhou.se/draft-woodhouse-cert-best-practice.html in mind, then it's not really appropriate to pass the buck and say "but the PKCS12_* functions are broken already and it's not my fault". By simply using those and further entrenching their problems in new APIs, you are making the situation worse. Please don't do that.
We have an opportunity here to introduce a new API which applications can use to Do The Right Thing, without the hundreds of lines of code that I mentioned. Please please please let's not infest it with legacy behaviour and portability issues right from the start.
@dwmw2, could we at least investigate our PBEs and what needs to be done with them?
Also, it sounds like you're asking me to reimplement our PBEs as part of this PR, and perhaps even make them part of the file scheme loader. That will not happen.
As for "The STORE API needs its documentation to stand alone and describe how to use it"... The STORE API uses UI_METHOD for password prompting.... are you saying that the UI_METHOD docs should be duplicated into the STORE docs?
In essence, the STORE API and library aren't standalone, they aren't designed that way.
Upon more careful thought, the issue is even less part of the STORE API than I expressed. With regards to passphrases / passwords, the STORE API itself does nothing, by design. All it does is pass around a UI_METHOD, that the loaders can use as they please, and they can treat the result from the UI calls as they please as well.
So it would seem to me that your issue with passwords here belongs with the file scheme loader that's implemented as part of this PR.
Maybe this separation of stuff makes things clearer for you?
You don't need to duplicate the UI_METHOD docs. It's OK to refer to them, of course.
But if the UI_METHOD docs don't specify the character set, then you transitively inherit that same problem in your own code. And sure, you can pass the buck already and say "the old code was broken". But now you've added new APIs which are similarly broken, which is less OK.
And you have potentially made it worse, if we end up in a situation where one back end expects one thing (the locale charset) while others expect another (a PKCS#11 store probably really does require UTF-8 since that's specifiied by PKCS#11). So users of your new API now have to make guesses based on internal details and pass something different according to their guess.
The reason I'm pushing for this now, as a precursor to this landing, is because if we merge it with that kind of ambiguity then we end up with the API problem for ever. The API should be stable before it lands.
Ah, I thought I had already added references to the UI / UI_METHOD docs. Apparently not. Will fix.
Still not good enough if the UI_METHOD docs don't specify charset handling, and if your overall resulting behaviour isn't correct given what is said about it.
As for the rest of what you say, @dwmw2, I repeat, let's fix the code where it's actually broken.
Sure, but typically one tries to fix such brokenness before propagating it and producing new ambiguous APIs. But as long as it gets done before an actual release with the STORE API, I suppose that's fine. We still shouldn't end up in a situation where applications have to jump through hoops to do different things for different back ends or different versions of OpenSSL.
Based on the team discussion I consider (2) resolved (my objection about requiring different translation units can be fairly easily worked around). (4) is also resolved - OPENSSL_init_crypto() is required to ensure the atexit handlers get invoked properly.
So, remaining issues:
STORE_ vs OSSL_STORE_ prefix (not yet concluded as far as I can see)
app name (store or storeutl)
ENDOFDATA vs ferror like interface.
I'm currently working on 3. 1 is waiting on team decision (I want it to become policy if we decide for OSSL_STORE_)... and quite frankly, 2 seems like a small thing, I would prefer "storeutl".
@dwmw2, regarding the passphrase issue, I'll have to insist on raising a separate issue for it, not in this PR. I'm pondering the whats and what nots right now.
Regarding what's done first or after, what about in parallel?
I think @dwmw2 makes a strong point. If the current UI is broken for non-ascii passwords, then adding a new API on top of that also seems broken.
You know, I figured that yes, some work needs to be done in the file scheme loader with regards to passwords. I suddenly remembered that the encoding issue was dealt with in the pkcs12 app, where a given pass phrase will be tried as is first when loading a file, and failing that, a pass phrase that's been converted from whatever encoding it's in to utf8 will be attempted instead. The same kind of operation should be attempted in the file scheme loader.
Why it should be done this way? Because there are existing keys that have been encrypted with iso-8859-* encoded pass phrases, and of course, there are also keys encrypted with utf8 encoded pass phrases.
In a similar vein, we cannot assume to know exactly what encoding each loader wants. That will be up to loader specific decisions and implementations. So for example, I would assume that a loader for PKCS#11 would make sure to convert the pass phrase it gets from UI calls to utf8. That should be a matter of what, one library call? Either OPENSSL_asc2utf8 or something similar from a more potent character conversion library.
UI, btw, will give you straight (or as straight as possible) what it was fed, with the current local encoding. It delivers NUL-terminated char *, and that's it.
STORE_ vs OSSL_STORE_ prefix (not yet concluded as far as I can see)
I think doing a policy will be very difficult. The consensus seems to be that STORE is a generic name, and we shouldn't claim it. Can we just do with that one change? What's the policy, if not. "Don't use common names, otherwise prefix with OSSL" Our exported symbols have 73 items that start with openssl, and none that start with ossl:
; grep -i '^openssl' *num | wc
73 292 6010
; grep -i ossl *num
exit 1
;
I agree OPENSSL_STORE is big and ugly, but it is the convention we already have: OPENSSL_LHASH, _sk, _thread, etc.
app name (store or storeutl)
This is not a hill worth fighting over. Pick whatever name you want.
ENDOFDATA vs ferror like interface.
If it can look like BIO or FILE* with feof indicators, awesome. If not, so be it.
app name (store or storeutl)
This is not a hill worth fighting over. Pick whatever name you want.
I prefer "store" (in my mind the utl bit doesn't add anything). But its just a preference so I will accept whatever you decide.
BTW, I am now struggling to load this PR in my browser. Github keeps throwing up a page saying it is taking too long to load.
@richsalz wrote thusly:
STORE_ vs OSSL_STORE_ prefix (not yet concluded as far as I can see)
I think doing a policy will be very difficult. The consensus seems to be that STORE is a generic name, and we shouldn't claim it.
X509 is also pretty generic (and we already have one clash). RSA as well. I could go on...
Can we just do with that one change? What's the policy, if not. "Don't use common names, otherwise prefix with OSSL" Our exported symbols have 73 items that start with openssl, and none that start with ossl:
; grep -i '^openssl' *num | wc
73 292 6010
; grep -i ossl *num
exit 1
;
I agree OPENSSL_STORE is big and ugly, but it is the convention we already have: OPENSSL_LHASH, _sk, _thread, etc.
Hmmm, hadn't thought of the OPENSSL_LHASH_ case... ok then.
app name (store or storeutl)
This is not a hill worth fighting over. Pick whatever name you want.
ENDOFDATA vs ferror like interface.
If it can look like BIO or FILE* with feof indicators, awesome. If not, so be it.
Well, we do have the end-of-file indicator already, but not the error indicator.
BTW, I am now struggling to load this PR in my browser. Github keeps throwing up a page saying it is taking too long to load.
Yup, I have the same issue. As far as I can tell, it's load related (on github's side)
X509 is also pretty generic (and we already have one clash). RSA as well. I could go on...
And most, if not all, of these come from the SSLeay days. :) Things are different now, and we want to avoid naming conflicts if we can.
In a similar vein, we cannot assume to know exactly what encoding each loader wants. That will be up to loader specific decisions and implementations. So for example, I would assume that a loader for PKCS#11 would make sure to convert the pass phrase it gets from UI calls to utf8. That should be a matter of what, one library call? Either OPENSSL_asc2utf8 or something similar from a more potent character conversion library.
I would probably try to reframe @dwmw2's position as "you should document exactly what format the input to the new APIs should be in, including charset/encoding/etc." (Unicode normalization form?) Ideally this would be a nice clean and consistent interface that is easy to describe, though the initial implementation might be accompanied by warnings of buggy behavior in some cases that does not match up with the clean API spec. A more "messy" API that has the encoding depend on which backend is in use might be possible to document accurately as well; I didn't look very hard. But consumers of the new functions should be able to have a very specific understanding and expectation for their behavior. If the encoding really is solely up to the UI_METHOD and that's all that STORE is passing around, then maybe we need to add some NIDs for various encodings and a way to query the UI_METHOD what is needed.
But to reiterate, I agree with @dwmw2 that character encoding is a giant minefield, and I think the best way we can help our consumers avoid issues is to clearly document how to correctly use our code; secondarily, to make correct usage simpler and more consistent.
Please move the pass phrase encoding discussion to #3531. I will refuse to answer these questions further here.
Please move the pass phrase encoding discussion to #3531.
Thank you for filing that.
I will refuse to answer these questions further here.
... except where they are directly related, I hope. Specifically:
• Please explicitly document the expectation implied by #3531, that the UI_METHOD passed in to STORE_open() is expected to provide strings in the locale charset, except where OPENSSL_WIN32_UTF8 is defined.
Can I suggest closing this PR and reopening it with a different PR number? Github just cannot cope with loading this page now - I guess because there have been so many comments. Most times I get an error page :-(
Closing this PR, please redirect your attention to #3542
|
gharchive/pull-request
| 2016-11-28T16:34:56 |
2025-04-01T04:35:24.894111
|
{
"authors": [
"dwmw2",
"kaduk",
"levitte",
"mattcaswell",
"petrovr",
"richsalz"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/2011",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2379067466
|
Check EC_GROUP_get0_order result before dereference
Fix NULL dereference
Found by Linux Verification Center (linuxtesting.org) with SVACE.
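For illustration only (this is not the patched call site, just a minimal sketch of the defensive pattern involved):
#include <openssl/ec.h>
#include <openssl/bn.h>

/* EC_GROUP_get0_order() can return NULL, so check the result before
 * dereferencing it instead of passing it straight to BN_* functions. */
static int order_bits(const EC_GROUP *group)
{
    const BIGNUM *order = EC_GROUP_get0_order(group);

    if (order == NULL)
        return 0;               /* fail gracefully instead of crashing */
    return BN_num_bits(order);
}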
CLA: trivial
Checklist
[x] documentation is added or updated
[x] tests are added or updated
@paulidale - please can you confirm you are ok with CLA: trivial?
This pull request is ready to merge
Merged to all the active branches. Thank you for your contribution.
|
gharchive/pull-request
| 2024-06-27T20:47:36 |
2025-04-01T04:35:24.900161
|
{
"authors": [
"JohnnySavages",
"mattcaswell",
"openssl-machine",
"t8m"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/24755",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2556691998
|
Use the correct length value for input salt in HKDF
In this function the salt can be either a zero buffer of exactly mdlen length, or an arbitrary salt of prevsecretlen length. Although in practice OpenSSL will always pass in a salt of mdlen bytes in the current TLS 1.3 code, the openssl kdf command can pass in arbitrary values (I did it for testing), and a future change in the higher layer code could also result in unmatched lengths.
If prevsecretlen is > mdlen this will cause incorrect salt expansion; if prevsecretlen < mdlen this could cause a crash or reading random information. In both cases the generated output would be incorrect.
Fixes #25557
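As a rough illustration of the extract step (RFC 5869: PRK = HMAC-Hash(salt, IKM)) — a sketch with illustrative names, not the actual patch — the salt is the HMAC key, so its real length has to be passed through rather than assuming mdlen:
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Sketch: pass the caller-supplied salt length, not EVP_MD_size(md). */
static int hkdf_extract_sketch(const EVP_MD *md,
                               const unsigned char *salt, size_t prevsecretlen,
                               const unsigned char *ikm, size_t ikmlen,
                               unsigned char *prk, unsigned int *prklen)
{
    return HMAC(md, salt, (int)prevsecretlen, ikm, ikmlen, prk, prklen) != NULL;
}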
expecting all existing tests to pass; as well as FIPS KAT and fips-tests.
It would be nice to have a test case for this.
please let me know where you would put it. Would it be sufficient to provide a pre-computed output file and use "openssl kdf" to compute this with a short salt and fail if the output does not match ?
It would be nice to have a test case for this.
please let me know where you would put it. Would it be sufficient to provide a pre-computed output file and use "openssl kdf" to compute this with a short salt and fail if the output does not match ?
It should be possible to add such a test case to test/recipes/30-test_evp_data/evpkdf_tls13_kdf.txt, shouldn't it? It will need a FIPSVersion = >=3.4.0 condition.
I added a test, hopefully it works right
I fixed two issues with the test:
A shorter salt often has 0s appended, and the HMAC computations, where this salt is used as the key, happen to extend a key shorter than the AES block with zeros ... using a key longer than the key block will force a hash of the key, which should be different.
I had computed the result with the old code by mistake anyway :)
Hopefully this works
There is something funny here, I am computing the result with a different implementation, and now that I changed the salt length it does not match (which is why the tests are failing). I will investigate.
Ok, should be all good now.
I manually tested that the results differ between my installed openssl and a build with this patch for all salt length values except saltlen=32 in which case they obviously match.
They also always match for any length of salt < 32 as mentioned before, given the HMAC construct pads the key with zeros and generally (just by luck) openssl gets a buffer for the salt that ends up having zeros after it in the original implementation.
FWIW, I am not sure why I wrote AES block when what I really meant is Digest size ...
The CI test failure does not look relevant
24 hours has passed since 'approval: done' was set, but as this PR has been updated in that time the label 'approval: ready to merge' is not being automatically set. Please review the updates and set the label manually.
Merged to all the active branches. Thank you for your contribution.
|
gharchive/pull-request
| 2024-09-30T13:29:52 |
2025-04-01T04:35:24.907170
|
{
"authors": [
"mattcaswell",
"openssl-machine",
"simo5",
"t8m",
"xnox"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/25579",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
262395964
|
stack/stack.c: various cleanups.
This is kind of follow-up to recent stack changes.
There are a few places that test st->num < 0. Those tests can be removed, and of course, for those that look like this, it can be removed entirely:
if (sk->num < 0)
return NULL;
Take a look...
|
gharchive/pull-request
| 2017-10-03T11:41:28 |
2025-04-01T04:35:24.908875
|
{
"authors": [
"dot-asm",
"levitte"
],
"repo": "openssl/openssl",
"url": "https://github.com/openssl/openssl/pull/4455",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2373293192
|
Put ssh_jumper to use in libvirt_manager role
This patch changes some behavior in ssh_jumper to make its integration
smoother.
It also integrate the role in libvirt_manager in order to manage the
various ssh configuration we usually inject on the hypervisor and
ansible controller (laptop, etc).
As a pull request owner and reviewers, I checked that:
[x] Appropriate testing is done and actually running
[x] Appropriate documentation exists and/or is up-to-date:
[x] README in the role
/approve
|
gharchive/pull-request
| 2024-06-25T17:52:02 |
2025-04-01T04:35:24.922236
|
{
"authors": [
"cjeanner",
"raukadah"
],
"repo": "openstack-k8s-operators/ci-framework",
"url": "https://github.com/openstack-k8s-operators/ci-framework/pull/1952",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2439173705
|
Remove old unused CRC nodesets
The 3 matches from tripleo-ci-internal-jobs are beta jobs that will soon be removed with tripleo-ci-internal-jobs closing down.
For the match in weirdo-jobs.yaml, I reached out to Joel and awaiting a response on if this is needed or they will bump to CRC 2.39:
❯ tree -L 2
.
├── downstream
│ ├── ci-framework-config
│ ├── ci-framework-jobs
│ ├── ci-framework-testproject
│ ├── docs
│ ├── sf-config
│ ├── testproject
│ ├── tripleo-ci-internal-config
│ └── tripleo-ci-internal-jobs
└── upstream
├── ci-bootstrap
├── ci-framework
├── ci-framework.worktrees
├── config
├── data-plane-adoption
├── rdo-jobs
├── release
├── review.rdoproject.org-config
└── testproject
❯ rg -g '*.yaml' nodeset:.\*2-3 | rg -v '2-39'
upstream/rdo-jobs/zuul.d/weirdo-jobs.yaml: nodeset: centos-9-crc-2-30-0-6xlarge
downstream/tripleo-ci-internal-jobs/zuul.d/podified-jobs.yaml: nodeset: rhel-9-4-crc-extracted-2-30-0-3xl
downstream/tripleo-ci-internal-jobs/zuul.d/edpm.yaml: nodeset: 2x-rhel-9-4-crc-extracted-2-30-0-xxl
downstream/tripleo-ci-internal-jobs/zuul.d/edpm.yaml: nodeset: 2x-rhel-9-4-crc-extracted-2-30-0-3xl
Jira: OSPRH-8666
Depends-On: https://review.rdoproject.org/r/c/config/+/53945
As a pull request owner and reviewers, I checked that:
[x] Appropriate testing is done and actually running
I need to update the nova-operator zuul config to not pull config from these renovate branches.
recheck
Just waiting on [1] to merge
[1] https://review.rdoproject.org/r/c/config/+/53945
recheck
recheck
recheck
recheck
/cherrypick 18.0.0-proposed
|
gharchive/pull-request
| 2024-07-31T05:39:28 |
2025-04-01T04:35:24.927069
|
{
"authors": [
"lewisdenny",
"raukadah",
"rlandy"
],
"repo": "openstack-k8s-operators/ci-framework",
"url": "https://github.com/openstack-k8s-operators/ci-framework/pull/2181",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1798530919
|
Use set_openstack_containers role to patch baremetal operator
set_openstack_containers role takes care of updating the env vars for any operator.
This pr uses the set_openstack_containers role to update the baremetal operator csv with the correct image.
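For illustration, invoking the role for a single operator looks roughly like the following; the variable name here is an assumption, not necessarily the one the role actually uses:
# Hypothetical sketch of the role invocation described above
- name: Patch the baremetal operator CSV with the expected image
  ansible.builtin.include_role:
    name: set_openstack_containers
  vars:
    cifmw_set_openstack_containers_operator: openstack-baremetal-operator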
Note: It also bumps crc ram to 24 GB as the edpm jobs are failing consistently.
Tested here: https://github.com/openstack-k8s-operators/edpm-image-builder/pull/11 and
results: https://github.com/openstack-k8s-operators/edpm-image-builder/pull/11#issuecomment-1633820238
As a pull request owner and reviewers, I checked that:
[x] Appropriate testing is done and actually running
recheck
recheck
/approve
Seems lovely - and working, according to a parallel test here: https://github.com/openstack-k8s-operators/edpm-image-builder/pull/11#issuecomment-1633820238
/lgtm
recheck
recheck
recheck
|
gharchive/pull-request
| 2023-07-11T09:37:13 |
2025-04-01T04:35:24.931394
|
{
"authors": [
"Sandeepyadav93",
"cjeanner",
"raukadah"
],
"repo": "openstack-k8s-operators/ci-framework",
"url": "https://github.com/openstack-k8s-operators/ci-framework/pull/372",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
189483139
|
Patches and patches_base should be before BuildArch
It would be nice if rdopkg helped enforce making sure patches_base and patches are after SourceX and before BuildArch.
openstack-tripleo newton-rdo branch had an instance of this
https://review.rdoproject.org/r/#/c/3757/
Expected behavior
rdopkg update-patches/patch
`
Source0: https://github.com/openstack/%{repo_name}/archive/%{commit}.tar.gz#/%{repo_name}-%{commit}.tar.gz
Source1: tripleo
BuildArch: noarch
patches_base=c3fb309727671130a32b4c19de48ec22c8530aa1
Patch0001: 0001-Use-packaged-template-directory-path.patch
`
would get turned into the following:
`
Source0: https://github.com/openstack/%{repo_name}/archive/%{commit}.tar.gz#/%{repo_name}-%{commit}.tar.gz
Source1: tripleo
patches_base=c3fb309727671130a32b4c19de48ec22c8530aa1
Patch0001: 0001-Use-packaged-template-directory-path.patch
BuildArch: noarch
`
The problem (since I don't see it listed here) is that when using %autosetup, the patches do not get auto-applied during the rpm builds unless the patches appear before the buildarch in the spec file.
rdopkg already contains a check for BuildArch after patches, but it was only performed when applying patches by git, not by %autosetup. Following review fixes that:
https://review.rdoproject.org/r/#/c/3762/
Instead of doing such an intrusive edit, rdopkg update-patches and derived actions will refuse to work on such a .spec, with a message explaining the situation.
perfect solution, thanks!
|
gharchive/issue
| 2016-11-15T19:26:59 |
2025-04-01T04:35:24.946261
|
{
"authors": [
"mburns72h",
"yac",
"yazug"
],
"repo": "openstack-packages/rdopkg",
"url": "https://github.com/openstack-packages/rdopkg/issues/88",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
580835750
|
Hover state persistst when tapping buttons on touch devices
Mobile Safari gives tapped buttons the :hover CSS styling, which shows them as pressed. However, this doesn't always properly disappear after the tap is complete. It's not a big deal but it looks sloppy and can be confusing.
Turns out the hover media feature can be used to apply CSS only to environments where the primary pointer can hover. I wrapped all of iD's :hover selectors in media queries and added :active selectors so all buttons still react when pressed on touch devices.
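A minimal sketch of that approach (selectors are illustrative, not iD's actual ones):
/* Only style hover where the primary pointer can actually hover */
@media (hover: hover) {
    button:hover { background: #f0f0f0; }
}
/* Touch devices still get pressed-state feedback */
button:active { background: #e0e0e0; }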
|
gharchive/issue
| 2020-03-13T20:44:43 |
2025-04-01T04:35:25.022273
|
{
"authors": [
"quincylvania"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/7432",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1782915663
|
Error adding bus stopping location to vertex
URL
No response
How to reproduce the issue?
Select a node along a line or area (regardless of the way’s tags).
Choose the Bus Stopping Location preset.
Nothing happens.
Screenshot(s) or anything else?
The following error appears in the console:
Uncaught TypeError: S is undefined
g preset.js:315
moreFields preset.js:46
Y1 change_preset.js:19
choose preset_list.js:427
p history.js:45
g history.js:63
perform history.js:168
m context.js:273
choose preset_list.js:422
AL on.js:3
3 preset.js:315:14
https://github.com/openstreetmap/iD/blob/b1121b11753cfe8919e5665df315060d2ec04139/modules/actions/change_preset.js#L19
https://github.com/openstreetmap/iD/blob/b1121b11753cfe8919e5665df315060d2ec04139/modules/presets/preset.js#L315
Which deployed environments do you see the issue in?
Development version at ideditor.netlify.app
What version numbers does this issue effect?
2.26.0-dev
Which browsers are you seeing this problem on?
Firefox
Hello @1ec5, thanks for sharing the issue. I have been trying to reproduce it by selecting a node along a line and changing its preset to Bus Stopping Location, but unfortunately I am not able to encounter any bug in the console. Could you please verify if I am doing something wrong here?
I’m no longer able to reproduce the issue either. (This time, I was using Firefox 134.0b3 and looking at a location around 39.28228°N, 84.28452°W.) Given the age of the bug report, we can close this one as not reproducible and reopen it if it ever comes up again.
|
gharchive/issue
| 2023-06-30T17:44:51 |
2025-04-01T04:35:25.027462
|
{
"authors": [
"1ec5",
"Deeptanshu-sankhwar"
],
"repo": "openstreetmap/iD",
"url": "https://github.com/openstreetmap/iD/issues/9719",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1037854215
|
[4.8.0] Support types of function parameters and return variable
This is an example of a stub file to define arg info: https://github.com/openswoole/swoole-src/blob/master/ext-src/swoole_table.stub.php
Then the file https://github.com/openswoole/swoole-src/blob/master/ext-src/swoole_table_arginfo.h is generated by running
./tools/gen_sub.php ./ext-src ...stub.php
The next step is to include it in the main file https://github.com/openswoole/swoole-src/blob/master/ext-src/swoole_table.cc#L21
Use the old arg info for lower versions:
https://github.com/openswoole/swoole-src/blob/master/ext-src/swoole_table.cc#L128
You may have to check the type of params and return value of each function like PHP_METHOD(swoole_table, __construct) and also fix any inconsistencies.
PRs are welcome to help with this, like https://github.com/openswoole/swoole-src/pull/23
You may also refer to the definitions at https://github.com/openswoole/ide-helper/tree/master/src/swoole/Swoole , but keep in mind the inconsistencies and bugs.
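For reference, a stub file is just a PHP file with typed signatures and empty bodies; a minimal illustrative sketch (not the actual Swoole\Table stub) looks like this:
<?php
/** @generate-class-entries */

namespace Swoole;

class Table
{
    public function __construct(int $size, float $conflict_proportion = 0.2) {}

    public function set(string $key, array $value): bool {}

    public function count(): int {}
}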
Available in v4.8.0
|
gharchive/issue
| 2021-10-27T20:55:24 |
2025-04-01T04:35:25.076892
|
{
"authors": [
"doubaokun"
],
"repo": "openswoole/swoole-src",
"url": "https://github.com/openswoole/swoole-src/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1986442175
|
feat: hydra full error on dataproc
Use HYDRA_FULL_ERROR=1 in dataproc by default to have more verbose errors. Otherwise, errors are occasionally so succinct that it's hard to know what's happening.
After failing with a few setups, this produces the intended results.
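For reference, hydra only needs the variable to be present in the environment of the process that runs it, e.g.:
# Make hydra print the full stack trace instead of the short summary
export HYDRA_FULL_ERROR=1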
Codecov Report
Merging #241 (75af590) into main (68a77b3) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #241 +/- ##
=======================================
Coverage 85.95% 85.95%
=======================================
Files 80 80
Lines 1816 1816
=======================================
Hits 1561 1561
Misses 255 255
|
gharchive/pull-request
| 2023-11-09T21:38:27 |
2025-04-01T04:35:25.080893
|
{
"authors": [
"codecov-commenter",
"d0choa"
],
"repo": "opentargets/genetics_etl_python",
"url": "https://github.com/opentargets/genetics_etl_python/pull/241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1337362909
|
Add pics_95perc_credset to V2D
As a member of the data team I want to have the flag pics_95perc_credset included in the V2D dataset produced in the ETL because this field is relevant in the L2G pipeline.
Background
I am currently working on #2608. In the process of updating the L2G dependencies, the features that we used to extract from the LD table (gs://genetics-portal-dev-staging/v2d/220401/ld.parquet) will now be processed from the V2D dataset (gs://genetics-portal-dev-data/22.05.2/outputs/v2d).
The LD table has a field called pics_95perc_credset that is dropped in the ETL. I need this field because it is used in several places of L2G.
Tasks
[ ] Remove the line of the V2D generation that drops the field (see line)
Acceptance tests
How do we know the task is complete?
When I inspect the V2D schema, the boolean flag pics_95perc_credset is present
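A quick way to check this (a sketch, assuming a Spark session with access to the bucket mentioned above):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
v2d = spark.read.parquet("gs://genetics-portal-dev-data/22.05.2/outputs/v2d")
assert "pics_95perc_credset" in v2d.columns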
How is this being used in L2G @ireneisdoomed ? The only input I knew about for L2G was V2G?
It is not used at the moment, but the new implementation will be using all the outputs of the ETL, so we will be using the V2D dataset instead of the LD table. Afaik, V2D's main dependencies are the LD, studies, and top loci tables.
|
gharchive/issue
| 2022-08-12T15:41:58 |
2025-04-01T04:35:25.085457
|
{
"authors": [
"JarrodBaker",
"ireneisdoomed"
],
"repo": "opentargets/issues",
"url": "https://github.com/opentargets/issues/issues/2695",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2170924189
|
Service categories are shown if no services are listed
If you are on the docs portal page (Swiss), you can see categories which have no services listed. This shouldn't be the case.
Gets fixed by https://github.com/opentelekomcloud/otc-sphinx-directives/pull/35
|
gharchive/issue
| 2024-03-06T08:31:13 |
2025-04-01T04:35:25.086698
|
{
"authors": [
"SebastianGode",
"tischrei"
],
"repo": "opentelekomcloud/otcdocstheme",
"url": "https://github.com/opentelekomcloud/otcdocstheme/issues/229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1174861477
|
docker compose fails to start otel-sdk-bundle-example-sf5-php on mac monterey
Docker Desktop 4.6.0 (75818)
Mac OS 12.2.1
setfacl: var/log: Not supported
setfacl: var/log/dev.log: Not supported
commenting out
setfacl -R -m u:www-data:rwX -m u:"$(whoami)":rwX var
setfacl -dR -m u:www-data:rwX -m u:"$(whoami)":rwX var
from
https://github.com/opentelemetry-php/otel-sdk-bundle-example-sf5/blob/main/docker/php/docker-entrypoint.sh#L67-L68
fixes the issue
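An alternative to commenting them out would be to make the entrypoint tolerate filesystems without ACL support, e.g. (a rough sketch, not the upstream fix):
# Sketch: skip ACL setup when the filesystem (e.g. a macOS bind mount)
# does not support POSIX ACLs, instead of aborting the entrypoint.
if setfacl -m u:www-data:rwX var 2>/dev/null; then
    setfacl -R -m u:www-data:rwX -m u:"$(whoami)":rwX var
    setfacl -dR -m u:www-data:rwX -m u:"$(whoami)":rwX var
else
    echo "setfacl not supported here, skipping ACL setup" >&2
fi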
the below log segment gets repeated indefinitely and the php app doesn't load
otel-sdk-bundle-example-sf5-php-1 | For additional security you should declare the allow-plugins config with a list of packages names that are allowed to run code. See https://getcomposer.org/allow-plugins
otel-sdk-bundle-example-sf5-php-1 | You have until July 2022 to add the setting. Composer will then switch the default behavior to disallow all plugins.
otel-sdk-bundle-example-sf5-php-1 | Installing dependencies from lock file (including require-dev)
otel-sdk-bundle-example-sf5-php-1 | Verifying lock file contents can be installed on current platform.
otel-sdk-bundle-example-sf5-php-1 | Nothing to install, update or remove
otel-sdk-bundle-example-sf5-php-1 | Generating optimized autoload files
otel-sdk-bundle-example-sf5-php-1 | 44 packages you are using are looking for funding.
otel-sdk-bundle-example-sf5-php-1 | Use the `composer fund` command to find out more!
otel-sdk-bundle-example-sf5-php-1 |
otel-sdk-bundle-example-sf5-php-1 | Run composer recipes at any time to see the status of your Symfony recipes.
otel-sdk-bundle-example-sf5-php-1 |
otel-sdk-bundle-example-sf5-php-1 | > composer dump-autoload --optimize
otel-sdk-bundle-example-sf5-php-1 | For additional security you should declare the allow-plugins config with a list of packages names that are allowed to run code. See https://getcomposer.org/allow-plugins
otel-sdk-bundle-example-sf5-php-1 | You have until July 2022 to add the setting. Composer will then switch the default behavior to disallow all plugins.
otel-sdk-bundle-example-sf5-php-1 | Generating optimized autoload files
otel-sdk-bundle-example-sf5-php-1 | Generated optimized autoload files containing 2412 classes
otel-sdk-bundle-example-sf5-php-1 | Executing script cache:clear [OK]
otel-sdk-bundle-example-sf5-php-1 | Executing script assets:install public [OK]
otel-sdk-bundle-example-sf5-php-1 |
otel-sdk-bundle-example-sf5-php-1 | setfacl: var/log: Not supported
otel-sdk-bundle-example-sf5-php-1 | setfacl: var/log/dev.log: Not supported
otel-sdk-bundle-example-sf5-php-1 exited with code 1
@grahamgreen
Thank you.
I will look into this.
I found a similar issue here: https://github.com/zolweb/docker-php7/issues/9 you can see the change in this commit https://github.com/zolweb/docker-php7/commit/1038e51467a9eb26222999a47651deeff4627c7f
5.1 --> 5.2
|
gharchive/issue
| 2022-03-21T04:05:40 |
2025-04-01T04:35:25.094098
|
{
"authors": [
"grahamgreen",
"tidal"
],
"repo": "opentelemetry-php/otel-sdk-bundle-example-sf5",
"url": "https://github.com/opentelemetry-php/otel-sdk-bundle-example-sf5/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
820123389
|
add jupyter notebook
--- adds a jupyter notebook
-- removed notebook
--- adds has_coldkey and other key file functions to the wallet
--- adds checks to the metagraph to ensure it does not fail without a wallet
--- kills the process in the executor gracefully using self.wallet.assert_coldkey ...
|
gharchive/pull-request
| 2021-03-02T15:26:06 |
2025-04-01T04:35:25.095659
|
{
"authors": [
"unconst"
],
"repo": "opentensor/bittensor",
"url": "https://github.com/opentensor/bittensor/pull/228",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
222579435
|
Support #1619 - MTD vrs FTD builds, and Makefile.am If/Else/Endif issues
As discussed in #1467 (Changes to Support IDEs)
These problems are intertwined and share a number of common traits and are being addressed together in a single commit, sadly I don't see a way to break this up.
PROBLEM 1
In src/core/Makefile.am - lists numerous IF/ELSE/ENDIF which choose which files are added or not added to a specific library. While IDEs can provide some level of exclusion, they are not that flexible.
PROBLEM 2
Need both --enable-cli-ftd --enable-cli-mtd, and likewise --enable-mtd
Need granularity such that CLI can be built with MTD only, And NCP can be built using MTD or FTD
Current ./configure solution is: FTD for both, or FTD for Neither
(there are no details for this item below)
PROBLEM 3
Linking the CLI application with the CLI library, and NCP with the NCP library
Currently when the CLI (or NCP) library is built - it is built using only one of the two internal header configurations, ie: FTD - with all of the structure/class elements, or MTD with various structure/class elements removed.
This causes problems with link time optimization and static code analysis.
PROBLEM 4
AppVeyor.
I do not have access to the version of Visual Studio used here, I have Visual Studio 2015, but the project appears to use an older version of Visual Studio, my experience has been: VS2015 will upgrade the projects... this is a one-way process
if someone can advise, it would be helpful.
DETAILS FOLLOW
================================================
PROBLEM 1 - details
a) Makefile.am files should not longer use If/Else/Endif type constructs to add/remove files from a build.
Instead, the makefile should generally compile everything.
For example some IDEs - such as Eclipse CDT based IDEs - compile all files ".c" and ".cpp" and are of the view - if the file does not belong, or should not be compiled then it should not be present in the directory. While other IDEs have other schemes, in IDE land - there does not exist a single common way to exclude/include files based upon the build configuration.
Solution: Where required, appropriate #if/#else/#endif type statements need to be added to various source files so that the entire quote "translation unit" - effectively becomes a pseudo-empty translation unit.
A number of files did not have, or do not #include "openthread-config.h" - which is required to support the #if/#else/#endif described above, this was added to a number of files.
Result: A number of places additional tests must be applied to the conditional.
For example in several places, the #if SOMETHING becomes: #if SOMETHING && OPENTHREAD_FTD
PROBLEM 2 Details
This is described above, not expanded here.
PROBLEM 3 Details
When the NCP or CLI library is built, it uses the internal thread headers, and thus structures and class members.
When the NCP or CLI application is linked, the openthread library effectively uses a different set of header files (either MTD or FTD configuration) - the net result is these internal structures are different.
What is needed is that the CLI library be built in FTD mode and in MTD mode so that it matches the openthread library; these differences are flagged as errors by compilers and static code analysis tools that do "link-time-optimization"
Thus, in all cases we have added two command line defines:
When building anything-FTD, "-DOPENTHREAD_FTD=1" must be defined
When building anything-MTD, "-DOPENTHREAD_MTD=1" must be defined
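In automake terms this ends up looking roughly like the following (library names are illustrative):
# Per-variant preprocessor flags so each library is built against the
# matching (MTD or FTD) internal headers.
libopenthread_ftd_a_CPPFLAGS = $(AM_CPPFLAGS) -DOPENTHREAD_FTD=1
libopenthread_mtd_a_CPPFLAGS = $(AM_CPPFLAGS) -DOPENTHREAD_MTD=1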
PROBLEM 4 - No access to Visual Studio OLD?
I have Visual Studio 2015 - I do not have older versions, I cannot make any of the required changes to support the above for AppVeyor.
I believe the change is simple, because I believe the Windows solution only builds the FTD configuration.
In the library build projects add: -DOPENTHREAD_FTD=1
FWIW, I've been editing the project files in etc/visual-studio directly using a text editor. You'll probably want to add OPENTHREAD_FTD to the PreprocessorDefinitions blocks.
Or you can add it to openthread-windows-config.h.
Codecov Report
Merging #1620 into master will increase coverage by 0.52%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1620 +/- ##
==========================================
+ Coverage 69.11% 69.63% +0.52%
==========================================
Files 159 159
Lines 20273 20260 -13
Branches 2486 2486
==========================================
+ Hits 14011 14108 +97
+ Misses 5360 5249 -111
- Partials 902 903 +1
Impacted Files
Coverage Δ
src/core/api/commissioner_api.cpp
46.66% <ø> (ø)
:arrow_up:
src/core/thread/thread_netif.cpp
82.05% <ø> (ø)
:arrow_up:
src/core/thread/network_data_leader.cpp
80.2% <ø> (+0.5%)
:arrow_up:
src/core/net/dns_client.cpp
4.45% <ø> (ø)
:arrow_up:
src/ncp/ncp_uart.cpp
71.6% <ø> (ø)
:arrow_up:
src/core/thread/network_data_leader_ftd.cpp
68.07% <ø> (-0.47%)
:arrow_down:
src/core/meshcop/commissioner.cpp
68.92% <ø> (ø)
:arrow_up:
src/core/mac/mac_whitelist.cpp
93.61% <ø> (ø)
:arrow_up:
examples/platforms/posix/flash.c
90.9% <ø> (ø)
:arrow_up:
src/cli/cli.cpp
37.52% <ø> (+0.09%)
:arrow_up:
... and 49 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fb30b17...6af6a11. Read the comment docs.
Nick Banks (@nibanks)
Sadly - I had to edit the vcxproj files ...
Another item is this which is different from openthread.
In openthreads case, we ./configure the NCP to be either SPI or UART - it does not build both versions, this is done in the 'openthread-config.h' file, today the choice is binary - either SPI or UART
In contrast, the windows builds do both UART and SPI at the same time, I could not duplicate the config.h method because this would require two different openthread-config.h files for windows.
Thus, in those cases I had to add the #defines on the commandline for the NCP library build.
Please take a look.
Thanks for your help.
Technically, the --with-*/--without-* options are supposed to be used for determining which external dependencies to use. It is the --enable-*/--disable-* that is supposed to be used for adjusting features.
From Autotools Mythbuster:
There are three types of options (or properly arguments ) that can be added to the configure script:
--enable-*/--disable-* arguments
The arguments starting with --enable- prefix are usually used to enable features of the program. They usually add or remove dependencies only if they are needed for that particular feature being enabled or disabled.
--with-*/--without-* arguments
The arguments starting with --with- prefix are usually used to add or remove dependencies on external projects. These might add or remove features from the project.
environment variables
Environment variables that are used by the configure script should also be declared as arguments; their use will be explained below in detail.
The first two kinds of parameters differ just for the displayed name and from the macro used, but are in effect handled mostly in the same way. They both are actually used to pass variables, in the form of --(enable|with)-foo=bar and both provide defaults for when the variable is omitted (the value yes for --enable and --with and the value no for --disable and --without).
While there is no technical difference between the two, it's helpful for both users and distribution to follow the indications given above about the use of the two parameters' kind. This allows to identify exactly what the parameters are used for.
Using the command-line environment variable to specify the bus type seems like the most natural way to go about this, but I'm fairly flexible.
My point was more something else.
There are 4 variants of the NCP application.
ncp-{MTD|FTD}-{UART|SPI}
In ./configure land - the choice of UART/SPI - is at ./configure time, and 0, 1 or 2 variants (ie: None, MTD, FTD, or both/all)
In contrast, windows has taken the other approach, Only build FTD, and build both UART and SPI
The header files are not setup in the windows flavor. The question is, should we adjust? or not.
For Windows, as far as building NCP, there isn't any product scenario that actually requires it. It is more just for ensuring build support/coverage and for the unit test library. For all other features, since Windows doesn't really have any processing or memory restrictions compared to a real device, we try to enable every feature that makes sense. There is no need to make it configurable.
Q: What is blocking this from being merged?
Do I need to address anything?
My earlier comment was addressing the use of --with-ncp-bus=uart and --with-ncp-bus=spi, which isn't technically what the --with-*/--without-* arguments are intended to be used for (see my previous comment for details). The current usage is inconsistent with how those types of arguments are intended to be used.
But I'm not going to recommend we hold back for that. We already aren't super consistent to begin with.
I'm going to give this another review pass right now, should take a few minutes.
FYI - the "--with-ncp-bus" was per Jonhui's suggestion in a different discussion.
@DuaneEllis-TI, thanks for these enhancements to bring boarder toolchain support!
|
gharchive/pull-request
| 2017-04-19T00:04:05 |
2025-04-01T04:35:25.127863
|
{
"authors": [
"DuaneEllis-TI",
"codecov-io",
"darconeous",
"jwhui",
"nibanks"
],
"repo": "openthread/openthread",
"url": "https://github.com/openthread/openthread/pull/1620",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2013432867
|
Handle git ls-remote asking for authentication
If a repository is removed or made private, git ls-remote will hang asking for user authentication and break the bumper.
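One way to avoid the hang is to make sure git can never prompt, so the call fails fast (a sketch; needs the os and os/exec imports, and the surrounding registry code may structure this differently):
func listRemoteRefs(repoURL string) ([]byte, error) {
    cmd := exec.Command("git", "ls-remote", repoURL)
    // GIT_TERMINAL_PROMPT=0 makes git error out instead of asking for credentials
    cmd.Env = append(os.Environ(), "GIT_TERMINAL_PROMPT=0")
    return cmd.Output()
}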
One question, AFAICS, git ls-remote is called directly using os/exec. Did you folks consider using go-git package for calling git commands in the registry? I wonder if there is any reason not to use that. It looks like quite an active project, and takes care of the error handling for most of the cases.
@Yantrio had used it extensively in similar work recently and was running into some extremely odd issues and inconsistencies.
Alternatively we could try using https://api.github.com/repos/:org/:repo/git/refs/tags
ex: https://api.github.com/repos/eda-dev-test/terraform-aws-test/git/refs/tags
It's also worth noting that this is not a problem for GitHub Actions: git ls-remote is unable to poll the terminal for user input and fails. It's mostly an issue when running it locally.
|
gharchive/issue
| 2023-11-28T01:37:36 |
2025-04-01T04:35:25.140829
|
{
"authors": [
"cam72cam",
"serdardalgic"
],
"repo": "opentofu/registry",
"url": "https://github.com/opentofu/registry/issues/48",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
225194378
|
Provide mechanism for unwrapping spans
In https://github.com/openzipkin/brave-opentracing/issues/39, I oversimplified my solution. My span was actually an AutoReleasingManagedSpan. I had to write additional code to unwrap it, as follows:
public static Span unwrapSpan(Span span) {
while (true) {
if (span instanceof ManagedSpan) {
ManagedSpan managedSpan = (ManagedSpan) span;
span = managedSpan.getSpan();
continue;
}
return span;
}
}
A solution could go in ot-spanmanager, but that seems like it's going to become part of ot-java soonish and more changes would probably delay that, so I'm posting it here with the expectation that a solution would wait until https://github.com/opentracing/opentracing-java/pull/115 is merged.
The solution wouldn't necessarily need to be a static utility method. With SpanManager baked into ot-java, Spans could just have an unwrap method that recursively finds the innermost wrapped span. @devinsba has something similar he wrote: https://gitter.im/openzipkin/zipkin?at=5903978bd32c6f2f094dfd75
@jakerobb what's the status on this post-115?
I think that's a better question for @devinsba.
Spring cleaning here: I believe this is fine - or at least differently wrong - now that we have the active span manager. @jakerobb do you mind if we close this issue for now and reopen/make a new one later?
Forgive me for not being up to speed on the latest developments -- I didn't notice that #115 had been merged. I poked around a bit and found ActiveSpanSource -- is that what you're referring to? Skimming the documentation, it seems like we simply don't wrap spans anymore, obviating the need for this issue.
If that's the case, please confirm and I'll be happy to close. Thanks!
Yep, that's correct! The API has changed significantly (we should do a blog post about it), but there should be no need to wrap spans, store the active span in a thread local, etc, with the new API.
Great! Thanks. A blog post with advice for people converting from earlier versions would be very helpful.
|
gharchive/issue
| 2017-04-28T21:12:18 |
2025-04-01T04:35:25.155202
|
{
"authors": [
"bhs",
"jakerobb",
"tedsuo"
],
"repo": "opentracing/opentracing-java",
"url": "https://github.com/opentracing/opentracing-java/issues/126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1879046533
|
🛑 [CN] OpenUPM Website Docs is down
In 3b6970f, [CN] OpenUPM Website Docs (https://openupm.cn/docs/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [CN] OpenUPM Website Docs is back up in a23ee95 after 21 minutes.
|
gharchive/issue
| 2023-09-03T10:46:12 |
2025-04-01T04:35:25.497424
|
{
"authors": [
"favoyang"
],
"repo": "openupm/upptime-openupmcn",
"url": "https://github.com/openupm/upptime-openupmcn/issues/1625",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1967574933
|
🛑 [CN] OpenUPM Registry /-/all is down
In d3c020f, [CN] OpenUPM Registry /-/all (https://package.openupm.cn/-/all) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [CN] OpenUPM Registry /-/all is back up in 75d1316 after 12 minutes.
|
gharchive/issue
| 2023-10-30T06:03:01 |
2025-04-01T04:35:25.499926
|
{
"authors": [
"favoyang"
],
"repo": "openupm/upptime-openupmcn",
"url": "https://github.com/openupm/upptime-openupmcn/issues/3204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1977685408
|
🛑 [CN] OpenUPM Website Docs is down
In dc90c87, [CN] OpenUPM Website Docs (https://openupm.cn/docs/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [CN] OpenUPM Website Docs is back up in e48bc17 after .
|
gharchive/issue
| 2023-11-05T09:18:35 |
2025-04-01T04:35:25.502281
|
{
"authors": [
"favoyang"
],
"repo": "openupm/upptime-openupmcn",
"url": "https://github.com/openupm/upptime-openupmcn/issues/3304",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1232549608
|
add input_info to nncf config when not defined by user
Description
Adds input_info to the NNCF config when this has not been defined by the user in the config.yaml.
This allows users to enable NNCF by adding the following to config.yaml:
optimization:
nncf:
apply: true
Where previously, the following was needed:
optimization:
nncf:
apply: true
input_info:
sample_size: null
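Conceptually, the change amounts to something like this (a sketch; the exact attribute names in the anomalib code may differ):
# Fall back to a default input_info when the user did not provide one
if "input_info" not in nncf_config:
    nncf_config["input_info"] = {"sample_size": [1, 3, *config.dataset.image_size]}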
Changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
Checklist
[x] My code follows the pre-commit style and check guidelines of this project.
[x] I have performed a self-review of my code
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[x] New and existing tests pass locally with my changes
Do you think we should add the nncf settings to the config files as well? Currently, none of the config files have nncf.
|
gharchive/pull-request
| 2022-05-11T12:21:09 |
2025-04-01T04:35:25.510823
|
{
"authors": [
"djdameln",
"samet-akcay"
],
"repo": "openvinotoolkit/anomalib",
"url": "https://github.com/openvinotoolkit/anomalib/pull/307",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
868388926
|
Can these images be used with the gstreamer vaapi plugin in a non-interactive shell?
Maybe I'm missing something obvious, but if I try to use this image without first initializing an interactive bash session, I cannot use the vaapi plugins. Compare the output of the following, executed on an Ubuntu 18 host:
Creating a container based on the data_dev image and directly running gst-inspect-1.0:
docker run -it --rm --device '/dev/dri:/dev/dri' \
openvino/ubuntu20_data_dev:latest \
gst-inspect-1.0 vaapih264enc
error: XDG_RUNTIME_DIR not set in the environment.
No such element or plugin 'vaapih264enc'
Doing the same, but running the command within the context of an "interactive" bash shell:
docker run -it --rm --device '/dev/dri:/dev/dri' \
openvino/ubuntu20_data_dev:latest \
/bin/bash -i -c "gst-inspect-1.0 vaapih264enc"
error: XDG_RUNTIME_DIR not set in the environment.
[setupvars.sh] OpenVINO environment initialized
Factory Details:
Rank primary (256)
Long-name VA-API H264 encoder
Klass Codec/Encoder/Video/Hardware
Description A VA-API based H264 video encoder
Author Wind Yuan <feng.yuan@intel.com>
# <etc...>
As I understand it, using the -i flag when starting bash has many effects, including reading startup files. I can see in the output that [setupvars.sh] runs in the interactive session. However, there seems to be something more going on. I tried creating a new derived image with an ENTRYPOINT script that sources /opt/intel/openvino/setupvars.sh (and/or .bashrc or .profile files), then execing the original command, but that is not sufficient to make the vaapi plugin available (and this may just be my misunderstanding about how bash's environment propagates -- suggestions are welcome).
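For concreteness, that kind of entrypoint looks roughly like this (a minimal sketch, using the setup script path mentioned above):
#!/bin/bash
# Source the OpenVINO environment, then exec the original command.
# Being non-interactive, this does not read .bashrc, which may matter here.
source /opt/intel/openvino/setupvars.sh
exec "$@"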
Obviously, the command above (using /bin/bash -i -c) works around this issue, but it is not ideal. Is there a way to run a gstreamer command that uses the vaapi plugin within a container based on this image without using an interactive bash session?
Thanks!
Thanks for your response, and I'm happy to hear a future image might be able to offer a more native experience.
Even so, the issue I have isn't that the setup script is required, it's that it seems to work only if the session is marked interactive. Or at least, the vaapi plugin specifically fails to load in non-interactive sessions, even if the non-interactive session directly sources any combination of the various setupvars.sh scripts and/or .bashrc/.profile files, etc. I've not found a way to get those plugins to load unless they're wrapped in bash -i -c, which means I've got to introduce a lot of ugly quoting (and it gets worse if I need to pass a quoted string as an application argument, e.g., as an element property in gstreamer pipeline syntax). In fairness, the problem might be with the vaapi plugin itself (if it's for some reason detecting whether it's in an interactive session & changing its behavior as a result).
I'm OK with creating an ENTRYPOINT that sources the correct files before execing the incoming command, but that still doesn't work in all cases, and I'm hoping y'all might have some guidance on what is going on during initialization that's causing this to fail in non-interactive session.
Thank you again!
|
gharchive/issue
| 2021-04-27T02:49:46 |
2025-04-01T04:35:25.517101
|
{
"authors": [
"saites"
],
"repo": "openvinotoolkit/docker_ci",
"url": "https://github.com/openvinotoolkit/docker_ci/issues/99",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
450617628
|
Commit 5316dfd Appears to Have Broken Invocation of the Python Weave Device Manager
It appears that commit 5316dfdacdcd2f5892d0e596a67144d9f0df7c61 somehow broke invocation of the Python weave-device-mgr, both in-situ and from an installation:
% src/device-manager/python/weave-device-mgr
/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so not exist
Could not find the WeaveDeviceMgr module!
Per the above, this should work when invoking both in-situ (in the build tree) as well as as-installed. There is specific logic in the WeaveDeviceMgr load exception handler that attempts to intuit the right location to find _WeaveDeviceMgr.so:
https://github.com/openweave/openweave-core/blob/master/src/device-manager/python/weave-device-mgr.py#L40
In the case of in-situ execution:
% src/device-manager/python/weave-device-mgr
/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so not exist
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/lib/python2.7/site-packages/weave
Could not find the WeaveDeviceMgr module!
In the in-situ case, the shared object is located as follows:
% find src -type f -name "_WeaveDeviceMgr.so"
src/device-manager/python/.libs/_WeaveDeviceMgr.so
In the case of as-installed execution, this gets slightly more complicated since weave-device-mgr is actually a symbolic link to a trampoline script, weave-run, that tries even harder to sort out the run time environment and, once it does, executes libexec/weave-device-mgr:
% tmp-install/usr/local/bin/weave-device-mgr
pkgskdrootdir is /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local
trying to build loaderpaths from pkgsdklibdir lib
trying to build loaderpaths from pkgsdklibdir lib/python/weave
trying to build loaderpaths from pkgsdklibdir lib/python2.7/dist-packages/weave
trying to build loaderpaths from pkgsdklibdir lib/python2.7/site-packages/weave
set DYLD_LIBRARY_PATH to /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib:/Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib/python/weave:/Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib/python2.7/dist-packages/weave:/Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib/python2.7/site-packages/weave
attempting to exec /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/libexec/weave-device-mgr
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install/usr/local/lib/python2.7/site-packages/weave
Could not find the WeaveDeviceMgr module!
In the installed case, the shared object is located as follows:
% find tmp-install/ -type f -name "_WeaveDeviceMgr.so"
tmp-install/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
So, the key problem, at least in the installed case, is that _WeaveDeviceMgr.so isn't being qualified with "weave/...".
Looking at the generated makefile, it would appear that pyexecdir and pkgpyexecdir are correct:
pkgpyexecdir = ${pyexecdir}/weave
pyexecdir = ${exec_prefix}/lib/python2.7/site-packages
weavedir = $(pyexecdir)/weave
pyexec_LTLIBRARIES = _WeaveDeviceMgr.la
install-pyexecLTLIBRARIES: $(pyexec_LTLIBRARIES)
@$(NORMAL_INSTALL)
@list='$(pyexec_LTLIBRARIES)'; test -n "$(pyexecdir)" || list=; \
list2=; for p in $$list; do \
if test -f $$p; then \
list2="$$list2 $$p"; \
else :; fi; \
done; \
test -z "$$list2" || { \
echo " $(MKDIR_P) '$(DESTDIR)$(pyexecdir)'"; \
$(MKDIR_P) "$(DESTDIR)$(pyexecdir)" || exit 1; \
echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(pyexecdir)'"; \
$(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(pyexecdir)"; \
}
Comparing to commit 6d6b3fc6, in-situ we find:
% src/device-manager/python/weave-device-mgr
/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so not exist
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/src/device-manager/lib/python2.7/site-packages/weave
WEAVE:ML: Binding general purpose IPv4 UDP endpoint to [::]:11095
WEAVE:IN: IPV6_PKTINFO: 22
WEAVE:ML: Listening on general purpose IPv4 UDP endpoint
WEAVE:ML: Binding general purpose IPv6 UDP endpoint to [::]:11095 ()
WEAVE:IN: IP_PKTINFO: 22
WEAVE:ML: Listening on general purpose IPv6 UDP endpoint
WEAVE:ML: Adding lo0 to interface table
WEAVE:ML: Adding en0 to interface table
WEAVE:ML: Adding awdl0 to interface table
WEAVE:ML: Adding utun0 to interface table
Traceback (most recent call last):
File "src/device-manager/python/weave-device-mgr", line 96, in <module>
from WeaveCoreBluetoothMgr import CoreBluetoothManager as BleManager
File "/Users/gerickson/Source/github.com/openweave/openweave-core/src/device-manager/python/WeaveCoreBluetoothMgr.py", line 36, in <module>
from Foundation import *
ImportError: No module named Foundation
A different failure, in a case that should still work. It would appear Bluetooth has broken this use case.
with the actual module in the same place as for commit 5316dfd:
% find src -type f -name "_WeaveDeviceMgr.so"
src/device-manager/python/.libs/_WeaveDeviceMgr.so
% find src -type f -name "_WeaveDeviceMgr.la"
src/device-manager/python/_WeaveDeviceMgr.la
Pretty much everything looks the same there. Checking the installed case:
% tmp-install-6d6b3fc6/usr/local/bin/weave-device-mgr
pkgskdrootdir is /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local
trying to build loaderpaths from pkgsdklibdir lib
trying to build loaderpaths from pkgsdklibdir lib/python/weave
trying to build loaderpaths from pkgsdklibdir lib/python2.7/dist-packages/weave
trying to build loaderpaths from pkgsdklibdir lib/python2.7/site-packages/weave
set DYLD_LIBRARY_PATH to /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib:/Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib/python/weave:/Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib/python2.7/dist-packages/weave:/Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave
attempting to exec /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/libexec/weave-device-mgr
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build/tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave
Could not find the WeaveDeviceMgr module!
it fails in the same way as for commit 5316dfd,
with the actual module in the same place as for commit 5316dfd, so it's not clear that commit 5316dfd is the problem:
find tmp-install-6d6b3fc6/ -type f -name "_WeaveDeviceMgr.so"
tmp-install-6d6b3fc6//usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
% find tmp-install-6d6b3fc6/ -type f -name "_WeaveDeviceMgr.la"
tmp-install-6d6b3fc6//usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.la
Looking at the two installs, the file listings are identical:
% find tmp-install/ | sort | tee ../openweave-core/tmp-install-find-6d6b3fc6.out
% find tmp-install/ | sort | tee ../openweave-core/tmp-install-find-5316dfd.out
cmp tmp-install-find-6d6b3fc6.out tmp-install-find-5316dfd.out
echo $?
0
Diffing the actual files in the two installations, we find the obvious and expected differences in the binary timestamps, etc. and we find the minor expected difference in ASN1OID.h include paths. Otherwise, every file that might have some bearing on this issue is identical:
% diff -q -aruN tmp-install-6d6b3fc6/ tmp-install-5316dfd/
Files tmp-install-6d6b3fc6/usr/local/include/BuildConfig.h and tmp-install-5316dfd/usr/local/include/BuildConfig.h differ
Files tmp-install-6d6b3fc6/usr/local/include/Weave/Support/ASN1.h and tmp-install-5316dfd/usr/local/include/Weave/Support/ASN1.h differ
Files tmp-install-6d6b3fc6/usr/local/include/Weave/Support/ASN1OID.h and tmp-install-5316dfd/usr/local/include/Weave/Support/ASN1OID.h differ
Files tmp-install-6d6b3fc6/usr/local/include/Weave/WeaveVersion.h and tmp-install-5316dfd/usr/local/include/Weave/WeaveVersion.h differ
Files tmp-install-6d6b3fc6/usr/local/lib/libBleLayer.a and tmp-install-5316dfd/usr/local/lib/libBleLayer.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libInetLayer.a and tmp-install-5316dfd/usr/local/lib/libInetLayer.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libRADaemon.a and tmp-install-5316dfd/usr/local/lib/libRADaemon.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libSystemLayer.a and tmp-install-5316dfd/usr/local/lib/libSystemLayer.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libWarm.a and tmp-install-5316dfd/usr/local/lib/libWarm.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libWeave.a and tmp-install-5316dfd/usr/local/lib/libWeave.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libWeaveDeviceManager.1.dylib and tmp-install-5316dfd/usr/local/lib/libWeaveDeviceManager.1.dylib differ
Files tmp-install-6d6b3fc6/usr/local/lib/libWeaveDeviceManager.a and tmp-install-5316dfd/usr/local/lib/libWeaveDeviceManager.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libWeaveDeviceManager.dylib and tmp-install-5316dfd/usr/local/lib/libWeaveDeviceManager.dylib differ
Files tmp-install-6d6b3fc6/usr/local/lib/libcrypto.a and tmp-install-5316dfd/usr/local/lib/libcrypto.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libmincrypt.a and tmp-install-5316dfd/usr/local/lib/libmincrypt.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libnlfaultinjection.a and tmp-install-5316dfd/usr/local/lib/libnlfaultinjection.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libnlunit-test.a and tmp-install-5316dfd/usr/local/lib/libnlunit-test.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libopenssl-jpake.a and tmp-install-5316dfd/usr/local/lib/libopenssl-jpake.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libssl.a and tmp-install-5316dfd/usr/local/lib/libssl.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/libuECC.a and tmp-install-5316dfd/usr/local/lib/libuECC.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.a and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.a differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveBleBase.pyc and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveBleBase.pyc differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveBleBase.pyo and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveBleBase.pyo differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveBleUtility.pyc and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveBleUtility.pyc differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveBleUtility.pyo and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveBleUtility.pyo differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveBluezMgr.pyc and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveBluezMgr.pyc differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveBluezMgr.pyo and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveBluezMgr.pyo differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveCoreBluetoothMgr.pyc and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveCoreBluetoothMgr.pyc differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveCoreBluetoothMgr.pyo and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveCoreBluetoothMgr.pyo differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.pyc and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.pyc differ
Files tmp-install-6d6b3fc6/usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.pyo and tmp-install-5316dfd/usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.pyo differ
Files tmp-install-6d6b3fc6/usr/local/libexec/mock-device and tmp-install-5316dfd/usr/local/libexec/mock-device differ
Files tmp-install-6d6b3fc6/usr/local/libexec/weave and tmp-install-5316dfd/usr/local/libexec/weave differ
Files tmp-install-6d6b3fc6/usr/local/libexec/weave-device-descriptor and tmp-install-5316dfd/usr/local/libexec/weave-device-descriptor differ
Files tmp-install-6d6b3fc6/usr/local/libexec/weave-heartbeat and tmp-install-5316dfd/usr/local/libexec/weave-heartbeat differ
Files tmp-install-6d6b3fc6/usr/local/libexec/weave-key-export and tmp-install-5316dfd/usr/local/libexec/weave-key-export differ
Files tmp-install-6d6b3fc6/usr/local/libexec/weave-ping and tmp-install-5316dfd/usr/local/libexec/weave-ping differ
I addressed the pyobjc issue noted in https://github.com/openweave/openweave-core/issues and explicitly checked out commit 5316dfd and the in-situ case works fine:
% src/device-manager/python/weave-device-mgr
/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so not exist
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-5316dfd/src/device-manager/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-5316dfd/src/device-manager/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-5316dfd/src/device-manager/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-5316dfd/src/device-manager/lib/python2.7/site-packages/weave
WEAVE:ML: Binding general purpose IPv4 UDP endpoint to [::]:11095
WEAVE:IN: IPV6_PKTINFO: 22
WEAVE:ML: Listening on general purpose IPv4 UDP endpoint
WEAVE:ML: Binding general purpose IPv6 UDP endpoint to [::]:11095 ()
WEAVE:IN: IP_PKTINFO: 22
WEAVE:ML: Listening on general purpose IPv6 UDP endpoint
WEAVE:ML: Adding lo0 to interface table
WEAVE:ML: Adding en0 to interface table
WEAVE:ML: Adding awdl0 to interface table
WEAVE:ML: Adding utun0 to interface table
Weave Device Manager Shell
weave-device-mgr > quit
I've done a side-by-side diff of the build results between commits 6d6b3fc6 and 5316dfd and, beyond trivial, non-impactful path deltas based on my two, differently-named build directories, there are no meaningful differences in the build artifacts:
% diff -x "*.a" -x "*.o" -x "*.html" -x "tmp-install*" -aruN build-6d6b3fc6/src/device-manager/ build-5316dfd/src/device-manager/
From what I can see, the "Re-bootstrapped package" portion of commit 5316dfd has also regressed in master.
There is specific logic in the WeaveDeviceMgr load exception handler that attempts to intuit the right location to find _WeaveDeviceMgr.so:
This is not correct. The logic you cite in the device manager shell (weave-device-mgr.py) is attempting to find the WeaveDeviceMgr python module (WeaveDeviceMgr.py), not the associated ctypes shared library (_WeaveDeviceMgr.so). The logic for locating the latter is here: https://github.com/openweave/openweave-core/blob/06664935accc174666675a93b18163507879f439/src/device-manager/python/WeaveDeviceMgr.py#L411-L431
Unfortunately the code in weave-device-mgr.py is pretty broken. In particular, it presumes that any exception from the statement import WeaveDeviceMgr implies that WeaveDeviceMgr.py couldn't be found. And if importing is ultimately unsuccessful, it very unhelpfully suppresses the real reason. So the error "Could not find the WeaveDeviceMgr module!" in many cases means something entirely different went wrong.
Also, regarding this:
In the installed case, the shared object is located as follows: ... tmp-install/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
This location is wrong. Since _WeaveDeviceMgr.so is not a python module (i.e. it cannot be imported), it shouldn't be installed into a module directory. Rather, it should be treated as package data that is installed within the module's subdirectory (in this case weave).
@jaylogue, so if I understand what you're saying correctly, the below install is actually backwards and transposed relative to what it should be:
% find tmp-install-master/ -name "*WeaveDeviceMgr.*"
tmp-install-master//usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.pyo
tmp-install-master//usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.pyc
tmp-install-master//usr/local/lib/python2.7/site-packages/weave/WeaveDeviceMgr.py
tmp-install-master//usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
tmp-install-master//usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.la
tmp-install-master//usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.a
in that the .py files should be up one level and the *.{so,la,a} files should be down one level in weave/...?
Per @jaylogue 's recommendation, added full print of the exception:
% src/device-manager/python/weave-device-mgr
trying to find dmLibName /Users/gerickson/Source/github.com/openweave/openweave-core/src/device-manager/python/_WeaveDeviceMgr.so
nope, trying to find dmLibName _WeaveDeviceMgr.so
nope, one last try, trying to find dmLibName /usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so does not exist
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/lib/python2.7/site-packages/weave
trying to find dmLibName /Users/gerickson/Source/github.com/openweave/openweave-core/src/device-manager/python/_WeaveDeviceMgr.so
nope, trying to find dmLibName _WeaveDeviceMgr.so
nope, one last try, trying to find dmLibName /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/python/_WeaveDeviceMgr.so
Could not import the WeaveDeviceMgr module: dlopen(/Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/python/_WeaveDeviceMgr.so, 6): Symbol not found: __ZN2nl5Weave8Platform16PersistedStorage4ReadEPKcRj
Referenced from: /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/python/../.libs/libWeaveDeviceManager.1.dylib
Expected in: flat namespace
in /Users/gerickson/Source/github.com/openweave/build-master/src/device-manager/python/../.libs/libWeaveDeviceManager.1.dylib
The real problem appears to be the missing nl::Weave::Platform::PersistedStorage::Read(char const*, unsigned int&) symbol.
This also better elucidates the problem with the exec-from-install problem:
% tmp-install-master/usr/local/bin/weave-device-mgr
pkgsdkrootdir is /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local
trying to build loaderpaths from pkgsdklibdir lib
trying to build loaderpaths from pkgsdklibdir lib/python/weave
trying to build loaderpaths from pkgsdklibdir lib/python2.7/dist-packages/weave
trying to build loaderpaths from pkgsdklibdir lib/python2.7/site-packages/weave
set DYLD_LIBRARY_PATH to /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib:/Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python/weave:/Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/dist-packages/weave:/Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave
attempting to exec /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/libexec/weave-device-mgr
trying pkgpythondir python
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/python
trying pkgpythondir lib/python/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python/weave
trying pkgpythondir lib/python2.7/dist-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/dist-packages/weave
trying pkgpythondir lib/python2.7/site-packages/weave
trying pyweavepath /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave
trying to find dmLibName /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave/_WeaveDeviceMgr.so
nope, trying to find dmLibName /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave/_WeaveDeviceMgr.so
nope, one last try, trying to find dmLibName /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave/_WeaveDeviceMgr.so
Could not import the WeaveDeviceMgr module: dlopen(/Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave/_WeaveDeviceMgr.so, 6): Library not loaded: @loader_path/../libWeaveDeviceManager.1.dylib
Referenced from: /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/weave/_WeaveDeviceMgr.so
Reason: image not found
Not a fix by any means; however, this will improve the diagnostic output for this and other such failures in the future: https://github.com/openweave/openweave-core/pull/new/bug/github-issue-250.1
the .py files should be up one level and the *.{so,la,a} files should be down one level in weave/...?
Well, perhaps the correct solution is a bit more subtle. If we wish to treat WeaveDeviceMgr itself as a module (i.e. where one can say import WeaveDeviceMgr), then either:
WeaveDeviceMgr.py operates as a stand-alone script, which gets installed into site-packages. In this case _WeaveDeviceMgr.so should be installed alongside it directly in site-packages.
or
site-packages contains a WeaveDeviceMgr subdirectory which contains WeaveDeviceMgr.py and an appropriate module init script (__init__.py). In this case _WeaveDeviceMgr.so should be installed in the WeaveDeviceMgr subdirectory.
Following convention, if there is a subdirectory site-packages/weave then this is expected to contain a module called weave, with an __init__.py script. Structured this way, WeaveDeviceMgr.py would be a class file in the weave module directory and _WeaveDeviceMgr.so would sit alongside it.
I believe making "weave" a proper module is the right thing to do going forward, as it allows us to package other classes in a single module. However, this means changing the way scripts import WeaveDeviceMgr to:
from weave import WeaveDeviceMgr
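As a rough illustration of what that buys (purely an assumed sketch, not the actual WeaveDeviceMgr.py code), the ctypes library could then be located relative to the package itself instead of probing install prefixes:

import os
import ctypes

def load_device_mgr_lib():
    # Assumes _WeaveDeviceMgr.so is installed alongside this module inside
    # the site-packages/weave/ package directory.
    lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "_WeaveDeviceMgr.so")
    if not os.path.exists(lib_path):
        raise ImportError("Could not find the Weave Device Manager library at " + lib_path)
    return ctypes.CDLL(lib_path)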
The real problem appears to be the missing nl::Weave::Platform::PersistedStorage::Read(char const*, unsigned int&) symbol.
Ah. So that's a build problem unrelated to how python operates.
If I fix / change src/device-manager/python/Makefile.am as follows:
@@ -189,6 +199,6 @@ uninstall-exec-binLINKS: $(bin_LINKS)
dir='$(DESTDIR)$(bindir)'; $(am__uninstall_files_from_dir)
install-exec-hook:
- $(call set-relocation-path,$(DESTDIR)$(pyexecdir)/_WeaveDeviceMgr.so,libWeaveDeviceManager.$(LIBWEAVE_VERSION_CURRENT).dylib,@loader_path/../)
+ $(call set-relocation-path,$(DESTDIR)$(pyexecdir)/_WeaveDeviceMgr.so,libWeaveDeviceManager.$(LIBWEAVE_VERSION_CURRENT).dylib,@loader_path/../../)
include $(abs_top_nlbuild_autotools_dir)/automake/post.am
then the execute-as-installed case works and I get the same error as for the execute-in-situ case:
Could not import the WeaveDeviceMgr module: dlopen(/Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so, 6): Symbol not found: nl::Weave::Platform::PersistedStorage::Read(char const*, unsigned int&)
Referenced from: /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/../../libWeaveDeviceManager.1.dylib
Expected in: flat namespace
in /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/../../libWeaveDeviceManager.1.dylib
There also seems to be a problem with set-relocation-path as executed by install-exec-hook. If I run the following steps:
% tmp-install-master/usr/local/bin/weave-device-mgr
[Dynamic Loader FAIL due to missing libWeaveDeviceManager.1.dylib]
% rm -f ./tmp-install-master/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
% make DESTDIR=`pwd`/tmp-install-master install
% otool -L /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
/Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so:
/usr/local/lib/libWeaveDeviceManager.1.dylib (compatibility version 2.0.0, current version 2.5.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 400.9.4)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.250.1)
We see that the install_name_tool has no effect. However, if I run it again manually on the CLI, it works and then invoking weave-device-mgr works (missing symbol notwithstanding):
% install_name_tool -change /usr/local/lib/libWeaveDeviceManager.1.dylib @loader_path/../../libWeaveDeviceManager.1.dylib /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
% otool -L /Users/gerickson/Source/github.com/openweave/build-master/tmp-install-master/usr/local/lib/python2.7/site-packages/_WeaveDeviceMgr.so
% tmp-install-master/usr/local/bin/weave-device-mgr
[Dynamic Loader FAIL due to missing nl::Weave::Platform::PersistedStorage::Read(char const*, unsigned int&)]
https://github.com/openweave/openweave-core/pull/254 fixes the second of three (two) issues.
Once this and https://github.com/openweave/openweave-core/pull/252 are merged, then someone with more familiarity with nl::Weave::Platform::PersistedStorage::Read(char const*, unsigned int&) can address that issue.
@robszewczyk fixed the last issue.
|
gharchive/issue
| 2019-05-31T05:09:18 |
2025-04-01T04:35:25.602828
|
{
"authors": [
"gerickson",
"jaylogue"
],
"repo": "openweave/openweave-core",
"url": "https://github.com/openweave/openweave-core/issues/250",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
493423540
|
Better support for multiple Weave client processes on single host
Modify the Weave Message Layer and associated code to better support multiple Weave-enabled processes operating as independent clients on a single host. This should include:
The ability to initiate outbound TCP and UDP communications from an ephemeral port.
The ability to disable TCP/UDP listening on the standard Weave port.
Automatic assignment of unique device ids to client instances.
+1
|
gharchive/issue
| 2019-09-13T16:50:44 |
2025-04-01T04:35:25.606125
|
{
"authors": [
"gerickson",
"jaylogue"
],
"repo": "openweave/openweave-core",
"url": "https://github.com/openweave/openweave-core/issues/367",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
161577044
|
WIP: restore isAlive behavior for consul liveness check (issue #713).
FYI @psuter
Will pile this on as part of #722.
Replaced with newer PR.
|
gharchive/pull-request
| 2016-06-22T01:48:46 |
2025-04-01T04:35:25.627072
|
{
"authors": [
"rabbah"
],
"repo": "openwhisk/openwhisk",
"url": "https://github.com/openwhisk/openwhisk/pull/727",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
747026994
|
[deps] Update celery to version 5
Better to address https://github.com/openwisp/openwisp-controller/issues/325 first.
Celery 5 should bring some breaking changes.
We should ensure the command to launch celery that is indicated in the README works and if not we need to update it.
Other than ensuring all tests pass, it's advised to do some manual testing running celery in the development environment, trigger the celery tasks and ensure they work.
There's no need to do anything here because it has been handled in openwisp-controller.
|
gharchive/issue
| 2020-11-19T23:56:33 |
2025-04-01T04:35:25.634288
|
{
"authors": [
"nemesisdesign"
],
"repo": "openwisp/openwisp-network-topology",
"url": "https://github.com/openwisp/openwisp-network-topology/issues/97",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
538974853
|
luci-app-bmx7: fix typos
Signed-off-by: Balázs Úr
@hnyman I think that can be merged, if I understood that correctly?
In summary, if someone changes the msgid, does he also have to update the po file to make Weblate work?
@feckert @hnyman as I know, this is the correct workflow if translations are outsourced to Weblate:
Developer (or contributor via PR) makes string changes.
Developer re-generates the POT file.
Weblate will automatically pull the changed POT file and merge it to existing PO files.
Translator adapts the translation to new strings.
Developer pulls updated translations from Weblate or Weblate pushes the changes automatically back to git repository.
PO files should not be edited manually. I did it because I was asked to.
Thanks, merged
|
gharchive/pull-request
| 2019-12-17T10:27:22 |
2025-04-01T04:35:25.640697
|
{
"authors": [
"feckert",
"urbalazs"
],
"repo": "openwrt/luci",
"url": "https://github.com/openwrt/luci/pull/3416",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2382310761
|
Cert auth and runtime base href
resolves #388
Please provide precise steps one can use to test this
Please provide precise steps one can use to test this
Test procedure:
Build branch and bind to a controller mgmt API web listener
Get an admin identity token
ziti edge create identity kpop4 -A -o /tmp/kpop4.jwt
Enroll to obtain client cert
ziti edge enroll -o /tmp/kpop4.json <(<<< eyJhbGciOiJSUzI1NiIsImtpZCI6IjBlNDNmYWNhNGI1ODNlMTc4N2NjNjYwZDdjNzRkNDk2NmI0OWVkY2QiLCJ0eXAiOiJKV1QifQ.eyJpc3MiOiJodHRwczovL3ppdGkua2VuLmRlbW8ub3BlbnppdGkub3JnOjEyODAiLCJzdWIiOiI1dUdyYm5wby0iLCJhdWQiOlsiIl0sImV4cCI6MTcxOTc2MjU1NiwianRpIjoiYjFmZjU4YzktNzU2Zi00YzZhLWI1ZTktZTZhNjlmNmRlN2I3IiwiZW0iOiJvdHQiLCJjdHJscyI6bnVsbH0.i9PeNhjGrHA0WxFJ5_uREXYibOUV5ydLlv3fn5GMHbAs0Svw8--uWrVvVBGSPs0ia8hekE-TXEk6IXywxnD1plwQWkfBBs4probJzRoPmD7S3XkMt5Zp6QUML83X7mkSbCDr97QHR-SdqYZfKQy3RLQiLeyamOF-10WtVsas_xgH0WLJB43FBtBDvcWKw5GTeVRY5KMI0l0AA0IEd9aiv_42YQnPjvKMFcxfqHhNU_DFO9DSqed_6R9yxPOUGXaqWmJVqLYLHZ1wBYhzZuZRzvzxBITl9dxsTJ8Rzd6qoKgX_SWgxnxaodjpEML-h0aLPlQhlUD6s1VymwS7mwqnsM4UH8IQYdBIKucyBvrYTeeKY9uIQ2Loov78rD61m2KpRsBb3cdir9m9rUi44Z9SkZt2O-zEKKjb-0EAoQDVKK9p1LOJvyQVr3Pwxw0Vl3PL7rSI8d6rbsRIDbkzQDM5C9Y_ipOSx2Uy4eFFrZS5QTPbW3O1xLs6SVOv3qduJbDR7suMFX_0hW3YNewK50sfx-SZ5KEUNpmxTDxOojQ1ga3BgGfBuDdtCDR5MOMdJs1vazqYTHubC_H48b-Zy300yjGwMnEtWN_XCD0rPGqX_lPZvOQ4cpfbmM64bhz65smzEK30A5xOldldRuMAzTWsJQ_zYfZWlu1UYRePHeukYKQ)
Unwrap the JSON identity file into separate cert, key, etc.
ziti ops unwrap /tmp/kpop4.json
Fix filemodes because unwrap did not obey the umask
chmod u+rw /tmp/kpop4.*
Compose a keystore for import
openssl pkcs12 -export -in /tmp/kpop4.cert -inkey /tmp/kpop4.key -out /tmp/kpop4.p12 -name "kpop4"
In Chrome security settings > certs > my certs > import
Visit the controller's mgmt API URL where /zac/ is bound
Click "login" button with empty username/password
|
gharchive/pull-request
| 2024-06-30T15:42:40 |
2025-04-01T04:35:25.804005
|
{
"authors": [
"dovholuknf",
"qrkourier"
],
"repo": "openziti/ziti-console",
"url": "https://github.com/openziti/ziti-console/pull/389",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1652957290
|
initial implementation of access_tokens table and logout
This addresses https://github.com/opeolluwa/raccoon/issues/163
Not yet ready for merge, remaining work:
[ ] manual end to end testing
[ ] address questions & todos
The questions I have are
should the token column in access_tokens have an index? i think it should unless it's intended to be kept small
does the access_tokens table need to have anything other than the token itself, and does it need to store tokens when they are created, or can it store only when logged out?
This addresses #163
Not yet ready for merge, remaining work:
[ ] manual end to end testing
[ ] address questions & todos
The questions I have are
should the token column in acesss_tokens have an index? i think it should unless it's intended to be kept small
does the access_tokens table need to have anything other than the token itself, and does it need to store tokens when they are created, or can it store only when logged out?
@derekleverenz First off, well done for the good work, I appreciate you ❤️
Regards the questions you're asked.
I made a review the algorithm I wrote earlier.
I believe it's best to persist the token when the user logs out only.
And for the second question, let's have the table also store the logout time stamp. This hook will later be used to remove expired and invalidated tokens.
In summary,
Create the token validation table with a token field and a last_valid_at field (the timestamp the token was saved)
When a user logs out, save the token.
Update the token validation method to check if the token is in the access_tokens table; if it is, this implies the token has been invalidated by the user. Return an Unauthorized error telling the user to log in again.
Also, an index would be great to achieve step 4 defined above with relatively low latency.
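A rough sketch of that flow (illustrative only; the project itself is not written in Python, and the exact schema and names are assumptions based on the points above):

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS access_tokens (
           token TEXT PRIMARY KEY,        -- PRIMARY KEY doubles as the index
           last_valid_at INTEGER NOT NULL -- timestamp the token was invalidated
       )"""
)

def logout(token: str) -> None:
    # Persist the token only at logout time.
    conn.execute(
        "INSERT OR IGNORE INTO access_tokens (token, last_valid_at) VALUES (?, ?)",
        (token, int(time.time())),
    )

def is_invalidated(token: str) -> bool:
    # During validation: if the token is present it has been logged out,
    # so the caller should return an Unauthorized error.
    row = conn.execute("SELECT 1 FROM access_tokens WHERE token = ?", (token,)).fetchone()
    return row is not None

logout("example-token")
assert is_invalidated("example-token")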
|
gharchive/pull-request
| 2023-04-04T00:16:50 |
2025-04-01T04:35:25.810648
|
{
"authors": [
"derekleverenz",
"opeolluwa"
],
"repo": "opeolluwa/raccoon",
"url": "https://github.com/opeolluwa/raccoon/pull/168",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2149231
|
'opera failed to access or upgrade your profile'
I installed Opera 11.52 and am trying to use the Selenium webdriver bindings for Python to launch Opera with the Remote interface (Python bindings don't support Opera directly). Opera starts to launch, but gives the above 'Startup error'. It launches normally otherwise. I may try with Java just to see if that goes better. Any suggestions?
I take it you are using Windows? Your Opera is installed to a directory to which OperaDriver cannot write. If you use an Opera installed to a writable directory or run your code as Administrator, that should solve it.
The core of the issue is that Opera 11.5x does not support specifying which profile to use during startup, meaning it will (on Windows) write to C:\Program Files\Opera.autotestprofile (or something similar). Specifying this is supported in 11.60 Beta and in Opera Next 12.x.
You can get Opera 12 from here: http://www.opera.com/browser/next/
With Opera 12 you should have no issues, as it writes the profile to a writable temporary directory.
See this for more reference: https://github.com/operasoftware/operadriver/issues/8
|
gharchive/issue
| 2011-11-04T22:32:47 |
2025-04-01T04:35:25.813305
|
{
"authors": [
"andreastt",
"tomsem"
],
"repo": "operasoftware/operadriver",
"url": "https://github.com/operasoftware/operadriver/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
450181558
|
Adding Knative Eventing Operator to community operators
Version 0.6.0 of knative-eventing.
These manifests have been tested against an OKD/OpenShift cluster via a subscription through OLM!
Bonus: Also tested against minikube!
|
gharchive/pull-request
| 2019-05-30T08:01:01 |
2025-04-01T04:35:25.817481
|
{
"authors": [
"matzew"
],
"repo": "operator-framework/community-operators",
"url": "https://github.com/operator-framework/community-operators/pull/407",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
644426184
|
add log info for manager options config
This information is very helpful when we are working locally running main.go, and it also helps users when they open issues, since we can check the values they provided by looking at the info in the logs.
Closes: https://github.com/joelanford/helm-operator/issues/24
Closing this one since it has been open for a while and it appears it will not be accepted.
@camilamacedo86 this would be useful. Can you please reopen this PR.
|
gharchive/pull-request
| 2020-06-24T08:32:57 |
2025-04-01T04:35:25.819235
|
{
"authors": [
"camilamacedo86",
"varshaprasad96"
],
"repo": "operator-framework/helm-operator-plugins",
"url": "https://github.com/operator-framework/helm-operator-plugins/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1320819483
|
Operatorhub Catalog ARM64 Support
Bug Report
What did you do?
1. Installed OLM on my raspberry pi k3s cluster (ARM64).
I did have to change the catalog image quay.io/operatorhubio/catalog:latest to quay.io/operatorhubio/catalog:lts. There were no logs output by the pod as you would expect; it just wasn't running, but switching to the LTS tag saw the GRPC server start up and things look healthy.
2. Installed my first operator
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-argocd-operator
  namespace: operators
spec:
  channel: alpha
  name: argocd-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
What did you expect to see?
That the operator framework would perform its magic and install argo-cd on the cluster.
What did you see instead? Under which circumstances?
$ kubectl -n operators describe sub my-argocd-operator
...
Status:
Catalog Health:
Catalog Source Ref:
API Version: operators.coreos.com/v1alpha1
Kind: CatalogSource
Name: operatorhubio-catalog
Namespace: olm
Resource Version: 622853
UID: aeef1f77-c29d-415c-a1bc-a726372b8ae9
Healthy: true
Last Updated: 2022-07-28T11:09:02Z
Conditions:
Last Transition Time: 2022-07-28T11:09:02Z
Message: all available catalogsources are healthy
Reason: AllCatalogSourcesHealthy
Status: False
Type: CatalogSourcesUnhealthy
Last Transition Time: 2022-07-28T11:10:35Z
Message: bundle unpacking failed. Reason: BackoffLimitExceeded, and Message: Job has reached the specified backoff limit
Reason: InstallCheckFailed
Status: True
Type: InstallPlanFailed
Current CSV: argocd-operator.v0.2.1
Install Plan Generation: 1
Install Plan Ref:
API Version: operators.coreos.com/v1alpha1
Kind: InstallPlan
Name: install-ztjh5
Namespace: operators
Resource Version: 625442
UID: bbb2ad75-cc7e-41b7-a59a-b368ecf65ac2
Installplan:
API Version: operators.coreos.com/v1alpha1
Kind: InstallPlan
Name: install-ztjh5
Uuid: bbb2ad75-cc7e-41b7-a59a-b368ecf65ac2
Last Updated: 2022-07-28T11:10:35Z
State: UpgradePending
Events: <none>
$ kubectl -n olm get jobs
NAME COMPLETIONS DURATION AGE
a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa 0/1 40m 40m
$ kubectl -n olm get job a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa -o yaml
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: "2022-07-28T11:09:04Z"
generation: 1
labels:
controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
job-name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
namespace: olm
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: false
controller: false
kind: ConfigMap
name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
uid: b4865d98-0576-46e9-ae18-3f7f6b9abb5d
resourceVersion: "625775"
uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
spec:
activeDeadlineSeconds: 600
backoffLimit: 3
completionMode: NonIndexed
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
suspend: false
template:
metadata:
creationTimestamp: null
labels:
controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
job-name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
spec:
containers:
- command:
- opm
- alpha
- bundle
- extract
- -m
- /bundle/
- -n
- olm
- -c
- a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
- -z
env:
- name: CONTAINER_IMAGE
value: quay.io/operatorhubio/argocd-operator:v0.2.1
image: quay.io/operator-framework/upstream-opm-builder:latest
imagePullPolicy: Always
name: extract
resources:
requests:
cpu: 10m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bundle
name: bundle
dnsPolicy: ClusterFirst
initContainers:
- command:
- /bin/cp
- -Rv
- /bin/cpb
- /util/cpb
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: IfNotPresent
name: util
resources:
requests:
cpu: 10m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /util
name: util
- command:
- /util/cpb
- /bundle
image: quay.io/operatorhubio/argocd-operator:v0.2.1
imagePullPolicy: Always
name: pull
resources:
requests:
cpu: 10m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bundle
name: bundle
- mountPath: /util
name: util
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: bundle
- emptyDir: {}
name: util
status:
conditions:
- lastProbeTime: "2022-07-28T11:10:33Z"
lastTransitionTime: "2022-07-28T11:10:33Z"
message: Job has reached the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
failed: 4
ready: 0
startTime: "2022-07-28T11:09:04Z"
Environment
operator-lifecycle-manager version:
$ grep image base/olm.yaml
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: IfNotPresent
- --util-image
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: IfNotPresent
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: Always
image: quay.io/operatorhubio/catalog:lts
Kubernetes version information:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:38:26Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2+k0s", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-07-11T06:55:47Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/arm64"}
Kubernetes cluster kind:
v1.24.3+k3s1
Additional context
Already looked at this issue but didn't provide a fix for my specific problem
https://github.com/operator-framework/operator-lifecycle-manager/issues/1138
Having a similar issue with multi-arch (amd64/arm64) cluster. Could pinpoint it down to the quay.io/operator-framework/upstream-opm-builder:latest image being used, which is documented as deprecated.
After digging down deeply into the rabbit hole and finding out that multi-arch builds have actually been implemented upstream (just for another image), it seems like the opm image at https://quay.io/repository/operator-framework/opm is the correct image to use
Should be fixed with the v0.24.0 release!
Should be fixed with the v0.24.0 release!
@awgreene It appears that the extract container in the pod trying to install an operator is still referencing the upstream-opm-builder image instead of opm as @StopMotionCuber mentions above. However, I don't know what needs to happen for that PR to be accepted and/or if that means another new release.
|
gharchive/issue
| 2022-07-28T11:53:53 |
2025-04-01T04:35:25.828909
|
{
"authors": [
"StopMotionCuber",
"agelwarg",
"awgreene",
"darktempla"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/issues/2823",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
662637631
|
Fix validate CRD compatibility check and deprecated CRD test case
Fix validate CRD compatibility
Should only run against served versions instead of the non-served
versions (! operator error)
Clean up the deprecated CRD versions test:
Using standup catsrc upgrade and get rid of unnecessary subscription
deletion.
Signed-off-by: Vu Dinh vdinh@redhat.com
Description of the change:
Motivation for the change:
Reviewer Checklist
[ ] Implementation matches the proposed design, or proposal is updated to match implementation
[ ] Sufficient unit test coverage
[ ] Sufficient end-to-end test coverage
[ ] Docs updated or added to /docs
[ ] Commit messages sensible and descriptive
This PR failed tests for 1 times with 1 individual failed tests and 4 skipped tests. A test is considered flaky if failed on multiple commits.
totaltestcount: 1
flaketestcount: 1
skippedtestcount: 4
flaketests:
classname: End-to-end
name: 'Install Plan with CRD schema change Test missing existing versions in
new CRD'
counts: 1
details:
count: 1
error: |4-
/home/runner/work/operator-lifecycle-manager/operator-lifecycle-manager/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:43
Error Trace: installplan_e2e_test.go:849
value.go:460
value.go:321
table_entry.go:37
runner.go:113
runner.go:64
it_node.go:26
spec.go:215
spec.go:138
spec_runner.go:200
spec_runner.go:170
spec_runner.go:66
suite.go:62
ginkgo_dsl.go:226
ginkgo_dsl.go:214
e2e_test.go:54
Error: Received unexpected error:
clusterserviceversions.operators.coreos.com "nginx-bc9nv-beta" not found
/home/runner/work/operator-lifecycle-manager/operator-lifecycle-manager/vendor/github.com/stretchr/testify/require/require.go:1005
meandurationsec: 26.277321
skippedtests:
classname: End-to-end
name: 'Subscription updates existing install plan'
counts: 1
details: []
meandurationsec: 0.37271
classname: End-to-end
name: 'Subscriptions create required objects from Catalogs Given a Namespace
when a CatalogSource is created with a bundle that contains prometheus objects
creating a subscription using the CatalogSource should install the operator
successfully'
counts: 1
details: []
meandurationsec: 2.119683
classname: End-to-end
name: 'Catalog image update'
counts: 1
details: []
meandurationsec: 0.487384
classname: End-to-end
name: 'Subscriptions create required objects from Catalogs Given a Namespace
when a CatalogSource is created with a bundle that contains prometheus objects
creating a subscription using the CatalogSource should have created the expected
prometheus objects'
counts: 1
details: []
meandurationsec: 2.165053
/retest
This PR failed tests for 1 times with 1 individual failed tests and 4 skipped tests. A test is considered flaky if failed on multiple commits.
totaltestcount: 1
flaketestcount: 1
skippedtestcount: 4
flaketests:
classname: End-to-end
name: 'CSV emits CSV requirement events'
counts: 1
details:
count: 1
error: |4-
/home/runner/work/operator-lifecycle-manager/operator-lifecycle-manager/test/e2e/csv_e2e_test.go:2667
Timed out after 60.000s.
Expected success, but got an error:
<*errors.StatusError | 0xc001208000>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {
SelfLink: "",
ResourceVersion: "",
Continue: "",
RemainingItemCount: nil,
},
Status: "Failure",
Message: "Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"csv-9swqs\": the object has been modified; please apply your changes to the latest version and try again",
Reason: "Conflict",
Details: {
Name: "csv-9swqs",
Group: "operators.coreos.com",
Kind: "clusterserviceversions",
UID: "",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 409,
},
}
Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "csv-9swqs": the object has been modified; please apply your changes to the latest version and try again
/home/runner/work/operator-lifecycle-manager/operator-lifecycle-manager/test/e2e/csv_e2e_test.go:2746
meandurationsec: 64.845177
skippedtests:
classname: End-to-end
name: 'Subscriptions create required objects from Catalogs Given a Namespace
when a CatalogSource is created with a bundle that contains prometheus objects
creating a subscription using the CatalogSource should have created the expected
prometheus objects'
counts: 1
details: []
meandurationsec: 1.13717
classname: End-to-end
name: 'Subscriptions create required objects from Catalogs Given a Namespace
when a CatalogSource is created with a bundle that contains prometheus objects
creating a subscription using the CatalogSource should install the operator
successfully'
counts: 1
details: []
meandurationsec: 2.106923
classname: End-to-end
name: 'Catalog image update'
counts: 1
details: []
meandurationsec: 0.112922
classname: End-to-end
name: 'Subscription updates existing install plan'
counts: 1
details: []
meandurationsec: 0.780872
/retest
/retest
/lgtm
/retest
/approve
/backport release-4.5
/cherry-pick release-4.5
|
gharchive/pull-request
| 2020-07-21T06:15:54 |
2025-04-01T04:35:25.847376
|
{
"authors": [
"Bowenislandsong",
"dinhxuanvu",
"exdx",
"kevinrizza",
"njhale"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/pull/1659",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
475613548
|
Bug 1732914: Operator upgrades fail when versions field is not set
Fix the ensureCRDVersions to account for version field
Add test cases for version scenarios
Signed-off-by: Vu Dinh vdinh@redhat.com
will this change be back ported to 4.2?
@rthallisey This should be a 4.2 bug fix, so it should be in 4.2.
/approve
/lgtm
/retest
/retest
/retest
/retest
/retest
/retest
/retest
/retest
/retest
/retest
/retest
/retest
/retest
|
gharchive/pull-request
| 2019-08-01T10:51:20 |
2025-04-01T04:35:25.852408
|
{
"authors": [
"dinhxuanvu",
"ecordell",
"rthallisey"
],
"repo": "operator-framework/operator-lifecycle-manager",
"url": "https://github.com/operator-framework/operator-lifecycle-manager/pull/973",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1605614879
|
CI Broken?
Hey,
So Operator SDK has been doing some minor work in this repo recently, and it seems the CI is now broken, and the CI seems dependent on some containerbuildsystem stuff we don't have access to.
I think @zach-source has access to these resources. Are you still available to help maintain this repo? We at SDK are thinking of moving the pieces we use inline to the sdk repo and archiving this repository; what are your thoughts on that?
I'll take a look at the CI and the PR.
I took a peek the other day. The image still appear to be available but something else it going wrong. I checked a past git commit to see if it was an introduced change but it failed as well. No one has really touched the integration tests since me so I'm not entirely sure what is going wrong. I'd like to get the integration tests working again but haven't had time to get into it. I'll attempt to grab a few cycles this weekend. The integration tests are inherited from the python repository and I only modified them enough to work with the golang version; so my knowledge is a little lax.
cool, thanks. It looks to me like the test runner was failing to inspect the image for some reason.
I've just disabled it for now; from trying to pull the images I couldn't see any issues with the images themselves. Until the proper time can be put into fixing them, the overall functionality hasn't changed, so there is little risk of drift. We'll need to address a fix sometime in the future.
Skopeo exec was missing from the build image (used to be included automatically). Changed CLI default to crane; job is now enabled and passing.
|
gharchive/issue
| 2023-03-01T20:23:43 |
2025-04-01T04:35:25.855403
|
{
"authors": [
"jberkhahn",
"zach-source"
],
"repo": "operator-framework/operator-manifest-tools",
"url": "https://github.com/operator-framework/operator-manifest-tools/issues/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
552531175
|
bundle command user documenation
Now that bundle create has been added, users can create operator manifest images from their projects to use in deployment scenarios, e.g. OLM. These scenarios, and more fundamentally the reason bundle build was added in the first place, need documentation so users can deploy their operators effectively.
More information on bundle images: operator bundle OEP.
@estroz so should bundle create now be used instead of opm alpha bundle build?
For reference:
https://github.com/operator-framework/operator-registry/blob/master/docs/design/operator-bundle.md#build-bundle-image
@jdockter opm alpha bundle build == operator-sdk bundle create with a few CLI differences:
--default == --default-channel
opm alpha bundle generate == operator-sdk bundle create --generate-only
--overwrite does not exist in operator-sdk bundle create
opm alpha bundle build --tag <image-tag> == operator-sdk bundle create <image-tag>
The output and underlying behavior is the same, except nothing will be written to disk if --generate-only is unset (the default).
These differences will be documented very soon. Feel free to comment here if you have any further questions about usage.
|
gharchive/issue
| 2020-01-20T22:16:19 |
2025-04-01T04:35:25.874338
|
{
"authors": [
"estroz",
"jdockter"
],
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/issues/2442",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
474210819
|
cmd/.../genutil: fix typo in comment
Description of the change:
Fix typo in the generateWithHeaderFile comment.
Motivation for the change:
"arguemnt" isn't a valid English word.
/ok-to-test
Not sure why the Travis job failed. Maybe it's unstable?
@johananl Yeah, it's flaky. We can babysit it until it passes.
@johananl If you can fix the merge conflict, I can LGTM again and see if we can get the tests to pass this time around. Sorry about the flakes on a change that obviously won't break anything!
@joelanford done. Let's hope CI passes this time.
/retest
|
gharchive/pull-request
| 2019-07-29T19:12:24 |
2025-04-01T04:35:25.877453
|
{
"authors": [
"estroz",
"joelanford",
"johananl"
],
"repo": "operator-framework/operator-sdk",
"url": "https://github.com/operator-framework/operator-sdk/pull/1746",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
450150585
|
compiler: change load/save/make/get_jit_dir/get_codepy_dir into methods
This allows Compiler subclasses to customise the locations of the build directories etc, for cases where tempfile.gettempdir() returns something that we don't want to use.
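As a rough illustration of what this change enables, a Compiler subclass could point the JIT build directory at a scratch filesystem. This is only a sketch: the base class name, import path and return type are assumptions, not the exact devito API.
from pathlib import Path

from devito.compiler import GNUCompiler  # assumed import path

class ScratchCompiler(GNUCompiler):
    """Compiler that keeps generated sources/objects out of tempfile.gettempdir()."""

    def get_jit_dir(self):
        # Return a node-local scratch directory instead of the default
        # location derived from tempfile.gettempdir().
        return Path('/scratch/devito-jit')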
Codecov Report
Merging #838 into master will decrease coverage by <.01%.
The diff coverage is 84.61%.
@@ Coverage Diff @@
## master #838 +/- ##
==========================================
- Coverage 88.84% 88.83% -0.01%
==========================================
Files 100 100
Lines 11341 11340 -1
Branches 2234 2234
==========================================
- Hits 10076 10074 -2
- Misses 981 982 +1
Partials 284 284
Impacted Files | Coverage Δ
--- | ---
devito/yask/wrappers.py | 86.38% <100%> (-0.07%) :arrow_down:
devito/operator.py | 93.43% <100%> (ø) :arrow_up:
devito/compiler.py | 48.98% <82.35%> (ø) :arrow_up:
devito/data/allocators.py | 72.78% <0%> (-0.69%) :arrow_down:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5a63696...d4dcba5. Read the comment docs.
Thanks! Merged
|
gharchive/pull-request
| 2019-05-30T06:22:25 |
2025-04-01T04:35:25.886517
|
{
"authors": [
"FabioLuporini",
"codecov-io",
"tjb900"
],
"repo": "opesci/devito",
"url": "https://github.com/opesci/devito/pull/838",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
280626837
|
SMTP Notification over SSL/TLS not working
Hi,
The SMTP notification is not working with some providers or in generally when "Enable SMTP over SSL/TLS" is checked.
Dec 8 22:03:11 | opnsense: /system_advanced_notifications.php: Could not send the message to me@home.de -- Error: could not connect to the host "smtp.strato.de": ??
-- | --
When I disable SSL/TLS and use port 587 the notification will be send out.
My OPNsense version is OPNsense 17.7.9_8-amd64. I'm new to OPNsense, so I don't know in which version this issue first appeared.
Because I've found a post in the German OPNsense forum which describes the same behaviour with another provider, and the same issue in the pfSense bug list, I think it could also be a bug in OPNsense.
German forum: https://forum.opnsense.org/index.php?topic=6263.msg26469#msg26469
pfSense bug: https://redmine.pfsense.org/issues/5604
Thank you.
Jas Man
Although the symptoms are similar to what some pfSense users experience. The causes are totally different.
In the OPNsense case it is caused by notices.smtp.inc: fsockopen being passed the IP address instead of domain name. Thus certificate validation fails.
To test, "$ip" can be replaced with "$domain" in the 2 fsockopen calls.
I just tracked this down a few minutes ago, so it's fresh off the press. Of course the SMTP server has to present a certificate that is trusted by the client (OPNsense), so if you signed your own, the CA will need to be added.
We don't really use the notifications, there are only a few places left where they are triggered. Maybe monit is a better alternative to receive status messages.
Since it is available and being used it would seem appropriate for it to be functional. And secure.
As it stands, SMTPS fails due to the certificate being verified against the IP address instead of the domain.
And STARTTLS is open to MITM due to peer verification being disabled. Perhaps that was done to accommodate the fsockopen calls using the IP address instead of the domain.
Passing $domain to fsockopen instead of $ip allows both SMTPS and STARTTLS (if verification is enabled) to establish secure connections.
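For illustration, the change amounts to something like the following (a sketch, not the actual notices.smtp.inc code; apart from $ip and $domain, the variable names are made up):
// Before: the certificate is checked against the IP address and verification fails.
$smtp_conn = fsockopen("ssl://" . $ip, $smtp_port, $errno, $errstr, $timeout);
// After: connect by hostname so the certificate CN/SAN can be verified.
$smtp_conn = fsockopen("ssl://" . $domain, $smtp_port, $errno, $errstr, $timeout);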
@NOYB in case you would like to work on a fix, certainly feel free to do so and offer a pull request.
More interested in the architect correcting the security hole they created. Already provided an outline of what needs to be done.
discussed here recently: https://forum.opnsense.org/index.php?topic=7165.0
Overcome by https://github.com/opnsense/core/issues/2919
|
gharchive/issue
| 2017-12-08T21:47:31 |
2025-04-01T04:35:25.914102
|
{
"authors": [
"AdSchellevis",
"JasMan78",
"NOYB",
"fichtner"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/1983",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1346077751
|
PHP 8.1 support
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
[x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Is your feature request related to a problem? Please describe.
Although 8.0 was our goal for the year, if we can move to 8.1 that would be even better.
Describe the solution you'd like
Phalcon 5 needs to support it first. Likely to be carried out after 22.10 business release.
Check https://www.php.net/manual/en/migration81.php but otherwise builds and runs fine for 23.1 beta already.
|
gharchive/issue
| 2022-08-22T09:12:22 |
2025-04-01T04:35:25.917660
|
{
"authors": [
"fichtner"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/5979",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1848598945
|
Virtual IP status filtering issue LAN IPv6 CARP VIP is missing
Important notices
Before you add a new report, we ask you kindly to acknowledge the following:
[x] I have read the contributing guide lines at https://github.com/opnsense/core/blob/master/CONTRIBUTING.md
[x] I am convinced that my issue is new after having checked both open and closed issues at https://github.com/opnsense/core/issues?q=is%3Aissue
Describe the bug / To Reproduce
I had created multiple IPv6 CARP VIPs under Virtual IP in Interfaces -> Virtual IPs -> Settings.
At least 1 on WAN and 1 on LAN (ULA)
When I go to Interfaces -> Virtual IPs -> Status
The 1 or more on the WAN side are shown when I have only CARP selected in the drop down.
But the one the LAN side is missing.
Might be related to #6543
Describe alternatives you considered
Also select IP Alias from the drop down and then we see it.
OPNsense 23.7.1-amd64
When the virtual ip type is alias, that's logical. The filter matches the type selected at the vip configured in Interfaces: Virtual IPs: Settings.
The odd thing is, it's a CARP; it also creates a CARP address on the interface if you check on the console.
As a test I added some IP aliases and things got more 'intriguing'.
As you can see, it shows the CARP on the IP alias selection and does NOT show the IP aliases.
IPv6 compressed format: it helps to be specific when opening tickets. This only happens when ifconfig's output doesn't equal the one provided in the VIP (:: vs :0:).
|
gharchive/issue
| 2023-08-13T13:36:59 |
2025-04-01T04:35:25.923702
|
{
"authors": [
"AdSchellevis",
"Lennie"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/6742",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
131843203
|
show configd in service list so it can be restarted manually from there if needed
@AdSchellevis is that ok with you? feel free to grab this if you want :)
via: b3171b45-7ada-4850-9fe9-ca5313bb7054
Good idea :)
@fichtner yes, good idea, let's add that for the next release.
|
gharchive/issue
| 2016-02-06T10:44:12 |
2025-04-01T04:35:25.925091
|
{
"authors": [
"AdSchellevis",
"fichtner",
"oparoz"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/754",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2432193619
|
Cores and threads appear to be swapped in the CPU and thermal sensors widgets
Reporting 4 cores and 2 threads for 11th Gen Intel Core i3-1115G4.
Should be 2 cores and 4 threads.
Similar in thermal sensors widget reporting 4 cores and 2 zones.
24.7
Reference source:
https://ark.intel.com/content/www/us/en/ark/products/208652/intel-core-i3-1115g4-processor-6m-cache-up-to-4-10-ghz.html
fixed 6 hours ago 678eaf2fb99afc
Fixes CPU widget but not Thermal Sensors widget. Thermal Sensors widget still shows 4 cores (0,1,2,3) and 2 zones (0,1).
I did a tweak for thermal sensors in af74aa42ab, but from the looks of it, it just reports what you have. Might be worth checking sysctls:
# configctl system temp
Cheers,
Franco
Doesn't resolve the issue and seems to randomly inflict some strange behavior.
Mostly displays this:
Infrequently displays this:
The configctl system temp command seems to report threads as cores. Every thread pair is a single core though.
How about, if hyperthreading is enabled, returning only every other "core", i.e. the even- or odd-numbered "cores"?
11th Gen Intel(R) Core(TM) i3-1115G4 @ 3.00GHz
(2 cores, 4 threads)
The configctl system temp command seems to report threads as cores. Every thread pair is a single core though.
Dude, just post the actual output. I don’t have time for this.
configctl system temp
dev.cpu.0.temperature=49.0C
dev.cpu.1.temperature=49.0C
dev.cpu.2.temperature=57.0C
dev.cpu.3.temperature=57.0C
hw.acpi.thermal.tz0.temperature=27.9C
hw.acpi.thermal.tz1.temperature=10.1C
Your first screenshot looks correct. If you apply the patch I mentioned, does it still jump around between both screenshots? I tried to adjust the logic in the widget that wasn't fully accurate.
Maybe we should replace "Core" with "CPU" since we don't know if it's a core or a thread for one reason or another (as shown by the duplicated temp readings between 0/1 and 2/3; here it's likely a thread).
After applying the patch is when I first noticed the jumping around.
Here is what I consider to be correct for this CPU.
(11th Gen Intel Core i3-1115G4)
Reworked /usr/local/opnsense/scripts/system/temperature.sh to return only one "cpu" temperature per core if hyperthreading is enabled, i.e. multiple threads per core. Filtering for 2 threads per core is pretty concise: just exclude the entries ending with an odd digit, '[0-9]*[13579]'.
Still tinkering on it a bit.
More than 2 threads per core is a little involved but doable. Is support for more than 2 threads per core needed?
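A minimal sketch of that filter, assuming exactly 2 threads per core (so the odd-numbered entries are the sibling threads):
# keep dev.cpu.N.temperature only for even-numbered (physical-core) entries
sysctl -a | grep -E 'dev\.cpu\.[0-9]+\.temperature' | grep -v 'dev\.cpu\.[0-9]*[13579]\.temperature'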
Yeah that is calling for trouble trying to undo how the system expands the temperature readings. May consider merging in the widget like it's already doing for when all CPUs detect the same temperature, but then you also have to adjust the numbering again as it would say CPU 0, CPU 2. Frankly this is not worth the effort and will break for the next person.
From working with this I've come to the conclusion that threads are considered to be "CPUs" by sysctl.
For 2 threads (cpus) per core, just excluding the odd numbered "cpus" seems pretty safe.
Don't like the dynamic nature of not reporting if same temp. Keeps changing the number of items shown.
But I'm not sure what this is:
It should try to condense all CPU readings into 1 and if it fails display all, which would only be problematic when the temperatore matches up for both of your cores.
But that screenshot is strange, because the zone reading is duplicated which is never handled except being pushed through as is.
That snapshot is from before any of my changes. Only with the patch that you made a few days ago. It is strange for sure.
With my changes a similar thing also happens but differently. It sometimes displays only "Core 0", "Zone 0", "Zone 1" and then "Zone 0" again. Like since it received 4 items it insists on displaying 4 items.
I saw this happen now once, one of the four cpus was a different temp and it expanded correctly but then glitched as you posted earlier. Safe to assume this is a JS glitch. I'll fix this and close when I manage to reliably reproduce.
|
gharchive/issue
| 2024-07-26T13:00:39 |
2025-04-01T04:35:25.936331
|
{
"authors": [
"NOYB",
"fichtner"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/issues/7657",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
382184981
|
sidebar - optimize
@fichtner sidebar - optimize -> not sure if this works now!
@fichtner sorry, I should have run the 4-space conversion on a lot of this in Notepad++.
@opnsenseuser I'm happy to fix this manually for now. Not worth spending too much time on on your end. :)
|
gharchive/pull-request
| 2018-11-19T12:05:11 |
2025-04-01T04:35:25.937997
|
{
"authors": [
"fichtner",
"opnsenseuser"
],
"repo": "opnsense/core",
"url": "https://github.com/opnsense/core/pull/2934",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
302383704
|
Monit does not update
Since updating to Monit 1.5 today, new or changed settings do not work as expected.
Also email notifications are not consistent. The 'Test Configuration' button does not work consistently.
Clicking the 'Reload Configuration' button does not reinitialize the status.
Monit also shows control issues on one of the boxes. Shows
'/usr/local/etc/monitrc:17: syntax error 'failed'' when testing the configuration. Tried to reinstall Monit with no success.
Non of these issues existed yesterday.
This should help for now:
# opnsense-revert -r 18.1.2 os-monit
Definitely an issue with 1.5. All works with 1.4.
One thing I noticed is the Apply button in the General Settings does not do anything with 1.5
With 1.4 the circle spins and changes are made to /etc/monitrc
/usr/local/etc/monitrc:17: syntax error 'failed'
@tpcr you have an error in your 'Service Test Settings' that probably wasn't detected before 1.5
Could you post the lines 15 to 19 from your monitrc?
One thing I noticed is the Apply button in the General Settings does not do anything in version 1.5
The behavior has changed slightly.
The Apply button does exactly what its name says: "Apply the changes to the config" and nothing more.
In 1.4 it also reloads the service. This took some time and that's why you saw the spinner.
@fichtner I think we need a better feedback here. Maybe a status line "Your config has changed. Please click Apply and reload the service" which disappears after clicking Apply/Reload?
I'll import the monit plugin to base and take a look this week. All pages apply + reload/restart; maybe if restart takes too long we can get away with a reload?
No, it already reloads. :smirk:
The problem was that the tables apply the config via dialog boxes. And the 'General Settings' via Apply button.
This was inconsistent since after editing tables you had to click on Reload.
Editing 'General Settings' only Apply was needed.
Here is the monitrc file
# DO NOT EDIT THIS FILE -- OPNsense auto-generated file
set httpd unixsocket /var/run/monit.sock
allow localhost
set daemon 60 with start delay 30
set logfile syslog facility log_daemon
set mailserver 192.168.10.24 port 2525 username me@here.com password "xxxxxxxx"
set alert me@here.com { action,resource,status } mail-format { from: me@here.com } reminder on 10 cycles
check system $HOST
if failed link then alert # This is line 17 #
if memory usage is greater than 25% then alert
if cpu usage is greater than 25% then alert
check filesystem RootFs with path /
if space usage is greater than 50% then alert
check program CPUtemp with path /usr/local/bin/CheckCPUTemp.sh
if status notequal 0 then alert
Monit has been uninstalled from the box with the issue, but the /etc/monitrc file is still there.
Monit 1.5 has been downgraded on the other boxes to version 1.4. With 1.5 installed it did not send notifications.
It acts as though it is getting the wrong monitrc file in version 1.5.
Also, one other thing I noticed in Monit: when email 'Secure Connection' is enabled, it forces SSL and not STARTTLS, so when you import the configuration from notifications and STARTTLS was used, it does not use the same Secure Connection in Monit. Which is why I think I get no notifications in version 1.5. Once I imported the config from notifications, it stored the info in the 'ghost' monitrc and uses that instead of the actual one. Making changes in Monit has no effect on /etc/monitrc on the box with the issue.
For more in depth info. When I first used monit 1.5, I made a change to the 'Service Tests Settings' condition on one of the entries to test. I changed the memory usage to invoke a trigger. I may have entered a wrong value which initially caused the error. But I was never able to correct it. The monitrc file made the change back, but monit would never start after that with the error I caused in the initial test.
So maybe a good test is to create a condition error on purpose and see if you can get rid of it.
check system $HOST
if failed link then alert # This is line 17 #
if memory usage is greater than 25% then alert
if cpu usage is greater than 25% then alert
'failed link' is a network interface test and you have it linked to the system test.
see Monit Network Interface Tests
Monit has been uninstalled from the box with the issue, but the /etc/monitrc file is still there.
Really /etc/monitrc ?
This could cause your trouble. Because:
Monit is configured and controlled via a control file called monitrc. The default location for this file is ~/.monitrc. If this file does not exist, Monit will try /etc/monitrc, then @sysconfdir@/monitrc and finally ./monitrc. ...
The plugin uses only /usr/local/etc/monitrc.
Also, one other thing I noticed in Monit. When email 'Secure Connection' is enabled, it forces SSL and not STARTTLS, so when you import the configuration from notifications and STARTTLS was used, it does not use the same Secure Connection in Monit. Which is why I think I get no notifications in version 1.5
As far as I know Monit doesn't support STARTTLS for alerts. The notification import enables the SSL instead because most mail servers which support STARTTLS support SSL too.
I believe that the service is pulling the data from somewhere other than /usr/local/etc/monitrc
I guess from /etc/monitrc. :smirk:
Thanks, the Network link was the issue. Error went away as soon as I removed it and reloaded it.
The documentation for this plug-in is so sparse that I had no way to know where the Network link test should have gone.
Also the way that the Apply button works is different and is misleading as to what it really does now.
The Monit install does not create monitrc at ~/ on my systems. The only place it exists is in /usr/local/etc.
The only reason I bring up the STARTTLS issue is because when importing from notifications, SSL does not exist so if the email server is setup to receive only STARTTLS, for whatever reason, it will not work in Monit. The GUI should have a warning when importing the secure connection since notifications and Monit use different methods. Don't assume every email server accepts both.
Thanks for clearing this up. I will close this issue.
The documentation for this plug-in is so sparse ...
A good starting point is the Monit Mini Howto
And then the Monit Documentation itself.
I read those; there's no explanation of what all the service tests are and how to use them, which is what got me in trouble in the first place. Also no docs on setting up custom tests. What is the syntax for using conditions with a custom script? The docs are sparse, so expect many issues if users like me dive into Monit.
For example, what conditions do I use for Network Saturation or Network link, and do these only work with Type 'Network'? Do I need a specific name for a service or can I use anything?
Services are all things you can monitor: system, programs, filesystems or network interfaces just to name a few.
Possible service tests are listed in the SERVICE TESTS section of the Monit documentation.
But you have to define conditions at which you want to have an alert or other actions like restart, unmonitor etc.
These conditions are service specific. The system cannot have a failed link. It can have an overloaded CPU for example.
But a network interface test can check the link.
The NETWORK SERVICE TEST section describes all tests you can do on a network interface.
Example step by step:
What to monitor? Network Link.
Which condition? Link is down, i.e. failed.
What to do? Try to bring it up again.
First create a test:
Name: NetworkLinkRestart <- choose a name of your choice
Condition: failed link <- see Monit Documentation
Action: Restart
Now you can reuse this test on any network service check.
Then a service:
Name: Iface_OPT1 <- freely selectable, I add the interface name to see which one failed (see Interfaces->Assignments)
Type: Network
Address: <- leave it empty, we are not interested in addresses
Interface: OPT1 <- the interface to monitor
Start: /etc/rc.d/netif start vtnet1 <- vtnet1 is OPT1 see assignments
Stop: /etc/rc.d/netif stop vtnet1
Test: NetworkLinkRestart <- the test created above
Create a service for all your interfaces with the same test.
Same with saturation (your networkcard must support it):
Test:
Name: Saturation90
Condition: saturation greater 90%
Action: Alert
Service:
Name: Iface_OPT1_90
Type: Network
Interface: OPT1
Test: Saturation90
Start/Stop is not needed since the test creates only alerts.
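For reference, the first (link restart) example boils down to a monitrc fragment roughly like the one below; this is a hand-written sketch, and the file the plugin actually generates may differ in detail.
check network Iface_OPT1 with interface vtnet1
    start program = "/etc/rc.d/netif start vtnet1"
    stop program = "/etc/rc.d/netif stop vtnet1"
    if failed link then restart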
That was a GREAT help. Thank you.
Now what about a custom test. I have created a script to check CPU temps.
How can I add a condition that has the temp value to check
ie. Condition - "CPU temp greater than 60 degrees"
here is my script
#!/bin/csh
set MaxCPUTemp = 60
set status = 0
set NumCPUs = `sysctl -n kern.smp.cpus`
set CurrentCPU = 0
while ( $CurrentCPU < $NumCPUs )
    set CPUTemp = `sysctl dev.cpu.$CurrentCPU.temperature | awk '{print $2}' | awk -F. '{print $1}'`
    echo "CPU $CurrentCPU temp: $CPUTemp"
    if ( $CPUTemp >= $MaxCPUTemp ) then
        exit 1
    endif
    @ CurrentCPU = $CurrentCPU + 1
end
exit 0
What can I add to a script that passes a variable/value to the script?
Usually as a parameter.
But now I see the plugin doesn't allow that. It's a bug. :worried:
Anyway the PROGRAM STATUS TEST section says that you can test the exit code only.
Therefore let the script exit with the CPU temperature and create a test with condition 'status greater 60'.
Then create a custom service for each CPU and a script for each CPU as workaround for the bug.
After the fix you can use a single script and provide the CPU number as parameter.
#!/bin/csh
#set CPU = $1
set CPU = 0
set CPUTemp = `sysctl dev.cpu.$CPU.temperature | awk '{print $2}' | awk -F. '{print $1}'`
echo "CPU$CPU temp: $CPUTemp"
exit $CPUTemp
Setting the Network Link action to restart, will that run the start/stop commands, or restart OPNsense?
it runs the Start/Stop commands
I simplified the script because I got an error. Getting the same error.
Here is the script I used
#!/bin/csh
set CPUTemp = `sysctl dev.cpu.0.temperature | awk '{print $2}' | awk -F. '{print $1}'`
echo "CPU0 temp: $CPUTemp"
exit $CPUTemp
Yes, was a quick hack, not tested.
But I see you got the idea :smiley:
And with the fix of #585 you need only one script with the CPU number as parameter.
Excellent. Let me know when fix is ready and I will test it.
@tpcr the bug was fixed in #587
I did make the changes and used the following script, which seems to work using a condition of "status greater 1". I could not use $CPUTemp as the exit code.
#!/bin/csh
set MaxCPUTemp = 60
set NumCPUs = `sysctl -n kern.smp.cpus`
set CurrentCPU = 0
while ( $CurrentCPU < $NumCPUs )
set CPUTemp = `sysctl dev.cpu.$CurrentCPU.temperature | awk '{print $2}' | awk -F. '{print $1}'`
echo "CPU $CurrentCPU temp: $CPUTemp"
if ( $CPUTemp >= $MaxCPUTemp ) then
exit 1
endif
@ CurrentCPU = $CurrentCPU + 1
end
exit 0
Why not?
I get an unrecognized variable as the Last Exit Value
And why did you change the script? :confused:
You do not need more than these 6 lines.
I was just pointing out that I did not get a value returned when using
exit $CPUTemp as I did with the previous script that only checked CPU 0.
I am happy with the new script as it sends a notification with the temp of the failing CPU from the echo command. But I would prefer a script that was controlled by the condition statement instead of having to hard code the max temp value in the script as in - status greater than 60. I am sure it can be done, I am just a novice coder.
Sorry now you've lost me.
Is your problem solved now or not?
You still need a custom service for each CPU.
Add the CPU number as parameter to the Path and use the script above.
The script works if I hard code the MaxCPUTemp value in the script. Which I can live with for now.
The better approach would be to test all the CPU temps and pass the highest one to the exit code. Then have that temp checked by the condition - status greater 60. Or better yet - CPUtemp greater 60.
That way a default service test for CPU temps can be included with OPNsense.
The script above checks all the CPU temps and then exits with an exit code of 1 on the first CPU that exceeds MaxCPUTemp. If all CPUs are good, then an exit code of zero is passed. One script for all CPU tests, except then I have to hard code the max temp value. It's a trade-off.
Ah ok. now i got you.
And the problem :smiley:
You should declare CPUTemp in a global context and provide MaxCPUTemp as a parameter.
Try this script and add your max to the Path (e.g. Path: /usr/script/path 60):
#!/bin/csh
set MaxCPUTemp = $1
set NumCPUs = `sysctl -n kern.smp.cpus`
set CurrentCPU = 0
set CPUTemp = 1000
while ( $CurrentCPU < $NumCPUs )
set CPUTemp = `sysctl dev.cpu.$CurrentCPU.temperature | awk '{print $2}' | awk -F. '{print $1}'`
echo "CPU $CurrentCPU temp: $CPUTemp"
if ( $CPUTemp >= $MaxCPUTemp ) then
exit $CPUTemp
endif
@ CurrentCPU = $CurrentCPU + 1
end
exit $CPUTemp
Bingo! Works like a champ.
Thanks!!
:sunglasses:
@tpcr would be fantastic if you could add this as an example to the Monit Mini Howto.
Please :smile:
I have it ready, but need the patch first to Monit 1.6. Commit e94878b
When I do # opnsense-patch e94878b it cannot be found in the repository
# opnsense-patch -c plugins e94878b
returns
fetch: transfer timed out
I just updated to OPNsense 18.1.4 then tried again with same response
sounds like dns or connectivity doesn't work (IPS?), it's a simple fetch
Yes was an issue with IDS.
I have added to the thread. Should be a sticky in the forum.
|
gharchive/issue
| 2018-03-05T17:07:58 |
2025-04-01T04:35:25.977223
|
{
"authors": [
"fbrendel",
"fichtner",
"tpcr"
],
"repo": "opnsense/plugins",
"url": "https://github.com/opnsense/plugins/issues/583",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
254809596
|
MDNS: add status indicator
closes #242
We should add this to the validation:
mdns-repeater: error: at least 2 interfaces must be specified
FYI https://github.com/opnsense/plugins/commit/49f860905b398775753ad5c3644af333634b370a
@fichtner what should we do about the model?
afaik it is not possible to run migrations on data which does not exist
Migration on data that does not exist should work fine; it is for filling in a set of default values.
have you not set a default for interfaces maybe? ;)
https://github.com/opnsense/plugins/blob/master/net/mdns-repeater/src/opnsense/mvc/app/models/OPNsense/MDNSRepeater/MDNSRepeater.xml#L10-L13
It has no default and it should not have one (I don't know what a user calls their networks - so this would be guessing random values)
Then we can't set it to required; it's an impossible condition. IDS defaults to lan for that reason.
On 2. Sep 2017, at 18:15, Fabian Franz notifications@github.com wrote:
https://github.com/opnsense/plugins/blob/master/net/mdns-repeater/src/opnsense/mvc/app/models/OPNsense/MDNSRepeater/MDNSRepeater.xml#L10-L13
It has no default and it should not (I don't know how a user calls his networks - so this would be guessing random values)
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.
does lan always exist?
no, but it doesn't matter. we should simply make sure the validation works in the enforced defaults.
it's even trickier here: we would have to assume lan,opt1. it's not pretty, but then it would work in the most default cases hitting "enable" and apply :)
the other thing would be to remove required
The required flag must stay, as the configuration is invalid without the interface name. So I will add a default of lan in the hope that it will work.
ok 👍
added it directly to master
|
gharchive/pull-request
| 2017-09-02T12:45:11 |
2025-04-01T04:35:25.985025
|
{
"authors": [
"fabianfrz",
"fichtner"
],
"repo": "opnsense/plugins",
"url": "https://github.com/opnsense/plugins/pull/244",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
532553495
|
I want to add kindness to the good-points assessment results
The following result should be added:
'{userName}のいいところは優しさです。あなたの優しい雰囲気や立ち振る舞いに多くの人が癒やされています。' (roughly: "{userName}'s good point is their kindness. Many people are soothed by your gentle atmosphere and demeanor.")
I'll take care of this myself.
Addressed in dfe5636.
|
gharchive/issue
| 2019-12-04T09:25:36 |
2025-04-01T04:35:25.989026
|
{
"authors": [
"opossum-san"
],
"repo": "opossum-san/assessment",
"url": "https://github.com/opossum-san/assessment/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
52486718
|
redhat iptables not saving?
This might not be an issue, but my customer told me that on reboot of the system we stood up for them via chef, their iptables rules are not persistent.
Excuse my ignorance, but can someone explain to me what this is doing and why redhat/centos are not in the list of platforms?
case node[:platform]
when "ubuntu", "debian"
iptables_save_file = "/etc/iptables/general"
template "/etc/network/if-pre-up.d/iptables_load" do
source "iptables_load.erb"
mode 0755
variables :iptables_save_file => iptables_save_file
end
end
BTW: I read through the cookbook and see that the rebuild-iptables.erb is supposed to write the iptables rules to "/etc/sysconfig/iptables"; I was more curious as to why the above is needed.
template "/usr/sbin/rebuild-iptables" do
source "rebuild-iptables.erb"
mode 0755
variables(
:hashbang => ::File.exist?('/usr/bin/ruby') ? '/usr/bin/ruby' : '/opt/chef/embedded/bin/ruby'
)
end
Yes, it confuses me as well.
I also had this problem with Amazon Linux (aka Enterprise Linux), where rules were not showing in "iptables -L" after a reboot.
This turned out to be because the "iptables" service was not being enabled with chkconfig. Simply enabling the service with chkconfig resulted in the rules being present after a reboot.
$ chkconfig iptables on
$ chkconfig --list iptables
iptables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
$ sudo reboot
However, the above is too manual; presumably the cookbook should enable the service like below?
service 'iptables' do
action [:enable, :start]
end
We're now enabling the services on RHEL based systems. This shouldn't be an issue anymore.
|
gharchive/issue
| 2014-12-19T14:13:21 |
2025-04-01T04:35:26.042552
|
{
"authors": [
"DennyZhang",
"stevejmason",
"tas50",
"yairgo"
],
"repo": "opscode-cookbooks/iptables",
"url": "https://github.com/opscode-cookbooks/iptables/issues/22",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1583209773
|
Add starting GraphQL method
See #5 SDK should support GraphQL
I wanted to change as little as possible, so this uses existing REST methods to perform GraphQL lookups.
@kmacdonaO What further changes would you like to see before I submit this pull request?
Also, I've previously contributed to this repository and signed the OCA.
Keith is also building graphql support as mentioned in #5
|
gharchive/pull-request
| 2023-02-13T23:06:34 |
2025-04-01T04:35:26.103781
|
{
"authors": [
"tcaruth"
],
"repo": "oracle/content-management-sdk",
"url": "https://github.com/oracle/content-management-sdk/pull/6",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
473519091
|
Plugin not intercepting eventSource or calls from html
It seems that although I set InterceptRemoteRequests to all,
calls not created directly by an Ajax call are not intercepted.
As a result, an EventSource or an img will fail if authentication (cookies) is required.
Is this intentional, or am I missing something?
Currently my workaround is that I also initiate a login via a non-intercepted call, so I set up the session cookies in both layers.
Also I couldn't make the origin not be "null", while according to the documentation this is an important feature.
I cannot say that cordova-plugin-wkwebview-file-xhr doesn't do anything either, as without the plugin I cannot make any calls because my server needs the AllowUntrustedCerts=on setting, which seems to work. Strange...
Is my configuration correct?
XHR is isolated to the JavaScript context - XMLHttpRequest and fetch API. The plug-in can't intercept how the browser retrieves images defined by the DOM. Bypassing self signed certificates is also isolated to XHR. Please see the known limitation section of the readme regarding cookie handling.
As a workout, you could fetch images via XHR and convert them to data urls.
That was my guess as well, but then why does the plugin have any effect regarding the self-signed certificates? Are the XHR calls isolated from the WKWebView plugin as well?
My workaround was that on iOS (and only there) I injected a login URL into the HTML, which retrieved the required session cookies and was then able to download the images correctly.
The same applies to the logout procedure. This workaround seemed better to me because, apart from the login and logout, the rest of the code remained unchanged after I switched to WKWebView.
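For anyone reproducing this workaround, the injected login request can be as simple as the snippet below. This is a hypothetical sketch (the URL, element choice and token handling are assumptions, not part of the plugin's API); the point is that the request is issued by the WebView itself rather than via XHR, so the session cookies land in the native cookie store.
// Let WKWebView itself (not XHR) hit the login endpoint so the
// Set-Cookie response ends up in the native cookie store.
var token = sessionStorage.getItem('loginToken'); // however the app obtains it
var probe = document.createElement('img');
probe.style.display = 'none';
probe.src = 'https://my-server.example.com/login?sessionToken=' + encodeURIComponent(token);
document.body.appendChild(probe);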
|
gharchive/issue
| 2019-07-26T20:16:10 |
2025-04-01T04:35:26.108514
|
{
"authors": [
"cszasz",
"gvanmat"
],
"repo": "oracle/cordova-plugin-wkwebview-file-xhr",
"url": "https://github.com/oracle/cordova-plugin-wkwebview-file-xhr/issues/40",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
381345058
|
Make FilesystemWatchPollingStrategy.initWatchServiceModifiers public
Creating this bug as the result of investigating issue #181
The class io.helidon.config.internal.FilesystemWatchPollingStrategy implements a polling strategy using java.nio.file.WatchService.
The WatchService implementation in the JDK is either platform specific, if supported, or based on a polling mechanism (sun.nio.fs.PollingWatchService) if not.
The polling interval of sun.nio.fs.PollingWatchService can be configured by passing instances of java.nio.file.WatchEvent.Modifier to java.nio.file.Path.register. java.nio.file.WatchEvent.Modifier is implemented by com.sun.nio.file.SensitivityWatchEventModifier that defines 3 constants:
HIGH (2s)
MEDIUM (10s)
LOW (30s)
sun.nio.fs.PollingWatchService uses MEDIUM (10s) by default.
Java does not support native WatchService for MacOS. While MacOS is not a target production platform, it is used for development and this behavior leads to inconsistent test behaviors.
Instead of forcing users to be aware of the default behavior for WatchService on MacOS and have them add sleep >10s in their code, we should provide a way to pass Modifier instances to FilesystemWatchPollingStrategy.
There is actually a method initWatchServiceModifiers on FilesystemWatchPollingStrategy that does exactly that, but is package protected and used exclusively by tests. The intent was probably (you guessed it) for cross-platform consistency.
We should allow our users to do the same with their tests, and thus make this method public.
This means users would tap into our internal API (i.e FilesystemWatchPollingStrategy) and configure the polling interval for sun.nio.fs.PollingWatchService by passing a modifier instance of com.sun.nio.file.SensitivityWatchEventModifier which is considered internal Java API.
An example code would look like this:
Path configFile = new File("/tmp/application.yaml").toPath();
FilesystemWatchPollingStrategy pollingStrategy = new FilesystemWatchPollingStrategy(configFile, null);
pollingStrategy.initWatchServiceModifiers(SensitivityWatchEventModifier.HIGH);
Config config = Config.from(ConfigSources
.file("/tmp/application.yaml")
.pollingStrategy(pollingStrategy)
.build());
@tjquinno @tomas-langer FYI
|
gharchive/issue
| 2018-11-15T21:23:30 |
2025-04-01T04:35:26.125633
|
{
"authors": [
"romain-grecourt"
],
"repo": "oracle/helidon",
"url": "https://github.com/oracle/helidon/issues/188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1980298151
|
Running git fetch only does not guarantee that we have fetched all changes related to tags
Summary
git fetch will not fetch the changes of existing tags in the local repository if that tag has been modified in the remote repository. To be able to fetch that modification, we must run git fetch --force --tags.
Note, before git 2.20, running git fetch --tags has the same behavior as git fetch --force --tags in this scenario (documentation).
Details
From the documentation, the default behavior of git fetch is:
Fetch branches and/or tags (collectively, "refs") from one or more other repositories, along with the objects necessary to complete their histories.
...
By default, any tag that points into the histories being fetched is also fetched; the effect is to fetch tags that point at branches that you are interested in.
...
When no remote is specified, by default the origin remote will be used, unless there's an upstream branch configured for the current branch.
...
With that said, in the context of fetching, when you run git fetch, it will fetch new references (e.g. branches) and the objects necessary to complete their histories (commit objects). It will only fetch the tags that point to the new histories, or any new tags from the remote repository (even in the case that new tags point to an old commit that already exists).
However, it will not update an existing tag in the local repository if that tag has been modified in the remote repository. By "modified", I mean that someone would do this:
git tag --delete existing_tag_name
git tag existing_tag_name <another_new_commit>
git push --force --tags
which essentially modify the commit that the tag existing_tag_name is pointing to in the remote repository.
In this case, if you run git fetch on the local repository, existing_tag_name will not be updated to reflect the modification from the remote repository (however, it will not raise any error, and new commits, new branches, new refs are still being fetched as usual).
To make existing_tag_name in the local repository point to another_new_commit you must run:
git fetch --force --tags
Running git fetch --tags only: 1. won't update the modified existing tags, 2. still fetches the new commit history from remote, 3. returns an error status code:
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 851 bytes | 851.00 KiB/s, done.
From github.com:tromai/test-repo
14800f3..58b0069 main -> origin/main
! [rejected] v1.0.0 -> v1.0.0 (would clobber existing tag)
Solution
We have decided that for now, it's better to use git fetch --tags --force to make sure we analyze the most up-to-date version of the repository.
I don't think this should be tagged as a bug. What if analyzing an old commit that already exists locally is preferred? This is a decision made by us to force update the tags, but in general it shouldn't be considered as a bug.
Update
After some more testing, I noticed that if a tag were to be deleted in the remote repository, running git fetch --tags --force won't delete that tag from the local repository. This behavior is similar to branches, where fetching only gains new branches (newly created in the remote) but doesn't remove branches deleted in the remote from your local git repository.
To make git delete any branches or tags from the local repository if they are deleted from remote, we must run git fetch --tags --force --prune --prune-tags.
Therefore, we have decided to update the final solution to use git fetch --tags --force --prune --prune-tags instead.
|
gharchive/issue
| 2023-11-07T00:12:22 |
2025-04-01T04:35:26.133035
|
{
"authors": [
"behnazh-w",
"tromai"
],
"repo": "oracle/macaron",
"url": "https://github.com/oracle/macaron/issues/547",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2355169580
|
Remove namespace setting from helm chart
When deploying the helm chart, the namespace is hardcoded in the values.yaml:
deploymentNamespace : native-ingress-controller-system
Looking into usage, I can see the deployment.yaml template is creating the namespace itself and deploying the resources into the namespace.
This chart shouldn't be responsible for:
creating any namespace
deploying the resources into a namespace that's different from what you use for the helm chart deployment
Consider the following Terraform script (same for helm CLI):
resource "helm_release" "oci-native-ingress-controller" {
name = "oci-native-ingress-controller"
chart = "oci-native-ingress-controller"
namespace = "cluster-tools"
wait = true
cleanup_on_fail = true
atomic = true
set {
name = "compartment_id"
value = var.compartment_id
}
set {
name = "subnet_id"
value = var.load_balancer_subnet_id
}
set {
name = "cluster_id"
value = var.cluster_id
}
}
Obviously the expectation is that the ingress-controller will be deployed in the cluster-tools namespace but it's not since the namespace definition is hardcoded as mentioned above.
This is misleading and definitely not the responsibility of the chart.
I suggest entirely removing any namespace specifics.
Hi @galovics, thanks for reaching out.
You can simply set the pre-existing namespace for deploymentNamespace in values.yaml for this. Looking at your TF example, I believe this can be achieved by adding the snippet
set {
name = "deploymentNamespace"
value = <Pre-existing Namespace>
}
The latest chart released with v1.3.7 checks if the namespace already exists and doesn't create it if that's so. Have a look at the code here - https://github.com/oracle/oci-native-ingress-controller/blob/main/helm/oci-native-ingress-controller/templates/deployment.yaml#L5-L13
I am closing this issue as it's already dealt with.
@piyush-tiwari that's true I can set it, but it's a matter of responsibilities. I just don't get why the chart handles namespace creation and deploying into that specific namespace. Why not follow the best practices in the industry and make the helm chart free from any pre-defined namespaces?
|
gharchive/issue
| 2024-06-15T19:33:35 |
2025-04-01T04:35:26.164139
|
{
"authors": [
"galovics",
"piyush-tiwari"
],
"repo": "oracle/oci-native-ingress-controller",
"url": "https://github.com/oracle/oci-native-ingress-controller/issues/75",
"license": "UPL-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
348221019
|
Adding support for disabling volume roundup
Add support for disabling volume round-up in the case a PVC is less than 50GB, via a parameter of a storage class. If a PVC is less than 50GB, it will fail.
@bdourallawzi Needs rebase but otherwise LGTM 👍
@prydie Done :+1:
@bdourallawzi changes look good but looks like we're also pulling in some changes from another branch here that are unrelated.
@owainlewis My only query would be the option name. If you're happy with it then LGTM - let's merge.
|
gharchive/pull-request
| 2018-08-07T08:35:12 |
2025-04-01T04:35:26.166574
|
{
"authors": [
"bdourallawzi",
"owainlewis",
"prydie"
],
"repo": "oracle/oci-volume-provisioner",
"url": "https://github.com/oracle/oci-volume-provisioner/pull/156",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
305765229
|
whois auth information returns incorrect results
The way I expect this to work is as it does on Freenode. On Freenode the account I'm registered with is eck, but I also use the handle eklitzke in some development channels, and I've linked my eklitzke account to the eck account.
I just logged in using the Freenode webchat client, and here's what I see:
/whois eck
[17:22] == eck [~postfix@fsf/member/eck]
[17:22] == realname : sendmail
[17:22] == server : moon.freenode.net [Atlanta, GA, US]
[17:22] == : is using a secure connection
[17:22] == account : eck
[17:22] == End of WHOIS
/whois eklitzke
[17:22] == eklitzke [~evan@fsf/member/eck]
[17:22] == realname : Evan Klitzke
[17:22] == server : hitchcock.freenode.net [Sofia, BG, EU]
[17:22] == : is using a secure connection
[17:22] == account : eck
[17:22] == End of WHOIS
As you can see, they both show me authed as eck. This would also apply if I used /nick on one of those accounts, it would still link me back to my eck handle.
Here's what I actually see right now on irc.darwin.network (where I am logged in as ghost while I was testing some /umode stuff).
/whois ghost
17:24 -- [ghost] (~death@hacker.monad.io): death
17:24 -- [ghost] #darwin
17:24 -- [ghost] 75.101.96.6 :Actual user@host, Actual IP ~death@hacker.monad.io
17:24 -- [ghost] is using a secure connection
17:24 -- [evan] logged in as :is
17:24 -- ghost en :can speak these languages
17:24 -- [ghost] idle: 00 hours 06 minutes 09 seconds, signon at: Thu, 15 Mar 2018 17:10:54
17:24 -- [ghost] End of /WHOIS list
OK so far so good, I am actually authed as evan and that shows up correctly (although the formatting is terrible, see below). Let me try looking up another user:
/whois nish
17:25 -- [nis] End of /WHOIS list
17:25 -- [nish] (~nishbot@198.199.114.107): Supybot 0.83.4.1
17:25 -- [nish] #darwin
17:25 -- [nish] is using a secure connection
17:25 -- nish en :can speak these languages
17:25 -- [nish] idle: 00 hours 02 minutes 12 seconds, signon at: Thu, 15 Mar 2018 01:25:38
17:25 -- [nish] End of /WHOIS list
That also looks OK. But I see strange results when I /whois this guy who previously tried to steal my nick:
17:26 -- [shivaram] (~shivaram@c-107-3-81-17.hsd1.ma.comcast.net): shivaram
17:26 -- [shivaram] #darwin
17:26 -- [shivaram] is using a secure connection
17:26 -- [evan] logged in as :is
17:26 -- shivaram en :can speak these languages
17:26 -- [shivaram] idle: 00 hours 02 minutes 46 seconds, signon at: Wed, 14 Mar 2018 06:57:00
17:26 -- [shivaram] End of /WHOIS list
This shivaram guy shows up being authed as me, which is not what I expect. I'd like to ghost him, but when I /msg nickserv shivaram it doesn't work:
17:27 -- MSG(nickserv): ghost shivaram
17:27 -- irc.darwin.network: You don't own that nick
Indeed I do not own that nick, so it seems like the ircd has the right mapping but is returning bad whois data.
It also looks like the formatting isn't right, the lines look like
17:26 -- [evan] logged in as :is
but I believe it should say
17:26 -- :is logged in as [evan]
My guess is that some prankster was using RTL characters while impersonating me. It's still strange that would affect /whois on my own account.
The core bug is pretty straightforward, here's a fix:
https://github.com/slingamn/oragono/commit/74c243d5aeab05820eea8d5844e2fec66bb56262
however, clients still may not be interpreting the resulting output correctly. In Hexchat raw I/O:
<< whois netcat
>> @time=2018-03-16T00:42:46.892Z :oragono.test 311 shivaram_hexchat netcat ~netcat 0::1 * netcat
>> @time=2018-03-16T00:42:46.892Z :oragono.test 319 shivaram_hexchat netcat :@#shivaramtest
>> @time=2018-03-16T00:42:46.892Z :oragono.test 330 shivaram_hexchat netcat :is logged in as
>> @time=2018-03-16T00:42:46.892Z :oragono.test 317 shivaram_hexchat netcat 25 1521160924 :seconds idle, signon time
>> @time=2018-03-16T00:42:46.892Z :oragono.test 318 shivaram_hexchat netcat :End of /WHOIS list
but in the UI:
* [netcat] (~netcat@0::1): netcat
* [netcat] @#shivaramtest
* [netcat] ogged in as :is
* [netcat] idle 00:00:25, signon: Thu Mar 15 20:42:04
* [netcat] End of WHOIS list.
If I Google this, I find a number of related client bugs:
https://bugs.quassel-irc.org/issues/1145
https://forums.mirc.com/ubbthreads.php/topics/243026/debug_window_330_numeric_issue
Another one: https://bugzilla.mozilla.org/show_bug.cgi?id=217474
According to https://www.alien.net.au/irc/irc2numerics.html it is not recommended to use a 330 response code as it has conflicting definitions, but I'm not sure how official of a resource this is.
Well, 'user' is a specific part of the nickmask, nick!user@host, so changing 311 to return the account name instead of the user part of the nickmask would mess with things a fair bit more. It'd be worth checking with the IRCds around and seeing whether any (and how many) servers still use 330 as RPL_WHOWAS_TIME. After we get those details, we can file updates against the irc-defs numerics list (the updated fork of the alien.net.au lists).
I saw some discussion about this topic in #ircv3 today (well related, it was about general rpc-keyvalue responses in whois queries). Seems like this is on track to be fixed in general in IRC One Day.
Weechat issue link for reference: weechat/weechat#1160
Oragono did in fact have a bug in its RPL_WHOISACCOUNT response line; fixed in #290.
|
gharchive/issue
| 2018-03-16T00:35:40 |
2025-04-01T04:35:26.180003
|
{
"authors": [
"DanielOaks",
"eklitzke",
"slingamn"
],
"repo": "oragono/oragono",
"url": "https://github.com/oragono/oragono/issues/217",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
618828701
|
Bumps
Perhaps this is a stupid question, but is it possible to generate hills (or depressions) on the grid defined by another surface?
@scarpae I am assuming you are referring to the surface generator?
Closing due to no activity.
|
gharchive/issue
| 2020-05-15T09:26:08 |
2025-04-01T04:35:26.199373
|
{
"authors": [
"orbingol",
"scarpae"
],
"repo": "orbingol/geomdl-examples",
"url": "https://github.com/orbingol/geomdl-examples/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
344509709
|
docs(README.md): Add Haja and Maintainer sponsors
This adds sponsors to the sponsor list.
LGTM 👍
|
gharchive/pull-request
| 2018-07-25T16:06:54 |
2025-04-01T04:35:26.200425
|
{
"authors": [
"RichardLitt",
"haadcode"
],
"repo": "orbitdb/orbit-db",
"url": "https://github.com/orbitdb/orbit-db/pull/421",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|