Dataset columns:
- id: string (length 4 to 10)
- text: string (length 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
174152177
dracut: add network config for auto link local Both OpenStack and DigitalOcean require link-local auto-configuration in order to communicate with the network metadata service. lgtm, how does this relate to https://github.com/coreos/init/pull/216? We are going with a slightly different approach. Trigger conditions don't work in network units, and we only want to add a link-local address on the first interface when using DigitalOcean. https://github.com/coreos/bootengine/pull/98
gharchive/pull-request
2016-08-30T23:43:15
2025-04-01T06:38:16.998917
{ "authors": [ "crawford", "vcaputo" ], "repo": "coreos/bootengine", "url": "https://github.com/coreos/bootengine/pull/97", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
232741991
internal: Simplify checking for supported providers and metadata function to call This duplication bothered me a bit, so I figured I'd submit a PR to get rid of it. Perhaps a better suggestion here: introduce a metadataFetcher(provider string) (fetchFn, error) which returns the provider-specific fetcher or an error for unknown ones, then either react to the error or invoke the returned function (see the sketch after this record). While at it, please also introduce some unit tests covering positive and negative cases. I went down this path also, but felt that it was more than I should introduce in one go. However, I'm happy to take a pass at implementing something like this if that'd be the preferred approach. Thanks for the review! @lucab - is this in line with what you were thinking? Cool, should be good now 👍
gharchive/pull-request
2017-06-01T01:33:52
2025-04-01T06:38:17.017584
{ "authors": [ "joonas", "lucab" ], "repo": "coreos/coreos-metadata", "url": "https://github.com/coreos/coreos-metadata/pull/48", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
205658130
Bug: Using Kubernetes storage, always gets "Bad Request - Invalid client_id" kubectl get oauth2clients -o yaml -n auth: apiVersion: v1 items: - apiVersion: oidc.coreos.com/v1 clientID: stag clientSecret: <censored> kind: OAuth2Client metadata: creationTimestamp: 2017-02-06T17:30:30Z name: stag namespace: auth resourceVersion: "56746098" selfLink: /apis/oidc.coreos.com/v1/namespaces/auth/oauth2clients/stag uid: <censored> redirectURIs: - <censored>/redirect_uri kind: List metadata: {} resourceVersion: "" selfLink: "" Client redirects to here: https://<dex_address>/auth?scope=openid%20email%20profile&client_id=stag&state=<censored>&nonce=<censored>&redirect_uri=<censored>%2Fredirect_uri&response_type=code No matter what client ID I tried using, the response is always: Bad Request Invalid client_id ("stag"). (Or any other client ID I've tried, of course.) BTW, when I set it through staticClients, it does work. Dex version: 2.1.0 Kubernetes version: 1.5.2 Many thanks guys. Clients should be set through the dex API[0], not by editing the Kubernetes third party resources directly. We compute a hash for the "name"; it's not 1-to-1 (see the sketch after this record). Sorry, we should really add a warning to the Kubernetes docs that users shouldn't edit the third party resources manually. [0] https://github.com/coreos/dex/blob/master/Documentation/api.md Oh, I got it. Thanks a lot! I was following the directions here, BTW - https://github.com/coreos/dex/blob/master/Documentation/storage.md @alon-argus yep I opened #799 to update that doc (feel free to comment). Going to keep this open until we get that merged.
gharchive/issue
2017-02-06T17:43:33
2025-04-01T06:38:17.026503
{ "authors": [ "alon-argus", "ericchiang" ], "repo": "coreos/dex", "url": "https://github.com/coreos/dex/issues/798", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
256018056
pod-policy: add automountServiceAccountToken to the pod policy By default, Kubernetes will mount a service account token into the etcd pods. This allows users to disable this via the pod-policy (see the sketch after this record). @etcd-bot ok to test @etcd-bot retest this please Sorry, I had violated a minor go-simple rule. This has been fixed. Hopefully CI will agree. @etcd-bot retest this please Took 3 tries to get tests to pass. On the first two runs, e2eslow and unit tests passed, but jenkins-ci failed in a unique way. Test 1: https://jenkins-etcd-public.prod.coreos.systems/job/etcd-operator/2256/console 19:21:59 --- FAIL: TestPerClusterS3AllDown (70.51s) 19:21:59 crd_util.go:45: creating etcd cluster: test-etcd-wgb5l 19:21:59 util.go:82: 2017-09-07 19:17:20.409401557 +0000 UTC waiting size (3), healthy etcd members: names ([]) 19:21:59 util.go:82: 2017-09-07 19:17:30.410119671 +0000 UTC waiting size (3), healthy etcd members: names ([]) 19:21:59 util.go:82: 2017-09-07 19:17:40.409618991 +0000 UTC waiting size (3), healthy etcd members: names ([]) 19:21:59 util.go:82: 2017-09-07 19:17:50.408909517 +0000 UTC waiting size (3), healthy etcd members: names ([test-etcd-wgb5l-0000]) 19:21:59 util.go:82: 2017-09-07 19:18:00.409892369 +0000 UTC waiting size (3), healthy etcd members: names ([test-etcd-wgb5l-0002]) 19:21:59 util.go:82: 2017-09-07 19:18:10.410668413 +0000 UTC waiting size (3), healthy etcd members: names ([test-etcd-wgb5l-0002]) 19:21:59 util.go:82: 2017-09-07 19:18:20.410634434 +0000 UTC waiting size (3), healthy etcd members: names ([test-etcd-wgb5l-0003 test-etcd-wgb5l-0002]) Test 2: https://jenkins-etcd-public.prod.coreos.systems/job/etcd-operator/2257/console 20:04:03 --- FAIL: TestBackupStatus (90.35s) 20:04:03 crd_util.go:45: creating etcd cluster: test-etcd-z1hj3 20:04:03 util.go:82: 2017-09-07 20:01:58.30972811 +0000 UTC waiting size (1), healthy etcd members: names ([]) 20:04:03 util.go:82: 2017-09-07 20:02:08.305941564 +0000 UTC waiting size (1), healthy etcd members: names ([]) 20:04:03 util.go:82: 2017-09-07 20:02:18.305902963 +0000 UTC waiting size (1), healthy etcd members: names ([test-etcd-z1hj3-0000]) 20:04:03 cluster_status_test.go:103: failed to create backup pod: still failing after 6 retries Test 3: https://jenkins-etcd-public.prod.coreos.systems/job/etcd-operator/2258/console Successful. 🎉 @ultimateboy have you tested this code manually? Yes. I built the image and pushed it to Docker Hub with: $ IMAGE=ultimateboy/etcd-operator ./hack/build/operator/build Then I deployed with helm using: $ helm install stable/etcd-operator --set image.repository="ultimateboy/etcd-operator" --set image.tag=latest --set image.pullPolicy=Always Then tested with two different cluster definitions: $ cat mount.cluster.yaml apiVersion: "etcd.database.coreos.com/v1beta2" kind: "EtcdCluster" metadata: name: "example-etcd-cluster-mount" spec: size: 1 version: "3.1.8" $ cat nomount.cluster.yaml apiVersion: "etcd.database.coreos.com/v1beta2" kind: "EtcdCluster" metadata: name: "example-etcd-cluster-nomount" spec: size: 1 version: "3.1.8" pod: automountServiceAccountToken: false After creating the above clusters (using kubectl create -f file.yaml), verified with: $ kubectl get po example-etcd-cluster-mount-0000 -o yaml | grep automount $ echo $? 1 $ kubectl get po example-etcd-cluster-nomount-0000 -o yaml | grep automount automountServiceAccountToken: false @ultimateboy What's the output with automountServiceAccountToken: false when running $ kubectl get po example-etcd-cluster-nomount-0000 -o yaml | grep serviceAccount @fanminshi $ kubectl get po example-etcd-cluster-nomount-0000 -o yaml | grep serviceAccount serviceAccount: default serviceAccountName: default $ kubectl get po example-etcd-cluster-mount-0000 -o yaml | grep serviceAccount serviceAccount: default serviceAccountName: default lgtm @ultimateboy Can you add a line to the CHANGELOG? LGTM after that. CHANGELOG line added and rebased to fix a conflict. I really appreciate the fast turnaround here. Thanks @hongchaodeng and @fanminshi!!
gharchive/pull-request
2017-09-07T17:46:24
2025-04-01T06:38:17.035114
{ "authors": [ "fanminshi", "hongchaodeng", "ultimateboy" ], "repo": "coreos/etcd-operator", "url": "https://github.com/coreos/etcd-operator/pull/1383", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
134637685
etcd-agent: get base when renaming Partially related to https://github.com/coreos/etcd/issues/4552 (see the sketch after this record). /cc @heyitsanthony @xiang90 Thanks. lgtm.
gharchive/pull-request
2016-02-18T17:05:29
2025-04-01T06:38:17.036909
{ "authors": [ "gyuho", "xiang90" ], "repo": "coreos/etcd", "url": "https://github.com/coreos/etcd/pull/4558", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
178638682
functional-tester: decouple stresser from tester This commit decouples the stresser from the tester of functional-tester. To do so, it adds a new option --stresser to etcd-tester. The option accepts two types of stresser: "default" and "nop". If the option is "default", etcd-tester stresses its etcd cluster with the existing stresser. If the option is "nop", etcd-tester does nothing for stressing (see the sketch after this record). Partially fixes https://github.com/coreos/etcd/issues/6446 /cc @heyitsanthony @mitake Test failed? @gyuho sorry, this PR has a format error. I'll fix it in the next update. @gyuho @heyitsanthony updated to fix the style problem and make the decoupling cleaner, PTAL lgtm. Thanks!
gharchive/pull-request
2016-09-22T15:16:00
2025-04-01T06:38:17.039745
{ "authors": [ "gyuho", "heyitsanthony", "mitake" ], "repo": "coreos/etcd", "url": "https://github.com/coreos/etcd/pull/6506", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
343277150
CHANGELOG-3.2: update from #7892 Codecov Report Merging #9948 into master will increase coverage by 0.03%. The diff coverage is n/a.
@@            Coverage Diff             @@
##           master    #9948      +/-   ##
==========================================
+ Coverage   69.12%   69.15%   +0.03%
==========================================
  Files         386      386
  Lines       35784    35784
==========================================
+ Hits        24735    24746      +11
- Misses       9232     9236       +4
+ Partials     1817     1802      -15
Impacted Files (Coverage Δ):
- etcdctl/ctlv3/command/lease_command.go 65.34% <0%> (-5.95%) :arrow_down:
- pkg/adt/interval_tree.go 84.98% <0%> (-5.71%) :arrow_down:
- pkg/transport/listener.go 58.67% <0%> (-4.09%) :arrow_down:
- proxy/grpcproxy/watcher.go 85.71% <0%> (-4.09%) :arrow_down:
- proxy/grpcproxy/watch.go 88.19% <0%> (-1.25%) :arrow_down:
- etcdserver/api/v2http/client.go 85.51% <0%> (-1.21%) :arrow_down:
- etcdserver/raft.go 80.09% <0%> (-0.72%) :arrow_down:
- mvcc/watchable_store.go 82.8% <0%> (-0.71%) :arrow_down:
- clientv3/watch.go 91.71% <0%> (-0.43%) :arrow_down:
- clientv3/balancer/grpc1.7-health.go 59.01% <0%> (-0.3%) :arrow_down:
... and 14 more. Continue to review the full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 104b6a3...82b712a.
I think we need the same change from https://github.com/coreos/etcd/commit/a47f0a0dbadeb9a775189a1153283e956e8cc7ec for the 3.2 changelog? @wenjiaswe Ping? Once we update this, we are ready for the patch release. Thanks. @gyuho Sorry, I missed this review. I just updated line 31 with the correct replacement; please let me know if that's OK or not. For the same change from a47f0a0 for the 3.2 changelog, I delivered it in #9943. @wenjiaswe Can you also help merge https://github.com/coreos/etcd/pull/9950? So we do not have to do this kind of manual checking. Thanks! Yes, will do!
gharchive/pull-request
2018-07-20T23:52:07
2025-04-01T06:38:17.055663
{ "authors": [ "codecov-io", "gyuho", "wenjiaswe" ], "repo": "coreos/etcd", "url": "https://github.com/coreos/etcd/pull/9948", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
172722121
fleetctl: service uptime for list-units Allow fleetctl list-units to retrieve per-unit uptime from systemd (a sketch of the underlying query follows this record), for example:
localhost # ./fleet/fleetctl list-units
UNIT                MACHINE                     ACTIVE  SUB      UPTIME
world.service       06ecd4f7.../192.168.122.30  active  running  2015-07-06 07:38:38 AM UTC, Since 11m39s
world2.service      1d3430ef.../192.168.122.31  active  running  2015-07-06 07:48:24 AM UTC, Since 1m54s
world_glob.service  06ecd4f7.../192.168.122.30  active  running  2015-07-06 07:48:00 AM UTC, Since 2m18s
world_glob.service  1d3430ef.../192.168.122.31  active  running  2015-07-06 07:47:59 AM UTC, Since 2m18s
What's changed since #1293:
- Avoided unnecessary iteration over each unit when fetching systemd properties; instead, make use of setting the internal UnitState, both for the normal loop for systemd (>= 230) and for the fallback loop for systemd (<= 229).
- Fixed typos and bugs in registry and schema.
- Excluded ActiveEnterTimestamp from the unit comparison condition.
- Added a timezone, e.g. UTC, to the output.
Fixes https://github.com/coreos/fleet/issues/1128 Supersedes https://github.com/coreos/fleet/pull/1293 /cc @wuqixuan I'm not sure about this one - please hold off until I can give it a little more thought @jonboulle I see.
gharchive/pull-request
2016-08-23T14:57:09
2025-04-01T06:38:17.059365
{ "authors": [ "dongsupark", "jonboulle" ], "repo": "coreos/fleet", "url": "https://github.com/coreos/fleet/pull/1669", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
172843814
Make install does not work Looks like I'm missing some dependencies or something. If so you should explicitly list the required dependencies. Also there is no Make command or documentation on how to do a clean uninstall. I'm running Fedora 24 with nothing special related to python. ran: sudo make install output: rm -fr build/ rm -fr dist/ rm -fr .eggs/ find . -name '*.egg-info' -exec rm -fr {} + find . -name '*.egg' -exec rm -f {} + find . -name '*.pyc' -exec rm -f {} + find . -name '*.pyo' -exec rm -f {} + find . -name '*~' -exec rm -f {} + find . -name '__pycache__' -exec rm -fr {} + rm -fr .tox/ rm -f .coverage rm -fr htmlcov/ python setup.py install running install running bdist_egg running egg_info creating kpm.egg-info writing requirements to kpm.egg-info/requires.txt writing kpm.egg-info/PKG-INFO writing top-level names to kpm.egg-info/top_level.txt writing dependency_links to kpm.egg-info/dependency_links.txt writing manifest file 'kpm.egg-info/SOURCES.txt' reading manifest file 'kpm.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'kpm/jsonnet/*.jsonnet' warning: no files found matching 'kpm/jsonnet/*.libjsonnet' warning: no previously-included files matching '__pycache__' found under directory '*' warning: no previously-included files matching '*.py[co]' found under directory '*' warning: no files found matching '*.rst' under directory 'docs' warning: no files found matching 'conf.py' under directory 'docs' warning: no files found matching 'Makefile' under directory 'docs' warning: no files found matching 'make.bat' under directory 'docs' writing manifest file 'kpm.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py creating build creating build/lib creating build/lib/kpm copying kpm/auth.py -> build/lib/kpm copying kpm/render_jsonnet.py -> build/lib/kpm copying kpm/loghandler.py -> build/lib/kpm copying kpm/discovery.py -> build/lib/kpm copying kpm/registry.py -> build/lib/kpm copying kpm/console.py -> build/lib/kpm copying kpm/kub_base.py -> build/lib/kpm copying kpm/semver.py -> build/lib/kpm copying kpm/deploy.py -> build/lib/kpm copying kpm/manifest.py -> build/lib/kpm copying kpm/manifest_jsonnet.py -> build/lib/kpm copying kpm/template_filters.py -> build/lib/kpm copying kpm/packager.py -> build/lib/kpm copying kpm/exception.py -> build/lib/kpm copying kpm/kubernetes.py -> build/lib/kpm copying kpm/__init__.py -> build/lib/kpm copying kpm/new.py -> build/lib/kpm copying kpm/utils.py -> build/lib/kpm copying kpm/display.py -> build/lib/kpm copying kpm/kub_jsonnet.py -> build/lib/kpm copying kpm/command.py -> build/lib/kpm creating build/lib/kpm/api copying kpm/api/deployment.py -> build/lib/kpm/api copying kpm/api/registry.py -> build/lib/kpm/api copying kpm/api/builder.py -> build/lib/kpm/api copying kpm/api/info.py -> build/lib/kpm/api copying kpm/api/config.py -> build/lib/kpm/api copying kpm/api/proxy.py -> build/lib/kpm/api copying kpm/api/authorization.py -> build/lib/kpm/api copying kpm/api/app.py -> build/lib/kpm/api copying kpm/api/__init__.py -> build/lib/kpm/api copying kpm/api/wsgi.py -> build/lib/kpm/api creating build/lib/kpm/models copying kpm/models/package_base.py -> build/lib/kpm/models copying kpm/models/channel_base.py -> build/lib/kpm/models copying kpm/models/__init__.py -> build/lib/kpm/models creating build/lib/kpm/commands copying kpm/commands/login.py -> build/lib/kpm/commands copying kpm/commands/command_base.py -> build/lib/kpm/commands 
copying kpm/commands/pull.py -> build/lib/kpm/commands copying kpm/commands/generate.py -> build/lib/kpm/commands copying kpm/commands/logout.py -> build/lib/kpm/commands copying kpm/commands/list_package.py -> build/lib/kpm/commands copying kpm/commands/delete_package.py -> build/lib/kpm/commands copying kpm/commands/deploy.py -> build/lib/kpm/commands copying kpm/commands/channel.py -> build/lib/kpm/commands copying kpm/commands/version.py -> build/lib/kpm/commands copying kpm/commands/kexec.py -> build/lib/kpm/commands copying kpm/commands/__init__.py -> build/lib/kpm/commands copying kpm/commands/show.py -> build/lib/kpm/commands copying kpm/commands/remove.py -> build/lib/kpm/commands copying kpm/commands/push.py -> build/lib/kpm/commands copying kpm/commands/new.py -> build/lib/kpm/commands copying kpm/commands/jsonnet.py -> build/lib/kpm/commands creating build/lib/kpm/models/etcd copying kpm/models/etcd/channel.py -> build/lib/kpm/models/etcd copying kpm/models/etcd/__init__.py -> build/lib/kpm/models/etcd copying kpm/models/etcd/package.py -> build/lib/kpm/models/etcd creating build/lib/kpm/jsonnet copying kpm/jsonnet/manifest.jsonnet.j2 -> build/lib/kpm/jsonnet creating build/lib/kpm/jsonnet/lib copying kpm/jsonnet/lib/kpm-utils.libjsonnet -> build/lib/kpm/jsonnet/lib copying kpm/jsonnet/lib/kpm.libjsonnet -> build/lib/kpm/jsonnet/lib creating build/bdist.linux-x86_64 creating build/bdist.linux-x86_64/egg creating build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/auth.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/render_jsonnet.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/loghandler.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/discovery.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/registry.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/console.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/kub_base.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/semver.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/deploy.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/manifest.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/manifest_jsonnet.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/template_filters.py -> build/bdist.linux-x86_64/egg/kpm creating build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/deployment.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/registry.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/builder.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/info.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/config.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/proxy.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/authorization.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/app.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/__init__.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/api/wsgi.py -> build/bdist.linux-x86_64/egg/kpm/api copying build/lib/kpm/packager.py -> build/bdist.linux-x86_64/egg/kpm creating build/bdist.linux-x86_64/egg/kpm/jsonnet creating build/bdist.linux-x86_64/egg/kpm/jsonnet/lib copying build/lib/kpm/jsonnet/lib/kpm-utils.libjsonnet -> build/bdist.linux-x86_64/egg/kpm/jsonnet/lib copying build/lib/kpm/jsonnet/lib/kpm.libjsonnet -> build/bdist.linux-x86_64/egg/kpm/jsonnet/lib copying 
build/lib/kpm/jsonnet/manifest.jsonnet.j2 -> build/bdist.linux-x86_64/egg/kpm/jsonnet copying build/lib/kpm/exception.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/kubernetes.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/__init__.py -> build/bdist.linux-x86_64/egg/kpm creating build/bdist.linux-x86_64/egg/kpm/models copying build/lib/kpm/models/package_base.py -> build/bdist.linux-x86_64/egg/kpm/models copying build/lib/kpm/models/channel_base.py -> build/bdist.linux-x86_64/egg/kpm/models copying build/lib/kpm/models/__init__.py -> build/bdist.linux-x86_64/egg/kpm/models creating build/bdist.linux-x86_64/egg/kpm/models/etcd copying build/lib/kpm/models/etcd/channel.py -> build/bdist.linux-x86_64/egg/kpm/models/etcd copying build/lib/kpm/models/etcd/__init__.py -> build/bdist.linux-x86_64/egg/kpm/models/etcd copying build/lib/kpm/models/etcd/package.py -> build/bdist.linux-x86_64/egg/kpm/models/etcd copying build/lib/kpm/new.py -> build/bdist.linux-x86_64/egg/kpm creating build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/login.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/command_base.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/pull.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/generate.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/logout.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/list_package.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/delete_package.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/deploy.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/channel.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/version.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/kexec.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/__init__.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/show.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/remove.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/push.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/new.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/commands/jsonnet.py -> build/bdist.linux-x86_64/egg/kpm/commands copying build/lib/kpm/utils.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/display.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/kub_jsonnet.py -> build/bdist.linux-x86_64/egg/kpm copying build/lib/kpm/command.py -> build/bdist.linux-x86_64/egg/kpm byte-compiling build/bdist.linux-x86_64/egg/kpm/auth.py to auth.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/render_jsonnet.py to render_jsonnet.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/loghandler.py to loghandler.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/discovery.py to discovery.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/registry.py to registry.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/console.py to console.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/kub_base.py to kub_base.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/semver.py to semver.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/deploy.py to deploy.pyc byte-compiling 
build/bdist.linux-x86_64/egg/kpm/manifest.py to manifest.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/manifest_jsonnet.py to manifest_jsonnet.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/template_filters.py to template_filters.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/deployment.py to deployment.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/registry.py to registry.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/builder.py to builder.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/info.py to info.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/config.py to config.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/proxy.py to proxy.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/authorization.py to authorization.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/app.py to app.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/__init__.py to __init__.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/api/wsgi.py to wsgi.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/packager.py to packager.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/exception.py to exception.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/kubernetes.py to kubernetes.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/__init__.py to __init__.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/models/package_base.py to package_base.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/models/channel_base.py to channel_base.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/models/__init__.py to __init__.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/models/etcd/channel.py to channel.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/models/etcd/__init__.py to __init__.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/models/etcd/package.py to package.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/new.py to new.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/login.py to login.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/command_base.py to command_base.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/pull.py to pull.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/generate.py to generate.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/logout.py to logout.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/list_package.py to list_package.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/delete_package.py to delete_package.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/deploy.py to deploy.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/channel.py to channel.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/version.py to version.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/kexec.py to kexec.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/__init__.py to __init__.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/show.py to show.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/remove.py to remove.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/push.py to push.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/new.py to new.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/commands/jsonnet.py to jsonnet.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/utils.py to utils.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/display.py to display.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/kub_jsonnet.py to 
kub_jsonnet.pyc byte-compiling build/bdist.linux-x86_64/egg/kpm/command.py to command.pyc creating build/bdist.linux-x86_64/egg/EGG-INFO installing scripts to build/bdist.linux-x86_64/egg/EGG-INFO/scripts running install_scripts running build_scripts creating build/scripts-2.7 copying and adjusting bin/kpm -> build/scripts-2.7 changing mode of build/scripts-2.7/kpm from 644 to 755 creating build/bdist.linux-x86_64/egg/EGG-INFO/scripts copying build/scripts-2.7/kpm -> build/bdist.linux-x86_64/egg/EGG-INFO/scripts changing mode of build/bdist.linux-x86_64/egg/EGG-INFO/scripts/kpm to 755 copying kpm.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying kpm.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying kpm.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying kpm.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO copying kpm.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying kpm.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO creating dist creating 'dist/kpm-0.21.0-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing kpm-0.21.0-py2.7.egg creating /usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg Extracting kpm-0.21.0-py2.7.egg to /usr/lib/python2.7/site-packages Adding kpm 0.21.0 to easy-install.pth file Installing kpm script to /usr/bin Installed /usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg Processing dependencies for kpm==0.21.0 Searching for flask-cors Reading https://pypi.python.org/simple/flask-cors/ Best match: Flask-Cors 3.0.0 Downloading https://pypi.python.org/packages/fa/2b/122df210a7cbb7900a6e36e258dda27026e06beeab911647013053efd8ba/Flask_Cors-3.0.0-py2.7.egg#md5=7e73a3d63c717bae22228b355584b470 Processing Flask_Cors-3.0.0-py2.7.egg creating /usr/lib/python2.7/site-packages/Flask_Cors-3.0.0-py2.7.egg Extracting Flask_Cors-3.0.0-py2.7.egg to /usr/lib/python2.7/site-packages Adding Flask-Cors 3.0.0 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/Flask_Cors-3.0.0-py2.7.egg Searching for Flask>=0.10.1 Reading https://pypi.python.org/simple/Flask/ Best match: Flask 0.11.1 Downloading https://pypi.python.org/packages/55/8a/78e165d30f0c8bb5d57c429a30ee5749825ed461ad6c959688872643ffb3/Flask-0.11.1.tar.gz#md5=d2af95d8fe79cf7da099f062dd122a08 Processing Flask-0.11.1.tar.gz Writing /tmp/easy_install-vkmLqz/Flask-0.11.1/setup.cfg Running Flask-0.11.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-vkmLqz/Flask-0.11.1/egg-dist-tmp-RB7fTn warning: no previously-included files matching '*.py[co]' found anywhere in distribution no previously-included directories found matching 'docs/_build' no previously-included directories found matching 'docs/_themes' creating /usr/lib/python2.7/site-packages/Flask-0.11.1-py2.7.egg Extracting Flask-0.11.1-py2.7.egg to /usr/lib/python2.7/site-packages Adding Flask 0.11.1 to easy-install.pth file Installing flask script to /usr/bin Installed /usr/lib/python2.7/site-packages/Flask-0.11.1-py2.7.egg Searching for semantic-version Reading https://pypi.python.org/simple/semantic_version/ Best match: semantic-version 2.5.0 Downloading https://pypi.python.org/packages/8e/0e/33052dd97ab9d07dae8ddffcfb2740efe58c46d72efbc060cf6da250439f/semantic_version-2.5.0.tar.gz#md5=9a3f8e3ca00dcd2da16e30d55a4d4d99 Processing semantic_version-2.5.0.tar.gz Writing /tmp/easy_install-rD89IY/semantic_version-2.5.0/setup.cfg Running semantic_version-2.5.0/setup.py -q 
bdist_egg --dist-dir /tmp/easy_install-rD89IY/semantic_version-2.5.0/egg-dist-tmp-CUJRC3 no previously-included directories found matching 'docs/_build' zip_safe flag not set; analyzing archive contents... Moving semantic_version-2.5.0-py2.7.egg to /usr/lib/python2.7/site-packages Adding semantic-version 2.5.0 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/semantic_version-2.5.0-py2.7.egg Searching for python-etcd Reading https://pypi.python.org/simple/python-etcd/ Best match: python-etcd 0.4.3 Downloading https://pypi.python.org/packages/fe/f6/da82dee704be089b6c3f5a7eb17a5f7c67e4fb6d030405dde392dc846714/python-etcd-0.4.3.tar.gz#md5=02f84aaab5eff364bc1e1e876c2f6a1f Processing python-etcd-0.4.3.tar.gz Writing /tmp/easy_install-n5ehNu/python-etcd-0.4.3/setup.cfg Running python-etcd-0.4.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-n5ehNu/python-etcd-0.4.3/egg-dist-tmp-891u56 creating /usr/lib/python2.7/site-packages/python_etcd-0.4.3-py2.7.egg Extracting python_etcd-0.4.3-py2.7.egg to /usr/lib/python2.7/site-packages Adding python-etcd 0.4.3 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/python_etcd-0.4.3-py2.7.egg Searching for termcolor Reading https://pypi.python.org/simple/termcolor/ Best match: termcolor 1.1.0 Downloading https://pypi.python.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz#md5=043e89644f8909d462fbbfa511c768df Processing termcolor-1.1.0.tar.gz Writing /tmp/easy_install-lWEoR8/termcolor-1.1.0/setup.cfg Running termcolor-1.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-lWEoR8/termcolor-1.1.0/egg-dist-tmp-97NzWg zip_safe flag not set; analyzing archive contents... Moving termcolor-1.1.0-py2.7.egg to /usr/lib/python2.7/site-packages Adding termcolor 1.1.0 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/termcolor-1.1.0-py2.7.egg Searching for tabulate Reading https://pypi.python.org/simple/tabulate/ Best match: tabulate 0.7.5 Downloading https://pypi.python.org/packages/db/40/6ffc855c365769c454591ac30a25e9ea0b3e8c952a1259141f5b9878bd3d/tabulate-0.7.5.tar.gz#md5=576e1f063b8e74dbfeda02d978564987 Processing tabulate-0.7.5.tar.gz Writing /tmp/easy_install-Fo2MKG/tabulate-0.7.5/setup.cfg Running tabulate-0.7.5/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Fo2MKG/tabulate-0.7.5/egg-dist-tmp-YW0lW2 zip_safe flag not set; analyzing archive contents... Moving tabulate-0.7.5-py2.7.egg to /usr/lib/python2.7/site-packages Adding tabulate 0.7.5 to easy-install.pth file Installing tabulate script to /usr/bin Installed /usr/lib/python2.7/site-packages/tabulate-0.7.5-py2.7.egg Searching for jsonpatch Reading https://pypi.python.org/simple/jsonpatch/ Best match: jsonpatch 1.14 Downloading https://pypi.python.org/packages/4b/2b/72f41fe41af008ebd5af3161345d7f47f2afe2b766d4ab1c412701ad71e5/jsonpatch-1.14.tar.gz#md5=cf4fbad8188f1389363433dbf867109f Processing jsonpatch-1.14.tar.gz Writing /tmp/easy_install-5n4fqy/jsonpatch-1.14/setup.cfg Running jsonpatch-1.14/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5n4fqy/jsonpatch-1.14/egg-dist-tmp-PNe4ax warning: pypandoc module not found, could not convert Markdown to RST zip_safe flag not set; analyzing archive contents... 
Moving jsonpatch-1.14-py2.7.egg to /usr/lib/python2.7/site-packages Adding jsonpatch 1.14 to easy-install.pth file Installing jsonpatch script to /usr/bin Installing jsondiff script to /usr/bin Installed /usr/lib/python2.7/site-packages/jsonpatch-1.14-py2.7.egg Searching for jinja2 Reading https://pypi.python.org/simple/jinja2/ Best match: Jinja2 2.8 Downloading https://pypi.python.org/packages/f2/2f/0b98b06a345a761bec91a079ccae392d282690c2d8272e708f4d10829e22/Jinja2-2.8.tar.gz#md5=edb51693fe22c53cee5403775c71a99e Processing Jinja2-2.8.tar.gz Writing /tmp/easy_install-srzY31/Jinja2-2.8/setup.cfg Running Jinja2-2.8/setup.py -q bdist_egg --dist-dir /tmp/easy_install-srzY31/Jinja2-2.8/egg-dist-tmp-Vvv4Zv warning: no files found matching 'run-tests.py' warning: no files found matching '*' under directory 'custom_fixers' warning: no files found matching '*' under directory 'jinja2/testsuite/res' warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no previously-included files matching '*.pyc' found under directory 'jinja2' warning: no previously-included files matching '*.pyc' found under directory 'docs' warning: no previously-included files matching '*.pyo' found under directory 'jinja2' warning: no previously-included files matching '*.pyo' found under directory 'docs' creating /usr/lib/python2.7/site-packages/Jinja2-2.8-py2.7.egg Extracting Jinja2-2.8-py2.7.egg to /usr/lib/python2.7/site-packages Adding Jinja2 2.8 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/Jinja2-2.8-py2.7.egg Searching for pyyaml Reading https://pypi.python.org/simple/pyyaml/ Best match: PyYAML 3.11 Downloading https://pypi.python.org/packages/75/5e/b84feba55e20f8da46ead76f14a3943c8cb722d40360702b2365b91dec00/PyYAML-3.11.tar.gz#md5=f50e08ef0fe55178479d3a618efe21db Processing PyYAML-3.11.tar.gz Writing /tmp/easy_install-TMZK3C/PyYAML-3.11/setup.cfg Running PyYAML-3.11/setup.py -q bdist_egg --dist-dir /tmp/easy_install-TMZK3C/PyYAML-3.11/egg-dist-tmp-6aOcKU gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory libyaml is not found or a compiler error: forcing --without-libyaml (if libyaml is installed correctly, you may need to specify the option --include-dirs or uncomment and modify the parameter include_dirs in setup.cfg) zip_safe flag not set; analyzing archive contents... 
Moving PyYAML-3.11-py2.7-linux-x86_64.egg to /usr/lib/python2.7/site-packages Adding PyYAML 3.11 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/PyYAML-3.11-py2.7-linux-x86_64.egg Searching for futures Reading https://pypi.python.org/simple/futures/ Best match: futures 3.0.5 Downloading https://pypi.python.org/packages/55/db/97c1ca37edab586a1ae03d6892b6633d8eaa23b23ac40c7e5bbc55423c78/futures-3.0.5.tar.gz#md5=ced2c365e518242512d7a398b515ff95 Processing futures-3.0.5.tar.gz Writing /tmp/easy_install-rj3iWE/futures-3.0.5/setup.cfg Running futures-3.0.5/setup.py -q bdist_egg --dist-dir /tmp/easy_install-rj3iWE/futures-3.0.5/egg-dist-tmp-cXQjQ_ creating /usr/lib/python2.7/site-packages/futures-3.0.5-py2.7.egg Extracting futures-3.0.5-py2.7.egg to /usr/lib/python2.7/site-packages Adding futures 3.0.5 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/futures-3.0.5-py2.7.egg Searching for click>=2.0 Reading https://pypi.python.org/simple/click/ Best match: click 6.6 Downloading https://pypi.python.org/packages/7a/00/c14926d8232b36b08218067bcd5853caefb4737cda3f0a47437151344792/click-6.6.tar.gz#md5=d0b09582123605220ad6977175f3e51d Processing click-6.6.tar.gz Writing /tmp/easy_install-tRWBc3/click-6.6/setup.cfg Running click-6.6/setup.py -q bdist_egg --dist-dir /tmp/easy_install-tRWBc3/click-6.6/egg-dist-tmp-Ay9KKV warning: no previously-included files matching '*.pyc' found under directory 'docs' warning: no previously-included files matching '*.pyo' found under directory 'docs' warning: no previously-included files matching '*.pyc' found under directory 'tests' warning: no previously-included files matching '*.pyo' found under directory 'tests' warning: no previously-included files matching '*.pyc' found under directory 'examples' warning: no previously-included files matching '*.pyo' found under directory 'examples' no previously-included directories found matching 'docs/_build' zip_safe flag not set; analyzing archive contents... 
click.core: module references __file__ creating /usr/lib/python2.7/site-packages/click-6.6-py2.7.egg Extracting click-6.6-py2.7.egg to /usr/lib/python2.7/site-packages Adding click 6.6 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/click-6.6-py2.7.egg Searching for itsdangerous>=0.21 Reading https://pypi.python.org/simple/itsdangerous/ Best match: itsdangerous 0.24 Downloading https://pypi.python.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz#md5=a3d55aa79369aef5345c036a8a26307f Processing itsdangerous-0.24.tar.gz Writing /tmp/easy_install-qXGd07/itsdangerous-0.24/setup.cfg Running itsdangerous-0.24/setup.py -q bdist_egg --dist-dir /tmp/easy_install-qXGd07/itsdangerous-0.24/egg-dist-tmp-UtW3i_ warning: no previously-included files matching '*' found under directory 'docs/_build' creating /usr/lib/python2.7/site-packages/itsdangerous-0.24-py2.7.egg Extracting itsdangerous-0.24-py2.7.egg to /usr/lib/python2.7/site-packages Adding itsdangerous 0.24 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/itsdangerous-0.24-py2.7.egg Searching for Werkzeug>=0.7 Reading https://pypi.python.org/simple/Werkzeug/ Best match: Werkzeug 0.11.10 Downloading https://pypi.python.org/packages/b7/7f/44d3cfe5a12ba002b253f6985a4477edfa66da53787a2a838a40f6415263/Werkzeug-0.11.10.tar.gz#md5=780967186f9157e88f2bfbfa6f07a893 Processing Werkzeug-0.11.10.tar.gz Writing /tmp/easy_install-MABCLf/Werkzeug-0.11.10/setup.cfg Running Werkzeug-0.11.10/setup.py -q bdist_egg --dist-dir /tmp/easy_install-MABCLf/Werkzeug-0.11.10/egg-dist-tmp-oSdTdx no previously-included directories found matching 'docs/_build' no previously-included directories found matching 'docs/_themes' warning: no previously-included files matching '*.py[cdo]' found anywhere in distribution warning: no previously-included files matching '__pycache__' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution creating /usr/lib/python2.7/site-packages/Werkzeug-0.11.10-py2.7.egg Extracting Werkzeug-0.11.10-py2.7.egg to /usr/lib/python2.7/site-packages Adding Werkzeug 0.11.10 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/Werkzeug-0.11.10-py2.7.egg Searching for dnspython Reading https://pypi.python.org/simple/dnspython/ Best match: dnspython 1.14.0 Downloading https://pypi.python.org/packages/e1/ab/36f4e337d6cf6590f9cf46349f519b682542d211c604755ab8409f67f26b/dnspython-1.14.0.zip#md5=577f6b60b185d1ac90d76e9364a543d4 Processing dnspython-1.14.0.zip Writing /tmp/easy_install-t5Bojo/dnspython-1.14.0/setup.cfg Running dnspython-1.14.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-t5Bojo/dnspython-1.14.0/egg-dist-tmp-LYeA19 warning: no files found matching 'TODO' warning: no files found matching '*.txt' under directory 'examples' warning: no files found matching '*.txt' under directory 'tests' zip_safe flag not set; analyzing archive contents... 
Moving dnspython-1.14.0-py2.7.egg to /usr/lib/python2.7/site-packages Adding dnspython 1.14.0 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/dnspython-1.14.0-py2.7.egg Searching for jsonpointer>=1.9 Reading https://pypi.python.org/simple/jsonpointer/ Best match: jsonpointer 1.10 Downloading https://pypi.python.org/packages/f6/36/6bdd302303e8bc7c25102dbc1eabb3e3d97f57b0f8f414f4da7ea7ab9dd8/jsonpointer-1.10.tar.gz#md5=d68c0c6ad6889e9c94ec0feba719e45e Processing jsonpointer-1.10.tar.gz Writing /tmp/easy_install-Tg43O7/jsonpointer-1.10/setup.cfg Running jsonpointer-1.10/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Tg43O7/jsonpointer-1.10/egg-dist-tmp-TSLiD2 warning: pypandoc module not found, could not convert Markdown to RST zip_safe flag not set; analyzing archive contents... Moving jsonpointer-1.10-py2.7.egg to /usr/lib/python2.7/site-packages Adding jsonpointer 1.10 to easy-install.pth file Installing jsonpointer script to /usr/bin Installed /usr/lib/python2.7/site-packages/jsonpointer-1.10-py2.7.egg Searching for MarkupSafe Reading https://pypi.python.org/simple/MarkupSafe/ Best match: MarkupSafe 0.23 Downloading https://pypi.python.org/packages/c0/41/bae1254e0396c0cc8cf1751cb7d9afc90a602353695af5952530482c963f/MarkupSafe-0.23.tar.gz#md5=f5ab3deee4c37cd6a922fb81e730da6e Processing MarkupSafe-0.23.tar.gz Writing /tmp/easy_install-5aCaRD/MarkupSafe-0.23/setup.cfg Running MarkupSafe-0.23/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5aCaRD/MarkupSafe-0.23/egg-dist-tmp-2cxH0i gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory ========================================================================== WARNING: The C extension could not be compiled, speedups are not enabled. Failure information, if any, is above. Retrying the build without the C extension now. ========================================================================== WARNING: The C extension could not be compiled, speedups are not enabled. Plain-Python installation succeeded. ========================================================================== creating /usr/lib/python2.7/site-packages/MarkupSafe-0.23-py2.7.egg Extracting MarkupSafe-0.23-py2.7.egg to /usr/lib/python2.7/site-packages Adding MarkupSafe 0.23 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/MarkupSafe-0.23-py2.7.egg Searching for requests==2.10.0 Best match: requests 2.10.0 Adding requests 2.10.0 to easy-install.pth file Using /usr/lib/python2.7/site-packages Searching for six==1.10.0 Best match: six 1.10.0 Adding six 1.10.0 to easy-install.pth file Using /usr/lib/python2.7/site-packages Searching for urllib3==1.15.1 Best match: urllib3 1.15.1 Adding urllib3 1.15.1 to easy-install.pth file Using /usr/lib/python2.7/site-packages Finished processing dependencies for kpm==0.21.0
Moving jsonpatch-1.14-py2.7.egg to /usr/lib/python2.7/site-packages Adding jsonpatch 1.14 to easy-install.pth file Installing jsonpatch script to /usr/bin Installing jsondiff script to /usr/bin Installed /usr/lib/python2.7/site-packages/jsonpatch-1.14-py2.7.egg Searching for jinja2 Reading https://pypi.python.org/simple/jinja2/ Best match: Jinja2 2.8 Downloading https://pypi.python.org/packages/f2/2f/0b98b06a345a761bec91a079ccae392d282690c2d8272e708f4d10829e22/Jinja2-2.8.tar.gz#md5=edb51693fe22c53cee5403775c71a99e Processing Jinja2-2.8.tar.gz Writing /tmp/easy_install-srzY31/Jinja2-2.8/setup.cfg Running Jinja2-2.8/setup.py -q bdist_egg --dist-dir /tmp/easy_install-srzY31/Jinja2-2.8/egg-dist-tmp-Vvv4Zv warning: no files found matching 'run-tests.py' warning: no files found matching '*' under directory 'custom_fixers' warning: no files found matching '*' under directory 'jinja2/testsuite/res' warning: no previously-included files matching '*' found under directory 'docs/_build' warning: no previously-included files matching '*.pyc' found under directory 'jinja2' warning: no previously-included files matching '*.pyc' found under directory 'docs' warning: no previously-included files matching '*.pyo' found under directory 'jinja2' warning: no previously-included files matching '*.pyo' found under directory 'docs' creating /usr/lib/python2.7/site-packages/Jinja2-2.8-py2.7.egg Extracting Jinja2-2.8-py2.7.egg to /usr/lib/python2.7/site-packages Adding Jinja2 2.8 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/Jinja2-2.8-py2.7.egg Searching for pyyaml Reading https://pypi.python.org/simple/pyyaml/ Best match: PyYAML 3.11 Downloading https://pypi.python.org/packages/75/5e/b84feba55e20f8da46ead76f14a3943c8cb722d40360702b2365b91dec00/PyYAML-3.11.tar.gz#md5=f50e08ef0fe55178479d3a618efe21db Processing PyYAML-3.11.tar.gz Writing /tmp/easy_install-TMZK3C/PyYAML-3.11/setup.cfg Running PyYAML-3.11/setup.py -q bdist_egg --dist-dir /tmp/easy_install-TMZK3C/PyYAML-3.11/egg-dist-tmp-6aOcKU gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory libyaml is not found or a compiler error: forcing --without-libyaml (if libyaml is installed correctly, you may need to specify the option --include-dirs or uncomment and modify the parameter include_dirs in setup.cfg) zip_safe flag not set; analyzing archive contents... 
Moving PyYAML-3.11-py2.7-linux-x86_64.egg to /usr/lib/python2.7/site-packages Adding PyYAML 3.11 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/PyYAML-3.11-py2.7-linux-x86_64.egg Searching for futures Reading https://pypi.python.org/simple/futures/ Best match: futures 3.0.5 Downloading https://pypi.python.org/packages/55/db/97c1ca37edab586a1ae03d6892b6633d8eaa23b23ac40c7e5bbc55423c78/futures-3.0.5.tar.gz#md5=ced2c365e518242512d7a398b515ff95 Processing futures-3.0.5.tar.gz Writing /tmp/easy_install-rj3iWE/futures-3.0.5/setup.cfg Running futures-3.0.5/setup.py -q bdist_egg --dist-dir /tmp/easy_install-rj3iWE/futures-3.0.5/egg-dist-tmp-cXQjQ_ creating /usr/lib/python2.7/site-packages/futures-3.0.5-py2.7.egg Extracting futures-3.0.5-py2.7.egg to /usr/lib/python2.7/site-packages Adding futures 3.0.5 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/futures-3.0.5-py2.7.egg Searching for click>=2.0 Reading https://pypi.python.org/simple/click/ Best match: click 6.6 Downloading https://pypi.python.org/packages/7a/00/c14926d8232b36b08218067bcd5853caefb4737cda3f0a47437151344792/click-6.6.tar.gz#md5=d0b09582123605220ad6977175f3e51d Processing click-6.6.tar.gz Writing /tmp/easy_install-tRWBc3/click-6.6/setup.cfg Running click-6.6/setup.py -q bdist_egg --dist-dir /tmp/easy_install-tRWBc3/click-6.6/egg-dist-tmp-Ay9KKV warning: no previously-included files matching '*.pyc' found under directory 'docs' warning: no previously-included files matching '*.pyo' found under directory 'docs' warning: no previously-included files matching '*.pyc' found under directory 'tests' warning: no previously-included files matching '*.pyo' found under directory 'tests' warning: no previously-included files matching '*.pyc' found under directory 'examples' warning: no previously-included files matching '*.pyo' found under directory 'examples' no previously-included directories found matching 'docs/_build' zip_safe flag not set; analyzing archive contents... 
click.core: module references __file__ creating /usr/lib/python2.7/site-packages/click-6.6-py2.7.egg Extracting click-6.6-py2.7.egg to /usr/lib/python2.7/site-packages Adding click 6.6 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/click-6.6-py2.7.egg Searching for itsdangerous>=0.21 Reading https://pypi.python.org/simple/itsdangerous/ Best match: itsdangerous 0.24 Downloading https://pypi.python.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz#md5=a3d55aa79369aef5345c036a8a26307f Processing itsdangerous-0.24.tar.gz Writing /tmp/easy_install-qXGd07/itsdangerous-0.24/setup.cfg Running itsdangerous-0.24/setup.py -q bdist_egg --dist-dir /tmp/easy_install-qXGd07/itsdangerous-0.24/egg-dist-tmp-UtW3i_ warning: no previously-included files matching '*' found under directory 'docs/_build' creating /usr/lib/python2.7/site-packages/itsdangerous-0.24-py2.7.egg Extracting itsdangerous-0.24-py2.7.egg to /usr/lib/python2.7/site-packages Adding itsdangerous 0.24 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/itsdangerous-0.24-py2.7.egg Searching for Werkzeug>=0.7 Reading https://pypi.python.org/simple/Werkzeug/ Best match: Werkzeug 0.11.10 Downloading https://pypi.python.org/packages/b7/7f/44d3cfe5a12ba002b253f6985a4477edfa66da53787a2a838a40f6415263/Werkzeug-0.11.10.tar.gz#md5=780967186f9157e88f2bfbfa6f07a893 Processing Werkzeug-0.11.10.tar.gz Writing /tmp/easy_install-MABCLf/Werkzeug-0.11.10/setup.cfg Running Werkzeug-0.11.10/setup.py -q bdist_egg --dist-dir /tmp/easy_install-MABCLf/Werkzeug-0.11.10/egg-dist-tmp-oSdTdx no previously-included directories found matching 'docs/_build' no previously-included directories found matching 'docs/_themes' warning: no previously-included files matching '*.py[cdo]' found anywhere in distribution warning: no previously-included files matching '__pycache__' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution creating /usr/lib/python2.7/site-packages/Werkzeug-0.11.10-py2.7.egg Extracting Werkzeug-0.11.10-py2.7.egg to /usr/lib/python2.7/site-packages Adding Werkzeug 0.11.10 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/Werkzeug-0.11.10-py2.7.egg Searching for dnspython Reading https://pypi.python.org/simple/dnspython/ Best match: dnspython 1.14.0 Downloading https://pypi.python.org/packages/e1/ab/36f4e337d6cf6590f9cf46349f519b682542d211c604755ab8409f67f26b/dnspython-1.14.0.zip#md5=577f6b60b185d1ac90d76e9364a543d4 Processing dnspython-1.14.0.zip Writing /tmp/easy_install-t5Bojo/dnspython-1.14.0/setup.cfg Running dnspython-1.14.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-t5Bojo/dnspython-1.14.0/egg-dist-tmp-LYeA19 warning: no files found matching 'TODO' warning: no files found matching '*.txt' under directory 'examples' warning: no files found matching '*.txt' under directory 'tests' zip_safe flag not set; analyzing archive contents... 
Moving dnspython-1.14.0-py2.7.egg to /usr/lib/python2.7/site-packages Adding dnspython 1.14.0 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/dnspython-1.14.0-py2.7.egg Searching for jsonpointer>=1.9 Reading https://pypi.python.org/simple/jsonpointer/ Best match: jsonpointer 1.10 Downloading https://pypi.python.org/packages/f6/36/6bdd302303e8bc7c25102dbc1eabb3e3d97f57b0f8f414f4da7ea7ab9dd8/jsonpointer-1.10.tar.gz#md5=d68c0c6ad6889e9c94ec0feba719e45e Processing jsonpointer-1.10.tar.gz Writing /tmp/easy_install-Tg43O7/jsonpointer-1.10/setup.cfg Running jsonpointer-1.10/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Tg43O7/jsonpointer-1.10/egg-dist-tmp-TSLiD2 warning: pypandoc module not found, could not convert Markdown to RST zip_safe flag not set; analyzing archive contents... Moving jsonpointer-1.10-py2.7.egg to /usr/lib/python2.7/site-packages Adding jsonpointer 1.10 to easy-install.pth file Installing jsonpointer script to /usr/bin Installed /usr/lib/python2.7/site-packages/jsonpointer-1.10-py2.7.egg Searching for MarkupSafe Reading https://pypi.python.org/simple/MarkupSafe/ Best match: MarkupSafe 0.23 Downloading https://pypi.python.org/packages/c0/41/bae1254e0396c0cc8cf1751cb7d9afc90a602353695af5952530482c963f/MarkupSafe-0.23.tar.gz#md5=f5ab3deee4c37cd6a922fb81e730da6e Processing MarkupSafe-0.23.tar.gz Writing /tmp/easy_install-5aCaRD/MarkupSafe-0.23/setup.cfg Running MarkupSafe-0.23/setup.py -q bdist_egg --dist-dir /tmp/easy_install-5aCaRD/MarkupSafe-0.23/egg-dist-tmp-2cxH0i gcc: error: /usr/lib/rpm/redhat/redhat-hardened-cc1: No such file or directory ========================================================================== WARNING: The C extension could not be compiled, speedups are not enabled. Failure information, if any, is above. Retrying the build without the C extension now. ========================================================================== WARNING: The C extension could not be compiled, speedups are not enabled. Plain-Python installation succeeded. 
========================================================================== creating /usr/lib/python2.7/site-packages/MarkupSafe-0.23-py2.7.egg Extracting MarkupSafe-0.23-py2.7.egg to /usr/lib/python2.7/site-packages Adding MarkupSafe 0.23 to easy-install.pth file Installed /usr/lib/python2.7/site-packages/MarkupSafe-0.23-py2.7.egg Searching for requests==2.10.0 Best match: requests 2.10.0 Adding requests 2.10.0 to easy-install.pth file Using /usr/lib/python2.7/site-packages Searching for six==1.10.0 Best match: six 1.10.0 Adding six 1.10.0 to easy-install.pth file Using /usr/lib/python2.7/site-packages Searching for urllib3==1.15.1 Best match: urllib3 1.15.1 Adding urllib3 1.15.1 to easy-install.pth file Using /usr/lib/python2.7/site-packages Finished processing dependencies for kpm==0.21.0 afterwards if i run: kpm I get: $ kpm Traceback (most recent call last): File "/usr/bin/kpm", line 4, in <module> __import__('pkg_resources').run_script('kpm==0.21.0', 'kpm') File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 724, in run_script self.require(requires)[0].run_script(script_name, ns) File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1650, in run_script exec(code, namespace, namespace) File "/usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg/EGG-INFO/scripts/kpm", line 2, in <module> from kpm.command import cli File "/usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg/kpm/command.py", line 3, in <module> from kpm.commands import all_commands File "/usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg/kpm/commands/__init__.py", line 1, in <module> from kpm.commands.push import PushCmd File "/usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg/kpm/commands/push.py", line 5, in <module> from kpm.manifest_jsonnet import ManifestJsonnet File "/usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg/kpm/manifest_jsonnet.py", line 4, in <module> from kpm.render_jsonnet import RenderJsonnet, yaml_to_jsonnet File "/usr/lib/python2.7/site-packages/kpm-0.21.0-py2.7.egg/kpm/render_jsonnet.py", line 3, in <module> import _jsonnet ImportError: No module named _jsonnet Thanks for report, I added pip install -r requirement.txt in the Makefile you can also run it manually
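For anyone else landing here with the same traceback, a quick pre-flight check can confirm whether the missing module is the problem before re-running the tool. This is a minimal sketch; it assumes the PyPI package jsonnet provides the _jsonnet module (as the official Python bindings do), and the install hints it prints are suggestions, not kpm's documented procedure.

# Hypothetical pre-flight check for the missing _jsonnet dependency.
# Assumption: the PyPI package "jsonnet" ships the _jsonnet module.
import importlib
import sys

def ensure_jsonnet():
    """Verify the _jsonnet bindings are importable; print a hint if not."""
    try:
        importlib.import_module("_jsonnet")
        print("_jsonnet is available")
    except ImportError:
        print("_jsonnet missing; try one of:", file=sys.stderr)
        print("  pip install jsonnet", file=sys.stderr)
        print("  pip install -r requirements.txt  # from the kpm source tree", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    ensure_jsonnet()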
gharchive/issue
2016-08-24T01:18:56
2025-04-01T06:38:17.082408
{ "authors": [ "ant31", "sym3tri" ], "repo": "coreos/kpm", "url": "https://github.com/coreos/kpm/issues/107", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
341224020
[helm/kube-prometheus] Can't control nodeSelector for all services

What did you do? I need to use Helm to deploy the Prometheus Operator and kube-prometheus and nominate a nodeSelector (for example, function=monitoring). Since there are more dependency services under kube-prometheus (https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/requirements.yaml), the nodeSelector key is not set for all of them within the default values.yaml.

What did you expect to see? All services running on the nominated nodeSelector.

What did you see instead? Under which circumstances? It is deployed to all workers.

Environment AWS EKS (should be the same for others)

Kubernetes version information: insert output of kubectl version here

Kubernetes cluster kind:

$ kk version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I ran it with my own values.yaml, but it doesn't work:

alertmanager:
  nodeSelector:
    function: monitoring
prometheus:
  nodeSelector:
    function: monitoring
exporter-coredns:
  nodeSelector:
    function: monitoring
exporter-kube-controller-manager:
  nodeSelector:
    function: monitoring
exporter-kube-dns:
  nodeSelector:
    function: monitoring
exporter-kube-etcd:
  nodeSelector:
    function: monitoring
exporter-kube-scheduler:
  nodeSelector:
    function: monitoring
exporter-kube-state:
  nodeSelector:
    function: monitoring
exporter-kubelets:
  nodeSelector:
    function: monitoring
exporter-kubernetes:
  nodeSelector:
    function: monitoring
exporter-node:
  nodeSelector:
    function: monitoring
grafana:
  nodeSelector:
    function: monitoring

Maybe exporter-node should run on all workers.

I'd rather recommend you use affinity to select nodes. However, not all of the charts support either nodeSelector or affinity. If you are interested in contributing to the project, read the guide and please send a PR.
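While the charts catch up, one workaround is to stamp the same nodeSelector onto every subchart section programmatically instead of maintaining it by hand. A hedged sketch follows; it assumes PyYAML is installed, the subchart names come straight from the values file quoted above, and the file name "values.yaml" is illustrative. As noted in the thread, you may want to leave exporter-node out of the list so it keeps running on every worker.

# Sketch: stamp one nodeSelector onto every kube-prometheus subchart
# listed above. Assumes PyYAML; path and selector are illustrative.
import yaml

SUBCHARTS = [
    "alertmanager", "prometheus", "exporter-coredns",
    "exporter-kube-controller-manager", "exporter-kube-dns",
    "exporter-kube-etcd", "exporter-kube-scheduler", "exporter-kube-state",
    "exporter-kubelets", "exporter-kubernetes", "grafana",
    # "exporter-node" is deliberately omitted: it should run everywhere.
]

def stamp_node_selector(path: str, selector: dict) -> None:
    with open(path) as f:
        values = yaml.safe_load(f) or {}
    for chart in SUBCHARTS:
        if not isinstance(values.get(chart), dict):
            values[chart] = {}
        values[chart]["nodeSelector"] = dict(selector)
    with open(path, "w") as f:
        yaml.safe_dump(values, f, default_flow_style=False)

stamp_node_selector("values.yaml", {"function": "monitoring"})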
gharchive/issue
2018-07-14T09:40:12
2025-04-01T06:38:17.090291
{ "authors": [ "gianrubio", "ozbillwang" ], "repo": "coreos/prometheus-operator", "url": "https://github.com/coreos/prometheus-operator/issues/1617", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
110704827
fetch: support docker v2 registry

The docker:// protocol currently uses v1, which is effectively gone from upstream Docker at this point.

Probably related: https://github.com/appc/docker2aci/issues/46

I threw a lot of time at getting docker2aci to support the new API. I got stuck on something; my woes are documented in the related issue in docker2aci.

@dgonyeo can this be closed now?

As of https://github.com/coreos/rkt/pull/1826, yup.
gharchive/issue
2015-10-09T17:39:25
2025-04-01T06:38:17.092841
{ "authors": [ "dgonyeo", "jonboulle", "stevenschlansker" ], "repo": "coreos/rkt", "url": "https://github.com/coreos/rkt/issues/1583", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
634436688
SQLi using set var at PL2

Referring to #1727, this new rule tries to block SQLi payloads that use the MySQL set-variable syntax at PL1.

Looked a bit into this rule and ran it on some traffic: it doesn't trigger too much but still raises a few false positives on passwords. We're also seeing a few false positives on semi-formatted data from the POST body, but overall it looks fine on our end. I feel the rule could be more specific (it doesn't catch all set bla = 0 syntaxes and ignores the set part), but it tackles the original report and isn't harmful along the way.

> still raises a few false positives on passwords

Sorry, I can't understand this statement. Can you attach an example? A password should be free input text, and if a user chooses a password like exec(/bin/bash);'+OR+1=1-- it may trigger the whole rule set...

> We're also seeing a few false positives on semi-formatted data from the POST body

Please, if you can share them, it would be really helpful for improving this rule.

> it doesn't catch all set bla = 0 syntaxes and ignores the set part

Yes, because (IMO) set bla = 0 isn't something we would like to block at PL1. As you can see in #1727, it is easy to bypass libinjection and our PL1 by using both "@var:=" and {`label`<sql>} syntaxes. Based on my tests, this is not true when using the set variable = <sql> syntax. If you can provide a different example of exploiting the 1727 PoC by using this syntax, it would be really helpful.

We ran it a bit further and it actually started to trigger on scans :D It appears to overlap slightly with 942190, but that may be because it was a complex payload.

> Sorry, I can't understand this statement. Can you attach an example?

The actual examples are redacted, but not too hard to imagine: you just need the following expression somewhere in the password: @[\w\d]+\=\S. For example, @4=d, which could quite likely appear in a randomly generated password. The password you suggested is deliberately malicious; the example we're seeing here is more like sh appearing somewhere in a randomly generated string and triggering an SHI rule. I agree that free text is a challenge, but we can't completely ignore it: contextualizing the rules enables fewer edge cases and wider usage. For some rules it's highly complex; for this one it doesn't feel so.

> Please, if you can share them, it would be really helpful

I'm seeing payloads like this, although I'm not sure where they come from: redacted@blablav=website.com

> Based on my tests, this is not true when using the set variable = syntax

Thanks a lot for the rationale, that makes a ton of sense! Could you add it to the rule's header to explain where this design came from? It will likely be useful in the future!

> The password you suggested is deliberately malicious

Yes, but this is not only related to this rule. It is true for all CRS libinjection rules or XSS rules when a random string or base64-encoded string is provided. For example, random/onrandom== triggers 941100 and 941120 at PL1. Those two rules should have the same behavior for you.

I agree, but when possible we should try to minimize it. 941120 is a good example of a rule that we deemed too sensitive and have since disabled by default for our users. My point isn't to be completely resilient against false positives in free text (I agree it'll be tricky) but to contextualize patterns a bit when there are low-hanging fruits. Is your point that we should keep the patterns as broad as possible, or that what I'm identifying as a low-hanging fruit (looking for a set prefix) actually isn't?

I'm a bit late to the show, but here is the summary of the discussion about this PR during the project chat in July 2020: "We want better documentation, better / more tests, and the rule stays in the PL2 kindergarten until it is proven to have only very few FPs. It can still be in PL1 for 3.4, but by default it goes to PL2."

@theMiddleBlue: Could you shift this to PL2, please? We'll merge as soon as this is done.

Monthly chat meeting September: https://github.com/coreruleset/coreruleset/issues/1869#issuecomment-688474359 We should explain it in the issue and ask @theMiddleBlue to explain why the range is not sufficient. TODO from meeting: theMiddleBlue can shift to PL2 and we merge at the next meeting.

rule moved to PL2

This was supposed to be merged after the November meeting. Time to do this. Thank you @theMiddleBlue.
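To make the false-positive class discussed above concrete, here is a tiny harness around the simplified fragment quoted in the thread. To be clear, @[\w\d]+=\S is only the fragment mentioned in the discussion, not the full CRS rule regex, and the sample strings are invented for illustration.

# Quick check of the simplified fragment from the thread (not the full rule).
import re

fragment = re.compile(r"@[\w\d]+=\S")  # thread wrote it as @[\w\d]+\=\S

samples = {
    "xK9@4=dQ2m": "random password (benign)",
    "redacted@blablav=website.com": "semi-formatted POST data (benign)",
    "set @a=(select 1)": "MySQL set-variable payload (malicious)",
}

for value, label in samples.items():
    hit = bool(fragment.search(value))
    print(f"{label:45s} match={hit}")

All three samples match, which is exactly the tension in the thread: the fragment alone cannot separate a random password from a set-variable payload, hence the suggestion to anchor on the set prefix.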
gharchive/pull-request
2020-06-08T09:28:27
2025-04-01T06:38:17.103920
{ "authors": [ "Taiki-San", "dune73", "franbuehler", "lifeforms", "theMiddleBlue" ], "repo": "coreruleset/coreruleset", "url": "https://github.com/coreruleset/coreruleset/pull/1793", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2592960311
Pre-Commit hook rejects valid commits

As a developer, I would like the Kotlin linter/formatter to only run on staged files for the pre-commit hook check, so that I can make small commits. Currently, a single unfinished or non-compiling file that is not contained in the commit is enough for the formatter to reject the commit.

Lowest priority; this is almost a non-issue.

Notes: This does not have a milestone on purpose. This is not a blocker and also not annoying enough to warrant investing a ton of time into it.

Acceptance Criteria: [ ] Pre-commit hook only checks the staged files and doesn't reject commits containing only valid code

Solved by using https://github.com/nikkischnelle/ktfmt-pre-commit-hook
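The core of the linked hook is just "ask git for the staged files, run the formatter on those alone". A rough sketch of that idea follows; the ktfmt invocation and its flags are placeholders and depend on the formatter version, so treat this as the shape of the fix rather than a drop-in hook.

#!/usr/bin/env python3
# Sketch of a pre-commit hook that only checks *staged* Kotlin files.
# The formatter command and flags below are illustrative placeholders.
import subprocess
import sys

def staged_kotlin_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".kt")]

def main() -> int:
    files = staged_kotlin_files()
    if not files:
        return 0  # nothing staged that the formatter cares about
    # Placeholder invocation; adjust to your formatter's actual CLI.
    return subprocess.run(["ktfmt", "--dry-run", *files]).returncode

if __name__ == "__main__":
    sys.exit(main())

Because only staged paths are passed along, an unfinished file sitting in the working tree can no longer block an otherwise valid commit.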
gharchive/issue
2024-10-16T20:15:30
2025-04-01T06:38:17.108624
{ "authors": [ "flamion", "nikkischnelle" ], "repo": "corewar-teamprojekt/corewar", "url": "https://github.com/corewar-teamprojekt/corewar/issues/47", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2268757846
About --wx --wy --a --cdt --conf_thresh --vmax

Hello! Thanks for sharing the code. I am currently trying to apply UCMCTrack to VisDrone. I have a question: what are the criteria for determining the parameters mentioned in the title in run_mot17_test.py? I can see that the parameters are set differently for each sequence. Are these experimentally determined values, or is there code to find the appropriate values?

Hi. I am also curious how the parameters were tuned for the test set, since there are no ground-truth files available.

Same question! Is there any efficient solution to this?
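Since the repo does not appear to ship a tuning script (at least, none is mentioned in this thread), one honest way to reproduce per-sequence values on data you control is a plain grid search over the flags. The sketch below is entirely hypothetical: run_sequence() is a placeholder for invoking the tracker with those flags and scoring it (e.g., HOTA/MOTA) on a split that has ground truth, and the grid values are made up. It cannot answer the test-set question, which is precisely the concern raised above, because scoring requires ground truth.

# Hypothetical grid search over the per-sequence flags discussed above.
import itertools

def run_sequence(seq: str, wx: float, wy: float, a: float,
                 cdt: int, conf_thresh: float, vmax: float) -> float:
    """Placeholder: run the tracker with these flags and return a score."""
    raise NotImplementedError

def tune(seq: str) -> tuple:
    grid = itertools.product(
        [0.1, 0.5, 1.0, 5.0],   # --wx (illustrative values)
        [0.1, 0.5, 1.0, 5.0],   # --wy
        [10.0, 100.0],          # --a
        [10, 30],               # --cdt
        [0.01, 0.1, 0.5],       # --conf_thresh
        [0.5, 1.0, 2.0],        # --vmax
    )
    return max(grid, key=lambda params: run_sequence(seq, *params))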
gharchive/issue
2024-04-29T11:18:31
2025-04-01T06:38:17.110608
{ "authors": [ "Emil-Jiang", "Paulkie99", "ohchannah" ], "repo": "corfyi/UCMCTrack", "url": "https://github.com/corfyi/UCMCTrack/issues/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1684083752
Spring 2023 Pre-Enroll Release Summary TBD Changes Front & Back End [x] Create workflow to backup Firestore #754 @benjamin-shen [x] Semesters singleton migration #723 @andxu282 @zachary0kent [x] Added Standardized Abbreviations for Colleges, Majors, and Minors #780 @jerrry1123 @zachary0kent @noschiff [x] Refactor Teleport Modal to use Composition API #791 @zachary0kent [x] courses + meta data script #796 @rohanmaheshwari430 @zachary0kent [x] Analytics User Properties #803 @zachary0kent [x] Changing Courses Retrieval Method in BottomBar component #804 @rohanmaheshwari430 [x] Course retrieval function and New course interface #805 @rohanmaheshwari430 @zachary0kent [x] Course color and credits dropdown #808 @KaylinChan Requirements [x] Added Applied Economics Minor #773 @mirandayu131 [x] Added Earth and atmosphere Minor #779 @rohanmaheshwari430 [x] Added Game Design Minor #785 @KaylinChan [x] Add data science minor #788 @elizabeth-tang [x] Fix DBME #797 @elizabeth-tang [x] Add ansc minor #793 @PabloRaigoza [x] Parsed Other Yes List #801 @mirandayu131 More [x] new spring roster #772 @mirandayu131 [x] Update README.md #778 @rohanmaheshwari430 [x] add pablo to contributers #781 @PabloRaigoza [x] Add Kaylin to README.md #782 @KaylinChan [x] Add name to README #783 @elizabeth-tang [x] Update Spring 2023 Contributors in README.md #784 @noschiff Test Plan Confirm everything works as expected on all pages of CoursePlan site, especially components that use course data, with no console errors. Notes Do not squash and merge. @rohanmaheshwari430 course data on dev isn't working @rohanmaheshwari430 course data on dev isn't working Yeah see my comment
gharchive/pull-request
2023-04-26T01:06:19
2025-04-01T06:38:17.121835
{ "authors": [ "noschiff", "zachary0kent" ], "repo": "cornell-dti/course-plan", "url": "https://github.com/cornell-dti/course-plan/pull/821", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2276180435
SP24 Release Summary SP24 Release Schedule Generator! Frontend fixes Updates courses to FA24 What's up with the merge conflicts?
gharchive/pull-request
2024-05-02T18:12:11
2025-04-01T06:38:17.123672
{ "authors": [ "andxu282", "zachary-kent" ], "repo": "cornell-dti/course-plan", "url": "https://github.com/cornell-dti/course-plan/pull/931", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
639461684
Wrong warning about disabled bluetooth

Describe the bug The app tells me that Bluetooth is disabled even though it is not.

Expected behaviour The app should not show me a wrong warning.

Steps to reproduce the issue I don't know.

Technical details Tested on a OnePlus 6T. See screenshot for more details.

Additional context At first it displayed it correctly. After opening the app again later, the warning appeared. Disabling and re-enabling Bluetooth fixed the issue. I'll report if it re-appears after a while.

I had that too; disabling and re-enabling tracing worked for me.

If the trick from @tomjschwanke doesn't work, you just have to uninstall the app and restart your phone. I had this problem too.

While there may be many tricks to circumvent the issue, this shouldn't happen in the first place and could cause confusion for less tech-savvy users. In the worst case, the user might not even notice the issue and think that the app is collecting data even though it isn't.

Same issue here. OnePlus 6T A6013, but OxygenOS 10.3.4. Reproducible. So this is a big thing that needs to get investigated ASAP!

Same here! Perhaps a OnePlus-related bug?!

Hello @Bastian and community, to follow up, I would like to ask if the error still occurs? Thanks, LMM Corona-Warn-App Open Source Team

No, the error did not occur again. It probably only happens on the first install.

I assume this issue got fixed in a previous update. I will close this issue for now to keep the board clean. Best regards, SG Corona-Warn-App Open Source Team
gharchive/issue
2020-06-16T08:08:18
2025-04-01T06:38:17.156231
{ "authors": [ "Bastian", "FEWI999", "GPclips", "MaikWagner", "corneliusroemer", "svengabr", "tomjschwanke" ], "repo": "corona-warn-app/cwa-app-android", "url": "https://github.com/corona-warn-app/cwa-app-android/issues/496", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
672878326
Translation Delivery

This pull request is for translation delivery. Please review and approve them, @D067796
gharchive/pull-request
2020-08-04T15:23:50
2025-04-01T06:38:17.157823
{ "authors": [ "service-tip-git" ], "repo": "corona-warn-app/cwa-app-android", "url": "https://github.com/corona-warn-app/cwa-app-android/pull/976", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1414971099
Translation Delivery Translation Delivery This pull request is for translation delivery. Please review and approve them.
gharchive/pull-request
2022-10-19T13:25:04
2025-04-01T06:38:17.158606
{ "authors": [ "service-tip-git" ], "repo": "corona-warn-app/cwa-app-ios", "url": "https://github.com/corona-warn-app/cwa-app-ios/pull/4849", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1538319796
Add hotline shutdown information to README

This PR adds the information about the planned TAN hotline shutdown on Jan 31, 2023 to the README file. Reference https://www.coronawarn.app/en/blog/2023-01-18-cwa-3-0/

There's a separate PR bumping the copyright year in the README.md file here: https://github.com/corona-warn-app/cwa-hotline/pull/32 I have no strong opinion on whether this PR should be merged together with the copyright-year changes, or whether those changes should be dropped here. I leave it to the team to decide.

@dsarkar Will you consider merging this PR?

@MikeMcC399 @Ein-Tim Thanks for the ping, we will review this.
gharchive/pull-request
2023-01-18T16:37:24
2025-04-01T06:38:17.161508
{ "authors": [ "Ein-Tim", "MikeMcC399", "dsarkar" ], "repo": "corona-warn-app/cwa-hotline", "url": "https://github.com/corona-warn-app/cwa-hotline/pull/34", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1066256924
Science blog 4 EN Internal Tracking ID: EXPOSUREAPP-10697

@dsarkar Do you want feedback on this PR at the moment? Could you explain what "on hold" means, if that is relevant for a review?

Hi @MikeMcC399 This is ready for review. "On hold" because we still don't know the publishing date.

@dsarkar

> This is ready for review. "On hold" because we still don't know the publishing date.

I will take a look at this.

I noticed a few minor points. Also, the number formatting is inconsistent concerning the decimal and thousands separators. I'm planning to submit a PR onto the cuisines fork. Due to the number formatting there will be quite a few changes which are inconvenient to add via comments. I've never done a PR for somebody else's fork before, so it will be a new experience. Stand by! 🙂

@MikeMcC399 thanks in advance for reviewing. Should you have trouble with the PR, we can move the branch to this repository (new PR). Let me know!

My PR is https://github.com/cuisines/cwa-website/pull/7. There are two issues in figures which I did not address:

In Figure 2 "Number of donations (by operating system version)" the numerical day of the month has a dot (.) after it, e.g. "8. Mar", which is a typical German format. There should be no dot in the date.

In Figure 7 "Participants by population density in urban and rural districts." "east" is written lowercase, whereas "West" is written uppercase. I suggest making "east" uppercase to be consistent.

@dsarkar My inputs have now all been dealt with. Thank you for merging the PR https://github.com/cuisines/cwa-website/pull/7 into this branch.
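For anyone editing the numbers in a translated post, the two conventions in play look like this side by side. This is a standard-library illustration of the formatting difference only; it makes no claim about how the website itself renders values.

# German vs. English number and date conventions discussed above.
from datetime import date

n = 1234567.89
english = f"{n:,.2f}"                                   # 1,234,567.89
german = english.replace(",", "X").replace(".", ",").replace("X", ".")
print(english)                                          # English style
print(german)                                           # 1.234.567,89

d = date(2021, 3, 8)
print(d.strftime("%d. %b"))                             # German-style:  08. Mar
print(d.strftime("%d %b").lstrip("0"))                  # English-style: 8 Mar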
gharchive/pull-request
2021-11-29T16:42:02
2025-04-01T06:38:17.166755
{ "authors": [ "MikeMcC399", "dsarkar" ], "repo": "corona-warn-app/cwa-website", "url": "https://github.com/corona-warn-app/cwa-website/pull/2137", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1736012434
Feature/technical hotline shutdown

- remove hotline cypress test
- remove hotline test from workflow
- remove hotline from footer

@dsarkar There is also a line in https://github.com/corona-warn-app/cwa-website/blob/master/docs/TESTING.md which could be removed
gharchive/pull-request
2023-06-01T10:38:51
2025-04-01T06:38:17.169468
{ "authors": [ "MikeMcC399", "dsarkar" ], "repo": "corona-warn-app/cwa-website", "url": "https://github.com/corona-warn-app/cwa-website/pull/3529", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1670157665
🛑 FashionLady.ro is down In 47885a2, FashionLady.ro (https://fashionlady.ro) was down: HTTP code: 0 Response time: 0 ms Resolved: FashionLady.ro is back up in 5e6358c.
gharchive/issue
2023-04-16T21:50:28
2025-04-01T06:38:17.197681
{ "authors": [ "corozanu" ], "repo": "corozanu/uptime.crz.ro", "url": "https://github.com/corozanu/uptime.crz.ro/issues/66", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
688316505
Use apt repo in debian images and update to 8u265 and 11.0.8

Issue #, if available: #28

Description of changes: Instead of downloading the deb and using dpkg for local installation, use the Corretto apt repo. Update the Dockerfiles to the latest 8u265 and 11.0.8 releases.

Rev-2: Modified update-dockerfiles.sh to update the Debian Dockerfiles.
gharchive/pull-request
2020-08-28T19:41:36
2025-04-01T06:38:17.204499
{ "authors": [ "ziyiluo" ], "repo": "corretto/corretto-docker", "url": "https://github.com/corretto/corretto-docker/pull/30", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
754538568
Service Monitor templates have no defaults

When deploying from the latest commit, I'm getting these errors:

Error: template: cortex/templates/table-manager-servicemonitor.yaml:1:14: executing "cortex/templates/table-manager-servicemonitor.yaml" at <.Values.table_manager.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/store-gateway-servicemonitor.yaml:1:14: executing "cortex/templates/store-gateway-servicemonitor.yaml" at <.Values.store_gateway.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/ruler-servicemonitor.yaml:1:14: executing "cortex/templates/ruler-servicemonitor.yaml" at <.Values.ruler.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/query-frontend-servicemonitor.yaml:1:14: executing "cortex/templates/query-frontend-servicemonitor.yaml" at <.Values.query_frontend.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/querier-servicemonitor.yaml:1:14: executing "cortex/templates/querier-servicemonitor.yaml" at <.Values.querier.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/ingester-servicemonitor.yaml:1:14: executing "cortex/templates/ingester-servicemonitor.yaml" at <.Values.ingester.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/distributor-servicemonitor.yaml:1:14: executing "cortex/templates/distributor-servicemonitor.yaml" at <.Values.distributor.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/configs-servicemonitor.yaml:1:14: executing "cortex/templates/configs-servicemonitor.yaml" at <.Values.configs.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/compactor-servicemonitor.yaml:1:14: executing "cortex/templates/compactor-servicemonitor.yaml" at <.Values.compactor.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled
Error: template: cortex/templates/alertmanager-servicemonitor.yaml:1:14: executing "cortex/templates/alertmanager-servicemonitor.yaml" at <.Values.alertmanager.serviceMonitor.enabled>: nil pointer evaluating interface {}.enabled

The top-level values.yaml doesn't contain any default values for these serviceMonitor entries. I was able to work around the issue by adding the following to my custom values file:

table_manager:
  serviceMonitor:
    enabled: false
store_gateway:
  serviceMonitor:
    enabled: false
ruler:
  serviceMonitor:
    enabled: false
query_frontend:
  serviceMonitor:
    enabled: false
querier:
  serviceMonitor:
    enabled: false
ingester:
  serviceMonitor:
    enabled: false
distributor:
  serviceMonitor:
    enabled: false
configs:
  serviceMonitor:
    enabled: false
compactor:
  serviceMonitor:
    enabled: false
alertmanager:
  serviceMonitor:
    enabled: false

I'd offer a PR, but I'm not fully certain what the author had in mind for these serviceMonitor entries, and it's not something that I'm using right now. Thanks!

Great catch, thank you! Those defaults were added to the values.yaml file.
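Until a chart release with the defaults lands, a small linter over your custom values file can catch the missing keys before helm does. This is a convenience sketch, not part of the chart: it assumes PyYAML, the component list mirrors the error messages above, and "my-values.yaml" is an illustrative path.

# Sketch: report which components still lack serviceMonitor.enabled.
import yaml

COMPONENTS = [
    "table_manager", "store_gateway", "ruler", "query_frontend", "querier",
    "ingester", "distributor", "configs", "compactor", "alertmanager",
]

def missing_servicemonitor_defaults(path: str) -> list[str]:
    with open(path) as f:
        values = yaml.safe_load(f) or {}
    missing = []
    for comp in COMPONENTS:
        section = values.get(comp) or {}
        monitor = section.get("serviceMonitor") or {}
        if "enabled" not in monitor:
            missing.append(comp)
    return missing

for comp in missing_servicemonitor_defaults("my-values.yaml"):
    print(f"{comp}.serviceMonitor.enabled is not set")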
gharchive/issue
2020-12-01T16:23:48
2025-04-01T06:38:17.212330
{ "authors": [ "drewbowering", "khaines" ], "repo": "cortexproject/cortex-helm-chart", "url": "https://github.com/cortexproject/cortex-helm-chart/issues/79", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1443834666
Chart labels overlapping

As seen in the attached example, the left-side label on all charts is covering the axis values.

Much better. Please sync with @katrinDY; she is also working on some element repositioning, rotation and hiding features.

This is _Axis.nameLocation, correct? How about if we let the user decide between 'center', 'end', 'start', and an additional 'hide' that hides the name?

Correct, we can do that. I wouldn't do 'hide'; instead, if there is no value we can just not show the label. Otherwise, a dropdown for the position.

The Y axis now has an option for the label position on both Compose and Reporter charts.

@Fajfa This looks good. When you have the time, just add the entry for the CL. 🍻

What was added? For charts with a y-axis, a new label position option was added. It enables you to position the label at the bottom, middle or top of the y-axis.

How was it added? By using the nameLocation property of the Apache ECharts y-axis, which enables this. A select dropdown was also added in the y-axis section of the Compose and Reporter chart configurators.

Added to CL.
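For reference, this is the underlying ECharts option the changelog entry refers to, sketched here as the JSON-like payload ECharts consumes (written as a Python dict purely for illustration). The axis name and nameGap value are invented; for a vertical axis, "start" corresponds to the bottom, "end" to the top.

# Sketch of the ECharts yAxis option behind the new configurator dropdown.
import json

y_axis_label_position = "middle"  # one of "start", "middle", "end"

option = {
    "yAxis": {
        "type": "value",
        "name": "Records",                      # illustrative axis label
        "nameLocation": y_axis_label_position,  # the property discussed above
        "nameGap": 40,                          # illustrative spacing
    }
}
print(json.dumps(option, indent=2))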
gharchive/issue
2022-11-10T12:37:57
2025-04-01T06:38:17.229595
{ "authors": [ "Bojan-Svirkov", "Fajfa", "darh" ], "repo": "cortezaproject/corteza", "url": "https://github.com/cortezaproject/corteza/issues/464", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
304623489
keeps style folder but deletes everything in it

Changes the tools/removeDemo.js file so that the styles folder itself is not deleted, but everything in the styles folder is, per https://github.com/coryhouse/react-slingshot/issues/545#issue-304096397

Coverage remained the same at 91.453% when pulling 4e239567c0058388d86b6f190dfa2112ede85df5 on nharrisanalyst:master into 4f03a7d271a49dc62fa28e036c60a11ebd0a3ef3 on coryhouse:master.
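The behavior the PR aims for is simply "empty the directory, keep the directory". The actual change lives in the Node script mentioned above; the sketch below expresses the same idea in Python only to make the intent explicit, and the folder path is illustrative.

# Behavioral sketch of the change: remove the contents, keep the folder.
import shutil
from pathlib import Path

def empty_dir(path: str) -> None:
    for entry in Path(path).iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)   # remove nested folders
        else:
            entry.unlink()         # remove files

empty_dir("src/styles")  # path illustrative; keeps src/styles itself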
gharchive/pull-request
2018-03-13T04:23:00
2025-04-01T06:38:17.249966
{ "authors": [ "coveralls", "nharrisanalyst" ], "repo": "coryhouse/react-slingshot", "url": "https://github.com/coryhouse/react-slingshot/pull/546", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2542637031
Sonic revert

I want sonic revert pls

A Minecraft maybe?

Well, it's blocked, so it really doesn't matter, but nate-gamew needs to fix Sonic revert.
gharchive/issue
2024-09-23T13:21:50
2025-04-01T06:38:17.255078
{ "authors": [ "Axolotl045419", "warawl" ], "repo": "cosmic-city/cosmic-city.github.io", "url": "https://github.com/cosmic-city/cosmic-city.github.io/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2059966854
Update Terra Gas Fees update terra to reflect gas price changes Thank you for the contribution. In the future please try to use a more descriptive PR title.
gharchive/pull-request
2023-12-29T13:20:52
2025-04-01T06:38:17.257152
{ "authors": [ "JeremyParish69", "mwmerz" ], "repo": "cosmos/chain-registry", "url": "https://github.com/cosmos/chain-registry/pull/3520", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1941110716
fix: allow value with slashes in URL template (for v4)

Description This issue has already been raised, and a fix has been implemented in v7 and main: https://github.com/cosmos/ibc-go/pull/3045 This fix should also be applied to v5 and v6, which are currently maintained, as well as v4.

closes: #XXXX

Commit Message / Changelog Entry type: commit message see the guidelines for commit messages. (view raw markdown for examples)

Before we can merge this PR, please make sure that all the following items have been checked off. If any of the checklist items are not applicable, please leave them but write a little note why.

[x] Targeted PR against correct branch (see CONTRIBUTING.md).
[ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
[x] Code follows the module structure standards and Go style guide.
[ ] Wrote unit and integration tests.
[ ] Updated relevant documentation (docs/) or specification (x/<module>/spec/).
[ ] Added relevant godoc comments.
[x] Provide a commit message to be used for the changelog entry in the PR description for review.
[x] Re-reviewed Files changed in the Github PR explorer.
[ ] Review Codecov Report in the comment section below once CI passes.

thanks for this PR @JoowonYun! I will add the backport label.

Thanks, @JoowonYun. Considering that all lines of v4 and v5 will reach end of life at the end of October (see here), I question whether it really makes a lot of sense to backport this fix to those lines. Have you encountered situations where this problem impeded you in chains using ibc-go < v7 and you couldn't work around it?

I actually think we might need to look into this more after finding this issue: it looks like this fix actually introduced a bug in which the endpoint interprets an empty trace rather than a call to the DenomTraces endpoint.

@JoowonYun I think the problem with /ibc/apps/transfer/v1/denom_traces that you may be having was fixed in https://github.com/cosmos/ibc-go/pull/4709. If this is causing you trouble and you cannot work around the issue, we are happy to backport both https://github.com/cosmos/ibc-go/pull/3045 and https://github.com/cosmos/ibc-go/pull/4709 to all currently supported release lines; we just wanted to make sure that we would cut all these releases for a good reason. :)

Ah, I didn't understand the backport process exactly. I checked the PRs and they're perfect. 👍

@JoowonYun The v4.5.1 release with the fix is out now.
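In client terms, the slash problem looks like this: an IBC denom trace such as transfer/channel-0/uatom contains slashes, so before the template fix a REST client had to percent-encode the value to query the denom-trace endpoint at all. A hedged sketch; the host is invented, and only the encoding step is the point.

# Percent-encoding a denom trace with slashes for the REST query.
# Host is illustrative; the route follows the endpoint named above.
from urllib.parse import quote

denom = "transfer/channel-0/uatom"
encoded = quote(denom, safe="")  # transfer%2Fchannel-0%2Fuatom
url = f"https://lcd.example.com/ibc/apps/transfer/v1/denom_traces/{encoded}"
print(url)

With the template fix in place, the raw slashed value can appear in the URL path directly, which is what the PR title describes.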
gharchive/pull-request
2023-10-13T02:17:46
2025-04-01T06:38:17.304582
{ "authors": [ "JoowonYun", "charleenfei", "crodriguezvega" ], "repo": "cosmos/ibc-go", "url": "https://github.com/cosmos/ibc-go/pull/4858", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
365066989
Jordan/1283 language refactor

Closes #1283

Description: many language changes based on info from @rigelrozanski (#1283). Didn't change any Vuex modules, which should still be done to complete this "epic" (#1381).

Already did! #1381
gharchive/pull-request
2018-09-28T22:58:57
2025-04-01T06:38:17.306283
{ "authors": [ "jbibla" ], "repo": "cosmos/voyager", "url": "https://github.com/cosmos/voyager/pull/1383", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
417204821
Fabo/2138 fix nbsp

Closes #2138 Closes #1211

Description: Thank you! 🚀

For contributor:
[ ] Added entries in CHANGELOG.md with issue # and GitHub username
[ ] Reviewed Files changed in the github PR explorer
[ ] Attach screenshots of the UI components on the PR description (if applicable)
[ ] Scope of work approved for big PRs

For reviewer:
[ ] Manually tested the changes on the UI

Conflicts, please resolve.
gharchive/pull-request
2019-03-05T09:42:58
2025-04-01T06:38:17.309354
{ "authors": [ "faboweb", "fedekunze" ], "repo": "cosmos/voyager", "url": "https://github.com/cosmos/voyager/pull/2163", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
428704859
Use "/usr/local" as default prefix for "make install"

Conventional filesystem standards imply that the /usr prefix should be used only by system package managers. Custom installations should go to /usr/local by default. However, on Linux machines make install installs to /usr, messing with package manager installations. Let's install to /usr/local/ everywhere. However, we should keep the /usr prefix for packaging, therefore override the default for the deb and rpm targets.

How will this affect current users that have Themis installed in /usr?

Good question... Assuming these are users that install from source code, I can see make uninstall breaking if they wish to use a newer version to uninstall the previous one. The users will have to explicitly set PREFIX=/usr to uninstall the previous version from the correct path. Similarly, if Themis 0.11 is installed to /usr (by the current default) then installing Themis 0.12 to /usr/local will not overwrite the previous install (as it is now). Instead, Themis 0.12 will be installed alongside Themis 0.11. Usually /usr/local takes precedence over /usr, so effectively Themis 0.12 should be used. But that depends on the software, and it may break in case of misconfiguration. In particular, Themis libraries do not have versions embedded into them, so the software can load a wrong library. Overall, I feel that we will have to ask those users who install from source to cleanly uninstall the previous version of Themis using the previous source code, and then install the upgrade using the new source code. Alternatively, I think we could detect whether Themis is already installed to /usr and not managed by a package manager. If so, then we keep using the old prefix; otherwise we default to /usr/local (unless the user explicitly sets something). Together with that we could urge users to switch to the new default and eventually drop the special case.

@vixentael I've pushed a commit that implements the backwards-compatibility behavior described in the previous comment. With that, on systems that do not have Themis installed (yet) we'll be using the new /usr/local prefix. However, if the system already has Themis manually installed to /usr (not by the package manager) then we'll keep using /usr. If the user explicitly specifies the prefix then we'll be using that instead.

I'd avoid setting the path automatically. How about the following behaviour:

1. If we detect Themis installed in a path different from /usr/local, but included in the ldconfig search paths, then stop with a special error code (different from the default ones).
2. Add an environment variable, for example THEMIS_ALLOW_MULTIPLE_INSTALLS, that allows installing even when a previous installation is detected.

@shadinua, well, that's a reasonable approach as well. Though, I feel that installing a local version to /usr/local in parallel with a system version in /usr will quite often be used by developers for local testing, so I don't think that we should make it more annoying. We can't account for all the conflicts that are possible with multiple installations. Ideally, I'd avoid doing all this guesswork altogether (as it's bound to fail for someone; human perversion has no limits). If you install Themis from source then, well, do manage your versions yourself if you insist on not using any sort of package manager.

I've added one more convenience for the users: now we print the installation prefix so the users know where Themis has been installed to or removed from. Now, what should we do about multiple installations and upgrades?

1. Decline this PR. Keep installation paths as is.
Pro: users can continue being blissfully ignorant of where they install stuff. Con: sudo make install on a system with installed packages overwrites the packages.

2. Accept the current PR. Prefer /usr/local, unless there is an existing installation in /usr.
Pro: users can still be somewhat ignorant. Pro: we won't overwrite package manager installations anymore. Con: complicates installation, endorses bad behavior.

3. Revert the guesswork, follow @shadinua. Deny multiple installations, unless asked to.
Pro: not taking chances, fail early if we detect a conflict. Pro: still allows multiple installs if needed. Con: complicates installation, has to be documented.

4. Revert the guesswork. Just install to /usr/local from now on.
Pro: dead simple. Con: a previous installation to /usr has to be manually removed.

Any other options out there? @vixentael?

As for paths, I believe that we'd run into conflicts either way if we are ever going to fix the mismatch on CentOS (libraries should go to /usr/lib64, not /usr/lib). For that we will have to ask the users to cleanly remove their previous installation of Themis and install the new version from scratch. That gives us freedom to change the new installation path however we like.

> Though, I feel that installing a local version to /usr/local in parallel with a system version in /usr will quite often be used by developers for local testing, so I don't think that we should make it more annoying.

By default, /usr/local/lib is included in the ldconfig search paths of most distributions. Thus, in the case where we have different versions of the library installed simultaneously in /usr/lib and /usr/local/lib, it will change behaviour anyway. I wouldn't like to think that installing a new version of the library for development purposes, while changing the behaviour of the whole system, is the right way. Isn't it better to put the library in the project folder?

> Ideally, I'd avoid doing all this guesswork altogether ...

I completely agree. That is why I'd suggest choosing one of the standard, predictable and simple variants to solve this issue.

> Isn't it better to put the library in the project folder?

This involves much more hassle than make install. You have to get Themis into your project directory, set up all lookup paths in your project, etc. I don't really imagine people using make install for that. I'd personally just copy binaries out of the build directory if I don't want to keep Themis installed on my system.

> By default, /usr/local/lib is included in the ldconfig search paths of most distributions. Thus, in the case where we have different versions of the library installed simultaneously in /usr/lib and /usr/local/lib, it will change behaviour anyway.

Yeah, that's exactly the point. Imagine you're using PyThemis and there's some bug in the core library. Your core library is installed from package repositories. However, you are able to grab the latest source code, sudo make install it, check whether the bug is fixed for you (without having to fiddle with Python library lookup), and then sudo make uninstall everything back. And the system-managed installation of Themis is kept intact. Another common approach is a system-wide installation to /opt, where you do make install PREFIX=/opt/themis-0.11 and configure multiple projects to use the same installation. I believe we should allow such installations without any additional required flags and variables, regardless of whatever is or is not installed in /usr and /usr/local. I think that we can keep it simple by installing to the specified prefix if it is explicitly requested by the user. If they request it, they have their reasons and did their homework.

> Yeah, that's exactly the point. Imagine you're using PyThemis and there's some bug in the core library. Your core library is installed from package repositories. ...

There are too many cases. For some of them this is an advantage; for some, a disadvantage. These libraries can be installed not only on developers' machines, where changing the behaviour of the whole system is normal. In the proposed variant, parallel installation of the library using make install is absolutely possible. However, in my opinion, for responsible libraries such as Themis, it is better to be sure that the user is consciously doing this. Well, I'll not insist on that behaviour. I'd prefer to have a Makefile without any extended logic inside. Let's choose variant #4. But we have to implement CentOS detection to set the correct paths.

> Let's choose variant #4. But we have to implement CentOS detection to set the correct paths.

With this option I would feel better if we notify the user that the installation path was changed. For example, if we detect that Themis is already installed in /usr, we can write a warning log message after a successful installation, something like: "Multiple Themis installations found. If you didn't do this intentionally, consider removing old versions."

@shadinua @vixentael I have reverted the 'AI-commit' and added a simple check with a warning if we detect Themis in both /usr and /usr/local simultaneously, suggesting to remove the (presumed) old installation from /usr and keep installations by make install in /usr/local.
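For readers skimming the thread, the warning that was finally implemented boils down to "look for Themis libraries under both prefixes and nudge the user if both exist". A rough model of that logic follows; the real check lives in the project's Makefile, so the library glob patterns and wording here are illustrative only.

# Rough model of the final warning: detect Themis under both prefixes.
from pathlib import Path

def themis_prefixes() -> list[str]:
    hits = []
    for prefix in ("/usr", "/usr/local"):
        lib = Path(prefix) / "lib"
        if any(lib.glob("libthemis*")):  # pattern illustrative
            hits.append(prefix)
    return hits

found = themis_prefixes()
if len(found) > 1:
    print("warning: Themis found in", " and ".join(found))
    print("consider removing the old installation from /usr")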
gharchive/pull-request
2019-04-03T11:06:50
2025-04-01T06:38:17.330285
{ "authors": [ "ilammy", "shadinua", "vixentael" ], "repo": "cossacklabs/themis", "url": "https://github.com/cossacklabs/themis/pull/448", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
656475964
Move static libs and link files into dev packages Normally the "library" packages on Debian and RHEL contain only shared library files. Headers, static libraries, and shared library links go into the "development" package, which is necessary to build software that uses the library (but not to run it). On RHEL the static libraries are often moved into their own -static package to avoid bloating the -devel package (as dynamic linkage is customary). Historically, we've been putting everything into the "library" package, but let's follow the conventions from now on.
libthemis: libsoter.so.0 and libthemis.so.0
libthemis-devel: header files, pkg-config files, libsoter.a and libthemis.a, libsoter.so and libthemis.so symlinks
We already recommend installing "development" packages to develop software that uses Themis. Some language wrappers - dynamic languages like Python and Ruby - will need that, as they resolve the Themis dynamic library dynamically and need the symlink to be present.
Checklist [X] Change is covered by automated tests (somewhat? on Buildbot? maybe) [X] Changelog is updated
By the way, Some language wrappers - dynamic languages like Python and Ruby - will need that as they resolve the Themis dynamic library dynamically and need the symlink to be present. This is not necessarily true. The Python docs, for example, suggest that find_library should locate libraries with ABI version suffixes too. However, I remember that we did have issues with finding and loading libraries when the symlink is not present. We stay on the safe side for now, but it may very well be that libthemis is sufficient for Python software to run as well. No idea about Ruby though (their docs are very terse).
Does this change affect BuildBot testing? @shadinua It should not. Here we do not change the building or installation flow as it looks from the user's (and CI/CD) side.
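As an illustration of the dynamic lookup those wrappers perform, here is a small Python sketch (the soname libthemis.so.0 matches the package layout above; the fallback path is illustrative, not PyThemis's actual loading code):

import ctypes
import ctypes.util

# find_library() consults ldconfig on Linux, so it can often resolve the
# ABI-versioned file (e.g. "libthemis.so.0") even when the bare
# "libthemis.so" development symlink is not installed.
name = ctypes.util.find_library("themis")
if name is not None:
    themis = ctypes.CDLL(name)
else:
    # Fall back to the versioned soname shipped in the runtime package.
    themis = ctypes.CDLL("libthemis.so.0")
print("Loaded:", themis)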
gharchive/pull-request
2020-07-14T09:39:21
2025-04-01T06:38:17.337427
{ "authors": [ "ilammy", "shadinua" ], "repo": "cossacklabs/themis", "url": "https://github.com/cossacklabs/themis/pull/678", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1268684124
Styling does not work I have tried to update the colorList in the global config, but it does not work. Check this StackBlitz: https://stackblitz.com/edit/minimal-confirm-box-s13g4b
Same here. It doesn't matter what colours I set in app.module, they do not change. I had to add CSS to my global styles, e.g.:
.ed-btn-customone {
  background-color: #2d922d; // slightly darker main color green, to override costlypopup CUSTOMONE
}
Sorry for waiting so long; fixed it in the 3.1.4 release: https://github.com/costlydeveloper/ngx-awesome-popup/releases/tag/3.1.4 https://stackblitz.com/edit/minimal-confirm-box-o6ubr5?file=package.json
gharchive/issue
2022-06-12T19:50:55
2025-04-01T06:38:17.345972
{ "authors": [ "costlydeveloper", "mdudek", "rmcsharry" ], "repo": "costlydeveloper/ngx-awesome-popup", "url": "https://github.com/costlydeveloper/ngx-awesome-popup/issues/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
925382091
How to disable scroll animation when the user taps the TOC in a post Checklist [x] I have read the tutorials and know the correct effect of the functional design. [x] There are no similar questions among existing issues (including closed ones). [x] I searched the Internet for related problems, but still couldn't solve it. [x] My question is based on the latest code of the master branch. Description If a blog post has a lot of images, when the user clicks the TOC it doesn't jump to the right position. The reason for this problem is that the loading of the images changes the layout of the page during the scrolling animation. This issue can be avoided if the page can jump without animation. So, here's my question: how do I turn off the scrolling animation when the user taps the TOC?
Demo post (using the Chirpy theme with scroll animation): https://huang-libo.github.io/posts/Objective-C-Runtime-Changes-in-iOS-14/ - you can try to reproduce the issue by clicking the last item in the TOC. On GitHub there is no animation for jumps inside a page, so you land at the exact location; demo: https://github.com/Huang-Libo/Huang-Libo.github.io/blob/master/_posts/2021-05-17-Objective-C-Runtime-Changes-in-iOS-14.md Demo post (using the Chirpy theme without scroll animation): https://onevcat.com/2019/12/2019-final/ Lastly, thanks for the amazing theme, really appreciate your hard work!
The smooth scrolling is implemented in _javascript/utils/smooth-scroll.js. To disable that script, first remove the ${JS_SRC}/utils/smooth-scroll.js entry from the postJs task in gulpfile.js/tasks/js.js:
const postJs = () => {
  return concatJs([
    `${JS_SRC}/commons/*.js`,
    `${JS_SRC}/utils/img-extra.js`,
    `${JS_SRC}/utils/timeago.js`,
    `${JS_SRC}/utils/lang-badge.js`,
    `${JS_SRC}/utils/checkbox.js`,
    `${JS_SRC}/utils/copy-link.js`,
    // 'smooth-scroll.js' must be called after ToC is ready
    `${JS_SRC}/utils/smooth-scroll.js`
  ], 'post'
  );
};
Then install gulp.js with npm install -g gulp-cli and run the gulp command to regenerate the *.min.js files under the assets/js/dist directory.
Building on @NichtsHsu's answer: if the theme was added as a gem, first add the _javascript and gulpfile.js directories to the root of your own project, then modify gulpfile.js/tasks/js.js and delete the ${JS_SRC}/utils/smooth-scroll.js line.
If gulp is not installed, install the related npm packages first:
npm install gulp-cli -g
npm install gulp -D
npm install --save-dev gulp-concat
npm install --save-dev gulp-rename
npm install --save-dev gulp-uglify
npm install --save-dev gulp-insert
Finally, run the gulp command in the project root to regenerate the *.min.js files under assets/js/dist.
The approaches above require modifying the theme's source code. Is there any way to make smooth scrolling configurable, for example via a boolean in _config.yml? Would that be feasible? Also, if a user does want smooth scrolling, the scroll position is still inaccurate when a post has many images. How should that bug be solved?
Hi @Huang-Libo, smooth scrolling is a feature that clearly improves the user experience, so I have never considered disabling it, and of course won't in the future; making it a configuration option would be a step backwards... The inaccurate scroll position happens because the browser repaints page elements after images load, which makes the positions computed by the scrolling logic drift accordingly. I already have an idea for improving this experience, so please keep an eye on the upcoming commits.
gharchive/issue
2021-06-19T12:13:15
2025-04-01T06:38:17.363869
{ "authors": [ "Huang-Libo", "NichtsHsu", "cotes2020" ], "repo": "cotes2020/jekyll-theme-chirpy", "url": "https://github.com/cotes2020/jekyll-theme-chirpy/issues/351", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1627252245
🛑 Mainnet [TezTools] (eu01-node.teztools.net) is down In 2d9cee9, Mainnet [TezTools] (eu01-node.teztools.net) (https://eu01-node.teztools.net/chains/main/blocks/head/header) was down: HTTP code: 502 Response time: 563 ms Resolved: Mainnet [TezTools] (eu01-node.teztools.net) is back up in 74729f2.
gharchive/issue
2023-03-16T11:14:37
2025-04-01T06:38:17.367677
{ "authors": [ "copolycube" ], "repo": "cotezos/teznodes", "url": "https://github.com/cotezos/teznodes/issues/1671", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
62828341
Port iOS's View.updateIndex() - multi-view index update iOS's View.updateIndex() updates multiple views. This might improve updateIndex() performance. Dup of #681
gharchive/issue
2015-03-18T23:37:41
2025-04-01T06:38:17.371144
{ "authors": [ "hideki", "zgramana" ], "repo": "couchbase/couchbase-lite-java-core", "url": "https://github.com/couchbase/couchbase-lite-java-core/issues/501", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
117460337
Changes for compatibility with ES 2.x Changes to make the plugin compatible with ES 2.x. Still testing... Please wait until my next comment to merge.
One additional thing that I haven't looked into yet is the compatibility with other ES plugins, namely "marvel". In my attempts to get the transporter to work with ES 2.0 (note that this PR is actually for ES 2.x), I noted that the ".%DATE%"-type indices created by "marvel" were configured to not allow dynamic mapping, which caused issues when the transporter tried to connect to them (during the couchbaseCheckpoint transport process). I'm not sure if there are 2.x-compatible versions of the other ES plugins, so I'm not sure if that is an issue for this particular PR. I would add code to just filter out ES indices that begin with '.', but I'm not sure if that is going to have any repercussions with the XDCR contact, etc. Any guidance on this issue would be appreciated.
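A tiny Python sketch of the filtering idea floated above (illustrative only; the plugin itself is Java):

# Skip "hidden" indices (names beginning with '.'), such as Marvel's
# ".%DATE%" indices whose disabled dynamic mapping breaks the
# couchbaseCheckpoint transport.
indices = [".marvel-2015.11.17", "couchbase_bucket", ".kibana"]
transportable = [name for name in indices if not name.startswith(".")]
assert transportable == ["couchbase_bucket"]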
gharchive/pull-request
2015-11-17T21:56:51
2025-04-01T06:38:17.399069
{ "authors": [ "bignolip" ], "repo": "couchbaselabs/elasticsearch-transport-couchbase", "url": "https://github.com/couchbaselabs/elasticsearch-transport-couchbase/pull/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
111960931
Tests not fully supported on Windows. Our integration tests rely on starting and stopping a temporary CouchDB server. On Windows there are some problems doing this. Consequently, the integration test get_view fails. $ cargo test get_view Running target\debug\commands-8c77530f0accee12.exe running 1 test test get_view ... FAILED failures: ---- get_view stdout ---- thread 'get_view' panicked at 'called `Result::unwrap()` on an `Err` value: InternalServerError { response: ErrorResponse { error: "EXIT", reason: "{{badmatch,{error,{bad_return_value,{os_process_error,{exit_status,4}}}}},\n [{couch_query_servers,new_process,3,\n [{file,\"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_query_servers.erl\"},\n {line,477}]},\n {couch_query_servers,lang_proc,3,\n [{file,\"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_query_servers.erl\"},\n{line,462}]},\n {couch_query_servers,handle_call,3,\n[{file,\"c:/cygwin/relax/APACHE~2.1/src/couchdb/couch_query_servers.erl\"},\n {line,334}]},\n {gen_server,handle_msg,5,[{file,\"gen_server.erl\"},{line,585}]},\n {proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,239}]}]}" } }', ../src/libcore\result.rs:732 failures: get_view test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured thread '<main>' panicked at 'Some tests failed', ../src/libtest/lib.rs:252 Also, the integration tests assume CouchDB is installed in the default path. From src/server.rs: c.arg("c:/program files (x86)/apache software foundation/couchdb/etc/couchdb/default.ini"); c.arg("c:/program files (x86)/apache software foundation/couchdb/etc/couchdb/local.ini"); Lastly, the tests run slowly, and CPU use for the test executable seems excessive. The root cause is with how the tests manage the CouchDB server process. More specifically, the Server class spins, reading continually from the server process's stdout stream. The Server class reads stdout to obtain the server's URI. Then the Server class drains stdout by continually reading line-by-line, ignoring each line. However, on Windows, these read operations return immediately, and the result appears to be the empty string. Also, on Windows, the CouchDB server process is spamming stdout. Adding -sasl errlog_type error to the command line invocation reduces the output, but it doesn't resolve this issue. This will be fixed via #21.
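The busy-wait can be avoided with ordinary blocking reads; here is a rough Python sketch of the idea (the couchdb invocation and the startup-message text are assumptions for illustration, and the real code does this in Rust in src/server.rs):

import subprocess
import threading

# Start a temporary CouchDB server and capture its stdout.
proc = subprocess.Popen(
    ["couchdb", "-a", "local.ini"],
    stdout=subprocess.PIPE,
    text=True,
)

# Blocking, line-buffered reads: each iteration waits for data instead of
# returning an empty string immediately, so there is no spin loop.
uri = None
for line in proc.stdout:
    if "has started on" in line:
        uri = line.rsplit(" ", 1)[-1].strip()
        break

# Drain the remaining output in a background thread so the pipe buffer
# never fills up and blocks the server process.
threading.Thread(target=proc.stdout.read, daemon=True).start()
print("server URI:", uri)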
gharchive/issue
2015-10-17T12:15:54
2025-04-01T06:38:17.403201
{ "authors": [ "cmbrandenburg" ], "repo": "couchdb-rs/couchdb", "url": "https://github.com/couchdb-rs/couchdb/issues/8", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2188887323
readme Fix the read-me entirely. Make sure it is no longer knitted / markdown. Also, @ciretje already had some updates with names and stuff that appear to be lost; maybe you could locate them. If you want to see the separate details (e.g., fix title etc.) see the recorded video.
Good work Linda. I have finalized your changes.
gharchive/issue
2024-03-15T15:37:03
2025-04-01T06:38:17.418603
{ "authors": [ "Ciertje" ], "repo": "course-dprep/team-project-team_1", "url": "https://github.com/course-dprep/team-project-team_1/issues/41", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
614531179
Update jsoniter-scala-core, ... to 2.2.2 Updates com.github.plokhotnyuk.jsoniter-scala:jsoniter-scala-core com.github.plokhotnyuk.jsoniter-scala:jsoniter-scala-macros from 2.2.1 to 2.2.2. GitHub Release Notes - Version Diff I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "com.github.plokhotnyuk.jsoniter-scala" } ] labels: library-update, semver-patch Superseded by https://github.com/coursier/coursier/pull/1709.
gharchive/pull-request
2020-05-08T06:17:40
2025-04-01T06:38:17.432871
{ "authors": [ "alexarchambault", "scala-steward" ], "repo": "coursier/coursier", "url": "https://github.com/coursier/coursier/pull/1707", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
142513620
Add byte-array returning methods TODO: implement decodeQRDataAsBytes Will return to this in the future.
gharchive/pull-request
2016-03-22T00:50:44
2025-04-01T06:38:17.471211
{ "authors": [ "osdiab" ], "repo": "cozmo/jsQR", "url": "https://github.com/cozmo/jsQR/pull/8", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
371446461
Accept non-ASCII octets in percent-decoding This PR fixes #118. All unit tests passed.
Removed code that throws when an octet outside ASCII is detected.
Changed a unit test to assert that decoding %80 does not throw.
Added a unit test that asserts decoding some UTF-8 does not throw.
Many thanks for the report and for the PR. I can confirm that this does what it's expected to do. FYI, I developed a "successor" to this URI implementation, based on the WhatWG spec. The percent-encoding implementation there behaves the same way as your fix.
Thanks for merging. I already have an eye on your url project (looks promising), but I cannot switch at the moment since I am using cppnetlib, which depends on network::uri.
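For comparison, a quick illustration of the same octet-level behaviour using Python's standard library (this mirrors the WhatWG approach mentioned above; it is not the C++ code from this PR):

from urllib.parse import unquote, unquote_to_bytes

# Percent-decoding yields raw octets first; they only become text once a
# charset is applied. "%80" alone is not valid UTF-8, but it is still a
# perfectly legal percent-encoded octet and must not be rejected.
assert unquote_to_bytes("%80") == b"\x80"

# A multi-byte UTF-8 sequence: "%C3%BC" decodes to the bytes for "ü".
assert unquote_to_bytes("%C3%BC") == b"\xc3\xbc"
assert unquote("%C3%BC") == "ü"
print("non-ASCII percent-decoding OK")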
gharchive/pull-request
2018-10-18T09:45:25
2025-04-01T06:38:17.506679
{ "authors": [ "glynos", "mtrenkmann" ], "repo": "cpp-netlib/uri", "url": "https://github.com/cpp-netlib/uri/pull/119", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
1970966345
News items display News listings display needs updating. TBD all changes, but these seem like a reasonable starting point:
- Title of item links to the news detail page. This is always necessary, even for Link types, since this is the only page a user can edit/delete from.
- URL displayed underneath, directly linking to the URL.
- If content exists, show a short preview. No need for any "Read more" link since the title of the item links to it.
- Include the author's name followed by the published date after.
- If an image was added, the image needs to have some styling applied. It should not be floating to the extreme right of the item info; if there is no description, the image can appear where the description would have.
@gregnewman I forgot I had assigned this to myself. I like what you showed me already, but I do think we should have the author's name and the published date displayed. Along with having the image appear more nicely (or, I'm not opposed to not actually showing the image here either and only having it on the detail page).
@4down this has been merged in PR #830 so if you confirm the work you can close it.
gharchive/issue
2023-10-31T17:36:28
2025-04-01T06:38:17.520918
{ "authors": [ "4down", "gregnewman" ], "repo": "cppalliance/temp-site", "url": "https://github.com/cppalliance/temp-site/issues/769", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
304610162
Data is not displayed dynamically I deployed it successfully following the tutorial, but I noticed that on your DEMO the server data is displayed dynamically, while my own deployment is static: the network and other data only change after manually refreshing the page.
Check whether nginx is caching~ @pkuplus
@cppla I checked: PHP did have Zend OPcache enabled; after turning it off it works. It's just that the refresh rate is rather slow. What is your refresh interval set to?
The default is 1 second; you can adjust it yourself. @pkuplus, see the demo: https://tz.cloudcpp.com
@cppla My default is also 1 second, but it only auto-refreshes roughly once every few dozen seconds, while your demo site refreshes almost in real time.
Check whether caching, a reverse proxy, a forward proxy, etc. are enabled. On my side the refresh rate is 1s and my nginx cache is 1s, so in effect it refreshes every 2s. @pkuplus
@cppla OK. I just timed it with a stopwatch: it refreshes about once every 30 seconds. I'll check again and reply.
location ~ [^/]\.php(/|$) {
    #fastcgi_pass remote_php_ip:9000;
    fastcgi_pass unix:/dev/shm/php-cgi.sock;
    fastcgi_index index.php;
    include fastcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
    #expires 30d;
    #access_log off;
    add_header Cache-Control no-cache;
}
location ~ .*\.(js|css)?$ {
    #expires 7d;
    #access_log off;
    add_header Cache-Control no-cache;
}
location ~ /\.ht {
    deny all;
}
I have already commented out the static caching in nginx above, but I observe that it still refreshes once every 30 seconds. What else should I check?
It must be a problem with your nginx, or with an apache/tomcat/other backend server. Is this only part of your configuration? Check your main nginx configuration and the nginx configuration file that the ServerStatus site is bound to. It has nothing to do with PHP caching; this does not depend on PHP. You can check it yourself with F12 Chrome debugging. https://s1.ax1x.com/2018/03/13/9fbksU.gif
@pkuplus, actually you can override the nginx configuration just for the Server Status site's directory; that won't affect your existing caching configuration, and ServerStatus will update in real time. nginx configuration is modular by nature, so you can override it per virtual host.
#If you have a lot of static files to serve through Nginx then caching of the files' metadata (not the actual files' contents) can save some latency.
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
Found the cause: it was open_file_cache_valid 30s; in nginx.conf. Thanks for the help! @cppla
gharchive/issue
2018-03-13T02:55:49
2025-04-01T06:38:17.526708
{ "authors": [ "cppla", "pkuplus" ], "repo": "cppla/ServerStatus", "url": "https://github.com/cppla/ServerStatus/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
98782820
More data Continuing the experiment to see where it goes. It works for me, and I think the team will like it. Plus, cookies... Not sure about this... Have you tested it really well? OK... I'll go with it this time...
gharchive/pull-request
2015-08-03T16:25:50
2025-04-01T06:38:17.549594
{ "authors": [ "jpjuecks" ], "repo": "cps209test/teamwork", "url": "https://github.com/cps209test/teamwork/pull/2", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
398565752
Add FHIR R4 to the translator Add support for the released FHIR R4 version. Addressed in 1.3.14
gharchive/issue
2019-01-12T15:57:46
2025-04-01T06:38:17.556108
{ "authors": [ "brynrhodes" ], "repo": "cqframework/clinical_quality_language", "url": "https://github.com/cqframework/clinical_quality_language/issues/380", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
71388119
Overflow content choice Why is the scrollbar that manages the overflow content put in the modal/alert rather than on the window (with overflow:auto)? On a tablet or mobile device, when the scrollbar is attached to the device screen, I think it's better and simpler to scroll the content (in the case where the content overflows the height of the screen). Currently: (screenshot) Could it maybe be better like this? (screenshot) Moreover, I noticed that you have added a new theme. If we put a lot of text in it, it's not very pretty... when do you plan to address this?
Hello @usb248, thanks for the suggestion, I will implement this in the next release. :)
Ok, nice. Why can't we choose the width of the alert/confirm/dialog other than by modifying CSS?
Yes, I will be adding a choice of width, https://github.com/craftpip/jquery-confirm/issues/27 - bootstrap columns will be provided to set the widths.
So, cool! I'll wait for your release ;). Another improvement which could be very nice: adding hash tracking by manipulating the browser history (by adding #hash...), like this similar plugin: http://vodkabears.github.io/remodal/.
Hello @usb248, the overflow feature you mentioned is added in the new release. About the other feature, tracking with #hash: what is a practical usage for this? Thanks
When we press the back/next button (on the mouse, or in the browser window, to navigate the browser history), the alert/confirm/dialog comes back. When we refresh the page, the alert/confirm/dialog doesn't disappear; it remains displayed.
gharchive/issue
2015-04-27T20:11:52
2025-04-01T06:38:17.639086
{ "authors": [ "craftpip", "usb248" ], "repo": "craftpip/jquery-confirm", "url": "https://github.com/craftpip/jquery-confirm/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
220477416
Organize unit tests This is the first in a series of PRs that will update our testing infrastructure. This PR organizes test files into appropriate subdirectories. The next PR will update the QUnit version and all unit tests that run in the browser and in node: all test commands are no longer available on the global scope, but rather on QUnit itself, and assertion methods are called on an argument passed into the test functions.
Looks good to me -- I confirmed that the tests still ran from the grunt commands and index.html, and that the number of tests matched between this branch and develop.
gharchive/pull-request
2017-04-09T15:16:24
2025-04-01T06:38:17.641151
{ "authors": [ "mucaho", "starwed" ], "repo": "craftyjs/Crafty", "url": "https://github.com/craftyjs/Crafty/pull/1104", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
119236859
Updates for most recent versions of node+packages I tried updating to the most recent versions of node and all the various packages. I ran into some sort of issue with node-qunit, where it complained about being unable to find lib/log.js. I don't know if this is a problem with my install, or with the package itself. grunt-node-qunit looks to be about 3 years old, so that might be the root of the issue. I'd assume we should try to resolve that issue one way or the other before merging this. (e: Forgot to bump the version of .travis.yml.)
I would change the node version to the latest LTS, that's 4.2.2. node-qunit is a fantastic node module - I'm currently implementing visual regression tests (#950, current progress see webdriver branch, see crafty distro gh-pages branch for preliminary screenshots) and I'm heavily relying on it.
Downgraded to 4.2.2, and that seemed to solve the issue with grunt-node-qunit. I suspect that the issue is in the grunt wrapper, since it hasn't been upgraded in so long. As long as we rely on it, it'll block us from upgrading node. (That said, I bet it'd be pretty easy to fix/fork the grunt wrapper.) I see that you actually just had a PR merged into grunt-node-qunit, so good to know the maintainer is still around. @mucaho Not sure if it's related to the changes in this PR, but there seems to be some issue where the sauce-labs task has hung.
As long as we rely on it, it'll block us from upgrading node ... ... good to know the maintainer is still around. Yeah, let's cross that bridge when we come to it :)
there seems to be some issue where the sauce-labs task has hung Happened to me too on occasion. I think their servers get overloaded from time to time; I usually fixed it by triggering a rebuild on Travis' website. (It also doesn't help that I have been stressing their servers a bit lately until I can get the configuration fine-tuned.)
Ah yes, would you try adding ^ in front of every dependency, so we can depend on "semantically similar" versions?
Ah yes, would you try adding ^ in front of every dependency, so we can depend on "semantically similar" versions. Sure, I was lazy and just used npm-check-updates to update package.json -- I think it kept the prefixing the same, but obviously we should be consistent unless there are specific issues.
Added the ^ qualifier to each dependency. The open-sauce tests don't run on PRs; they just run after something is merged into a Crafty repo branch (a limitation described in the PR). However, I don't fully grasp why the Open-Sauce connect plugin (the one that gave you the error) started and failed in the first place in your previous build (the required open sauce credentials should only be available when running a build for craftyjs/Crafty). Either way, I think it's best to change it so that the cloud tests are performed when it matters - only on the testing branch before release. We have tested by now that Crafty's JS side works on different browsers; they just add too much delay and unpredictability for the day-to-day builds. This will be in the upcoming PR I'm finishing. Going to go ahead and merge this.
gharchive/pull-request
2015-11-27T19:03:54
2025-04-01T06:38:17.649980
{ "authors": [ "mucaho", "starwed" ], "repo": "craftyjs/Crafty", "url": "https://github.com/craftyjs/Crafty/pull/984", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
128872828
remove unnecessary firewall-config package Can somebody explain the need for firewall-config? It provides a GUI AFAIK, so we can probably drop it as a default :+1:
Probably better as an optional parameter defaulting to false....
Added to 2.2.0
gharchive/issue
2016-01-26T16:36:47
2025-04-01T06:38:17.690739
{ "authors": [ "crayfishx", "helge000", "pioto" ], "repo": "crayfishx/puppet-firewalld", "url": "https://github.com/crayfishx/puppet-firewalld/issues/43", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
318077434
Tests won't work for Puppet 4.0.0 As rspec-puppet breaks with puppet 4.0.0 (see rodjek/rspec-puppet#663), I propose to drop tests and support for puppet 4.0.0.
The module still works; if it's just the rspec that fails, I think it would make more sense to just pin rspec-puppet to 2.6.9, or whichever last known good version works for puppet 4.0.0. See #179.
Thanks for that @jfroche - that's merged - I'm re-running the other tests and will try and get the remaining PRs (most of them yours! :-)) merged shortly and 3.5.0 released. Thanks for all the contributions!
gharchive/pull-request
2018-04-26T15:18:18
2025-04-01T06:38:17.693522
{ "authors": [ "crayfishx", "jfroche" ], "repo": "crayfishx/puppet-firewalld", "url": "https://github.com/crayfishx/puppet-firewalld/pull/178", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1150669333
Delete key fails when you set a subkey for the fingerprint input Behaviour If you use the fingerprint input with the fingerprint of a subkey, the Post import GPG key step tries to delete the key using the fingerprint of the subkey. I suppose you have to use the KeyID (the fingerprint of the primary key). Steps to reproduce this issue They were described in this PR. Expected behaviour It should delete the key. Actual behaviour It does not delete the key; it was correctly imported into the keyring. Configuration Repository URL (if public): https://github.com/Nautilus-Cyberneering/chinese-ideographs-website Build error URL (if public): https://github.com/Nautilus-Cyberneering/chinese-ideographs-website/runs/5336399157?check_suite_focus=true#step:21:3 Logs
I am experiencing this in v5 of the action when passing & loading only a signing subkey. It tries to delete using the fingerprint of the pubkey, which is discovered via the subkey.
t3chguy@Michael-t3chguy-MBP ~> gpg --list-keys --with-subkey-fingerprints D7B0B66941D01538
pub   rsa4096 2019-04-15 [SC] [expires: 2024-04-13]
      12D4CD600C2240A9F4A82071D7B0B66941D01538
uid           [ unknown] riot.im packages <packages@riot.im>
sub   rsa3072 2019-04-15 [S] [expires: 2023-04-15]
      75741890063E5E9A46135D01C2850B265AC085BD
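A sketch of the fix idea in Python (the action itself is TypeScript; this only shows how gpg's machine-readable output maps any subkey fingerprint back to the primary key fingerprint, which is the value deletion needs):

import subprocess

def primary_fingerprint(query: str) -> str:
    """Resolve the primary key fingerprint for any key/subkey fingerprint.

    In gpg's colon-delimited output, a 'pub' record is followed by an
    'fpr' record whose 10th field holds the primary key's fingerprint;
    deleting with that value avoids the subkey-fingerprint failure above.
    """
    out = subprocess.run(
        ["gpg", "--batch", "--with-colons", "--list-keys", query],
        check=True, capture_output=True, text=True,
    ).stdout
    lines = out.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("pub:"):
            for follow in lines[i + 1:]:
                if follow.startswith("fpr:"):
                    return follow.split(":")[9]
    raise ValueError(f"no key found for {query!r}")

# With the keyring shown above:
# primary_fingerprint("75741890063E5E9A46135D01C2850B265AC085BD")
# -> "12D4CD600C2240A9F4A82071D7B0B66941D01538"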
gharchive/issue
2022-02-25T17:04:48
2025-04-01T06:38:17.697746
{ "authors": [ "josecelano", "t3chguy" ], "repo": "crazy-max/ghaction-import-gpg", "url": "https://github.com/crazy-max/ghaction-import-gpg/issues/124", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
775280333
Improve protocol documentation Do you think there is a way to disable the automatic build just for this PR? Maybe we can cancel it early or something, since it does not make sense to increment the version number just for documentation changes.
The version number only increments when a release is manually made.
Oh, then awesome.
gharchive/pull-request
2020-12-28T08:36:22
2025-04-01T06:38:17.701544
{ "authors": [ "crc-32", "matejdro" ], "repo": "crc-32/libpebblecommon", "url": "https://github.com/crc-32/libpebblecommon/pull/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
181495951
Staging Site Settings for IPN Testing This relates to #27691 in Redmine and the Recurring Contributions Extension testing: Here's the staging site for the new Creative Commons Gravity Forms donation page: https://staging.creativecommons.org/donate/
On September 16th we started discussing with Affinity Bridge how we would handle the IPN messages from PayPal after a transaction occurs. We said that the Civi extension we are writing needs IPN messages in order to create a new Civi contribution when a recurring contribution occurs. It doesn't look like PayPal IPN messages can be sent to multiple URLs, so all the messages need to go to a single URL and we need to figure out how to get both platforms the info they need. Affinity Bridge said that they do not need the IPN messages. To that end, we agreed that CiviCRM will maintain donation tracking. CiviCRM can get all the information it needs to track the user and transaction records through the IPN. If the relevant IPN message contains a unique GF_entry_ID that we can use to relate the IPN message back to the Gravity Forms entry, that would be great. So, since everyone agrees that Gravity Forms doesn't need to receive any IPN messages from PayPal, it would be fine to point the IPN to CiviCRM when we launch the CiviCRM part of this project. NOTE: We said that we would not want to update the notify_url before the CiviCRM part of this project launches, because then any interim notification messages would not be recorded or processed by anyone.
Robin from Affinity Bridge said: CiviCRM is the authority on donation tracking (one-time and recurring). Gravity Forms (GF) does not need to host an authoritative copy of the transactional data (i.e. records of the PayPal Transaction ID or subsequent recurring payment records). Gravity Forms is currently doing this but does not necessarily need to. CiviCRM can get all the information it needs to track the user and transaction records through the IPN (or some other mechanism, like GF API callbacks). Can you confirm these assumptions are correct? Our team did some testing with the PayPal sandbox settings on the stage server. If we do not point the IPN on PayPal back to Gravity Forms, we can still offer the person making the donation the same user experience; we simply do not get the PayPal transaction id pushed back to Gravity Forms. So if our assumptions are correct, we might just wish to point the IPN to CiviCRM. It is also good to note that the IPN response includes a custom field with a value for GF_entry_ID which maps to the entry in Gravity Forms (good for tracking from Civi to GF). If our assumptions are incorrect, then we should discuss the development of the IPN broadcasters as you suggested below. My feeling is that your team is in the best position to develop this piece.
Given that, on October 4th we asked Creative Commons to help us set up their staging site to use our test PayPal account, and to stop setting an IPN endpoint in their PayPal requests. That way we would be able to control the IPN endpoint via our test PayPal account and then have any IPN messages sent to our developer. The end goal is to be able to make and track test payments on the CC staging site (only) through PayPal.
The settings for the test PayPal account we are using are as follows:
Merchant Account Email: jliu@giantrabbit.com
Site URL + Recurring Payments URL: https://www.sandbox.paypal.com/
After some setup with Rob, it appeared that the notify_url in the PayPal link was still set to the staging site URL instead of being left blank: https://www.sandbox.paypal.com/cgi-bin/webscr/?notify_url=https%3A%2F%2Fstaging.creativecommons.org%2F%3Fpage%3Dgf_paypal_ipn&charset=UTF-8&currency_code=USD&business=jliu%40giantrabbit.com&custom=74356|a26e8869a48ade3dbe6c76b27169af32&first_name=Giant&last_name=Rabbit&email=jliu%2Bdonate%40giantrabbit.com&address1=2748+Adeline+St.%2C+Suite+A&city=Berkeley&state=CA&zip=94703&country=US&cbt=Click here to continue&no_note=1&no_shipping=1&return=https%3A%2F%2Fstaging.creativecommons.org%2Fdonate%2F%3Fgf_paypal_return%3DaWRzPTN8NzQzNTYmaGFzaD04NjkzZGU2ZmY4NmY3ZWEzNTdkNThlYWRmOTBlMGRmMA%3D&rm=2&cmd=_xclick-subscriptions&item_name=Amount&a3=15&p3=1&t3=M&src=1&sra=0&bn=Rocketgenius_SP#gf_3
On October 6th, Floyd from Affinity Bridge said: Yes, it looks like the GravityForms PayPal plugin hardcodes that parameter in the query string. https://github.com/creativecommons/new-creativecommons.org/blob/master/plugins/gravityformspaypal/class-gf-paypal.php#L607-L613 It could be patched to not include that parameter.
We are currently discussing with Affinity Bridge whether the sandbox testing from September 20th is representative of what is currently on the staging site (in which case the patch should probably be applied), or whether the staging site settings have changed since Robin tested not pointing the IPN back to Gravity Forms (in which case we can simply reapply his settings).
@robmyers a pull request for removing the IPN from the GF requests has been submitted in the new site repos: https://github.com/creativecommons/new-creativecommons.org/pull/30
It looks like we are now able to receive PayPal IPN messages for test donations to our development environment.
gharchive/issue
2016-10-06T18:51:10
2025-04-01T06:38:17.717979
{ "authors": [ "alkrieger", "fmann" ], "repo": "creativecommons/creativecommons.org", "url": "https://github.com/creativecommons/creativecommons.org/issues/476", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1658306130
[Feature Request] Guide to update for react 17? What is your enhancement? Is there any guidance on how to update this package to React 17? I've been trying to figure it out, but I'm still not sure how to do it. Anyway, this is a really great template! Hello, @ahmadxgani! Thanks for using our products! The product is already at React v17! If you are using v16, there are no breaking changes while migrating. Regards, Vlad.
gharchive/issue
2023-04-07T03:18:10
2025-04-01T06:38:17.732325
{ "authors": [ "ahmadxgani", "simmmpleweb" ], "repo": "creativetimofficial/argon-dashboard-chakra", "url": "https://github.com/creativetimofficial/argon-dashboard-chakra/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1562265226
Install FindSecBugs ...and suppress false positives. (No actual issues found) Reviewer checklist [ ] Read the contributing guide [ ] PR should be motivated, i.e. what does it fix, why, and if relevant, how [ ] Ensure relevant issues are linked (description should include text like "Fixes #") [ ] Ensure any appropriate documentation has been added or amended Coverage: 94.007%. Remained the same when pulling 2e2ba8cc41ed9fae9c52025cc9981fb07e80e24c on sec into 0e1dd5b0b668d66f7b33bba1b43871929a211563 on main.
gharchive/pull-request
2023-01-30T11:29:24
2025-04-01T06:38:17.741498
{ "authors": [ "big-andy-coates", "coveralls" ], "repo": "creek-service/creek-json-schema", "url": "https://github.com/creek-service/creek-json-schema/pull/108", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
403165415
Imbalanced learning simple solution It would be nice to see if we could implement something simple to handle imbalanced learning. This could be part of a new sub-module called imblearn, in reference to the imbalanced-learn library.
I'm on it, and I'm proposing two things:
- Weighted cross-entropy
- The focal loss
You can add a weight parameter to LogLoss too, no?
Yes :) We'll need to find a dataset and do a comparison with and without the new losses.
Credit Card Fraud Detection: "the dataset is highly imbalanced, with only 0.17% of transactions being classified as fraudulent". It seems like a good dataset.
Yep, it's perfect! Let's add it to the datasets module.
@AdilZouitine I added weight parameters to losses.Log and losses.CrossEntropy. Once we add losses.Focal we can close this issue :)
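For reference, the focal loss being proposed is straightforward to sketch per observation (a standalone Python illustration, not the eventual losses.Focal API):

import math

def focal_loss(y_true: bool, p_pred: float, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Binary focal loss for a single observation.

    Down-weights easy examples: when p_t is close to 1, the (1 - p_t)**gamma
    factor shrinks the loss, so learning focuses on the rare/hard class.
    Setting gamma=0 and alpha=1 recovers the plain log loss.
    """
    p_t = p_pred if y_true else 1.0 - p_pred
    p_t = min(max(p_t, 1e-15), 1 - 1e-15)  # clip for numerical stability
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy example barely contributes...
print(focal_loss(True, 0.95))   # ~0.000032
# ...while a badly misclassified one dominates.
print(focal_loss(True, 0.05))   # ~0.676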
gharchive/issue
2019-01-25T13:51:23
2025-04-01T06:38:17.754009
{ "authors": [ "AdilZouitine", "MaxHalford" ], "repo": "creme-ml/creme", "url": "https://github.com/creme-ml/creme/issues/2", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
253120927
FIX: Intermittent spec failure First run, all pass > rspec ............................................................................................................................................................................................ Finished in 12.18 seconds (files took 2.8 seconds to load) 188 examples, 0 failures Second run, several fail > rspec WARN: Unresolved specs during Gem::Specification.reset: diff-lcs (< 2.0, >= 1.2.0) WARN: Clearing out unresolved specs. Please report a bug if this causes problems. ...................................................FFF....................................................................................................................FFFFFFFFFF..F..... Failures: 1) Setting and changing an articles published_at date updating an existing article Failure/Error: expect(page).to have_content 'Logged in!' expected to find text "Logged in!" in "Retry later" # ./spec/spec_helper.rb:31:in `login_user' # ./spec/features/article_datetime_settings_spec.rb:38:in `block (2 levels) in <top (required)>' 2) Setting and changing an articles published_at date Saving an article without entering publication date info Failure/Error: within('main') do fill_in 'username', with: 'user1' fill_in 'password', with: 'c'*31 end Capybara::ElementNotFound: Unable to find visible css "main" # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/finders.rb:313:in `block in synced_resolve' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/base.rb:85:in `synchronize' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/finders.rb:302:in `synced_resolve' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/finders.rb:37:in `find' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/session.rb:776:in `block (2 levels) in <class:Session>' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/session.rb:333:in `within' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/dsl.rb:50:in `block (2 levels) in <module:DSL>' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/rspec/matcher_proxies.rb:14:in `within' # ./spec/spec_helper.rb:26:in `login_user' # ./spec/features/article_datetime_settings_spec.rb:62:in `block (2 levels) in <top (required)>' 3) Setting and changing an articles published_at date Using 'PUBLISH NOW' feature Failure/Error: within('main') do fill_in 'username', with: 'user1' fill_in 'password', with: 'c'*31 end Capybara::ElementNotFound: Unable to find visible css "main" # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/finders.rb:313:in `block in synced_resolve' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/base.rb:85:in `synchronize' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/finders.rb:302:in `synced_resolve' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/node/finders.rb:37:in `find' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/session.rb:776:in `block (2 levels) in <class:Session>' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/session.rb:333:in `within' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/dsl.rb:50:in `block (2 levels) in <module:DSL>' # /Users/s/.gem/ruby/2.4.1/gems/capybara-2.15.1/lib/capybara/rspec/matcher_proxies.rb:14:in `within' # ./spec/spec_helper.rb:26:in `login_user' # ./spec/features/article_datetime_settings_spec.rb:81:in `block (2 levels) in <top (required)>' 4) Pagination Redirects archives redirects on page 1 Failure/Error: 
expect(response).to redirect_to("/2017/01/01/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:8:in `block (3 levels) in <top (required)>' 5) Pagination Redirects archives redirects when no page number given Failure/Error: expect(response).to redirect_to("/2017/01/01/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:14:in `block (3 levels) in <top (required)>' 6) Pagination Redirects categories redirects on page 1 Failure/Error: expect(response).to redirect_to("/categories/slug/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:22:in `block (3 levels) in <top (required)>' 7) Pagination Redirects categories redirects when no page number given Failure/Error: expect(response).to redirect_to("/categories/slug/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:28:in `block (3 levels) in <top (required)>' 8) Pagination Redirects tags redirects on page 1 Failure/Error: expect(response).to redirect_to("/tags/slug/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:36:in `block (3 levels) in <top (required)>' 9) Pagination Redirects tags redirects when no page number given Failure/Error: expect(response).to redirect_to("/tags/slug/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:42:in `block (3 levels) in <top (required)>' 10) Pagination Redirects videos redirects on page 1 Failure/Error: expect(response).to redirect_to("/videos/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:50:in `block (3 levels) in <top (required)>' 11) Pagination Redirects videos redirects when no page number given Failure/Error: expect(response).to redirect_to("/videos/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:56:in `block (3 levels) in <top (required)>' 12) Pagination Redirects admin redirects on page 1 Failure/Error: expect(response).to redirect_to("/admin/videos/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:64:in `block (3 levels) in <top (required)>' 13) Pagination Redirects admin redirects when no page number given Failure/Error: expect(response).to redirect_to("/admin/videos/") Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> Response body: Retry later # ./spec/requests/paginations_spec.rb:70:in `block (3 levels) in <top (required)>' 14) Rack::Redirect doesn't redirect when not found Failure/Error: expect(response.status).to eq(200) expected: 200 got: 429 (compared using ==) # ./spec/requests/redirects_spec.rb:19:in `block (2 levels) in <top (required)>' Finished in 8.15 seconds (files took 2.95 seconds to load) 188 examples, 14 failures Failed examples: rspec ./spec/features/article_datetime_settings_spec.rb:33 # Setting and changing an articles published_at date updating an existing article rspec 
./spec/features/article_datetime_settings_spec.rb:61 # Setting and changing an articles published_at date Saving an article without entering publication date info rspec ./spec/features/article_datetime_settings_spec.rb:75 # Setting and changing an articles published_at date Using 'PUBLISH NOW' feature rspec ./spec/requests/paginations_spec.rb:5 # Pagination Redirects archives redirects on page 1 rspec ./spec/requests/paginations_spec.rb:11 # Pagination Redirects archives redirects when no page number given rspec ./spec/requests/paginations_spec.rb:19 # Pagination Redirects categories redirects on page 1 rspec ./spec/requests/paginations_spec.rb:25 # Pagination Redirects categories redirects when no page number given rspec ./spec/requests/paginations_spec.rb:33 # Pagination Redirects tags redirects on page 1 rspec ./spec/requests/paginations_spec.rb:39 # Pagination Redirects tags redirects when no page number given rspec ./spec/requests/paginations_spec.rb:47 # Pagination Redirects videos redirects on page 1 rspec ./spec/requests/paginations_spec.rb:53 # Pagination Redirects videos redirects when no page number given rspec ./spec/requests/paginations_spec.rb:61 # Pagination Redirects admin redirects on page 1 rspec ./spec/requests/paginations_spec.rb:67 # Pagination Redirects admin redirects when no page number given rspec ./spec/requests/redirects_spec.rb:16 # Rack::Redirect doesn't redirect when not found Not sure if this is just me or if I did something to cause it. Or if it's intermittent for everyone. This may be obvious, but have you tried bundle exec rspec ? If it's selecting wrong gems, you can also remove non relevant gems with bundle install --clean I got this too after 4-5 repeated runs (@shushugah, even with bundle exec rspec) @veganstraightedge if you look at the failed specs they are all something like: Expected response to be a <3XX: redirect>, but was a <429: Too Many Requests> pretty sure this is rack-attack detecting the test suite as a ddos. Will look into how to disable rack-attack in test
gharchive/issue
2017-08-26T20:28:43
2025-04-01T06:38:17.777769
{ "authors": [ "astronaut-wannabe", "shushugah", "veganstraightedge" ], "repo": "crimethinc/website", "url": "https://github.com/crimethinc/website/issues/454", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2667012198
2024-01-06 :loud_sound:Crisp Weekly 2024-01-06 is now in collecting. /post Sampling with SQL https://blog.moertel.com/posts/2024-08-23-sampling-with-sql.html /post Vector Databases Are the Wrong Abstraction https://www.timescale.com/blog/vector-databases-are-the-wrong-abstraction/ /post HTML Form Validation is heavily underused https://expressionstatement.com/html-form-validation-is-heavily-underused /post On Typesetting Engines: A Programmer's Perspective https://blog.ppresume.com/posts/on-typesetting-engines /post On Good Software Engineers https://candost.blog/on-good-software-engineers/ /post Optimize your shell experience https://thoughtbot.com/blog/optimize-your-shell-experience /post Errors, Errors Everywhere: How We Centralized and Structured Error Handling https://olivernguyen.io/w/namespace.error/ /post Constraints in Go https://bitfieldconsulting.com/posts/constraints
gharchive/issue
2024-11-18T03:47:03
2025-04-01T06:38:17.788687
{ "authors": [ "crispgm" ], "repo": "crispgm/weekly", "url": "https://github.com/crispgm/weekly/issues/87", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1275849881
Add basic support for refutable if-let statements
TODO:
[x] handle array patterns
[x] don't destructure variables for refutable parts of a pattern
Codecov Report
Merging #125 (4a811bd) into main (208bacc) will increase coverage by 0.01%. The diff coverage is 89.06%.
@@            Coverage Diff             @@
##             main     #125      +/-   ##
==========================================
+ Coverage   91.07%   91.08%   +0.01%
==========================================
  Files          29       29
  Lines        2789     2962     +173
==========================================
+ Hits         2540     2698     +158
- Misses        249      264      +15
Impacted Files                 Coverage Δ
src/codegen/d_ts.rs            83.73% <0.00%> (+2.52%) :arrow_up:
src/types.rs                   75.73% <62.50%> (-2.84%) :arrow_down:
src/ast/pattern.rs             71.42% <73.33%> (+4.76%) :arrow_up:
src/infer/infer_pattern.rs     80.00% <80.00%> (-1.25%) :arrow_down:
src/codegen/js.rs              87.67% <91.36%> (+1.70%) :arrow_up:
src/infer/infer_expr.rs        99.31% <100.00%> (ø)
src/parser/mod.rs              98.26% <100.00%> (+0.03%) :arrow_up:
src/parser/pattern.rs          100.00% <100.00%> (ø)
tests/integration_test.rs      98.91% <100.00%> (+0.06%) :arrow_up:
... and 1 more
Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 208bacc...4a811bd.
gharchive/pull-request
2022-06-18T18:08:24
2025-04-01T06:38:17.821866
{ "authors": [ "codecov-commenter", "kevinbarabash" ], "repo": "crochet-lang/crochet", "url": "https://github.com/crochet-lang/crochet/pull/125", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1452483496
Fix Ped.CanBeKnockedOffBike This fixes Ped.CanBeKnockedOffBike by inverting the value passed to that native. The underlying native in this function uses an enum rather than a bool, but the first two values in the enum are '0' for the default ped behavior of being able to be knocked off, and '1' for a ped never being able to be knocked off. Thus, by inverting the bool passed to this function, we can achieve the intended behavior without further changes. There are 4 possible values according to the native doc in alloc8or's (and that leaked scripting source repo), but this change makes sense since the value type is bool.
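A small Python illustration of the inversion described above (the constant names are hypothetical; per the cited docs there are four enum values, of which only the first two matter for a boolean property):

KNOCK_OFF_VEHICLE_DEFAULT = 0  # ped CAN be knocked off (default behaviour)
KNOCK_OFF_VEHICLE_NEVER = 1    # ped can NEVER be knocked off

def to_native_value(can_be_knocked_off: bool) -> int:
    # Invert the bool: True (can be knocked off) maps to 0, False to 1.
    return KNOCK_OFF_VEHICLE_DEFAULT if can_be_knocked_off else KNOCK_OFF_VEHICLE_NEVER

assert to_native_value(True) == 0
assert to_native_value(False) == 1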
gharchive/pull-request
2022-11-17T00:16:29
2025-04-01T06:38:17.827076
{ "authors": [ "kagikn", "nomakewan" ], "repo": "crosire/scripthookvdotnet", "url": "https://github.com/crosire/scripthookvdotnet/pull/1111", "license": "Zlib", "license_type": "permissive", "license_source": "github-api" }
992407718
add readme Description of your changes A rough readme to get something up and running. There are still things that are unclear but it'd give people an idea about what it does and how they can get started if they feel adventurous. We'll need to do another pass in the coming weeks. Fixes https://github.com/crossplane-contrib/terrajet/issues/32 I have: [x] Read and followed Crossplane's contribution process. [x] Run make reviewable to ensure this PR is ready for review. [x] Added backport release-x.y labels to auto-backport this PR if necessary. How has this code been tested N/A @luebken I've removed all the instructions since they are still changing fast, left only the description parts. Added https://github.com/crossplane-contrib/terrajet/issues/75 for the instructions.
gharchive/pull-request
2021-09-09T16:21:17
2025-04-01T06:38:17.872587
{ "authors": [ "luebken", "muvaf" ], "repo": "crossplane-contrib/terrajet", "url": "https://github.com/crossplane-contrib/terrajet/pull/57", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
503642360
Template Stacks UX: overall experience; modeling, design thinking; UX; part of #853 What problem are you facing? How could Crossplane help solve your problem? I am working on this in a repository with code examples. I plan to try to create a design document to complement the code examples and Marques's design document, though there is already a chunk of design type discussion in the readme in the examples repo. I've added an example of a resource pack which shows how one might create a resource pack to set up the baseline GCP Network resources needed to run Wordpress in GCP. Now that this example is in the repo, I'm going to switch over to getting this into a reviewable format. I expect that will be ready some time on Monday. Interested parties can already begin reviewing by taking a look at the repository though. I've opened a pull request with a design document which refers back to the code examples in #956 As mentioned in #1011, I will be updating the design to incorporate some feedback. I've updated the design based on some feedback - see #956
gharchive/issue
2019-10-07T19:23:49
2025-04-01T06:38:17.892706
{ "authors": [ "prasek", "suskin" ], "repo": "crossplaneio/crossplane", "url": "https://github.com/crossplaneio/crossplane/issues/915", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
60605310
[stability][xwalk-3585] Add testapp source code for fuzz Impacted TCs num(approved): New 0, Update 1, Delete 0 Unit test Platform: Android Android test result summary: Pass 0, Fail 1, Block 0 Fail bugId: xwalk-3575 Let's hold this until the license is clarified.
gharchive/pull-request
2015-03-11T03:29:07
2025-04-01T06:38:17.896083
{ "authors": [ "cicili", "wanghongjuan" ], "repo": "crosswalk-project/crosswalk-test-suite", "url": "https://github.com/crosswalk-project/crosswalk-test-suite/pull/1928", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
139801522
[usecase-apptools] Add tests to test signing release for android apptools usecase Impacted tests(approved): new 1, update 0, delete 0 Unit test platform: [Android] Unit test result summary: pass 1, fail 0, block 0 BUG=https://crosswalk-project.org/jira/browse/XWALK-5436 LGTM
gharchive/pull-request
2016-03-10T06:44:14
2025-04-01T06:38:17.897510
{ "authors": [ "Honry", "yunxliu" ], "repo": "crosswalk-project/crosswalk-test-suite", "url": "https://github.com/crosswalk-project/crosswalk-test-suite/pull/3427", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1514078602
Add support for new parameters Opening this PR to add support for new parameters and functions that were not in previous versions. The majority of video functions, amp functions, all tone functions, and listening modes will be added. Below is a checklist that I need to work through to get all the responses decoded; then I will work through this list again to implement any functions that are currently not implemented. This is all taken from the official Pioneer documentation. I have loads more changes for ha-pioneer_async lined up to include support for all of this.
[x] Power
[x] Volume
[x] Mute
[x] Input
[x] Tone Control
[x] Listening Modes
[x] DSP
[x] Channel Levels
[x] AMP
[x] Key Lock
[x] Video
[x] Zone Power
[x] Zone Input
[x] Zone Volume
[x] Zone Mute
[x] Zone Channel Level
[x] Zone Tone
[x] Information Requests
[x] Tuner
[x] iPod Operation
[ ] NETWORK Operation
[ ] ADAPTER PORT Operation
[ ] MHL Device Operation
[ ] Cursor Operation
Looking and sounding great. I got the basic functionality (on/off, input switching, volume) that I needed working, and unfortunately everything else went on the backlog. The next items on my list were decoding the audio and video parameters to be able to detect stereo vs multichannel audio, and being able to tune the radio frequency - both of which are already on your list :)
Thanks. The biggest one for me was the listening modes: the remote for my AVR doesn't work very well anymore (despite replacing batteries) and the iControl app has completely stopped working on my phone. While I was at it, I thought to myself "well, some other features in the app are quite useful", so I decided to just go through all the docs and serial commands / responses. I've just finished audio / video information decoding, but I need some help: at the moment it is only able to decode if we send ?AST or ?VST, which I need to send every time we receive input changes or power state changes. Another possible solution is that, at least on my AVR, I get a response starting with AUA / AUB when the source changes, but I've found no documentation on what that actually is yet.
Sending another command when the parser (run from the connection listener task) detects a state change is a little tricky - you need to schedule a separate task to call send_command() to avoid blocking the connection listener task. This is currently done in _connection_listener() to schedule bounce_volume() when AVR power-on is detected if PARAM_POWER_ON_VOLUME_BOUNCE is enabled; you should be able to do something similar when an input or power state change is detected.
Taking inspiration from the bounce_volume() functions and how they are scheduled in, I've added a command queue, which can be used to schedule commands to run after processing the response by adding commands_to_queue.add("COMMAND HERE") in the _parse_response() function (a sketch of this pattern is below). I don't think AST / VST will be sent in the serial session if that state changes on the AVR itself (i.e., the HDMI input goes from stereo to multi-channel, etc.)
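A rough asyncio sketch of that command-queue pattern (class, method, and response-prefix names here are hypothetical rather than the library's actual API; ?AST/?VST are the Pioneer audio/video status queries discussed above):

import asyncio

class AvrConnectionSketch:
    """Hypothetical illustration of scheduling follow-up commands."""

    def __init__(self):
        self.command_queue: "asyncio.Queue[str]" = asyncio.Queue()

    def _parse_response(self, response: str) -> None:
        # Runs inside the connection listener task: never send here,
        # only queue, so the listener is never blocked.
        if response.startswith("FN"):  # input change detected (assumed prefix)
            self.command_queue.put_nowait("?AST")
            self.command_queue.put_nowait("?VST")

    async def _command_queue_worker(self) -> None:
        # A separate task drains the queue and talks to the AVR.
        while True:
            command = await self.command_queue.get()
            await self.send_command(command)

    async def send_command(self, command: str) -> None:
        ...  # write the command to the AVR's telnet connection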
Tuner functions complete. A few actions I haven't added, as other functions take care of what they would do; list below:
TUNER PRESET (DIGIT key) - set_tuner_preset() takes care of this, but combines the class and preset digit together
TUNER CLASS change - Same as above
TUNER PRESET INCREMENT - Same as above
TUNER PRESET DECREMENT - Same as above
DIRECT ACCESS (tuner) - This sets the frequency directly, but the responses are more complicated to implement; for now the loop in set_tuner_frequency() takes care of this
No specific iPod functionality added, apart from just mapping the commands and being able to send them - might be useful in HA where we can have play/pause buttons etc. I'm not able to test this though, as I don't own an iPod. I think this is ready now; there are probably some places that could be improved on, happy to change where needed.
These two commands have been added now:
TUNER PRESET INCREMENT
TUNER PRESET DECREMENT
They are used as previous and next commands for the media controls... I need a place to document the media controls though; would you be happy to enable the wiki feature in this repo? It could then be used to store the Python API docs, perhaps. Currently it doesn't return playback information; I do have the docs for that, but I'm not sure if the AVRs will automatically send playback info when that state changes, or if I'll have to schedule more frequent updates for those.
I enabled the wiki, but I think you need to first have submitted to this repo in order for you to update the wiki. I started reviewing the PR; it works with my VSX-930, so that's a great start :) I have a few initial general comments and will add additional comments to the code as I go (may take a while, there's a lot to digest):
There's an unfortunate clash of terminology here, as I've used "parameter" to refer to operational attributes that could differ between different AVR models, such as maximum possible volume or a flag for a feature that exists only for some AVR models. But I note that Pioneer also uses "parameter" to refer to command and response arguments (as well as audio/video parameters!). I guess this might be the rationale for the naming of PARAM_MEDIA_CONTROL_SOURCES, PARAM_MEDIA_CONTROL_COMMANDS, etc.? To avoid conflation between these two uses, I'd suggest just dropping the PARAM_ prefix for these; they seem to me to just be dicts used to translate response values to something more meaningful. It would be good for those dicts to be moved to a separate file, e.g. const.py. This could be a good future home for PIONEER_COMMANDS to reduce pioneer_avr.py by ~700 lines, down from 3200+ lines (a bit long I think!), though let's leave that change for a separate PR.
A full update takes about 5 seconds on my AVR, as it queries a lot of attributes now. It may be worth gating the updates behind parameters for some high-level categories (e.g. tone, amp, tuner, channel_level, dsp, video, audio) that can be turned off by default and turned on only for specific models via model-specific default parameters or user-level parameter overrides. This would preserve backwards compatibility and reduce the impact on AVRs that may not have such features or be able to handle all the new queries. These groups would probably also be useful when exposing the attributes to HA, as I'd ideally prefer to avoid having to update the HA integration whenever more attributes are added.
Note to self: will need to refactor bounce_task to use the command queue; on my AVR it is running in parallel to the full update that is triggered at AVR power-on. I've also updated cli.py to add a few commands that dump the various new attributes, and will submit that to your fork shortly. On the "parameter" terminology clash raised above: agree with all of this. I've updated now and dropped PARAM; we can move a lot out to a const.py file, as suggested, later on. On gating the full update behind high-level categories: noted. I've updated now and added the following user-configurable parameters:
PARAM_DISABLE_AUTO_QUERY - disables all queries when _update_zone is called.
PARAM_ENABLED_FUNCTIONS - provides the ability to only query certain "functions"; in the future we can use this against some of the set commands, but for now it splits the PIONEER_COMMAND key on an underscore and checks if the second item in that array is in PARAM_ENABLED_FUNCTIONS.
Currently the following is supported for PARAM_ENABLED_FUNCTIONS:
amp
dsp
tuner
tone
channels
video
system
audio
I made some changes to the command queue functions to minimise repeated commands being sent to the AVR, and also migrated the volume bouncer to use the command queue instead; can you see if either of these impacts your changes? I haven't tested the new set_* functions yet; they can be tested later. VSCode/pylint is also complaining about a few other issues, but I think these can also be addressed later on. The above notwithstanding, I think this PR is pretty much ready to merge, after which I can push a beta release to PyPI and test further within my prod environment, and also work on the HA integration to expose the additional AVR info and functions to HA. Thanks again @11harveyj for your efforts. Ok, great.
I'll test these changes later on and give feedback :) Everything is still working OK for me; my HA instance is pending an update, so when I update it I'll test those changes in HA too (I changed the manifest.json to point to my fork).
gharchive/pull-request
2022-12-29T21:59:03
2025-04-01T06:38:17.930324
{ "authors": [ "11harveyj", "crowbarz" ], "repo": "crowbarz/aiopioneer", "url": "https://github.com/crowbarz/aiopioneer/pull/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
389717384
Reset button not working in email The reset button in the 'Reset Crownstone password' email is broken with the Outlook client. Issue:
- The Outlook web viewer renders the email content as plain text: "[cloud.crownstone.rocks/reset-password?access_token='...']Reset"
- The mobile Outlook viewer shows the button, but it doesn't respond to clicks.
Do you only accept plain text? It is HTML. What should be changed in https://github.com/crownstone/crownstone-cloud/blob/master/server/emails/passwordResetEmail.html for this? Apparently, to solve this, proprietary VML markup for Outlook needs to be used. See for example https://litmus.com/blog/a-guide-to-bulletproof-buttons-in-email-design
gharchive/issue
2018-12-11T11:25:20
2025-04-01T06:38:17.943455
{ "authors": [ "alexderidder", "mrquincle" ], "repo": "crownstone/crownstone-cloud", "url": "https://github.com/crownstone/crownstone-cloud/issues/13", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2753381025
test issue for webhook ignore
gharchive/issue
2024-12-20T19:36:26
2025-04-01T06:38:17.944284
{ "authors": [ "crowplexus" ], "repo": "crowplexus/hscript-iris", "url": "https://github.com/crowplexus/hscript-iris/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
262477784
Status bar text (center-aligned) doesn't show up on iPhone X Hi guys, I've encountered an issue with the CRToastTypeStatusBar. Text that is center-aligned in the status bar presenter is not showing up on the iPhone X. I am attaching a screenshot as well. Could you please let me know if there is any alternative for this behavior, or are you guys updating the library anywhere soon? The red-marked area above is where I have my text. I really appreciate your work with CRToast. Thanks. Came here for exactly the same issue. Is there any chance to show the notification under the navigation bar? Hey mavris, as far as I know, you can show it as a navigationBar. But under the navigation bar, I am not sure. It seems the library needs to be updated to take into account the new "Safe Area" released in iOS 11. Not sure how that would work on an iPhone X running iOS 10 (if that's even possible). Love CRToast – please make this happen! Same for me, part of the title text got covered by the motion detector area (top centre). Stupid iPhone X design. My recommendation would be to stop using the status bar for toasts. You could detect the device and change to navigation bar toasts - or move to navigation bar toasts entirely. Can you move the toast to a custom location? If yes, how? Documentation is terrible. @Ashton-W Even if I change to navigation bar toasts, part of the text still gets covered by the area. Try not using the over-status-bar mode too; we would need to add code to be aware of the safe area to fix it otherwise ⚠️ Warning ⚠️: This is not well tested. You can apply a really dirty hack to shift the text down only on the iPhone X. In CRToastView.m -layoutSubviews, add a check to see if the status bar frame is taller than 20. Again, this is hardly tested, so it may totally fall over in some circumstances, but it at least works in the demo app. If someone has the time to properly test this and build out a better solution, please do - I unfortunately do not have the time, but just found this kind of worked, or at least was a starting point. Screen shot & git diff here
diff --git a/CRToast/CRToastView.m b/CRToast/CRToastView.m
index 8a9518d..ce2d2c4 100644
--- a/CRToast/CRToastView.m
+++ b/CRToast/CRToastView.m
@@ -132,6 +132,9 @@ static CGFloat CRCenterXForActivityIndicatorWithAlignment(CRToastAccessoryViewAl
CGFloat preferredPadding = self.toast.preferredPadding;
CGFloat statusBarYOffset = self.toast.displayUnderStatusBar ? (CRGetStatusBarHeight()+CRStatusBarViewUnderStatusBarYOffsetAdjustment) : 0;
+ if (CRGetStatusBarHeight() > 20) {
+ statusBarYOffset += 24;
+ }
contentFrame.size.height = CGRectGetHeight(contentFrame) - statusBarYOffset;
gharchive/issue
2017-10-03T15:39:45
2025-04-01T06:38:17.950630
{ "authors": [ "Ashton-W", "chessboy", "dmiedema", "mavris", "quhaoran007", "vamshikpadala" ], "repo": "cruffenach/CRToast", "url": "https://github.com/cruffenach/CRToast/issues/226", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
138255999
UI "He can take a look at Perfect Viewer and see how that works, with different spots on the screen doing different things." Use exhentai's colors for backgrounds so it looks more like the actual page.
gharchive/issue
2016-03-03T18:15:12
2025-04-01T06:38:17.952805
{ "authors": [ "Lapan" ], "repo": "cruor99/sadpandareader", "url": "https://github.com/cruor99/sadpandareader/issues/29", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
246188840
Update attachment card implementations Updates all currently implemented attachments:
Converts implementations to use the cost and/or target API.
Makes sure the ability can only trigger or initiate if it would change the game state.
Standardises action labels, prompt titles and chat messages.
Changes actions to use a handler instead of a method, where needed.
Fixes canAttach() methods to match the card limitations.
Changes var to let where possible.
Invokes game.transferGold() instead of invoking game.addGold() twice.
I'll merge for now, but we definitely need to revisit handlers setting instance variables.
gharchive/pull-request
2017-07-27T22:34:31
2025-04-01T06:38:17.962861
{ "authors": [ "DukeTax", "ystros" ], "repo": "cryogen/throneteki", "url": "https://github.com/cryogen/throneteki/pull/1263", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
487422406
refactor: Switch tests to ava. $ npm run test @mohanson All the tests pass now. The reason they failed before is that the strings contained \n, which eval parsed as "";. Bizarre.
gharchive/pull-request
2019-08-30T10:57:14
2025-04-01T06:38:17.964314
{ "authors": [ "yejiayu" ], "repo": "cryptape/minits", "url": "https://github.com/cryptape/minits/pull/36", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
353733067
add docker, drop MetaData and optional for blocks
- optional for blocks
- drop MetaData
- add docker for deploy
Codecov Report Merging #11 into master will increase coverage by 0.01%. The diff coverage is 100%.
@@            Coverage Diff             @@
##           master      #11      +/-   ##
==========================================
+ Coverage   99.06%   99.08%   +0.01%
==========================================
  Files          41       42       +1
  Lines         966      986      +20
==========================================
+ Hits          957      977      +20
  Misses          9        9
Impacted Files | Coverage Δ
app/controllers/concerns/split_requests_concern.rb | 100% <ø> (ø) :arrow_up:
app/controllers/concerns/local_infos_concern.rb | 100% <ø> (ø) :arrow_up:
spec/controllers/api/statistics_controller_spec.rb | 100% <ø> (ø) :arrow_up:
...c/controllers/concerns/local_infos_concern_spec.rb | 100% <ø> (ø) :arrow_up:
app/models/transaction.rb | 100% <100%> (ø) :arrow_up:
spec/models/cita_sync/persist_spec.rb | 100% <100%> (ø) :arrow_up:
app/models/cita_sync/persist.rb | 96.15% <100%> (+0.26%) :arrow_up:
app/models/sync_info.rb | 100% <100%> (ø)
app/controllers/api/statistics_controller.rb | 100% <100%> (ø) :arrow_up:
...ontrollers/concerns/split_requests_concern_spec.rb | 100% <100%> (ø) :arrow_up:
... and 3 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0686cc0...ca6fce6. Read the comment docs.
Sorry for deleting the develop branch... I didn't protect this branch..... 🤣 Now I know that protecting branches is important....
gharchive/pull-request
2018-08-24T10:40:13
2025-04-01T06:38:17.979673
{ "authors": [ "classicalliu", "codecov-io", "yatef" ], "repo": "cryptape/re-birth", "url": "https://github.com/cryptape/re-birth/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1322729158
Autopublish + Artefacts on Error New Features:
- Autopublish artefacts
- When the test pipeline fails due to failing tests, artefacts are still generated
- Workflow now uses a local action for easier testing
- Added a test for upload-artifact: false
Changes:
- Bash cleanup
- .NET 6.0
Notes: My thoughts are that people integrating this will already need to check out and configure .NET to build their project - so I removed this todo from the code, but it still exists in the README should you consider it necessary. Personally, while not tested, I think it would run the steps twice and eat up build time. This is an excellent simplification of bash arguments and a clean implementation of automatic artifact uploading. The changes are well-tested through GitHub workflows. Expect a 1.3.0 release soon!
gharchive/pull-request
2022-07-29T22:28:58
2025-04-01T06:38:17.989899
{ "authors": [ "awgeorge", "cryptic-wizard" ], "repo": "cryptic-wizard/run-specflow-tests", "url": "https://github.com/cryptic-wizard/run-specflow-tests/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
169460737
Timeout's not properly caught by poller raises the following traceback: Traceback (most recent call last): File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/hourglass_naglitejs/cache.py", line 100, in get response = yield from session.get(url) File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/aiohttp/client.py", line 529, in __iter__ resp = yield from self._coro File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/aiohttp/client.py", line 183, in _request conn = yield from self._connector.connect(req) File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/aiohttp/connector.py", line 310, in connect transport, proto = yield from self._create_connection(req) File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/aiohttp/connector.py", line 581, in _create_connection local_addr=self._local_addr) File "/usr/lib/python3.4/asyncio/base_events.py", line 524, in create_connection yield from tasks.wait(fs, loop=self) File "/usr/lib/python3.4/asyncio/tasks.py", line 331, in wait return (yield from _wait(fs, timeout, return_when, loop)) File "/usr/lib/python3.4/asyncio/tasks.py", line 410, in _wait yield from waiter File "/usr/lib/python3.4/asyncio/futures.py", line 388, in __iter__ yield self # This tells Task to wait for completion. File "/usr/lib/python3.4/asyncio/tasks.py", line 286, in _wakeup value = future.result() File "/usr/lib/python3.4/asyncio/futures.py", line 269, in result raise CancelledError concurrent.futures._base.CancelledError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "contrib/uwsgi/scheduler.py", line 10, in <module> scheduler.run_tasks() File "./hourglass/scheduler.py", line 46, in run_tasks results = self.loop.run_until_complete(asyncio.gather(*tasks)) File "/usr/lib/python3.4/asyncio/base_events.py", line 276, in run_until_complete return future.result() File "/usr/lib/python3.4/asyncio/futures.py", line 277, in result raise self._exception File "/usr/lib/python3.4/asyncio/tasks.py", line 233, in _step result = coro.throw(exc) File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/hourglass_naglitejs/cache.py", line 111, in update_objects results = yield from self.get(session, url) File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/hourglass_naglitejs/cache.py", line 101, in get return (yield from response.json()) File "/opt/hourglass/.virtualenvs/hourglass/lib/python3.4/site-packages/aiohttp/helpers.py", line 488, in __exit__ raise asyncio.TimeoutError concurrent.futures._base.TimeoutError FWIW, this was not experienced on the latest version, so I am unsure if this issue has been addressed already or not This has been fixed. Closing this out.
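For context, a minimal sketch of the kind of handling that resolves this, written against a modern aiohttp API with hypothetical names - not opsy's actual fix: the poller catches asyncio.TimeoutError (which aiohttp raises when its timeout context expires) instead of letting it propagate.

import asyncio
import aiohttp

async def get_json(session: aiohttp.ClientSession, url: str):
    # aiohttp raises asyncio.TimeoutError when its timeout context expires,
    # so the poller must catch it explicitly instead of letting it bubble up.
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as response:
            return await response.json()
    except asyncio.TimeoutError:
        return None  # or log and retry, depending on the poller's policy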
gharchive/issue
2016-08-04T20:01:11
2025-04-01T06:38:17.992137
{ "authors": [ "cryptk", "testeddoughnut" ], "repo": "cryptk/opsy", "url": "https://github.com/cryptk/opsy/issues/160", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2042793182
trade history page Closes #320. There's no trade history page for desktop, so I'm adding a trade history page for both desktop and mobile via this PR. I think the hover effect on dropdown items should cover the top and bottom space, with no radius in between. It seems to be that way on all dropdowns; it should probably be fixed as its own issue.
gharchive/pull-request
2023-12-15T02:40:32
2025-04-01T06:38:17.994370
{ "authors": [ "dreacot" ], "repo": "crypto-power/cryptopower", "url": "https://github.com/crypto-power/cryptopower/pull/333", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
2141010601
Add operator validao cosmoshub-osmosis.json Add ValiDAO details Relayer Operator Onboarding Thank you for your IBC relaying efforts for the Cosmos Hub. This pull request template is for onboarding new operators. Instructions Add your account addresses and contact information in the correct format to the specified JSON file(s) in the _IBC folder. Ensure that your submission follows the provided format. If you have opened this pull request through a related issue, please link the issue number below: Closes #44 Checklist [x] I have read the onboarding documentation and contribution guidelines. [x] I have added my Account Addresses and Operator Information to the correct path file in the _IBC folder. [x] I have ensured my changes follow the required format. [x] I have linked the related onboarding issue (if one was created). Thank you for your contribution! Your feegrant allowance will be added at the next scheduled review meeting. Please frequently check the Operators table in the README to stay informed about your allowance status and spend limit.
gharchive/pull-request
2024-02-18T14:22:09
2025-04-01T06:38:18.004373
{ "authors": [ "clemensgg", "murakamikaze" ], "repo": "cryptocrew-validators/relayer-feegrant-wg", "url": "https://github.com/cryptocrew-validators/relayer-feegrant-wg/pull/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1367304580
crystal-1.5.1-1-darwin-universal.tar.gz is broken Bug Report It seems like the tar archive generation for darwin is broken: the v1.5.1 archive release is only 118 KB (versus 44 MB for v1.5.0). As a result, some CI/CD pipelines are broken: https://github.com/evgkrsk/reqs-up/runs/8263698759 I uploaded the correct package. Something must have gone wrong while uploading the package to GitHub. The crystal-1.5.1-1-darwin-universal.tar.gz was only 118 KB, and the full archive was instead named crystal-1.5.1-1-darwin-universal.tar.gz.1. /cc @beta-ziliani Did you use the automation script https://github.com/crystal-lang/distribution-scripts/blob/master/processes/scripts/publish-crystal-packages-on-github.sh for that? If so, can you retrace what might've gone wrong? Thanks! @straight-shoota I did. Unfortunately I have neither the history of the session nor the tmp files to track what happened.
gharchive/issue
2022-09-09T05:26:56
2025-04-01T06:38:18.059441
{ "authors": [ "beta-ziliani", "evgkrsk", "straight-shoota" ], "repo": "crystal-lang/crystal", "url": "https://github.com/crystal-lang/crystal/issues/12463", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
716140707
JSON::Any.nil? returns false Hopefully a simple one; the workaround is simple enough, but I thought it worth making an issue.
require "json"
test = JSON.parse("null")
test.nil? # => false
test == nil # => true
nil.nil? # => true
Maybe a special case of .nil? needs to be created for JSON::Any? Try json.raw.nil? You're definitely right, raw.nil? is the way this should be checked.
gharchive/issue
2020-10-07T02:13:27
2025-04-01T06:38:18.061458
{ "authors": [ "Blacksmoke16", "stakach" ], "repo": "crystal-lang/crystal", "url": "https://github.com/crystal-lang/crystal/issues/9807", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1398228994
[WASM] add missing __powisf2 and __powidf2 compiler-rt functions The functions __powisf2 and __powidf2 from compiler-rt are called from the wasm32 codegen and must be provided. This happens whenever float exponentiation is used. wasm-ld: error: Float64.wasm: undefined symbol: __powidf2 Implementation source: https://github.com/llvm-mirror/compiler-rt/blob/master/lib/builtins/powidf2.c https://github.com/llvm-mirror/compiler-rt/blob/master/lib/builtins/powisf2.c CI failure is unrelated.
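For context, here is a Python sketch of the exponentiation-by-squaring algorithm that the linked compiler-rt sources implement (a float base raised to an integer power). This only illustrates what the missing symbols compute; it is not the code added by this PR.

def powi(base: float, exponent: int) -> float:
    # Mirrors the algorithm of compiler-rt's __powidf2/__powisf2:
    # multiply in the bits of |exponent|, squaring the base each step,
    # and take the reciprocal for negative exponents.
    recip = exponent < 0
    n = abs(exponent)
    result = 1.0
    while n:
        if n & 1:
            result *= base
        base *= base
        n >>= 1
    return 1.0 / result if recip else result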
gharchive/pull-request
2022-10-05T18:57:07
2025-04-01T06:38:18.063817
{ "authors": [ "lbguilherme" ], "repo": "crystal-lang/crystal", "url": "https://github.com/crystal-lang/crystal/pull/12569", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
354173
Using Scala classes I've got octobot all up and running, but I'm having a hard time getting it to load my tasks. Specifically, they're Scala classes... am I missing something? I've got them packaged up in a jar as described by the docs, but they don't seem to be getting picked up. Are there more steps involved if I want to use Scala tasks? I realize this is an old discussion, but I ran into the same issue (Octobot is unable to find tasks when using octobot-jar) and I figured someone else may find this useful. The solution I came up with is to modify the manifest in build.xml to specify the class path: <manifest file="MANIFEST.MF"> <attribute name="Built-By" value="${user.name}"/> <attribute name="Class-Path" value="../tasks.jar" /> <attribute name="Main-Class" value="com.urbanairship.octobot.Octobot"/> </manifest> Apparently, passing the -cp parameter to the java command when launching a self-executing jar is not enough. I'm not really a Java programmer and I could be totally wrong about that. Anyway, this works for me. P.S. Thanks for creating Octobot... it looks pretty awesome.
gharchive/issue
2010-10-09T00:46:29
2025-04-01T06:38:18.129392
{ "authors": [ "eatenbyagrue", "matthiase" ], "repo": "cscotta/Octobot", "url": "https://github.com/cscotta/Octobot/issues/2", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2306903583
Text editor only changes the subtitles of one of the dates The bug cannot be reproduced as of now; refreshing fixed it. There is no screenshot / recording of this bug, since it was discovered on the lab computer. Details: somehow, editing any of the text boxes will change the subtitle of only one entry (the first in the list). As mentioned above, refreshing the page cleared this bug. As of now, no solution has been thought of yet, since the bug hasn't been reproduced. Subtitles no longer exist. The issue now exists, but only for the title.
gharchive/issue
2024-05-20T22:29:05
2025-04-01T06:38:18.134366
{ "authors": [ "BernicoJC", "michaelcheungkm" ], "repo": "cse110-sp24-group18/cse110-sp24-group18", "url": "https://github.com/cse110-sp24-group18/cse110-sp24-group18/issues/102", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
126381934
Different types of infrastructures: how to identify them? When an application has to be submitted to the infrastructure, there is no parameter to clarify what kind of infrastructure it is and which adaptor should be used. The distinction between the different infrastructures is made using the jobservice parameter, which has URI format. The protocol allows distinguishing among different kinds of infrastructures, and according to the protocol it is possible to select different adaptors.
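As an illustration of the scheme-based dispatch described above (the scheme-to-adaptor mapping and all names here are invented; this is not the CSGF implementation):

from urllib.parse import urlparse

ADAPTORS = {
    "ssh": "SSHAdaptor",       # hypothetical scheme-to-adaptor mapping
    "wsgram": "GramAdaptor",
    "rocci": "OCCIAdaptor",
}

def select_adaptor(jobservice: str) -> str:
    # The jobservice parameter is a URI; its scheme identifies the
    # infrastructure type and therefore the adaptor to use.
    scheme = urlparse(jobservice).scheme
    if scheme not in ADAPTORS:
        raise ValueError(f"no adaptor registered for scheme {scheme!r}")
    return ADAPTORS[scheme]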
gharchive/issue
2016-01-13T09:57:18
2025-04-01T06:38:18.139183
{ "authors": [ "fmarco76" ], "repo": "csgf/csgf-api", "url": "https://github.com/csgf/csgf-api/issues/3", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1091784189
Data could not be loaded ("Daten konnten nicht geladen werden") Hi, I have the problem that I can't get the app to run; it fails with the following error message: "Daten konnten nicht geladen werden" ("Data could not be loaded") as admin/user. I currently can't create any plans, create types, etc. Do I still need to adjust anything here? nc-version: 22.2.3 nextcloud-log: {"reqId":"v3bdQKLDEP5ZGnBa1xOp","level":3,"time":"2022-01-01T09:47:52+00:00","remoteAddr":"192.168.1.140","user":"admin","app":"index","method":"GET","url":"/index.php/apps/shifts/getAllAnalysts","message":"Call to a member function getUsers() on null","userAgent":"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0","version":"22.2.3.0","exception":{"Exception":"Exception","Message":"Call to a member function getUsers() on null","Code":0,"Trace":[{"file":"/var/www/nextcloud/lib/private/AppFramework/App.php","line":156,"function":"dispatch","class":"OC\\AppFramework\\Http\\Dispatcher","type":"->"},{"file":"/var/www/nextcloud/lib/private/Route/Router.php","line":302,"function":"main","class":"OC\\AppFramework\\App","type":"::"},{"file":"/var/www/nextcloud/lib/base.php","line":1006,"function":"match","class":"OC\\Route\\Router","type":"->"},{"file":"/var/www/nextcloud/index.php","line":36,"function":"handleRequest","class":"OC","type":"::"}],"File":"/var/www/nextcloud/lib/private/AppFramework/Http/Dispatcher.php","Line":158,"Previous":{"Exception":"Error","Message":"Call to a member function getUsers() on null","Code":0,"Trace":[{"file":"/var/www/nextcloud/lib/private/AppFramework/Http/Dispatcher.php","line":217,"function":"getAllAnalysts","class":"OCA\\Shifts\\Controller\\ShiftController","type":"->"},{"file":"/var/www/nextcloud/lib/private/AppFramework/Http/Dispatcher.php","line":126,"function":"executeController","class":"OC\\AppFramework\\Http\\Dispatcher","type":"->"},{"file":"/var/www/nextcloud/lib/private/AppFramework/App.php","line":156,"function":"dispatch","class":"OC\\AppFramework\\Http\\Dispatcher","type":"->"},{"file":"/var/www/nextcloud/lib/private/Route/Router.php","line":302,"function":"main","class":"OC\\AppFramework\\App","type":"::"},{"file":"/var/www/nextcloud/lib/base.php","line":1006,"function":"match","class":"OC\\Route\\Router","type":"->"},{"file":"/var/www/nextcloud/index.php","line":36,"function":"handleRequest","class":"OC","type":"::"}],"File":"/var/www/nextcloud/apps/shifts/lib/Controller/ShiftController.php","Line":154},"CustomMessage":"--"}} sorry, already mentioned https://github.com/csoc-de/Shifts/issues/16
gharchive/issue
2022-01-01T10:02:36
2025-04-01T06:38:18.153481
{ "authors": [ "franconianmetal" ], "repo": "csoc-de/Shifts", "url": "https://github.com/csoc-de/Shifts/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2278268578
kairos-cli: implement call to deposit endpoint and integration test Things done:
- Moved all code except CLI parsing to kairos_cli::run
- Introduced a new kairos-server-address option that defaults to a value if not set
- Implemented a client function that will submit a transaction to the kairos-server; this client function introduces a KairosClientError enum type that could be used for better error handling
- Implemented the mapping from CLI arguments for a deposit -> deposit payload, signing and submitting of a deposit
- Updated the integration test to actually spawn a kairos-server
- Updated the e2e test
Overview: #92
Motivation:
I'm a little confused as to why you're spawning a tokio runtime to use reqwest's async API in a CLI application when reqwest has a blocking API and the nature of a CLI is generally synchronous. Let's merge it; I'm just leaving this comment here for future reference and to make everyone aware of reqwest's blocking API. @Rom3dius that's right! The CLI client (to be replaced with a Rust SDK in the future) should be synchronous, not only for simplicity, but because there is no advantage to having async there. @marijanp could you use reqwest::blocking? @Rom3dius @koxu1996 I began writing the client with potentially factoring it out into a separate crate, so I decided to make it async - such clients usually don't block - and to block on it in our CLI. But it might just be premature optimization, so I will make this blocking.
gharchive/pull-request
2024-05-03T19:12:45
2025-04-01T06:38:18.165602
{ "authors": [ "Rom3dius", "koxu1996", "marijanp" ], "repo": "cspr-rad/kairos", "url": "https://github.com/cspr-rad/kairos/pull/91", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
223339961
Exporting the generated QR code as image Would it be possible to get the QR code as a usable image, like base64? Is that something we can already access, or is an additional method in the component needed? The library also works perfectly in Expo. I want to have the photo enlarged - how do I do that? @lucfranken did you work out a solution to this? I'm also wanting to do the same. We moved to generating a PDF (for which we needed the image) and used PDFMake, which can internally generate a QR code. Hello, I don't know whether your problem has been solved; we have the same requirement and need to save the generated QR code image locally. @lucfranken Hi! Is there going to be any solution for this? I would also like to know about this.
gharchive/issue
2017-04-21T10:11:35
2025-04-01T06:38:18.179343
{ "authors": [ "Cariss", "kylanhurt", "lucfranken", "nianxiaoning", "omoprodigi", "sigmazen", "xiaomuxi" ], "repo": "cssivision/react-native-qrcode", "url": "https://github.com/cssivision/react-native-qrcode/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
899032658
Quote column names to avoid name collisions with reserved names or keywords What type of PR is this? bug What this PR does / why we need it: Imagine we have this CSV file (called nutrient.csv, from https://fdc.nal.usda.gov/download-datasets.html // Supporting data for Downloads - April 2021 (CSV – 1.0M)):
id,name,unit_name,nutrient_nbr,rank
1001,Solids,G,201,200
1002,Nitrogen,G,202,500
1003,Protein,G,203,600
1004,Total lipid (fat),G,204,800
1005,"Carbohydrate, by difference",G,205,1110
The command ./csv2db generate --file nutrient.csv --table nutrient --verbose generates:
Finding file(s).
Found 1 file(s).
Generating CREATE TABLE statement.
CREATE TABLE nutrient
(
ID VARCHAR(1000),
NAME VARCHAR(1000),
UNIT_NAME VARCHAR(1000),
NUTRIENT_NBR VARCHAR(1000),
RANK VARCHAR(1000)
);
Executing this query against a MySQL Community Server - GPL 8.0.21 will result in:
[ERROR in query 2] You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'RANK VARCHAR(1000) )' at line 7
The reason: RANK is not quoted and has been a reserved keyword since MySQL 8.0.2. A similar error will happen for the insert statements. Solution: quoting column names. This PR quotes the column names in the CREATE and INSERT statements. Special notes for your reviewer: This was tested against MySQL Community Server - GPL 8.0.21. Other database types were not tested. Thanks a lot @andygrunwald for this PR! Unfortunately, I'm pretty sure that different databases use different quoting techniques. Will have to go through the documentation of each supported database and find the correct characters to use. @gvenzl Would it be a (pragmatic) solution to apply these changes only for MySQL? Like if cfg.db_type == f.DBType.MYSQL.value: # Apply "`" else: # normal behaviour Let me know what you think. Hey @andygrunwald, You know, I was thinking along similar lines this morning. Not all databases may have the same restrictions with keywords; some might be smart enough to understand when it's a column name and hence not need the quoting. On the other hand, I know that Oracle, and I believe SQL Server and Db2, will also introduce case sensitivity the moment an identifier is quoted, i.e. SELECT "RANK" FROM nutrient is a different column than SELECT "rank" FROM nutrient, while SELECT RANK FROM nutrient and SELECT rank FROM nutrient are the same. Right now csv2db makes all identifiers uppercase, as the case does not matter. But the moment quotes come into the picture... I also need to read up on MySQL and on what other semantics the quoting character may imply as well. Perhaps it's not a biggy to just quote all the identifiers for all the supported databases; perhaps it's just necessary for MySQL; and perhaps it's best left to another command-line option. @gvenzl Sounds good and reasonable. Thanks for putting thought into this. Is there anything that comes to mind where I can help/support? Hey @andygrunwald, Sorry, Monday was a bank holiday over here and I took Friday off as well to go to the middle of nowhere and recharge a bit. Hence I didn't reply until now. You could help with the research for quoted identifiers if you like. That is my next step: see what's implied with the quotes for each supported database, as described above. We have to check the following: Oracle MySQL Postgres SQL Server Db2 @gvenzl No worries. Take your time. Life is more important. I did not expect a super-fast response, because I know how it can be to maintain a project.
Ok, let's start with the research:
1. Oracle
Oracle seems to be using the double quote ". I was not able to find an official documentation page about this, but a few other sources:
Ispirer SQLWays Database Migration Software > Oracle Reserved Words: There are reserved words in Oracle which cannot be used as identifiers (table or column names etc.) without being delimited with double quotation marks ("). The only exception is that you cannot use the uppercase reserved word ROWID as an identifier, even in double quotation marks.
StackOverflow: How do I escape a reserved word in Oracle?: [...] Oracle appears to use double quotes (", eg "table") and apparently requires the correct case—whereas, for anyone interested, MySQL defaults to using backticks (`) except when set to use double quotes for compatibility.
2. MySQL
In MySQL you quote reserved keywords via the backtick character: CREATE TABLE `interval` (begin INT, end INT);
Source: MySQL 8.0 Reference Manual / Language Structure / Keywords and Reserved Words
3. Postgres
Here, it seems to be a simple double quote " character: There is a second kind of identifier: the delimited identifier or quoted identifier. It is formed by enclosing an arbitrary sequence of characters in double-quotes ("). A delimited identifier is always an identifier, never a key word. So "select" could be used to refer to a column or table named "select", whereas an unquoted select would be taken as a key word and would therefore provoke a parse error when used where a table or column name is expected.
Source: Documentation → PostgreSQL 13 → 4.1. Lexical Structure → 4.1.1. Identifiers and Key Words
4. SQL Server
It seems that square brackets [FIELD] are used to escape a reserved keyword. I was not able to find THE one original resource to prove this, but these links might be useful:
StackOverflow: How to deal with SQL column names that look like SQL keywords?
What is the use of the square brackets [] in sql statements?
Delimited Identifiers (Database Engine)
5. Db2
Seems to be the double quote character as well ": However, a keyword can be used as an identifier in a context where it is a reserved word, by specifying it as a delimited identifier. For example: ALL cannot be a column name in a SELECT statement, unless it is delimited. However, if the quotation mark (") is the escape character that begins and ends delimited identifiers, "ALL" can be used as a column name in a SELECT statement. [...]
Source: Db2 for z/OS > Reserved words
Side note: I only have deep experience with MySQL. All other database types are based on documentation reading.
Does this help @gvenzl ? Hey @gvenzl, based on my research, I added quote support of column names for all five database types. It would be nice if you could provide a review. Let me know what you think. Hey @gvenzl, any update on this? Anything I can do to speed it up? Hey @andygrunwald, my sincere apologies! Life has caught up with me and I dropped the ball on this until now. Thanks a lot for your analysis and the PR! I found that SQL Server seems to support both the square brackets ([]) and the double-quoted identifiers (""): https://docs.microsoft.com/en-us/sql/t-sql/statements/set-quoted-identifier-transact-sql?view=sql-server-ver15 The latter is thanks to the parameter QUOTED_IDENTIFIER, which the documentation says is turned ON by default, but on my test environment it actually appears to be OFF by default.
The double-quoted identifiers are also what the SQL standard specifies. The only outlier appears to be MySQL. However, as stated before, there is another implication with quoted identifiers, which is that they also make the identifier case-sensitive. Unfortunately, to make matters a bit more complex, the way they treat the case is also not consistent. For example, Db2 treats everything as uppercase by default:
db2 => connect to test;
Database Connection Information
Database server = DB2/LINUXX8664 11.5.4.0
SQL authorization ID = DB2INST1
Local database alias = TEST
db2 => create table test (rank int);
DB20000I The SQL command completed successfully.
db2 =>
db2 => create table test1 ("rank" int);
DB20000I The SQL command completed successfully.
db2 =>
db2 => create table test2 ("RANK" int);
DB20000I The SQL command completed successfully.
db2 =>
db2 => select rank from test;
RANK
-----------
0 record(s) selected.
db2 => select rank from test1;
SQL0104N An unexpected token "" was found following "SYSIBM.RANK". Expected tokens may include: "OVER". SQLSTATE=42601
db2 =>
db2 => select rank from test2;
RANK
-----------
0 record(s) selected.
db2 => describe table test;
                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
RANK                            SYSIBM    INTEGER             4          0     Yes
1 record(s) selected.
db2 => describe table test1;
                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
rank                            SYSIBM    INTEGER             4          0     Yes
1 record(s) selected.
db2 => describe table test2;
                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
RANK                            SYSIBM    INTEGER             4          0     Yes
1 record(s) selected.
db2 =>
You can see that table test1 actually has a lowercase specified column named rank. Given that the database treats every identifier as uppercase by default, the select rank from test1; really becomes a select RANK from TEST1; which in that case does not exist. The same is true for Oracle DB:
Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
SQL> create table test (rank int);
Table created.
SQL> create table test1 ("rank" int);
Table created.
SQL> create table test2 ("RANK" int);
Table created.
SQL> select rank from test;
no rows selected
SQL> select rank from test1;
select rank from test1
*
ERROR at line 1:
ORA-00904: "RANK": invalid identifier
SQL> select rank from test2;
no rows selected
SQL> describe test;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
RANK                                               NUMBER(38)
SQL> describe test1;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
rank                                               NUMBER(38)
SQL> describe test2;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
RANK                                               NUMBER(38)
SQL>
However, Postgres on the other hand treats everything as lowercase by default, unlike Oracle or Db2:
psql (13.4 (Debian 13.4-1.pgdg100+1))
Type "help" for help.
test=> create table test (rank int); CREATE TABLE test=> create table test1 ("rank" int); CREATE TABLE test=> create table test2 ("RANK" int); CREATE TABLE test=> select rank from test; rank ------ (0 rows) test=> select rank from test1; rank ------ (0 rows) test=> select rank from test2; ERROR: column "rank" does not exist LINE 1: select rank from test2; ^ test=> \d test; Table "public.test" Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- rank | integer | | | test=> \d test1; Table "public.test1" Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- rank | integer | | | test=> \d test2; Table "public.test2" Column | Type | Collation | Nullable | Default --------+---------+-----------+----------+--------- RANK | integer | | | test=> Yet MySQL does not care about the case of the quoted identifiers: Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> create table test1 (`rank` int); Query OK, 0 rows affected (0.05 sec) mysql> create table test2 (`RANK` int); Query OK, 0 rows affected (0.07 sec) mysql> select `rank` from test1; Empty set (0.00 sec) mysql> select `rank` from test2; Empty set (0.00 sec) mysql> describe test1; +-------+------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+------+------+-----+---------+-------+ | rank | int | YES | | NULL | | +-------+------+------+-----+---------+-------+ 1 row in set (0.01 sec) mysql> describe test2; +-------+------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+------+------+-----+---------+-------+ | RANK | int | YES | | NULL | | +-------+------+------+-----+---------+-------+ 1 row in set (0.01 sec) This is also noted in the MySQL documentation: Partition, subpartition, column, index, stored routine, event, and resource group names are not case-sensitive on any platform, nor are column aliases. 
SQL Server, btw, does also not seem to care much about the case-sensitivity of the quoted identifier: 1> create table test ([rank] int); 2> go 1> create table test1 ([rank] int); 2> go 1> create table test2 ([RANK] int); 2> go 1> select rank from test; 2> go rank ----------- (0 rows affected) 1> select rank from test1; 2> go rank ----------- (0 rows affected) 1> select rank from test2; 2> go rank ----------- (0 rows affected) 1> sp_help test; 2> go Name Owner Type Created_datetime -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ------------------------------- ----------------------- test dbo user table 2021-08-17 02:51:56.267 Column_name Type Computed Length Prec Scale Nullable TrimTrailingBlanks FixedLenNullInSource Collation -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ----------------------------------- ----------- ----- ----- ----------------------------------- ----------------------------------- ----------------------------------- -------------------------------------------------------------------------------------------------------------------------------- rank int no 4 10 0 yes (n/a) (n/a) NULL Identity Seed Increment Not For Replication -------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------- ---------------------------------------- ------------------- No identity column defined. NULL NULL NULL RowGuidCol -------------------------------------------------------------------------------------------------------------------------------- No rowguidcol column defined. Data_located_on_filegroup -------------------------------------------------------------------------------------------------------------------------------- PRIMARY The object 'test' does not have any indexes, or you do not have permissions. No constraints are defined on object 'test', or you do not have permissions. No foreign keys reference table 'test', or you do not have permissions on referencing tables. No views with schema binding reference table 'test'. 
1> sp_help test1; 2> go Name Owner Type Created_datetime -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ------------------------------- ----------------------- test1 dbo user table 2021-08-17 02:52:01.017 Column_name Type Computed Length Prec Scale Nullable TrimTrailingBlanks FixedLenNullInSource Collation -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ----------------------------------- ----------- ----- ----- ----------------------------------- ----------------------------------- ----------------------------------- -------------------------------------------------------------------------------------------------------------------------------- rank int no 4 10 0 yes (n/a) (n/a) NULL Identity Seed Increment Not For Replication -------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------- ---------------------------------------- ------------------- No identity column defined. NULL NULL NULL RowGuidCol -------------------------------------------------------------------------------------------------------------------------------- No rowguidcol column defined. Data_located_on_filegroup -------------------------------------------------------------------------------------------------------------------------------- PRIMARY The object 'test1' does not have any indexes, or you do not have permissions. No constraints are defined on object 'test1', or you do not have permissions. No foreign keys reference table 'test1', or you do not have permissions on referencing tables. No views with schema binding reference table 'test1'. 1> sp_help test2; 2> go Name Owner Type Created_datetime -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ------------------------------- ----------------------- test2 dbo user table 2021-08-17 02:52:06.880 Column_name Type Computed Length Prec Scale Nullable TrimTrailingBlanks FixedLenNullInSource Collation -------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ----------------------------------- ----------- ----- ----- ----------------------------------- ----------------------------------- ----------------------------------- -------------------------------------------------------------------------------------------------------------------------------- RANK int no 4 10 0 yes (n/a) (n/a) NULL Identity Seed Increment Not For Replication -------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------- ---------------------------------------- ------------------- No identity column defined. 
NULL NULL NULL
RowGuidCol
--------------------------------------------------------------------------------------------------------------------------------
No rowguidcol column defined.
Data_located_on_filegroup
--------------------------------------------------------------------------------------------------------------------------------
PRIMARY
The object 'test2' does not have any indexes, or you do not have permissions.
No constraints are defined on object 'test2', or you do not have permissions.
No foreign keys reference table 'test2', or you do not have permissions on referencing tables.
No views with schema binding reference table 'test2'.
1>
1> set quoted_identifier on;
2> go
1> create table test (rank int);
2> go
1> create table test1 ("rank" int);
2> go
1> create table test2 ("RANK" int);
2> go
1> select rank from test;
2> go
rank
-----------
(0 rows affected)
1> select rank from test1;
2> go
rank
-----------
(0 rows affected)
1> select rank from test2;
2> go
rank
-----------
(0 rows affected)
1>
I think that the conclusion of all of this is that if we introduce quoted identifier support into csv2db, it will have to be via an additional parameter, not just blanketed over everything. There is still an outstanding question of what the quote characters should be for SQL Server: the legacy [] or the standard "", which is apparently not turned on by default. Last but not least, identifiers also apply to table names. If one wants to quote identifiers, it is to be assumed that it should apply to both the column names and the table name itself. @gvenzl Thanks a lot for this big analysis. Let's break down the challenges, followed by suggestions from me: Quoted identifier via an additional parameter That is a very good idea, I agree. This will not change the current behavior, but will also add the possibility for everyone. Let me work on this and modify this PR. [ ] Add an additional parameter to quote fields and tables Quote character (" vs. []) What do you think about using " as the default and adding yet another parameter to switch the quote character to []? This way, the user can switch, because they know their setup best. Quoting table names Good point. Thanks for catching this. Here, I agree as well. Table names should also be quoted once enabled. Let me work on this and modify this PR. [ ] When quoting is enabled, also quote table names Lower vs. Uppercase of database systems Maybe it is useful to document this knowledge, because it seems to be applied once identifiers are quoted. A table in the README can help here. Your feedback What do you think? If these changes are applied, do you see any chance of adding this into the main branch? Thanks @andygrunwald for breaking it down! I agree with all the points you have made. For the [] vs "" I still want to research a bit more what SQL Server actually recommends. Adding a separate parameter for just that strikes me as a bit too excessive. It could be that, for example, it turns out that [] is already deprecated/discouraged. And if not, csv2db could perhaps be smart enough to check for the parameter at runtime and make the decision itself (if that parameter value is accessible to non-DBA database users that a user will most likely load data with). Yeah, I think once we have tracked it all down, we can merge it into main. We should also agree on the parameter name. I think --quote-identifiers makes it quite explicit. However, I just saw that -q (--quote) is already occupied for the string quotation of the actual data.
Perhaps we don't need a shorthand parameter for this? Personally, I am a fan of being explicit and try to use long parameter names. I am fine with --quote-identifiers, but --use-quote-identifiers would work as well. Or --keyword-quoting or something like this. I am also OK with dropping a shorthand. Let me try to reserve some time in the next days to get something done. I will ping you for early reviews. Ok? Great! "quote" in the case of --quote-identifiers is a verb, i.e. "Please csv2db, quote these identifiers for me". For the "use" variant, technically it should be --use-quoted-identifiers, as they are generally referred to as "quoted identifiers". "Keyword" is technically not correct, as a keyword is different from an identifier. Consider, for example, SELECT col1 FROM table1;: both col1 and table1 are identifiers, while SELECT and FROM are keywords. Given your comments, I would say let's please go with --quote-identifiers. Fair enough and good point. Thanks. Let's go for it. Awesome, thanks! @gvenzl I spent some time applying the changes we have discussed. Read through it and let me know what you think: Quoted identifier via an additional parameter I introduced a new flag --quote-identifiers (see https://github.com/csv2db/csv2db/pull/51/commits/0deba4aa466a1df9a41737604edfc2133ee87377) and updated the docs (see https://github.com/csv2db/csv2db/pull/51/commits/0707922cdc6921020092b5472d9678c80c8d09c8). Quote character for SQL Server (" vs. []) A bit more research needs to be done on this topic. I am not that familiar with SQL Server. For now, I added a TODO to decide which SQL Server quote character we use. See https://github.com/csv2db/csv2db/pull/51/commits/301cb0c67adb3babb4bc107af2c461eec1bd4534 The question remains how we progress on this. How can I help? Do you plan to do it? One idea could be to document the limitation and create an issue to fix this. This would have the benefit of not blowing up this PR further and not blocking it any longer. What do you think? Quoting table names Done in https://github.com/csv2db/csv2db/pull/51/commits/63dc1da9b2734c672d921ee9b6a3e9eedf06aefe Lower vs. Uppercase of database systems As discussed, I documented the behavior in https://github.com/csv2db/csv2db/pull/51/commits/0707922cdc6921020092b5472d9678c80c8d09c8 Open topics Unit/Integration tests We don't have any automated tests yet. Personally, I would prefer to wait for an easier test solution like the one described in https://github.com/csv2db/csv2db/issues/52 Feedback That's it from my side. Is there anything you have in mind that should be tackled? Superseded by #61
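For reference, a minimal Python sketch of per-database identifier quoting along the lines discussed in this thread. The quote characters follow the research above, but the function, flag and dictionary names are hypothetical - this is not csv2db's actual code (see #61 for the real implementation):

QUOTE_CHARS = {
    "oracle": ('"', '"'),
    "mysql": ("`", "`"),
    "postgres": ('"', '"'),
    "sqlserver": ("[", "]"),  # "" also works when QUOTED_IDENTIFIER is ON
    "db2": ('"', '"'),
}

def quote_identifier(identifier: str, db_type: str, quote_identifiers: bool = False) -> str:
    # Opt-in, because quoted identifiers become case-sensitive on most
    # databases, as demonstrated in the sessions above.
    if not quote_identifiers:
        return identifier
    opening, closing = QUOTE_CHARS[db_type]
    return f"{opening}{identifier}{closing}"

Usage would then look like quote_identifier("rank", "mysql", quote_identifiers=True), yielding `rank`.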
gharchive/pull-request
2021-05-23T14:28:23
2025-04-01T06:38:18.241015
{ "authors": [ "andygrunwald", "gvenzl" ], "repo": "csv2db/csv2db", "url": "https://github.com/csv2db/csv2db/pull/51", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
731964299
make network compile using Box One of the two or both need to be boxed to fix these errors: Compiling azure_mgmt_network v0.1.0 (/Users/cameron/rs/azure-sdk-for-rust/services/mgmt/network) error[E0072]: recursive type `IpConfigurationPropertiesFormat` has infinite size --> services/mgmt/network/src/package_2020_06/models.rs:1503:1 | 1503 | pub struct IpConfigurationPropertiesFormat { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ recursive type has infinite size ... 1511 | pub public_ip_address: Option<PublicIpAddress>, | ----------------------- recursive without indirection | help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to make `IpConfigurationPropertiesFormat` representable | 1511 | pub public_ip_address: Box<Option<PublicIpAddress>>, | ^^^^ ^ error[E0072]: recursive type `IpConfiguration` has infinite size --> services/mgmt/network/src/package_2020_06/models.rs:1516:1 | 1516 | pub struct IpConfiguration { | ^^^^^^^^^^^^^^^^^^^^^^^^^^ recursive type has infinite size ... 1520 | pub properties: Option<IpConfigurationPropertiesFormat>, | --------------------------------------- recursive without indirection | help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to make `IpConfiguration` representable | 1520 | pub properties: Box<Option<IpConfigurationPropertiesFormat>>, | ^^^^ ^ error[E0072]: recursive type `PublicIpAddressPropertiesFormat` has infinite size --> services/mgmt/network/src/package_2020_06/models.rs:5239:1 | 5239 | pub struct PublicIpAddressPropertiesFormat { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ recursive type has infinite size ... 5245 | pub ip_configuration: Option<IpConfiguration>, | ----------------------- recursive without indirection | help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to make `PublicIpAddressPropertiesFormat` representable | 5245 | pub ip_configuration: Box<Option<IpConfiguration>>, | ^^^^ ^ error[E0072]: recursive type `models::PublicIpAddress` has infinite size --> services/mgmt/network/src/package_2020_06/models.rs:5264:1 | 5264 | pub struct PublicIpAddress { | ^^^^^^^^^^^^^^^^^^^^^^^^^^ recursive type has infinite size ... 5270 | pub properties: Option<PublicIpAddressPropertiesFormat>, | --------------------------------------- recursive without indirection | help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to make `models::PublicIpAddress` representable | 5270 | pub properties: Box<Option<PublicIpAddressPropertiesFormat>>, | ^^^^ ^ error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0072`. error: could not compile `azure_mgmt_network` To use the network_fixed branch, these would be the dependencies: [dependencies] azure_mgmt_network = { branch = "network_fixed", git = "https://github.com/ctaggart/azure-sdk-for-rust" } azure_identity = { branch = "network_fixed", git = "https://github.com/ctaggart/azure-sdk-for-rust" } tokio = { version = "*", features = ["macros"] } reqwest = { version = "*", features = ["json"] } @bmc-msft, it will be in master once https://github.com/Azure/azure-sdk-for-rust/pull/76 is merged.
gharchive/pull-request
2020-10-29T03:26:31
2025-04-01T06:38:18.255219
{ "authors": [ "ctaggart" ], "repo": "ctaggart/azure-sdk-for-rust", "url": "https://github.com/ctaggart/azure-sdk-for-rust/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
631506607
feat: create a bootstrap command to initialize users on the first run.

What kind of change does this PR introduce? (Bug fix, feature, docs update, ...)
The ability to bootstrap a goiardi installation.

What is the new behavior (if this is a feature change)?
Added a new flag, --bootstrap. When used, it makes the app exit cleanly once the core actors have been created. This is useful in multi-stage initial setups where you want to extract the admin pem file and/or add custom logic before the actual goiardi deployment.

Does this PR introduce a breaking change?
No.

Huh, that's an interesting idea that hadn't occurred to me. I need to double-check and make sure that there's not a reason the default actor creation wasn't in that particular spot, but otherwise it looks good to me.

In theory we can leave it as it is; I just moved it because (since we are going to terminate the execution flow early) it felt like a waste to do the cleanup chores.

Absolutely; I just wanted to double-check that there wasn't a reason I put it in that location. Turns out there isn't.
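A hedged sketch of the pattern the PR describes — not goiardi's actual code; `createCoreActors` and `startServer` are made-up stand-ins for the real setup and server loop:

```go
package main

import (
	"flag"
	"log"
	"os"
)

// --bootstrap: finish core setup, then exit cleanly so an outer script can
// harvest artifacts (e.g. the admin .pem) before the real deployment.
var bootstrap = flag.Bool("bootstrap", false, "create core actors, then exit cleanly")

func createCoreActors() error { return nil } // placeholder for real setup
func startServer()            {}            // placeholder for the server loop

func main() {
	flag.Parse()
	if err := createCoreActors(); err != nil {
		log.Fatal(err)
	}
	if *bootstrap {
		log.Println("bootstrap complete: core actors created, exiting")
		os.Exit(0) // clean early exit, skipping the normal server path
	}
	startServer()
}
```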
gharchive/pull-request
2020-06-05T11:03:41
2025-04-01T06:38:18.258739
{ "authors": [ "alekc", "ctdk" ], "repo": "ctdk/goiardi", "url": "https://github.com/ctdk/goiardi/pull/73", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
450100175
How to package per module

Hi, how can I build separate CRN bundles for different modules?

Suppose we have two businesses, created with crn-cli init flight and crn-cli init bus respectively. We then enter the flight and bus directories and run crn-cli pack in each. This generates the packaged output under publish in each project directory: rn_common plus rn_flight or rn_bus, where rn_common is the shared part and rn_flight/rn_bus are the business bundles.

Summary: enter each project directory and run crn-cli pack separately (see the sketch below).

Hi, thank you very much for the answer. Creating two projects works that way — but if a single project contains three modules, is there a good way to package them separately?
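A hedged workflow sketch of the answer above; the module names are just the examples from this thread:

```sh
# one workspace per business module
crn-cli init flight
crn-cli init bus

(cd flight && crn-cli pack)   # -> publish/rn_common + publish/rn_flight
(cd bus    && crn-cli pack)   # -> publish/rn_common + publish/rn_bus
```

Per the answer, one workspace per module is the supported route here; the shared rn_common bundle is produced alongside each business bundle.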
gharchive/issue
2019-05-30T02:27:50
2025-04-01T06:38:18.281983
{ "authors": [ "blackwuxin", "fangcaiwen" ], "repo": "ctripcorp/CRN", "url": "https://github.com/ctripcorp/CRN/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
305412627
Recently the config-change push mechanism stopped working — there are no error logs, only the 5-minute polling fallback. Has anyone run into something similar?

Check the client logs; there should be something there. Look at Warning-level entries too — it's probably a problem with the long connection.

@LittleShrimp1987 you were also the one who raised this in the chat group earlier, right? From the chat history it looks like it's been resolved?

My problem is solved. When I added a Spring MVC filter of my own, it overwrote the response body of the DeferredResult used for the HTTP long-polling async requests with an empty body. Removing the filter fixed it.

It was my own issue; it's resolved.
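For anyone hitting the same symptom, here is a hedged sketch of the usual safeguard: make a custom filter ignore the ASYNC dispatch that Spring MVC uses to write a DeferredResult. This is illustrative, not Apollo's code, and the filter/class names are made up:

```java
import java.io.IOException;
import javax.servlet.DispatcherType;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Body-rewriting logic must not run on the ASYNC dispatch, or it can blank
// out the DeferredResult response written by the long-polling endpoint.
public class BodyRewritingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (req.getDispatcherType() == DispatcherType.ASYNC) {
            chain.doFilter(req, res); // pass the async dispatch through untouched
            return;
        }
        // ... wrap/modify the response only for plain REQUEST dispatches ...
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void destroy() {}
}
```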
gharchive/issue
2018-03-15T04:34:01
2025-04-01T06:38:18.283578
{ "authors": [ "LittleShrimp1987", "lepdou", "nobodyiam" ], "repo": "ctripcorp/apollo", "url": "https://github.com/ctripcorp/apollo/issues/1000", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
425181269
One layer of the Docker image won't download

One layer of the Docker image fails to download; I've tried several times:

```
Status: Downloaded newer image for mysql:5.7
Pulling apollo-quick-start (nobodyiam/apollo-quick-start:)...
latest: Pulling from nobodyiam/apollo-quick-start
6c40cc604d8e: Downloading   <- this layer never finishes
e78b80385239: Download complete
f41fe1b6eee3: Download complete
f81fe3873b24: Download complete
6ee996dcbacb: Download complete
bf515fdbd069: Download complete
5e4fe6eab272: Download complete
7d18aca46713: Download complete
cdbb4cafaa13: Download complete
ERROR: error parsing HTTP 408 response body: invalid character '<' looking for beginning of value: "408 Request Time-out\nYour browser didn't send a complete request in time.\n\n\n"
```

Check your network, or set up a registry mirror.

```
Pulling apollo-quick-start (nobodyiam/apollo-quick-start:)...
latest: Pulling from nobodyiam/apollo-quick-start
6c40cc604d8e: Pull complete
e78b80385239: Pull complete
f41fe1b6eee3: Pull complete
f81fe3873b24: Pull complete
6ee996dcbacb: Pull complete
bf515fdbd069: Pull complete
5e4fe6eab272: Pull complete
7d18aca46713: Pull complete
cdbb4cafaa13: Pull complete
Creating apollo-dbdata ... done
Creating apollo-db ... done
Creating apollo-quick-start ... done
```

Closing this case for now; if there are still problems, you can provide more information or discuss in the chat group.
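One concrete way to apply the registry-mirror suggestion — a hedged sketch; the mirror URL is a placeholder, substitute whichever accelerator you use:

```sh
# Configure a registry mirror in the daemon config, then restart dockerd.
cat >/etc/docker/daemon.json <<'EOF'
{ "registry-mirrors": ["https://mirror.example.com"] }
EOF
systemctl restart docker

# Retry the stuck layer download through the mirror:
docker-compose pull
```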
gharchive/issue
2019-03-26T01:09:20
2025-04-01T06:38:18.286918
{ "authors": [ "abelzhyb", "nobodyiam" ], "repo": "ctripcorp/apollo", "url": "https://github.com/ctripcorp/apollo/issues/2080", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
456920029
Does the Open API support cluster operations?

Does the Open API support operations on clusters, such as creating or deleting a cluster?

Not currently supported.
gharchive/issue
2019-06-17T12:57:00
2025-04-01T06:38:18.287990
{ "authors": [ "houzhen1308", "nobodyiam" ], "repo": "ctripcorp/apollo", "url": "https://github.com/ctripcorp/apollo/issues/2346", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
236917797
Add a backup of the desired output csv

Verified that code works. Great job!
gharchive/pull-request
2017-06-19T14:52:39
2025-04-01T06:38:18.294082
{ "authors": [ "KevinHanson", "StewartUF" ], "repo": "ctsit/J.O.B-Training-Repo-2", "url": "https://github.com/ctsit/J.O.B-Training-Repo-2/pull/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1313343715
Repeated Wrong Perms on /var/run/chrony

Seems to be working, but I do see the permissions issue a lot in the log. Running in Docker on an RPi4 and an Intel NUC, not with the 'higher security' option. The log below is pulled from the RPi4; it's similar on the NUC.

Volumes:
- /var/lib/chrony
- /etc/chrony
- /run/chrony

Build: cturra/docker-ntp, build-date: 2022-02-27T03:59:53+0000

Thanks! And thanks for the docker image!

```
2022-07-06T01:54:05Z Wrong permissions on /var/run/chrony
2022-07-06T01:54:05Z Disabled command socket /var/run/chrony/chronyd.sock
2022-07-06T01:54:05Z Disabled control of system clock
2022-07-06T01:54:11Z Selected source 69.89.207.99 (0.north-america.pool.ntp.org)
2022-07-06T01:54:58Z chronyd exiting
2022-07-06T01:55:05Z chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
2022-07-06T01:55:05Z Wrong permissions on /var/run/chrony
2022-07-06T01:55:05Z Disabled command socket /var/run/chrony/chronyd.sock
2022-07-06T01:55:05Z Disabled control of system clock
2022-07-06T01:55:10Z Selected source 142.147.88.111 (2.north-america.pool.ntp.org)
2022-07-06T01:55:11Z Selected source 192.5.41.209 (ntp2.usno.navy.mil)
2022-07-06T02:45:40Z chronyd exiting
2022-07-06T02:45:56Z chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
2022-07-06T02:45:56Z Wrong permissions on /var/run/chrony
2022-07-06T02:45:56Z Disabled command socket /var/run/chrony/chronyd.sock
2022-07-06T02:45:56Z Disabled control of system clock
2022-07-06T02:47:34Z Forward time jump detected!
2022-07-06T02:47:40Z Selected source 192.5.41.209 (ntp2.usno.navy.mil)
2022-07-10T07:24:52Z Source 68.171.16.4 replaced with 45.32.207.136 (0.north-america.pool.ntp.org)
2022-07-12T15:41:16Z chronyd exiting
2022-07-12T15:41:32Z chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
2022-07-12T15:41:32Z Wrong permissions on /var/run/chrony
2022-07-12T15:41:32Z Disabled command socket /var/run/chrony/chronyd.sock
2022-07-12T15:41:32Z Disabled control of system clock
2022-07-12T15:43:07Z Selected source 192.5.41.209 (ntp2.usno.navy.mil)
2022-07-12T15:44:20Z Source 129.146.64.32 replaced with 72.14.183.239 (0.north-america.pool.ntp.org)
2022-07-19T14:33:34Z chronyd exiting
2022-07-19T14:33:51Z chronyd version 4.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
2022-07-19T14:33:51Z Wrong permissions on /var/run/chrony
2022-07-19T14:33:51Z Disabled command socket /var/run/chrony/chronyd.sock
2022-07-19T14:33:51Z Disabled control of system clock
2022-07-19T14:35:30Z Forward time jump detected!
2022-07-19T14:35:36Z Selected source 192.5.41.209 (ntp2.usno.navy.mil)
```

hey @phantum29, sorry to hear you're having a rough time with directory permissions, and apologies for my delayed response. thanks for providing such detailed information!

based on the details you've shared, it sounds like you're mounting /run/chrony with a volume, is that right? if so, there are probably ownership or permission issues with that external (to the docker container) volume. during startup there is a script that checks these permissions, but that would only be effective if they can be modified by the root process of the container. you can see that here:
https://github.com/cturra/docker-ntp/blob/main/assets/startup.sh#L8-L9

from my locally running version of the container, here is what the run directory and its contents look like (note: i am not using any volumes):

```
$> docker exec -ti ntp ls -l /run/
total 0
drwxr-x--- 2 chrony chrony 80 Jul 25 15:59 chrony
$> docker exec -ti ntp ls -l /run/chrony
total 4
-rw-r--r-- 1 root root 2 Jul 17 19:09 chronyd.pid
srwxr-xr-x 1 chrony chrony 0 Jul 17 19:09 chronyd.sock
```

i hope this helps point you in the right direction. let me know how you make out.

Thank you! I thought the perms were correct but somehow they were wrong. Sorry I didn't double-check that, and thank you for such a thorough response. Really appreciate the container!
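A hedged sketch of the fix the thread converges on — align the host-side volume with the ownership the container expects. The uid/gid and host path below are placeholders; check the actual chrony uid inside your image first:

```sh
# Find the uid/gid the container's chrony user maps to (image-dependent):
docker exec ntp id chrony

# On the host, give the mounted directory matching ownership and the same
# mode the maintainer's non-volume container shows (drwxr-x---):
sudo chown <uid>:<gid> /path/to/host/run-chrony
sudo chmod 750 /path/to/host/run-chrony
```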
https://github.com/cturra/docker-ntp/blob/main/assets/startup.sh#L8-L9 from my locally running version of the container, here is what the run directory and it's contents look like (note: i am not using any volumes): $> docker exec -ti ntp ls -l /run/ total 0 drwxr-x--- 2 chrony chrony 80 Jul 25 15:59 chrony $> docker exec -ti ntp ls -l /run/chrony total 4 -rw-r--r-- 1 root root 2 Jul 17 19:09 chronyd.pid srwxr-xr-x 1 chrony chrony 0 Jul 17 19:09 chronyd.sock i hope this helps point you in the right direction. let me know how you make out. Thank you! I thought the perms were correct but somehow they were wrong. Sorry I didn't double-check that and thank you for such a thorough response. Really appreciate the container!
gharchive/issue
2022-07-21T14:23:17
2025-04-01T06:38:18.305498
{ "authors": [ "cturra", "phantum29" ], "repo": "cturra/docker-ntp", "url": "https://github.com/cturra/docker-ntp/issues/51", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
344907202
Custom style not working

If I add a custom style as follows:

```js
import * as customStyle from 'react-tabtab/lib/themes/bootstrap';
```

this works fine. However, if I create my own style following the instructions, put it into themes/index.js, and import it as follows:

```js
import * as customStyle from '../themes';
```

then it does not work. By "not work" I mean the styles from my index.js are not picked up; there are no errors either. My index.js is an exact copy of https://github.com/ctxhou/react-tabtab/tree/master/src/themes/bootstrap

Hi @prodigylabs, could you paste your copied code so I can learn more? Did you change this line to import {styled} from 'react-tabtab'?
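A hedged sketch of the maintainer's hint: the in-repo theme resolves styled via a library-internal relative import that only exists inside react-tabtab's own source tree, so a theme copied into your app has to switch to the public package export. Everything else in the copied file would stay as-is:

```js
// themes/index.js in your app — start from a copy of the bootstrap theme,
// then change only the styled import so it resolves outside the library:
import { styled } from 'react-tabtab';

// ...rest of the copied bootstrap theme, unchanged...
```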
gharchive/issue
2018-07-26T16:02:33
2025-04-01T06:38:18.308576
{ "authors": [ "ctxhou", "prodigylabs" ], "repo": "ctxhou/react-tabtab", "url": "https://github.com/ctxhou/react-tabtab/issues/101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1908182023
Contact Details No response What would you like to ask or discuss? 正常情况hdfs日志应该是 /data/ds2hdfs/dolphinscheduler/2023-09-22/4292/1149520/3552154.log 这个格式 正常情况hdfs日志应该是 /data/ds2hdfs/dolphinscheduler/2023-09-22/4292/1149520/3552154.log 这个格式 是因为日志格式 1149520/3552154.log 有带 .(dot)号,我们这边解析escape了,查看一下, https://github.com/cubefs/compass/blob/main/task-application/src/main/java/com/oppo/cloud/application/util/EscapePathUtil.java#L26 public static String escape(String str) { if (str != null) { return str.replaceAll("\\s+|:|\\.+|~", "_"); } return null; } 所以这个是正常的逻辑处理吗? 所以这个是正常的逻辑处理吗? 之前我们使用flume上传日志,发现:有异常,替换为下划线了,如果不使用flume可以改下,但奇怪都是dot也需要替换,我们再确认下,你可以先去掉看看 所以这个是正常的逻辑处理吗? @cn-tingguo 这里有个hadoop文件合理标准,可以看下 https://hadoop.apache.org/docs/r3.2.2/hadoop-project-dist/hadoop-common/filesystem/model.html#File_references 但是我通过flume上传目录的时候是这种格式,hdfs 版本是2.6。 我注释掉住特殊字符转移的代码后还是会提示_log 路径不存在,是还有在其他抵挡校验吗 但是我通过flume上传目录的时候是这种格式,hdfs 版本是2.6。 我注释掉住特殊字符转移的代码后还是会提示_log 路径不存在,是还有在其他代码中校验吗 不应该,你clean下,另外使用新的数据看看, 只要改return str.replaceAll("\\s+|:|\\.+|~", "_"); 修改代码 return str.replaceAll("\s+|:|~", "_"); 重新编译,目前正常获取日志路径 good, nice! | | zhuangzebo | | @.*** | ---- 回复的原邮件 ---- | 发件人 | @.> | | 发送日期 | 2023年9月22日 17:04 | | 收件人 | @.> | | 抄送人 | @.> , @.> | | 主题 | Re: [cubefs/compass] [Question]: 日志路径解析的,为什么会出现_log结尾呢 (Issue #126) | 修改代码 return str.replaceAll("\s+|:|~", "_"); 重新编译,目前正常获取日志路径 — Reply to this email directly, view it on GitHub, or unsubscribe. You are receiving this because you commented.Message ID: @.***>
gharchive/issue
2023-09-22T04:57:07
2025-04-01T06:38:18.389881
{ "authors": [ "cn-tingguo", "zebozhuang" ], "repo": "cubefs/compass", "url": "https://github.com/cubefs/compass/issues/126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
128123795
Minimum text size: Only check text nodes with actual text in them

We're raising failures for elements that have no text (or just whitespace) in them.

Fixed in https://github.com/cucumber-ltd/bbc-a11y/pull/86
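A hedged sketch of the check described — illustrative, not the library's actual implementation:

```js
// Skip elements whose own text nodes are empty or whitespace-only before
// applying the minimum-text-size rule.
function hasRenderableText(element) {
  return Array.from(element.childNodes).some(
    (node) =>
      node.nodeType === Node.TEXT_NODE && node.textContent.trim().length > 0
  );
}

// e.g. only run the rule on qualifying elements:
// elements.filter(hasRenderableText).forEach(checkTextSize);
```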
gharchive/issue
2016-01-22T10:12:10
2025-04-01T06:38:18.409512
{ "authors": [ "joshski", "mattwynne" ], "repo": "cucumber-ltd/bbc-a11y", "url": "https://github.com/cucumber-ltd/bbc-a11y/issues/81", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }