Dataset columns:
id: string (lengths 4 to 10)
text: string (lengths 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
261135160
Number of stats and length of stat name should be configurable Envoy currently hardcodes a limit of 16k individual stats, with a fixed maximum name length. These values should be tunable. +1 Defaults should be settable at compile-time via .bazelrc (or similar), and overridable via CLI flag at startup. @hennna It is easy to make finding unused stats O(1) with a free-list or similar. But doing the name-lookups is a little bit harder in shared memory. We'll probably need a lookup table of some kind in the shared memory. If we want to do this, I would recommend doing it as a totally independent change from the length stuff. Given how low frequency stat allocation is from shared memory the current situation is probably not that big of a deal for most people. I was going to benchmark filling up a somewhat big number of stats, maybe 1M or something, and see how long it takes. That should simulate the worst likely-case, which is startup with a large configuration. But I'm expecting that would be pretty slow: n^2 is 1 trillion operations. But I agree that it can be done in a separate change. Making the sizes tunable won't cause any degradation to existing use cases unless users opt-in to a really large stat size, and they should immediately notice some pain on startup if they choose too-large of a size.
gharchive/issue
2017-09-27T22:16:04
2025-04-01T06:38:34.097497
{ "authors": [ "ggreenway", "htuch", "mattklein123" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/1761", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1664972971
Add Attributes connection.transport_failure_reason Title: Add Attributes connection.transport_failure_reason along with TLS connection errors Description: Describe the desired behavior, what scenario it enables and how it would be used. Proposed Attributes connection.transport_failure_reason: Currently upstream.transport_failure_reason is included in the attributes to indicate upstream transport failures such as a failed certificate validation. But there is no similar attribute for the downstream connection. Considering that a recent PR added the downstream transport failure reason to the access log, it can be added to the attributes as well for consistency. [optional Relevant Links:] Any extra documentation required to understand the issue. https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/advanced/attributes.html#attributes https://github.com/envoyproxy/envoy/pull/25322/files cc @kyessenov @mattklein123 could you add a "help wanted" tag to revive this issue?
gharchive/issue
2023-04-12T17:36:46
2025-04-01T06:38:34.102156
{ "authors": [ "XinyiZhangAws", "mattklein123", "ytsssun" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/26710", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2514216849
Test flake: //test/extensions/filters/http/ext_proc:ext_proc_integration_test https://dev.azure.com/cncf/envoy/_build/results?buildId=179508&view=logs&j=4930ecaf-18f4-5b3c-dea3-309729c3b3ae&t=573d8780-d7b9-52e3-b4e0-a89886b0b9ff&l=3840 [ RUN ] IpVersionsClientTypeDeferredProcessing/ExtProcIntegrationTest.GetAndCloseStreamWithTracing/IPv4_GoogleGrpc_WithDeferredProcessing test/extensions/filters/http/ext_proc/tracer_test_filter.cc:52: Failure Expected equality of these values: want Which is: "0" got Which is: "" grpc.status_code: 0 not found in tags: component: proxy status: canceled upstream_address: 127.0.0.1:37667 upstream_cluster: ext_proc_server_0 @tyxia @cainelli Do you mind taking a look at this? Thank you! I think tracing related feature was added by your change oh sorry about that. I will take a look into it this week. @tyxia the failure is a timeout and the tracing failure is a side effect. I don't see how such simple request would take more than 5s to run. test/extensions/filters/http/ext_proc/ext_proc_integration_test.cc:281: Failure Value of: fake_upstreams_[0]->waitForHttpConnection(*dispatcher_, fake_upstream_connection_) Actual: false (Timed out waiting for new connection.) Expected: true .... test/extensions/filters/http/ext_proc/ext_proc_integration_test.cc:277: Failure Expected equality of these values: std::to_string(status_code) Which is: "200" response.headers().getStatusValue() Which is: "504" Stack trace: 0x1578018: (unknown) 0x13cb8bc: (unknown) 0x7fc368411a4d: testing::internal::HandleSehExceptionsInMethodIfSupported<>() 0x7fc3683f822e: testing::internal::HandleExceptionsInMethodIfSupported<>() 0x7fc3683dfb1d: testing::Test::Run() 0x7fc3683e060e: testing::TestInfo::Run() ... Google Test internal frames ... actually, is the timeout 5ms? should we increase it a bit? https://github.com/envoyproxy/envoy/blob/7a7df5d8887dfe673eef51ce396feab4bff9383f/test/integration/http_integration.cc#L555-L556 @cainelli Thanks for spending effort reducing the flakiness. In the past, we have test (as you linked in slack) that have larger then 5s timeout. However, that is because Please don't waitForHttpConnection with a 5s timeout if failure is expected. In your case, failure is not expected. I am not sure if the flakiness is because tracing will take a bit more time but 5s should be sufficient here. Or maybe because ext_proc_integration test has grown very big now. Do you happen to know what is the flakiness rate? if you can't repro the flake (per slack) one thing you can do is add a LogLevelSetter in that test such that CI logs more information when it flakes. then next time we see a failure you'll have more information. One thing I've found often helps if you can't repro a flake is to run stress -c 16 (or however many CPU cores) in another terminal while the test runs with --runs_per_test=n. (Flags also depending on if the flakiness is from being CPU bound or network bound or disk bound.) Thank you all for the context and tips. One thing I've found often helps if you can't repro a flake is to run stress -c 16 (or however many CPU cores) in another terminal while the test runs with --runs_per_test=n. (Flags also depending on if the flakiness is from being CPU bound or network bound or disk bound.) I did tried that with various combinations to stress during the test but did not have any luck reproducing it. if you can't repro the flake (per slack) one thing you can do is add a LogLevelSetter in that test such that CI logs more information when it flakes. 
then next time we see a failure you'll have more information. I will try this path moving forward (https://github.com/envoyproxy/envoy/pull/36583).
gharchive/issue
2024-09-09T15:02:01
2025-04-01T06:38:34.109718
{ "authors": [ "alyssawilk", "cainelli", "ravenblackx", "tyxia" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/36041", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2626347837
Implement a Token Introspection (RFC 7662) HTTP Filter Token Introspection (RFC 7662): Implement an HTTP Filter to verify external tokens Description: Allow external OAuth2/OIDC tokens to be validated via the Token Introspection API so that Envoy can act as an Identity Aware Proxy (IAP). Relevant Links: https://datatracker.ietf.org/doc/html/rfc7662 https://www.oauth.com/oauth2-servers/token-introspection-endpoint/ Please get familiar with our extension policy: https://github.com/envoyproxy/envoy/blob/main/EXTENSION_POLICY.md cc @tyxia @mattklein123 @TAOXUY (as oauth, jwt extension owners who may be interested in this proposal)
gharchive/issue
2024-10-31T09:05:26
2025-04-01T06:38:34.113181
{ "authors": [ "nezdolik", "supercairos" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/36931", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
508523044
Redis through envoy gets high response time with redis-benchmark Hi, I’m deploying envoy redis in our environment and I’ve found out that it’s increasing the RTT of the 95 percentile requests to redis in about 7~ms, I have run redis-benchmark tool in both configurations, and through envoy the higher percents getting a much higher response time. results attached. this is the configuration file: static_resources: listeners: - address: socket_address: address: 0.0.0.0 port_value: 50051 filter_chains: - filters: - name: envoy.http_connection_manager config: codec_type: auto stat_prefix: ingress_http route_config: name: local_route virtual_hosts: - name: local_service domains: - "*" routes: - match: prefix: "/" route: cluster: local_service_grpc http_filters: - name: envoy.router config: {} - name: redis_nrt_listener address: socket_address: address: 0.0.0.0 port_value: 6379 filter_chains: - filters: - name: envoy.redis_proxy typed_config: "@type": type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy stat_prefix: egress_redis settings: op_timeout: 0.03s enable_redirection: true enable_hashtagging: true prefix_routes: catch_all_route: cluster: redis_nrt_cluster - name: redis_vol_listener address: socket_address: address: 0.0.0.0 port_value: 6380 filter_chains: - filters: - name: envoy.redis_proxy typed_config: "@type": type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy stat_prefix: egress_redis settings: op_timeout: 0.03s enable_redirection: true enable_hashtagging: true prefix_routes: catch_all_route: cluster: redis_vol_cluster clusters: - name: local_service_grpc connect_timeout: 0.250s type: logical_dns lb_policy: round_robin http2_protocol_options: {} health_checks: - timeout: 1s interval: 3s interval_jitter: 1s unhealthy_threshold: 3 healthy_threshold: 3 tcp_health_check: send: receive: [] hosts: - socket_address: address: router-us-east4-b-prod.ocddx.com port_value: 50051 - name: redis_vol_cluster connect_timeout: 1s type: strict_dns # static lb_policy: MAGLEV load_assignment: cluster_name: redis_cluster endpoints: - lb_endpoints: - endpoint: address: socket_address: address: redis-us-east4-b-prd.ocddx.com port_value: 6379 - name: redis_nrt_cluster connect_timeout: 1s type: strict_dns # static lb_policy: MAGLEV load_assignment: cluster_name: redis_cluster endpoints: - lb_endpoints: - endpoint: address: socket_address: address: redis-nrt-us-east4-b-prd.ocddx.com port_value: 6379 admin: access_log_path: "/var/log/envoy_admin_access.log" address: socket_address: address: 0.0.0.0 port_value: 9901 redis-benchmark running from container to the envoy sidecar: redis-benchmark -h collector_envoy -t set,get ====== SET ====== 100000 requests completed in 7.06 seconds 50 parallel clients 3 bytes payload keep alive: 1 4.92% <= 1 milliseconds 59.69% <= 2 milliseconds 73.69% <= 3 milliseconds 82.54% <= 4 milliseconds 88.11% <= 5 milliseconds 91.77% <= 6 milliseconds 94.26% <= 7 milliseconds 95.89% <= 8 milliseconds 97.18% <= 9 milliseconds 98.01% <= 10 milliseconds 98.58% <= 11 milliseconds 98.97% <= 12 milliseconds 99.22% <= 13 milliseconds 99.43% <= 14 milliseconds 99.58% <= 15 milliseconds 99.73% <= 16 milliseconds 99.78% <= 17 milliseconds 99.81% <= 18 milliseconds 99.88% <= 19 milliseconds 99.90% <= 20 milliseconds 99.90% <= 21 milliseconds 99.93% <= 22 milliseconds 99.97% <= 23 milliseconds 99.98% <= 24 milliseconds 99.98% <= 28 milliseconds 100.00% <= 29 milliseconds 14164.31 requests per second ====== GET ====== 100000 requests completed in 7.66 seconds 
50 parallel clients 3 bytes payload keep alive: 1 4.18% <= 1 milliseconds 55.05% <= 2 milliseconds 71.89% <= 3 milliseconds 81.03% <= 4 milliseconds 87.10% <= 5 milliseconds 90.96% <= 6 milliseconds 93.66% <= 7 milliseconds 95.49% <= 8 milliseconds 96.44% <= 9 milliseconds 97.22% <= 10 milliseconds 97.75% <= 11 milliseconds 98.34% <= 12 milliseconds 98.83% <= 13 milliseconds 99.15% <= 14 milliseconds 99.34% <= 15 milliseconds 99.49% <= 16 milliseconds 99.59% <= 17 milliseconds 99.69% <= 18 milliseconds 99.72% <= 19 milliseconds 99.81% <= 20 milliseconds 99.87% <= 21 milliseconds 99.89% <= 22 milliseconds 99.91% <= 23 milliseconds 99.92% <= 24 milliseconds 99.93% <= 25 milliseconds 99.96% <= 26 milliseconds 99.97% <= 27 milliseconds 99.97% <= 28 milliseconds 99.98% <= 29 milliseconds 99.98% <= 34 milliseconds 99.98% <= 35 milliseconds 99.99% <= 36 milliseconds 100.00% <= 37 milliseconds 13063.36 requests per second redis-benchmark running from container to a single redis host (part of a masters cluster): bash-4.4# redis-benchmark -h 10.240.15.147 -t set,get ====== SET ====== 100000 requests completed in 4.95 seconds 50 parallel clients 3 bytes payload keep alive: 1 11.96% <= 1 milliseconds 94.58% <= 2 milliseconds 96.80% <= 3 milliseconds 98.02% <= 4 milliseconds 98.75% <= 5 milliseconds 99.27% <= 6 milliseconds 99.58% <= 7 milliseconds 99.75% <= 8 milliseconds 99.89% <= 9 milliseconds 99.96% <= 10 milliseconds 99.97% <= 11 milliseconds 99.97% <= 12 milliseconds 99.99% <= 13 milliseconds 100.00% <= 13 milliseconds 20197.94 requests per second ====== GET ====== 100000 requests completed in 5.07 seconds 50 parallel clients 3 bytes payload keep alive: 1 15.41% <= 1 milliseconds 90.76% <= 2 milliseconds 95.60% <= 3 milliseconds 97.79% <= 4 milliseconds 98.70% <= 5 milliseconds 99.23% <= 6 milliseconds 99.60% <= 7 milliseconds 99.76% <= 8 milliseconds 99.82% <= 9 milliseconds 99.83% <= 11 milliseconds 99.87% <= 12 milliseconds 99.89% <= 13 milliseconds 99.91% <= 14 milliseconds 99.97% <= 15 milliseconds 99.99% <= 16 milliseconds 100.00% <= 16 milliseconds 19704.43 requests per second As you can see there's a huge different between the response times, I've been trying to change some configuration for example: type to logical_dns instead of strict_dns, remove the lb_type and add max_buffer_size_before_flush and buffer_flush_timeout and even change the dns to point to only one member of the redis cluster, the same host I checked in the second test, to ensure the reliability of the redis-benchmark test. I'd be glad if someone who using redis with envoy will do the same test I did and share the results, and if someone has any recommendations to solve this response time issue @zuercher please add a BUG label, seems like its happening also in other environments Are you introducing a network hop in the envoy case in your test? 
Here's the result for my test against a local docker container: root@8362fe3593b4:/# redis-benchmark -h redis-server -p 7001 -t get,set ====== SET ====== 100000 requests completed in 3.07 seconds 50 parallel clients 3 bytes payload keep alive: 1 84.07% <= 1 milliseconds 99.09% <= 2 milliseconds 99.84% <= 3 milliseconds 99.93% <= 4 milliseconds 99.94% <= 5 milliseconds 99.96% <= 6 milliseconds 99.96% <= 9 milliseconds 99.96% <= 10 milliseconds 99.97% <= 11 milliseconds 99.99% <= 26 milliseconds 100.00% <= 27 milliseconds 100.00% <= 27 milliseconds 32626.43 requests per second ====== GET ====== 100000 requests completed in 3.04 seconds 50 parallel clients 3 bytes payload keep alive: 1 85.27% <= 1 milliseconds 98.79% <= 2 milliseconds 99.85% <= 3 milliseconds 99.97% <= 4 milliseconds 99.97% <= 9 milliseconds 99.99% <= 10 milliseconds 100.00% <= 11 milliseconds 32894.74 requests per second root@8362fe3593b4:/# redis-benchmark -p 6381 -t get,set ====== SET ====== 100000 requests completed in 3.78 seconds 50 parallel clients 3 bytes payload keep alive: 1 0.00% <= -32 milliseconds 0.01% <= -30 milliseconds 0.02% <= -29 milliseconds 0.03% <= -28 milliseconds 0.03% <= -26 milliseconds 0.04% <= 0 milliseconds 24.19% <= 1 milliseconds 89.52% <= 2 milliseconds 99.04% <= 3 milliseconds 99.78% <= 4 milliseconds 99.82% <= 5 milliseconds 99.84% <= 6 milliseconds 99.85% <= 7 milliseconds 99.87% <= 8 milliseconds 99.89% <= 10 milliseconds 99.94% <= 11 milliseconds 99.96% <= 12 milliseconds 99.97% <= 13 milliseconds 99.98% <= 14 milliseconds 99.98% <= 15 milliseconds 99.98% <= 16 milliseconds 99.99% <= 19 milliseconds 100.00% <= 20 milliseconds 26462.03 requests per second ====== GET ====== 100000 requests completed in 3.05 seconds 50 parallel clients 3 bytes payload keep alive: 1 28.07% <= 1 milliseconds 96.50% <= 2 milliseconds 99.51% <= 3 milliseconds 99.80% <= 4 milliseconds 99.95% <= 5 milliseconds 100.00% <= 6 milliseconds 100.00% <= 6 milliseconds 32797.64 requests per second Here's the relevant section of envoy.yaml: static_resources: listeners: - name: listener_1 address: socket_address: address: 127.0.0.1 port_value: 6381 filter_chains: filters: name: envoy.redis_proxy config: stat_prefix: redis_stats prefix_routes: catch_all_route: cluster: cluster_1 settings: op_timeout: 5s clusters: - name: cluster_1 connect_timeout: 0.25s lb_policy: RING_HASH hosts: - socket_address: address: redis-server port_value: 7001 type: STRICT_DNS envoy is a sidecar container on the application I ran both tests on the envoy container to get an apple to apple measurement. @mosespx i am facing the same issue. Did you got any solution or workaround? @saagar241290 no, I didn't use this solution because of this issue. please share here if you find something interesting @mosespx I tried by increasing number of connections of redis pool to 100 and it gave me a better performance. Earlier there was only a single connection. @saagar241290 when you says tried by increasing number of connections of redis pool to 100 and it gave me a better performance. Earlier there was only a single connection. Are these connections on client?
gharchive/issue
2019-10-17T14:39:47
2025-04-01T06:38:34.132482
{ "authors": [ "HenryYYang", "mosespx", "ramaraochavali", "saagar241290" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/8644", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
522910625
Pre-startup checks for Windows: log if system variables need changing. Pre-startup checks: check OS variables before startup on Windows. Description: As a fix for issue https://github.com/envoyproxy/envoy/issues/7130, PR #8600 introduces an interface and a POSIX implementation that runs platform-specific checks before startup. The win32 implementation source/exe/win32/platform_checks.cc adds a no-op which should be implemented. #7130 is fixed via #9098, which just adds user documentation instead of moving checks into the Envoy source code.
gharchive/issue
2019-11-14T14:54:41
2025-04-01T06:38:34.135706
{ "authors": [ "sriduth" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/9025", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
538217268
API Request: Add HttpFilter.instance_name HttpFilter.name is used to instantiate a specific type of filter by http_connection_manager In that sense HttpFilter.name is the className parameter. In filters like WASM and lua, a filter with the same name is deployed multiple times. This makes it difficult to identify a specific filter instance without resorting to peeking inside HttpFilter.typed_config. Add message HttpFilter { // The name of the filter to instantiate. The name must match a // :ref:`supported filter <config_http_filters>`. string name = 1 [(validate.rules).string = {min_bytes: 1}]; string instance_name = 5; // ... } Using instance_name, the filter can be identified in a meaningful way. In the following example, without instance_name=wasm.stats both filters have the same name envoy.filters.http.wasm. filters: - name: envoy.http_connection_manager typed_config: '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager forward_client_cert_details: APPEND_FORWARD generate_request_id: true http_filters: - instance_name: wasm.stats name: envoy.filters.http.wasm config: config: root_id: stats_inbound vm_config: code: inline_string: envoy.wasm.stats runtime: envoy.wasm.runtime.null vm_id: stats_inbound - instance_name: wasm.metadata_exchange name: envoy.filters.http.wasm config: config: vm_config: code: inline_string: envoy.wasm.metadata_exchange runtime: envoy.wasm.runtime.null Here @kyessenov @rshriram My initial sniff test is that this is not generally useful, and I don't see why it's that bad to peak inside the typed_config, but happy to be swayed if there is a convincing argument. I think this would be useful for Filter Config discovery service. Having a common field across all filter configs helps in discovery and monitoring IMHO. I don't think it's possible to express that Any in the typed config must have a specific field, so pulling it up one level to filter config seems reasonable: # static config name: envoy.http.wasm config_name: wasm1 typed_config: {} # dynamic config name: envoy.http.wasm config_name: wasm1 config_source: ads: {} In the case of FCDS, we will be moving config up to a oneof, and then the config itself will have to have a name and a config source, so I think it would be covered there? Yeah, it can be done either way. But it would help us to have config name to be a peer of typed_config instead of being nested in a oneof peer. This is because we run multiple transformation passes, and having a name in the xDS helps with identifying the config/filter instance. This is consistent with the rest of xDS where each resource has a name in its proto. @htuch any thoughts on ^? I think this is useful in the context of FCDS, WASM and tooling (or control planes) that operate on opaque config (i.e. they can't peek inside). Here's an interesting thought for v3; since we will no longer have untyped Struct, and will have a world of only TypeStruct and Any, and every extension should have its own unique config proto, we should in face be able to get rid of the need to have any filter type. I.e. you don't need to write envoy.wasm, your use of the WASM config option implies that via the embedded type URL. This means that the name field could be used arbitrarily for user purposes in v3+. @htuch This would work if every filter config is unique per filter. I think there may be cases where two filters share a proto for the config. Not sure if that's something dis-allowed already. 
One more use case for the control plane to operate on this config in an opaque way is to be able to do partial ordering between different filters. I.e. if control plane is provided with 10 "envoy.wasm" filters, there is no meaningful way to describe a relative order between them. @htuch, just having "name" as a unique arbitrary name and rely on type for the actual type in v3 works as well. Partial order is one of the motivating use case here. Peeking inside requires specific knowledge of the filter. Changing the meaning of “name” from type_name to unique_name seems risky, though it will work. If we add instance_name / config_name field we can actually add it to both v2 and v3 For xDS v2, we can relax the constraint that the config message must match the filter name (as long as it's not a regular struct). That would allow arbitrary names without a breaking change. This instance_name which will be referenced by FCDS. Looks like in the cpp code the expression would be resource.XX_name(). Maybe calling it resource_name()? It would be good to have a v2 xDS solution here, but we need to be very clear what the semantics are if we reuse the field, i.e. it should only be possible if fully unambiguously typed configuration is otherwise present. It also might surprise some folks, as they may have built validators in their config pipeline to ensure consistency of name and config. We could also add a filter_type field that unambiguously denotes the type. If the type field is specified then name can be free form. @htuch In the linked PR, we can infer the name of the extension from the protobuf type for most cases. There are just two exceptions: Empty, which is being solved separately. Migrated APIs. I think we need this information regardless, but it seems clear that all versions of configs should be distinct. I agree about the surprise effect. Fortunately that only happens with the invalid config, e.g. some invalid config might become valid since name is not significant. I actually quite like the name/typed_config pattern. It applies in many places across the code base. @kyessenov what do you reckon the state of this issue is? I think this is resolved. If typed_config is used with the extension-specific type, name can be set to anything. I've updated the unit tests https://github.com/envoyproxy/envoy/pull/10071, https://github.com/envoyproxy/envoy/pull/10122, https://github.com/envoyproxy/envoy/pull/10130 Ack, thanks @kyessenov for the rad contribution here, closing.
gharchive/issue
2019-12-16T06:44:54
2025-04-01T06:38:34.148945
{ "authors": [ "alexburnos", "htuch", "kyessenov", "lambdai", "mandarjog", "mattklein123" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/9358", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
735729040
[fuzz] Got rid of all uninteresting call logs in health check fuzzing Signed-off-by: Zach Reyes zasweq@google.com Commit Message: Got rid of all uninteresting call logs in health check fuzzing Additional Description: Uninteresting call logs were slowing down my health check fuzzer in oss-fuzz. This PR gets rid of all uninteresting call logs by wrapping applicable objects in NiceMocks. However, since at the beginning of my internship I refactored the unit tests to also use fuzz tests, I put the unit test classes back in test/common/upstream:health_checker_impl_test.cc, and renamed test/common/upstream:health_checker_impl_test_utils to health_check_fuzz_test_utils. No loss in coverage over source/common/upstream/health_checker_impl.cc. Speed up to 30 exec/sec on my cloudtop instance. Risk Level: Low /assign @asraa @htuch @adisuissa Thanks! What was the before of the speed on cloudtop? Honestly Asra it was same speed, 30 exec/sec.
gharchive/pull-request
2020-11-04T00:44:01
2025-04-01T06:38:34.153165
{ "authors": [ "asraa", "zasweq" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/13891", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
919796648
docs: Use yaml build config for vars Signed-off-by: Ryan Northey ryan@synca.io Commit Message: docs: Use yaml build config for vars Additional Description: Another breakout from #15229 This adds a build configuration file for vars passed through to sphinx Risk Level: Testing: Docs Changes: Release Notes: Platform Specific Features: [Optional Runtime guard:] [Optional Fixes #Issue] [Optional Deprecated:] [Optional API Considerations:] I think this looks good, other than the question re: descriptor_path parameter in the validating code block. lgtm
gharchive/pull-request
2021-06-13T13:14:01
2025-04-01T06:38:34.156560
{ "authors": [ "dmitri-d", "phlax" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/16959", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
951740251
bazel: remove old luajit workaround According to https://luajit.org/install.html Important: this relates to LuaJIT 2.0 only — use LuaJIT 2.1 to avoid these complications. Since we have updated past 2.1 we shouldn't need these anymore which is great since it breaks on Apple Silicon https://github.com/envoyproxy/envoy/issues/16482#issuecomment-846439439 Signed-off-by: Keith Smiley keithbsmiley@gmail.com @moderation wdyt? LGTM. I commented these lines out when I got M1 building a while back - https://github.com/envoyproxy/envoy/issues/16482#issuecomment-846439439 Removing as we don't require makes sense. MacOS CI failing however Yea I just noticed that we can probably remove them instead. Turns out I can't let the options fallthrough, hopefully green now
gharchive/pull-request
2021-07-23T16:48:45
2025-04-01T06:38:34.159873
{ "authors": [ "keith", "moderation" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/17466", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
959392346
conn pool: default enable runtime feature conn_pool_delete_when_idle Signed-off-by: Greg Greenway ggreenway@apple.com Commit Message: This enables the new behavior (clean up conn pools when they're idle, to avoid leaking memory in some configurations) from #17403 by default. It can still be disabled by setting runtime feature envoy.reloadable_features.conn_pool_delete_when_idle to false. Additional Description: Risk Level: Medium Testing: Test coverage was added in #17403. Docs Changes: Release Notes: Added in #17403 Platform Specific Features: [Optional Runtime guard:] [Optional Fixes #Issue] [Optional Deprecated:] [Optional API Considerations:] have we smoke tested somewhere yet? have we smoke tested somewhere yet? It's still the same code (minus one possible crash when a cluster is removed via CDS) that @rgs1 smoke tested awhile ago. have we smoke tested somewhere yet? It's still the same code (minus one possible crash when a cluster is removed via CDS) that @rgs1 smoke tested awhile ago. ... tested with the new tcp conn pool, whereas the additional crashers were with the old pool fwiw ... Ah cool, didn't realize the prior version had been canaried. Just to check my memory, the folks encountering tcp proxy crashes didn't provide additional data, and agreed they should switch back to the new pool in any case right? If so LGTM-as-long-as-you-cc-them because it's as safe as it's going to get (folks shouldn't be using the old pool without informing us the new one is problematic) @bianpengyuan FYI this change, that you reported a crash in #16948, is being reintroduced. Looking at that report again, it's very possible that it was the same crash fixed in #17522. Not enough information to know for sure, but it's a possible match, so it may be fixed. coverage test flake; unrelated: 2021-08-03T20:04:16.6349000Z test/extensions/transport_sockets/starttls/starttls_integration_test.cc:329: Failure 2021-08-03T20:04:16.6350199Z Value of: test_server_->server().listenerManager().numConnections() 2021-08-03T20:04:16.6350878Z Expected: is equal to 1 2021-08-03T20:04:16.6351556Z Actual: 0 (of type unsigned long) 2021-08-03T20:04:16.6352267Z Stack trace: 2021-08-03T20:04:16.6352891Z 0x454827: (unknown) 2021-08-03T20:04:16.6353611Z 0x7f6ad1696d96: testing::internal::HandleSehExceptionsInMethodIfSupported<>() 2021-08-03T20:04:16.6354475Z 0x7f6ad167b701: testing::internal::HandleExceptionsInMethodIfSupported<>() 2021-08-03T20:04:16.6355200Z 0x7f6ad1663042: testing::Test::Run() 2021-08-03T20:04:16.6355864Z 0x7f6ad1663b58: testing::TestInfo::Run() 2021-08-03T20:04:16.6356462Z ... Google Test internal frames ...``` /retest /retest
gharchive/pull-request
2021-08-03T18:47:49
2025-04-01T06:38:34.166867
{ "authors": [ "alyssawilk", "ggreenway", "rgs1" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/17577", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
384420243
reformat dynamic metadata emitted by Mongo proxy Description: Emit metadata as map<resource, list(operations)> so that it can be used in metadata matchers easily. The existing format (messages: list(structs)) is too hard to represent in metadata matchers. Risk Level: LOW Testing: Unit tests Signed-off-by: Shriram Rajagopalan shriramr@vmware.com cc @venilnoronha @dio the PR that implemented this was merged yesterday :). So users have not seen this stuff yet. So version history doesn't exist.
gharchive/pull-request
2018-11-26T16:51:50
2025-04-01T06:38:34.169363
{ "authors": [ "rshriram" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/5117", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1942846397
Global downstream connection limits. Add an overload manager configuration to limit the total number of downstream connections https://www.envoyproxy.io/docs/envoy/latest/configuration/operations/overload_manager/overload_manager#limiting-active-connections It appears that the functionality is incomplete and unsuitable for production use. Should we enable it? @tmsnan I'm fine waiting, but at the moment we don't have any way of enabling the overload manager @arkodg If possible, I could first add other overload manager features that are already fully supported. @tmsnan sure, imo the others like the heap-size-based one will require an API addition, putting the burden of getting it right on the user. Was trying to use this GH issue to enable sensible defaults @arkodg Maybe we can refer to the Google VRP edge server configuration:

overload_manager:
  refresh_interval: 0.25s
  resource_monitors:
    - name: "envoy.resource_monitors.fixed_heap"
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.resource_monitors.fixed_heap.v3.FixedHeapConfig
        # TODO: Tune for your system.
        max_heap_size_bytes: 2147483648 # 2 GiB
  actions:
    - name: "envoy.overload_actions.shrink_heap"
      triggers:
        - name: "envoy.resource_monitors.fixed_heap"
          threshold:
            value: 0.95
    - name: "envoy.overload_actions.stop_accepting_requests"
      triggers:
        - name: "envoy.resource_monitors.fixed_heap"
          threshold:
            value: 0.98

https://www.envoyproxy.io/docs/envoy/latest/configuration/best_practices/edge#best-practices-edge Please assign me
gharchive/issue
2023-10-14T01:05:39
2025-04-01T06:38:34.176495
{ "authors": [ "arkodg", "shahar-h", "tmsnan" ], "repo": "envoyproxy/gateway", "url": "https://github.com/envoyproxy/gateway/issues/1966", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2084862185
feat(cors): Allowed more wildcard options A few weeks ago the allowed CORS origins were changed from a regex to a wildcard notation (#2389). Implementation-wise all kinds of wildcards are supported; however, the validation regex on the SecurityPolicy CRD limits the CORS options to hostnames prefixed with a wildcard followed by a dot, allowing all subdomains of that host. This reduces the freedom when allowing cross origins a lot compared to how it was before. This PR aims to relax the validation regex a bit to enable the following use cases: Allowing all hosts of a specific scheme (https://*) Allowing all hosts regardless of the scheme (*) Allowing all ports of a specific host (http://localhost:*) While allowing all hosts in the context of CORS might sound a bit hacky, this is sometimes required. For instance when a web service provides an API which is consumed by many third-party web applications hosted under arbitrary domains not under the control of the maintainer of the aforementioned web service. In addition to that it can be very useful during application development. This is why I have added the option to allow all ports of a specific host as well. Review the new and the old validation regexes. @jaynis can you sign your commits and repush? DCO is failing @jaynis Thanks for the improvement in the wildcard host matching. The implementation looks good to me. I only have a little hesitation about the port wildcard matching. Suffix/Port wildcard matching is not a common practice for hostnames. Do you have many ports for a given hostname? Thank you for your review @zhaohuabing. I only have a little hesitation about the port wildcard matching. Suffix/Port wildcard matching is not a common practice for hostnames. In your use case, do you have many ports for a given hostname? The port range matching was solely meant to be a dev feature so that one can configure CORS for a host (e.g. localhost) regardless of the port the application runs on. But this scenario could be covered by the general wildcard as well, therefore I would also be fine with deleting it again if you think it is not required. Just let me know your preference. Thank you for your review @zhaohuabing. I only have a little hesitation about the port wildcard matching. Suffix/Port wildcard matching is not a common practice for hostnames. In your use case, do you have many ports for a given hostname? The port range matching was solely meant to be a dev feature so that one can configure CORS for a host (e.g. localhost) regardless of the port the application runs on. But this scenario could be covered by the general wildcard as well, therefore I would also be fine with deleting it again if you think it is not required. Just let me know your preference. Prefer to remove the suffix matching to keep it aligned with the common practice. Thanks.
gharchive/pull-request
2024-01-16T20:35:09
2025-04-01T06:38:34.183618
{ "authors": [ "arkodg", "jaynis", "zhaohuabing" ], "repo": "envoyproxy/gateway", "url": "https://github.com/envoyproxy/gateway/pull/2453", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
103241761
Enyo 2171 Add tabindex for Item to prevent accessibility timing issue Issue The screen reader sometimes does not read a child component's content when Item is focused. Cause Item may have components as children. However, sometimes the screen reader does not read a child component's content because Item receives its tabindex later than the child does. To prevent this timing issue, I add a tabindex to Item. Fix Add tabindex in ariaObservers. Enyo-DCO-1.1-Signed-off-by: Bongsub Kim bongsub.kim@lgepartner.com I will re-create the PR with the latest code.
gharchive/pull-request
2015-08-26T11:23:29
2025-04-01T06:38:34.223972
{ "authors": [ "kbs12e" ], "repo": "enyojs/moonstone", "url": "https://github.com/enyojs/moonstone/pull/2465", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1000814437
[QUESTION] Path still gets generated even when told not to Path still gets generated in the swagger.dart file when specified like the following in the build.yaml file:

targets:
  $default:
    sources:
      - lib/**
      - $package$
    builders:
      chopper_generator:
        options:
          header: "//Generated code"
      swagger_dart_code_generator:
        options:
          input_folder: "lib/"
          output_folder: "lib/swagger_generated_code/"
          exclude_paths:
            - "/api/mobile/actuator/"

Am I using the wrong syntax or is this an actual bug? hi @dtaskai , exclude_path and include_path are regex strings. To make your path excluded, you need to add something like this: \/api\/mobile\/actuator\/ Please use a regex validator to check whether your string matches or not. For example, you can use this validator: https://regex101.com/ @dtaskai , if something is not clear - please let us know The exclusion didn't work on my project even after using regex syntax, so I have tried it on the example project: Added an exclusion to /rooms

swagger_dart_code_generator:
  options:
    input_folder: "lib/"
    output_folder: "lib/swagger_generated_code/"
    exclude_paths:
      - "\/rooms"

Ran flutter build run build_runner build Then it still generated the code for /rooms

@Get(path: '/rooms')
Future<chopper.Response<List<Room>>> roomsGet({@Query('id') required String? id});

Ok good let me check it @dtaskai Yep you're right. We removed it in the 2+ version. Let me fix. @dtaskai Please try it on the latest version. Also you can put just /rooms in exclude_path. It works. The latest version is 2.1.3+2 Works on both the example and my personal project, thank you!
gharchive/issue
2021-09-20T10:37:52
2025-04-01T06:38:34.277567
{ "authors": [ "Vovanella95", "dtaskai" ], "repo": "epam-cross-platform-lab/swagger-dart-code-generator", "url": "https://github.com/epam-cross-platform-lab/swagger-dart-code-generator/issues/245", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1438145424
[QUESTION] How can I config worker-pool on Chopper? Please describe a problem. The below is Chopper worker poll example. /// inspired by https://github.com/d-markey/squadron_sample/blob/main/lib/main.dart void initSquadron(String id) { Squadron.setId(id); Squadron.setLogger(ConsoleSquadronLogger()); Squadron.logLevel = SquadronLogLevel.all; Squadron.debugMode = true; } Future<void> main() async { /// initialize Squadron before using it initSquadron('worker_pool_example'); final jsonDecodeServiceWorkerPool = JsonDecodeServiceWorkerPool( // Set whatever you want here concurrencySettings: ConcurrencySettings.oneCpuThread, ); /// start the Worker Pool await jsonDecodeServiceWorkerPool.start(); /// Instantiate the JsonConverter from above final converter = JsonSerializableWorkerPoolConverter( { Resource: Resource.fromJsonFactory, }, /// make sure to provide the WorkerPool to the JsonConverter jsonDecodeServiceWorkerPool, ); /// Instantiate a ChopperClient final chopper = ChopperClient( client: client, baseUrl: 'http://localhost:8000', // bind your object factories here converter: converter, errorConverter: converter, services: [ // the generated service MyService.create(), ], /* ResponseInterceptorFunc | RequestInterceptorFunc | ResponseInterceptor | RequestInterceptor */ interceptors: [authHeader], ); /// Do stuff with myService final myService = chopper.getService<MyService>(); /// ...stuff... /// stop the Worker Pool once done jsonDecodeServiceWorkerPool.stop(); } Describe the solution you'd like How can I config worker-poll on Chopper? Hi @dfdgsdfg , Unfortunately I have no experience with WorkerPool. We just generation swagger code. If you have an idea, how to generate it - let us know.
gharchive/issue
2022-11-07T10:45:18
2025-04-01T06:38:34.280296
{ "authors": [ "Vovanella95", "dfdgsdfg" ], "repo": "epam-cross-platform-lab/swagger-dart-code-generator", "url": "https://github.com/epam-cross-platform-lab/swagger-dart-code-generator/issues/483", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2697974744
Background for items in context menu opened from chat header should be blue (now green) EPAM AI DIAL chat version 0.21.0 What happened? Actual: Expected: on the central part items are highlighted using blue color Example: Confidential information [X] I confirm that do not share any confidential information verified on staging successfully
gharchive/issue
2024-11-27T10:13:16
2025-04-01T06:38:34.284128
{ "authors": [ "VolhaBazhkova", "YauheniyaH" ], "repo": "epam/ai-dial-chat", "url": "https://github.com/epam/ai-dial-chat/issues/2678", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2595382305
fix(chat): Update the phrase "Not allowed model selected. Please, change the model to proceed" (Issue #2363) Description: Update the phrase "Not allowed model selected. Please, change the model to proceed" Issues: Issue #2363 Checklist: [x] the pull request name complies with Conventional Commits [x] the pull request name starts with fix(<scope>):, feat(<scope>):, feature(<scope>):, chore(<scope>):, hotfix(<scope>): or e2e(<scope>):. If contains breaking changes then the pull request name must start with fix(<scope>)!:, feat(<scope>)!:, feature(<scope>)!:, chore(<scope>)!:, hotfix(<scope>)!: or e2e(<scope>)!: where <scope> is name of affected project: chat, chat-e2e, overlay, shared, sandbox-overlay, etc. [x] the pull request name ends with (Issue #<TICKET_ID>) (comma-separated list of issues) [x] I confirm that do not share any confidential information like API keys or any other secrets and private URLs /deploy-review /deploy-review
gharchive/pull-request
2024-10-17T17:56:05
2025-04-01T06:38:34.289251
{ "authors": [ "Derikyan" ], "repo": "epam/ai-dial-chat", "url": "https://github.com/epam/ai-dial-chat/pull/2393", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1758757955
Enable SAST scan for Tekton pipelines As an EDP user, I would like to be able to use SAST scanning out of the box for tekton pipelines. Acceptance Criteria: SAST scan available out of the box for Tekton pipelines; Enable only for build pipelines; We have recently implemented a static application security testing feature for our EDP frameworks on build pipelines using DefectDojo. This feature is available for application templates Python (Python 3.8, FastAPI, Flask) Go (Beego, Gin) JavaScript (React, Vue, Angular, Next.js, Express) Java (Maven, Gradle) C# (.Net 3.1, .Net6.0) As well as library templates: Python (Python 3.8, FastAPI, Flask) JavaScript (React, Vue, Angular, Next.js, Express) Java (Maven, Gradle) C# (.Net 3.1, .Net6.0) This implementation will allow for improved security testing measures throughout our development process and ultimately result in higher-quality applications and libraries.
gharchive/issue
2023-06-15T12:48:33
2025-04-01T06:38:34.292991
{ "authors": [ "NikolayMarusenko", "Rolika4" ], "repo": "epam/edp-install", "url": "https://github.com/epam/edp-install/issues/27", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
126428336
Reference to undefined pattern comment I tried converting http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd to rnc using this stylesheet and then trang. However, when I tried to use the .rnc file in nxml-mode in emacs, it said: nxml-display-file-parse-error: Reference to undefined pattern comment. My XSD is a bit rusty but it looks to me like comment is defined in the .xsd file, but is not defined in the .rng or .rnc files. I managed to workaround this by specifying the start parameter. I was able to get this schema to work first by applying greenrd's sed script to the xsd file, renaming it with 'mod', then using xsltproc --stringparam start databaseChangeLog XSDtoRNG.xsl dbchangelog-3.1.mod.xsd > dbchangelog-3.1.rng. I then converted it to rnc with trang.
gharchive/issue
2016-01-13T14:25:58
2025-04-01T06:38:34.303260
{ "authors": [ "dwhoman", "greenrd" ], "repo": "epiasini/XSDtoRNG", "url": "https://github.com/epiasini/XSDtoRNG/issues/20", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
527031031
Pagination Documentation I think a pagination entry is needed in the documentation. It works really nicely and it's a shame if someone overlooks it. Using it right now with markup-tables. Thanks a lot for this wonderful template. Thanks for the suggestion. We're going to add docs as a part of vuestic-ui. Here's some work that isn't ready for feedback yet: http://vuestic-ui-develop-docs.sub.asva.by/components/VaPagination.html :). Hi, any news on this? I am looking forward to using the pagination entry in this template. Thank you :D Here's the new link: https://vuestic.dev/en/ui-elements/pagination. Things are very close to release. We'll update vuestic-admin with vuestic-ui in time.
gharchive/issue
2019-11-22T07:19:21
2025-04-01T06:38:34.306157
{ "authors": [ "Heavenwalker", "asvae", "haizad" ], "repo": "epicmaxco/vuestic-admin", "url": "https://github.com/epicmaxco/vuestic-admin/issues/680", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
854256392
Split tests ... ... for parallelization, more focused test areas, and speedup. To be tackled post-MVP. We have tests that install/uninstall Epinio or some components. As far as I can tell, these are the features tests (which enable/disable in-cluster services etc). There are also other tests that enable the components if they are not there already but never disable them. That means we could simply enable those components in the BeforeSuite block. All other tests can safely run against the same Epinio instance (and the same Kubernetes cluster). That means we don't need GINKGO_NODES number of clusters but just one. Enabling in-cluster services is optional because it will probably never be used in production environments. The gke service is optional because it needs configuration (auth) that only the users who want to use Google Cloud will have available, thus it makes no sense to do it in epinio install. We need to find a way to run the tests that mutate the cluster serially somehow and on a separate cluster (ginkgo doesn't seem to support this yet: https://github.com/onsi/ginkgo/issues/526). We can run all the rest in parallel on the same cluster. This limits the number of clusters we need to just 2. The mutating tests are not expected to grow as much as the other tests so it's sane to expect this to keep working for a while. An option would be to separate the mutating tests into a new test suite.
gharchive/issue
2021-04-09T07:34:50
2025-04-01T06:38:34.337441
{ "authors": [ "jimmykarily", "kkaempf" ], "repo": "epinio/epinio", "url": "https://github.com/epinio/epinio/issues/263", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
732598332
[v0.5] Epicli does not create PostgreSQL SET_BY_AUTOMATION values correctly Rename SET_BY_AUTOMATION to AUTOCONFIGURED for shared_preload_libraries /azp run /azp run
gharchive/pull-request
2020-10-29T19:35:17
2025-04-01T06:38:34.340008
{ "authors": [ "to-bar" ], "repo": "epiphany-platform/epiphany", "url": "https://github.com/epiphany-platform/epiphany/pull/1811", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
386537120
Optional property How to make an optional property ? They are optional by default. Please see JSON schema spec
gharchive/issue
2018-12-02T07:59:10
2025-04-01T06:38:34.350393
{ "authors": [ "DavidIzaac", "epoberezkin" ], "repo": "epoberezkin/ajv", "url": "https://github.com/epoberezkin/ajv/issues/894", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
641889333
fix overlaps check The isBetween function did not return true when either the top or bottom of the cement was equal to the top or bottom of the hole. (The check was < or >, it needs to be <= and >=.) Think I might have simplified it a little too much. Will check and update the PR before review is needed. Fixed the overlap check and now all cements we are testing with in wellx-designer render correctly. Added tests.
gharchive/pull-request
2020-06-19T11:02:50
2025-04-01T06:38:34.419972
{ "authors": [ "ooystein" ], "repo": "equinor/esv-intersection", "url": "https://github.com/equinor/esv-intersection/pull/336", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2521437942
refactor!: simplify subnet configuration BREAKING CHANGE: remove subnet object properties network_security_group, route_table and nat_gateway. Add subnet object properties security_group_id, route_table_id and nat_gateway_id. Depends on hashicorp/terraform-provider-azurerm#27199
gharchive/pull-request
2024-09-12T06:09:09
2025-04-01T06:38:34.426320
{ "authors": [ "hknutsen" ], "repo": "equinor/terraform-azurerm-network", "url": "https://github.com/equinor/terraform-azurerm-network/pull/78", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
301166680
LALR parser throws an UnexpectedToken exception with optional string I have written a grammar that successfully passes my test cases with the default parser. Now I am trying to convert it to an LALR parser. But this parser throws an exception when parsing a fairly simple sentence. Expected Behavior Lark should either give an error that certain constructions in the grammar are not allowed for LALR parsing or parse the grammar correctly. Current Behavior I have the following grammar rule: qgate : "QGate[" STRING "]" ["*"] "(" INT ")" which I apply to the following sentence QGate["not"](0) This worked successfully with the normal parser, but with the LALR parser I get an error Error Traceback (most recent call last): File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parsers/lalr_parser.py", line 46, in get_action return states[state][key] KeyError: '__ANONSTR_8' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 59, in testPartExecutor yield File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/unittest/case.py", line 605, in run testMethod() File "/Users/eddie/dev/quippy/test_quipper_parser.py", line 44, in test_gatelist_qgate parsed = parser.parse(basic_text) File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/lark.py", line 197, in parse return self.parser.parse(text) File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parser_frontends.py", line 37, in parse return self.parser.parse(token_stream) File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parsers/lalr_parser.py", line 73, in parse action, arg = get_action(token.type) File "/Users/eddie/dev/quippy/.venv/lib/python3.6/site-packages/lark/parsers/lalr_parser.py", line 50, in get_action raise UnexpectedToken(token, expected, seq, i) lark.common.UnexpectedToken: Unexpected token Token(__ANONSTR_8, '](') at line 1, column 11. Expected: dict_keys(['__RSQB']) Context: <no context> When I modify the text (by adding a '*') to QGate["not"]*(0) it parses successfully. Alternatively, I can change the grammar rule to qgate : "QGate[" STRING "](" INT ")" and that also works. It would seem to me that an optional symbol should be possible in an LR grammar (please correct me if I'm wrong), so where does this go awry? Environment OS: MacOS 10.13 Lark: 0.5.4 Okay, so here's the reason for the error. In LALR, the lexer is by design deterministic. So if it has two terminals somewhere in the grammar, ] and ](, and it sees ]( in the input, it has to choose between them, and cannot try both. By default it chooses the longer one, though you can change that with priority. What I'm saying is, somewhere else in the grammar there's a ]( terminal, and I suggest you break it into ] and (. Yes, thank you. It is clear I needed to learn how a lexer behaves.
gharchive/issue
2018-02-28T20:14:11
2025-04-01T06:38:34.475923
{ "authors": [ "eddieschoute", "erezsh" ], "repo": "erezsh/lark", "url": "https://github.com/erezsh/lark/issues/98", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2182957835
RuntimeError: expected scalar type BFloat16 but found Float when i run train_eval.py and do the evaluation stage1, i met this question and have no solution. Can you provide more error details? This part seems an auto-casting issue, which should be automatically handled by lightening.
gharchive/issue
2024-03-13T01:46:03
2025-04-01T06:38:34.497913
{ "authors": [ "KzZheng", "oldwangggggg" ], "repo": "eric-ai-lab/MiniGPT-5", "url": "https://github.com/eric-ai-lab/MiniGPT-5/issues/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
400016478
Error Hi there, is there any thing to do in this case? pylinkedin.exceptions.ServerIpBlacklisted: Linkedin blacklists ips for unauthentified http requests, Aws, Digital Ocean Hey, LinkedIn blacklists all IPs from major cloud providers so the only way i see you can use it is by using a different cloud provider. There are many - Vultr, Scaleway, OVH and many more. I haven't tried running it on any VPS. You can find great offers on lowendbox. Do tell which one works. Getting the same issue as @renatoluz
gharchive/issue
2019-01-16T21:58:06
2025-04-01T06:38:34.536712
{ "authors": [ "conorg763", "nithinkashyapn", "renatoluz" ], "repo": "ericfourrier/scrape-linkedin", "url": "https://github.com/ericfourrier/scrape-linkedin/issues/17", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
339700724
Twing throws: "SyntaxError: missing ) after argument list" whenever it encounters a ` character. Hi, it took me a while to track down this issue as I'm currently porting a large project to Twing from Twig.js to be more compatible with our TwigPHP environment. After a lot of digging I tracked down a "SyntaxError: missing ) after argument list" error to any template that contains a ` character, no matter if the ` is in a string or comment. Example: {% block content %} <p>Some text containing `back tick characters` that we use to parse with a custom markdown tag</p> {% endblock %} or: {# Some sample code `<div class="example">Your code here</div>` #} We have a lot of templates with these characters as we parse the contents of blocks with markdown to form an internal style-guide and coding guide. To be honest I'm not sure if this is a bug or it can be avoided using some escape configuration but simply doing \` fixes the parser but won't work when producing the template. Any ideas? Cheers. I've reproduced this. It happens because backticks are not escaped in getSourceContext() method. @nedkelly, @deflock, I reproduce it when the environment debug option is set to true. I'll fix it in no time but for now you should be able to avoid this issue by setting debug to false. @deflock, you are totally right, this comes from the getSourceContext content of the pre-compiled template. I don't remember if there is a reason for using compiler.raw instead of compiler.string. It was a bad idea, anyway. Fixed in Twing@1.2.2
gharchive/issue
2018-07-10T05:45:21
2025-04-01T06:38:34.554698
{ "authors": [ "deflock", "ericmorand", "nedkelly" ], "repo": "ericmorand/twing", "url": "https://github.com/ericmorand/twing/issues/230", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1761872195
client.Describe No result returned the code that reproduces this issue or a replay of the bug const imgUrl=`https://cdn.discordapp.com/attachments/1008571049039896576/1119473369813884958/manuelcorazzari_dirty_hands_holding_dirt_from_the_ground_sun_li_98bfb5e5-8c4f-4cb3-879a-bc229108e505.png` const msg = await client.Describe(imgUrl); console.log({ msg }); Describe the bug No result returned error log No result returned ws:true https://github.com/erictik/midjourney-client/blob/main/example/describe.ts I have the same problem with describe The Code const msg = await client.Describe( "https://img.ohdat.io/midjourney-image/1b74cab8-70c9-474e-bfbb-093e9a3cfd5c/0_1.png" ); console.log({msg}); The output: https://github.com/erictik/midjourney-client/blob/main/example/describe.ts#L18 Yeah, I saw that client.Connect() but it gave me this error: TypeError: client.Connect is not a function at main (/*******/index.js:12:18) at Object.<anonymous> (/*******/index.js:24:1) at Module._compile (node:internal/modules/cjs/loader:1254:14) at Module._extensions..js (node:internal/modules/cjs/loader:1308:10) at Module.load (node:internal/modules/cjs/loader:1117:32) at Module._load (node:internal/modules/cjs/loader:958:12) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) at node:internal/main/run_main_module:23:47 Maybe I'm forgetting something important? npm install midjourney Ok, by default npm installed v 2.7.79. I forced the package.json to 3.0.80 and it works! Thank you! Still no. What's wrong, "midjourney": "^3.0.81" help @clementepestelli The task was successfully created, but the returned result was not broadcast. This is maddening: even the repo's own code doesn't work, the other endpoints (settings API, reset API) are fine, it's only Describe that fails. Help! I have the same issue with the example. No msg is returned from Describe(). Doesn't work for me either, using the latest version. Node 20, midjourney@4.0.97, no matter if I enable websockets or not - it submits the describe request and I can find the response in the discord channel, but MJ API never catches the response. Maybe because it's a public channel?
gharchive/issue
2023-06-17T14:16:24
2025-04-01T06:38:34.572186
{ "authors": [ "clementepestelli", "lx-0", "lys623", "pyronaur", "zcpua" ], "repo": "erictik/midjourney-client", "url": "https://github.com/erictik/midjourney-client/issues/136", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
219985062
Folder Copying Won't Start - Returns Blank Screen I just started using the app today and successfully used it twice this morning, including one folder that was very large (over 2000 total files). However, when I try to start a new folder now, once I click "copy folder," it goes to a blank screen and doesn't start the copy (I checked my Drive to see if the new folder had been created and it was working in the background). I also tried using "Resume" to see if a previous copy was actually still in progress, but when I click "Resume copying," the same thing happens with the blank screen I am having the same issue, and my folder is not too large. Actually, i thought that could be the issue and tried with a smaller folder and the same happened I can reproduce, but this isn't an issue with the app. Google must have changed something with their Google Apps Service, which I imagine will be fixed soon. It appears that they changed the headers that allow the google.script service to be accessed. This is causing the google.script service to not be found. I don't have any control over this, but I'll leave the issue open until Google resolves it. Thanks for your quick response, Eric. Seems whatever it was has been resolved as I've been able to use the app just fine this morning. And, thank you for creating &maintaining this - it's been a real lifesaver in managing my drive. I´m still experiencing the issue
gharchive/issue
2017-04-06T18:12:01
2025-04-01T06:38:34.636692
{ "authors": [ "GASPARDYP", "ericyd", "karynrose1784" ], "repo": "ericyd/gdrive-copy", "url": "https://github.com/ericyd/gdrive-copy/issues/15", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2643774955
[automation] Updated mainnet.toml for erigon3 up to 21.141M This is an AUTOMATIC PR raised from this machine (which is running erigon3): snapshotter-bm-e3-ethmainnet-n1 I'm closing this PR as we were waiting for the new Caplin state files, which were not produced because of a downloader glitch in creating their torrents' hashes
gharchive/pull-request
2024-11-08T11:33:24
2025-04-01T06:38:34.646171
{ "authors": [ "michelemodolo" ], "repo": "erigontech/erigon-snapshot", "url": "https://github.com/erigontech/erigon-snapshot/pull/329", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
210202490
Fixed issue with missing migration There were changes in the model and no makemigrations command was run. What's this PR do? Fix issue #19 Where should the reviewer start? How should this be manually tested? Any background context you want to provide? This template was adapted ~stolen~ from Quickleft/Sprint.ly
Codecov Report: Merging #20 into develop will increase coverage by 0.12%. The diff coverage is 100%.

@@            Coverage Diff             @@
##           develop      #20     +/-   ##
==========================================
+ Coverage    90.78%    90.9%   +0.12%
==========================================
  Files           46       47       +1
  Lines          423      429       +6
==========================================
+ Hits           384      390       +6
  Misses          39       39

Impacted Files                                  | Coverage Δ
familias/migrations/0005_auto_20170225_0116.py  | 100% <100%> (ø)

Continue to review full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update e9c30b5...66cab5c. Read the comment docs.
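For anyone hitting the same situation, a minimal sketch of regenerating the missing migration. The app label "familias" is taken from the migration path in this PR; invoking it from a script via call_command is only one option, and the equivalent manage.py commands work the same way.

```python
from django.core.management import call_command

# Recreate the migration that should have accompanied the model changes,
# then apply it. Equivalent to:
#   python manage.py makemigrations familias && python manage.py migrate familias
call_command("makemigrations", "familias")
call_command("migrate", "familias")
```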
gharchive/pull-request
2017-02-25T02:10:56
2025-04-01T06:38:36.018985
{ "authors": [ "codecov-io", "fernandolobato" ], "repo": "erikiado/jp2_online", "url": "https://github.com/erikiado/jp2_online/pull/20", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
750258668
Different input frequencies are getting played as the same output frequency In #2 I referenced a bug I was investigating. Here it is. Background I'm trying to use this library to prototype a personal data over sound project. I'm working on using the presence of a sound at any of 24 different frequencies to convey the data my application needs. Eventually I will be using several of these frequencies in parallel, which maps really well to this package's support of different tracks. Issue The issue I'm finding is that when I use this package to play a test sound at each of my chosen 24 frequencies I notice that some of the higher frequencies get played as the same tone. Low Frequency Example Here's the code: from tones.mixer import Mixer from tones import SINE_WAVE from playsound import playsound mixer = Mixer(44100, 0.5) mixer.create_track(1, SINE_WAVE, attack=0.01, decay=0.1) mixer.add_tone(1, 799.872020476724, .5) mixer.add_tone(1, 899.604174163368, .5) mixer.add_tone(1, 1000.40016006403, .5) mixer.add_tone(1, 1100.5943209333, .5) mixer.add_tone(1, 1301.06687483737, .5) mixer.add_tone(1, 1400.56022408964, .5) mixer.add_tone(1, 1500.60024009604, .5) mixer.add_tone(1, 1601.53747597694, .5) mixer.add_tone(1, 1799.20834832674, .5) mixer.add_tone(1, 1899.69604863222, .5) mixer.add_tone(1, 2000.80032012805, .5) mixer.add_tone(1, 2100.84033613445, .5) mixer.add_tone(1, 2296.73863114378, .5) mixer.add_tone(1, 2396.93192713327, .5) mixer.add_tone(1, 2497.5024975025, .5) mixer.add_tone(1, 2597.4025974026, .5) mixer.add_tone(1, 2801.12044817927, .5) mixer.add_tone(1, 2903.60046457607, .5) mixer.add_tone(1, 3001.20048019208, .5) mixer.add_tone(1, 3105.5900621118, .5) mixer.add_tone(1, 3306.87830687831, .5) mixer.add_tone(1, 3401.36054421769, .5) mixer.add_tone(1, 3501.40056022409, .5) mixer.add_tone(1, 3607.50360750361, .5) mixer.write_wav('tones.wav') playsound('tones.wav') And here's a spectrogram of the frequencies I'm getting as a result: High Frequency Example It appears as though playing higher frequencies exacerbates the problem. This code plays the same frequency intervals (not note intervals) but starting 3000 Hz higher than the previous example. from tones.mixer import Mixer from tones import SINE_WAVE from playsound import playsound mixer = Mixer(44100, 0.5) mixer.create_track(1, SINE_WAVE, attack=0.01, decay=0.1) mixer.add_tone(1, 3990.42298483639, .5) mixer.add_tone(1, 4105.09031198686, .5) mixer.add_tone(1, 4201.68067226891, .5) mixer.add_tone(1, 4302.92598967298, .5) mixer.add_tone(1, 4492.36298292902, .5) mixer.add_tone(1, 4608.29493087558, .5) mixer.add_tone(1, 4699.24812030075, .5) mixer.add_tone(1, 4793.86385426654, .5) mixer.add_tone(1, 4995.004995005, .5) mixer.add_tone(1, 5102.04081632653, .5) mixer.add_tone(1, 5213.76433785193, .5) mixer.add_tone(1, 5291.00529100529, .5) mixer.add_tone(1, 5494.50549450549, .5) mixer.add_tone(1, 5580.35714285714, .5) mixer.add_tone(1, 5714.28571428571, .5) mixer.add_tone(1, 5807.20092915215, .5) mixer.add_tone(1, 6002.40096038415, .5) mixer.add_tone(1, 6105.0061050061, .5) mixer.add_tone(1, 6211.1801242236, .5) mixer.add_tone(1, 6321.11251580278, .5) mixer.add_tone(1, 6493.50649350649, .5) mixer.add_tone(1, 6613.75661375661, .5) mixer.add_tone(1, 6675.56742323097, .5) mixer.add_tone(1, 6802.72108843537, .5) mixer.write_wav('tones.wav') playsound('tones.wav') Here's the resulting spectrogram. Other Thoughts I'm wondering if this has something to do with how this package focuses on playing specific notes (i.e. music composition). 
Perhaps my frequencies are simply getting rounded to the closest note? I didn't see anything that would seem to be doing that in the mixer.add_tone() function, but it's an idea that I've had nagging at me while I look through things. I'm happy to help develop a solution to improve this great package, I'm just getting stuck when I tackle it on my own. Hoping @eriknyquist has some added insight into why this might be occurring. @jhale1805 I think you hit the nail on the head when you said "I'm wondering if this has something to do with how this package focuses on playing specific notes (i.e. music composition)?" I never did any sort of detailed testing of specific frequencies, like you are doing now. This module was very much intended for producing musical tones, and the extent of my testing was pretty much just using my ears to make sure things sound musically OK. That being said, I will take a look at the code which handles specific frequency values, and see if there is an obvious problem that could cause such a loss of precision. Thanks! @jhale1805 I can possibly help speed up your investigation; the problem is most likely with the _sine_wave_table function here https://github.com/eriknyquist/tones/blob/master/tones/tone.py#L6 This function is called by the Tone.samples() function (right here https://github.com/eriknyquist/tones/blob/master/tones/tone.py#L197), to obtain a set of samples that make up a single 360 degree sine wave oscillation in the desired frequency/sample rate/amplitude. The Tone.samples() function then iterates over this table multiple times, as many times as is needed to create the number of samples we need for the requested note time. My guess is that there is some loss of precision that occurs when I do period = int(rate / freq) in the _sine_wave_table function, and this is what's causing the anomaly you're seeing where the output seems to "snap" to certain frequencies. I'm not sure exactly how I would resolve that, right now, but this is the area I'm drawn to right now based on your description. OK, so after I explained that to you I'm thinking that the problem is indeed _sine_wave_table, or more specifically, the approach of generating a single period's worth of sine wave samples and then duplicating it multiple times to get the desired note length. This approach assumes that the full period of any sine wave at any frequency can be described by a discrete number of samples, when in reality, the full period of a sine wave is likely going to have some "fractional" sample at the end (e.g. a full period of 1555Hz, at 44100 sample rate, works out to 28.36 samples), unless the sine wave frequency happens to be an exact multiple of the sample rate. This might also explain the weird harmonics reported in #1, since the issue I described above would result in sine waves that are not perfect-- there would be little "blips" in the waveform between every period, which I'm guessing would result in some odd harmonic content. I think the correct way to do this would be to generate all samples for the full note length at once, instead of just doing a single period and then duplicating it, just like is being done in this stackoverflow answer; https://stackoverflow.com/questions/8299303/generating-sine-wave-sound-in-python I don't have time to work on it and test it right now (I can get to that next weekend), but I thought I would just dump that info here in case it helps you out. I think you're on to something. 
I plotted the output waveform using this Stack Overflow post as a guide (https://stackoverflow.com/a/18625294) and got the following output (zoomed in a lot). Just as you predicted, there is an odd change of slope at the end of each period that I don't think can be attributed to the minor imperfections introduced by using digital samples instead of an analog signal. I'll try out the solution you found and give another update in a bit. Just for reference, the exact code that produced that image from tones.mixer import Mixer from tones import SINE_WAVE mixer = Mixer(44100, 0.5) mixer.create_track(1, SINE_WAVE, attack=0.01, decay=0.1) mixer.add_tone(1, 2597.4025974026, .25) mixer.add_tone(1, 2695.41778975741, .25) mixer.add_tone(1, 2801.12044817927, .25) mixer.add_tone(1, 3001.20048019208, .25) mixer.add_tone(1, 3105.5900621118, .25) mixer.add_tone(1, 3203.07495195388, .25) mixer.add_tone(1, 3306.87830687831, .25) mixer.add_tone(1, 3501.40056022409, .25) mixer.add_tone(1, 3607.50360750361, .25) mixer.add_tone(1, 3700.96225018505, .25) mixer.add_tone(1, 3799.39209726444, .25) mixer.write_wav('tones.wav') #Addition import wave import numpy as np import matplotlib.pyplot as plt spf = wave.open('tones.wav') signal = spf.readframes(-1) signal = np.fromstring(signal, "Int16") plt.plot(signal) plt.show() #/Addition After some more tinkering it looks like your suggested solution works great! The "Low Frequency" code from my original post now registers on the spectrogram as expected: In addition to each distinct input frequency now getting its own distinct output, you'll notice that my original screenshot had a thin green line showing that the output frequency of the twelfth tone was 2203 Hz - a full 103 Hz off of the original ~2100 Hz. This new version now plays that same twelfth tone at 2103 Hz - only 3 Hz off of the original. The harmonics are still present, but not as harshly as before. The "High Frequency" example also works much better now: And the output wave form also pretty much looks like a perfect sine wave. You'll see that I submitted a merge request with my solution. As indicated there, I only tested for my specific use case, so you'll want to make appropriate updates to the other waveforms you support that I'm not as familiar with before re-publishing this package to pip. Thanks again for this great package and your help with this issue!
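A minimal sketch (using numpy rather than the tones API itself) of the "generate all samples for the full note length at once" approach discussed above, which avoids the fractional-sample truncation that tiling a single precomputed period introduces.

```python
import numpy as np

def sine_samples(freq_hz, duration_s, rate=44100, amplitude=0.5):
    # Compute every sample of the note in one pass instead of building one
    # period and repeating it; the phase stays continuous even when a period
    # spans a non-integer number of samples (e.g. 1555 Hz at 44100 Hz is
    # roughly 28.36 samples per period).
    n = np.arange(int(duration_s * rate))
    return amplitude * np.sin(2.0 * np.pi * freq_hz * n / rate)

samples = sine_samples(2100.84033613445, 0.5)
```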
gharchive/issue
2020-11-25T01:45:54
2025-04-01T06:38:36.038204
{ "authors": [ "eriknyquist", "eriknyquist-avive", "jhale1805" ], "repo": "eriknyquist/tones", "url": "https://github.com/eriknyquist/tones/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
112146031
Best practice to manipulate a field's value Assume that I need to change the value of username input field upon the change of last name field. The reason why the username should be kept on the input is to let the user manipulating it. The way to do it is to call the change action to change the value of username input. But we need to override the onChange prop on lastName input. It seems sloppy and I wonder if it has drawbacks. <input {...lastName} /> <input {...username} /> The best practice to manipulate field2's value based on field1's value is to use a normalizer. Thanks. One other thing, let's say if there was no field1. For example, if we needed to change lat and lng pair input values by dragging around a marker on a Leaflet or Google map. Same thing. If you want to restrict a value based on anything, you can "normalize" it. The code in the readme shows how to keep a string value in all upper case, for example. Just as easy to keep a lat-long coordinate within a certain radius. No I didn't mean to keep it in a certain radius. See the image. I mean changing those values by dragging that blue marker while the users can change it themselves. Normalizing is done in reducer. What I say is in component layer. Sure, that can be done with onChange calls. Ideally, you'd have a map component that would either call onChange as the user drags the marker, and/or onBlur when the marker was dropped. I got it. :+1: Thanks. @erikras, do you mean call the this.props.fields.xxx.onChange() directly? @xcatliu Yes, you may do that. If onChange receives an event (if it was passed to an input), it will try to get the value from the event.target, but you can also just call onChange(newValue). @erikras Thx~ @erikras How can I know the ending of onChange? Is there any API like this: onChange(newValue, callback); No, onChange is synchronous. @erikras If I call this.props.handleSubmit() after onChange, the values seems not update. Code fragment: handleClick() { // do something this.props.fields.xxx.onChange(newValue); console.log(this.props.values); // not update this.props.handleSubmit(); // submit old values } That is true. The props get repopulated on a subsequent process tick. I suspect that there is a flaw in your design if you are wanting to onChange and handleSubmit in the same code block. But you could conceivably do something like: this.props.onSubmit({ // <---- what handleSubmit would call ...this.props.values, xxx: newValue }); @erikras Yes I get it, thanks for helping me! Is there any way to change multiple values at once without using initialize()? What I'm doing right now is: const values = getValues(form.signupPF); initialize('signupPF', Object.assign({}, values, { address, neighborhood, city, state }), Object.keys(fields)); Is this correct? @gabrielhpugliese No, there is no form of the CHANGE action that works on multiple fields. If you didn't want to use initialize, which will affect your dirty/pristine state, you will have to dispatch the CHANGE actions individually for each field. No problems with initialize, just to know if it would be OK or would cause some weird side-effect. Thanks. @erikras when I'm trying to call this.props.onSubmit I'm getting Uncaught TypeError: this.props.onSubmit is not a function. I also don't find it in http://erikras.github.io/redux-form/#/api/props - maybe the api has changed? ...I'm using 4.1.4. Tried this.props.handleSubmit? On Thu, Feb 11, 2016 at 5:33 PM Anselm Christophersen < notifications@github.com> wrote: ...I'm using 4.1.4. 
— Reply to this email directly or view it on GitHub https://github.com/erikras/redux-form/issues/152#issuecomment-183025937. Yeah, that works, but I want to pass additional data, like this: this.props.onSubmit({ // <---- what handleSubmit would call ...this.props.values, xxx: newValue }); Also this.props.handleSubmit(); seems to work, while doing this and trying to manipulate the data seems to just fail silently. this.props.handleSubmit(data => { }); @anselmdk Because onSubmit is a specific proprietary prop for redux-form, it is removed from the props. You will need to call your prop something else (e.g. submitForm) for it to show up in the props of your decorated component. @erikras thanks. In the end I ended up using the solution with the hidden field, as I also needed to do some different validation based on the field value - seems to work okay! @erikras The props get repopulated on a subsequent process tick I am using async validation to fetch postal code from server once member changes it. Member also has ability to change country, thus I need to rerun the validation on country change. Currently I delay the postal code field touch to allow country field to update it's value. Is there an option to determine when the field got updated such that I don't wait too long or too short? the ability to run onChange on multiple values in one action would be great. Or rather, in my situation, I want to replace an entire subdocument array with a new one. In other words, I have a "field": 'collectionFields[].orderNum'. collectionFields is an array of input fields my users can configure and reorder. So I'm changing the orderNum values for collectionFields. I was hoping to just take my array of collectionFields, change the ordering of them, and replace it with an identical array of redux-form objects, but with new values for orderNum. (my reordering component returns a new array of objects with orderNum values) Something like: this.props.fields.collectionFields.replace(orderedFields) This assuming orderedFields is an array of redux-form instances (or whatever you call them). My alternative working solution for now is: this.props.fields.collectionFields.map((myFormField, index) => { myFormField.orderNum.onChange(orderedFields[index].orderNum.value) }) It's annoying though, because the redux-form/CHANGE action is dispatched a whole bunch of times for what seems could be done in a single action. Just an enhancement recommendation. Hi, I'm trying to do this: addressUpdated(newAddress) { //TODO, tell Redux form that a value is now available! this.props.fields.address.onChange(newAddress.label); } address is a hidden field that should get a value once addressUpdated is called. I get an error Uncaught TypeError: Cannot read property 'onChange' of undefined Component is generated: <Field id="address" name="address" type="hidden" component={fieldFactory} /> const fieldFactory = ({id, input, label, type, meta: { touched, error } }) => { if(type.match(/hidden/)){ return( <div> <input id={id} {...input} type={type} /> {touched && error && <span>{error}</span>} </div> ); } } Any ideas? @szokrika I had the same issue when migrating from v5 to v6. I solved it like this by giving a ref and adding withRef={true} to the Field I would like to modify. 
<Field type='text' ref='name' withRef={true} label='Name' name='Name' component={renderInput} value={this.state.location.name}/> When I want to change the field value I do this this.refs.name.getRenderedComponent().props.input.onChange(newName); Please note this Cannot be used if your component is a stateless function component Hey! Im kinda new to react. And couldn't figure out how and where i should rewrite my onChange to get things working. PS. Im using react-select as my selectInput component. here is part of my form. I would appriciate of any concrete examples with the custom onChange. const BasicForm = props => { const { error, handleSubmit, pristine, reset, submitting, countries, phonePrefixes } = props; return ( <div className="form step1"> <form onSubmit={handleSubmit}> <Field name="country" className="form-control" component={selectInput} options={countries} placeholder="Country" /> <Field name="phonePrefix" className="form-control" component={selectInput} options={phonePrefixes} placeholder="Prefix" /> <button type="submit" disabled={submitting}>REGISTER <i className="fa fa-chevron-right">&nbsp;</i> </button> </form> </div> )}; @Jevgenius in your case you should write the custom onChange in the selectInput file, and pass this custom onChange to the react-select. @erikras i'm using redux-form v6, where is props.fields property ? i can't find it in the api document, Does it be removed ? @zackshen It was removed, see the v5->v6 migration guide Stories storiesOf("URLField", module) .add("default", () => ( )) code const URLField = ({ validUrl , urlDescription, primaryLabel, warningText, secondaryLabel }) => ( {primaryLabel} {secondaryLabel} {urlDescription} {warningText} ); CSS input { text-align: right; display: inline-block; padding: 10px 2px; background: ${props => props.validUrl === "true" ? url(${valid}) no-repeat left 10px center : url(${invalid}) no-repeat left 10px center}; } Any idea about how to change background @xcatliu if i may ask, where did you get the reference this.props.fields.xxx.onChange()? i wanted to try this, i cannot seems to get the fields reference from onchange of my first field?
gharchive/issue
2015-10-19T13:37:43
2025-04-01T06:38:36.067751
{ "authors": [ "Jevgenius", "anmol1591", "anselmdk", "astrauka", "chaitanya0bhagvan", "erikras", "gabrielhpugliese", "himawan-r", "joaoreynolds", "kaueburiti", "mohebifar", "saitonakamura", "szokrika", "xcatliu", "zackshen" ], "repo": "erikras/redux-form", "url": "https://github.com/erikras/redux-form/issues/152", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
127764767
resetForm() leaves a ↵ when executed. I have this form: /** * React component MessagesForm */ import React, { PropTypes } from 'react' import {reduxForm, reset} from 'redux-form' const MessagesForm = p => { const {fields: {message}} = p const addMessage = e => { if(e.keyCode == 13 && e.shiftKey == false) { p.handleSubmit() p.resetForm() } } return ( <form onSubmit={p.handleSubmit}> <div className="form-group"> <textarea {...message} value={message.value || ""} rows="1" type="text" className="form-control" onKeyDown={addMessage} placeholder="Escribe tu mensaje y presiona Enter para enviar." /> </div> </form> ) } const form = reduxForm({ form: 'message', fields: ['message'] })(MessagesForm) export default form As you can see the form is submitted when the user press the enter key (↵) and is not pressing shift. after handling the submit i reset the for immediately, but it leaves a ↵ sign instead of an empty string, so it can show the placeholder again. I need to be reset to an empty string instead of a ↵, how can i do this? As a workaround, adding setTimeout(p.resetForm, 1) makes it work. Still looking for a solution. Shouldn't you have a e.preventDefault() to prevent the ↵ from making it into the input? @erikras thanks!!! :D Reopen if this is not solved.
gharchive/issue
2016-01-20T19:27:45
2025-04-01T06:38:36.071622
{ "authors": [ "erikras", "nschurmann" ], "repo": "erikras/redux-form", "url": "https://github.com/erikras/redux-form/issues/574", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
856777498
Remove stable RNA Use the tool in the same package as BBMap. Fixed in 94e634465505874002b4431c4988b9b5df34eccd with --sequence_filter parameter.
gharchive/issue
2021-04-13T09:33:23
2025-04-01T06:38:36.072687
{ "authors": [ "erikrikarddaniel" ], "repo": "erikrikarddaniel/magmap", "url": "https://github.com/erikrikarddaniel/magmap/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
772608103
Support multiple instances of each module Currently, every module in the Glk JS ecosystem is a global object (window.GlkOte, window.Dialog, window.Quixe, etc). We would like to support the possibility of more than one. It should be possible to create instances of each and start them running, with each one talking only to its peers. This was one of the issues mentioned when talking about ES modules (https://github.com/erkyrath/glkote/pull/39). However, I want to address this separately and first. The plan is as follows: Every module will define a JS class (e.g. GlkOteClass). (Recall that JS classes are just functions that you instantiate by writing new GlkOteClass().) For backwards compatibility, each module will define an instance of its class (GlkOte). If you load the module.js in the old-fashioned way, you will wind up with window.GlkOteClass and window.GlkOte. A page can go ahead using window.GlkOte just as before. However, you can create more instances as needed. An instance must be inited by calling its init() method. You may pass in associated module instances if you want: GlkOte.init({ Dialog: new DialogClass() }); If you don't, the instance will create its own module instances where needed. Each class has two new methods: inited(): Returns whether the instance has been succcessfully inited. getlibrary(val): Returns the associated module instance by name. For example, GlkOte.getlibrary('Dialog') will return the Dialog instance being used by that GlkOte instance. Glk.getlibrary('GlkOte') will return the GlkOte being used by that Glk API instance. And so on. When implementing higher-level modules, it's generally cleaner to fetch low-level modules using getlibrary() rather than trying to cache a reference at init() time. (Init order is a pain in the butt.) I have gone through and done this for all the modules in the glkote repo. Quixe is not yet done. Why an init method rather than passing in the references to the constructor? One, that's the way it works now, and I don't want to mess around with it too much. (window.GlkOte is provided as a constructed instance which has not yet been initialized.) Two, instances need to be initialized with references to each other. (E.g. Dialog needs a reference to GlkOte and vice versa.) So you need to construct them both, then initialize them. (I see I forgot Dialog.getlibrary(), oops.) Ahh, I hadn't thought there were any circular references, but that's because I had changed Dialog to use console.log instead of GlkOte.log. The plan that got implemented in 2020 seems to be doing the job.
gharchive/issue
2020-12-22T03:27:29
2025-04-01T06:38:36.096852
{ "authors": [ "curiousdannii", "erkyrath" ], "repo": "erkyrath/glkote", "url": "https://github.com/erkyrath/glkote/issues/46", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
330016838
ERRAI-1111: Fix a bug with interrupting a navigation hide. This will properly allow us to interrupt the hiding process in navigation. Can one of the admins verify this PR? Comment with 'ok to test' to start the build. Are we able to progress this issue? @BenDol Can you provide a more detailed description of the issue? So when I rewrote the navigation to support more complex use cases one of them was that we have the ability to interrupt the hiding navigation control inside of onHiding(NavigationControl) when you call interrupt on this NavigationControl object you expect it to retain the page you are currently on, but since the navigation process often has already started taking place the URL state will in most cases already be updated. So what this ensures is that the previous state of the page is restored after the hiding is interrupted. We use this when we want to protect sensitive data input for example, so if a user attempts to click off the page we have the ability to properly cancel that navigation in the onHiding. ok to test Jenkins, please retest this. @BenDol Could you please add some tests to the behaviours this PR adds? Thanks! Can one of the admins verify this PR? Comment with 'ok to test' to start the build. Build finished. No test results found. Build finished. 2762 tests run, 5 skipped, 0 failed. Build finished. 2763 tests run, 7 skipped, 0 failed.
gharchive/pull-request
2018-06-06T20:17:16
2025-04-01T06:38:36.126011
{ "authors": [ "BenDol", "kie-ci", "kiereleaseuser2", "tiagobento" ], "repo": "errai/errai", "url": "https://github.com/errai/errai/pull/347", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1706255600
Add mailhog support to docker-compose.yml example This took me longer than I'd like to admit to figure out how to properly integrate MailHog into one of my own projects' dev environments, so I thought I'd save someone the time and add MailHog to the example docker-compose.yml. MailHog will catch all outgoing mail from Moodle so that you can easily debug/troubleshoot without worrying about actually sending emails or accidentally exposing your SMTP credentials in your codebase. Also fixed a merge conflict in the README. Hi @aleciavogel, Thanks for your contribution! The addition of MailHog support to the example docker-compose.yml is a thoughtful enhancement. This will undoubtedly be a great help in debugging and troubleshooting, especially in preventing accidental exposure of SMTP credentials. I appreciate the time you took to integrate MailHog into the alpine-moodle project and for taking a step further to share this with the community. Your effort to resolve the merge conflict in the README is also recognized and appreciated. Before we merge this, I'll run some tests to ensure everything works as expected. I'll get back to you soon. Thanks again for your contribution! Best Hi @aleciavogel, Thanks again for your valuable contribution. However, after reviewing the MailHog project, it seems to be inactive, as indicated in this issue: https://github.com/mailhog/MailHog/issues/442. This could potentially lead to support and maintenance issues down the line. Considering this, what are your thoughts on integrating "maildev" instead, as suggested in the aforementioned issue? You can find more about it here: https://maildev.github.io/maildev/. It appears to be actively maintained and could serve the same purpose effectively. Please let me know your thoughts on this proposed change. Thanks again for your input and looking forward to your response. Best Hey Ernesto, Thank you for your consideration and thoughtful responses! MailHog has always been my go-to but MailDev sure looks neat. I'll revise my PR to use MailDev instead! Unfortunately, I can't seem to get MailDev to work with your image and it's not immediately apparent as to why it's not working. I've consulted the issues for the MailDev repo to see if anyone has encountered something similar. I've tried switching between "tls" and "tcp" for the protocol env variable in the moodle service to no avail, as well as setting incoming and outgoing usernames and passwords for MailDev. Even if I don't get an error upon sending the "Lost your password?" email, it never shows up in the MailDev UI. Feel free to take a crack at it
gharchive/pull-request
2023-05-11T17:31:39
2025-04-01T06:38:36.134945
{ "authors": [ "aleciavogel", "erseco" ], "repo": "erseco/alpine-moodle", "url": "https://github.com/erseco/alpine-moodle/pull/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
197805994
Add License Hello, congrats! As with any open source project, we must declare our licensing policy. Done! Thanks.
gharchive/issue
2016-12-28T06:57:03
2025-04-01T06:38:36.215001
{ "authors": [ "cengizIO", "ersinerdal" ], "repo": "ersinerdal/react-redux-immutable-ddd", "url": "https://github.com/ersinerdal/react-redux-immutable-ddd/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
49976751
Issues with multiple block-level elements in definition lists As tested on the parsedown demo: http://parsedown.org/demo?extra=1 this code does not render as it should within the definition list:

Term 1

:   This is a definition with two paragraphs. Lorem ipsum
    dolor sit amet, consectetuer adipiscing elit. Aliquam
    hendrerit mi posuere lectus.

    Vestibulum enim wisi, viverra nec, fringilla in, laoreet
    vitae, risus.

:   Second definition for term 1, also wrapped in a paragraph
    because of the blank line preceding it.

Term 2

:   This definition has a code block, a blockquote and a list.

        code block.

    > block quote
    > on two lines.

    1. first list item
    2. second list item

For reference: https://michelf.ca/projects/php-markdown/extra/#def-list Thought I would provide some visuals: Thanks. I'll look into this as soon as I resolve #4. Sweet! Thanks for fixing this!
gharchive/issue
2014-11-25T04:54:13
2025-04-01T06:38:36.219234
{ "authors": [ "erusev", "rhukster" ], "repo": "erusev/parsedown-extra", "url": "https://github.com/erusev/parsedown-extra/issues/25", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2439873848
Use fixed number of threads in test262-test concurrent mode This PR makes following changes: Change -test262-test:concurrent option to be number. Use fixed number of threads in test262-test concurrent mode. It is expected to resolve issue #251 (if proper number of threads is given). Now concurrent and timeout options work well together. esmeta test262-test -test262-test:progress -test262-test:log \ -test262-test:concurrent=16 -test262-test:timeout=60 # ... 100.00% (48,376/48,376) - P:N = 25,276:23,100 => P/P = 25,276/25,276 (100.00%) [07:58] # ... - pass-rate: P/P = 25,276/25,276 (100.00%) $ esmeta test262-test -test262-test:progress -test262-test:log -test262-test:timeout=60 -test262-test:concurrent=16 # .... 100.00% (48,376/48,376) - P:N = 25,276:23,100 => P/P = 25,276/25,276 (100.00%) [07:52]
gharchive/pull-request
2024-07-31T11:59:59
2025-04-01T06:38:36.221673
{ "authors": [ "stonechoe" ], "repo": "es-meta/esmeta", "url": "https://github.com/es-meta/esmeta/pull/252", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1083966344
get rid of QConv1d and co. With pytorch 3.10 the concept of parametrization was introduced... Now pytorch provides a native way for us to inject and apply arbitrary modules to other modules parameters before they are used... E.g. A module behaving equivalently to QConv1d with Binarize can be built like this import torch from torch.nn import Conv1d from torch.nn.utils.parametrize import register_parametrization from elasticai.creator.layers import Binarize layer = Conv1d(in_channels=2, out_channels=3, kernel_size=(1,), bias=False) register_parametrization(layer, "weight", Binarize()) Therefore ~neither~ the implementations of our quantizable convolutions ~nor of our qlstm cells~ should [not] be needed anymore. If we decide to still keep the implementations we should implement them with the help of parametrization. edited description: In fact from what i can tell there is no easy way to set custom activations to be used in the RNN layers, so we still require a custom implementation to realize something like our QLSTM
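To make the register_parametrization snippet earlier in this issue self-contained, here is a rough stand-in for the quantizer. The real Binarize in elasticai.creator is more involved, so treat this class as a hypothetical placeholder that only illustrates how a parametrization module is wired onto a layer's weights.

```python
import torch
from torch import nn
from torch.nn.utils.parametrize import register_parametrization


class Binarize(nn.Module):
    # Placeholder parametrization: maps every weight to -1 or +1.
    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        return torch.where(weight >= 0, torch.ones_like(weight), -torch.ones_like(weight))


layer = nn.Conv1d(in_channels=2, out_channels=3, kernel_size=1, bias=False)
register_parametrization(layer, "weight", Binarize())
print(layer.weight)  # reads back binarized values on every access
```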
gharchive/issue
2021-12-18T23:33:11
2025-04-01T06:38:36.223573
{ "authors": [ "glencoe" ], "repo": "es-ude/elastic-ai.creator", "url": "https://github.com/es-ude/elastic-ai.creator/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1687687679
Seeing intermittent future_key errors 2023-04-25 23:12:39 ERROR:flask_shell2http:future_key ebbb407f already exists 2023-04-25 23:12:39 ERROR:flask_shell2http:No report exists for key: 'ebbb407f'. These are interspersed with working calls. I turned off wait=true and switched to polling. This reduced the problem but didn't eliminate it. Are there any docs on how to ensure the key doesn't already exist? I don't believe I'm managing the keys externally to shell2http. Thanks so much. Can you also tell if your client tries to execute the same command with the same args multiple times? If that is the case, you might want to set the force_unique_key parameter to true (see example). Ah, this explains it. Yes, several calls with the same parameters are possible. I'll have a look at how to use force_unique_key. Thanks again for the followup.
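A rough sketch of the suggested workaround from the client side. The endpoint path and response fields here are placeholders, and passing force_unique_key as a flag in the request body alongside args is my reading of the library's examples rather than verified API documentation.

```python
import requests

# Hypothetical endpoint registered by the Flask-Shell2HTTP app.
url = "http://localhost:4000/commands/mycmd"

# Requesting a unique future key per call avoids the
# "future_key ... already exists" collision when the same
# command and args combination is submitted more than once.
resp = requests.post(url, json={"args": ["some-arg"], "force_unique_key": True})
print(resp.status_code, resp.json())
```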
gharchive/issue
2023-04-27T23:46:07
2025-04-01T06:38:36.243811
{ "authors": [ "bfeist", "eshaan7" ], "repo": "eshaan7/Flask-Shell2HTTP", "url": "https://github.com/eshaan7/Flask-Shell2HTTP/issues/53", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2065123733
Bug: false positive for security/detect-object-injection What version of eslint-plugin-security are you using? 2.1.0 ESLint Environment Node version: v20.10.0 npm version: v10.2.3 Local ESLint version: 8.56.0 Global ESLint version: Not found Operating System: linux 6.2.0-1018-azure What parser are you using? Default (Espree) What did you do? minimal reproduction repo: https://github.com/AnnAngela/eslint-plugin-security-rules-detect-object-injection What did you expect to happen? Nothing reported. What actually happened? https://github.com/AnnAngela/eslint-plugin-security-rules-detect-object-injection/actions/runs/7406514870/job/20151079711#step:6:7 Participation [ ] I am willing to submit a pull request for this issue. Additional comments According to the docs, I did not do any value assignment and the warning should not be reported. From what I can tell, this rule is behaving as expected and the documentation needs updating. It currently flags any function call for which an argument in the form object[key] is passed. An assignment isn't necessary, especially because, in your case, foo is being read from the environment. @nzakas THX but can you explain a bit more clearly why the assignment isn't necessary? I don't understand that why the obj[foo] would cause harm even though foo is "construct" or other special string.
gharchive/issue
2024-01-04T07:10:09
2025-04-01T06:38:36.257271
{ "authors": [ "AnnAngela", "nzakas" ], "repo": "eslint-community/eslint-plugin-security", "url": "https://github.com/eslint-community/eslint-plugin-security/issues/136", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1522976742
Bug: Local development broken URL(s) (All) What did you do? Tried writing a new blog post with a date in the future. What did you expect to happen? I expected the blog post to be visible at https://localhost:2022/blog. We had this set up so that future posts would be shown locally to make it easier to debug, but then would not be shown on the live site. I also expected the blog to update whenever I edit the Markdown file. What actually happened? The blog post does not show up on the blog page, and even if I hack it so that it does show up, the blog post is not being watched for changes so I need to stop and restart the server just to see changes. Participation [ ] I am willing to submit a pull request for this issue. Additional comments There were a bunch of changes made to package.json that I believe are causing both of these issues. For some reason, the environment variable CONTEXT is now being set here: "watch:eleventy": "cross-env CONTEXT=dev eleventy --serve --port=2022", However, we specifically expect no CONTEXT to determine whether or not to show future blog posts. I'm not sure why watching isn't working otherwise. It appears to work for .js files but it does not work for .md files. https://github.com/eslint/eslint.org/blob/427e3bc27aff8e5189b240cb36187234d8281d63/package.json#L20 Renaming the CONTEXT env fixes this issue.
gharchive/issue
2023-01-06T18:28:48
2025-04-01T06:38:36.262062
{ "authors": [ "amareshsm", "nzakas" ], "repo": "eslint/eslint.org", "url": "https://github.com/eslint/eslint.org/issues/397", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
782514368
Clean up image API code paths Most commits have individual descriptions, but at a high level I'm essentially getting rid of the complex mutually-recursing-across-processes behavior previously present in image.js. I've moved the part that actually calls into ImageMagick into its own file, so instead of commands calling magick.run which calls the image API which calls magick.run again (or magick.run calling itself in a worker thread if the image API is disabled), magick.run and the image API both delegate to the new "call ImageMagick" function. I've also gotten rid of some duplicate code for handling GIFs, removed some dead code relating to image types (it can be re-added later in a much cleaner way if needed), and fixed a bug where GIF-only commands would throw an internal error instead of displaying the intended "that isn't a GIF!" message. When testing this out on my instance, something happened where it pulled the previous image instead of the current one when running the motivate command. No idea how these changes could have caused that (or if they even did) but I'll look into it. Never mind, it seems to be an issue with the dev bot as well. Looks pretty good.
gharchive/pull-request
2021-01-09T05:03:52
2025-04-01T06:38:36.307185
{ "authors": [ "TheEssem", "adroitwhiz" ], "repo": "esmBot/esmBot", "url": "https://github.com/esmBot/esmBot/pull/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1906962973
Add support for MCUboot MCUboot support was added for the ESP32-C3 in #49, we should continue adding support for additional devices. There is interest in this from other teams within Espressif currently, and as such it is likely a useful feature for some community members as well. See esp32c3_hal/src/lib.rs as a reference. Note that additional changes to linker scripts are required, too. [ ] ESP32 [ ] ESP32-C2 [ ] ESP32-C3 [ ] ESP32-C6 [ ] ESP32-H2 [ ] ESP32-S2 [ ] ESP32-S3 In the meantime, support for ESP32-C3 was removed I'm going to close this for now, as we have no plans on working on this any time soon. We can open a new issue if we decide to re-visit this.
gharchive/issue
2023-09-21T13:16:22
2025-04-01T06:38:36.327579
{ "authors": [ "bjoernQ", "jessebraham" ], "repo": "esp-rs/esp-hal", "url": "https://github.com/esp-rs/esp-hal/issues/806", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2518747758
feat: Drop support for ESP-IDF 4.4 and Python 3.6 Description This MR includes several fixes for the pipeline to pass: The codereport maintainer has merged my MR (https://github.com/paulgessinger/codereport/pull/4) and released a new version 0.4.0. This should resolve dependency conflicts mentioned in: https://github.com/espressif/clang-tidy-runner/pull/46#issuecomment-2258096149 Update the version of actions because we were using a deprecated version, which does not work anymore. Failed job: https://github.com/espressif/clang-tidy-runner/actions/runs/10806704544/job/29976026469 Drop support for ESP-IDF 4.4 (which was the last to support Python 3.6 so this was dropped as well) Reasons to drop ESP-IDF 4.4 TLDR: dependency hell I was trying to make the pipeline work on the latest IDF, but there was a conflict with the pygments package because IDF required >=3.13. The requirement is coming from the codereport package, so I fixed that upstream and wanted to update the version here to match at least that version (codereport>=0.4.0). Then I realized that we are still stuck with codereport version 0.2.5, because of Jinja2. The newest versions of codereport (3.1+) require Jinja2==3.1.1, but ESP-IDF has this requirement set to <3.1, so there was no way to satisfy both ESP-IDF 4.4 and the latest versions. Considering that ESP-IDF 4.4 is not supported anymore, this is IMO the best solution. This is now working in the CI with IDF 5.0+, but if we hit some dependency issue again we should remove the dependency on the codereport package, there is not a lot of code, mostly just templates for HTML, so we should consider implementing this ourselves. Related Internal tracker: IDF-10919 @dobairoland PTAL, this should be ready to merge.
gharchive/pull-request
2024-09-11T07:06:02
2025-04-01T06:38:36.630575
{ "authors": [ "peterdragun" ], "repo": "espressif/clang-tidy-runner", "url": "https://github.com/espressif/clang-tidy-runner/pull/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
260138030
BLE Notifications stop working after SPI init Hi All, I have a weird one for you. I have a program that communicates over BLE (peripheral and central, connected as a peripheral) to a computer. The computer sends some commands and then sends a message to enable/initialise the SPI Master. Before the SPI is enabled, the notifications work well, as expected. After the SPI is initialised (by calling spi_bus_initialize and spi_bus_add_device) the notifications cease to work. I have discovered that the esp_ble_gatts_send_indicate function returns ESP_FAIL and with a bit of modification to the esp_ble_gatts_send_indicate function I seem to be getting the BT_STATUS_NOMEM error. I have checked the amount of free heap to be above 80K at all times. I have also tried increasing the BT task stack size to 8192. This didn't help. The strange part that I mentioned earlier is that this only seems to happen when I build on windows (with the latest pre-compiled toolchain). If I build with the exact same source code (IDF at the same commit with no changes, and Project at the same commit with no changes) on fedora (with the latest pre-compiled toolchain), the program works fine. All notifications are sent as expected, no BT_STATUS_NOMEM occurs. We have tried to get to the bottom of this for a couple of days now to no success. Is there anything that could be causing this? I am sure it is something small or stupid that is causing this.. Please let me know if there is any more information that you would like. @projectgus @igrr Suggest that you post the two sets of bin/elf files I've seen bugs that appear on certain machines only due to memory corruption of static data combined with the build order & layout (ie on different systems the object files are linked in a different order, leads to different order of static memory addresses in RAM). So for some builds the memory corruption (buffer overflow, etc.) corrupts something harmless or lands in padding, but for other builds it may break something critical. There is an item in our ticketing system to make IDF builds more reproducable to avoid this kind of phantom problem, but there are some technical sticking points before we can achieve this. Unfortunately the heap debugging features don't extend to static memory, so if this is indeed static memory being corrupted then they're not useful. But you could try enabling heap poisoning and calling heap_caps_check_integrity() anyhow, just in case: https://esp-idf.readthedocs.io/en/latest/api-reference/system/heap_debug.html#configuration You can also manually look at the linker map files or symbol dumps (via objdump) from each of the ELF files, and look for anything which might stick out. Hi all, Thanks for your responses. It seems to have been related to a globally declared variable that was not declared as static. It seems to have a name the same as found in lots of the ble stack (ret). Declaring this variable locally in a function or adding static to the global declaration fixed the issue. Hi @lucazader, Glad you sorted this out. It seems to have been related to a globally declared variable that was not declared as static. It seems to have a name the same as found in lots of the ble stack (ret). If there's a part of IDF that includes a globally declared variable with a generic name like "ret" then this is also a bug which we should fix. I had a quick grep of the BT stack code and can't see any global symbol named "ret" (lots of local variables using this name). 
If you think there may be such a bug here then please reopen the issue. Hi @projectgus It was a global variable called "ret" in my code, however it seemed to conflict with the local ret variables, or at least one of them. Not 100% sure what was going on. But it definitely was to do with that variable.
gharchive/issue
2017-09-25T02:23:15
2025-04-01T06:38:36.638853
{ "authors": [ "lucazader", "negativekelvin", "projectgus" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/1035", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2105921617
Lacking double-checked locking optimization leads to significantly slower code for static local variables on RISC-V (IDFGH-12004) Answers checklist. [X] I have read the documentation ESP-IDF Programming Guide and the issue is not addressed there. [X] I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there. [X] I have searched the issue tracker for a similar issue and not found a similar issue. IDF version. release/v5.1 and master Espressif SoC revision. esp32c6: v0.1, esp32c3: v0.4 Operating System used. Linux How did you build your project? Command line with idf.py If you are using Windows, please specify command line type. None Development Kit. esp32c3: custom | esp32c6: esp32-c6-devkitc-1-n8 | esp32s3: custom Power Supply used. USB What is the expected behavior? C++11 standard requires the thread-safe, on-demand initialization of local static variables N2660, so GCC introduce a guard variable and guard functions to protect the underlying static variable. When first time control passes through their declaration, the guard functions __cxa_guard_* are called and ensure the successful initialization must be performed exactly once. The guard variable records the current state of the local static variable. Guard functions require some synchronization mechanisms to work, so they're somewhat heavy. GCC introduce another inlined check (double-checked locking optimization) to bypass the guard functions after the local static variable successfully initialized. Problem: Some targets may lack the double-checked locking optimization, and the guard functions are always called no matter whether the local static variable is initialized or not. What is the actual behavior? Accessing to the local static variable on some targets/gcc combination are significantly slower than others. After a bit of disassembling, I found the double-checked locking optimization is missing from all RISC-V targets on GCC 12.2, and still missing from RV32IMC targets on GCC 13.2. Affected targets: IDF ver. GCC ver. C2 C3 C6 H2 P4 S3 (XT) release/v5.1 riscv32-esp-elf/esp-12.2.0_20230208 + + + + x - master riscv32-esp-elf/esp-13.2.0_20230928 + + - - - - '+' for affected, '-' for not affected, 'x' for not supported. S3 is an Xtensa target, not affected by this problem. Just for comparison. Summary of the implemented ISA extensions for each RISC-V target: Target Implemented ISA Ext. ESP32-C2 rv32imc_zicsr_zifencei ESP32-C3 rv32imc_zicsr_zifencei ESP32-C6 rv32imac_zicsr_zifencei ESP32-H2 rv32imac_zicsr_zifencei ESP32-P4 rv32imafc_zicsr_zifencei -march parameters were taken from the particular toolchain files. Steps to reproduce. Use the default config for the each target. Design four test functions in the main/main.cpp. Normal testing function for file-scoped static variable as baseline. main.cpp: rand_global_static() Normal testing function for local static variable (testing function). main.cpp: rand_local_static() Handcrafted testing function of the correctly implemented version of 2. main.cpp: rand_opt_handcraft_local_static() Handcrafted testing function of the affected version of 2. main.cpp: rand_naive_handcraft_local_static() Benchmark all of them. For the expected code, performance of function 1.1 should be on par with 2.1 (Target with -march=rv32imac_zicsr_zifencei and compiled with GCC 13.2) For the affected code, performance of function 1.1 should be similar to 2.4(Target with -march=rv32imc_zicsr_zifencei and compiled with GCC 13.2 and 12.2) Debug Logs. 
* Only C3 and C6 have been tested on the real machine. Followings are some results for {c3, c6} x {release/v5.1, master}. * Benchmark results * Targets with affected codegen * ESP32-C3, release/v5.1 I (135181) local-static: native local static duration: 266015 \ I (135181) local-static: optimized handcrafted local static duration: 13537 | I (135181) local-static: naive handcrafted local static duration: 266021 / I (135191) local-static: global static duration: 9031 I (135201) local-static: penalty of local static: 96.605% ``` * ESP32-C3, master ``` I (26840) local-static: native local static duration: 202079 \ I (26840) local-static: optimized handcrafted local static duration: 14353 | I (26850) local-static: naive handcrafted local static duration: 201665 / I (26860) local-static: global static duration: 9440 I (26860) local-static: penalty of local static: 95.329% ``` * ESP32-C6, release/v5.1 ``` I (58125) local-static: native local static duration: 272966 \ I (58125) local-static: optimized handcrafted local static duration: 13946 | I (58135) local-static: naive handcrafted local static duration: 272560 / I (58145) local-static: global static duration: 9031 I (58145) local-static: penalty of local static: 96.692% ``` * Target with expected result: * ESP32-C6, master ``` I (62305) local-static: native local static duration: 13944 \ I (62305) local-static: optimized handcrafted local static duration: 13535 / I (62315) local-static: naive handcrafted local static duration: 199632 I (62315) local-static: global static duration: 9030 I (62325) local-static: penalty of local static: 35.241% ``` * Disassembly of the expected code: * ESP32-C6, master ``` 420082b8 <rand_local_static()>: { 420082b8: 1141 add sp,sp,-16 420082ba: c606 sw ra,12(sp) static prng_t s_pv_rng; 420082bc: 4080c7b7 lui a5,0x4080c 420082c0: 4e87c783 lbu a5,1256(a5) # 4080c4e8 <guard variable for rand_local_static()::s_pv_rng> // Here's a fence to safely load the ready flag from the guard variable. 420082c4: 0ff0000f fence 420082c8: 0ff7f793 zext.b a5,a5 // Double-checked to bypass the heavy __cxa_guard_* guard functions. 
420082cc: /-- cb85 beqz a5,420082fc <rand_local_static()+0x44> _M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x); 420082ce: /--|-> 4080c737 lui a4,0x4080c 420082d2: | | 4f072503 lw a0,1264(a4) # 4080c4f0 <rand_local_static()::s_pv_rng> _Tp __res = __a * __x + __c; 420082d6: | | 41c657b7 lui a5,0x41c65 420082da: | | e6d78793 add a5,a5,-403 # 41c64e6d <g_saved_pc+0x13e4e71> 420082de: | | 02f50533 mul a0,a0,a5 420082e2: | | 678d lui a5,0x3 420082e4: | | 03978793 add a5,a5,57 # 3039 <RvExcFrameSize+0x2fa5> 420082e8: | | 953e add a0,a0,a5 __res %= __m; 420082ea: | | 800007b7 lui a5,0x80000 420082ee: | | 17fd add a5,a5,-1 # 7fffffff <LP_ANA_PERI+0x1ff4d3ff> 420082f0: | | 8d7d and a0,a0,a5 _M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x); 420082f2: | | 4ea72823 sw a0,1264(a4) } 420082f6: | | 40b2 lw ra,12(sp) 420082f8: | | 0141 add sp,sp,16 420082fa: | | 8082 ret static prng_t s_pv_rng; 420082fc: | \-> 4080c537 lui a0,0x4080c 42008300: | 4e850513 add a0,a0,1256 # 4080c4e8 <guard variable for rand_local_static()::s_pv_rng> 42008304: | d24fc0ef jal 42004828 <__cxa_guard_acquire> 42008308: +----- d179 beqz a0,420082ce <rand_local_static()+0x16> { seed(__s); } 4200830a: | 4585 li a1,1 4200830c: | 4080c537 lui a0,0x4080c 42008310: | 4f050513 add a0,a0,1264 # 4080c4f0 <rand_local_static()::s_pv_rng> 42008314: | f99ff0ef jal 420082ac <std::linear_congruential_engine<unsigned int, 1103515245u, 12345u, 2147483648u>::seed(unsigned int)> 42008318: | 4080c537 lui a0,0x4080c 4200831c: | 4e850513 add a0,a0,1256 # 4080c4e8 <guard variable for rand_local_static()::s_pv_rng> 42008320: | de2fc0ef jal 42004902 <__cxa_guard_release> 42008324: \----- b76d j 420082ce <rand_local_static()+0x16> ``` * ESP32-C3, release/v5.1 (handcrafted version of above-mentioned compiler-generated code) ``` 420077a6 <rand_opt_handcraft_local_static()>: { 420077a6: 1141 add sp,sp,-16 420077a8: c606 sw ra,12(sp) uint8_t r = s_rng_guard_optimized.ready; 420077aa: 3fc8c7b7 lui a5,0x3fc8c 420077ae: 76c7c783 lbu a5,1900(a5) # 3fc8c76c <s_rng_guard_optimized> __sync_synchronize(); 420077b2: 0ff0000f fence if (!r) { 420077b6: /-- cb8d beqz a5,420077e8 <rand_opt_handcraft_local_static()+0x42> _M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x); 420077b8: /--|-> 3fc8c737 lui a4,0x3fc8c 420077bc: | | 76872503 lw a0,1896(a4) # 3fc8c768 <s_rng_optimized> _Tp __res = __a * __x + __c; 420077c0: | | 41c657b7 lui a5,0x41c65 420077c4: | | e6d78793 add a5,a5,-403 # 41c64e6d <_coredump_iram_end+0x18da66d> 420077c8: | | 02f50533 mul a0,a0,a5 420077cc: | | 678d lui a5,0x3 420077ce: | | 03978793 add a5,a5,57 # 3039 <_esp_memprot_align_size+0x2e39> 420077d2: | | 953e add a0,a0,a5 __res %= __m; 420077d4: | | 800007b7 lui a5,0x80000 420077d8: | | fff7c793 not a5,a5 420077dc: | | 8d7d and a0,a0,a5 _M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x); 420077de: | | 76a72423 sw a0,1896(a4) } 420077e2: | | 40b2 lw ra,12(sp) 420077e4: | | 0141 add sp,sp,16 420077e6: | | 8082 ret if (__cxxabiv1::__cxa_guard_acquire((__cxxabiv1::__guard *)&s_rng_guard_optimized)) { 420077e8: | \-> 3fc8c537 lui a0,0x3fc8c 420077ec: | 76c50513 add a0,a0,1900 # 3fc8c76c <s_rng_guard_optimized> 420077f0: | c34fd0ef jal 42004c24 <__cxa_guard_acquire> 420077f4: +----- d171 beqz a0,420077b8 <rand_opt_handcraft_local_static()+0x12> { seed(__s); } 420077f6: | 4585 li a1,1 420077f8: | 3fc8c537 lui a0,0x3fc8c 420077fc: | 76850513 add a0,a0,1896 # 3fc8c768 <s_rng_optimized> 42007800: | ef9ff0ef jal 420076f8 <std::linear_congruential_engine<unsigned int, 1103515245u, 12345u, 
2147483648u>::seed(unsigned int)> __cxxabiv1::__cxa_guard_release((__cxxabiv1::__guard *)&s_rng_guard_optimized); 42007804: | 3fc8c537 lui a0,0x3fc8c 42007808: | 76c50513 add a0,a0,1900 # 3fc8c76c <s_rng_guard_optimized> 4200780c: | cf4fd0ef jal 42004d00 <__cxa_guard_release> 42007810: \----- b765 j 420077b8 <rand_opt_handcraft_local_static()+0x12> ``` * Disassembly of the affected ()```: * ESP32-C3, release/v5.1 * ESP32-C6, release/v5.1 (disassembly for these two targets are almost identical) * ESP32-C3, master (only slightly different from the previous one, omitted) ``` 42007706 <rand_local_static()>: { 42007706: 1141 add sp,sp,-16 42007708: c606 sw ra,12(sp) static prng_t s_pv_rng; 4200770a: 3fc8c537 lui a0,0x3fc8c 4200770e: 77050513 add a0,a0,1904 # 3fc8c770 <guard variable for rand_local_static()::s_pv_rng> // Always call the guard function without checking 42007712: d12fd0ef jal 42004c24 <__cxa_guard_acquire> 42007716: /-- e90d bnez a0,42007748 <rand_local_static()+0x42> _M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x); 42007718: /--|-> 3fc8c737 lui a4,0x3fc8c 4200771c: | | 77872503 lw a0,1912(a4) # 3fc8c778 <rand_local_static()::s_pv_rng> _Tp __res = __a * __x + __c; 42007720: | | 41c657b7 lui a5,0x41c65 42007724: | | e6d78793 add a5,a5,-403 # 41c64e6d <_coredump_iram_end+0x18da66d> 42007728: | | 02f50533 mul a0,a0,a5 4200772c: | | 678d lui a5,0x3 4200772e: | | 03978793 add a5,a5,57 # 3039 <_esp_memprot_align_size+0x2e39> 42007732: | | 953e add a0,a0,a5 __res %= __m; 42007734: | | 800007b7 lui a5,0x80000 42007738: | | fff7c793 not a5,a5 4200773c: | | 8d7d and a0,a0,a5 _M_x = __detail::__mod<_UIntType, __m, __a, __c>(_M_x); 4200773e: | | 76a72c23 sw a0,1912(a4) } 42007742: | | 40b2 lw ra,12(sp) 42007744: | | 0141 add sp,sp,16 42007746: | | 8082 ret { seed(__s); } 42007748: | \-> 4585 li a1,1 4200774a: | 3fc8c537 lui a0,0x3fc8c 4200774e: | 77850513 add a0,a0,1912 # 3fc8c778 <rand_local_static()::s_pv_rng> 42007752: | fa7ff0ef jal 420076f8 <std::linear_congruential_engine<unsigned int, 1103515245u, 12345u, 2147483648u>::seed(unsigned int)> static prng_t s_pv_rng; 42007756: | 3fc8c537 lui a0,0x3fc8c 4200775a: | 77050513 add a0,a0,1904 # 3fc8c770 <guard variable for rand_local_static()::s_pv_rng> 4200775e: | da2fd0ef jal 42004d00 <__cxa_guard_release> 42007762: \----- bf5d j 42007718 <rand_local_static()+0x12> ``` More Information. Workarounds: constinit keyword for the type with constexpr constructor and default destructor. Use file-scoped static variable instead rand_global_static(); Testing code: CMakeLists.txt cmake_minimum_required(VERSION 3.16) include($ENV{IDF_PATH}/tools/cmake/project.cmake) project(static-local-variable-test) main/CMakeLists.txt idf_component_register(SRCS "main.cpp") main/main.cpp #include "esp_log.h" #include "freertos/FreeRTOS.h" #include "freertos/task.h" #include <chrono> #include <cxxabi.h> #include <numeric> #include <random> // Faster if using power of two. Make cpu busy for a while. 
using prng_t = std::linear_congruential_engine<unsigned, 1103515245, 12345, 1u << 31>; static const char TAG[] = "local-static"; prng_t g_rng; int rand_global_static() { return g_rng(); } /** @brief Normal local static getter */ int rand_local_static() { static prng_t s_pv_rng; return s_pv_rng(); } /** * @brief ABI-defined guard variable for local static variable and lock functions * @note Defined in the ${IDF_PATH}/components/cxx/cxx_guards.cpp */ typedef struct { uint8_t ready; uint8_t pending; } guard_t; static guard_t s_rng_guard_optimized = {0, 0}; static prng_t s_rng_optimized; int rand_opt_handcraft_local_static() { #ifdef __xtensa__ // Only S3 has this extra fence __sync_synchronize(); #endif uint8_t r = s_rng_guard_optimized.ready; __sync_synchronize(); if (!r) { if (__cxxabiv1::__cxa_guard_acquire((__cxxabiv1::__guard *)&s_rng_guard_optimized)) { new (&s_rng_optimized) prng_t; __cxxabiv1::__cxa_guard_release((__cxxabiv1::__guard *)&s_rng_guard_optimized); } } return s_rng_optimized(); } static guard_t s_rng_guard_naive = {0, 0}; static prng_t s_rng_naive; int rand_naive_handcraft_local_static() { if (__cxxabiv1::__cxa_guard_acquire((__cxxabiv1::__guard *)&s_rng_guard_naive)) { new (&s_rng_naive) prng_t; __cxxabiv1::__cxa_guard_release((__cxxabiv1::__guard *)&s_rng_guard_naive); } return s_rng_naive(); } static constexpr size_t repeat = UINT16_MAX; template <int (*Fn)(void)> static unsigned test_runner() { unsigned randval = 0; std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now(); for (size_t i = 0; i < repeat; i++) { randval += Fn(); } std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now(); unsigned dur = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count(); return dur; } static void test_task(void *) { static constexpr auto width = std::numeric_limits<unsigned>::digits10; while (true) { auto dur_l = test_runner<rand_local_static>(); auto dur_oh = test_runner<rand_opt_handcraft_local_static>(); auto dur_nh = test_runner<rand_naive_handcraft_local_static>(); auto dur_g = test_runner<rand_global_static>(); ESP_LOGI(TAG, "native local static duration: %*u", width, dur_l); ESP_LOGI(TAG, "optimized handcrafted local static duration: %*u", width, dur_oh); ESP_LOGI(TAG, "naive handcrafted local static duration: %*u", width, dur_nh); ESP_LOGI(TAG, "global static duration: %*u", width, dur_g); ESP_LOGI(TAG, "penalty of local static: %.3f%%", (1.0f - (float)dur_g / dur_l) * 100.0f); vTaskDelay(pdMS_TO_TICKS(1000)); } } extern "C" void app_main() { #if portNUM_PROCESSORS > 1 const BaseType_t core = 1; #else const BaseType_t core = 0; #endif TaskHandle_t p_tsk; assert(xTaskCreatePinnedToCore(&test_task, TAG, 4096, nullptr, CONFIG_ESP32_PTHREAD_TASK_PRIO_DEFAULT, &p_tsk, core) == pdPASS); } Problem: Some targets may lack the double-checked locking optimization, and the guard functions are always called no matter whether the local static variable is initialized or not. Unfortunately, I think this is probably expected, since the double-checked lock initialization using an atomic guard variable is only implemented in GCC on targets with support for atomic instructions (i.e. 
a extension on RISC-V): Static initialization expansion relies on get_guard_cond (code) get_guard_cond generates a constant zero expression if is_atomic_expensive_p is true (code) is_atomic_expensive_p calls can_compare_and_swap_p with the 2nd argument (allow_libcall) set to false (code) So if the target doesn't support atomics via instructions (only via library calls) then is_atomic_expensive_p will return true, and the atomic guard related code won't be generated. I think this probably needs to be reported in upstream GCC as a "allow atomic libcalls for double-check guard implementation" type of a feature request. Newer Espressif RISC-V chips should all have the "A" extension, so this probably won't be an issue going forward. Hi igrr: But even on the RV32IMAC targets (C6 and onward), it seems like the generated code doesn't utilize any instructions from the A extension at all? That's right, the load operation generated by build_atomic_load_type is lbu, which isn't an atomic instruction. Since the "initialized" flag is represented by 1 byte, and we don't need to atomically modify the flag, it's not necessary to use lr/sc instructions there, so seems like the compiler is doing the right thing there. Unfortunately, I don't know enough about other architectures which GCC targets to tell if there is a specific reason for using build_atomic_load_type there. I do see the same behavior for other architectures, though — https://godbolt.org/z/cz5cvabv8 illustrates the same issue on Xtensa. On ESP32-S2 (no atomic instructions) no double-checked locking is used, but on ESP32 (has atomic instructions) it is used.
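As a concrete illustration of the constinit workaround listed under "More Information", here is a minimal sketch; it assumes a C++20-capable dialect and a type with a constexpr constructor and trivial destructor (which is what constinit requires), and the type below is a stand-in rather than the std:: engine used in the benchmark:

```cpp
#include <cstdint>

// Stand-in type with a constexpr constructor; it is constant-initializable,
// so no runtime guard is needed for a static instance of it.
struct simple_prng {
    uint32_t state;
    constexpr simple_prng(uint32_t seed = 1) : state(seed) {}
    uint32_t next() {
        state = state * 1103515245u + 12345u;   // same LCG constants as above
        return state & 0x7fffffffu;
    }
};

int rand_constinit_local_static() {
    // constinit forces constant initialization: the compiler emits no guard
    // variable and no __cxa_guard_* calls, so every access is a plain load.
    static constinit simple_prng s_rng{1};
    return s_rng.next();
}
```

With this, the local static behaves like the file-scoped rand_global_static() case performance-wise, because nothing is left to initialize on first use.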
gharchive/issue
2024-01-29T16:44:20
2025-04-01T06:38:36.668614
{ "authors": [ "andylinpersonal", "igrr" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/13072", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2308249487
NO_AP_FOUND_IN_AUTHMODE_THRESHOLD while threshold's condition is respected (IDFGH-12863) Answers checklist. [X] I have read the documentation ESP-IDF Programming Guide and the issue is not addressed there. [X] I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there. [X] I have searched the issue tracker for a similar issue and not found a similar issue. IDF version. v5.2.1 Espressif SoC revision. ESP32-C6 v0.0 Operating System used. Windows How did you build your project? VS Code IDE If you are using Windows, please specify command line type. None Development Kit. ESP32-C6-WROOM-1 Power Supply used. USB What is the expected behavior? I expect the ESP32 STA to connect to the AP, as the STA threshold is set to WIFI_AUTH_WPA2 and the AP's authmode is set to >WPA2. What is the actual behavior? I am getting NO_AP_FOUND_IN_AUTHMODE_THRESHOLD, while the authmode.threshold in the STA is greater than AP's one. No matter what threshold i put in authmode.threshold, the only AP configuration that it will connect to is WPA3 Steps to reproduce. Set station config's authmode.threshold to anything/don't set it Connect to a wifi network with a security inferior to WPA3 Debug Logs. SCAN RESULT : SSID=TestReseau, AUTHMODE=3 I (241251) http_server: POST /connect.json I (241261) http_server: ssid: TestReseau, password: Password1! I (241261) wifi_manager: MESSAGE: ORDER_CONNECT_STA I (241271) wifi_manager: wifi_sta_config: ssid:TestReseau password:Password1! I (241281) wifi_manager: wifi_sta_config: sta_authmode 3 I (241291) wifi_manager: wifi_sta_config: RM enabled 1 I (241291) wifi_manager: wifi_sta_config: BTM enabled 1 I (241301) wifi_manager: wifi_sta_config: MBO enabled 1 I (241311) wifi_manager: wifi_sta_config: FT enabled 1 I (241311) wifi_manager: wifi_sta_config: OWE enabled 1 I (241321) wifi_manager: wifi_sta_config: PMF capable enabled 1 I (241331) wifi_manager: wifi_sta_config: PMF required enabled 0 I (241341) wifi_manager: wifi_sta_config: transition_disable enabled 0 I (244171) wifi_manager: WIFI_EVENT_STA_DISCONNECTED I (244171) wifi_manager: MESSAGE: EVENT_STA_DISCONNECTED with Reason code: 211 I (244181) wifi_manager: MESSAGE: EVENT_STA_DISCONNECTED with rssi: -128 I (244181) wifi_manager: Set STA IP String to: 0.0.0.0 ### More Information. I have tried setting OWE to 1 in the STA config, and also setting transition_disable to 1 in the STA config. Ultimately, the STA will only connect to the AP if the security of the AP is set to WPA3 Hi @evoon Could you please enable WIFI debug print and share the log? CONFIG_WPA_DEBUG_PRINT=y CONFIG_MBEDTLS_DEBUG=y CONFIG_MBEDTLS_DEBUG_LEVEL_VERBOSE=y CONFIG_MBEDTLS_DEBUG_LEVEL=4 CONFIG_LOG_DEFAULT_LEVEL_DEBUG=y CONFIG_LOG_DEFAULT_LEVEL=4
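For context, a minimal sketch of how the STA configuration in question is typically populated; the SSID/password are the placeholders visible in the log above, the field names follow the public esp_wifi API, and Wi-Fi init/start plus event handling are assumed to happen elsewhere:

```c
#include <string.h>
#include "esp_wifi.h"

static void sta_connect_with_threshold(void)
{
    wifi_config_t cfg = { 0 };
    strlcpy((char *)cfg.sta.ssid, "TestReseau", sizeof(cfg.sta.ssid));
    strlcpy((char *)cfg.sta.password, "Password1!", sizeof(cfg.sta.password));
    // APs weaker than this authmode should be rejected; a WPA2 AP is expected to pass.
    cfg.sta.threshold.authmode = WIFI_AUTH_WPA2_PSK;
    cfg.sta.pmf_cfg.capable = true;
    cfg.sta.pmf_cfg.required = false;

    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA));
    ESP_ERROR_CHECK(esp_wifi_set_config(WIFI_IF_STA, &cfg));
    ESP_ERROR_CHECK(esp_wifi_connect());
}
```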
gharchive/issue
2024-05-21T12:57:41
2025-04-01T06:38:36.677153
{ "authors": [ "evoon", "vik-gokhale" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/13827", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
194595336
Loading application from openOCD Hi, Is it possible to upload the firmware through GDB? When I try it, I get this error from OpenOCD: Error: esp32.cpu0: xtensa_write_memory (line 1024): DSR (8020CC13) indicates DIR instruction generated an exception! Warn : esp32.cpu0: Failed writing 4096 bytes at address 0x3F400010 Regards Jonathan Not yet; OpenOCD is not aware that the application lives in flash and will try to write to the flash as if it's RAM. Remedying this is on our ToDo-list, but we haven't gotten around to this. Hi, Thanks for the info. Jonathan Hi, I got this problem as well. The xtensa-gdb works OK when started from the command line, but it fails with this message when I try to start it from Eclipse (Neon.3): it tries to write to the drom0_0_seg and, obviously, fails, since that is read-only memory. So, can somebody point me to the Eclipse configuration that avoids this write? Thanks. Never mind, I found a fix for that. Thanks.
gharchive/issue
2016-12-09T13:20:48
2025-04-01T06:38:36.680236
{ "authors": [ "Spritetm", "dantonets", "dumarjo" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/153", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
336256945
using ccache with esp-idf Hi all, is it possible to use ccache to speed up the build? Thanks Nicola Lunghi totally you can ;) many of us already use it :) OK, I managed to create a directory called bin-ccache in the xtensa installation directory and link all the executables in bin to ccache; then I added the xtensa bin-ccache folder to PATH before the bin folder itself, and ccache works perfectly (a rough sketch of this setup is shown below). Hi @nicola-lunghi , Just FYI, if you're still using the cmake branch then you should get ccache enabled automatically if it's on your PATH: https://github.com/espressif/esp-idf/blob/feature/cmake/tools/cmake/idf_functions.cmake#L101 (But setting up links in the way you mention will also work - I have this set on my local system as well.) Angus
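A rough sketch of the symlink setup described above; the toolchain path is an example and will differ per installation, and only the compiler front-ends need to be wrapped:

```sh
TOOLCHAIN=$HOME/esp/xtensa-esp32-elf
mkdir -p "$TOOLCHAIN/bin-ccache"
# Create ccache masquerade links for the compiler front-ends
for tool in xtensa-esp32-elf-gcc xtensa-esp32-elf-g++; do
    ln -sf "$(command -v ccache)" "$TOOLCHAIN/bin-ccache/$tool"
done
# Make sure the wrapper directory is found before the real one
export PATH="$TOOLCHAIN/bin-ccache:$PATH"
```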
gharchive/issue
2018-06-27T14:34:04
2025-04-01T06:38:36.683061
{ "authors": [ "me-no-dev", "nicola-lunghi", "projectgus" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/2114", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
419746640
SPI slave broken by commit 58955a7 (IDFGH-702) Environment Development Kit: [ESP32-DevKitC|] Using VSPI with IOMUX pins as described in https://github.com/espressif/esp-idf/blob/master/docs/en/api-reference/peripherals/spi_slave.rst No dma Init code: host_(VSPI_HOST); //... spi_bus_config_t buscfg = { .mosi_io_num = 23, .miso_io_num = 19, .sclk_io_num = 18, .quadwp_io_num = -1, .quadhd_io_num = -1 }; spi_slave_interface_config_t slvcfg = { .spics_io_num = 5, .flags = 0, .queue_size = 1, .mode = 0 }; //... RETURN_NOT_OK(spi_slave_initialize(host_, &buscfg, &slvcfg, 0)); Problem Description Using ESP32 as an SPI slave, running against a non-esp MCU master. As found by git bisect, before commit 58955a7 data was transmitted reliably, after that commit started getting errors on the master (see below for the types of errors). The anatomy of the error (1 or 2 bit shifts) seems to hint at timing problems. Reverting the changes to spi_slave.c on top of master as of a few days ago (ebdcbe8c) makes the problem go away. Expected Behavior Data transmitted by the slave arrives at the master as sent. Actual Behavior Data is usually bit shifted by 1 bit (e.g. an xFE byte sent by the ESP32 slave becomes xFF at the master), sometimes (rarely) arrives correctly, sometimes it's bit shifted by 2 bits. Sorry could you please double check the commit id — 58955a you've mentioned doesn't seem to be a commit in IDF repo? Hi @igrr I was missing one more letter to be uniquely identifiable :) I changed the title/links to the right one, but here's a more direct one: https://github.com/espressif/esp-idf/commit/58955a79a27d3c7331eaec6e464878df42615a36 I'm talking specifically about the changes to spi_slave.c. I haven't tested or reverted the other changes. @dralves Can you provide the MCU you are using, as well as the spi clock speed? The problem is that, ESP32 slave has a delay (quite large!!!!!) on the MISO line after the SPI clock launch edge. When the GPIO matrix is used, it's 62.5ns, and if IOMUX isused, it's 37.5ns. Which means, it cannot meet the timing requirements when the SPI clock is above 8MHz (GPIO matrix), or 13MHz (IOMUX) (See the programming guide). In the code before, we mistakenly shifted the timing of both launch and latch edge by half a spi clock ahead. This solves some DMA issues, and made the timing performance better in some high frequency cases. But basicly it's incorrect, and complained by someone else in #1346 and #2393 . So we fixed it. If you prefer the timing configurations, you can: If you are using mode 1/3, set the mode of slave to mode 0/2, it's half a clock ahead then mode 1/3. But the DMA should be disabled. If you are using mode 0/2, turn on the DMA. The workaround in 58955a7 will help you shift the edges half a clock ahead. Hi @ginkgm The MCU/SPI master is a PIC32, mode 0, the master sets the speed at 10Mhz. I read the docs you wrote :), so even though at first I was using the GPIO matrix I changed to IOMUX. Then I started reducing the speed I went all the way down to 5MHz and stlll the comms were unreliable. That's when I finally tried looking into reverting the changes. SPI had been super-reliable before. Oh, btw, not sure if relevant this exact same code & mode & frequency on the master works flawlessly against an ESP8266. Another note I did see the comma becoming slightly better as I reduced speed. 
But at 5 MHz they were still very unreliable and that’s the lowest I could go in terms of speed. Yet another note: The table in #1346 states:

Registers            mode0  mode1  mode2  mode3
SPI_CK_IDLE_EDGE       0      0      1      1
SPI_CK_I_EDGE          0      1      1      0
SPI_MISO_DELAY_MODE    0      0      0      0
SPI_MISO_DELAY_NUM     0      0      0      0
SPI_MOSI_DELAY_MODE    2      1      1      2
SPI_MOSI_DELAY_NUM     0      0      0      0

but the currently checked in code is:

if (mode == 0) {
    //The timing needs to be fixed to meet the requirements of DMA
    spihost[host]->hw->pin.ck_idle_edge = 1;
    spihost[host]->hw->user.ck_i_edge = 0;
    spihost[host]->hw->ctrl2.miso_delay_mode = 0;
    spihost[host]->hw->ctrl2.miso_delay_num = 0;
    spihost[host]->hw->ctrl2.mosi_delay_mode = 2;
    spihost[host]->hw->ctrl2.mosi_delay_num = 2;

If I understand things correctly, this doesn't match the table. Not sure whether the table is accurate, but according to the table this should be:

if (mode == 0) {
    //The timing needs to be fixed to meet the requirements of DMA
    spihost[host]->hw->pin.ck_idle_edge = 0;
    spihost[host]->hw->user.ck_i_edge = 0;
    spihost[host]->hw->ctrl2.miso_delay_mode = 0;
    spihost[host]->hw->ctrl2.miso_delay_num = 0;
    spihost[host]->hw->ctrl2.mosi_delay_mode = 2;
    spihost[host]->hw->ctrl2.mosi_delay_num = 0;

@dralves The TRM is already updated: https://www.espressif.com/sites/default/files/documentation/esp32_technical_reference_manual_en.pdf @ginkgm might it be hardware rev dependent? I have quite a few esp32 boards, but mainly use adafruits huzzah32 with the ESP-WROOM-32 (not sure which rev, but think it's pretty old). I say this because the current implementation is supposed to work well at low frequencies (<7MHz even if I got the GPIO/IOMUX thing wrong) and that's not what I observed. @ginkgm in any case if this is only a problem for me, I already have my own fork of esp-idf (where I include arduino as a component) so I can just maintain my fork with this additional change. Feel free to close this if you think it's not relevant/not a problem for other folks. I just raised the issue in case others run into the same problem where their boards are working fine and suddenly stop working after an update. Some suggestions on using PIC32: I assume you are using the model as in this spec. Mode 0 corresponds to CKP=0, CKE=1. SP40=15ns, 1/(15ns+62.5ns)/2=6.5MHz. I think the slack is small, and maybe you are still going via the GPIO matrix? Maybe you can enable the debug message in spi_common.c to see whether you're using the IOMUX or not. Finally, you can set SMP=1 to delay the master sample time, or use the trick I mentioned above to advance the slave launch time. thanks @dralves @ginkgm @dralves thanks a lot!
gharchive/issue
2019-03-12T00:33:01
2025-04-01T06:38:36.701819
{ "authors": [ "dralves", "ginkgm", "igrr" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/3162", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
491152910
Linking Issue (IDFGH-1829) ----------------------------- Delete above ----------------------------- Environment Development Kit: ESP32-DevKitC Kit version Module or chip used: ESP32-WROOM-32 IDF version v4.1-dev-141-ga7e8d87d Build System: CMake Compiler version xtensa-esp32-elf-gcc (crosstool-NG esp32-2019r1) 8.2.0 Operating System: macOs & Ubunut Power Supply: USB Problem Description When building and flahsing example projects everything works just fine. Later i started adding src files to the projec. When building all files are compiled succesfully, while the project is been linked my src files cant be found. Below code samples and build log. Steps to repropduce Add dummy.c and dummy.h to blink example code. include and call doNothing() function defined at dummy.h Build Code to reproduce this issue // dummy.c #include "dummy.h" int doNothing.h(int i) { return i; } // dummy.h #ifndef _DUMMY_H_ #define _DUMMY_H_ int doNothing(int i); #endif //blink.c /* Blink Example This example code is in the Public Domain (or CC0 licensed, at your option.) Unless required by applicable law or agreed to in writing, this software is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */ #include <stdio.h> #include "freertos/FreeRTOS.h" #include "freertos/task.h" #include "driver/gpio.h" #include "sdkconfig.h" #include "dummy.h" /* Can use project configuration menu (idf.py menuconfig) to choose the GPIO to blink, or you can edit the following line and set a number here. */ #define BLINK_GPIO CONFIG_BLINK_GPIO void app_main(void) { /* Configure the IOMUX register for pad BLINK_GPIO (some pads are muxed to GPIO on reset already, but some default to other functions and need to be switched to GPIO. Consult the Technical Reference for a list of pads and their default functions.) */ doNothing(1); gpio_pad_select_gpio(BLINK_GPIO); /* Set the GPIO as a push/pull output */ gpio_set_direction(BLINK_GPIO, GPIO_MODE_OUTPUT); while(1) { /* Blink off (output low) */ printf("Turning off the LED\n"); gpio_set_level(BLINK_GPIO, 0); vTaskDelay(1000 / portTICK_PERIOD_MS); /* Blink on (output high) */ printf("Turning on the LED\n"); gpio_set_level(BLINK_GPIO, 1); vTaskDelay(1000 / portTICK_PERIOD_MS); } } Debug Logs Checking Python dependencies... Python requirements from /Users/fede/esp/esp-idf/requirements.txt are satisfied. Executing action: all (aliases: build) Running ninja in directory /Users/fede/esp/esp-idf/examples/get-started/blink/build Executing "ninja all"... [1/8] Performing build step for 'bootloader' ninja: no work to do. 
[5/6] Linking CXX executable blink.elf FAILED: blink.elf : && /Users/fede/.espressif/tools/xtensa-esp32-elf/esp32-2019r1-8.2.0/xtensa-esp32-elf/bin/xtensa-esp32-elf-g++ -mlongcalls -Wno-frame-address -nostdlib CMakeFiles/blink.elf.dir/project_elf_src.c.obj -o blink.elf esp-idf/esp_ringbuf/libesp_ringbuf.a esp-idf/driver/libdriver.a esp-idf/wpa_supplicant/libwpa_supplicant.a esp-idf/efuse/libefuse.a esp-idf/bootloader_support/libbootloader_support.a esp-idf/app_update/libapp_update.a esp-idf/spi_flash/libspi_flash.a esp-idf/nvs_flash/libnvs_flash.a esp-idf/esp_wifi/libesp_wifi.a esp-idf/esp_eth/libesp_eth.a esp-idf/lwip/liblwip.a esp-idf/tcpip_adapter/libtcpip_adapter.a esp-idf/esp_event/libesp_event.a esp-idf/pthread/libpthread.a esp-idf/espcoredump/libespcoredump.a esp-idf/esp32/libesp32.a esp-idf/xtensa/libxtensa.a esp-idf/esp_common/libesp_common.a esp-idf/esp_rom/libesp_rom.a esp-idf/soc/libsoc.a esp-idf/log/liblog.a esp-idf/heap/libheap.a esp-idf/freertos/libfreertos.a esp-idf/vfs/libvfs.a esp-idf/newlib/libnewlib.a esp-idf/cxx/libcxx.a esp-idf/app_trace/libapp_trace.a esp-idf/asio/libasio.a esp-idf/cbor/libcbor.a esp-idf/coap/libcoap.a esp-idf/console/libconsole.a esp-idf/nghttp/libnghttp.a esp-idf/esp-tls/libesp-tls.a esp-idf/esp_adc_cal/libesp_adc_cal.a esp-idf/esp_gdbstub/libesp_gdbstub.a esp-idf/tcp_transport/libtcp_transport.a esp-idf/esp_http_client/libesp_http_client.a esp-idf/esp_http_server/libesp_http_server.a esp-idf/esp_https_ota/libesp_https_ota.a esp-idf/protobuf-c/libprotobuf-c.a esp-idf/protocomm/libprotocomm.a esp-idf/mdns/libmdns.a esp-idf/esp_local_ctrl/libesp_local_ctrl.a esp-idf/esp_websocket_client/libesp_websocket_client.a esp-idf/expat/libexpat.a esp-idf/wear_levelling/libwear_levelling.a esp-idf/sdmmc/libsdmmc.a esp-idf/fatfs/libfatfs.a esp-idf/freemodbus/libfreemodbus.a esp-idf/jsmn/libjsmn.a esp-idf/json/libjson.a esp-idf/libsodium/liblibsodium.a esp-idf/mqtt/libmqtt.a esp-idf/openssl/libopenssl.a esp-idf/spiffs/libspiffs.a esp-idf/ulp/libulp.a esp-idf/unity/libunity.a esp-idf/wifi_provisioning/libwifi_provisioning.a esp-idf/main/libmain.a -Wl,--cref -Wl,--Map=/Users/fede/esp/esp-idf/examples/get-started/blink/build/blink.map esp-idf/asio/libasio.a esp-idf/cbor/libcbor.a esp-idf/coap/libcoap.a esp-idf/esp_adc_cal/libesp_adc_cal.a esp-idf/esp_gdbstub/libesp_gdbstub.a esp-idf/esp_https_ota/libesp_https_ota.a esp-idf/esp_http_client/libesp_http_client.a esp-idf/esp_local_ctrl/libesp_local_ctrl.a esp-idf/esp_websocket_client/libesp_websocket_client.a esp-idf/expat/libexpat.a esp-idf/fatfs/libfatfs.a esp-idf/wear_levelling/libwear_levelling.a esp-idf/sdmmc/libsdmmc.a esp-idf/freemodbus/libfreemodbus.a esp-idf/jsmn/libjsmn.a esp-idf/libsodium/liblibsodium.a esp-idf/mqtt/libmqtt.a esp-idf/tcp_transport/libtcp_transport.a esp-idf/esp-tls/libesp-tls.a esp-idf/openssl/libopenssl.a esp-idf/spiffs/libspiffs.a esp-idf/ulp/libulp.a esp-idf/unity/libunity.a esp-idf/wifi_provisioning/libwifi_provisioning.a esp-idf/protocomm/libprotocomm.a esp-idf/esp_http_server/libesp_http_server.a esp-idf/nghttp/libnghttp.a esp-idf/protobuf-c/libprotobuf-c.a esp-idf/mdns/libmdns.a esp-idf/console/libconsole.a esp-idf/json/libjson.a esp-idf/esp_ringbuf/libesp_ringbuf.a esp-idf/driver/libdriver.a esp-idf/wpa_supplicant/libwpa_supplicant.a esp-idf/efuse/libefuse.a esp-idf/bootloader_support/libbootloader_support.a esp-idf/app_update/libapp_update.a esp-idf/spi_flash/libspi_flash.a esp-idf/nvs_flash/libnvs_flash.a esp-idf/esp_wifi/libesp_wifi.a esp-idf/esp_eth/libesp_eth.a 
esp-idf/lwip/liblwip.a esp-idf/tcpip_adapter/libtcpip_adapter.a esp-idf/esp_event/libesp_event.a esp-idf/pthread/libpthread.a esp-idf/espcoredump/libespcoredump.a esp-idf/esp32/libesp32.a esp-idf/xtensa/libxtensa.a esp-idf/esp_common/libesp_common.a esp-idf/esp_rom/libesp_rom.a esp-idf/soc/libsoc.a esp-idf/log/liblog.a esp-idf/heap/libheap.a esp-idf/freertos/libfreertos.a esp-idf/vfs/libvfs.a esp-idf/newlib/libnewlib.a esp-idf/cxx/libcxx.a esp-idf/app_trace/libapp_trace.a esp-idf/mbedtls/mbedtls/library/libmbedtls.a esp-idf/mbedtls/mbedtls/library/libmbedcrypto.a esp-idf/mbedtls/mbedtls/library/libmbedx509.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libcoexist.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libcore.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libespnow.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libmesh.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libnet80211.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libphy.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libpp.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/librtc.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libsmartconfig.a esp-idf/esp_ringbuf/libesp_ringbuf.a esp-idf/driver/libdriver.a esp-idf/wpa_supplicant/libwpa_supplicant.a esp-idf/efuse/libefuse.a esp-idf/bootloader_support/libbootloader_support.a esp-idf/app_update/libapp_update.a esp-idf/spi_flash/libspi_flash.a esp-idf/nvs_flash/libnvs_flash.a esp-idf/esp_wifi/libesp_wifi.a esp-idf/esp_eth/libesp_eth.a esp-idf/lwip/liblwip.a esp-idf/tcpip_adapter/libtcpip_adapter.a esp-idf/esp_event/libesp_event.a esp-idf/pthread/libpthread.a esp-idf/espcoredump/libespcoredump.a esp-idf/esp32/libesp32.a esp-idf/xtensa/libxtensa.a esp-idf/esp_common/libesp_common.a esp-idf/esp_rom/libesp_rom.a esp-idf/soc/libsoc.a esp-idf/log/liblog.a esp-idf/heap/libheap.a esp-idf/freertos/libfreertos.a esp-idf/vfs/libvfs.a esp-idf/newlib/libnewlib.a esp-idf/cxx/libcxx.a esp-idf/app_trace/libapp_trace.a esp-idf/mbedtls/mbedtls/library/libmbedtls.a esp-idf/mbedtls/mbedtls/library/libmbedcrypto.a esp-idf/mbedtls/mbedtls/library/libmbedx509.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libcoexist.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libcore.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libespnow.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libmesh.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libnet80211.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libphy.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libpp.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/librtc.a /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32/libsmartconfig.a -u esp_app_desc -L /Users/fede/esp/esp-idf/components/esp_wifi/lib_esp32 -u pthread_include_pthread_impl -u pthread_include_pthread_cond_impl -u pthread_include_pthread_local_storage_impl -L /Users/fede/esp/esp-idf/examples/get-started/blink/build/esp-idf/esp32 -T esp32_out.ld -u app_main -L /Users/fede/esp/esp-idf/examples/get-started/blink/build/esp-idf/esp32/ld -T esp32.project.ld -L /Users/fede/esp/esp-idf/components/esp32/ld -T esp32.peripherals.ld -u call_user_start_cpu0 -u ld_include_panic_highint_hdl /Users/fede/esp/esp-idf/components/xtensa/esp32/libhal.a -Wl,--gc-sections -L /Users/fede/esp/esp-idf/components/esp_rom/esp32/ld -T esp32.rom.ld -T esp32.rom.libgcc.ld -T esp32.rom.syscalls.ld -T esp32.rom.newlib-data.ld -T esp32.rom.newlib-funcs.ld -Wl,--undefined=uxTopUsedPriority -u 
vfs_include_syscalls_impl esp-idf/newlib/libnewlib.a -u newlib_include_locks_impl -u newlib_include_heap_impl -u newlib_include_syscalls_impl -u newlib_include_pthread_impl -lstdc++ -u __cxa_guard_dummy -u __cxx_fatal_exception -lgcov -lc -lm -lgcc && : /Users/fede/.espressif/tools/xtensa-esp32-elf/esp32-2019r1-8.2.0/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/8.2.0/../../../../xtensa-esp32-elf/bin/ld: esp-idf/main/libmain.a(blink.c.obj):(.literal.app_main+0x8): undefined reference to `doNothing' /Users/fede/.espressif/tools/xtensa-esp32-elf/esp32-2019r1-8.2.0/xtensa-esp32-elf/bin/../lib/gcc/xtensa-esp32-elf/8.2.0/../../../../xtensa-esp32-elf/bin/ld: esp-idf/main/libmain.a(blink.c.obj): in function `app_main': /Users/fede/esp/esp-idf/examples/get-started/blink/build/../main/blink.c:29: undefined reference to `doNothing' collect2: error: ld returned 1 exit status ninja: build stopped: subcommand failed. ninja failed with exit code [1] Other items if possible sdkconfig.zip Have you also modified the component CMakeLists.txt file to include the new source file? See https://docs.espressif.com/projects/esp-idf/en/latest/api-guides/build-system.html#minimal-component-cmakelists for an example. Great! that was the issue. fwiw - I had a similar issue with the ninja: build stopped: subcommand failed. and Build finished with exit code 1 non-descript errors. Turned out in my case that my default app to open .py files was VSCode and not Python. See https://github.com/nanoframework/Home/issues/564
gharchive/issue
2019-09-09T15:03:03
2025-04-01T06:38:36.712339
{ "authors": [ "fvizzon", "gojimmypi", "igrr" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/4038", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
743298544
Undefined reference to 'sysconf' and 'posix_memalign' (IDFGH-4268) Environment Environment type: [PlatformIO (Arduino)] Development Kit: [ESP32-S2-Saola-1M] Module or chip used: [ESP32-S2-WROOM] IDF version [idf-release/v4.2] Compiler version: [esp2020r2 8.2.0] Operating System: [Windows] Power Supply: [USB] Problem Description When trying to compile the MVCE below inside PlataformIO, I get undefined references to sysconf and posix_memalign. I need those functions because of a third party library (tiny-dnn especifically). Expected Behavior Are those functions implemented? What are the alternatives? Code to reproduce this issue #include <Arduino.h> #include <cstdlib> #include <thread> void setup() { void* ptr{ nullptr }; posix_memalign( &ptr, 16, 1024 ); long value = std::thread::hardware_concurrency(); } void loop() { } Debug Logs Error: [...] Building in release mode Linking .pio\build\esp32doit-devkit-v1\firmware.elf c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/bin/ld.exe: C:\Users\Giovanni\.platformio\packages\framework-arduinoespressif32\tools\sdk\esp32s2\lib\librtc.a(rtc.o)(.text.rtc_pad_gpio_wakeup+0xa9): could not decode instruction; possible configuration mismatch c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/bin/ld.exe: c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/lib\libstdc++.a(thread.o):(.literal._ZNSt6thread20hardware_concurrencyEv+0x0): undefined reference to `sysconf' c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/bin/ld.exe: c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/lib\libstdc++.a(thread.o): in function `std::thread::hardware_concurrency()': /builds/idf/crosstool-NG/.build/HOST-x86_64-w64-mingw32/xtensa-esp32s2-elf/src/gcc/libstdc++-v3/src/c++11/thread.cc:177: undefined reference to `sysconf' c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/bin/ld.exe: .pio\build\esp32doit-devkit-v1\src\main.cpp.o:(.literal._Z5setupv+0x0): undefined reference to `posix_memalign' c:/users/giovanni/.platformio/packages/toolchain-xtensa32s2/bin/../lib/gcc/xtensa-esp32s2-elf/8.2.0/../../../../xtensa-esp32s2-elf/bin/ld.exe: .pio\build\esp32doit-devkit-v1\src\main.cpp.o: in function `setup()': C:\Users\Giovanni\Desktop\auto2teste/src/main.cpp:8: undefined reference to `posix_memalign' collect2.exe: error: ld returned 1 exit status *** [.pio\build\esp32doit-devkit-v1\firmware.elf] Error 1 ===================================================================== [FAILED] Took 11.54 seconds ===================================================================== The terminal process "C:\Users\Giovanni\.platformio\penv\Scripts\pio.exe 'run'" terminated with exit code: 1. 
Other items platformio.ini: [env:esp32doit-devkit-v1] platform_packages = platformio/framework-arduinoespressif32 @ https://github.com/espressif/arduino-esp32.git#idf-release/v4.2 platform = espressif32 board = esp32doit-devkit-v1 framework = arduino monitor_speed = 115200 board_build.speed = 921600 board_build.partitions = partitions_custom.csv board_build.mcu = esp32s2 build_unflags = -std=gnu++11 -fno-rtti build_flags = -std=gnu++14 -DCORE_DEBUG_LEVEL=5 [...] Terminal output infos: Processing esp32doit-devkit-v1 (platform: espressif32; board: esp32doit-devkit-v1; framework: arduino) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- Verbose mode can be enabled via `-v, --verbose` option CONFIGURATION: https://docs.platformio.org/page/boards/espressif32/esp32doit-devkit-v1.html PLATFORM: Espressif 32 (2.0.0) > DOIT ESP32 DEVKIT V1 HARDWARE: ESP32S2 240MHz, 320KB RAM, 4MB Flash DEBUG: Current (esp-prog) External (esp-prog, iot-bus-jtag, jlink, minimodule, olimex-arm-usb-ocd, olimex-arm-usb-ocd-h, olimex-arm-usb-tiny-h, olimex-jtag-tiny, tumpa) PACKAGES: - framework-arduinoespressif32 0.0.0+sha.29e3b64 - tool-esptoolpy 1.30000.200511 (3.0.0) - toolchain-esp32s2ulp 1.22851.191205 (2.28.51) - toolchain-xtensa32s2 1.80200.200827 (8.2.0) [...] Hi, I am facing the same issue while working with tiny-cnn. Let me know if there is any workaround for this. With regards, Aquib Jamal I made some modifications: https://github.com/GiovanniCmpaner/tiny-dnn Take a look at the commit history for more details. Also, some additional unflags and flags are needed in the platformio.ini file: build_unflags = -fno-rtti build_flags = -mtext-section-literals For some insight: because of the memory consumption of tiny-dnn, I switched over to tensorflow-lite. Thank you for sharing the repository and pointing in the right direction. I am able to compile the code as shown in your files. I had already set the rtti flags in the sdkconfig file. I am not sure if it is connected to the same issue, but now I am getting a linker error - "hello-world.elf section .dram0.bss' will not fit in region dram0_0_seg' region `dram0_0_seg' overflowed by 83856 bytes" I am trying to use the IRAM_DATA_ATTR and IRAM_BSS_ATTR macros for template class variables. This results in the following error - "section attribute not allowed for 'm_weights' IRAM_BSS_ATTR ap_uint m_weights[PE][TILES];" Have you come across similar problems? Many thanks, Aquib Probably there isn't enough memory in DRAM for your variable with the IRAM macro; maybe you can use std::vector to allocate it on the heap at runtime. But this error is specific to your project needs; try searching on Google. Good luck! Thank you! So, what is going on with this problem: undefined reference to 'sysconf'?
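If the goal is simply to unblock the link step, one possible workaround is to provide the two missing symbols yourself. The sketch below is an assumption-laden stub rather than part of any of the projects above: it maps posix_memalign onto ESP-IDF's heap_caps_aligned_alloc (assumed to be available in the IDF version in use; newlib's memalign() is an alternative) and answers only the processor-count query that std::thread::hardware_concurrency() makes. It should live in a .c file or be wrapped in extern "C".

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
#include "esp_heap_caps.h"

int posix_memalign(void **out, size_t alignment, size_t size)
{
    // alignment must be a power of two; error handling kept minimal
    void *p = heap_caps_aligned_alloc(alignment, size, MALLOC_CAP_DEFAULT);
    if (p == NULL) {
        return ENOMEM;
    }
    *out = p;
    return 0;
}

long sysconf(int name)
{
    // libstdc++'s hardware_concurrency() asks for _SC_NPROCESSORS_ONLN;
    // the ESP32-S2 is single-core, so report 1 and reject everything else.
    if (name == _SC_NPROCESSORS_ONLN) {
        return 1;
    }
    errno = EINVAL;
    return -1;
}
```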
gharchive/issue
2020-11-15T17:02:28
2025-04-01T06:38:36.724094
{ "authors": [ "GiovanniCmpaner", "aquibjamal", "dzz10" ], "repo": "espressif/esp-idf", "url": "https://github.com/espressif/esp-idf/issues/6119", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
964837397
Bangle2 setting menu style for lefties As a lefty I scroll with my left hand through the settings menu and my finger hides the text. Can we have a left-hand menu style with text that is right aligned? You mean with values on the left and the text on the right? That's probably not something I'd build in, but it should be trivial to add as an app that replaces the built-in E.showMenu. I'm happy to help with that. After using it for more settings, I would really like to have a larger font, because with the size of my finger I can't scroll between lines easily. The menu is designed so that you just move your finger up and down, rather than having to tap on a specific menu item, for exactly that reason (I found even 50% larger didn't really help). The sensitivity of that scrolling could easily be less (or even configurable) though? Oh yes, changing the sensitivity for those large menus would really help. Just to add, this may be fixed by https://github.com/espruino/BangleApps/issues/1040 I guess. No response - let's assume that #1040 did fix it
gharchive/issue
2021-08-10T10:35:40
2025-04-01T06:38:36.785308
{ "authors": [ "MaBecker", "gfwilliams" ], "repo": "espruino/BangleApps", "url": "https://github.com/espruino/BangleApps/issues/782", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
374396188
Card group switches Add a switch to a card header if there are entities with the same domain in the card and the group switch is not disabled by configuration. Done
gharchive/issue
2018-10-26T13:57:39
2025-04-01T06:38:36.804502
{ "authors": [ "estevez-dev" ], "repo": "estevez-dev/ha_client", "url": "https://github.com/estevez-dev/ha_client/issues/157", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2042605348
"Process markdown files" hangs[BUG] Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior: doctor publish --outputFolder . login to Microsoft Device Login using code provided "Process markdown files" hangs for 30 min There is no output in the output folder Expected behavior Process markdown files work. There are output in output folder Screenshots Desktop (please complete the following information): OS: Windows Version: 10 Additional context Add any other context about the problem here. @estruyf Could you please look into the bug? Thanks. can confirm i'm having the same issue. Reproduced from a docker container - node:lts @yuxin1234 @DennisRutherford could you try to run doctor publish --debug to see if it gives more information on why it hangs? I just released version 1.12.0, which now supports Node.js version 18 and higher. If you could update to the latest version and test it again, that would be great. Can confirm that latest version is now working to publish Thanks @DennisRutherford for verifying @estruyf Happy to help. I've got another issue where this is happening again. If I try and use certificate based authentication it gets stuck; but if I use device code it works fine. Any ideas? @estruyf Thanks for fixing it. Still hanging for me after I upgraded to 1.12.0. @yuxin1234 What do you get when you add the --debug flag? Can you give more information about your environment? Node version, ... @estruyf Below is the screenshot for running "doctor publish --debug": Node: 18.0.0 Platform: Windows 10 Thanks. Ah, I see you are not using SharePoint Online, but your own server. That might be the issue. As I have no access to an on-prem server, I won't be able to test out that use-case. @estruyf Thanks.
gharchive/issue
2023-12-14T22:33:50
2025-04-01T06:38:36.851035
{ "authors": [ "DennisRutherford", "estruyf", "yuxin1234" ], "repo": "estruyf/doctor", "url": "https://github.com/estruyf/doctor/issues/155", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
328007018
generalize time sequences Changes to libarbor Time sequences were added in src/time_sequence.hpp: added a new time_seq type that implements a type-erasure interface for the concept of a time sequence generator (a rough illustration of the pattern is sketched below). added poisson, regular and vector-backed implementations of the time sequence concept. Event generators: The poisson, regular and vector-backed implementations of the event generator concept were refactored to use the new time_seq type. Cell groups: Removed the dss_cell_group and rss_cell_group and associated types. Added a generic spike source cell that generates a sequence of spikes at time points specified by a time_seq. Using this approach, an additional cell_group specialization is not required for each type of sequence, and user-defined sequences can be used with minimal overhead. Unit tests Added unit tests for time_seq. Simplified event_generator unit tests, because much of the testing of the sequences was moved to the time_seq tests. Added unit tests for spike_source_cell_group. Changes to miniapp simplified the miniapp by removing the command line options for using an input spike chain from file. updated the miniapp recipe to use spike_source cell group instead of dss_cell_group. So, I'd still like to rename (and possibly promote outside the class) time_seq::dummy_seq, and am still arguing about vector_time_seq. The other issue, about splitting spike_source_cell out from spike_source_cell_group.hpp, I can do later. tests/unit/test_rss_cell_group.cpp and tests/unit/test_dss_cell_group.cpp should be removed, too.
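For readers unfamiliar with the pattern, a rough, simplified illustration of a type-erased time sequence interface follows; this is only a sketch of the idea, not the actual Arbor implementation, and the member names are assumptions:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Any type exposing front()/pop() can be wrapped; callers only see time_seq.
class time_seq {
public:
    template <typename Impl>
    time_seq(Impl impl): impl_(new model<Impl>(std::move(impl))) {}

    double front() const { return impl_->front(); }  // next time point
    void pop() { impl_->pop(); }                      // advance the sequence

private:
    struct base {
        virtual ~base() = default;
        virtual double front() const = 0;
        virtual void pop() = 0;
    };
    template <typename Impl>
    struct model: base {
        explicit model(Impl impl): impl_(std::move(impl)) {}
        double front() const override { return impl_.front(); }
        void pop() override { impl_.pop(); }
        Impl impl_;
    };
    std::unique_ptr<base> impl_;
};

// Example wrapped implementation: a vector-backed sequence (bounds checks omitted).
struct vector_seq {
    std::vector<double> times;
    std::size_t i = 0;
    double front() const { return times[i]; }
    void pop() { ++i; }
};
```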
gharchive/pull-request
2018-05-31T06:15:11
2025-04-01T06:38:36.904325
{ "authors": [ "bcumming", "halfflat" ], "repo": "eth-cscs/arbor", "url": "https://github.com/eth-cscs/arbor/pull/496", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
663596647
Performance variables must be numbers I tried putting the output of "git commit" into a perf_var so that I could link changes in performance history to changes in the system configuration, but I get this error: FAILURE INFO for ... <snip> * Failing phase: performance <snip> Reason: sanity error: the value extracted for performance variable 'alaska:roce-openmpi4-ucx:git_ref' is not a number: a2cccdd-dirty There's nothing in the docs I can see that state performance variables need to be numbers. Is there a way around this, or a better way of achieving the same thing? Hi @sjpb, this behaviour is expected and it has been reported in #1146 (where ReFrame was just crashing instead of giving a message). The comment here describes what happens and why we expect a number: https://github.com/eth-cscs/reframe/issues/1146#issuecomment-580682324 What do you want to achieve exactly? How your references look like for this "performance variable?" Ok - so then this is really just a documentation issue. I'd assumed that with a reference which looked like this 'git_ref': (None, None, None, 'n/a'),, i.e. no reference, a non-numeric perf var would be ok. What I was trying to achieve (and this obviously isn't the right way) was to get a git commit into the performance log, linked to a test run, so I can link changes in performance to changes in configuration. Still interested in a way of doing that! I'd assumed that with a reference which looked like this 'git_ref': (None, None, None, 'n/a'),, i.e. no reference, a non-numeric perf var would be ok. This is something we could implement easily, since it seems that people want to use the performance variables to log non-performance information. I will open a feature request for that. What I was trying to achieve (and this obviously isn't the right way) was to get a git commit into the performance log, linked to a test run, so I can link changes in performance to changes in configuration. Still interested in a way of doing that! Makes sense. One way you could possibly do that currently, is to try to pass the git hash as a "unit", the last element of the tuple, and make sure that what you extract as a value for this variable from the output is a number (anything). Then the git hash will be logged as the unit of that variable. I know it's hacky, but it should work. FYI my eventual solution for this was to push the info into the tag instead. It isn't really a performance variable, and actually logically it makes more sense to have this available on each line of the performance log (like the reframe version) rather than generating a new line (="observation") for it. So maybe the current perf var functionality shouldn't be changed, although using tags is a bit hacky too. I considered using the info field but that seemed less appropriate. I agree. This one needs more thinking. How did you make it log the tags? Did you add another log format specifier for tags? I just added %(check_tags)s to the log format - then (outside of reframe) I have code which parses the perflogs. I see. There is also #1068 that requests other check fields to be logged as well and I'm thinking to make this more generic, so that you could easily select any test field to log, even custom ones. How that sounds? Yes that would work. Tags actually work fine to be honest though - as reframe nicely formats them as a comma-separated string in the log they're pretty easy to handle. I guess conceptually "tags" and "things I want to log" are different though. I'm closing this, too. It'll be addressed by #1068.
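For reference, the "pass the git hash as the unit" trick mentioned earlier in the thread looks roughly like the sketch below; it is illustrative only (the test name, patterns and echoed output are made up) and the exact details depend on the ReFrame version:

```python
import subprocess

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class GitRefLoggingTest(rfm.RegressionTest):
    def __init__(self):
        self.valid_systems = ['*']
        self.valid_prog_environs = ['*']
        self.sourcesdir = None
        self.executable = 'echo'
        self.executable_opts = ['elapsed 1.0']
        self.sanity_patterns = sn.assert_found(r'elapsed', self.stdout)

        git_ref = subprocess.check_output(
            ['git', 'describe', '--always', '--dirty']).decode().strip()
        # The extracted value is a real number; the git hash rides along as the
        # "unit" (last tuple element) and ends up in the performance log.
        self.perf_patterns = {
            'git_ref': sn.extractsingle(r'elapsed\s+(\S+)', self.stdout, 1, float)
        }
        self.reference = {
            '*': {'git_ref': (0, None, None, git_ref)}
        }
```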
gharchive/issue
2020-07-22T09:01:05
2025-04-01T06:38:36.911361
{ "authors": [ "sjpb", "vkarak" ], "repo": "eth-cscs/reframe", "url": "https://github.com/eth-cscs/reframe/issues/1430", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1989638792
Miscellaneous markdown formatting edits Edit markdown formatting and page layout for consistency, clean up trailing spaces and unnecessary blank lines in code to address #87 Fix non-sequential numbering Move install directions for Chrultrabook Controller to post-install.md Remove old information regarding running Windows on RW_LEGACY if using Ryzen Merging should fail as you've changed the source directory of the docs, let me know if it doesn't work and I can fork your repo and open a PR that way. https://github.com/ethanaobrien/docz/commit/a1f8c646b1f688865e6f45e53fba0e54b82c8bdc
gharchive/pull-request
2023-11-12T23:45:53
2025-04-01T06:38:36.914045
{ "authors": [ "ethanaobrien", "marcsadler" ], "repo": "ethanaobrien/docz", "url": "https://github.com/ethanaobrien/docz/pull/8", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
1121257428
Testing the server I get this error in the console: Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'getUrls') at (index):26:37 /favicon.ico:1 Failed to load resource: the server responded with a status of 404 () (index):19 Uncaught TypeError: Cannot read properties of undefined (reading 'start') at HTMLButtonElement.<anonymous> ((index):19:55) https://emulatorjs.allancoding.ga/ And I cannot get the server to start. This is strange. Do you think you might be able to look into it a bit? I've been trying to work on the next version of EmulatorJS. I have a question: is the server supposed to start automatically? No, it is not. This problem was fixed in the newest update.
gharchive/issue
2022-02-01T22:17:12
2025-04-01T06:38:36.916145
{ "authors": [ "allancoding", "ethanaobrien" ], "repo": "ethanaobrien/emuserver", "url": "https://github.com/ethanaobrien/emuserver/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
195384255
Metadata extractors in all transports [x] TCP [ ] IPC Will be implemented as part of #50
gharchive/issue
2016-12-13T22:14:54
2025-04-01T06:38:36.918241
{ "authors": [ "tomusdrw" ], "repo": "ethcore/jsonrpc", "url": "https://github.com/ethcore/jsonrpc/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
768622131
Color copy paste not working Hi All, When I try to copy and paste the color and font size, the formatting is not applied. I have used ep_font_color for the color option. Desktop: OS: Windows 10 Browser: Chrome Version: 87.0.4280.88 Tested here and it works fine: https://video.etherpad.com/p/aUeq2TntWMJeyh7e_uIQ Please test latest code (Etherpad and plugin) before creating issues :) Hi JohnMcLear, I don't know why you have closed this issue. I have tested the issue on https://video.etherpad.com/p/aUeq2TntWMJeyh7e_uIQ . The same issue also shows on the provided URL. I have also attached the video file for your reference. Can you replicate the bug in Firefox? Was this working as expected in Chrome before? Looks like an upstream bug ;\ Hi JohnMcLear, It seems to be working in Firefox, but it is not working in Chrome. It would be helpful if you could suggest a way to fix it. Thanks Probably a content collector bug related to contenteditable. Check shared.js, which handles collection of pasted content. Also try git bisect on the plugin and on the develop branch to see if it's a new bug or recently introduced.
gharchive/issue
2020-12-16T09:20:42
2025-04-01T06:38:36.923660
{ "authors": [ "JohnMcLear", "jinbullsushil" ], "repo": "ether/ep_font_color", "url": "https://github.com/ether/ep_font_color/issues/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
327128942
Go bindings and tests [ ] auto-generate bindings whenever Solidity file changes using go generate [ ] tests using blockchain simulator One thing to keep in mind about the blockchain simulator, I have never ever been able to get the adjust time function to work from go-ethereum's simulated backend: https://godoc.org/github.com/ethereum/go-ethereum/accounts/abi/bind/backends#SimulatedBackend.AdjustTime @postables Ah that sucks. We may not have to use it if we go with block height vs. time for the HTLC.
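As an illustration of how the first checklist item is often wired up (the contract path, package and output names below are placeholders, and abigen's --sol mode additionally needs solc on the PATH):

```go
// bindings/gen.go
package bindings

// Running `go generate ./...` regenerates the Go bindings whenever the
// Solidity source changes.
//go:generate abigen --sol ../contracts/htlc.sol --pkg bindings --out htlc_bindings.go
```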
gharchive/issue
2018-05-28T22:50:22
2025-04-01T06:38:36.941178
{ "authors": [ "postables", "shanev" ], "repo": "ethereum-lightning/eth-lnd", "url": "https://github.com/ethereum-lightning/eth-lnd/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1978236272
Add ERC: Cross-Contract Hierarchical NFT Reopen this ongoing draft from the old PR in the EIPs repo, with all comments addressed. @xinbenlv Please take another look. Previous comments include adding a set method and supplementing the security considerations. @SamWilsn Please help with the merge due to this issue. How do I request approval from @eip-review-bot, please? @SamWilsn Sam, can you please take a look at why the bot didn't automatically approve it when all other checks have been satisfied? @eth-bot rerun
gharchive/pull-request
2023-11-06T03:57:11
2025-04-01T06:38:36.950905
{ "authors": [ "Pandapip1", "minkyn" ], "repo": "ethereum/ERCs", "url": "https://github.com/ethereum/ERCs/pull/91", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
190962247
Static analysis: warn about too long contracts After "Spurious Dragon", the deployed contract size is limited to 24576 bytes; this should be checked (a quick sketch of such a check is given below). This is not really a job for the static analyzer, as it doesn't operate on the AST; rather, it is a warning from Remix. fixed We actually added this code to the compiler too :)
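As a rough illustration (not the actual Remix or compiler code), the warning amounts to comparing the runtime bytecode length against the EIP-170 limit:

```javascript
const MAX_DEPLOYED_SIZE = 24576; // EIP-170 limit introduced with Spurious Dragon

function warnIfTooLarge(contractName, deployedBytecode) {
  // deployedBytecode is the hex string of the contract's runtime code
  const size = deployedBytecode.replace(/^0x/, '').length / 2;
  if (size > MAX_DEPLOYED_SIZE) {
    console.warn(`${contractName}: deployed code is ${size} bytes, ` +
                 `exceeding the ${MAX_DEPLOYED_SIZE}-byte limit`);
  }
}
```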
gharchive/issue
2016-11-22T10:33:09
2025-04-01T06:38:36.952289
{ "authors": [ "LianaHus", "axic", "chriseth" ], "repo": "ethereum/browser-solidity", "url": "https://github.com/ethereum/browser-solidity/issues/337", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
462389030
Helpers cleanup Addresses #918. It includes: Grouping helpers by category Renamings (for consistency, length, clarity) Add top-level function comments where missing, and make them consistent Remove neglected and sometimes misleading terminology section (to be readded post-freeze) Various other cosmetic cleanups I tried making BLS_WITHDRAWAL_PREFIX a Bytes1 but broke something. (I think the spec builder is confused somehow.) I think we should leave as is instead of the "default" value here. It is a configurable constant that might be changed in the future (or in different deployments) I'd prefer to revert to my last commit and get this merged. We have other things to handle that are waiting on this PR I think we should leave as is instead of the "default" value here. That's fine :) (Tried moving it to see if spec builder would be happier.) Making it Bytes1() is still probably the way forward.
gharchive/pull-request
2019-06-30T10:08:11
2025-04-01T06:38:36.955485
{ "authors": [ "JustinDrake", "djrtwo" ], "repo": "ethereum/eth2.0-specs", "url": "https://github.com/ethereum/eth2.0-specs/pull/1237", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
812588330
Data stewardship User story Story As a local pinner, I want to validate whether my locally pinned content is available in the network and reupload the content if that is not the case, so that I can guarantee availability of the content via Swarm Acceptance criteria A local pinner can run a process which periodically validates whether certain content is available in the network and reuploads the content if it is not available Background With this feature, it becomes easier to guarantee availability of certain content in the network To validate whether the content is available in the network, we should watch out that the node we are retrieving from didn't cache it This feature may be implemented fully on the 2nd layer Bonus points for considering integration with web applications or swarm-cli Tasks Task Assignee Done (1), or 0 @agazso Duplicate of #1508
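Purely as an illustration (not part of the original story), a rough sketch of such a periodic check against a local Bee node; the stewardship-style endpoints and response shape are assumptions, not taken from this issue:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

const bee = "http://localhost:1633"

// checkAndReupload asks the local node whether a reference is still retrievable
// from the network and, if not, asks it to push the locally pinned chunks again.
func checkAndReupload(ref string) error {
	resp, err := http.Get(bee + "/stewardship/" + ref)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var status struct {
		IsRetrievable bool `json:"isRetrievable"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		return err
	}
	if status.IsRetrievable {
		return nil
	}

	req, _ := http.NewRequest(http.MethodPut, bee+"/stewardship/"+ref, nil)
	_, err = http.DefaultClient.Do(req)
	return err
}

func main() {
	refs := []string{"<swarm-reference>"} // locally pinned content to watch
	for range time.Tick(6 * time.Hour) {
		for _, ref := range refs {
			if err := checkAndReupload(ref); err != nil {
				fmt.Println("stewardship check failed:", err)
			}
		}
	}
}
```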
gharchive/issue
2021-02-20T11:42:51
2025-04-01T06:38:37.062151
{ "authors": [ "Eknir" ], "repo": "ethersphere/bee", "url": "https://github.com/ethersphere/bee/issues/1303", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1113607242
PR previews are not updated when more commits are pushed to the PR It looks like the PR previews are not updated when new commits land on the PR. For example, see here: https://github.com/ethersphere/bee-js-docs/pull/97 This is most probably not a problem with this action itself, but with other automation that does not trigger the action.
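A likely direction for a fix (my sketch — not verified against this repository's actual workflow) would be to make sure the preview workflow also runs on the `synchronize` event, i.e. whenever new commits are pushed to an open PR:

```yaml
# Hypothetical excerpt of the preview workflow trigger
on:
  pull_request:
    types: [opened, reopened, synchronize]
```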
gharchive/issue
2022-01-25T09:00:34
2025-04-01T06:38:37.063569
{ "authors": [ "AuHau" ], "repo": "ethersphere/beeload-action", "url": "https://github.com/ethersphere/beeload-action/issues/14", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1791331956
New EtherspotPaymaster deployments and updated README and deploy scripts Description Deployed new EtherspotPaymaster implementation to supported chains. Added new 'required' tag for deploying just wallet factory and paymaster. Updated README with new info on deploying contracts and cleaned up code. Added initialBaseFeePerGas in Chiado network configs as required for deployment. Motivation and Context There is a new implementation of the EtherspotPaymaster after a couple of minor bug fixes. We require a method for deploying solely the EtherspotWalletFactory & EtherspotPaymaster contracts. How Has This Been Tested? Screenshots (if appropriate): Types of changes [x] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) @lbw33 Pls can you resolve the conflicts?
gharchive/pull-request
2023-07-06T11:06:53
2025-04-01T06:38:37.067648
{ "authors": [ "ch4r10t33r", "lbw33" ], "repo": "etherspot/etherspot-prime-contracts", "url": "https://github.com/etherspot/etherspot-prime-contracts/pull/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1694075924
Ensure data use and processing activity fields are locked down immediately after save Is your feature request related to a specific problem? A general requirement for the data use declaration form is that the Data use and Processing Activity fields are effectively locked-down in the UI (made read-only) once a data use declaration has been created on a given system. While this behavior is generally the case currently, there's an edge case where the fields are still editable immediately after saving the data use declaration initially, before navigating away from the form. Although it may be unlikely, if users edit those fields in that state when they are still editable but after saving, they could end up breaking links between any custom fields and the data use declaration, which is a side effect of how our API is implemented (and the reason we want to lock those fields down generally). Describe the solution you'd like The fields should be locked down immediately after saving the data use declaration, and stay locked down indefinitely. Describe alternatives you've considered, if any In general we'll look to rework the data use/privacy declaration API to not require this constraint, but that's a longer-term effort. Additional context Found in doing some 2.12.0 release testing cc @TheAndrewJackson @rsilvery @mfbrown @Kelsey-Ethyca Is this resolved @adamsachs ? Is this resolved @adamsachs ? nope, still there! @adamsachs still an issue? @adamsachs still an issue? yup, tested locally and this behavior still seems to be there.
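A minimal sketch (not actual Fides code) of the intended behaviour — once the declaration has been saved successfully, the two fields stay disabled, even before navigating away; field names and the save handler are illustrative:

```tsx
import { useState } from "react";

type Values = { dataUse: string; processingActivity: string };
type Props = { onSave: (values: Values) => Promise<void>; initiallySaved?: boolean };

export const DataUseDeclarationForm = ({ onSave, initiallySaved = false }: Props) => {
  // Lock the identifying fields as soon as the declaration has been saved once,
  // without waiting for a navigation or remount.
  const [locked, setLocked] = useState(initiallySaved);
  const [values, setValues] = useState<Values>({ dataUse: "", processingActivity: "" });

  const handleSave = async () => {
    await onSave(values);
    setLocked(true); // stays locked indefinitely from here on
  };

  return (
    <form>
      <input
        value={values.dataUse}
        disabled={locked}
        onChange={(e) => setValues({ ...values, dataUse: e.target.value })}
      />
      <input
        value={values.processingActivity}
        disabled={locked}
        onChange={(e) => setValues({ ...values, processingActivity: e.target.value })}
      />
      <button type="button" onClick={handleSave}>
        Save
      </button>
    </form>
  );
};
```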
gharchive/issue
2023-05-03T13:25:33
2025-04-01T06:38:37.072501
{ "authors": [ "adamsachs", "rsilvery" ], "repo": "ethyca/fides", "url": "https://github.com/ethyca/fides/issues/3205", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1360235914
Updated Configuration settings: Connector parameters and Dataset configuration Purpose Allows user to create a DB, Manual, and SASS connection. User enters connector parameters and dataset configuration for a given connection. This PR includes the following JIRA tickets: 922 - Add a Connector - DB connector configs 923 - Add a Connector - upload a DB Dataset YAML 1090 - Add a Connector - SaaS Dataset Management (YAML method) 1015 - Frontend - Configure a Manual entry Connector Changes Checklist [x] Update CHANGELOG.md file [x] Merge in main so the most recent CHANGELOG.md file is being appended to [x] Add description within the Unreleased section in an appropriate category. Add a new category from the list at the top of the file if the needed one isn't already there. [x] Add a link to this PR at the end of the description with the PR number as the text. example: #1 [ ] Applicable documentation updated (guides, quickstart, postman collections, tutorial, fidesdemo, database diagram. If docs updated (select one): [ ] documentation complete, or draft/outline provided (tag docs-team to complete/review on this branch) [ ] documentation issue created (tag docs-team to complete issue separately) [ ] Good unit test/integration test coverage [ ] This PR contains a DB migration. If checked, the reviewer should confirm with the author that the down_revision correctly references the previous migration before merging [ ] The Run Unsafe PR Checks label has been applied, and checks have passed, if this PR touches any external services Ticket Fixes #922 #923 #1090 #1015 @seanpreston If you would like to test the Create New Connection feature, you can edit the flags.json file by assigning the isActive attribute to true. Don't forget to execute npm i via clients/ops/admin-ui Terminal Hey @chriscalhoun1974 — I've given this a first pass and found the following issues: There’s no way to edit a dataset / connectionconfig once one is created [image:1C215A1B-F392-48DA-B89F-FAF76A99CE9E-494-0000112282D89F56/Screenshot 2022-09-02 at 14.41.11.png] Save YAML system throws an error without making any network request [image:DAB1D3ED-9C6D-4E4F-9C4F-541F0B43700A-494-00001125BD6FD817/Screenshot 2022-09-02 at 14.41.24.png] “Cancel” button throws an error Navigating from “dataset configuration” to “connector parameters” and back to “dataset configuration” removes any yaml [image:01E86707-0980-496F-BA00-7C1DCEC73FE9-494-0000113952A29BF6/Screenshot 2022-09-02 at 14.42.48.png] The yaml input is buggy — the linter highlights errors where none are @seanpreston I have updated the Dataset YAML editor to reference the @monaco-editor/react and js/yaml NPM packages. All of the issues have been resolved now. Let me know if you have any questions. Thank you. Thanks @chriscalhoun1974 — the Monaco editor is much nicer! This is nearly there, just a couple more things to fix: The API to create a dataset up must always send a list of datasets There's an error thrown when hitting the "Cancel" button on that same page These issues are only for DB connections, SaaS connections worked well. @seanpreston @pattisdr If the user clicks either the Cancel or Save button, the user will be redirected to the Database Connections landing page. In addition, when a connection is initially created the user will be auto redirected to either the Dataset configuration or DSR customization screen accordingly. This enhancement will provide a better overall user experience. Thanks @chriscalhoun1974 — the issues I highlighted earlier are fixed up. 
There's just one issue here that's a showstopper which is: the Create Connection doesn't show if no connectors are present in the DB cc @adamczepeda too because I've noticed we're generally omitting empty states from the designs These others are smaller things that shouldn't block us merging (which I'll create follow-up tickets for): Errors returned by the API for incorrectly formatted yaml are no help to the user "Amazon Redshift" isn't searchable by the string "ama" because the connector is indexed only as "redshift". We should be consistent with naming here, for instance BigQuery isn't also referred to as Google BigQuery. Let's pick a convention and get that working well across all connectors @seanpreston @adamczepeda The Create Connection doesn't show if no connectors are present in the DB issue has been resolved now. I've tested this again and found another four clean-up tasks, but nothing that'll stop us merging this increment as it's getting very large now. https://github.com/ethyca/fidesops/issues/1333 https://github.com/ethyca/fidesops/issues/1334 https://github.com/ethyca/fidesops/issues/1335 https://github.com/ethyca/fidesops/issues/1336 One followup, to make it more visible if we have a partially created webhook, we might call the secrets endpoint (with an empty dictionary) or the test endpoint when filling out the first screen of the manual webhook, which will put it in a "failed" state. Then when the fields are added, we run the secrets/test endpoint again so it should pass (this resource is only checked to see if the webhook and fields exist). This will then flag in the UI if a webhook is only partially filled out.
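For reference, the kind of flags.json change being described might look like this (the flag key here is hypothetical — check the actual file for the real name):

```json
{
  "createNewConnection": {
    "isActive": true
  }
}
```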
gharchive/pull-request
2022-09-02T13:58:05
2025-04-01T06:38:37.095404
{ "authors": [ "chriscalhoun1974", "pattisdr", "seanpreston" ], "repo": "ethyca/fidesops", "url": "https://github.com/ethyca/fidesops/pull/1247", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
516890969
Add support for RFC 5916 Add module and test for RFC 5916 Codecov Report Merging #98 into master will increase coverage by <.01%. The diff coverage is 100%.
@@            Coverage Diff             @@
##           master      #98      +/-   ##
==========================================
+ Coverage   99.35%   99.35%    +<.01%
==========================================
  Files          88       89        +1
  Lines        5758     5766        +8
==========================================
+ Hits         5721     5729        +8
  Misses         37       37
Impacted Files: pyasn1_modules/rfc5916.py — 100% (<100%> diff, ø)
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Continue to review the full report at Codecov. Powered by Codecov. Last update 2e6acd1...45b5fe2. Thank you!
gharchive/pull-request
2019-11-03T21:23:04
2025-04-01T06:38:37.111967
{ "authors": [ "codecov-io", "etingof", "russhousley" ], "repo": "etingof/pyasn1-modules", "url": "https://github.com/etingof/pyasn1-modules/pull/98", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
539564211
Receiving V3 Traps - Engine ID We used to work with PySNMP version 4.3.1 to receive SNMP V3 traps, and it worked perfectly. Recently, we upgraded to version 4.4.12, and the traps were not received anymore. I debugged the issue and found that the call to __getUserInfo at service.py line 759 throws a NoSuchInstanceError exception: # 3.2.4 try: (usmUserName, usmUserSecurityName, usmUserAuthProtocol, usmUserAuthKeyLocalized, usmUserPrivProtocol, usmUserPrivKeyLocalized) = self.__getUserInfo( snmpEngine.msgAndPduDsp.mibInstrumController, msgAuthoritativeEngineId, msgUserName ) I think it happens because the engine ID in the trap is not the same as the engine ID of the user I created for receiving the traps. As far as I understand from the specs, we need to use the same engine ID for the receiving user and the trap sender. If this is the case, why did it work in PySNMP version 4.3.1? Was it a bug in the library? Is engine ID matching not really mandatory? The engine ID matching is mandatory as documented in the standard, so 4.3.1 indeed has a bug there and 4.4.12 contains the fix.
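For context, a sketch of the usual receiver-side fix with pysnmp 4.4.x — register the USM user under the trap sender's (authoritative) engine ID so the user lookup succeeds; the engine ID and keys below are placeholders:

```python
from pysnmp.entity import engine, config
from pysnmp.proto.api import v2c

snmp_engine = engine.SnmpEngine()

# The trap sender's SNMP engine ID (placeholder value). With SNMPv3 traps the
# sender is authoritative, so the user must be registered under its engine ID.
SENDER_ENGINE_ID = v2c.OctetString(hexValue='8000000001020304')

config.addV3User(
    snmp_engine, 'trap-user',
    config.usmHMACSHAAuthProtocol, 'authkey1',
    config.usmAesCfb128Protocol, 'privkey1',
    securityEngineId=SENDER_ENGINE_ID,
)
```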
gharchive/issue
2019-12-18T09:38:27
2025-04-01T06:38:37.115301
{ "authors": [ "aryes", "lextm" ], "repo": "etingof/pysnmp", "url": "https://github.com/etingof/pysnmp/issues/333", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1955177747
Access Denied When running litcher I had it working and it started giving me this all of a sudden, not sure what changed here. running it as admin Did you update Windows? I believe this could be related to #12 sadly, and I haven't fixed it yet. It requires for me to update a dependency, a big TODO from my side. I have been on Windows 11 this whole year, there was just an update the other week, so maybe that broke it? No worries, no rush. If you have the time, could you test this version? https://github.com/etra0/litcher/releases/tag/v0.4.0-alpha hudhook has been updated heavily since then.
gharchive/issue
2023-10-21T00:58:01
2025-04-01T06:38:37.125851
{ "authors": [ "bythehist", "etra0" ], "repo": "etra0/litcher", "url": "https://github.com/etra0/litcher/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
732799149
Update Building.md to mention how to build a release configuration Building.md doesn't currently mention explicitly how to build in a release configuration. This is largely fine, since very few people would ever want to build macOS or Windows in non-debug, since there are pre-built releases available. However, for Linux users, where the only way to play the game is by compiling it yourself, having the default build type be debug is a footgun for poor game performance. It would be nice to have an explicit note in Building.md as to how you can build the game in release, and that you should do so if compiling on Linux to actually play the game, not develop. An alternate (possibly poor) idea would be to have the default build be release on Linux only, still defaulting to debug for non-Linux builds. Users would then have to explicitly opt into a debug build on Linux, which might be a more reasonable behavior. I'd make the PR to update it myself right now, but I'm both tired and busy, so this issue is a note to myself to add it. 675508bb542268704e3a39d80fac9adeeee41808 @poco0317 That commit never made its way into master. An update has not been released yet, thus nothing has been merged to master.
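For reference, the usual way to select a release configuration with CMake (a generic sketch, not the project's official instructions — generator flags vary by platform):

```sh
# From the build directory: configure a release build instead of the default debug one
cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```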
gharchive/issue
2020-10-30T00:50:20
2025-04-01T06:38:37.135004
{ "authors": [ "Kangaroux", "bluebandit21", "poco0317" ], "repo": "etternagame/etterna", "url": "https://github.com/etternagame/etterna/issues/916", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
388961351
Add a new command to check sdcard info https://unix.stackexchange.com/questions/273971/how-to-get-hard-disk-information-on-linux-terminal Rejected by LEAN
gharchive/issue
2018-12-08T21:31:46
2025-04-01T06:38:37.184707
{ "authors": [ "jabrena" ], "repo": "ev3dev-lang-java/ev3dev-lang-java", "url": "https://github.com/ev3dev-lang-java/ev3dev-lang-java/issues/612", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1221378393
🛑 www.actionherojs.com is down In 3b5bb23, www.actionherojs.com (https://www.actionherojs.com) was down: HTTP code: 0 Response time: 0 ms Resolved: www.actionherojs.com is back up in bbb33ec.
gharchive/issue
2022-04-29T18:19:40
2025-04-01T06:38:37.220972
{ "authors": [ "evantahler" ], "repo": "evantahler/upptime", "url": "https://github.com/evantahler/upptime/issues/1203", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1129633812
🛑 www.actionherojs.com is down In 2e39b3f, www.actionherojs.com (https://www.actionherojs.com) was down: HTTP code: 0 Response time: 0 ms Resolved: www.actionherojs.com is back up in a2e9b3b.
gharchive/issue
2022-02-10T08:32:03
2025-04-01T06:38:37.224013
{ "authors": [ "evantahler" ], "repo": "evantahler/upptime", "url": "https://github.com/evantahler/upptime/issues/297", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
557041496
Excessive newlines appear before lists in converted document While markdown ignores more than 2 newlines, the converter currently generates 4 before lists. This creates an awkward-looking document and requires a lot of search/replace to repair. Note: this issue occurs in every case I tested, including after paragraphs and a range of header types (h1, h2, h3). Example: ## Outcomes 1. Improv There is a similar issue with extra newlines before and after horizontal rules. There should only be one empty line before and one after. Some text. --- Following text.
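Until the converter handles this, a simple post-processing step can collapse the extra blank lines — a sketch (mine, not from the thread), which assumes the output has no code blocks where extra newlines are meaningful:

```python
import re

def squeeze_blank_lines(markdown: str) -> str:
    # Markdown treats any run of blank lines as a single break,
    # so collapsing 3+ newlines down to 2 is safe outside of code blocks.
    return re.sub(r"\n{3,}", "\n\n", markdown)
```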
gharchive/issue
2020-01-29T18:34:14
2025-04-01T06:38:37.259201
{ "authors": [ "kaimantsch", "trixr" ], "repo": "evbacher/gd2md-html", "url": "https://github.com/evbacher/gd2md-html/issues/57", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1218315654
Grid draw when PV DC power is greater than AC power in combination with a DC-coupled battery (Kostal) Describe the bug I have a Kostal Plenticore inverter with a DC-coupled battery. The inverter can output a maximum of 10 kW on the AC side. If the modules generate more power than the inverter can output on the AC side AND the battery is not yet fully charged, the surplus power is charged directly into the DC battery (some conversion losses are dissipated). This apparently leads to evcc seeing, for example, a PV power of 11.5 kW, while in reality at most 10 kW are available for the house and the wallbox. The result is grid draw while the battery is being charged at the same time. I had thought that evcc-io/evcc#3015 would solve the problem, but at least for me that is not the case. Steps to reproduce I can't say in general terms. In my case: Lots of sun, leading to significantly more generator power than the inverter can output on the AC side A battery that is not yet at 100% Charging the car in PV mode Configuration details evcc.yaml.txt Log details log.log What type of operating system are you running? Docker container Version 0.90 (1eed3044) (nightly with the new UI) So do I have to set maxGridSupplyWhileBatteryCharging under site in the config to e.g. 50? Exactly. You will have to experiment with that value a bit; it is the permitted control bandwidth.
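For reference, a sketch of the evcc.yaml excerpt being discussed (the value is just an example to tune):

```yaml
# excerpt of evcc.yaml (sketch)
site:
  maxGridSupplyWhileBatteryCharging: 50  # example value – tune to your system
```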
gharchive/issue
2022-04-27T11:41:59
2025-04-01T06:38:37.269691
{ "authors": [ "Robbe64", "andig" ], "repo": "evcc-io/docs", "url": "https://github.com/evcc-io/docs/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2286271418
Charge control for a home battery with Tibber Since the vehicle can already be charged with cheap electricity from Tibber, it would be great if home batteries (for example from E3/DC) could also be charged via Tibber when electricity is particularly cheap and not enough solar power is available. It should be possible to enable or disable the feature via the UI (for example, to use it only in winter). At least in Germany, the EEG currently still clearly speaks against such a variant, since so-called grey power could then end up in the storage. No, that has just been changed with Solarpaket 1. With it, your own storage may now also be charged from the grid.
gharchive/issue
2024-05-08T19:14:57
2025-04-01T06:38:37.278892
{ "authors": [ "premultiply", "winfotiker" ], "repo": "evcc-io/evcc", "url": "https://github.com/evcc-io/evcc/issues/13819", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1640853632
Safeguard for wrong climating notification Is your feature request related to a problem? Please describe. Currently Renault seems to report the climater as always active. This led to draining the house battery for quite a while until I noticed the actual problem. Even switching to "off" did not stop the charging; I had to unplug manually. Describe the solution you'd like There could be a couple of solutions. It would be really useful if the climate state were shown in the UI; this would have allowed me to find the problem a lot earlier. #6588 would at least pose a quick workaround in this specific case. "Off" should also turn off climater-based charging. Some sort of sanity check: i.e. climater charging should only run for an hour or so. Additional context I noticed that "log level trace" does not show any requests for Renault (anymore). Has this changed? This should now be solved, at least as a workaround, via poll mode? It would be really useful if the climate state were shown in the UI; this would have allowed me to find the problem a lot earlier. /cc @naltatis do we have this included in the notifications now? @andig the climate status was dropped from the UI a while ago (with the new design). The information is still there in the API, though. I would suggest we add it back as a status text.
gharchive/issue
2023-03-26T10:26:43
2025-04-01T06:38:37.282949
{ "authors": [ "andig", "naltatis", "pauxus" ], "repo": "evcc-io/evcc", "url": "https://github.com/evcc-io/evcc/issues/7062", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
146061708
img-uses-alt does not allow empty string or possible empty string This rule considers the following JSX bad: function Foo() { return <img alt={foo || ''} />; } this is also considered bad: function Foo() { return <img alt="" />; } but this is okay: function Foo() { return <img alt={foo} />; } However, I think that all three should be okay. Empty strings can be used appropriately for alt text on images that are decorative. I agree on the first one, since we can't determine the value of foo until runtime. However, not sure about the second - You can place a space in between the quotation marks (<img alt=" " />) and it should pass and still semantically represent the same thing to a screen reader. Also, I can implement case where alt="" (or any other form of undefined value) passes lint rule if role=presentation is present. I think the role=presentation bit makes sense. According to the spec, it should probably enforce alt="" if it has a presentation role. Authors SHOULD NOT provide meaningful alternative text (for example, use alt="" in HTML4) when the presentation role is applied to an image. https://www.w3.org/TR/wai-aria/roles#presentation Fixing first example in 0.5.3 - will upgrade minor on role=presentation enhancement. 0.5.3 should be done within the hour. Awesome! Thanks again! 0.5.3 published - should fix first use case + other bugs that are closed! I think this is still broken for cases like function Foo() { return <img alt={foo.bar || ''} />; } and function Foo() { return <img alt={bar() || ''} />; } and function Foo() { return <img alt={foo.bar() || ''} />; } Added test cases for those and fixed in v0.5.4 - still may be other edge cases, working on resolving cases to handle each type specified in spec Wonderful! @evcohen do you have an ETA on the role="presentation" change? No rush--I'm just wondering if I should roll with alt=" " or wait it out. @lencioni waiting for ci build to pass and then publishing v0.6.0. Error message updated and this strictly allows only the following scenario <img alt="" role="presentation" /> bad: <img alt={``} role="presentation" /> etc. as we only want to deal with literals for this case. I noticed that the Chrome audit rules allows alt="" without role="presentation" and role="presentation" without alt="", FYI: https://github.com/GoogleChrome/accessibility-developer-tools/wiki/Audit-Rules#ax_text_02 Use the attributes alt="", role="presentation" or include the image as a CSS background-image to identify it as being used purely for stylistic or decorative purposes and that it should be ignored by people using assistive technologies. Source: http://fae20.cita.illinois.edu/rule/ARIA_STRICT/IMAGE_2/ Not sure what the real rule is in this case, but as a linter, I think it's better to be opinionated in a case like this. As in, the only time alt can be undefined is when role="presentation". In this sense, we can drop the check for alt altogether if role="presentation" is present. Thoughts? My reading of the text you posted agrees with the Chrome text I lined to, and it also fits my intuitive understanding. I think it makes sense to enforce the existence of alt unless role="presentation", and if role="presentation" enforce either non-existent or empty alt.
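To summarise the behaviour discussed above with concrete cases (examples mine, written in the same style as the issue):

```jsx
// Passes: meaningful alt text
function Logo() { return <img src="logo.png" alt="Company logo" />; }

// Passes (v0.6.0): explicitly decorative image
function Decor() { return <img src="decor.png" alt="" role="presentation" />; }

// Fails: empty alt without role="presentation"
function Bad() { return <img src="decor.png" alt="" />; }

// Allowed (since 0.5.3): value can't be determined until runtime
function Dynamic({ foo }) { return <img src="pic.png" alt={foo || ''} />; }
```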
gharchive/issue
2016-04-05T17:54:11
2025-04-01T06:38:37.294768
{ "authors": [ "evcohen", "lencioni" ], "repo": "evcohen/eslint-plugin-jsx-a11y", "url": "https://github.com/evcohen/eslint-plugin-jsx-a11y/issues/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2660263866
🛑 /getNoWatermarkUrl - Get non watermarked video url is down In a4b63ee, /getNoWatermarkUrl - Get non watermarked video url (https://tiktok.evelode.com/getNoWatermarkUrl?video_url=https://www.tiktok.com/@therock/video/7106855913906081070%3Fis_copy_url%3D1%26is_from_webapp%3Dv1&cache_timeout=0&license_key=$API_KEY) was down: HTTP code: 0 Response time: 0 ms Resolved: /getNoWatermarkUrl - Get non watermarked video url is back up in f79f090 after 5 minutes.
gharchive/issue
2024-11-14T22:42:20
2025-04-01T06:38:37.298176
{ "authors": [ "sergeykomlev" ], "repo": "evelode/tiktok-status", "url": "https://github.com/evelode/tiktok-status/issues/2473", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
236127974
Some features needed Hey, first of all thumbs up for your library. I have seen different editing libraries, but this one satisfies my needs, although some features are still required. I have one question: is it possible to get the position of each element on the screen before saving the picture, e.g. the position of text and stickers? I want to save the entire project while the user is editing and reload it for them when they want to resume the old editing session. Is it possible? If it is, then can you please do it for me? @mudassirzulfiqar Thank you for your kind comment. As for this feature, yes — we are going to add it along with some more features in the next few weeks. I also need these features. @mudassirzulfiqar did you find any solution for it? Thank You Vishal Vanpariya
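Not something the library exposed at the time, but as a sketch of the general approach on Android — read each overlay view's position and transform before saving, so the session can be serialised and re-applied later (only standard Android View methods are used here, no library calls):

```java
import android.view.View;
import android.view.ViewGroup;
import java.util.ArrayList;
import java.util.List;

// Sketch: capture the on-screen geometry of every overlay (text/sticker) view
// inside the editor's parent layout so it can be persisted and restored later.
final class OverlayState {
    final float x, y, rotation, scaleX, scaleY;

    OverlayState(View v) {
        this.x = v.getX();
        this.y = v.getY();
        this.rotation = v.getRotation();
        this.scaleX = v.getScaleX();
        this.scaleY = v.getScaleY();
    }
}

final class EditorSession {
    static List<OverlayState> capture(ViewGroup editorParent) {
        List<OverlayState> states = new ArrayList<>();
        for (int i = 0; i < editorParent.getChildCount(); i++) {
            states.add(new OverlayState(editorParent.getChildAt(i)));
        }
        return states;
    }
}
```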
gharchive/issue
2017-06-15T09:22:10
2025-04-01T06:38:37.323270
{ "authors": [ "mudassirzulfiqar", "rkhater", "vishalvanpariya" ], "repo": "eventtus/photo-editor-android", "url": "https://github.com/eventtus/photo-editor-android/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
113913025
Disable the puppetserver report processor feature by default By default, if reports is NOT defined, puppetserver will enable the "store" reports processor, which will generate yaml reports in: /opt/puppetlabs/server/data/puppetserver/reports/ See: https://docs.puppetlabs.com/puppet/latest/reference/reporting_about.html#configuring-reporting If you're not expecting this or dealing with it in some way, you will probably run your puppetserver out of disk space. This PR defaults this to "none", which will turn this feature off. The module gives you the ability to set it to none if you'd like, but I believe this change alters the Puppet default. I'm curious why you opened the PR instead of just setting the report processor to none in your environment. Sorry, I was coming from the angle of a new user. I think the Puppetlabs documentation around this sucks. By default, if nothing at all is defined, puppetserver will start spitting out report files in a non-obvious directory. I'm going to guess that a new user to Puppet would have no idea this is happening, and would eventually run their server out of disk space. So my initial thought was this feature should be turned off by default. I'm ok if you're really not in favor of this PR. Maybe as a good alternative we document that setting better in the readme. Maybe list a few possible built-in options: puppetdb, http, store (default), log? (I've attended quite a few Puppetlabs talks, and I'm actually a bit surprised this setting doesn't default to puppetdb since they encourage you to use it.) I think I want to leave this as the default, but I'd love some documentation updates!
gharchive/pull-request
2015-10-28T20:17:18
2025-04-01T06:38:37.330715
{ "authors": [ "jbehrends", "jlambert121" ], "repo": "evenup/evenup-puppet", "url": "https://github.com/evenup/evenup-puppet/pull/33", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2671013901
[Bug] Every note tag comes with the modern style by default Every note tag comes with the modern style by default, which prevents the flat style from displaying correctly. Without a style set: With the flat style set: By the way, a suggestion on the styling — it's just that I'm personally used to Butterfly, and the current tags always feel a bit too wide. This is the styling I currently use:
.article-container .note p { font-size: 0.7rem; line-height: 1.7; font-weight: 400; margin: 0; text-align: left; letter-spacing: 0.6px; }
.note:not(.no-icon) { padding-left: 0.6rem; }
OK, suggestion accepted — I'll change it tomorrow.
gharchive/issue
2024-11-19T06:58:42
2025-04-01T06:38:37.361014
{ "authors": [ "MskTmi", "everfu" ], "repo": "everfu/hexo-solitude-tag", "url": "https://github.com/everfu/hexo-solitude-tag/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
54193421
'settings' should be immutable At the beginning of the middleware handler, there is: var options = settings || defaults; Later, at several places, the options object is modified. For example: options.headers = this.header['access-control-request-headers']; It means that the global middleware settings object (or defaults if no settings are specified) is modified from one request to another. I think it's dangerous and a source of bugs. Hey @mvila Thanks, this has been fixed with the PR #25
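One way to avoid the shared-state mutation described above is to build a per-request copy of the options instead of modifying the module-level settings object — a sketch in the same generator-middleware style as the original code (not necessarily what PR #25 actually did):

```js
module.exports = function crossOrigin(settings) {
  var defaults = { origin: true };

  return function* cors(next) {
    // Shallow-copy per request so the shared `settings`/`defaults`
    // objects are never mutated from one request to another.
    var options = Object.assign({}, defaults, settings);

    options.headers = this.header['access-control-request-headers'];
    // ... build the response headers from `options` ...
    yield next;
  };
};
```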
gharchive/issue
2015-01-13T13:28:40
2025-04-01T06:38:37.385094
{ "authors": [ "evert0n", "mvila" ], "repo": "evert0n/koa-cors", "url": "https://github.com/evert0n/koa-cors/issues/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
337312531
413 Request Entity Too Large I'm trying to upload a large plugin and it won't let me. I'm getting an nginx error page. To recreate: Upload a plugin using wordpress's plugin uploader. The one I used is 4.3 MB. (.zip) I'm thinking the solution would be to increase php limits, but they are already high enough in your default uploadsize.ini file. Thanks for your work on this. It's awesome! Please check your nginx webproxy to see if you set the option on upload limit there. Restart the proxy and let me know if it works. I've set things up as is. I mean, I didn't change, enable, or add anything during my set up of the webproxy. Do I need to uncomment anything? Maybe this?: USE_NGINX_CONF_FILES=true in the .env file so that it would use the uploadsize.conf file? Also, do I need to add anything to it other than this `client_max_body_size 100m;' That´s correct. Please uncomment this option and set the upload size as you need and just restart the webproxy containrs. If you are in production environment, you might want to try (on the webproxy): docker-compose restart So, it will not go off-line while restarting, if it does not work you will need to realod your webproxy with this command: docker exec -it webproxy nginx -s reload Let me know if it worked. I did as you said. I uncommented that option and set the upload size. Then I restarted the webproxy containers as well as the wordpress containers. I used docker-compose restart within the folders. Then I tried uploading the plugin and I got the same "413 Request Entity Too Large" error. I also tried reloading them with this: docker exec -it nginx-web -s reload But I got this error in terminal: OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"-s\": executable file not found in $PATH": unknown I then stopped all of the containers and started them again to see if that would work. Still the same result. This is my .env file: # # docker-compose-letsencrypt-nginx-proxy-companion # # A Web Proxy using docker with NGINX and Let's Encrypt # Using the great community docker-gen, nginx-proxy and docker-letsencrypt-nginx-proxy-companion # # This is the .env file to set up your webproxy enviornment # # Your local containers NAME # NGINX_WEB=nginx-web DOCKER_GEN=nginx-gen LETS_ENCRYPT=nginx-letsencrypt # # Your external IP address # IP=0.0.0.0 # # Default Network # NETWORK=webproxy # # Service Network (Optional) # # In case you decide to add a new network to your services containers you can set this # network as a SERVICE_NETWORK # # [WARNING] This setting was built to use our `start.sh` script or in that special case # you could use the docker-composer with our multiple network option, as of: # `docker-compose -f docker-compose-multiple-networks.yml up -d` # #SERVICE_NETWORK=webservices # # NGINX file path # NGINX_FILES_PATH=/nginx/data # # NGINX use special conf files # # In case you want to add some special configuration to your NGINX Web Proxy you could # add your files to ./conf.d/ folder as of sample file 'uploadsize.conf' # # [WARNING] This setting was built to use our `start.sh`. # # [WARNING] Once you set this options to true all your files will be copied to data # folder (./data/conf.d). If you decide to remove this special configuration # you must delete your files from data folder ./data/conf.d. 
# USE_NGINX_CONF_FILES=true # # Docker Logging Config # # This section offers two options max-size and max-file, which follow the docker documentation # as follow: # # logging: # driver: "json-file" # options: # max-size: "200k" # max-file: "10" # #NGINX_WEB_LOG_MAX_SIZE=4m #NGINX_WEB_LOG_MAX_FILE=10 #NGINX_GEN_LOG_MAX_SIZE=2m #NGINX_GEN_LOG_MAX_FILE=10 #NGINX_LETSENCRYPT_LOG_MAX_SIZE=2m #NGINX_LETSENCRYPT_LOG_MAX_FILE=10 This is my docker-compose.yml file: version: '3' services: nginx-web: image: nginx labels: com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true" container_name: ${NGINX_WEB:-nginx-web} restart: always ports: - "${IP:-0.0.0.0}:80:80" - "${IP:-0.0.0.0}:443:443" volumes: - ${NGINX_FILES_PATH:-./data}/conf.d:/etc/nginx/conf.d - ${NGINX_FILES_PATH:-./data}/vhost.d:/etc/nginx/vhost.d - ${NGINX_FILES_PATH:-./data}/html:/usr/share/nginx/html - ${NGINX_FILES_PATH:-./data}/certs:/etc/nginx/certs:ro - ${NGINX_FILES_PATH:-./data}/htpasswd:/etc/nginx/htpasswd:ro logging: options: max-size: ${NGINX_WEB_LOG_MAX_SIZE:-4m} max-file: ${NGINX_WEB_LOG_MAX_FILE:-10} nginx-gen: image: jwilder/docker-gen command: -notify-sighup ${NGINX_WEB:-nginx-web} -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf container_name: ${DOCKER_GEN:-nginx-gen} restart: always volumes: - ${NGINX_FILES_PATH:-./data}/conf.d:/etc/nginx/conf.d - ${NGINX_FILES_PATH:-./data}/vhost.d:/etc/nginx/vhost.d - ${NGINX_FILES_PATH:-./data}/html:/usr/share/nginx/html - ${NGINX_FILES_PATH:-./data}/certs:/etc/nginx/certs:ro - ${NGINX_FILES_PATH:-./data}/htpasswd:/etc/nginx/htpasswd:ro - /var/run/docker.sock:/tmp/docker.sock:ro - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro logging: options: max-size: ${NGINX_GEN_LOG_MAX_SIZE:-2m} max-file: ${NGINX_GEN_LOG_MAX_FILE:-10} nginx-letsencrypt: image: jrcs/letsencrypt-nginx-proxy-companion container_name: ${LETS_ENCRYPT:-nginx-letsencrypt} restart: always volumes: - ${NGINX_FILES_PATH:-./data}/conf.d:/etc/nginx/conf.d - ${NGINX_FILES_PATH:-./data}/vhost.d:/etc/nginx/vhost.d - ${NGINX_FILES_PATH:-./data}/html:/usr/share/nginx/html - ${NGINX_FILES_PATH:-./data}/certs:/etc/nginx/certs:rw - /var/run/docker.sock:/var/run/docker.sock:ro environment: NGINX_DOCKER_GEN_CONTAINER: ${DOCKER_GEN:-nginx-gen} NGINX_PROXY_CONTAINER: ${NGINX_WEB:-nginx-web} logging: options: max-size: ${NGINX_LETSENCRYPT_LOG_MAX_SIZE:-2m} max-file: ${NGINX_LETSENCRYPT_LOG_MAX_FILE:-10} networks: default: external: name: ${NETWORK:-webproxy} This is my uploadsize.conf file: client_max_body_size 1000M Now onto my wordpress files. 
Here is my wordpress docker-compose.yml file: version: '3' services: db: container_name: ${CONTAINER_DB_NAME} image: mariadb:latest restart: unless-stopped volumes: - ${DB_PATH}:/var/lib/mysql environment: MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD} MYSQL_DATABASE: ${MYSQL_DATABASE} MYSQL_USER: ${MYSQL_USER} MYSQL_PASSWORD: ${MYSQL_PASSWORD} wordpress: depends_on: - db container_name: ${CONTAINER_WP_NAME} image: wordpress:latest restart: unless-stopped volumes: - ${WP_CORE}:/var/www/html - ${WP_CONTENT}:/var/www/html/wp-content - ./conf.d/uploadsize.ini:/usr/local/etc/php/conf.d/uploadsize.ini environment: WORDPRESS_DB_HOST: ${CONTAINER_DB_NAME}:3306 WORDPRESS_DB_NAME: ${MYSQL_DATABASE} WORDPRESS_DB_USER: ${MYSQL_USER} WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD} WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX} VIRTUAL_HOST: ${DOMAINS} LETSENCRYPT_HOST: ${DOMAINS} LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL} networks: default: external: name: ${NETWORK} Here is my wordpress .env file: # .env file to set up your wordpress site # # Network name # # Your container app must use a network conencted to your webproxy # https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion # NETWORK=webproxy # # Database Container configuration # We recommend MySQL or MariaDB - please update docker-compose file if needed. # CONTAINER_DB_NAME=db # Path to store your database DB_PATH=/wordpress/database/data # Root password for your database MYSQL_ROOT_PASSWORD=mypassword # Database name, user and password for your wordpress MYSQL_DATABASE=mydatabasename MYSQL_USER=myusername MYSQL_PASSWORD=mypassword # # Wordpress Container configuration # CONTAINER_WP_NAME=wordpress # Path to store your wordpress files WP_CORE=/wordpress/core/data WP_CONTENT=/wordpress/wp-content/data # Table prefix WORDPRESS_TABLE_PREFIX=wp_ # Your domain (or domains) DOMAINS=mydomain.com,www.mydomian.com # Your email for Let's Encrypt register LETSENCRYPT_EMAIL=myemail@mydomain.com Here is my wordpress uploadsize.ini file: file_uploads = On memory_limit = 3000M upload_max_filesize = 1000M post_max_size = 2000M max_execution_time = 1000 I assume you have fixed that... if not open this issue again and comment. Thanks! Sorry, I haven't been able to get back to this until now. Yes, that did fix it. Thank you! +1: Hi i forgot to add the conf.d folder. I added it, but now I get the folowing error: ERROR: for wordpress Cannot start service wordpress: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/home/*/*/*/conf.d/uploadsize.ini\\\" to rootfs \\\"/var/lib/docker/overlay2/*/merged\\\" at \\\"/var/lib/docker/overlay2/*/merged/usr/local/etc/php/conf.d/uploadsize.ini\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type How can I fix this? @Koerner can you show how your docker-compose is looking? 
version: '3'
services:
  db:
    container_name: ${CONTAINER_DB_NAME}
    image: mariadb:latest
    restart: unless-stopped
    volumes:
      - ${DB_PATH}:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
  wordpress:
    depends_on:
      - db
    container_name: ${CONTAINER_WP_NAME}
    image: wordpress:latest
    restart: unless-stopped
    volumes:
      - ${WP_CORE}:/var/www/html
      - ${WP_CONTENT}:/var/www/html/wp-content
      - ./conf.d/uploadsize.ini:/usr/local/etc/php/conf.d/uploadsize.ini
    environment:
      WORDPRESS_DB_HOST: ${CONTAINER_DB_NAME}:3306
      WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
      WORDPRESS_DB_USER: ${MYSQL_USER}
      WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
      WORDPRESS_TABLE_PREFIX: ${WORDPRESS_TABLE_PREFIX}
      VIRTUAL_HOST: ${DOMAINS}
      LETSENCRYPT_HOST: ${DOMAINS}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
networks:
  default:
    external:
      name: ${NETWORK}
gharchive/issue
2018-07-01T18:45:42
2025-04-01T06:38:37.403327
{ "authors": [ "Koerner", "evertramos", "xtjoeywx" ], "repo": "evertramos/docker-wordpress-letsencrypt", "url": "https://github.com/evertramos/docker-wordpress-letsencrypt/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
816481001
feat: use bigger images in example @osdiab I saw the issue #101. The reason of why the images doesn't look very well is because they don't have a good aspect ratio. I changed the images and now they are shown as it should be, for sure that we can improve this designs but at least with this the images are shown as expected border radius definitely helps!
gharchive/pull-request
2021-02-25T14:20:27
2025-04-01T06:38:37.406257
{ "authors": [ "martinbianchi", "osdiab" ], "repo": "everydotorg/donate-button", "url": "https://github.com/everydotorg/donate-button/pull/116", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
244789505
精算一覧(サマリー)から個々の口座の記入に遷移したい Feature 精算一覧(サマリー) Problem サマリーをみて、その口座の記入をつけはじめようとすることが多いが、動線がない。 せっかく口座ごとに表示されているのでその口座の記入ができる口がほしい。鉛筆マークとか、帳簿マークとか? Goal 精算一覧(サマリー)に表示された任意の口座の一覧(記入)に簡単に遷移できる。 ついでにいえば、各月の記入に飛びたい気持ちもある。黒い字(未精算分)があるときにその月の記入に飛ぶのがいいのかもしれない。(ただ、精算作成のほうがより自然かもしれない)
gharchive/issue
2017-07-21T21:37:27
2025-04-01T06:38:37.407818
{ "authors": [ "nay" ], "repo": "everyleaf/kozuchi", "url": "https://github.com/everyleaf/kozuchi/issues/149", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
983280990
Support for plotting multiple columns in LineChart LineChart component currently requires tidy data to plot multiple lines. Add support for plotting multiple columns (i.e. 1 column per measure and plotting each). Syntax could be something like <LineChart x=date y=(measure1, measure2)/> Needs to be able to apply color palette in same way as series argument. Should probably use an array as the multi-column argument: [column_a, column_b] Can also expose arguments for series name: [“Column A”, “Column B”] And lineColor: [#4287f5, black] In addition to any other line formatting arguments. If only one color or style is provided but there are multiple series, apply that style to all series. If no styles provided, use multi-series color palette as normal
gharchive/issue
2021-08-30T23:08:17
2025-04-01T06:38:37.422984
{ "authors": [ "hughess" ], "repo": "evidence-dev/evidence", "url": "https://github.com/evidence-dev/evidence/issues/110", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
249144198
Subset URLs in ECO have changed In the 2017-07-19 release, the structure of the subset URLs changed e.g. http://purl.obolibrary.org/obo/eco#go_groupings -> http://purl.obolibrary.org/obo/eco/go_groupings I can see the motivation for changing: the old URLs were ugly. However, unfortunately we were depending on these subsets for our amigo load (see #93). As the urls changed without warning, the classes effectively slipped out of the subset for us, causing this problem: https://github.com/geneontology/amigo/issues/433 In future can you give us advance warning so we can change our configurations? Thanks. It may be easiest for you to stack with what you have for now (I recommend having the PURLs resolve) but I defer to @ktlm here. Note that the change also had consequences for obo-format users - you can see for yourself in the obo version of the file. The URI to shortform mappings for subsets is a little opaque, see http://owlcollab.github.io/oboformat/doc/obo-syntax.html section 5.9.2 Basically 'non-canonical' identifiers like 'goslim_foo' get given a hash IRI using the ontology base IRI. There were reasons for this to do with unambiguous roundtripping. Although more people are abandoning obo, unfortunately many of the consumers of eco still use it. I advise doing a diff between obo version with each release. If anything looks odd or changes unexpectedly then hold off and consult us. Thanks Chris. Very helpful. Will do. On Aug 9, 2017, at 4:02 PM, Chris Mungall notifications@github.com wrote: Note that the change also had consequences for obo-format users - you can see for yourself in the obo version of the file. The URI to shortform mappings for subsets is a little opaque, see http://owlcollab.github.io/oboformat/doc/obo-syntax.html section 5.9.2 Basically 'non-canonical' identifiers like 'goslim_foo' get given a hash IRI using the ontology base IRI. There were reasons for this to do with unambiguous roundtripping. Although more people are abandoning obo, unfortunately many of the consumers of eco still use it. I advise doing a diff between obo version with each release. If anything looks odd or changes unexpectedly then hold off and consult us. — You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub, or mute the thread. OK, I consulted with @kltm, this is actually a bit more problematic for us than I thought. @mchibucos - Can we request an ASAP new release of ECO with the URLs for subsets reverted back (I can make the PR if you like). This will give us some time to make necessary software changes. We can then switch to your preferred URLs in 1-2 months, and perhaps coordinate this with wider implemented best practices across OBO. Hi @cmungall - I apologize for any issues this caused; this was actually something that I missed in a merge that changed the namespace from eco# to eco/. I'll fix it and release now. Affirmative on all. @rctauber On Aug 9, 2017, at 4:30 PM, Chris Mungall notifications@github.com wrote: OK, I consulted with @kltm, this is actually a bit more problematic for us than I thought. @mchibucos - Can we request an ASAP new release of ECO with the URLs for subsets reverted back (I can make the PR if you like). This will give us some time to make necessary software changes. We can then switch to your preferred URLs in 1-2 months, and perhaps coordinate this with wider implemented best practices across OBO. — You are receiving this because you were mentioned. 
Reply to this email directly, view it on GitHub, or mute the thread. Not released yet.... Sorry Chris. I was under the impression this was fixed. @rctauber any insights? Release is live, thanks for your patience @cmungall is everything OK with this now? Let me know if there's anything else that needs to be fixed. Thanks! Thanks!
gharchive/issue
2017-08-09T19:59:46
2025-04-01T06:38:37.434239
{ "authors": [ "cmungall", "mchibucos", "rctauber" ], "repo": "evidenceontology/evidenceontology", "url": "https://github.com/evidenceontology/evidenceontology/issues/149", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }