id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
1393672928
🛑 Client Portal is down. In e28d3db, Client Portal (https://accounts.infinite8.co) was down (HTTP code: 404, response time: 109 ms). Resolved: Client Portal is back up in 3740e92.
gharchive/issue
2022-10-02T07:30:13
2025-04-01T04:34:34.802115
{ "authors": [ "jonathanfinley" ], "repo": "infinite8co/upptime", "url": "https://github.com/infinite8co/upptime/issues/723", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
209735796
Fix documentation of microseconds duration

The documentation says that u and µ are supported suffixes for durations, but only us and µs actually work. Using u or µ fails with:

run: parse config: time: unknown unit µ in duration 1µ
run: parse config: time: unknown unit u in duration 1u

This PR fixes the documentation.

Thank you for finding and fixing that, @cm6051. I looked into it, and this seems to be part of a bigger issue with InfluxDB: the proper microsecond syntax differs for writes, queries, specifying the timestamp precision in the CLI, and setting durations in the config file. I've opened an issue on the InfluxDB repo about this: https://github.com/influxdata/influxdb/issues/8053. For now, I'm not going to merge your docs PR, just so the team can figure out what the proper syntax actually is. I've opened a docs issue to add a note about the current microsecond syntax behaviour. Thank you again for finding that - we really, really appreciate you letting us know and taking the time to open a docs PR!
gharchive/pull-request
2017-02-23T11:30:00
2025-04-01T04:34:34.826560
{ "authors": [ "cm6051", "rkuchan" ], "repo": "influxdata/docs.influxdata.com", "url": "https://github.com/influxdata/docs.influxdata.com/pull/1031", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
665443778
JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Hello all, any help is much appreciated. I am using:
InfluxDB version: 2.0.0 (f54848f)
InfluxDB-python version: influxdb-5.3.0
Python version: 3.7.4
Operating system version: macOS 10.14.5

I am trying to instantiate a DataFrame client as below:

client = DataFrameClient(host, port, user, password, dbname)
print("Create database: " + dbname)
client.create_database(dbname)

But I am facing a JSONDecodeError:

JSONDecodeError                           Traceback (most recent call last)
<ipython-input> in <module>
     40
     41 print("Create database: " + dbname)
---> 42 client.create_database(dbname)
     43
     44

~/opt/anaconda3/lib/python3.7/site-packages/influxdb/client.py in create_database(self, dbname)
    736         """
    737         self.query("CREATE DATABASE {0}".format(quote_ident(dbname)),
--> 738                    method="POST")
    739
    740     def drop_database(self, dbname):

~/opt/anaconda3/lib/python3.7/site-packages/influxdb/_dataframe_client.py in query(self, query, params, bind_params, epoch, expected_response_code, database, raise_errors, chunked, chunk_size, method, dropna)
    192                           method=method,
    193                           chunk_size=chunk_size)
--> 194         results = super(DataFrameClient, self).query(query, **query_args)
    195         if query.strip().upper().startswith("SELECT"):
    196             if len(results) > 0:

~/opt/anaconda3/lib/python3.7/site-packages/influxdb/client.py in query(self, query, params, bind_params, epoch, expected_response_code, database, raise_errors, chunked, chunk_size, method)
    523         if chunked:
    524             return self._read_chunked_response(response)
--> 525         data = response.json()
    526
    527         results = [

~/opt/anaconda3/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
    895                 # used.
    896                 pass
--> 897         return complexjson.loads(self.text, **kwargs)
    898
    899     @property

~/opt/anaconda3/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    346     parse_int is None and parse_float is None and
    347     parse_constant is None and object_pairs_hook is None and not kw):
--> 348         return _default_decoder.decode(s)
    349     if cls is None:
    350         cls = JSONDecoder

~/opt/anaconda3/lib/python3.7/json/decoder.py in decode(self, s, _w)
    335
    336     """
--> 337         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338         end = _w(s, end).end()
    339         if end != len(s):

~/opt/anaconda3/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
    353         obj, end = self.scan_once(s, idx)
    354     except StopIteration as err:
--> 355         raise JSONDecodeError("Expecting value", s, err.value) from None
    356     return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Were you able to reproduce the error? Like, did you repeat it with another string? I'm also trying to understand if the dbname variable was assigned a name/string before being passed along, because there's '--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None' in your error log.

I have switched to InfluxDB 2.0.

Regarding "I have switched to InfluxDB 2.0": does it work now?

Written Python scripts, so yes.

The issue also mentions the use of InfluxDB 2.0. I'm trying to understand: what is it that you've done differently to make it work?

Getting the error: JSONDecodeError: Expecting value: line 2 column 1 (char 1)

Can you please paste out the entire error stack? Also, please post what you're trying to achieve, such as writing points.
gharchive/issue
2020-07-24T22:10:08
2025-04-01T04:34:34.854800
{ "authors": [ "raghavchalapathy", "sakethramanujam", "sunilsaini47" ], "repo": "influxdata/influxdb-python", "url": "https://github.com/influxdata/influxdb-python/issues/838", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
489949729
creating a notification rule from any to crit should generate the correct flux

It looks like it is generating monitor.stateChanges(fromLevel: "unknown", toLevel: "crit"), but it should be monitor.stateChanges(fromLevel: "any", toLevel: "crit"). This may also solve this issue: https://github.com/influxdata/influxdb/issues/14967
gharchive/issue
2019-09-05T19:13:16
2025-04-01T04:34:34.856691
{ "authors": [ "russorat" ], "repo": "influxdata/influxdb", "url": "https://github.com/influxdata/influxdb/issues/14978", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
630108262
Write data from a URL

Proposal: Extend the influx write command with the ability to write data from a URL.

Current behavior: Right now you can only write line protocol or CSV data from files or stdin.

Desired behavior: Add a -u, --url parameter to the influx write command that grabs line protocol or CSV from a remote HTTP endpoint.

Use case: Easily write data from HTTP endpoints. For example, easily write CDC Covid-19 CSV data to InfluxDB:

influx write \
  -u https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/ecdc/full_data.csv \
  --header="#datatype dateTime:2006-01-02,tag,long,long,long,long"

I think that

curl -f https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/ecdc/full_data.csv | influx write --format csv ...

already works fine, doesn't it?

When I try, I get the following error from curl: curl: (23) Failed writing body (0 != 1371). This indicates that influx write is closing the read pipe before curl is done piping data into it. This may be unique to macOS, but it's still an issue. It would also be nice to support writing from URLs without an external dependency on curl or wget.

The influx pkg (now influx apply) command supports this already with the -u/--template-url flag. @jsteenb2, could the URL fetching implementation be reused here?

I think it'd be nice to separate them; from what I can tell, the -f flag only looks at the file system. Identifying the URL as HTTP would have to be done here. The stuff pkger does is not super useful to copy-paste into this situation. I'd wager that having a separate --url flag would be more explorable 🤔. If you want to use the -f flag to take remote in addition to local files, you can check out the pkger implementation here: https://github.com/influxdata/influxdb/blob/bc9b69b96f4d17b8658b62576cc38b0110188f49/pkger/service.go#L2870-L2901

@jsteenb2 "If you want to use the -f flag to take remote in addition to local files..." No, the original proposal is to introduce a -u/--url flag to the write command, not to allow the -f flag to handle both local and remote files. I was wondering if the mechanism for reading files from a remote HTTP endpoint could be reused here to quickly add the -u flag.

Ahh, gotcha. All pkger does is make an HTTP request. It's the following: https://github.com/influxdata/influxdb/blob/9ab44476170a651a46672f8338f01e72e94c4e29/pkger/parser.go#L116-L138

@sanderson thank you for sharing the motivation for doing this; there is no problem enhancing influx write with this. Just FYI, the pipe failed because influx write encountered an error and stopped reading the data. If I run curl in silent mode, I can better see what is wrong:

$ curl -f -s https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/ecdc/full_data.csv | ./influx write --header="#datatype dateTime:2006-01-02,tag,long,long,long,long"
Error: Failed to write data: line 2: no measurement column found.

After I add a measurement, it works fine:

curl -f https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/ecdc/full_data.csv | ./influx write --header="#datatype dateTime:2006-01-02,tag,long,long,long,long" --header="#constant measurement, covid"

The pipe is a better solution compared to what is now used in pkger, since it does not cache the whole CSV data in memory. Anyway, the implementation is easy.

Ah, nice catch @sranka. Thanks for the follow-up.
gharchive/issue
2020-06-03T15:43:44
2025-04-01T04:34:34.866099
{ "authors": [ "jsteenb2", "sanderson", "sranka" ], "repo": "influxdata/influxdb", "url": "https://github.com/influxdata/influxdb/issues/18349", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1519470764
Large amount of RAM

Large amount of RAM for a little increase in cardinality.

Proposal: For a cardinality of 365, I have RAM usage of 64 GB.

Current behavior: On startup, RAM usage increases to 64 GB or up to 80 GB, and then only 5 GB is used by the influxd task. By raising cardinality from 6x6 to 6x12, RAM grows from 5 GB up to 12 GB in normal use.

Desired behavior: Limit memory growth.

Alternatives considered: Switch to another db.

Use case: The server has only a limited amount of RAM.

The solution seems to be to switch to VictoriaMetrics.

VictoriaMetrics is a good and resource-efficient tsdb, but it can only handle timestamps with millisecond precision, while InfluxDB stores timestamps with nanosecond precision. So I think I need both, or to find another alternative to InfluxDB for the high-frequency data.

@pki791 unfortunately no. I reduced the amount of data where I use InfluxDB and added a file-based buffer for high-frequency data.

@Nick135 Thank you for the answer. I feel bad about that RAM usage. I now have about 150 measurements, coming in every 10 seconds. I ran it for a month, then I needed to increase RAM from 4 GB to 8 GB. Now, after about 3 months of total runtime, I already need >8 GB of RAM to start influxdb. It uses about 8 GB at start; afterwards, RAM usage is reduced to 1-2 GB. I planned to have about 100 times more measurements in about the same groups. That startup RAM usage is very strange... I use 2.7.4 on Debian.
gharchive/issue
2023-01-04T19:28:34
2025-04-01T04:34:34.870903
{ "authors": [ "Nick135", "pki791" ], "repo": "influxdata/influxdb", "url": "https://github.com/influxdata/influxdb/issues/24022", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
411531238
storage: cleanup shard errors

[x] Rebased/mergeable
[x] Tests pass
[x] http/swagger.yml updated (if modified Go structs or API)
[x] Sign CLA (if not already signed)

Hi @e-dard @zeebo, requesting a review.

Thanks @zhulongcheng.
gharchive/pull-request
2019-02-18T15:30:16
2025-04-01T04:34:34.872959
{ "authors": [ "e-dard", "zhulongcheng" ], "repo": "influxdata/influxdb", "url": "https://github.com/influxdata/influxdb/pull/11963", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
834112163
chore: updated CONTRIBUTING.md for new package config script

[X] Well-formatted commit messages
[X] Rebased/mergeable
[X] Tests pass
[X] Documentation updated or issue created (provide link to issue/pr)

Thanks for the PR @davidby-influx!
gharchive/pull-request
2021-03-17T19:33:56
2025-04-01T04:34:34.875220
{ "authors": [ "davidby-influx", "lesam" ], "repo": "influxdata/influxdb", "url": "https://github.com/influxdata/influxdb/pull/20990", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
154289728
Teach the http service how to enforce connection limits

The http connection limit is for any HTTP operation and is independent of the other connection limits. It should be set to a higher value than the query limit or the write limit. The difference between this and the other connection limits is that it will close out the connection immediately, without any further processing.

A max concurrent write limit has been added. This will prevent writes to the underlying store if the number of writers exceeds the threshold.

Also removes some unused config options from the cluster config. Fixes #6559.

Rather than continuing commenting on a specific line, I'll continue here. Just for clarification, is the issue the number of active /write requests being processed, or the underlying TCP connection count? Are we moving forward with considering unclosed connections to be a "bad client" and just limiting the number of actively processing /write requests? If so, it seems like a simple "channel as a semaphore" would be enough to limit that without outright rejecting requests that happen in quick succession or when the server is experiencing high load. Rejecting the connections puts the burden back on the client to retry; using a semaphore to limit concurrent processing of requests would allow requests to build up and then get processed as the server is able to.

I think limiting the connections as you say through a semaphore would make us vulnerable to a DDoS from clients that don't act correctly. The reason to close the connections immediately is to avoid further damage from a DDoS; the semaphore approach would also mean that nobody new could connect and existing client connections could freeze the server. But to be fair, that's how PostgreSQL connections work, so we may be in good company.

Latest version with numbers from influx_stress for three different configurations:

# With a connection limit of 5 (intentionally to show a DDoS situation with retries)
$ influx_stress
Total Queries: 250
Average Query Response Time: 11.29319ms
Total Requests: 2127
Success: 2000
Fail: 127
Average Response Time: 56.393977ms
Points Per Second: 366954

# With no connection limit
$ influx_stress
Total Requests: 2000
Success: 2000
Fail: 0
Average Response Time: 85.688198ms
Points Per Second: 399314
Total Queries: 250
Average Query Response Time: 25.941744ms

# With a connection limit of 5000
$ influx_stress
Total Requests: 2000
Success: 2000
Fail: 0
Average Response Time: 86.180241ms
Points Per Second: 400393
Total Queries: 250
Average Query Response Time: 24.379521ms

I wouldn't trust the benchmarking results completely, and the average write response time in the 5-connection-limit configuration looks very wrong. It doesn't look like it takes retries into account, and it likely counted a write whose connection was closed immediately as a successful request. Average query time, which I know does retries, shows the results better: the connection limit severely slowed down queries because of the need to continuously retry.

Benchmarking:

$ go test -run=LimitListener -bench=LimitListener ./services/httpd/
PASS
BenchmarkLimitListener-8   5000000   349 ns/op
ok   github.com/influxdata/influxdb/services/httpd   2.170s

I used channels since they seemed to be faster than mutexes, and using the primitives in sync/atomic directly made me worry I would make a mistake; the logic of checking and incrementing a counter didn't play out very well. I can try to do it anyway, as long as we're all agreed this is the correct direction to go.

LGTM. @joelegasse should take a look as well.

👍 Clean and simple, LGTM. I'm not too worried about extreme performance in accepting a network connection, since the round-trip latency is several orders of magnitude greater than waiting on a channel operation.

Then I'll update this with a changelog, wait for green again, and then merge it.
gharchive/pull-request
2016-05-11T16:44:54
2025-04-01T04:34:34.881524
{ "authors": [ "joelegasse", "jsternberg", "jwilder" ], "repo": "influxdata/influxdb", "url": "https://github.com/influxdata/influxdb/pull/6601", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2055219248
🛑 alternative site is down. In 338d18d, alternative site (https://almgro7al3nzy.com) was down (HTTP code: 0, response time: 0 ms). Resolved: alternative site is back up in 9b08698 after 25 minutes.
gharchive/issue
2023-12-24T21:46:29
2025-04-01T04:34:34.970688
{ "authors": [ "info-devf5r" ], "repo": "info-devf5r/VPN", "url": "https://github.com/info-devf5r/VPN/issues/1508", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1689995622
🛑 موقع تجريبي is down. In e0a66f8, موقع تجريبي ("test site"; https://www.nulled.to) was down (HTTP code: 403, response time: 97 ms). Resolved: موقع تجريبي is back up in ac6fb42.
gharchive/issue
2023-04-30T16:59:10
2025-04-01T04:34:34.973031
{ "authors": [ "info-devf5r" ], "repo": "info-devf5r/VPN", "url": "https://github.com/info-devf5r/VPN/issues/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1718385185
🛑 CHAT Website is down. In 4aff462, CHAT Website (https://chat.devf5r.com) was down (HTTP code: 0, response time: 0 ms). Resolved: CHAT Website is back up in e945452.
gharchive/issue
2023-05-21T06:13:42
2025-04-01T04:34:34.975314
{ "authors": [ "info-devf5r" ], "repo": "info-devf5r/VPN", "url": "https://github.com/info-devf5r/VPN/issues/342", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
126516646
Add end-to-end tests for assertion definition

Add a number of end-to-end tests for assertions. Most tests use sentences from Wikipedia. Each test is commented with what it is testing.

I see you ported the method names from whoami (I defined many of them). These method names are actually bad practice, because they don't help us understand what the test checks. I take the blame for this, but we can use this as an opportunity to make the code better. Examples: test_complex_compound is much more informative than test_is_a_law_case; test_unicode is better than test_is_a_municipality. Try to group tests logically. I think multiple asserts in one test_ method are fine as long as they test the same 'unit' of functionality. No need to add error messages. Look at the argument order in https://docs.python.org/2/library/unittest.html (e.g. assertItemsEqual(actual, expected)).

Interestingly, this is the docstring of assertItemsEqual:

Help on method assertItemsEqual in module unittest.case:

assertItemsEqual(self, expected_seq, actual_seq, msg=None) unbound unittest.case.TestCase method
    An unordered sequence specific comparison. It asserts that actual_seq
    and expected_seq have the same element counts. Equivalent to::
        self.assertEqual(Counter(iter(actual_seq)), Counter(iter(expected_seq)))
    Asserts that each element has the same count in both sequences.
    Example:
    - [0, 1, 1] and [1, 0, 1] compare equal.
    - [0, 0, 1] and [0, 1] compare unequal.

I don't think the order actually matters. In the error message, they just call them 'first' and 'second' list. Anyhow, I'll stick with the variable names in the docstring.

lgtm! :rocket: :ship: :airplane:

Closed #41
gharchive/pull-request
2016-01-13T21:17:06
2025-04-01T04:34:34.990642
{ "authors": [ "alvaromorales", "moldot" ], "repo": "infolab-csail/whoami", "url": "https://github.com/infolab-csail/whoami/pull/49", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
689058379
Update CHANGELOG.md for v.0.0.3

Closes: #214

Description: Update CHANGELOG.md for v.0.0.3.

For contributor use:
[ ] Unit tests written
[ ] Added test to CI if applicable
[ ] Updated CHANGELOG_PENDING.md
[ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
[ ] Updated relevant documentation (docs/) and code comments
[ ] Re-reviewed Files changed in the Github PR explorer

Codecov Report
Merging #215 into master will increase coverage by 20.1%. The diff coverage is 50.6%.

@@            Coverage Diff            @@
##           master    #215     +/-   ##
=========================================
+ Coverage    13.6%   33.8%   +20.1%
=========================================
  Files          69      67       -2
  Lines        3752    4503     +751
  Branches     1374    1546     +172
=========================================
+ Hits          513    1524    +1011
- Misses       2618    2848     +230
+ Partials      621     131     -490

Impacted Files (coverage Δ):
modules/src/events.rs: 0.0% (ø)
modules/src/ics02_client/events.rs: 0.0% (ø)
modules/src/ics02_client/raw.rs: 0.0% (ø)
modules/src/ics03_connection/error.rs: 25.0% (-8.4% ↓)
modules/src/ics04_channel/error.rs: 23.0% (-2.0% ↓)
modules/src/ics04_channel/packet.rs: 0.0% (ø)
modules/src/ics07_tendermint/client_def.rs: 0.0% (ø)
modules/src/ics07_tendermint/error.rs: 33.3% (+33.3% ↑)
modules/src/ics07_tendermint/msgs/update_client.rs: 0.0% (ø)
modules/src/ics23_commitment/mod.rs: 0.0% (ø)
...and 99 more. Continue to review the full report at Codecov.

For a changelog, this one is quite beautiful, I gotta say. Please don't forget to:
[ ] Bump all crate versions to v0.0.3
gharchive/pull-request
2020-08-31T09:23:17
2025-04-01T04:34:35.043008
{ "authors": [ "adizere", "ancazamfir", "codecov-commenter" ], "repo": "informalsystems/ibc-rs", "url": "https://github.com/informalsystems/ibc-rs/pull/215", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
788923699
Testing & cleanup: follow-up to migration work

Closes: #469

For contributor use:
[ ] Updated the Unreleased section of CHANGELOG.md with the issue.
[ ] If applicable: Unit tests written, added test to CI.
[ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
[ ] Updated relevant documentation (docs/) and code comments.
[ ] Re-reviewed Files changed in the Github PR explorer.

Codecov Report
Merging #535 (ec00b02) into master (b1b37f5) will increase coverage by 28.0%. The diff coverage is 70.2%.

@@            Coverage Diff            @@
##           master    #535     +/-   ##
=========================================
+ Coverage    13.6%   41.7%   +28.0%
=========================================
  Files          69     133      +64
  Lines        3752    8454    +4702
  Branches     1374       0    -1374
=========================================
+ Hits          513    3531    +3018
- Misses       2618    4923    +2305
+ Partials      621       0     -621

Impacted Files (coverage Δ):
...application/ics20_fungible_token_transfer/error.rs: 0.0% (ø)
...pplication/ics20_fungible_token_transfer/events.rs: 0.0% (ø)
...ion/ics20_fungible_token_transfer/msgs/transfer.rs: 0.0% (ø)
modules/src/events.rs: 0.0% (ø)
modules/src/ics02_client/error.rs: 100.0% (ø)
modules/src/ics02_client/events.rs: 0.0% (ø)
modules/src/ics02_client/raw.rs: 0.0% (ø)
modules/src/ics03_connection/error.rs: 100.0% (+66.6% ↑)
modules/src/ics03_connection/events.rs: 0.0% (ø)
modules/src/ics04_channel/error.rs: 100.0% (+75.0% ↑)
...and 228 more. Continue to review the full report at Codecov.

Thanks for the review @adizere. I've addressed your suggestions.
gharchive/pull-request
2021-01-19T11:00:28
2025-04-01T04:34:35.072505
{ "authors": [ "adizere", "codecov-io", "vitorenesduarte" ], "repo": "informalsystems/ibc-rs", "url": "https://github.com/informalsystems/ibc-rs/pull/535", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
166877427
Clean up and document angular bindings

The current angular binding is messy and undocumented:
[x] When the library is used in angular we need to clean up the global ERMrest library.
[x] For the revised API sketch we do not need the factory; we just need ERMrest to be the factory.
[x] We need documentation. Ideally this can be generated from jsdocs, but it may not be possible and may have to be manually written.

Merged the PR #107. Closing this issue.
gharchive/issue
2016-07-21T17:43:57
2025-04-01T04:34:35.079950
{ "authors": [ "jrchudy", "robes" ], "repo": "informatics-isi-edu/ermrestjs", "url": "https://github.com/informatics-isi-edu/ermrestjs/issues/104", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
295891885
add goofys

@stevehadd I think this is what is needed, but I haven't tested it. It mounts a load of buckets you likely don't need, so you can remove them. Good luck.

My recommendation would be to put the mounts in /etc/fstab and then run mount -a to mount them. This way they will still be mounted after a reboot.

Oh, you're mounting inside the actual container! Ignore me 😆

Old, will not merge.
gharchive/pull-request
2018-02-09T14:38:42
2025-04-01T04:34:35.084919
{ "authors": [ "jacobtomlinson", "tam203" ], "repo": "informatics-lab/forest", "url": "https://github.com/informatics-lab/forest/pull/35", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
712612775
Add tests for kubectl command on forbidden verbs and resources

ISSUE TYPE: Bug fix Pull Request

SUMMARY: Added an e2e test for a kubectl command with a forbidden verb and a forbidden resource, and another for a kubectl command with an allowed verb and a forbidden resource. Fixes #350.

Thanks for the PR @anoopmsivadas 🙏

@anoopmsivadas Could you please rebase with develop and push again?

Hello @anoopmsivadas, thanks again for the PR! Thank you for contributing to BotKube. Could you please fill out this form, so we can send the well-deserved awesome swag 🙂 Team BotKube
gharchive/pull-request
2020-10-01T08:18:31
2025-04-01T04:34:35.102987
{ "authors": [ "PrasadG193", "anoopmsivadas", "chetanpdeshmukh" ], "repo": "infracloudio/botkube", "url": "https://github.com/infracloudio/botkube/pull/374", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1004258778
Create Page Corpus for topic modelling
https://github.com/inidun/unesco_data_collection/blob/a5a8602580d4680e381858736b64fdab1c4d62b4/courier/corpora.py#L86-L86
Only include pages specified in the index as containing articles.
gharchive/issue
2021-09-22T12:50:59
2025-04-01T04:34:35.109824
{ "authors": [ "aibakeneko" ], "repo": "inidun/unesco_data_collection", "url": "https://github.com/inidun/unesco_data_collection/issues/52", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
300160670
xc contract
add Ink-tone contract
add Ink-xc contract
add xc-plugin contract
add README.md

xc contract: modify sign
gharchive/pull-request
2018-02-26T09:19:53
2025-04-01T04:34:35.121038
{ "authors": [ "mahuaibo" ], "repo": "inklabsfoundation/inkchain", "url": "https://github.com/inklabsfoundation/inkchain/pull/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1084525312
Constrain pydantic version to ~=1.8.2

Description
Constrain the pydantic version to pydantic~=1.8.2 in order to work around an incompatibility issue that causes the following exception:

Traceback (most recent call last):
  File "env/bin/pytest", line 8, in <module>
    sys.exit(console_main())
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 185, in console_main
    code = main()
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 143, in main
    config = _prepareconfig(args, plugins)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 319, in _prepareconfig
    pluginmanager=pluginmanager, args=args
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pluggy/_callers.py", line 55, in _multicall
    gen.send(outcome)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/helpconfig.py", line 100, in pytest_cmdline_parse
    config: Config = outcome.get_result()
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 1003, in pytest_cmdline_parse
    self.parse(args)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 1283, in parse
    self._preparse(args, addopts=addopts)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/config/__init__.py", line 1172, in _preparse
    self.pluginmanager.load_setuptools_entrypoints("pytest11")
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pluggy/_manager.py", line 287, in load_setuptools_entrypoints
    plugin = ep.load()
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/importlib_metadata/__init__.py", line 194, in load
    module = import_module(match.group('module'))
  File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/_pytest/assertion/rewrite.py", line 170, in exec_module
    exec(co, module.__dict__)
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/pytest_inmanta/plugin.py", line 44, in <module>
    from inmanta import compiler, config, const, module, plugins, protocol
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/compiler/__init__.py", line 25, in <module>
    from inmanta import const, module
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/module.py", line 67, in <module>
    from inmanta import const, env, loader, plugins
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/plugins.py", line 26, in <module>
    from inmanta import const, protocol
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/protocol/__init__.py", line 56, in <module>
    from . import methods, methods_v2
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/protocol/methods.py", line 25, in <module>
    from inmanta import const, data
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/data/__init__.py", line 58, in <module>
    from inmanta import const, resources, util
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/resources.py", line 40, in <module>
    from inmanta.data.model import ResourceIdStr, ResourceVersionIdStr
  File "/home/jenkins/workspace/modules_apt_master/env/lib64/python3.6/site-packages/inmanta/data/model.py", line 34, in <module>
    old_field_type_schema = pydantic.schema.field_type_schema
AttributeError: 'cython_function_or_method' object has no attribute 'field_type_schema'

Self Check:
[ ] Attached issue to pull request
[x] Changelog entry
[x] Type annotations are present
[x] Code is clear and sufficiently documented
[x] No (preventable) type errors (check using make mypy or make mypy-diff)
[x] Sufficient test cases (reproduces the bug/tests the requested feature)
[x] Correct, in line with design
[ ] End user documentation is included or an issue is created for end-user documentation

Reviewer Checklist:
[ ] Sufficient test cases (reproduces the bug/tests the requested feature)
[ ] Code is clear and sufficiently documented
[ ] Correct, in line with design

Processing this pull request
Merged into branches master in 493722b2213807bfcff66cc457a70d9246edd9ac
Processing #3586.
gharchive/pull-request
2021-12-20T08:57:58
2025-04-01T04:34:35.151989
{ "authors": [ "arnaudsjs", "inmantaci" ], "repo": "inmanta/inmanta-core", "url": "https://github.com/inmanta/inmanta-core/pull/3585", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1830838952
Allow product's conf.py to inject version information when building docs

This allows the product to inject the version information before this file is even loaded, which in turn allows us to use this version for other purposes. This was not possible before because, e.g., for the iso product the version would be inaccurate (INMANTA_DONT_DISCOVER_VERSION -> 1.0.0) until after this file had finished loading. By allowing version injection, the version becomes accurate at load time.

See inmanta/inmanta-service-orchestrator#410

Processing this pull request
Merged into branches master in c1c13f1e915d3e03758ce25da814a765a7357941
Processing #6341.
Processing #6340.
gharchive/pull-request
2023-08-01T09:25:27
2025-04-01T04:34:35.154813
{ "authors": [ "inmantaci", "sanderr" ], "repo": "inmanta/inmanta-core", "url": "https://github.com/inmanta/inmanta-core/pull/6339", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1117887223
Build(deps): Bump @patternfly/react-core from 4.181.1 to 4.192.15 (#2589) Pull request opened by the merge tool on behalf of #2589 Merged in 69dbe99fb57ddea2b3c18727569d270b9082b145
gharchive/pull-request
2022-01-28T22:03:10
2025-04-01T04:34:35.157320
{ "authors": [ "inmantaci" ], "repo": "inmanta/web-console", "url": "https://github.com/inmanta/web-console/pull/2590", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1138669570
Issue/2720 remove processing events

Description: closes #2720

Self Check (strike through any lines that are not applicable, then check the box):
[x] Attached issue to pull request
[x] Changelog entry
[x] Code is clear and sufficiently documented
[x] Sufficient test cases (reproduces the bug/tests the requested feature)
[x] Correct, in line with design
[ ] End user documentation is included or an issue is created for end-user documentation (add ref to issue here: )

Reviewer Checklist:
[ ] Sufficient test cases (reproduces the bug/tests the requested feature)
[ ] Code is clear and sufficiently documented
[ ] Correct, in line with design

Merged into branches master, iso5 in cb6ecf44509ed5db903714e82ac5b15a49881985
gharchive/pull-request
2022-02-15T13:07:11
2025-04-01T04:34:35.160714
{ "authors": [ "Stijnkool", "inmantaci" ], "repo": "inmanta/web-console", "url": "https://github.com/inmanta/web-console/pull/2769", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1756481856
No error notification for entering a wrong password when joining a board

When joining a password-protected board, the user has to enter the correct password. When entering a wrong password, no error/info messages/toasts are shown.

New design:
light mode: https://www.figma.com/proto/8m2V6OJTImuOdQoxruWurP/Scrumlr?page-id=2790%3A33189&type=design&node-id=2894-36698&viewport=-15160%2C1397%2C1.04&t=Dt8VLf3oAQ2wjIwP-1&scaling=min-zoom&starting-point-node-id=2894%3A36698&mode=design
dark mode: https://www.figma.com/proto/8m2V6OJTImuOdQoxruWurP/Scrumlr?page-id=2790%3A33189&type=design&node-id=2894-37407&viewport=-15160%2C1397%2C1.04&t=Dt8VLf3oAQ2wjIwP-1&scaling=min-zoom&starting-point-node-id=2894%3A36698&mode=design

New design: https://www.figma.com/proto/8m2V6OJTImuOdQoxruWurP/Scrumlr?page-id=2790%3A33189&type=design&node-id=2894-40386&viewport=-2624%2C665%2C0.28&t=BhZAw6zktBy0cz0K-1&scaling=min-zoom&starting-point-node-id=2894%3A36698&mode=design

Popup (Password) New: https://www.figma.com/proto/8m2V6OJTImuOdQoxruWurP/Scrumlr?page-id=1725%3A29977&type=design&node-id=2934-19217&viewport=-22298%2C2330%2C0.7&t=tDgUeWZ1Qia4MJoz-1&scaling=min-zoom&starting-point-node-id=1729%3A36833&show-proto-sidebar=1&mode=design

@Kraft16 I think the first link is broken. Could you post it again?

Thanks @Resaki1 for the reminder. Here's the actual link to the design. Popup (Generate password): https://www.figma.com/proto/8m2V6OJTImuOdQoxruWurP/Scrumlr?page-id=1725%3A29977&type=design&node-id=4051-31455&viewport=-16948%2C2274%2C0.66&t=z1KGygPpr3bd9sKd-1&scaling=min-zoom&starting-point-node-id=1729%3A36833&show-proto-sidebar=1&mode=design

You can find the design on Figma. Go to "🧩 Components" in the left menu and then to the section "Popup (Password)".
gharchive/issue
2023-06-14T09:42:55
2025-04-01T04:34:35.228716
{ "authors": [ "Kraft-16", "Kraft16", "Resaki1", "SelinaBuff" ], "repo": "inovex/scrumlr.io", "url": "https://github.com/inovex/scrumlr.io/issues/3096", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1853595159
Update 08-types-of-wallets.mdx, adding NOW Wallet

Hello! NOW Wallet is a non-custodial wallet developed by the ChangeNOW team that allows users to purchase, exchange, store, and stake ADA. Both NOW Wallet and ChangeNOW are long-time Cardano supporters, and we would greatly appreciate the addition to the types-of-wallets page!

Thanks for your contribution!
gharchive/pull-request
2023-08-16T16:50:29
2025-04-01T04:34:35.238517
{ "authors": [ "WalletNOW", "olgahryniuk" ], "repo": "input-output-hk/cardano-documentation", "url": "https://github.com/input-output-hk/cardano-documentation/pull/524", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
616121017
Tools to assist working with ormolu

This puts infrastructure in place, but doesn't yet do the big reformat.

bors merge
gharchive/pull-request
2020-05-11T19:19:58
2025-04-01T04:34:35.240715
{ "authors": [ "nc6" ], "repo": "input-output-hk/cardano-ledger-specs", "url": "https://github.com/input-output-hk/cardano-ledger-specs/pull/1438", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
623887769
Log warnings about hot key certificate expiry

As the operational (hot) key gets closer and closer to expiring, the node should log warnings of increasing severity. Suggest issuing Warning-level messages in the last 7 KES periods, and an Alert in the final KES period and subsequent periods. Could also issue an Alert-severity message on each block-forging failure when we have an expired hot key. Check also that the failure to get a ledger view is at Alert severity level. Covers part of https://github.com/input-output-hk/cardano-node/issues/956
gharchive/issue
2020-05-24T15:31:41
2025-04-01T04:34:35.242870
{ "authors": [ "dcoutts", "kevinhammond" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/issues/1029", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1295135863
[BUG] - Invalid snapshot DiskSnapshot followed by replaying from genesis

Internal/External: Internal if an IOHK staff member.
Area: cardano-node/snapshot

Summary: On testnet, with the node 100% synced (Babbage era), when restarting the node the disk snapshots are invalid and the node needs to replay from genesis.

[CLR:cardano.node.ChainDB:Info:5] [2022-07-06 04:30:21.72 UTC] Started opening Ledger DB
[CLR:cardano.node.ChainDB:Error:5] [2022-07-06 04:30:22.72 UTC] Invalid snapshot DiskSnapshot {dsNumber = 62643006, dsSuffix = Nothing} InitFailureRead (ReadFailed (DeserialiseFailure 31562794 "Decoding TxIx: too many bytes."))
[CLR:cardano.node.ChainDB:Error:5] [2022-07-06 04:30:23.69 UTC] Invalid snapshot DiskSnapshot {dsNumber = 62642581, dsSuffix = Nothing} InitFailureRead (ReadFailed (DeserialiseFailure 31562794 "Decoding TxIx: too many bytes."))
[CLR:cardano.node.ChainDB:Info:5] [2022-07-06 04:30:23.69 UTC] Replaying ledger from genesis

Steps to reproduce:
1. Run a testnet node
2. Sync to 100%
3. Allow for at least 2 snapshots from the Babbage era (use --snapshot-interval to speed up snapshots). Alonzo snapshots work fine.
4. Restart the node

Expected behavior: Snapshots should be valid. On restart, the node should resume synchronization from the latest snapshot.

System info:
cardano-cli 1.35.0 - linux-x86_64 - ghc-8.10, git rev 9f1d7dc163ee66410d912e48509d6a2300cfa68a
cardano-node 1.35.0 - linux-x86_64 - ghc-8.10, git rev 9f1d7dc163ee66410d912e48509d6a2300cfa68a

Tested on:
Kubuntu 20.04, kernel 5.14.0-1042-oem, 64-bit
Mac OS X 10.15.7 (Build 19H1824), x86_64h

Also reported here: https://github.com/input-output-hk/cardano-node/issues/4128

Resolved by https://github.com/input-output-hk/cardano-ledger/pull/2897

node sync test - node0.json log:
{"app":[],"at":"2022-07-07T06:32:25.55Z","data":{"failure":"InitFailureRead (ReadFailed (DeserialiseFailure 31803211 \"Decoding TxIx: too many bytes.\"))","kind":"TraceLedgerEvent.InvalidSnapshot","snapshot":{"kind":"snapshot"}},"env":"1.35.0:9f1d7","host":"hostname","loc":null,"msg":"","ns":["cardano.node.ChainDB"],"pid":"29128","sev":"Error","thread":"5"}

This is still happening on 1.35.3…

If you just upgraded from 1.34.x or older, it will need to replay from genesis. I just restarted a 1.35.3 node and it is working as expected:

[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:56:14.40 UTC] Opened vol db
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:56:14.40 UTC] Started opening Ledger DB
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:06.51 UTC] Replaying ledger from snapshot at 7441c0d358335a158694d684cc932a1cd765ebdb9a4caf4d40fc3a05b976fb8a at slot 72215009
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:06.53 UTC] Replayed block: slot 72215021 out of 72215764. Progress: 1.59%
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:06.84 UTC] Opened lgr db
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:06.84 UTC] Started initial chain selection
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:07.52 UTC] Pushing ledger state for block b6fe5f8f8cafec4776b520473f852e34f73895716c7ded6adb03798531d06d7b at slot 72215777. Progress: 0.00%
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:21.26 UTC] before next, messages elided = 72215782
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:21.26 UTC] Pushing ledger state for block 9238330333b868a64384421bc2ada6df9c2837d7f3c3a139f2d5a825eba5700e at slot 72225740. Progress: 22.61%
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:21.26 UTC] Pushing ledger state for block 5e1aac7d9bd84b3a8b8b9bba9e4a8e14a2d8ae533feb6f978e1f00f7df413d2a at slot 72225797. Progress: 22.74%
[CLR-:cardano.node.ChainDB:Info:5] [2022-09-22 05:58:21.29 UTC] Pushing ledger state for block d2ac336033e69ad1e045981880ee9ac4986f90bd47f5dde5988360cd16920283 at slot 72225812. Progress: 22.77%

% cardano-node --version
cardano-node 1.35.3 - darwin-x86_64 - ghc-8.10
git rev 950c4e222086fed5ca53564e642434ce9307b0b9
gharchive/issue
2022-07-06T05:02:22
2025-04-01T04:34:35.252284
{ "authors": [ "CarlosLopezDeLara", "andrejpodzimek", "jmalcolea" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/issues/4142", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1008289152
Add missing AsType for current Tx eras

Motivation: trying to deserialize a Tx era with deserialiseFromTextEnvelope requires a parameter of type AsType. Currently we only have AsByronTx and AsShelleyTx; therefore, this PR extends this to all current eras.

@dcoutts Thanks for the comment! I deprecated those patterns instead.

bors r+
bors merge
bors r+
bors r+
gharchive/pull-request
2021-09-27T15:25:18
2025-04-01T04:34:35.254617
{ "authors": [ "Jimbo4350", "koslambrou" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/pull/3253", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1398250110
bench: nixos service fixes & analysis improvements

extend defServiceModule to allow configurable tracing of arguments and return values
fix the cardano-tracer service systemd section
update tx-generator service to cardano-node service changes
add withCardanoTracer option to cardano-node service
fixes and improvements in locli

bors r+
gharchive/pull-request
2022-10-05T19:15:12
2025-04-01T04:34:35.256847
{ "authors": [ "deepfire" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/pull/4509", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
578477403
Document Byron use-cases

Context
See also:
https://cardanodocs.com/technical/explorer/api/
https://input-output-hk.github.io/cardano-explorer/byron/

Decision
We must document relevant use-cases from the current legacy API and show users how to:
Get summary information about an address (id, balance, type, tx ids)
Get the list of blocks (hash, previous hash, next hash, slot id, utc time, size)
Get the list of transactions of a given block (id, utc time, resolved inputs, outputs, height, fees)
Get the total ADA supply

Acceptance Criteria
Development
QA

Opened on the wrong repo. Should have been on cardano-graphql.
gharchive/issue
2020-03-10T10:26:01
2025-04-01T04:34:35.260616
{ "authors": [ "KtorZ" ], "repo": "input-output-hk/cardano-rest", "url": "https://github.com/input-output-hk/cardano-rest/issues/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
232120262
[CSL-443] Make sure that Tx processing works fine

@iperesadin

@iperesadin Txp works fine with the wallet, but something strange happens with smart-generator: nodes process blocks and txs fine, but smart-generator doesn't seem to work. It spams nodes with a single tx. We probably need an issue for checking smart-generator.
gharchive/issue
2017-05-30T01:12:35
2025-04-01T04:34:35.262092
{ "authors": [ "jagajaga" ], "repo": "input-output-hk/cardano-sl", "url": "https://github.com/input-output-hk/cardano-sl/issues/812", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
955749518
Balance endpoint - to be retired

Issue Number: ADP-656

Overview
[x] added endpoint, all Api types, swagger
[x] Api types tested in unit testing and passing
[x] added Server's balanceTransaction function capturing the logic
[x] extended Transaction interface: updateTx
[x] added deserialisation tests in shelley unit tests
[x] added error handling
[x] did a very first version of coin selection with external inputs, to be replaced with ADP-1070
[x] tested several errors in integration testing (already covered tx, wrong format, cannot deserialise) plus a successful rebalance, with wallet-generated CBORs
[x] implemented updateSealedTx in shelley as the implementation of the updateTx interface
[x] added testing for deserialisation/coin selection for Plutus examples

Comments
Based on the PR #2896 branch.

Some of this PR has moved into #2900 and #2906. @paweljakubas, would you please be able to update the title and description accordingly?

@paweljakubas this PR can be closed, right?
gharchive/pull-request
2021-07-29T11:37:32
2025-04-01T04:34:35.265804
{ "authors": [ "Anviking", "paweljakubas", "rvl" ], "repo": "input-output-hk/cardano-wallet", "url": "https://github.com/input-output-hk/cardano-wallet/pull/2782", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
323470382
[DEVOPS-842] hydra CI tests for flow and eslint

This PR adds hydra tests for flow and eslint, which are currently both broken, but I'm opening a PR to make sure the nix integration with hydra is working properly.

Review Checklist:

Basics
[ ] PR is updated to the most recent version of target branch (and there are no conflicts)
[ ] PR has good description that summarizes all changes and shows some screenshots or animated GIFs of important UI changes
[ ] CHANGELOG entry has been added and is linked to the correct PR on GitHub
[ ] Automated tests: All acceptance tests are passing (npm run test)
[ ] Manual tests (minimum tests should cover newly added feature/fix): App works correctly in development build (npm run dev)
[ ] Manual tests (minimum tests should cover newly added feature/fix): App works correctly in production build (npm run package / CI builds)
[ ] There are no flow errors or warnings (npm run flow-test)
[ ] There are no lint errors or warnings (npm run lint)
[ ] Text changes are proofread and approved (Jane Wild)
[ ] There are no missing translations (running npm run manage-translations produces no changes)
[ ] UI changes look good in all themes (Alexander Rukin)
[ ] Storybook works and no stories are broken (npm run storybook)
[ ] In case of npm dependency changes both package-lock.json and yarn.lock files are updated

Code Quality
[ ] Important parts of the code are properly documented and commented
[ ] Code is properly typed with flow
[ ] React components are split-up enough to avoid unnecessary re-rendering
[ ] Any code that only works in Electron is neatly separated from components

Testing
[ ] New feature / change is covered by acceptance tests
[ ] All existing acceptance tests are still up-to-date
[ ] New feature / change is covered by Daedalus Testing scenario
[ ] All existing Daedalus Testing scenarios are still up-to-date

After Review:
[ ] Merge PR
[ ] Delete source branch
[ ] Move ticket to done on the Youtrack board

@disassembler let's wait for an approval from @cleverca22 and for CI to pass, and then merge this beauty! :)
2018-05-16T05:07:24
2025-04-01T04:34:35.274250
{ "authors": [ "disassembler", "nikolaglumac" ], "repo": "input-output-hk/daedalus", "url": "https://github.com/input-output-hk/daedalus/pull/921", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1757227296
feat: isolate network addresses between testnets

Checklist
[x] JIRA - LW-6813
[ ] Proper tests implemented
[ ] Screenshots added

Proposed solution
Previously, the address book content was shared between preview and preprod. This PR isolates the address book content based on the network. It uses Cardano.NetworkMagics instead of Cardano.NetworkId.

Allure report
allure-report-publisher generated a test report!
smokeTests: ✅ test report for 0da2e3c7
passed: 19, failed: 0, skipped: 0, flaky: 0, total: 19, result: ✅

@greatertomi Here are a few issues/inconsistencies on this PR that I was not able to reproduce in the main branch:

1 - Remove an address from the list, then complete the fields to add a new one and press "Enter". Result: the "Address Removed" notification is displayed again at the bottom. Check this video

2 - Remove an address from the list. Click on another address and press "Enter". Result: the "Address Removed" notification is displayed again at the bottom. Changes are not applied. Check this video

3 - Remove an address from the list. Click on another address and press "Edit". On the edit screen, change the name and press "Enter". Result: the "Address Removed" notification is displayed again at the bottom. Changes are not applied. Check this video
gharchive/pull-request
2023-06-14T16:02:22
2025-04-01T04:34:35.284149
{ "authors": [ "gabriela-ponce", "greatertomi" ], "repo": "input-output-hk/lace", "url": "https://github.com/input-output-hk/lace/pull/126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1465196356
Clarification on Tip in Chain-Sync Messages

The server's current chain tip is also passed to the client in a number of the chain-sync messages, such as MsgRollForward. I'm dealing with these messages in Pallas (a Rust library implementing Ouroboros, N2N to an IOG node), and it appears that the tip is one block behind the real tip; more precisely, when following the tip, in a received MsgRollForward(next_block_header, tip) the tip is one block behind next_block_header. I would have thought that tip would be equal to next_block_header, as it is the most recent block. I was just wondering what the logic is here, or if this is a bug?

For example, this is a parsed MsgRollForward message where you can see the tip is one block behind the requested next block header:

rolling forward, header: (77912810,08a844eebeaaf88dc58ea4867d8da29689e5e11121e126180d6aa8f9ab2e292e,8063366), tip: Tip((77912779, 6c7f6ad4cba4c89960aed95c54086185161457d5c6de76de526cf1e6b54a3075), 8063365)

Thanks! That makes sense. Some quick related questions:
1) As the client, can we signal that we don't want pipelining to be used in the communication/that we don't want to receive invalid blocks?
2) Alternatively, as the node operator, can we configure the node to not use block diffusion pipelining?
3) Is block diffusion pipelining only used on N2N chain-sync, or on N2C as well?
TIA!

1) As the client, can we signal that we don't want pipelining to be used in the communication/we don't want to receive invalid blocks? No, that is currently not possible in N2N ChainSync, as there isn't really a reason to say "I don't want to adopt blocks as fast as possible."

2) Alternatively, as the node operator, can we configure the node to not use block diffusion pipelining? Right now, we don't offer such a configuration option. Just like before, there isn't really a reason to say "I want my node to diffuse blocks less quickly than possible". But it would be trivial to add if someone found a compelling use case.

3) Is block diffusion pipelining only used on N2N chainsync or on N2C as well? It is only used in N2N ChainSync, as it is an optimization for block diffusion, and N2C is not related to that.

4) If the server sends a MsgRollForward for an invalid block, will it immediately send a MsgRollBackward when it finishes validating the block and realises it is invalid? Exactly.

Hope this helps!

Thanks for the quick response! Just for some context, TxPipe's community tools like Oura and Scrolls pull blocks from either an N2C source (local node) or an N2N source, and then process them without validation (because being able to validate them would mean having full-node capabilities). This was fine before pipelining, because even though you don't validate the blocks, you are receiving them from a trusted source which has validated them; but now, with pipelining, even an honest node can forward invalid blocks, so that assumption is no longer true for N2N. It's not the end of the world, because using an N2C source is quicker, safer, and just generally more suitable anyway, but it was nice that N2N meant you didn't really need to run a node; you could just use IOG's relay, for example. And if the client can't specify that they don't want to receive pipelined blocks over N2N, then maybe they just wait a moment after receiving a block to see if a MsgRollBackward arrives because the block was deemed invalid by the server.
gharchive/issue
2022-11-26T16:21:45
2025-04-01T04:34:35.290643
{ "authors": [ "amesgen", "jmhrpr" ], "repo": "input-output-hk/ouroboros-network", "url": "https://github.com/input-output-hk/ouroboros-network/issues/4190", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1292771441
Test promptness of tentative followers Description Closes CAD-4195 See the module description for motivation/approach. Checklist Branch [x] Commit sequence broadly makes sense [x] Commits have useful messages [x] New tests are added if needed and existing tests are updated [ ] If this branch changes Consensus and has any consequences for downstream repositories or end users, said changes must be documented in interface-CHANGELOG.md [ ] If this branch changes Network and has any consequences for downstream repositories or end users, said changes must be documented in interface-CHANGELOG.md [ ] If serialization changes, user-facing consequences (e.g. replay from genesis) are confirmed to be intentional. Pull Request [ ] Self-reviewed the diff [x] Useful pull request description at least containing the following information: What does this PR change? Why these changes were needed? How does this affect downstream repositories and/or end-users? Which ticket does this PR close (if any)? If it does, is it linked? [ ] Reviewer requested bors merge
gharchive/pull-request
2022-07-04T07:51:05
2025-04-01T04:34:35.296254
{ "authors": [ "amesgen" ], "repo": "input-output-hk/ouroboros-network", "url": "https://github.com/input-output-hk/ouroboros-network/pull/3857", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
432418479
More serialisation code @dcoutts , I'm not sure if I'm abusing CBOR. Could you take a quick look at my definitions for ChainHash to check if they are okay? I'm basically just treating it like binary or whatever, not really generating something "structured". I don't know if that's bold or not. Uh, I think I actually already know it's wrong; I need to wrap it in a CBOR list? I guess I got this wrong in encodeChainSummary also. Yup, will fix. @mrBliss ok, added a roundtrip property (which failed), and fixed the encoders and decoders. I think this is now good to go.
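As an aside, the list-wrapping fix and the roundtrip property can be illustrated in a few lines; the sketch below uses Python's cbor2 package and made-up tag values (0 for a genesis hash, 1 for a block hash), purely to show the shape of the encoding, not the actual Haskell encoders.

```python
import cbor2

def encode_chain_hash(h):
    # Wrap in a CBOR list instead of emitting raw bytes: [0] for the
    # genesis hash, [1, h] for a block hash.
    return cbor2.dumps([0] if h is None else [1, h])

def decode_chain_hash(data):
    decoded = cbor2.loads(data)
    return None if decoded[0] == 0 else decoded[1]

# The kind of roundtrip property that caught the bug: decode . encode == id
for value in (None, b"\x01" * 32):
    assert decode_chain_hash(encode_chain_hash(value)) == value
```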
gharchive/pull-request
2019-04-12T07:07:02
2025-04-01T04:34:35.298145
{ "authors": [ "edsko" ], "repo": "input-output-hk/ouroboros-network", "url": "https://github.com/input-output-hk/ouroboros-network/pull/441", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
444618032
byron proxy script generator helper I made these scripts to make it easier to launch byron proxy with the right configs for mainnet, staging and testnet. I do not think these should permanently live here, but for now, this makes it easier to test locally for devops/devs. This is the first step to integrating into iohk-ops to deploy a continuously running proxy and chain validator. @angerman could use a confirmation that moving nix-tools into the let block doesn't break anything, although I assume if CI passes, it's probably fine. spoke with @angerman and having byronProxyScripts in default.nix was causing infinite recursion because release.nix was using it for nix-tools package set. Simplest fix is to move default.nix -> nix/haskell-packages.nix and simplify default.nix to import that to get the byronProxy executable. DevOps team discussed, and we think systemd service configuration for nixos should live in the repo that's building it so we can run nixos tests in the future and know when a change breaks our service configuration. Eventually, this will go to cardano-shell when that's able to launch the proxy. If anyone wants to launch byron proxy on NixOS: https://github.com/disassembler/network/commit/7507d3dd330213fc76c6c3b1c365afaef927cd13 shows how it can be done. associated tests for the service: https://hydra.iohk.io/build/819073/download/1/log.html bors r+
gharchive/pull-request
2019-05-15T19:57:59
2025-04-01T04:34:35.301784
{ "authors": [ "disassembler" ], "repo": "input-output-hk/ouroboros-network", "url": "https://github.com/input-output-hk/ouroboros-network/pull/516", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
450227982
EUTXO spec: misc editing up to EUTXO-2 Since Kenneth is going to be merging 2 and 3, I've stopped before I got there. As before, I just edited as I saw fit going along, feel free to take whichever changes you like. I added quite a lot of comments inline. I'm sceptical of the value of the whole \sigma thing. I think we should just say that the interface to the script layer is Script -> Script -> Script -> Tx -> Bool, so it clearly gets the tx, but the question of how to encode it is the script layer's job. This is less general, but I don't think the generality is buying us much, and it makes it significantly more complicated. e.g. we can't just say "it's deterministic", we have to say "it's deterministic for appropriate choices of \sigma". We could have a section where we discuss the design space, and say "you can think of having a state summarisation function, here's why we picked 'just the current tx'". At a high level I think the structure is a bit muddled. In particular: I think we should keep discussion of operational matters or things specifically relating to Cardano to their own section. The abstract presentation of the ledger rules is great for making it easy to understand the logic of what's going on, but it's interspersed with other content at the moment. Similarly, I think we should put a discussion of what the script layer actually does in a separate section. Then we can say "they're PLC programs, you stick them together and see if there's an error". I think we should structure the description as the description of a ledger layer given a "script layer" with a particular signature. We need to revisit this interface in each section because the type of e.g. the validation function changes. I think I'm going to reject this in its entirety because I was working on it anyway (but leave the branch here for the time being so I can check back!). However I have taken your comments on board. I've got rid of the abstraction over sigma and just said what we actually use (@mchakravarty suggested this as well) and cut down on some of the other verbiage as well. I'll commit that version soon. I'm in two minds about how to structure this. Interspersing specification and explanation is a bit messy, since once you understand what's going on you probably just want to read the specification and ignore the explanation. However, we need some explanation somewhere, and I think it's probably best to have it in the same document. As you say, shifting some of it to separate sections might be sensible. Let's revisit this when I have a PR for the current version (in a day or so). Okay.
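For readers skimming this review, the proposed interface (Script -> Script -> Script -> Tx -> Bool) can be sketched as below; the types are placeholders and this is only a model of the boundary being discussed, not code from the spec.

```python
from dataclasses import dataclass
from typing import Callable

Script = bytes            # opaque to the ledger layer

@dataclass
class Tx:                 # stand-in for the transaction type
    inputs: list
    outputs: list

# The one entry point the ledger layer needs from the script layer:
Validate = Callable[[Script, Script, Script, Tx], bool]

def spend_allowed(validate: Validate, validator: Script, datum: Script,
                  redeemer: Script, tx: Tx) -> bool:
    # "Just the current tx" replaces the more general state-summarisation
    # function sigma that the review argues against.
    return validate(validator, datum, redeemer, tx)
```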
gharchive/pull-request
2019-05-30T10:02:17
2025-04-01T04:34:35.306692
{ "authors": [ "kwxm", "michaelpj" ], "repo": "input-output-hk/plutus", "url": "https://github.com/input-output-hk/plutus/pull/1067", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1599458228
PLT-1539: Add some budget tests related to Foldable and Bool Currently anyExpensive and anyCheap have the same cost. In the next PR I'll make some changes to PlutusTx.Foldable, and you'll see the differences it makes on these test cases. Hope it's not about any getting inlined? No it's not about anything getting inlined :smile:
gharchive/pull-request
2023-02-25T00:57:34
2025-04-01T04:34:35.308412
{ "authors": [ "zliu41" ], "repo": "input-output-hk/plutus", "url": "https://github.com/input-output-hk/plutus/pull/5167", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
438207542
Revert #645 It broke transaction handling in jormungandr because it uses apply_block to test whether transactions can be applied, but apply_block wants to update the chain_length / block date. That is fine, though, since the newly created state is thrown away. The transaction validation is done as follows (sketched below):

- take what is believed to be the tip of the blockchain;
- apply the transaction;
- throw away the newly created state.
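The validate-by-applying pattern reads, in pseudocode-ish Python, roughly as follows; the state handling and apply_transaction are placeholders, not the actual chain crate API.

```python
import copy

def transaction_is_valid(tip_state, tx, apply_transaction):
    candidate = copy.deepcopy(tip_state)   # what is believed to be the tip
    try:
        apply_transaction(candidate, tx)   # try to apply the transaction
    except Exception:
        return False
    return True                            # the new state is thrown away
```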
gharchive/pull-request
2019-04-29T08:42:44
2025-04-01T04:34:35.310010
{ "authors": [ "NicolasDP", "edolstra" ], "repo": "input-output-hk/rust-cardano", "url": "https://github.com/input-output-hk/rust-cardano/pull/652", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
935026976
Event attendees don't get email notification I am creating test events with --who 'user@gmail.com'. The user does see an invite from within the calendar app, but strangely enough never gets an email notification that they've been invited to an event. Is this the intended/expected behavior? Whenever a calendar item is created in Google Calendar web interface with an attendee, that person is notified by email. Is there any way to trigger the notification from the CLI? Figured it out. Patch below that causes email invites to be sent; perhaps this should be added as a command-line switch?

```diff
--- gcal.py.bak 2021-07-01 12:50:04.814334100 -0400
+++ gcal.py     2021-07-01 12:59:23.631646400 -0400
@@ -1371,7 +1371,7 @@
         event = self._add_reminders(event, reminders)
         events = self.get_cal_service().events()
-        request = events.insert(calendarId=self.cals[0]['id'], body=event)
+        request = events.insert(calendarId=self.cals[0]['id'], body=event,sendUpdates="all")
         new_event = self._retry_with_backoff(request)

         if self.details.get('url'):
@@ -1606,7 +1606,8 @@
                 .events()
                 .insert(
                     calendarId=self.cals[0]['id'],
-                    body=event
+                    body=event,
+                    sendUpdates="all"
                 )
             )
             hlink = new_event.get('htmlLink')
@@ -1625,7 +1626,8 @@
                 .events()
                 .insert(
                     calendarId=self.cals[0]['id'],
-                    body=event
+                    body=event,
+                    sendUpdates="all"
                 )
             )
             hlink = new_event.get('htmlLink')
```

Worked for me, cloned the source and made your edit and emails send now. Tyvm! @dbarnett are you open to incorporating this into the tool? I can play around with argparse to make an option that enables sendUpdates and submit it as a P.R. if you're open to it (and not planning to do something along the lines already). Sure, if you can send a PR I can review. Do you know if there's any downside or what the other sendUpdates options do? I've been using it for a while, when I remember to patch gcalcli at least, and it seems to work fine. I imagine most people actually expect a calendar invite to actually send an invite to the invitees as a default option. So far I haven't discovered any downside, particularly if it's an end-user option whether to send invites or not. I haven't tested it out but I assume the sendUpdates options are either to send a notification to everyone on the invite (all) and then there must be some other option that only sends notifications to people who are added or dropped from a previously existing invite, although I don't know how that part would actually intersect with how gcalcli is modifying (?) existing invites.
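For anyone wiring this up outside gcalcli: sendUpdates is a standard parameter of the Calendar API v3 events.insert method (accepted values are "all", "externalOnly" and "none"). A minimal standalone sketch with google-api-python-client, with credentials setup elided:

```python
from googleapiclient.discovery import build

def insert_event_with_invites(credentials, calendar_id, event_body):
    service = build("calendar", "v3", credentials=credentials)
    request = service.events().insert(
        calendarId=calendar_id,
        body=event_body,
        sendUpdates="all",  # email the attendees, as the patch above does
    )
    return request.execute()
```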
gharchive/issue
2021-07-01T16:40:13
2025-04-01T04:34:35.320706
{ "authors": [ "ajkessel", "dbarnett", "ryaneggz" ], "repo": "insanum/gcalcli", "url": "https://github.com/insanum/gcalcli/issues/601", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2550697598
Change dial & set applications list as default page Hello there, I want to know if it's possible to change the default dial for applications, and if it's possible to set the applications page as the main page without opening the "folder". If not, you could see this as a feature request. For example, the default dial could be implemented in the settings folder :D Hi, there's a folder dedicated to applications. But if what you want is a Loupedeck page containing a button for each application, you'll have to build it yourself. Only folders can be built dynamically by the plugin. I'm not sure I understand your request for a default dial for applications. Can you explain your need?
gharchive/issue
2024-09-26T14:05:14
2025-04-01T04:34:35.323961
{ "authors": [ "NicoMinza", "insideGen" ], "repo": "insideGen/Loupedeck-AudioControl-OpenPlugin", "url": "https://github.com/insideGen/Loupedeck-AudioControl-OpenPlugin/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1927394708
Update template module and development guide Update module template to select the right path for module.yml in run.py: os.path.join(os.path.dirname(__file__), 'module.yml'). Update development guide so the module would work (add environment for ZMQ sockets, deploy jaeger, etc.). See #503 for additional issue.
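Spelled out, the run.py fix amounts to resolving module.yml relative to the module file itself rather than the working directory; a minimal sketch (the loader function here is illustrative, not the template's actual code):

```python
import os

MODULE_CONFIG = os.path.join(os.path.dirname(__file__), "module.yml")

def load_module_config(path=MODULE_CONFIG):
    # Works no matter which directory the module is launched from.
    with open(path) as f:
        return f.read()
```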
gharchive/issue
2023-10-05T05:13:43
2025-04-01T04:34:35.325234
{ "authors": [ "bwsw", "tomskikh" ], "repo": "insight-platform/Savant", "url": "https://github.com/insight-platform/Savant/issues/475", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
295528631
Ability to edit disabled MaskedInput If MaskedInput is rendered with the disabled='true' prop, there is, however, a back-door way to edit the text. To reproduce: create some text that matches the input mask; copy it to the clipboard; click on the input box (left corner) and paste it via the Ctrl+V shortcut. The text will be inserted. How can I fix this? '--' A fix is awaiting review in PR: "Preventing paste in disabled state #124"
gharchive/issue
2018-02-08T14:19:12
2025-04-01T04:34:35.328574
{ "authors": [ "SergioDiniz", "alexf2" ], "repo": "insin/react-maskedinput", "url": "https://github.com/insin/react-maskedinput/issues/125", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1536224772
Add option to unhide number of replies I'm not sure if it's just me, but literally just now, Twitter for web browsers seems to have removed the reply counter for tweets (the little speech bubble icon with a number next to it). Not only does this now require one extra click from the user to comment on a tweet, it has also shifted all the other icons' positions. Hopefully a setting can be added to bring this back to how it's supposed to be. Twitter moved the views counter, replies are where it used to be so they're getting hidden (#190) This will be fixed in the next version
gharchive/issue
2023-01-17T11:30:45
2025-04-01T04:34:35.330068
{ "authors": [ "gensakudan", "insin" ], "repo": "insin/tweak-new-twitter", "url": "https://github.com/insin/tweak-new-twitter/issues/191", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2492874334
ig: Events from the host should be shown by default #1741 introduced the --host flag to be able to also show events coming from the host. The current behavior is:

```bash
# show only events from containers
$ ig run foo

# show all events (including containers)
$ ig run foo --host

# show events from specific containers
$ ig run foo --containername mycontainer
```

We discussed the UX of it on https://github.com/inspektor-gadget/inspektor-gadget/pull/1741#issuecomment-1599176460, but I now feel we got it wrong. IMO it should work as:

```bash
# show all events, containers, host, etc.
$ ig run foo

# show events from specific containers
$ ig run foo --containername mycontainer
```

And then, I'll propose to handle other cases with the -F flag, implementing an offload mechanism to filter those events in eBPF later on:

```bash
# show events not coming from a container
$ ig run foo -F runtime.ContainerID==""

# show events coming only from containers
$ ig run foo -F runtime.ContainerID!=""
```

The host flag would be removed. /cc @blixtra

I agree that not showing host info by default is confusing. But maybe instead of `$ ig run foo -F runtime.ContainerID==""`, invert the --host so that it would become `$ ig run foo --host-only`. `-F runtime.ContainerID==""` somehow feels prone to error - what if enrichment fails (somehow) and container ids are empty, but events are still coming from containers? Even if the filter expression would be converted to something that can be offloaded for a host/no-host decision to eBPF (we're just checking for mntnsid == 1, right?), this might be confusing.

`$ ig run foo -F runtime.ContainerID==""` / `$ ig run foo -F runtime.ContainerID!=""` - for me this is not very user friendly. I'd prefer to have dedicated flags for this. How often will somebody need to filter by events coming only from containers or only from the host? Is it that common? Let's ping @blixtra on this one, because one idea was to unify all the filtering having only --filter, i.e. possibly removing --containername, --podname, etc.

As a user, events from all containers but not from the host is something we need. As someone who is also building gadgets, it aligns with gadget_should_discard_mntns_id(). Perhaps a --container-only flag will work:

```bash
# prints all events
$ ig run foo

# prints events from all containers (current behavior when using ig run foo)
$ ig run foo --containers-only

# we don't provide any way to print "host only" events
```

Any opinions on this?

Do you mean this?

| Command | Include mount namespaces from host (/host/proc/1/ns/mnt) | Include mount namespaces from recognised containers | Include mount namespaces from unknown sources: 1. systemd services with separate mountns 2. host under the minikube container 3. bubblewrap, systemd-nspawn 4. etc. |
|---|---|---|---|
| ig run foo | ✅ | ✅ | ✅ |
| ig run foo --containers-only | ❌ | ✅ | ❌ |

Exactly. We can say that "ig run foo" shows all events (disabling filtering by mount namespace completely)
gharchive/issue
2024-08-28T19:50:28
2025-04-01T04:34:35.344106
{ "authors": [ "alban", "blanquicet", "dorser", "flyth", "mauriciovasquezbernal" ], "repo": "inspektor-gadget/inspektor-gadget", "url": "https://github.com/inspektor-gadget/inspektor-gadget/issues/3419", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
733643814
Added code to reading-files Please fill in this pull request template before submitting

1. This pull request resolves #19
2. Description: Added code to the file reading-files to read a file and display the contents
3. Fill in checklist by marking [x]

- [x] I've read the CONTRIBUTING.md
- [x] I was assigned to this issue
- [x] My code is formatted using required linting
- [x] I've run my code locally and checked it works

@funbeedev Please check my PR. Sorry, I have added the integers file to this PR by mistake; I am issuing another commit which reverts the changes. @funbeedev Review the changes to the reading-files.py file. I am opening a new PR with fewer commits; this PR has become clumsy. Replaced by PR #69.
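For reference, the kind of snippet such a reading-files lesson typically contains (the filename here is just an example):

```python
# Open a file, read its contents, and display them.
with open("example.txt", "r") as f:
    contents = f.read()

print(contents)
```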
gharchive/pull-request
2020-10-31T05:58:43
2025-04-01T04:34:35.372414
{ "authors": [ "funbeedev", "raghuchandan1" ], "repo": "inspirezonetech/TeachMePythonLikeIm5", "url": "https://github.com/inspirezonetech/TeachMePythonLikeIm5/pull/61", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
770437365
modified k8s master - [closed] In GitLab by @guyingyan on Dec 12, 2018, 08:33 Merges 20181212 -> dev In GitLab by @wknet123 on Nov 13, 2019, 11:33 The test coverage for backend is 23.0% 39.31% , check console log In GitLab by @wknet123 on Nov 13, 2019, 11:33 closed
gharchive/issue
2020-12-17T23:03:18
2025-04-01T04:34:35.374855
{ "authors": [ "inspuradmin" ], "repo": "inspursoft/board", "url": "https://github.com/inspursoft/board/issues/1511", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
654897085
Removed unnecessary code! @Instafluff Hey there 👋 Hope you are good :D Here are some updates. ~ Daan Thanks, Daan!!
gharchive/pull-request
2020-07-10T16:29:42
2025-04-01T04:34:35.376835
{ "authors": [ "daanbreur", "instafluff" ], "repo": "instafluff/ChatBlocks", "url": "https://github.com/instafluff/ChatBlocks/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2176839185
Add the ability to list and get basic information about skills This change adds a new subcommand group "skills" and its initial children "list" and "info". The intent of this is what I think "lab list" should have been: to inform the user about the kinds of skills available in the configured taxonomy. A user can now get a list of known skills by running lab skills list, then get some basic information about a skill by running lab skills info <skill_name>. The initial list is pretty basic, but we can work on formatting if that is desired. Examples:

new subcommand group help

```console
$ lab skills --help
Usage: lab skills [OPTIONS] COMMAND [ARGS]...

  Get information about the skills in the taxonomy.

Options:
  --help  Show this message and exit.

Commands:
  info  Get information about a specific skill.
  list  List all skills available in taxonomy.
```

listing all of the skills in the taxonomy repo

```console
$ lab skills list
Showing skills in taxonomy_path='/home/jake/Programming/rh_llm/taxonomy':
abstract
action_items
...
verb
word_frequency
```

getting basic info about a specific skill

```console
$ lab skills info tldr
Skill: tldr
Contributor: jacobcallahan
Description: Create easily consumable short descriptions of how to use a command line tool.
Path: /home/jake/Programming/rh_llm/taxonomy/compositional_skills/writing/freeform/technical/tldr/qna.yaml
```

@nathan-weinberg why was this closed without explanation? @JacobCallahan It was a GitHub bug having to do with the open-sourcing; no action was taken on my part, feel free to reopen. @nathan-weinberg ahhh ok, good to hear. However, I do not have the permission to re-open a PR in this repo. @JacobCallahan yes, it's a bit wonky. @n1hility can you share the guidance we had around how to handle this? Not sure where it is myself.
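For context, a subcommand group like this is usually built with Click, which the lab CLI uses; the sketch below is illustrative only (it is not the code from this PR, and find_skills is a hypothetical helper that walks the taxonomy for qna.yaml files).

```python
import click

@click.group()
def skills():
    """Get information about the skills in the taxonomy."""

@skills.command("list")
@click.option("--taxonomy-path", default="taxonomy")
def list_skills(taxonomy_path):
    """List all skills available in taxonomy."""
    for name in sorted(find_skills(taxonomy_path)):
        click.echo(name)

@skills.command("info")
@click.argument("skill_name")
@click.option("--taxonomy-path", default="taxonomy")
def skill_info(skill_name, taxonomy_path):
    """Get information about a specific skill."""
    meta = find_skills(taxonomy_path)[skill_name]
    click.echo(f"Skill: {skill_name}")
    click.echo(f"Contributor: {meta['contributor']}")
    click.echo(f"Path: {meta['path']}")
```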
gharchive/pull-request
2024-03-08T21:58:17
2025-04-01T04:34:35.401946
{ "authors": [ "JacobCallahan", "nathan-weinberg" ], "repo": "instructlab/instructlab", "url": "https://github.com/instructlab/instructlab/pull/455", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
459520795
Headers for sketchup files Hey! Thanks for sharing. Can you point me to how to create a header for sketchup (.skp) files? Thanks My general blackbox method is to take a collection of the files in question (*.skp), examine the first 8-16 bytes of each file, then manually look for common patterns. Here is an example with pdf files.

```bash
$ find . -iname "*.pdf" -type f 2>/dev/null | while read filename; do dd if="${filename}" bs=1 count=8 2>/dev/null | xxd; done
```

The output looks like:

```
00000000: 2550 4446 2d31 2e33  %PDF-1.3
00000000: 2550 4446 2d31 2e34  %PDF-1.4
00000000: 2550 4446 2d31 2e36  %PDF-1.6
```

Based on that, I could add a signature to the scalpel config like:

```
pdf y 2500000 \x25\x50\x44\x46\x2d\x31\x2e
```

It appears that scalpel after 1.90 supports regex in the config, so something like this might work:

```
pdf y 2500000 %PDF-1.[0-9]
```

Note that 2500000 is an arbitrary number. You may need to increase that value if the files you want to carve are larger than this. Additionally, it's possible someone online has documented the file format in more detail; you might be able to reverse engineer an application which parses/loads .skp files to see what bytes they examine and how they are processed. This assumes that the application does a sanity check to verify the file has a valid format. Lastly, please submit a pull request if you derive a robust signature. Best wishes and good luck on your file recovery! Thank you for the detailed explanation! I really appreciate it. Best regards!
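The same survey translates directly to Python if dd/xxd aren't handy; this is just the method above re-expressed, nothing SketchUp-specific:

```python
import glob

# Print the first 16 bytes of every .skp file and eyeball the common prefix.
for path in glob.glob("**/*.skp", recursive=True):
    with open(path, "rb") as f:
        head = f.read(16)
    print(f"{path}: {head.hex(' ')}  {head!r}")
```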
gharchive/issue
2019-06-22T23:48:26
2025-04-01T04:34:35.416132
{ "authors": [ "TCMiranda", "int0x80" ], "repo": "int0x80/anti-forensics", "url": "https://github.com/int0x80/anti-forensics/issues/2", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
645519464
Question: Is it possible to generate swagger-ui.yml from java source I use your plugin for generating source code from swagger.yml - no problems. I read the documentation, but I do not understand whether it is possible to generate the swagger.yml file from the source code. So my question: is it possible with this plugin? No. You can generate YAML from code using language-specific tools, e.g. https://github.com/springfox/springfox Thanks for the fast answer.
gharchive/issue
2020-06-25T12:25:12
2025-04-01T04:34:35.418093
{ "authors": [ "ehmkah", "int128" ], "repo": "int128/gradle-swagger-generator-plugin", "url": "https://github.com/int128/gradle-swagger-generator-plugin/issues/202", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2178388495
🛑 Shopintake is down In 75c6d21, Shopintake (https://www.shopintake.com) was down: HTTP code: 403 Response time: 442 ms Resolved: Shopintake is back up in 4ab6721 after 24 minutes.
gharchive/issue
2024-03-11T07:12:48
2025-04-01T04:34:35.420445
{ "authors": [ "intakefoods" ], "repo": "intakefoods/status.intakefoods.kr", "url": "https://github.com/intakefoods/status.intakefoods.kr/issues/1867", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2100402951
🛑 Shopintake is down In 7295793, Shopintake (https://www.shopintake.com) was down: HTTP code: 403 Response time: 426 ms Resolved: Shopintake is back up in db18bc7 after 8 minutes.
gharchive/issue
2024-01-25T13:26:21
2025-04-01T04:34:35.422744
{ "authors": [ "intakefoods" ], "repo": "intakefoods/status.intakefoods.kr", "url": "https://github.com/intakefoods/status.intakefoods.kr/issues/462", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
541011178
panels of type gauge and bar gauge lose their settings When I try to import dashboards via the CR with panels of type gauge or bar gauge, the settings for those panels get lost. I have tried using the JSON and the URL form of the GrafanaDashboard type; both have the same behavior. I can copy the exact same JSON directly into the same Grafana dashboard import feature and the panels import fine. This only seems to happen when the board gets imported via the Operator, and I can't for the life of me figure out what is causing it. I can only assume at this point, since I can copy-paste directly in, that the operator has to be doing something or importing the board in such a way that something somewhere is going funky.

```json
{
  "__inputs": [
    {
      "name": "DS_OPENSHIFT_PROMETHEUS",
      "label": "openshift-prometheus",
      "description": "",
      "type": "datasource",
      "pluginId": "prometheus",
      "pluginName": "Prometheus"
    }
  ],
  "__requires": [
    { "type": "panel", "id": "gauge", "name": "Gauge", "version": "" },
    { "type": "grafana", "id": "grafana", "name": "Grafana", "version": "6.5.2" },
    { "type": "datasource", "id": "prometheus", "name": "Prometheus", "version": "1.0.0" }
  ],
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "iteration": 1576850855482,
  "links": [],
  "panels": [
    {
      "datasource": "${DS_OPENSHIFT_PROMETHEUS}",
      "gridPos": { "h": 5, "w": 24, "x": 0, "y": 0 },
      "id": 2,
      "options": {
        "fieldOptions": {
          "calcs": [ "lastNotNull" ],
          "defaults": {
            "links": [],
            "mappings": [
              { "from": "", "id": 1, "operator": "", "text": "", "to": "", "type": 1, "value": "" }
            ],
            "max": 1,
            "min": 0,
            "thresholds": [
              { "color": "green", "value": null },
              { "color": "red", "value": 0.85 }
            ],
            "title": "",
            "unit": "percentunit"
          },
          "override": {},
          "values": false
        },
        "orientation": "horizontal",
        "showThresholdLabels": false,
        "showThresholdMarkers": false
      },
      "pluginVersion": "6.5.2",
      "targets": [
        {
          "expr": "(sum by (hostname) ( heketi_device_used_bytes {job = \"$job\"} )) / (sum by (hostname) ( heketi_device_size_bytes {job = \"$job\"} ))",
          "format": "time_series",
          "instant": false,
          "intervalFactor": 1,
          "legendFormat": "{{hostname}}",
          "refId": "A"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Storage Devices Capacity Used",
      "type": "gauge"
    }
  ],
  "schemaVersion": 21,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": [
      {
        "current": { "text": "openshift-prometheus", "value": "openshift-prometheus" },
        "hide": 0,
        "includeAll": false,
        "label": "Datasource",
        "multi": false,
        "name": "DS_OPENSHIFT_PROMETHEUS",
        "options": [],
        "query": "prometheus",
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "type": "datasource"
      },
      {
        "allValue": null,
        "current": {},
        "datasource": "$DS_OPENSHIFT_PROMETHEUS",
        "definition": "label_values(heketi_up,job)",
        "hide": 0,
        "includeAll": false,
        "label": "Job",
        "multi": false,
        "name": "job",
        "options": [],
        "query": "label_values(heketi_up,job)",
        "refresh": 2,
        "regex": "",
        "skipUrlSync": false,
        "sort": 1,
        "tagValuesQuery": "",
        "tags": [],
        "tagsQuery": "",
        "type": "query",
        "useTags": false
      }
    ]
  },
  "time": { "from": "now-24h", "to": "now" },
  "timepicker": {
    "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ]
  },
  "timezone": "",
  "title": "TEST"
}
```

I'm assuming it has to do with the grafana-sdk library that we use to parse the dashboard json. I'll investigate further. Yeah.
I tried to trace through the operator code to see how the dashboard gets imported and whether I could isolate if it was something the operator was doing or Grafana, but I lost the thread when it called the grafana sdk. @itewk The problem lies somewhere with the grafana sdk. I'm currently working on removing it and just sending the raw json (after some validation). That seems to be working, and the gauge in your example renders exactly the same operator-imported vs. directly imported. @itewk This should be fixed now in this PR: #95 I've created a tag with those changes, import-datasources: https://quay.io/repository/integreatly/grafana-operator?tab=tags Would you mind giving it a try? Please note that since your dashboard has inputs (depends on a datasource to be present) you will have to specify it in the dashboard CR with:

```yaml
spec:
  datasources:
    - inputName: "DS_OPENSHIFT_PROMETHEUS"
      datasourceName: <name of the corresponding datasource in your instance>
```

Docs for this are in the works: https://github.com/integr8ly/grafana-operator/pull/95/files#diff-87206ddedd4460f8733d5d3074b725adR90 @pb82 tested the import-datasources image tag and it works great, fixes this issue. Thank you. @pb82 I also confirm that the import-datasources image tag resolves problems that were caused by the use of the grafana sdk. Do you know when this version will be released, please? (merged into master, github release and latest image tag) @yogeek I'll aim for a release around the 10th of January. Unfortunately there is currently no automation for creating the latest tag. Travis currently only creates tags for releases and branches. That's probably something we should automate. I'll update latest manually today. The latest tag is up to date with master now.
gharchive/issue
2019-12-20T14:04:26
2025-04-01T04:34:35.432543
{ "authors": [ "itewk", "pb82", "yogeek" ], "repo": "integr8ly/grafana-operator", "url": "https://github.com/integr8ly/grafana-operator/issues/99", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
56333568
fix #389, fix fetch thread bug PR summary:

- FetchThread sleeps rather than exits when there are no more messages
- update KafkaStreamProducerSpec to cover the case

+1
gharchive/pull-request
2015-02-03T04:48:59
2025-04-01T04:34:35.449910
{ "authors": [ "clockfly", "manuzhang" ], "repo": "intel-hadoop/gearpump", "url": "https://github.com/intel-hadoop/gearpump/pull/390", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
70798900
hm11: Initial implementation This module implements support for the Grove BLE (Bluetooth Low Energy) device. It is implemented as a UART device accepting an "AT" command set. Signed-off-by: Jon Trulson jtrulson@ics.com Signed-off-by: Zion Orent zorent@ics.com Thanks, merged & closed!
gharchive/pull-request
2015-04-24T21:29:14
2025-04-01T04:34:35.458992
{ "authors": [ "Jon-ICS", "Propanu" ], "repo": "intel-iot-devkit/upm", "url": "https://github.com/intel-iot-devkit/upm/pull/174", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1654303104
docker: Use latest tag and upgrade packages to fix CVE-2023-23916 A CVE issue (https://access.redhat.com/security/cve/CVE-2023-23916) was found during the security scan. To fix this issue, the latest tag is used and the packages are upgraded to the latest version. Signed-off-by: Manish Regmi manish.regmi@intel.com @mregmi, please add the sign-off to the commit. Done Build Log looks good: intel-dgpu-on-premise-build-mode.log This PR looks good to me. Thanks!
gharchive/pull-request
2023-04-04T17:42:05
2025-04-01T04:34:35.497308
{ "authors": [ "hershpa", "mregmi", "uMartinXu" ], "repo": "intel/intel-data-center-gpu-driver-for-openshift", "url": "https://github.com/intel/intel-data-center-gpu-driver-for-openshift/pull/28", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2062774740
When building from source, we get quantization errors. Hi, First of all, thanks for this fantastic work. Great job! After building intel-extension-for-transformers from source, we noticed that we get errors when quantizing the model. After the initial Llama 2 7B model is converted to ne_llama_f32.bin (Float 32), the library fails to quantize the model with the error below.

```
quant_model: failed to quantize model from 'runtime_outs/ne_llama_f32.bin'
Traceback (most recent call last):
  File "/home/ubuntu/run.py", line 9, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config, trust_remote_code=True)
  File "/opt/conda/envs/llm_hakan/lib/python3.9/site-packages/intel_extension_for_transformers/transformers/modeling/modeling_auto.py", line 265, in from_pretrained
    model.init(
  File "/opt/conda/envs/llm_hakan/lib/python3.9/site-packages/intel_extension_for_transformers/llm/runtime/graph/__init__.py", line 124, in init
    assert os.path.exists(quant_bin), "Fail to quantize model"
AssertionError: Fail to quantize model
```

What are we missing? Do we need to compile another package as well? Thanks @choochtech, thank you for using Itrex. Is the model you are using llama2-hf-7b from Huggingface? If not, please provide the URL of the corresponding model. Your main branch doesn't work properly. We had to download 1.3 (https://github.com/intel/intel-extension-for-transformers/releases/tag/v1.3) and build. That worked with no issues, Thanks Hi, so happy it worked with no issues. I closed this issue~ Thank you.
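For context, the kind of call that drives this quantization path, reconstructed from the traceback above; the exact config class name and its arguments may differ between releases, and the model id is only an assumption:

```python
from intel_extension_for_transformers.transformers import (
    AutoModelForCausalLM, WeightOnlyQuantConfig)

model_name = "meta-llama/Llama-2-7b-hf"   # assumption: the HF model used
woq_config = WeightOnlyQuantConfig(weight_dtype="int4")

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=woq_config, trust_remote_code=True)
```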
gharchive/issue
2024-01-02T18:12:21
2025-04-01T04:34:35.502497
{ "authors": [ "Zhenzhong1", "choochtech", "intellinjun" ], "repo": "intel/intel-extension-for-transformers", "url": "https://github.com/intel/intel-extension-for-transformers/issues/1105", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2366753645
[RAISE-BP] Add support for tt.broadcast increasing tensor rank #1428 adds support for tt.broadcast operations between tensor types of the same rank. Increasing ranks is not yet supported. Extend the pass to support this flavor of tt.broadcast operations as tt.addptr inputs, e.g.:

```mlir
tt.broadcast %in : tensor<512xf32> -> tensor<2x512xf32>
```

PR open and partially approved.
gharchive/issue
2024-06-21T15:16:25
2025-04-01T04:34:35.505701
{ "authors": [ "mfrancepillois", "victor-eds" ], "repo": "intel/intel-xpu-backend-for-triton", "url": "https://github.com/intel/intel-xpu-backend-for-triton/issues/1429", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2048561423
Missing level-zero-dev in basekit/Dockerfile.ubuntu-22.04 I used the image basekit/Dockerfile.ubuntu-22.04, but while building my application, -lze_loader was not found, because the level-zero-dev package was missing. After installing it, everything works correctly. I can see that the basekit/Dockerfile.ubuntu-20.04 image installs it. Is this a mistake, or did you remove it on purpose, to save space or for some other reason? In the latest images, the level-zero-dev package should be installed by default.
gharchive/issue
2023-12-19T12:27:54
2025-04-01T04:34:35.538533
{ "authors": [ "tingleby", "wozna" ], "repo": "intel/oneapi-containers", "url": "https://github.com/intel/oneapi-containers/issues/65", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
860527640
Refactoring folders /intelci: run set infra_branch=refactoring_folders /intelci: restart set infra_branch=refactoring_folders /intelci: run set infra_branch=refactoring_folders @michael-smirnov general: I think all the files related to conda-build are better moved to a subfolder of scripts rather than placed in the root. For example, they can be moved to scripts/conda-recipe No, we want to build without conda-build for any jobs. These scripts are used for overall testing in public/private CI, without conda-build. @PetrovKP In that case I think it is worth aligning the names of the build scripts on Windows (bld.bat) and Linux (build.sh) if conda-build is not used anymore. And please consider the option to place build files in a script/build subfolder and testing scripts in a script/test subfolder, because the root of the script folder looks like a mess. @PetrovKP maybe it is even better to align with the oneDAL repo and split the script folder into dev and deploy scripts. /intelci: run set infra_branch=refactoring_folders /intelci: run set infra_branch=refactoring_folders /intelci: run set infra_branch=refactoring_folders /intelci: run set infra_branch=refactoring_folders /intelci: restart set infra_branch=refactoring_folders /intelci: run set infra_branch=refactoring_folders /intelci: run set infra_branch=refactoring_folders
gharchive/pull-request
2021-04-17T22:33:03
2025-04-01T04:34:35.556469
{ "authors": [ "PetrovKP", "michael-smirnov" ], "repo": "intel/scikit-learn-intelex", "url": "https://github.com/intel/scikit-learn-intelex/pull/627", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
694159515
/delete command I have made

```python
import json
from urllib.parse import urljoin

import requests


def delete_pipeline(instance_id, pipeline, version, video_analytics_serving, camera):
    """Delete the requested pipeline instance."""
    try:
        status_response = requests.delete(
            urljoin(video_analytics_serving,
                    "/".join([pipeline, str(version), str(instance_id)])),
            timeout=TIMEOUT)
        if status_response.status_code == 200:
            return json.loads(status_response.text)
    except requests.exceptions.RequestException as request_error:
        output(camera, request_error)
    return None
```

but it raises a fault:

```
2020-09-05 19:58:41,040 Exception on /pipelines/persons_detector_no_face/2/2 [DELETE]
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.6/dist-packages/flask_cors/extension.py", line 161, in wrapped_function
    return cors_after_request(app.make_response(f(*args, **kwargs)))
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.6/dist-packages/connexion/decorators/decorator.py", line 49, in wrapper
    return self.api.get_response(response, self.mimetype, request)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/flask_api.py", line 136, in get_response
    return cls._get_response(response, mimetype=mimetype, extra_context={"url": flask.request.url})
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/abstract.py", line 277, in _get_response
    framework_response = cls._response_from_handler(response, mimetype, extra_context)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/abstract.py", line 331, in _response_from_handler
    return cls._build_response(mimetype=mimetype, data=response, extra_context=extra_context)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/flask_api.py", line 173, in _build_response
    data, status_code, serialized_mimetype = cls._prepare_body_and_status_code(data=data, mimetype=mimetype, status_code=status_code, extra_context=extra_context)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/abstract.py", line 403, in _prepare_body_and_status_code
    body, mimetype = cls._serialize_data(data, mimetype)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apis/flask_api.py", line 190, in _serialize_data
    body = cls.jsonifier.dumps(data)
  File "/usr/local/lib/python3.6/dist-packages/connexion/jsonifier.py", line 44, in dumps
    return self.json.dumps(data, **kwargs) + '\n'
  File "/usr/local/lib/python3.6/dist-packages/flask/json/__init__.py", line 211, in dumps
    rv = _json.dumps(obj, **kwargs)
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 201, in encode
    chunks = list(chunks)
  File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode
    o = _default(o)
  File "/usr/local/lib/python3.6/dist-packages/connexion/apps/flask_app.py", line 138, in default
    return json.JSONEncoder.default(self, o)
  File "/usr/local/lib/python3.6/dist-packages/flask/json/__init__.py", line 100, in default
    return _json.JSONEncoder.default(self, o)
  File "/usr/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'State' is not JSON serializable
```

@vtimofeev01 Can you confirm what version of VA serving you are using? I believe this was addressed in the 0.3.1 release - do you see it in the latest? It is, so thank you.
gharchive/issue
2020-09-05T20:04:05
2025-04-01T04:34:35.559723
{ "authors": [ "nnshah1", "vtimofeev01" ], "repo": "intel/video-analytics-serving", "url": "https://github.com/intel/video-analytics-serving/issues/33", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1350992140
feat(auth): auth server api for idp consent screen, update stuff to match openapi spec Changes proposed in this pull request

Implements the following endpoints:

- GET /grant/:id/:nonce
- POST /grant/:id/:nonce/accept
- POST /grant/:id/:nonce/reject

Modifies the following endpoints and moves some of their logic into the previously listed ones:

- GET /interact/:id/:nonce
- GET /interact/:id/:nonce/finish

Context Closes #554.

Checklist

- [X] Related issues linked using fixes #number
- [X] Tests added/updated
- [X] Documentation added
- [ ] Make sure that all checks pass

https://github.com/interledger/rafiki/issues/560 created to reflect an out-of-band discussion about this PR.
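A hypothetical smoke test against the new consent endpoints might look like the sketch below; the base URL, ids and response handling are made up, and the real request/response shapes are defined by the OpenAPI spec this PR tracks.

```python
import requests

BASE = "http://localhost:3006"            # assumption: local auth server
grant_id, nonce = "grant-123", "nonce-abc"

info = requests.get(f"{BASE}/grant/{grant_id}/{nonce}")
accepted = requests.post(f"{BASE}/grant/{grant_id}/{nonce}/accept")
rejected = requests.post(f"{BASE}/grant/{grant_id}/{nonce}/reject")
print(info.status_code, accepted.status_code, rejected.status_code)
```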
gharchive/pull-request
2022-08-25T14:28:10
2025-04-01T04:34:35.600520
{ "authors": [ "njlie" ], "repo": "interledger/rafiki", "url": "https://github.com/interledger/rafiki/pull/553", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
257425004
Compiling is failing any time So I was playing around with this stuff, and it didn't compile once. I installed everything that's needed, but which library am I missing when compiling it? The output:

```
daniel@daniel-8o23dad:~/projects/os/kernel$ RUST_BACKTRACE=1 xargo build --release --target x86_64-unknown-intermezzos-gnu --verbose
+ "rustc" "--print" "sysroot"
+ "rustc" "--print" "target-list"
+ RUSTFLAGS="--sysroot /home/daniel/.xargo"
+ "cargo" "build" "--release" "--target" "x86_64-unknown-intermezzos-gnu" "--verbose"
       Fresh console v0.1.0 (file:///home/daniel/projects/os/kernel/console)
       Fresh byteorder v0.5.3
       Fresh raw-cpuid v2.0.1
       Fresh libc v0.2.16
       Fresh rustc-serialize v0.3.19
       Fresh bitflags v0.7.0
       Fresh rlibc v1.0.0
       Fresh phf_shared v0.7.16
       Fresh num-traits v0.1.36
       Fresh spin v0.4.4
       Fresh keyboard v0.1.0 (file:///home/daniel/projects/os/kernel/keyboard)
   Compiling intermezzos v0.1.0 (file:///home/daniel/projects/os/kernel)
       Fresh rand v0.3.14
       Fresh csv v0.14.7
       Fresh phf v0.7.16
     Running `/home/daniel/projects/os/kernel/target/release/build/intermezzos-9f5c58ede529b3e6/build-script-build`
       Fresh num-complex v0.1.35
       Fresh num-integer v0.1.32
       Fresh lazy_static v0.2.1
       Fresh phf_generator v0.7.16
       Fresh num-bigint v0.1.35
       Fresh num-iter v0.1.32
       Fresh phf_codegen v0.7.16
       Fresh num-rational v0.1.35
       Fresh num v0.1.36
       Fresh serde v0.6.15
       Fresh serde_json v0.6.1
       Fresh x86 v0.8.1
       Fresh pic v0.1.0 (file:///home/daniel/projects/os/kernel/pic)
       Fresh interrupts v0.1.0 (file:///home/daniel/projects/os/kernel/interrupts)
     Running `rustc --crate-name intermezzos src/lib.rs --crate-type lib --emit=dep-info,link -C opt-level=3 -C panic=abort -C metadata=a5f8d0c159026232 -C extra-filename=-a5f8d0c159026232 --out-dir /home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps --target x86_64-unknown-intermezzos-gnu -L dependency=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps -L dependency=/home/daniel/projects/os/kernel/target/release/deps --extern spin=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libspin-2e95bf66a70d82f7.rlib --extern pic=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libpic-e52c5dc844b294ba.rlib --extern rlibc=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/librlibc-3d5ca23970e1763b.rlib --extern keyboard=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libkeyboard-eab5eb9c2612608d.rlib --extern console=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libconsole-898e9b38d65bf2a3.rlib --extern interrupts=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libinterrupts-472aab710c3d4eff.rlib --extern x86=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libx86-40e865de7e1eb5d8.rlib --extern lazy_static=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/liblazy_static-f2c83931040bd2fb.rlib --sysroot /home/daniel/.xargo -L native=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/build/intermezzos-664041d978565111/out -l static=boot`
     Running `rustc --crate-name intermezzos src/main.rs --crate-type bin --emit=dep-info,link -C opt-level=3 -C panic=abort -C metadata=73ba34d58d8c787f -C extra-filename=-73ba34d58d8c787f --out-dir /home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps --target x86_64-unknown-intermezzos-gnu -L dependency=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps -L dependency=/home/daniel/projects/os/kernel/target/release/deps --extern spin=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libspin-2e95bf66a70d82f7.rlib --extern pic=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libpic-e52c5dc844b294ba.rlib --extern rlibc=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/librlibc-3d5ca23970e1763b.rlib --extern keyboard=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libkeyboard-eab5eb9c2612608d.rlib --extern console=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libconsole-898e9b38d65bf2a3.rlib --extern interrupts=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libinterrupts-472aab710c3d4eff.rlib --extern x86=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libx86-40e865de7e1eb5d8.rlib --extern lazy_static=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/liblazy_static-f2c83931040bd2fb.rlib --extern intermezzos=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libintermezzos-a5f8d0c159026232.rlib --sysroot /home/daniel/.xargo -L native=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/build/intermezzos-664041d978565111/out`
warning: unused extern crate
  --> src/main.rs:12:1
   |
12 | extern crate rlibc;
   | ^^^^^^^^^^^^^^^^^^^
   |
   = note: #[warn(unused_extern_crates)] on by default

warning: unused extern crate
  --> src/main.rs:13:1
   |
13 | extern crate spin;
   | ^^^^^^^^^^^^^^^^^^

warning: unused extern crate
  --> src/main.rs:15:1
   |
15 | extern crate console;
   | ^^^^^^^^^^^^^^^^^^^^^

error: linking with `gcc` failed: exit code: 1
  |
  = note: "gcc" "-Wl,--script=layout.ld" "-Wl,--nmagic" "-nostartfiles" "-L" "/home/daniel/.xargo/lib/rustlib/x86_64-unknown-intermezzos-gnu/lib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/intermezzos-73ba34d58d8c787f.0.o" "-o" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/intermezzos-73ba34d58d8c787f" "-Wl,--gc-sections" "-nodefaultlibs" "-L" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps" "-L" "/home/daniel/projects/os/kernel/target/release/deps" "-L" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/build/intermezzos-664041d978565111/out" "-L" "/home/daniel/.xargo/lib/rustlib/x86_64-unknown-intermezzos-gnu/lib" "-Wl,-Bstatic" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/librlibc-3d5ca23970e1763b.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libkeyboard-eab5eb9c2612608d.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/liblazy_static-f2c83931040bd2fb.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libintermezzos-a5f8d0c159026232.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libinterrupts-472aab710c3d4eff.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libspin-2e95bf66a70d82f7.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libpic-e52c5dc844b294ba.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libx86-40e865de7e1eb5d8.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libphf-59dad5c67dcb08ae.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libphf_shared-2f2f832e88cc18d1.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libraw_cpuid-836331a5adb599e3.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libbitflags-df92c0f529cb93d8.rlib" "/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libconsole-898e9b38d65bf2a3.rlib" "/home/daniel/.xargo/lib/rustlib/x86_64-unknown-intermezzos-gnu/lib/libcore-53a02f271ea0ee12.rlib" "-Wl,-Bdynamic"
  = note: /usr/bin/ld: /home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libintermezzos-a5f8d0c159026232.rlib(boot.o): relocation R_X86_64_32 against `.bss' can not be used when making a shared object; recompile with -fPIC
          /usr/bin/ld: final link failed: Nonrepresentable section on output
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: Could not compile `intermezzos`.

Caused by:
  process didn't exit successfully: `rustc --crate-name intermezzos src/main.rs --crate-type bin --emit=dep-info,link -C opt-level=3 -C panic=abort -C metadata=73ba34d58d8c787f -C extra-filename=-73ba34d58d8c787f --out-dir /home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps --target x86_64-unknown-intermezzos-gnu -L dependency=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps -L dependency=/home/daniel/projects/os/kernel/target/release/deps --extern spin=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libspin-2e95bf66a70d82f7.rlib --extern pic=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libpic-e52c5dc844b294ba.rlib --extern rlibc=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/librlibc-3d5ca23970e1763b.rlib --extern keyboard=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libkeyboard-eab5eb9c2612608d.rlib --extern console=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libconsole-898e9b38d65bf2a3.rlib --extern interrupts=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libinterrupts-472aab710c3d4eff.rlib --extern x86=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libx86-40e865de7e1eb5d8.rlib --extern lazy_static=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/liblazy_static-f2c83931040bd2fb.rlib --extern intermezzos=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/deps/libintermezzos-a5f8d0c159026232.rlib --sysroot /home/daniel/.xargo -L native=/home/daniel/projects/os/kernel/target/x86_64-unknown-intermezzos-gnu/release/build/intermezzos-664041d978565111/out` (exit code: 101)
```

Hm; I'm not sure what this error is about. I haven't had the time to work on this in a few months; I wonder if something changed in Rust nightly and this has bitrotted? :/ @steveklabnik What do you mean by 'bitrotted'? And yeah, maybe something has changed in Rust. I will keep this first down, so it won't be important for now. https://en.wikipedia.org/wiki/Software_rot ; basically, the code used to work, but something changed, and now it doesn't work anymore, because it's old. Kind of like rot with food or other perishable items. @steveklabnik Ok :D, learned something new again. Will the project continue to be developed? Absolutely; this is a long-term thing for me. I've been trying to schedule some regular time in the near future, but haven't yet. Hobby time is hard to find sometimes 😄 @steveklabnik I am very anxious for the project to be further developed and for the conclusion of the Book @daniel-Q6wUOI are you still seeing this? Never tried it again. Can be closed. Thanks / sorry! We're hoping that things will be much easier in the future, but we're not quite there yet.
gharchive/issue
2017-09-13T15:30:42
2025-04-01T04:34:35.625523
{ "authors": [ "daniel-Q6wUOI", "esdrastarsis", "oldaniel", "steveklabnik" ], "repo": "intermezzOS/kernel", "url": "https://github.com/intermezzOS/kernel/issues/117", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2002203340
chore: further automate release process (#462) PR Type [x] Build-related changes Release What Is the Current Behavior? Currently, charts and the semantic version type are needed for a release pipeline run (e.g. icm-as:minor). This will be changed by using an automatic approach where commits and their messages deliver the needed information. Issue Number: Closes #462 What Is the New Behavior? Chart changes can be released by just starting the GitHub action and selecting the branch to release. Afterwards, a pull request will be made with the changes. Does this PR Introduce a Breaking Change? [x] No Other Information The pull request also switches from bump2version to bump-my-version; using the label ignore-for-release-notes, synchronization pull requests are ignored for the release-notes generation by GitHub. We should also update the PR template to reflect the new guidelines and that following them is mandatory. Maybe we could even require it in some way by an action, but that may be too hard to do (but maybe this already exists). I also updated the PR template and added a link to the wiki.
gharchive/pull-request
2023-11-20T13:24:27
2025-04-01T04:34:35.696456
{ "authors": [ "cortlepp-intershop", "khauser" ], "repo": "intershop/helm-charts", "url": "https://github.com/intershop/helm-charts/pull/468", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
835976140
feat: filter Availability of Products in product-links-carousel PR Type

- [ ] Bugfix
- [x] Feature
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no API changes)
- [ ] Build-related changes
- [ ] CI-related changes
- [ ] Documentation content changes
- [ ] Application / infrastructure changes
- [ ] Other:

What Is the Current Behavior? The product links carousel has no option to filter products (inStock/Availability). Issue Number: Closes #

What Is the New Behavior? The product links carousel now has an input parameter to configure whether you want to filter the products (inStock/Availability).

Does this PR Introduce a Breaking Change?

- [ ] Yes
- [x] No

Other Information If you have a better option to filter the products with one REST call, please tell me and change it. The

```typescript
products$(skus: string[]) {
  return this.store.pipe(select(getProducts, { skus }));
}
```

from the shoppingFacade can be an option, but it only returns the products from the state management and does not make a REST call.

@carlosDjangoo The ShoppingFacade.products$ doesn't exist anymore in the current version. Consider upgrading, product contexts are awesome 😉 The product$ stream from ShoppingFacade was the right choice. I don't think we currently have a REST call to fetch multiple products individually (only for categories or master products). This would be a nice feature though, which could be used in products.effects. I supplied a fixup commit that purely focuses on RxJS streams, which makes helper methods and destroy$ obsolete. I think the setting to true in src/app/pages/product/product-links/product-links.component.html should be considered a project customization and not be part of this PR, but that is just an opinion. 😄

@dhhyi Thanks, this is a good idea with RxJS 😉 The setting 'true' in product-links.component.html was only to show you an example; this is part of the project, you are right. An individual product REST call would be a nice feature, because some customers want to have individual product lists and suggestions, and calling every product individually can run into a performance issue then. The configuration parameter can be useful in projects; that's why we decided to leave the code as it is. We added code for the product links list, additionally.
gharchive/pull-request
2021-03-19T13:11:40
2025-04-01T04:34:35.704852
{ "authors": [ "SGrueber", "carlosDjangoo", "dhhyi" ], "repo": "intershop/intershop-pwa", "url": "https://github.com/intershop/intershop-pwa/pull/626", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
282891002
Timezone behavior problems - shifting days I am using the component with moment-timezone in order to fix the event time when viewed from various timezones, i.e. if an event is on the 18th at 9:00 in Berlin, it should display the same time and date whether you're viewing it from Hong Kong or New York. The following code makes this work: moment.tz.setDefault("Europe/Berlin"); BigCalendar.momentLocalizer(moment); However, if I change the local viewing time east-bound (into the past), the time stays the same but the calendar displays the event a day earlier (17th at 7:00). Opening the event form displays the correct date in the date-picker controls, which also use moment.js. It seems the calendar grid shifts a day into the past, including the starting week day (from Sunday to Saturday). Does anybody have any clue what could be causing this? I realized this is a problem that has existed for a while. There currently is a workaround in issue #118
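For reference, one common workaround follows the general technique below (a sketch only; it is not necessarily the exact code from #118, and shiftToTimezone is a hypothetical helper): shift each event's dates by the offset difference between the browser timezone and the target timezone before handing them to the calendar.

```typescript
import moment from 'moment-timezone';

// Shift a date so the browser-local calendar grid renders it as it
// would appear in the target timezone.
function shiftToTimezone(date: Date, timezone: string): Date {
  const browserOffset = moment(date).utcOffset(); // minutes
  const targetOffset = moment.tz(date, timezone).utcOffset(); // minutes
  return moment(date)
    .add(targetOffset - browserOffset, 'minutes')
    .toDate();
}

// Usage: map start/end of every event before passing them to the calendar.
declare const rawEvents: { title: string; start: Date; end: Date }[];
const events = rawEvents.map(event => ({
  ...event,
  start: shiftToTimezone(event.start, 'Europe/Berlin'),
  end: shiftToTimezone(event.end, 'Europe/Berlin'),
}));
```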
gharchive/issue
2017-12-18T14:09:44
2025-04-01T04:34:35.836082
{ "authors": [ "VladimirJelincic" ], "repo": "intljusticemission/react-big-calendar", "url": "https://github.com/intljusticemission/react-big-calendar/issues/651", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
439199049
[FEATURE] An overlap event log is needed. Since bugs can occur as more features and events are added, a feature should be added that logs a message to the browser console whenever an overlap event occurs between objects. It must be possible to turn this feature on and off in the settings. I have added the feature.
gharchive/issue
2019-05-01T15:11:47
2025-04-01T04:34:35.858148
{ "authors": [ "cuppaamukkaacoffee" ], "repo": "inureyes/gradios", "url": "https://github.com/inureyes/gradios/issues/522", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
376477026
global: make new release 1.1.0 and add pending deprecation warning Closed by https://github.com/inveniosoftware/invenio-theme/pull/126.
gharchive/issue
2018-11-01T16:51:58
2025-04-01T04:34:35.863229
{ "authors": [ "diegodelemos", "lnielsen" ], "repo": "inveniosoftware/invenio-theme", "url": "https://github.com/inveniosoftware/invenio-theme/issues/124", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
712457885
Update failing. When running invoke update I get the error below. I'm not sure what additional arguments it's looking for, and I couldn't find anything in the docs for this. https://inventree.readthedocs.io/en/latest/start/update/ root@server:/var/www/html/inventree# invoke update 'update' did not receive all required positional arguments! That's a weird one. There should be no positional arguments required. Are you up to date with the latest code? Running invoke update works for me on latest master (checked just now) All up to date. I think I'll try a clean install and see what happens. Installing a fresh copy didn't work on that server, which was running Ubuntu 18.04. I spun up a new Ubuntu 20.04 container, copied over the original install, and it worked fine there, so it was something with that container. Might have been a weird version of invoke? Glad you got it working anyhow
gharchive/issue
2020-10-01T04:05:39
2025-04-01T04:34:35.866188
{ "authors": [ "SchrodingersGat", "WeredPerson" ], "repo": "inventree/InvenTree", "url": "https://github.com/inventree/InvenTree/issues/1012", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1363820202
Required for build order logic error Please verify that this bug has NOT been raised before. [X] I checked and didn't find similar issue Describe the bug* If I have not misunderstood the intention of the "Required for build orders" page, I think there is a bug. It provides a great summary that I use to place purchase and build orders, but it happens now and then that I find parts in the list where I already have enough quantity. I found issue #2399 related to the same page, but it's more about adding additional info - for sure good suggestions that would make procurement easier. Steps to Reproduce One recent example: I have one part with a quantity of 30 in stock Allocate 20 for a new build order The part stock info looks like this All OK, but the part is anyway listed as required in the "Required for build orders" page with stock quantity 10. Expected behavior The part in the example should not be listed as required in the summary view Deployment Method [ ] Docker [ ] Bare metal Version Information Version Information: InvenTree-Version: 0.9.0 dev Django Version: 3.2.14 Commit Hash: ef12a834e Commit Date: 2022-08-01 Database: mysql Debug-Mode: True Deployed using Docker: False Relevant log output No response @gunstr thanks for reporting. Looking at this, it seems that this "required quantity" feature needs a lot of attention. There are a few different ways in which it is currently broken. The main reason that relates to your issue here is that the required_parts_to_complete_build method does not take into account the quantity currently allocated against the build:
https://github.com/inventree/InvenTree/blob/ea60cdc6a8a7eaa5a445148f5067e5c0137c5377/InvenTree/build/models.py#L1066 Also, this is a very expensive calculation currently, and it needs some attention to make it more efficient. Not stale @SchrodingersGat should this be tagged for 0.10.0? Adding the fix I have done, in case it helps anyone. No guarantees it's correct and complete, but at least for my use case it does the job. I changed the line
2022-09-06T21:11:48
2025-04-01T04:34:35.874367
{ "authors": [ "SchrodingersGat", "gunstr", "matmair" ], "repo": "inventree/InvenTree", "url": "https://github.com/inventree/InvenTree/issues/3653", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1686203294
Sales order Contact Please verify that this feature request has NOT been suggested before. [X] I checked and didn't find a similar feature request Problem statement When creating a sales order there's a field called Contact. What is the idea behind this field? When I click it I cannot enter any data, nor does it give me a dropdown or anything. Suggested solution Instruct me how to use it and where it gets its data. But suppose you have a new contact at the customer; then it would be great to be able to just type the name in there and use that in the report Describe alternatives you've considered Add a list of available contacts for a company, but by default a company only has one contact in InvenTree, though in reality this could be multiple persons Examples of other systems No response Do you want to develop this? [ ] I want to develop this. @mabroens I have updated the docs here - https://docs.inventree.org/en/latest/order/company/#contacts
gharchive/issue
2023-04-27T06:31:24
2025-04-01T04:34:35.877804
{ "authors": [ "SchrodingersGat", "mabroens" ], "repo": "inventree/InvenTree", "url": "https://github.com/inventree/InvenTree/issues/4707", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
493045819
exporting 'category' field adds description Example CSV that fails to import:
id, name, description, keywords, category, IPN
1, TLV316IDBVT, IC OP AMP CMOS RR 10MHZ SOT-23, OpAmp, “electronics/semi/ic/linear”, 500000001
2, TLV2316IDGKR, IC OP AMP GP 10MHZ 8VSSOP, OpAmp, “electronics/semi/ic/linear”, 500000002
Error: Cannot assign "'“electronics/semi/ic/linear”'": "Part.category" must be a "PartCategory" instance. InvenTree appears to assign incorrect columns to the imported data set, showing: ROW ERRORS ID NAME IS_TEMPLATE VARIANT_OF DESCRIPTION KEYWORDS CATEGORY IPN Adding the two missing columns IS_TEMPLATE and VARIANT_OF lines up the imported data correctly, but it still fails with the same error as mentioned above. Ok, I think I have worked this out now - see https://github.com/inventree/InvenTree/pull/524 After a lot more reading on how the import/export plugin works, the admin interface is now working really well. You can export data cleanly, and now the import works too! Any row with an ID matching one in the database will be UPDATED Any row with a blank or unknown ID will be CREATED Any row with invalid ForeignKey data will show an error and must be fixed in the external file
gharchive/issue
2019-09-12T21:48:34
2025-04-01T04:34:35.882253
{ "authors": [ "SchrodingersGat", "gitjoost" ], "repo": "inventree/InvenTree", "url": "https://github.com/inventree/InvenTree/issues/512", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1457615970
Add note about PLUGIN_ON_STARTUP setting Body of the issue When running in a container environment (e.g. Docker) we need to ensure that plugins are installed whenever the container is started! Ref: https://github.com/inventree/InvenTree/issues/3969 We should also add a note to the FAQ
gharchive/issue
2022-11-21T09:37:23
2025-04-01T04:34:35.883709
{ "authors": [ "SchrodingersGat" ], "repo": "inventree/inventree-docs", "url": "https://github.com/inventree/inventree-docs/issues/396", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1358808729
Android: Actions auto-dismiss notifications On Android, whenever an action is pressed on a notification, the notification is dismissed. I assume this is the default behavior, and for most people it makes sense, but there are some cases where you would like the action not to dismiss the notification. I also saw some apps that do this exact thing, so this is a must for many app developers. I have looked at the documentation and cannot find anything relating to preventing that. Thank you. STR: Have a notification with an action. Press action Notification disappears automatically. Was able to reproduce with a bare project, here is the test: "@notifee/react-native": "^5.6.0"
import { StatusBar } from "expo-status-bar";
import { StyleSheet, Text, View, Button } from "react-native";
import notifee, { AndroidImportance } from "@notifee/react-native";

const addChannel = async () => {
  await notifee.createChannel({
    id: "test",
    name: "bla",
    lights: false,
    vibration: true,
    importance: AndroidImportance.DEFAULT,
  });
};

export default function App() {
  addChannel();
  return (
    <View style={styles.container}>
      <Text>Open up App.js to start working on your app!</Text>
      <Button
        title={"allo"}
        onPress={async () => {
          await notifee.displayNotification({
            id: "one two",
            title: "bla",
            body: "blabla",
            android: {
              channelId: "test",
              timestamp: Date.now(),
              actions: [
                {
                  title: "hi",
                  pressAction: {
                    id: "hello",
                  },
                },
              ],
            },
          });
        }}
      ></Button>
      <StatusBar style="auto" />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#fff",
    alignItems: "center",
    justifyContent: "center",
  },
});
could be https://notifee.app/react-native/docs/android/behaviour#auto-cancelling ? That is definitely it, couldn't find anything about it inside the Actions and Press Actions docs.
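Based on the auto-cancelling page the thread settled on, a minimal sketch looks like this; the assumption (confirmed by the reporter above) is that the autoCancel flag also governs dismissal when an action is pressed:

```typescript
import notifee from "@notifee/react-native";

async function showPersistentNotification(): Promise<void> {
  await notifee.displayNotification({
    title: "bla",
    body: "blabla",
    android: {
      channelId: "test", // channel must have been created beforehand
      autoCancel: false, // default is true; false keeps the notification visible
      actions: [{ title: "hi", pressAction: { id: "hello" } }],
    },
  });
}
```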
gharchive/issue
2022-09-01T12:59:08
2025-04-01T04:34:35.902072
{ "authors": [ "PaperbagWriter", "mikehardy" ], "repo": "invertase/notifee", "url": "https://github.com/invertase/notifee/issues/509", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1899478835
[enhancement]: Canvas: Ability to draw straight lines Is there an existing issue for this? [X] I have searched the existing issues Contact Details No response What should this feature add? Add the ability to draw straight lines on the canvas (by pressing Shift or using a separate tool). WHY? Since SD cannot correctly take perspective into account, when some foreground object blocks the visibility of the background perspective lines, we have to complete and correct the lines manually (and run a new generation on it). It would be convenient if it were possible to draw them straight and, like a ruler, to extend the existing lines of the scene (and then maybe erase the extra parts of the line) Alternatives No response Additional Content No response This is now implemented in canvas v2.
gharchive/issue
2023-09-16T15:47:44
2025-04-01T04:34:35.953672
{ "authors": [ "psychedelicious", "vrubzov1957" ], "repo": "invoke-ai/InvokeAI", "url": "https://github.com/invoke-ai/InvokeAI/issues/4559", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2224373657
Add simplified model manager install API to InvocationContext Summary This adds three model manager-related methods to the InvocationContext uniform API. They are accessible via context.models.*: load_ckpt_from_path(model_path: Path, loader: Optional[Callable[[Path], Dict[str, Tensor]]] = None) -> LoadedModel Load the checkpoint-format model file located at the indicated Path. This will instantiate the model at the indicated path and load it into the model manager RAM cache if needed. If the optional loader argument is provided, the loader will be invoked to load the model into memory. Otherwise the method will call safetensors.torch.load_file() or torch.load() (with a pickle scan) as appropriate to the file suffix. Be aware that the LoadedModel object will have a config attribute of None. download_and_cache_ckpt(source: str | AnyHttpUrl, access_token: Optional[str] = None, timeout: Optional[int] = 0) -> Path Download the model file located at source to the models cache and return its Path. This will check models/.download_cache for the desired model file and download it from the indicated source if not already present. The local Path to the downloaded file is then returned. An example of this can be found in invokeai.backend.image_util.depth_anything.__init__, where a call to download_with_progress_bar() has been replaced with: depth_anything_model_path = self.context.models.download_and_cache_ckpt(DEPTH_ANYTHING_MODELS[model_size]) load_ckpt_from_url(source: str | AnyHttpUrl, access_token: Optional[str] = None, timeout: Optional[int] = 0, loader: Optional[Callable[[Path], Dict[str, torch.Tensor]]] = None) -> LoadedModel Download, cache and load the model file located at the indicated URL. This call combines download_and_cache_ckpt() with load_ckpt_from_path() to allow the loading of a remote or local model given its source URL. If need be, the model will be downloaded and cached. Other Changes This PR performs a migration, in which it renames models/.cache to models/.convert_cache, and migrates previously-downloaded ESRGAN, openpose, DepthAnything and Lama inpaint models from the models/core directory into models/.download_cache. There are a number of legacy model files in models/core, such as GFPGAN, which are no longer used. This PR deletes them and tidies up the models/core directory. Related Issues / Discussions I have systematically replaced all the calls to download_with_progress_bar(). This function is no longer used elsewhere and has been removed. QA Instructions I have added unit tests for the three new calls. You may test that the load_ckpt_from_url() call is working by running the upscaler within the web app. On first try, you will see the model file being downloaded into the models .cache directory. On subsequent tries, the model will either load from RAM (if it hasn't been displaced) or will be loaded from the filesystem. Merge Plan Squash merge when approved. Checklist [X] The PR has a short but descriptive title, suitable for a changelog [X] Tests added / updated (if applicable) [X] Documentation added / updated (if applicable) There are two issues with the current code. The first is that the models download to the .cache directory and may be cleared periodically by the model manager, in which case they will be re-downloaded when next needed. I'm not sure whether this is a feature or a bug, but I'm going to change the code so that models download somewhere that isn't volatile.
The second issue is that if a model is already installed (usually in models/core), the existing copy is ignored. I'll add a migrate script.
@psychedelicious @RyanJDick I think I've responded to all comments and suggestions. Thanks for the reviews!
I'll do a full review later, we still have too much coupling between the context/model manager and some of the utility classes. I've just pushed changes that lift the loading out of the LaMa and DepthAnything classes, but not DWOpenPose - I'm not familiar with how ONNX loads models and ran out of time for this for now.
There just doesn't seem to be a good way for utility classes to access the model manager service in the running app without going through the context. One out would be to have the model manager set a singleton in the way that the configuration service does, thereby allowing the utility classes to make a call to get_model_manager(). Of course, there would have to be a bit of setup in the event that the model manager hadn't already been initialized.
Do my changes in these two commits not do this? feat(backend): lift managed model loading out of lama class feat(backend): lift managed model loading out of depthanything class
Sure. Those look good.
Sorry, no, we need to do the same pattern for that last processor, DWOpenPose - it uses ONNX models and I'm not familiar with how they work. I can take care of that but won't be til next week.
I'll take a look at it Friday.
I've refactored DWOpenPose using the same pattern as in the other backend image processors. I also added some of the missing typehints so there are fewer red squigglies.
I noticed that there is a problem with the pip dependencies. If the onnxruntime package is installed, then even if onnxruntime-gpu is installed as well, the onnx runtime won't use the GPU (see https://github.com/microsoft/onnxruntime/issues/7748). You have to remove onnxruntime and then install onnxruntime-gpu. I don't think pyproject.toml provides a way for an optional dependency to remove a default dependency. Is there a workaround?
I think we'd need to just update the installer script with special handling to uninstall those packages if they are already installed. It's probably time to revise our optional dependency lists. I think "cuda" and "cpu" make sense to be the only two user-facing options. "xformers" is extraneous now (torch's native SDP implementation is just as fast), so it could be removed.
Thanks for cleaning up the pose detector. It would be nice to use the model context so we get memory mgmt, but that is a future task. I had some feedback from earlier about the public API that I think was lost: Token: When would a node reasonably provide an API token? We support regex-matched tokens in the config file. I don't think this should be in the invocation API. Timeout: Similarly, when could a node possibly be able to make a good determination of the timeout for a download? It doesn't know the user's internet connection speed. It's a user setting - could be globally set in the config file and apply to all downloads. If both of those args are removed, then load_ckpt_from_path and load_ckpt_from_url look very similar. I think this is maybe what @RyanJDick was suggesting with a single load_custom_model method. Also, will these methods work for diffusers models? If so, "ckpt" probably doesn't need to be in the name.
I'll remove the arguments and consolidate the two calls. Diffusers loading is NOT supported right now, so ckpt needs to stay in the names. Probably doesn't make sense to spend time on the onnx loading. This is the only model that uses it. Right now the regex token handling is done in a part of the install manager that is not called by the simplified API. I'll move this code into the core download() routine so that tokens are picked up whenever a URL is requested.
Sounds good. I think you're saying this should be a global config option and I agree with that.
Can we get the config migration code in so that I have a clean way of updating the config?
I don't think any migration is necessary - just add a sensible default value. I'll check back in on the config migration PR this week.
Not currently. It only works with checkpoints. I'd planned to add diffusers support later, but I guess I should do that now.
Converting to draft.
Ok, thanks.
The onnxruntime model loading architecture seems to be very different from what the model manager expects. In particular, the onnxruntime.InferenceSession() constructor doesn't seem to provide any way to accept a model that has been read into RAM or VRAM. The closest I can figure is that you can pass the constructor an IOBytes object to a serialized version of the model in memory. This will require some architectural changes in the model manager that should be its own PR.
I've played with this a bit. It is easy to load the openpose onnx sessions into the RAM cache and they will run happily under the existing MM cache system. However, Onnx sessions do their own internal VRAM/CUDA management, and so I found that for the duration of the time that the session object is in RAM, it holds on to a substantial chunk of VRAM (1.7GB). The openpose session is only used during conversion of an image into a pose model, and I think it's better to have slow disk-based loading of the openpose session than to silently consume a chunk of VRAM that interferes with later generation.
@psychedelicious This is ready for your review now. There are now just two calls: load_and_cache_model() and download_and_cache_model(), which return a LoadedModel and a locally cached Path respectively. In addition, the model source can now be a URL, a local Path, or a repo_id. Support for the latter involved my refactoring the way that multifile downloads work.
@psychedelicious I just updated the whole thing to work properly with the new (and very nice) Pydantic-based events. I've also added a new migration. Please review when you can.
This merge took me over an hour to resolve and I'd like to avoid repeating the experience. I removed a number of unnecessary changes in invocation_context.py, mostly extraneous type annotations. If mypy is complaining about these, then that's a mypy problem, because all the methods are annotated correctly.
I also moved load_model_from_url from the main model manager class into the invocation context.
Yes, mypy is having trouble tracking the return type of several methods. I haven't figured out what causes the problem and don't want to add a # type: ignore. But maybe I should 'cause I'm not ready to turn to pyright.
We shouldn't add # type: ignore, that will stop all type checkers from doing anything - including pyright. The places where you made code quality concessions to satisfy mypy involve very straightforward types - either your mypy config is borked or mypy itself is borked. FWIW, I've found pyright to be much faster, more thorough and more correct than mypy.
@RyanJDick Would you mind doing one last review of this PR?
You've convinced me. I've switched to pyright!
Looks like 43/44 files have changed since I last looked 😅. I'll plan to spend a chunk of time on this tomorrow.
@RyanJDick Can narrow that down to reviewing invocation_context.py, which changes the public API and is more important to get right the first time. Thanks.
@RyanJDick I've fixed the issues you identified.
gharchive/pull-request
2024-04-04T03:31:18
2025-04-01T04:34:35.985990
{ "authors": [ "RyanJDick", "lstein", "psychedelicious" ], "repo": "invoke-ai/InvokeAI", "url": "https://github.com/invoke-ai/InvokeAI/pull/6132", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1524427827
🛑 Bitwage API Docs is down In e9e9476, Bitwage API Docs (https://developer.bitwage.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Bitwage API Docs is back up in 9f3e379.
gharchive/issue
2023-01-08T09:41:00
2025-04-01T04:34:35.989590
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/127", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2393414259
🛑 Bitwage API (Sandbox) is down In 12997f9, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 96 ms Resolved: Bitwage API (Sandbox) is back up in aa633a6 after 13 minutes.
gharchive/issue
2024-07-06T05:46:55
2025-04-01T04:34:35.992089
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/2742", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2267027902
🛑 Bitwage API (Sandbox) is down In f3da71b, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 308 ms Resolved: Bitwage API (Sandbox) is back up in 0c991c6 after 52 minutes.
gharchive/issue
2024-04-27T13:58:18
2025-04-01T04:34:35.994364
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/376", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2267498301
🛑 Bitwage API (Sandbox) is down In 4ff5af5, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 248 ms Resolved: Bitwage API (Sandbox) is back up in 756a63d after 1 hour, 31 minutes.
gharchive/issue
2024-04-28T11:29:48
2025-04-01T04:34:35.996879
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/407", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2495832312
🛑 Bitwage API (Sandbox) is down In c4e7f37, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 154 ms Resolved: Bitwage API (Sandbox) is back up in 3b54639 after 6 minutes.
gharchive/issue
2024-08-29T23:35:52
2025-04-01T04:34:35.999183
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/4517", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2543937596
🛑 Bitwage API (Sandbox) is down In 277f107, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 284 ms Resolved: Bitwage API (Sandbox) is back up in d6a9685 after 1 hour, 16 minutes.
gharchive/issue
2024-09-23T23:29:40
2025-04-01T04:34:36.001449
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/5278", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2632332726
🛑 Bitwage API (Sandbox) is down In 7498396, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 49 ms Resolved: Bitwage API (Sandbox) is back up in 826f051 after 1 hour, 3 minutes.
gharchive/issue
2024-11-04T09:50:20
2025-04-01T04:34:36.003733
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/6499", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2676422725
🛑 Bitwage API (Sandbox) is down In 07eb67a, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 49 ms Resolved: Bitwage API (Sandbox) is back up in b0b59bc after 58 minutes.
gharchive/issue
2024-11-20T16:19:58
2025-04-01T04:34:36.006020
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/6985", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2691767365
🛑 Bitwage API (Sandbox) is down In 837ec3b, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 62 ms Resolved: Bitwage API (Sandbox) is back up in 92073d0 after 23 minutes.
gharchive/issue
2024-11-25T18:21:01
2025-04-01T04:34:36.008516
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/7130", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1051407594
Add new ioBroker.cleveron adapter Adapter-checker checked, Appveyor-tested RE-CHECK! Automated adapter checker ioBroker.cleveron [ ] :heavy_exclamation_mark: [E201] Bluefox was not found in the collaborators on NPM! Please execute in adapter directory: npm owner add bluefox iobroker.cleveron [ ] :eyes: [W400] Cannot find "cleveron" in latest repository Add comment "RE-CHECK!" to start check anew RE-CHECK! Automated adapter checker ioBroker.cleveron :thumbsup: No errors found [ ] :eyes: [W400] Cannot find "cleveron" in latest repository Add comment "RE-CHECK!" to start check anew "RE-CHECK!" RE-CHECK! Forum testing thread But since cleveron is a startup, I don't really expect feedback yet.
Hi, sorry that it took that long, here are my review comments: Your testing is red, it seems to be a broken package-lock.json file The way you encrypt the password works ... but you could make it easier by using "encryptedNative" - see readme, optional please remove https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L71 if not needed same for https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L77 without logic for state changes please remove https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L153 There are unknown roles used e.g. in https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L451 ... please verify the used roles and do not invent new ones yourself --> https://github.com/ioBroker/ioBroker/blob/master/doc/STATE_ROLES.md maybe start using async and await for all the object creations? Then such hacks like in https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L521 are not needed ... they will break! The timeout set anew in your timeout function cannot be stopped by unload. https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L149 Please check and adjust, thank you
Thanks for the feedback, here is what I could do: Your testing is red, it seems to be a broken package-lock.json file package-lock.json fixed, testing ok now. The way you encrypt the password works ... but you could make it easier by using "encryptedNative" - see readme, optional tried to settle it with 'encryptedNative' but it did not work - will change that later. please remove https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L71 if not needed done same for https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L77 done without logic for state changes please remove https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L153 done There are unknown roles used e.g. in https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L451 ... please verify the used roles and do not invent new ones yourself --> https://github.com/ioBroker/ioBroker/blob/master/doc/STATE_ROLES.md done maybe start using async and await for all the object creations? Then such hacks like in https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L521 are not needed ... they will break! fixed with 'async function' and 'await ... setObjectNotExistsAsync' The timeout set anew in your timeout function cannot be stopped by unload. https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L149 fixed, changed to setInterval / clearInterval thank you for your support
Hi, I had another look: Please do not log the password :-) Then all encryption is useless :-) Which problems did you have with encryptedNative? If you "play around", did you always run "iob upload adaptername"? Note: if you do it that way then you also need to "await" the call in https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L341 ... else this runs async in parallel ... this is ok in your case because it is the last command How many objects are created there usually? Because also these many states will be set later and they are not set async ... If there are too many this can produce issues in execution ... maybe also use setStateAsync with await? Additionally please also release a new version to npm after adjustments. Thank you!
Hi, thanks a lot for the review & feedback. Please do not log the password :-) done, sorry Which problems did you have with encryptedNative? If you "play around", did you always run "iob upload adaptername"? Thanks for insisting on this one. I tried again and now it works. It might be that I did not change index_m.html with my last tries. Note: if you do it that way then you also need to "await" the call in https://github.com/iobroker-community-adapters/ioBroker.cleveron/blob/main/main.js#L341 ... else this runs async in parallel ... this is ok in your case because it is the last command Till now, that was my way to handle asynchronicity (cascade). Left it as is, might change it in a future version if the API the adapter works with gets more complex. How many objects are created there usually? Because also these many states will be set later and they are not set async ... If there are too many this can produce issues in execution ... maybe also use setStateAsync with await? Difficult to say. It's 13 objects per (physical) device. I think that the average number of devices per ioBroker instance will be below 10. Nevertheless, changed to await setStateAsync. Additionally please also release a new version to npm after adjustments. done, it's iobroker.cleveron@0.0.4 Thanks a lot for doing this really great job with ioBroker
Thanks for insisting on this one. Sure, because it makes the developer life thaaaat much easier :-) Thank you very much for the changes
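For later readers, here is a minimal sketch of the two patterns agreed on in this review (awaited object creation and an interval that unload can actually stop); the adapter name, state id, and poll() body are illustrative, not the adapter's real code:

```typescript
import * as utils from "@iobroker/adapter-core";

class Cleveron extends utils.Adapter {
  private pollInterval?: ReturnType<typeof setInterval>;

  public constructor(options: Partial<utils.AdapterOptions> = {}) {
    super({ ...options, name: "cleveron" });
    this.on("ready", this.onReady.bind(this));
    this.on("unload", this.onUnload.bind(this));
  }

  private async onReady(): Promise<void> {
    // Await object creation so later setState calls cannot race ahead of it.
    await this.setObjectNotExistsAsync("info.connection", {
      type: "state",
      common: {
        name: "connection",
        type: "boolean",
        role: "indicator.connected",
        read: true,
        write: false,
      },
      native: {},
    });
    await this.setStateAsync("info.connection", { val: true, ack: true });

    // Poll on an interval instead of re-arming timeouts, so unload can stop it.
    this.pollInterval = setInterval(() => this.poll(), 60000);
  }

  private poll(): void {
    // Illustrative placeholder for the adapter's periodic data refresh.
  }

  private onUnload(callback: () => void): void {
    try {
      if (this.pollInterval) {
        clearInterval(this.pollInterval); // stop polling on unload
      }
    } finally {
      callback();
    }
  }
}
```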
gharchive/pull-request
2021-11-11T22:19:48
2025-04-01T04:34:36.064501
{ "authors": [ "Apollon77", "GermanBluefox", "forelleblau" ], "repo": "ioBroker/ioBroker.repositories", "url": "https://github.com/ioBroker/ioBroker.repositories/pull/1520", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1907666259
Certificate request failed: Error: Didn't finalize order: Unhandled status '403' Multihost with Master (Raspi3, node 18.17.1) and Slave (Raspi4, node 16.20.0), both hosts on 5.0.12. Port 80 is forwarded to the Raspi4 with the ACME 0.1.0 instance. Let's Encrypt worked well before in the admin instance. When I activate the ACME adapter I get the following error code: "Certificate request for xxx.hopto.org failed: Error: Didn't finalize order: Unhandled status '403'. This is not one of the known statuses... Requested: 'xxx.hopto.org' Validated: '' { "type": "urn:ietf:params:acme:error:orderNotReady", "detail": "Order's status (\"valid\") is not acceptable for finalization", "status": 403 } Please open an issue at https://git.rootprojects.org/root/acme.js On https://git.rootprojects.org/root/acme.js it seems there is no activity. Hello, same for me, but on a single ioBroker host. Let's Encrypt worked well before in the web instance; now after the update I need to use ACME. Same error code as Tottback. I'm using IPv6 only. Ports 443 and 80 are forwarded to the ioBroker host. @Tottback @njdsih Please add all involved versions adapter: admin: js-controller: node: O/S: adapter: acme 0.1.0 admin: 6.10.1 js-controller: 5.0.12 node: v18.17.1 O/S: Ubuntu 22.04.3 LTS Assuming you are using HTTP challenge: is http://xxx.hopto.org/ reachable from the public internet? And forwarded to the port configured in ACME? Yes, xxx.hopto.org is reachable from the Internet for sure, and port 80 is forwarded to ACME. There is a connection to Let's Encrypt, and the failure message comes from ACME, from my understanding. I also tried on the Raspi3, and when I don't forward port 80 to that Raspi I get a "No Response" error, as expected. Ok. Please run in debug and let us know the result. You should see messages like challenge server starting, challenge request, etc.
OK, here it is: 2023-10-04 21:39:50.217 - info: host.Raspi4 "system.adapter.acme.0" disabled 2023-10-04 21:39:50.218 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=false) 2023-10-04 21:39:50.229 - debug: acme.0 (9784) Init http-01 challenge server 2023-10-04 21:39:50.237 - debug: acme.0 (9784) Using URL: https://acme-staging-v02.api.letsencrypt.org/directory 2023-10-04 21:39:51.360 - debug: acme.0 (9784) Loaded existing ACME account: {"_id":"acme.0.account","type":"meta","common":{"name":"account","type":"json","role":"json"},"native":{"full":{"key":{"kty":"EC","crv":"P-256","x":"Dy2pAiM4Y6RvsaWDiH5ccJ6wCrhIjFrBEwP4nfumtnc","y":"MkjgKRJKFHvubFqBZEwrzjUnNZ4LrvRj3FXZ-4hKoeo","kid":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/119007074"},"contact":["mailto:xxx.yyy@abc.de"],"initialIp":"93.244.148.226","createdAt":"2023-09-19T16:45:57.208054823Z","status":"valid"},"key":{"kty":"EC","crv":"P-256","d":"R3CXyakbpMF7mYTtg4pjVGSKLv_abZ9Kvu_wD4KMy8w","x":"Dy2pAiM4Y6RvsaWDiH5ccJ6wCrhIjFrBEwP4nfumtnc","y":"MkjgKRJKFHvubFqBZEwrzjUnNZ4LrvRj3FXZ-4hKoeo","kid":"2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI"}},"from":"system.adapter.acme.0","ts":1696448389642,"acl":{"object":1636,"owner":"system.user.admin","ownerGroup":"system.group.administrator"},"user":"system.user.admin"} 2023-10-04 21:39:51.365 - debug: acme.0 (9784) Collection: {"id":"yyy","commonName":"xxx.hopto.org","altNames":""} 2023-10-04 21:39:51.367 - debug: acme.0 (9784) domains: ["xxx.hopto.org"] 2023-10-04 21:39:51.379 - info: acme.0 (9784) Collection yyy does not exist - will create 2023-10-04 21:39:56.101 - debug: acme.0 (9784) init: {"type":"*"} 2023-10-04 21:39:56.120 - debug: acme.0 (9784) _set: {"challenge":{"identifier":{"type":"dns","value":"xxx.hopto.org"},"challenges":[{"type":"http-01","status":"pending","url":"https://acme-staging-v02.example.com/0","token":"test-07fcc60e819ae721fd9c9f3c60901ac0-0"},{"type":"dns-01","status":"pending","url":"https://acme-staging-v02.example.com/1","token":"test-07fcc60e819ae721fd9c9f3c60901ac0-1","_wildcard":true}],"expires":"2023-10-04T19:40:56.114Z","type":"http-01","status":"pending","url":"https://acme-staging-v02.example.com/0","token":"test-07fcc60e819ae721fd9c9f3c60901ac0-0","hostname":"xxx.hopto.org","altname":"xxx.hopto.org","thumbprint":"2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI","keyAuthorization":"test-07fcc60e819ae721fd9c9f3c60901ac0-0.2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI","challengeUrl":"http://xxx.hopto.org/.well-known/acme-challenge/test-07fcc60e819ae721fd9c9f3c60901ac0-0"}} 2023-10-04 21:39:56.122 - debug: acme.0 (9784) Added test-07fcc60e819ae721fd9c9f3c60901ac0-0 - DB now contains: 1 2023-10-04 21:39:56.134 - info: acme.0 (9784) challengeServer listening on 0.0.0.0 port 80 2023-10-04 21:40:01.203 - debug: acme.0 (9784) challengeServer request: /.well-known/acme-challenge/test-07fcc60e819ae721fd9c9f3c60901ac0-0 2023-10-04 21:40:01.205 - debug: acme.0 (9784) Got challenge for test-07fcc60e819ae721fd9c9f3c60901ac0-0 2023-10-04 21:40:01.221 - debug: acme.0 (9784) remove: 
{"challenge":{"identifier":{"type":"dns","value":"xxx.hopto.org"},"challenges":[{"type":"http-01","status":"pending","url":"https://acme-staging-v02.example.com/0","token":"test-07fcc60e819ae721fd9c9f3c60901ac0-0"},{"type":"dns-01","status":"pending","url":"https://acme-staging-v02.example.com/1","token":"test-07fcc60e819ae721fd9c9f3c60901ac0-1","_wildcard":true}],"expires":"2023-10-04T19:40:56.114Z","type":"http-01","status":"pending","url":"https://acme-staging-v02.example.com/0","token":"test-07fcc60e819ae721fd9c9f3c60901ac0-0","hostname":"xxx.hopto.org","altname":"xxx.hopto.org","thumbprint":"2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI","keyAuthorization":"test-07fcc60e819ae721fd9c9f3c60901ac0-0.2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI","challengeUrl":"http://xxx.hopto.org/.well-known/acme-challenge/test-07fcc60e819ae721fd9c9f3c60901ac0-0"}} 2023-10-04 21:40:01.222 - debug: acme.0 (9784) DB now contains: 0 2023-10-04 21:40:02.241 - debug: acme.0 (9784) ACME: certificate_order: [object Object] 2023-10-04 21:40:11.103 - debug: acme.0 (9784) ACME: challenge_status: [object Object] 2023-10-04 21:40:11.104 - debug: acme.0 (9784) remove: {"challenge":{"identifier":{"type":"dns","value":"xxx.hopto.org"},"status":"valid","expires":"2023-10-19T16:51:27Z","challenges":[{"type":"http-01","status":"valid","url":"https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/8384334344/BEK_1w","token":"I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g","validationRecord":[{"url":"http://xxx.hopto.org/.well-known/acme-challenge/I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g","hostname":"xxx.hopto.org","port":"80","addressesResolved":["93.244.148.226"],"addressUsed":"93.244.148.226"}],"validated":"2023-09-19T16:51:27Z"}],"type":"http-01","url":"https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/8384334344/BEK_1w","token":"I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g","validationRecord":[{"url":"http://xxx.hopto.org/.well-known/acme-challenge/I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g","hostname":"xxx.hopto.org","port":"80","addressesResolved":["93.244.148.226"],"addressUsed":"93.244.148.226"}],"validated":"2023-09-19T16:51:27Z","hostname":"xxx.hopto.org","altname":"xxx.hopto.org","thumbprint":"2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI","keyAuthorization":"I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g.2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI","challengeUrl":"http://xxx.hopto.org/.well-known/acme-challenge/I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g"}} 2023-10-04 21:40:11.107 - debug: acme.0 (9784) DB now contains: 0 2023-10-04 21:40:12.417 - debug: acme.0 (9784) ACME: certificate_status: [object Object] 2023-10-04 21:40:14.620 - debug: acme.0 (9784) ACME: certificate_status: [object Object] 2023-10-04 21:40:14.624 - error: acme.0 (9784) Certificate request for yyy (xxx.hopto.org) failed: Error: Didn't finalize order: Unhandled status '403'. This is not one of the known statuses... 
Requested: 'xxx.hopto.org' Validated: '' { "type": "urn:ietf:params:acme:error:orderNotReady", "detail": "Order's status (\"valid\") is not acceptable for finalization", "status": 403 } Please open an issue at https://git.rootprojects.org/root/acme.js 2023-10-04 21:40:14.626 - debug: acme.0 (9784) Done 2023-10-04 21:40:14.646 - debug: acme.0 (9784) No collections found 2023-10-04 21:40:14.705 - info: host.Raspi4 "system.adapter.acme.0" enabled 2023-10-04 21:40:14.705 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=false) 2023-10-04 21:40:14.739 - info: acme.0 (9784) Terminated (ADAPTER_REQUESTED_TERMINATION): Processing complete 2023-10-04 21:40:15.599 - debug: acme.0 (9784) Shutdown... 2023-10-04 21:40:16.114 - info: host.Raspi4 instance system.adapter.acme.0 terminated with code 11 (ADAPTER_REQUESTED_TERMINATION) 2023-10-04 21:40:18.284 - info: host.Raspi4 instance scheduled system.adapter.acme.0 0 24 * * * 2023-10-04 21:40:18.331 - info: host.Raspi4 instance system.adapter.acme.0 started with pid 10725 2023-10-04 21:40:24.272 - debug: acme.0 (10725) Redis Objects: Use Redis connection: 192.168.178.199:9001 2023-10-04 21:40:24.436 - debug: acme.0 (10725) Objects client ready ... initialize now 2023-10-04 21:40:24.442 - debug: acme.0 (10725) Objects create System PubSub Client 2023-10-04 21:40:24.446 - debug: acme.0 (10725) Objects create User PubSub Client 2023-10-04 21:40:24.644 - debug: acme.0 (10725) Objects client initialize lua scripts 2023-10-04 21:40:24.667 - debug: acme.0 (10725) Objects connected to redis: 192.168.178.199:9001 2023-10-04 21:40:24.799 - debug: acme.0 (10725) Redis States: Use Redis connection: 192.168.178.199:9000 2023-10-04 21:40:24.861 - debug: acme.0 (10725) States create System PubSub Client 2023-10-04 21:40:24.865 - debug: acme.0 (10725) States create User PubSub Client 2023-10-04 21:40:25.004 - debug: acme.0 (10725) States connected to redis: 192.168.178.199:9000 2023-10-04 21:40:25.644 - info: acme.0 (10725) starting. 
Version 0.1.0 in /opt/iobroker/node_modules/iobroker.acme, node: v16.20.0, js-controller: 5.0.12 2023-10-04 21:40:25.724 - debug: acme.0 (10725) config: {"maintainerEmail":"xxx.yyy@abc.de","useStaging":true,"http01Active":true,"port":80,"bind":"0.0.0.0","dns01Active":false,"dns01Module":"","dns01OapiUser":"","dns01OapiKey":"","dns01OclientIp":"","dns01Okey":"","dns01Osecret":"","dns01Otoken":"","dns01Ousername":"","dns01OverifyPropagation":false,"dns01PpropagationDelay":120000,"collections":[{"id":"yyy","commonName":"xxx.hopto.org","altNames":""}]} 2023-10-04 21:40:26.217 - info: host.Raspi4 "system.adapter.acme.0" disabled 2023-10-04 21:40:26.229 - debug: acme.0 (10725) Init http-01 challenge server 2023-10-04 21:40:26.218 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=true) 2023-10-04 21:40:26.237 - debug: acme.0 (10725) Using URL: https://acme-staging-v02.api.letsencrypt.org/directory 2023-10-04 21:40:27.455 - debug: acme.0 (10725) Loaded existing ACME account: {"_id":"acme.0.account","type":"meta","common":{"name":"account","type":"json","role":"json"},"native":{"full":{"key":{"kty":"EC","crv":"P-256","x":"Dy2pAiM4Y6RvsaWDiH5ccJ6wCrhIjFrBEwP4nfumtnc","y":"MkjgKRJKFHvubFqBZEwrzjUnNZ4LrvRj3FXZ-4hKoeo","kid":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/119007074"},"contact":["mailto:xxx.yyy@abc.de"],"initialIp":"93.244.148.226","createdAt":"2023-09-19T16:45:57.208054823Z","status":"valid"},"key":{"kty":"EC","crv":"P-256","d":"R3CXyakbpMF7mYTtg4pjVGSKLv_abZ9Kvu_wD4KMy8w","x":"Dy2pAiM4Y6RvsaWDiH5ccJ6wCrhIjFrBEwP4nfumtnc","y":"MkjgKRJKFHvubFqBZEwrzjUnNZ4LrvRj3FXZ-4hKoeo","kid":"2KyY8iAgg9Fmh0B0yWho3uxsiqBp4bhM5HxXbndubvI"}},"from":"system.adapter.acme.0","ts":1696448425627,"acl":{"object":1636,"owner":"system.user.admin","ownerGroup":"system.group.administrator"},"user":"system.user.admin"}
Certificate request for yyy (xxx.hopto.org) failed: Error: Didn't finalize order: Unhandled status '403' At first glance I believe this is saying that when the certificate issuer tried to establish validity of the order by hitting the challenge URL (http://xxx.hopto.org/.well-known/acme-challenge/I77c82jOZmKxziisxg3PpU3P2ZL2L3nIzbE840S4s8g) a 403 error came back. Which seems to imply there's something wrong with your firewall or port forwarding. Port forwarding in the router was not changed (TCP 80 -> Raspi4: 80) and worked well with the Let's Encrypt handling in the admin instance before. Error 403: Forbidden - you don't have permission to access this resource - is an HTTP status code that occurs when the web server understands the request but can't provide additional access. I know what 403 means. When I get a chance I will look in the code, but I think the challenge server should be logging an incoming request, and if that doesn't happen the request from the certificate issuer can't be getting to the challenge server. Thanks, I didn't know 403 in detail, therefore this was just for my documentation ;-) I tried with IPv4 and IPv6 access in the ACME instance, but see no difference. Should I try to use different access data? Thanks for analyzing. I have only HTTP activated. I've copied the log to notepad++ for anonymizing. Seems the CRs get lost by that somehow, sorry. Now I have a different issue: I can't start the ACME adapter anymore because "already running". Already re-installed ACME and restarted IOB without change. Sorry for mixing up the issues here. Could be related to #43? Seems not to be similar: "ADAPTER_ALREADY_RUNNING" is not mentioned there. The error comes from the host, not the acme instance?!
host.Raspi4 | 2023-10-09 08:07:29.712 | error | instance system.adapter.acme.0 terminated with code 7 (ADAPTER_ALREADY_RUNNING) -- | -- | -- | -- host.Raspi4 | 2023-10-09 08:07:22.329 | info | instance system.adapter.acme.0 started with pid 9119 host.Raspi4 | 2023-10-09 08:07:22.288 | info | instance scheduled system.adapter.acme.0 0 23 * * * host.Raspi4 | 2023-10-09 08:07:18.710 | info | stopInstance system.adapter.acme.0 (force=false, process=false) host.Raspi4 | 2023-10-09 08:06:28.781 | error | instance system.adapter.acme.0 terminated with code 7 (ADAPTER_ALREADY_RUNNING) host.Raspi4 | 2023-10-09 08:06:21.117 | info | instance system.adapter.acme.0 started with pid 7547 host.Raspi4 | 2023-10-09 08:06:21.080 | info | instance scheduled system.adapter.acme.0 0 24 * * * host.Raspi4 | 2023-10-09 08:06:17.649 | info | stopInstance system.adapter.acme.0 (force=false, process=false)
I tried again after some minor adapter updates. Now it suddenly works; I get a certificate and the collection is available. Status of the collections: ID Status Domains Staging Expires xxx OK xxxx.hopto.org 8.1.2024, 18:40:56 But now the adapter is stuck in a loop and fills the log; I had to stop it manually (see log). And to be honest I have no clue how to use the certificates in the web instance. I created certificate entries in the system settings and entered the certificate data from the log. I used these entries in the web instance and this works, the certificate is accepted as valid. But this cannot be the intended way, for sure.
2023-10-20 18:13:44.120 - info: host.Raspi4 "system.adapter.acme.0" enabled 2023-10-20 18:13:44.121 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=false) 2023-10-20 18:13:44.133 - info: acme.0 (12625) Terminated (ADAPTER_REQUESTED_TERMINATION): Processing complete 2023-10-20 18:13:44.780 - debug: acme.0 (12625) Shutdown... 2023-10-20 18:13:45.213 - info: host.Raspi4 instance system.adapter.acme.0 terminated with code 11 (ADAPTER_REQUESTED_TERMINATION) 2023-10-20 18:13:47.546 - info: host.Raspi4 instance scheduled system.adapter.acme.0 0 22 * * * 2023-10-20 18:13:47.580 - info: host.Raspi4 instance system.adapter.acme.0 started with pid 13072 2023-10-20 18:13:53.557 - debug: acme.0 (13072) Redis Objects: Use Redis connection: 192.168.xxx.yyy:9001 2023-10-20 18:13:53.693 - debug: acme.0 (13072) Objects client ready ... initialize now 2023-10-20 18:13:53.708 - debug: acme.0 (13072) Objects create System PubSub Client 2023-10-20 18:13:53.715 - debug: acme.0 (13072) Objects create User PubSub Client 2023-10-20 18:13:53.975 - debug: acme.0 (13072) Objects client initialize lua scripts 2023-10-20 18:13:53.996 - debug: acme.0 (13072) Objects connected to redis: 192.168.xxx.yyy:9001 2023-10-20 18:13:54.138 - debug: acme.0 (13072) Redis States: Use Redis connection: 192.168.xxx.yyy:9000 2023-10-20 18:13:54.270 - debug: acme.0 (13072) States create System PubSub Client 2023-10-20 18:13:54.273 - debug: acme.0 (13072) States create User PubSub Client 2023-10-20 18:13:54.392 - debug: acme.0 (13072) States connected to redis: 192.168.xxx.yyy:9000 2023-10-20 18:13:55.104 - info: acme.0 (13072) starting.
Version 0.1.0 in /opt/iobroker/node_modules/iobroker.acme, node: v16.20.0, js-controller: 5.0.12 2023-10-20 18:13:55.248 - debug: acme.0 (13072) config: {"maintainerEmail":"xyz@abc.de","useStaging":false,"http01Active":true,"port":80,"bind":"0.0.0.0","dns01Active":false,"dns01Module":"acme-dns-01-dnsimple","dns01OapiUser":"","dns01OapiKey":"","dns01OclientIp":"","dns01Okey":"","dns01Osecret":"","dns01Otoken":"","dns01Ousername":"","dns01OverifyPropagation":false,"dns01PpropagationDelay":120000,"collections":[{"id":"TestCert","commonName":"xxx.hopto.org","altNames":""}]} 2023-10-20 18:13:55.808 - info: host.Raspi4 "system.adapter.acme.0" disabled 2023-10-20 18:13:55.809 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=true) 2023-10-20 18:13:55.810 - info: host.Raspi4 stopInstance canceled schedule system.adapter.acme.0 2023-10-20 18:13:55.825 - debug: acme.0 (13072) Init http-01 challenge server 2023-10-20 18:13:55.833 - debug: acme.0 (13072) Using URL: https://acme-v02.api.letsencrypt.org/directory 2023-10-20 18:13:57.230 - debug: acme.0 (13072) Loaded existing ACME account: {"_id":"acme.0.account","type":"meta","common":{"name":"account","type":"json","role":"json"},"native":{"full":{"key":{"kty":"EC","crv":"P-256","x":"9NtKVZytQYM1CpIcrJcMp3j8BpfEDO2ms6h5-Q99DVg","y":"td0UNs8xvHXYVSCDUsijstVouw8g_yEF5_nl1wF6qqY","kid":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/121249004"},"contact":["mailto:xyz@abc.de"],"initialIp":"93.245.xx.yy","createdAt":"2023-10-09T11:21:09.49813702Z","status":"valid"},"key":{"kty":"EC","crv":"P-256","d":"Ib2QJKnOMu0T6zSwQIoRMDLhLSlP7XOvHpqQ7Evlmgw","x":"9NtKVZytQYM1CpIcrJcMp3j8BpfEDO2ms6h5-Q99DVg","y":"td0UNs8xvHXYVSCDUsijstVouw8g_yEF5_nl1wF6qqY","kid":"Hn9goYdSW6lpV_u7WfUM6ENU2M_32YUGWapVX3l4UF8"}},"from":"system.adapter.acme.0","ts":1697818435068,"acl":{"object":1636,"owner":"system.user.admin","ownerGroup":"system.group.administrator"},"user":"system.user.admin"} 2023-10-20 18:13:57.244 - debug: acme.0 (13072) Collection: {"id":"TestCert","commonName":"xxx.hopto.org","altNames":""} 2023-10-20 18:13:57.251 - debug: acme.0 (13072) domains: ["xxx.hopto.org"] 2023-10-20 18:13:57.309 - debug: acme.0 (13072) Existing: TestCert: {"from":"acme.0","key":"-----BEGIN RSA PRIVATE KEY-----xxx==\n-----END RSA PRIVATE KEY-----","cert":"-----BEGIN CERTIFICATE-----yyy-----END CERTIFICATE-----\n","chain":["-----BEGIN CERTIFICATE-----zzz-----END CERTIFICATE-----\n","-----BEGIN CERTIFICATE-----aaa-----END CERTIFICATE-----\n\n-----BEGIN CERTIFICATE-----bbb-----END CERTIFICATE-----\n"],"domains":["xxx.hopto.org"],"staging":false,"tsExpires":1704735656000} 2023-10-20 18:13:58.699 - debug: acme.0 (13072) Existing cert: {"publicModulus":"yyy","notBefore":"Oct 10 17:40:57 2023 GMT","notAfter":"Jan 8 17:40:56 2024 GMT","altNames":["xxx.hopto.org"],"ocspList":["http://r3.o.lencr.org"]} 2023-10-20 18:13:58.701 - debug: acme.0 (13072) Collection TestCert certificate already looks good 2023-10-20 18:13:58.722 - debug: acme.0 (13072) existingCollectionIds: ["TestCert"] 2023-10-20 18:13:58.744 - info: host.Raspi4 "system.adapter.acme.0" enabled 2023-10-20 18:13:58.745 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=true) 2023-10-20 18:13:58.761 - info: acme.0 (13072) Terminated (ADAPTER_REQUESTED_TERMINATION): Processing complete 2023-10-20 18:13:59.354 - debug: acme.0 (13072) Shutdown... 
2023-10-20 18:14:00.889 - info: host.Raspi4 instance system.adapter.acme.0 terminated with code 11 (ADAPTER_REQUESTED_TERMINATION) 2023-10-20 18:14:02.148 - info: host.Raspi4 instance scheduled system.adapter.acme.0 0 22 * * * 2023-10-20 18:14:02.186 - info: host.Raspi4 instance system.adapter.acme.0 started with pid 13528 2023-10-20 18:14:07.973 - debug: acme.0 (13528) Redis Objects: Use Redis connection: 192.168.xxx.yyy:9001 2023-10-20 18:14:08.122 - debug: acme.0 (13528) Objects client ready ... initialize now 2023-10-20 18:14:08.129 - debug: acme.0 (13528) Objects create System PubSub Client 2023-10-20 18:14:08.133 - debug: acme.0 (13528) Objects create User PubSub Client 2023-10-20 18:14:08.378 - debug: acme.0 (13528) Objects client initialize lua scripts 2023-10-20 18:14:08.397 - debug: acme.0 (13528) Objects connected to redis: 192.168.xxx.yyy:9001 2023-10-20 18:14:08.539 - debug: acme.0 (13528) Redis States: Use Redis connection: 192.168.xxx.yyy:9000 2023-10-20 18:14:08.604 - debug: acme.0 (13528) States create System PubSub Client 2023-10-20 18:14:08.608 - debug: acme.0 (13528) States create User PubSub Client 2023-10-20 18:14:08.736 - debug: acme.0 (13528) States connected to redis: 192.168.xxx.yyy:9000 2023-10-20 18:14:09.593 - info: acme.0 (13528) starting. Version 0.1.0 in /opt/iobroker/node_modules/iobroker.acme, node: v16.20.0, js-controller: 5.0.12 2023-10-20 18:14:09.681 - debug: acme.0 (13528) config: {"maintainerEmail":"xyz@abc.de","useStaging":false,"http01Active":true,"port":80,"bind":"0.0.0.0","dns01Active":false,"dns01Module":"acme-dns-01-dnsimple","dns01OapiUser":"","dns01OapiKey":"","dns01OclientIp":"","dns01Okey":"","dns01Osecret":"","dns01Otoken":"","dns01Ousername":"","dns01OverifyPropagation":false,"dns01PpropagationDelay":120000,"collections":[{"id":"TestCert","commonName":"xxx.hopto.org","altNames":""}]} 2023-10-20 18:14:10.158 - info: host.Raspi4 "system.adapter.acme.0" disabled 2023-10-20 18:14:10.159 - info: host.Raspi4 stopInstance system.adapter.acme.0 (force=false, process=true) 2023-10-20 18:14:10.160 - info: host.Raspi4 stopInstance canceled schedule system.adapter.acme.0 During my tests for #43 I'm also getting this. Using staging server. Put some logging in the HTTP challenge server and it appears to be functioning correctly. Maybe this is a timing issue with the CA? @Tottback is this now working for you? The restart loop should be fixed in main branch so would be good if you can install direct from Github and let us know. Hello, I'm trying the ACME mainline version from Github and it seems to work fine now. Thanks. ACME is only activated one time as defined by the cronjob. Remark: According to log it seems there is one shutdown too much. "warn: acme.0 (xx) Shutdown called but nothing to do" 2023-11-13 22:00:04.275 - info: host.Raspi4 instance system.adapter.acme.0 started with pid 4367 2023-11-13 22:00:10.881 - debug: acme.0 (4367) Redis Objects: Use Redis connection: 192.xx:9001 2023-11-13 22:00:11.044 - debug: acme.0 (4367) Objects client ready ... 
initialize now 2023-11-13 22:00:11.051 - debug: acme.0 (4367) Objects create System PubSub Client 2023-11-13 22:00:11.055 - debug: acme.0 (4367) Objects create User PubSub Client 2023-11-13 22:00:11.291 - debug: acme.0 (4367) Objects client initialize lua scripts 2023-11-13 22:00:11.316 - debug: acme.0 (4367) Objects connected to redis: 192.xx:9001 2023-11-13 22:00:11.487 - debug: acme.0 (4367) Redis States: Use Redis connection: 192.xx:9000 2023-11-13 22:00:11.551 - debug: acme.0 (4367) States create System PubSub Client 2023-11-13 22:00:11.554 - debug: acme.0 (4367) States create User PubSub Client 2023-11-13 22:00:11.697 - debug: acme.0 (4367) States connected to redis: 192.xx:9000 2023-11-13 22:00:12.402 - info: acme.0 (4367) starting. Version 0.1.0 (non-npm: iobroker-community-adapters/ioBroker.acme) in /opt/iobroker/node_modules/iobroker.acme, node: v18.17.1, js-controller: 5.0.12 2023-11-13 22:00:12.494 - debug: acme.0 (4367) config: {"maintainerEmail":"xx@gmx.de","useStaging":false,"http01Active":true,"port":80,"bind":"0.0.0.0","dns01Active":false,"dns01Module":"acme-dns-01-dnsimple","dns01OapiUser":"","dns01OapiKey":"","dns01OclientIp":"","dns01Okey":"","dns01Osecret":"","dns01Otoken":"","dns01Ousername":"","dns01OverifyPropagation":false,"dns01PpropagationDelay":120000,"collections":[{"id":"xx","commonName":"xx.hopto.org","altNames":""}]} 2023-11-13 22:00:12.498 - debug: acme.0 (4367) Init http-01 challenge server 2023-11-13 22:00:12.507 - debug: acme.0 (4367) Using URL: https://acme-v02.api.letsencrypt.org/directory 2023-11-13 22:00:15.129 - debug: acme.0 (4367) Loaded existing ACME account: {"_id":"acme.0.account","type":"meta","common":{"name":"account","type":"json","role":"json"},"native":{"full":{"key":{"kty":"EC","crv":"P-256","x":"9NtKVZytQYM1CpIcrJcMp3j8BpfEDO2ms6h5-Q99DVg","y":"td0UNs8xvHXYVSCDUsijstVouw8g_yEF5_nl1wF6qqY","kid":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/121249004"},"contact":["mailto:xx@gmx.de"],"initialIp":"xx","createdAt":"2023-10-09T11:21:09.49813702Z","status":"valid"},"key":{"kty":"EC","crv":"P-256","d":"Ib2QJKnOMu0T6zSwQIoRMDLhLSlP7XOvHpqQ7Evlmgw","x":"9NtKVZytQYM1CpIcrJcMp3j8BpfEDO2ms6h5-Q99DVg","y":"td0UNs8xvHXYVSCDUsijstVouw8g_yEF5_nl1wF6qqY","kid":"Hn9goYdSW6lpV_u7WfUM6ENU2M_32YUGWapVX3l4UF8"}},"from":"system.adapter.acme.0","ts":1699909212383,"acl":{"object":1636,"owner":"system.user.admin","ownerGroup":"system.group.administrator"},"user":"system.user.admin"} 2023-11-13 22:00:15.133 - debug: acme.0 (4367) Collection: {"id":"xx","commonName":"xx.hopto.org","altNames":""} 2023-11-13 22:00:15.135 - debug: acme.0 (4367) domains: ["xx.hopto.org"] 2023-11-13 22:00:15.151 - debug: acme.0 (4367) Existing: xx: {"from":"acme.0","key":"-----BEGIN RSA PRIVATE KEY-----\yy.hopto.org"],"staging":false,"tsExpires":1704735656000} 2023-11-13 22:00:16.581 - debug: acme.0 (4367) Existing cert: {"publicModulus":"xx","publicExponent":"010001","subject":{"commonName":"yy.hopto.org"},"issuer":{"commonName":"R3","countryName":"US","organizationName":"Let's Encrypt"},"serial":"0477A0A56850B9F01242721200CC036FBD01","notBefore":"Oct 10 17:40:57 2023 GMT","notAfter":"Jan 8 17:40:56 2024 GMT","altNames":["xx.hopto.org"],"ocspList":["http://r3.o.lencr.org"]} 2023-11-13 22:00:16.583 - debug: acme.0 (4367) Collection xx certificate already looks good 2023-11-13 22:00:16.628 - debug: acme.0 (4367) existingCollectionIds: ["xx"] 2023-11-13 22:00:16.631 - debug: acme.0 (4367) Shutdown... 
2023-11-13 22:00:16.632 - warn: acme.0 (4367) Shutdown called but nothing to do 2023-11-13 22:00:16.634 - debug: acme.0 (4367) No previously shutdown adapters to restart Hello, I'm trying the ACME mainline version from Github and it seems to work fine now. Thanks. Good to know, thanks. Remark: According to log it seems there is one shutdown too much. "warn: acme.0 (xx) Shutdown called but nothing to do" This isn't a problem. The HTTP challenge server was initialised but as a certificate wasn't ordered on this run it didn't actually start listening on the defined port and therefore didn't need to do anything on shutdown. Expected behaviour. OK. Seems that "warn" is then maybe not the appropriate level for that message, if it's no problem and expected behavior.
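To make the behavior above concrete, here is a minimal sketch of the shutdown guard being described. This is an illustration only, not the adapter's actual code; the names `challengeServer` and `onUnload` are assumptions.

```js
// Hedged sketch of the shutdown path discussed above (illustrative names only).
// The challenge server object is created during init, but it only starts
// listening on the configured port when a certificate is actually ordered,
// so an unload right after a no-op run finds nothing to close.
class AcmeAdapterSketch {
    async onUnload(callback) {
        this.log.debug('Shutdown...');
        if (!this.challengeServer || !this.challengeServer.listening) {
            // Corresponds to the "Shutdown called but nothing to do" message
            this.log.debug('Shutdown called but nothing to do');
        } else {
            await new Promise((resolve) => this.challengeServer.close(resolve));
        }
        callback();
    }
}
```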
gharchive/issue
2023-09-21T19:57:59
2025-04-01T04:34:36.107221
{ "authors": [ "Tottback", "mcm1957", "njdsih", "raintonr" ], "repo": "iobroker-community-adapters/ioBroker.acme", "url": "https://github.com/iobroker-community-adapters/ioBroker.acme/issues/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2482206642
Please consider fixing issues detected by repository checker

Notification from ioBroker Check and Service Bot

Dear adapter developer,

I'm the ioBroker Check and Service Bot, an automated tool processing routine tasks for the ioBroker infrastructure. I have recently checked the repository for your adapter opi for common errors and appropriate suggestions to keep this adapter up to date. This check is based on the current head revisions (master / main branch) of the adapter repository. Please see the result of the check below.

ioBroker.opi

ERRORS:
- [ ] :heavy_exclamation_mark: [E019] Invalid repository URL in package.json: https://github.com/ioBroker/ioBroker.opi. Expected: git@github.com:iobroker-community-adapters/ioBroker.opi.git or https://github.com/iobroker-community-adapters/ioBroker.opi.git
- [ ] :heavy_exclamation_mark: [E032] No dependency declared for @iobroker/adapter-core. Please add "@iobroker/adapter-core":"3.1.6" to dependencies at package.json
- [ ] :heavy_exclamation_mark: [E036] @iobroker/testing 4.1.1 specified. 4.1.3 is required as minimum, 4.1.3 is recommended. Please update devDependencies at package.json
- [ ] :heavy_exclamation_mark: [E150] No "common.connectionType" found in io-package.json
- [ ] :heavy_exclamation_mark: [E152] No "common.dataSource" found in io-package.json
- [ ] :heavy_exclamation_mark: [E204] Version "0.0.1" listed at common.news at io-package.json does not exist at NPM. Please remove from news section.
- [ ] :heavy_exclamation_mark: [E606] Current adapter version 0.1.2 not found in README.md
- [ ] :heavy_exclamation_mark: [E802] No topics found in the repository. Please go to "https://github.com/iobroker-community-adapters/ioBroker.opi", press the settings button beside the about title and add some topics.
- [ ] :heavy_exclamation_mark: [E852] .npmignore not found

WARNINGS:
- [ ] :eyes: [W105] Missing suggested translation into uk of "common.titleLang" in io-package.json.
- [ ] :eyes: [W109] Missing suggested translation into uk of "common.desc" in io-package.json.
- [ ] :eyes: [W113] Adapter should support compact mode
- [ ] :eyes: [W145] Missing suggested translation into uk of some "common.news" in io-package.json.
- [ ] :eyes: [W170] "common.keywords" should not contain "iobroker, adapter, smart home" io-package.json
- [ ] :eyes: [W184] "common.main" is deprecated and ignored. Please remove from io-package.json. Use "main" at package.json instead.
- [ ] :eyes: [W184] "common.materialize" is deprecated for admin >= 5 at io-package.json. Please use property "adminUI".
- [ ] :eyes: [W504] setInterval found in "main.js", but no clearInterval detected

SUGGESTIONS:
- [ ] :pushpin: [S522] Please consider migrating to admin 5 UI (jsonConfig).

Please review the issues reported and consider fixing them as soon as appropriate. Errors reported by the repository checker should be fixed as soon as possible. Some of them require a new release to be considered fixed. Please note that errors reported by the checker might be considered a blocking point for future updates to the stable repository.

Warnings reported by the repository checker should be reviewed. While some warnings can be ignored for good reasons or by a dedicated decision of the developer, most warnings should be fixed as soon as appropriate.

Suggestions reported by the repository checker should be reviewed. Suggestions can be ignored by a decision of the developer, but they are reported as a hint to use a configuration which might become required in the future or at least is used by most adapters. Suggestions are always optional to follow.

You may start a new check at any time by adding the following comment to this issue:

@iobroker-bot recheck

Please note that I (and the server at GitHub) always have plenty of work to do, so it may take up to 30 minutes until you see a reaction. I will drop a comment here as soon as I start processing.

Feel free to contact me (@iobroker-bot) if you have any questions or feel that an issue is incorrectly flagged.

And THANKS A LOT for maintaining this adapter from me and all users. Let's work together for the best user experience.

your ioBroker Check and Service Bot

@mcm1957 for evidence

Last update at Fri, 06 Sep 2024 09:42:10 GMT

Issue outdated due to RECREATE request. Follow-up issue #133 has been created. This issue will be closed.

your ioBroker Check and Service Bot
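For anyone picking this up: the first three errors translate into small package.json edits. Below is a sketch of just the affected fragments, with the values taken verbatim from the findings above; the rest of the file is omitted.

```json
{
  "repository": {
    "type": "git",
    "url": "https://github.com/iobroker-community-adapters/ioBroker.opi.git"
  },
  "dependencies": {
    "@iobroker/adapter-core": "3.1.6"
  },
  "devDependencies": {
    "@iobroker/testing": "4.1.3"
  }
}
```

This covers [E019], [E032], and [E036]; the io-package.json findings ([E150], [E152], [E204]) need separate edits there.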
gharchive/issue
2024-08-23T04:11:51
2025-04-01T04:34:36.127445
{ "authors": [ "ioBroker-Bot" ], "repo": "iobroker-community-adapters/ioBroker.opi", "url": "https://github.com/iobroker-community-adapters/ioBroker.opi/issues/130", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
976320343
2FA Code required / bug?

Hi, I have been fighting with the iobroker.synology adapter since my update from DSM 6 to DSM 7. I tried deleting the 1.0.0 stable version and installing the newest version, whose release notes mention a "dsm7 fix". In debug mode I always get this error:

*** ERROR : src: *sendPolling syno[dsm][getPollingData] code: 403 message: 2-step verification code required

The correct 2FA code is stored in the instance settings, yet it is checked approximately 10 times?!

please test with 1.1.3

works now - many thanks!
gharchive/issue
2021-08-22T08:33:38
2025-04-01T04:34:36.130629
{ "authors": [ "MeisterTR", "nils50122" ], "repo": "iobroker-community-adapters/ioBroker.synology", "url": "https://github.com/iobroker-community-adapters/ioBroker.synology/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2217236598
No connection

My system has two heat pumps with ISM7 modules on the same network. I can access both heat pumps via Wolf Smartset. In ioBroker I installed the Wolf Smartset adapter (1.1.1), but I don't get a connection. I also tried installing the adapter as version 1.0.0 first; unfortunately, that didn't work either. Then I removed one ISM7 from the system, also without success. What am I doing wrong?

ioBroker v6.13.16, Wolf Smartset 1.1.1 and 1.0.0

Same here: I have been trying to set up the adapter for several days, so far without success; the error messages are identical.

Installation: Wolf CHA-07 with ISM7 module (via Ethernet in the local LAN), Portal and eBus LEDs are on, Wolf SmartSet app is set up and works.

SystemInfo (Docker container on Synology NAS):
- Adapter version: 1.1.1
- Platform: docker (official image - v9.1.2)
- Operating system: linux
- Architecture: x64
- Node.js: v18.20.1
- NPM: 10.5.1
- Admin: v6.13.16

Downgrading the adapter to 1.0.0 and upgrading again didn't help either. With version 1.0.0 I also cannot select a device. Attached: a screenshot of the log of the last action and the complete log.

iobroker-wolf-smartset.log

Found a solution for myself: see issue #304

fixed at 2.1.1
gharchive/issue
2024-03-31T23:01:19
2025-04-01T04:34:36.135625
{ "authors": [ "flingo64", "mcm1957", "pp222NOH" ], "repo": "iobroker-community-adapters/ioBroker.wolf-smartset", "url": "https://github.com/iobroker-community-adapters/ioBroker.wolf-smartset/issues/330", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
244578531
custom webpack config not loaded using --prod

Short description of the problem:

Hello, I am using package.json to define a custom webpack config file via ionic_webpack, and it is not called/loaded when building with the --prod flag. As a workaround I had to downgrade ionic-app-scripts to 1.2.5 to make it work again.

What behavior are you expecting?

The custom webpack file to be called/loaded, with aliases working the same way as they do without the --prod flag.

Steps to reproduce:

setup package.json

"config": {
  "ionic_webpack": "./config/webpack-extension.config.js"
},

webpack-extension.config.js

let useDefaultConfig = require('@ionic/app-scripts/config/webpack.config.js')
module.exports = function () {
  console.log('process.env.IONIC_ENV: ', process.env.IONIC_ENV) // without using --prod, this logs 'process.env.IONIC_ENV: dev' into console
  useDefaultConfig.resolve.alias = {
    "@app/env": path.resolve(__dirname + '/../src/environments/env.' + process.env.IONIC_ENV + '.ts'),
  };
  return useDefaultConfig;
}

environment.ts

import { isDEV_, GET_HOST_, } from '@app/env'

build with prod

ionic cordova build android --device --prod

Which @ionic/app-scripts version are you using? Tested with 2.0.1, 1.3.12, 1.3.7, 1.3.4; finally 1.2.5 did work well.

Other information: (e.g. stacktraces, related issues, suggestions how to fix, stackoverflow links, forum links, etc)

console:

ionic cordova build android --device --prod
Running app-scripts build: --prod --iscordovaserve --externalIpRequired --nobrowser
[08:14:24] build prod started ...
[08:14:24] clean started ...
[08:14:24] clean finished in 2 ms
[08:14:24] copy started ...
[08:14:24] ngc started ...
[08:14:48] ngc finished in 24.13 s
[08:14:48] preprocess started ...
[08:14:48] deeplinks started ...
[08:14:48] deeplinks finished in 27 ms
[08:14:48] optimization started ...
[08:14:57] copy finished in 32.90 s
[WARN] Error occurred during command execution from a CLI plugin (@ionic/cli-plugin-cordova). Your plugins may be out of date.

Error: ./src/environments/environment.js
Module not found: Error: Can't resolve '@app/env' in '/Users/myuser/myproject/client/src/environments'
resolve '@app/env' in '/Users/myuser/myproject/client/src/environments'
Parsed request is a module
using description file: /Users/myuser/myproject/client/package.json (relative path: ./src/environments)
Field 'browser' doesn't contain a valid alias configuration
after using description file: /Users/myuser/myproject/client/package.json (relative path: ./src/environments)
resolve as module
/Users/myuser/myproject/client/src/environments/node_modules doesn't exist or is not a directory
/Users/myuser/myproject/client/src/node_modules doesn't exist or is not a directory
/Users/myuser/myproject/node_modules doesn't exist or is not a directory
/Users/myuser/node_modules doesn't exist or is not a directory
/Users/node_modules doesn't exist or is not a directory
/node_modules doesn't exist or is not a directory
looking for modules in /Users/myuser/myproject/client/node_modules
using description file: /Users/myuser/myproject/client/package.json (relative path: ./node_modules)
Field 'browser' doesn't contain a valid alias configuration
after using description file: /Users/myuser/myproject/client/package.json (relative path: ./node_modules)
using description file: /Users/myuser/myproject/client/package.json (relative path: ./node_modules/@app/env)
no extension
Field 'browser' doesn't contain a valid alias configuration
/Users/myuser/myproject/client/node_modules/@app/env doesn't exist
.js
Field 'browser' doesn't contain a valid alias configuration
/Users/myuser/myproject/client/node_modules/@app/env.js doesn't exist
.ts
Field 'browser' doesn't contain a valid alias configuration
/Users/myuser/myproject/client/node_modules/@app/env.ts doesn't exist
as directory
/Users/myuser/myproject/client/node_modules/@app/env doesn't exist
[/Users/myuser/myproject/client/src/environments/node_modules]
[/Users/myuser/myproject/client/src/node_modules]
[/Users/myuser/myproject/node_modules]
[/Users/myuser/node_modules]
[/Users/node_modules]
[/node_modules]
[/Users/myuser/myproject/client/node_modules/@app/env]
[/Users/myuser/myproject/client/node_modules/@app/env.js]
[/Users/myuser/myproject/client/node_modules/@app/env.ts]
[/Users/myuser/myproject/client/node_modules/@app/env]
@ ./src/environments/environment.js 1:0-62
@ ./src/app/main.ts
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! myproject@ build-android-prod: `node config/setMobileExports && ionic cordova build android --device --prod`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the myproject@ build-android-prod script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/myuser/.npm/_logs/2017-07-21T06_15_04_196Z-debug.log

additional info from debug.log

13 verbose stack Error: myproject@ build-android-prod: `node config/setMobileExports && ionic cordova build android --device --prod`
13 verbose stack Exit status 1
13 verbose stack at EventEmitter.<anonymous> (/Users/myuser/.npm-global/lib/node_modules/npm/lib/utils/lifecycle.js:289:16)
13 verbose stack at emitTwo (events.js:106:13)
13 verbose stack at EventEmitter.emit (events.js:191:7)
13 verbose stack at ChildProcess.<anonymous> (/Users/myuser/.npm-global/lib/node_modules/npm/lib/utils/spawn.js:40:14)
13 verbose stack at emitTwo (events.js:106:13)
13 verbose stack at ChildProcess.emit (events.js:191:7)
13 verbose stack at maybeClose (internal/child_process.js:877:16)
13 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
14 verbose pkgid myproject@
15 verbose cwd /Users/myuser/myproject/client
16 verbose Darwin 16.6.0
17 verbose argv "/usr/local/bin/node" "/Users/myuser/.npm-global/bin/npm" "run" "build-android-prod"
18 verbose node v6.9.5
19 verbose npm v5.3.0

Thank you

Oh, I was looking to do this too (extend the paths in webpack). Since it does not work with --prod, I suppose I should wait until it gets fixed.

I am having the same issue.

Same here

Any news on this one? I am still having the issue (using phaser-ce)

+1
+1
+1
+1
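For anyone still stuck on this: one workaround that reportedly helped on app-scripts 1.3+/2.x is to export an object keyed by environment instead of a bare function, so the --prod build resolves the prod entry. This is an untested sketch; the paths mirror the reporter's setup, and whether your app-scripts version reads the config this way is an assumption worth verifying.

```js
// config/webpack-extension.config.js (sketch, not verified on every release)
const path = require('path');
const useDefaultConfig = require('@ionic/app-scripts/config/webpack.config.js');

// Apply the alias to a given environment's config object.
function withEnvAlias(config, env) {
  config.resolve.alias = {
    '@app/env': path.resolve(__dirname, '../src/environments/env.' + env + '.ts'),
  };
  return config;
}

// Newer app-scripts versions export { dev, prod } from the default config;
// the fallback keeps this working if only a single config object is exported.
module.exports = {
  dev: withEnvAlias(useDefaultConfig.dev || useDefaultConfig, 'dev'),
  prod: withEnvAlias(useDefaultConfig.prod || useDefaultConfig, 'prod'),
};
```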
gharchive/issue
2017-07-21T06:44:47
2025-04-01T04:34:36.192597
{ "authors": [ "distante", "franbueno", "kinglionsoft", "l0ne", "luckylooke", "renanmzmendes", "samuelbirk", "tgensol" ], "repo": "ionic-team/ionic-app-scripts", "url": "https://github.com/ionic-team/ionic-app-scripts/issues/1137", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
461869877
bug: A hardware back button doesn't work if the current page is the only one in the DOM.

Bug Report

Ionic version:
[x] 4.4.2

Current behavior:
The hardware back button doesn't work if the current page is the only one in the DOM. It doesn't work even when defaultHref is set. The back button in the top left corner works as expected.

Expected behavior:
Clicking the hardware back button should work exactly the same way as the back button in the top left corner when a defaultHref is set. It should be possible to navigate forward right when the app starts or after the page has been refreshed, and then navigate back with the hardware back button.

Steps to reproduce:

Open the app on an Android device
Navigate to any page (home -> any page)
Refresh the app (I go to chrome://inspect, open the app and press F5)
The back button in the top left corner works, but the hardware back button doesn't

or

Open the app on an Android device
Navigate to any page just when the app starts (just after platform.ready())
The back button in the top left corner works, but the hardware back button doesn't

Ionic info:

Ionic:
   ionic (Ionic CLI) : 4.12.0 (C:\UnBackedUp_Data\Applications\nvm\v10.15.3\node_modules\ionic)
   Ionic Framework : not installed
   @angular-devkit/build-angular : 0.13.9
   @angular-devkit/schematics : 7.3.9
   @angular/cli : 7.3.9
   @ionic/angular-toolkit : 1.3.0

Cordova:
   cordova (Cordova CLI) : 9.0.0 (cordova-lib@9.0.1)
   Cordova Platforms : android 8.0.0
   Cordova Plugins : cordova-plugin-ionic-keyboard 2.1.3, cordova-plugin-ionic-webview 3.1.2, (and 21 other plugins)

System:
   NodeJS : v10.15.3 (C:\Program Files\nodejs\node.exe)
   npm : 6.4.1
   OS : Windows 10

Thanks for the issue! Are you trying this in a Cordova/Capacitor application, or in a web browser?

Hi. I'm trying it only in the Cordova app.

Thanks for the follow up. I believe this is expected behavior. If you were to open a new tab in Chrome (assume you can bypass the start page), the browser's back button would not be active since there is nothing to "navigate back to". The same goes for a Cordova application. When starting an application, there is nothing in the navigation history stack, so there is nothing to navigate back to. The defaultHref property on ion-back-button does not alter your navigation stack. Does this resolve your issue? Also, I understand that some users expect the app to exit when using the hardware back button when there is nothing to navigate back to. Are you referring to this instead?

Hi, sorry for the delay. I was on holidays. I don't believe that this is expected behavior. In my case, one of the details pages is being opened when the user clicks on a push notification. Then the user is not able to go back (details -> list -> home page) using the hardware back button. It is interesting because the back button in the top left corner works as expected (it navigates back to the list). From a technical point of view, I'm listening for the push-notification click event just after platform.ready:

this.platform.ready().then(() => {
  this.appCenterPush.addEventListener('notificationReceived').subscribe(() => {
    this.router.navigate(['detailsPage']);
  });
});

Thanks for the follow up. Can you provide a repo with the code required to reproduce this issue? What I am seeing is that when either refreshing the application or when navigating right after platform.ready, the ion-back-button does not appear. The hardware back button does let me go back, but that is because the browser/webview's internal state still has data from my previous session (before I refreshed).

Here is a repo with the code required to reproduce the issue: https://github.com/maciejkoch/ionic4-hardware-back-button-issue

Please run this code on Android using Cordova. At the beginning I'm simulating a click on the push notification, which opens a custom page:

this.platform.ready().then(() => {
  this.statusBar.styleDefault();
  this.splashScreen.hide();
  // I'm simulating user clicks on the push notification and then custom page is being opened
  this.router.navigate(['details']);
});

Thanks for the repo! I am getting a few errors on compilation. Can you resolve the errors in your test repo?

I didn't push all files.. sorry, my bad. You can check it now.

Any update?

Hey! I'm facing the same issue. Did you have any update? Thx

Hi everyone, there have been numerous back button updates since this issue was created. Does this issue still occur with the latest version of Ionic Framework?
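Until this is settled, a possible stop-gap is to register your own hardware back handler so it mirrors what ion-back-button's defaultHref does. A minimal Ionic 4 Angular sketch; the priority value and the fallback route are arbitrary choices, and you should confirm your @ionic/angular version exposes subscribeWithPriority:

```ts
// app.component.ts - hedged sketch, assuming @ionic/angular v4
import { Component } from '@angular/core';
import { Platform } from '@ionic/angular';
import { Router } from '@angular/router';

@Component({ selector: 'app-root', templateUrl: 'app.component.html' })
export class AppComponent {
  constructor(private platform: Platform, private router: Router) {
    this.platform.backButton.subscribeWithPriority(10, () => {
      if (window.history.length > 1) {
        // There is real history to pop: behave like the toolbar back button.
        window.history.back();
      } else {
        // Nothing to pop (fresh start / refresh): emulate defaultHref.
        this.router.navigate(['/list']); // hypothetical fallback route
      }
    });
  }
}
```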
gharchive/issue
2019-06-28T05:57:44
2025-04-01T04:34:36.257323
{ "authors": [ "alexbonhomme", "liamdebeasi", "maciejkoch" ], "repo": "ionic-team/ionic", "url": "https://github.com/ionic-team/ionic/issues/18651", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1162950046
Adding K. Voglis as an Academic Fellow

Related issue: closes #249
Related pull request in all_collections: ioniodi/all_collections#15
Collaborators who will comment on or review the changes (mention @ at least two): @john7665 @ApoLaz

Proposed changes
- Added K. Voglis to the _data/authors.yml file. On di.ionio only the name and the e-mail are available, so I added just those, together with the asset nophoto.jpg, which is also used for other professors.

Demo

Reminders
- [x] I opened an issue beforehand for good coordination of the project, which got the green light with the corresponding label
- [x] I updated the issueNo above with the number of the corresponding issue, so that it closes automatically when this request is accepted
- [x] I created a branch for the changes

@p17anto2 I think only your pull request in ioniodi/all_collections is needed, according to the guidelines:

> Most of the website's data, e.g. announcements, courses, professors, live in the all_collections submodule. The merge request can only be made where the file you are changing lives.

Yes, you're absolutely right. I arrived at this pull request via the hyperlink in your other pull request and, in excessive haste, commented without seeing your commit for authors.yml.
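For reviewers, a sketch of the shape of the entry added to _data/authors.yml. The keys and the email value are placeholders; the real entry should follow whatever field convention the existing entries in authors.yml already use:

```yaml
# Hypothetical shape only - copy the field names from an existing entry.
voglis:
  name: "K. Voglis"
  email: "placeholder@ionio.gr"   # actual address taken from di.ionio
  image: "nophoto.jpg"            # same asset used for other professors
```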
gharchive/pull-request
2022-03-08T17:51:56
2025-04-01T04:34:36.274909
{ "authors": [ "ApoLaz", "p17anto2" ], "repo": "ioniodi/sitegr", "url": "https://github.com/ioniodi/sitegr/pull/325", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
355961576
Incompatibility with Redux 4 in react-redux-subspace/redux-subspace

I am trying to use subspaced from react-redux-subspace and am currently getting an error:

...my-project-path/node_modules/redux-subspace/src/index.d.ts (34,21): Namespace '"...my-project-path/node_modules/redux/index"' has no exported member 'GenericStoreEnhancer'.

<Route path="something" component={subspaced((state) => state["module-namespace"], "module-namespace")(Component)}/>

My Setup

package | version(s)
-- | --
redux | 4.0.0
redux-subspace | 2.5.0
react-redux-subspace | 2.5.0

Not sure if this is already known and part of #87 or not, but if you are aware, feel free to just close this issue.

Yes, I was aware the types are not compatible, but I do not believe there was ever an issue to capture it. They have been fixed in master, as part of the redux-observable v1 compatibility changes, but have not been released yet, as not all packages have been confirmed to be compatible with redux 4 yet. I'll keep this open until a new version goes out, just in case anyone else comes looking.

Fixed in #97
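Until a release with the fixed types ships, a stop-gap other than downgrading is to skip type-checking of declaration files, since the failing GenericStoreEnhancer reference lives in redux-subspace's bundled index.d.ts:

```json
// tsconfig.json - stop-gap only; remove once updated types are published.
// skipLibCheck suppresses errors raised inside .d.ts files (including
// node_modules), which is where this incompatibility surfaces.
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```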
gharchive/issue
2018-08-31T12:31:03
2025-04-01T04:34:36.279239
{ "authors": [ "carl-berg", "mpeyper" ], "repo": "ioof-holdings/redux-subspace", "url": "https://github.com/ioof-holdings/redux-subspace/issues/95", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2213946211
Forcing data for sandbox users

@ZacharyWills @KatherinePowell-NOAA Derrick would like a list of all data needed by prospective users, so let's use this issue to create that list.

- LiveOcean: all data available on the UW apogee server, accessed with a user account; Patrick helped me get set up.
- ECCOFS:
- NECOFS:
- SECOFS:
- GRTOFS: Sea Surface Height as boundary and forcing, but we want to transition to satellite altimetry, except that SECOFS might be too small of a model to use the altimetry data.
- HERR: the NOAA data tank has all the data he needs.

Input/output requirements:
- for each day the output size is 3-4 GB
- uses 1000-2000 CPUs
- runtime: completes in about 9 minutes
- about 1 million nodes
- multiple vertical levels, with up to 1 m resolution

From Chris Peternostro: a list of Data Providers that need to be available for sandbox modelers

NCEP:
https://nomads.ncep.noaa.gov/pub/data/nccf/com/nosofs/prod/ (24 hours)
https://nomads.ncep.noaa.gov/pub/data/nccf/com/nosofs/v3.5/ (or specific version)
https://ftp.ncep.noaa.gov/data/nccf/com/nosofs/prod/ (24 hours)

CO-OPS THREDDS:
https://opendap.co-ops.nos.noaa.gov/thredds/catalog/catalog.html

NODD:
https://registry.opendata.aws/noaa-ofs/
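Since the NODD entry is a public Open Data registry bucket, prospective sandbox users should be able to pull OFS output anonymously. A sketch follows; the bucket name is an assumption and should be confirmed on the registry page above:

```bash
# Anonymous listing/download from the NOAA OFS bucket on AWS (hedged:
# confirm the bucket name at https://registry.opendata.aws/noaa-ofs/).
aws s3 ls --no-sign-request s3://noaa-ofs-pds/
aws s3 cp --no-sign-request s3://noaa-ofs-pds/path/to/file.nc .  # hypothetical key
```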
gharchive/issue
2024-03-28T19:14:17
2025-04-01T04:34:36.284893
{ "authors": [ "KatherinePowell-NOAA", "Michael-Lalime" ], "repo": "ioos/Cloud-Sandbox", "url": "https://github.com/ioos/Cloud-Sandbox/issues/77", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }