id (stringlengths 4–10) | text (stringlengths 4–2.14M) | source (stringclasses, 2 values) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (stringdate, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
487314804 | Emit error for empty fork stmt
Purpose
Fixes https://github.com/ballerina-platform/ballerina-lang/issues/18025
Approach
Describe how you are implementing the solutions along with the design details.
Samples
Provide high-level details about the samples related to this feature.
Remarks
List any other known issues, related PRs, TODO items, or any other notes related to the PR.
Check List
[ ] Read the Contributing Guide
[ ] Required Balo version update
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
[ ] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Can't we enforce this in the grammar instead?
This is to avoid the conflict from Hasitha's PR
| gharchive/pull-request | 2019-08-30T06:40:11 | 2025-04-01T06:38:01.014344 | {
"authors": [
"MaryamZi",
"rdhananjaya"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/18354",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
503302200 | Fix multiple sync sends to an errored worker
Purpose
Fixes #19115
Approach
Describe how you are implementing the solutions along with the design details.
Samples
Provide high-level details about the samples related to this feature.
Remarks
List any other known issues, related PRs, TODO items, or any other notes related to the PR.
Check List
[ ] Read the Contributing Guide
[ ] Required Balo version update
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
[ ] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Codecov Report
Merging #19321 into ballerina-1.0.x will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## ballerina-1.0.x #19321 +/- ##
================================================
Coverage 15.73% 15.73%
================================================
Files 47 47
Lines 1265 1265
Branches 197 197
================================================
Hits 199 199
Misses 1053 1053
Partials 13 13
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8b93018...bb9fa55. Read the comment docs.
| gharchive/pull-request | 2019-10-07T08:26:00 | 2025-04-01T06:38:01.024296 | {
"authors": [
"codecov-io",
"vinok88"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/19321",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
508864965 | Update performance test results for Ballerina v1.0.x
Purpose
This PR updates the performance test results for Ballerina v1.0.x
Check List
[x] Read the Contributing Guide
[ ] Required Balo version update
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
[ ] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Codecov Report
Merging #19464 into ballerina-1.0.x will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## ballerina-1.0.x #19464 +/- ##
================================================
Coverage 15.73% 15.73%
================================================
Files 47 47
Lines 1265 1265
Branches 197 197
================================================
Hits 199 199
Misses 1053 1053
Partials 13 13
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 69bc949...4a2c7d3. Read the comment docs.
Latest results: https://github.com/ballerina-platform/ballerina-lang/pull/19515
| gharchive/pull-request | 2019-10-18T05:37:54 | 2025-04-01T06:38:01.034227 | {
"authors": [
"codecov-io",
"ldclakmal"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/19464",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
539442475 | Introduce BBE for OAuth2 inbound authentication
Purpose
This PR introduces a BBE for OAuth2 inbound authentication.
Additionally, this updates the description of the BBE related to JWT inbound auth.
Fixes #19765
Check List
[x] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
[ ] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Closing this with the duplicate https://github.com/ballerina-platform/ballerina-lang/pull/20447 created because of Travis failure here.
| gharchive/pull-request | 2019-12-18T04:21:05 | 2025-04-01T06:38:01.039275 | {
"authors": [
"ldclakmal"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/20441",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1351807634 | Fix type cast code action for numeric optional types
Purpose
$subject to suggest a type cast code action as a quick fix when the expected type is an optional numeric type.
Fixes #37465
Samples
Remarks
List any other known issues, related PRs, TODO items, or any other notes related to the PR.
Check List
[x] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[x] Added necessary tests
[x] Unit Tests
[ ] Spec Conformance Tests
[ ] Integration Tests
[ ] Ballerina By Example Tests
[x] Increased Test Coverage
[ ] Added necessary documentation
[ ] API documentation
[ ] Module documentation in Module.md files
[ ] Ballerina By Examples
Changes look good. BTW, do we need to handle cases like,
function test(int|1.0 num) { // or test(int|decimal num)
    float? num2 = num;
}
Yes we should cover this. +1
Created https://github.com/ballerina-platform/ballerina-lang/pull/38285 for master. Closing as not mandatory for 2.x
| gharchive/pull-request | 2022-08-26T06:22:43 | 2025-04-01T06:38:01.045427 | {
"authors": [
"malinthar"
],
"repo": "ballerina-platform/ballerina-lang",
"url": "https://github.com/ballerina-platform/ballerina-lang/pull/37533",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1055996292 | Tooling check for Random module
Description:
$Subject
Tested the tooling support in VS code against SL Beta4 RC3. Everything is rendered properly.
| gharchive/issue | 2021-11-17T11:06:19 | 2025-04-01T06:38:01.050847 | {
"authors": [
"MadhukaHarith92"
],
"repo": "ballerina-platform/ballerina-standard-library",
"url": "https://github.com/ballerina-platform/ballerina-standard-library/issues/2401",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
919758885 | Improve dependency graph generator
$subject since it is recommended to only use platform-level URLs to identify package repositories. Because of this, we cannot identify dependencies based on the repository URL.
The script is improved to parse the gradle.properties file to identify the version key of each module's dependencies.
The default version_key is stdlib + capitalised artifact name + Version.
This can be overridden in the modules.json.
Improvement because of https://github.com/ballerina-platform/ballerina-release/issues/547
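For illustration, a hypothetical Python sketch of the derivation described above (the function names and sample properties are assumptions, not the actual script):
import re

def default_version_key(artifact_name: str) -> str:
    # "stdlib" + capitalised artifact name + "Version", e.g. "http" -> "stdlibHttpVersion"
    parts = re.split(r"[-_.]", artifact_name)
    return "stdlib" + "".join(p.capitalize() for p in parts) + "Version"

def parse_gradle_properties(text: str) -> dict:
    # gradle.properties is a plain key=value file; skip blanks and comments
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            props[key.strip()] = value.strip()
    return props

props = parse_gradle_properties("stdlibHttpVersion=2.0.1\nstdlibOauth2Version=1.1.0\n")
print(props.get(default_version_key("http")))    # 2.0.1
print(props.get(default_version_key("oauth2")))  # 1.1.0 -- only works if the repo follows the convention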
We should ask the repo owners to clean up the gradle.properties file after this. I think there are places where gradle.properties file has versions that aren't used in the build, and some of the gradle.properties files may not follow the naming convention too. (Especially we have to check the cases like OAuth2).
@ThisaruGuruge Tracked it here, https://github.com/ballerina-platform/ballerina-release/issues/547#issuecomment-860365346
| gharchive/pull-request | 2021-06-13T09:38:58 | 2025-04-01T06:38:01.054062 | {
"authors": [
"ThisaruGuruge",
"niveathika"
],
"repo": "ballerina-platform/ballerina-standard-library",
"url": "https://github.com/ballerina-platform/ballerina-standard-library/pull/1454",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1633271843 | Update Google Vision connector category
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
This is to move the Google Cloud Vision connector to the category AI/Google as part of an effort to move all AI connectors to a new category.
One line release note:
This will move the Google Cloud Vision connector to the category AI/Google.
Type of change
Please delete options that are not relevant.
[x] This change requires a documentation update
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2023-03-21T05:48:58 | 2025-04-01T06:38:01.071116 | {
"authors": [
"AathmanT",
"CLAassistant"
],
"repo": "ballerina-platform/openapi-connectors",
"url": "https://github.com/ballerina-platform/openapi-connectors/pull/753",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
137348642 | Z-wave battery sensor (from thermostat)
Home Assistant release (hass --version):
0.14
Python release (python3 --version):
3.4
Component/platform:
z-wave
Description of problem:
no sensors for my batteries of my radiator thermostats
Expected:
see the battery level of my stellaz radiator thermostats
Additional info:
6-02-29 20:10:21 ERROR (Thread-8) [homeassistant.components.light] Error while setting up platform zwave
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/homeassistant/helpers/entity_component.py", line 97, in _setup_platform
    discovery_info)
  File "/usr/local/lib/python3.4/dist-packages/homeassistant/components/light/zwave.py", line 36, in setup_platform
    add_devices([ZwaveDimmer(value)])
  File "/usr/local/lib/python3.4/dist-packages/homeassistant/helpers/entity_component.py", line 146, in add_entities
    if self.component.add_entity(entity):
  File "/usr/local/lib/python3.4/dist-packages/homeassistant/helpers/entity_component.py", line 118, in add_entity
    entity.update_ha_state()
  File "/usr/local/lib/python3.4/dist-packages/homeassistant/helpers/entity.py", line 161, in update_ha_state
    device_attr = self.device_state_attributes
  File "/usr/local/lib/python3.4/dist-packages/homeassistant/components/zwave.py", line 323, in device_state_attributes
    battery_level = self._value.node.get_battery_level()
  File "/usr/local/lib/python3.4/dist-packages/openzwave-0.3.0b8-py3.4.egg/openzwave/command.py", line 284, in get_battery_level
    for val in self.get_battery_levels():
  File "/usr/local/lib/python3.4/dist-packages/openzwave-0.3.0b8-py3.4.egg/openzwave/command.py", line 306, in get_battery_levels
    type='Byte', readonly=True, writeonly=False)
  File "/usr/local/lib/python3.4/dist-packages/openzwave-0.3.0b8-py3.4.egg/openzwave/node.py", line 403, in get_values
    for value in self.values:
RuntimeError: dictionary changed size during iteration
This seems to be an issue with Python Open Z-Wave iterating over a dictionary that is being changed mid-iteration. Best to report it there.
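For illustration, a minimal, standalone Python reproduction of that failure mode and the usual fix (iterating over a snapshot of the keys); this is not the openzwave code itself:

def iterate_unsafely(d):
    for key in d:          # mutating d inside this loop raises RuntimeError
        d["copy_of_" + key] = d[key]

def iterate_safely(d):
    for key in list(d):    # list(d) snapshots the keys, so mutation is safe
        d["copy_of_" + key] = d[key]

values = {"battery": 1, "temperature": 2}
iterate_safely(values)     # works: values now also holds the "copy_of_" keys

try:
    iterate_unsafely({"battery": 1, "temperature": 2})
except RuntimeError as err:
    print(err)             # dictionary changed size during iteration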
| gharchive/issue | 2016-02-29T19:32:42 | 2025-04-01T06:38:01.074555 | {
"authors": [
"balloob",
"luxus"
],
"repo": "balloob/home-assistant",
"url": "https://github.com/balloob/home-assistant/issues/1448",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
143731952 | Support keep-alive in REST API
Home Assistant release (hass --version):
0.17.0.dev0
Python release (python3 --version):
3.4.2
Component/platform:
http
Description of problem:
I have a bunch of energy sensors of the same kind as described in this blog post: https://batilanblog.wordpress.com/2015/02/17/using-ec3k-with-raspberry-pi/
The ec3k package uses gnu-radio, which unfortunately is not Python 3-compatible, so it was not possible to integrate this package as its own HA component.
Instead I use the REST API to feed HA with events (current power, on/off state, total amount of energy used) received from the sensors by a small radio listener script.
I noticed that HA seems to drop the connection after each POST (I'm posting using the requests library, which should support keep-alive if supported by the server).
Minor suggestion: support HTTP keep-alive to let clients re-use connections (or make it a configurable option). Currently my sensors feed HA with updates about once per second, and there really is no need to re-open a new connection for each update.
Expected:
Allow the client to re-use the connection.
Problem-relevant configuration.yaml entries and steps to reproduce:
Traceback (if applicable):
Additional info:
We're using the built-in Python HTTP server. I don't think that it supports it.
it seems to be supported: https://docs.python.org/3/library/http.server.html#http.server.BaseHTTPRequestHandler.protocol_version
protocol_version
This specifies the HTTP protocol version used in responses. If set to 'HTTP/1.1', the server will permit HTTP persistent connections; however, your server must then include an accurate Content-Length header (using send_header()) in all of its responses to clients. For backwards compatibility, the setting defaults to 'HTTP/1.0'.
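For illustration, a minimal sketch of a handler honouring those two requirements (HTTP/1.1 plus an accurate Content-Length); this is a standalone example, not the Home Assistant implementation:

from http.server import BaseHTTPRequestHandler, HTTPServer

class KeepAliveHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables persistent connections

    def do_GET(self):
        body = b"ok\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))  # required for keep-alive
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8123), KeepAliveHandler).serve_forever()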
I can confirm that keep-alive/connection re-use seems to work fine with the REST-API by setting protocol_version='HTTP/1.1' as in https://github.com/balloob/home-assistant/pull/1624
Closed by https://github.com/balloob/home-assistant/pull/1624 thanks!
strange, after updating to latest dev today, the REST API seems to drop HTTP connections again. I will investigate some more.
The command
curl -4Iv http://localhost:8123/ http://localhost:8123 2>&1 | grep connection
should not return Closing connection 0, instead it should reuse the connection (Re-using existing connection).
In fact it seems to reply with HTTP 1.0:
* Connected to localhost (127.0.0.1) port 8123 (#1)
> HEAD / HTTP/1.1
> User-Agent: curl/7.38.0
> Host: localhost:8123
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 405 Method Not Allowed
HTTP/1.0 405 Method Not Allowed
< Server: HomeAssistant/1.0 Python/3.4.2
Is this only for me or can anybody reproduce this?
Ah,
self.protocol_version = 'HTTP/1.1'
should be moved to the _handle_request method.
I guess this was what I had in my local branch when confirming https://github.com/balloob/home-assistant/pull/1624 in my comment https://github.com/balloob/home-assistant/issues/1619#issuecomment-202109737 ... sorry for that. I'll see if I can add a test case for this.
Alrighty, that's too bad. If you can open a PR to fix this, bueno
Sorry for the delay, had some vacation and got some tan.
Added PR for upgrade to HTTP/1.1.
Created two separate PRs so the test cases can be merged first to verify that they indeed do fail, before the fix is applied.
However, HA still seems to drop the connection anyway, i.e. no keepalive
> curl -4Iv http://localhost:8123/ http://localhost:8123/ 2>&1 | grep -i connection
* Connection #0 to host localhost left intact
* Connection 0 seems to be dead!
* Closing connection 0
* Connection #1 to host localhost left intact
I can confirm this is no longer an issue with the new HTTP impementation (i.e. keepalive works as expected). Thank you!
| gharchive/issue | 2016-03-26T17:18:52 | 2025-04-01T06:38:01.092993 | {
"authors": [
"balloob",
"molobrakos"
],
"repo": "balloob/home-assistant",
"url": "https://github.com/balloob/home-assistant/issues/1619",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1214339579 | Slider Component
Slider component background width calculation is off
When giving the Slider component a max value of 20000, the background styles are calculated based on the step size, e.g. when the step is 5000, the percentage of the background is 5000% (see the sketch below for the expected calculation).
Also, the ticks on the slider are not available. The flag hasTicks is used, but does nothing.
Working in Chrome on macOS
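For reference, a small sketch of what the fill calculation presumably should look like (the value relative to the min/max range, assuming a minimum of 0), rather than using the raw step:

def fill_percentage(value: float, minimum: float, maximum: float) -> float:
    # background width tracks the selected value within the range
    return (value - minimum) / (maximum - minimum) * 100

print(fill_percentage(5000, 0, 20000))  # 25.0
# Treating the raw step (5000) as a percentage would give the reported "5000%".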
Steps to reproduce the issue
Add a BalSlider component
Add a max value of 20000
Add a step value of 5000
Run page and select a value.
Additional information
Screenshots or code
Notes
https://github.com/baloise/design-system/pull/603
| gharchive/issue | 2022-04-25T11:05:30 | 2025-04-01T06:38:01.097089 | {
"authors": [
"ThomasSeyssensTPO",
"hirsch88"
],
"repo": "baloise/design-system",
"url": "https://github.com/baloise/design-system/issues/602",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1245035077 | Datepicker - Months not updated when navigating backwards
Month in datepicker does not get updated
Detailed description
Currently, we are using the datepicker component for the SOB of SoBa (banking.baloise.ch). On the second page, the component can be accessed, and whenever the user tries to navigate backwards, the month will not get updated automatically. It will still display e.g. May instead of April.
However, forward navigation does not have this issue, and the month will get updated from May --> June, for example.
This issue can be observed on the SoBa site: www.banking.baloise.ch
Steps to reproduce the issue
Go to second page
Navigate to datepicker field and open it via mouse click
Navigate backwards by clicking on "<"
It can be observed that the description of the month in the header of this component does not get updated
The screenshot shows that the month is changed to April (30 days) but the month title is still May
moved issue
| gharchive/issue | 2022-05-23T11:42:06 | 2025-04-01T06:38:01.101225 | {
"authors": [
"hirsch88",
"jacnguymesoneer"
],
"repo": "baloise/design-system",
"url": "https://github.com/baloise/design-system/issues/618",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
891348114 | Add the client certificates to download at https://mtls.run/pt/sidecar
On this page there are the server certificates and there is a curl example, but the curl uses a client key/cert that is not available to download.
These files can be downloaded at https://mtls.run/pt/ambassador
Hi @tiagostutz! I would like to claim this issue.
Hi @cristiangutie 👋
Awesome! Thanks for your help.
I just assigned this to you. If you need any assistance, please let me know.
@tiagostutz I made a pull request (#12). Please review it.
| gharchive/issue | 2021-05-13T20:28:53 | 2025-04-01T06:38:01.134707 | {
"authors": [
"cristiangutie",
"tiagostutz"
],
"repo": "bancodobrasil/mtls-best-friend",
"url": "https://github.com/bancodobrasil/mtls-best-friend/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
668693540 | Replace styled-components with pure HTML tags
Currently we have some components using styled-components. Replace those with "vanilla" HTML tags.
No need to bring the styles over; we will tackle the style thing in another issue.
Hi there, @stokesy94 !! 👋🏻
Sure, just assigned that to you! Nice to have you on board. 😃
Hi Tiago,
I am still working on this thanks!
Matt
On Wednesday, August 12, 2020, 13:43, Tiago de Oliveira Stutz notifications@github.com wrote:
Hi @stokesy94 ! I hope you are doing well..!
I'd like to know whether you are working on this issue or if we could assign it to another contributor. If you are working on this, just let me know. If not, we could assign it to another contributor and create another one for you as soon as you want to.
What do you say?
Hi there @stokesy94 !
This issue is reaching 30 days and is getting kinda stale, and I checked that you still haven't forked the repo. I'm sorry but I have to unassign it from you because there are people looking for simple issues to contribute here and we have to keep the project pace.
If you want, I will gladly create another issue for you. ☕
Thanks!
@tiagostutz, can I take this up?
Hey @Bob160 ! How are you doing?
Sure, go ahead. Just assigned it to you! Nice!
🚀
Hi @tiagostutz . I'm doing very well, thank you! How are you doing? Waited for you to create the new issue for me as you said but decided to choose one on my own.
Let's get to work, shall we?
I'm doing great, thanks!
Nice move! I've been in a rush the last few weeks, that's why I couldn't create the issue... but now you are good to go!
Please let me know whether you need any assistance.
Hi @tiagostutz, can you please throw more light on what this project entails?
Thanks
Hi @tiagostutz hope all is well with you?
Still waiting for your response so we can get this issue sorted.
Hope to hear back from you soon.
Cheers
Hi there @Bob160 ! I hope you are doing well.!
Sorry for the delay... so, this project is about creating a tool that offers a less "frictional" choosing experience for scenarios in which the user may have lots of options. As you know, choosing between lots of options requires a heavy cognitive load and can sometimes be frustrating. This tool will offer websites - such as e-commerces - an alternative for that.
The background brief of this project can be found here: https://github.com/bancodobrasil/stop-analyzing/issues/2
If you have any further doubts, please let me know.!
hey @tiagostutz, I am doing well. How are you? Yes I can take this issue, can you please explain it to me. Also, is there some place I can contact you email or something. Thanks!!
I'm doing great, thanks!
Just assigned this issue you to you.
You can reach me at Telegram or Twitter. My user is @tiagostutz.
I would like to contribute to #51. How do I get started?
Hi @OluyemisiA !
You can put the project to run. Have you already followed the README instructions to try to put it to run?
@tiagostutz Yes, I followed the instructions and the project is running already
Ok. Now, you can perform the required adjustments and submit the PR. Need any assistance on that, or are you good to go?
I want to know if the replacement is a specific file or all the files?
Hi all,
Please can I be removed from this email chain?
Kind Regards,
Matt
Hi @stokesy94 I think I'm not able to do so as it is a preference configured by you. In the e-mail you receive there's a text at the end with a link to unsubscribe. The text is: "Reply to this email directly, view it on GitHub, or unsubscribe."
@OluyemisiA all files, please. And also remove the imports and the package.json dependency.
I can take this if you want! @tiagostutz
Hi there @poleselfg !! Welcome to stop-analyzing! Thanks for offering help.
I have just assigned it to you. If you still can tackle this issue, please go ahead.
| gharchive/issue | 2020-07-30T12:54:48 | 2025-04-01T06:38:01.149088 | {
"authors": [
"Bob160",
"OluyemisiA",
"poleselfg",
"priyanshu0405",
"stokesy94",
"tiagostutz"
],
"repo": "bancodobrasil/stop-analyzing-embed",
"url": "https://github.com/bancodobrasil/stop-analyzing-embed/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1096850777 | How can I disable the file explorer popping up after generation finishes?
Generator version: 3.5.1
@angryLid Enable the disableOpenDir option in GlobalConfig
Very sorry, I read the documentation a bit too hastily.
| gharchive/issue | 2022-01-08T06:41:11 | 2025-04-01T06:38:01.236728 | {
"authors": [
"angryLid",
"lanjerry"
],
"repo": "baomidou/generator",
"url": "https://github.com/baomidou/generator/issues/167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
815081815 | Banner appears twice at startup
Current version (required, otherwise the issue will not be handled)
3.4.1
How was this issue caused? (Make sure the latest version also has the problem before reporting!!!)
A custom sqlSessionFactory is configured:
public class MybatisPlusConfig {

    @Autowired
    private Environment env;

    @Autowired
    private DataSource dataSource;

    @Autowired
    private MybatisPlusProperties properties;

    @Autowired(required = false)
    private Interceptor[] interceptors;

    @Autowired(required = false)
    private DatabaseIdProvider databaseIdProvider;

    @Autowired
    private ResourceLoader resourceLoader = new DefaultResourceLoader();

    /**
     * Pagination plugin (legacy)
     */
    // @Bean
    // public PaginationInterceptor paginationInterceptor()
    // {
    //     return new PaginationInterceptor();
    // }

    /**
     * Optimistic locking plugin
     *
     * @return
     */
    // @Bean
    // public OptimisticLockerInterceptor optimisticLockerInterceptor() {
    //     return new OptimisticLockerInterceptor();
    // }

    /**
     * New pagination plugin. The L1 and L2 caches follow the MyBatis rules;
     * MybatisConfiguration#useDeprecatedExecutor = false must be set to avoid cache problems.
     */
    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        // Optimistic locking plugin
        interceptor.addInnerInterceptor(new OptimisticLockerInnerInterceptor());
        // Pagination plugin
        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
        return interceptor;
    }

    @Bean
    public ConfigurationCustomizer configurationCustomizer() {
        return configuration -> configuration.setUseDeprecatedExecutor(false);
    }

    /**
     * Everything here reuses the resources already auto-loaded by mybatis-autoconfigure.
     * Nothing is specified manually, keeping this in sync with the mybatis-boot configuration file.
     *
     * @return
     */
    @Bean(name = "sqlSessionFactory")
    public MybatisSqlSessionFactoryBean mybatisSqlSessionFactoryBean() {
        log.info("Initializing SqlSessionFactory");
        log.info("dynamicDataSource:{}", dataSource);
        log.info("mybatisPlus Properties:{}", this.properties);
        MybatisSqlSessionFactoryBean mybatisPlus = new MybatisSqlSessionFactoryBean();
        mybatisPlus.setDataSource(dataSource);
        mybatisPlus.setVfs(SpringBootVFS.class);
        if (StringUtils.hasText(this.properties.getConfigLocation())) {
            mybatisPlus.setConfigLocation(this.resourceLoader.getResource(this.properties.getConfigLocation()));
        }
        mybatisPlus.setConfiguration((MybatisConfiguration) properties.getConfiguration());
        if (!ObjectUtils.isEmpty(this.interceptors)) {
            mybatisPlus.setPlugins(this.interceptors);
        }
        mybatisPlus.setPlugins(new Interceptor[] { // PerformanceInterceptor(), OptimisticLockerInterceptor()
            mybatisPlusInterceptor() // add pagination support
        });
        MybatisConfiguration mc = new MybatisConfiguration();
        mc.setDefaultScriptingLanguage(MybatisXMLLanguageDriver.class);
        mybatisPlus.setConfiguration(mc);
        if (this.databaseIdProvider != null) {
            mybatisPlus.setDatabaseIdProvider(this.databaseIdProvider);
        }
        if (StringUtils.hasLength(this.properties.getTypeAliasesPackage())) {
            mybatisPlus.setTypeAliasesPackage(this.properties.getTypeAliasesPackage());
        }
        if (StringUtils.hasLength(this.properties.getTypeHandlersPackage())) {
            mybatisPlus.setTypeHandlersPackage(this.properties.getTypeHandlersPackage());
        }
        if (!ObjectUtils.isEmpty(this.properties.resolveMapperLocations())) {
            mybatisPlus.setMapperLocations(this.properties.resolveMapperLocations());
        }
        return mybatisPlus;
    }
}
Steps to reproduce (write them out fully if available)
Error message
The following appears at startup:
[MyBatis-Plus ASCII startup banner]
3.4.1
[MyBatis-Plus ASCII startup banner]
3.4.1
This banner appears twice. In the same project, another module starts normally and pagination works fine; in the module where the banner appears twice, pagination also adds the limit clause twice, even though the pagination plugin was only registered once.
Please troubleshoot this yourself.
| gharchive/issue | 2021-02-24T04:36:06 | 2025-04-01T06:38:01.245072 | {
"authors": [
"jenven",
"miemieYaho"
],
"repo": "baomidou/mybatis-plus",
"url": "https://github.com/baomidou/mybatis-plus/issues/3350",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1364407759 | saveBatch inserts too slowly; https://mp.baomidou.com/guide/crud-interface.html#insertbatchsomecolumn cannot be accessed!
For MySQL, optionally install this plugin: https://mp.baomidou.com/guide/crud-interface.html#insertbatchsomecolumn
For other databases, build the SQL yourself.
For multi-datasource compatibility, mybatis-plus does a for-loop insert.
Originally posted by @lltx in https://github.com/baomidou/mybatis-plus/issues/3360#issuecomment-789371126
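To illustrate why the for-loop strategy is slow, a schematic Python sketch contrasting the two SQL shapes (illustrative table/column names, not the statements mybatis-plus actually generates):

rows = [("a", 1), ("b", 2), ("c", 3)]

# saveBatch-style: one INSERT statement per row, executed in a loop
loop_statements = ["INSERT INTO t (name, n) VALUES (%s, %s)" for _ in rows]

# insertBatchSomeColumn-style: a single multi-row INSERT
multi_row = "INSERT INTO t (name, n) VALUES " + ", ".join(["(%s, %s)"] * len(rows))

print(len(loop_statements), "round trips vs 1:", multi_row)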
https://baomidou.com/pages/49cc81/#insertbatchsomecolumn
| gharchive/issue | 2022-09-07T09:58:26 | 2025-04-01T06:38:01.248437 | {
"authors": [
"15003476628",
"miemieYaho"
],
"repo": "baomidou/mybatis-plus",
"url": "https://github.com/baomidou/mybatis-plus/issues/4798",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1526030955 | Don't hash the crate name in the cache key
I'd like to see the cached crate when looking at the caches for the repo; right now, for example, I see around 10 entries named cargo-install-{hash}.
Hello! Thanks for the suggestion, I just released a new update that changes the cache key format. Since it is a major update, you'll need to manually update your workflows to use the v2 tag (changelog).
| gharchive/issue | 2023-01-09T17:40:27 | 2025-04-01T06:38:01.249911 | {
"authors": [
"ForsakenHarmony",
"baptiste0928"
],
"repo": "baptiste0928/cargo-install",
"url": "https://github.com/baptiste0928/cargo-install/issues/11",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
949604907 | Refactor: Implement chainable AST / functional AST handlers
The current implementation regenerates the AST at every point where it is needed, instead of reusing the existing one when no modifications have been made.
All code regarding this can be found in
lib/ast-helpers.js
lib/jsx-helpers.js
The needed change is to have a chainable constructor that allows doing the same manipulations as the existing functions, but operating on a single AST as needed; a sketch of the idea follows below.
The other approach is to be functional and pass the AST around in the functions as a parameter, though that increases the amount of memory passed around, so the chain approach is preferable.
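The helpers themselves live in JavaScript (lib/ast-helpers.js, lib/jsx-helpers.js); purely as an illustration of the chainable pattern, a hypothetical Python sketch (all names are made up):

class ASTChain:
    def __init__(self, source: str):
        self.ast = self._parse(source)  # parse exactly once, reuse afterwards

    def _parse(self, source: str) -> dict:
        # stand-in for a real parser; a trivial tree is enough for the sketch
        return {"type": "Program", "source": source, "body": []}

    def add_import(self, name: str):
        self.ast["body"].insert(0, {"type": "Import", "name": name})
        return self  # returning self is what makes the API chainable

    def rename(self, old: str, new: str):
        self.ast["source"] = self.ast["source"].replace(old, new)
        return self

    def generate(self) -> dict:
        return self.ast

print(ASTChain("render(App)").add_import("preact").rename("App", "Root").generate())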
After the modification:
Test the application by visiting the web URL (http://localhost:3000, or whatever port you have it on) and then check that both the default code and the example code (click the View Example button) run properly without any issues.
https://github.com/barelyhuman/hen/commit/407c2479a04a3134c21af6bc00b281db68b675d4
Initial changes done; can be optimized even more to reduce the transform time from the API for larger code snippets.
| gharchive/issue | 2021-07-21T11:20:59 | 2025-04-01T06:38:01.264143 | {
"authors": [
"barelyhuman"
],
"repo": "barelyhuman/hen",
"url": "https://github.com/barelyhuman/hen/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
142645275 | Updated instagram icon
Updated instagram svg and styles to make it scale properly as an svg background image. Closes #411.
See this update on my blog
After this update:
Thanks.
| gharchive/pull-request | 2016-03-22T13:10:55 | 2025-04-01T06:38:01.277975 | {
"authors": [
"domfarolino",
"firsttopman"
],
"repo": "barryclark/jekyll-now",
"url": "https://github.com/barryclark/jekyll-now/pull/464",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2681976781 | add Tipping Point to base web directory
What changed? Why?
Notes to reviewers
How has it been tested?
🟡 Heimdall Review Status
Requirement
Status
More Info
Reviews
🟡
0/2
Denominator calculation
Show calculation
1 if user is bot
0
1 if user is external
0
From .codeflow.yml
1
Additional review requirements
Show calculation
Max
0
0
From CODEOWNERS
0
Global minimum
0
Max
1
1
1 if commit is unverified
1
Sum
2
@Ncastro878
Heya! Thanks for building on Base!
We require that defi apps linked to from base.org include a terms of service / terms and conditions page for users. It doesn't appear that this is present on the linked page. In order to continue the review, could you all please update and resubmit?
| gharchive/pull-request | 2024-11-22T06:35:45 | 2025-04-01T06:38:01.290975 | {
"authors": [
"Ncastro878",
"cb-heimdall",
"wbnns"
],
"repo": "base-org/web",
"url": "https://github.com/base-org/web/pull/1280",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1852760398 | When your auditor table uses UUIDs as Primary Key
I'm just recording this in case it helps someone else or, after reviewing, the team tells me a better way! 😄
We use GUIDs as primary key for all tables so my Auditor record primary key starts with:
035d675a-0aa8-...
When audit records are created, the auditor_id is set as:
id | status | session_id | auditor_id
---|---|---|---
1 | 1 | 4 | 35
1 | 1 | 3 | 35
1 | 1 | 2 | 35
1 | 1 | 1 | 35
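The repeated auditor_id of 35 lines up with the integer column coercing the UUID string: its leading digits "035" parse as 35. A small Python illustration of that presumed lossy cast (the exact behaviour depends on the database engine):

import re

def lossy_integer_cast(value: str) -> int:
    # mimic a string-to-integer cast that keeps only the leading digits
    match = re.match(r"\d+", value)
    return int(match.group()) if match else 0

print(lossy_integer_cast("035d675a-0aa8-4b21-9c3d-000000000000"))  # 35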
I rolled back the migration and changed the script to
t.references :auditor, null: false, type: :uuid
After re-migrating, the system successfully records my Auditor account with, so far, no further errors!
Bump. Thank you. That one had me stumped for a while. Might be worth an add to the README.
| gharchive/issue | 2023-08-16T08:28:07 | 2025-04-01T06:38:01.295487 | {
"authors": [
"bpurinton",
"colinbruce"
],
"repo": "basecamp/audits1984",
"url": "https://github.com/basecamp/audits1984/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1165459556 | #170 - Calculate toast
Created a toast on calculation start. But I don't understand the delete-calculation logic - whether it is removed automatically 🤔
@Valexr we need a bot who will be checking for uncommented console.log calls at each PR :laughing:
| gharchive/pull-request | 2022-03-10T16:37:11 | 2025-04-01T06:38:01.300549 | {
"authors": [
"Valexr",
"blokhin"
],
"repo": "basf/bscience-gui",
"url": "https://github.com/basf/bscience-gui/pull/34",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2514339155 | 🛑 wpsitedoctors is down
In 5e8d433, wpsitedoctors (https://wpsitedoctors.com/) was down:
HTTP code: 503
Response time: 582 ms
Resolved: wpsitedoctors is back up in 61dc09a after 31 minutes.
| gharchive/issue | 2024-09-09T15:57:51 | 2025-04-01T06:38:01.303202 | {
"authors": [
"basharatreyaz"
],
"repo": "basharatreyaz/uptimebot",
"url": "https://github.com/basharatreyaz/uptimebot/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
28405778 | Update rebar.config
A few things I added for my own use; you can have them if you want.
Do not want. If this is a problem, put something like this in your .gitconfig:
[url "http://github.com/"]
insteadOf = ssh://git@github.com/
| gharchive/pull-request | 2014-02-27T09:46:22 | 2025-04-01T06:38:01.306229 | {
"authors": [
"MisaKondo",
"Vagabond"
],
"repo": "basho/lager",
"url": "https://github.com/basho/lager/pull/208",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
175342907 | Add HyperLogLog Data Type
Adds HLL data type.
Test with a devrel of this Riak branch: https://github.com/basho/riak_ee/tree/develop-2.2, configured with the included tools change.
The HllPrecision bucket type property needs to have support added to FetchBucketPropsCommand and StoreBucketPropsCommand
Docs can be found at: https://github.com/basho/private_basho_docs/commit/c07fb253252398b0d007952ba910bc6bf315292d
Ok, everything should be addressed now.
:+1:
| gharchive/pull-request | 2016-09-06T20:26:09 | 2025-04-01T06:38:01.308806 | {
"authors": [
"alexmoore",
"lukebakken"
],
"repo": "basho/riak-go-client",
"url": "https://github.com/basho/riak-go-client/pull/72",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
280569784 | ENH: Add out-of-sample prediction to IV
Add out-of-sample prediction to IV models
This pull request introduces 6 alerts - view on lgtm.com
new alerts:
6 for Potentially uninitialized local variable
Comment posted by lgtm.com
| gharchive/pull-request | 2017-12-08T17:58:29 | 2025-04-01T06:38:01.310793 | {
"authors": [
"bashtage"
],
"repo": "bashtage/linearmodels",
"url": "https://github.com/bashtage/linearmodels/pull/135",
"license": "NCSA",
"license_type": "permissive",
"license_source": "github-api"
} |
87048335 | Add opsgenie
Add an opsgenie class to know who is on call and alert stats (UNACK, ACK, ...)
For now we can't add it because our schedule is not implemented.
Closing for now; no information in the SDK.
| gharchive/issue | 2015-06-10T17:40:02 | 2025-04-01T06:38:01.332759 | {
"authors": [
"basti1dr",
"bdronneau"
],
"repo": "basti1dr/imwatchingyou",
"url": "https://github.com/basti1dr/imwatchingyou/issues/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1275862636 | Feature/add docker image
#59
WIP
Add initial Dockerfile + .dockerignore
Add documentation for docker compose
Updated docker-compose to V2 variant docker compose
CI through docker compose (untested)
CI push docker image to dockerhub on release (untested)
Hi ! Thank you for this beautiful PR! I'll leave the review to @jmsche, he will know better than me.
However, I think it will merge after my #62 which should change the way adapters are handled. Small changes will certainly be expected, at the level of the documentation that you have updated for example, since we will no longer need to use the environment variables of S3, for the reasons described below.
Thank you very much for the contribution! :)
Hey @bastien70,
Thanks for the heads-up. I'll adapt the current documentation in the meantime. I currently have the PR marked as draft as I was still testing whether the docker image works correctly in existing projects and with local development.
Hi ! Small question, have you or are you going to write tests to check that the docker container you are setting up works and is correctly configured?
And if I understood correctly, you modified the CI so that the tests are launched thanks to Docker, right? In which case, your PR may be merged before mine.
Hello!
To be more specific on what I was asking:
Basically we have our tests (PHPUnit) which will test the entirety of dbsaver (verify that each feature works). Similarly, how are you going to ensure that the dbsaver docker image you created deploys correctly and without issues?
The goal is that when the CI launches the PHPUnit tests, it can also check each time that your dbsaver docker image is still compliant, that it deploys correctly, and that we have not forgotten anything.
How do you handle this? Maybe you have already answered it, but I must admit that Docker is not a specialty of jmsche or myself, so we will not know whether what you have coded is good (but I have the feeling you know what you're doing), or whether you've written any tests to make sure the dbsaver docker image works perfectly.
Indeed, now that the tests will be launched based on the docker image, you are right, if from the start it does not work, there is a direct problem with the docker image.
Regarding the docker-release, can you tell me more?
dbsaver docker image is created/updated on every merge? I'm not sure I understand this part correctly.
I could of course add the phpunit, phpcs and phpstan checks as well in there?
=> It would be safer, yes, even if a priori we are already covered thanks to the PHPUnit tests launched from Docker
Also, I would like to point out that I'm also not an "expert" on docker, ...
=> Although you're not an expert on it, it's still a great PR! Good game :)
Hey @bastien70
dbsaver docker image is created/updated on every merge? I'm not sure I understand this part correctly.
Maybe I was unclear about that, but I meant that on a new release an image will be built and pushed. Currently in the docker-release.yml the following can be found:
on:
release:
types: [published]
...
So it would make sense that upon a new release the workflow gets triggered.
I also have a question/request regarding the Taskfile. Using the docker compose commands in the CI makes it visually kind of crowded, so is it acceptable to install Taskfile in the CI? It would reduce the run for each step to just a single task command then.
p.s. I normally use Makefiles so I don't have this dilemma :wink:
Yeah no problem! :)
Hey @bastien70
Hope you don't mind that this is taking some time, but I'll give you an update in the meantime:
Simplified the commands in the CI by using Taskfile with the help of arduino/setup-task@v1 action.
Simplified Dockerfile.
Added a docker-compose.dev.yaml for local development. While it should be relatively easy to use, I will write documentation for it so future contributors (or you 😉) can also use it.
Todo:
Further testing of dev image, and testing of production image.
One of the things that I think is important is that we should start building a dev image, tag it as bastien70/dbsaver:dev and push it to Dockerhub. This way we can speed up the tests in the CI, because then it only needs to pull the image. If we don't create a dev image it needs to build the image for every run, which is significantly slower compared to just downloading the image.
To do that I actually need to know if you already have a Dockerhub account. And if not, could you make one? As you stated that Docker is not your specialty, I'm not sure if you're familiar with building, tagging and pushing images? If not, the commands are relatively simple, but I can guide you through it if needed (I already added a command in the Taskfile for building the image).
If I'm confident enough in the current images, I'll let you know, so that we can start making the dev image before actually merging the PR.
Hey @bastien70
I have a question, what is the caddy-server for?
Well, I realized that if I want to keep contributing to this application, it would be easier for me to add a relatively simple webserver, like caddy, to the docker-compose.dev.yaml. So I could just use task docker:up:dev to bring up the services, and after having added dbsaver.local / mail.dbsaver.local to my system's hosts file, I can easily access the application and mail client through those URLs.
So to answer your question: "For easier setup/access of the application using docker".
Even though you may not be very comfortable with docker yet, I hope you/jmsche will try it out before we actually merge the PR. I will of course update the documentation before that to try to make clear how to use it 😉
"Will you still be there to manage docker files?"
Yeah, I am still interested in contributing, and that includes managing the Dockerfile. However, there shouldn't be a need to change the Dockerfile very often unless there are some (security) issues or a need for additional (php) extensions or packages. I will probably still need to update the contribution guidelines to make sure that for new features that require certain extensions/packages the Dockerfile is updated as well.
Hey @bastien70
To give an update: the last couple of days I've been busy fixing new CI workflows, making/publishing the images, and testing the images. To ease the CI testing, I've done this in a temporary private repository, so I can test both the workflows and the images without making a mess here. I'm at the point that everything is green in the CI, so I'm almost ready to commit the last changes back here. I will also include Minio in it then (as you mentioned it in #62 )
I hope to commit my last changes by the end of the week so it can get reviewed 👌
Hello @ToshY
Great news! :)
Waiting for your last commit :)
Hey @bastien70,
I think it's as good as done. I will review it myself once more this weekend before I'll mark it for review for @jmsche.
I'm missing French documentation regarding the docker compose contribution part, which I just cannot write. From my perspective, it also seems double the work to document in two languages, but maybe we should save that discussion for another time.
Currently the image repository is set to bastien70/dbsaver, which means that you should create a repository called dbsaver on Dockerhub. This is under the assumption that you've made an account with the username bastien70. If you've created your account under another username, please let me know so I can update this.
After you created a Dockerhub account, could you update the repo's (action) secrets by adding DOCKER_USERNAME and DOCKER_PASSWORD? This is needed so the CI can push the new images to your dockerhub repository.
I also took the liberty to incorporate security-check and dependabot workflows. I could create a separate PR for this if you'd like, but I thought it would be just as easy to include it here as I was overhauling the CI anyway.
Hello @ToshY ! Thanks for this PR :)
I will check for docker account on Monday.
I have not yet looked in detail at the files you have modified, but can you confirm that even with this PR it will be possible to launch the project locally and contribute without using docker, for those who would like to do it "normally"?
And can you confirm once again that after this PR is merged, it should be very rare that we have to edit it in the future?
Hey @bastien70,
I have not yet looked in detail at the files you have modified, but suddenly you confirm that even with this PR it will be possible to launch the project locally and contribute without using docker for those who would like to do it "normally" ?
I have tried to keep the original way of working unaffected, and with that I mean, in case you and others still want to develop locally without docker, you should still be able to so (although I haven't tested this myself).
There are however some changes to existing files which could affect local development:
The .env files are updated to use mysql:3306 instead of 127.0.0.1:3306 in the DATABASE_URL. You would need to create a .env.local and override the DATABASE_URL back to 127.0.0.1:3306 (in case you have a local MySQL instance).
The contribution task has slightly changed, but because it already required docker compose before this change, it should not affect your current local development.
The task commands for the current way of working are kept mostly the same, except for replacing symfony console with bin/console. This also should not affect your current local development.
And so you confirm once again that after this PR merged, it should be very rare that we have to edit it in the future?
Normally the Dockerfile itself shouldn't need much updating unless you decide to add some features that require additional php extensions, packages, or patches for high/severe security updates. As an example, let's say you want to add the php redis extension for a new feature; this means the Dockerfile should simply be appended with the redis entry at RUN install-php-extensions redis. If you need additional packages, let's say for example FFmpeg, you can add that by just appending it to the apk add part, i.e. apk add ffmpeg.
Regarding security, I made the workflow docker-base.yml, which will update both base and dev images weekly, so that you'll get patches and security updates (from PHP 8.1.X alpine) for those images. When you eventually publish a new release in Github, it will run the docker-release.yml workflow, which will build and push base, dev and prod images of the application to Dockerhub.
If you have any more questions let me know 😄
Hello @ToshY !
I just created my docker account and added the secrets keys, you should be able to continue now :)
https://hub.docker.com/repository/docker/bastien70/dbsaver
Tell me if you need anything else :D
Hey @bastien70
Keen eye, I totally forgot that 🤦♂️ I will start on that README.
Hey @bastien70
Something like this?
Hello @ToshY yes it seems good :)
I haven't had much time to work on this the last few weeks, so I'll try to get to it next week. I also realized that there still may be some small issues with the Dockerfile user permissions, so I still have to investigate that further. Sorry for the delay in communication.
Due to circumstances, I shall no longer continue my work on this. Sorry for any inconvenience this may have caused.
| gharchive/pull-request | 2022-06-18T19:10:58 | 2025-04-01T06:38:01.362505 | {
"authors": [
"ToshY",
"bastien70"
],
"repo": "bastien70/dbsaver",
"url": "https://github.com/bastien70/dbsaver/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2395227463 | BatLit community: shall we use the coviho as the BatLit community?
Shall we continue to use the coviho community and rename it as batlit?
here we added already the first stack of Aja's test corpus for interaction.
https://zenodo.org/communities/coviho/records?q=&l=list&p=1&s=10&sort=newest
We also added ca 140 publications with bat virus interactions @jhpoelen used for Globis
In this sense, it would make to rename the repository to BatLit.
We also need to make sure that the publcations here are not added a second time - see deduplication issue: https://github.com/bat-literature/bat-literature.github.io/issues/6#issuecomment-2213580719
@myrmoteras we already have a batlit community registered in Zenodo (both production and sandbox). I'd favor leaving the decision up to the curator of the BatLit corpus to incorporate works like those from CoViHo. This follows curation step https://batlit.org/#feedback-workflow .
the Batlit in Zenodo has nothing in it, so we can delete it an use instead the covhio, which is also run through this bat community.
To start yet another one doesn't make sense, especially they cover the same.
I agree that the batlit community is currently empty.
However, the origin of the CoViHo is different than the batlit corpus. This difference is reflected in the different metadata (linkage to origin) as well as the associated curators.
The current curatorial workflows are documented on https://batlit.org . Which workflow would you imagine this merging of CoViHo into BatLit fall into?
Coviho had exactly the same purpose of BatLit, to create a repo for bat literature which I proposed to this bat community, and we started now to build.
We should allow others to contribute, be it adding single publications or using other batch upload tools. We cannot talk about openness and democratization of resources if we control how resources are built, beyond a very basic policy of what can and can't be uploaded in the scope of building a global bat scientific publication resource.
I much agree that CoViHo and BatLit have much in common. Also, I think that allowing others to contribute is important. @ajacsherman has been working hard to make this happen with the pdfs shared with her.
And, I'd favor to keep the workflow and provenance of the items in the BatLit corpus consistent. Currently, the procedure to include/update stuff into BatLit is recorded on https://batlit.org . What procedure would you like to use to add the CoViHo corpus to the BatLit corpus?
Perhaps this is something to best discuss as a group.
let's discuss.
I understand your point, but this is extremely restrictive to one method and has a dependency: Aja preparing the data and getting paid for it. There are other tools for making deposits, e.g. the Zenodo tool Lycophron, which has a different concept of uploading.
From my perspective the crux is that these publications need to be accessible with metadata that minimally provides the provenance, e.g. a DOI of the original publication, or enough metadata to discover who published the article and where.
Everybody uploading to the community has to get the right to do so, which is still restrictive, but at the same time they get credit and we do not get spammed, which is unfortunately a very common phenomenon in Zenodo.
Perhaps good to distinguish two concerns:
open access to publications
curating thematic collections of accessible publications
In my mind 2. involves the work of curators and involves effort (time/money) to make decisions on which works to include / exclude from the corpus.
Are you concerned about 1. (providing access to some pdf and metadata) or 2.?
I am concerned with building corpora of publications, such as all the bat literature. That is both 1 and 2, with an emphasis on access.
I understand you are concerned about building corpora of publications.
With BatLit, as outlined in https://batlit.org, we have citable, transferrable, and versioned corpus of publications that can be published across various platforms and storage media.
With CoViHo, we have a Zenodo community that includes records containing some metadata and pdfs. However, as far as I know, there's no way to easily transfer and cite a versioned copy of CoViHo.
In order to add CoViHo to an existing corpus, or to build CoViHo as an independent corpus, the associated (meta) data needs to be compiled, versioned, packaged and published so that it can live (and be cited) independent of Zenodo in specific and the internet in general.
Closing stale issue. Please feel free to re-open if needed.
| gharchive/issue | 2024-07-08T10:14:43 | 2025-04-01T06:38:01.382218 | {
"authors": [
"jhpoelen",
"myrmoteras"
],
"repo": "bat-literature/bat-literature.github.io",
"url": "https://github.com/bat-literature/bat-literature.github.io/issues/18",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2655723074 | PHP warning
PHP 8.3 Warning: foreach() argument must be of type array|object, null given in [.../site/modules/RockLoaders/RockLoaders.module.php:29]
This probably happens if the module setup is not done yet.
Thx, should be fixed with the latest commit. Can you check and reopen in case it needs more work?
| gharchive/issue | 2024-11-13T14:37:02 | 2025-04-01T06:38:01.402913 | {
"authors": [
"BernhardBaumrock",
"tbba"
],
"repo": "baumrock/RockLoaders",
"url": "https://github.com/baumrock/RockLoaders/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1384624573 | Feature Idea : Keep track of books lended to friends
I usually lend books to friends, and being able to keep a record in JELU of whom I lent which book to would be great!
Yes why not.
I think we need at least a boolean to flag a book as lent.
The personal notes field could be used to track to whom the book was lent and a return date or something.
This should be available in 0.32.0
| gharchive/issue | 2022-09-24T10:32:51 | 2025-04-01T06:38:01.404605 | {
"authors": [
"Ombrelin",
"bayang"
],
"repo": "bayang/jelu",
"url": "https://github.com/bayang/jelu/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1656090659 | Best way to enable and see logs when Tomcat serves HTTP 400 error
This may just be a request for some additional detail in the documentation (which I'm happy to PR once I figure this out) on how to get logs out of jelu.
I've deployed jelu using the docker container behind Authelia and SWAG (an Nginx reverse proxy).
When I browse to jelu through SWAG I get an HTTP 400 error back from Tomcat. The jelu.log file mentioned in the docs doesn't create a new log line when the 400 is served.
Here is what the request that is arriving at jelu looks like (I used docker-http-https-echo to see what a request being passed through SWAG after Authelia authentication looks like)
{
"path": "/",
"headers": {
"remote-user": "jdoe",
"remote-groups": "admins,jelu",
"remote-name": "John Doe",
"remote-email": "jdoe@example.com",
"connection": "close",
"host": "jelu.example.com",
"x-forwarded-for": "192.168.0.105",
"x-forwarded-host": "jelu.example.com:443, jelu.example.com",
"x-forwarded-method": "GET",
"x-forwarded-proto": "https",
"x-forwarded-server": "jelu.example.com",
"x-forwarded-ssl": "on",
"x-forwarded-uri": "/",
"x-original-url": "https://jelu.example.com/",
"x-real-ip": "192.168.1.105",
"user-agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/111.0",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
"accept-language": "en-US,en;q=0.5",
"accept-encoding": "gzip, deflate, br",
"upgrade-insecure-requests": "1",
"sec-fetch-dest": "document",
"sec-fetch-mode": "navigate",
"sec-fetch-site": "same-site",
"sec-fetch-user": "?1",
"cookie": "authelia_session=BWy-REDACTED-onb"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "jelu.example.com",
"ip": "192.168.1.105",
"ips": [
"192.168.1.105"
],
"protocol": "https",
"query": {},
"subdomains": [
"jelu"
],
"xhr": false,
"os": {
"hostname": "f27ee356b422"
},
"connection": {}
}
And my application.yml contains
jelu:
auth:
proxy:
enabled: true
adminName: "jdoe"
header: Remote-User # https://github.com/linuxserver/docker-swag/blob/ce32306873ded414980e60d9c46eb87464ce2f3b/root/defaults/nginx/authelia-location.conf.sample#L20
ldap:
enabled: false
server:
port: 8400
I've confirmed that if I bypass SWAG and Authelia and just do a curl -i http://jelu:8400 from the SWAG container, I get an HTTP 200 and HTML from jelu, so there's something about the request after it goes through SWAG that is causing jelu to choke.
So my main question is, how do I enable additional logging to see why it is that Tomcat is returning an HTTP 400 when I browse to jelu through SWAG?
Okay, I'll try to have a look when I have some time.
Just know that your exact setup does work, since someone else already opened an issue for it : #64
A 400 HTTP code is weird because it has nothing to do with authentication; it is just a bad request.
In the meantime, if you want the tomcat access logs you can have a look at this:
https://www.baeldung.com/spring-boot-embedded-tomcat-logs
Which basically is:
add server.tomcat.accesslog.enabled=true somewhere (as an env variable or in your yaml config file)
and if you really need to delve into tomcat internals, you want to add:
logging.level.org.apache.tomcat=DEBUG
logging.level.org.apache.catalina=DEBUG
again, as env vars or in your yaml.
But I think access logs should already be useful.
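For reference, those settings in application.yml form would look roughly like this (a sketch using the standard Spring Boot property names mentioned above):

server:
  tomcat:
    accesslog:
      enabled: true
logging:
  level:
    org.apache.tomcat: DEBUG
    org.apache.catalina: DEBUG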
Also you can start by making sure your application.yml file is really picked up by the jelu app in your container (i.e. check your volumes)
Thanks for that guidance.
Also you can start by making sure you application.yml file is really picked by the jelu app in your container (ie check your volumes)
I feel confident it is as I set the server port in application.yml and I then see in the container output from the app
2023-04-06 04:25:52.721 INFO 1 --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8400 (http) with context path ''
Implying that the file is being read.
I first enabled just access logs which write out to /tmp/tomcat.8400.17324305235024466065/log/access_log.*.log
Those access logs show the 400 but don't give much more information
172.21.0.4 - - [06/Apr/2023:04:36:23 +0000] "GET / HTTP/1.1" 400 435
So I then set the tomcat and catalina logging level to DEBUG. This produced additional log lines in the docker container output.
Log lines from application startup
Of the various log lines that show up on application start, these two seem potentially noteworthy, though given that they're logged at DEBUG level they may also be innocuous.
org.apache.tomcat.jni.LibraryNotFoundError: Can't load library: /app/bin/libtcnative-2.so
Full log lines
2023-04-06 04:32:03.022 DEBUG 1 --- [main] o.a.catalina.core.AprLifecycleListener : The Apache Tomcat Native library could not be found using names [tcnative-2, libtcnative-2, tcnative-1, libtcnative-1] on the java.library.path [/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]. The errors reported were [Can't load library: /app/bin/libtcnative-2.so, Can't load library: /app/bin/liblibtcnative-2.so, Can't load library: /app/bin/libtcnative-1.so, Can't load library: /app/bin/liblibtcnative-1.so, no tcnative-2 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib], no libtcnative-2 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib], no tcnative-1 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib], no libtcnative-1 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]]
org.apache.tomcat.jni.LibraryNotFoundError: Can't load library: /app/bin/libtcnative-2.so, Can't load library: /app/bin/liblibtcnative-2.so, Can't load library: /app/bin/libtcnative-1.so, Can't load library: /app/bin/liblibtcnative-1.so, no tcnative-2 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib], no libtcnative-2 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib], no tcnative-1 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib], no libtcnative-1 in java.library.path: [/usr/java/packages/lib, /usr/lib64, /lib64, /lib, /usr/lib]
at org.apache.tomcat.jni.Library.<init>(Library.java:93) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.apache.tomcat.jni.Library.initialize(Library.java:234) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.apache.catalina.core.AprLifecycleListener.init(AprLifecycleListener.java:201) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.apache.catalina.core.AprLifecycleListener.isAprAvailable(AprLifecycleListener.java:112) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory.getDefaultServerLifecycleListeners(TomcatServletWebServerFactory.java:182) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory.<init>(TomcatServletWebServerFactory.java:129) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryConfiguration$EmbeddedTomcat.tomcatServletWebServerFactory(ServletWebServerFactoryConfiguration.java:76) ~[spring-boot-autoconfigure-2.7.6.jar:2.7.6]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Unknown Source) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213) ~[spring-beans-5.3.24.jar:5.3.24]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.getWebServerFactory(ServletWebServerApplicationContext.java:219) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:182) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:162) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:577) ~[spring-context-5.3.24.jar:5.3.24]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:731) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) ~[spring-boot-2.7.6.jar:2.7.6]
at io.github.bayang.jelu.JeluApplicationKt.main(JeluApplication.kt:20) ~[classes/:0.38.0]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Unknown Source) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[app/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[app/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[app/:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) ~[app/:na]
2023-04-06 04:32:03.393 DEBUG 1 --- [main] o.apache.tomcat.util.compat.Jre19Compat : Class not found so assuming code is running on a pre-Java 19 JVM
Full log lines
2023-04-06 04:32:03.393 DEBUG 1 --- [main] o.apache.tomcat.util.compat.Jre19Compat : Class not found so assuming code is running on a pre-Java 19 JVM
java.lang.ClassNotFoundException: java.lang.WrongThreadException
at java.base/java.net.URLClassLoader.findClass(Unknown Source) ~[na:na]
at java.base/java.lang.ClassLoader.loadClass(Unknown Source) ~[na:na]
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:135) ~[app/:na]
at java.base/java.lang.ClassLoader.loadClass(Unknown Source) ~[na:na]
at java.base/java.lang.Class.forName0(Native Method) ~[na:na]
at java.base/java.lang.Class.forName(Unknown Source) ~[na:na]
at org.apache.tomcat.util.compat.Jre19Compat.<clinit>(Jre19Compat.java:37) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.apache.tomcat.util.compat.JreCompat.<clinit>(JreCompat.java:72) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.apache.catalina.startup.Tomcat.<clinit>(Tomcat.java:1299) ~[tomcat-embed-core-9.0.69.jar:9.0.69]
at org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory.getWebServer(TomcatServletWebServerFactory.java:194) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.createWebServer(ServletWebServerApplicationContext.java:184) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:162) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:577) ~[spring-context-5.3.24.jar:5.3.24]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:731) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) ~[spring-boot-2.7.6.jar:2.7.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) ~[spring-boot-2.7.6.jar:2.7.6]
at io.github.bayang.jelu.JeluApplicationKt.main(JeluApplication.kt:20) ~[classes/:0.38.0]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Unknown Source) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[app/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[app/:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[app/:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) ~[app/:na]
Log lines when the HTTP 400 is triggered
2023-04-06 04:51:27.907 DEBUG 1 --- [http-nio-8400-Acceptor] o.apache.tomcat.util.threads.LimitLatch : Counting up[http-nio-8400-Acceptor] latch=1
2023-04-06 04:51:27.909 DEBUG 1 --- [http-nio-8400-exec-4] o.a.tomcat.util.net.SocketWrapperBase : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@4672a6ee:org.apache.tomcat.util.net.NioChannel@1ff243e5:java.nio.channels.SocketChannel[connected local=/172.21.0.5:8400 remote=/172.21.0.4:55770]], Read from buffer: [0]
2023-04-06 04:51:27.910 DEBUG 1 --- [http-nio-8400-exec-4] org.apache.tomcat.util.net.NioEndpoint : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@4672a6ee:org.apache.tomcat.util.net.NioChannel@1ff243e5:java.nio.channels.SocketChannel[connected local=/172.21.0.5:8400 remote=/172.21.0.4:55770]], Read direct from socket: [935]
2023-04-06 04:51:27.915 DEBUG 1 --- [http-nio-8400-exec-4] o.apache.tomcat.util.threads.LimitLatch : Counting down[http-nio-8400-exec-4] latch=1
2023-04-06 04:51:27.915 DEBUG 1 --- [http-nio-8400-exec-4] org.apache.tomcat.util.net.NioEndpoint : Calling [org.apache.tomcat.util.net.NioEndpoint@17c001a3].closeSocket([org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@4672a6ee:org.apache.tomcat.util.net.NioChannel@1ff243e5:java.nio.channels.SocketChannel[connected local=/172.21.0.5:8400 remote=/172.21.0.4:55770]])
Which doesn't give me a sense of what the issue is.
I also disabled Authelia in SWAG and tested. Comparing the requests that arrive in the docker container, the only difference when Authelia is disabled is that these headers are no longer present (which makes sense)
"remote-user": "jdoe",
"remote-groups": "admins,jelu",
"remote-name": "John Doe",
"remote-email": "jdoe@example.com",
"pragma": "no-cache",
"cache-control": "no-cache",
And this new header is present
"if-none-match": "W/\"5c2-sOYCy+nQ9kwC/VoHDW9ESPxN6AA\"",
But disabling Authelia doesn't solve the problem. A request passing through SWAG without any authentication still triggers the 400 in Tomcat.
I'll try bypassing SWAG and comparing how the requests appear (direct vs proxied) to see if I can identify what it is about the request coming through SWAG that causes Tomcat to reject it.
Ok,
the tomcat debug logs are not relevant.
Can you make sure that your proxy tries to reach jelu through http and not https?
Yes I've confirmed that the proxy (swag) is using http
set $upstream_proto http;
I wasn't ever able to get this working and wasn't able to get any greater logging information than what I mentioned above.
that is weird, but my first guess is still one of the best options to explain why you are receiving a 400 and not another error.
The first block of text in your first message, which (if I understand correctly) is the request received by jelu after the proxy, has this line:
"protocol": "https",
Can you double check, please?
| gharchive/issue | 2023-04-05T18:23:38 | 2025-04-01T06:38:01.423112 | {
"authors": [
"bayang",
"gene1wood"
],
"repo": "bayang/jelu",
"url": "https://github.com/bayang/jelu/issues/66",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
586163601 | [Feature] Add inferAll method
What does this PR do?
Creates an inferAll method that infers all nodes' states from the network according to the given evidence and options.
Where should the reviewer start?
src/utils/inferAll.ts
What testing has been done on this PR?
yarn test
How should this be manually tested?
yarn test
Any background context you want to provide?
I've updated the Junction Tree algorithm to use only WeakMap instead of a mix of WeakMap and Map to improve the performance.
I tested using only Map, creating a key to store cliques and potentials, to avoid wrong inference when the user mutates the network or the given evidence (#32), but the performance was not as good, so I created an option called force to prevent this error from happening.
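For illustration, a usage sketch of the new method (the node and network shapes follow the bayesjs interfaces; treat the exact values and import as illustrative only):

import { inferAll } from 'bayesjs';

// Minimal two-node network; cpt entries for a child use { when, then } pairs.
const network = {
  RAIN: { id: 'RAIN', states: ['T', 'F'], parents: [], cpt: { T: 0.2, F: 0.8 } },
  GRASS_WET: {
    id: 'GRASS_WET',
    states: ['T', 'F'],
    parents: ['RAIN'],
    cpt: [
      { when: { RAIN: 'T' }, then: { T: 0.9, F: 0.1 } },
      { when: { RAIN: 'F' }, then: { T: 0.2, F: 0.8 } },
    ],
  },
};

// Infer the marginals of every node given some evidence. The `force`
// option (added in this PR) re-runs inference instead of trusting
// caches that may be stale after a network/given mutation.
const results = inferAll(network, { RAIN: 'T' }, { force: true });
// `results` maps each node id to the probability of each of its states.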
I also created a script to benchmark inferAll using the WeakMap and Map approaches:
Benchmark: https://gist.github.com/nolleto/523da8f5d229cbe0e49cd279ae9091db
Results:
// With WeakMap (this PR) - https://github.com/nolleto/bayesjs/tree/infer-all-with-weak-map
// The mutation error will happen
// inferAll executed 100 times: 972.847ms
// inferAll executed 500 times: 3563.248ms
// inferAll executed 1000 times: 6863.624ms
// inferAll executed 2000 times: 13624.510ms
// inferAll executed 5000 times: 33492.657ms
// Total time: 58608.248ms
// With WeakMap and Map (current - until v0.5.0) - https://github.com/nolleto/bayesjs/tree/infer-all-with-current-algorithm
// The mutation error will happen
// inferAll executed 100 times: 987.221ms
// inferAll executed 500 times: 5654.897ms
// inferAll executed 1000 times: 11278.152ms
// inferAll executed 2000 times: 22390.635ms
// inferAll executed 5000 times: 55837.960ms
// Total time: 96266.677ms
// With WeakMap (potentials) and Map (cliques (id, cpt)) - https://github.com/nolleto/bayesjs/tree/infer-all-mapping-network-nodes
// The mutation error will NOT happen
// inferAll executed 100 times: 57444.127ms
// inferAll executed 500 times: 276823.310ms
// Too long...
What are the relevant issues?
#18 #32
Screenshots (if appropriate)
NA
:tada: This PR is included in version 0.6.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2020-03-23T12:13:52 | 2025-04-01T06:38:01.431571 | {
"authors": [
"nolleto"
],
"repo": "bayesjs/bayesjs",
"url": "https://github.com/bayesjs/bayesjs/pull/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
449759689 | Remove GCP Credentials check
What this PR does and why we need it:
This PR removes the sanity check for GOOGLE_APPLICATION_CREDENTIALS in the environment, since our BazelCI Ubuntu VMs don't have that value set.
Does this require a change in the script's interface or the BigQuery table structure?
No.
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
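For context, that is the error google.auth raises when no credentials can be resolved at runtime. The preflight check being removed presumably looked something like this sketch (the actual code may differ); on the BazelCI VMs the env var is unset, but application-default credentials are still resolvable (e.g. via the VM's service account), so the explicit check was overly strict:

import os

# Sketch of the kind of sanity check being removed: it demanded the env
# var explicitly, even though google.auth can also pick up ambient
# credentials (e.g. a GCE service account) without it.
if "GOOGLE_APPLICATION_CREDENTIALS" not in os.environ:
    raise EnvironmentError(
        "GOOGLE_APPLICATION_CREDENTIALS is not set; "
        "BigQuery uploads require credentials."
    )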
LGTM
| gharchive/pull-request | 2019-05-29T11:47:15 | 2025-04-01T06:38:01.437407 | {
"authors": [
"joeleba"
],
"repo": "bazelbuild/bazel-bench",
"url": "https://github.com/bazelbuild/bazel-bench/pull/28",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2740105189 | Add protoc-gen-validate 1.0.4.bcr.1
This fixes compatibility with more recent versions of rules_python.
Hello @bazelbuild/bcr-maintainers, modules without existing maintainers (protoc-gen-validate) have been updated in this PR. Please review the changes.
@bazel-io skip_check unstable_url
@keith patching the old version until #3414 is figured out. The current version 1.0.4 on BCR is not compatible with recent rules_python versions (it still uses name instead of hub_name for pip dependencies).
yea, note that as of bazel 7.4.x you can use a single_module_override to patch MODULE.bazel files to avoid this kinda issue. still good to fix for everyone tho :+1:
yea, note that as of bazel 7.4.x you can use a single_module_override to patch MODULE.bazel files to avoid this kinda issue. still good to fix for everyone tho 👍
Thanks for the hint. As we are providing an SDK to our users, we are not the root module and therefore cannot use overrides.
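For reference, the mechanism keith mentions corresponds to Bazel's single_version_override directive, usable only from the root module's MODULE.bazel (which is why it doesn't help SDK authors like the commenter above); a sketch, with a hypothetical patch label:

# Root module's MODULE.bazel; overrides are only honored here.
bazel_dep(name = "protoc-gen-validate", version = "1.0.4")

single_version_override(
    module_name = "protoc-gen-validate",
    # Hypothetical patch renaming `name` to `hub_name` in the module's
    # MODULE.bazel; patching MODULE.bazel this way works as of Bazel 7.4.
    patches = ["//patches:pgv_module_bazel.patch"],
    patch_strip = 1,
)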
| gharchive/pull-request | 2024-12-14T19:12:54 | 2025-04-01T06:38:01.444360 | {
"authors": [
"bazel-io",
"keith",
"mering"
],
"repo": "bazelbuild/bazel-central-registry",
"url": "https://github.com/bazelbuild/bazel-central-registry/pull/3420",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
409038611 | Support bash autocomplete
Since bazelisk downloads entire binaries, autocomplete scripts are not accessible. bazelisk could potentially also download the autocomplete scripts to a predetermined location to make them easier to invoke.
+1, this will be excellent to have.
I suppose you can do this manually with instructions from https://docs.bazel.build/versions/master/completion.html#bash
$ bazel build //scripts:bazel-complete.bash
$ cp bazel-bin/scripts/bazel-complete.bash /etc/bash_completion.d/bazelisk
# new shell
$ bazelisk <TAB>
analyze --nodeep_execroot
analyze-profile --noexoblaze
aquery --noexpand_configs_in_place
--batch --noexperimental_oom_more_eagerly
--batch_cpu_scheduling --noidle_server_tasks
--blazerc= --noignore_all_rc_files
..
Yeah, that's what I ended up doing... I was hoping for something like a bundled bash completion script that bazelisk could provide, but it's not super blocking
I would love to have this myself, but currently don't have time to implement this - if someone wants to contribute a PR, I would be very happy to review it.
I'll take a stab at this when I have time over the next few days.
@sudoforge Thanks! If you could find a solution that also works with zsh completions, I'd be extra grateful 😊 (I was a long-time bash user, but since macOS Catalina switched to zsh by default, I gave it a try and stuck with it.)
I'll be looking to add completion for all shells that Bazel supports -- off the top of my head, I think that's just Bash and ZSH.
+1 would be super useful to have
FWIW this is done now for zsh on Homebrew installs of bazelisk: https://github.com/bazelbuild/homebrew-tap/pull/89
I've got it working on my mac using zsh but it's painfully slow. Are others experiencing the same?
Having done some comparisons between bazel and bazelisk using zsh completion on my mac, bazelisk is orders of magnitude slower.
Having done some comparisons between bazel and bazelisk using zsh completion on my mac, bazelisk is orders of magnitude slower.
I wonder if https://github.com/bazelbuild/bazelisk/pull/248 would improve this
I've given some thought to this and will be sending a PR out in the next few days with a proposed solution.
Any update here? I'd be glad to help if I can.
@sudoforge Is this still something you're planning to open?
I was looking at this briefly, and my assumption as to what needs to be done:
Update DownloadRelease to also fetch the auto-complete scripts
To get the auto-complete scripts, the source needs to be downloaded (e.g. https://releases.bazel.build/6.0.0/release/bazel_6.0.0.tar.gz), and for simplicity probably just extract the entire "scripts/" directory (which includes bazel-complete-header.bash and such, and other auto-completes).
Make that scripts/ sub-directory available in a deterministic place, so switching bazel versions also switches the underlying scripts
Seem reasonable to others?
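A rough shell sketch of that flow (the release URL pattern comes from the comment above; the cache layout and archive structure are assumptions):

#!/usr/bin/env bash
# Sketch: fetch the source tarball for the bazel version bazelisk
# resolved, keep scripts/, and expose it at a version-keyed path.
version="6.0.0"                                    # resolved bazel version
dest="$HOME/.cache/bazelisk/completions/$version"  # assumed cache layout

if [ ! -d "$dest/scripts" ]; then
  mkdir -p "$dest"
  curl -fsSL "https://releases.bazel.build/${version}/release/bazel_${version}.tar.gz" \
    | tar -xzf - -C "$dest" scripts   # assumes scripts/ sits at the archive root
fi
# Shells can then source e.g. "$dest/scripts/bazel-complete-header.bash".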
| gharchive/issue | 2019-02-12T00:06:01 | 2025-04-01T06:38:01.488780 | {
"authors": [
"UebelAndre",
"andrewring",
"evie404",
"finn-ball",
"gibfahn",
"jin",
"kylepl",
"philwo",
"rickypai",
"rpwoodbu",
"sudoforge"
],
"repo": "bazelbuild/bazelisk",
"url": "https://github.com/bazelbuild/bazelisk/issues/29",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2207370086 | Update CHANGELOG for v2024.03.26
Please let me know of any important changes that you want to make visible by adding to the Change log.
https://github.com/bazelbuild/intellij/pull/6262
https://github.com/bazelbuild/intellij/pull/6310
https://github.com/bazelbuild/intellij/pull/6030
https://github.com/bazelbuild/intellij/pull/6257
| gharchive/pull-request | 2024-03-26T06:31:06 | 2025-04-01T06:38:01.493205 | {
"authors": [
"mai93",
"satyanandak"
],
"repo": "bazelbuild/intellij",
"url": "https://github.com/bazelbuild/intellij/pull/6321",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
158288833 | Issue 45: add version to footer
Add version number to the footer.
Add the version number to the app context so it's automatically available in all templates.
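In a Flask app (which Baleen's web frontend appears to be), exposing a value to every template is typically done with a context processor; a sketch, where the version accessor is an assumption:

from flask import Flask

from baleen.version import get_version  # hypothetical version accessor

app = Flask(__name__)

@app.context_processor
def inject_version():
    # Every rendered template can now reference {{ version }},
    # e.g. in the shared footer.
    return {"version": get_version()}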
Coverage decreased (-0.1%) to 59.595% when pulling 7e0e53303c04a86b3dbc17a18a2ecd600f4e282a on janetriley:develop into e3203599f097d0db6eb7f936eb6b8d079d4414a6 on bbengfort:develop.
@janetriley nice job! Thanks for the contribution!
| gharchive/pull-request | 2016-06-03T04:26:41 | 2025-04-01T06:38:01.594028 | {
"authors": [
"bbengfort",
"coveralls",
"janetriley"
],
"repo": "bbengfort/baleen",
"url": "https://github.com/bbengfort/baleen/pull/59",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
248951197 | Safeguard subject property (and others) against SMTP CRLF injection attacks
It is possible to set a subject which contains newlines and custom SMTP protocol directives that directly set the body of the email. This can be an issue when the subject comes from an external resource.
As a matter of precaution, Simple Java Mail should simply remove newline characters from all values (except for the body).
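A minimal sketch of that safeguard (the helper name is hypothetical; Simple Java Mail's actual implementation may differ):

// Sketch: strip CR/LF from any header-bound value before it reaches the
// SMTP layer, so an injected "Subject: hi\r\nBcc: attacker@example.com"
// collapses into a harmless single-line subject.
public final class MailHeaderSanitizer {

    private MailHeaderSanitizer() {
    }

    public static String sanitize(String headerValue) {
        if (headerValue == null) {
            return null;
        }
        return headerValue.replaceAll("[\\r\\n]+", "");
    }
}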
Also see:
http://www.cakesolutions.net/teamblogs/2008/05/08/email-header-injection-security
https://security.stackexchange.com/a/54100/110048
https://www.owasp.org/index.php/Testing_for_IMAP/SMTP_Injection_(OTG-INPVAL-011)
http://cwe.mitre.org/data/definitions/93.html
Released in 4.3.0.
| gharchive/issue | 2017-08-09T08:13:11 | 2025-04-01T06:38:01.674731 | {
"authors": [
"bbottema"
],
"repo": "bbottema/simple-java-mail",
"url": "https://github.com/bbottema/simple-java-mail/issues/88",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1571609404 | chore: correct links in issue template
Previous links was invalid
Thanks for the PR 👍
| gharchive/pull-request | 2023-02-05T20:55:30 | 2025-04-01T06:38:01.675537 | {
"authors": [
"bcakmakoglu",
"tchiotludo"
],
"repo": "bcakmakoglu/vue-flow",
"url": "https://github.com/bcakmakoglu/vue-flow/pull/648",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2280099957 | [Arknights Fan TRPG] Update
- Changed the priority of success/failure and critical/error results to comply with the rules.
- Fixed an error in the help message.
- To support CCFOLIA room variables, adjusted the commands so that a role name followed by 0 does not trigger, while one followed by 1 does.
Apologies for the delayed response.
Understood about tidying up the commits.
Would it be better if I stop merging in the upstream (BCDice core) changes before opening a pull request?
About that: the previous Arknights Fan TRPG merge was a squash merge, so the commits were combined. #666 f1db21f2e26739e2e8ae8c442434f92d9b65234b
Because of that, there is a difference between your local tree and the BCDice main tree, and past commits have been mixed into this PR.
Therefore, I think it would be best to cut a fresh branch from the BCDice master and apply this change set on top of it.
I have also shared this approach on Discord with @NOBUTOKA, the co-developer of the Arknights Fan TRPG code.
Thank you in advance.
Thank you, I have checked the Discord messages. I apologize for causing you extra work twice.
I would like to take care of this, so I would appreciate a little time.
| gharchive/pull-request | 2024-05-06T06:04:18 | 2025-04-01T06:38:01.702769 | {
"authors": [
"Ayase00",
"raa0121"
],
"repo": "bcdice/BCDice",
"url": "https://github.com/bcdice/BCDice/pull/700",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1468599262 | Improve release notes formatting
Would it be possible to adjust the formatting of https://www.bouncycastle.org/releasenotes.html?
In my opinion it has the following two issues which make it extremely difficult to skim and navigate:
Every header is numbered
This makes it pretty difficult to tell apart the release version from the header number, e.g.:
Here you might initially think the version is "2.2.1"
All headers have the same size
Even though some headers are logically sub-headers of others, they all have the same size and you have to pay close attention to the header numbers to understand which section belongs to which version, e.g.:
This screenshot even highlights a different issue: the header numbers are apparently applied manually, and now two headers have the number "2.2.2", making it even more difficult to read.
Personally I think a formatting similar to the following one (that is, no header numbering and hierarchical headers) would be easier to understand and read:
1.72.3
Date: 2022, November 20th
Defects fixed
PGP patch release - fix for pom file in 1.72.2 jar file for JDK15to18 version..
...
1.72
Date: 2022, September 25th
Defects Fixed
There were parameter errors in XMSS^MT OIDs for XMSSMT_SHA2_40/4_256 and XMSSMT_SHA2_60/3_256. These have been fixed.
...
Additional Features and Functionality
BCJSSE: TLS 1.3 is now enabled by default where no explicit protocols ...
Seems to be obsolete with the new redesigned website now. The release notes are now under https://www.bouncycastle.org/download/bouncy-castle-java/#release-notes
| gharchive/issue | 2022-11-29T19:37:52 | 2025-04-01T06:38:01.709524 | {
"authors": [
"Marcono1234"
],
"repo": "bcgit/bc-java",
"url": "https://github.com/bcgit/bc-java/issues/1288",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1453635455 | Update Wally Database for Gold
Describe the task
Update the existing database storage procedures to handle the new Keycloak token structure
Acceptance Criteria
[ ] Database can properly save user info from Keycloak
Additional context
How WALLY saves things to the database is relatively unknown to me at the time of writing, other than that the logic happens on the backend. I do know the important tables relating to Keycloak, having looked at it in OpenShift. There is a user table and a user_map_layer table in the database.
The user_map_layer table only uses IDIRs for saving/looking up info; that will still exist in Gold, but the field in the token could have a different name, so that will possibly need to be updated.
The user table is going to require a migration, as the user_uuid is no longer going to exist in the Gold realm; that is what will become the GUID. I have uploaded the files that pair each UUID with the correct user's GUID from Zorin to the Dev Chat files section, and I believe Norris has experience writing a translation table for this kind of work, so he will probably be able to give more info on how to go about doing this.
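A sketch of what such a translation-table migration could look like (table and column names are assumptions based on the description above):

-- Sketch: load the Zorin export pairing UUIDs to GUIDs into a staging
-- table, then migrate the user rows to the Gold-realm identifier.
CREATE TABLE uuid_to_guid (
    user_uuid uuid PRIMARY KEY,
    user_guid text NOT NULL
);

-- (load the export files into uuid_to_guid here, e.g. with COPY)

ALTER TABLE "user" ADD COLUMN user_guid text;

UPDATE "user" u
SET    user_guid = m.user_guid
FROM   uuid_to_guid m
WHERE  u.user_uuid = m.user_uuid;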
This work is being done in WALLY ticket #697
| gharchive/issue | 2022-11-17T16:29:09 | 2025-04-01T06:38:01.713145 | {
"authors": [
"LolandaE",
"jakemorr"
],
"repo": "bcgov-c/wally",
"url": "https://github.com/bcgov-c/wally/issues/657",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1980286835 | Lock down pg_tileserv layers by role
Right now pg_tileserv doesn't have any authentication/authorization; it's only limited by showing/hiding the layer definitions within the Arches implementation. The WMS is accessed via a proxy, which could be used to limit access to each layer by role/user.
Taking the MVP label off as this isn't relevant until we include the next user group.
| gharchive/issue | 2023-11-06T23:59:06 | 2025-04-01T06:38:01.714357 | {
"authors": [
"bferguso"
],
"repo": "bcgov/BCHeritage",
"url": "https://github.com/bcgov/BCHeritage/issues/660",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1526529886 | Create payment-noplugins.yaml
Test API Gateway config generated from OpenAPI spec using gwa tool
@eyjwarren
| gharchive/pull-request | 2023-01-10T00:04:11 | 2025-04-01T06:38:01.742760 | {
"authors": [
"eyjwarren",
"vyasworks"
],
"repo": "bcgov/PaymentCommonComponent",
"url": "https://github.com/bcgov/PaymentCommonComponent/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1090032957 | Content for begin/resume application page
Should support UX of application process:
who should be filling out this form
how long will it take to apply to the program
what information will be needed to complete
what happens if they lose username/password
that they can start/come back to form
dates/deadlines for completion of application
timeline on gov response to applicants
how information will be used
any legal stuff
Rachel to send FOIPPA language to Elliott/Bryan for form internal landing page
Instructions have been drafted. Need to review against this card to ensure completion.
BCeID piece needs to be send to Tim McGuire, Geomark piece needs to be sent to Brian.
Dupe of #31
| gharchive/issue | 2021-12-28T17:25:48 | 2025-04-01T06:38:01.752027 | {
"authors": [
"BryanGK",
"rgreensp"
],
"repo": "bcgov/connectivity-intake",
"url": "https://github.com/bcgov/connectivity-intake/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2399458861 | Bug - geographic names maximum query length
Describe the Bug
If the spreadsheet contains too many geographic locations, the API request fails.
Steps To Reproduce
Steps to reproduce the behaviour:
User/Role: uploader
upload the suvi dataset from teams
See error
The backend shows this error: Client Error: Request-URI Too Long for url
This indicates we need to change the way the request is sent, e.g. in batches.
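A sketch of the batching approach (the endpoint URL and parameter names are placeholders; the real client code may differ):

import requests

def fetch_geographic_names(names, batch_size=50):
    """Query the names API in batches so each request URI stays short."""
    results = []
    for i in range(0, len(names), batch_size):
        batch = names[i : i + batch_size]
        resp = requests.get(
            "https://example.gov.bc.ca/geographic-names/search",  # placeholder
            params={"name": ",".join(batch)},
        )
        resp.raise_for_status()
        results.extend(resp.json().get("features", []))
    return results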
The PR up for this does address the maximum query length, but I found that when testing on a huge file it takes FOREVER to load. Maybe we should discuss this API call further.
| gharchive/issue | 2024-07-10T00:20:04 | 2025-04-01T06:38:01.754433 | {
"authors": [
"JulianForeman",
"emi-hi"
],
"repo": "bcgov/cthub",
"url": "https://github.com/bcgov/cthub/issues/361",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
908494838 | Filter refresh and postal code editing
HCAP-527 and 520 are back with a vengeance.
MoH can now update participants postal codes. This also updates their FSA
Fixed issue where the filter would reset after being idle for 5 minutes
I had to lint-ignore a warning to get this to work. I really think we should rework the SiteTable to use fewer useEffect hooks
We intend to do a release containing only email blast changes to prod. This is on hold until we've done so. See discussion.
| gharchive/pull-request | 2021-06-01T16:46:25 | 2025-04-01T06:38:01.809417 | {
"authors": [
"ashtonmeuser",
"dbayly-freshworks"
],
"repo": "bcgov/hcap",
"url": "https://github.com/bcgov/hcap/pull/402",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2725415297 | LCFS - Error saving edits to Allocation agreement data
Describe the Bug:
Edits made to rows in the Allocation Agreement input form are not saving. Attempting to save changes results in an error, and the data remains unchanged.
Expected Behaviour:
Edits to rows in the Allocation Agreement input form should be saved successfully without any errors.
Actual Behaviour:
When attempting to save changes to a row, an error occurs.
The edits are not saved, and the original data remains unchanged.
Implications:
Users are unable to update Allocation Agreement data, potentially leading to incomplete or inaccurate compliance reports.
Steps To Reproduce:
User/Role: BCeID user
Navigate to the Allocation Agreement input form.
Make edits to any row.
Observe the error and verify that the changes are not saved.
Additional Context:
It is normal to get 422s while the row is invalid or incomplete, but it should be showing an error message for what is required. Was the label stuck saying Updating Row?
Yes, the row was complete/valid, but it hung on "Updating row"
| gharchive/issue | 2024-12-08T17:37:00 | 2025-04-01T06:38:01.813532 | {
"authors": [
"airinggov",
"dhaselhan"
],
"repo": "bcgov/lcfs",
"url": "https://github.com/bcgov/lcfs/issues/1401",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1512436594 | Search results need formatting rethink
[x] No results found fixed
[x] Search results should collapse pathways and steps into one list
[x] Add descriptions to each type (cat, path, act)
Skipping the descriptions as I think the UI ended up working better without them.
| gharchive/issue | 2022-12-28T06:05:14 | 2025-04-01T06:38:01.815326 | {
"authors": [
"allanhaggett"
],
"repo": "bcgov/learningcurator",
"url": "https://github.com/bcgov/learningcurator/issues/225",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1603817054 | Update spacing for responsiveness in events widget
Issue #1184 :
https://github.com/bcgov/met-public/issues/1184
-Make the event widget more responsiveness at different sizes
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of the met-public license (Apache 2.0).
Codecov Report
Merging #1298 (d10eba4) into main (44fd9c0) will increase coverage by 0.02%.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #1298 +/- ##
==========================================
+ Coverage 72.68% 72.71% +0.02%
==========================================
Files 303 303
Lines 8505 8505
Branches 599 599
==========================================
+ Hits 6182 6184 +2
+ Misses 2230 2228 -2
Partials 93 93
| Flag | Coverage Δ |
| --- | --- |
| metweb | 66.65% <ø> (ø) |
Flags with carried forward coverage won't be shown. Click here to find out more.
| Impacted Files | Coverage Δ |
| --- | --- |
| ...s/engagement/view/widgets/Events/InPersonEvent.tsx | 75.00% <ø> (ø) |
| .../engagement/view/widgets/Events/VirtualSession.tsx | 75.00% <ø> (ø) |
| ...-web/src/components/engagement/view/EmailPanel.tsx | 15.15% <0.00%> (ø) |
| met-api/src/met_api/schemas/engagement.py | 92.45% <0.00%> (+3.77%) :arrow_up: |
| gharchive/pull-request | 2023-02-28T21:09:36 | 2025-04-01T06:38:01.824726 | {
"authors": [
"codecov-commenter",
"djnunez-aot"
],
"repo": "bcgov/met-public",
"url": "https://github.com/bcgov/met-public/pull/1298",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2641947471 | feat: #1630 Pagination access control privilege endpoint
Adjust endpoint "/access_control_privileges" to "/access-control-privileges"
Add pagination schema objects for the pagination-related classes used by the endpoint (see the sketch after this list).
Add "page_params" query parameter for "/access-control-privileges?application_id" for pagination capability.
Use "create_date" as default sorting.
Add simple paginate abstract repository to provide pagination functionality for server/admin_management.
Implement paginate repository in access_control_privilege_repository for getting delegated admins.
access_control_privilege_service uses and returns paged delegated admins from the repository's paged method.
Client-Code gen for frontend admin_management api.
Adjust frontend due to endpoint change.
Fix affected tests.
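A rough sketch of the pagination pieces described above (class and field names are assumptions based on this list):

from pydantic import BaseModel
from sqlalchemy.orm import Session

class PageParams(BaseModel):
    """Sketch of the page_params schema object."""
    page: int = 1
    size: int = 50
    sort_by: str = "create_date"   # default sorting per this PR
    sort_order: str = "asc"

class SimplePaginateRepository:
    """Sketch of the simple paginate abstract repository."""

    def __init__(self, session: Session, model):
        self.session = session
        self.model = model

    def get_paginated(self, base_query, params: PageParams):
        order_col = getattr(self.model, params.sort_by)
        if params.sort_order == "desc":
            order_col = order_col.desc()
        return (
            base_query.order_by(order_col)
            .offset((params.page - 1) * params.size)
            .limit(params.size)
            .all()
        )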
I'm having trouble running this in docker since pytest is not installed for runtime Docker containers.
Looks like we have a from pytest import Session in server/admin_management/api/app/repositories/simple_paginate_repository.py
Perhaps it should be from sqlalchemy.orm import Session
Log
2024-11-13 18:30:03 INFO: Will watch for changes in these directories: ['/usr/src']
2024-11-13 18:30:03 INFO: Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
2024-11-13 18:30:03 INFO: Started reloader process [7] using WatchFiles
2024-11-13 18:30:04 Process SpawnProcess-1:
2024-11-13 18:30:04 Traceback (most recent call last):
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
2024-11-13 18:30:04 self.run()
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/multiprocessing/process.py", line 108, in run
2024-11-13 18:30:04 self._target(*self._args, **self._kwargs)
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/_subprocess.py", line 78, in subprocess_started
2024-11-13 18:30:04 target(sockets=sockets)
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 65, in run
2024-11-13 18:30:04 return asyncio.run(self.serve(sockets=sockets))
2024-11-13 18:30:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run
2024-11-13 18:30:04 return runner.run(main)
2024-11-13 18:30:04 ^^^^^^^^^^^^^^^^
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
2024-11-13 18:30:04 return self._loop.run_until_complete(task)
2024-11-13 18:30:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-13 18:30:04 File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
2024-11-13 18:30:04 await self._serve(sockets)
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 76, in _serve
2024-11-13 18:30:04 config.load()
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/config.py", line 433, in load
2024-11-13 18:30:04 self.loaded_app = import_from_string(self.app)
2024-11-13 18:30:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/importer.py", line 22, in import_from_string
2024-11-13 18:30:04 raise exc from None
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
2024-11-13 18:30:04 module = importlib.import_module(module_str)
2024-11-13 18:30:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-13 18:30:04 File "/usr/local/lib/python3.12/importlib/__init__.py", line 90, in import_module
2024-11-13 18:30:04 return _bootstrap._gcd_import(name[level:], package, level)
2024-11-13 18:30:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-13 18:30:04 File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
2024-11-13 18:30:04 File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
2024-11-13 18:30:04 File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
2024-11-13 18:30:04 File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
2024-11-13 18:30:04 File "<frozen importlib._bootstrap_external>", line 995, in exec_module
2024-11-13 18:30:04 File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
2024-11-13 18:30:04 File "/usr/src/api/app/main.py", line 7, in <module>
2024-11-13 18:30:04 from api.app.routers import (router_access_control_privilege,
2024-11-13 18:30:04 File "/usr/src/api/app/routers/router_access_control_privilege.py", line 6, in <module>
2024-11-13 18:30:04 from api.app.routers.router_guards import (
2024-11-13 18:30:04 File "/usr/src/api/app/routers/router_guards.py", line 12, in <module>
2024-11-13 18:30:04 from api.app.routers.router_utils import (
2024-11-13 18:30:04 File "/usr/src/api/app/routers/router_utils.py", line 5, in <module>
2024-11-13 18:30:04 from api.app.services.access_control_privilege_service import \
2024-11-13 18:30:04 File "/usr/src/api/app/services/access_control_privilege_service.py", line 9, in <module>
2024-11-13 18:30:04 from api.app.repositories.access_control_privilege_repository import \
2024-11-13 18:30:04 File "/usr/src/api/app/repositories/access_control_privilege_repository.py", line 9, in <module>
2024-11-13 18:30:04 from api.app.repositories.simple_paginate_repository import \
2024-11-13 18:30:04 File "/usr/src/api/app/repositories/simple_paginate_repository.py", line 10, in <module>
2024-11-13 18:30:04 from pytest import Session
2024-11-13 18:30:04 ModuleNotFoundError: No module named 'pytest'
It certainly shouldn't be from pytest; it should be sqlalchemy. But this is interesting: I ran it many times locally, just now again, and it was fine; I don't know why my local setup does not catch this. Sometimes I don't understand Python; it's quite flexible and seems smart enough to correct these things.
It certainly shouldn't be from pytest; it should be sqlalchemy. But this is interesting: I ran it many times locally, just now again, and it was fine; I don't know why my local setup does not catch this. Sometimes I don't understand Python; it's quite flexible and seems smart enough to correct these things.
@ianliuwk1019 I think it's because when you run locally in your terminal the pytest package is installed, but when you run it in docker or in a test or prod environment it doesn't get installed, since it's only a dev dependency
Do you think we should re-generate client-code-gen after the sortOrder change? I'm not sure if it's affected
Yes, it does not re-generate/update new objects, but I updated the json (it has only a single difference).
| gharchive/pull-request | 2024-11-07T19:20:09 | 2025-04-01T06:38:01.836996 | {
"authors": [
"craigyu",
"ianliuwk1019"
],
"repo": "bcgov/nr-forests-access-management",
"url": "https://github.com/bcgov/nr-forests-access-management/pull/1649",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1492713979 | fix/333 - server/backend dorker-compose runs as api user
next up / the root docker-compose, then new migrations
it should totally be renamed Dorker
| gharchive/pull-request | 2022-12-12T20:39:18 | 2025-04-01T06:38:01.838393 | {
"authors": [
"franTarkenton",
"webgismd"
],
"repo": "bcgov/nr-forests-access-management",
"url": "https://github.com/bcgov/nr-forests-access-management/pull/334",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2196291405 | DR-> Date get backdated by one date when saving an edit action with the current date
Scenario:
Navigate to the Data register on TEST environment
Make a minor, major and a repeal edit on any of the parks.
For the date, select the current date
Save the changes and see the Affective date displayed after saving the changes.
Expected:
The date selected by the user should be saved as is.
Issue:
When the current date is selected, the saved date is one day behind. This is a common issue for all three edit operations.
https://github.com/bcgov/parks-data-register/assets/65190263/56a6e7fa-dbc1-4b6b-b153-44eede86b31d
November 4, 2024, tested this bug to see if I can replicate. Used Golden Ears Park, Maple Ridge:
Minor edit > Changed display name to Golden Ears Park - Lindsay's Park
Legal name > Changed effective date to today's date. Remained as expected.
Repeal > repealed park. Date remained as expected.
Last updated date also displayed correctly.
@davidclaveau @manuji - can I get one or two of you to take another look at this one and confirm that the bug is no longer there.
cc @Dianadec
I can double check.
Tested on TEST: cannot recreate the issue. Closing the ticket.
| gharchive/issue | 2024-03-19T23:59:33 | 2025-04-01T06:38:01.842900 | {
"authors": [
"LindsayMacfarlane",
"manuji"
],
"repo": "bcgov/parks-data-register",
"url": "https://github.com/bcgov/parks-data-register/issues/428",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
508147429 | As a staff person I want to set up agency agreements in the digital system so that correct parties can sign.
REQUIREMENTS
some individuals have more than one agreement and a different agent for each RAN (ex. Douglas Lake with ranch managers) --> could be set up either from a RAN focus or an individual focus. Since agency agreements are on file and files are by RAN, let's go with RAN as the starting point
ideally this process could be managed completely by an agreement holder within the system --> first step process is to have staff involved in administration of the agreements (so they are kept in the loop)
[ ] staff person (admin?) can type/select a RAN from their district
[ ] list of agreement holders is shown
[ ] one or more agreement holders can be selected (people who don't want to sign)
[ ] if agent is in system -- select from existing list (ex. another AH)
[ ] if not already in system -- add name and email address
[ ] agents not currently in system receive email with instructions to obtain BCeID and initiate account in MyRangeBC
[ ] staff knows when a new agent has logged in to MyRangeBC and needs to be linked as an agent (either automated or from instructions to agent user to email them)
[ ] staff to finalize agreement through confirming that names/BCeIDs (this is more important for existing agreements -- do we need to do this staff step? @LisaMoore1 to find out)
[ ] agents having accounts (will be all - either new through process above or existing) will receive email with agreement information and link to visit to confirm agent status
[ ] confirming agent status screen to have stock language from range branch (reviewed by legal) and may include options for "long term - until revoked" or "time bound" (first run could be long term with requirement to revoke when desired time has elapsed)
[ ] agency agreements must be identified as either "in effect" or "not in effect" (ex. revoked or expired)
[ ] once agency agreement in place the agent can:
[ ] receive ALL notifications, see all content and take all action of the AH for whom they act as agent
[ ] AH still received all content and can take all action if they choose (either AH OR agent can take actions)
[ ] agent information is available in RUP basic information (either always or option to view) for any RUP version created/signed during a time in which agent(s) are in place
[ ] staff have option to revoke an agency agreement upon request of the AH (Is a signed request by an AH adequate to send an invite for confirmation from the agent? @LisaMoore1 to confirm)
NB: had considered that a different process could exist for agreements already on file but if we have to confirm with agents on existing ones anyway we might as well prepare our stock agency agreement language and update them to the new digital version
@micheal-w-wells
| gharchive/issue | 2019-10-16T23:15:16 | 2025-04-01T06:38:01.853931 | {
"authors": [
"LisaMoore1",
"ZoeSimon"
],
"repo": "bcgov/range-web",
"url": "https://github.com/bcgov/range-web/issues/263",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2313786599 | CTHUB - Make Column Casing Consistent
Task Description
Create a preparation function for the GER upload (potentially all uploads; ask Katia about this) that keeps the capitalization of each column title consistent.
Purpose
It makes it more readable and also allows us to better check what is inside a cell in the future for duplicates/naming schemes, etc.
Acceptance Criteria
[ ] The read/returned columns should all be capitalized the same way
Development Checklist
[ ] Create either a re-usable preparation function specifically for GER, or write it into the actual uploader, to capitalize column names the same way.
Additional context
Please see GER Notes 1.1 located in the GER_Notes file contained in the shared CTHUB folder in Teams.
We can use built-in functions to make everything uppercase, lowercase, or title case, so this shouldn't be much work.
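For example, with pandas (assuming the uploader reads the spreadsheet into a DataFrame; the helper name is illustrative):

import pandas as pd

def normalize_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch: make every column title consistently Title Case."""
    df = df.copy()
    df.columns = [str(col).strip().title() for col in df.columns]
    return df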
Replaced with card #312 because this one was opened under ZEVA and not CTHUB.
| gharchive/issue | 2024-05-23T20:31:57 | 2025-04-01T06:38:01.861274 | {
"authors": [
"JulianForeman",
"shayjeff"
],
"repo": "bcgov/zeva",
"url": "https://github.com/bcgov/zeva/issues/2176",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
547128609 | 0.73 Freezing When Stopping Containers
Just hangs after selecting yes.
It always happens in my case
The same here - happens almost always. I don't remember a successful 'yes'.
#201 merged; to be included in v0.7.6
| gharchive/issue | 2020-01-08T22:03:59 | 2025-04-01T06:38:01.863343 | {
"authors": [
"bcicen",
"d10xa",
"joesixpack",
"kmazurek244"
],
"repo": "bcicen/ctop",
"url": "https://github.com/bcicen/ctop/issues/190",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
279673361 | There is a warning when unlink session files "No such file or directory"
Hi,
I found there is a warning message "unlink(/var/lib/php/session/ci_session398713f044003b888df479c96bd27010820f1053): No such file or directory" in the logs. But I do check whether the session file exists before unlinking it.
Best,
Tyler Teng
No such code exists in CodeIgniter.
Note that this is a bug tracker; we don't provide help for code that you write here. Please post your questions on our forums instead.
Also, post actual code instead of screenshots.
(duplicate of #5350)
| gharchive/issue | 2017-12-06T08:56:22 | 2025-04-01T06:38:01.865743 | {
"authors": [
"narfbg",
"tyl569"
],
"repo": "bcit-ci/CodeIgniter",
"url": "https://github.com/bcit-ci/CodeIgniter/issues/5351",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
318776144 | php7
CI什么时候基于PHP7版本的开发呢? (When will CodeIgniter development be based on PHP 7?)
English only, thanks
| gharchive/issue | 2018-04-30T03:07:02 | 2025-04-01T06:38:01.866757 | {
"authors": [
"jim-parry",
"xiaolifeidao2016"
],
"repo": "bcit-ci/CodeIgniter",
"url": "https://github.com/bcit-ci/CodeIgniter/issues/5484",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
344118985 | Rework Dotenv testing to use VFS
In response to #1106
Doesn't affect PHP7.1 tests; still getting the 28 errors in the PHP7.2 tests.
So, it's arguably better, and should be merged, but we still have a travis-ci problem :(
| gharchive/pull-request | 2018-07-24T16:46:23 | 2025-04-01T06:38:01.867702 | {
"authors": [
"jim-parry"
],
"repo": "bcit-ci/CodeIgniter4",
"url": "https://github.com/bcit-ci/CodeIgniter4/pull/1112",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2252515267 | CommandError: Error rendering template to disk
Hello and thanks for your work
I can't manage to execute the command:
$ manage.py renderstatic
The error reported is:
CommandError: Error rendering template to disk: landing/home.html
Here's my django project tree:
src
├── db
├── landing
│ ├── migrations
│ ├── static
│ │ └── landing
│ │ ├── css
│ │ ├── fonts
│ │ ├── img
│ │ └── js
│ ├── templates
│ │ └── landing
│ │ └── home.html
├── src
└── static
content related to django-render-static package of src/settings.py :
INSTALLED_APPS = [
'render_static',
]
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
BASE_DIR / "templates/",
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
STATIC_TEMPLATES = {
'templates': ['landing/home.html']
}
STATIC_URL = 'static/'
STATIC_ROOT = BASE_DIR / 'static/'
Could you help me?
Hi @Etilem thanks for giving the library a try! By default the static template engine searches for static templates in a directory called static_templates. So if you rename your templates directory to static_templates it should work for you.
Alternatively you can configure the directory loaders will look in. Understand that the static templates engine is a close mirror of the normal templates configuration. This means the STATIC_TEMPLATES setting looks very similar to the TEMPLATES setting, but they are completely separate. STATIC_TEMPLATES controls the static templates rendering (at package or deployment time) and TEMPLATES controls Django's dynamic templates (at serving time).
Here is an example configuration that would make renderstatic look in the templates directory instead:
STATIC_TEMPLATES = {
    'ENGINES': [{
        'BACKEND': 'render_static.backends.StaticDjangoTemplates',
        'OPTIONS': {
            'app_dir': 'templates',  # search this directory in apps for templates
            'loaders': [
                'render_static.loaders.StaticAppDirectoriesBatchLoader',
            ],
            'builtins': ['render_static.templatetags.render_static'],
        },
    }],
    'templates': ['landing/home.html'],
}
Thank you for the comprehensive answer; it indeed works with:
STATIC_TEMPLATES = {
    'ENGINES': [{
        'BACKEND': 'render_static.backends.StaticDjangoTemplates',
        'OPTIONS': {
            'app_dir': 'templates',  # search this directory in apps for templates
            'loaders': ['render_static.loaders.StaticAppDirectoriesBatchLoader'],
            'builtins': ['render_static.templatetags.render_static'],
        },
    }],
    'templates': ['landing/home.html'],
}
by the way, there's a typo in documentation at line 56 of https://github.com/bckohan/django-render-static/blob/main/doc/source/configuration.rst?plain=1, it says STATIC_TEMPALTES instead of STATIC_TEMPLATES
| gharchive/issue | 2024-04-19T09:27:01 | 2025-04-01T06:38:01.873698 | {
"authors": [
"Etilem",
"bckohan"
],
"repo": "bckohan/django-render-static",
"url": "https://github.com/bckohan/django-render-static/issues/143",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
123903486 | Include of non-modular header inside framework module
Hi Brad
I have built a Cocoa Touch Framework written in Swift (using Xcode 7.2) utilising BDBOAuth1Manager 2.0.0 with no problem.
However, when I 'pod install' my aforementioned framework into a Swift standalone app (and as a result BDBOAuth1Manager gets installed as a dependency) I get the following errors when I try to build:
/Path/To/StandaloneApp/Pods/BDBOAuth1Manager/BDBOAuth1Manager/BDBOAuth1RequestSerializer.h:23:9: Include of non-modular header inside framework module 'BDBOAuth1Manager.BDBOAuth1RequestSerializer'
/Path/To/StandaloneApp/Pods/BDBOAuth1Manager/BDBOAuth1Manager/BDBOAuth1SessionManager.h:23:9: Include of non-modular header inside framework module 'BDBOAuth1Manager.BDBOAuth1SessionManager'
I've tried setting CLANG_ALLOW_NON_MODULAR_INCLUDES_IN_FRAMEWORK_MODULES = 'YES', but didn't seem to help. I guess it wouldn't be an elegant solution anyway.
I've read through quite a few threads e.g 1 2 but couldn't really find a proper solution to my problem.
However, I came across this issue and the fix. I'm not sure if it's related but seems very much the same problem I'm facing.
Unfortunately, I know practically nothing about Obj C so can't really debug myself, but was wondering if you could look into this and perhaps give me some pointers if I could resolve this issue myself or does this require the same fix as the one for AFNetworking?
Or am I chasing a red herring?
Thanks
+1 from me
| gharchive/issue | 2015-12-26T01:44:35 | 2025-04-01T06:38:01.886926 | {
"authors": [
"chiswicked",
"lucasleongit"
],
"repo": "bdbergeron/BDBOAuth1Manager",
"url": "https://github.com/bdbergeron/BDBOAuth1Manager/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
66412102 | BDlib/NEI compatibility issue.
NEI fails to load up and if I try to get into NEI's creative tab, my client crashes.
http://pastebin.com/E1SQDeC7
Same issue as bdew/pressure#36
Please try the version i linked there and report back (in that issue, i'm closing this one to keep all information in once place)
| gharchive/issue | 2015-04-05T07:12:34 | 2025-04-01T06:38:01.888478 | {
"authors": [
"AnodyneEntity",
"bdew"
],
"repo": "bdew/bdlib",
"url": "https://github.com/bdew/bdlib/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1342561529 | Fix dropdown button padding issue
Fixes: https://github.com/bdlukaa/fluent_ui/issues/475
Pre-launch Checklist
[x] I have updated CHANGELOG.md with my changes
[x] I have run "dart format ." on the project
[x] I have added/updated relevant documentation
Tests are failing. Could you take a look?
Done!
| gharchive/pull-request | 2022-08-18T04:58:18 | 2025-04-01T06:38:01.891518 | {
"authors": [
"bdlukaa",
"loic-sharma"
],
"repo": "bdlukaa/fluent_ui",
"url": "https://github.com/bdlukaa/fluent_ui/pull/476",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
373946547 | Modularization: standalone modules
[ ] memfn.js
[ ] memoize.js
[ ] parsePlaceholder.js
Extract these once they actually need to be reused.
memoize.js and memfn.js need to be extracted.
memfn.js
memoize.js
Extracted into https://github.com/imcuttle/memoize-fn
| gharchive/issue | 2018-10-25T13:26:21 | 2025-04-01T06:38:01.898592 | {
"authors": [
"imcuttle"
],
"repo": "be-fe/cz-conventional-changelog-befe",
"url": "https://github.com/be-fe/cz-conventional-changelog-befe/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1564439474 | Max Amount Error - Report from Community
System information
Below is the video showing what the community member noticed. When a user clicks "max amount" and then deletes the amount, an error appears.
Uploading 010101011 (1).mov…
Expected behavior
The error should not appear
@GabrielBuragev
@DocSmoove can you reupload the video and wait a bit longer. You created the issue without the video being uploaded.
https://user-images.githubusercontent.com/107990484/216257746-3b2c1e75-33d8-4354-8262-09108a5eef7a.mov
Ah, my bad. Here is the uploaded video. @andfletcher @GabrielBuragev
| gharchive/issue | 2023-01-31T14:38:56 | 2025-04-01T06:38:01.911229 | {
"authors": [
"DocSmoove",
"andfletcher"
],
"repo": "beamer-bridge/beamer",
"url": "https://github.com/beamer-bridge/beamer/issues/1439",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
177102398 | fix /usr/bin/env: 'node\r': No such file or directory
Use Unix line endings on cli.js to fix crash on Linux
indeed. this is problem on linux.
$ /usr/local/bin/subdownload
: No such file or directory
cc @beatfreaker
the #8 fix is bogus and doesn't fix anything
merging this would also close #3 and #4
but the maintainer has to publish new npm version
seems the @ryantate13 fix also doesn't change anything. just renames file two times.
i'll submit another PR with the proper fix
hmm. seems the problem is the way the release is made. the cli.js is fine in the repository; it's only when checked out on windows and released from there that the files end up with DOS EOLs.
| gharchive/pull-request | 2016-09-15T06:51:13 | 2025-04-01T06:38:01.920580 | {
"authors": [
"glensc",
"ryantate13"
],
"repo": "beatfreaker/subdownloader",
"url": "https://github.com/beatfreaker/subdownloader/pull/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
204619990 | ..?
i didn't touch the thing, but for some reason the repo test failed...? apparently the css was incorrect, but it's valid.
Please check whether what you want to add to the beautiful-discord-community/resources list meets quality standards before sending a pull request. Thanks!
Please provide package links to:
github.com repo:
embed url:
Make sure that you've checked the boxes below before you submit PR:
[x] I have added my package in alphabetical order
[ ] I know that this package was not listed before
[x] I have added @import compatible link (ex. rawgit.com) to the repo and to my pull request
[x] I have read Contribution guidelines.
Thanks for your PR, you're awesome! :+1:
Still failing the test, sorry 😓 , I can't accept the PR till it passes.
(no like literally, I cant, the button is greyed out)
Zeta fix the build 👀
No wait the CSS is legit broken
Apparently..
content:" ";
Thank you! For some reason the script thinks it's invalid.
Is there a way to make the check "not required?"
fix ur shite
this is a travesty
I can't fix it, @TheBITLINK. It's something strange with the test, thinking that a quote is unclosed.
If we accept with a check on this future PRs will fail on it
I don't know, you'll have to fix it.
@LewisTehMinerz can't you just remove whatever symbol you used here?
content:" ";
Or just use multiple spaces, idk
This has dragged on for longer than I would've liked; please open a PR when you fix the issue.
| gharchive/pull-request | 2017-02-01T15:46:11 | 2025-04-01T06:38:01.944373 | {
"authors": [
"LewisTehMinerz",
"TheBITLINK",
"TriggerRimfire",
"jakeoid",
"zet4"
],
"repo": "beautiful-discord-community/resources",
"url": "https://github.com/beautiful-discord-community/resources/pull/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
747089932 | Broken link for "gh-pages"
Describe the bug
A clear and concise description of what the bug is.
I guess the username has been changed, it would be better if you can fix the gh-pages link to this one.
To Reproduce
Steps to reproduce the behavior: NA
Hello @madhankumar028 thank you for reporting this issue, yes the link in the repository description was broken.
I just fixed it
thanks :)
| gharchive/issue | 2020-11-20T02:36:43 | 2025-04-01T06:38:01.946732 | {
"authors": [
"antonioru",
"madhankumar028"
],
"repo": "beautifulinteractions/beautiful-react-diagrams",
"url": "https://github.com/beautifulinteractions/beautiful-react-diagrams/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
705028587 | Overindentation when using "class" as a key in an object
Expected:
{
    class: {
        a: 1,
        b: 2,
        c: 3,
    }
}
Actual:
{
    class: {
            a: 1,
            b: 2,
            c: 3,
        }
}
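For anyone who wants to reproduce this from Python, the jsbeautifier package on PyPI exposes the same beautifier; a minimal sketch (the options below are assumptions, not taken from the original report):

import jsbeautifier  # pip install jsbeautifier

opts = jsbeautifier.default_options()
opts.indent_size = 4
# Before the fix, the object nested under the reserved-word key "class"
# came back over-indented relative to the expected output above.
print(jsbeautifier.beautify("{class: {a: 1, b: 2, c: 3}}", opts))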
Another keyword needing a special case.
Fix for this issue released with #2013
| gharchive/issue | 2020-09-20T01:58:15 | 2025-04-01T06:38:01.948504 | {
"authors": [
"bitwiseman",
"laggingreflex",
"mhnaeem"
],
"repo": "beautify-web/js-beautify",
"url": "https://github.com/beautify-web/js-beautify/issues/1838",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1673010292 | Logout on JWT token expiration
on 401 response cleanup authentication state and redirect to login
Mostly patient portal related at the moment
I can see the token placed in localStorage after being authenticated; it seems to be base64-encoded, though.
The logic I followed was to apply the btoa decoding function, which gave me a UUID-like string (when I try it using the decodeJwt function I get something like 'notValidJWT').
I'm just trying to access the 'exp' property of the JWT token stored in localStorage so that I can remove the token as soon as it expires.
Would anybody be able to give me a hint, please? :)
P.S. I saw there's also a refresh_token handled by the Token interface, but I guess that's managed automatically. However, I'm still wondering whether we need to implement the refresh logic ourselves.
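For reference — assuming the stored value really is a JWT — the exp claim can be read without extra libraries, since the payload segment is just base64url-encoded JSON. A minimal sketch (Python here purely as a language-neutral illustration; the portal itself is TypeScript):

import base64
import json
import time

def jwt_expired(token: str) -> bool:
    # A JWT is "header.payload.signature"; the payload is base64url JSON.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get("exp", 0) <= time.time()  # "exp" is epoch seconds

If decoding yields garbage, the stored value is likely an opaque session id rather than a JWT, and the expiry has to come from the server instead.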
| gharchive/issue | 2023-04-18T12:23:28 | 2025-04-01T06:38:01.970776 | {
"authors": [
"ccastri",
"m0rl"
],
"repo": "beda-software/fhir-emr",
"url": "https://github.com/beda-software/fhir-emr/issues/118",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
276199279 | Not able to run it on my system
Can you give me an example of command-line usage of this?
How did you install it? via pip?
If you installed with pip, just point it at two text files: wer ref.txt hyp.txt.
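For context, the metric itself is just word-level Levenshtein distance divided by the number of reference words; a rough sketch (not the package's actual implementation, which also prints alignments and per-sentence stats):

def wer(ref: str, hyp: str) -> float:
    r, h = ref.split(), hyp.split()  # assumes a non-empty reference
    d = list(range(len(h) + 1))  # one-row dynamic programming table
    for i in range(1, len(r) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(h) + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,  # deletion
                       d[j - 1] + 1,  # insertion
                       prev + (r[i - 1] != h[j - 1]))  # substitution
            prev = cur
    return d[len(h)] / len(r)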
Thank you
| gharchive/issue | 2017-11-22T20:44:24 | 2025-04-01T06:38:02.052387 | {
"authors": [
"belambert",
"shivangsoni"
],
"repo": "belambert/asr-evaluation",
"url": "https://github.com/belambert/asr-evaluation/issues/18",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1013201326 | Comparison operators
Description
Checklist:
General:
[x] I added a new algorithm.
[ ] I fixed an existing algorithm.
[ ] I fixed documentation.
[x] I added documentation.
Contributor Requirements and Miscellaneous:
[x] I have read CONTRIBUTING and agreed to all the terms.
[x] I added docstrings explaining the intent of the code I wrote.
[x] I used Indonesian for the explanations of the code I wrote.
Unit Testing and Linting:
[x] black
[x] flake8
Environment
I'm using:
os = linux
python = python 3.8.10
Issue #32
Oh right, aren't you adding the markdown as well?
It just needs merging, right?
| gharchive/pull-request | 2021-10-01T10:54:44 | 2025-04-01T06:38:02.063079 | {
"authors": [
"athallahmaajid",
"norinorin"
],
"repo": "bellshade/Python",
"url": "https://github.com/bellshade/Python/pull/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
270119995 | Update contributors
👋
Thanks for your contributions! 😄
| gharchive/pull-request | 2017-10-31T21:10:04 | 2025-04-01T06:38:02.075591 | {
"authors": [
"ben-eb",
"rodweb"
],
"repo": "ben-eb/caniuse-lite",
"url": "https://github.com/ben-eb/caniuse-lite/pull/12",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
124938551 | Update mdast to remark
Hey! Just wanted ping and request a name change from mdast to remark.
You’ve probably already seen it but here’s a list of tips for updating. And here’s a list of reasons why to update.
Let me know if I can help! :smile:
I've been meaning to get around to this (& others) but updating is quite time consuming, spent at least an hour converting https://github.com/ben-eb/mdast-autolink-headings. I'll get around to it but it's low down on the priority list right now.
Really? Sorry, I can imagine. I’ll do a PR of the replaces!
Yep, it wasn't as simple as 'just' a name change either, because of babel 6 and babel-tape-runner 1.3.0, so I had to update the whole module. I'm in the process of upgrading the modules I maintain to babel 6, ava, eslint etc, but it's a slog. :frowning:
https://github.com/ben-eb/remark-autolink-headings/commit/31306a0b61ea3c9b204165df92e537ecde843126
:anguished:
https://github.com/ben-eb/mdast-highlight.js/commit/0a13fe724df57856ba3178651fee44bd45343405
Coool! :+1:
| gharchive/issue | 2016-01-05T10:32:20 | 2025-04-01T06:38:02.079797 | {
"authors": [
"ben-eb",
"wooorm"
],
"repo": "ben-eb/mdast-highlight.js",
"url": "https://github.com/ben-eb/mdast-highlight.js/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
807318134 | Try again button for giving permission to mic not working
When I clicked on the mic button and denied it the permission it showed me a prompt to try again. But now when I click on try again it does not work.
Same thing happen to me right now!
| gharchive/issue | 2021-02-12T15:19:26 | 2025-04-01T06:38:02.094304 | {
"authors": [
"rampa2510",
"yezz123"
],
"repo": "benawad/dogehouse",
"url": "https://github.com/benawad/dogehouse/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
818114401 | Add prettier pre-commit hook
Fixes #460
The reason this has so many changes is that pre-commit fixed some files.
Any reason why so many rules are turned off?
this is just preference, they can be turned on.
I thought that extending airbnb was enough to use their rules. So it is unnecessary to turn off the rules. But maybe I am wrong?
@benawad can I ask why you merged this? it seemed that @nadirabbas was still working on this
@WikiRik It is still working tho
@WikiRik you can probably make another PR to optimize the rules.
| gharchive/pull-request | 2021-02-28T03:56:57 | 2025-04-01T06:38:02.096687 | {
"authors": [
"WikiRik",
"nadirabbas"
],
"repo": "benawad/dogehouse",
"url": "https://github.com/benawad/dogehouse/pull/471",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
739049675 | Adrianjg NSFW ascii image
Got user in title with ascii image, I guess people just can't help themselves. I'd rather not paste pic here.
Might want to start blocking github users so at least there is some repercussions.
I believe this user is this one @adrianjgil
I hardcoded the id to ban for now, but will probably need to add a system for this
This reminds me, I will work on a npm package to detect ascii art sometime this weekend.
I hardcoded the id to ban for now, but will probably need to add a system for this
Also not sure what information you get with github auth, but users can change github username, so not sure if the ban can be tied to something more precise like email/s or some other user github uuid. Just a few thoughts.
| gharchive/issue | 2020-11-09T13:55:59 | 2025-04-01T06:38:02.100247 | {
"authors": [
"TheFern2",
"benawad"
],
"repo": "benawad/vscode-stories",
"url": "https://github.com/benawad/vscode-stories/issues/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1040560482 | everything problem
everything broken?? WTF
proof
https://cdn.discordapp.com/attachments/902309433747587112/904450308325920788/unknown.png
| gharchive/issue | 2021-10-31T19:19:43 | 2025-04-01T06:38:02.159402 | {
"authors": [
"benbirchpersonal",
"zacherysun"
],
"repo": "benbirchpersonal/string-impl",
"url": "https://github.com/benbirchpersonal/string-impl/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2600566212 | Added more missing predicates
This PR adds the remaining predicates from #17.
I would greatly appreciate your feedback @benbovy regarding covers and covered_by, as the way it is implemented now appears to mostly work, but fails when checking polygons that partly share a boundary. This is consistent with https://github.com/r-spatial/s2/blob/b495b0df53bffc7dc1ad3780fe8ac208e78cd1bf/R/s2-predicates.R#L121C1-L124C2 for example, but deviates from PostGIS IIUC (cmp. https://postgis.net/docs/ST_Covers.html). From a user perspective, I would expect the test to work.
Thanks for the PR @JoelJaeschke !
I suspect that R s2 deviating from PostGIS is not intentional?
I briefly checked in the S2geometry tests: for two polygons sharing one edge, the open vs. closed polygon model has no influence on the result of the DIFFERENCE Boolean operation (used by "s2_contains") but yields different results for the INTERSECTION operation.
(S2geometry also seems to assume that the edges of polylines - and polygons? - are directed by default and takes that into account for the intersection operation... this might also explain some specific behavior).
Actually, what I thought was incorrect behavior is likely expected and working correctly. I was thinking of this from a planar geometry point of view, but since S2 assumes a spherical earth, the horizontal edges are not actually straight, but rather follow the earth's curvature. Therefore, even though in planar geometry, one polygon would be covered, in spherical geometry this is not true anymore.
I tested this in BigQuery (using the GeoViz tool) and the following query
select st_geogfromtext('polygon((-118 60, -118 23, 34 23, 34 60, -118 60))') p1, st_covers(st_geogfromtext('polygon((-118 60, -118 23, 34 23, 34 60, -118 60))'), st_geogfromtext('polygon((-118 60, -118 23, 34 23, 34 60, -118 60))'))
union all
select st_geogfromtext('polygon((-118 60, -118 23, -18 23, -18 60, -118 60))') p2, st_covers(st_geogfromtext('polygon((-118 60, -118 23, 34 23, 34 60, -118 60))'), st_geogfromtext('polygon((-118 60, -118 23, -18 23, -18 60, -118 60))'));
where in the second select, the polygon is similar to the way I wrote the tests. However, when visualizing the resulting geometries, this is the result:
As can be seen, the red polygon is definitely not covered by the "parent" polygon. BigQuery also returns false for the second ST_Covers. So I think the test rightfully fails. Let me know if this makes sense to you @benbovy . Otherwise, I would simply remove the last test from the PR and it should be good to go.
Ah yes that makes sense!
(BTW we should implement quick visualization of geography objects in Spherely as it helps a lot!)
Instead of simply removing the failing tests, could you please keep these and make sure that the boundaries of the two polygons partially overlap along a great circle (longitude)?
(I'm also curious to see if it works as expected when the overlapping boundaries correspond to a shell ring for polygon A and a hole ring for polygon B... I guess it should work?)
Instead of simply removing the failing tests, could you please keep these and make sure that the boundaries of the two polygons partially overlap along a great circle (longitude)?
This has turned out to be much tougher than I anticipated. The standard S2BooleanOperation::Contains in s2geometry does not handle shared boundaries on polygons as I had hoped... (well, it does, but not in a way that would help when implementing the covers logic, see https://github.com/google/s2geometry/issues/387).
I am not sure whether I grok the concepts in s2geometry fully, but as I understood, there is no API that handles the covers check natively. The S2Polygon class has a method called ApproxContains, which solves the issue of shared boundaries on polygons, but only when using an approximation, i.e. no exact checks. However, this would require some custom dispatching logic that differs between polygons and polylines/points and I am not quite sure whether this is something that should be implemented in spherely or rather in s2geography?
Below is some code that demonstrates the issue in more detail (only depends on s2geometry):
#include <iostream>
#include <memory>

#include <s2/s2latlng.h>
#include <s2/s2point.h>
#include <s2/s2loop.h>
#include <s2/s2polygon.h>
#include <s2/s2builder.h>
#include <s2/s2builderutil_s2polygon_layer.h>
#include <s2/s2boolean_operation.h>

int main(int argc, char** argv) {
    const std::vector<S2Point> parent_vertices = {
        S2LatLng::FromDegrees(60, -118).ToPoint(),
        S2LatLng::FromDegrees(23, -118).ToPoint(),
        S2LatLng::FromDegrees(23, 34).ToPoint(),
        S2LatLng::FromDegrees(60, 34).ToPoint()
    };
    std::unique_ptr<S2Loop> parent_loop = std::make_unique<S2Loop>(std::move(parent_vertices));
    S2Polygon parent_poly(std::move(parent_loop));

    const std::vector<S2Point> interior_vertices = {
        S2LatLng::FromDegrees(40, -117).ToPoint(),
        S2LatLng::FromDegrees(37, -117).ToPoint(),
        S2LatLng::FromDegrees(37, -116).ToPoint(),
        S2LatLng::FromDegrees(40, -116).ToPoint()
    };
    std::unique_ptr<S2Loop> interior_loop = std::make_unique<S2Loop>(std::move(interior_vertices));
    S2Polygon interior_poly(std::move(interior_loop));

    const std::vector<S2Point> shared_bound_vertices = {
        S2LatLng::FromDegrees(40, -118).ToPoint(),
        S2LatLng::FromDegrees(23, -118).ToPoint(),
        S2LatLng::FromDegrees(23, 34).ToPoint(),
        S2LatLng::FromDegrees(40, 34).ToPoint()
    };
    std::unique_ptr<S2Loop> shared_bound_loop = std::make_unique<S2Loop>(std::move(shared_bound_vertices));
    S2Polygon shared_bound_poly(std::move(shared_bound_loop));

    const std::vector<S2Point> crossing_vertices = {
        S2LatLng::FromDegrees(40, -120).ToPoint(),
        S2LatLng::FromDegrees(37, -120).ToPoint(),
        S2LatLng::FromDegrees(37, -116).ToPoint(),
        S2LatLng::FromDegrees(40, -116).ToPoint()
    };
    std::unique_ptr<S2Loop> crossing_loop = std::make_unique<S2Loop>(std::move(crossing_vertices));
    S2Polygon crossing_poly(std::move(crossing_loop));

    S2BooleanOperation::Options options;
    options.set_polygon_model(S2BooleanOperation::PolygonModel::CLOSED);
    options.set_polyline_model(S2BooleanOperation::PolylineModel::OPEN);

    const auto contains_interior = S2BooleanOperation::Contains(parent_poly.index(), interior_poly.index()) ? "true" : "false";
    std::cout << "Contains interior? " << contains_interior << std::endl;

    const auto contains_shared = S2BooleanOperation::Contains(parent_poly.index(), shared_bound_poly.index()) ? "true" : "false";
    std::cout << "Contains shared? " << contains_shared << std::endl;

    const S1Angle tol = S1Angle::Degrees(1e-16);
    const auto approx_contains_shared = parent_poly.ApproxContains(shared_bound_poly, tol) ? "true" : "false";
    std::cout << "Approx contains shared? " << approx_contains_shared << std::endl;

    const auto contains_crossing = S2BooleanOperation::Contains(parent_poly.index(), crossing_poly.index()) ? "true" : "false";
    std::cout << "Contains crossing? " << contains_crossing << std::endl;

    const auto approx_contains_crossing = parent_poly.ApproxContains(crossing_poly, tol) ? "true" : "false";
    std::cout << "Approx contains crossing? " << approx_contains_crossing << std::endl;

    return 0;
}
Output when called is
Contains interior? true
Contains shared? false
Approx contains shared? true
Contains crossing? false
Approx contains crossing? false
@JoelJaeschke thanks for further diving into this!
Exploring this using R's s2 (given this is already implemented there), this is a reproducer of your example above I think (with the shared boundary that gives a wrong result):
# s2_make_polygon takes vector of longitudes and vector of latitudes
> parent_poly <- s2_make_polygon(c(-118, -118, 34, 34), c(60, 23, 23, 60))
> shared_bound_poly <- s2_make_polygon(c(-118, -118, 34, 34), c(40, 23, 23, 40))
> s2_contains(parent_poly, shared_bound_poly)
[1] FALSE
> s2_covers(parent_poly, shared_bound_poly)
[1] FALSE
and this indeed gives False while you would expect True for both cases (so this is actually not specific to covers being added here, but already an issue with contains as well, AFAIU)
Testing what has been said on the s2geometry tracker — that this is a 50/50 chance of falling left or right of the edge — I tested with some other coordinates along the edge:
> for (lat_high in 40:30) {
+ shared_bound_poly <- s2_make_polygon(c(-118, -118, 34, 34), c(lat_high, 23, 23, lat_high))
+ print(paste(lat_high, s2_contains(parent_poly, shared_bound_poly), s2_covers(parent_poly, shared_bound_poly)))
+ }
[1] "40 FALSE FALSE"
[1] "39 FALSE FALSE"
[1] "38 TRUE TRUE"
[1] "37 FALSE FALSE"
[1] "36 FALSE FALSE"
[1] "35 TRUE TRUE"
[1] "34 FALSE FALSE"
[1] "33 FALSE FALSE"
[1] "32 FALSE FALSE"
[1] "31 FALSE FALSE"
[1] "30 TRUE TRUE"
Here it was not 50/50, but at least you see that indeed it is a bit (deterministically) random.
So while this is definitely a usability issue (and ideally we would be able to solve that; I thought that maybe snapping would have helped, but that didn't seem to make a difference), I would for now just implement this as you were doing, and document this as a gotcha (also given that R s2 is doing the same). And then we can later try to improve this.
Did you also test this example in BigQuery? (wondering if it can handle this case better ..)
For testing, we can use an example case where there is a shared boundary, but only fully shared edges (so there is never a vertex that falls somewhere in the middle of the edge of the other). For example:
> parent_poly <- s2_make_polygon(c(-118, -118, -118, 34, 34, 34), c(60, 40, 23, 23, 40, 60))
> shared_bound_poly <- s2_make_polygon(c(-118, -118, 34, 34), c(40, 23, 23, 40))
> s2_contains(parent_poly, shared_bound_poly)
[1] TRUE
> s2_covers(parent_poly, shared_bound_poly)
[1] TRUE
Hey @jorisvandenbossche, thanks for your feedback.
So while this is definitely a usability issue (and ideally we would be able to solve that. I thought that maybe snapping would have helped, but that didn't seem to make a difference ..),
Behind the scenes, the ApproxContains function does use snapping with the given tolerance, if that is what you mean? Maybe, if this option were more exposed in higher-level APIs, it could be used. But in general, I agree that it's probably easiest to just document this example and make the user aware.
Did you also test this example in BigQuery? (wondering if it can handle this case better ..)
I did, BigQuery does in fact handle this case properly, although I only tested this with one polygon and ST_Covers. But happy to re-test some more cases.
For testing, we can use an example case where there is a shared boundary, but only fully shared edges (so there is never a vertex that falls somewhere in the middle of the edge of the other).
Sounds good to me, I will push the changes
I implemented all the feedback from above and documented the behavior in the tests a bit. If there is anything you would like me to change, let me know.
After thinking about this more, there is probably one way this could work properly. Given two MutableS2ShapeIndex instances a and b, we would have to build the intersection of the two and then test the result for equality against b. If this holds, it should mean that b is fully covered by a.
I gave implementing this a go, but I struggle with how to make it work when just given two MutableS2ShapeIndex instances. Maybe someone who has more experience with s2geometry can take a stab at this.
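In rough Python-level pseudocode the idea looks like this (intersection and equals are hypothetical helpers standing in for the corresponding S2BooleanOperation calls — wiring those up at the shape-index level is exactly the open question):

def covers(a, b):
    # b is covered by a iff clipping b to a leaves b unchanged.
    clipped = intersection(a, b)  # hypothetical wrapper over S2BooleanOperation INTERSECTION
    return equals(clipped, b)  # semantic equality, not vertex-by-vertex equality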
Hey @jorisvandenbossche, is there anything missing that I should add and/or change here?
Given that this is something that ideally should be solved in the s2geography layer (since this also affects R), I opened an issue over there to describe this specific case -> https://github.com/paleolimbot/s2geography/issues/44
So let's continue the discussion there.
| gharchive/pull-request | 2024-10-20T15:25:26 | 2025-04-01T06:38:02.191535 | {
"authors": [
"JoelJaeschke",
"benbovy",
"jorisvandenbossche"
],
"repo": "benbovy/spherely",
"url": "https://github.com/benbovy/spherely/pull/56",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
937062730 | Cannot read property 'values' of undefined", falling back to the default summary source=console
ERRO[0054] handleSummary() failed with error "TypeError: ejs:294
292| Iterations
293|
294| Total<%= data.metrics.iterations.values.count %>
295| Rate<%= data.metrics.iterations.values.rate.toFixed(2) %>/s
296|
ERRO[0059] handleSummary() failed with error "TypeError: ejs:294
292| Iterations
293|
294| Total<%= data.metrics.iterations.values.count %>
295| Rate<%= data.metrics.iterations.values.rate.toFixed(2) %>/s
296|
297|
| gharchive/issue | 2021-07-05T12:54:26 | 2025-04-01T06:38:02.196635 | {
"authors": [
"hattersharath"
],
"repo": "benc-uk/k6-reporter",
"url": "https://github.com/benc-uk/k6-reporter/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1449917815 | API deprecate Objective.to_dict in favor of Objective.get_objective
closes #460
Codecov Report
Merging #489 (c5dc0e1) into main (5bd2525) will decrease coverage by 0.04%.
The diff coverage is 50.00%.
:exclamation: Current head c5dc0e1 differs from pull request most recent head 542bdb5. Consider uploading reports for the commit 542bdb5 to get more accurate results
@@ Coverage Diff @@
## main #489 +/- ##
==========================================
- Coverage 53.10% 53.06% -0.05%
==========================================
Files 42 42
Lines 2736 2742 +6
Branches 500 501 +1
==========================================
+ Hits 1453 1455 +2
- Misses 1171 1174 +3
- Partials 112 113 +1
I tested it and it works as expected. However, if we update a benchmark by replacing to_dict with get_objective, and the benchmark is run with a previous version of benchopt, it will no longer work, as Objective does not have a to_dict method...
Is there a way to not break things? Keep to_dict in the benchmark, have it call get_objective, and warn the user to update benchopt?
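A minimal sketch of that shim inside a benchmark's Objective (hedged — the payload attributes below are placeholders, and the exact warning text is only a suggestion):

import warnings

class Objective:
    def get_objective(self):
        return dict(X=self.X, y=self.y)  # placeholder payload

    def to_dict(self):
        # Backward-compat shim for benchopt versions that still call to_dict.
        warnings.warn(
            "Objective.to_dict is deprecated; benchopt now uses get_objective.",
            DeprecationWarning,
        )
        return self.get_objective()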
This is indeed an issue... I think we should add a min_benchopt_version in the benchmarks, which we bump once we break compatibility; if you run a benchmark with an older version of benchopt, it warns that you should update.
| gharchive/pull-request | 2022-11-15T14:54:48 | 2025-04-01T06:38:02.207056 | {
"authors": [
"codecov-commenter",
"mathurinm",
"tomMoral"
],
"repo": "benchopt/benchopt",
"url": "https://github.com/benchopt/benchopt/pull/489",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
188202991 | Uatp / Airplus Format
First of all, the library is amazing and works like a charm.
I need that uatp/airplus credit card format is added to format it correctly, so, i open this issue asking if its possible to do that with the pattern of uatp/airplus credit card:
pattern:"1[0-9]{12,15}"
Example of uatp/airplus credit card: 1920 0200 000000.
Thank u very much.
Regards.
I added this:
exports.uatp = new Type('Uatp', {
  pattern: /^1[0-9]{12,15}$/,
  eagerPattern: /^(1[1-4])/
})
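As a quick sanity check in Python (the pattern from the original request):

import re

UATP = re.compile(r"^1[0-9]{12,15}$")  # 13-16 digits, starting with 1

number = "1920 0200 000000".replace(" ", "")
print(bool(UATP.match(number)))  # True for the example card above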
| gharchive/issue | 2016-11-09T09:41:06 | 2025-04-01T06:38:02.215439 | {
"authors": [
"llesterdayBay",
"sajanyamaha"
],
"repo": "bendrucker/angular-credit-cards",
"url": "https://github.com/bendrucker/angular-credit-cards/issues/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
115248451 | Update dependencies
Update dependencies and make some compatibles changes.
Thanks!
| gharchive/pull-request | 2015-11-05T09:58:31 | 2025-04-01T06:38:02.216242 | {
"authors": [
"bendrucker",
"th507"
],
"repo": "bendrucker/git-log-parser",
"url": "https://github.com/bendrucker/git-log-parser/pull/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1503731313 | cache busting for demos by installing hamster-html
hamster-html is a static website builder, and it has this special cache-busting feature that fixes caching problems for users.
website repo and humanoid repo already use hamster-html to build their web pages
we need to apply this same pattern, of using hamster-html to build a website with cache busting, onto every demo repo that we have (nubs, mule, etc)
the github workflows will likely need to be updated, at least release.yaml
website repo and humanoid repo already use hamster-html to build their web pages
humanoid doesn't have cache busting implemented yet, will set it up tho
| gharchive/issue | 2022-12-19T23:04:25 | 2025-04-01T06:38:02.237707 | {
"authors": [
"PaulAroo",
"chase-moskal"
],
"repo": "benevolent-games/website",
"url": "https://github.com/benevolent-games/website/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
102271114 | A∩B∩(!C) highlight
I want to start off by saying thank you for creating such an awesome library! I am looking to create visualizations that will allow me to light up A∩B∩(!C), like that shown in the screenshot below.
Any chance this is already in the works? Thanks again!
You're the second person to ask for this - https://github.com/benfred/venn.js/issues/23 has the original request.
It's a little tricky to extend the existing approach because the areas might not be contiguous (like in that issue).
I've wondered about using SVG clipping to do this: define the path as it is now and clip the other circles from it. I think that might be the easiest way to do it, but I haven't had time to try =(
| gharchive/issue | 2015-08-21T00:57:02 | 2025-04-01T06:38:02.249797 | {
"authors": [
"benfred",
"josephshum"
],
"repo": "benfred/venn.js",
"url": "https://github.com/benfred/venn.js/issues/43",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
90677891 | Tagging releases
When you bump the npm version, it would be helpful to also tag the release, so tarballs get generated.
I might package private for Debian, if I can lay my hands on a tarball...
Any news on the matter?
Please, I would just need a "git tag v0.1.6" followed by "git push --tags" and it's done... Or perhaps github makes it even easier than that...
Done! Sorry for the delay, and I'll try to be better about this in the future (this is one of my older repositories, unfortunately somewhat neglected).
| gharchive/issue | 2015-06-24T13:37:14 | 2025-04-01T06:38:02.363955 | {
"authors": [
"SnarkBoojum",
"benjamn"
],
"repo": "benjamn/private",
"url": "https://github.com/benjamn/private/issues/10",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
124617083 | Correct label position only when overlapped label is actually shown
Closes #65
Any chance of getting this pulled? Is there something I need to do?
Sorry @obfuscoder - I've been horribly delinquent with this project. I'll take a look at this tonight.
Yeah this makes sense, cool. Actually the change needed to be made to the raw source code (in the d3pie-source folder) rather than the generated d3pie folder - but that's fine. I'm going to merge this in now and tweak it myself.
I really appreciate the pull request, and again sorry for the incredibly long wait.
| gharchive/pull-request | 2016-01-02T23:23:18 | 2025-04-01T06:38:02.380997 | {
"authors": [
"benkeen",
"obfuscoder"
],
"repo": "benkeen/d3pie",
"url": "https://github.com/benkeen/d3pie/pull/98",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
650965063 | After I choose a move, it takes too much effort to get back to the move chooser
Solution: After a move is played, automatically focus the move choosing button.
I thought I just had to add $("#cycleMovesBtn").focus() to the play-move handler, but our hacky keyboard/gamepad setup breaks it. We'll need to rework that to fix this.
| gharchive/issue | 2020-07-04T22:24:00 | 2025-04-01T06:38:02.384533 | {
"authors": [
"benknoble"
],
"repo": "benknoble/limchess",
"url": "https://github.com/benknoble/limchess/issues/26",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
677527596 | no-cycle rule fails to recognize external module
In a project setup with yarn workspaces, no-cycle fails to recognize a @scope/package package as external, even if the path is under node_modules. The reason is that no-cycle uses isExternalModule:
https://github.com/benmosher/eslint-plugin-import/blob/master/src/rules/no-cycle.js#L44
but there is a regex condition that says that module is not "external", but "scoped".
https://github.com/benmosher/eslint-plugin-import/blob/3e65a70bc73e404ace72ee858889e39732284d12/src/core/importType.js#L45
I think the right solution for consistency is to use resolveImportType from importType?
https://github.com/benmosher/eslint-plugin-import/blob/master/src/core/importType.js#L83-L93
Alternatively, replace the regular expression with "anything that does not start with a dot".
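Sketched language-neutrally (Python only for brevity — the real logic lives in importType.js), the proposed classification would be roughly:

import re

def import_kind(name):
    # Proposed rule: anything that is not a relative path counts as
    # external, whether scoped ("@scope/pkg") or bare ("lodash").
    if re.match(r"^\.\.?(/|$)", name):
        return "relative"
    return "external"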
Actually, my motivation is not necessarily to have the import considered external, but to make sure that no-cycle reads the target file with the right settings from the target .eslintrc! This was more of a workaround.
@davazp any chance you could provide a repro repo, or a PR with a failing test case? :-)
| gharchive/issue | 2020-08-12T09:05:37 | 2025-04-01T06:38:02.396227 | {
"authors": [
"davazp",
"ljharb"
],
"repo": "benmosher/eslint-plugin-import",
"url": "https://github.com/benmosher/eslint-plugin-import/issues/1877",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1265516089 | Domain discriminator
Thanks for your work. I have a question about your implementation of the domain discriminator. I use the two sketch datasets you recommend in your paper and the domain discriminator recommended in your project, using 512×7×7 feature maps for channel attention (as done in CBAM), flattened as input to the domain discriminator. But I still can't reach the voxel IoU reported in your paper after adding the domain discriminator on the airplane class; my runs only achieve 0.490 and 0.486. Can you give me some suggestions? Looking forward to your reply!
Hi, sorry for the late reply!
We do find training directly with the domain discriminator to be somewhat unstable. Please try training the model without the domain discriminator first, then finetune with it under a low learning rate.
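For what it's worth, a generic shape for such a discriminator (a hedged PyTorch sketch, not the authors' exact architecture — the 512×7×7 input and layer widths are assumptions taken from the question above):

import torch.nn as nn

class DomainDiscriminator(nn.Module):
    def __init__(self, in_features=512 * 7 * 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),  # domain logit (e.g. sketch vs. synthetic)
        )

    def forward(self, feat):
        return self.net(feat)

Per the advice above, pretrain the main model first and only then enable the adversarial loss with a small learning rate, since joint training from scratch tends to be unstable.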
| gharchive/issue | 2022-06-09T02:36:32 | 2025-04-01T06:38:02.402336 | {
"authors": [
"QHxingchen",
"bennyguo"
],
"repo": "bennyguo/sketch2model",
"url": "https://github.com/bennyguo/sketch2model/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
145935775 | Webpakc Config use resolve.root
Using the webpack option resolve.root is more appropriate than changing resolve.modulesDirectories.
Documentation
Thanks @albertorestifo!
| gharchive/pull-request | 2016-04-05T09:35:07 | 2025-04-01T06:38:02.422076 | {
"authors": [
"albertorestifo",
"bensmithett"
],
"repo": "bensmithett/webpack-css-example",
"url": "https://github.com/bensmithett/webpack-css-example/pull/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1617641234 | Add blue wires to TTF board for instrumentation.
Need NIC_PWR_GOOD wired to UART RX pin.
Need all scan chain signals blue wired to attach probes.
Done. Thanks.
| gharchive/issue | 2023-03-09T17:04:04 | 2025-04-01T06:38:02.440796 | {
"authors": [
"bentprong"
],
"repo": "bentprong/ocp_ttf",
"url": "https://github.com/bentprong/ocp_ttf/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
198684450 | Problem implementing garage door - Home app doesn't show correct status
I'm implementing the GarageDoor interface, however I'm finding that the iOS Home app doesn't display the correct door state. Here's my implementation:
public class RemoteGarageDoor implements GarageDoor {
/**
* A connection to the door which allows data to be sent and received.
*/
private final GarageDoorConnection connection;
private final int id;
private final String label;
private final String manufacturer;
private final String model;
private final String serialNumber;
/**
* Futures passed to {@link #getCurrentDoorState()} which must be completed next time the door
* status is updated.
*/
private final Set<CompletableFuture<DoorState>> getDoorStateFutures =
ConcurrentHashMap.newKeySet();
/**
* Futures passed to {@link #getObstructionDetected()} which must be completed next time the
* door status is updated.
*/
private final Set<CompletableFuture<Boolean>> getObstructionDetectedFutures =
ConcurrentHashMap.newKeySet();
/**
* Future from the last call to {@link #setTargetDoorState(DoorState)} which must be
* completed when the door reaches the target state.
*/
private CompletableFuture<Void> targetStateReachedFuture;
private DoorState lastKnownDoorState = CLOSED;
private boolean lastKnownObstructionDetected = false;
private DoorState targetDoorState = CLOSED;
/**
* Whether or not the current target door state has been sent to the door.
*/
private boolean targetHasBeenRequested = true;
private HomekitCharacteristicChangeCallback doorStateCallback;
private HomekitCharacteristicChangeCallback targetDoorStateCallback;
private HomekitCharacteristicChangeCallback obstructionDetectedCallback;
public RemoteGarageDoor(final GarageDoorConnection connection,
final int id,
final String label,
final String manufacturer,
final String model,
final String serialNumber) {
this.connection = checkNotNull(connection, "connection cannot be null.");
this.id = id;
this.label = checkNotNull(label, "label cannot be null.");
this.manufacturer = checkNotNull(manufacturer, "manufacturer cannot be null.");
this.model = checkNotNull(model, "model cannot be null.");
this.serialNumber = checkNotNull(serialNumber, "serialNumber cannot be null.");
// Use a background thread for looping
Executors.newSingleThreadExecutor().execute(() -> {
while (true) {
try {
loop();
} catch (final Exception e) {
Lumberjack.log(ERROR, "Error occurred in RemoteGarageDoor loop.", e);
}
}
});
}
@Override
public String getLabel() {
return label;
}
@Override
public String getManufacturer() {
return manufacturer;
}
@Override
public String getModel() {
return model;
}
@Override
public String getSerialNumber() {
return serialNumber;
}
@Override
public int getId() {
return id;
}
@Override
public void identify() {
Lumberjack.log(WARNING, "identify() not implemented for RemoteGarageDoor.");
}
@Override
public CompletableFuture<DoorState> getCurrentDoorState() {
final CompletableFuture<DoorState> future = new CompletableFuture<>();
// Save the future so that it can be completed by the looper
getDoorStateFutures.add(future);
return future;
}
@Override
public CompletableFuture<DoorState> getTargetDoorState() {
return CompletableFuture.completedFuture(targetDoorState);
}
@Override
public CompletableFuture<Boolean> getObstructionDetected() {
final CompletableFuture<Boolean> future = new CompletableFuture<>();
// Save the future so that it can be completed by the looper
getObstructionDetectedFutures.add(future);
return future;
}
@Override
public CompletableFuture<Void> setTargetDoorState(final DoorState targetDoorState) throws
Exception {
final CompletableFuture<Void> future = new CompletableFuture<>();
if (this.targetDoorState != targetDoorState) {
// Deliver callback if necessary
if (targetDoorStateCallback != null) {
targetDoorStateCallback.changed();
}
// Save the new target state
this.targetDoorState = targetDoorState;
// Save the future so it can be completed by the looper
targetStateReachedFuture = future;
// Indicate to the looper that the new target has not yet been requested
targetHasBeenRequested = false;
}
return future;
}
@Override
public void subscribeCurrentDoorState(final HomekitCharacteristicChangeCallback callback) {
doorStateCallback = callback;
}
@Override
public void subscribeTargetDoorState(final HomekitCharacteristicChangeCallback callback) {
targetDoorStateCallback = callback;
}
@Override
public void subscribeObstructionDetected(final HomekitCharacteristicChangeCallback callback) {
obstructionDetectedCallback = callback;
}
@Override
public void unsubscribeCurrentDoorState() {
doorStateCallback = null;
}
@Override
public void unsubscribeTargetDoorState() {
targetDoorStateCallback = null;
}
@Override
public void unsubscribeObstructionDetected() {
obstructionDetectedCallback = null;
}
/**
* Requests that the door open/close etc. or requests a status update from the door. The
* former has priority.
*/
private void loop() {
if (targetDoorState != lastKnownDoorState && !targetHasBeenRequested) {
requestDoorStateChange();
targetHasBeenRequested = true;
} else {
final ResponsePacket response = connection.sendRequestAndAwaitResponse(
RequestPacket.newGetStatusUpdatedRequestPacket());
// The door may not always be reachable.
if (response != null) {
processUpdate(response);
}
}
}
/**
* Sends a request to the door to change state.
*/
private void requestDoorStateChange() {
//TODO implement confirmation return packet, OK for now while using a wired connection
if (targetDoorState == OPEN) {
connection.sendRequest(RequestPacket.newOpenDoorRequestPacket());
} else if (targetDoorState == CLOSED) {
connection.sendRequest(RequestPacket.newCloseDoorRequestPacket());
} else if (targetDoorState == STOPPED) {
connection.sendRequest(RequestPacket.newStopDoorRequestPacket());
} else {
throw new RuntimeException("The target state is invalid.");
}
}
/**
* Processes a status update from the door. Pending futures are completed if appropriate and
* the necessary callbacks are delivered.
*
* @param update
* the update from the door, not null
*/
private void processUpdate(final ResponsePacket update) {
// Copy the current status
final DoorState oldState = lastKnownDoorState;
final boolean oldObstructionDetected = lastKnownObstructionDetected;
// Update the current status
lastKnownDoorState = update.getDoorState();
lastKnownObstructionDetected = update.getObstructionDetectedStatus();
// If the target state has been reached, complete the relevant future
if (targetDoorState == update.getDoorState() && targetStateReachedFuture != null) {
targetStateReachedFuture.complete(null);
targetStateReachedFuture = null; // Completed, so discard
}
// Complete futures of getDoorState()
for (final CompletableFuture<DoorState> future : getDoorStateFutures) {
future.complete(update.getDoorState());
getDoorStateFutures.remove(future); // Completed, so discard
}
// Complete futures of getObstructionDetected()
for (final CompletableFuture<Boolean> future : getObstructionDetectedFutures) {
future.complete(update.getObstructionDetectedStatus());
getObstructionDetectedFutures.remove(future); // Completed, so discard
}
// Deliver door state change callback if necessary
if (oldState != update.getDoorState() && doorStateCallback != null) {
doorStateCallback.changed();
}
// Deliver obstruction detected changed callback if necessary
if (oldObstructionDetected != update.getObstructionDetectedStatus()
&& obstructionDetectedCallback != null) {
obstructionDetectedCallback.changed();
}
}
}
Problems occur when the door state is changed. For example:
The iOS home app shows the status of closed. This is expected and reflects the internal representation of the door state.
Using the home app, a request to open the door is made.
The mock door opens and my garage door implementation completes appropriate saved futures.
New calls to getCurrentDoorState() result in the future being completed with a status of open.
Despite this, the home app continues to show "Opening..." as the door status.
My unit tests show that the futures are being completed as expected and the correct callbacks are being delivered. Also right now I'm mocking out the connection to the physical door, so there's no packet loss or other networking issues.
Any ideas?
After further investigation, I've found that execution blocks when HomekitCharacteristicChangeCallback.changed() is called in void processUpdate(final ResponsePacket update). I mocked these callbacks in my unit tests, which would explain why they pass while my integration tests don't. Is this blocking behaviour expected?
@beowulfe
Is there an expected response time from the characteristic requests? I'm trying to do something somewhat similar to Matthew, in which I want to delegate all the actual work to the microcontroller on the garage door, and have my HAP server send MQTT requests and correlate the responses.
Currently I make a CompletableFuture when the value is requested. Any subsequent requests while I'm waiting for a response are given the same instance of the CompletableFuture. Then when I get a response back over MQTT I notify the subscriber (if any), and then complete the future, satisfying all the pending requests.
However since the controller may be offline, I need to do something meaningful if I don't get a response back in a reasonable time, or the gets for will block for long periods.
Is this a suitable approach to take, and what would be reasonable timeout behavior? I think returning null generates errors in the framework, but perhaps a suitable "no response" behavior in iOS will still happen.
If you can answer these questions I'd be happy to write a bit of documentation on the wiki page, and update the demo, perhaps add something a bit more complicated than the light.
Are you still having problems with the latest main branch?
I abandoned my project years ago.
| gharchive/issue | 2017-01-04T11:15:28 | 2025-04-01T06:38:02.458345 | {
"authors": [
"MatthewDavidBradshaw",
"MatthewTamlin",
"TyrantFox",
"ccutrer"
],
"repo": "beowulfe/HAP-Java",
"url": "https://github.com/beowulfe/HAP-Java/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
27049984 | Consumer Example
I'm trying to figure out the proper way to use Bernard for consuming messages. I have the following code(using IronMQ driver):
$router = new ContainerAwareRouter( $container, array( 'UpdatePostSnapshots' => 'update_snapshots_service' ) );
$consumer = new Consumer( $router, new MiddlewareBuilder() );
$consumer->consume( $queueFactory->create( 'post-update-snapshots-msg' ), array( 'max-runtime' => 900 ) );
I run it as a custom console command in Symfony. It works well; however, the IronMQ dashboard shows several thousand API requests within a few minutes (just for one consumed message).
How do I optimize this so the worker doesn't hammer IronMQ with a million requests a day for what isn't a large queue?
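The usual mitigation is long polling and/or backing off between empty receives instead of spinning; schematically (Python for illustration — Bernard itself is PHP, and the IronMQ API additionally supports a server-side wait parameter for long polling):

import time

def consume(queue, handler, max_runtime=900):
    deadline = time.time() + max_runtime
    delay = 0.5
    while time.time() < deadline:
        message = queue.receive()  # hypothetical driver call
        if message is None:
            time.sleep(delay)  # back off while the queue is empty
            delay = min(delay * 2, 30.0)  # cap the sleep
            continue
        delay = 0.5  # reset after real work
        handler(message)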
Looks like you sorted it out.
| gharchive/issue | 2014-02-06T13:45:01 | 2025-04-01T06:38:02.515020 | {
"authors": [
"sagikazarmark",
"websirnik"
],
"repo": "bernardphp/bernard",
"url": "https://github.com/bernardphp/bernard/issues/100",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
253344976 | allocation exceeds
@bertrandmartel,
Good Morning..
One more bug to debug
Same as #34: use speedTestSocket.setUploadStorageType(UploadStorageType.FILE_STORAGE); to use file storage rather than the default UploadStorageType.RAM_STORAGE.
| gharchive/issue | 2017-08-28T14:23:05 | 2025-04-01T06:38:02.536917 | {
"authors": [
"bertrandmartel",
"juniormj"
],
"repo": "bertrandmartel/speed-test-lib",
"url": "https://github.com/bertrandmartel/speed-test-lib/issues/51",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
595500247 | SSO possible solution
The single sign-on alternative to the OneAll approach which was previously mentioned in the meeting on Sunday (OneAll SSO).
Additional context
The setup we need to perform is mentioned here in the link below
https://auth0.com/docs/cms/wordpress/installation#option-1-standard-setup
So, who is responsible for the blog? There are still many days, and you have found the documentation, so go ahead and do it.
@SachinMCReddy
Since we decided to use the mini-blog as the solution for the blog feature, SSO is no longer needed.
| gharchive/issue | 2020-04-06T23:59:49 | 2025-04-01T06:38:02.542313 | {
"authors": [
"SachinMCReddy",
"bestksl",
"mhassany"
],
"repo": "bestksl/SSW695",
"url": "https://github.com/bestksl/SSW695/issues/93",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
281820877 | Implement version of BetaCalibration with only one parameter a (a=b,m=0.5)
There are multiple versions of BetaCalibration with a different number of parameters:
abm: all parameters are estimated
ab: parameter m is fixed to 0.5 and a and b are estimated
am: parameter b is fixed with the same value as a, and a and m are estimated
The commit f879d853b8341df4fe3fd4215e854b56d36e2691 in the branch feat_param_a has an initial version that needs to be revised and tested.
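For context, all variants fit a logistic regression on transformed scores: the full map is calibrated(s) = 1 / (1 + 1 / (e^c * s^a / (1 - s)^b)). A rough sketch of fitting the 'ab' variant (not the package's exact code, which also refits when a or b comes out negative):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_beta_ab(scores, labels, eps=1e-12):
    s = np.clip(scores, eps, 1 - eps)
    # a multiplies log(s), b multiplies -log(1 - s), c is the intercept.
    X = np.column_stack([np.log(s), -np.log(1 - s)])
    lr = LogisticRegression(C=1e12)  # effectively unregularised
    lr.fit(X, labels)
    (a, b), c = lr.coef_[0], lr.intercept_[0]
    return a, b, c

The one-parameter variant requested here corresponds to a = b with a zero intercept (so that m = 0.5), i.e. a single coefficient a on the log-odds feature log(s / (1 - s)), which makes the calibration map symmetric around 0.5.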
I see that the problem is that line 307 is only a placeholder to save the results, but it is not used later for the predictions. I was expecting the new _BetaCal to predict symmetric functions, but that is not the case.
I see that it is necessary that the LogisticRegression is fitted with equal variance on both classes. I am trying to find out how this is done.
Commit 67a6a7b solves the a = b problem, but the m does not seem to behave as an m=0.5
@perellonieto, I tried using the "a" option from the PyPi repository (pip install betacal), but it still has the bug fixed above. Can you merge the fix you implemented here? Is there still an issue with the m parameter?
Commit 124aefb8c2e8d124cf18a0a24755db2e93411e35 (now merged into master) solves these issues.
The new version will soon be available in PyPi (pip install betacal).
The new version (1.1.0) is already available through pip install betacal.
Thanks for the patience.
| gharchive/issue | 2017-12-13T16:52:28 | 2025-04-01T06:38:02.552616 | {
"authors": [
"perellonieto",
"simontindemans",
"tmfilho"
],
"repo": "betacal/python",
"url": "https://github.com/betacal/python/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
701218720 | Taking the bank into account
We should take our bank and our investments into account.
Investing €150,000 at LCL pollutes much more than a small current account at Crédit Coopératif.
See #314
See #314
| gharchive/issue | 2020-09-14T15:43:51 | 2025-04-01T06:38:02.614381 | {
"authors": [
"Benjamin-Boisserie-ABC",
"publicodes"
],
"repo": "betagouv/ecolab-data",
"url": "https://github.com/betagouv/ecolab-data/issues/464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |