| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
| stringlengths 4–10 | stringlengths 4–2.14M | stringclasses 2 values | timestamp[s] 2001-05-16 21:05:09 – 2025-01-01 03:38:30 | stringdate 2025-04-01 04:05:38 – 2025-04-01 07:14:06 | dict |
183231569 | Remove the Jenkins setup wizard
On first provisioning of the vagrant box, when running the Jenkins server, the user has to go through a security setup wizard. This should be removed to further speed up the process of having Jenkins ready.
I've done this before so I might be able to help; it's a case of adding a lot of Jenkins config files and is quite verbose.
Any contributions are welcome really.
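For anyone landing here later, the standard way to skip the wizard is the `jenkins.install.runSetupWizard=false` system property. A hedged Python sketch of a provisioning step that appends it to a Debian-style `/etc/default/jenkins` `JAVA_ARGS` line (the defaults-file layout is an assumption about the box, and `disable_setup_wizard` is a hypothetical helper name):

```python
# The jenkins.install.runSetupWizard=false system property is the standard
# way to skip the wizard; the JAVA_ARGS defaults-file layout below is an
# assumption about a Debian-style box.
WIZARD_FLAG = "-Djenkins.install.runSetupWizard=false"

def disable_setup_wizard(defaults_text):
    """Append the wizard-skip flag to the JAVA_ARGS line if it is missing."""
    out = []
    for line in defaults_text.splitlines():
        if line.startswith("JAVA_ARGS=") and WIZARD_FLAG not in line:
            if line.endswith('"'):
                # Keep the flag inside the existing double quotes.
                line = line[:-1] + " " + WIZARD_FLAG + '"'
            else:
                line = line + " " + WIZARD_FLAG
        out.append(line)
    return "\n".join(out)
```

The helper is idempotent, so a Vagrant provisioner can run it on every `vagrant up` without duplicating the flag.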
This was resolved with #10.
Thank you for all your contributions @munkiepus. I will add you as a contributor to the repository so that you can directly create branches and review and merge future PRs.
| gharchive/issue | 2016-10-15T20:35:38 | 2025-04-01T06:38:29.876099 | {
"authors": [
"edinc",
"munkiepus"
],
"repo": "edinc/vagrant-jenkins",
"url": "https://github.com/edinc/vagrant-jenkins/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
440471118 | Adapt generated tmux config to new tmux 2.9 syntax
Closes #102
Closes #100
Inspired by @secuvim in #102
I do not know if this is all that is needed, but it worked for me.
This would break the backwards compatibility for users with tmux versions < 1.9.
To keep the backwards compatibility you could wrap your changes in something like:
let tmux_version = system("tmux -V")
if (tmux_version < 'tmux 1.9')
    let misc_options = ...
    let win_options = ...
else
    let misc_options = ...
    let win_options = ...
endif
This is just a suggestion as I am not the maintainer of this package.
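One caveat with a check like the vimscript suggestion above: comparing the output of `tmux -V` against a string like `'tmux 1.9'` is lexicographic, so e.g. `tmux 1.10` sorts before `tmux 1.9`. Parsing the version into a numeric tuple first is safer; a hedged sketch in Python (`tmux_version_tuple` is a hypothetical helper):

```python
import re

def tmux_version_tuple(version_output):
    """Parse `tmux -V` output like 'tmux 2.9a' into a comparable (major, minor) tuple."""
    m = re.search(r"(\d+)\.(\d+)", version_output)
    if m is None:
        raise ValueError("unrecognized tmux version: %r" % version_output)
    return (int(m.group(1)), int(m.group(2)))
```

The same tuple-comparison idea carries over to vimscript if a conditional is kept there.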
@He-Ro Thanks for implementing a fix.
@edkolev What is your opinion on this?
Some of the presets appear to write into win_options['window-status-activity-attr'] which causes problems if you're using one of those presets:
https://github.com/edkolev/tmuxline.vim/blob/c8a0295eb34bf11447779a5a203fd472147788a7/autoload/tmuxline/presets/powerline.vim#L26
https://github.com/edkolev/tmuxline.vim/blob/c8a0295eb34bf11447779a5a203fd472147788a7/autoload/tmuxline/presets/nightly_fox.vim#L15
https://github.com/edkolev/tmuxline.vim/blob/c8a0295eb34bf11447779a5a203fd472147788a7/autoload/tmuxline/presets/crosshair.vim#L23
@He-Ro amazing work on this, thanks! Could you also:
git grep and change everywhere window-status-activity-attr => window-status-activity-style
add yourself to the CONTRIBUTORS.md in the root of the project
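The bulk rename requested above could also be scripted; a hedged Python sketch (`rename_option` is a hypothetical helper, and the `*.vim` glob is an assumption about which files need the change):

```python
from pathlib import Path

OLD = "window-status-activity-attr"
NEW = "window-status-activity-style"

def rename_option(root, pattern="*.vim"):
    """Rewrite OLD -> NEW in every matching file under root; return files changed."""
    changed = 0
    for path in Path(root).rglob(pattern):
        text = path.read_text()
        if OLD in text:
            path.write_text(text.replace(OLD, NEW))
            changed += 1
    return changed
```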
@khalsah good catch!
This would break the backwards compatibility for users with tmux versions < 1.9.
@edkolev What is your opinion on this?
I'm fine with breaking compatibility for tmux <1.9 - it's been more than 5 years since 1.9 was released. Adding a conditional in the vim script would not work when you share the generated tmux conf file between machines with different versions of tmux.
Added the requested changes.
Also found a mention of attr in the README.
Thanks for working on this. Just noting https://github.com/tmux/tmux/wiki/FAQ#how-do-i-translate--fg--bg-and--attr-options-into--style-options and curious if this was a known issue here.
and curious if this was a known issue here
Could you clarify the question? The link points to a wiki entry about migration; the wiki entry isn't a known issue.
Gotcha, maybe this issue isn't related. I had to manually fix up some tmuxline-generated settings to comply with the tmux 2.9 syntax related to -style and -attr. If those fixes have been merged in, I can try to run a new snapshot and see if that works.
This PR addresses exactly this - the *-fg/bg to *-style migration. And yes, you should be able to create a snapshot which works with the latest tmux.
| gharchive/pull-request | 2019-05-05T17:50:27 | 2025-04-01T06:38:29.891641 | {
"authors": [
"He-Ro",
"edkolev",
"khalsah",
"secuvim",
"shrop"
],
"repo": "edkolev/tmuxline.vim",
"url": "https://github.com/edkolev/tmuxline.vim/pull/104",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1535115367 | 🛑 Siakad Polinema is down
In d38fb84, Siakad Polinema (http://siakad.polinema.ac.id/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Siakad Polinema is back up in 29a90bf.
| gharchive/issue | 2023-01-16T15:28:35 | 2025-04-01T06:38:29.894311 | {
"authors": [
"edoaurahman"
],
"repo": "edoaurahman/check-web-uptime",
"url": "https://github.com/edoaurahman/check-web-uptime/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1273520714 | Workaround to repair IsLastestVersion for Android.
Quick implementation of Nick Kovalsky's workaround:
https://stackoverflow.com/questions/72407251/how-to-get-version-number-of-application-from-play-store-using-xamarin-forms/72643625#72643625
Fixes #43 #42 .
Changes proposed in this pull request:
Repair plugin for Android Platform.
Thank you for the PR.
Please remove the second commit - a test project should not be included in this PR, just the fix. That commit is also breaking the build.
Thank you.
Beta nuget is available here: https://www.nuget.org/packages/Xam.Plugin.LatestVersion/2.1.1-beta.107
| gharchive/pull-request | 2022-06-16T12:49:27 | 2025-04-01T06:38:29.954472 | {
"authors": [
"Jerome-Liger",
"edsnider"
],
"repo": "edsnider/latestversionplugin",
"url": "https://github.com/edsnider/latestversionplugin/pull/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1202239008 | 🛑 Webmail-DGE is down
In 780e7e4, Webmail-DGE (http://www.webmail.mendoza.edu.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Webmail-DGE is back up in fb9aa22.
| gharchive/issue | 2022-04-12T18:29:14 | 2025-04-01T06:38:29.957129 | {
"authors": [
"edu-red"
],
"repo": "edu-red/DGE-GOV",
"url": "https://github.com/edu-red/DGE-GOV/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1797591090 | 🛑 Portal-DGE is down
In 93f61cc, Portal-DGE (https://www.mendoza.edu.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Portal-DGE is back up in a66ab92.
| gharchive/issue | 2023-07-10T21:09:11 | 2025-04-01T06:38:29.959467 | {
"authors": [
"edu-red"
],
"repo": "edu-red/DGE-GOV",
"url": "https://github.com/edu-red/DGE-GOV/issues/122",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
619861171 | WIP [BD-6] OEP-18 Compliance and tests Python 3.8
Add setup.py install_requirements definition.
Update travis file using new requirements.
Create requirements folder.
create pip_tools, base, test and travis requirements files.
Add Makefile.
Add makefile upgrade command.
Include requirements files generated using upgrade command.
Add openedx.yaml and include OEPs list.
Add python 3.8 to tests.
Thanks for the pull request, @ericfab179! I've created OSPR-4550 to keep track of it in JIRA. JIRA is a place for product owners to prioritize feature reviews by the engineering development teams.
Feel free to add as much of the following information to the ticket:
supporting documentation
edx-code email threads
timeline information ("this must be merged by XX date", and why that is)
partner information ("this is a course on edx.org")
any other information that can help Product understand the context for the PR
All technical communication about the code itself will still be done via the GitHub pull request interface. As a reminder, our process documentation is here.
@ericfab179 🎉 Your pull request was merged!
Please take a moment to answer a two question survey so we can improve your experience in the future.
| gharchive/pull-request | 2020-05-18T02:20:58 | 2025-04-01T06:38:30.028715 | {
"authors": [
"edx-webhook",
"ericfab179"
],
"repo": "edx/TinCanPython",
"url": "https://github.com/edx/TinCanPython/pull/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
76818565 | API client now passes the user's full name to ecommerce
This merge also requires a merge on ecommerce and edx-platform with tyler-jwt-fullname, so keep that in mind.
@clintonb @rlucioni FYI: @wedaly @jimabramson
Tests are required.
@clintonb Will do that now
Minor: Clean up the commit message. We generally use a simple statement of the overall change as the title (e.g. Added support for passing full name via JWT). Additional information can be provided as a paragraph or list after the title.
Added support for passing full name via JWT
- User's full name can be passed to the API when using JWT authentication. This field is optional.
- Email is no longer required for JWT authentication.
Aside from the message change, :+1: . Please await a second approval from @rlucioni or @jimabramson before merging.
Oh, I was actually not aware that Git commit messages could be multiline like that. I'll change it and then force-push. Thanks @clintonb.
Once you get a clean build, :+1: .
@Nickersoft I'm surprised to see a major version bump on this change; was expecting 0.5.0. Is there something backwards-incompatible?
@jimabramson My mistake. The constructor was changed in a backwards-incompatible manner. We can make it backwards-compatible if you'd like.
ah. somehow i missed that. As it's backward-incompatible, then 1.0.0 is fine and I'll shut up.
| gharchive/pull-request | 2015-05-15T18:45:35 | 2025-04-01T06:38:30.036752 | {
"authors": [
"Nickersoft",
"clintonb",
"jimabramson",
"rlucioni"
],
"repo": "edx/ecommerce-api-client",
"url": "https://github.com/edx/ecommerce-api-client/pull/8",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
810119276 | Navigate within App from dates to course component
Description
LEARNER-8214
Navigate within app when a link is clicked on Full Page dates screen
@omerhabib26 The app still navigates to the screen for the unavailable course unit.
https://user-images.githubusercontent.com/43750646/108349158-78f9bb00-7204-11eb-9459-e5fbb3221edc.mp4
| gharchive/pull-request | 2021-02-17T11:57:10 | 2025-04-01T06:38:30.056334 | {
"authors": [
"farhan-arshad-dev",
"omerhabib26"
],
"repo": "edx/edx-app-android",
"url": "https://github.com/edx/edx-app-android/pull/1515",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
121565553 | Update AUTHORS and bump version
Update to AUTHORS and bump version to 1.3.0
@clintonb @rlucioni
:+1:
:+1:
| gharchive/pull-request | 2015-12-10T20:00:20 | 2025-04-01T06:38:30.112083 | {
"authors": [
"clintonb",
"mjfrey",
"rlucioni"
],
"repo": "edx/edx-rest-api-client",
"url": "https://github.com/edx/edx-rest-api-client/pull/21",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2136866221 | Add manual control of Repeat and Shuffle
Related to #8 is adding the ability to toggle shuffle and repeat. The play queue should have the following states:
Shuffle on: randomize the order of the queue every time shuffle is turned on.
Shuffle off:
Version 1: It would be fine to just not change the order of the current play queue for V1 of this feature.
Version 2: If playing from a defined playlist (either a playlist, album list, or a future "All Songs" view), turning shuffle off should reorder the queue according to the list display order, with playback continuing from whatever queue position holds the currently playing song. If playing from a queue modified with "Add to Queue", just leave the queue in its current order.
Version 3: If the queue was created with "Play Next" or "Add to Queue", reorder the queue according to the order of whatever view it was started with and play back from whatever queue position the current song is at.
Repeat off: play to the end of the current queue and stop.
Repeat One: continuously play the current song, only changing if the song is manually changed by the user (by choosing a new song or pressing the Next button).
Repeat All: play to the end of the current queue; if Shuffle is on randomize the queue order and start from the beginning, or if shuffle is off just go back to the beginning of the queue and continue playback.
When starting playback by choosing a song or pressing Play, the Shuffle state should be whatever it was previously. When starting playback by pressing Shuffle, the Shuffle state should be turned on.
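The states above can be sketched roughly like this (a hedged Python sketch, not the app's actual implementation; it follows the V1 shuffle-off rule where turning shuffle off leaves the current order untouched):

```python
import random

class PlayQueue:
    """Rough sketch of the queue states described above (V1 shuffle-off rules)."""

    def __init__(self, songs, repeat="off", shuffle=False, seed=None):
        self.queue = list(songs)
        self.pos = 0                       # index of the currently playing song
        self.repeat = repeat               # "off" | "one" | "all"
        self.shuffle = False
        self.rng = random.Random(seed)
        if shuffle:
            self.toggle_shuffle(True)

    def toggle_shuffle(self, on):
        if on:
            self.rng.shuffle(self.queue)   # re-randomize every time shuffle turns on
        self.shuffle = on                  # V1: turning shuffle off keeps the order

    def current(self):
        return None if self.pos is None else self.queue[self.pos]

    def advance(self):
        """Pick the song that plays after the current one finishes naturally."""
        if self.pos is None:
            return None
        if self.repeat == "one":
            return self.current()          # keep looping the current song
        if self.pos + 1 < len(self.queue):
            self.pos += 1
        elif self.repeat == "all":
            if self.shuffle:
                self.rng.shuffle(self.queue)
            self.pos = 0                   # wrap around to the start
        else:
            self.pos = None                # repeat off: stop at the end
        return self.current()
```

Manual skips (Next button, choosing a song) would bypass the `repeat == "one"` branch, matching the rule that Repeat One only changes song on user action.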
Defs want to add this. At the moment playback issues are my priority. I would like to have a solid foundation before bolting on features.
| gharchive/issue | 2024-02-15T15:46:57 | 2025-04-01T06:38:30.195930 | {
"authors": [
"eeston",
"jgoguen"
],
"repo": "eeston/Jello",
"url": "https://github.com/eeston/Jello/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1148697706 | 🛑 Nodo UTT Ramos Mejia is down
In c4ef14b, Nodo UTT Ramos Mejia (https://www.uttnodoramosmejia.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nodo UTT Ramos Mejia is back up in d53b9f4.
| gharchive/issue | 2022-02-23T23:41:39 | 2025-04-01T06:38:30.201402 | {
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/261",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1148971510 | 🛑 Tierra Firme is down
In 69bb2d7, Tierra Firme (https://www.tierrafirmenodoutt.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tierra Firme is back up in 473d37b.
| gharchive/issue | 2022-02-24T07:55:29 | 2025-04-01T06:38:30.203849 | {
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/306",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1150115545 | 🛑 Nodo UTT Ramos Mejia is down
In a8d9866, Nodo UTT Ramos Mejia (https://www.uttnodoramosmejia.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nodo UTT Ramos Mejia is back up in 3673f33.
| gharchive/issue | 2022-02-25T07:07:56 | 2025-04-01T06:38:30.206230 | {
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/484",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1178539169 | 🛑 Almacén BioPandora is down
In a5cc926, Almacén BioPandora (https://www.biopandora.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Almacén BioPandora is back up in c1d82f9.
| gharchive/issue | 2022-03-23T18:59:50 | 2025-04-01T06:38:30.208869 | {
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/4939",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1182137971 | 🛑 Almacén BioPandora is down
In a275909, Almacén BioPandora (https://www.biopandora.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Almacén BioPandora is back up in ee65c2f.
| gharchive/issue | 2022-03-26T22:35:34 | 2025-04-01T06:38:30.211220 | {
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/5442",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1192432463 | 🛑 Tierra Firme is down
In 34cfe86, Tierra Firme (https://www.tierrafirmenodoutt.com.ar) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tierra Firme is back up in 9cc655c.
| gharchive/issue | 2022-04-04T23:37:26 | 2025-04-01T06:38:30.213606 | {
"authors": [
"efecear"
],
"repo": "efecear/upptime",
"url": "https://github.com/efecear/upptime/issues/6599",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Error in the solution of problem No. 1 of the June 9, 2013 variant
The error is in a single sign. The correct answer is -
which can be verified directly.
Why? a_1 should be equal to 1, not 1/2.
Yes, my brain glitched for a moment, sorry)
| gharchive/issue | 2022-05-17T10:48:30 | 2025-04-01T06:38:30.222415 | {
"authors": [
"dqdp",
"efiminem"
],
"repo": "efiminem/supershad",
"url": "https://github.com/efiminem/supershad/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2746919406 | 🛑 CDN-Mikrotik-2 is down
In f4c67ec, CDN-Mikrotik-2 (http://118.179.50.70) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CDN-Mikrotik-2 is back up in 3f74049 after 49 minutes.
| gharchive/issue | 2024-12-18T06:35:28 | 2025-04-01T06:38:30.224970 | {
"authors": [
"eftiarhossain279"
],
"repo": "eftiarhossain279/mynyc",
"url": "https://github.com/eftiarhossain279/mynyc/issues/521",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2763551167 | 🛑 CDN-Mikrotik-2 is down
In 0eb6f66, CDN-Mikrotik-2 (http://118.179.50.70) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CDN-Mikrotik-2 is back up in ad4217b after 30 minutes.
| gharchive/issue | 2024-12-30T18:22:04 | 2025-04-01T06:38:30.227463 | {
"authors": [
"eftiarhossain279"
],
"repo": "eftiarhossain279/mynyc",
"url": "https://github.com/eftiarhossain279/mynyc/issues/785",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
512066305 | Variant type 2 (vt: 2) has to be implemented
GlorpInt2Test
This is solved in this PR https://github.com/tesonep/pharo-com/pull/10
PR merged, this issue can be closed.
| gharchive/issue | 2019-10-24T16:57:43 | 2025-04-01T06:38:30.228742 | {
"authors": [
"eMaringolo",
"eftomi"
],
"repo": "eftomi/pharo-ado",
"url": "https://github.com/eftomi/pharo-ado/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2230480118 | I created firebase but I don't know how to create music.
Daily Songs with Firestore
How do I create a Firestore database for daily songs?
| gharchive/issue | 2024-04-08T07:34:26 | 2025-04-01T06:38:30.253833 | {
"authors": [
"NSystemx"
],
"repo": "eggsy/website",
"url": "https://github.com/eggsy/website/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
634690429 | This site hackcss.egoist.moe can’t be reached
Hi,
This site hackcss.egoist.moe can’t be reached :(
Best,
Kokiro
Hi @Kokiro,
today I tried to restore the website with the style guidelines: https://hackcssbckp.herokuapp.com/
hope it helps! 😀
According to the CNAME file on GitHub Pages, the working address is hackcss.egoist.sh
| gharchive/issue | 2020-06-08T14:50:00 | 2025-04-01T06:38:30.260357 | {
"authors": [
"Kokiro",
"mugiwarafx",
"onjin"
],
"repo": "egoist/hack",
"url": "https://github.com/egoist/hack/issues/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1081228264 | Applying egg to SSA-based IRs
Hi! What are the prospects of utilizing egg to perform rewrite-driven transformations on SSA-based IRs?
I read this paper (whose authors appear to be collaborators with y'all): https://www.cs.cornell.edu/~ross/publications/eqsat/ where they describe E-PEG-based program representation.
On the other hand, I'm working on IRs which are closer to strict SSA (where all control flow is de-sugared to basic blocks and branching) and everything is linear (e.g. blocks are vectors of instructions) vs. strictly tree-like.
I'd love to find or understand how equality saturation could be applied to this representation structure -- it's likely that this information can be gleaned from the above paper (but I expect that maintainers here might be able to unpack this more than I can).
Hi! I use egg for a language which is mostly-SSA. In my case, things "just work" so far because I mostly care about straight line blocks. The only additional information I track which may be necessary for other SSA-based usages as well is to make sure you don't use variables before they are defined. This, in my case, was possible with a simple e-class analysis.
Hi, I think @chandrakananandi is right. I don't think there is any major technical blocker stopping you from doing this today!
The original eqsat paper you linked to contains a lot of good ideas on how to encode your problem into an e-graph. If I'm understanding your setting, you'll still want E-PEGs if you want to encode loops in a transparent way that you can optimize through. If you only care about optimizing one basic block (one DFG) at a time, then you don't need them.
The egg paper (and tool) are mostly innovating in how equality saturation is done. How you encode your problem is still up to you!
Despite there not being any huge blockers, I still think it's a large and challenging task, and one that I'd like to try to tackle at some point (if I can find the time), or see someone else take a stab at!
One thing that I will add: you'll have a much easier time with a tree- or dag-like IR than a linear, mutating IR. All the rewrites that you do are over trees or dags, and overall the e-graph doesn't do a lot for you if your language is heavily sequential. If that's your case, consider building def-use chains or some other method to make things more graph-like for the e-graph.
Also, this doesn't seem to be an issue, so I'm converting it to a discussion.
| gharchive/issue | 2021-12-15T16:16:51 | 2025-04-01T06:38:30.286203 | {
"authors": [
"chandrakananandi",
"femtomc",
"mwillsey"
],
"repo": "egraphs-good/egg",
"url": "https://github.com/egraphs-good/egg/issues/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1887127447 | An extra bottom-up recursive extractor
Hi,
Before I saw #8 I made a recursive extractor, too.
This code is much uglier than #8, but is faster on some problems (for example acyclic graphs). On acyclic graphs it's able to extract in one pass, without building the dependencies.
Taking the cumulative times for all three extractors on my machine:
Loaded 454 jsons.
...
Cumulative time for bottom-up: 3386ms
Cumulative time for bottom-up-analysis: 2483ms
Cumulative time for bottom-up-recursive: 977ms
bottom-up is the one currently in the repo,
bottom-up-analysis is #8,
bottom-up-recursive is this PR.
Is there benefit in having all three, or should the two new ones be combined?
Good question! @Bastacyclop what do you think? Offhand, I'm leaning toward fewer, more distinct algorithms, so unless we can characterize that these have different advantages, I'd prefer to have just one.
Running the benchmarks with my plot.py version gives:
Loaded 675 jsons.
---- output/babble -- bottom-up results:
tree mean: 383.4740
dag mean: 202.1272
micros mean: 4099.8324
tree quantiles: 38.00, 201.00, 368.00, 531.00, 1138.00
dag quantiles: 31.00, 124.50, 199.00, 266.50, 476.00
micros quantiles: 145.00, 1710.50, 3735.00, 5617.50, 16994.00
---- output/babble -- bottom-up-analysis results:
tree mean: 383.4740
dag mean: 202.1618
micros mean: 9924.3410
tree quantiles: 38.00, 201.00, 368.00, 531.00, 1138.00
dag quantiles: 31.00, 124.50, 199.00, 266.50, 476.00
micros quantiles: 293.00, 3834.50, 9014.00, 13213.50, 34602.00
---- output/babble -- bottom-up-recursive results:
tree mean: 383.4740
dag mean: 202.1272
micros mean: 6283.2775
tree quantiles: 38.00, 201.00, 368.00, 531.00, 1138.00
dag quantiles: 31.00, 124.50, 199.00, 266.50, 476.00
micros quantiles: 173.00, 2303.50, 5119.00, 8275.50, 25213.00
---- output/egg -- bottom-up results:
tree mean: 3.7143
dag mean: 3.2857
micros mean: 7137.1071
tree quantiles: 1.00, 1.00, 3.00, 6.00, 13.00
dag quantiles: 1.00, 1.00, 3.00, 5.00, 13.00
micros quantiles: 12.00, 20.50, 57.00, 463.25, 148309.00
---- output/egg -- bottom-up-analysis results:
tree mean: 3.7143
dag mean: 3.2857
micros mean: 16172.1786
tree quantiles: 1.00, 1.00, 3.00, 6.00, 13.00
dag quantiles: 1.00, 1.00, 3.00, 5.00, 13.00
micros quantiles: 19.00, 50.25, 88.00, 861.75, 317778.00
---- output/egg -- bottom-up-recursive results:
tree mean: 3.7143
dag mean: 3.2857
micros mean: 9666.6786
tree quantiles: 1.00, 1.00, 3.00, 6.00, 13.00
dag quantiles: 1.00, 1.00, 3.00, 5.00, 13.00
micros quantiles: 13.00, 29.00, 78.50, 693.75, 183555.00
---- output/flexc -- bottom-up results:
tree mean: 322.0000
dag mean: 85.0000
micros mean: 50959.7857
tree quantiles: 89.00, 92.00, 404.50, 546.00, 546.00
dag quantiles: 35.00, 37.00, 91.50, 137.00, 137.00
micros quantiles: 30125.00, 33549.75, 54544.50, 58975.50, 102810.00
---- output/flexc -- bottom-up-analysis results:
tree mean: 322.0000
dag mean: 84.4286
micros mean: 43439.8571
tree quantiles: 89.00, 92.00, 404.50, 546.00, 546.00
dag quantiles: 35.00, 36.00, 91.50, 136.00, 137.00
micros quantiles: 28394.00, 31866.75, 44750.00, 49550.25, 79446.00
---- output/flexc -- bottom-up-recursive results:
tree mean: 322.0000
dag mean: 85.0000
micros mean: 36553.7857
tree quantiles: 89.00, 92.00, 404.50, 546.00, 546.00
dag quantiles: 35.00, 37.00, 91.50, 137.00, 137.00
micros quantiles: 26775.00, 30454.50, 35102.00, 38385.25, 73826.00
---- output/tensat -- bottom-up results:
tree mean: 1858437671761.2795
dag mean: 5.9862
micros mean: 737891.6000
tree quantiles: 2.69, 4.32, 966.23, 2320915813054.39, 9300713475693.02
dag quantiles: 0.82, 0.94, 4.41, 8.35, 18.81
micros quantiles: 869.00, 22696.25, 230507.50, 1727811.50, 2385394.00
---- output/tensat -- bottom-up-analysis results:
tree mean: 1858437671761.2795
dag mean: 5.9793
micros mean: 584151.0000
tree quantiles: 2.69, 4.32, 966.23, 2320915813054.39, 9300713475693.02
dag quantiles: 0.82, 0.94, 4.41, 8.34, 18.80
micros quantiles: 495.00, 18215.00, 143353.50, 1247708.75, 2311130.00
---- output/tensat -- bottom-up-recursive results:
tree mean: 1858437671761.2795
dag mean: 5.9916
micros mean: 69755.5000
tree quantiles: 2.69, 4.32, 966.23, 2320915813054.39, 9300713475693.02
dag quantiles: 0.82, 0.94, 4.41, 8.36, 18.84
micros quantiles: 168.00, 10949.75, 54289.00, 69088.25, 349218.00
tree costs are basically the same, as expected.
bottom up recursive is faster than bottom up analysis for all datasets, in particular for tensat: is this only from skipping building dependencies or is there more going on?
I feel like we should be able to combine the three bottom up versions, with some more experiments and code cleanup. Maybe by (1) doing a first pass without building dependencies; and (2) if dependencies are used, doing a bottom up analysis based on unique queues. The question would be whether using unique queues for the second stage brings performance benefits or not (I think it should: https://github.com/egraphs-good/egg/issues/239). To properly evaluate that I would like to see datasets with more costly to compute, child-dependent cost functions.
PS: one thing to consider is that in egg the dependencies don't need to be computed as they are already stored in the e-graph.
I would like to consider computation of dependencies (parents) as somewhat negligible, as it's only linear and, as @Bastacyclop says, it's already there in many contexts.
I'd like to preserve the "dumb" bottom up extractor as a base case. Ideally, we could consolidate the "smarter" bottom-up extractors into one (i.e., those that do not aim to do cost sharing, as a possible definition). Thoughts on that?
Before I read this closely I also worked on a bottom-up extractor (#20)
Sorry for duplicated work! I would also be happy with consolidating this, #20, and #8 if possible.
With the recent changes to the bottom-up extractor (#20), there's now only a small time advantage for the extractor that I proposed introducing (note these times differ from before because extra problems have been added). Currently:
Cumulative time for faster-bottom-up: 2060ms [The one in #20]
Cumulative time for bottom-up-recursive: 1533ms [The one in this PR].
Cumulative time for bottom-up: 4471ms
Meaning that the extractor in this PR is only about 25% faster than the others, but is much uglier.
However, there are some tweaks to the "faster-bottom-up" extractor which brings down its runtime to almost the same as the extractor in this PR:
###################################################
faster-bottom-up vs faster-bottom-up-old
extractors: ['faster-bottom-up', 'faster-bottom-up-old']
cumulative time for faster-bottom-up: 1649ms
cumulative time for faster-bottom-up-old: 2060ms
cumulative tree cost for faster-bottom-up: 18584377265237
cumulative tree cost for faster-bottom-up-old: 18584377265237
cumulative dag cost for faster-bottom-up: 78037
cumulative dag cost for faster-bottom-up-old: 78037
Cumulative time for faster-bottom-up: 1649ms
Cumulative time for faster-bottom-up-old: 2060ms
faster-bottom-up / faster-bottom-up-old
geo mean
tree: 1.0000
dag: 1.0000
micros: 0.8184
quantiles
tree: 1.0000, 1.0000, 1.0000, 1.0000, 1.0000
dag: 1.0000, 1.0000, 1.0000, 1.0000, 1.0000
micros: 0.3611, 0.7825, 0.8227, 0.8611, 1.8333
So I've changed this PR to now just introduce some small speedups to the faster-bottom-up extractor, as well as fixing up attribution to @Bastacyclop.
:+1:
| gharchive/pull-request | 2023-09-08T07:35:57 | 2025-04-01T06:38:30.295764 | {
"authors": [
"Bastacyclop",
"TrevorHansen",
"mwillsey",
"oflatt"
],
"repo": "egraphs-good/extraction-gym",
"url": "https://github.com/egraphs-good/extraction-gym/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
164080791 | OffHeapChainMap thieving argument is ignored
OffHeapChainMap is backed by EvictionListeningReadWriteLockedOffHeapClockCache but the latter lacks a constructor that accepts shareByThieving argument.
The offheap store lib should be modified to add such constructor so that OffHeapChainMap can make use of it.
Given how busted ARC is at the moment this turns out to be a non-issue. That said, @AbfrmBlr is going to be fixing the broken ARC implementation in clustered (that he wrote) and extending it to cover unclustered offheap and disk. He's going to run headlong into this as a result.
This will be fixed under #2215
| gharchive/issue | 2016-07-06T13:53:27 | 2025-04-01T06:38:30.304644 | {
"authors": [
"chrisdennis",
"lorban"
],
"repo": "ehcache/ehcache3",
"url": "https://github.com/ehcache/ehcache3/issues/1292",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
540026667 | ehrbase/project_management#98 added ACTION examples and updated AQL test suite data
@testautomation this includes the action compositions and the queries were updated too.
:heavy_check_mark:
| gharchive/pull-request | 2019-12-19T01:56:07 | 2025-04-01T06:38:30.352575 | {
"authors": [
"ppazos",
"testautomation"
],
"repo": "ehrbase/ehrbase",
"url": "https://github.com/ehrbase/ehrbase/pull/89",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Graceful close of SSL socket
Hi: @eidheim
void close() noexcept {
    error_code ec;
    std::unique_lock<std::mutex> lock(socket_close_mutex); // The following operations seem to need to run sequentially
    socket->lowest_layer().shutdown(asio::ip::tcp::socket::shutdown_both, ec);
    socket->lowest_layer().close(ec);
}
In this way, we directly close the TCP socket. Why do we not close the SSL socket first in the HTTPS server?
Thank you.
Last time I studied this I came to the conclusion that calling ssl::stream::shutdown was not needed. Though I might be wrong! By the way, ssl::stream does not have a close member function.
Although, thank you for bringing this up. I'll add a couple of labels to this issue.
@eidheim In a rare case, if I don't hold the mutex before shutdown/close, a segmentation fault happens. Why did you choose to use a mutex around shutdown/close?
| gharchive/issue | 2018-02-05T12:04:36 | 2025-04-01T06:38:30.366489 | {
"authors": [
"eidheim",
"lxlenovostar"
],
"repo": "eidheim/Simple-Web-Server",
"url": "https://github.com/eidheim/Simple-Web-Server/issues/207",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
414813601 | Use latest pickle protocol
See also #51
Closed in favour of #51
| gharchive/issue | 2019-02-26T20:53:01 | 2025-04-01T06:38:30.372273 | {
"authors": [
"eigenein"
],
"repo": "eigenein/iftttie",
"url": "https://github.com/eigenein/iftttie/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
432721804 | Reorder parameters within functions
Fixes #141
Really great contribution @alpeshjamgade . Thanks
I would be reviewing in a day or two. Meanwhile could you make one change.
define a private function _scr(M, c=default value, G=default value) for calculating Rs. Putting units into the calculation of Rs seems unnecessary and slow, and may cause problems while implementing jit for the functions.
Also, I guess you have made some merge commits while rebasing your branch. Try to fix that. Maybe refer https://github.com/k88hudson/git-flight-rules/ .
One word of advice, try to create separate branches for fixing and developing things. The master should not ever diverge from the upstream. Believe me, I have learnt that the hard way.
thank you
You asked me to define a private function _scr.
Before, the schwarzschild_radius function in schwarzschild_utils.py accepted any M value as input, whether it had an astropy unit or not, and returned Rs in astropy units. 'Rs' was also the input to most functions, and only in astropy units. When I replaced Rs with M, I needed to make sure that M is in astropy units, which means that every time I call such a function I have to provide the M value in astropy units, otherwise it would throw an error. We don't expect users to provide M in astropy units; it would be better if they could just pass a plain M value, and whenever a function needs to calculate Rs, the schwarzschild_radius function converts M to astropy units first. That's just what I did. Please tell me if I can do any better.
I think we don't need a private _scr function. If we want Rs as a float we can just ask for it with Rs.value; otherwise it will always be in astropy units. Please tell me if I don't understand something.
User always specifies things with units, but during heavy calculation internally, we don't use units. As functions like christoffels() are called thousands of times in a loop to get the trajectory, it may be good to save some computation by not using astropy.units. But still wait some time, I am not sure what to do. @shreyasbapat Suggestions??
I have just created another pull request #196 for this issue; this time I didn't mess up my master. Sorry for this, I did not consider which branch I was making changes on before.
| gharchive/pull-request | 2019-04-12T19:58:04 | 2025-04-01T06:38:30.385946 | {
"authors": [
"alpeshjamgade",
"ritzvik"
],
"repo": "einsteinpy/einsteinpy",
"url": "https://github.com/einsteinpy/einsteinpy/pull/194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1756838022 | Revert workaround for micrometer bug in tests
#3187 added a workaround for a micrometer bug. We should revert this workaround as soon as a micrometer version with a fix is available.
fixed in https://github.com/elastic/apm-agent-java/pull/3264
| gharchive/issue | 2023-06-14T12:51:13 | 2025-04-01T06:38:30.439609 | {
"authors": [
"JonasKunz",
"jackshirazi"
],
"repo": "elastic/apm-agent-java",
"url": "https://github.com/elastic/apm-agent-java/issues/3189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
17849524 | Update to Elasticsearch 0.90.3
Issue reported in https://groups.google.com/d/msg/elasticsearch/bpdetIvIg5M/cMAUGwhRVf0J
No problem with elasticsearch 0.90.2 and the cloud-aws 1.12.0 plugin (other than the usual node discovery flakiness), but I'm unable to get elasticsearch 0.90.3 to start:
Initialization Failed ...
1) IllegalStateException[This is a proxy used to support circular references involving constructors. The object we're proxying is not constructed yet. Please wait until after injection has completed to use this object.]
2) NoSuchMethodError[org.elasticsearch.discovery.zen.ZenDiscovery.<init>(Lorg/elasticsearch/common/settings/Settings;Lorg/elasticsearch/cluster/ClusterName;Lorg/elasticsearch/threadpool/ThreadPool;Lorg/elasticsearch/transport/TransportService;Lorg/elasticsearch/cluster/ClusterService;Lorg/elasticsearch/node/settings/NodeSettingsService;Lorg/elasticsearch/cluster/node/DiscoveryNodeService;Lorg/elasticsearch/discovery/zen/ping/ZenPingService;)V]
Do I need to wait for a new version of cloud-aws, or is there some other problem here?
My /etc/elasticsearch/elasticsearch.yml:
cluster.name: foo
plugin.mandatory: cloud-aws,lang-javascript
cloud:
  aws:
    access_key: ********
    secret_key: ********
    region: us-east-1
discovery:
  type: ec2
  ec2:
    ping_timeout: 15s
gateway:
  type: s3
  s3:
    bucket: bar
For anyone (like me) who is seeing this error, my fix was to update to the correct version (as indicated by the README) - I accidentally updated Elasticsearch to 1.5.0 without updating elasticsearch-cloud-aws.
@jacobwgillespie ha, thanks for pointing this out, I did the same thing!
I just ran into this issue today, and it had nothing to do with mis-matched ES/Plugin versions. This error is also thrown if your config values for the cloud-aws plugin are not correct.
What I had:
cloud:
  aws:
    access_key: XXX
    secret: XXX
    region: us-east-1
What I needed:
cloud:
  aws:
    access_key: XXX
    secret_key: XXX
    region: us-east-1
The major difference between the two of them being cloud.aws.secret in the first, non-working example; which is changed to cloud.aws.secret_key in the second, working example.
So, apparently, this exact same error is what you'll get if you've completely borked your cloud-aws config. So, keep that in mind!
Maybe in a future release, if you could detect a bad config state like this (access key is present, but secret key is missing) and throw an error, that would be pretty great. Alternatively, implementing it so that an error is thrown if any unknown string appears in the cloud.aws namespace would work just as well (so, an error would have been thrown because cloud.aws.secret is not a recognized and valid key), but I don't know how feasible that is to do. Just some food for thought!
Hopefully someone finds this useful!
@hjc1710 I think it's useful and I agree that we should better catch that kind of error.
May be you would like to open an issue in elasticsearch repo now that we moved aws plugin there?
Thanks!
Awesome, thanks @dadoonet! I did not know that this plugin had moved and the official repo is elastic/elasticsearch now. Anyway, I opened up an issue there for this very feature, after doing a bit of rewording and thinking. If you guys need my help for that feature (for whatever reason), I'm happy to help!
| gharchive/issue | 2013-08-09T07:03:51 | 2025-04-01T06:38:30.781952 | {
"authors": [
"dadoonet",
"hjc1710",
"jacobwgillespie",
"rabidscorpio"
],
"repo": "elastic/elasticsearch-cloud-aws",
"url": "https://github.com/elastic/elasticsearch-cloud-aws/issues/31",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
430381348 | Adding search_analyzer to mapping
I can't figure out how to add the search_analyzer option to the mapping block. The Mappings class only provides indexes as a way to add fields.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-analyzer.html
My bad, documentation parsing error from my side.
| gharchive/issue | 2019-04-08T10:47:27 | 2025-04-01T06:38:30.794539 | {
"authors": [
"sl0thentr0py"
],
"repo": "elastic/elasticsearch-rails",
"url": "https://github.com/elastic/elasticsearch-rails/issues/873",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1508117402 | elasticsearch.keystore: Device or resource busy
Chart version: 8.5.1
Kubernetes version: 1.23.12-gke.1600
Kubernetes provider: GKE (Google Kubernetes Engine)
Helm Version: version.BuildInfo{Version:"v3.9.3", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:"go1.19"}
Describe the bug:
Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
Steps to reproduce:
Create the secret
kubectl create secret generic elk-backup --from-file=gcs.client.elk-backup.credentials_file=./elk-backup.json
To add these secrets to the keystore:
keystore:
- secretName: elk-backup
Expected behavior:
mount the secret in keystore should work
Provide logs and/or server output (if relevant):
Be careful to obfuscate every secrets (credentials, token, public IP, ...) that could be visible in the output before copy-pasting
Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:420)
at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:266)
at java.base/java.nio.file.Files.move(Files.java:1430)
at org.elasticsearch.common.settings.KeyStoreWrapper.save(KeyStoreWrapper.java:498)
at org.elasticsearch.common.settings.KeyStoreWrapper.save(KeyStoreWrapper.java:412)
at org.elasticsearch.cli.keystore.AddStringKeyStoreCommand.executeCommand(AddStringKeyStoreCommand.java:102)
at org.elasticsearch.cli.keystore.BaseKeyStoreCommand.execute(BaseKeyStoreCommand.java:64)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:94)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
Any additional context:
I can seeing this error in pod with role master
I'm not sure but maybe it's because the keystore file is mounted as a subpath in the chart
- name: keystore
  mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
  subPath: elasticsearch.keystore
I don't know how the elasticsearch-keystore tool works, but it looks like it's trying to replace the elasticsearch.keystore file with the elasticsearch.keystore.tmp file instead of replacing its content.
I reproduce the issue by trying to do the following command:
mv elasticsearch.keystore.tmp elasticsearch.keystore
mv: cannot move 'elasticsearch.keystore.tmp' to 'elasticsearch.keystore': Device or resource busy
cp command works just fine
cp elasticsearch.keystore.tmp elasticsearch.keystore
I see 2 possibilities to fix that:
change the way elasticsearch-keystore writes content to the keystore file
add the possibility to specify a custom path for the keystore file and mount it in is own directory to prevent the use of subpath
Hi @TanguyPatte
seems like there is something wrong with the current statefulset.yaml that causes this issue.
FWIW deploying with this template works well for elasticsearch 8.6.2 : https://github.com/elastic/helm-charts/blob/d4e9f6bc47cf7f7ad4dfaaec102e1327d8a345e3/elasticsearch/templates/statefulset.yaml
but there may be more recent iterations that may work.
@SashaShcherbyna
I got the same error. Did you find a solution to this issue?
I could resolve this issue: the cause was that I had not set ELASTIC_PASSWORD with my own credential.
Just add
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-config-credentials
        key: password
secret:
  enabled: false
and then add the keystore as normal.
| gharchive/issue | 2022-12-22T15:49:57 | 2025-04-01T06:38:30.978241 | {
"authors": [
"Drookoo",
"SashaShcherbyna",
"TanguyPatte",
"adrifermo",
"ppatcha"
],
"repo": "elastic/helm-charts",
"url": "https://github.com/elastic/helm-charts/issues/1748",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
732479122 | fix: close fd after debug logs are written
fix #110
Runner end should be called after reset since we are logging the output inside reset.
Moved from nextTick to setTimeout as it's called after all the microtasks and ensures all log statements are printed before we end the stream.
:green_heart: Build Succeeded
the below badges are clickable and redirect to their specific view in the CI or DOCS
Expand to view the summary
Build stats
Build Cause: [Pull request #112 opened]
Start Time: 2020-10-29T16:43:01.167+0000
Duration: 14 min 22 sec
Test stats :test_tube:
Failed: 0
Passed: 42
Skipped: 0
Total: 42
| gharchive/pull-request | 2020-10-29T16:42:53 | 2025-04-01T06:38:32.685694 | {
"authors": [
"apmmachine",
"vigneshshanmugam"
],
"repo": "elastic/synthetics",
"url": "https://github.com/elastic/synthetics/pull/112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
795081257 | Fix 2 errors
fix an import typo
add a missing exception definition
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
@gongweibao please review this pr, thanks~
| gharchive/pull-request | 2021-01-27T13:01:00 | 2025-04-01T06:38:32.690068 | {
"authors": [
"CLAassistant",
"Ruminateer",
"tizhou86"
],
"repo": "elasticdeeplearning/edl",
"url": "https://github.com/elasticdeeplearning/edl/pull/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
642569562 | How to cite this work in my assignment?
Thank you for your work. May I know how I should cite your work in my project?
The build is already written here.
| gharchive/issue | 2020-06-21T14:04:33 | 2025-04-01T06:38:32.703451 | {
"authors": [
"Bigsheng97",
"iswyq"
],
"repo": "elbuco1/CBAM",
"url": "https://github.com/elbuco1/CBAM/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2079872843 | Set the docker image tag to run
This pull request makes it possible to run a specific Home Assistant docker image tag by sending a variable to the script. If the variable is not sent, the version included in the .hass/config/.HA_VERSION file will be used as the tag. Also, to generate the docker cache key, the aforementioned file will be used instead of package.json.
coverage: 100.0%. remained the same
when pulling b723a4b7d1ede517f74d010d5fef0427c01cad17 on run_docker_image_tag_dynamic
into ebd6f452dca960044803e08becec7c7e2efbfd39 on master.
| gharchive/pull-request | 2024-01-12T23:19:43 | 2025-04-01T06:38:32.705729 | {
"authors": [
"coveralls",
"elchininet"
],
"repo": "elchininet/keep-texts-in-tabs",
"url": "https://github.com/elchininet/keep-texts-in-tabs/pull/39",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1709083549 | 🛑 Instagram is down
In 5d4c8f1, Instagram (https://www.instagram.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Instagram is back up in dc19a67.
| gharchive/issue | 2023-05-14T21:34:34 | 2025-04-01T06:38:32.713065 | {
"authors": [
"eldikra"
],
"repo": "eldikra/monitoreo",
"url": "https://github.com/eldikra/monitoreo/issues/265",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1726528588 | 🛑 Instagram is down
In 3cdf44f, Instagram (https://www.instagram.com) was down:
HTTP code: 429
Response time: 371 ms
Resolved: Instagram is back up in fa90aaf.
| gharchive/issue | 2023-05-25T21:47:15 | 2025-04-01T06:38:32.715411 | {
"authors": [
"eldikra"
],
"repo": "eldikra/monitoreo",
"url": "https://github.com/eldikra/monitoreo/issues/354",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
538141121 | [Feature Request] Add Deploy to Heroku button to README.md
Feature Request
Hi there! Great application! I really love how simple and privacy focused it is.
It would be great to have a Deploy to Heroku button, would help drive up adoption!
Documentation
A "Deploy to Heroku" button would be cool, but I guess that #73 should be finished first. Help is welcome!
Will be part of v1.4.3.
| gharchive/issue | 2019-12-16T01:34:42 | 2025-04-01T06:38:32.732620 | {
"authors": [
"aleccool213",
"electerious"
],
"repo": "electerious/Ackee",
"url": "https://github.com/electerious/Ackee/issues/72",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
178858243 | Feature request: make the auto-fullscreen behaviour configurable
In PR #583, a new behaviour was introduced that would cause the picture to switch to fullscreen if the viewer didn't move his mouse for >=1 second.
In my case, I need people (elderly relatives) to always see the top bar with the title. I have for now achieved that for myself by commenting out the relevant part from that PR in view.js, but it would be nicer if there was a config switch in the DB to turn the auto-fullscreen off. Notably, I wouldn't have to redo this after every update and rebuild of Lychee.
+1 for that, also being able to adjust that 1 second delay would be nice.
A setting isn't planned, but we could increase the delay. I agree that 1 second might be too fast for some users.
The next version will use a 2.5s delay. 1 second was too aggressive.
| gharchive/issue | 2016-09-23T11:57:07 | 2025-04-01T06:38:32.734729 | {
"authors": [
"electerious",
"jullit31",
"mhellwig"
],
"repo": "electerious/Lychee",
"url": "https://github.com/electerious/Lychee/issues/625",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2094762315 | Listener warning
I have a view with a lot of small components that all have a small query listening for updates to their specific object — which should be a supported use case I'd assume? Or if there is an actual limit you want to warn against, set it to that. But I'm assuming the eventemitter is just set to the default (11 I think).
Looking at the notifier code under src/notifiers/event.ts, we have the following:
// Global singleton that all event notifiers use by default. Emitting an event
// on this object will notify all subscribers in the same thread. Cross thread
// notifications use the `./bridge` notifiers.
const globalEmitter = new EventEmitter()
// Increase the maximum number of listeners because multiple components
// use this same emitter instance.
globalEmitter.setMaxListeners(250)
This limit was increased to 250 by @thruflo in https://github.com/electric-sql/electric/pull/377 - since this is a global emitter, I find it hard to think of any reasonable way to adjust this limit for a large application while still retaining the ability to detect whether listeners are being accumulated or not appropriately removed. Maybe removing this warning and checking if many listeners are being added in the same "place", or having them be identified somehow, could help construct a better warning.
For your particular case, would a "parent" component with view-only children work as well or would it complicate the code too much?
Yeah I'm sure I could refactor — I think ideally though the subscriptions are efficient enough it's not necessary.
I haven't noticed any slowdowns with the app & memory is fine — though all these listeners are to objects that are rarely changed. So what exactly is the upper limit is pretty arbitrary depending on what's updating in your app.
@samwillis what's your take on this? I think it makes sense to remove the warning altogether as we explicitly use a global emitter - if we want to catch leaks we can implement our own tests or mechanisms from within our EventNotifier to catch them.
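One possible mechanism along those lines, sketched below: count listeners per call-site tag, so many small components can each hold a listener without tripping a global warning, while a single leaking site still stands out. TaggedEmitter and the threshold of 25 are hypothetical, not part of the electric-sql API.

```typescript
import { EventEmitter } from 'node:events'

class TaggedEmitter extends EventEmitter {
  private counts = new Map<string, number>()

  // Subscribe under a tag identifying "where" the listener comes from,
  // and warn only when one tag accumulates suspiciously many listeners.
  subscribeTagged(tag: string, event: string, fn: (...args: unknown[]) => void) {
    const n = (this.counts.get(tag) ?? 0) + 1
    this.counts.set(tag, n)
    if (n > 25) console.warn(`possible listener leak for tag "${tag}": ${n} listeners`)
    this.on(event, fn)
    // Return an unsubscribe function that also decrements the tag's count.
    return () => {
      this.off(event, fn)
      this.counts.set(tag, (this.counts.get(tag) ?? 1) - 1)
    }
  }
}
```

A design like this keeps the "leak detector" property the current warning provides, without punishing apps that legitimately fan out many small subscriptions.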
| gharchive/issue | 2024-01-22T21:13:24 | 2025-04-01T06:38:32.739006 | {
"authors": [
"KyleAMathews",
"msfstef"
],
"repo": "electric-sql/electric",
"url": "https://github.com/electric-sql/electric/issues/868",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2467801095 | PGliteWorker not working with drizzle
OS: windows 11
Browser: chrome 127.0.6533.119
framework: solid-start
The minimal reproduction repository: GitHub Repo Link
description:
I am using PGliteWorker with drizzle. I followed the instructions on https://pglite.dev/docs/multi-tab-worker to create a worker wrapper:
PGliteWorker.ts
worker({
  async init() {
    const pg = await PGlite.create({...});
    await pg.exec(...);
    const db = drizzle(pg, { schema });
    const ret = await db.query.verification_token.findMany();
    console.log(ret);
    return pg;
  },
});
I can see the results in the browser console.
However, if I
entry-client.tsx
const pg = await PGliteWorker.create(
  new Worker(PGliteWorkerUrl, { type: "module" })
);
const db = drizzle(pg, { schema });
const ret = await db.query.verification_token.findMany();
console.log(ret);
The browser console gives the following error:
Uncaught DOMException: Failed to execute 'postMessage' on 'BroadcastChannel': value => value could not be cloned.
at j.g (http://localhost:3001/_build/node_modules/.pnpm/@electric-sql+pglite@0.2.0/node_modules/@electric-sql/pglite/dist/worker/index.js?v=c34c2280:201:21)
at j.query (http://localhost:3001/_build/node_modules/.pnpm/@electric-sql+pglite@0.2.0/node_modules/@electric-sql/pglite/dist/worker/index.js?v=c34c2280:98:56)
at async PglitePreparedQuery.execute (http://localhost:3001/_build/node_modules/.vinxi/client/deps/drizzle-orm_pglite.js?v=d787a4e7:58:20)
at async http://localhost:3001/_build/@fs/D:/ToramCalculator/src/entry-client.tsx:16:13
I've experienced this issue as well when trying to write to the DB with drizzle. My read queries were working. I wonder if this issue is an issue with drizzle
The Failed to execute 'postMessage' on 'BroadcastChannel' error makes me think it could be on our side, as we use a BroadcastChannel to communicate with the worker. It's either:
drizzle is trying to pass an un-cloneable object as a parameter to the query api.
or we have something in our api that doesn't work with the worker and is not currently covered by the tests.
or both, which is my suspicion:
I'm 99% sure we will find it's the parser query option (https://pglite.dev/docs/api#query-options) and will need a bit of a refactor.
The issue seems to be that we are trying to transfer ParserOptions which contains anonymous functions to the worker without proper serialization.
Either the parsing is done at the calling thread (and the ParserOptions are stored and handled there) or we assume some limitations on the functions being passed and serialize them and deserialize them to run in the worker (e.g. with a toString()/eval() combination).
Since these parsers might depend on various imports and whatnot, it might be better to handle deserialization in the caller.
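To illustrate the failure mode, and the naive toString()/eval() round-trip mentioned above (only safe for trusted, self-contained functions with no closed-over variables or imports), here is a small sketch; the parser function itself is just an example, not PGlite's actual parser signature:

```typescript
// A parser callback supplied via query options cannot cross a
// postMessage/BroadcastChannel boundary: the structured clone algorithm
// throws a DataCloneError when asked to clone a function.
const parser = (value: string) => value.toUpperCase()

let cloneFailed = false
try {
  structuredClone(parser)
} catch {
  cloneFailed = true
}

// Naive workaround: ship the source text and revive it on the other side.
const wire = parser.toString()
const revived = (0, eval)(`(${wire})`) as (value: string) => string
```

This is exactly why keeping the parsing step on the calling thread, and never sending ParserOptions through the channel at all, is the more robust refactor.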
| gharchive/issue | 2024-08-15T10:23:54 | 2025-04-01T06:38:32.748408 | {
"authors": [
"KiaClouth",
"TheAndrewJackson",
"msfstef",
"samwillis"
],
"repo": "electric-sql/pglite",
"url": "https://github.com/electric-sql/pglite/issues/208",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2651383360 | [BUG] Cost data is wrongly mapped into Budget graphs
Current Situation
Having this cost data in November
The data is shown in the Budget Graphs in January
Desired
November's data should also be shown in November on the Budget chart
I just debugged this screen myself.
The cost data is passed into the BudgetChart component as an array of monthly costs here: https://github.com/electrolux-oss/infrawallet/blob/main/plugins/infrawallet/src/components/Budgets/Budgets.tsx#L250
"id": "AWS",
"reports": {
"2023-12": 19.6492511966,
"2024-01": 1939.0336325847002,
"2024-02": 1610.1251975480995,
"2024-03": 2149.6599309432,
"2024-04": 2629.0959797089004,
"2024-05": 2603.6509832689,
"2024-06": 2438.023622028399,
"2024-07": 2767.510390127999,
"2024-08": 9312.452064800302,
"2024-09": 8760.504147668702,
"2024-10": 10055.072295576203,
"2024-11": 5529.263631147701
}
}
It's then converted to a running sum here: https://github.com/electrolux-oss/infrawallet/blob/main/plugins/infrawallet/src/components/Budgets/Budgets.tsx#L91-L98
{
0: 19.6492511966,
1: 1958.6828837813002,
2: 3568.8080813294,
3: 5718.4680122726,
4: 8347.5639919815,
5: 10951.214975250401,
6: 13389.2385972788,
7: 16156.748987406798,
8: 25469.2010522071,
9: 34229.7051998758,
10: 44284.77749545201,
11: 49814.04112659971
}
This is then plotted on the chart, with a constant set of x-axis titles: https://github.com/electrolux-oss/infrawallet/blob/main/plugins/infrawallet/src/components/Budgets/Budgets.tsx#L122
xAxis={[
{
data: ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'],
scaleType: 'band',
},
]}
I think that the x-axis labels just need to be dynamically generated by mapping the keys from the original report variable to the month names.
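A sketch of that change (labelsFor is a hypothetical helper; it assumes the report keys keep the YYYY-MM form shown above):

```typescript
const MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

// Derive the x-axis labels from the report keys instead of hard-coding
// Jan..Dec, so a series starting in 2023-12 is plotted under "Dec".
function labelsFor(reports: Record<string, number>): string[] {
  return Object.keys(reports)
    .sort() // 'YYYY-MM' keys sort chronologically as plain strings
    .map((key) => MONTHS[Number(key.split('-')[1]) - 1])
}
```

The same sorted key order would then have to be used when building the running sum, so labels and data points stay aligned.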
@tollercode @darylgraham Can you check if version 0.2.0-20241118212048-4e21a4a works for you?
| gharchive/issue | 2024-11-12T08:07:55 | 2025-04-01T06:38:32.775770 | {
"authors": [
"darylgraham",
"emillg",
"tollercode"
],
"repo": "electrolux-oss/infrawallet",
"url": "https://github.com/electrolux-oss/infrawallet/issues/124",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
373760120 | npm start did not open the app
[x] I have read the contribution documentation for this project.
[x] I agree to follow the code of conduct that this project follows, as appropriate.
[x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
Please describe your issue:
I followed the instructions on https://github.com/electron-userland/electron-forge and started a demo project my-app, but when I run npm start, it didn't open Electron. It stopped at "electron-squirrel-startup processing squirrel command .".
Console output when you run electron-forge with the environment variable DEBUG=electron-forge:*. (Instructions on how to do so here). Please include the stack trace if one exists.
Put the console output here
D:\code\electron\my-app>npm start
> my-app@1.0.0 start D:\code\electron\my-app
> electron-forge start
√ Checking your system
√ Locating Application
√ Preparing native dependencies
√ Launching Application
D:\code\electron\my-app>npm start -v
6.4.1
D:\code\electron\my-app>set DEBUG=*
D:\code\electron\my-app>npm start
> my-app@1.0.0 start D:\code\electron\my-app
> electron-forge start
- Checking your system electron-forge:check-system checking system, create ~/.skip-forge-system-check to stop doing this +0ms
√ Checking your system
- Locating Application electron-forge:project-resolver searching for project in: D:\code\electron\my-app +0ms
electron-forge:project-resolver electron-forge compatible package.json found in D:\code\electron\my-app\package.json +15ms
√ Locating Application
- Preparing native dependencies electron-rebuild rebuilding with args: [Arguments] {
'0':
{ buildPath: 'D:\\code\\electron\\my-app',
electronVersion: '3.0.6',
arch: 'x64' } } +0ms
electron-rebuild rebuilding with args: D:\code\electron\my-app 3.0.6 x64 [] false https://atom.io/download/electron [ 'prod', 'optional' ] false +15ms
electron-rebuild exploring D:\code\electron\my-app\node_modules\electron-squirrel-startup +4ms
electron-rebuild exploring D:\code\electron\my-app\node_modules\debug +5ms
electron-rebuild exploring D:\code\electron\my-app\node_modules\debug\node_modules\ms +5ms
electron-rebuild exploring D:\code\electron\my-app\node_modules\ms +5ms
electron-rebuild identified prod deps: Set { 'electron-squirrel-startup': true, debug: true, ms: true } +7ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules +11ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@electron-forge +6ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@electron-forge\async-ora\node_modules +5ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@electron-forge\cli\node_modules +10ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@electron-forge\core\node_modules +20ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@electron-forge\installer-dmg\node_modules +12ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@electron-forge\shared-types\node_modules +22ms
\ Preparing native dependencies electron-rebuild scanning: D:\code\electron\my-app\node_modules\@types +11ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@types\electron-packager\node_modules +5ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\@types\electron-packager\node_modules\@types +7ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\are-we-there-yet\node_modules +15ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\asar\node_modules +11ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\camelcase-keys\node_modules +27ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\concat-stream\node_modules +28ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\cross-spawn\node_modules +10ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\cross-zip\node_modules +4ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\debug\node_modules +8ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\decompress-zip\node_modules +6ms
| Preparing native dependencies electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron\node_modules +13ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron-download\node_modules +3ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron-osx-sign\node_modules +8ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron-packager\node_modules +7ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron-rebuild\node_modules +9ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron-winstaller\node_modules +11ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\electron-winstaller\node_modules\asar\node_modules +4ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\execa\node_modules +15ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\execa\node_modules\cross-spawn\node_modules +4ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\external-editor\node_modules +8ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\extract-zip\node_modules +3ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\find-up\node_modules +10ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\flora-colossus\node_modules +4ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\fstream\node_modules +14ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\galactus\node_modules +3ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\get-package-info\node_modules +13ms
/ Preparing native dependencies electron-rebuild scanning: D:\code\electron\my-app\node_modules\global-prefix\node_modules +26ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\http-signature\node_modules +9ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\inquirer\node_modules +7ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\load-json-file\node_modules +32ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\mkdirp\node_modules +36ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\mksnapshot\node_modules +6ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\mksnapshot\node_modules\fs-extra\node_modules +9ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\node-abi\node_modules +8ms
- Preparing native dependencies electron-rebuild scanning: D:\code\electron\my-app\node_modules\node-gyp\node_modules +11ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\normalize-package-data\node_modules +7ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\nugget\node_modules +4ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\ora\node_modules +13ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\path-type\node_modules +21ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\redent\node_modules +29ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\request\node_modules +3ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\run-async\node_modules +13ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\spawn-rx\node_modules +16ms
\ Preparing native dependencies electron-rebuild scanning: D:\code\electron\my-app\node_modules\temp\node_modules +22ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\touch\node_modules +13ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\verror\node_modules +17ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\wide-align\node_modules +9ms
electron-rebuild scanning: D:\code\electron\my-app\node_modules\yargs-parser\node_modules +16ms
√ Preparing native dependencies
√ Launching Application
Thu, 25 Oct 2018 03:28:11 GMT electron-squirrel-startup processing squirrel command `.`
What command line arguments are you passing?
Put the arguments here
What does your config.forge data in package.json look like?
Paste the config.forge JSON object here
Please provide either a failing minimal testcase (with a link to the code) or detailed steps to
reproduce your problem. Using electron-forge init is a good starting point, if that is not the
source of your problem.
Hi, I have the same problem on Win 10 64bit.
I just started using electron, and electron-forge. To reproduce, just start from scratch on a new machine and follow the electron-forge page.
After running those commands, there are a bunch of error messages and the app window never appears.
Separately, I was able to get the regular electron demo to work from their man page:
What version of Electron Forge is this?
If you are using node 11, then you should use electron 3.0.8 or newer.
see more details on https://github.com/electron/electron/pull/15470
If that's the issue, I'm inclined to close this as not a bug in Electron Forge.
@malept, @vic2r I tested it and confirmed what I guessed.
Ok, thank you guys for resolving this issue for me. I am still learning.
I am on node version 11.x. I thought I was on electron 3.0.8. When I install electron-forge and look at package.json (there are many nested versions under folders called "node_modules"), it always says 2.0.8. I've tried googling, and several commands to update electron inside my test projects. I'm sure it's something simple; can either of you point me in the right direction? How do I "use electron version 3.0.8 or newer"?
Thanks again to both of you.
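The "use electron 3.0.8 or newer" advice above boils down to updating the project's devDependency and reinstalling. A sketch (the caret range and the devDependencies placement are assumptions about how the project pins Electron):

```json
{
  "devDependencies": {
    "electron": "^3.0.8"
  }
}
```

After editing package.json, run npm install so the newer Electron is actually fetched.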
| gharchive/issue | 2018-10-25T03:36:46 | 2025-04-01T06:38:32.831335 | {
"authors": [
"liudonghua123",
"malept",
"vic2r"
],
"repo": "electron-userland/electron-forge",
"url": "https://github.com/electron-userland/electron-forge/issues/607",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
230019792 | Mac: Failed at the electron@1.6.8 postinstall script 'node install.js'.
Received the following message:
Failed at the electron@1.6.8 postinstall script 'node install.js'. (Exit Status 1)
node.js (and I assume npm with it) were just downloaded and installed from their site.
Electron version: electron@1.6.8
Operating system: macOS Sierra 10.12.4
I'm brand new to working with these types of files and installers, so I'm not sure how to correct this.
Also, I need to know if I can do this as a global install, as I would not like to confront the prospect of installing this in every single project folder.
In my case, npm install electron -g will do the trick.
I am seeing the same error. Is there a workaround for this issue?
I got around this issue by looking at the link "https://docs.npmjs.com/getting-started/installing-npm-packages-locally". I tried a bunch of things and not sure exactly what fixed it but I did run "npm init" with defaults and then the "npm install mapbox-map-image-export -g" command seemed to work
I've got the same error.
electron@1.6.11 postinstall: node install.js. Exit status 1
Node.js version: v8.1.4
Electron version: v1.6.11
OS: Ubuntu 16.04
@ryan-christopher I updated my node to the latest version and the error was fixed.
The electron-prebuilt repo is being retired and its code has been moved into the electron/electron repo. For the sake of historical transparency, we will leave GitHub Issues enabled on this repository, but if you are still affected by the issue reported here, please open a new issue on electron/electron repo and reference this issue from it so people can get the full context. The electron repository has a large and active contributor community, so your issue is more likely to get the attention it deserves there. Thanks!
| gharchive/issue | 2017-05-19T16:05:20 | 2025-04-01T06:38:32.838579 | {
"authors": [
"aliir74",
"firsttracks",
"ryan-christopher",
"shakhassan",
"zeke"
],
"repo": "electron-userland/electron-prebuilt",
"url": "https://github.com/electron-userland/electron-prebuilt/issues/254",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2138564852 | When using React, build results in files not being found
Problem overview
Environment: macOS M2 Air (arm64 architecture, using electron-builder as the packaging tool)
Build command: "build:mac": "tsc && vite build && electron-builder"
Versions used:
{
"electron-builder": "^24.6.3",
"vite-plugin-electron": "^0.28.2",
"vite-plugin-electron-renderer": "^0.14.5"
}
Problem: after packaging, the app shows a white screen, with the message ```Not allowed to load local resource: file:///Users/<myUserName>/code/<projectName>/dist/mac-arm64/<projectName>.app/Contents/Resources/app.asar/dist/index.html```
Attempted solutions
I tried writing asar: false in --config, but signing then fails with an error:
Command failed: codesign --sign 71F68ECE58812EEDF39B778BFE80088167174542 --force --timestamp --options runtime --entitlements, and there is also a Permission denied message
My understanding
Ideally, it should just find app.asar and be done (right?). After actually unpacking it, I found that there is indeed only a single asar archive, and it is not a directory at all.
This project is fairly private, so I cannot provide an open-source repository; please understand.
Run tsc && vite build and check whether the dist folder is generated
It is generated
Then the files field in the electron-builder config file is probably missing dist; otherwise it would definitely be packed in
I did add dist
{
"asar": true,
"appId": "com.rntimer.app",
"files": ["dist-electron", "dist"],
"mac": {
"artifactName": "${productName}_${version}.${ext}",
"target": ["dmg", "zip"]
},
"win": {
"target": [
{
"target": "nsis",
"arch": ["x64"]
}
],
"artifactName": "${productName}_${version}.${ext}"
},
"nsis": {
"oneClick": false,
"perMachine": false,
"allowToChangeInstallationDirectory": true,
"deleteAppDataOnUninstall": false
},
"directories": {
"output": "release/${version}",
"buildResources": "build"
}
}
Also, I tried opening my app.asar with a text editor, and it does indeed contain dist/index.html and its related files
Please still provide a minimal reproduction
| gharchive/issue | 2024-02-16T12:51:27 | 2025-04-01T06:38:32.848405 | {
"authors": [
"liangmiQwQ",
"subframe7536"
],
"repo": "electron-vite/vite-plugin-electron",
"url": "https://github.com/electron-vite/vite-plugin-electron/issues/218",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
180556326 | fix broken link to npmjs package
The previous link led to a 404 page on npmjs. 😄
grazie!
| gharchive/pull-request | 2016-10-03T03:19:10 | 2025-04-01T06:38:32.852541 | {
"authors": [
"stve",
"zeke"
],
"repo": "electron/electron.atom.io",
"url": "https://github.com/electron/electron.atom.io/pull/501",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1370104841 | "Unidentified developer" when opening universal app
Hello, we switched from an app for x86_64 to an universal app. The app runs fine on all platforms when I sign them locally with my developer identity.
However, when we go through Testflight, we are greeted with a "... cannot be opened because the developer cannot be verified" message. This only happens with the universal build, the x86_64 only one works fine. And if I force-resign the app, then it also works fine.
This is how we sign:
electron-osx-sign --entitlements=./entitlements/entitlements.mac.plist --entitlements-inherit=./entitlements/entitlements.mac.inherit.plist --entitlements-loginhelper=./entitlements/entitlements.mac.inherit.plist --identity=${CERTIFICATE_DEVELOPER_SHA} --keychain=signing-cert-keychain --provisioning-profile=./embedded.provisionprofile --type=distribution "${appName}"
productbuild --component "${appName}" /Applications --sign "${CERTIFICATE_INSTALLER_SHA}" "app-name-${params.UPLOAD_ARTIFACT_VERSION}.pkg"
Checking the app with the usual tools (codesign,pkgutil, spctl) gives no clue to any problems. Only at runtime it doesn't work.
I have run out of ideas to try..
I have exactly the same problem, already spent 5 days trying different things but I've exhausted my options.
Does force resigning the universal app actually work for you and you can run it in Test Flight? I couldn't even manage to do that yet.
I meant that resigning locally with my developer identity makes it runnable for me, but I have not tried uploading the resigned app onto Testflight.
Did you test if the same problem occurs when you attempt an app store release? Speculating if this is a problem with Testflight..
I meant that resigning locally with my developer identity makes it runnable for me, but I have not tried uploading the resigned app onto Testflight.
I see, I also have no problems running the app locally when it is signed with the developer identity and notarized. However for TestFlight and the Mac Store it has to be signed either with 3rd Party Mac Developer Application or Apple Distribution certificate. And when we do this we get the above problem.
Did you test if the same problem occurs when you attempt an app store release? Speculating if this is a problem with Testflight..
Yes, the app gets rejected for the same reason. I created a separate issue with more info since I am using electron-builder (not osx-sign directly) here:
#https://github.com/electron-userland/electron-builder/issues/7171
I've attached a build log there which confirms signing goes well, so the problem has to be elsewhere.
Have you tried regenerating the provisioning profile? Will try that next (unfortunately I don't have direct access to App Store so it's a slow process here).. after all the provisioning profile links the developer with the app..
Have you tried regenerating the provisioning profile?
Yes, no difference. Transporter usually complains if your provisioning profile is not correct, so you will never reach TestFlight deployment if it was wrong. But you are welcome to try it in case I missed something.
just fyi, I had another wild theory.. maybe the universal build is loading different dylibs on startup than the x86 build. That could lead it to not find something. If it doesn't find it, the loader goes to search for it. And while searching it checks some invalid directories, which are forbidden unless some entitlements are given, and therefore the signature is rejected.
So i checked and the only diffrence between a working x86 app (left) and the universal app (right) was this:
I think I found the reason. My app uses native modules and they need to be signed, too. For that reason, they are unpacked, using the asarUnpack config option.
However, unlike with the x86 build, in the universal build I have two copies of the same node module. One in the app.asar.unpacked folder, and one in the app.asar archive itself. And only the "unpacked" one is signed. 😨
Ok I have my solution, if I set mergeASARs to false, I get a correctly signed and launching app. Albeit, it still seems that the native modules are duplicated.
I would close this issue now, or leave it to you @gpronet , as I think my solution won't help you
Amazing, mergeASARs: false fixes the issue for me as well.
A non critical dependency of my app was using just one native module, so when I removed it, the error messages were gone as well even without mergeASARs: false. Thank you for pointing me in the right direction @lukas2.
Wow, happy to hear! 🙂
Actually it clicked for me when you said in the other thread that the "Move to Bin"-deleted files are hidden in Trash. I examined mine and saw my module in unsigned form. Then I knew there had to be more than one.. :)
Ok closing this.
I'm still stuck with this error :-(
Can you guys please post your package.json and entitlement files?
Hello i still have this problem
mergeASARs: false
I've set it up > mergeASARs: false and the eror still show in testflight
i using electron + svelkit
this my electron and builder version:
"electron": "^26.2.2",
"electron-builder": "^24.6.4",
| gharchive/issue | 2022-09-12T15:31:21 | 2025-04-01T06:38:32.950412 | {
"authors": [
"batis97",
"gpronet",
"lukas2",
"technotip"
],
"repo": "electron/osx-sign",
"url": "https://github.com/electron/osx-sign/issues/266",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1304466980 | Hotfix/support backfill changes on redshift
support backfill changes for alerts model on redshift
LGTM!
Could you please change the branch to v0.4.2.1?
| gharchive/pull-request | 2022-07-14T08:48:40 | 2025-04-01T06:38:32.982886 | {
"authors": [
"IDoneShaveIt",
"elongl"
],
"repo": "elementary-data/elementary",
"url": "https://github.com/elementary-data/elementary/pull/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
59320138 | Remove version constraint from the installation docs
If one were to copy and paste the Composer installation command from the README when it has the ~1 version constraint, they might be confused as to why they don't get all the functionality shown in the docs. Removing the version constraint will have Composer install the latest tagged version, and thus they should get all the current functionality.
| gharchive/pull-request | 2015-02-28T02:12:47 | 2025-04-01T06:38:33.130203 | {
"authors": [
"dwightwatson",
"elfet"
],
"repo": "elfet/cherimola",
"url": "https://github.com/elfet/cherimola/pull/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1780356326 | 🛑 UCU - Koha is down
In 9ab1963, UCU - Koha (http://biblioteca.ucu.edu.ar/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: UCU - Koha is back up in 45007ae.
| gharchive/issue | 2023-06-29T08:22:26 | 2025-04-01T06:38:33.137639 | {
"authors": [
"elfoche"
],
"repo": "elfoche/monitoreo",
"url": "https://github.com/elfoche/monitoreo/issues/2170",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1291519630 | 🛑 MCU - GRH is down
In 6d96763, MCU - GRH (http://produccion.cdeluruguay.gob.ar/GRH/forms/login.jsp) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MCU - GRH is back up in 7257ed7.
| gharchive/issue | 2022-07-01T15:02:25 | 2025-04-01T06:38:33.140103 | {
"authors": [
"elfoche"
],
"repo": "elfoche/monitoreo",
"url": "https://github.com/elfoche/monitoreo/issues/620",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
477787265 | Problem with searching
Hi!
I have some problems with zamunda.net. For some reason, when Elementum successfully logs in to Zamunda and I search for some movie, Elementum finds results with the right search string: http://zamunda.net/bananas?c42=1&c25=1&c35=1&c46=1&c20=1&c19=1&c5=1&c24=1&c31=1&c28=1&search=avengers+endgame+2019&incldead=1&field=name
but after that Elementum creates another request with a different string: https://zamunda.net:443/login.php?returnto=%2Fbananas%3Fc42%3D1%26c25%3D1%26c35%3D1%26c46%3D1%26c20%3D1%26c19%3D1%26c5%3D1%26c24%3D1%26c31%3D1%26c28%3D1%26search%3Davengers%2Bendgame%2B2019%26incldead%3D1%26field%3Dname
and this string logs me out from Zamunda, and finally Elementum cannot display any results.
Can you help with this?
If you need user and pass for Zamunda, tell me where to send them.
Thank you in advance!
I fixed the problem.
| gharchive/issue | 2019-08-07T08:40:06 | 2025-04-01T06:38:33.143500 | {
"authors": [
"martinstz"
],
"repo": "elgatito/plugin.video.elementum",
"url": "https://github.com/elgatito/plugin.video.elementum/issues/464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2541055842 | 400 Login failed on [toloka] provider
When try to execute any search on toloka provider it returns no links.
From log debug I can see errors of 400 bad requests and information that Request Header or Cookie Too Large.
Not sure is it something about client, configuration or the provider itself.
I've verified that the login/password are correct.
I've also get try to lgin from postman using this form-data from config
"{'username': USERNAME, 'password': PASSWORD, 'autologin': '1', 'login': 'Enter'}"
and it seems to be working, at least it returns me redirect instead of bad request.
Here is the part of log debug:
2024-09-22 12:49:48.502 T:4449 debug <general>: [script.elementum.burst] Searching with payload (general): {'proxy_url': '', 'internal_proxy_url': 'http://10.10.10.108:65222', 'elementum_url': 'http://10.10.10.108:65220', 'silent': False, 'skip_auth': False, 'query': 'FOO'}
2024-09-22 12:49:48.505 T:4449 warning <general>: [script.elementum.burst] Burstin' with Гуртом
2024-09-22 12:49:48.506 T:4449 warning <general>: [script.elementum.burst] No 'en' translation available...
2024-09-22 12:49:48.511 T:3816 debug <general>: ------ Window Init (DialogExtendedProgressBar.xml) ------
2024-09-22 12:49:48.512 T:4449 debug <general>: [script.elementum.burst] Translated titles from Elementum: {'source': 'FOO', 'original': 'FOO'}
2024-09-22 12:49:48.513 T:4468 debug <general>: [script.elementum.burst] [toloka] Processing toloka with general method
2024-09-22 12:49:48.514 T:4468 debug <CSettingsManager>: requested setting (filter_music) was not found.
2024-09-22 12:49:48.515 T:4468 debug <general>: [script.elementum.burst] [toloka] General URL: https://toloka.to/tracker.php?nm=QUERYEXTRA&o=10
2024-09-22 12:49:48.517 T:4468 debug <general>: [script.elementum.burst] [toloka] execute_process for toloka with <function extract_torrents at 0x7e114938>
2024-09-22 12:49:48.518 T:4449 debug <general>: [script.elementum.burst] Timer: 0s / 27s
2024-09-22 12:49:48.527 T:4468 debug <CAddonSettings[0@plugin.video.elementum]>: trying to load setting definitions from old format...
2024-09-22 12:49:48.549 T:4468 debug <general>: [script.elementum.burst] [toloka] Queries: ['{title}']
2024-09-22 12:49:48.550 T:4468 debug <general>: [script.elementum.burst] [toloka] Extras: ['']
2024-09-22 12:49:48.551 T:4468 debug <general>: [script.elementum.burst] [toloka] Before keywords - Query: '{title}' - Extra: ''
2024-09-22 12:49:48.552 T:4468 warning <general>: [script.elementum.burst] [toloka] Falling back to original title in absence of None language title
2024-09-22 12:49:48.557 T:4468 warning <general>: [script.elementum.burst] [toloka] Using translated 'original' title 'FOO'
2024-09-22 12:49:48.559 T:4468 debug <general>: [script.elementum.burst] [toloka] After keywords - Query: '%D1%80%D0%B0%D1%82%D0%B0%D1%82%D1%83%D0%B9' - Extra: ''
2024-09-22 12:49:48.560 T:4468 debug <general>: [script.elementum.burst] - toloka query: '%D1%80%D0%B0%D1%82%D0%B0%D1%82%D1%83%D0%B9'
2024-09-22 12:49:48.561 T:4468 debug <general>: [script.elementum.burst] -- toloka url_search before token: 'https://toloka.to/tracker.php?nm=FOO&o=10'
2024-09-22 12:49:48.561 T:4468 debug <general>: [script.elementum.burst] --- toloka using POST payload: {}
2024-09-22 12:49:48.562 T:4468 debug <general>: [script.elementum.burst] ----toloka filtering with post_data: {}
2024-09-22 12:49:48.562 T:4468 debug <CSettingsManager>: requested setting (toloka_passkey) was not found.
2024-09-22 12:49:48.577 T:4468 debug <general>: [script.elementum.burst] Opening URL: b'https://toloka.to/login.php'
2024-09-22 12:49:48.770 T:4449 debug <general>: [script.elementum.burst] Timer: 0s / 27s
2024-09-22 12:49:49.029 T:4468 info <general>: Skipped 1 duplicate messages..
2024-09-22 12:49:49.029 T:4468 debug <general>: [script.elementum.burst] Status for b'https://toloka.to/login.php' : 400
2024-09-22 12:49:49.031 T:4468 critical <general>: [script.elementum.burst] [toloka] Login failed: 400
2024-09-22 12:49:49.034 T:4468 debug <general>: [script.elementum.burst] [toloka] Failed login content: '<html>\r\n<head><title>400 Request Header Or Cookie Too Large</title></head>\r\n<body bgcolor="white">\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>Request Header Or Cookie Too Large</center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n<!-- a padding to disable MSIE and Chrome friendly error page -->\r\n<!-- a padding to disable MSIE and Chrome friendly error page -->\r\n<!-- a padding to disable MSIE and Chrome friendly error page -->\r\n<!-- a padding to disable MSIE and Chrome friendly error page -->\r\n<!-- a padding to disable MSIE and Chrome friendly error page -->\r\n<!-- a padding to disable MSIE and Chrome friendly error page -->\r\n'
2024-09-22 12:49:49.047 T:4468 warning <general>: [script.elementum.burst] [toloka] >> Гуртом returned 0 results in 0.5 seconds
System: Android TV
Kodi version: 21.1
Elementum version: 0.1.103
Elementum burst version: 0.0.89
Any suggestions will be appreciate.
@olegmiercoles
Maybe it is because of long cookie.
There is a file with cookies in .kodi/temp/burst/common_cookies.jar (see https://kodi.wiki/view/Kodi_data_folder for location of .kodi in your OS)
It is a plain text file, you can open it and search for toloka and see if cookie is "bad" (maybe it has some garbage and it is indeed too long).
Anyway - remove the line for toloka and try again. also, you can backup that file and remove the file completely.
Also, in burst settings in "maintenance" tab you can remove all cookies - in case if you unable to get access to file system of your device.
There is no "login_headers": field for toloka so it should not be "Request Header" issue.
https://github.com/elgatito/script.elementum.burst/blob/97dcfb60aa43fa646712c6ec60fb9fc7ee80ecc0/burst/providers/providers.json#L2381
Thank you, @antonsoroko
I did as you suggested: I've renamed common_cookies.jar to common_cookies_bkp.jar and it helped.
Probably I will keep backup-file for some time, but later will remove it.
Thank you again for assistance!
@olegmiercoles I am just curious - have you tried to take a look into that file? Interesting how a line for toloka looks like. Probably it has some garbage for some reason. If you remove toloka_sid from that line - you can share it here (so it will not have your login cookie).
Up to you of course.
Sure. I did my the best to cleanup all the hashes and IDs. Hope, I didn't miss anything :)
First line for toloka_data was pretty huge combo of hash+autologinid+hash+userid+hash
And toloka_302_u also contained around 8.000 characters hash inside.
#LWP-Cookies-2.0
Set-Cookie3: toloka_data="something-was-here-like-b%432%3b%9Bs%3d%autologinid%here-as-well-something-likeb%432%3b%9Bs%3d%userid-and-here-the-same"; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-05-29 18:56:57Z"; httponly=None; version=0
Set-Cookie3: toloka_302_tt=there-was-some-numbers; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-05-29 18:56:57Z"; version=0
Set-Cookie3: toloka_302_f="there-was-something-like-in-the-first-line-a%54b%33%"; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-05-29 18:56:57Z"; version=0
Set-Cookie3: toloka_302_uf=there-was-some-numbers; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-05-29 18:56:57Z"; version=0
Set-Cookie3: toloka_302_u="about-8.000-characters-was-here"; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-05-29 18:56:57Z"; version=0
Set-Cookie3: toloka___tt=there-was-some-numbers; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-03-04 19:58:37Z"; version=0
Set-Cookie3: toloka___f="there-was-something-like-in-the-first-line-a%54b%33%"; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-03-04 19:58:37Z"; version=0
Set-Cookie3: toloka___uf=0; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-03-04 19:58:37Z"; version=0
Set-Cookie3: toloka___u="there-was-something-like-in-the-first-line-a%54b%33%"; path="/"; domain="toloka.to"; path_spec; secure; expires="2025-03-04 19:58:37Z"; version=0
@olegmiercoles thanks!
So it looks like, by default, many popular HTTP servers have a total limit for the HTTP header (and the cookie is part of the header) of 8 KB.
So it looks like in total your cookies were >8 KB, thus I think we found the root cause.
Somehow the toloka website generated such a long cookie.
I guess we could create a "blacklist" for cookies (e.g. ignore and do not save some cookies), although I am not sure if we really need it, since I am seeing such an issue for the first time. Maybe @elgatito can add more ideas.
But anyway, thanks for the info.
If there are more issues like this then we can circle back to this.
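As a rough sanity check of that 8 KB figure, here is an illustrative sketch (not Burst code; the cookie values are made up) that totals what a client would send in the Cookie request header:

```python
# Estimate the size of the Cookie request header a client would send for one
# domain, and compare it against a typical default total-header limit (8 KB
# in many nginx/Apache setups). Values below are made up, not real cookies.
HEADER_LIMIT = 8 * 1024

def cookie_header_size(cookies):
    """Byte size of 'Cookie: name=value; name=value; ...' for a cookie dict."""
    header = "Cookie: " + "; ".join(f"{k}={v}" for k, v in cookies.items())
    return len(header.encode("utf-8"))

cookies = {
    "toloka_data": "x" * 3000,   # long combo of hashes, like the first jar line
    "toloka_302_u": "y" * 8000,  # the roughly 8,000-character value
    "toloka_302_tt": "123",
}
size = cookie_header_size(cookies)
print(size > HEADER_LIMIT)  # True
```

With values in that ballpark, the cookies alone exceed the limit before the rest of the request headers are even counted, which matches the nginx "Request Header Or Cookie Too Large" response above.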
@antonsoroko Quick googling says there is nothing "ready" in Python's requests to control size of a request to avoid such errors.
I also have problems with Toloka: even with sync, it does something (I have not debugged it) that invalidates the session everywhere, not only on the Elementum/Burst side.
Not sure if we should/can do something about it.
| gharchive/issue | 2024-09-22T12:28:25 | 2025-04-01T06:38:33.156393 | {
"authors": [
"antonsoroko",
"elgatito",
"olegmiercoles"
],
"repo": "elgatito/script.elementum.burst",
"url": "https://github.com/elgatito/script.elementum.burst/issues/438",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
313520137 | Missing HTTP CONNECT method
Pretty straightforward: I can't seem to do an HTTP CONNECT request.
For example:
$ http-prompt http://example.com
Version: 0.11.2
http://example.com> httpie connect
http http://example.com/connect
should actually be
http-prompt http://example.com
Version: 0.11.2
http://example.com> httpie connect
http CONNECT http://example.com
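In other words, the prompt should treat connect like any other HTTP method token rather than appending it to the URL as a path segment. A toy sketch of that distinction (illustrative only, not http-prompt's actual implementation):

```python
# Toy model of how an http-prompt-style shell could translate a typed token
# into an httpie command. Method names (including CONNECT) become an explicit
# method argument; anything else is treated as a path segment.
METHODS = {"get", "post", "put", "delete", "head", "patch", "options", "connect"}

def to_httpie(base_url, token):
    if token.lower() in METHODS:
        return f"http {token.upper()} {base_url}"
    return f"http {base_url}/{token}"

print(to_httpie("http://example.com", "connect"))  # http CONNECT http://example.com
print(to_httpie("http://example.com", "users"))    # http http://example.com/users
```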
Addressed in #145, shipped in v1.0.0.
| gharchive/issue | 2018-04-11T23:19:27 | 2025-04-01T06:38:33.160030 | {
"authors": [
"eliangcs",
"wheelerlaw"
],
"repo": "eliangcs/http-prompt",
"url": "https://github.com/eliangcs/http-prompt/issues/142",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
141747801 | archetype generation command fails
I copied the command from your Wiki to start a project from the Maven archetype:
mvn archetype:generate -DarchetypeGroupId=org.jogger -DarchetypeArtifactId=jogger-archetype -DarchetypeVersion=0.9.0 -DarchetypeRepository=http://repository.elibom.net/nexus/content/repositories/releases/
This was the output:
mvn archetype:generate -DarchetypeGroupId=org.jogger -DarchetypeArtifactId=jogger-archetype -DarchetypeVersion=0.9.0 -DarchetypeRepository=http://repository.elibom.net/nexus/content/repositories/releases/
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:2.4:generate (default-cli) > generate-sources @ standalone-pom >>>
[INFO]
[INFO] <<< maven-archetype-plugin:2.4:generate (default-cli) < generate-sources @ standalone-pom <<<
[INFO]
[INFO] --- maven-archetype-plugin:2.4:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
[INFO] Archetype defined by properties
Downloading: http://repository.elibom.net/nexus/content/repositories/releases/org/jogger/jogger-archetype/0.9.0/jogger-archetype-0.9.0.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.974 s
[INFO] Finished at: 2016-03-18T10:25:32+09:00
[INFO] Final Memory: 15M/245M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-archetype-plugin:2.4:generate (default-cli) on project standalone-pom: The desired archetype does not exist (org.jogger:jogger-archetype:0.9.0) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Hi @garfieldnate,
I think this project is pretty much dead. In case you are looking for alternatives, I suggest you take a look at either Spark or Pippo. Jooby might also be worth a try, even though I have no experience with it.
That's too bad, this was very quick to set up. It's just that the author's repo site is down. Maybe they could be moved to Maven Central if they could be found. I'll leave this issue here for future reference.
| gharchive/issue | 2016-03-18T01:20:23 | 2025-04-01T06:38:33.166383 | {
"authors": [
"garfieldnate",
"wowselim"
],
"repo": "elibom/jogger",
"url": "https://github.com/elibom/jogger/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1687749741 | Preprint history event data parse, update fixture.
Added new parsing functions to gather preprint version event history data.
The existing test fixture 2022.10.17.512253.docmap.json is updated to reflect the latest docmap data available for it.
Re issue https://github.com/elifesciences/issues/issues/7721
| gharchive/pull-request | 2023-04-28T01:23:42 | 2025-04-01T06:38:33.171915 | {
"authors": [
"gnott"
],
"repo": "elifesciences/docmap-tools",
"url": "https://github.com/elifesciences/docmap-tools/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2592827835 | MSID: 99785 Version: 1 DOI: 10.1101/2024.07.15.603526
MSID: 99785
Version: 1
Preprint DOI: https://doi.org/10.1101/2024.07.15.603526
Step 1. Awaiting reviews
Editorial to post reviews via hypothesis
Useful links:
DocMap: https://data-hub-api.elifesciences.org/enhanced-preprints/docmaps/v2/by-publisher/elife/get-by-manuscript-id?manuscript_id=99785
New model tracking: https://docs.google.com/spreadsheets/d/1_fHaoOy7hjyocptKtVJRijeNpUY4hBS7Ck_aVmx6ZJk/
Reviews on sciety: https://sciety.org/articles/activity/10.1101/2024.07.15.603526
For trouble shooting (e.g. no Docmaps available):
DocMap issue addressing: https://miro.com/app/board/uXjVNCwK6EI=/
Explore DataHub DocMaps API: https://lookerstudio.google.com/reporting/4c2f0368-babb-4beb-b5b3-497e7e7b0f08/page/ejphD
Unmatched submissions and preprints: https://lookerstudio.google.com/u/0/reporting/9f86204f-3bf7-477c-9b18-5c5ef141bf69/page/p_gxi57ha93c
Unmatched manuscripts spreadsheet: https://docs.google.com/spreadsheets/d/15QcK8w-ssB7109RQEDtFpJPZ0J5HTGxoHa_2TtpMBbg/edit#gid=1336081641
Step 2. Preview reviewed preprint
Production QC content ahead of publication
Instructions:
QC preview: https://prod--epp.elifesciences.org/previews/99785v1
Update ticket with any problems (add blocked label)
When QC OK, add QC OK label to ticket and add publication date and time to https://docs.google.com/spreadsheets/d/1amAlKvdLcaDp5W8Z8g77NmkwbMF5n_u89ArSqPMO8jg
Move card to next column
(At end of the day post link in #enhanced-preprint and ask for PDF to be generated)
Useful links:
Preprint DOI : https://doi.org/10.1101/2024.07.15.603526
Confirm reviews returned by EPP: https://prod--epp.elifesciences.org/api/reviewed-preprints/99785/v1/reviews
To update the MECA path in the docmap: https://docs.google.com/spreadsheets/d/1mctCQuNFBjSn97Lihy7_vBO6z7-N-oqyLv4clyi6zHg
Step 3: Awaiting search reindex
This step adds the reviewed preprint to the homepage: https://elifesciences.org
The search reindex is triggered once an hour. We need the reviewed preprint to be indexed as the search application serves the journal homepage.
Useful links:
Jenkins pipeline to reindex search can be triggered sooner or monitored here: https://alfred.elifesciences.org/job/process/job/process-reindex-reviewed-preprints/
Step 4: Published! PDF requested
Waiting for PDF to be generated
Useful links:
PDF tracking: https://docs.google.com/spreadsheets/d/106_XeDjmuBae7gexOTNzg60lapeqjl2aRn9DzupGyS8/
Step 5: Introduce PDF to data folder and git repo
Upload PDF to relevant folder in git repo https://github.com/elifesciences/enhanced-preprints-data/
Step 6: Done!
[ ] Kettle is on!
Hi @acollings / @FionaBryant, please could you take a look, and if necessary tweak the assessment for this one?
The authors solidly connect proteostasis ...
Thanks @fred-atherden this has been fixed
Many thanks!
WOS query sent
Confirmed Ok to proceed RE WOS.
Waiting for https://sciety.org/evaluations/hypothesis:nlvGcJarEe-W4aOBdYgobQ/content to update
| gharchive/issue | 2024-10-16T19:15:16 | 2025-04-01T06:38:33.190307 | {
"authors": [
"acollings",
"fred-atherden"
],
"repo": "elifesciences/publish-reviewed-preprints-issues",
"url": "https://github.com/elifesciences/publish-reviewed-preprints-issues/issues/1423",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1197428562 | 🛑 Radarr is down
In 503e26a, Radarr (https://radarr.elightcap.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Radarr is back up in baa21c0.
| gharchive/issue | 2022-04-08T14:59:35 | 2025-04-01T06:38:33.194757 | {
"authors": [
"elightcap"
],
"repo": "elightcap/statuspage",
"url": "https://github.com/elightcap/statuspage/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
990046247 | demjson install cannot be done since setuptools upgrade, so neither can justpy.
Today on a new deploy I did:
pip3 install justpy
but it ends up failing because:
ERROR: Cannot install justpy==0.0.5, justpy==0.0.6, justpy==0.0.7, justpy==0.0.8, justpy==0.0.9, justpy==0.1.0, justpy==0.1.1, justpy==0.1.2, justpy==0.1.3, justpy==0.1.4 and justpy==0.1.5 because these package versions have conflicting dependencies.
The conflict is caused by:
justpy 0.1.5 depends on demjson>=2.2.4
justpy 0.1.4 depends on demjson>=2.2.4
justpy 0.1.3 depends on demjson>=2.2.4
justpy 0.1.2 depends on demjson>=2.2.4
justpy 0.1.1 depends on demjson>=2.2.4
justpy 0.1.0 depends on demjson>=2.2.4
justpy 0.0.9 depends on demjson>=2.2.4
justpy 0.0.8 depends on demjson>=2.2.4
justpy 0.0.7 depends on demjson>=2.2.4
justpy 0.0.6 depends on demjson>=2.2.4
justpy 0.0.5 depends on demjson>=2.2.4
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
it is an open issue at demjson repo:
https://github.com/dmeranda/demjson/issues/40
Downgrading to setuptools 57.5.0 does not work; it installs, but while doing:
import justpy
gives:
import justpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data/giodegas/dev/AQ2009/myenv/lib/python3.8/site-packages/justpy/__init__.py", line 1, in <module>
from .justpy import *
File "/data/giodegas/dev/AQ2009/myenv/lib/python3.8/site-packages/justpy/justpy.py", line 13, in <module>
from .chartcomponents import *
File "/data/giodegas/dev/AQ2009/myenv/lib/python3.8/site-packages/justpy/chartcomponents.py", line 2, in <module>
import demjson
File "/data/giodegas/dev/AQ2009/temp/demjson/demjson.py", line 645
class json_int( (1L).__class__ ): # Have to specify base this way to satisfy 2to3
^
SyntaxError: invalid syntax
@elimintz what can I do now?
Thank you for your support.
Thanks for alerting me to this. I'll try to figure something out. In the meantime, it seems that if you downgrade to anything less than python 3.10 and setuptools to 57.5.0 it should work. Let me know if it doesn't
I am using python 3.8.10 now, setuptools 57.5.0 with a virtual env, still with the import problem.
I may try downgrading setuptools more.
Ok, please let me know how it goes
ok found setuptools 56.0.0 is OK!
this is now my pip list in the virtual environment:
pip list
Package Version
------------------ ---------
addict 2.4.0
aiofiles 0.7.0
anyio 3.3.0
asgiref 3.4.1
certifi 2021.5.30
charset-normalizer 2.0.4
click 8.0.1
demjson 2.2.4
h11 0.12.0
httpcore 0.13.6
httpx 0.19.0
idna 3.2
itsdangerous 2.0.1
Jinja2 3.0.1
justpy 0.1.5
MarkupSafe 2.0.1
pip 21.2.4
pkg_resources 0.0.0
rfc3986 1.5.0
setuptools 56.0.0
sniffio 1.2.0
starlette 0.16.0
uvicorn 0.15.0
websockets 9.1
Thanks for finding a workaround.
I am now in the process of publishing a new version without using setuptools (using flit). Perhaps this will solve the issue.
btw, using docker and the latest python:3.8 image I found a newer setuptools, 57.4.0, also works:
pip list
Package Version
------------------ ---------
addict 2.4.0
aiofiles 0.7.0
anyio 3.3.0
asgiref 3.4.1
certifi 2021.5.30
charset-normalizer 2.0.4
click 8.0.1
demjson 2.2.4
h11 0.12.0
httpcore 0.13.6
httpx 0.19.0
idna 3.2
itsdangerous 2.0.1
Jinja2 3.0.1
justpy 0.1.5
MarkupSafe 2.0.1
pip 21.2.4
rfc3986 1.5.0
setuptools 57.4.0
sniffio 1.2.0
starlette 0.16.0
uvicorn 0.15.0
websockets 9.1
wheel 0.37.0
You might want to switch to demjson3 in case you really only want to support Python 3 anymore.
Thank you for the suggestion. I will make the change.
I have the same problem but am unable to downgrade to Python <3.8.8 because I'm on a Mac with an M1 chip.
Hi,
Is there any solution yet for this problem? JustPy just isn't getting installed!
Please see if you can port things to demjson3.
Thanks,
Sam
I need to find time to release a new version with this. There is a demjson compatible package called demjson3 that solves this issue. If you want to fix it locally, change all import demjson lines to import demjson3 as demjson
And you need to install demjson3.
The advantage of using demjson is that it can correctly parse JavaScript objects where the keys do not need to be between quotes.
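For anyone applying that local fix across a whole project, here is a minimal sketch (the helper name is hypothetical) that rewrites the import lines exactly as described:

```python
import re

def patch_demjson_imports(source: str) -> str:
    # Rewrite `import demjson` lines to the demjson3 alias form
    # suggested above; all other lines are left untouched.
    return re.sub(r"^(\s*)import demjson$",
                  r"\1import demjson3 as demjson",
                  source, flags=re.MULTILINE)

# Example: only the demjson import line changes.
patched = patch_demjson_imports("import demjson\nimport os\n")
```

Remember to `pip install demjson3` first so the rewritten import resolves.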
same problem here on python 3.6.9
downgrading setuptools to 56.0.0 as giodegas stated worked for me.
@elimintz Replacing demjson with demjson3 indeed fixes the issue for me. Also see PR.
Version 0.2.3 is out that should fix this problem. Replaced demjson with demjson3.
Would appreciate confirmation that this indeed is the case. Did not have time for too much testing but the changes were very limited. The changes are not reflected in the code on github yet but all I did was replace import demjson with import demjson3 as demjson in 4 places.
@poke1024 @1081 @nielstron @giodegas @Ledjob @docsteveharris @Flova
Some preliminary test show it is ok now. Thank you.
It works fine for me too
duplicate of #408
| gharchive/issue | 2021-09-07T14:27:27 | 2025-04-01T06:38:33.206775 | {
"authors": [
"Flova",
"Ledjob",
"WolfgangFahl",
"docsteveharris",
"elimintz",
"giodegas",
"nielstron",
"poke1024",
"samiit"
],
"repo": "elimintz/justpy",
"url": "https://github.com/elimintz/justpy/issues/301",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
361228128 | New Worlds
Hello, I've been trying to get Star Trek New Worlds to run, with
DDrawCompat= 1
DSoundCtrl= 1
I do get into the game, however after a few seconds it just freezes.
I'm using compatibility mode for windows 7, 16 bit color mode, disabled optimizations for full screen.
I've added the logfile I got from Process Monitor:
Logfile.zip
I got the feeling I'm missing something really small, since it does start and you can move around for a few seconds.
(Edited with new logfile, previous didn't properly include the whole game session)
dxwrapper-stnw.log
Here's the wrapper log when the issue occurs
It looks like the game is crashing. Try setting DSoundCtrl = 0. Also try adding this line into the ini file: HandleExceptions = 0. This will allow you to see the crash.
If that does not work, try using the attached updated files. This works with Star Trek Armada 1 and Star Trek Armada 2.
ddraw.zip
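Putting this thread's suggestions together, the relevant ini fragment would be (a sketch — the key names are from this thread; exact placement follows the bundled dxwrapper.ini):

```ini
DDrawCompat      = 1
DSoundCtrl       = 0
HandleExceptions = 0
```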
The 2 settings seemed to have worked (at least it doesn't crash in 10 seconds), I did not use the Armada files. The game needed to be launched from a FAT32 usb stick or partition to work properly. I didn't have time to test the game properly but thus far it seems ok!
Closing this since the issue seems to be resolved. If the issue comes back you can reopen.
| gharchive/issue | 2018-09-18T10:02:36 | 2025-04-01T06:38:33.216951 | {
"authors": [
"LAguido",
"elishacloud"
],
"repo": "elishacloud/dxwrapper",
"url": "https://github.com/elishacloud/dxwrapper/issues/31",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
2045314965 | feat(wes): add service-info
Description
This PR creates a service info component for WES.
closing this will create a PR for the same, as there have been changes in design package.
| gharchive/pull-request | 2023-12-17T17:59:36 | 2025-04-01T06:38:33.218058 | {
"authors": [
"JaeAeich"
],
"repo": "elixir-cloud-aai/cloud-components",
"url": "https://github.com/elixir-cloud-aai/cloud-components/pull/213",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1759636256 | Fix type definition
+ @type parsec_success :: {:ok, tokens, String.t(), context(), integer(), integer()}
- @type parsec_success :: {:ok, tokens, String.t(), context(), {integer(), integer()}, integer()}
as consumed by e.g. makeup_elixir or makeup_erlang.
I take this time to also introduce dialyxir to CI, which surfaces another issue (opaqueness-related) already mentioned next to stream_data.
Actions missing from this:
waiting for a stream_data update and subsequent import here
(potentially) updating nimble_parsec (as per acceptance of this pull request) [optional]
Edit: while reading the CONTRIBUTING guide I didn't quite understand how to handle the RELEASE.md part (is this required?). Regarding the CHANGELOG.md, do you prefer I do it? Or do you, prior to release?
A PR to fix the specs is welcome but we don't plan to introduce dialyzer at the moment, thank you :)
Sure. I can remove that bit. Thanks.
@josevalim, shall I wait for a nimble_parsec release to update this? Or are you good without it? Thanks.
:green_heart: :blue_heart: :purple_heart: :yellow_heart: :heart:
The goal of this update is to fix a Dialyzer-related issue with makeup_erlang and makeup_elixir (for which now I'm thinking you don't want dialyxir introduced 😄). Do you think it would make sense to tag-release it, to prevent consumers from finding a dialyxir issue?
| gharchive/pull-request | 2023-06-15T22:38:00 | 2025-04-01T06:38:33.263954 | {
"authors": [
"josevalim",
"paulo-ferraz-oliveira"
],
"repo": "elixir-makeup/makeup",
"url": "https://github.com/elixir-makeup/makeup/pull/60",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2276470711 | Fix catching the object by value.
Fixes #4
Fixed the CPPCheck issue, but this line that was changed is not covered by tests.
Going to create a test for this to get the patch coverage up.
That last commit is for the release build. I should have put it in the preprocessor def as an else though since iostream is already included in the debug build.
Fixy fix.
Fixed a bug where capturing multiple streams in the same test resulted in an Access Violation on Windows cl in Debug.
PR now has sufficient patch coverage for merge.
| gharchive/pull-request | 2024-05-02T20:58:47 | 2025-04-01T06:38:33.267066 | {
"authors": [
"eljonny"
],
"repo": "eljonny/TestCPP",
"url": "https://github.com/eljonny/TestCPP/pull/10",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
214978284 | Cannot understand how any function that requires the Model can be used
For example when I have an instance of the DatePicker type as returned by init, how can I then call getDate in order to obtain the Maybe Date value? getDate requires a Model which is not what I have.
Without being able to understand how to call getDate, setDate or setFilter I really can't work out how to synchronise two date pickers into a range.
Would really appreciate some help (if I am missing something) or some clarification if a fix of some kind is required.
Happy to help if I can by the way.
I'm glad you noticed that too! It occurred to me recently that those functions wouldn't do anyone any good. You're not missing anything; I think a (hopefully fairly simple) fix is in order to convert them to DatePicker -> instead of Model ->. I'll certainly get to it soon if no one else does.
Hey, if you're happy with that change to the interface, I'm happy to have a go at making that change myself and send you a PR. I'm keen to get involved because I think there are a number of enhancements that we would like to make in the future (keyboard control, multi-month views, internationalisation etc) so it would be good to get familiar with things.
| gharchive/issue | 2017-03-17T11:34:50 | 2025-04-01T06:38:33.275215 | {
"authors": [
"bbqbaron",
"julianjelfs"
],
"repo": "elm-community/elm-datepicker",
"url": "https://github.com/elm-community/elm-datepicker/issues/26",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
The type Additive has incorrect variants.
The type Additive has the variants AdditiveNone | AdditiveReplace, but should have AdditiveReplace | AdditiveSum (with a correction to the corresponding function in TypesToStrings.elm).
Source: https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/additive
Comes up when combining transform animations in this way:
<rect x="10" y="10" width="40" height="20"
style="stroke: #000000; fill: none;">
<animateTransform attributeName="transform" attributeType="XML"
type="scale"
from="1" to="3"
begin="0s" dur="10s"
repeatCount="indefinite"
additive="sum"
/>
<animateTransform attributeName="transform" attributeType="XML"
type="rotate"
from="0 30 20" to="360 30 20"
begin="0s" dur="10s"
fill="freeze"
repeatCount="indefinite"
additive="sum"
/>
(http://tutorials.jenkov.com/svg/svg-animation.html)
I think I can fix this.
| gharchive/issue | 2018-09-28T10:44:44 | 2025-04-01T06:38:33.280064 | {
"authors": [
"RalfNorthman"
],
"repo": "elm-community/typed-svg",
"url": "https://github.com/elm-community/typed-svg/issues/27",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
256704017 | Compiler freezes when annotating a weird self-referential type
Trying to compile the following code causes the compiler (elm 0.18.0 on Archlinux) to freeze. If I remove the type annotation from idF it will produce an error message about a weird self-referential type, but with the annotation it just gets stuck.
module Test exposing (..)
type alias Focus b s =
{ get : b -> s
, update : (s -> s) -> b -> b
}
create : (b -> s) -> ((s -> s) -> b -> b) -> Focus b s
create get update =
{ get = get
, update = update
}
idF : Focus { r | id : a } a
idF =
create .id (\f s -> { s | id = f }) -- should be { s | id = f s.id }
Development build captures it:
It is not pointing out the infiniteness in an ideal way, but it does not hang at least.
| gharchive/issue | 2017-09-11T13:33:27 | 2025-04-01T06:38:33.282393 | {
"authors": [
"evancz",
"nonpop"
],
"repo": "elm-lang/elm-compiler",
"url": "https://github.com/elm-lang/elm-compiler/issues/1643",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
111322017 | Fix typo in If Expressions section
Hope this will make it less confusing for people who follow in the future
Nice, thanks!
| gharchive/pull-request | 2015-10-14T04:33:37 | 2025-04-01T06:38:33.283239 | {
"authors": [
"HarleyKwyn",
"evancz"
],
"repo": "elm-lang/elm-lang.org",
"url": "https://github.com/elm-lang/elm-lang.org/pull/399",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1857677467 | Failed parsing Firefox header
I tried to parse my browser's header and ran into the invalid media type error, indicating that the header is syntactically invalid.
Does that mean that Firefox does not implement the header format correctly or is it an error in this project?
I added a test case for the header (text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8).
Result:
=== RUN TestGetMediaType/Firefox_header
contenttype_test.go:221: Unexpected error "invalid media type" for text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
--- FAIL: TestGetMediaType (0.00s)
Ah nvm, I did not realize there is GetAcceptableMediaTypeFromHeader.
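For context on why the single-media-type parser rejects it: the Accept value above is a comma-separated list of media ranges with optional q-weights. A rough stdlib-only Python sketch of that structure (not the Go API of this repo):

```python
def parse_accept(header: str):
    # Split an Accept header into (media_range, q) pairs,
    # highest q first; order is preserved within equal q.
    items = []
    for part in header.split(","):
        fields = [f.strip() for f in part.split(";")]
        q = 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        items.append((fields[0], q))
    return sorted(items, key=lambda item: -item[1])

ranking = parse_accept(
    "text/html,application/xhtml+xml,application/xml;q=0.9,"
    "image/avif,image/webp,*/*;q=0.8")
```

GetAcceptableMediaTypeFromHeader handles this list-with-weights form, which appears to be why it succeeds where GetMediaType fails.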
| gharchive/issue | 2023-08-19T11:15:34 | 2025-04-01T06:38:33.308122 | {
"authors": [
"nothub"
],
"repo": "elnormous/contenttype",
"url": "https://github.com/elnormous/contenttype/issues/12",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1936236335 | Sign In With Solana with Solana Mobile Stack
Solana Mobile Team is implementing the SIWS (Sign In With Solana) API on Mobile Wallet Adapter. (Probably on v2.0)
https://github.com/solana-mobile/mobile-wallet-adapter/issues/439
After that, we can use SIWS on both of Web Apps and Solana dApps on the Saga phone.
For now, SIWS is available in the Next.js web app built with Skeet.
Check /webapp folder for that.
https://github.com/elsoul/skeet-solana-mobile-stack/tree/main/webapp
Wallet Adapter for Web is here
https://github.com/elsoul/skeet-solana-mobile-stack/blob/main/webapp/src/components/providers/SolanaWalletProvider.tsx
Mobile Wallet Adapter specification
Version: 2.0.0-DRAFT
https://solana-mobile.github.io/mobile-wallet-adapter/spec/spec.html
It's done already
| gharchive/issue | 2023-10-10T20:58:01 | 2025-04-01T06:38:33.362107 | {
"authors": [
"KishiTheMechanic"
],
"repo": "elsoul/skeet-solana-mobile-stack",
"url": "https://github.com/elsoul/skeet-solana-mobile-stack/issues/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
135067967 | edit: Fix crash in history listing mode
When you had too few history entries to display in the terminal,
entering history listing mode attempted to access a negative index and
crashed.
I've wanted to make two additional edits but was unsure about their possible side-effects on other parts.
The first was not entering history listing mode if there was no history. Off I went to startHistoryListing, but got stuck a bit in the semantics. What happens if newHistoryListing returns an error (which makes sense in our situation)? Would the user still be in history listing mode? Perhaps it'll be best then to move the mode switch below the check.
The second was in trimToLines, I badly yearned to put a if len(b.cells) < low check in the beginning, but it may introduce subtle bugs (why aren't things trimming?) if used in certain contexts, and it felt wrong to do it without more knowledge of the system.
Great job on the shell, btw. It's mighty impressive.
Hi! Thanks for the fix. You are correct that the mode switch should be moved down, and I prefer to leave trimToLines as it is, due to the concerns you just stated.
By the way, history listing was something I didn't finish; the ultimate goal is to steal the design ptpython's history listing (#63), which allows you to scroll through the whole history and more importantly, compose a chunk of code by cherry-picking multiple entries from the history. Before trying to implement it, I observed that it has quite a lot in common with existing completion and navigation listings -- esp. wrt. the scrolling and trimming behavior -- so a good abstraction should be made to capture the common behavior. Unable to come up with a good abstraction, I didn't bother to think hard and turned to other parts of elvish instead, leaving this unfinished thing in the code :)
If you find this interesting enough, you are more than welcome to contribute. Keep me informed about your progress, so that I won't rewrite the whole line editor in a midnight and ruin all your efforts.
| gharchive/pull-request | 2016-02-20T12:39:00 | 2025-04-01T06:38:33.369052 | {
"authors": [
"Zirak",
"xiaq"
],
"repo": "elves/elvish",
"url": "https://github.com/elves/elvish/pull/149",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
365065757 | add --nobrowser field to test codecoverage.
Outline
Running embark test -c will cause the browser to automatically open with the code coverage report. The goal of this task is to add --nobrowser field for users that don't want this.
Acceptance Criteria
running embark test -c --nobrowser should run the tests & code coverage as normal, but not open the browser.
embark test -c should work as now.
PR opened: https://github.com/embark-framework/embark/pull/950
@iurimatias Seems like the task is already done
Thanks for pointing this out @subramanianv !
Yes this landed as https://github.com/subramanianv/embark/commit/890b46977780d3b3d0199ba8c459c102d6f85596
Closing this one.
@vs77bb this issue won't allow me to payout and has been closed already, can you help?
Hi @StatusSceptre it seems you need to approve me now and I will click on the 'submit' afterwards.
Approved @cryptomental
@StatusSceptre thank you, I submitted via GitCoin.
| gharchive/issue | 2018-09-28T22:52:08 | 2025-04-01T06:38:33.442053 | {
"authors": [
"PascalPrecht",
"StatusSceptre",
"cryptomental",
"iurimatias",
"subramanianv"
],
"repo": "embark-framework/embark",
"url": "https://github.com/embark-framework/embark/issues/941",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1213277157 | Status Bar and Cooling Icon
I was using the previous plug-in which used to provide a cooling or heating status in the status bar of each room/level but I am not getting this function with this plug-in. I would also get a change in the colour icon of each unit based on activity from Green to Blue when Cooling and to Orange when Heating. Is this option available with this plug-in.
Thanks
Is this due different plugin functionality or due to a change in HomeKit, i.e. how Apple chooses to display it? Do you have a reference implementation of a plugin that achieves these UI distinctions?
Hi
Thank you for getting back to me. Prior to using this plug-in, I was using the Panasonic Air Conditioner plug-in by Cody1515 which has now been archived. He mentioned that your plug-in is based on the same coding, the difference being that yours uses the platform rather than individual accessories/units (trying to explain this as best as I can but my knowledge is pretty basic) His plug-in also did not initially update the status bar nor did it control the icon colour based on setting, Green for Standby, Blue for Cooling and Orange for Heating. But one of the last updates included these functions which were quite useful. So for example, when using the cooling option, the Green Circle would turn Blue and the Status Bar on the top of the Room/Level would change from Idle to Cooling and note the temperature much like the Heatmiser accessories would show the Room was Heating. I hope this clarifies the “issue”. As I am very dependent on this plug-in, I would be willing to donate but there is not such option on your plug-in.
Many Thanks
I’m attaching screenshots I found of the features I was trying to explain.
This is the Status Bar up top that shows that the AC is on Cooling and the Temperature of the Room
And this is a side by side of my Underfloor Heating on Standby (Green) and the AC on Cooling (Blue) and again showing the Temperature within the circle.
I hope this helps
Thanks again
The results of my debugging session suggest that we might be dealing with a Homebridge or HomeKit bug.
The status icon in the Home app is controlled by the CurrentHeaterCoolerState characteristic of the accessory.
On this line, we set the current state to IDLE when the AC is in cooling mode and the current temperature is less than the set temperature. However, in my test the Home app displays the status of the AC as inactive.
In my debugging session, I also tried setting the current state to INACTIVE instead of IDLE and I got the same result on the UI.
As an additional test, I set the current state to HEATING. The Home app UI reflected this correctly (as Heating), which leaves me thinking that Homebridge or HomeKit don't distinguish between the IDLE and INACTIVE modes.
@1homebridge, upon further exploration I can confirm that I actually see the "Cooling" indicator and the blue arrow in my Home app.
In your previous comments, you uploaded images of how you want it to look like, but can you upload a screenshot of how it actually looks like for you right now?
My previous comment alleged a bug with regards to how IDLE and INACTIVE lead to the same UI representation. It does not, however, confirm your original thesis of the indicators not being available at all. Could you help me clarify the problem statement?
The overlap between the IDLE and INACTIVE statuses is addressed in this issue in the Homebridge repository.
can you set Debug to "true" in your config. Restart and post the output from your log.
Maybe we can resolve the issue then.
My config looks like this:
{
    "name": "Homebridge Panasonic AC Platform",
    "email": "xxxxxxxxxxxxxxxxxxx@xxx.xxx",
    "password": "xxxxxxxxxxxxxxxxxx",
    "exposeOutdoorUnit": true,
    "debugMode": true,
    "platform": "Panasonic AC Platform"
},
@1homebridge, is this the same type of issue as in #23 and #31?
Hi, thanks for getting back to me.
Yes, though with the new UI in iOS16 the Status Bar no longer indicates between Idle-Cooling-Heating states so the issue is just to obtain an indoor temperature reading from the outdoor unit as described in #23 and #31 to stop getting a 0.0* indicator. This would then allow the units to show the Blue colour indicating that they are in Cooling mode as requested in the other threads.
Thanks
Okay, thanks for confirming and the additional context. Will deal with it through the other open issues.
| gharchive/issue | 2022-04-23T11:30:44 | 2025-04-01T06:38:33.476620 | {
"authors": [
"1homebridge",
"JurgenLB",
"embee8"
],
"repo": "embee8/homebridge-panasonic-ac-platform",
"url": "https://github.com/embee8/homebridge-panasonic-ac-platform/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
164588776 | Update to latest minimatch to avoid deprecation warning
Avoids npm WARN deprecated minimatch@2.0.10: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
@stefanpenner Awesome!!
| gharchive/pull-request | 2016-07-08T18:27:19 | 2025-04-01T06:38:33.478273 | {
"authors": [
"mdentremont"
],
"repo": "ember-cli/broccoli-concat",
"url": "https://github.com/ember-cli/broccoli-concat/pull/62",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2203758991 | Convert EmbraceDeliveryServiceTest to use a real DeliveryCacheManager
Goal
Testing
Release Notes
WHAT:
WHY:
WHO:
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#618
#617 👈
#616
master
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @bidetofevil and the rest of your teammates on Graphite
Merge activity
Mar 26, 3:09 AM EDT: @bidetofevil started a stack merge that includes this pull request via Graphite.
| gharchive/pull-request | 2024-03-23T07:54:15 | 2025-04-01T06:38:33.589040 | {
"authors": [
"bidetofevil"
],
"repo": "embrace-io/embrace-android-sdk",
"url": "https://github.com/embrace-io/embrace-android-sdk/pull/617",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2349740517 | Add in an additional step to validate in ExecutionCoordinator
Goal
Testing
Release Notes
WHAT:
WHY:
WHO:
#960 👈
master
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @bidetofevil and the rest of your teammates on Graphite
Merge activity
Jun 13, 2:22 AM EDT: @bidetofevil merged this pull request with Graphite.
| gharchive/pull-request | 2024-06-12T21:51:31 | 2025-04-01T06:38:33.593815 | {
"authors": [
"bidetofevil"
],
"repo": "embrace-io/embrace-android-sdk",
"url": "https://github.com/embrace-io/embrace-android-sdk/pull/960",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
422986597 | Bug: hover data when geo is changed
When hovering, the data onHover is not changed after the data is updated from a subregion to country view.
Resolved in the latest version
| gharchive/issue | 2019-03-19T22:42:05 | 2025-04-01T06:38:33.602139 | {
"authors": [
"emeeks",
"susielu"
],
"repo": "emeeks/react-dorling-map",
"url": "https://github.com/emeeks/react-dorling-map/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1063990003 | La'Tonia Mertica EW Mentee Assessment (to date)
as much as I could complete on my own
Hello Those Powering EW,
Didn't see option to add reviewer(s) et cetera. Apologies if this pull request is in error in any way.
Thanks for this opportunity, please stay safe.
| gharchive/pull-request | 2021-11-26T00:37:37 | 2025-04-01T06:38:33.603403 | {
"authors": [
"LaTonia-Mertica"
],
"repo": "emergentworks/mentee-assessments",
"url": "https://github.com/emergentworks/mentee-assessments/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1477168867 | Update pure_glow example to glutin 0.30
A lot has changed in glutin 0.30, it would be nice if the pure_glow example could be updated to glutin 0.30 :)
I had some code that was based on the pure_glow example before, now I'm updating it to glutin 0.30, but I'm not sure if I'm doing everything correctly with the new ways of doing things.
You can find my code here: https://github.com/rust-windowing/glutin/issues/1445#issuecomment-1337903593
I would appreciate if you can let me know if it's correct. The pure_glow example could be updated similarly.
@coderedart Thanks. And should I use .with_profile(GlProfile::Core) or not? :)
always use Core, unless you are targeting really ancient hardware.
| gharchive/issue | 2022-12-05T18:33:53 | 2025-04-01T06:38:33.668882 | {
"authors": [
"Boscop",
"coderedart"
],
"repo": "emilk/egui",
"url": "https://github.com/emilk/egui/issues/2393",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2297249972 | Fix: Use default features in Image Crate
Closes #4489
Related #4495
Fix: Use default features in Image Crate
Because only .png is available after update #4495.
Required to use JPEG, etc.
No - eframe only needs png support, and should not be paying for the compilation of ten other image formats
| gharchive/pull-request | 2024-05-15T08:46:55 | 2025-04-01T06:38:33.671225 | {
"authors": [
"emilk",
"rustbasic"
],
"repo": "emilk/egui",
"url": "https://github.com/emilk/egui/pull/4498",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2135733164 | Add support for iOS PWA app bar
Add the ability to match the colour of the iOS PWA application bar to the overlay colour to create a more native feel.
Examples
Without iOS App Bar Support
With iOS App Bar Support
Solution
User needs these meta tags
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" />
<meta name="apple-mobile-web-app-capable" content="yes" />
Changes made in vaul/src/index.tsx to dynamically change the background colour to match overlay background colour
Will also solve some of the issues with this issue https://github.com/emilkowalski/vaul/issues/259
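As a rough sketch of that colour-matching idea (this is my own illustration, not vaul's actual implementation), the body background can be blended toward the overlay colour as the drawer opens; with `black-translucent` set, the iOS status bar picks that colour up:

```javascript
// Blend each RGB channel of the page background toward the overlay colour.
// `progress` runs from 0 (drawer closed) to 1 (drawer fully open).
function blendChannel(from, to, progress) {
  return Math.round(from + (to - from) * progress);
}

function statusBarColor(bodyRgb, overlayRgb, progress) {
  return bodyRgb.map((channel, i) => blendChannel(channel, overlayRgb[i], progress));
}

// A white page with a black overlay, drawer half open:
console.log(statusBarColor([255, 255, 255], [0, 0, 0], 0.5)); // -> [ 128, 128, 128 ]
```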
Maybe this can solve issue #199 too.
It seems like #199 isn't an issue with body backgrounds.
| gharchive/issue | 2024-02-15T06:16:35 | 2025-04-01T06:38:33.674810 | {
"authors": [
"keeganpotgieter"
],
"repo": "emilkowalski/vaul",
"url": "https://github.com/emilkowalski/vaul/issues/269",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2320095587 | How to swipe outside of the drawer to open/close?
The ChatGPT mobile app has the most beautiful drawer functionality, and I'm trying to recreate that.
At any time the user can side swipe the chat or drawer area to open or close the drawer and shift the entire page layout.
My main hurdle is being able to swipe outside of the drawer while still being able to select what's beneath that outside swipeable area...
Does my entire layout need to be inside the drawer? With only half of it looking like the drawer? And without the drawer being allowed to fully close?
Any suggestion anyone reading this has would be fantastic.
Thank you!
Check MUI's SwipeableDrawer.
@max-17 I haven't tried this yet but I think a horizontal CSS scroll snap on the layout solves my needs
That's not supported here. The Drawer here is a Dialog meaning that it usually sits on top of other elements.
If you'd want to use this Drawer for something like ChatGPT's mobile app you could position the Drawer off screen initially and add an additional drag event to the body that would translate the content.
@max-17 Did you create a fork and build this functionality?
I was able to achieve this quite easily:
Use a controlled Vaul drawer, i.e. open and onOpenChange
Add a react-swipeable handler to my layout, and capture specific onSwipeStart events within X pixels of the edge of the screen and set open to true
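The edge-detection step in that list can be sketched as a pure predicate (the function names and the 24px threshold are my own choices, not react-swipeable's or Vaul's API):

```javascript
// Only treat a swipe as "open the drawer" when it starts within
// `edgeWidth` pixels of the left screen edge and moves rightwards.
function shouldOpenDrawer(startX, direction, edgeWidth = 24) {
  return direction === 'right' && startX <= edgeWidth;
}

// Closing is allowed from anywhere while the drawer is open.
function shouldCloseDrawer(direction, isOpen) {
  return isOpen && direction === 'left';
}

console.log(shouldOpenDrawer(10, 'right'));  // -> true (started at the edge)
console.log(shouldOpenDrawer(200, 'right')); // -> false (started mid-screen)
```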
| gharchive/issue | 2024-05-28T04:27:43 | 2025-04-01T06:38:33.678981 | {
"authors": [
"RickRyan26",
"emilkowalski",
"isaachinman",
"max-17"
],
"repo": "emilkowalski/vaul",
"url": "https://github.com/emilkowalski/vaul/issues/360",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
957532127 | Update data-dall.Rmd
Handful of typo corrections & suggested syntax revisions. Looks like this is / will be a great resource!
This is a quick test with just a handful of suggested edits. Happy to modify the process if there's a better way to submit these, e.g. with separate commits. (For instance, I'm not sure "consumer robotics company" is necessarily a better descriptor than "e-commerce company".)
Wow - thanks so much for taking the time, @jonspring ! I really appreciate these
| gharchive/pull-request | 2021-08-01T16:24:30 | 2025-04-01T06:38:33.685485 | {
"authors": [
"emilyriederer",
"jonspring"
],
"repo": "emilyriederer/data-disasters",
"url": "https://github.com/emilyriederer/data-disasters/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
255306189 | Add PROJECT_BRIEF to Doxyfile
Come on, why wasn't it done before?
Let me know if I fucked this up, I don't have Doxygen installed.
"why wasn't it done before?" Because I expect people consulting the source code documentation to already know what source they are studying...
| gharchive/pull-request | 2017-09-05T14:55:04 | 2025-04-01T06:38:33.713659 | {
"authors": [
"NieDzejkob",
"idmean"
],
"repo": "emojicode/emojicode",
"url": "https://github.com/emojicode/emojicode/pull/84",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1964292352 | query to get all elements
what should I specify in the query to get all elements without filters?
Thanks for asking @ArtiikSK
Could you detail what you are trying to do?
Are you launching a web server to try it?
You can add a video or screenshot too if you like.
| gharchive/issue | 2023-10-26T20:06:48 | 2025-04-01T06:38:33.763412 | {
"authors": [
"ArtiikSK",
"herrardo"
],
"repo": "empathyco/x",
"url": "https://github.com/empathyco/x/issues/1342",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
954029439 | How can we display Custom Error message
//custom validate method
validator.validator.custom = function(el, event){
    if($(el).is('[name=password]') && $(el).val().length < 5){
        return 'Your password is too weak.';
    }
}
This method only returns the string, but it does not display any error on the screen.
The returned string is used as the error message; you can define a rule and message for a specific input.
Okay, thanks @emretulek
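To make the mechanism above concrete, here is a minimal sketch of the pattern with jQuery and the library's internals stripped away (the validate helper and the field object shape are my own illustration, not jbvalidator's actual API): a custom rule returns a string to mark the field invalid, and returns nothing when it is valid.

```javascript
// Hypothetical stand-in for a jbvalidator custom rule: return a string to
// flag an error, return undefined to pass.
function passwordRule(field) {
  if (field.name === 'password' && field.value.length < 5) {
    return 'Your password is too weak.';
  }
}

// Sketch of how the library can consume that return value: a non-empty
// string becomes the displayed error message for the field.
function validate(field, rule) {
  const message = rule(field);
  return message ? { valid: false, message } : { valid: true, message: '' };
}

console.log(validate({ name: 'password', value: 'abc' }, passwordRule));
// -> { valid: false, message: 'Your password is too weak.' }
```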
| gharchive/issue | 2021-07-27T15:56:08 | 2025-04-01T06:38:33.791112 | {
"authors": [
"ateequrrahman97",
"emretulek"
],
"repo": "emretulek/jbvalidator",
"url": "https://github.com/emretulek/jbvalidator/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
122428944 | update stripe api version
The change logs after 2015-09-08 don't have any major changes in the koudoku code.
LGTM
hitomi :heart:
| gharchive/pull-request | 2015-12-16T04:53:07 | 2025-04-01T06:38:33.856027 | {
"authors": [
"mediavrog",
"pcboy",
"shawila"
],
"repo": "en-japan/koudoku",
"url": "https://github.com/en-japan/koudoku/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1508553652 | Schedule notifications upon registering or claiming rewards
Currently, we use the feed to schedule notifications for the registering and meetup reminders, but we can actually schedule them from within the app when we do certain actions.
After registering, we can schedule two notifications, which should be reminders for the next meetup:
24 hours before the meetup time
1 hour before the meetup time
The meetup time can be fetched here: https://github.com/encointer/encointer-wallet-flutter/blob/6fc236c622932d2b503f2ae5055c846dd6c7ccaa/lib/store/encointer/sub_stores/community_store/community_store.dart#L64. The value can't be null after registering, as it can only be null if we have not chosen a community.
After claiming the rewards, we can set a reminder to register at the start of the registering phase. Note: We must only schedule the notification if we claim the rewards in the attesting phase, not in the registering phase, obviously.
The timestamp to use is the the nextPhaseTimestamp: https://github.com/encointer/encointer-wallet-flutter/blob/6fc236c622932d2b503f2ae5055c846dd6c7ccaa/lib/store/encointer/encointer.dart#L81
Caveats:
How do we ensure unique IDs that don't overlap with the IDs we assign to the notifications we get from the feed? Maybe we have to maintain an internal global counter for that.
We should only schedule notifications if we are connected to the parachain on kusama, the nctr-k
Regarding "How do we ensure unique IDs that don't overlap with the IDs we assign to the notifications we get from the feed? Maybe we have to maintain an internal global counter for that." Can we use a cache?
I suggest generating the meetupId from the meetupTime:
void main() {
  final meetupTimeAfter1Day = DateTime.now().add(const Duration(days: 1)).millisecondsSinceEpoch;
  final meetupTimeAfter7Days = DateTime.now().add(const Duration(days: 7)).millisecondsSinceEpoch;
  final meetupTimeAfter10Days = DateTime.now().add(const Duration(days: 10)).millisecondsSinceEpoch;
  final meetupTimeAfter15Days = DateTime.now().add(const Duration(days: 15)).millisecondsSinceEpoch;
  final meetupTimeAfter30Days = DateTime.now().add(const Duration(days: 30)).millisecondsSinceEpoch;
  print(generateMeetupIdByTimeStamp(meetupTimeAfter1Day)); // 1
  print(generateMeetupIdByTimeStamp(meetupTimeAfter7Days)); // 7
  print(generateMeetupIdByTimeStamp(meetupTimeAfter10Days)); // 10
  print(generateMeetupIdByTimeStamp(meetupTimeAfter15Days)); // 15
  print(generateMeetupIdByTimeStamp(meetupTimeAfter30Days)); // 30
}

int generateMeetupIdByTimeStamp(int meetupTime) {
  final now = DateTime.now().millisecondsSinceEpoch;
  int id = 0;
  int c = meetupTime - now;
  do {
    c -= 86400000; // 1 day = 86400000 milliseconds
    id++;
  } while (c > 0);
  return id;
}
So I think we can generate a dynamic ID without using any cache. Please let me know your opinion.
Hi @Eldar2021,
You suggestions look good!
When a user registers for a meetup, can they register for another meetup before that meetup ends?
Yes, this is possible, but the app should not allow that because you have not yet got your reputation and you will be a newbie again. You should only be allowed to do that after the rewards have been claimed.
If we use a global counter: "How do we ensure unique IDs that don't overlap with the IDs we assign to the notifications we get from the feed?"
In general, I like your approach of using the meetup time. Why do you need to divide by 86400000? Is the number too big? I think this could lead to problems where we end up with the same ID for reminders which are very close to each other, do you agree?
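To illustrate the collision concern (the timestamps here are hypothetical), day-granularity IDs clash when a meetup falls shortly after midnight, because the 24-hour and 1-hour reminders then land on the same calendar day; deriving the ID from minutes since epoch keeps the IDs deterministic but distinct:

```javascript
// Two ID schemes for a notification firing at `fireAtMs` (ms since epoch):
// one at day granularity, one at minute granularity.
const MS_PER_DAY = 86400000;
const MS_PER_MINUTE = 60000;
const dayId = (fireAtMs) => Math.floor(fireAtMs / MS_PER_DAY);
const minuteId = (fireAtMs) => Math.floor(fireAtMs / MS_PER_MINUTE);

const meetup = 19675 * MS_PER_DAY + 1800000; // hypothetical meetup at 00:30 UTC
const reminder24h = meetup - MS_PER_DAY;     // fires at 00:30 the day before
const reminder1h = meetup - 3600000;         // fires at 23:30 the day before

console.log(dayId(reminder24h) === dayId(reminder1h));       // -> true (collision)
console.log(minuteId(reminder24h) === minuteId(reminder1h)); // -> false (distinct)
```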
Is there registration deadline time? or registration finishes when meetup starts?
The registration deadline is when the registering phase is over. Remember, we have 3 phases REGISTERING > ASSIGNING > ATTESTING. As long as we are in the registering phase, the nextPhaseTimeStamp is the deadline for registering. And in the assigning phase you can already register for the meetup in the next assigning phase.
| gharchive/issue | 2022-12-22T21:15:07 | 2025-04-01T06:38:33.935786 | {
"authors": [
"Eldar2021",
"clangenb"
],
"repo": "encointer/encointer-wallet-flutter",
"url": "https://github.com/encointer/encointer-wallet-flutter/issues/927",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
452099739 | Convert Image to Canvas?
Is it possible to convert an image to a canvas and back so I can draw onto it?
if not could you add Graphics.NewCanvas(Image Image) and Graphics.NewCanvas(ImageData ImageData)?
There is no such method, but you can draw an image onto the canvas.
but this requires the draw event right?
I was trying to do it so it could be used anywhere
See Love.Graphics.Present. This should be what you are looking for.
can support be added so that images and canvases can be explicitly converted back and forth?
An example here:
copy the utils class into your project:
https://gist.github.com/endlesstravel/027799eb772d644b0d4110284256da6a
Use it like:
static public void Test_Issue75_ToPintImage()
{
    ISSUE_75.Init();
    var imgData = ISSUE_75.PrintImage(300, 300, () =>
    {
        Graphics.SetColor(Color.LightPink);
        Graphics.Rectangle(DrawMode.Fill, 0, 0, 100, 100);
        Graphics.SetColor(Color.White);
        Graphics.Circle(DrawMode.Line, 100, 100, 20);
    });
    Resource.EncodeToFile("test.png", imgData, ImageFormat.PNG);
}
Can this be added to Love2dCS as an explicit image conversion?
| gharchive/issue | 2019-06-04T16:43:42 | 2025-04-01T06:38:33.960895 | {
"authors": [
"Shadowblitz16",
"Shylie",
"endlesstravel"
],
"repo": "endlesstravel/Love2dCS",
"url": "https://github.com/endlesstravel/Love2dCS/issues/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1216637282 | [Fedora-Linux] Update EOL dates and command
F36 was delayed, which pushed back the F34 end of life date: https://fedorapeople.org/groups/schedule/f-36/f-36-key-tasks.html
F35 EOL date from: https://fedorapeople.org/groups/schedule/f-37/f-37-key-tasks.html
As for the command change, it's due to lsb_release not being installed by default anymore
Fedora is delayed again. EOL moved to 2022-06-07. Could be delayed more.
| gharchive/pull-request | 2022-04-27T00:44:36 | 2025-04-01T06:38:33.963312 | {
"authors": [
"Evernow",
"istiak101"
],
"repo": "endoflife-date/endoflife.date",
"url": "https://github.com/endoflife-date/endoflife.date/pull/1102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
238738518 | Feature request: style dots and corners
If you could add a more flexible feature to style it, such as this,
I would be very grateful! :grin:
Hi @mortezakarimi thank you. I will not add this myself but if someone is willing to contribute on this, that would be great.
The new upstream QR code library supports that now.
@DASPRiD What do you mean by "The new upstream QR code library"?
BaconQRCode, which this library is using.
@DASPRiD Do you by any chance have a reference where I can learn how to do so?
I can't seem to find anything about changing the style.
There's not really any documentation for this, but the ImageRenderer takes a RendererStyle object, which can be configured. Best to look at the source:
https://github.com/Bacon/BaconQrCode/blob/master/src/Renderer/ImageRenderer.php#L26
https://github.com/Bacon/BaconQrCode/blob/master/src/Renderer/RendererStyle/RendererStyle.php
Closed as this will not be implemented here.
| gharchive/issue | 2017-06-27T05:08:34 | 2025-04-01T06:38:33.971651 | {
"authors": [
"DASPRiD",
"MelchiorKokernoot",
"endroid",
"mortezakarimi"
],
"repo": "endroid/qr-code",
"url": "https://github.com/endroid/qr-code/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2087197774 | chore(main): release 0.2.3
:robot: I have created a release beep boop
0.2.3 (2024-01-17)
Bug Fixes
mise tasks are experimental and must be activated (4ab703b)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/engeir/volcano-core/releases/tag/v0.2.3 :sunflower:
| gharchive/pull-request | 2024-01-17T23:43:01 | 2025-04-01T06:38:33.989565 | {
"authors": [
"engeir"
],
"repo": "engeir/volcano-core",
"url": "https://github.com/engeir/volcano-core/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
163583363 | Links to social media posts
It would be really handy if there were a way to access the links leading to the social media posts from within the entry. I'd like to automatically display the links as part of the craft entry. "Join the discussion on Facebook/Twitter", etc.
Just to confirm, this would be from the front-end? This way, you could post the entry to Facebook, have it record the posted URL, and then being able to access in your template?
Something like:
{% set post = craft.socialPoster.post({ account: 'facebook', entryId: entry.id }) %}
{% if post.url %}
<a href="{{ post.url }}" target="_blank">Join the discussion on Facebook</a>
{% endif %}
Exactly. That way you could drive traffic from the website to the social media posts.
Just to follow this up - this is now implemented in 1.2.0. Use the following template code:
{% set posts = craft.socialPoster.posts({ element: entry }) %}
{% for post in posts %}
    <a href="{{ post.url }}" target="_blank">
        <i class="fa fa-{{ post.handle }}-square"></i> Join the discussion on {{ post.handle | capitalize }}
    </a>
{% endfor %}
| gharchive/issue | 2016-07-03T22:38:37 | 2025-04-01T06:38:33.997629 | {
"authors": [
"engram-design",
"pixeljitsu"
],
"repo": "engram-design/SocialPoster",
"url": "https://github.com/engram-design/SocialPoster/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2738220999 | Repo and Org Variables
I wonder if this statement holds true:
https://github.com/engswee/flashpipe/blob/687862ea650f903531ae0dfe13d1459582c95847/docs/github-actions-sync-apim.md?plain=1#L28
Github Actions offers Repo and Org variables (doc)
Are you referring to something else?
Thanks for highlighting this. Unfortunately, documentation often can't keep up with the speed that new features are introduced 😅
If you see the screenshot from the following page, you can see that variables were not there in the past.
https://engswee.github.io/flashpipe/github-actions-sync-to-git.html
I'd have to admit that I can't keep track of all the new features that are constantly being rolled out, so have definitely missed this one out. It's good to know about this, so that I can use it in my workflows and also update the documentation.
I'll keep this issue open until I get around to updating the documentation 😉
| gharchive/issue | 2024-12-13T11:53:32 | 2025-04-01T06:38:34.000913 | {
"authors": [
"ambravo",
"engswee"
],
"repo": "engswee/flashpipe",
"url": "https://github.com/engswee/flashpipe/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1023243522 | Prepare 6.0.1
Stepping through the commits is the easiest way to review this because of the docs changes.
Draft because we are waiting on https://github.com/enketo/openrosa-xpath-evaluator/pull/136 and then a 2.0.9 release.
Verified the following:
[ ] npm update
[ ] npm audit fix --production
[ ] npm run test
[ ] npm run test-browsers
[ ] npm run beautify
[ ] npm run build-docs
npm run test-browsers has a scary Firefox failure. It'd be good to see if it fails on v5.17.6.
`Firefox 93.0 (Mac OS 10.15) merging an instance into the model when the record contains namespaced attributes, the merged result is CORRECTLY namespaced namespaces are added correctly FAILED`
https://github.com/enketo/enketo-core/pull/822 should also be merged first, right?
| gharchive/pull-request | 2021-10-12T00:44:47 | 2025-04-01T06:38:34.019057 | {
"authors": [
"lognaturel",
"yanokwa"
],
"repo": "enketo/enketo-core",
"url": "https://github.com/enketo/enketo-core/pull/828",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
921127286 | Site created on startup
The default site is created on application start, with app and info-page content types, including populated fields.
Should we just remove the site init code and let the user manually set up office-league with the site office-league?
This would still keep the init of the office-league repos but remove the auto-generated site.
Remove the auto-generated site. Keep the generated repo storage.
| gharchive/issue | 2021-06-15T08:11:07 | 2025-04-01T06:38:34.031071 | {
"authors": [
"poi33"
],
"repo": "enonic/app-office-league",
"url": "https://github.com/enonic/app-office-league/issues/462",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2008567301 | fix: swap the order of handling order and pred in the graph select func
Multi order may generate predicates, so running the order funcs first allows reusing aliases.
If the predicates run first and there is a multiOrder field that generates its own predicates, those predicates cannot find the alias of the join and fail with an error that the column is not found on the table.
Related to: https://github.com/ent/contrib/pull/559
Thanks for the contribution, @michaelcaulley 🚀
| gharchive/pull-request | 2023-11-23T16:51:25 | 2025-04-01T06:38:34.070386 | {
"authors": [
"a8m",
"michaelcaulley"
],
"repo": "ent/ent",
"url": "https://github.com/ent/ent/pull/3841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
132623609 | Add support for more template generators
YAML
cfndsl - https://github.com/stevenjack/cfndsl
Any others?
It might be nice to have JSON support with comments. Would need to find a library to strip them out I guess.
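A naive version of that stripping step could look like this (a sketch of mine, not a specific library; it skips `#` characters inside string literals, but a real implementation should use a proper parser, or simply a YAML loader as noted below):

```javascript
// Remove #-comments from JSON-with-comments, ignoring # inside strings.
function stripComments(text) {
  return text.split('\n').map((line) => {
    let inString = false;
    let escaped = false;
    for (let i = 0; i < line.length; i++) {
      const ch = line[i];
      if (escaped) { escaped = false; }
      else if (ch === '\\') { escaped = true; }      // skip escaped characters
      else if (ch === '"') { inString = !inString; } // track string boundaries
      else if (ch === '#' && !inString) { return line.slice(0, i); }
    }
    return line;
  }).join('\n');
}

const raw = '{\n  "Env": "prod"  # deployment target\n}';
console.log(JSON.parse(stripComments(raw))); // -> { Env: 'prod' }
```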
Another one might be just using plain old ERB?
If you want JSON support with comments, technically you can just use YAML. An alternate syntax of YAML is actually just JSON + comments, so any YAML parser should be able to parse a JSON file with comments in it.
[1] pry(main)> require 'yaml'
=> true
[2] pry(main)> test = "{\n#testing\n}"
=> "{\n#testing\n}"
[3] pry(main)> YAML.load(test)
=> {}
So yeah, +1 for YAML input. should be pretty trivial to add, I'll fork and see.
cfndsl support would be really good as it is the most popular generator of CloudFormation templates written in Ruby. It should be really easy to integrate as it is just running cfndsl commands.
I also use ppjson to pretty-print the generated CloudFormation code.
I've built a lot of examples for cfndsl at https://github.com/neillturner/cfndsl_examples
and there is a utility to convert templates to cfndsl format that is very handy.
Hi @neillturner we have added basic CfnDsl template support with #99. However, this does not add support for using cfndsl variables. How would you like that supported in SM? How would you imagine a user supplying values for cfndsl variables when using a cfndsl template with SM?
| gharchive/issue | 2016-02-10T07:04:26 | 2025-04-01T06:38:34.094383 | {
"authors": [
"flyinbutrs",
"gstamp",
"neillturner",
"stevehodgkiss",
"thekindofme"
],
"repo": "envato/stack_master",
"url": "https://github.com/envato/stack_master/issues/81",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |