Columns:
id: string, length 4 to 10
text: string, length 4 to 2.14M
source: string, 2 classes
created: timestamp[s], range 2001-05-16 21:05:09 to 2025-01-01 03:38:30
added: timestamp, range 2025-04-01 04:05:38 to 2025-04-01 07:14:06
metadata: dict
1149835176
🛑 Almacén Reina Batata is down In f622bb5, Almacén Reina Batata (https://www.reina-batata.com.ar) was down: HTTP code: 0 Response time: 0 ms Resolved: Almacén Reina Batata is back up in f33b2d7.
gharchive/issue
2022-02-24T22:34:17
2025-04-01T04:34:06.046205
{ "authors": [ "efecear" ], "repo": "efecear/upptime", "url": "https://github.com/efecear/upptime/issues/429", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1175913016
🛑 Almacén BioPandora is down In da81d2f, Almacén BioPandora (https://www.biopandora.com.ar) was down: HTTP code: 0 Response time: 0 ms Resolved: Almacén BioPandora is back up in 6618e76.
gharchive/issue
2022-03-21T20:35:38
2025-04-01T04:34:06.048515
{ "authors": [ "efecear" ], "repo": "efecear/upptime", "url": "https://github.com/efecear/upptime/issues/4657", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1213056661
🛑 Almacén Reina Batata is down In 18a1dd0, Almacén Reina Batata (https://www.reina-batata.com.ar) was down: HTTP code: 0 Response time: 0 ms Resolved: Almacén Reina Batata is back up in 610bc46.
gharchive/issue
2022-04-23T01:27:42
2025-04-01T04:34:06.051425
{ "authors": [ "efecear" ], "repo": "efecear/upptime", "url": "https://github.com/efecear/upptime/issues/8551", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1258100149
🛑 Almacén Reina Batata is down In 68c1536, Almacén Reina Batata (https://www.reina-batata.com.ar) was down: HTTP code: 0 Response time: 0 ms Resolved: Almacén Reina Batata is back up in 24d22ac.
gharchive/issue
2022-06-02T12:13:40
2025-04-01T04:34:06.053708
{ "authors": [ "efecear" ], "repo": "efecear/upptime", "url": "https://github.com/efecear/upptime/issues/9743", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1149258218
New feature: image detection strategy "edge" Hi, with this pull request I want to share the work of my valued colleague @gautamilango and me. It introduces another strategy, edge, to detect images in situations where too many pixels are different and further lowering confidence would lead to false (or no) results. The key here is to pre-process both images (reference image and screenshot) with Canny edge detection (https://scikit-image.org/). This reduces the images to the most relevant parts before they are compared. It makes image detection extremely robust against unpredictable colour nuances (example: "hover" colors for buttons), compression artifacts (e.g. when testing over RDP/Citrix, which compress screen information depending on the available bandwidth), pixel deviations of dynamic image sources (e.g. maps from 3rd-party providers), ... Debug Image is a helper keyword and starts a UI (written by Gautam) which gives valuable insights about the best strategy (default or edge), its parameters (confidence, edge detection params), the number of matches (unknown so far), and the match score of the best match (also unknown, done with trial & error). There is also a detailed debugger. It shows the edge detection results in order to fine-tune the parameters. We are already using the edge strategy successfully in production (synthetic e2e monitoring with Checkmk/Robotmk) and it works pretty well. To make this strategy as integrated as possible, we have put a lot of effort into this, without breaking any existing functionality. Thanks to ABRAXAS Informatik AG in Switzerland, which spent the resources to make this possible at all! 👍 That's all for now - we are looking forward to hearing from you. Any suggestion is highly appreciated! :-) Best regards, Simon (simon.meggle at elabit.de) PS: See also https://blog.robotmk.org/ Hi @simonmeggle, This at first glance seems an awesome addition to the library! I need to check this thoroughly, which I haven't had time for yet.
I hope to check this in the coming weeks. Hi Tattoo, glad to hear that you find this useful. I understand that this is a big commit which needs to be carefully reviewed. Take your time, I can provide you any assistance you need! Hi Tattoo, I just want to ask if I can be of any help? Regards, Simon I am already looking for such a feature, as I often struggle with (nested) remote desktop connections. This would be very useful in our environment. Hope it gets implemented soon. Happy new year! Is there already a timeline to implement this? This function would be very helpful for my use cases. Many thanks. Is there already a timeline to implement this? It will be implemented soon, but obviously not in this library (the PR still hangs, no one has ever reviewed it).
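The pre-processing idea described in the PR can be sketched in a few lines. In the actual PR, scikit-image's Canny detector does the edge step; the sketch below substitutes a plain gradient-magnitude threshold to stay dependency-free, and the match score shown (intersection-over-union of edge pixels) is an illustrative stand-in, not the library's real scoring:

```python
import numpy as np

def to_edges(gray, thresh=0.1):
    """Reduce a grayscale image to a binary edge map. The PR uses
    scikit-image's feature.canny for this step; here a simple
    gradient-magnitude threshold stands in for it."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh

def edge_overlap(reference, candidate, thresh=0.1):
    """Crude match score on edge maps: intersection over union of
    edge pixels. Colour nuances and compression noise mostly vanish
    after the edge step, which is the point of the 'edge' strategy."""
    ref, cand = to_edges(reference, thresh), to_edges(candidate, thresh)
    union = np.logical_or(ref, cand).sum()
    return float(np.logical_and(ref, cand).sum() / union) if union else 0.0
```

Two images that differ only in colour or compression noise end up with nearly identical edge maps, so the overlap score stays high even where raw pixel comparison fails.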
gharchive/pull-request
2022-02-24T12:59:47
2025-04-01T04:34:06.063858
{ "authors": [ "7meis", "PhilippLemke", "Tattoo", "roland-gsell", "simonmeggle" ], "repo": "eficode/robotframework-imagehorizonlibrary", "url": "https://github.com/eficode/robotframework-imagehorizonlibrary/pull/57", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
630885255
Testing components that use Telerik components I have a Blazor component (page) that uses Telerik components which I wish to test, but find that Telerik handles clicks etc. differently from regular Blazor components. Let's say we have a component <SomeComponent>: <TelerikDropDownList Data="@Data" @bind-Value="SelectedValue" Id="value-picker" OnChange="UpdateSomething"> </TelerikDropDownList> <span> @code { [Inject] private ISomeService SomeService { get; set; } private List<Option> Data = new List<Option>() { new Option{Value = 1, Text = "Option1"}, new Option{Value = 2, Text = "Option2"} }; private int SelectedValue{get; set;} private async Task UpdateSomething() { var result = SomeService.SomeMethod(SelectedValue); // do something with result } public class Option{ public int Value{get; set;} public string Text{get; set;} } } And I wish to test that when "Option2" is selected then SomeService.SomeMethod is called with the chosen value. I have set up a test: SomeComponentTests.razor: @using Telerik.Blazor.Components @inherits TestComponentBase <Fixture Test="SomeTestMethod"> <ComponentUnderTest> <TelerikRootComponent> <SomeComponent /> </TelerikRootComponent> </ComponentUnderTest> </Fixture> and SomeComponentTests.razor.cs: using Bunit; using Moq; using System; using System.Collections.Generic; using Telerik.Blazor.Components; namespace App.Tests.Pages { public partial class IndexTests { public void GivenTenantAndDateRangesShouldPopulateChart(Fixture fixture) { // Arrange fixture.Services.AddSingleton<ITelerikStringLocalizer, TelerikStringLocalizer>(); fixture.Services.AddMockJsRuntime(); var someServiceMock = new Mock<ISomeService>(MockBehavior.Loose); fixture.Services.AddSingleton(someServiceMock.Object); var cut = fixture.GetComponentUnderTest<TelerikRootComponent>(); SelectDropDownValue(cut, 2); // Should choose 2nd option in dropdown // Act cut.Render(); // Assert someServiceMock.Verify(x => x.SomeMethod(2), Times.Once); // 'Value' for 2nd option is 2 }
private void SelectDropDownValue(IRenderedComponent<TelerikRootComponent> component, int option) { component.Find("#value-picker").Click(); // Throws 'The element does not have an event handler for event 'onclick', nor any other events.' component.Find($".k-list.k-reset > li:nth-of-type({option})").Click(); } private void SelectDropDownValue2(IRenderedComponent<TelerikRootComponent> component, int option) { component.Find("span:nth-of-type(1) .k-input").Click(); // Throws 'The element does not have an event handler for event 'onclick', nor any other events.' component.Find($".k-list.k-reset > li:nth-of-type({option})").Click(); } } } but always get the same exception on Click() method call. The css selectors seem to be correct as when running the application and calling these methods via console using jQuery (e.g. $('#value-picker').click()) everything works as expected (opens the dropdown). How would it be possible to achieve the desired result (get the test to work)? @mihail-vladov @EdCharbeneau Thanks for a well described question. Makes it much easier to provide help. I do not know how the click bindings are done in Teleriks components, but It could be that his is related to issue #119, i.e. that Telerik binds click handlers to elements higher up the DOM tree, and not on the individual elements. To verify this, inspect cut.Markup and and look for attributes with a blazor: prefix. Those are how event handlers are represented in the DOM tree. If the dropdown you are trying to change is a select or input element, you might need to use cut.Find("#value-picker").Change(ID OF OPTION TO SELECT) instead. Thank you for the quick reply! I inspected the markup and didn't find any attributes with blazor: prefix. 
Full markup: <span aria-disabled="false" aria-haspopup="listbox" aria-activedescendant="92ce9128-012a-4f8a-84d8-0ae0fb9185eb" role="listbox" aria-describedby="9b9734a9-5d93-46ea-9c05-826e337c39f5" class="k-widget k-dropdown k-header telerik-blazor" data-id="8df2b45d-7ac8-4fa6-9db5-2c2940194068" style=" width: 300px;" tabindex="0" aria-expanded="false"> <span class="k-dropdown-wrap k-state-default "> <span class="k-input" id="9b9734a9-5d93-46ea-9c05-826e337c39f5"> </span> <span class="k-select"> <span class="k-icon k-i-arrow-60-down"></span> </span> </span> <select value="0" tabindex="-1" id="value-picker" style="opacity: 0; width: 0px; border: 0px; z-index: -1; position: absolute;"> </select> </span> <div class="telerik-blazor k-animation-container " data-id="9467b203-60f7-43f1-962f-069ecfb94f81" style="width: ; height:200px; z-index: 10002; "> <div data-id="79910c8f-fc2b-4566-93aa-1567136af670" style="height: 200px; transition-delay: 0ms; transition-duration: 300ms; display: none;" class="k-popup k-reset"> </div> </div> Trying to change the dropdown value using Change() resulted in a similar exception: The element does not have an event handler for the event 'onchange', nor any other events. OK. The <select> certainly looks empty and does not have any child items, nor does it have any event handlers attached. Just for comparison, can you copy the same markup from the browser when rendering the component there? Interesting. Do you see items in the dropdown in the browser? In that case, it looks as if it is created using JavaScript. Yes, the items are visible in the browser. So probably yeah, they are manipulated by JS. I see that when I open the dropdown, a div with class k-list-scroller appears in the k-animation-container etc. Ok. I thought Telerik was not depending on JavaScript in their components in this way. That will make it hard to test with bUnit then.
But you would also be dedicating a test to testing at least some parts of Telerik's components, which is probably a waste of time. The alternative is to make your UpdateSomething() method internal, and make it accessible to the test project using the InternalsVisibleTo attribute. Then you can invoke it from a test and verify its behavior that way. I am also thinking about shallow rendering/mocking components, and that would certainly solve this issue as well, but alas, that is not supported currently. The TelerikDropDown uses an animation container to create a drop-down effect. We use this because the default HTML drop down doesn't allow us to have templates and other enhancements we add. Animations require JS to work properly; Blazor doesn't support them at the framework level. I don't believe we're doing what is classically referred to as DOM manipulation here. Instead it's a JavaScript interop that enables us to identify when the animation starts and stops. Hello @EdCharbeneau and thank you for the answer. Is this approach similar for other Telerik components as well, meaning it is difficult (impossible?) to use a unit testing framework like bUnit for writing a test scenario as in the opening post (i.e. when this is selected/chosen/inputted, verify that a service is called with that method)? I don't believe we're doing what is classically referred to as DOM manipulation here. Instead it's a JavaScript interop that enables us to identify when the animation starts and stops. I imagine there is a good reason for Telerik's approach. My general impression is that the Telerik components are generally following Blazor best practices. That said, from what @joonastamm has shown here, it does look like JavaScript is used to both generate the items in the drop down and listen to click events on them and the drop down, which means that bUnit has nothing to trigger.
@joonastamm if the above is true, then your best bet is to make your UpdateSomething() method internal and call it directly, e.g. cut.Instance.UpdateSomething(), and that way trigger whatever logic you want to test. Remember, the primary goal of your test should not be to test Telerik's components, so even though it is unfortunate that you cannot test that you have provided the Telerik dropdown with the right callback method, at least you can test that the callback method performs as expected. No worries, I'll check with engineering to see what the lifecycle is like for this one. Something may have changed since I looked at the source last. @joonastamm This is typical with components that require animation. Like Drawer, Menu, and Grid filter menus. Sorry it's making testing difficult. @EdCharbeneau thanks for checking up on this. I am looking into making scenarios like these easier by finding a way to enable shallow rendering (#17) or mocking/faking certain components (#53). Until then, bUnit is certainly meeting its limits in scenarios like this. I'll close this issue as it seems there is no easy way to hook into this directly for now.
gharchive/issue
2020-06-04T14:40:35
2025-04-01T04:34:06.094864
{ "authors": [ "EdCharbeneau", "egil", "joonastamm" ], "repo": "egil/bunit", "url": "https://github.com/egil/bunit/issues/141", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2247925311
changes made for 90 Days and 30th April scenario Please review and merge this code, or let me know if any changes or improvements need to be made. @nmohammednawaz-egov where is the amount reversal logic? After 90 days, if you are creating a new PI, then the amount which was deducted from the funds has to be released.
gharchive/pull-request
2024-04-17T10:16:07
2025-04-01T04:34:06.121738
{ "authors": [ "nmohammednawaz-egov", "shailesh-egov" ], "repo": "egovernments/DIGIT-Works", "url": "https://github.com/egovernments/DIGIT-Works/pull/1701", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
935116098
Code cleanup Naming as per conventions Private repository
gharchive/pull-request
2021-07-01T18:38:11
2025-04-01T04:34:06.122458
{ "authors": [ "Sarvesh-eGov" ], "repo": "egovernments/core-services", "url": "https://github.com/egovernments/core-services/pull/828", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1642009997
Keep scale and offset on set new image When using setImage it forces the interactiveImageView to set the image in the center rather than keep its current position and scale. Is there any way to set a new image (replace) while keeping the current offset and scale properties? Hi @adirkol, I will check during the next few days if we can add such a feature. Thanks for your suggestion, it looks like a good addition to the library. Added new method "updateImage" that updates only the image of the ImageView without changing other attributes. In version 1.0.25 we added a new method "updateImage" that updates only the image of the ImageView without changing other attributes. Not exactly. Indeed updateImage updates the image, but it doesn't keep the position offset/scale. What I'm doing right now is saving the offset on the delegate and reapplying it by setContentOffset every time... @adirkol You are right, I did not run enough tests before the release. Please check version 1.9.26 to see if the issue is resolved. @adirkol You are right, I did not run enough tests before the release. Please check version 1.9.26 to see if the issue is resolved. According to version 1.9.26, updateImage indeed solves the problem. However, if you call updateImage while the image is still moving (animated on panning), the image is shown in its actual position, which can be outside the interactiveImageView. Not perfect, but it works. You can solve it by making sure the contentOffset is always "inside" the box in updateImage: var offsetX = offset.x < 0 ? 0 : offset.x offsetX = offsetX > imageToLoad.size.width*scale-fromView.frame.width ? imageToLoad.size.width*scale-fromView.frame.width : offsetX var offsetY = offset.y < 0 ? 0 : offset.y offsetY = offsetY > imageToLoad.size.height*scale-fromView.frame.height ? imageToLoad.size.height*scale-fromView.frame.height : offsetY self.offset = CGPoint(x: offsetX, y: offsetY)
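The clamping workaround in the last comment boils down to keeping the content offset within [0, imageSize × scale − viewportSize] on each axis. A language-neutral Python sketch of that rule (the names here are illustrative, not the library's API):

```python
def clamp_offset(offset, image_size, scale, viewport):
    """Clamp an (x, y) content offset so the scaled image always
    covers the viewport: each axis is limited to the range
    [0, image_dimension * scale - viewport_dimension]."""
    clamped = []
    for off, img, view in zip(offset, image_size, viewport):
        max_off = max(img * scale - view, 0.0)  # never negative
        clamped.append(min(max(off, 0.0), max_off))
    return tuple(clamped)
```

Applying this before reassigning the offset guarantees the image can never be parked outside the view, which is exactly what the Swift snippet above does per axis with ternary expressions.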
gharchive/issue
2023-03-27T12:11:22
2025-04-01T04:34:06.129891
{ "authors": [ "adirkol", "egzonpllana" ], "repo": "egzonpllana/InteractiveImageView", "url": "https://github.com/egzonpllana/InteractiveImageView/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
609521376
A question for the author: I would like to ask how exactly the system connects to IoT devices. I personally think this system is very impressive and would like to learn from it. Currently the only reserved device access method is the HTTP protocol, i.e. the system exposes a REST endpoint for devices to call. When a device calls the endpoint over HTTP, it transmits the device's ID and secretKey, along with the field name and data of the data stream to send. After receiving the request, the system validates the data streams associated with the device based on its ID and secretKey; if they match the data stream field in the request, it accepts the data for that field and stores it in the database. This completes one transmission of data from a device to the platform.
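Based on that answer, a device-side call might look like the sketch below. The endpoint path and JSON key names are assumptions for illustration only; the actual REST contract is defined by the platform:

```python
import json
import urllib.request

def build_payload(device_id, secret_key, field, value):
    """Assemble the JSON body carrying the device credentials plus one
    datastream field/value pair, as described in the issue answer."""
    return json.dumps({
        "deviceId": device_id,    # hypothetical key names --
        "secretKey": secret_key,  # the real contract is defined
        "field": field,           # by the platform's REST API
        "value": value,
    })

def push_datastream(base_url, device_id, secret_key, field, value):
    """POST one datastream value; the server validates the ID/secretKey
    pair against the device's registered datastreams before storing."""
    body = build_payload(device_id, secret_key, field, value)
    req = urllib.request.Request(
        base_url + "/api/datastream",  # hypothetical endpoint path
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Each successful request completes one device-to-platform transmission as described above.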
gharchive/issue
2020-04-30T02:40:29
2025-04-01T04:34:06.146050
{ "authors": [ "FreezeLin", "sunriseydy" ], "repo": "ehaut/syhthems-platform", "url": "https://github.com/ehaut/syhthems-platform/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
146670213
API to get complete Ehcache configuration in text format It will be helpful if we can get an API which will give us the complete Ehcache configuration in text format (probably XML). This will be helpful for debugging purposes and for sharing the configuration among Ehcache users. I think this is a good idea... but I would vote for not XML. XMLs are easy to parse (using some standard parsers) and help in understanding the tree structure. Moreover, for debugging, I could just use the same XML and create an original-like configuration to debug/reproduce the issue with 0% error rate. The set of all programmatic configurations is larger than the universe of all XML configurations... not every one is mappable. Moreover, for debugging, I could just use the same XML and create an original-like configuration to debug/reproduce the issue with 0% error rate. Do we really want to make that a goal/feature? It's a pretty lofty one, and one that caused us pain in the past. I'd want to see a strong use case for this. After discussion with @mathieucarbou, there is some need to be able to visualise configuration, without ever thinking about XML format. In this context, the work for this enhancement is to create a formatted String out of the Configuration object and its graph. This should give us a valid String representation from the Cache level as well.
gharchive/issue
2016-04-07T16:05:51
2025-04-01T04:34:06.149411
{ "authors": [ "chrisdennis", "ljacomet", "skbansal" ], "repo": "ehcache/ehcache3", "url": "https://github.com/ehcache/ehcache3/issues/951", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1660335378
Optimization + fix W3C validator errors [x] Optimize graphics [x] Remove deprecated scrolling="no" in iframe [x] Fix <TeamPerson> validity (span -> div) [x] Fix wrong CSS values [x] Load all resources after button click Before/after screenshots are attached for the W3C errors/warnings and for the image optimization.
gharchive/pull-request
2023-04-10T06:55:24
2025-04-01T04:34:06.219459
{ "authors": [ "a-chabin" ], "repo": "ekaterinburgdev/ekaterinburg.dev", "url": "https://github.com/ekaterinburgdev/ekaterinburg.dev/pull/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1123909866
Unable to install v2 using go get I tried go get -u github.com/eko/gocache/v2/cache And got # github.com/XiaoMi/pegasus-go-client/idl/base ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\blob.go:18:31: not enough arguments in call to iprot.ReadBinary have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\blob.go:27:26: not enough arguments in call to oprot.WriteBinary have ([]byte) want (context.Context, []byte) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\error_code.go:105:34: not enough arguments in call to iprot.ReadString have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\error_code.go:110:26: not enough arguments in call to oprot.WriteString have (string) want (context.Context, string) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\gpid.go:18:25: not enough arguments in call to iprot.ReadI64 have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\gpid.go:30:23: not enough arguments in call to oprot.WriteI64 have (int64) want (context.Context, int64) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\rpc_address.go:26:31: not enough arguments in call to iprot.ReadI64 have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\base\rpc_address.go:35:23: not enough arguments in call to oprot.WriteI64 have (int64) want (context.Context, int64) # github.com/XiaoMi/pegasus-go-client/idl/cmd ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:42:36: not enough arguments in call to iprot.ReadStructBegin have () want 
(context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:47:55: not enough arguments in call to iprot.ReadFieldBegin have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:61:25: not enough arguments in call to iprot.Skip have (thrift.TType) want (context.Context, thrift.TType) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:80:31: not enough arguments in call to iprot.ReadFieldEnd have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:84:31: not enough arguments in call to iprot.ReadStructEnd have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:91:31: not enough arguments in call to iprot.ReadString have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:100:37: not enough arguments in call to iprot.ReadListBegin have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:108:32: not enough arguments in call to iprot.ReadString have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:115:29: not enough arguments in call to iprot.ReadListEnd have () want (context.Context) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:122:34: not enough arguments in call to oprot.WriteStructBegin have (string) want (context.Context, string) ..\..\go\pkg\mod\github.com\!xiao!mi\pegasus-go-client@v0.0.0-20211220102249-0e7f49437ffe\idl\cmd\cmd.go:122:34: too many errors Version 1.2 also does not work, with the same problem. 
OS: Windows 11 64 bit Golang: Version 1.17 Any help is appreciated. Thank you. Running go mod tidy solved it for me ^ I also get the same on a go build for my application; a go mod tidy did not fix this for me... golang version 1.17.9 issue found, there is a newer version of github.com/pegasus-kv/thrift but it's not used by github.com!xiao!mi\pegasus-go-client yet, to fix add replace github.com/pegasus-kv/thrift => github.com/pegasus-kv/thrift v0.13.0 // indirect to your go mod file issue found, there is a newer version of github.com/pegasus-kv/thrift but it's not used by github.com!xiao!mi\pegasus-go-client yet, to fix add replace github.com/pegasus-kv/thrift => github.com/pegasus-kv/thrift v0.13.0 // indirect to your go mod file This worked for me. Actually allowed me to build. Thanks for this. Hello, the pegasus store is now in a separate Go module so this should be fixed in case you only use the Gocache library (by importing github.com/eko/gocache/v4/lib). I close this issue for now but feel free to reopen if you still have any issue with this new v4.0.0 release.
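For reference, here is how the workaround from the thread looks inside a go.mod file. Only the replace line comes from the comments above; the module path and the gocache version are placeholders:

```
module example.com/yourapp // placeholder module path

go 1.17

require github.com/eko/gocache/v2 v2.0.0 // placeholder version

// Pin the thrift fork to the older API still expected by pegasus-go-client:
replace github.com/pegasus-kv/thrift => github.com/pegasus-kv/thrift v0.13.0
```

A replace directive forces every transitive consumer of the module onto the pinned version, which is why it resolves the "not enough arguments" build errors without touching the dependency's source.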
gharchive/issue
2022-02-04T07:50:54
2025-04-01T04:34:06.242874
{ "authors": [ "coolblknerd", "danilopolani", "eko", "ffleader1", "jack-evans" ], "repo": "eko/gocache", "url": "https://github.com/eko/gocache/issues/126", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1603756305
FreecacheStore does not implement FreecacheClientInterface Hello, While trying to implement the freecache store I noticed a blocking issue for me: "github.com/eko/gocache/store/freecache/v4".FreecacheStore does not implement "github.com/eko/gocache/store/freecache/v4".FreecacheClientInterface (wrong type for method Clear) With the function Clear(ctx context.Context) error we are also compliant with store.StoreInterface, so I think the only issue is in the FreecacheClientInterface Clear method. Steps for Reproduction Simply try to pass a freecache.FreecacheStore to freecache.NewFreecache Expected behavior: That freecache.FreecacheStore fully implements freecache.FreecacheClientInterface, so that the Clear method from the interface requires a Context or not Actual behavior: freecache.FreecacheClientInterface's Clear() method does not require any argument and has no return value; freecache.FreecacheStore's Clear(_ context.Context) error requires a context.Context but discards it Versions: github.com/eko/gocache/lib/v4 v4.1.3 github.com/eko/gocache/store/freecache/v4 v4.1.2 Thanks New to Go, not an issue, sorry
gharchive/issue
2023-02-28T20:23:09
2025-04-01T04:34:06.247184
{ "authors": [ "BaptisteLemarcis" ], "repo": "eko/gocache", "url": "https://github.com/eko/gocache/issues/202", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
671560661
TypeError: write() got an unexpected keyword argument 'start_frame' Hi, Where does this error come from? Even on AD_live prediction I got the same error. For information, in the last commit VideoIterTrain has been changed to VideoIter so I made some small changes Traceback (most recent call last): File "video_demo.py", line 268, in dir_list=cd3_extartion(video_parth,device=device) File "video_demo.py", line 89, in cd3_extartion features_writer.write(feature=outputs[i], video_name=vid_name, start_frame=start_frame, dir=dir) TypeError: write() got an unexpected keyword argument 'start_frame' Resolved, small change on feature_extractor.py with def store(self, feature, start_frame): self.data[start_frame // self.chunk_size] = list(feature.cpu().numpy()) def write(self, feature, video_name, start_frame, dir): if not self.has_video(): self._init_video(video_name, dir) if self._is_new_video(video_name, dir): self.dump() self._init_video(video_name, dir) self.store(feature, start_frame)
gharchive/issue
2020-08-02T06:13:50
2025-04-01T04:34:06.250195
{ "authors": [ "yosagaf" ], "repo": "ekosman/AnomalyDetectionCVPR2018-Pytorch", "url": "https://github.com/ekosman/AnomalyDetectionCVPR2018-Pytorch/issues/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
198521234
Issue/#128 include dockerfiles to start from cloud image Now there are two new commands: ekumenlabs-terminus-intel ekumenlabs-terminus-nvidia These two use the ekumenlabs/gazebo-terminus image instead of compiling it from source to save time. Readme has been updated. @Shokman in another issue I'll split the dockerfiles. Perfect.
gharchive/pull-request
2017-01-03T17:09:49
2025-04-01T04:34:06.261126
{ "authors": [ "Shokman", "agalbachicar" ], "repo": "ekumenlabs/terminus", "url": "https://github.com/ekumenlabs/terminus/pull/138", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2670227446
Termux Support Please make this tool compatible with Termux; that would be very handy. installation: curl -sSf https://sshx.io/get | sed 's|/usr/local/bin|/data/data/com.termux/files/usr/bin|g' | sh Error: ❯ sshx 2024-11-18T22:47:35.651654Z ERROR sshx: transport error Caused by: 0: error trying to connect: dns error: failed to lookup address information: Try again 1: dns error: failed to lookup address information: Try again 2: failed to lookup address information: Try again ❯ pkg install tsu #for running with root ❯ sudo sshx 2024-11-18T22:47:35.651654Z ERROR sshx: transport error Caused by: 0: error trying to connect: dns error: failed to lookup address information: Try again 1: dns error: failed to lookup address information: Try again 2: failed to lookup address information: Try again Nice! It looks like installation worked, but there's a DNS error from your logs. Any idea where that might be coming from? I don't know, maybe any URL is blocked by my DNS. Kindly test it on your end. I'd need more details -- unfortunately I don't have Termux as you can see. Could you try debugging a bit more on your side, with tools like dig, strace or tcptraceroute? You can use ChatGPT to help with this if you're not familiar with network debugging. I tried; you can check the logs here. If there is any specific command kindly share. ❯ strace -e trace=network sshx socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0, [6, 7]) = 0 socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0) = 9 connect(9, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 24) = -1 ENOENT (No such file or directory) 2024-11-19T01:50:13.337578Z ERROR sshx: transport error Caused by: 0: error trying to connect: dns error: failed to lookup address information: Try again 1: dns error: failed to lookup address information: Try again 2: failed to lookup address information: Try again +++ exited with 1 +++ It looks like you have your DNS resolver configured to nscd, and it's not working right now.
This is not a bug in sshx and is likely a misconfiguration of DNS on your Termux environment. These suggestions might help https://chatgpt.com/share/673bff16-7194-800e-9255-6edee8150efc
gharchive/issue
2024-11-18T23:07:01
2025-04-01T04:34:06.265007
{ "authors": [ "ekzhang", "sleeping3119" ], "repo": "ekzhang/sshx", "url": "https://github.com/ekzhang/sshx/issues/102", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2256009558
implement 'copy ... with ...' This is intended for use with immutable types and I think we should initially restrict its usage to those. Syntax: let x2 be copy x with a to 3, b to 4 set x to copy x with a to 3, b to 4 return copy x with a to 3, b to 4 Note, the advantages of starting with the word copy (which wasn't there in our original sketch) are: readability/verbalisation; it avoids a potential left-recursion issue in parsing; once copy is recognised, the rest of the template can be set up (and shown as a completion) Current status CopyWith exists as a parse node, used as an alternative within ExprNode. No tests exist yet. Best to do these as combined parse/compile tests. Picking up bug originally reported in #710. Any kind of expression that isn't a simple value/reference used in a to clause in copy...with results in an error. For example any one of the three to clauses in the statement below will cause the error: return copy g with priorTail to gbody.get(0), body to gbody + g.head, head to new Square(newX, newY)
gharchive/issue
2024-04-22T09:11:12
2025-04-01T04:34:06.281222
{ "authors": [ "richardpawson" ], "repo": "elan-language/LanguageAndIDE", "url": "https://github.com/elan-language/LanguageAndIDE/issues/311", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1986068267
Add option to make queue part of RabbitMQ transaction names

Is your feature request related to a problem?
It is possible to have multiple events which are sent to the same exchange. Therefore it may be difficult to distinguish the transactions if the exchange is used as part of the name.

Describe the solution you'd like
As a solution it would be great to have a configuration option, so that it is possible to use the queue as part of the transaction name instead of the exchange.

Additional context
There is a similar feature request from @JonasKunz in issue #3415, with the difference that he would like to use the routing key as part of the name. He proposed an enum-like configuration option (e.g. rabbitmq_naming_mode). This would fit very well, as it could then be extended to have 3 modes (exchange, queue and routing-key).

Implemented in #3424.

Thanks a lot for the support. This feature is really helping us a lot.
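The proposed enum-like option can be sketched as follows (a Python illustration; the option name, enum values and the transaction-name format are illustrative, not the agent's actual implementation):

```python
from enum import Enum

class RabbitMQNamingMode(Enum):
    EXCHANGE = "exchange"
    QUEUE = "queue"
    ROUTING_KEY = "routing_key"

def transaction_name(mode, exchange, queue, routing_key):
    """Pick the message attribute used in the transaction name."""
    part = {
        RabbitMQNamingMode.EXCHANGE: exchange,
        RabbitMQNamingMode.QUEUE: queue,
        RabbitMQNamingMode.ROUTING_KEY: routing_key,
    }[mode]
    return f"RabbitMQ RECEIVE from {part}"
```

With QUEUE mode, two events arriving through the same exchange but different queues would then produce distinguishable transaction names.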
gharchive/issue
2023-11-09T17:11:59
2025-04-01T04:34:06.287697
{ "authors": [ "Cortana7", "JonasKunz" ], "repo": "elastic/apm-agent-java", "url": "https://github.com/elastic/apm-agent-java/issues/3421", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
365973763
Use microsecond epoch timestamps

Prepares for https://github.com/elastic/apm-server/issues/1340
Includes guards against clock drift

Codecov Report

Merging #232 into master will increase coverage by 0.1%. The diff coverage is 94.44%.

@@             Coverage Diff              @@
##             master     #232      +/- ##
===========================================
+ Coverage     73.63%   73.74%     +0.1%
- Complexity     1125     1134        +9
===========================================
  Files           118      119        +1
  Lines          4218     4235       +17
  Branches        416      417        +1
===========================================
+ Hits           3106     3123       +17
  Misses          913      913
  Partials        199      199

Impacted Files | Coverage Δ | Complexity Δ
...o/elastic/apm/report/serialize/DateSerializer.java | 100% <100%> (ø) | 12 <2> (ø) :arrow_down:
...o/elastic/apm/impl/transaction/EpochTickClock.java | 100% <100%> (ø) | 7 <3> (?)
...ain/java/co/elastic/apm/impl/transaction/Span.java | 98.03% <100%> (+0.08%) | 22 <4> (+1) :arrow_up:
...ain/java/co/elastic/apm/impl/ElasticApmTracer.java | 79.56% <100%> (-0.15%) | 40 <1> (ø)
.../co/elastic/apm/impl/transaction/AbstractSpan.java | 75.36% <100%> (+1.11%) | 22 <2> (ø) :arrow_down:
...a/co/elastic/apm/impl/transaction/Transaction.java | 78.48% <100%> (+0.55%) | 25 <3> (+1) :arrow_up:
...lastic/apm/report/serialize/DslJsonSerializer.java | 87.24% <100%> (ø) | 120 <0> (ø) :arrow_down:
.../main/java/co/elastic/apm/opentracing/ApmSpan.java | 75.67% <33.33%> (ø) | 18 <2> (ø) :arrow_down:

Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 3427129...f37a89b. Read the comment docs.

This can be merged before the APM Server changes as this PR does not change the JSON structure yet. Once the Server is ready to accept an epoch microsecond timestamp, the agent just needs some small adjustments.
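The idea behind an epoch-tick clock with a drift guard can be sketched in a few lines (a Python illustration of the concept, not the agent's Java EpochTickClock):

```python
import time

class EpochTickClock:
    """Anchor wall-clock time once, then advance via the monotonic
    clock, so later reads cannot jump with wall-clock drift."""

    def __init__(self):
        self._epoch_anchor_us = int(time.time() * 1_000_000)
        self._mono_anchor_ns = time.monotonic_ns()

    def epoch_micros(self):
        # Elapsed time comes from the monotonic clock only.
        elapsed_us = (time.monotonic_ns() - self._mono_anchor_ns) // 1_000
        return self._epoch_anchor_us + elapsed_us
```

Subsequent reads are guaranteed non-decreasing even if the system wall clock is adjusted mid-transaction, which is the drift guard the PR description refers to.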
gharchive/pull-request
2018-10-02T16:13:42
2025-04-01T04:34:06.301816
{ "authors": [ "codecov-io", "felixbarny" ], "repo": "elastic/apm-agent-java", "url": "https://github.com/elastic/apm-agent-java/pull/232", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
272554616
Agent produces empty field names

Version 6.0 rc 1

I see fields like this being indexed:

"cookies": {
    "Max-Age": "86400",
    "Path": "/",
    "sessionid": "mdoieddd6stjvfm3obprohg8ebam8xpc",
    "expires": "Tue, 07-Nov-2017 12:01:27 GMT",
    "": "HttpOnly",
    "csrftoken": "mvy4wo38fXRwGmPwQn7huElMvDhRn9Hk"
}

I'm a little unsure how these are successfully indexed, as this object should be rejected. Attempting to index objects containing this structure results in an error.

Looks like there's a mix of properties for the cookie (Max-Age and Path) and actual cookies (sessionid, csrftoken).

Probably a bug in our cookie parsing. It splits

sessionid=mdoieddd6stjvfm3obprohg8ebam8xpc; expires=07-Nov-2017 12:01:27 GMT HttpOnly

up when it shouldn't.
empty field names, and for which the agent has no control (as shown by @beniwohli above). This really isn't an issue for most users, but it should be made clear mapping changes with APM data is probably something you should undertake with extreme care. I can see users enabling indexing on certain parts of the hierarchy to extract metrics - something they need to do with caution. The Workload module does not contain a cookie jar, so if it receives a Set-Cookie header from the server, it doesn't do anything about it. This can done in user-land by checking for the Set-Cookie header on the visit event and then making sure to set the appropriate Cookie header in for instance a filter. @watson something similar is happening already AFAICT (the Set-Cookie header is reused as Cookie header). Guess something like https://www.npmjs.com/package/set-cookie-parser could be used to convert the Set-Cookie header into a Cookie header? (I was hoping for a nice cookiejar hook in workload 😁 ) It was in fact just sending back the Set-Cookie as the Cookie, which is what we thought would work. A cookie jar would be nice, but I am rotating through ~100 users to simulate traffic. After some serious digging, it appears that the load-gen should just be sending the cookie names and values, and not the other options. A sanity check in curl against the same server yielded: (all redacted) Initial Request: $ >rm -f cookies && curl -v -b cookies -c cookies -I http://localhost:8000/ * Trying 192.168.1.60... 
* TCP_NODELAY set * Connected to localhost (192.168.1.60) port 8000 (#0) > HEAD / HTTP/1.1 > Host: localhost:8000 > User-Agent: curl/7.54.0 > Accept: */* > * HTTP 1.0, assume close after body HTTP/1.0 200 OK Date: Thu, 09 Nov 2017 19:53:02 GMT Server: WSGIServer/0.2 CPython/3.5.2 X-Frame-Options: SAMEORIGIN Vary: Accept-Language, Cookie Content-Language: en Content-Type: text/html; charset=utf-8 Set-Cookie: csrftoken=zz-CSRFTOKEN-zz; expires=Thu, 08-Nov-2018 19:53:02 GMT; Max-Age=31449600; Path=/ Set-Cookie: sessionid=zz-SID-SID-SID-zz; expires=Fri, 10-Nov-2017 19:53:02 GMT; HttpOnly; Max-Age=86400; Path=/ the resulting cookies file: jamie@orion[14:53:02]:~/Projects/GitRepo/demos/cyclops/docker_loadgen/src $ >cat cookies # Netscape HTTP Cookie File # https://curl.haxx.se/docs/http-cookies.html # This file was generated by libcurl! Edit at your own risk. localhost FALSE / FALSE 1541706782 csrftoken zz-CSRFTOKEN-zz #HttpOnly_localhost FALSE / FALSE 1510343582 sessionid zz-SID-SID-SID-zz second run: jamie@orion[14:53:19]:~/Projects/GitRepo/demos/cyclops/docker_loadgen/src $ >curl -v -b cookies -c cookies -I http://localhost:8000/ * Trying 192.168.1.60... 
* TCP_NODELAY set * Connected to localhost (192.168.1.60) port 8000 (#0) > HEAD / HTTP/1.1 > Host: localhost:8000 > User-Agent: curl/7.54.0 > Accept: */* > Cookie: csrftoken=zz-CSRFTOKEN-zz; sessionid=zz-SID-SID-SID-zz > * HTTP 1.0, assume close after body HTTP/1.0 200 OK Date: Thu, 09 Nov 2017 19:55:00 GMT Server: WSGIServer/0.2 CPython/3.5.2 X-Frame-Options: SAMEORIGIN Vary: Accept-Language, Cookie Content-Language: en Content-Type: text/html; charset=utf-8 * Replaced cookie csrftoken="zz-CSRFTOKEN-zz" for domain localhost, path /, expire 1541706900 Set-Cookie: csrftoken=zz-CSRFTOKEN-zz; expires=Thu, 08-Nov-2018 19:55:00 GMT; Max-Age=31449600; Path=/ * Replaced cookie sessionid="zz-SID-SID-SID-zz" for domain localhost, path /, expire 1510343700 Set-Cookie: sessionid=zz-SID-SID-SID-zz; expires=Fri, 10-Nov-2017 19:55:00 GMT; HttpOnly; Max-Age=86400; Path=/ So I will make it only send the name/value We should note that the cookie parsing should still be verified for when the agent is used in a server capacity- httponly and secure are valid attributes for server cookies- what I learned is that their presence implies a value of true (httponly=true). @jamiesmith the Cookie request header doesn't define httponly or any other fields except for the key/value pairs. These are only available in the Set-Cookie response header. These two headers have a completely different syntax, and one cannot interchange them. As such, I don't see what the agent could do when the wrong format is used in the Cookie header. 
We rely on the underlying framework (Flask/Werkzeug or Django) to decode the Cookie header, and even these two frameworks behave differently when presented with a Set-Cookie header:

In [1]: from werkzeug.http import parse_cookie
In [2]: parse_cookie("csrftoken=********; expires=Mon, 05-Nov-2018 12:35:24 GMT; Max-Age=31449600; Path=/; sessionid=********; expires=Tue, 07-Nov-2017 12:35:24 GMT; HttpOnly; Max-Age=86400; Path=/")
Out[2]: {'HttpOnly; Max-Age': '86400', 'csrftoken': '********', 'sessionid': '********'}

In [3]: from django.http.cookie import parse_cookie
In [4]: parse_cookie("csrftoken=********; expires=Mon, 05-Nov-2018 12:35:24 GMT; Max-Age=31449600; Path=/; sessionid=********; expires=Tue, 07-Nov-2017 12:35:24 GMT; HttpOnly; Max-Age=86400; Path=/")
Out[4]: {'': 'HttpOnly', 'Max-Age': '86400', 'Path': '/', 'csrftoken': '********', 'expires': 'Tue, 07-Nov-2017 12:35:24 GMT', 'sessionid': '********'}

Closing this as I don't see any needed action on the agent side. Feel free to re-open if you don't agree :)
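The conversion the thread settles on (send only the name=value pairs) can be sketched with Python's stdlib cookie parser as a stand-in for set-cookie-parser:

```python
from http.cookies import SimpleCookie

# A Set-Cookie-style string like the one the workload script was reusing.
set_cookie = ("csrftoken=mvy4wo38; expires=Mon, 05-Nov-2018 12:35:24 GMT; "
              "Max-Age=31449600; Path=/; sessionid=mdoiedd; HttpOnly; "
              "Max-Age=86400; Path=/")

jar = SimpleCookie()
jar.load(set_cookie)

# Keep only name=value pairs; attributes (Path, Max-Age, HttpOnly, ...)
# never belong in a request's Cookie header.
cookie_header = "; ".join(f"{name}={m.value}" for name, m in jar.items())
print(cookie_header)
```

SimpleCookie keeps the attributes on the morsels, so only the actual cookie names and values end up in the rebuilt Cookie header, which is exactly what the curl sanity check above shows curl doing.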
gharchive/issue
2017-11-09T12:56:08
2025-04-01T04:34:06.339336
{ "authors": [ "beniwohli", "gingerwizard", "jamiesmith", "roncohen", "watson" ], "repo": "elastic/apm-agent-python", "url": "https://github.com/elastic/apm-agent-python/issues/95", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
774110855
helpers: stringify() 'log.level' as a top-level dotted field

per https://github.com/elastic/ecs-logging/pull/33

I need to get this in and publish a new helpers package before completing https://github.com/elastic/ecs-logging-js/pull/23

:green_heart: Build Succeeded

the below badges are clickable and redirect to their specific view in the CI or DOCS

Build stats
Build Cause: Pull request #25 opened
Start Time: 2020-12-24T00:40:54.383+0000
Duration: 6 min 2 sec

Closing this in favour of https://github.com/elastic/ecs-logging-js/pull/27. I don't know why this one pushed to a branch on elastic/ecs-logging-js rather than my fork. I suspect I chose the wrong option when using gh pr create ....
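A sketch of the intended output shape, a top-level dotted "log.level" key rather than a nested object (Python illustration of the JSON shape; the ecs.version value and field set are illustrative, not the helpers package's exact output):

```python
import json

def ecs_stringify(level, msg, time_iso):
    """Emit 'log.level' as a literal top-level dotted key rather than
    a nested {"log": {"level": ...}} object."""
    doc = {
        "@timestamp": time_iso,
        "log.level": level,              # dotted, at the top level
        "message": msg,
        "ecs": {"version": "1.6.0"},     # version value is illustrative
    }
    return json.dumps(doc)

line = ecs_stringify("info", "hi", "2020-12-24T00:40:47.000Z")
print(line)
```

The key point is that the serialized document contains the literal key "log.level" and no nested "log" object.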
gharchive/pull-request
2020-12-24T00:40:47
2025-04-01T04:34:06.637868
{ "authors": [ "apmmachine", "trentm" ], "repo": "elastic/ecs-logging-js", "url": "https://github.com/elastic/ecs-logging-js/pull/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2104143427
feat: instrument @elastic/elasticsearch package

This instrumentation is not in any of the opentelemetry repos. That would be a good example for instrumentations we could provide earlier in our distro before promoting them to upstream.

IIUC this PR adds OTel instrumentation natively in the elasticsearch client, right? If so we should only add instrumentation for Elasticsearch client versions using @elastic/transport 8.6.0 or below, which is @elastic/elasticsearch <= 8.15.

We should add a test/instr-elasticsearch.test.js to sanity check that it works. (I tried locally and it does not.) I got this trace:

------ trace 6ed383 (1 span) ------
       span 7ce6ce "tcp.connect" (7.8ms, SPAN_KIND_INTERNAL)
------ trace ee3359 (2 spans) ------
       span eb3d59 "GET" (3.0ms, STATUS_CODE_ERROR, SPAN_KIND_CLIENT, GET http://metadata.google.internal./computeMetadata/v1/instance) +0ms
  `- span b2ce11 "tcp.connect" (2.7ms, STATUS_CODE_ERROR, SPAN_KIND_INTERNAL)
------ trace ede1e1 (2 spans) ------
       span 52dbcc "GET" (3003.2ms, STATUS_CODE_ERROR, SPAN_KIND_CLIENT, GET http://169.254.169.254/computeMetadata/v1/instance) +0ms
  `- span 83b5f5 "tcp.connect" (3002.5ms, SPAN_KIND_INTERNAL)
------ trace d451bb (1 span) ------
       span f65260 "GET" (21.4ms, SPAN_KIND_CLIENT, GET http://localhost:9200/_search?q=pants -> 200)

@david-luna I could steal this one if you like. I had a start at adding instr-elasticsearch.test.js locally.

I know why it isn't working yet: @elastic/transport@8.7.0 is only part of the requirement for native OTel instrumentation working. It also requires @elastic/elasticsearch@8.15.0, which includes a "meta" object with ES client request details passed down to @elastic/transport. That was added in https://github.com/elastic/elastic-client-generator-js/pull/42, but needs a new @elastic/elasticsearch release. (Note to self: g eon3)

I had a start at adding instr-elasticsearch.test.js locally.
I have a draft PR to add a test here: https://github.com/elastic/elastic-otel-node/pull/264 It is waiting on an ES client 8.15.0 release first.

@trentm I'll be happy to review your PR :)
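The version gate discussed above (both packages must be new enough before native OTel kicks in) can be sketched as follows, assuming the thresholds stated in the thread:

```python
def parse_version(v):
    """Turn '8.15.0' into (8, 15, 0) for tuple comparison."""
    return tuple(int(p) for p in v.split("."))

def has_native_otel(transport_version, client_version):
    """Native OTel support is assumed to need BOTH
    @elastic/transport >= 8.7.0 and @elastic/elasticsearch >= 8.15.0."""
    return (parse_version(transport_version) >= (8, 7, 0)
            and parse_version(client_version) >= (8, 15, 0))
```

This mirrors why the test did not pass with transport 8.7.0 alone: the client side of the requirement (8.15.0, with the "meta" object) had not been released yet.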
gharchive/issue
2024-01-28T12:34:56
2025-04-01T04:34:06.685543
{ "authors": [ "david-luna", "trentm" ], "repo": "elastic/elastic-otel-node", "url": "https://github.com/elastic/elastic-otel-node/issues/28", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
120877101
Upload 2.1.0 artifacts to mavencentral

Would be great if one could upload the 2.1.0 .jar file to Maven Central or Bintray.

Hi @rkrombho, I'll be pushing them (2.0.1 and 2.1.0) today. I have been traveling and I was never able to publish the full upload (always on flaky WiFi). Thanks for the reminder!

Still don't see 2.1.0! With 2.0.0 I'm getting:

WARNING: Module [elasticsearch-groovy-module] - Unable to load extension class [org.elasticsearch.groovy.action.deletebyquery.DeleteByQueryRequestExtensions]

Thanks in advance!

+1

+1

Still unavailable:

$ ./client.groovy
Resolving dependency: org.elasticsearch#elasticsearch-groovy;2.1.0 {default=[default]}
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
General error during conversion: Error grabbing Grapes -- [unresolved dependency: org.elasticsearch#elasticsearch-groovy;2.1.0: not found]

+1

All, I've uploaded the artifacts from 2.0.x to 2.1.x. I'm running into an issue with my tests literally not running via Gradle for ES 2.2+, so that will be delayed, but I finally have gotten around to this one. Sorry it took so long!

@scrotty

WARNING: Module [elasticsearch-groovy-module] - Unable to load extension class [org.elasticsearch.groovy.action.deletebyquery.DeleteByQueryRequestExtensions]

This is because the DeleteByQueryRequest code is no longer included with ES by default. If you want this functionality (or for the warning to go away), then you can include the delete-by-query plugin:

org.elasticsearch.plugin:delete-by-query:${ES_VERSION}

Given that I don't want to encourage its use, I may remove it in a future version, which will remove this warning.
Error is still in the same place with the delete-by-query plugin installed:

$ rpm -q elasticsearch
elasticsearch-2.1.1-1
$ /usr/share/elasticsearch/bin/plugin list
Installed plugins in /usr/share/elasticsearch/plugins:
    - head
    - hq
    - delete-by-query
$ cat node.groovy
#!/usr/bin/env groovy
@Grapes(@Grab(group='org.elasticsearch', module='elasticsearch-groovy', version='2.1.0'))
import org.elasticsearch.groovy.client.GClient
import org.elasticsearch.groovy.node.GNode
import static org.elasticsearch.groovy.node.GNodeBuilder.nodeBuilder
GNode node = nodeBuilder().node();
GClient client = node.client();
node.close();
$ ./node.groovy
фев 03, 2016 8:24:33 AM org.codehaus.groovy.runtime.m12n.MetaInfExtensionModule newModule
WARNING: Module [elasticsearch-groovy-module] - Unable to load extension class [org.elasticsearch.groovy.action.deletebyquery.DeleteByQueryRequestExtensions]
Caught: BUG! exception in phase 'conversion' in source unit '/home/enp/twibo/node.groovy' # Licensed to the Apache Software Foundation (ASF) under one or more
BUG! exception in phase 'conversion' in source unit '/home/enp/twibo/node.groovy' # Licensed to the Apache Software Foundation (ASF) under one or more
Caused by: java.lang.ClassNotFoundException: # Licensed to the Apache Software Foundation (ASF) under one or more

Even adding this plugin to the Groovy script with:

@Grapes(@Grab(group='org.elasticsearch.plugin', module='delete-by-query', version='2.1.1'))
import org.elasticsearch.groovy.action.deletebyquery.DeleteByQueryRequestExtensions

will not work, because there is no such class:

$ unzip -l /home/enp/.groovy/grapes/org.elasticsearch.plugin/delete-by-query/jars/delete-by-query-2.1.1.jar | grep DeleteByQueryRequestExtensions
$

So, the Groovy client for ES is completely broken now. This is not only a warning; it prevents running any Groovy code, as shown above.
@enp The error you're seeing is because you're using Grab and not removing groovy-all:

@Grab(group='org.elasticsearch', module='elasticsearch-groovy', version='1.7.1')
@GrabExclude('org.codehaus.groovy:groovy-all')

As a side note, your use of G* classes suggests that it's out of date as well. To test that ES Groovy 2.1.1 is indeed working, I quickly ran:

package org.elastic.test

import org.elasticsearch.client.Client
import org.elasticsearch.client.transport.TransportClient
import org.elasticsearch.common.settings.Settings
import org.elasticsearch.common.transport.InetSocketTransportAddress
import org.elasticsearch.action.search.SearchResponse

import java.net.InetAddress

class Main {
    static void main(String[] args) {
        TransportClient client = TransportClient.builder().settings(Settings.settingsBuilder {
            cluster.name = "es-2.1.1"
        }).build()

        // identical to the Java client:
        client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300))

        SearchResponse response = client.searchSync {
            indices "_all"
        }

        println "${response.hits.hits.length}"

        client.close()
    }
}

I used this Gradle build file:

apply plugin: 'application'
apply plugin: 'groovy'

mainClassName = 'org.elastic.test.Main'

repositories {
    mavenCentral()
}

dependencies {
    compile "org.elasticsearch:elasticsearch-groovy:2.1.1"
}

Error is still in the same place with the delete-by-query plugin installed:

The Warning does indeed still happen with it. I apparently removed the DeleteByQuery code back around ES 1.5 when it was removed from the codebase. Now that it was brought back, I need to rethink that code though. I may make my own plugin to support it for those few that actually want it; I'll make a separate issue for it. All the warning is saying is that the Extension Module does not apply to anything or it does not exist. In this case it's both, but that has no negative impact beyond the annoying log line.

@enp Also, have a look over the basic documentation for the Groovy client.
It has changed dramatically since I took it over starting in the 1.4.x days.

https://github.com/elastic/elasticsearch-groovy/tree/master/docs

I definitely need to add more to these and update them. With Elastic{ON} around the corner, I should hopefully have some time to do that.

Thanks for the Gradle example and especially for the excluding hint. Is this a Grab-specific issue that can be resolved only in Grab code? Where can I see the groovydoc for the ES client?

@enp

Is this a Grab-specific issue that can be resolved only in Grab code?

Yeah, the issue that you're seeing is specific to Grab (see issue #29 where someone pointed it out to me and we worked toward working around it, including a JIRA issue for Groovy to stop it from trying to add Groovy to its own classpath).

Where can I see the groovydoc for the ES client?

I don't host it online anywhere, but that's a good idea because I do publish them with every release. I'll make an issue for that (#34).
gharchive/issue
2015-12-07T21:55:45
2025-04-01T04:34:06.704214
{ "authors": [ "AlexKovynev", "enp", "joergrech", "johngamarra", "pickypg", "rkrombho", "scrotty" ], "repo": "elastic/elasticsearch-groovy", "url": "https://github.com/elastic/elasticsearch-groovy/issues/30", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
985034818
ILM remove_lifecycle seems to be missing?

Hi, I'm trying to use remove_lifecycle but I'm getting:

Can't locate object method "remove_lifecycle" via package "Search::Elasticsearch::Client::7_0::Direct::ILM"

@Euromancer I think you are referring to the ilm.remove_policy API. We need to update the documentation for the Ilm namespace here.

Finally, I released 8.0.0, updated and tested using Elasticsearch 8.5.3. I also updated the docs with an Elastic Cloud connection example.
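For reference, the REST call behind ilm.remove_policy can be sketched as follows (a hypothetical helper for illustration; the underlying Elasticsearch endpoint is POST /<index>/_ilm/remove):

```python
def remove_policy_request(index):
    """Build the method and path for removing the lifecycle policy
    assigned to an index."""
    return ("POST", f"/{index}/_ilm/remove")

method, path = remove_policy_request("my-index")
# With the Perl client, the equivalent call would be made through the
# ilm namespace (remove_policy), not a remove_lifecycle method.
```

This is why the missing method error appears: the namespace exposes remove_policy, matching the REST API, rather than remove_lifecycle.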
gharchive/issue
2021-09-01T11:25:59
2025-04-01T04:34:06.715751
{ "authors": [ "Euromancer", "ezimuel" ], "repo": "elastic/elasticsearch-perl", "url": "https://github.com/elastic/elasticsearch-perl/issues/210", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
200026483
Disable Elasticsearch as a whole or indexing temporarily

I have integrated Elasticsearch in my RoR project. I want to disable Elasticsearch if a specific flag value is 1 and enable it if the flag value is 0. How can I do that? Is there any specific method to disable indexing in Elasticsearch?

Seems the same as #643

@MuneebSarfraz94 There's an example in the "Custom Callbacks":

after_commit on: [:create] do
  __elasticsearch__.index_document if self.published?
end

Is this what you're after?
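The flag-gated callback can be sketched in Python as a stand-in for the Ruby callback above (the class, flag name, index name and stub client are all hypothetical):

```python
class StubES:
    """Records index calls so the sketch is runnable without a cluster."""
    def __init__(self):
        self.calls = []

    def index(self, **kwargs):
        self.calls.append(kwargs)

class Article:
    indexing_disabled = 0   # hypothetical flag: 1 disables indexing

    def __init__(self, es):
        self.es = es

    def after_commit(self, doc_id, body):
        # Mirror of the Ruby callback: only touch Elasticsearch
        # when the flag allows it.
        if self.indexing_disabled == 1:
            return
        self.es.index(index="articles", id=doc_id, body=body)
```

The idea is the same as the Ruby `if self.published?` guard: the indexing call simply does not run when the condition says indexing is off.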
gharchive/issue
2017-01-11T08:01:53
2025-04-01T04:34:06.717539
{ "authors": [ "Meekohi", "MuneebSarfraz94", "karmi" ], "repo": "elastic/elasticsearch-rails", "url": "https://github.com/elastic/elasticsearch-rails/issues/656", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
956131973
Validate 'ml.post_data' API

Receiving an error regarding @timestamp not working with UserDefinedValue? Perhaps some JavaScript weirdness happening here? cc @delvedor

The Request/Response weren't named to exactly match the API, although I think the API would have been better named ml.post_job_data... not sure what we should do here?

Added new types to support the multiple ways of inputting data to ml.post_data. The name comes from the rest-api-spec, so there isn't much we can do other than ping someone from ML and offer feedback.

Yeah it's fine as-is, was just highlighting the change of the namespace/request object to match the post_data name.
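A sketch of the request shape behind ml.post_data, newline-delimited JSON posted to the anomaly detector's _data endpoint (the helper name is hypothetical; only the path and body format reflect the API):

```python
import json

def build_post_data_request(job_id, docs):
    """Build the path and NDJSON body for
    POST /_ml/anomaly_detectors/<job_id>/_data."""
    path = f"/_ml/anomaly_detectors/{job_id}/_data"
    body = "\n".join(json.dumps(d) for d in docs) + "\n"
    return path, body
```

The "multiple ways of inputting data" mentioned above comes from the fact that each input document is an arbitrary JSON object, so the body type has to stay permissive.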
gharchive/pull-request
2021-07-29T19:12:45
2025-04-01T04:34:06.719667
{ "authors": [ "sethmlarson" ], "repo": "elastic/elasticsearch-specification", "url": "https://github.com/elastic/elasticsearch-specification/pull/515", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
201461843
Update dependencies as minimatch@2.0.10 is vulnerable to Regular Expression Denial of Service (aka ReDoS)

As stated in this advisory (https://nodesecurity.io/advisories/118), minimatch <= 3.0.1 is vulnerable to ReDoS, and it seems like that version is one of the dependencies generator-kibana relies on at the moment. You can confirm that by running the following command inside the project containing the plugin:

Michaels-MacBook-Pro:plugin michaelhidalgo$ ls
README.md  node_modules  public
index.js   package.json  server

npm ls minimatch
demo@0.0.0 /Users/michaelhidalgo/Desktop/elastic/plugin
├─┬ babel-eslint@4.1.8
│ └─┬ babel-core@5.8.38
│   └── minimatch@2.0.10
└─┬ eslint@1.10.3
  └── minimatch@3.0.3

So one of the dependencies of babel depends on the version that might be vulnerable.

Thanks for letting us know. The problem is our use of an outdated version of babel, which is a larger issue unfortunately. I think that because that vulnerable version of minimatch is limited to babel, the code at runtime is still safe. My guess is that babel uses minimatch for its process, and it should not affect the code it outputs, but I haven't dug into it, so I could be wrong.

Yes, you are right. If you drill down the search using npm, an outdated version of babel is the one that has it as a dependency.

Michaels-MacBook-Pro:plugin michaelhidalgo$ npm ls minimatch@2.0.10
sekurity@0.0.0 /Users/michaelhidalgo/Desktop/elastic/plugin
└─┬ babel-eslint@4.1.8
  └─┬ babel-core@5.8.38
    └── minimatch@2.0.10

I believe the real risk is if someone can trigger the Denial of Service condition from up the chain, that is, by using babel. Maybe it is worth doing a PoC to determine if it is exploitable.

Btw, I found out that the same issue is happening in kibana (master branch), but there the problem is with tough-cookie@2.2.2 (here is the advisory: https://nodesecurity.io/advisories/130). I should open an issue on kibana though.

Now that Kibana is using a newer babel, this package can in fact be updated.
The template is already updated: https://github.com/elastic/template-kibana-plugin/blob/master/template/package.json Cool, I will give it a try and keep you posted.
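The affected version range from the advisory can be checked with a small sketch (illustrative version comparison; the range comes from the advisory cited above):

```python
def parse(v):
    """Turn '2.0.10' into (2, 0, 10) for tuple comparison."""
    return tuple(int(p) for p in v.split("."))

def minimatch_vulnerable(version):
    # Advisory 118: minimatch <= 3.0.1 is affected by ReDoS.
    return parse(version) <= (3, 0, 1)
```

This matches the npm ls output in the report: the 2.0.10 copy pulled in via babel-core is in the affected range, while eslint's 3.0.3 copy is not.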
gharchive/issue
2017-01-18T02:21:32
2025-04-01T04:34:07.036274
{ "authors": [ "michaelhidalgo", "w33ble" ], "repo": "elastic/generator-kibana-plugin", "url": "https://github.com/elastic/generator-kibana-plugin/issues/43", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
769416922
searchBody range was separated

Context

In order to help us troubleshoot issues with this project, you must provide the following details:

ElasticDump version: 6.62.1
Elasticsearch version: 7.40.0
Node.js version: v10.23.0

Full command you are having issue with (with inputs and outputs):

The following command is executed via python3 os.sys():

elasticdump --input=http://ip:port/index-2020-12-12 --output=./output/2020-12-12/index-2020-12-12_from03_to04_.json --fsCompress=gzip --fileSize=3gb --limit=300 --concurrency=1 --noRefresh --searchBody{"query":{"range":{"@timestamp":{"from":"2020-12-12T03:00:00.000Z","to":"2020-12-12T04:00:00.000Z"}}},"stored_fields":["*"],"_source":true}

ps aux shows the following result:

root 19954 4.8 0.3 814436 241636 ? Sl Dec15 137:28 node /opt/elk/elasticdump/node-v10.23.0-linux-x64/bin/elasticdump --input=http://ip:port/index-2020-12-12 --output=./index-2020-12-12_from3_to4_.json --fsCompress=gzip --fileSize=10gb --limit=300 --concurrency=1 --noRefresh --searchBodyquery:{range:{@timestamp:from:2020-12-12T03:00:00.000Z}} --searchBodyquery:{range:{@timestamp:to:2020-12-12T04:00:00.000Z}} --searchBodystored_fields:[*] --searchBody_source:true

--searchBody{"query":{"range":{"@timestamp":{"from":"2020-12-12T03:00:00.000Z","to":"2020-12-12T04:00:00.000Z"}}} was split into --searchBodyquery:{range:{@timestamp:from:2020-12-12T03:00:00.000Z}} --searchBodyquery:{range:{@timestamp:to:2020-12-12T04:00:00.000Z}}

From the exported results, some data is not within the range "range":{"@timestamp":{"from":"2020-12-12T03:00:00.000Z","to":"2020-12-12T04:00:00.000Z"}}.

I'm not sure what the error or problem is. Issues without replication steps will be closed.

What do you mean by separated?
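The splitting visible in the ps aux output is consistent with shell word-splitting of the unquoted --searchBody value when the command line is handed to a shell via os.system(); a Python sketch of safe quoting (illustrative, with an abbreviated elasticdump command):

```python
import shlex

search_body = ('{"query":{"range":{"@timestamp":{"from":"2020-12-12T03:00:00.000Z",'
               '"to":"2020-12-12T04:00:00.000Z"}}},"stored_fields":["*"],"_source":true}')

# os.system() hands the line to a shell, which splits the unquoted JSON
# on whitespace. Quote the value explicitly before building the command:
cmd = "elasticdump --searchBody=" + shlex.quote(search_body)

# Or avoid the shell entirely and pass an argument list (e.g. to
# subprocess.run), so no word-splitting happens at all:
argv = ["elasticdump", "--searchBody=" + search_body]
```

Either form keeps the JSON intact as a single argument, so the date range reaches elasticdump unbroken.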
gharchive/issue
2020-12-17T01:59:25
2025-04-01T04:34:08.840445
{ "authors": [ "ferronrsmith", "zhj12388" ], "repo": "elasticsearch-dump/elasticsearch-dump", "url": "https://github.com/elasticsearch-dump/elasticsearch-dump/issues/761", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1335854503
apps sc: update welcoming dashboard

What this PR does / why we need it:
Removed the "support" part of the welcoming dashboards from the opensource project and added it as an option in the cluster configs. Wasn't able to find a solution that didn't involve adding the config to customer envs, but I made a simple optional script to ease the "migration".

Which issue this PR fixes: fixes #1068

Special notes for reviewer:
Added a migration script, but it is not required.

Checklist:
[X] Added relevant notes to WIP-CHANGELOG.md
[X] Proper commit message prefix on all commits
[ ] Updated the public facing documentation

Is this changeset backwards compatible for existing clusters? Applying:
[X] is completely transparent, will not impact the workload in any way.
[ ] requires running a migration script.
[ ] will create noticeable cluster degradation. E.g. logs or metrics are not being collected or Kubernetes API server will not be responding while upgrading.
[ ] requires draining and/or replacing nodes.
[ ] will change any APIs. E.g. removes or changes any CK8S config options or Kubernetes APIs.
[ ] will break the cluster. I.e. full cluster migration is required.

Chart checklist (pick exactly one):
[X] I upgraded no Chart.
[ ] I upgraded a Chart and determined that no migration steps are needed.
[ ] I upgraded a Chart and added migration steps.

Made some changes @robinAwallace @viktor-f, the text content is no longer customizable from config but I don't think it has to be. There is no way to reach files outside of the chart; the only workaround is using a symlink. This is not a good solution because you cannot use environment variables in symlinks, i.e. $CK8S_CONFIG_PATH, so the symlink would need to point to something like ../../../../ops/support.md and that would assume that everyone has the ops and apps repos in the same folder. It could also introduce complications if we were to re-structure this repo. I will go back to the "one-line string" in config solution.
gharchive/pull-request
2022-08-11T11:54:58
2025-04-01T04:34:08.851517
{ "authors": [ "davidumea" ], "repo": "elastisys/compliantkubernetes-apps", "url": "https://github.com/elastisys/compliantkubernetes-apps/pull/1109", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
356969381
(v6) Webpack Plugin: Getting it to compile

[x] I have read the contribution documentation for this project.
[x] I agree to follow the code of conduct that this project follows, as appropriate.
[x] I have searched the issue tracker for an issue that matches the one I want to file, without success.

This isn't so much an issue as maybe a documentation change. I was attempting to set up my project to use the new webpack plugin. I created a fresh project using the electron-forge beta, installed @electron-forge/plugin-webpack, set up 'main' and 'renderer' webpack configs, and made the necessary adjustments to my package.json and the loadUrl calls according to the guidelines. I extracted my 'forge' config into an external config file:

const path = require('path');
const { WebpackPlugin } = require('@electron-forge/plugin-webpack');

const forgeConfig = {
  "packagerConfig": {},
  "makers": [
    {
      "name": "@electron-forge/maker-squirrel",
      "config": { "name": "testforge" }
    },
    {
      "name": "@electron-forge/maker-zip",
      "platforms": [ "darwin" ]
    },
    {
      "name": "@electron-forge/maker-deb",
      "config": {}
    },
    {
      "name": "@electron-forge/maker-rpm",
      "config": {}
    }
  ],
  plugins: [
    new WebpackPlugin({
      mainConfig: path.resolve(__dirname, 'webpack.main.config.js'),
      renderer: {
        config: path.resolve(__dirname, 'webpack.renderer.config.js'),
        entryPoints: [{
          html: path.resolve(__dirname, 'src/index.html'),
          js: path.resolve(__dirname, 'src/renderer.js'),
          name: 'main_window',
        }],
      },
    }),
  ],
};

module.exports = forgeConfig;

At first I had written this file in pure ES6 (import, export, etc.), but it didn't like that, so I dropped back. My next issue was that I kept getting an error that WebpackPlugin is not a constructor. I got around this by changing line 2 of the file to const WebpackPlugin = require('@electron-forge/plugin-webpack').default;. This resolved my constructor error, but now it won't find the compiled 'main' process.
> testforge@1.0.0 start /Users/me/Projects/testforge > electron-forge start ✔ Checking your system ✔ Locating Application ✔ Preparing native dependencies ✔ Compiling Main Process Code ✔ Launch Dev Servers ✔ Compiling Preload Scripts ✔ Launching Application Webpack Output Available: http://localhost:9000 App threw an error during load Error: Cannot find module '/Users/me/Projects/testforge/src/index.js' at webpackMissingModule (/Users/me/Projects/testforge/.webpack/main/index.js:96:45) at Object.<anonymous> (/Users/me/Projects/testforge/.webpack/main/index.js:96:173) at __webpack_require__ (/Users/me/Projects/testforge/.webpack/main/index.js:21:30) at /Users/me/Projects/testforge/.webpack/main/index.js:85:18 at Object.<anonymous> (/Users/me/Projects/testforge/.webpack/main/index.js:88:10) at Object.<anonymous> (/Users/me/Projects/testforge/.webpack/main/index.js:102:3) at Module._compile (module.js:642:30) at Object.Module._extensions..js (module.js:653:10) at Module.load (module.js:561:32) at tryModuleLoad (module.js:504:12) webpack built e0a93e053d5442f91c8f in 498ms The only real change I had made was to add a very simple renderer file, but my webpack module rule was using the babel-loader with the @babel/preset-env. Thinking they might not be there, I added those via npm as well, along with @babel/core. This took care of my issue. Once I had all the pieces, it all came together. Below are the bits that are important: package.json { ... "main": "./.webpack/main", "config": { "forge": "./forge.config.js" }, "devDependencies": { ... "@babel/core": "^7.0.0", "@babel/preset-env": "^7.0.0", ... "babel-loader": "^8.0.2", ... } ... 
} forge.config.js const path = require('path'); const WebpackPlugin = require('@electron-forge/plugin-webpack').default; const pluginConfig = { "packagerConfig": {}, "makers": [ { "name": "@electron-forge/maker-squirrel", "config": { "name": "karaokeedge" } }, { "name": "@electron-forge/maker-zip", "platforms": [ "darwin" ] }, { "name": "@electron-forge/maker-deb", "config": {} }, { "name": "@electron-forge/maker-rpm", "config": {} } ], plugins: [ new WebpackPlugin({ mainConfig: path.resolve(__dirname, 'webpack.main.config.js'), renderer: { config: path.resolve(__dirname, 'webpack.renderer.config.js'), entryPoints: [{ html: path.resolve(__dirname, 'src/index.html'), js: path.resolve(__dirname, 'src/renderer.js'), name: 'main_window', }], }, }), ], }; module.exports = pluginConfig; webpack.main.config.js const path = require('path'); const sharedModule = { rules: [ { test: /\.js$/, use: { loader: 'babel-loader', options: { presets: ['@babel/preset-env'], }, }, }, ], }; const mainConfig = { entry: [ path.resolve(__dirname, 'src/index.js'), ], module: sharedModule, }; module.exports = mainConfig; webpack.renderer.config.js const path = require('path'); const sharedModule = { rules: [ { test: /\.js$/, use: { loader: 'babel-loader', options: { presets: ['@babel/preset-env'], }, }, }, ], }; const rendererConfig = { module: sharedModule, resolve: { extensions: ['.js', '.jsx', '.json'], }, }; module.exports = rendererConfig; I wasn't able to use pure es6 on my webpack configs either, nor could I use the '.babel.js' trick on those files to get them to use pure es6, but that's a small issue in the grand scheme of things. I'm now, finally, on my way. Thanks to all those who work on this project. Does this go to forge.config.js or to package.json (and if yes does it go under config/forge or elsewhere. Official doc is not very clear in this. 
{ plugins: [ ['@electron-forge/plugin-webpack', { mainConfig: './webpack.main.config.js', renderer: { config: './webpack.renderer.config.js', entryPoints: [{ html: './src/renderer/index.html', js: './src/renderer/index.js', name: 'main_window' }] } }] ] }
gharchive/issue
2018-09-04T20:45:01
2025-04-01T04:34:08.961896
{ "authors": [ "cutterbl", "gorn" ], "repo": "electron-userland/electron-forge", "url": "https://github.com/electron-userland/electron-forge/issues/562", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1124883504
fix(publisher-electron-release-server): set knownLength option for asset upload closes: #2087 [x] I have read the contribution documentation for this project. [x] I agree to follow the code of conduct that this project follows, as appropriate. [x] The changes are appropriately documented (if applicable). [x] The changes have sufficient test coverage (if applicable). [x] The testsuite passes successfully on my local machine (if applicable). Summarize your changes: Resubmitting #2088 after rebasing. Thanks! I'm very glad to finally see this merged :)
gharchive/pull-request
2022-02-05T09:35:38
2025-04-01T04:34:08.965339
{ "authors": [ "monsterkrampe" ], "repo": "electron-userland/electron-forge", "url": "https://github.com/electron-userland/electron-forge/pull/2706", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
192860892
Unable to build an application for Raspberry Pi 3 Using Electron Packager I am unable to build an application for the armv7a architecture, i.e., Raspberry Pi 3. Does it support this architecture? As of Electron Packager version 8.0.0, the armv7l architecture is supported. However, due to the way that ARM architectures identify themselves, you need to explicitly specify the arch parameter yourself. Yes, I have seen that the armv7l architecture is supported. But Raspberry Pi 3 raises an "architecture not found" error when I specify --arch=armv7a. And if I compile it using armv7l I am unable to run the application created. What is armv7a? It's the processor present in the Raspberry Pi 3 Model B which we are using. Is there any difference between armv7a and armv7l? I don't know exactly. It's confusing because my Raspberry Pi 3 is running a stable Raspbian image, and when I run uname -a on it, it reports armv7l. However, according to Wikipedia it runs https://en.wikipedia.org/wiki/ARM_Cortex-A53 which is armv8, which is not currently supported by Electron. The architectures that Electron Packager supports are a subset of the architectures that Electron supports. All of the "Electron does not build for some version of ARM" issues seem to be getting merged into https://github.com/electron/electron/issues/259. On my Raspberry Pi 3 running the new Raspbian Pixel desktop, when I run uname -a it shows Linux. Sorry, uname -a shows everything. I specifically meant uname -m.
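The back-and-forth above reduces to mapping what uname -m reports onto the --arch values Electron Packager accepts. Here is a sketch of that lookup in Python; the supported set is taken from the thread, and the uname spellings are common values, not an exhaustive list:

```python
# Supported values per the thread (as of Electron Packager 8.0.0);
# this mapping is illustrative, not exhaustive.
SUPPORTED_ARCHES = {"ia32", "x64", "armv7l"}

UNAME_TO_ARCH = {
    "armv7l": "armv7l",  # what a Pi 3 on 32-bit Raspbian reports via `uname -m`
    "x86_64": "x64",
    "i686": "ia32",
}

def packager_arch(uname_m: str) -> str:
    """Map `uname -m` output to an Electron Packager --arch value."""
    arch = UNAME_TO_ARCH.get(uname_m)
    if arch is None or arch not in SUPPORTED_ARCHES:
        raise ValueError(f"unsupported architecture: {uname_m!r}")
    return arch

print(packager_arch("armv7l"))
```

This mirrors the thread's outcome: armv7l maps cleanly, while armv7a is rejected rather than silently guessed.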
gharchive/issue
2016-12-01T14:27:14
2025-04-01T04:34:08.970581
{ "authors": [ "malept", "rajuece" ], "repo": "electron-userland/electron-packager", "url": "https://github.com/electron-userland/electron-packager/issues/535", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
137135449
Split up arguments section into OS-specific subsections Fixes #237. Blocked by #278.
gharchive/pull-request
2016-02-29T02:55:36
2025-04-01T04:34:08.971532
{ "authors": [ "malept" ], "repo": "electron-userland/electron-packager", "url": "https://github.com/electron-userland/electron-packager/pull/274", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1338158426
[Help] After cloning and packaging an exe on macOS, copying it to Windows and installing it opens a white screen with the error "Not allowed to load local resource: file:///.../resources/app.asar/dist/index.html" The command run is vue-tsc --noEmit && electron-builder -w. Does it work on a Mac? I don't have a Windows machine at hand. Not allowed to load local resource: file:///.../resources/app.asar/dist/index.html Check whether the file at that file:/// path actually exists. You can set asar to false in electron-builder.json5 first to make debugging easier. It's probably that my static assets are too large, because I'm implementing an offline installer that contains videos and courseware; I put 5 GB of videos in public. If I only include a few hundred MB of assets there is no problem. Do you have any suggestions for this kind of requirement? I would really appreciate the help. The static assets are too large; that's not good software design. You're approaching the size of a AAA title. Assets should go on a server. If you really need them locally, you can build an asset-management feature: the default install ships without static assets, and the feature lets users download what they need on demand; once downloaded, the app reads the local copy. Thanks for the answer. For a requirement like "load local files when offline, cache online files when there is a network", would you recommend using this repo or https://github.com/electron-vite/electron-vite-boilerplate ? Either electron-vite-boilerplate or this one works; the former is more low-level, if you want to study how it works.
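The on-demand strategy suggested above (ship without the heavy assets, download each file on first use, read the local copy afterwards) can be sketched like this; illustrative Python, where fetch is a hypothetical stand-in for the real download:

```python
import tempfile
from pathlib import Path
from typing import Callable

def resolve_asset(name: str, cache_dir: Path,
                  fetch: Callable[[str], bytes]) -> Path:
    """Return a local path for `name`, downloading it on first use.

    `fetch` is a hypothetical stand-in for the real network call;
    once a file has been downloaded, later calls read the local copy.
    """
    local = cache_dir / name
    if not local.exists():
        local.parent.mkdir(parents=True, exist_ok=True)
        local.write_bytes(fetch(name))
    return local

cache = Path(tempfile.mkdtemp())
calls = []

def fake_fetch(name: str) -> bytes:
    calls.append(name)  # record how often we hit the "network"
    return b"video-bytes"

first = resolve_asset("videos/intro.mp4", cache, fake_fetch)
second = resolve_asset("videos/intro.mp4", cache, fake_fetch)  # cache hit
print(first == second, calls)
```

The same shape works whether fetch streams from a CDN or a project server; only the cache directory location is app-specific.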
gharchive/issue
2022-08-14T07:22:38
2025-04-01T04:34:08.975357
{ "authors": [ "CzyYYDS", "caoxiemeihao" ], "repo": "electron-vite/electron-vite-vue", "url": "https://github.com/electron-vite/electron-vite-vue/issues/241", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1385972757
aliases in main? I'm attempting to port my electron-webpack project over to vite using vite-plugin-electron. https://github.com/imbateam-gg/titan-reactor I'd like to retain the structure src/renderer, src/common, src/main, with both renderer and main having aliases to common for shared utilities and types. There is a resolve option in main for this plugin, but it seems to overwrite the previous configuration. export default { plugins: [ electron({ main: { entry: 'electron/main/index.ts', vite: { resolve: { // Electron-Main alias alias: {}, }, }, }, }), ], resolve: { // Electron-Renderer alias alias: {}, }, }
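What the report expects is that the per-process resolve override gets merged onto the shared configuration rather than replacing it. The merge itself is generic; here is a sketch in Python, illustrative only and not the plugin's actual code:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` onto `base` without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Shared config (e.g. an alias to src/common) plus a main-process override.
shared = {"resolve": {"alias": {"common": "src/common"}}}
main_only = {"resolve": {"alias": {"main": "src/main"}}}

merged = deep_merge(shared, main_only)
print(merged["resolve"]["alias"])
```

Applied to the configs in the report, both the shared common alias and the per-process ones would survive the merge.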
gharchive/issue
2022-09-26T12:12:27
2025-04-01T04:34:08.977564
{ "authors": [ "alexpineda", "caoxiemeihao" ], "repo": "electron-vite/vite-plugin-electron", "url": "https://github.com/electron-vite/vite-plugin-electron/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
256797191
add app.json for heroku Not actually needed for this branch, but Heroku requires it for Review Apps to work.
gharchive/pull-request
2017-09-11T18:21:54
2025-04-01T04:34:08.979707
{ "authors": [ "zeke" ], "repo": "electron/electron.atom.io", "url": "https://github.com/electron/electron.atom.io/pull/770", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2213127934
Unexpected token E in JSON at position 0 We are facing the below error on using @electron/notarize v2.3.0. Not sure what the actual issue is here. We are providing all the required params to the notarize function: Having the same issue This is the same issue as #177. Same here Looks like it is trying to parse JSON that is not actually JSON. notarytool returns a string instead of JSON when an error occurs. I modified the node_modules/@electron/notarize/lib/notarytool.js file to make it throw an error with the JSON string, so that I can see the actual error message from Apple. const result = yield (0, spawn_1.spawn)('xcrun', notarizeArgs); throw new Error(result.output.trim()); const parsed = JSON.parse(result.output.trim()); For me the error is "Error: HTTP status code: 403. Invalid or inaccessible developer team ID for the provided Apple ID. Ensure the Team ID is correct and that you are a member of that team." Turned out that you need to sign a new agreement on the Apple Developer site. Strangely, it still fails for me when using the plugin as is. This change in node_modules/@electron/notarize/lib/notarytool.js magically fixes it for me: const result = yield (0, spawn_1.spawn)('xcrun', notarizeArgs); console.log('Attempting to parse that crap:', result.output.trim()); let parsed; try { parsed = JSON.parse(result.output.trim()); } catch (e) { console.error("Failed to parse JSON:", e); throw new Error(result.output.trim()); } Had the same issue, currently forking the repo to apply the above fix. Roll back to version 2.2.0, where JSON is parsed correctly. "@electron/notarize": "2.2.0" Also be sure to add resolutions to your package.json so your other electron packages use the correct version. "resolutions": { "@electron/notarize": "2.2.0" } OK, with the rollback to 2.2.0 the error has changed. Now it says "Error: HTTP status code: 401. Not authenticated. Make sure all authentication arguments are correct." but when I run notarize locally everything is fine.
The problem occurs only on GitHub Actions. I use the three environment variables: APPLE_API_KEY: ~/private_keys/AuthKey_${{ secrets.api_key_id }}.p8 APPLE_API_KEY_ID: ${{ secrets.api_key_id }} APPLE_API_ISSUER: ${{ secrets.api_key_issuer_id }} Turned out that you need to sign a new agreement on the Apple Developer site. Hi @andelf, can you provide more feedback as to what was missing? I am encountering the same problem. May I ask if there is any way to solve it? I have encountered this issue in my attempts. Leaving this here as it may help someone else. The error for me happened when the notarize function tried to parse the response from the initial request. In my case, it ended up being an issue with authentication as my Apple ID was wrong, so it returned an error message ( Error: HTTP status code: 401. Invalid credentials... hence the start with E) I guess electron-notarize could be more helpful in providing a better error as suggested in #191 Turned out that you need to sign a new agreement on the Apple Developer site. Thank you I was also able to fix this by logging into my Apple Developer account and accepting the updated agreement. This should be fixed by #191 If someone like me ends up here today looking for solutions, make sure to npm update @electron/notarize and, as stated before, accept the new terms on the Apple side of things. For anyone else looking for a quick link, go to https://appstoreconnect.apple.com/business and accept any agreements you have there 👍
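The underlying pattern in the patches shared in this thread (don't let a failed JSON parse hide the tool's real error text) is language-agnostic; here is a sketch in Python using only the standard library, with a hypothetical function name:

```python
import json

def parse_tool_output(raw: str) -> dict:
    """Parse a CLI tool's stdout as JSON; if it isn't JSON (e.g. an
    HTTP 401/403 error message), surface the raw text instead of a
    cryptic parse error."""
    text = raw.strip()
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"tool did not return JSON: {text}") from exc

print(parse_tool_output('{"status": "Accepted"}'))
```

With this shape, an authentication failure surfaces the HTTP status message instead of "Unexpected token E in JSON at position 0".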
gharchive/issue
2024-03-28T12:32:50
2025-04-01T04:34:09.072621
{ "authors": [ "Chamarsh", "EternallLight", "Igloczek", "akshitcompro", "andelf", "austinlangdon", "cpvalente", "remloyal", "rotu", "stevebauman", "therockerline", "warrenday", "yingchen-liu" ], "repo": "electron/notarize", "url": "https://github.com/electron/notarize/issues/186", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1221897876
Use simple queries when there are no parameters The simple query mechanism is simpler and more efficient than the extended query protocol we currently use for all queries. Using it for all queries with no parameters would be inefficient, though, since the simple protocol always returns data in text format. We now use the simple protocol for our own queries where we know there's no output, and otherwise always use the extended protocol.
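The resulting dispatch rule (simple protocol only for internal, parameter-less queries whose output is discarded; extended protocol for everything else) can be sketched as follows; this is illustrative Python, not the library's actual internals:

```python
def choose_protocol(params: tuple, output_needed: bool) -> str:
    """Pick the wire protocol for a query.

    The simple protocol avoids the parse/bind/execute round trips of
    the extended protocol, but it cannot carry parameters and returns
    result data only in text format, so it is only a win when there
    are no parameters and we do not consume the output.
    """
    if not params and not output_needed:
        return "simple"
    return "extended"

print(choose_protocol((), output_needed=False))
print(choose_protocol(("x",), output_needed=True))
```

Any query with parameters, or whose result rows we consume, keeps the extended protocol's binary results and parameter binding.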
gharchive/issue
2022-04-30T18:55:41
2025-04-01T04:34:09.085408
{ "authors": [ "elektito" ], "repo": "elektito/pgtrio", "url": "https://github.com/elektito/pgtrio/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
55755923
Adds chart in "Get Involved" Fixes #206. Uses Chart.js and https://github.com/Regaddi/Chart.StackedBar.js For the "fixed" label, to provide good contrast we should be closer to something real bright like #95A5AC Can we remove the large h1? I feel like it takes up a lot of space and doesn't add a terrible amount of value.
gharchive/pull-request
2015-01-28T13:28:48
2025-04-01T04:34:09.145467
{ "authors": [ "danrabbit", "emersion" ], "repo": "elementary/mvp", "url": "https://github.com/elementary/mvp/pull/248", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
810478735
🛑 Ombi is down In 8754f2d, Ombi (https://ombi.lepallec.tv) was down: HTTP code: 0 Response time: 0 ms Resolved: Ombi is back up in 96e135a.
gharchive/issue
2021-02-17T19:40:11
2025-04-01T04:34:09.176923
{ "authors": [ "elepallec" ], "repo": "elepallec/upptime", "url": "https://github.com/elepallec/upptime/issues/79", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1089043823
AttributeError: module 'tensorflow' has no attribute 'io' I use the latest code of "highway_env", but it raises the error "AttributeError: module 'tensorflow' has no attribute 'io'". How can I solve it? My version of tensorflow is 1.2.0. Hi @liuqi1998, I don't think the issue is related to this project, since tensorflow is not used anywhere in this codebase. Oh, I understand, but I cannot get the effect shown in the example picture. What's wrong? It may be caused by the use of tensorboardx in the rl-agents repository. You can try to reinstall it manually. OK. I will try it, and I also have a question about this: is 'tensorboard' equal to 'tensorboardx'? No, tensorboardx is a port of tensorboard that is compatible with pytorch, while tensorboard only supports tensorflow. OK, I understand. Thank you for answering my question. Is there any sample code combined with this "highway_env" environment? I need to use this environment for my research. In my simulation experiment, automatic vehicles rarely change lanes, and there are 947 collisions in 1000 runs of the experiment. Is it related to my reward setting? Is there any sample code combined with this "highway_env" environment? I need to use this environment for my research. You can find sample scripts and notebooks in the scripts/ directory. In my simulation experiment, automatic vehicles rarely change lanes, and there are 947 collisions in 1000 runs of the experiment. Is it related to my reward setting? What environment are you using? Are the collisions caused by the agent or other vehicles? I use the "highway-fast-v0" environment. The collisions are caused by the agent, and most of them are caused by the simulated green car (the agent) not changing lanes. Ah I see, this is related to your learning algorithm then; this repository only specifies the environment.
However, you should be able to reach pretty good performance for the highway-fast variant, certainly better than 947/1000 collisions. Is that statistic reported after the agent has trained? It is expected that you see a lot of collisions during training, but it should eventually converge to something better. You may also want to try different types of observations. The last collision rate of 947 / 1000 is the collision rate after my training. After I adjusted the collision reward, high-speed reward and lane change reward, the collision rate decreased to 42 / 1000.
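The reward re-weighting described above, penalizing collisions more heavily relative to the speed and lane-change terms, can be sketched as a weighted sum. The coefficients below are illustrative placeholders, not the values used in the thread:

```python
def shaped_reward(collided: bool, speed: float, changed_lane: bool,
                  w_collision: float = -10.0, w_speed: float = 0.4,
                  w_lane_change: float = -0.05,
                  max_speed: float = 30.0) -> float:
    """Weighted sum of the three reward terms discussed in the thread."""
    reward = w_speed * min(speed / max_speed, 1.0)  # reward driving fast
    if changed_lane:
        reward += w_lane_change                     # small lane-change cost
    if collided:
        reward += w_collision                       # dominant crash penalty
    return reward

print(shaped_reward(collided=False, speed=25.0, changed_lane=False))
```

Sweeping the weights (especially the collision penalty) is the usual first lever when the trained policy collides too often.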
gharchive/issue
2021-12-27T07:54:18
2025-04-01T04:34:09.186965
{ "authors": [ "eleurent", "liuqi1998" ], "repo": "eleurent/highway-env", "url": "https://github.com/eleurent/highway-env/issues/251", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1487987205
🛑 UNER - DDJJ is down In 95fc93c, UNER - DDJJ (https://autogestion.uner.edu.ar/) was down: HTTP code: 502 Response time: 833 ms Resolved: UNER - DDJJ is back up in 2b891ba.
gharchive/issue
2022-12-10T05:22:18
2025-04-01T04:34:09.194356
{ "authors": [ "elfoche" ], "repo": "elfoche/monitoreo", "url": "https://github.com/elfoche/monitoreo/issues/1232", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1702595589
🛑 MCU - GTH is down In cf3cef1, MCU - GTH (http://produccion.cdeluruguay.gob.ar/GTH/forms/login.jsp) was down: HTTP code: 404 Response time: 857 ms Resolved: MCU - GTH is back up in f5bdc80.
gharchive/issue
2023-05-09T18:57:09
2025-04-01T04:34:09.197059
{ "authors": [ "elfoche" ], "repo": "elfoche/monitoreo", "url": "https://github.com/elfoche/monitoreo/issues/2000", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1243442719
🛑 MCU - GRH is down In fb33c92, MCU - GRH (http://produccion.cdeluruguay.gob.ar/GRH/forms/login.jsp) was down: HTTP code: 0 Response time: 0 ms Resolved: MCU - GRH is back up in 1c8d3bb.
gharchive/issue
2022-05-20T17:36:13
2025-04-01T04:34:09.199410
{ "authors": [ "elfoche" ], "repo": "elfoche/monitoreo", "url": "https://github.com/elfoche/monitoreo/issues/467", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1452240701
Can I automatically add movies and shows that I have watched to the library? And if so, how? Whenever I watch a movie or show and I want to add it to the library, I always have to manually open the context menu and add it. This is a little bit annoying as I have to do it every single time. Is there a way for it to automatically be added after I watch even a bit of the episode / movie? I guess not, see https://github.com/elgatito/plugin.video.elementum/blob/a3e1bed8489344518de32421ba373c6956bfa8db/resources/language/messages.pot#L981-L997 for currently available syncs. Ah, how unfortunate. You can add something manually, but we don't have automatic addition.
gharchive/issue
2022-11-16T20:36:33
2025-04-01T04:34:09.201310
{ "authors": [ "antonsoroko", "dhr-uvin", "elgatito" ], "repo": "elgatito/plugin.video.elementum", "url": "https://github.com/elgatito/plugin.video.elementum/issues/893", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2126683511
MSID: 87561 Version: 2 DOI: 10.1101/2023.04.19.537448 MSID: 87561 Version: 2 Preprint DOI: https://doi.org/10.1101/2023.04.19.537448 Step 1. Awaiting reviews Editorial to post reviews via hypothesis Useful links: DocMap: https://data-hub-api.elifesciences.org/enhanced-preprints/docmaps/v2/by-publisher/elife/get-by-manuscript-id?manuscript_id=87561 New model tracking: https://docs.google.com/spreadsheets/d/1_fHaoOy7hjyocptKtVJRijeNpUY4hBS7Ck_aVmx6ZJk/ Reviews on sciety: https://sciety.org/articles/activity/10.1101/2023.04.19.537448 For trouble shooting (e.g. no Docmaps available): DocMap issue addressing: https://miro.com/app/board/uXjVNCwK6EI=/ Explore DataHub DocMaps API: https://lookerstudio.google.com/reporting/4c2f0368-babb-4beb-b5b3-497e7e7b0f08/page/ejphD Unmatched submissions and preprints: https://lookerstudio.google.com/u/0/reporting/9f86204f-3bf7-477c-9b18-5c5ef141bf69/page/p_gxi57ha93c Unmatched manuscripts spreadsheet: https://docs.google.com/spreadsheets/d/15QcK8w-ssB7109RQEDtFpJPZ0J5HTGxoHa_2TtpMBbg/edit#gid=1336081641 Step 2. Preview reviewed preprint Production QC content ahead of publication Instructions: QC preview: https://prod--epp.elifesciences.org/previews/87561v2 Update ticket with any problems (add blocked label) When QC OK, add QC OK label to ticket and add publication date and time to https://docs.google.com/spreadsheets/d/1amAlKvdLcaDp5W8Z8g77NmkwbMF5n_u89ArSqPMO8jg Move card to next column (At end of the day post link in #enhanced-preprint and ask for PDF to be generated) Useful links: Preprint DOI : https://doi.org/10.1101/2023.04.19.537448 Confirm reviews returned by EPP: https://prod--epp.elifesciences.org/api/reviewed-preprints/87561/v2/reviews To update the MECA path in the docmap: https://docs.google.com/spreadsheets/d/1mctCQuNFBjSn97Lihy7_vBO6z7-N-oqyLv4clyi6zHg Step 3: Awaiting search reindex This step adds the reviewed preprint to the homepage: https://elifesciences.org The search reindex is triggered once an hour. 
We need the reviewed preprint to be indexed as the search application serves the journal homepage. Useful links: Jenkins pipeline to reindex search can be triggered sooner or monitored here: https://alfred.elifesciences.org/job/process/job/process-reindex-reviewed-preprints/ Step 4: Published! PDF requested Waiting for PDF to be generated Useful links: PDF tracking: https://docs.google.com/spreadsheets/d/106_XeDjmuBae7gexOTNzg60lapeqjl2aRn9DzupGyS8/ Step 5: Introduce PDF to data folder and git repo Upload PDF to relevant folder in git repo https://github.com/elifesciences/enhanced-preprints-data/ Step 6: Done! [ ] Kettle is on! Thanks Rebecca. Corrected package uploaded to s3://prod-elife-epp-meca/87561-v2-meca.zip. Should be ready to check on Monday.
gharchive/issue
2024-02-09T08:29:11
2025-04-01T04:34:09.234208
{ "authors": [ "fred-atherden", "nlisgo" ], "repo": "elifesciences/enhanced-preprints-import", "url": "https://github.com/elifesciences/enhanced-preprints-import/issues/2735", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1202840617
Problems showing and hiding popup with color picker I followed the example at https://justpy.io/quasar_tutorial/QColor/ to implement a color picker element for NiceGUI. But I struggle with showing and hiding the popup on demand. Problem 1: The following code contains two popups, one for the input field and one standalone which is triggered via show and hide buttons. But when clicking the icon in the input field, both popups are shown. I tried to distinguish them as cleanly as possible with different names - in case the name_dict is mixing them up -, but without success. Problem 2: Both buttons seem to toggle the second popup. So the "show" button hides the popup if it is already visible and vice versa. This doesn't make sense to me. import justpy as jp def color_test(): wp = jp.QuasarPage() in1 = jp.QInput(filled=True, style='width:10em', a=wp) j1 = jp.parse_html(""" <q-icon name="colorize" class="cursor-pointer"> <q-popup-proxy transition-show="scale" transition-hide="scale" name="popup1"> <q-color name="color_input1"/> </q-popup-proxy> </q-icon> """) in1.add_scoped_slot('append', j1) j1.name_dict['color_input1'].on('change', lambda sender, _: print(sender.value)) j2 = jp.parse_html(""" <q-popup-proxy transition-show="scale" transition-hide="scale" name="popup2"> <q-color name="color_input2"/> </q-popup-proxy> """, a=wp) j2.name_dict['color_input2'].on('change', lambda sender, _: print(sender.value)) def show(): j2.name_dict['popup2'].value = True def hide(): j2.name_dict['popup2'].value = False jp.Button(text='Show', a=wp, click=lambda *_: show()) jp.Button(text='Hide', a=wp, click=lambda *_: hide()) return wp jp.justpy(color_test) I guess it somehow has to do with changing focus. When clicking on "show" while the popup is visible, it looses focus and, thus, closes. But why can "hide" open it? And it seems that focussing the input element opens the second popup. This would explain why clicking the icon (which focusses the input) opens both popups. 
But how can the second popup depend on the input element? Any help in understanding and improving this behavior is very much appreciated! This is indeed very weird. I'll take a close look at this next week. Hi Eli, any update on the issue with showing/hiding color picker popups? I'm still very much looking forward to integrating these elements in our framework for an ongoing project. Not yet. Unfortunately, I won't have time to look at it this week. It seems like the issue is caused by Quasar's default behavior of triggering popups from parent events. I noticed in a different context that menus would open when clicking next to them. This can be avoided using the "no-parent-event" prop. In the example above I need to change both popup proxies to <q-popup-proxy ... no-parent-event> Now we need an explicit trigger for the first popup, e.g. def show_j1(): j1.name_dict['popup1'].value = True j1.name_dict['colorize'].on('click', lambda sender, _: show_j1()) Although the default behavior is a bit weird, adding no-parent-event solves this issue from my point of view.
gharchive/issue
2022-04-13T06:53:06
2025-04-01T04:34:09.240703
{ "authors": [ "elimintz", "falkoschindler" ], "repo": "elimintz/justpy", "url": "https://github.com/elimintz/justpy/issues/368", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1275769371
Updated README.md - Added Streamlit based WebApp 👨‍💻✅ Hi @elisemercury, Kudos to you for building difPy. I worked on developing a simple Streamlit-based web app for it, and I think it would be fruitful to have it as part of the README here, as the motivation behind developing it came from your work 😄! Title: Streamlit-based Duplicate Images Finder GitHub Repo: https://github.com/prateekralhan/Streamlit-based-Duplicate-Images-Finder Happy opensourcing! Hi @prateekralhan, Thanks a lot for creating this webapp, it's great! This front-end definitely makes the usage of difPy much more user friendly 😀 Happy to merge your pull request! Thanks again and all the best Elise
gharchive/pull-request
2022-06-18T11:42:06
2025-04-01T04:34:09.247182
{ "authors": [ "elisemercury", "prateekralhan" ], "repo": "elisemercury/Duplicate-Image-Finder", "url": "https://github.com/elisemercury/Duplicate-Image-Finder/pull/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
347234058
jump to custom url when finished bundling? It would be useful to allow jumping to a custom URL if I want to debug code on a special host: app.use(webpackServeWaitpage(options, { url: 'https://www.google.com/' })) Because we can't modify online HTML with webpack, we could open localhost at first, and jump to the special URL after bundling finishes. Actually it is possible by creating a custom local ejs file and providing that URL as a parameter to it. I don't see it as a good feature since this page comes up only if the build hasn't finished yet. If it did, then you would be left where you browsed to and not the URL you provided. I think it's a good feature; it would work like "open a local page to see build progress, and jump to the real page after it has finished." If the waitpage is the same as the open URL provided by webpack-serve, it will not work if the target URL is a remote one. So you want to set the open URL (in webpack-serve) to your localhost and change webpack-serve-waitpage to redirect to a different URL? @elisherer Yes, and webpack-serve-waitpage just needs to receive a redirect param to allow a custom jump. The default is false, which does not have an impact on the existing design. I think in the way it is handled today (meta refresh), it's very difficult to accomplish what you want. Do you want to try to create a PR? As I said, it is not possible to implement this feature using the current design. If you can think of a way to implement it, you are welcome to create a PR. Closed.
gharchive/issue
2018-08-03T02:10:01
2025-04-01T04:34:09.251311
{ "authors": [ "ascoders", "elisherer" ], "repo": "elisherer/webpack-serve-waitpage", "url": "https://github.com/elisherer/webpack-serve-waitpage/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
573283114
Removed unneeded ANGLE files to speed up compilation. Also updated MetalANGLE Anyway, what changed between this branch and the angle-metal-backend branch? It seems like the shadow quality has become worse and high DPI is not working correctly. When high DPI is enabled, SDL sets the window's layer scale factor to 2. This makes the Metal layer create a framebuffer of doubled size. The problem is that the Urho3D OGLGraphics layer still thinks the framebuffer's resolution is the same, hence the final scene is only rendered to 1/4 of the screen. FYI, the updated MetalANGLE has some optimizations for dynamic buffer updates. I think it should be useful for Urho3D. Recently there was an interesting event: a user reported to me that he is using MetalANGLE for his game, and his game is using irrlicht (a quite old engine). So I forked irrlicht, integrated MetalANGLE into it, and noticed irrlicht is updating buffers in a way that I didn't expect. It was not efficient, so these past couple of days I have been trying to optimize that part inside MetalANGLE. Thanks for that. I don't have much free time right now; I will work on that during the next weekend. Yes, there is an old issue with high DPI that needs some fixing; I will try to fix it. I disabled high DPI for now, so it is the same now as it was in the angle-metal-backend branch. On which device did you reproduce this issue? I used my MacBook connected to an external monitor. That monitor doesn't even have Retina.
gharchive/pull-request
2020-02-29T12:08:54
2025-04-01T04:34:09.254584
{ "authors": [ "elix22", "kakashidinho" ], "repo": "elix22/Urho3D", "url": "https://github.com/elix22/Urho3D/pull/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
164716363
replace cast/4 with cast/3 in docs Since cast/4 is deprecated. :heart: :green_heart: :blue_heart: :yellow_heart: :purple_heart:
gharchive/pull-request
2016-07-10T13:46:57
2025-04-01T04:34:09.255742
{ "authors": [ "josevalim", "koolhazcker" ], "repo": "elixir-ecto/ecto", "url": "https://github.com/elixir-ecto/ecto/pull/1559", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1575504877
Silence DDL log / notices when repository log config is false Silences DDL log notices when the repo log config is false. Fixes #477 Looks like this test is failing due to the change: https://github.com/elixir-ecto/ecto_sql/blob/master/integration_test/pg/migrations_test.exs#L43 You could change its log to something other than false and it will pass. It might be a good idea to update the docs too to say log: false silences the messages from the database. Right now it's not obvious it belongs in the scope: https://github.com/elixir-ecto/ecto_sql/blob/master/lib/ecto/migrator.ex#L204 `:log` - the level to use for logging of migration instructions. Defaults to `:info`. Can be any of `Logger.level/0` values or a boolean. I'll try to run the mysql & mssql tests using earthly locally first, getting a few failures with my mysql 8.0 install. doesn't seem to like my M1 Mac, I'll try testing it on a Linux box tonight Error: build target: build main: failed to solve: Earthfile line 109:8 with docker run: pull: resolve image config for mysql:5.7: no match for platform in manifest sha256:8cf035b14977b26f4a47d98e85949a7dd35e641f88fc24aa4b466b36beecf9d6: not found in github.com/deepfryed/ecto_sql:master+integration-test-mysql --MYSQL=5.7 Thank you
gharchive/pull-request
2023-02-08T05:26:13
2025-04-01T04:34:09.259098
{ "authors": [ "deepfryed", "greg-rychlewski" ], "repo": "elixir-ecto/ecto_sql", "url": "https://github.com/elixir-ecto/ecto_sql/pull/478", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2472782979
Notify about errors by email or Telegram Hi! First of all, just to say thank you all for this awesome and really great project that makes it easy to track errors without building heavy infrastructure and a lot of configuration. Congratulations! It works really nicely just by installing it and following the initial configuration steps you give in the documentation. I am really interested in the notification of errors, because it can be really useful. I will implement it in a project by handling the events emitted by ErrorTracker.Telemetry. I just want to know if you are interested in having it implemented in ErrorTracker or if you prefer to keep it out for the moment:
1. If you want it implemented in ErrorTracker, I can directly fork the project and start from there following your conventions, etc.
2. If you want it outside of ErrorTracker, I will probably create a simple and easy package that implements this feature.
In case of 2, I would like to confirm that you agree with naming the package using the error_tracker_ prefix. In any case, 1 or 2, my idea and the roadmap in my head (I have not drawn it yet!) is:
1. Implement the handling of events.
2. Let the user send notifications by email.
3. ... and then by Telegram.
I really prefer 3, because I use it more than email, but I think I will need to research and study the implementation of that part more. Of course, any advice you can give is more than welcome! So I am going to wait for your answers ❤️, I am in no hurry, so I can do it calmly in either case. Again, thank you very much for maintaining this project for free! Regards Hi! Thanks for opening this issue to check it out with the team. We think the project itself is not yet at a point at which adding notifications makes sense for us, as we think that if it is added to the core it should be extremely configurable, have several providers, allow extending to custom ones, etc.
However, we want to thank you for your effort and we think that the best approach right now is your second option: implement it in a separate error_tracker_whatever package. If in the future it makes sense to have it in the core package we can work on moving the functionality there :) Also, please let us know once it is published for us to link it from the main package documentation and repository! I'm marking the issue as completed but feel free to comment on it to let us know when it's ready! Hi @odarriba! Thank you very much for your kind answer ❤️ Completely understandable and reasonable! If in the future it makes sense to have it in the core package we can work on moving the functionality there :) Also, please let us know once it is published for us to link it from the main package documentation and repository! Of course! And who better to test it (and crash it!) than you. Once I have a minimum viable package to release on Hex, I will notify you here. See you (I hope sooner rather than later)! In any case, 1 or 2, my idea and the roadmap in my head (I have not drawn it yet!) is: 1. Implement the handling of events. 2. Let the user send notifications by email. 3. ... and then by Telegram. Hi @ivanhercaz , If I understand correctly what you're asking for, this is almost exactly what tower tries to solve. Handling of (error) events and passing them on to one or several reporters. You can accomplish almost exactly what you want with tower_error_tracker, tower_email and writing a new tower_telegram (doesn't exist yet) and including those 3 as dependencies in your app, then config :tower, reporters: [TowerEmail, TowerErrorTracker, TowerTelegram]. Hope it helps. Hi @ivanhercaz , If I understand correctly what you're asking for, this is almost exactly what tower tries to solve. Handling of (error) events and passing them on to one or several reporters. Hi @grzuy! I admit that tower was a completely unknown project to me until now and it is really awesome.
You can accomplish almost exactly what you want with tower_error_tracker, tower_email and writing a new tower_telegram (doesn't exist yet) and including those 3 as dependencies in your app, then config :tower, reporters: [TowerEmail, TowerErrorTracker, TowerTelegram]. And even more awesome, you have all of these reporters implemented, even a specific one for ErrorTracker; it is like mixing two awesome pieces into a wonderful puzzle! I began with the idea of developing the Telegram reporter for ErrorTracker, but I couldn't advance a lot, just visualize the idea and schematize it. So at this moment I tend toward the idea of creating the Telegram reporter for `tower`, because it seems like a two-for-one: `ErrorTracker` can benefit from it while the reporter could still be used without it, and in addition `tower` also provides a reporter structure that can ease the development process. Hope it helps. It helps a lot. I am not going to have time to test it well, maybe just locally, but I will do it. And even if I get some free time I will try to do the Telegram reporter. We think the project itself is not yet at a point in which adding notifications makes sense for us, as we think that if it is added to the core it should be extremely configurable, have several providers, allow to extend to custom ones, etc. @odarriba, given what you mentioned to me previously about the notifications, and given that tower exists and there is already an existing integration with ErrorTracker, do you think it makes sense for ErrorTracker itself to provide the reporters? Hey everybody!
While I think that a plugin to get notifications by Telegram is great, I think that for 99% of users, email notifications can be enough, and that can be done with a simple GenServer like this:

defmodule MyApp.ErrorTracker.Notifier do
  use GenServer

  alias MyAppWeb.Emails.{ErrorTrackerNotifierEmail, SystemMailer}

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{})
  end

  def init(state) do
    :ok =
      :telemetry.attach_many(
        "error-tracker-notifier",
        [
          [:error_tracker, :error, :new],
          [:error_tracker, :error, :unresolved]
        ],
        &handle_event/4,
        nil
      )

    {:ok, state}
  end

  def handle_event([:error_tracker, :error, event_type], _measurements, metadata, _config) do
    send_email(event_type, metadata)
  end

  defp send_email(event_type, metadata) do
    ErrorTrackerNotifierEmail.new_error(event_type, metadata)
    |> SystemMailer.deliver()
  end
end

And then starting it up in the application:

Supervisor.child_spec({MyApp.ErrorTracker.Notifier, name: :error_tracker_notifier}, id: :error_tracker_notifier),

The rest is just a standard mailer. I'll be happy to send a PR to add this example to the documentation guides. Hey everybody! While I think that a plugin to get notifications by Telegram is great Hi @jaimeiniesta! Yes, I agree, Telegram is just one example of a possibility, and a niche one haha. I'll be happy to send a PR to add this example to the documentation guides. Please do! That would be a great contribution @jaimeiniesta
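The tower-style `reporters:` list discussed above is, at its core, just fan-out of one error event to every configured handler. A minimal language-agnostic sketch in Python (the reporter stand-ins are invented for illustration):

```python
# Fan out one error event to every configured reporter, tower-style.
reporters = []

def register(reporter):
    reporters.append(reporter)

def report(event):
    for reporter in reporters:
        reporter(event)

emails, chats = [], []
register(emails.append)   # stand-in for an email reporter
register(chats.append)    # stand-in for a Telegram reporter
report({"kind": "error", "message": "boom"})
assert emails == chats == [{"kind": "error", "message": "boom"}]
```

Each real reporter would format and deliver the event (SMTP, Telegram bot API, etc.), but the dispatch stays this simple.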
gharchive/issue
2024-08-19T08:33:32
2025-04-01T04:34:09.275322
{ "authors": [ "crbelaus", "grzuy", "ivanhercaz", "jaimeiniesta", "odarriba" ], "repo": "elixir-error-tracker/error-tracker", "url": "https://github.com/elixir-error-tracker/error-tracker/issues/54", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1971118829
Revisit Series.mask vs. Series.filter Description This discussion came up in another issue. I'm breaking it out into its own issue so it can be tracked independently. Show/hide recap @cigrainger: And I also agree that it's confusing we have Series.mask/2 instead of Series.filter/2. @philss or @josevalim I'm sure there was a convo about this but I can't remember what the reasoning was for this anymore. https://github.com/elixir-explorer/explorer/pull/326#issue-1352162455 @josevalim We changed the implementation and renamed at the same time but I am fine with reverting the name back to filter. :) It should be a quick change and we can add:

@deprecated "Use Explorer.Series.filter/2 instead"
def mask(s1, s2), do: filter(s1, s2)

It will certainly be much easier to find. @billylanchantin It may be worth having both since mask/2 and filter/2 accept different datatypes: mask takes a boolean series while filter/2/filter_with/2 take a query/function. If you have the boolean series on hand, you'd want mask/2. But if you're finding that you need to build the boolean series e.g. with transform/2 only to pass it right into mask/2, filter/2 would be convenient. @josevalim The issue is that doing it with a function is horribly expensive and should be generally avoided. The issue is that doing it with a function is horribly expensive and should be generally avoided. I agree! I was mostly focused on making the DataFrame and Series APIs similar. Polars has a few filter functions which take either an expression or a boolean series (mask). They're a bit inconsistent with how they do that, though:

Entity    | Lang   | Function          | Expr | Mask
DataFrame | Python | DataFrame.filter  | ✅   | ✅
DataFrame | Rust   | DataFrame::filter | ❌   | ✅
LazyFrame | Python | LazyFrame.filter  | ✅   | ✅
LazyFrame | Rust   | LazyFrame::filter | ✅   | ❌
Series    | Python | Series.filter     | ❌   | ✅
Series    | Rust   | Series::filter    | ❌   | ✅

Explorer OTOH introduced the concept of a mask function. I think this was a good call.
It helps hint what the input should be:
- filter functions accept expressions
- mask functions accept masks
So if Explorer is to have a Series.filter, I think the least surprising choice would be to have it accept a query as well. Unfortunately for the goal of a mask/filter distinction, I couldn't find a way in Polars to filter a Series on an expression (other than I guess wrapping it in a DataFrame then converting back?). Here are the options I see:

# | Option | Pro | Con
1. | Keep things as they are | Consistent meaning for mask | Lack of filter surprises newcomers
2. | Rename Series.mask to Series.filter | Newcomers find filter quicker | filter is inconsistent across Series and DataFrame
3. | Keep Series.mask and hack together a Series.filter that accepts an expression | Consistent and easy for newcomers to find | Hacks are bad

My first choice is 3. Assuming the hack isn't terrible, it helps newcomers find the functionality and keeps the meanings consistent. We can also document that Series.mask should be preferred. My second choice is 1. I think I favor consistency over newcomer surprise. We could possibly add an example to the docs to help newcomers find Series.mask if the question comes up often enough. My last choice is 2. I think Series.filter working like DataFrame.mask while also having a DataFrame.filter would be confusing long term. Honestly, an implementation of Series.filter(fn x -> x end) (or filter_with) where we put the series in a dataframe, filter it, and then read the series out, sounds very elegant to me and it would also be optimized quite cleanly and most likely the most efficient approach too. Polars may even have better APIs. This would also allow us to introduce map (or mutate?)
and arrange for individual series. filter/filter_with docs could point out to mask for when the lazy expression version is not enough. Thoughts? For map/mutate, I know there's already Series.transform so we'll want to keep that in mind. But yes I agree. Assuming the overhead with wrapping-then-unwrapping is acceptable, we should be able to bring a lot of the DataFrame specific functionality over to Series. I'm happy to try for a filter/filter_with PR to see how the idea shakes out. mutate would be different than transform, because mutate would work on lazy series. I would start with filter_with cause I am not sure how we can do filter without an anonymous function. In DFs we refer to them by column name but that's not an option here. Please go ahead! In DFs we refer to them by column name but that's not an option here. Oh of course. I'll think about it, but nothing clean comes to immediately to mind. Well that is a SUPER elegant solution. What a great idea. I'm in full support. This would also allow us to introduce map (or mutate?) and arrange for individual series. filter/filter_with docs could point out to mask for when the lazy expression version is not enough. Thoughts? What would arrange be for individual Series? @cigrainger that was what i thought but i guess it doesn't make much sense since we would only arrange ourselves? Exactly, I think it would just be a sort. Though maybe you could provide a sorter? E.g. on strings you could sort by ends_with or similar? (speaking of which I need to add some more string ops) Correct. So I guess we could have some use cases? Definitely. We will need to decide on the naming though. Today we have Series.sort. Should it be sort_with or arrange_with? Good question. I'm struggling to disentangle the macro aspect from the motivation to introduce _with variants. I don't have a strong opinion about sort vs arrange here (largely because in R they're different anyway -- sorting a vector in R is done with base R's sort). 
Do we need a _with or could we just make it multi-arity? That's a good discussion. We don't need _with in series. It depends on how consistent we want to be with DF and within ourselves. The question is: can we overload sort? Would the two options (direction and nils) apply to our function-based version? I think they do apply. We have a direction selector in DataFrame.arrange. I think we could also use nils in DataFrame.arrange. Thought: for the macro versions, what if we did it like Ecto and had them provide the column as an argument? E.g. we make a filter/3:

dates =
  [~D[2023-11-01], ~D[2023-11-02], ~D[2023-11-03]]
  |> S.from_list()
  |> S.filter(date, date > ^~D[2023-11-01])

That would work but, at the same time, everyone is using pipelines to transform series today anyway, so requesting a pipeline inside the anonymous function is not that bad. I think #728 closed this issue. We discussed several other additions to Series in that PR. Shall I close this issue and open another to track those additions? Yeah!
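The mask/filter distinction debated in this thread is language-agnostic. A Python sketch on plain lists (not Explorer's API) of the two call shapes — a precomputed boolean series versus a predicate that builds one:

```python
# mask: keep items where a precomputed boolean series is True.
def mask(values, bools):
    return [v for v, keep in zip(values, bools) if keep]

# filter_with: build the boolean series from a predicate, then mask.
def filter_with(values, predicate):
    return mask(values, [predicate(v) for v in values])

xs = [1, 2, 3, 4]
assert mask(xs, [True, False, True, False]) == [1, 3]
assert filter_with(xs, lambda x: x > 2) == [3, 4]
```

In a dataframe library the predicate version is typically compiled to a lazy expression rather than invoked per element, which is why the thread worries about per-element anonymous functions being expensive.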
gharchive/issue
2023-10-31T18:54:44
2025-04-01T04:34:09.307000
{ "authors": [ "billylanchantin", "cigrainger", "josevalim" ], "repo": "elixir-explorer/explorer", "url": "https://github.com/elixir-explorer/explorer/issues/726", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1532809984
Is it possible to remove the last layers from a neural network? In order to use an image classifier model for image embedding, one strategy is to use the output of a hidden layer as the embedding. One can therefore remove the last few layers of a trained classifier model such as ResNet. I could not find a function to remove layers in Axon though. Is there a way to do that? You can pop layers out of the model with Axon.pop_node/1:

model =
  Axon.input("foo")
  |> Axon.dense(32)
  |> Axon.dense(1)

IO.inspect model
#Axon<
  inputs: %{"foo" => nil}
  outputs: "dense_1"
  nodes: 3
>

{_node, updated_model} = Axon.pop_node(model)

IO.inspect updated_model
#Axon<
  inputs: %{"foo" => nil}
  outputs: "dense_0"
  nodes: 2
>

Thanks a lot! Sorry I missed it in the docs
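The embedding trick in the question — drop the classifier head and read a hidden layer's output — can be sketched generically. This Python toy (not Axon) models a network as an ordered list of layer functions:

```python
# Toy "model": an ordered list of layer functions composed left to right.
model = [
    lambda x: x * 2,    # hidden layer 1
    lambda x: x + 1,    # hidden layer 2 (the "embedding" we want)
    lambda x: x >= 10,  # classifier head
]

def forward(layers, x):
    for layer in layers:
        x = layer(x)
    return x

head, embedder = model[-1], model[:-1]  # pop the last layer, keep the rest
assert forward(model, 5) is True        # full classifier: (5*2)+1 >= 10
assert forward(embedder, 5) == 11       # hidden-layer output as "embedding"
```

Popping the head leaves a model whose output is the last hidden layer, which is exactly what `Axon.pop_node/1` achieves in the answer above.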
gharchive/issue
2023-01-13T19:33:49
2025-04-01T04:34:09.360477
{ "authors": [ "lucaong", "seanmor5" ], "repo": "elixir-nx/axon", "url": "https://github.com/elixir-nx/axon/issues/446", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
435373039
Secure by default

The Motivation

Since my earlier days as a developer I have always been astonished by our industry's mindset of being insecure by default, where security is normally opt-in instead of opt-out, and this mindset is one of the main culprits for the many data breaches that occur every week/month/year. The current mindset is to release software with all the doors and windows of your home open while you are away, then hope that the developers using it will learn how to properly close all those doors and windows, and then also hope that once they learn how to do it properly, they don't forget one open. Just to put things in perspective, try shodan.io to see how software ends up in production without being properly secured, due to our industry's current mindset of preferring convenience over security. This all started in this tweet and I was asked in this other tweet to open this issue.

The Context

So this has been brought up before by @lau in issue #235 and by @rawkode in issue #138. By @lau: The default adapter is httpc, which does not check certificates when using HTTPS. This seems like an unsafe default. Many people will probably use the default and use it to communicate with APIs. Perhaps with sensitive data. I cannot agree more with this, but at the time the issue was solved with only a note in the README. Alerts, notes in the README or any other docs will be missed, ignored or forgotten by beginners, juniors and senior developers, because human beings tend to prefer convenience above anything else, but convenience should not be put in front of security.

The Proposal

@teamon says in this comment: The point of Tesla is not to get rid of hackney, nor any other http adapter. The point is to let the end user choose the adapter that suits best or write a custom one if needed. So in the README we have:
But instead we should have: Configure default adapter in config/config.exs (REQUIRED). And we could then have a list of possible secure-by-default adapters. This change would leave Tesla agnostic of an adapter as per the goal of @teamon, but would not leave Tesla insecure by default... The developer would be required to explicitly opt in to the adapter to use. So this means that if Tesla were used without configuring the adapter, an exception should be raised with a very clear message. Since this is a breaking change, a major version would be needed. PS: I appeal to all developers to always embrace a Secure by Default approach when building software. 👍 BTW when creating a client dynamically, app config (config/config.exs) may not be used - e.g. using Tesla.client/2. btw, to find the default CA bundle, you can do something like

def default_cert_bundle() do
  cond do
    File.exists?("/etc/ssl/cert.pem") -> "/etc/ssl/cert.pem"
    File.exists?("/etc/pki/tls/cert.pem") -> "/etc/pki/tls/cert.pem"
    File.exists?("/usr/lib/ssl/cert.pem") -> "/usr/lib/ssl/cert.pem"
    File.exists?("/etc/ssl/certs/ca-certificates.crt") -> "/etc/ssl/certs/ca-certificates.crt"
    Code.ensure_loaded(:certifi) == {:module, :certifi} -> apply(:certifi, :cacertfile, [])
    true -> nil
  end
end

and the ssl_verify_fun library should be used for proper hostname verification, e.g.:

ssl_options: [
  verify: :verify_peer,
  verify_fun: &:ssl_verify_hostname.verify_fun/3,
  depth: 69,
  cacertfile: default_cert_bundle()
]

Alerts, notes in the README or any other docs will be missed, ignored or forgotten by beginners, juniors and senior developers, because human beings tend to prefer convenience above anything else, but convenience should not be put in front of security. @Exadra37 has got a point here. I recently shot myself in the foot.
I have, too, a background in IT security before I switched to programming, and I'm well aware of how TLS works, and which Elixir HTTP libraries are secure by default and which are not. It didn't prevent me from becoming convinced around 2 years ago that the default adapter of Tesla was Hackney, which is secure by default. What a surprise when I read last week the following comment I left 3 years ago in my code: default httpc Tesla's adapter is unsafe (does not check TLS certificates) So the question is not whether a programmer should carefully read the README (by the way, the formatting of the warning makes it look less important than the rest of the text!). It seems this security vulnerability will be fixed with v2.0. Meanwhile, I suggest writing a flashy warning at the very beginning of the README. Another option would be to raise on use of an https URI with httpc when SSL options were not set. Well, I am astonished that after more than a year has passed, Tesla still continues to use unsafe defaults. Why is security not taken more seriously? I keep being downvoted and libraries depending on Tesla continue to be vulnerable:

iex> LinkPreview.create "https://badssl.com"
{:ok,
 %LinkPreview.Page{
   description: "🎛Dashboard",
   images: [%{url: "front-page-icons/chrome.svg"}],
   original_url: "https://badssl.com",
   title: "badssl.com",
   website_url: "badssl.com"
 }}

That request should fail, but because it uses Tesla with the insecure default it succeeds. https://github.com/appunite/link_preview/issues/17 👋🏻 sorry it took me this long to get to this issue. I am trying to do some maintenance. Before I continue, let me clarify that I **NEVER** use the httpc client ever, and I **AGREE** that the HTTP client should be secure by default. Period. httpc is the built-in client from Erlang itself. It will continue being the default adapter because anything else will require the installation of extra dependencies.
I am going to try to address the situation and try my best to add sensible defaults around httpc. That being said, let us keep in mind that httpc itself is at fault here to some extent; the OTP team should be fixing this situation to begin with. @Exadra37 I would love your support since it seems you like the security topic. @voltone (https://elixirforum.com/u/voltone/summary on the ElixirForum) might be another good person to get some ideas from. He's posted quite a bit about security and the BEAM and is part of the EEF security working group (e.g. here's an old post about httpc security). And here's the related page for httpc https://github.com/erlef/security-wg/blob/92345ab62864ebb4efab11479cc40298f314c47a/docs/secure_coding_and_deployment_hardening/inets.md You are quite possibly already aware/talking to him but I figured I'd mention him in case it's helpful! How about we switch to mint as the default adapter, making :httpc an explicit choice (config :tesla, adapter: Tesla.Adapter.Httpc)? This would require a good error message in case mint is not installed, but I think it's worth it. That was quick (5 years 🙀 ) 😞
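The secure-by-default posture argued for in this thread is what, for example, Python's standard-library `ssl` module ships: the default context verifies peer certificates and hostnames, and turning that off takes deliberate, explicit steps. A quick check:

```python
import ssl

# Secure by default: create_default_context() enables certificate and
# hostname verification; disabling it must be an explicit opt-out.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The insecure configuration requires two deliberate statements
# (check_hostname must be disabled before verify_mode can be relaxed):
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
assert ctx.verify_mode == ssl.CERT_NONE
```

The thread's proposal for Tesla follows the same shape: verification on unless the developer explicitly opts out.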
gharchive/issue
2019-04-20T08:53:54
2025-04-01T04:34:09.376724
{ "authors": [ "Exadra37", "axelson", "chulkilee", "myfreeweb", "tanguilp", "teamon", "yordis" ], "repo": "elixir-tesla/tesla", "url": "https://github.com/elixir-tesla/tesla/issues/293", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1751326426
Fix elixir invalid command for bin/credo-language-server I got the following error when running /bin/credo-language-server on Linux. This fixes it.

/usr/bin/env: ‘elixir --sname undefined’: No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines

Apparently this is from coreutils 8.30 onwards, and I recall that Apple ships ancient versions of these tools, so I'm not sure about compatibility. I tested this and it seems to work on macOS
gharchive/pull-request
2023-06-11T08:46:33
2025-04-01T04:34:09.378696
{ "authors": [ "mhanberg", "wkirschbaum" ], "repo": "elixir-tools/credo-language-server", "url": "https://github.com/elixir-tools/credo-language-server/pull/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
124550664
New hook I feel we could use a hook in the getBoardTree function to allow adding to that query. That would complement integrate_modify_board, which now would need to do its own query should an addon add any columns to the boards or categories tables. I'll leave that one to you so you can put it where you think it's best. ;D Just doing this now ... I think it will take two hooks: one in the query for the select and the join ... and then one in the loop passing the row.
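The two hooks sketched above — one extending the query's select/join, one consuming each row — form a common extension pattern. A language-agnostic Python toy (function and hook names are invented for illustration, not Elkarte's API):

```python
# Hook registry: addons append callables that extend the query or each row.
hooks = {"board_tree_query": [], "board_tree_row": []}

def get_board_tree():
    select, joins = ["b.id_board", "b.name"], []
    for hook in hooks["board_tree_query"]:
        hook(select, joins)                      # let addons add columns/joins
    rows = [{"id_board": 1, "name": "General"}]  # pretend DB result
    for row in rows:
        for hook in hooks["board_tree_row"]:
            hook(row)                            # let addons consume each row
    return select, rows

hooks["board_tree_query"].append(lambda s, j: s.append("x.color"))
hooks["board_tree_row"].append(lambda r: r.setdefault("color", "red"))
select, rows = get_board_tree()
assert "x.color" in select and rows[0]["color"] == "red"
```

With this shape, addons never need to run their own duplicate query just to fetch their extra columns.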
gharchive/issue
2016-01-01T16:55:23
2025-04-01T04:34:09.385298
{ "authors": [ "Spuds", "emanuele45" ], "repo": "elkarte/Elkarte", "url": "https://github.com/elkarte/Elkarte/issues/2337", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
138525980
Change install hook order When installing a package, it would be more logical to call the link hook from the install hook. This is the case in the uninstall hook, which calls the unlink hook. It would be nice to have this consistent, but changing it now would break a lot of ellipsis.sh files. It is however preferred to call the install hook before the link hook. This should not cause trouble because linking and unlinking already work after installing a package. The benefit of this change would be that we could compile/build files that would later be linked/unlinked. Funny thing is I made this inconsistency myself #23. I should have removed the other call to the link hook. The more I think about it, the more convinced I am that I did the right thing in that PR. Maybe it's better to do this the same way when uninstalling packages. It would make overwriting the uninstall hook easier, and would force people to write unlink hooks that work independently of the uninstall hook. Changing this also shouldn't break many ellipsis.sh files (the uninstall hook is far less used, and the default unlink hook can be called 2 times without problems) That makes sense. I'm already working on a PR!
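The ordering proposed here — run install first so built files exist when linking, and keep unlink idempotent so it works inside or outside uninstall — can be sketched generically in Python (hook names mirror the discussion; the implementation is invented):

```python
# Run install before link so compiled/built files exist when linking;
# unlink must be idempotent so it is safe to call outside uninstall too.
log = []

def install():
    log.append("install")  # build/compile artifacts here
    link()

def link():
    log.append("link")

linked = {"dotfile"}

def unlink():
    linked.discard("dotfile")  # discard: calling twice causes no error

install()
assert log == ["install", "link"]
unlink(); unlink()  # idempotent: second call is a no-op
assert linked == set()
```

Idempotent unlink is what lets the default hook "be called 2 times without problems", as the thread notes.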
gharchive/pull-request
2016-03-04T17:00:22
2025-04-01T04:34:09.389212
{ "authors": [ "groggemans", "zeekay" ], "repo": "ellipsis/ellipsis", "url": "https://github.com/ellipsis/ellipsis/pull/43", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
492994714
trouble trying to import random-extra modules on elm 0.18 I'm using random-extra 2.0.0, the version compatible with Elm 0.18, but despite the package installing successfully and being present in the packages folder of elm-stuff, the compiler gives an error when trying to import one of its modules, say Random.String for instance, saying it cannot find the module in question. I'm on Win10. The only dependency of random-extra is elm-core, of which my project uses version 5.1.1, which fits the spec for random-extra. I have tried cleaning exact-dependencies.json to no avail, reinstalling everything from scratch, and also cloning the src locally and adding a ref to it in the source-directories section of elm-package, but the error persists in either case. Any suggestions are welcome. Thx. Turns out it was just a misconfig on my side in a fork of another project, not an issue whatsoever with the package.
gharchive/issue
2019-09-12T19:45:20
2025-04-01T04:34:09.392772
{ "authors": [ "neocris" ], "repo": "elm-community/random-extra", "url": "https://github.com/elm-community/random-extra/issues/23", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
101162045
Algorithmic improvement in Dict Replaces https://github.com/elm-lang/core/pull/255. The essence is that calls of the form max (RBNode_elm_builtin c k v l r) are replaced by maxWithDefault k v r, where the improvement is that the latter function is less complicated (fewer arguments, less pattern matching) and does not need an artificial (never triggered) Debug.crash case. Two further notes: The function min is never called, also not exported, thus could be eliminated. The two functions maxWithDefault and remove_max are always called in parallel, and have the same recursion structure, so could in principle be merged into a single function. Awesome, I like this one a lot more, nice refactor! Let's get rid of min in a PR of its own. I don't know the code well enough to say on the second one. If you think it's better, it makes sense to me to try it. PR for removing min: https://github.com/elm-lang/core/pull/351 About the other potential change (merging maxWithDefault and remove_max), this would really require first having some benchmarking in place. The idea would be to use a classic tupling transformation. That is,

maxWithDefault : k -> v -> Dict k v -> (k, v)
remove_max : NColor -> k -> v -> Dict k v -> Dict k v -> Dict k v

would be replaced by

new_function : NColor -> k -> v -> Dict k v -> Dict k v -> ((k, v), Dict k v)

with the understanding that semantically

new_function c k v l r = (maxWithDefault k v r, remove_max c k v l r)

but actually new_function performs only one traversal instead of the two independent traversals of remove_max and maxWithDefault. In theory, this transformation should always be an improvement, but in practice it turns out (at the very least in Haskell, partly due to issues with laziness) that very often it is not. The extra tuples may lead to space overhead, extra construction and deconstruction work, etc.
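The tupling transformation described above — replacing two parallel traversals with one traversal that returns a pair — can be shown on a much simpler structure. This Python toy uses a right-spine tree of `(key, value, right_subtree)` nodes; it illustrates the idea, not Elm's red-black Dict:

```python
# A right-spine tree node: (key, value, right_subtree_or_None).
def max_kv(k, v, right):          # analogous to maxWithDefault
    return (k, v) if right is None else max_kv(*right)

def remove_max(k, v, right):      # drop the maximum (rightmost) node
    return None if right is None else (k, v, remove_max(*right))

# Tupled version: one traversal computes both results at once.
def take_max(k, v, right):
    if right is None:
        return (k, v), None
    kv, rest = take_max(*right)
    return kv, (k, v, rest)

t = (1, "a", (2, "b", (3, "c", None)))
assert take_max(*t) == (max_kv(*t), remove_max(*t))
```

Whether the single traversal actually wins in practice depends on allocation costs for the extra tuples, exactly the caveat raised in the review.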
gharchive/pull-request
2015-08-15T10:44:28
2025-04-01T04:34:09.401475
{ "authors": [ "evancz", "jvoigtlaender" ], "repo": "elm-lang/core", "url": "https://github.com/elm-lang/core/pull/350", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
20614463
Collage makes covered input elements unreachable Here it is demonstrated that an input element that's not in the first place in a collage is unreachable. It's also demonstrated that flow doesn't have this problem. I've tested it with hoverables (left) and customButtons (right) http://share-elm.com/sprout/5252c33fe4b0d6a98b152f5e Flow has other troubles but seems to be getting input detection right. I believe this was fixed at some point with the pointer-events property. Let's open a new issue over on this repo if there are still problems!
gharchive/issue
2013-10-07T14:27:50
2025-04-01T04:34:09.403712
{ "authors": [ "VulumeCode", "evancz" ], "repo": "elm-lang/elm-compiler", "url": "https://github.com/elm-lang/elm-compiler/issues/283", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
165669869
Exploits allowing arbitrary JS code execution via Elm code As I understand it, the only ways someone can publish a new Elm package and use it to run arbitrary JS code (hijacking the entire Elm runtime of an unsuspecting package consumer) are the following:

- node "script" [] [ text "injectMalware()" ] (works with Html.node and Html.Keyed.node)
- a [ href "javascript:injectMalware()" ] []
- button [ property "onclick" "injectMalware()" ] []
- img [ property "onload" "injectMalware()", src "blah.jpeg" ] [] (also works with other elements that support onload, e.g. body)

These aren't self-contained examples, granted, which I guess means they aren't proper SSCCEs, but I'm not sure how useful that'd be here. I can make runnable examples of these if that'd be useful, but I assume it wouldn't help. I found a library that sanitizes user-submitted HTML to prevent XSS attacks. We could use some of these techniques to patch vulnerabilities. Note: There do exist real-world non-malware uses of this exploit. Closing this off would be a breaking change for them. Chatted with @rtfeldman, and the resulting suggestion is:

- node disallows "script" and crashes
- property disallows all usage of "on..." as an argument
- href replaces all things that start with "javascript:..." with "javascript:void(0) // elm-lang/html does not allow arbitrary JS for security"

We decided in #47 and #46 that having node and property be a certain level of unsafe was a reasonable compromise given how people are likely to write code.

SVG

import Svg exposing (..)
import Svg.Attributes exposing (..)

main =
  svg []
    [ script [ type' "text/javascript" ] [ text "alert('ok')" ] ]

elm form action

import Html exposing (..)
import Html.Attributes exposing (..)

main =
  Html.form [ action "javascript:alert('ok')" ]
    [ input [ type' "submit" ] [] ]

I am sure there are some more. I am sure there are some more. As with Pokemon, "Gotta catch 'em all!" 😄 Thanks for these @jweir! Please post any others you can think of.
I came across https://html5sec.org/ which has a lot of vectors. Could be a good reference. i.e. the object tag and data URIs. import Html exposing (..) import Html.Attributes exposing (..) import Json.Encode as Json main = object [property "data" (Json.string "data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==")] [] With the data URI is it going to be possible to sanitize everything? Can a package include image resources? These could include SVG which scripts. @jweir wow, that's a long list! It looks like the on... filter would catch many of them. Do you see any others besides data that wouldn't be covered by these countermeasures? The on... filter blocks a trick we use in elm-mdl to blur stuff on click: div [ attribute "onClick" "this.blur();" ] [...] Is it possible the on-filter can be left off until elm provides a canonical method to work with focus/blur? I understand such a method is in the works already. @rtfeldman I think the list of countermeasures is a good start and don't forget form.action. Do packages allow external resources: images, or css? If resources are allowed then there is the data property on object. I can inject JS and access the Elm app by loading an external HTML file with a script in it. I am not sure about inlining with a data URI though – on Chrome at least – those seem get sandboxed in their own document. I.E. they can execute JS, but not reference their parent. Something to research – I haven't found any good documentation. I imagine filtering would be occurring at the VirtualDom level, i.e. changes would apply to Html and Svg? If so the above SVG example would be caught by the countermeasures. 
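The countermeasures discussed above (crash on "script" nodes, reject on* properties, neutralize javascript: hrefs) can be sketched roughly as follows. This is an illustrative Python sketch of the filtering rules only, with hypothetical function names — not the actual elm-lang/html or virtual-dom implementation.

```python
# Hypothetical sketch of the countermeasures discussed in this thread;
# the real filtering would live inside elm-lang/html / virtual-dom.

def check_node_tag(tag):
    # "node disallows 'script' and crashes" (covers Svg's script too)
    if tag.lower() == "script":
        raise ValueError("script nodes are not allowed")
    return tag

def check_property(name):
    # "property disallows all usage of 'on...' as an argument"
    if name.lower().startswith("on"):
        raise ValueError("event-handler properties are not allowed")
    return name

def sanitize_href(url):
    # "href changes all things that start with 'javascript:...'"
    if url.strip().lower().startswith("javascript:"):
        return "javascript:void(0)"  # arbitrary JS is not allowed
    return url
```

As the thread points out, form.action and data: URIs on object would need the same treatment as href.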
Referencing this: https://github.com/elm-lang/virtual-dom/issues/31#issuecomment-235375650 It's not an arbitrary code execution vulnerability, but being able to set property "innerHTML" (Json.string "borked!") on a node apparently breaks the expected indexed access to child nodes and will throw an error if the non-Elm child node we inserted with this innerHTML business doesn't have the same API available as the node that vdom is actually trying to work with. Pretty ugly, it wouldn't be fun to have something like this on elm-package, and I can't think of a legitimate use case for property "innerHTML". A rigorous treatment of securely generating HTML or DOM nodes is https://rawgit.com/mikesamuel/sanitized-jquery-templates/trunk/safetemplate.html?printable . This is what the Go standard library html/template package used to guide its design. If you are designing some kind of drawing application, you want to be able to keep track of a pointer that leaves the drawing area to know if the pointer went up. This is possible using what is called pointer capture. Elm events mechanism does not allow to use it, so I suggest people to use the attribute trick if they want this behavior: attribute "onpointerdown" "event.target.setPointerCapture(event.pointerId);"
gharchive/issue
2016-07-14T22:07:22
2025-04-01T04:34:09.418313
{ "authors": [ "cobalamin", "debois", "evancz", "jweir", "mpizenberg", "rtfeldman", "sethwklein" ], "repo": "elm-lang/html", "url": "https://github.com/elm-lang/html/issues/56", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
155806164
Chaining Html.App.map messes up

As reported by @pdamoc in https://github.com/elm-lang/html/issues/16 and further clarified by @debois.

import Html exposing (Html, button, div, text)
import Html.App as App
import Html.Events exposing (onClick)

main : Program Never
main =
    App.beginnerProgram
        { model = 0
        , update = always <| (+) 1
        , view = view
        }

view : Int -> Html ()
view model =
    div []
        [ text (toString model)
        , button [ onClick () ] [ text "Inc" ]
        ]
        |> App.map identity
        |> App.map identity

Notes:
- The repeated App.map is necessary to trigger the error.
- Adding a second button before the text (see below) suppresses the JS error, but causes updates to change the text on the left-most button instead of the text element:

[ button [ onClick () ] [ text "Inc" ]
, text (toString model)
, button [ onClick () ] [ text "Inc" ]
]

I'm also seeing interesting behavior with nested components, which I thought would be fixed by https://github.com/elm-lang/core/commit/cce1dc54f15ff9bae5d2900faf2661f2d1cd83b2 - but it wasn't. With a component nesting another component and doing Html.App.map to wrap all actions coming from the sub-component, the outer update is being called with the sub-component's messages not wrapped, which should be impossible. The pattern-matching then does unpredictable things, as it often goes down the default: branch, which could do all sorts of bad things since it's assuming that the default: branch matched the only remaining pattern (common example: accessing fields that don't exist, and assigning them on the model as undefined).

I'm seeing this in a pretty complicated stack - so I'll try to whittle it down to a smaller repro. Just wanted to follow up to see if any of this sounds related.

@vito, is your problem similar to the second context I described in elm-lang/html/#16? Here is a simplified version: https://gist.github.com/pdamoc/27e4198940d50e0e38b5fba05304d799

Build WidgetList.elm and then: type "cat", hit enter, type "dog", hit enter, type anything.

I've also run into the same behavior as @vito in a fairly complicated stack on 0.17. I'll try to create a small example.

Fixed by https://github.com/elm-lang/virtual-dom/pull/20?

Should be fixed by #20
gharchive/issue
2016-05-19T18:40:03
2025-04-01T04:34:09.425616
{ "authors": [ "blakesweeney", "evancz", "jvoigtlaender", "pdamoc", "vito" ], "repo": "elm-lang/virtual-dom", "url": "https://github.com/elm-lang/virtual-dom/issues/21", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
968841384
:wrench: Some fixes and features.

Solved some issues and added some features:

Fixes
- Solve #609
- Solve #708
- Solve #288
- Solve #249
- Solve #180

Features
- Better service navigation logic on open.
- Better usability: clicking the cancel button returns to the chosen active service.

Help needed to build the project! I have read the contribution readme, but without success in getting the build process to work.
gharchive/pull-request
2021-08-12T13:35:06
2025-04-01T04:34:09.436423
{ "authors": [ "phlegx" ], "repo": "elninotech/uppload", "url": "https://github.com/elninotech/uppload/pull/728", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
454810718
Panic when using $nil as a key

Using $nil as a key in a map leads to a panic when accessed:

⬥ elvish
⬥ xx = [&]
⬥ xx[$nil] = foo
⬥ put $xx
▶ panic: interface conversion: string is not hashmap.node: missing method assoc

goroutine 924 [running]:
github.com/elves/elvish/vendor/github.com/xiaq/persistent/hashmap.(*bitmapNodeIterator).fixCurrent(0xc00051c880)
	/opt/pkg/gopkg/src/github.com/elves/elvish/vendor/github.com/xiaq/persistent/hashmap/hashmap.go:437 +0xa5
github.com/elves/elvish/vendor/github.com/xiaq/persistent/hashmap.(*bitmapNode).iterator(0xc0002dda40, 0xc000267d01, 0x166b5c0)
	/opt/pkg/gopkg/src/github.com/elves/elvish/vendor/github.com/xiaq/persistent/hashmap/hashmap.go:423 +0x6d
github.com/elves/elvish/vendor/github.com/xiaq/persistent/hashmap.(*hashMap).Iterator(0xc0002d2c90, 0x1544940, 0xc0002d2c90)
	/opt/pkg/gopkg/src/github.com/elves/elvish/vendor/github.com/xiaq/persistent/hashmap/hashmap.go:65 +0x34
github.com/elves/elvish/eval/vals.Repr(0x1544940, 0xc0002d2c90, 0x8000000000000000, 0x20, 0x4)
	/opt/pkg/gopkg/src/github.com/elves/elvish/eval/vals/repr.go:57 +0x288
github.com/elves/elvish/eval.relayChanToFile(0xc0002ce120, 0xc000010010, 0x15a1074, 0x4, 0xc000430074)
	/opt/pkg/gopkg/src/github.com/elves/elvish/eval/std_ports.go:42 +0xec
created by github.com/elves/elvish/eval.newStdPorts
	/opt/pkg/gopkg/src/github.com/elves/elvish/eval/std_ports.go:26 +0xf9

Exception: elvish exited with 2
[tty], line 1: elvish

I get "Compilation error: variable $nil not found". What version and system are you running this on?

@doubleagent, you are probably using the stable version. It crashes on master.

Aha. This is not surprising. I have not implemented support for nil keys in the hash map module...
gharchive/issue
2019-06-11T17:18:46
2025-04-01T04:34:09.482726
{ "authors": [ "SolitudeSF", "doubleagent", "hanche", "xiaq" ], "repo": "elves/elvish", "url": "https://github.com/elves/elvish/issues/835", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
596261942
option parsing is too lenient

While responding to feedback on PR 937 I changed the math:is-inf command to accept a &sign option. I was surprised by two things:

1. I could pass invalid values to the option. In this instance the option should only allow integers (or perhaps float64). Passing an invalid number caused the value to be silently ignored and the default was used instead.
2. I could specify invalid options. This is true for other builtins that accept options; e.g., echo &sep=: &argle=bargle abc def silently ignores the unexpected &argle option.

Sounds like something worth spending some time to fix before the next release.
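The stricter behavior the issue asks for can be sketched language-agnostically: unknown option names raise an error, and values are checked against a declared type instead of silently falling back to the default. This is an illustrative Python sketch with hypothetical names, not Elvish's actual option machinery.

```python
# Illustrative sketch only: validate a map of &options against a spec,
# rejecting unknown names and values of the wrong type.

def validate_opts(opts, spec):
    # spec maps option name -> (expected_type, default_value)
    result = {name: default for name, (_, default) in spec.items()}
    for name, value in opts.items():
        if name not in spec:
            # instead of silently ignoring, surface the typo to the user
            raise ValueError(f"unknown option &{name}")
        expected, _ = spec[name]
        if not isinstance(value, expected):
            # instead of silently using the default, reject the bad value
            raise TypeError(f"option &{name} must be {expected.__name__}")
        result[name] = value
    return result
```

With this behavior, echo &argle=bargle would fail with "unknown option &argle" rather than being ignored.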
gharchive/issue
2020-04-08T02:19:56
2025-04-01T04:34:09.484745
{ "authors": [ "krader1961", "xiaq" ], "repo": "elves/elvish", "url": "https://github.com/elves/elvish/issues/958", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1922245084
Bug: Binary Data Not Parsed Correctly in WebSocket Message Event

I've encountered a bug concerning the handling of binary data sent over a WebSocket in the Elysia.js framework. When sending binary data such as a Blob, Uint8Array, or an ArrayBuffer to the server's WebSocket endpoint, the message event handler within the .ws method does not get triggered correctly. Instead, an error is thrown indicating z.charCodeAt is not a function, which suggests the framework might be attempting to parse the message as a string.

Steps to Reproduce:

1. Set up a minimal Elysia.js server with a WebSocket endpoint as shown below:

new Elysia()
  .get("/", () => sendScript(clientScript))
  .ws("/ws", {
    // does not get called
    message: (ws, message) => console.log(message, typeof message, message instanceof Buffer)
  })
  .listen(3000);

function clientScript() {
  const ws = new WebSocket("ws://localhost:3000/ws");
  // Send any type of binary data supported by the browser (Uint8Array, ArrayBuffer, Blob)
  const binaryData = new Uint8Array(1) || new ArrayBuffer(2) || new Blob([new Uint8Array(3)]);
  ws.onopen = () => ws.send(binaryData);
}

const sendScript = (script: Function) =>
  new Response(`<html><body><script>${script.toString().replace(/^.*?{\n|}$/g,"")}</script></body></html>`, {
    headers: { "content-type": "text/html" }
  });

2. Navigate to http://localhost:3000 in a web browser and observe the server console for errors and logs.

Expected Behavior: The websocket message event handler should be triggered, and the binary data should be logged to the server console.

Actual Behavior: An error is thrown on the server-side: z.charCodeAt is not a function. (In 'z.charCodeAt(0)', 'z.charCodeAt' is undefined)

Comparison to Bun.sh: When implementing a similar setup directly with the Bun.sh runtime (code snippet provided below), the binary data is handled correctly and logged to the server console as expected.

Bun.serve({
  fetch: (req, server) => server.upgrade(req) ? undefined : sendScript(clientScript),
  websocket: {
    // gets called and logs the buffer
    message: (ws, message) => console.log(message, typeof message, message instanceof Buffer)
  },
  port: 3000
});

function clientScript() {
  const ws = new WebSocket("ws://localhost:3000/ws");
  // Send any type of binary data supported by the browser (Uint8Array, ArrayBuffer, Blob)
  const binaryData = new Uint8Array(1) || new ArrayBuffer(2) || new Blob([new Uint8Array(3)]);
  ws.onopen = () => ws.send(binaryData);
}

const sendScript = (script: Function) =>
  new Response(`<html><body><script>${script.toString().replace(/^.*?{\n|}$/g,"")}</script></body></html>`, {
    headers: { "content-type": "text/html" }
  });

Environment:
Elysia.js version: 0.7.15
Bun runtime version: 1.0.2
OS: macOS 14.0

I've faced the same issue. I looked it up, and it seems like ws.parseMessage in https://github.com/elysiajs/elysia/blob/main/src/index.ts#L2372 incorrectly assumes that message, which is defined as any 🤔, is always a String, and calls charCodeAt without checking. I'll introduce a teeny-tiny fix PR soon.

Should be fixed in #269
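The root cause here generalizes beyond Elysia: a WebSocket message handler can receive either a text frame or a binary frame, and string-only operations like charCodeAt must run only on the former. The guard the fix introduces can be illustrated in Python with a hypothetical parser (this is not Elysia's actual code):

```python
import json

def parse_message(message):
    """Dispatch on frame type before attempting any string parsing.

    Binary frames (bytes-like) are passed through untouched; text frames
    are optionally decoded as JSON. The point is the type check up front:
    never call string-only operations on a possibly-binary payload.
    """
    if isinstance(message, (bytes, bytearray, memoryview)):
        return bytes(message)  # binary frame: hand the raw buffer to the app
    try:
        return json.loads(message)  # text frame that happens to be JSON
    except ValueError:
        return message  # plain text frame, return as-is
```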
gharchive/issue
2023-10-02T16:27:17
2025-04-01T04:34:09.492351
{ "authors": [ "SaltyAom", "itpcc", "tomeryp" ], "repo": "elysiajs/elysia", "url": "https://github.com/elysiajs/elysia/issues/247", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
248065875
Server Can i put this mod in my server? Yes
gharchive/issue
2017-08-04T17:15:26
2025-04-01T04:34:09.494555
{ "authors": [ "SirTroia", "unascribed" ], "repo": "elytra/DavincisVessels", "url": "https://github.com/elytra/DavincisVessels/issues/174", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1107261734
Deprecate NPM package

npm deprecate ember-cli-mocha "Use ember-mocha directly."

Increase visibility that this is deprecated. The README already has a deprecation notice.

https://github.com/ember-cli/ember-cli-mocha/pull/363

done ✅
gharchive/issue
2022-01-18T19:05:01
2025-04-01T04:34:09.643644
{ "authors": [ "Turbo87", "bmish" ], "repo": "ember-cli/ember-cli-mocha", "url": "https://github.com/ember-cli/ember-cli-mocha/issues/365", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
462311124
Route deactivate hook

Hey, just noticed that the route deactivate hook runs after a test has completed. In our case, we have the deactivate hook fire off a network request, which throws an error because the test server is already shut down. Is this expected behavior for how the test harness works?

A simple short-term solution is to not fire the network request if in a test environment, but this seems less than ideal. Thanks!

We're experiencing the same problem. We first upgraded ember-source to 3.16.0. Afterwards, we also upgraded ember-data to 3.16.0. We are now facing a bunch of failing tests, which fail when calling rollbackAttributes() from deactivate route hooks. The error says: Cannot read property 'removeCompletelyFromOwn' of undefined - this is something very deep within ember-data internals and does not happen when the application runs outside of the testing environment. Any help / feedback from the team is highly appreciated!

@arm1n For our app, we tracked this message down to a 404 for a belongsTo, where we were then deleting the parent record. I feel as though this shouldn't result in an error, though.
gharchive/issue
2019-06-29T15:11:00
2025-04-01T04:34:09.711895
{ "authors": [ "James1x0", "arm1n", "thec0keman" ], "repo": "emberjs/ember-qunit", "url": "https://github.com/emberjs/ember-qunit/issues/514", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
612344602
View Components and Code Refactoring based on #29 Thanks, great PR :)
gharchive/pull-request
2020-05-05T05:17:04
2025-04-01T04:34:09.785735
{ "authors": [ "ManojKiranA", "emchooo" ], "repo": "emchooo/mailness", "url": "https://github.com/emchooo/mailness/pull/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
578863311
Bollettino label

On TCI we have always used a Bollettino label: when it was applied to an issue, we enabled sending the news item to Telegram and Twitter automatically. What do you think? It worked well and made the social media work much easier.

Definitely to be done! Keep in mind, though, that if we enable Bollettino we need to write the issue titles so that they are readable in a tweet...

Right! @cristigalas, can you remind us of the maximum number of characters?

If we are thinking of Twitter, I'd say the maximum is 240, but we have to leave room for links and hashtags... in fact, we have to decide which hashtags to use. I'll open an issue.
gharchive/issue
2020-03-10T21:09:19
2025-04-01T04:34:09.818892
{ "authors": [ "cristigalas", "favoeva", "iltempe" ], "repo": "emergenzeHack/covid19italia", "url": "https://github.com/emergenzeHack/covid19italia/issues/52", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
386521498
UPX Support? Hey, thought this was a pretty cool project. Would you be interested in adding UPX support? https://github.com/uPlexa/uplexa I think I'd accept an extension that doesn't mess with the base functionality, which is Monero support. Otherwise feel free to fork this project if you find it useful to start something else.
gharchive/issue
2018-12-02T03:09:07
2025-04-01T04:34:09.839098
{ "authors": [ "QuantumLeaper", "emesik" ], "repo": "emesik/monero-python", "url": "https://github.com/emesik/monero-python/issues/39", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
69553171
Rust beta compatibility

A few tweaks to get the library building on 1.0.0-beta.2. And a slight adjustment to the linker flags to get it to link on OS X (verified it still builds on Linux with Travis CI).

Thank you! This fix has been released in the latest version of rust-uchardet on crates.io. My apologies for the slow response—I was busy with a non-Rust-related project elsewhere!
gharchive/pull-request
2015-04-20T10:09:21
2025-04-01T04:34:09.847900
{ "authors": [ "emk", "wezm" ], "repo": "emk/rust-uchardet", "url": "https://github.com/emk/rust-uchardet/pull/2", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
2254960717
Support Convolutional Neural Networks

It would be great to support some basic CNNs, in a manner that is just as easy to install and use as the tree-based ensembles that we have now. That means that there should be a core module that supports CNN models, but that the particular model/weights can be loaded at runtime.

CNNs would enable many computer vision tasks, such as Image Classification and Object Detection. But they are also frequently used for audio tasks, using spectrograms as the input. Thus it is also a relevant part of, or complement to, #6.

TensorFlow Lite for Microcontrollers is one of the most established here. But I find it to be quite large - just the interpreter takes 16 kB+. And I believe that it being in C++ may make it extra challenging to support as a dynamic native module (the mpy-ld linker is quite limited). It is also already available in OpenMV, so those who want to use it with MicroPython can already access it from there.

Initial module code merged in #8. Still need to test on device and provide some documentation / examples.

Example code now at https://github.com/emlearn/emlearn-micropython/tree/master/examples/mnist_cnn

The MNIST CNN has been tested on ESP32 and looks to work. So we can close this as implemented :)
gharchive/issue
2024-04-21T08:14:40
2025-04-01T04:34:09.850835
{ "authors": [ "jonnor" ], "repo": "emlearn/emlearn-micropython", "url": "https://github.com/emlearn/emlearn-micropython/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1217272388
Bug in post to AuthUser model in version 2.4.x

Other bugs are happening with version 2.4.x, when I create a new user. I overrode the email field so that it is not unique, to avoid the problem [1]:

email = Field(length=255)

class User(AuthUser):
    tablename = 'auth_users'
    belongs_to({'institucion': 'Institucion'})
    avatar = Field.upload(autodelete=True)
    email = Field(length=255)

When I make the post:

curl -X POST http://localhost:8080/main/api/v1/admin/usuarios -H 'Content-Type: application/json' -d '{"email": "juan@mail.com", "first_name": "Juan", "institucion": 2, "last_name": "Lopez", "password": "123456"}'

this error occurs:

ERROR in handlers [/venv/lib/python3.8/site-packages/emmett/asgi/handlers.py:340]: Application exception:
Traceback (most recent call last):
  File "/venv/lib/python3.8/site-packages/emmett/asgi/handlers.py", line 325, in dynamic_handler
    http = await self.router.dispatch(ctx.request, ctx.response)
  File "/venv/lib/python3.8/site-packages/emmett/routing/router.py", line 249, in dispatch
    return await match.dispatch(reqargs, response)
  File "/venv/lib/python3.8/site-packages/emmett/routing/dispatchers.py", line 72, in dispatch
    rv = self.response_builder(await self.f(**reqargs), response)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 328, in flow
    output = await pipe_method(f, **kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 234, in pipe
    return await next_pipe(**kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 328, in flow
    output = await pipe_method(f, **kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/tools/auth/apis.py", line 277, in pipe
    return await next_pipe(**kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 369, in flow
    return await pipe_method(f, **kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/tools/service.py", line 28, in pipe_request
    return self.encoder(await next_pipe(**kwargs))
  File "/venv/lib/python3.8/site-packages/emmett/serializers.py", line 55, in _json_default
    raise TypeError
TypeError
INFO: 127.0.0.1:59876 - "POST /main/api/v1/admin/usuarios HTTP/1.1" 500 Internal Server Error

José

[1] https://github.com/emmett-framework/emmett/issues/428

With orjson installed:

ERROR in handlers [/venv/lib/python3.8/site-packages/emmett/asgi/handlers.py:340]: Application exception:
Traceback (most recent call last):
  File "/venv/lib/python3.8/site-packages/emmett/asgi/handlers.py", line 325, in dynamic_handler
    http = await self.router.dispatch(ctx.request, ctx.response)
  File "/venv/lib/python3.8/site-packages/emmett/routing/router.py", line 249, in dispatch
    return await match.dispatch(reqargs, response)
  File "/venv/lib/python3.8/site-packages/emmett/routing/dispatchers.py", line 72, in dispatch
    rv = self.response_builder(await self.f(**reqargs), response)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 328, in flow
    output = await pipe_method(f, **kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 234, in pipe
    return await next_pipe(**kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 328, in flow
    output = await pipe_method(f, **kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/tools/auth/apis.py", line 277, in pipe
    return await next_pipe(**kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/pipeline.py", line 369, in flow
    return await pipe_method(f, **kwargs)
  File "/venv/lib/python3.8/site-packages/emmett/tools/service.py", line 28, in pipe_request
    return self.encoder(await next_pipe(**kwargs))
TypeError: Type is not JSON serializable: LazyCrypt

@josejachuf that's not a bug, it's intended. Password fields are not JSON serializable. I suggest you filter what you serialize; with REST extensions you might exclude those fields with the rest_rw dictionary in your model or a custom Serializer.

Thanks @gi0baro

This works fine:

rest_rw = {
    'institucion': True,
    'password': (False, True)
}

I thought it was a bug, since in 2.3.1 it worked well. The 2.4.x seems to be stricter in many things.
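The underlying rule here — secret fields such as password hashes must never reach the JSON encoder — can be expressed framework-agnostically. This is a generic Python sketch of stripping write-only fields before encoding; the field set and helper names are hypothetical, and this is not Emmett's actual Serializer API.

```python
import json

# Hypothetical set of write-only fields to hide from API responses;
# adjust to whatever your auth model actually stores.
WRITE_ONLY_FIELDS = {"password"}

def serializable_view(record, write_only=WRITE_ONLY_FIELDS):
    # Keep only fields that are safe to expose in a response body.
    return {k: v for k, v in record.items() if k not in write_only}

def encode_record(record):
    # Encode the filtered view; the raw record may contain values
    # (like a lazy password hash) that no JSON encoder can handle.
    return json.dumps(serializable_view(record), sort_keys=True)
```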
gharchive/issue
2022-04-27T12:18:18
2025-04-01T04:34:09.875756
{ "authors": [ "gi0baro", "josejachuf" ], "repo": "emmett-framework/emmett", "url": "https://github.com/emmett-framework/emmett/issues/431", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1601322938
0.123.11 NOT working whilst 0.123.8 working

For all my games, I have been using 0.123.8. All working great without issue, DLAA+F preset etc. Updated to 0.123.11 for all my games. Same setup, same ini. However, it just does not work with this version. In the log, for some games it's blank; for other games it cannot find the dlss.dll. All .ini files are the same. Even when I only change the DLSSTweaks .dll but keep the ini files, it's not working. Using 0.123.8's dll makes it work again. So it's definitely a .dll problem. What I think is that it just doesn't load the DLSSTweaks DLL into the game. I'm not sure why, though, whilst it works great with the previous version.

Games that were working with 0.123.8 but not working with 0.123.11:
- Ready or Not
- Hitman 3
- Red Dead Redemption 2

Tested both the 3.1.1 Public and 3.1.1 DEV version DLL.

I'm using 516.59. Should I update my driver?

Aha yeah that's probably it, the user I mentioned was actually using the exact same version and had the same issue with it not applying; after they updated to 528.49 it worked fine.

Just updated my driver to 528.49 and now it's working fine. Thank you!
gharchive/issue
2023-02-27T14:41:14
2025-04-01T04:34:09.882247
{ "authors": [ "emoose", "erdo4" ], "repo": "emoose/DLSSTweaks", "url": "https://github.com/emoose/DLSSTweaks/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
214146721
Can't install latest emqttd 2.1.0

Environment
OS: MacOS Sierra 10.12.3
Erlang/OTP: 19
EMQ: 2.1.0

Description
A description of the issue

Please install or compile a rebar in your MacOS. For example:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
brew install rebar

Yeah. My rebar was broken, reinstalling fixed the issue.
gharchive/issue
2017-03-14T17:23:34
2025-04-01T04:34:09.939371
{ "authors": [ "grutabow", "thechamp" ], "repo": "emqtt/emqttd", "url": "https://github.com/emqtt/emqttd/issues/947", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1896508922
[Bug] unable to decode JSON in output

The messages are not decoded as JSON in the output window.

Environment
macOS Ventura 13.5.2
Chrome 116.0.5845.187
MQTTX Web version 1.9.5

In the screenshot: the messages in plaintext (correct), in Base64 (correct), JSON (wrong), Hex (correct). The same messages in the Mac client are displayed with the correct JSON format. What's wrong? Thank you

Fixes #1418. This bug has been addressed in this pull request and will be included in our next release, Version 1.9.6. Thank you for bringing this to our attention.

Thank you for being so patient! We have addressed and resolved this issue in the https://github.com/emqx/MQTTX/releases/tag/v1.9.6

It helps us continually improve our product. If you encounter any issues while using the new version, please do not hesitate to let us know. Thank you once again!
gharchive/issue
2023-09-14T12:49:36
2025-04-01T04:34:09.943157
{ "authors": [ "ni00", "vellanix", "ysfscream" ], "repo": "emqx/MQTTX", "url": "https://github.com/emqx/MQTTX/issues/1418", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
767808824
Require implementation of syscall 2 (sys_open)

Required to allow listening socket within a Keep.

Dec 15 16:40:55 raption enarx-keepldr[142419]: init_gdt
Dec 15 16:40:55 raption enarx-keepldr[142419]: mmap(0x0, 0x890, 0x3, 0x22, 0xffffffffffffffff, 0x0)
Dec 15 16:40:55 raption enarx-keepldr[142419]: SC> mmap(0, 2192, …) = 0x00007f2f2c0ae000
Dec 15 16:40:55 raption enarx-keepldr[142419]: arch_prctl(0x1002, 0x7f2f2c0ae7a0)
Dec 15 16:40:55 raption enarx-keepldr[142419]: SC> arch_prctl(ARCH_SET_FS, 0x7f2f2c0ae7a0) = 0
Dec 15 16:40:55 raption enarx-keepldr[142419]: set_tid_address(0x7f2f2c0ad0a0)
Dec 15 16:40:55 raption enarx-keepldr[142419]: poll(0x7ff0b7177d00, 0x3, 0x0)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigaction(0xd, 0x7ff0b7177b40, 0x7ff0b7177b60, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigaction(0xb, 0x0, 0x7ff0b7177ca0, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigprocmask(0x1, 0x7ff0b7177ca0, 0x0, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigaction(0xb, 0x7ff0b7177c80, 0x0, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigaction(0x7, 0x0, 0x7ff0b7177ca0, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigaction(0x7, 0x7ff0b7177c80, 0x0, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: sigaltstack(0x0, 0x7ff0b7177cc0)
Dec 15 16:40:55 raption enarx-keepldr[142419]: brk(0x0)
Dec 15 16:40:55 raption enarx-keepldr[142419]: SC> brk(0x0000000000000000) = 0x555522f4f000
Dec 15 16:40:55 raption enarx-keepldr[142419]: brk(0x555522f50000)
Dec 15 16:40:55 raption enarx-keepldr[142419]: SC> brk(0x0000555522f50000) = 0x555522f50000
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigprocmask(0x0, 0x7f2f2bbafac0, 0x7ff0b7177b00, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: rt_sigprocmask(0x2, 0x7ff0b7177b00, 0x0, 0x8)
Dec 15 16:40:55 raption enarx-keepldr[142419]: unsupported syscall: 2
Dec 15 16:40:55 raption enarx-keepldr[142419]: unsupported syscall: 204
Dec 15 16:40:55 raption enarx-keepldr[142419]: unsupported syscall: 204
Dec 15 16:40:55 raption enarx-keepldr[142419]: unsupported syscall: 291
Dec 15 16:40:55 raption enarx-keepldr[142419]: unsupported syscall: 213

This fails when running in a KVM keep, but is fine in a Nil Keep: https://github.com/MikeCamel/enarx-wasmldr/blob/https-wasmldr-exp/src/main.rs

I'm not sure we want to support open(). We may need another alternative.

LOL!

[pid 554356] open("/home/harald/git/enarx/enarx-wasmldr/target/x86_64-unknown-linux-musl/release/build/openssl-sys-240c0b512c71fad7/out/openssl-build/install/ssl/openssl.cnf", O_RDONLY) = -1 ENOENT (No such file or directory)

@haraldh - could you explain why you've closed this?

@MikeCamel we don't need open()

So as long as the others are implemented, it should no longer error out? I'll hold you to that. :-)
gharchive/issue
2020-12-15T16:49:34
2025-04-01T04:34:10.009548
{ "authors": [ "MikeCamel", "haraldh", "npmccallum" ], "repo": "enarx/enarx-keepldr", "url": "https://github.com/enarx/enarx-keepldr/issues/257", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1703829300
Better UX for Transaction History Here you go: https://www.figma.com/file/Tgg1gbPcAIiHgogxhC1GTZ/Transaction-History?type=design&node-id=0-1&t=XdftRTr6FOnJvUeD-0 @azackmatoff have a look and ask questions if things are unclear. Created subtasks for the 2 pages defined in Figma
gharchive/issue
2023-05-10T12:34:16
2025-04-01T04:34:10.119091
{ "authors": [ "Malixxa", "clangenb" ], "repo": "encointer/encointer-wallet-flutter", "url": "https://github.com/encointer/encointer-wallet-flutter/issues/1189", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1299442151
Add Solr See https://solr.apache.org/downloads.html#about-versions-and-support Changed name to "Apache Solr", matching Tomcaat (Should probably change for log4j as well). Looked into automating this, but the git repository doesn't have the tags we want: https://github.com/apache/solr/. This is the complete list of releases from Git: releases/lucene-solr/3.1 releases/lucene-solr/3.2 releases/lucene-solr/3.3 releases/lucene-solr/3.4.0 releases/lucene-solr/3.5.0 releases/lucene-solr/3.6.0 releases/lucene-solr/3.6.1 releases/lucene-solr/3.6.2 releases/lucene-solr/4.0.0 releases/lucene-solr/4.0.0-alpha releases/lucene-solr/4.0.0-beta releases/lucene-solr/4.1.0 releases/lucene-solr/4.10.0 releases/lucene-solr/4.10.1 releases/lucene-solr/4.10.2 releases/lucene-solr/4.10.3 releases/lucene-solr/4.10.4 releases/lucene-solr/4.2.0 releases/lucene-solr/4.2.1 releases/lucene-solr/4.3.0 releases/lucene-solr/4.3.1 releases/lucene-solr/4.4.0 releases/lucene-solr/4.5.0 releases/lucene-solr/4.5.1 releases/lucene-solr/4.6.0 releases/lucene-solr/4.6.1 releases/lucene-solr/4.7.0 releases/lucene-solr/4.7.1 releases/lucene-solr/4.7.2 releases/lucene-solr/4.8.0 releases/lucene-solr/4.8.1 releases/lucene-solr/4.9.0 releases/lucene-solr/4.9.1 releases/lucene-solr/5.0.0 releases/lucene-solr/5.1.0 releases/lucene-solr/5.2.0 releases/lucene-solr/5.2.1 releases/lucene-solr/5.3.0 releases/lucene-solr/5.3.1 releases/lucene-solr/5.3.2 releases/lucene-solr/5.4.0 releases/lucene-solr/5.4.1 releases/lucene-solr/5.5.0 releases/lucene-solr/5.5.1 releases/lucene-solr/5.5.2 releases/lucene-solr/5.5.3 releases/lucene-solr/5.5.4 releases/lucene-solr/5.5.5 releases/lucene-solr/6.0.0 releases/lucene-solr/6.0.1 releases/lucene-solr/6.1.0 releases/lucene-solr/6.2.0 releases/lucene-solr/6.2.1 releases/lucene-solr/6.3.0 releases/lucene-solr/6.4.0 releases/lucene-solr/6.4.1 releases/lucene-solr/6.4.2 releases/lucene-solr/6.5.0 releases/lucene-solr/6.5.1 releases/lucene-solr/6.6.0 releases/lucene-solr/6.6.1 
releases/lucene-solr/6.6.2 releases/lucene-solr/6.6.3 releases/lucene-solr/6.6.4 releases/lucene-solr/6.6.5 releases/lucene-solr/6.6.6 releases/lucene-solr/7.0.0 releases/lucene-solr/7.0.1 releases/lucene-solr/7.1.0 releases/lucene-solr/7.2.0 releases/lucene-solr/7.2.1 releases/lucene-solr/7.3.0 releases/lucene-solr/7.3.1 releases/lucene-solr/7.4.0 releases/lucene-solr/7.5.0 releases/lucene-solr/7.6.0 releases/lucene-solr/7.7.0 releases/lucene-solr/7.7.1 releases/lucene-solr/7.7.2 releases/lucene-solr/7.7.3 releases/lucene-solr/8.0.0 releases/lucene-solr/8.1.0 releases/lucene-solr/8.1.1 releases/lucene-solr/8.2.0 releases/lucene-solr/8.3.0 releases/lucene-solr/8.3.1 releases/lucene-solr/8.4.0 releases/lucene-solr/8.4.1 releases/lucene-solr/8.5.0 releases/lucene-solr/8.5.1 releases/lucene-solr/8.5.2 releases/lucene-solr/8.6.0 releases/lucene-solr/8.6.1 releases/lucene-solr/8.6.2 releases/lucene-solr/8.6.3 releases/lucene-solr/8.7.0 releases/lucene-solr/8.8.0 releases/lucene-solr/8.8.1 releases/lucene/1.0.1 releases/lucene/1.2 releases/lucene/1.2-rc1 releases/lucene/1.2-rc2 releases/lucene/1.2-rc3 releases/lucene/1.2-rc4 releases/lucene/1.2-rc5 releases/lucene/1.3 releases/lucene/1.3-rc1 releases/lucene/1.3-rc2 releases/lucene/1.3-rc3 releases/lucene/1.4 releases/lucene/1.4-rc1 releases/lucene/1.4-rc2 releases/lucene/1.4-rc3 releases/lucene/1.4.1 releases/lucene/1.4.2 releases/lucene/1.4.3 releases/lucene/1.9 releases/lucene/1.9-rc1 releases/lucene/1.9.1 releases/lucene/2.0.0 releases/lucene/2.1.0 releases/lucene/2.2.0 releases/lucene/2.3.0 releases/lucene/2.3.1 releases/lucene/2.3.2 releases/lucene/2.4.0 releases/lucene/2.4.1 releases/lucene/2.9.0 releases/lucene/2.9.1 releases/lucene/2.9.2 releases/lucene/3.0.0 releases/lucene/3.0.1 releases/solr/1.1.0 releases/solr/1.2.0 releases/solr/1.3.0 releases/solr/1.4.0 releases/solr/9.0.0 Notably, the 8.11 series (the current one) is missing. It seems to be directly published on https://dlcdn.apache.org/lucene/solr/. 
Could automate via Docker, but the releases are lagging there by a few days. Will note on the talk page, and update in a future PR
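The tag-scraping approach discussed above could be sketched roughly as follows. This is a hypothetical helper (the function name and filtering rules are illustrative); the tag prefixes come from the list quoted in the comment, and the sketch also shows why automation fails here: the 8.11.x tags simply are not in the repository to be parsed.

```python
import re

def solr_versions(tags):
    """Extract Solr version numbers from git tag names.

    Accepts tags like 'releases/solr/9.0.0' and
    'releases/lucene-solr/8.8.1'; skips Lucene-only tags and
    alpha/beta/rc pre-releases.
    """
    versions = []
    for tag in tags:
        m = re.fullmatch(r"releases/(?:lucene-)?solr/(\d+(?:\.\d+)*)", tag)
        if m:
            versions.append(m.group(1))
    return versions

tags = [
    "releases/solr/9.0.0",
    "releases/lucene-solr/8.8.1",
    "releases/lucene-solr/4.0.0-alpha",  # pre-release, ignored
    "releases/lucene/3.0.1",             # Lucene-only, ignored
]
print(solr_versions(tags))  # ['9.0.0', '8.8.1']
```

Note that no amount of parsing recovers 8.11.x, since those releases are published directly to the download CDN rather than tagged in Git.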
gharchive/pull-request
2022-07-08T21:03:17
2025-04-01T04:34:10.163978
{ "authors": [ "Rudloff", "captn3m0" ], "repo": "endoflife-date/endoflife.date", "url": "https://github.com/endoflife-date/endoflife.date/pull/1389", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1089115542
[django] cleanup and fix EOL dates
- EOL dates fixed
- 3.1.14 version added
- some unneeded flags removed
- trivial cosmetic changes
Source: https://www.djangoproject.com/download/
gharchive/pull-request
2021-12-27T09:48:46
2025-04-01T04:34:10.166074
{ "authors": [ "usta" ], "repo": "endoflife-date/endoflife.date", "url": "https://github.com/endoflife-date/endoflife.date/pull/678", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2441057677
The word "sonucunda" Hello, first of all, thanks for this class. We are using it in one of our projects. We are also making some additions, and I will share them here once they are finished. One word gave us a surprising headache: the word "sonucunda" is converted to "s 10 üçunda". It is an interesting case; I believe the problem occurs whenever "uc" comes right after "on" inside a word. Solution: change the line "üç": ["[üÜ]+[çÇcC]+", "[uU]+[cC]+"] into ["[üÜ]+[çÇcC]+"]
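The proposed one-line fix can be sanity-checked directly against the word from the report. This is a minimal sketch using Python's `re` module; the library itself may use a different regex engine, but the two patterns are quoted verbatim from the issue.

```python
import re

word = "sonucunda"  # "as a result of"; contains the letter pair "uc"

# The second alternative for "üç" (three) also accepts a plain "uc",
# so it fires inside unrelated words such as "sonucunda".
loose = r"[uU]+[cC]+"
print(bool(re.search(loose, word)))   # True

# The tightened pattern from the fix requires the dotted Turkish "ü",
# so it no longer matches inside "sonucunda" but still matches "üç".
strict = r"[üÜ]+[çÇcC]+"
print(bool(re.search(strict, word)))  # False
print(bool(re.search(strict, "üç")))  # True
```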
gharchive/issue
2024-07-31T22:38:04
2025-04-01T04:34:10.167946
{ "authors": [ "myenen" ], "repo": "endrcn/word2number_turkish", "url": "https://github.com/endrcn/word2number_turkish/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
198655057
Sending a message might throw a TransferException It seems that there is no error handling for the exceptions that Guzzle may throw.

public function sendMessages(array $messages, array $options = [])
{
    …
    $client = new GuzzleClient();
    $adapter = new GuzzleAdapter($client);
    $request = new Request('POST', $this->baseUrl, ['Content-Type' => 'application/json'], json_encode($json));
    $response = $adapter->sendRequest($request);
    …
}

It might be useful for the consumer of this bundle to not have to worry about those. A possible solution is to throw a generic error that the user can catch. What do you think? Hi @TFarla, fair point. I agree a generic exception would be more future-proof and user-friendly. I'll let you know when it is added. Exceptions that occur when performing the request are now handled. As not only Guzzle exceptions occur (there was a use case where I received a Http\Client\Exception\HttpException), I catch every possible exception to make sure everything results in a RequestException.
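The fix described in the last comment (catch every transport-level exception and rethrow a single library exception) is a common wrapping pattern. Here is an illustrative Python sketch of that pattern; the actual change is in PHP, and the names below are hypothetical stand-ins, not the CmSms API.

```python
class RequestException(Exception):
    """The single exception type the library guarantees to its consumers."""

def send_with_wrapped_errors(do_request):
    # Catch *every* transport exception, not just one HTTP client's,
    # so callers only ever have to handle RequestException.
    try:
        return do_request()
    except Exception as exc:
        raise RequestException(str(exc)) from exc

def flaky_transport():
    raise TimeoutError("connection timed out")

try:
    send_with_wrapped_errors(flaky_transport)
except RequestException as exc:
    print(type(exc).__name__)  # RequestException
```

The original cause stays reachable via exception chaining, so nothing is lost for debugging while the public contract stays a single type.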
gharchive/issue
2017-01-04T08:38:08
2025-04-01T04:34:10.170311
{ "authors": [ "TFarla", "endroid" ], "repo": "endroid/CmSms", "url": "https://github.com/endroid/CmSms/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
805336562
Create README In GitLab by @arcln on May 13, 2019, 23:19 In GitLab by @arcln on May 15, 2019, 15:30 mentioned in commit ffbf981d3cca7161213c25d912364c8e1f8b6353 In GitLab by @arcln on May 15, 2019, 15:31 mentioned in commit 384c3a6b0129a9a72d99a7033937ef6b26f6a4b9 In GitLab by @arcln on May 15, 2019, 15:49 closed via merge request !7 In GitLab by @arcln on May 15, 2019, 15:49 mentioned in commit bd21233e21e14a2f46b01be23fb44a4f685c8f66 In GitLab by @arcln on May 15, 2019, 15:49 closed via commit 384c3a6b0129a9a72d99a7033937ef6b26f6a4b9
gharchive/issue
2021-02-10T09:30:10
2025-04-01T04:34:10.184770
{ "authors": [ "paullaffitte" ], "repo": "enix/dothill-csi", "url": "https://github.com/enix/dothill-csi/issues/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2642841596
🛑 Dipuravanarentals is down In eb98846, Dipuravanarentals (https://dipuravanarentals.com) was down: HTTP code: 500 Response time: 570 ms Resolved: Dipuravanarentals is back up in ab7c1cd after 10 minutes.
gharchive/issue
2024-11-08T04:45:12
2025-04-01T04:34:10.190235
{ "authors": [ "enlivenonedrive" ], "repo": "enlivenonedrive/upptime", "url": "https://github.com/enlivenonedrive/upptime/issues/371", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
814605572
heroku locally and remotely [x] Heroku locally and remotely, issue updated #77 I have added a condition based on environment variables, which works like this: First it checks for the DB_HEROKU_POSTGRES variable, which is the Heroku PostgreSQL DB locally. If it is not found, it checks for db.sqlite3. If that does not exist, it checks for the DB_HEROKU variable, which is the Heroku DB on AWS remotely. It is working exactly the same as the video of the person you shared with me. Thank you, Best Regards Umar Ahmed :+1: Thanks, @UmarGit! I pulled your changes and I can't figure out where I insert the Postgres URI. For example, lines 97-98 read:

if DB_HEROKU_POSTGRES != 'None':
    DATABASES = {'default': dj_database_url.config(env="DB_HEROKU_POSTGRES", default=DB_HEROKU_POSTGRES, conn_max_age=600)}

For the config method parameters, for default I entered 'postgres://jckmeibmxypuvn:11033b956dd9cc6011e8efb7d23d56bb277bef7e9634a31ae0f0b304f34e9667@ec2-3-231-241-17.compute-1.amazonaws.com:5432/d9eds0cfpna1li' (with quotes), and it appears that sqlite continues to be in play, when I am expecting Django to serve the data from the remote Postgres silo locally when I am at http://127.0.0.1:8000/tarot_key/4 in my web browser.
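The fallback chain described above can be sketched as plain Python. The variable names DB_HEROKU_POSTGRES and DB_HEROKU come from the PR text; the function and its return values are illustrative, not the actual settings.py code.

```python
import os

def pick_database(sqlite_exists):
    """Return which database backend to use, in the PR's priority order."""
    if os.environ.get("DB_HEROKU_POSTGRES"):
        return "heroku-postgres-local"    # Heroku Postgres, used locally
    if sqlite_exists:
        return "sqlite3"                  # local db.sqlite3 file
    if os.environ.get("DB_HEROKU"):
        return "heroku-remote"            # Heroku DB on AWS, remote
    raise RuntimeError("no database configured")

os.environ.pop("DB_HEROKU_POSTGRES", None)
os.environ["DB_HEROKU"] = "postgres://example"
print(pick_database(sqlite_exists=False))  # heroku-remote

os.environ["DB_HEROKU_POSTGRES"] = "postgres://example-local"
print(pick_database(sqlite_exists=True))   # heroku-postgres-local
```

One consequence of this ordering, relevant to the reviewer's question below: if db.sqlite3 exists and DB_HEROKU_POSTGRES is unset, sqlite wins even when a remote Postgres URI is available.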
gharchive/pull-request
2021-02-23T16:07:27
2025-04-01T04:34:10.204901
{ "authors": [ "UmarGit", "enoren5" ], "repo": "enoren5/tarot_juicer", "url": "https://github.com/enoren5/tarot_juicer/pull/79", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1734210215
Typing set in CB and accepting it quickly is putting at or select_columns in the node Discord username No response What type of issue is this? Intermittent – Occurring irregularly Is this issue blocking you from using Enso? [ ] Yes, I can't use Enso because of this issue. Is this a regression? [ ] Yes, previous version of Enso did not have this issue. What issue are you facing? https://github.com/enso-org/enso/assets/12892578/dd05cf45-f581-40e9-a85a-fad830676309 Expected behaviour The node should contain the method that was typed. How can we reproduce it? open CB connected to a table (occurs more repeatably on bigger workflows like the Colorado COVID example) type set and press enter quickly Screenshots or screencasts No response Logs No response Enso Version nightly 31.05 Browser or standalone distribution Standalone distribution (local project) Browser Version or standalone distribution standalone Operating System MacOS Operating System Version No response Hardware you are using No response Even after #6875 it is still reproducible; however, I'm not able to reproduce putting at. select_columns is put only if I had a node returning a Table selected when opening the CB. The strange thing is that, when typing slowly, select_columns is selected at any point. Perhaps it is some race between "rearrange entries", "accept selected entry", and "select best match".
gharchive/issue
2023-05-31T13:17:11
2025-04-01T04:34:10.226920
{ "authors": [ "farmaazon", "sylwiabr" ], "repo": "enso-org/enso", "url": "https://github.com/enso-org/enso/issues/6908", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1697274733
Fix issues with missing sourcemaps Pull Request Description Fixes #6559: DevTools failed to load source map: Could not load content for http://localhost:8080/preload.cjs.map: Load canceled due to load timeout DevTools failed to load source map: Could not load content for http://localhost:8080/pkg.js.map: Load canceled due to load timeout Important Notes None Checklist Please ensure that the following checklist has been satisfied before submitting the PR: [x] The documentation has been updated, if necessary. [x] Screenshots/screencasts have been attached, if there are any visual changes. For interactive or animated visual changes, a screencast is preferred. [x] All code follows the Scala, Java, and Rust style guides. In case you are using a language not listed above, follow the Rust style guide. All code has been tested: [x] Unit tests have been written where possible. [x] If GUI codebase was changed, the GUI was tested when built using ./run ide build. oh @mwu-tow you might want to check that this fixes the issue. note that the current fix for preload.cjs puts the sourcemap inline inside the js file, which will probably slow down startup, but hopefully shouldn't be noticeable @somebody1234 Why does the preload map need to be inline, while we can ship pkg.js.map? We've been trying very aggressively to keep the size down, so increasing the core download size might be somewhat iffy. alternatively i can just take the preload.cjs stuff out of this PR and create a new one if i find a better solution @somebody1234 we don't serve preload.cjs - in fact i can't even see it in devtools. i can do some further research into whether we can get it working, but note that preload.cjs.map does exist beside preload.cjs, but for some reason it's trying to fetch /preload.cjs.map when preload.cjs isn't even served by the server The preload script is loaded by Electron before loading the actual site; it is shipped and served. 
I think it does not show up because the server does not serve files from the root of the resources but only from the assets subdirectory. See the resources in the Electron bundle: 🟢 Did QA, the issue is fixed. Thanks for taking care of it!
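On the size concern raised in this thread: an inline source map is base64-encoded into the sourceMappingURL comment at the end of the bundle, so the entire map ships inside the .js file, while the external form is only a short pointer to a sibling .map file. A rough illustration (the map content below is made up):

```python
import base64

# Fake source map payload, repeated to simulate a realistically
# sized map relative to its script.
source_map = b'{"version":3,"sources":["preload.ts"],"mappings":"AAAA"}' * 100

# Inline form: the whole map rides along in the served file.
inline_comment = (
    b"//# sourceMappingURL=data:application/json;base64,"
    + base64.b64encode(source_map)
)

# External form: just a one-line reference; the map is fetched lazily.
external_comment = b"//# sourceMappingURL=preload.cjs.map"

print(len(inline_comment) > len(external_comment))  # True
```

This is why inlining fixes the "could not load content" timeout (there is nothing left to fetch) at the cost of a larger core download.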
gharchive/pull-request
2023-05-05T09:10:42
2025-04-01T04:34:10.234497
{ "authors": [ "mwu-tow", "somebody1234" ], "repo": "enso-org/enso", "url": "https://github.com/enso-org/enso/pull/6572", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1436325731
[contrib] entgql generates invalid schema while using field.Bytes [x] The issue is present in the latest release. [x] I have searched the issues of this repository and believe that this is not a duplicate. Current Behavior 😯 Define an ent schema with a bytes value field.Bytes("example") and then integrate the entgql extension to generate the graphql schema. It generates a gql type of []bytes! Expected Behavior 🤔 It should generate a valid gql schema. Maybe String! Your Environment 🌎 Tech Version Go 1.19.? Ent 0.11.? Database Cockroach Driver https://github.com/lib/pq Hello, thank you for opening the issue. For the []byte field, you should use entgql.Type('String') to let EntGQL know which type to use for the field. You can use any scalar type here, to support encoding/decoding from the correct binary format. If you still have a question, please let me know. Thanks. I just noticed that right now and was about to update the issue. Thanks for the support, @giautm. Closing, but feel free to reopen it if you still need help.
gharchive/issue
2022-11-04T16:18:49
2025-04-01T04:34:10.240303
{ "authors": [ "a8m", "giautm", "nicolasparada" ], "repo": "ent/ent", "url": "https://github.com/ent/ent/issues/3065", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1650870187
Provide an option to disable the RETURNING clause [x] I have searched the issues of this repository and believe that this is not a duplicate. Summary 💡 I am working on dqlite with entgo. It seems everything works right except RETURNING. For some reason, it isn't implemented properly in dqlite. After reading some code, I noticed this clause is also not supported by MySQL, and thus entgo already has some alternatives for RETURNING, such as https://github.com/ent/ent/blob/27bc0470ebd3ddff01c7eaac6e5dbabbd6880f53/dialect/sql/sqlgraph/graph.go#L1871-L1883 But it seems the behavior only changes based on the dialect type. I wonder whether it is possible to add an option to disable RETURNING even on other dialects. Motivation 🔦 Add an option to make ent behave as if RETURNING were not supported. Thanks for proposing this, @Zxilly. The RETURNING clause was added to SQLite more than two years ago. Unfortunately, we don't plan to support additional dialects in Ent at this stage, but we aim to make it more extensible in the future. My suggestion to you is to implement the dialect.Driver for dqlite and patch this logic outside Ent. Closing, but please feel free to join our Discord community in case you need more help with this 😎
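The maintainer's suggestion (wrap the driver for dqlite and patch the RETURNING logic outside the library) can be illustrated abstractly. This Python sketch only shows the query-rewriting idea; the real wrapper would be Go implementing ent's driver interface, and it would also need another mechanism to recover the generated values that RETURNING would have provided (e.g. the driver's last-insert-id).

```python
import re

def strip_returning(stmt):
    """Remove a trailing RETURNING clause from a single SQL statement.

    Naive sketch: assumes RETURNING appears at most once, at the end,
    outside of any string literal.
    """
    return re.sub(r"\s+RETURNING\s+.+$", "", stmt, flags=re.IGNORECASE)

sql = "INSERT INTO users (name) VALUES ('a') RETURNING id"
print(strip_returning(sql))  # INSERT INTO users (name) VALUES ('a')

# Statements without the clause pass through unchanged.
print(strip_returning("DELETE FROM users WHERE id = 1"))
```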
gharchive/issue
2023-04-02T09:07:53
2025-04-01T04:34:10.244862
{ "authors": [ "Zxilly", "a8m" ], "repo": "ent/ent", "url": "https://github.com/ent/ent/issues/3430", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1413498427
adding protocol to final stream intel Risk Level: low Testing: unit tests Docs Changes: n/a Release Notes: yes Part of https://github.com/envoyproxy/envoy-mobile/issues/1594 cc @colibie /retest
gharchive/pull-request
2022-10-18T16:14:55
2025-04-01T04:34:10.327005
{ "authors": [ "alyssawilk" ], "repo": "envoyproxy/envoy-mobile", "url": "https://github.com/envoyproxy/envoy-mobile/pull/2613", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
508509681
Use-after-free: Envoy::Extensions::Common::Wasm::Context::defineMetric envoy-wasm sha: 29b71643c999af4e31ba86d41740edbef71c73fe proxy-sha: a9ed9c7b938dfa6ca9f8916e4cef65906a49446f ==22==ERROR: AddressSanitizer: heap-use-after-free on address 0x6110005ac0c0 at pc 0x000009b09423 bp 0x7fffd1ace8b0 sp 0x7fffd1ace8a8 READ of size 8 at 0x6110005ac0c0 thread T0 #0 0x9b09422 in Envoy::Extensions::Common::Wasm::Context::defineMetric(Envoy::Extensions::Common::Wasm::Context::MetricType, absl::string_view, unsigned int*) /proc/self/cwd/external/envoy/source/extensions/common/wasm/wasm.cc:1999:31 #1 0x9ae54ac in Envoy::Extensions::Common::Wasm::defineMetricHandler(void*, Envoy::Extensions::Common::Wasm::Word, Envoy::Extensions::Common::Wasm::Word, Envoy::Extensions::Common::Wasm::Word, Envoy::Extensions::Common::Wasm::Word) /proc/self/cwd/external/envoy/source/extensions/common/wasm/wasm.cc:726:26 #2 0x755dc67 in Envoy::Extensions::Common::Wasm::Null::Plugin::proxy_defineMetric(Envoy::Extensions::Common::Wasm::Context::MetricType, char const*, unsigned long, unsigned int*) /proc/self/cwd/bazel-out/k8-opt/bin/external/envoy/source/extensions/common/wasm/null/_virtual_includes/null_plugin_lib/extensions/common/wasm/null/wasm_api_impl.h:218:7 #3 0x755dc67 in Envoy::Extensions::Common::Wasm::Null::Plugin::defineMetric(Envoy::Extensions::Common::Wasm::Context::MetricType, absl::string_view, unsigned int*) /proc/self/cwd/external/envoy/api/wasm/cpp/proxy_wasm_api.h:787 #4 0x755dc67 in Envoy::Extensions::Common::Wasm::Null::Plugin::MetricBase::resolveFullName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /proc/self/cwd/external/envoy/api/wasm/cpp/proxy_wasm_api.h:914 #5 0x755d846 in Envoy::Extensions::Common::Wasm::Null::Plugin::MetricBase::resolveWithFields(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, 
std::__1::allocator<char> > > > const&) /proc/self/cwd/external/envoy/api/wasm/cpp/proxy_wasm_api.h:881:10 #6 0x7549ef2 in void Envoy::Extensions::Common::Wasm::Null::Plugin::Metric::record<char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(unsigned long, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) /proc/self/cwd/external/envoy/api/wasm/cpp/proxy_wasm_api.h:948:20 #7 0x7547e1f in Envoy::Extensions::Common::Wasm::Null::Plugin::Stats::PluginRootContext::onConfigure(std::__1::unique_ptr<Envoy::Extensions::Common::Wasm::Null::Plugin::WasmData, std::__1::default_delete<Envoy::Extensions::Common::Wasm::Null::Plugin::WasmData> >) /proc/self/cwd/extensions/stats/plugin.cc:86:9 #8 0x75b7bf8 in Envoy::Extensions::Common::Wasm::Null::NullPlugin::onConfigure(unsigned long, unsigned long, unsigned long) /proc/self/cwd/external/envoy/source/extensions/common/wasm/null/null_plugin.cc:362:9 0x6110005ac0c0 is located 0 bytes inside of 224-byte region [0x6110005ac0c0,0x6110005ac1a0) freed by thread T0 here: #0 0x720ed22 in __interceptor_free (/usr/local/bin/envoy+0x720ed22) #1 0xc2e855e in std::__1::unique_ptr<Envoy::Stats::Scope, std::__1::default_delete<Envoy::Stats::Scope> >::~unique_ptr() /usr/lib/llvm-8/bin/../include/c++/v1/memory:2606:19 #2 0xc2e855e in Envoy::Server::ListenerImpl::ListenerImpl(envoy::api::v2::Listener const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Envoy::Server::ListenerManagerImpl&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool, unsigned long, Envoy::ProtobufMessage::ValidationVisitor&) /proc/self/cwd/external/envoy/source/server/listener_manager_impl.cc:353 #3 0xc2f0b9f in Envoy::Server::ListenerManagerImpl::addOrUpdateListener(envoy::api::v2::Listener const&, std::__1::basic_string<char, std::__1::char_traits<char>, 
std::__1::allocator<char> > const&, bool) /proc/self/cwd/external/envoy/source/server/listener_manager_impl.cc:547:36 #4 0xc423985 in Envoy::Server::LdsApiImpl::onConfigUpdate(google::protobuf::RepeatedPtrField<envoy::api::v2::Resource> const&, google::protobuf::RepeatedPtrField<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /proc/self/cwd/external/envoy/source/server/lds_api.cc:62:29 #5 0xc4261f4 in Envoy::Server::LdsApiImpl::onConfigUpdate(google::protobuf::RepeatedPtrField<google::protobuf::Any> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /proc/self/cwd/external/envoy/source/server/lds_api.cc:110:3 previously allocated by thread T0 here: #0 0x720f0a3 in __interceptor_malloc (/usr/local/bin/envoy+0x720f0a3) #1 0x7f0d5de978c9 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libc++.so.1+0x878c9) #2 0xc58f570 in Envoy::Stats::ThreadLocalStoreImpl::createScope(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) /proc/self/cwd/external/envoy/source/common/stats/thread_local_store.cc:109:20 #3 0xc2e32ec in Envoy::Server::ListenerImpl::ListenerImpl(envoy::api::v2::Listener const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Envoy::Server::ListenerManagerImpl&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool, unsigned long, Envoy::ProtobufMessage::ValidationVisitor&) /proc/self/cwd/external/envoy/source/server/listener_manager_impl.cc:202:45 @jplevyak @PiotrSikora [Envoy (Epoch 0)] [2019-10-17 15:16:23.724][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:71] Envoy version: 
08cec447dfc349b314e226dc66132dbdb5d08fc9/1.12.0-dev/Clean/RELEASE/BoringSSL [Envoy (Epoch 0)] [2019-10-17 15:16:23.724][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #0: __restore_rt [0x7f0ce73c4890] [Envoy (Epoch 0)] [2019-10-17 15:16:23.731][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #1: Envoy::Extensions::Common::Wasm::defineMetricHandler() [0x1041b14] [Envoy (Epoch 0)] [2019-10-17 15:16:23.738][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #2: Envoy::Extensions::Common::Wasm::Null::Plugin::MetricBase::resolveFullName() [0xb43016] [Envoy (Epoch 0)] [2019-10-17 15:16:23.745][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #3: Envoy::Extensions::Common::Wasm::Null::Plugin::MetricBase::resolveWithFields() [0xb42f19] [Envoy (Epoch 0)] [2019-10-17 15:16:23.751][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #4: Envoy::Extensions::Common::Wasm::Null::Plugin::Metric::record<>() [0xbc37f4] [Envoy (Epoch 0)] [2019-10-17 15:16:23.758][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #5: Envoy::Extensions::Common::Wasm::Null::Plugin::Stats::PluginRootContext::onConfigure() [0xbc255f] [Envoy (Epoch 0)] [2019-10-17 15:16:23.765][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #6: Envoy::Extensions::Common::Wasm::Null::NullPlugin::onConfigure() [0xc3f36a] [Envoy (Epoch 0)] [2019-10-17 15:16:23.771][20][critical][backtrace] 
[bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #7: std::__1::__function::__func<>::operator()() [0xc42e75] [Envoy (Epoch 0)] [2019-10-17 15:16:23.778][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #8: Envoy::Extensions::Common::Wasm::getOrCreateThreadLocalWasm() [0x10530a8] [Envoy (Epoch 0)] [2019-10-17 15:16:23.784][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #9: std::__1::__function::__func<>::operator()() [0xe7e87e] [Envoy (Epoch 0)] [2019-10-17 15:16:23.790][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #10: std::__1::__function::__func<>::operator()() [0x1b04158] [Envoy (Epoch 0)] [2019-10-17 15:16:23.796][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #11: Envoy::ThreadLocal::InstanceImpl::SlotImpl::set() [0x1b01f11] [Envoy (Epoch 0)] [2019-10-17 15:16:23.803][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #12: Envoy::ThreadLocal::InstanceImpl::Bookkeeper::set() [0x1b015d4] [Envoy (Epoch 0)] [2019-10-17 15:16:23.809][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #13: Envoy::Extensions::HttpFilters::Wasm::FilterConfig::FilterConfig() [0xe7e219] [Envoy (Epoch 0)] [2019-10-17 15:16:23.816][20][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #14: Envoy::Extensions::HttpFilters::Wasm::WasmFilterConfig::createFilterFactoryFromProtoTyped() [0xe7b897] This problem was addressed by https://github.com/envoyproxy/envoy-wasm/pull/229 
Looks like the NullVm Scope& is still the one it was created with, but that one is being deleted... need to figure out a way around that. How might I reproduce this so I can tell that it is fixed? I am getting a similar crash with the stackdriver filter. Whenever a listener update happens and a gRPC call is made, it will trigger the crash at this line: https://github.com/envoyproxy/envoy-wasm/blob/05ae2c0407b784df39830e82e82dfe0d155f8a20/source/extensions/common/wasm/wasm.cc#L1641 I guess since we use the same VM name, the nullvm instance is not recycled when the listener updates. The plugin (scope) referenced by the nullvm was already freed with the old listener, causing the gRPC call to fail. Since the VM's lifecycle is detached from the listener that creates the VM, shall we consider avoiding referencing any listener-related state inside the wasm instance? Yes. I'll put out a design document. On Fri, Oct 18, 2019, 9:57 AM Pengyuan Bian notifications@github.com wrote: Since the VM's lifecycle is detached from the listener that creates the VM, shall we consider avoiding referencing any listener-related state inside the wasm instance?
@jplevyak another crash [Envoy (Epoch 0)] [2019-10-18 21:28:55.803][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:83] Caught Segmentation fault, suspect faulting address 0x78 [Envoy (Epoch 0)] [2019-10-18 21:28:55.803][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:70] Backtrace (use tools/stack_decode.py to get line numbers): [Envoy (Epoch 0)] [2019-10-18 21:28:55.803][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:71] Envoy version: 5a5c3d2a9d5f02c8f8a5d22581097ff9a5766149/1.12.0-dev/Clean/RELEASE/BoringSSL [Envoy (Epoch 0)] [2019-10-18 21:28:55.803][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #0: __restore_rt [0x7f9b323e1890] [Envoy (Epoch 0)] [2019-10-18 21:28:55.810][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #1: Envoy::Extensions::Common::Wasm::recordMetricHandler() [0x1041736] [Envoy (Epoch 0)] [2019-10-18 21:28:55.817][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #2: Envoy::Extensions::Common::Wasm::Null::Plugin::Stats::PluginRootContext::report() [0xbc3f96] [Envoy (Epoch 0)] [2019-10-18 21:28:55.825][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #3: std::__1::__function::__func<>::operator()() [0xc3f7b9] [Envoy (Epoch 0)] [2019-10-18 21:28:55.832][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #4: Envoy::Extensions::Common::Wasm::Context::onLog() [0x1050ea4] [Envoy (Epoch 0)] [2019-10-18 
21:28:55.839][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #5: Envoy::Extensions::Common::Wasm::Context::log() [0x1050dbe] [Envoy (Epoch 0)] [2019-10-18 21:28:55.846][90][critical][backtrace] [bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:75] #6: Envoy::Http::ConnectionManagerImpl::ActiveStream::~ActiveStream() [0x1d50dd2] https://github.com/envoyproxy/envoy-wasm/pull/271 https://github.com/envoyproxy/envoy-wasm/pull/272
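The lifetime mismatch discussed in this thread (a long-lived Wasm VM holding a reference to a listener-owned stats scope that dies with its listener) can be illustrated abstractly. These are Python stand-ins, not the actual Envoy C++ objects:

```python
class Scope:
    """Stand-in for a listener-owned stats scope."""
    def __init__(self, name):
        self.name = name
        self.freed = False

def use(scope, metric):
    # Stand-in for defineMetric/recordMetric touching the scope.
    if scope.freed:
        raise RuntimeError("use-after-free: scope was deleted with its listener")
    return f"{scope.name}/{metric}"

# Buggy shape: the long-lived VM caches the first listener's scope.
old_scope = Scope("listener.v1")
cached = old_scope            # VM keeps the reference at creation time
old_scope.freed = True        # listener update frees the old scope
try:
    use(cached, "requests")
except RuntimeError as exc:
    print(exc)

# Safer shape: resolve the *current* owner's scope at call time, so the
# VM never outlives the state it touches.
scopes = {"current": Scope("listener.v2")}
print(use(scopes["current"], "requests"))  # listener.v2/requests
```

This mirrors the conclusion in the thread: since the VM's lifecycle is detached from the listener, listener-scoped state must not be captured inside the VM.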
gharchive/issue
2019-10-17T14:19:07
2025-04-01T04:34:10.338256
{ "authors": [ "bianpengyuan", "jplevyak", "mandarjog" ], "repo": "envoyproxy/envoy-wasm", "url": "https://github.com/envoyproxy/envoy-wasm/issues/255", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
734751454
v2 <-> v3 TlsContext causes temporary downtime Title: v2 <-> v3 TlsContext causes temporary downtime Description: Switching the TLS context version on a cluster causes a downtime Apply this change (full cluster below) to a v2 cluster via XDS: > "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", < "@type": "type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext", Fails a few requests with client disconnected, failure reason: TLS error: Secret is not supplied by SDS. Repro steps: I can easily reproduce this with Istio control plane just swapping out the version. I assume it could be reproduce with file based XDS but haven't produced a minimal reproducer Full cluster: { "transportSocketMatches": [ { "name": "tlsMode-istio", "match": { "tlsMode": "istio" }, "transportSocket": { "name": "envoy.transport_sockets.tls", "typedConfig": { "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", "commonTlsContext": { "tlsCertificateSdsSecretConfigs": [ { "name": "default", "sdsConfig": { "apiConfigSource": { "apiType": "GRPC", "grpcServices": [ { "envoyGrpc": { "clusterName": "sds-grpc" } } ] } } } ], "combinedValidationContext": { "defaultValidationContext": { "matchSubjectAltNames": [ { "exact": "spiffe://cluster.local/ns/testyomesh/sa/default" } ] }, "validationContextSdsSecretConfig": { "name": "ROOTCA", "sdsConfig": { "apiConfigSource": { "apiType": "GRPC", "grpcServices": [ { "envoyGrpc": { "clusterName": "sds-grpc" } } ] } } } }, "alpnProtocols": [ "istio-peer-exchange", "istio" ] }, "sni": "outbound_.80_._.testyomesh-1.testyomesh.svc.cluster.local" } } }, { "name": "tlsMode-disabled", "match": {}, "transportSocket": { "name": "envoy.transport_sockets.raw_buffer" } } ], "name": "outbound|80||testyomesh-1.testyomesh.svc.cluster.local", "type": "EDS", "edsClusterConfig": { "edsConfig": { "ads": {} }, "serviceName": "outbound|80||testyomesh-1.testyomesh.svc.cluster.local" }, "connectTimeout": 
"10s", "circuitBreakers": { "thresholds": [ { "maxConnections": 4294967295, "maxPendingRequests": 4294967295, "maxRequests": 4294967295, "maxRetries": 4294967295 } ] }, "filters": [ { "name": "istio.metadata_exchange", "typedConfig": { "@type": "type.googleapis.com/udpa.type.v1.TypedStruct", "typeUrl": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange", "value": { "protocol": "istio-peer-exchange" } } } ] } @howardjohn I could only reproduce this behavior (with a mock config) on the 1.14.x branch, i.e. istio proxy 1.6.x, but not with 1.15.x, 1.16.x or latest master. Can you confirm that's the case? If that's the case, the upgrade path will be to update to the 1.7.x proxy first, then migrate to the v3 transport socket. I'll try to figure out which commit exactly fixes this and report back.
gharchive/issue
2020-11-02T19:09:28
2025-04-01T04:34:10.344326
{ "authors": [ "howardjohn", "lizan" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/issues/13864", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
600763268
build: upgrade to bazel 3.0.0 Signed-off-by: Lizan Zhou lizan@tetrate.io

Just for my own knowledge, how come https://github.com/fweikert/bugs/issues/628 doesn't actually break Envoy? e.g. https://github.com/envoyproxy/envoy/blob/4e1f9f95618ca354b255fb207a5a6e3097d687d0/bazel/BUILD#L238

FWIW, I've been compiling Envoy on Bazel 3.0 since it was released, including test runs, without issues apart from warnings (x86_64 Linux, macOS, and aarch64).

@asraa Looking at bazel issue https://github.com/bazelbuild/bazel/issues/8622, it seems the flag isn't flipped in 3.0 but is delayed to 4.0. @mattklein123 I believe some of them are still valid issues, just delayed to a later version. I'll go over them to see whether each is fixed or not.
gharchive/pull-request
2020-04-16T05:42:28
2025-04-01T04:34:10.347628
{ "authors": [ "asraa", "lizan", "moderation" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/10805", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1120906170
runtime: flipping http2_new_codec_wrapper false

See https://github.com/envoyproxy/envoy/issues/19761

@yanavlasov I can't remember the tagging we're supposed to do - can you remind me where it's docced?

Was set to true in https://github.com/envoyproxy/envoy/pull/19614 cc @birenroy

Ah, the "Fixes" commit tag. Set.
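For context on what a flip like this means operationally: runtime feature guards can be pinned from the bootstrap's layered runtime, so an operator can hold the old behavior while it remains guarded. A sketch of such a bootstrap fragment, assuming the standard `envoy.reloadable_features.` key prefix for this guard:

```json
{
  "layered_runtime": {
    "layers": [
      {
        "name": "static_layer",
        "static_layer": {
          "envoy.reloadable_features.http2_new_codec_wrapper": false
        }
      }
    ]
  }
}
```

The PR itself only changes the compiled-in default; a bootstrap override like this is how a deployment would keep the flag off explicitly rather than relying on that default.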
gharchive/pull-request
2022-02-01T16:17:43
2025-04-01T04:34:10.349805
{ "authors": [ "alyssawilk" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/19770", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1500864405
Recommend filters not decrease buffer limits

Commit Message: Recommend filters not decrease buffer limits

Additional Description: Just changing flow_control.md to recommend not calling set(En|De)coderBufferLimit when it would decrease the buffer limit, since a decrease is likely to conflict with other filters, whereas increasing the limit is unlikely to cause conflicts. This is inspired by #23774, in which it turned out that a code solution to this awkwardness is not reasonable, so a documentation solution is the next best thing.

Risk Level: None, it's documentation.
Testing: No, it's documentation.
Docs Changes: Yes, it is.
Release Notes: n/a
Platform Specific Features: n/a

/assign @KBaichoo
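The recommended pattern can be sketched as a guard that only ever grows the limit. This is Python standing in for the C++ filter API; the callbacks class below is a test stand-in, not Envoy's real `StreamDecoderFilterCallbacks`.

```python
class FakeDecoderCallbacks:
    """Minimal stand-in for filter callbacks, for illustration only."""

    def __init__(self, limit: int) -> None:
        self._limit = limit

    def decoder_buffer_limit(self) -> int:
        return self._limit

    def set_decoder_buffer_limit(self, limit: int) -> None:
        self._limit = limit


def raise_decoder_buffer_limit(callbacks, desired: int) -> None:
    # Only ever increase the limit: lowering it could undo an increase
    # some other filter depends on, which is exactly the conflict the
    # flow_control.md change warns about.
    if desired > callbacks.decoder_buffer_limit():
        callbacks.set_decoder_buffer_limit(desired)
```

A filter following this guard composes safely with neighbors regardless of the order in which the filter chain runs.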
gharchive/pull-request
2022-12-16T21:06:25
2025-04-01T04:34:10.351649
{ "authors": [ "KBaichoo", "ravenblackx" ], "repo": "envoyproxy/envoy", "url": "https://github.com/envoyproxy/envoy/pull/24604", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }