Dataset columns:
- id: string (length 4 to 10)
- text: string (length 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
1756043881
Proposal: Introduce DataLoader for Ballerina GraphQL

Summary
DataLoader is a versatile tool used for accessing various remote data sources in GraphQL. Within the realm of GraphQL, DataLoader is extensively employed to address the N+1 problem. The aim of this proposal is to incorporate DataLoader functionality into the Ballerina GraphQL package.

Goals
Implement DataLoader as a sub-module of the Ballerina GraphQL package.

Motivation

The N+1 problem
The N+1 problem can be exemplified in a scenario involving authors and their books. Imagine a book catalog application that displays a list of authors and their respective books. When encountering the N+1 problem, retrieving the list of authors requires an initial query to fetch the author information (1 query), followed by a separate query for each of the N authors to retrieve their books (N queries). This results in N+1 queries being executed, where N represents the number of authors, leading to increased overhead and potential performance issues.

Following is a GraphQL book catalog application written in Ballerina which is susceptible to the N+1 problem:

```ballerina
import ballerina/graphql;
import ballerina/sql;
import ballerina/io;
import ballerinax/java.jdbc;
import ballerinax/mysql.driver as _;

service on new graphql:Listener(9090) {
    resource function get authors() returns Author[]|error {
        var query = sql:queryConcat(`SELECT * FROM authors`);
        io:println(query);
        stream<AuthorRow, sql:Error?> authorStream = dbClient->query(query);
        return from AuthorRow authorRow in authorStream
            select new (authorRow);
    }
}

isolated distinct service class Author {
    private final readonly & AuthorRow author;

    isolated function init(AuthorRow author) {
        self.author = author.cloneReadOnly();
    }

    isolated resource function get name() returns string {
        return self.author.name;
    }

    isolated resource function get books() returns Book[]|error {
        int authorId = self.author.id;
        var query = sql:queryConcat(`SELECT * FROM books WHERE author = ${authorId}`);
        io:println(query);
        stream<BookRow, sql:Error?> bookStream = dbClient->query(query);
        return from BookRow bookRow in bookStream
            select new Book(bookRow);
    }
}

isolated distinct service class Book {
    private final readonly & BookRow book;

    isolated function init(BookRow book) {
        self.book = book.cloneReadOnly();
    }

    isolated resource function get id() returns int {
        return self.book.id;
    }

    isolated resource function get title() returns string {
        return self.book.title;
    }
}

final jdbc:Client dbClient = check new ("jdbc:mysql://localhost:3306/mydatabase", "root", "password");

public type AuthorRow record {
    int id;
    string name;
};

public type BookRow record {
    int id;
    string title;
};
```

Executing the query

```graphql
{
    authors {
        name
        books {
            title
        }
    }
}
```

on the above service will print the following SQL queries in the terminal:

```
SELECT * FROM authors
SELECT * FROM books WHERE author = 10
SELECT * FROM books WHERE author = 9
SELECT * FROM books WHERE author = 8
SELECT * FROM books WHERE author = 7
SELECT * FROM books WHERE author = 6
SELECT * FROM books WHERE author = 5
SELECT * FROM books WHERE author = 4
SELECT * FROM books WHERE author = 3
SELECT * FROM books WHERE author = 2
SELECT * FROM books WHERE author = 1
```

Here the first query returns 10 authors, and then for each author a separate query is executed to obtain the book details, resulting in a total of 11 queries and inefficient database querying. The DataLoader allows us to overcome this problem.

DataLoader
The DataLoader pattern is the solution introduced by the original developers of the GraphQL spec. The primary purpose of DataLoader is to optimize data fetching and mitigate performance issues, especially the N+1 problem commonly encountered in GraphQL APIs. It achieves this by batching and caching data requests, reducing the number of queries sent to the underlying data sources. DataLoader helps minimize unnecessary overhead and improves the overall efficiency and response time of data retrieval operations.

Success Metrics
In almost all GraphQL implementations, the DataLoader is a major requirement. Since the Ballerina GraphQL package is now spec-compliant, we are looking for ways to improve the user experience in the Ballerina GraphQL package. Implementing a DataLoader in Ballerina will improve the user experience drastically.

Description
The DataLoader batches and caches operations for data fetchers from different data sources. The DataLoader requires users to provide a batch function that accepts an array of keys as input and retrieves the corresponding array of values for those keys.

API

DataLoader object
This object defines the public APIs accessible to users.

```ballerina
public type DataLoader isolated object {
    # Collects a key to perform a batch operation at a later time.
    public isolated function load(anydata key);

    # Retrieves the result for a particular key.
    public isolated function get(anydata key, typedesc<anydata> t = <>) returns t|error;

    # Executes the user-defined batch function.
    public isolated function dispatch();
};
```

DefaultDataLoader class
This class provides a default implementation for the DataLoader.

```ballerina
isolated class DefaultDataLoader {
    *DataLoader;
    private final table<Key> key(key) keys = table [];
    private table<Result> key(key) resultTable = table [];
    private final (isolated function (readonly & anydata[] keys) returns anydata[]|error) batchLoadFunction;

    public isolated function init(isolated function (readonly & anydata[] keys) returns anydata[]|error batchLoadFunction) {
        self.batchLoadFunction = batchLoadFunction;
    }

    // … implementations of load, get and dispatch methods
}

type Result record {|
    readonly anydata key;
    anydata|error value;
|};
```

The DefaultDataLoader class is an implementation of the DataLoader with the following characteristics:
- Inherits from DataLoader.
- Maintains a key table to collect keys for batch execution.
- Stores/caches results in a resultTable.
- Requires an isolated function batchLoadFunction to be provided during initialization.

init method
The init method instantiates the DefaultDataLoader and accepts a batchLoadFunction function pointer as a parameter. The batchLoadFunction function pointer has the following type:

```ballerina
isolated function (readonly & anydata[] keys) returns anydata[]|error
```

Users are expected to define the logic for the batchLoadFunction, which handles the batching of operations. Upon successful execution, the batchLoadFunction should return an array of anydata where each element corresponds to a key in the input keys array.

load method
The load method takes an anydata key parameter and adds it to the key table for batch execution. If a result is already cached for the given key in the result table, the key will not be added to the key table again.

get method
The get method takes an anydata key as a parameter and retrieves the associated value by looking up the result in the result table. If a result is found for the given key, this method attempts to perform data binding and returns the result. If a result cannot be found or data binding fails, an error is returned.

dispatch method
The dispatch method invokes the user-defined batchLoadFunction. It passes the collected keys as an input array to the batchLoadFunction, retrieves the result array, and stores the key-to-value mapping in the resultTable.
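For illustration, the following is a minimal, hypothetical sketch of how a DefaultDataLoader could be used on its own, outside a GraphQL service, assuming the class is exposed publicly. The batch function, the keys, and the returned strings are invented for this example; only the load/dispatch/get sequence reflects the API described above.

```ballerina
import ballerina/io;

// Hypothetical batch function: one call resolves all collected keys at once.
// Element i of the returned array corresponds to keys[i].
isolated function batchAuthorNames(readonly & anydata[] keys) returns anydata[]|error {
    anydata[] names = [];
    foreach anydata key in keys {
        names.push(string `author-${key.toString()}`);
    }
    return names;
}

public function main() returns error? {
    DefaultDataLoader loader = new (batchAuthorNames);
    loader.load(1);                      // collect keys; nothing is fetched yet
    loader.load(2);
    loader.dispatch();                   // runs batchAuthorNames exactly once for [1, 2]
    string name = check loader.get(1);   // data binding of the cached result to string
    io:println(name);                    // prints "author-1"
}
```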
Requirements for Engaging DataLoader in the GraphQL Module
To integrate the DataLoader with the GraphQL module, users need to follow these three steps:
1. Identify the resource method (GraphQL field) that requires the use of the DataLoader. Then, add a new parameter map<dataloader:DataLoader> to its parameter list.
2. Define a matching remote/resource method called loadXXX, where XXX represents the Pascal-cased name of the GraphQL field identified in the previous step. This method may include all or some of the required parameters from the GraphQL field and the map<dataloader:DataLoader> parameter. This function is executed as a prefetch step before executing the corresponding resource method of the GraphQL field. (Note that both the loadXXX method and the XXX method should have the same resource accessor or should both be remote methods.)
3. Annotate the loadXXX method written in step two with the @dataloader:Loader annotation and pass the required configuration. This annotation helps avoid adding loadXXX as a field in the GraphQL schema and also provides the DataLoader configuration.

Loader annotation

```ballerina
# Provides a set of configurations for the load resource method.
public type LoaderConfig record {|
    # Facilitates a connection between a data loader key and a batch function.
    # The data loader key enables the reuse of the same data loader across resolvers.
    map<isolated function (readonly & anydata[] keys) returns anydata[]|error> batchFunctions;
|};

# The annotation to configure the load resource method with a DataLoader.
public annotation LoaderConfig Loader on object function;
```

The following section demonstrates the usage of DataLoader in Ballerina GraphQL.

Modifying the Book Catalog Application to Use DataLoader
In the previous Book Catalog Application example, SELECT * FROM books WHERE author = ${authorId} was executed once per author for N = 10 authors. To batch these database calls into a single request, we need to use a DataLoader at the books field. The following code block demonstrates the changes made to the books field and the Author service class.

```ballerina
import ballerina/graphql.dataloader;

isolated distinct service class Author {
    private final readonly & AuthorRow author;

    isolated function init(AuthorRow author) {
        self.author = author.cloneReadOnly();
    }

    isolated resource function get name() returns string {
        return self.author.name;
    }

    // 1. Add a map<dataloader:DataLoader> parameter to its parameter list.
    isolated resource function get books(map<dataloader:DataLoader> loaders) returns Book[]|error {
        dataloader:DataLoader bookLoader = loaders.get("bookLoader");
        // Get the value from the DataLoader for the key.
        BookRow[] bookrows = check bookLoader.get(self.author.id);
        return from BookRow bookRow in bookrows
            select new Book(bookRow);
    }

    // 3. Add the dataloader:Loader annotation to the loadXXX method.
    @dataloader:Loader {
        batchFunctions: {"bookLoader": bookLoaderFunction}
    }
    // 2. Create a loadXXX method.
    isolated resource function get loadBooks(map<dataloader:DataLoader> loaders) {
        dataloader:DataLoader bookLoader = loaders.get("bookLoader");
        // Pass the key so it can be collected and batched later.
        bookLoader.load(self.author.id);
    }
}

// User written code to batch the books.
isolated function bookLoaderFunction(readonly & anydata[] ids) returns BookRow[][]|error {
    readonly & int[] keys = <readonly & int[]>ids;
    var query = sql:queryConcat(`SELECT * FROM books WHERE author IN (`, sql:arrayFlattenQuery(keys), `)`);
    io:println(query);
    stream<BookRow, sql:Error?> bookStream = dbClient->query(query);
    map<BookRow[]> authorsBooks = {};
    checkpanic from BookRow bookRow in bookStream
        do {
            string key = bookRow.author.toString();
            if !authorsBooks.hasKey(key) {
                authorsBooks[key] = [];
            }
            authorsBooks.get(key).push(bookRow);
        };
    final readonly & map<BookRow[]> clonedMap = authorsBooks.cloneReadOnly();
    return keys.'map(key => clonedMap[key.toString()] ?: []);
}
```

Executing the following query

```graphql
{
    authors {
        name
        books {
            title
        }
    }
}
```

after incorporating DataLoader will now result in only two database queries:

```
SELECT * FROM authors
SELECT * FROM books WHERE author IN (1,2,3,4,5,6,7,8,9,10)
```

Engaging DataLoader with GraphQL Engine
At a high level, the GraphQL engine breaks the query into subproblems and then constructs the value for the query by solving the subproblems. The following algorithm demonstrates how the GraphQL engine engages the DataLoader at a high level:
1. The GraphQL engine searches for the associated resource/remote function for each field in the query.
2. If a matching resource/remote function with the pattern loadXXX (where XXX is the field name) is found, the engine:
   - Creates a map of DataLoader instances using the batch loader functions provided in the @dataloader:Loader annotation.
   - Makes this map of DataLoader instances available to both the XXX and loadXXX functions.
   - Executes the loadXXX resource method and generates a placeholder value for that field.
3. If no matching loadXXX function is found, the engine executes the corresponding XXX resource function for that field.
4. After completing the above steps, the engine generates a partial value tree with placeholders.
5. The engine then executes the dispatch() function of all the created DataLoaders.
6. For each non-resolved field (placeholder) in the partial value tree, the engine executes the corresponding resource function (XXX), obtains the resolved value, and replaces the placeholder with the resolved value.
7. If the resolved value is still a non-resolved field (placeholder), the process repeats steps 1-7.
8. Finally, the fully constructed value tree is returned.

Future Plans
The DataLoader object will be enhanced with the following public methods:
- loadMany: allows adding multiple keys to the key table for future data loading.
- getMany: given an array of keys, retrieves the corresponding values from the DataLoader's result table, returning them as an array of anydata values or errors. The return type is (anydata|error)[].
- clear: takes a key as a parameter and removes the corresponding result from the result table, effectively clearing the cached data for that key.
- clearAll: removes all cached results from the result table, providing a way to clear the entire cache in one operation.
- prime: takes a key and an anydata value as arguments and replaces or stores the key-value mapping in the result table. This allows preloading or priming specific data into the cache for efficient retrieval later.

These methods enhance the functionality of the DataLoader, providing more flexibility and control over data loading, caching, and result management.
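To make the planned surface easier to scan, here is a hypothetical sketch of how those additions could be declared as an object type. The names and shapes are inferred from the descriptions above and are not a committed design.

```ballerina
# Hypothetical extension of the DataLoader object with the planned methods.
# This only illustrates the described API; the final design may differ.
public type ExtendedDataLoader isolated object {
    *DataLoader;
    # Collects multiple keys for a later batch operation.
    public isolated function loadMany(anydata[] keys);
    # Retrieves the cached values for the given keys, one value or error per key.
    public isolated function getMany(anydata[] keys) returns (anydata|error)[];
    # Removes the cached result for a single key.
    public isolated function clear(anydata key);
    # Removes all cached results.
    public isolated function clearAll();
    # Stores or replaces a key-value mapping directly in the result table.
    public isolated function prime(anydata key, anydata value);
};
```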
Not sure loadXXX is an actual resource method. It is an internal method for pre-processing and not part of the GraphQL schema.

We need to map the exact remote/resource method to the corresponding loader function. Two resources can have different accessors and the same path inside a service, and there isn't a way to provide two loader functions in that scenario.

I agree. But my point is that these functions are internal functions, and the mapping can be done internally. Also, these functions are not part of the original GraphQL schema. Defining these functions as resource functions breaks tools such as schema generation, doesn't it?

Yes, this is a valid point. Shall we have a meeting to check out the alternatives? In the meantime, we will go ahead with this approach. We are hoping to release this as an experimental feature first. Will that be okay?

Had a discussion with @sameerajayasoma @shafreenAnfar @ThisaruGuruge regarding this issue. The following points were discussed in the meeting:
- Since batch functions are global, is there a way to associate them with the graphql:Service instead of the loadXXX function? Perhaps with the graphql:ServiceConfig?
- Instead of passing the batch function pointer, can't we directly pass the DataLoader instance?
- Instead of defining loadXXX as a resource/remote method, can't we have regular methods?

The current approach of using resource/remote methods (for loadXXX) was employed to uniquely identify the corresponding loadXXX method for a given field. This was necessary because there can be two methods with the same name but with different resource accessors or remote keywords, resulting in ambiguity. To avoid this ambiguity, we can consider annotating the field and passing the corresponding loadXXX method's function pointer via the annotation.

The following changes will be made to the API according to the meeting with the team:

The (prefetch) loadXXX method will be renamed to preXXX. The preXXX method signature will be changed to a regular method instead of a resource/remote method. As an advanced case, the user can override the default preXXX method using the resource config annotation. See the example below:

```ballerina
isolated distinct service class Author {
    //...
    isolated function prefetchBooks(graphql:Context ctx) {
        // ...
    }

    @graphql:ResourceConfig {
        // ... other fields
        prefetch: self.prefetchBooks
    }
    isolated resource function get books(graphql:Context ctx) returns Book[]|error {
        // ...
    }

    remote function books(BookInput[] input) returns Book[]|error {
        // ...
    }
}
```

The @dataloader:Loader annotation will be removed from the API, and the user will be able to register the dataloader in the context object and access the dataloader from the context object. With this API change, the map<dataloader:DataLoader> parameter is removed from both the preXXX and the resolver methods. The following are the two new methods that will be added to the context:

```ballerina
public isolated class Context {
    // ... omitted for brevity

    public isolated function registerDataLoader(string key, dataloader:DataLoader dataLoader) {
        // ...
    }

    public isolated function getDataLoader(string key) returns dataloader:DataLoader {
        // ...
        // panic if no key found
    }
}
```

The load method in the dataloader will be renamed to add:

```ballerina
public isolated function add(anydata key);
```

Putting it all together, the following example demonstrates the usage of the new API:

```ballerina
@graphql:ServiceConfig {
    contextInit: isolated function (http:RequestContext requestContext, http:Request request) returns graphql:Context {
        graphql:Context context = new;
        context.registerDataLoader("bookLoader", new DefaultDataLoader(batchBooks));
        return context;
    }
}
service on new graphql:Listener(9090) {
    // ... omitted for brevity
}

isolated distinct service class Author {
    //...
    isolated function preBooks(graphql:Context ctx) {
        dataloader:DataLoader bookLoader = ctx.getDataLoader("bookLoader");
        bookLoader.load(self.author.id);
    }

    isolated resource function get books(graphql:Context ctx) returns Book[]|error {
        dataloader:DataLoader bookLoader = ctx.getDataLoader("bookLoader");
        return bookLoader.get(self.author.id);
    }
}
```

As for @MaryamZi's comment, it is currently not possible to pass an instance method reference to the annotation. As an alternative approach, we have considered passing the prefetch method name to the @graphql:ResourceConfig annotation. Example:

```ballerina
isolated distinct service class Author {
    //...
    isolated function prefetchBooks(graphql:Context ctx) {
        // ...
    }

    @graphql:ResourceConfig {
        // ... other fields
        prefetchMethodName: "prefetchBooks"
    }
    isolated resource function get books(graphql:Context ctx) returns Book[]|error {
        // ...
    }

    remote function books(BookInput[] input) returns Book[]|error {
        // ...
    }
}
```

We could validate the existence and signature of "prefetchBooks" at compile time using a compiler plugin. What do you think, @sameerajayasoma?
gharchive/issue
2023-06-14T04:43:09
2025-04-01T04:56:07.520594
{ "authors": [ "MohamedSabthar", "ThisaruGuruge", "hasithaa" ], "repo": "ballerina-platform/ballerina-standard-library", "url": "https://github.com/ballerina-platform/ballerina-standard-library/issues/4569", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
268475256
[Debugger] For connectors, complex values should be renamed to a proper indicator. More information here: https://github.com/ballerinalang/composer/issues/2984
Can you please elaborate on this issue? An example would be great.
In the given sample, req and res are empty structs (earlier they were listed as complex_value). We could include some information in req to ease the debugging process.
@lafernando closing this since it is an exact duplicate of https://github.com/ballerinalang/ballerina/issues/4124
gharchive/issue
2017-10-25T17:02:01
2025-04-01T04:56:07.525485
{ "authors": [ "lafernando", "pahans" ], "repo": "ballerinalang/ballerina", "url": "https://github.com/ballerinalang/ballerina/issues/3760", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
201463840
Add HTTP protocol related functions
HTTP protocol related utilities have to be added as functions (e.g. set status code, disable chunking, enforce HTTP 1.0, etc.).
Since this issue is not specific about what should be included, a new issue (#4267) was created to track the missing feature (enforcing HTTP 1.0) out of the features mentioned. Closing the original issue since setting the status code and disabling chunking are already supported, and the new issue #4267 tracks the missing feature.
gharchive/issue
2017-01-18T02:36:43
2025-04-01T04:56:07.527077
{ "authors": [ "anupama-pathirage", "kasun04", "pubudu91" ], "repo": "ballerinalang/ballerina", "url": "https://github.com/ballerinalang/ballerina/issues/846", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
224368215
Directly forward video content bitstream to Chromecast device when video is full-screen
Do you think that PyChromecast can also take advantage of this method that the Google Chrome development team is experimenting with? https://plus.google.com/+FrancoisBeaufort/posts/bCAFwn7EQGi http://www.androidpolice.com/2017/04/25/google-testing-way-make-tab-casting-video-suck-lot-less-can-try/
I haven't looked into the code of this, just read about it on The Verge and Engadget and thought of PyChromecast: http://www.theverge.com/platform/amp/circuitbreaker/2017/4/25/15427094/chromecast-tab-casting-video-quality-improved-desktop https://www.engadget.com/amp/2017/04/25/chrome-improves-casting-video-quality/
This has little to do with this project. This is a feature of the Chrome browser, which used to render the content locally before sending it to the Chromecast.
gharchive/issue
2017-04-26T07:23:44
2025-04-01T04:56:07.534926
{ "authors": [ "Hedda", "uudruid74" ], "repo": "balloob/pychromecast", "url": "https://github.com/balloob/pychromecast/issues/171", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1629464553
Atrace not getting collected on Moto G40 Phone Device WHAT I used the custom script to profile an android app. The app is in release build with the profileable tag enabled. On debugging and going a little deep into the source code i realised that the atrace was not getting collected. On debugging a little more i see the error log Unable to open file5037 (.package.appname) S 1113 1113 0 0 -1 1077952832 2417899 0 63 0 45820 17497 0 0 20 0 117 0 119065447 25016229888 58295 18446744073709551615 1 1 0 0 0 0 4612 1 1073775864 0 0 0 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0 The Unable to open file comes from line number https://github.com/bamlab/flashlight/blob/94c07a10f7811e83f8e7dcc01537e310e9957b87/packages/android-performance-profiler/cpp-profiler/src/atrace.cpp#L56 On checking the path of the systrace file the source code mentions std::string traceOutputPath = "/sys/kernel/debug/tracing/trace_pipe"; However on my device i see the trace file being created at the path /sys/kernel/tracing/trace_pipe. To verify this on one terminal i ran the command adb shell atrace -c view -t 999 and in another terminal did a simple cat /sys/kernel/tracing/trace_pipe . And i was able to get the atrace.. What am i to do here? is it possible the trace file locations vary as per devices? Also how do i fix the current case? Device Info [media.recorder.show_manufacturer_and_model]: [true] [ro.board.platform]: [sm6150] [ro.boot.hardware]: [qcom] [ro.boot.hardware.revision]: [pvt] [ro.boot.hardware.sku]: [XT2147-1] [ro.boot.product.hardware.sku]: [d] [ro.boot.revision]: [pvt] [ro.boot.secure_hardware]: [1] [ro.boot.serialno]: [ZD2222GDQ8] [ro.build.version.sdk]: [30] [ro.hardware]: [qcom] [ro.hardware.egl]: [adreno] [ro.hardware.nfc_nci]: [pn54x] [ro.hardware.sensors]: [hanoip] [ro.hardware.soc.manufacturer]: [qcom] [ro.hardware.vulkan]: [adreno] [ro.mot.build.version.sdk_int]: [31] [ro.opa.device_model_id]: [motorola-hanoip] [ro.product.brand]: [motorola] [ro.product.build.version.sdk]: [30] [ro.product.manufacturer]: [motorola] [ro.product.model]: [moto g(40) fusion] [ro.product.name]: [hanoip_retail] [ro.product.odm.brand]: [motorola] [ro.product.odm.manufacturer]: [motorola] [ro.product.odm.model]: [moto g(60)] [ro.product.product.brand]: [motorola] [ro.product.product.manufacturer]: [motorola] [ro.product.product.model]: [moto g(60)] [ro.product.product.name]: [hanoip_retail] [ro.product.system.brand]: [motorola] [ro.product.system.manufacturer]: [motorola] [ro.product.system.model]: [moto g(60)] [ro.product.system_ext.brand]: [motorola] [ro.product.system_ext.manufacturer]: [motorola] [ro.product.system_ext.model]: [moto g(60)] [ro.product.vendor.brand]: [motorola] [ro.product.vendor.manufacturer]: [motorola] [ro.product.vendor.model]: [moto g(60)] [ro.revision]: [pvt] [ro.serialno]: [ZD2222GDQ8] [ro.system.build.version.sdk]: [30] [ro.system_ext.build.version.sdk]: [30] [ro.vendor.boot.serialno]: [ZD2222GDQ8] [ro.vendor.hw.revision]: [pvt] [ro.vendor.product.hardware.sku.variant]: [d] [ro.vendor.product.name]: [hanoip_retail] Custom Script const polling = pollPerformanceMeasures(pid, (measure) => { measures.push(measure); console.log(`JS Thread CPU Usage: ${measure.cpu.perName["(mqt_js)"]}%`); console.log(`Main Thread CPU Usage: ${measure.cpu.perName["UI Thread"]}%`); console.log(`RAM Usage: ${measure.ram}MB`); }); setTimeout(() => { polling && polling.stop(); fs.writeFileSync('results.json', JSON.stringify(measures)); }, 10000); Just followed the documentation.. 
Hi @nitish24p, thanks for raising this issue, and apologies for the delay 🙏 Nice debugging, and nice that you were able to find the proper file path!
One thought: the app doesn't need to be profileable. That shouldn't matter, but is it still /sys/kernel/tracing/trace_pipe if the app is not profileable? It might depend on the version of ftrace according to this. If we need to handle multiple paths depending on the device, we'll handle all of them.
In case you'd like to contribute and test it out locally, there's a guide here on how to build/test the C++ executable, but let me know if you have any questions! In any case, I'll run a script at some point on our AWS device farm to check that those paths work for all devices.
Also, just printing "Unable to open file" in case of error is kind of horrendous on our side, so we'll need better error handling 😅
Hey @Almouro, sure, I can help out if need be. Presently, for my case, I made a fork and pushed a new tag with the path updated to the right one, built the binary files again, and it's working as expected.
Hi @nitish24p, at last, this should be fixed; I was able to reproduce it on other devices. Can you confirm on your end? 🙏
Do I need to update the version to the latest?
Hi @nitish24p, doing a bit of cleanup on GitHub issues, so I'm closing this one for now for lack of activity ☺️ I think this is fixed now, but do feel free to reopen if the problem is still there! 🙏
gharchive/issue
2023-03-17T15:00:39
2025-04-01T04:56:07.623999
{ "authors": [ "Almouro", "nitish24p" ], "repo": "bamlab/flashlight", "url": "https://github.com/bamlab/flashlight/issues/78", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
151989840
Set header for get share page
The share page now returns 403 when the user agent is not set.
Thanks!
gharchive/pull-request
2016-04-29T23:50:39
2025-04-01T04:56:07.631113
{ "authors": [ "banbanchs", "ezioruan" ], "repo": "banbanchs/pan-baidu-download", "url": "https://github.com/banbanchs/pan-baidu-download/pull/30", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2075928102
After tapping "add more accessible photos", how can videos be hidden?
Read before filing a bug: please answer the following three questions first, otherwise the issue will not be handled. Thanks for your cooperation.
1. Does my latest demo have this bug? [If the demo is fine, please upgrade to the new version] Answer: yes
2. Which version are you using? Is it still broken after upgrading to the latest version? Answer: the latest version
3. Have you modified the library's internal code? [If so, please describe the changes] Answer: no
Bug description: after tapping "add more accessible photos", videos are shown.
How can I reproduce this bug?

```objc
imagePickerVc.allowPickingVideo = NO;
imagePickerVc.allowTakeVideo = NO;
imagePickerVc.allowTakePicture = NO;
imagePickerVc.allowPickingMultipleVideo = NO;
imagePickerVc.allowPickingGif = NO;
imagePickerVc.allowPickingOriginalPhoto = NO;
```

Screenshots
Other notes: anything else to add? For example, your TZImagePickerController initialization code. None
This calls the system API: [[PHPhotoLibrary sharedPhotoLibrary] presentLimitedLibraryPickerFromViewController:self]. There is no parameter to control it, so this probably can't be done.
gharchive/issue
2024-01-11T07:17:25
2025-04-01T04:56:07.634790
{ "authors": [ "banchichen", "twlk-jzy" ], "repo": "banchichen/TZImagePickerController", "url": "https://github.com/banchichen/TZImagePickerController/issues/1665", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
591376409
Fix warning: MobileCoreServices has been renamed. Use CoreServices instead.
Close #1295
If our library was installed via CocoaPods, Xcode 11.4 will issue the warning "MobileCoreServices has been renamed. Use CoreServices instead.".
How to fix: do not explicitly link MobileCoreServices.framework in the CocoaPods generated project, by deleting MobileCoreServices from the spec.frameworks attribute in our podspec file. More info: https://github.com/AFNetworking/AFNetworking/pull/4532
ping @banchichen
This has not been merged into the main branch yet.
This issue already exists!
Fixed in the new version: 3.4.3
gharchive/pull-request
2020-03-31T19:46:00
2025-04-01T04:56:07.637783
{ "authors": [ "ElfSundae", "LeoAiolia", "banchichen", "pengpeng-wang" ], "repo": "banchichen/TZImagePickerController", "url": "https://github.com/banchichen/TZImagePickerController/pull/1297", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2691669339
🛑 MXN Scroll On-ramps is down In 357617b, MXN Scroll On-ramps ($QUOTE_ENDPOINT) was down: HTTP code: 500 Response time: 27026 ms Resolved: MXN Scroll On-ramps is back up in e484fe8 after 13 minutes.
gharchive/issue
2024-11-25T17:44:40
2025-04-01T04:56:07.645634
{ "authors": [ "luisgj" ], "repo": "bandohq/upptime-monitor", "url": "https://github.com/bandohq/upptime-monitor/issues/531", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2760488314
🛑 MXN Blast On-ramps is down In 60720fe, MXN Blast On-ramps ($QUOTE_ENDPOINT) was down: HTTP code: 500 Response time: 6415 ms Resolved: MXN Blast On-ramps is back up in 27fb91a after 9 minutes.
gharchive/issue
2024-12-27T06:55:31
2025-04-01T04:56:07.647904
{ "authors": [ "luisgj" ], "repo": "bandohq/upptime-monitor", "url": "https://github.com/bandohq/upptime-monitor/issues/603", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
452859235
Multiple data sources: test cases annotated with @Transactional cannot be used. Version 3.1.1.
What caused this issue? (Issues already fixed in the latest version will be closed directly.)

```java
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {ApplicationLauncher.class})
@Transactional
@DS("test")
public class TestControllerTest {
    @Test
    .......
}
```

Steps to reproduce
Error message: cannot locate the data source
Please file this over there; this repo only handles MP itself.
gharchive/issue
2019-06-06T06:47:50
2025-04-01T04:56:07.658966
{ "authors": [ "hsoftxl", "miemieYaho" ], "repo": "baomidou/mybatis-plus", "url": "https://github.com/baomidou/mybatis-plus/issues/1236", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
302161834
The Hikari connection pool cannot print SQL statements for DB2 databases.
Underlying database: DB2. Connection pool version: HikariCP-java7--2.4.13.
When evaluating HikariPreparedStatementWrapper.equals(stmtClassName), getValue("delegate.sqlStatement") fails to retrieve the SQL statement and returns the string form of the class object instead.
Sorry, our team has no DB2 environment; we would welcome a PR.
gharchive/issue
2018-03-05T03:36:53
2025-04-01T04:56:07.660159
{ "authors": [ "qmdx", "sxrstrive" ], "repo": "baomidou/mybatis-plus", "url": "https://github.com/baomidou/mybatis-plus/issues/253", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2001201170
Automatic field fill does not work when updating with lambdaUpdate().remove()
Current version (required, otherwise the issue will not be handled): 3.5.2
What caused this issue? (Confirm the latest version is also affected before reporting!!!)
When performing a logical delete, updateTime is not updated.
Steps to reproduce (write them out completely if available):

```java
this.lambdaUpdate().eq(SchoolBook::getSchoolId, schoolId)
    .in(SchoolBook::getBookId, delBookIds)
    .remove()
```

Looking at the source code, tableInfo is not obtained while the parameters are processed, so the fill logic is never entered.
Error message: none
Fill can only be applied to an entity.
Is there a plan to fix this soon? If not, could I give it a try?
It can't be fixed; that's just how it is.
LambdaUpdateChainWrapper could gain a Class entityClass field; in the service layer, when lambdaUpdate() creates the LambdaUpdateChainWrapper, the entity class could be passed to the wrapper.
No, you should new the entity in yourself.
this.lambdaUpdate().remove() offers no place to new one; what I described is extended support for this scenario.
I get your point now: in the auto-generated mapper statement there is indeed no parameter that can carry the update fields.
@miemieYaho I gave it a try and opened a pull request; if the approach has problems we can discuss it.
@yuge1805 We handled this ourselves with a method on the mapper that takes an entity as a parameter, because MyBatis-Plus needs an object when injecting the update parameters.
Same idea; that is exactly how I changed it in the PR. Unfortunately the maintainers only said it can't be fixed and closed the PR without any intention to discuss. ┓( ´∀` )┏
gharchive/issue
2023-11-20T01:34:37
2025-04-01T04:56:07.667350
{ "authors": [ "chess3cake", "miemieYaho", "yuge1805" ], "repo": "baomidou/mybatis-plus", "url": "https://github.com/baomidou/mybatis-plus/issues/5785", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
700299432
Is there a pretrained model that uses resnet18 as a backbone?
Hello. I would like to run the code with a smaller ResNet model. Do you have any model with resnet18 as the backbone? If you can't provide a model, do you roughly know the accuracy? Thank you.
Hi ksoy0128, I don't have the resnet18 model. Sorry! :(
gharchive/issue
2020-09-12T16:47:08
2025-04-01T04:56:07.669189
{ "authors": [ "baoxinchen", "ksoy0128" ], "repo": "baoxinchen/siammask_e", "url": "https://github.com/baoxinchen/siammask_e/issues/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
650045070
PostgisReferenceCache throws IllegalArgumentException
While I am experimenting with Windows, I have noticed that the osm_relations table is always empty after the import. I think the problem is due to the PostgisReferenceCache implementation I am using. In PostgisReferenceCache#get an IllegalArgumentException is thrown when the Reference does not exist: https://github.com/baremaps/baremaps/blob/40409acd91d002f16297432527fd92f28d072957/baremaps-osm/src/main/java/com/baremaps/osm/cache/PostgisReferenceCache.java#L57
I think this will cause the whole stream to cancel when it is thrown in the RelationBuilder#build: https://github.com/baremaps/baremaps/blob/40409acd91d002f16297432527fd92f28d072957/baremaps-osm/src/main/java/com/baremaps/osm/geometry/RelationBuilder.java#L69
Can't the Cache return null when the Element is not available? Or should the call in the stream be wrapped with Try.of(...)?
As of today, the LMDB cache is used when importing the data for the first time. The Postgis cache is used when updating previously imported data. In other words, the Postgis cache assumes an existing database fully populated with nodes, ways, and relations. This decision comes from the OSM data model. In order to create the geometries associated with ways and relations, the OSM data needs to be read two times: a first time to build a cache of the nodes, ways and relations, and a second time to denormalize the data and create the geometries. For instance, in OSM, a node is characterized by an id, a longitude and a latitude, and a way is characterized by an id and a list of node ids. In order to create the geometry for a way, baremaps fetches the nodes from the cache and uses their longitude and latitude coordinates to create the corresponding line.
I upgraded the LMDB to version 0.8 a couple of days ago, do you think they addressed the issues associated with Windows in this release? Regarding the null value returned by the cache, this is hard to tell; as the OSM data is supposed to be consistent, I would prefer to fail early. Another option could be to add an in-memory cache (only suitable for small imports), or a more portable cache (for instance based on MVStore or another kv store). What do you think?
Thanks! Yes, I already thought it would require the data to be read twice. I will try with the updated LMDB cache and see how it works out. 👍
The import now works with the latest LMDB release, although it is very slow: for a small city like Münster it takes 10 minutes for caching references and coordinates. Is it faster on Linux?
Good job! On Linux, the creation of the cache (1min) and of the database (1min) is rather fast. What takes time is the creation of the indexes in postgis (about 8 minutes). Here is my output when importing muenster-regbez-latest.osm.pbf, did you obtain something similar?
baremaps import \ --input 'muenster-regbez-latest.osm.pbf' \ --database 'jdbc:postgresql://localhost:5432/baremaps?allowMultiQueries=true&user=baremaps&password=baremaps' [INFO ] 2020-07-02 21:59:18.291 [main] Import - 8 processors available [INFO ] 2020-07-02 21:59:18.306 [main] Import - Dropping tables [INFO ] 2020-07-02 21:59:18.433 [main] Import - Creating tables [INFO ] 2020-07-02 21:59:18.465 [main] Import - Creating primary keys [INFO ] 2020-07-02 21:59:18.705 [main] Import - Fetching input [INFO ] 2020-07-02 21:59:18.706 [main] Import - Populating cache [INFO ] 2020-07-02 22:00:11.736 [main] Import - Populating database [INFO ] 2020-07-02 22:01:07.474 [main] Import - Indexing geometries [INFO ] 2020-07-02 22:06:17.411 [main] Import - Indexing attributes Thanks for the timing. When using createTempDirectory(...) it used the HDD, which took up to 10 minutes for building the cache. Putting it on the SSD yields much better results. They are slightly slower than yours, but this could be due to the hardware. Thanks for your patience with me! [INFO ] 2020-07-03 17:48:34.973 [main] Import - 4 processors available [INFO ] 2020-07-03 17:48:35.051 [main] Import - Dropping tables [INFO ] 2020-07-03 17:48:35.307 [main] Import - Creating tables [INFO ] 2020-07-03 17:48:35.322 [main] Import - Creating primary keys [INFO ] 2020-07-03 17:48:35.947 [main] Import - Fetching input [INFO ] 2020-07-03 17:48:35.947 [main] Import - Populating cache [INFO ] 2020-07-03 17:49:42.873 [main] Import - Populating database [INFO ] 2020-07-03 17:50:44.332 [main] Import - Indexing geometries [INFO ] 2020-07-03 17:53:14.731 [main] Import - Indexing attributes
gharchive/issue
2020-07-02T16:00:31
2025-04-01T04:56:08.020104
{ "authors": [ "bchapuis", "bytefish" ], "repo": "baremaps/baremaps", "url": "https://github.com/baremaps/baremaps/issues/97", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2349477
Pubsubhubbub
I think it's stable now, so go for it :D
Thanks! I'll do it on Monday.
gharchive/issue
2011-11-25T13:49:18
2025-04-01T04:56:08.056589
{ "authors": [ "DenisMorand", "baju" ], "repo": "barjo/arvensis", "url": "https://github.com/barjo/arvensis/issues/7", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
166907810
Update from main bandit repo Update from main bandit repo to fix VS 2015 compilation errors +1
gharchive/pull-request
2016-07-21T20:15:59
2025-04-01T04:56:08.057780
{ "authors": [ "chrisbaron", "steve450" ], "repo": "barklyprotects/bandit", "url": "https://github.com/barklyprotects/bandit/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
295493971
[2.4.3] ide-helper:generate overwrites vendor file

```
$ ./artisan ide-helper:generate
A new helper file was written to _ide_helper.php
Unexpected no document on Illuminate\Database\Eloquent\Model
Wrote expected docblock to /vagrant/…/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php
```

$ PAGER= git diff vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:

```diff
diff --git a/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php b/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php
index cbb3ee648..1eeb017da 100644
--- a/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php
+++ b/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php
@@ -15,6 +15,13 @@ use Illuminate\Database\Eloquent\Relations\Pivot;
 use Illuminate\Database\Query\Builder as QueryBuilder;
 use Illuminate\Database\ConnectionResolverInterface as Resolver;
+/**
+ *
+ *
+ * @mixin \Eloquent
+ * @mixin \Illuminate\Database\Eloquent\Builder
+ * @mixin \Illuminate\Database\Query\Builder
+ */
 abstract class Model implements ArrayAccess, Arrayable, Jsonable, JsonSerializable, QueueableEntity, UrlRoutable
 {
     use Concerns\HasAttributes,
```

I believe this is due to https://github.com/barryvdh/laravel-ide-helper/pull/624
Vendor files should never be written to, nor overwritten, in any circumstance; especially not from different packages. https://github.com/barryvdh/laravel-ide-helper/pull/626
I don't understand why it was okay to add @mixin \Eloquent for ages, but now it's so horrible to rewrite the DocBlock for this package to work?
It's okay to write it to the ide-helper file or the user models, not vendor code.
gharchive/issue
2018-02-08T12:28:11
2025-04-01T04:56:08.097344
{ "authors": [ "barryvdh", "iPaat", "mfn" ], "repo": "barryvdh/laravel-ide-helper", "url": "https://github.com/barryvdh/laravel-ide-helper/issues/625", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
153235242
Save load
Now, when you load a project, the blocks are actually shown on screen instead of just added in the background.
I placed one small comment; other than that it looks fine :)
gharchive/pull-request
2016-05-05T13:46:12
2025-04-01T04:56:08.100768
{ "authors": [ "bartdejonge1996", "mennoo1996" ], "repo": "bartdejonge1996/goto-fail", "url": "https://github.com/bartdejonge1996/goto-fail/pull/60", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
429488139
calculate % changed
- [ ] How many of the metrics have changed? (in %)
- [ ] How much % has each of the metrics changed?

```
diff src/index.js test/index.js | diffstat
 unknown |  180 ++++++++++++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 148 insertions(+), 32 deletions(-)
```

fixed in a77082d1460a07ae6d07c5ddddedeceef739ea9e
gharchive/issue
2019-04-04T21:19:49
2025-04-01T04:56:08.107468
{ "authors": [ "bartveneman" ], "repo": "bartveneman/css-analysis-diffstat", "url": "https://github.com/bartveneman/css-analysis-diffstat/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2568966844
Can't connect to server Server show's offline. Logs from server FIG FILES CREATED. WAIT FOR ANOTHER 40 SECONDS TO MAKE SURE ALL CHANGES WERE APPLIED... CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED rename GameplayTags.ini CONFIG FILE DIR CHANGED change GameplayTags.ini CONFIG FILE DIR CHANGED change GameUserSettings.ini CONFIG FILE DIR CHANGED change GameUserSettings.ini CONFIG FILE DIR CHANGED change GameUserSettings.ini CONFIG FILE DIR CHANGED change GameUserSettings.ini CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED change AstroServerSettings.ini CONFIG FILE DIR CHANGED change GameUserSettings.ini CONFIG FILE DIR CHANGED change GameUserSettings.ini ON ERROR: 0024:fixme:advapi:DeregisterEventSource (00000000CAFE4242) stub child process exited with code 0 DONE CONFIG FILES WERE CREATED. SHUT DOWN THE SERVER, UPDATE CONFIG AND THEN RESTART --------------SERVER STOP-------------- GOING TO STOP THE SERVER... HEALTH CHECK STOPPED! BACKUP STOPPED! --------------SERVER STOP DONE-------------- child process close all stdio with code 0 GOING TO UPDATE THE SERVER CONFIGURATION BASED ON CURRENT ENV VARIABLES... PUBLIC IP IS: **** INIT BACKUP... LOADING CURRENT BACKUPS FROM /backup AND /backup/daily BACKUP IS NOW RUNNING HEALTH CHECK IS NOW RUNNING --------------SERVER INIT DONE-------------- --------------START THE SERVER-------------- ON ERROR: _XSERVTransmkdir: Owner of /tmp/.X11-unix should be set to root ON ERROR: 0084:fixme:hid:handle_IRP_MN_QUERY_ID Unhandled type 00000005 ON ERROR: 0084:fixme:hid:handle_IRP_MN_QUERY_ID Unhandled type 00000005 ON ERROR: 0084:fixme:hid:handle_IRP_MN_QUERY_ID Unhandled type 00000005 ON ERROR: 0084:fixme:hid:handle_IRP_MN_QUERY_ID Unhandled type 00000005 ON ERROR: 0024:fixme:ntdll:NtQuerySystemInformation info_class SYSTEM_PERFORMANCE_INFORMATION ON ERROR: 0024:fixme:advapi:RegisterEventSourceW ((null),L"Astro-PID32"): stub ON ERROR: 0024:fixme:gameux:GameExplorerImpl_VerifyAccess (0000000000672560, L"Z:\astroneer\Astro\Binaries\Win64\AstroServer-Win64-Shipping.exe", 00000000004EDC98) ON ERROR: ALSA lib confmisc.c:767:(parse_card) ON ERROR: cannot find card '0' ON ERROR: ALSA lib conf.c:4745:(_snd_config_evaluate) ON ERROR: function snd_func_card_driver returned error: No such file or directory ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM default ON ERROR: ALSA lib confmisc.c:767:(parse_card) cannot find card '0' ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name ALSA lib conf.c:4745:(_snd_config_evaluate) function snd_func_refer returned 
error: No such file or directory ALSA lib conf.c:5233:(snd_config_expand) Evaluate error: No such file or directory ALSA lib pcm.c:2660:(snd_pcm_open_noupdate) Unknown PCM default ON ERROR: 0024:fixme:bcrypt:BCryptOpenAlgorithmProvider algorithm L"DH" not supported ON ERROR: 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub ON ERROR: 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub ON ERROR: 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub 0024:fixme:msvcp:_Locinfo__Locinfo_ctor_cat_cstr (00000000004EEB70 1 C) semi-stub ON ERROR: 0024:fixme:advapi:DeregisterEventSource (00000000CAFE4242) stub child process exited with code 0 2024-10-06T16:11:57-07:00 UNABLE TO INIT HEALTH CHECK AS THERE IS NO SAVE GAME FILE! 2024-10-06T16:12:17-07:00 UNABLE TO INIT HEALTH CHECK AS THERE IS NO SAVE GAME FILE! 2024-10-06T16:12:37-07:00 UNABLE TO INIT HEALTH CHECK AS THERE IS NO SAVE GAME FILE! I noticed the same after a server update a while ago. In most cases restarting the container did solve the problem. However.. I just published a new version of the Image (2.x) that uses proton instead of wine which seems to work but i did not test it over a longer period of time... Be aware that the current dedicated server version does not support the new DLC (Glitchwalker): https://blog.astroneer.space/p/astroneer-glitchwalkers/ Hope System Era will add support for the DLC but this may take a while... I tested the latest image, unfortunately, the game still says it's offline. IDK if the dedicated server prints any log. This seems like the PublicIP field in AstroServerSettings.ini is set to the wrong value. Make sure it is set to the same public IP you entered into the server checker. This seems like the PublicIP field in AstroServerSettings.ini is set to the wrong value. Make sure it is set to the same public IP you entered into the server checker. It is the same value. For further details, I am hosting the server on my Linux PC, port forwarding configured on my router (the router gets the public up). This setup worked for other games I have played, but somehow failed to work for Astroneer. This seems like the PublicIP field in AstroServerSettings.ini is set to the wrong value. Make sure it is set to the same public IP you entered into the server checker. It is the same value. For further details, I am hosting the server on my Linux PC, port forwarding configured on my router (the router gets the public IP). This setup worked for other games I have played, but somehow failed to work for Astroneer. Ok, if the IP matches, does rhe port in the config also match the one you're putting into the server checker. Public and Private port also have to match. Also check that the Heartbeat Interval is not 0. Also check that the Heartbeat Interval is not 0. Where can I find this setting? Thanks @JoeJoeTV for your help, appreciate. @KrisCris Did you set the IP Address via ENV variable or did you let the server determine your current ip address? Thanks @JoeJoeTV for your help, appreciate. @KrisCris Did you set the IP Address via ENV variable or did you let the server determine your current ip address? 
I am using the env, can verify that the ip is indeed written into the file. Thanks @JoeJoeTV for your help, appreciate it. @KrisCris Did you set the IP address via ENV variable or did you let the server determine your current ip address? I am using the env, can verify that the ip is indeed written into the file. Are you using the provided docker-compose file and if yes, did you change anything? Are you using the provided docker-compose file and if yes, did you change anything? Thanks for the help! I think I have solved the issue. The main issue causing the "Server is experiencing issue" status on that checker page is the docker port mapping - the game has to run on the port you exposed to the public; mapping it to a different port will cause issues. Then, you can only join the game with the IP configured on the server, no local IP or anything like that. Finally, the NAT loopback was somehow broken with my current network configuration. Idk why my network switch caused the issue, but as soon as I plugged the ethernet cable directly into the router, everything worked out. Hopefully this can also be helpful to people facing similar issues. @KrisCris Yes, the server can only be joined via public ip / port as there is no dns resolution implemented (and I don't think that they are going to implement that in the future). Not sure if I understood you right with the port mapping, but there was an issue with the docker-compose file that always exposed port 8777. I already fixed this in the latest develop. It should now expose the port configured in .env if one is defined. Not sure if I understood you right with the port mapping So for example if the server is using 8777, then you do 12345:8777, which doesn't work. I am not sure why this would happen, but you have to match the internal port and the one exposed: set the port env to 12345, then 12345:12345 I am not sure why this would happen, but you have to match the internal port and the one exposed: set the port env to 12345, then 12345:12345 the server being unable to connect via a private ip address, like 192.168.x.x, kinda sucks. Just to explain a bit why this happens: the cause is just how the dedicated server system for Astroneer is built. It's more complicated in comparison to e.g. a Minecraft dedicated server, where you just have an application listening on a port which the client connects to. For Astroneer there are 3 components: the game client, the dedicated server and the 3rd-party Playfab service. When the dedicated server starts, it will start listening on the game port (e.g. 7777) on the UDP protocol for any interface AND register itself as a running server with the Playfab service, passing along its "URL", which is the IPv4 address in the PublicIP field and the port separated by a colon. Additionally, the server won't start when given a local-range IPv4 address. It will then regularly send heartbeat messages to Playfab, telling it that it's still online and reporting its status (player count, etc.). When a game client now wants to connect to the server, it first asks Playfab about the entered "URL" (again IP:Port) and the server's details. If nothing is there, the server will show as offline; if it gets something, it will display the server as online and show the player count from Playfab. If the server is online, the player can click to join, and only then will the game try to connect to the dedicated server itself via the URL given by Playfab (so the public IP and game port).
Because of this, the IP has to be the public one and the local and public port must match. NOTE: This is how I understood the system from asking other people, trying things out and writing AstroTuxLauncher, since there is no official documentation on any of this currently, so some things may not be fully accurate. https://github.com/barumel/docker-astroneer-server/issues/6#issuecomment-2496511185 Thanks for the explanation!
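For anyone landing on this thread with the same symptom, here is a small standalone Python sketch of the IP sanity check described above. It is not part of the docker-astroneer-server image; the ini path is assumed to be the generated AstroServerSettings.ini, and the public-IP lookup service is just one arbitrary choice.

    # Minimal sketch, assuming AstroServerSettings.ini contains a line like "PublicIP=1.2.3.4".
    import re
    import urllib.request

    def check_public_ip(ini_path="AstroServerSettings.ini"):
        with open(ini_path) as handle:
            text = handle.read()
        match = re.search(r"^PublicIP=(\S+)", text, re.MULTILINE)
        configured_ip = match.group(1) if match else None
        # api.ipify.org returns the caller's public IPv4 address as plain text.
        actual_ip = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()
        if configured_ip != actual_ip:
            print(f"Mismatch: config has {configured_ip}, current public IP is {actual_ip}")
        else:
            print(f"PublicIP matches the current public address ({actual_ip})")

    if __name__ == "__main__":
        check_public_ip()

Remember that, as explained above, the public and private game ports must also be identical; this sketch only covers the IP half of the check.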
gharchive/issue
2024-10-06T23:13:17
2025-04-01T04:56:08.136833
{ "authors": [ "JoeJoeTV", "KrisCris", "barumel", "hexonMD" ], "repo": "barumel/docker-astroneer-server", "url": "https://github.com/barumel/docker-astroneer-server/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
123885752
security https://github.com/moajs/moa-api/issues/15 https://github.com/krakenjs/lusca https://github.com/koajs/koa-lusca
gharchive/issue
2015-12-25T15:10:37
2025-04-01T04:56:08.142131
{ "authors": [ "i5ting" ], "repo": "base-n/base2-core", "url": "https://github.com/base-n/base2-core/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1849896231
feat: custom button text on external drop cards Description Deploy Notes Notes regarding deployment of the contained body of work. These should note any db migrations, nginx routes, infrastructure changes, and anything that must be done before deployment. [ ] Need to add environment variables ENV_VAR= Screenshots (if appropriate) Review Error for lauchness @ 2023-08-14 14:39:12 UTC User failed mfa authentication, public email is not set on your github profile. see go/mfa-help Review Error for ximxim @ 2023-08-14 14:41:19 UTC User failed mfa authentication, see go/mfa-help
gharchive/pull-request
2023-08-14T14:26:19
2025-04-01T04:56:08.145408
{ "authors": [ "cb-heimdall", "wilsoncusack" ], "repo": "base-org/onchainsummer.xyz", "url": "https://github.com/base-org/onchainsummer.xyz/pull/248", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2366059780
Deploy stopped working since v1.7: docker stderr: context "<name>" does not exist Hi, thanks for this great package! Today i tried to use the latest Kamal image and i got the error below. Looks like it's not able to inspect / create the docker context. Logs from kamal deploy -d production --verbose using ghcr.io/basecamp/kamal:v1.7.0 but it happens with both v1.7.0 and v1.7.1. Status: Downloaded newer image for ghcr.io/basecamp/kamal:v1.7.0 Log into image registry... ... removed for brevity ... DEBUG [f5a53d64] Login Succeeded INFO [f5a53d64] Finished in 1.720 seconds with exit status 0 (successful). Build and push app image... INFO [dbc88fd7] Running docker --version && docker buildx version on localhost DEBUG [dbc88fd7] Command: docker --version && docker buildx version DEBUG [dbc88fd7] Docker version 20.10.24, build 297e1284d3bd092e9bc96076c3ddc4bb33f8c7ab DEBUG [dbc88fd7] github.com/docker/buildx v0.15.0 d3a53189f7e9c917eeff851c895b9aad5a66b108 INFO [dbc88fd7] Finished in 0.078 seconds with exit status 0 (successful). INFO Cloning repo into build directory `/tmp/kamal-clones/my-app-2f65914456263/workdir/`... INFO [08f28ec4] Running /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263 clone /workdir on localhost DEBUG [08f28ec4] Command: /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263 clone /workdir DEBUG [08f28ec4] Cloning into 'workdir'... DEBUG [08f28ec4] done. INFO [08f28ec4] Finished in 0.200 seconds with exit status 0 (successful). INFO [ecc25a58] Running /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ status --porcelain on localhost DEBUG [ecc25a58] Command: /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ status --porcelain INFO [ecc25a58] Finished in 0.011 seconds with exit status 0 (successful). INFO [d71a3f71] Running /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ rev-parse HEAD on localhost DEBUG [d71a3f71] Command: /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ rev-parse HEAD DEBUG [d71a3f71] eec8108d97d18d6aff73362813289615131c86d7 INFO [d71a3f71] Finished in 0.002 seconds with exit status 0 (successful). 
INFO [5bb8e984] Running docker context inspect kamal-my-app-native-remote-amd64 --format '{{.Endpoints.docker.Host}}' on localhost DEBUG [5bb8e984] Command: docker context inspect kamal-my-app-native-remote-amd64 --format '{{.Endpoints.docker.Host}}' DEBUG [5bb8e984] context "kamal-my-app-native-remote-amd64" does not exist DEBUG [5bb8e984] WARN Missing compatible builder, so creating a new one first Finished all in 2.8 seconds ERROR (SSHKit::Command::Failed): docker exit status: 256 docker stdout: Nothing written docker stderr: context "kamal-my-app-native-remote-amd64" does not exist /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/command.rb:97:in `exit_status=' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/local.rb:59:in `block in execute_command' /usr/local/lib/ruby/3.2.0/open3.rb:228:in `popen_run' /usr/local/lib/ruby/3.2.0/open3.rb:103:in `popen3' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/local.rb:44:in `execute_command' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/abstract.rb:148:in `block in create_command_and_execute' <internal:kernel>:90:in `tap' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/abstract.rb:148:in `create_command_and_execute' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/abstract.rb:66:in `capture' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/sshkit_with_ext.rb:9:in `capture_with_info' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/cli/build.rb:38:in `block in push' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/abstract.rb:31:in `instance_exec' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/backends/abstract.rb:31:in `run' /usr/local/bundle/gems/sshkit-1.22.2/lib/sshkit/dsl.rb:10:in `run_locally' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/cli/build.rb:36:in `push' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/cli/build.rb:8:in `deliver' /usr/local/bundle/gems/thor-1.3.0/lib/thor/command.rb:28:in `run' /usr/local/bundle/gems/thor-1.3.0/lib/thor/invocation.rb:127:in `invoke_command' /usr/local/bundle/gems/thor-1.3.0/lib/thor.rb:527:in `dispatch' /usr/local/bundle/gems/thor-1.3.0/lib/thor/invocation.rb:116:in `invoke' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/cli/main.rb:35:in `block in deploy' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/cli/base.rb:75:in `print_runtime' /usr/local/bundle/gems/kamal-1.7.0/lib/kamal/cli/main.rb:24:in `deploy' /usr/local/bundle/gems/thor-1.3.0/lib/thor/command.rb:28:in `run' /usr/local/bundle/gems/thor-1.3.0/lib/thor/invocation.rb:127:in `invoke_command' /usr/local/bundle/gems/thor-1.3.0/lib/thor.rb:527:in `dispatch' /usr/local/bundle/gems/thor-1.3.0/lib/thor/base.rb:584:in `start' /usr/local/bundle/gems/kamal-1.7.0/bin/kamal:9:in `<top (required)>' /usr/local/bundle/bin/kamal:25:in `load' /usr/local/bundle/bin/kamal:25:in `<main>' The same command using v1.6.0 works. Logs from kamal deploy -d production --verbose using ghcr.io/basecamp/kamal:v1.6.0. Status: Downloaded newer image for ghcr.io/basecamp/kamal:v1.6.0 Log into image registry... ... removed for brevity ... DEBUG [f0cb5b8b] Login Succeeded INFO [f0cb5b8b] Finished in 1.774 seconds with exit status 0 (successful). Build and push app image... 
INFO [8796fd53] Running docker --version && docker buildx version on localhost DEBUG [8796fd53] Command: docker --version && docker buildx version DEBUG [8796fd53] Docker version 20.10.24, build 297e1284d3bd092e9bc96076c3ddc4bb33f8c7ab DEBUG [8796fd53] github.com/docker/buildx v0.14.1 59582a88fca7858dbe1886fd1556b2a0d79e43a3 INFO [8796fd53] Finished in 0.083 seconds with exit status 0 (successful). INFO Cloning repo into build directory `/tmp/kamal-clones/my-app-2f65914456263/workdir/`... INFO [a6a7a86b] Running /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263 clone /workdir on localhost DEBUG [a6a7a86b] Command: /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263 clone /workdir DEBUG [a6a7a86b] Cloning into 'workdir'... DEBUG [a6a7a86b] done. INFO [a6a7a86b] Finished in 0.236 seconds with exit status 0 (successful). INFO [c2ff35c9] Running /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ status --porcelain on localhost DEBUG [c2ff35c9] Command: /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ status --porcelain INFO [c2ff35c9] Finished in 0.012 seconds with exit status 0 (successful). INFO [f3dbccb0] Running /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ rev-parse HEAD on localhost DEBUG [f3dbccb0] Command: /usr/bin/env git -C /tmp/kamal-clones/my-app-2f65914456263/workdir/ rev-parse HEAD DEBUG [f3dbccb0] eec8108d97d18d6aff73362813289615131c86d7 INFO [f3dbccb0] Finished in 0.002 seconds with exit status 0 (successful). INFO [16895a4f] Running docker buildx build --push --platform linux/amd64 --builder kamal-my-app-native-remote -t ghcr.io/edolix/my-app:eec8108d97d18d6aff73362813289615131c86d7 -t ghcr.io/edolix/my-app:latest-production --label service="my-app" --file Dockerfile . on localhost DEBUG [16895a4f] Command: docker buildx build --push --platform linux/amd64 --builder kamal-my-app-native-remote -t ghcr.io/edolix/my-app:eec8108d97d18d6aff73362813289615131c86d7 -t ghcr.io/edolix/my-app:latest-production --label service="my-app" --file Dockerfile . DEBUG [16895a4f] ERROR: no builder "kamal-my-app-native-remote" found WARN Missing compatible builder, so creating a new one first DEBUG Using builder: native/remote INFO [77f58e4a] Running docker context create kamal-my-app-native-remote-amd64 --description 'kamal-my-app-native-remote amd64 native host' --docker 'host=' ; docker buildx create --name kamal-my-app-native-remote kamal-my-app-native-remote-amd64 --platform linux/amd64 on localhost DEBUG [77f58e4a] Command: docker context create kamal-my-app-native-remote-amd64 --description 'kamal-my-app-native-remote amd64 native host' --docker 'host=' ; docker buildx create --name kamal-my-app-native-remote kamal-my-app-native-remote-amd64 --platform linux/amd64 DEBUG [77f58e4a] kamal-my-app-native-remote-amd64 DEBUG [77f58e4a] Successfully created context "kamal-my-app-native-remote-amd64" DEBUG [77f58e4a] kamal-my-app-native-remote INFO [77f58e4a] Finished in 0.066 seconds with exit status 0 (successful). INFO [4e267505] Running docker buildx build --push ..... ... 
logs keep going till deploy succeeds Context I'm using the workaround described in https://github.com/basecamp/kamal/issues/809 so there's a dummy config/deploy.yml and a real config/deploy.production.yml with these settings: deploy.production.yml service: my-app image: edolix/my-app servers: web: hosts: - my-app.egallo.dev labels: traefik.http.services.my-app-web-production.loadbalancer.server.port: "8000" traefik.docker.network: private traefik.http.routers.smart_track.rule: Host(`my-app.egallo.dev`) traefik.http.routers.smart_track.entrypoints: websecure traefik.http.routers.smart_track.tls.certresolver: letsencrypt traefik.http.routers.smart_track_secure.entrypoints: websecure traefik.http.routers.smart_track_secure.rule: Host(`my-app.egallo.dev`) traefik.http.routers.smart_track_secure.tls: true traefik.http.routers.smart_track_secure.tls.certresolver: letsencrypt options: "add-host": host.docker.internal:host-gateway network: "private" registry: server: ghcr.io username: - KAMAL_REGISTRY_USERNAME password: - KAMAL_REGISTRY_PASSWORD # Inject ENV variables into containers (secrets come from .env). # Remember to run `kamal env push` after making changes! env: clear: HOSTNAME: my-app.egallo.dev secret: - removed_for_brevity # Use a different ssh user than root ssh: user: ubuntu builder: remote: arch: amd64 healthcheck: path: /up port: 8000 accessories: db: image: postgres:16.0 roles: - web env: secret: - POSTGRES_PASSWORD directories: - data:/var/lib/postgresql/data options: network: "private" traefik: options: publish: - "443:443" volume: - "/letsencrypt/acme.json:/letsencrypt/acme.json" network: "private" args: accesslog: true accesslog.format: json log: true log.level: DEBUG entryPoints.web.address: ":80" entryPoints.websecure.address: ":443" entryPoints.web.http.redirections.entryPoint.to: websecure entryPoints.web.http.redirections.entryPoint.scheme: https entryPoints.web.http.redirections.entrypoint.permanent: true entrypoints.websecure.http.tls: true entrypoints.websecure.http.tls.domains[0].main: "my-app.egallo.dev" certificatesResolvers.letsencrypt.acme.email: "edo91.gallo@gmail.com" certificatesResolvers.letsencrypt.acme.storage: "/letsencrypt/acme.json" certificatesResolvers.letsencrypt.acme.httpchallenge: true certificatesResolvers.letsencrypt.acme.httpchallenge.entrypoint: web Docker version Client: Cloud integration: v1.0.35+desktop.13 Version: 26.0.0 API version: 1.45 Go version: go1.21.8 Git commit: 2ae903e Built: Wed Mar 20 15:14:46 2024 OS/Arch: darwin/arm64 Context: desktop-linux Server: Docker Desktop 4.29.0 (145265) Engine: Version: 26.0.0 API version: 1.45 (minimum version 1.24) Go version: go1.21.8 Git commit: 8b79278 Built: Wed Mar 20 15:18:02 2024 OS/Arch: linux/arm64 Experimental: false containerd: Version: 1.6.28 GitCommit: ae07eda36dd25f8a1b98dfbf587313b99c0190bb runc: Version: 1.1.12 GitCommit: v1.1.12-0-g51d5e94 docker-init: Version: 0.19.0 GitCommit: de40ad0 Still digging through trying to understand where could be the issue but the ticket might be helpful for others. Thanks again! Could it be around these lines: https://github.com/basecamp/kamal/blob/4697f894411af5f6e245c15c84b5073bc48edd04/lib/kamal/cli/build.rb#L45-L47 the error message from v1.7 is context "kamal-my-app-native-remote-amd64" does not exist while in v1.6.0 was ERROR: no builder "kamal-smart-track-native-remote" found. 
Hmm interesting, when I run docker context inspect kamal-my-app-native-remote-amd64 --format '{{.Endpoints.docker.Host}}' which doesn't exist, I get this error message: context "kamal-my-app-native-remote-amd64": context not found: open <snip>/.docker/contexts/meta/ad3cdf99c2f765ec10c20f6e8d60aac5f39e063f514574d5863c922f20ec6216/meta.json: no such file or directory So I'd be interested to know why you are getting a different error message. In any case let's update the matcher to include does not exist. Looks like the does not exist error message is coming from the cli.remove call. It will run docker context rm kamal-app-native-remote-amd64; docker buildx rm kamal-app-native-remote where docker context rm returns exactly context "kamal-app-native-remote-amd64" does not exist. I don't understand why this WARN line before cli.remove didn't show up in the logs tho. Is the output overridden by the docker error message? @edolix - the stacktrace is from line 38 of build.rb, so it looks like it is from the docker inspect command. I've released v1.7.2 with a fix for this - could you confirm if that's worked? I had the same issue, but didn't investigate further. I thought it might have to do something with the multiarch build (as this made it work again) and my newly setup server with Ubuntu 22.04 instead of doing builds on my M2.
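For illustration only, and not Kamal's actual Ruby implementation: a small Python sketch of the matcher change discussed above, where both the "does not exist" and "context not found" wordings from docker context inspect are treated as "context missing, create the builder". The docker command is the one quoted in this thread; everything else is an assumption.

    # Sketch only: the matching logic is illustrative, not Kamal's code.
    import subprocess

    MISSING_CONTEXT_MARKERS = ("does not exist", "context not found")

    def context_host(name):
        """Return the context's docker host, or None when the context is missing."""
        result = subprocess.run(
            ["docker", "context", "inspect", name, "--format", "{{.Endpoints.docker.Host}}"],
            capture_output=True, text=True)
        if result.returncode != 0:
            if any(marker in result.stderr for marker in MISSING_CONTEXT_MARKERS):
                return None  # missing context: caller should create the context and buildx builder
            raise RuntimeError(result.stderr.strip())
        return result.stdout.strip()

    print(context_host("kamal-my-app-native-remote-amd64"))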
gharchive/issue
2024-06-21T08:53:48
2025-04-01T04:56:08.157944
{ "authors": [ "djmb", "edolix", "plattenschieber" ], "repo": "basecamp/kamal", "url": "https://github.com/basecamp/kamal/issues/851", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1786981442
Create LICENSE Resolves #2 As already mentioned: https://github.com/bashnick/transformer/blob/d953b08dd6be5e5a5fe19a83b03eb61bb97e4c45/README.md?plain=1#L33 @bashnick Friendly ping 🏓
gharchive/pull-request
2023-07-04T01:04:31
2025-04-01T04:56:08.164137
{ "authors": [ "szepeviktor" ], "repo": "bashnick/transformer", "url": "https://github.com/bashnick/transformer/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
182271581
Merge upstream commits Avoid Basho fork drifting too far from upstream project. :+1: 00bfcb6
gharchive/pull-request
2016-10-11T13:59:51
2025-04-01T04:56:08.166571
{ "authors": [ "javajolt", "kesslerm" ], "repo": "basho/bear", "url": "https://github.com/basho/bear/pull/4", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
104668673
Please add a License Would you please be so kind as to add a license to this repository, so it is clear how it is licensed out? I am assuming you are intending Apache V2, since that seems to be what you use, and also what is written in the header of lager_syslog_backend.erl, but if you could explicitly add a license, it would help greatly! Thank you! Added in 62f3a41. Should be merged pretty soon. That's fantastic. Thank you very much!
gharchive/issue
2015-09-03T11:04:33
2025-04-01T04:56:08.168154
{ "authors": [ "mrallen1", "sebastian" ], "repo": "basho/lager_syslog", "url": "https://github.com/basho/lager_syslog/issues/19", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
152685621
Improve SSL warning [JIRA: CLIENTS-835] When you use Python and an "old" version of OpenSSL, this code will emit a warning: https://github.com/basho/riak-python-client/blob/master/riak/security.py#L41-L45 https://github.com/basho/riak-python-client/blob/master/riak/security.py#L60-L64 Since Python 2.6 isn't supported anymore, this could be simplified. It would also be nice to not emit this warning unless a TLS/SSL connection is attempted. @dannylauca voila :smile:
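A minimal sketch of the second suggestion, deferring the warning until a TLS/SSL connection is actually attempted. The function name and version threshold below are illustrative, not the riak client's real API.

    import socket
    import ssl
    import warnings

    # Illustrative threshold; the client's real check may differ.
    OPENSSL_TOO_OLD = ssl.OPENSSL_VERSION_INFO < (1, 0, 1)

    def start_tls_connection(host, port):
        # Warn lazily, only when security is actually requested, instead of at import time.
        if OPENSSL_TOO_OLD:
            warnings.warn("Found %s; TLSv1.2 may be unavailable, consider upgrading OpenSSL"
                          % ssl.OPENSSL_VERSION)
        context = ssl.create_default_context()
        raw = socket.create_connection((host, port))
        return context.wrap_socket(raw, server_hostname=host)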
gharchive/issue
2016-05-03T02:10:56
2025-04-01T04:56:08.170661
{ "authors": [ "lukebakken" ], "repo": "basho/riak-python-client", "url": "https://github.com/basho/riak-python-client/issues/458", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
31909430
riak_kv_gcounter dialyzer errors riak_kv_gcounter.erl:64: Invalid type specification for function riak_kv_gcounter:new/2. The success typing is (_,pos_integer()) -> {'ok',riak_kv_gcounter:gcounter()} riak_kv_gcounter.erl:75: Invalid type specification for function riak_kv_gcounter:update/3. The success typing is ('increment' | {'increment',pos_integer()},_,riak_kv_gcounter:gcounter()) -> {'ok',riak_kv_gcounter:gcounter()} riak_kv_gcounter.erl:85: Function merge/2 has no local return riak_kv_gcounter.erl:86: The call riak_kv_gcounter:merge(GCnt1::any(),GCnt2::any(),[]) does not have an opaque term of type riak_kv_gcounter:gcounter() as 3rd argument riak_kv_gcounter.erl:99: The call riak_kv_gcounter:merge(Rest::any(),RestOfClock2::[tuple()],[{_,_},...]) does not have opaque terms as 2nd and 3rd arguments riak_kv_gcounter.erl:101: The call riak_kv_gcounter:merge(Rest::any(),Clock2::[tuple()],[{_,_},...]) does not have opaque terms as 2nd and 3rd arguments Some of these are pretty domain-specific, so I figured I'd file an issue for now. For example, I'm not sure what types should be integer() vs. pos_integer(), etc. The #1164 (merged into 2.1 branch) deals with these warnings. Is it sufficient to close this issue?
gharchive/issue
2014-04-21T17:30:32
2025-04-01T04:56:08.172132
{ "authors": [ "hmmr", "reiddraper" ], "repo": "basho/riak_kv", "url": "https://github.com/basho/riak_kv/issues/922", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
625810131
WIP: Issue/#83 release annotations Hello @ceckoslab Here are the latest changes required to achieve dynamic release annotations. Kindly review and let me know. P.S. I also need to update the tests to reflect the latest changes. Hello @ceckoslab I fixed the currently existing tests and prepared to implement dynamic release tests. I guess we need some tests for that. Also I think we need to implement releases for other types of diagrams, am I right? So I hope tomorrow we will have tests and releases implemented for all kinds of diagrams. Hello @korzol I think that it looks fine for now. Please do not implement more things for other diagram types because I am planning to do some refactoring around the view logic. Could you go into details about what and where you plan to implement more tests? Hello @ceckoslab I thought about adding another file into the /tests/BasicRum/DiagramBuilder/ directory. Let's say DevicePerformaceTest.php or DevicePerformaceDesktopTest.php, which will test diagram building for device performance (desktop), which is time_series and where dynamic releases are applicable. Hello @korzol Could you write one more test for when the Data Layer returns an empty result set? I remember that we were getting a notice for an undefined variable in this case and our tests were failing. Hello @korzol Is this PR still a WIP? If not, could you remove the WIP part of the title?
gharchive/pull-request
2020-05-27T15:44:49
2025-04-01T04:56:08.176856
{ "authors": [ "ceckoslab", "korzol" ], "repo": "basicrum/backoffice", "url": "https://github.com/basicrum/backoffice/pull/127", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1404198380
alternative solution Hi Thank you very much for this project, it helped me a lot when trying to play DayZ with friends. But over time, I realized that I lacked the server browser, history, favorites and automatic installation of mods. And I tried to implement this in the project https://github.com/WoozyMasta/dayz-ctl Perhaps it will be useful or interesting, so I decided to share it here Cool project, thanks for sharing. You should however post this somewhere else instead of here, as it doesn't belong on this issue tracker.
gharchive/issue
2022-10-11T08:11:26
2025-04-01T04:56:08.187780
{ "authors": [ "WoozyMasta", "bastimeyer" ], "repo": "bastimeyer/dayz-linux-cli-launcher", "url": "https://github.com/bastimeyer/dayz-linux-cli-launcher/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2108166802
Generate README files With "next steps" if any With documentation about how Vike is used The following is an example of generated README: This app have been created with Bati using the following flags: --eslint --solid --edgedb To-Do EdgeDB Setup If EdgeDB CLI is not yet installed, execute the following command: curl --proto '=https' --tlsv1.2 -sSf https://sh.edgedb.com | sh Once the CLI is installed, you can initialize a project: edgedb project init Then follow instructions at https://www.edgedb.com/docs/intro/quickstart#set-up-your-schema About this app This app is ready to start thanks to Vike and SolidJS. In order to get familiar with Vike, here are some of the features that are already in place /pages/+config.h.ts This is the interface between Vike and your code. It imports/uses: A Layout component that wraps your Pages A customizable Head component A default title Routing By default, Vike does Filesystem Routing: the URL of a page is determined based on where its +Page.tsx (or +config.h.ts) file is located on the filesystem. If you want to deep dive into routing, Vike lets you choose between: Server Routing and Client Routing Filesystem Routing, Route Strings and Route Functions /pages/_error/+Page.tsx An error page which is rendered when errors occurs. /pages/+onPageTransitionStart.ts and /pages/+onPageTransitionEnd.ts The onPageTransitionStart() hook, together with onPageTransitionEnd(), enables you to implement page transition animations. ssr by default You can disable SSR for all your pages, or only for some pages while still using SSR for your other pages. HTML Streaming support Can be enabled/disabled for all your pages, or only for some pages while still using it for others. @AurelienLourot @brillout any wording that you would change/add/remove? Love it 💯 With "next steps" Neat, I like that wording. Maybe we can even call it # Next Steps instead of # To-Do. I made a PR: https://github.com/batijs/bati/pull/183. Feel free to reject all/some of it. (I've no strong opinions here, just slight personal preferences thus feel more then free to reject all/parts of it.) (FYI I think I found a new improved way to communicate the whole fake imports thing, enabling us to remove the .h. file extension.) LGTM ✨ (Tiny typo at „app havehas been created“.)
gharchive/pull-request
2024-01-30T15:38:42
2025-04-01T04:56:08.211630
{ "authors": [ "brillout", "magne4000" ], "repo": "batijs/bati", "url": "https://github.com/batijs/bati/pull/182", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1909693864
🛑 SAT - Portal CFDI is down In 1d44a68, SAT - Portal CFDI (https://portalcfdi.facturaelectronica.sat.gob.mx) was down: HTTP code: 503 Response time: 196 ms Resolved: SAT - Portal CFDI is back up in 590dc8f after 10 minutes.
gharchive/issue
2023-09-23T03:37:47
2025-04-01T04:56:08.215084
{ "authors": [ "batnieluyo" ], "repo": "batnieluyo/sat-monitor", "url": "https://github.com/batnieluyo/sat-monitor/issues/167", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
66977177
Revert "Http server" msimerson wants to merge 1 commit into master Actually, he doesn't. haha
gharchive/pull-request
2015-04-07T19:50:58
2025-04-01T04:56:08.216134
{ "authors": [ "celesteking", "msimerson" ], "repo": "baudehlo/Haraka", "url": "https://github.com/baudehlo/Haraka/pull/913", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
325607841
How can checkboxes be bound/pre-selected? When editing, I want certain rows to be displayed as checked based on existing data. Is there a way to do this? Is there a checked event for the checkbox? The effect I want to achieve: when the checkbox of a row is checked, get all the data of that row. You can use the getCheckedData method to get the data of the checked rows, or the getCheckedTr method to get the DOM of the checked rows. Checking a row cannot be driven through code. There is no event to listen for when a single row gets checked, but you can use getCheckedData to retrieve the rows that are already checked. Alternatively, describe the detailed scenario you are running into; if this need really exists, an upgrade will be considered for it. Getting the data is indeed not a problem; the point is binding existing values through the row's primary key, which is typically needed when editing. I haven't found a suitable way to do this yet, so I manually iterated over each row's data, compared the primary keys, and checked the checkbox of the matching row.
gharchive/issue
2018-05-23T08:48:49
2025-04-01T04:56:08.218266
{ "authors": [ "baukh789", "cuitwangshicheng", "cupPhone" ], "repo": "baukh789/GridManager", "url": "https://github.com/baukh789/GridManager/issues/86", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2187363338
toolchains_llvm@1.0.0 Release: https://github.com/bazel-contrib/toolchains_llvm/releases/tag/1.0.0 @fmeum, not sure how the new presubmit and review process is supposed to work. I can not add reviewers to the PR. It's in the process of being automated, right now these operations can only be performed by members of a certain GitHub team.
gharchive/pull-request
2024-03-14T22:20:02
2025-04-01T04:56:08.229040
{ "authors": [ "fmeum", "siddharthab" ], "repo": "bazelbuild/bazel-central-registry", "url": "https://github.com/bazelbuild/bazel-central-registry/pull/1626", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
512545246
Add Python script that generates //:repositories.bzl Just experimenting. NOT FOR REVIEW! Interesting. I'm not exactly sure where you are going, but there are a few things to think about: point to a repo release by git commit or release tag; retrieve the package from the canonical URL from github based on the tag; update the sha; see if the file is also in the mirror, download and verify the sha; use urls = [...] if found in the mirror. Most repos should depend on other repos by name only. If they insist on pinning to a specific version of that repo, it must be given another name. Personally, I don't find building repositories.bzl all that hard. Where I would like to see great improvement is in the UI. WORKSPACE should look like http_repository(name='bazel_federation', ...) load('@bazel_federation//:repositories.bzl', 'rules_foo', 'rules_bar', ...) load('@federation_generated', "federation_load") federation_load() Without all the mucking about with the individual setup rules for each repo. @sergiocampama started something like that. https://github.com/sergiocampama/pesto I have been meaning to get around to trying the same for the federation, but have not had the chance. I was trying to add rules_proto to the federation, but had serious trouble doing so because of a cyclic dependency between rules_proto, rules_cc and protobuf (and rules_java might join that circle soon-ish). Realistically, I saw two options for how to resolve this: (1) Create a single function that loads rules_cc, rules_proto and protobuf and make the current function an alias of it. This works fine as long as we have this limited set of rules, but doesn't solve the general case of cyclic deps. (2) Create a way of letting rules define their dependencies and compute the transitive closure of all dependencies from that. This could be done in Starlark (more or less just rename dependencies.py to deps.bzl), or have a script that reads the dependency declarations and emits repositories.bzl. Computing the transitive closure of deps is a fixpoint iteration on the dependency declarations, so it's O(|dependency_dict|^2), hence maybe a bit slow to do every time a load function is called. Ideally, these dependency definitions should live in the rules repo itself rather than the federation. It's easier to start by putting it in the federation, though. I think all your points could be added to the script eventually. I also have an idea of how to improve the setup, but that needs a little more time. (Kinda like npm repos have a package.json, Bazel could have a bpm.star to declare dependencies. Users would then declare their direct dependencies in bpm.star, run bpm write-workspace [or teach Bazel itself how to do it] and get a WORKSPACE file that has all loads and setups in it. I wasn't aware of @sergiocampama's repo, but I feel like it's very similar.)
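To make the second option concrete, here is a rough Python sketch of the fixpoint/transitive-closure computation over a dependency dictionary. The repository names and their dependency lists below are invented for illustration and are not the federation's actual declarations.

    # Hypothetical dependency declarations; in practice these would live with each rule set.
    DEPS = {
        "rules_proto": ["rules_cc", "protobuf"],
        "protobuf": ["rules_cc", "rules_proto"],  # note the cycle
        "rules_cc": [],
        "rules_foo": ["rules_proto"],
    }

    def transitive_deps(roots, deps=DEPS):
        """Return the transitive closure of `roots`, tolerating dependency cycles."""
        closure = set(roots)
        changed = True
        while changed:  # fixpoint iteration, worst case O(len(deps) ** 2)
            changed = False
            for repo in list(closure):
                for dep in deps.get(repo, []):
                    if dep not in closure:
                        closure.add(dep)
                        changed = True
        return sorted(closure)

    print(transitive_deps(["rules_foo"]))  # ['protobuf', 'rules_cc', 'rules_foo', 'rules_proto']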
gharchive/pull-request
2019-10-25T14:21:49
2025-04-01T04:56:08.236891
{ "authors": [ "Yannic", "aiuto" ], "repo": "bazelbuild/bazel-federation", "url": "https://github.com/bazelbuild/bazel-federation/pull/85", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
311179755
Add bazel RC support The current implementation uses --bazelrc=/dev/null, which means that bazel will ignore .bazelrc. This may block some use cases. CC: @natansil @ittaiz Btw, this is indeed one of the improvements this library needs. Ease hermetic dependency on external repositories. I think there's a bit of an easier way than what you did. I'll try to play around with it tomorrow. On Sun, 8 Apr 2018 at 12:21 Shachar Anchelovich notifications@github.com wrote: @anchlovi commented on this pull request. In javatests/build/bazel/tests/integration/BazelBaseTestCaseTest.java https://github.com/bazelbuild/bazel-integration-testing/pull/58#discussion_r179941400 : Command cmd = driver.bazel("test", "//:IntegrationTestSuiteTest"); final int exitCode = cmd.run(); org.hamcrest.MatcherAssert.assertThat(exitCode, is(successfulExitCode(cmd))); } @Test public void testUseBazelRcFile() throws Exception { setUpTestSuit("IntegrationTestSuiteTest"); To do so I'll need to set up many things manually so features like copy run files will work.
gharchive/pull-request
2018-04-04T10:52:04
2025-04-01T04:56:08.243758
{ "authors": [ "anchlovi", "ittaiz" ], "repo": "bazelbuild/bazel-integration-testing", "url": "https://github.com/bazelbuild/bazel-integration-testing/pull/58", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1033347470
Update jazzer This brings improved macOS support as well as configurable deps for java_fuzz_test. I updated Jazzer to a version with native support for M1 Macs. I don't know whether Honggfuzz supports it, but everything else in rules_fuzzing should.
gharchive/pull-request
2021-10-22T09:01:21
2025-04-01T04:56:08.327274
{ "authors": [ "fmeum" ], "repo": "bazelbuild/rules_fuzzing", "url": "https://github.com/bazelbuild/rules_fuzzing/pull/181", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1406949837
Improve validation of extension attribute to pkg_tar If the pkg_tar rule is called without specifying compressor, then extension should only be allowed to take one of these values exactly (with or without the leading dot) [^1] [^2]: https://github.com/bazelbuild/rules_pkg/blob/60dbd92d1ce3338cbb8adcb8ae8800ea3421d7b8/pkg/private/tar/tar.bzl#L35-L38 The current implementation will accept a value such as tar.gz.bz2.xz which implies the file has three layers of compression. It will also accept a value such as txt.gz which implies that it is a compressed plain text file, i.e. readable directly by zcat, zless, or similar. These should cause a failure. [^1]: A leading dot is silently removed in the attribute's value. [^2]: In this list, the leading dot before tar.xz is missing and should be restored. Should be done alongside #60
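To illustrate the intended validation, here is a sketch in Python syntax; an actual implementation would be Starlark inside the rule (reporting the error via fail()), and the allowed list must mirror exactly what tar.bzl defines rather than the indicative values used here.

    # Indicative list only; keep it in sync with the extensions declared in tar.bzl.
    ALLOWED_EXTENSIONS = ["tar", "tar.gz", "tgz", "tar.bz2", "tar.xz", "txz"]

    def validate_extension(extension):
        # A single leading dot is tolerated and silently dropped, matching current behaviour.
        normalized = extension[1:] if extension.startswith(".") else extension
        if normalized not in ALLOWED_EXTENSIONS:
            raise ValueError(
                "pkg_tar: extension %r must be one of %s when no compressor is given"
                % (extension, ALLOWED_EXTENSIONS))
        return normalized

    print(validate_extension(".tar.gz"))    # -> "tar.gz"
    # validate_extension("tar.gz.bz2.xz")   # would fail: implies three layers of compression
    # validate_extension("txt.gz")          # would fail: not a tar archive at all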
gharchive/issue
2022-10-13T00:14:31
2025-04-01T04:56:08.343620
{ "authors": [ "aiuto", "dpward" ], "repo": "bazelbuild/rules_pkg", "url": "https://github.com/bazelbuild/rules_pkg/issues/623", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
824032992
Devices do not support tenant association Currently there does not appear to be any way to associate imported devices with a tenant, so they are left hanging. This literally took 5 minutes to implement. Can you please test the branch "feature/add-vm_tenant_relation"? Thank you. Works beautifully! Thank you so much!!!!!
gharchive/issue
2021-03-07T22:02:37
2025-04-01T04:56:08.345236
{ "authors": [ "andymelichar", "bb-Ricardo" ], "repo": "bb-Ricardo/netbox-sync", "url": "https://github.com/bb-Ricardo/netbox-sync/issues/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1556694592
Update scalafmt-core to 3.7.1 Updates org.scalameta:scalafmt-core from 3.6.1 to 3.7.1. GitHub Release Notes - Version Diff I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "org.scalameta", artifactId = "scalafmt-core" } }] labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1 Superseded by #64. Superseded by #69.
gharchive/pull-request
2023-01-25T13:51:40
2025-04-01T04:56:08.350041
{ "authors": [ "scala-steward" ], "repo": "bbarker/diz", "url": "https://github.com/bbarker/diz/pull/58", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
56360833
Style multiline with + is indented The cop is correcting my code where a new line comes after a + at the end of the line. def method_name first_long_line_of_code + second_long_line_of_code + third_long_line_of_code end expected: def method_name first_long_line_of_code + second_long_line_of_code + third_long_line_of_code end Is there a configuration option for this? The log reports: [Corrected] Use 2 (not 0) spaces for indenting an expression spanning multiple lines. No. The style aligned (which is the default for Style/MultilineOperationIndentation) only enforces aligned operands when the first operand is preceded by something (a keyword such as if or an assignment to a variable). The reason why we want to indent the continuation of an operation like the one in your example is that we want to make clear that it's not the start of a new expression. I find your preferred style a bit strange, almost misleading. OK, I never looked at it this way. Thank you for the explanation. The example above is not the best one. Consider this situation: def method_name calculate_result_for( 1, 2, 34, 5, 6) + calculate_result_for(23, 234, 546, 4, 5) + calculate_result_for( 3, 4, 111, 1, 9) end Is there any possibility to turn it off for a single method? Absolutely! Just disable the cop locally. # rubocop:disable Style/MultilineOperationIndentation def method_name calculate_result_for( 1, 2, 34, 5, 6) + calculate_result_for(23, 234, 546, 4, 5) + calculate_result_for( 3, 4, 111, 1, 9) end # rubocop:enable Style/MultilineOperationIndentation Another idea for how to get out of the problem is this: def method_name result = calculate_result_for( 1, 2, 34, 5, 6) + calculate_result_for(23, 234, 546, 4, 5) + calculate_result_for( 3, 4, 111, 1, 9) result end Thank you very much.
gharchive/issue
2015-02-03T10:41:23
2025-04-01T04:56:08.356643
{ "authors": [ "jonas054", "sufleR" ], "repo": "bbatsov/rubocop", "url": "https://github.com/bbatsov/rubocop/issues/1628", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1319683358
Regex + RichText translation Hello! I have a string in a game that I want to create a regex for, that looks like <color\=white>総ユニット数:18/30</color> (<color\=#80ffffff>未配置:1</color> <color\=#80ff80ff>配置済:9</color> <color\=#ffaa80ff>略奪:8</color> <color\=#ff4040ff>負傷:0</color> ) I try to create a regex for it like this: r:"^<color\\=(#?\w+)>総ユニット数:(\d+)\/(\d+)<\/color> \(<color\\=#80ffffff>未配置:(\d+)<\/color> <color\\=#80ff80ff>配置済:(\d+)<\/color> <color\\=#ffaa80ff>略奪:(\d+)<\/color> <color\\=#ff4040ff>負傷:(\d+)<\/color> \)$"=<color\=$1>Total number of units:$2/$3</color> (<color\=#80ffffff>Unassigned:$4</color> <color\=#80ff80ff>Deployed: $5</color> <color\=#ffaa80ff>Looting:$6</color> <color\=#ff4040ff>Injured: $7</color> ) But it does not work. I have checked, the whole string matches the regex. I have also tried setting MaxTextParserRecursion=2 and using separate regexes for rich text parts, like this: r:"未配置:(\d+)"=Unassigned:$1 r:"配置済:(\d+)"=Deployed:$1 r:"略奪:(\d+)"=Looting:$1 r:"負傷:(\d+)"=Injured:$1``` It also didn't work. Can anybody help me with it? Okay, turns out that TemplateAllNumberAway=True messes up matching, and the string that Translator tries to match to regex looks like this: <color=white>総ユニット数:{{A}}</color> (<color=#{{B}}ffffff>未配置:{{C}}</color> <color=#{{D}}ff{{E}}ff>配置済:{{F}}</color> <color=#ffaa{{G}}ff>略奪:{{H}}</color> <color=#ff{{I}}ff>負傷:{{J}}</color> ) So what worked for me was to add several substituted strings that correspond to RichText parts: 未配置:{{A}}=Unassigned:{{A}} 配置済:{{A}}=Deployed:{{A}} 略奪:{{A}}=Looting:{{A}} 負傷:{{A}}=Injured:{{A}}``` Still, problem of regex matching with `TemplateAllNumberAway=True` probably needs some work. I have a similar problem with regex matching: XUnity output this translation line: 每一点根骨提供基础法术强度+4%,速度+0.1\n\n当前:基础法术强度 14.0=Each point of root bone provides +4% base spell strength and +0.1 speed\n\nCurrent: Base Spell Strength 14.0 When I tried to replace it with this regex it fails to match: sr:"每一点根骨提供基础法术强度+4%,速度+0.1\n\n当前:基础法术强度 ([\d\.]+)"=Each point of root bone provides +4% base spell strength and +0.1 speed\n\nCurrent: Base Spell Strength $1 The text should be identical in both cases, and I have TemplateAllNumberAway=False in the config but it seems like that does no effect.
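One detail worth checking on that last sr: pattern (a general observation about regex syntax, not a statement about how the plugin parses its config): + and . are regex metacharacters, so 强度+4% in a pattern does not match the literal text 强度+4%. A quick standalone check in Python, using only the Chinese fragment from the line quoted above:

    import re

    text = "每一点根骨提供基础法术强度+4%,速度+0.1"
    unescaped = r"每一点根骨提供基础法术强度+4%,速度+0.1"
    escaped = re.escape(text)

    print(re.search(unescaped, text))  # None: "度+" means "one or more 度", so the literal + never matches
    print(re.search(escaped, text))    # matches once the metacharacters are escaped

If the plugin feeds these patterns to a standard regex engine, escaping the + (and ideally the .) on the key side of the sr: line would be the first thing to try.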
gharchive/issue
2022-07-27T14:51:28
2025-04-01T04:56:08.387073
{ "authors": [ "pipja", "zclimber" ], "repo": "bbepis/XUnity.AutoTranslator", "url": "https://github.com/bbepis/XUnity.AutoTranslator/issues/302", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
268314358
Crystal 0.23.1 Hi! I've tried this shard with the latest Crystal release, and it doesn't work. Do you have any plans to maintain this library? https://github.com/bbtfr/proxy.cr/pull/2 Sorry for the delay, I have been busy lately. I saw your PR, would you like to re-open it? Hi. I've made a lot of changes, including changing the file structure and renaming the project to http_proxy_server. You can check it here: https://github.com/mamantoha/http_proxy_server. I can create a new PR, but it will be incompatible with your repository. What do you think? Should we merge our repositories? Ok, I think a new repo would be fine, since I don't have much time to maintain this project.
gharchive/issue
2017-10-25T08:47:27
2025-04-01T04:56:08.414523
{ "authors": [ "bbtfr", "mamantoha" ], "repo": "bbtfr/proxy.cr", "url": "https://github.com/bbtfr/proxy.cr/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2602353319
🐛 [BUG]: Updating edge jumps to top handle Is there an existing issue for this? [X] I have searched the existing issues and this is a new bug. Current Behavior I have custom node with 4 source handles all around. <Handle id="source-left" type="source" :position="Position.Left" /> <Handle id="source-top" type="source" :position="Position.Top" /> <Handle id="source-right" type="source" :position="Position.Right" /> <Handle id="source-bottom" type="source" :position="Position.Bottom" /> Add 2 custom nodes in flow called A and B Connect an edge between (source) B-left and (target) A-left Try updating the edge target from B-left to B-bottom. As soon as you move the handle, the edge source jumps from A-left to A-top So, when updating the target to another handle, the source handle always jumps to top (temporarily). Expected Behavior Source handle should stay commected Steps To Reproduce No response Relevant log output No response Anything else? No response Please provide a proper reproduction of the issue. You can find a sandbox template here: https://codesandbox.io/p/devbox/vue-flow-basic-gfgro4 I’ll gladly check what’s wrong once I’m back from my vacation once you’ve provided a repro ^^ Ok, thanks, I'll provide a reproduction today Here is a link to a reproduction: https://codesandbox.io/p/devbox/adoring-violet-tzvkh7 Just drag the edge from node B and you'll see side A jumping from left to top. Will be fixed in the next release Fixed with 1.41.3 Fixed with 1.41.3 @bcakmakoglu Thank you very much for your efforts; however, I am still encountering an issue. Whenever I call addNodes or directly add new nodes and edges data, all the old node handles move to the top and bottom, which is quite confusing for me. Have you ever come across a similar problem? https://github.com/user-attachments/assets/e8c3b001-af05-4047-89f4-83a38214c463 A reproduction for that issue would be needed for me to debug it, just a video isn't telling me much about your code and what might be happening. A reproduction for that issue would be needed for me to debug it, just a video isn't telling me much about your code and what might be happening. I'm sorry, I just saw this. Here is the reproduction logic I wrote for the issue: https://stackblitz.com/edit/vitejs-vite-zk61qe?file=src%2FuseLayout.ts That's a bit too complex for a simple repro tbh, I can't figure out where you do what in that sandbox and it'd take me too much time to sit down and analyse everything in there 😅 Can you strip it down to as little as possible that still re-creates the issue.
gharchive/issue
2024-10-21T12:08:44
2025-04-01T04:56:08.437363
{ "authors": [ "Zambiorix", "bcakmakoglu", "yiwwhl" ], "repo": "bcakmakoglu/vue-flow", "url": "https://github.com/bcakmakoglu/vue-flow/issues/1647", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
267386351
ftp connect to ftp.broadinstitute.org failed I ran into an error while installing genome reference data: Running GGD recipe: hg19 hapmap 3.3 --2017-10-21 10:21:02-- ftp://gsapubftp-anonymous:password@ftp.broadinstitute.org/bundle/hg19/hapmap_3.3.hg19.sites.vcf.gz => `-' Resolving ftp.broadinstitute.org... 69.173.80.251 Connecting to ftp.broadinstitute.org|69.173.80.251|:21... connected. Logging in as gsapubftp-anonymous ... Login incorrect. Sorry about the issue, this is typically caused by Broad's servers being busy. We don't have any control over these external resources, but the installer/upgrade process you were running will restart from where it left off if you run with the same command again. Hopefully the server will be happier when retrying and finish cleanly for you. Hope this helps. Can we re-open this? FTP access is now disabled permanently, as documented here, and the Google bucket should be used instead.
gharchive/issue
2017-10-21T14:38:52
2025-04-01T04:56:08.450605
{ "authors": [ "chapmanb", "veggiesaurus", "weizhu365" ], "repo": "bcbio/bcbio-nextgen", "url": "https://github.com/bcbio/bcbio-nextgen/issues/2118", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
634321799
sqlalchemy.exc.InterfaceError: (sqlite3.InterfaceError) Error binding parameter 123 - probably unsupported type. Version info bcbio version (bcbio_nextgen.py --version):1.2.3 OS name and version: Ubuntu 18.04 bionic To Reproduce Exact bcbio command you have used bcbio_nextgen.py -t local -n 20 ../config/config.yaml Your sample configuration file details: - algorithm: aligner: bwa mark_duplicates: false remove_lcr: true tools_on: gemini variantcaller: somatic: vardict germline: freebayes svcaller: lumpy effects: snpeff effects_transcripts: canonical_cancer vcfanno: [gemini,somatic] analysis: variant2 description: normal-sample files: ../input/PD44692b_N_sorted.bam genome_build: hg38 metadata: batch: bcbio phenotype: normal - algorithm: aligner: bwa mark_duplicates: false remove_lcr: true tools_on: gemini variantcaller: somatic: vardict germline: freebayes svcaller: lumpy effects: snpeff effects_transcripts: canonical_cancer vcfanno: [gemini,somatic] analysis: variant2 description: tumor-sample files: ../input/PD44692a_T_sorted.bam genome_build: hg38 metadata: batch: bcbio phenotype: tumor fc_date: '2020-04-29' fc_name: oesophageal upload: dir: ../final resources: default: memory: 4G cores: 8 jvm_opts: ["-Xms2000m", "-Xmx4000m"] Observed behavior Error message or bcbio output. Expected behavior Traceback (most recent call last): File "/tools/software/bcbio_tools/bin/vcf2db.py", line 924, in <module> impacts_extras=a.impacts_field, aok=a.a_ok) File "/tools/software/bcbio_tools/bin/vcf2db.py", line 234, in __init__ self.load() File "/tools/software/bcbio_tools/bin/vcf2db.py", line 322, in load self._load(self.vcf, create=False, start=i+1) File "/tools/software/bcbio_tools/bin/vcf2db.py", line 306, in _load self.insert(variants, expanded, keys, i) File "/tools/software/bcbio_tools/bin/vcf2db.py", line 374, in insert vilengths, variant_impacts) File "/tools/software/bcbio_tools/bin/vcf2db.py", line 402, in _insert self.__insert(v_objs, self.metadata.tables['variants'].insert()) File "/tools/software/bcbio_tools/bin/vcf2db.py", line 436, in __insert raise e sqlalchemy.exc.InterfaceError: (sqlite3.InterfaceError) Error binding parameter 123 - probably unsupported type. 
[SQL: INSERT INTO variants (variant_id, chrom, start, "end", vcf_id, ref, alt, qual, filter, type, sub_type, call_rate, num_hom_ref, num_het, num_hom_alt, num_unknown, aaf, gene, ensembl_gene_id, transcript, is_exonic, is_coding, is_lof, is_splicing, is_canonical, exon, codon_change, aa_change, aa_length, biotype, impact, impact_so, impact_severity, polyphen_pred, polyphen_score, sift_pred, sift_score, ab, abp, ac, af, an, ao, cigar, db, decomposed, dp, dpb, dpra, epp, eppr, gti, len, lof, meanalt, mqm, mqmr, ns, numalt, odds, old_multiallelic, old_variant, paired, pairedr, pao, pqa, pqr, pro, qa, qr, ro, rpl, rpp, rppr, rpr, run, saf, sap, sar, srf, srp, srr, ac_adj_exac_afr, ac_adj_exac_amr, ac_adj_exac_eas, ac_adj_exac_fin, ac_adj_exac_nfe, ac_adj_exac_oth, ac_adj_exac_sas, ac_exac_all, af_adj_exac_afr, af_adj_exac_amr, af_adj_exac_eas, af_adj_exac_fin, af_adj_exac_nfe, af_adj_exac_oth, af_adj_exac_sas, af_esp_aa, af_esp_all, af_esp_ea, af_exac_all, an_adj_exac_afr, an_adj_exac_amr, an_adj_exac_eas, an_adj_exac_fin, an_adj_exac_nfe, an_adj_exac_oth, an_adj_exac_sas, an_exac_all, clinvar_disease_name, clinvar_sig, common_pathogenic, gnomad_ac, gnomad_af, gnomad_af_afr, gnomad_af_amr, gnomad_af_asj, gnomad_af_eas, gnomad_af_fin, gnomad_af_nfe, gnomad_af_oth, gnomad_af_popmax, gnomad_af_sas, gnomad_an, max_aaf_all, num_exac_het, num_exac_hom, rs_ids, gts, gt_types, gt_phases, gt_depths, gt_ref_depths, gt_alt_depths, gt_quals, gt_alt_freqs) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)] [parameters: (25566, u'chr1', 12827540, 12827541, u'rs182233185', u'G', u'A', 306.29998779296875, None, 'snp', 'ts', 1.0, 0, 1, 0, 0, 0.5, u'PRAMEF11', None, u'ENST00000619922.1', 1, 1, 0, 0, 0, u'3/4', u'c.583C>T', u'p.Arg195Cys', 478, u'protein_coding', u'missense_variant', u'missense_variant', 'MED', None, None, None, None, 0.33333298563957214, 15.315299987792969, 1, 0.5, 2, 17, u'1X', 1, 0, 51, 51.0, 0.0, 6.203639984130859, 7.097780227661133, 0, 1, 'None', 1.0, 45.76470184326172, 44.82350158691406, 1, 1, 70.51809692382812, None, 'None', 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 659, 1285, 34, 10.0, 4.159900188446045, 5.309510231018066, 7.0, 1, 7, 4.159900188446045, 10, 13, 7.097780227661133, 21, 504.0, 26.0, 0.0, 0.0, 4.0, 3.0, 0.0, 537.0, 0.057500001043081284, 0.002267200034111738, 0.0, 0.0, 6.263500108616427e-05, 0.0033859999384731054, 0.0, -1.0, -1.0, -1.0, 0.004611399956047535, 8762.0, 11468.0, 8454.0, 6592.0, 63862.0, 886.0, 16426.0, 116450.0, None, None, 0, 3, 0.00031895001302473247, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, (246644, 9406), 0.057500001043081284, 483.0, 27.0, u'rs182233185', <read-only buffer for 0x7fa46b128fc0, size -1, offset 0 at 0x7fa4637f17f0>, <read-only buffer for 0x7fa46b128f90, size -1, offset 0 at 0x7fa4637f1830>, <read-only buffer for 0x7fa46b12b030, size -1, offset 0 at 0x7fa4637f1870>, <read-only buffer for 0x7fa46b12b060, size -1, offset 0 at 0x7fa4637f18b0>, <read-only buffer for 0x7fa46b12b090, size -1, offset 0 at 0x7fa4637f18f0>, <read-only buffer for 0x7fa46b12b0c0, size -1, offset 0 at 0x7fa4637f1930>, <read-only buffer for 0x7fa46b12b0f0, size -1, offset 0 at 0x7fa4637f1970>, <read-only buffer for 
0x7fa46b12b120, size -1, offset 0 at 0x7fa4637f19b0>)] (Background on this error at: http://sqlalche.me/e/rvf5) ' returned non-zero exit status 1. Log files NA Additional context NA Hi @hocinebendou! We had a similar issue before #3087 I think here the problematic gnomad_exome.vcf.gz record is: gnomad_an = (246644, 9406) at chr1:12827541 If you re-install your hg38/variation/gnomad_exome.vcf.gz, that might help to solve the issue. To re-install: Delete gnomad_exome record from hg38/versions.csv run bcbio_nextgen.py upgrade -u skip --genomes hg38 --datatarget gemini In my updated installation this particular variant is decomposed: tabix gnomad_exome.vcf.gz chr1:12827541-12827542 | grep "AN=" chr1 12827541 rs1198360404 G A 1087.49 PASS AC=3;AC_afr=0;AC_afr_female=0;AC_afr_male=0;AC_amr=1;AC_amr_female=1;AC_amr_male=0;AC_asj=0;AC_asj_female=0;AC_asj_male=0;AC_eas=0;AC_eas_female=0;AC_eas_jpn=0;AC_eas_kor=0;AC_eas_male=0;AC_eas_oea=0;AC_female=1;AC_fin=0;AC_fin_female=0;AC_fin_male=0;AC_male=2;AC_nfe=2;AC_nfe_bgr=0;AC_nfe_est=0;AC_nfe_female=0;AC_nfe_male=2;AC_nfe_nwe=2;AC_nfe_onf=0;AC_nfe_seu=0;AC_nfe_swe=0;AC_oth=0;AC_oth_female=0;AC_oth_male=0;AC_popmax=1;AC_raw=6;AC_sas=0;AC_sas_female=0;AC_sas_male=0;AF=0.000318945;AF_afr=0;AF_afr_female=0;AF_afr_male=0;AF_amr=0.000625;AF_amr_female=0.00125628;AF_amr_male=0;AF_asj=0;AF_asj_female=0;AF_asj_male=0;AF_eas=0;AF_eas_female=0;AF_eas_kor=0;AF_eas_male=0;AF_eas_oea=0;AF_female=0.000234852;AF_fin=0;AF_fin_female=0;AF_fin_male=0;AF_male=0.0003885;AF_nfe=0.000509684;AF_nfe_est=0;AF_nfe_female=0;AF_nfe_male=0.000938967;AF_nfe_nwe=0.000609756;AF_nfe_onf=0;AF_oth=0;AF_oth_female=0;AF_oth_male=0;AF_popmax=0.000625;AF_raw=0.000102941;AF_sas=0;AF_sas_female=0;AF_sas_male=0;AN=9406;AN_afr=952;AN_afr_female=528;AN_afr_male=424;AN_amr=1600;AN_amr_female=796;AN_amr_male=804;AN_asj=228;AN_asj_female=156;AN_asj_male=72;AN_eas=866;AN_eas_female=404;AN_eas_jpn=0;AN_eas_kor=334;AN_eas_male=462;AN_eas_oea=532;AN_female=4258;AN_fin=108;AN_fin_female=34;AN_fin_male=74;AN_male=5148;AN_nfe=3924;AN_nfe_bgr=0;AN_nfe_est=6;AN_nfe_female=1794;AN_nfe_male=2130;AN_nfe_nwe=3280;AN_nfe_onf=638;AN_nfe_seu=0;AN_nfe_swe=0;AN_oth=368;AN_oth_female=154;AN_oth_male=214;AN_popmax=1600;AN_raw=58286;AN_sas=1360;AN_sas_female=392;AN_sas_male=968;BaseQRankSum=1.95;ClippingRankSum=0.406;DP=416844;FS=0;InbreedingCoeff=-0.0337;MQ=22.92;MQRankSum=-0.323;QD=8.06;ReadPosRankSum=0.406;SOR=0.707;allele_type=snv;faf95=8.604e-05;faf99=8.623e-05;n_alt_alleles=1;nhomalt=0;nhomalt_afr=0;nhomalt_afr_female=0;nhomalt_afr_male=0;nhomalt_amr=0;nhomalt_amr_female=0;nhomalt_amr_male=0;nhomalt_asj=0;nhomalt_asj_female=0;nhomalt_asj_male=0;nhomalt_eas=0;nhomalt_eas_female=0;nhomalt_eas_jpn=0;nhomalt_eas_kor=0;nhomalt_eas_male=0;nhomalt_eas_oea=0;nhomalt_female=0;nhomalt_fin=0;nhomalt_fin_female=0;nhomalt_fin_male=0;nhomalt_male=0;nhomalt_nfe=0;nhomalt_nfe_bgr=0;nhomalt_nfe_est=0;nhomalt_nfe_female=0;nhomalt_nfe_male=0;nhomalt_nfe_nwe=0;nhomalt_nfe_onf=0;nhomalt_nfe_seu=0;nhomalt_nfe_swe=0;nhomalt_oth=0;nhomalt_oth_female=0;nhomalt_oth_male=0;nhomalt_popmax=0;nhomalt_raw=0;nhomalt_sas=0;nhomalt_sas_female=0;nhomalt_sas_male=0;popmax=amr;variant_type=snv chr1 12827541 rs182233185 G C 3.34571e+06 PASS 
AC=1;AC_afr=0;AC_afr_female=0;AC_afr_male=0;AC_amr=0;AC_amr_female=0;AC_amr_male=0;AC_asj=0;AC_asj_female=0;AC_asj_male=0;AC_eas=0;AC_eas_female=0;AC_eas_jpn=0;AC_eas_kor=0;AC_eas_male=0;AC_eas_oea=0;AC_female=0;AC_fin=0;AC_fin_female=0;AC_fin_male=0;AC_male=1;AC_nfe=0;AC_nfe_bgr=0;AC_nfe_est=0;AC_nfe_female=0;AC_nfe_male=0;AC_nfe_nwe=0;AC_nfe_onf=0;AC_nfe_seu=0;AC_nfe_swe=0;AC_oth=0;AC_oth_female=0;AC_oth_male=0;AC_popmax=1;AC_raw=1;AC_sas=1;AC_sas_female=0;AC_sas_male=1;AF=4.05443e-06;AF_afr=0;AF_afr_female=0;AF_afr_male=0;AF_amr=0;AF_amr_female=0;AF_amr_male=0;AF_asj=0;AF_asj_female=0;AF_asj_male=0;AF_eas=0;AF_eas_female=0;AF_eas_jpn=0;AF_eas_kor=0;AF_eas_male=0;AF_eas_oea=0;AF_female=0;AF_fin=0;AF_fin_female=0;AF_fin_male=0;AF_male=7.45179e-06;AF_nfe=0;AF_nfe_bgr=0;AF_nfe_est=0;AF_nfe_female=0;AF_nfe_male=0;AF_nfe_nwe=0;AF_nfe_onf=0;AF_nfe_seu=0;AF_nfe_swe=0;AF_oth=0;AF_oth_female=0;AF_oth_male=0;AF_popmax=3.27718e-05;AF_raw=3.9946e-06;AF_sas=3.27718e-05;AF_sas_female=0;AF_sas_male=4.34934e-05;AN=246644;AN_afr=14778;AN_afr_female=8868;AN_afr_male=5910;AN_amr=34480;AN_amr_female=20208;AN_amr_male=14272;AN_asj=9962;AN_asj_female=4840;AN_asj_male=5122;AN_eas=18258;AN_eas_female=9252;AN_eas_jpn=130;AN_eas_kor=3812;AN_eas_male=9006;AN_eas_oea=14316;AN_female=112448;AN_fin=21162;AN_fin_female=10098;AN_fin_male=11064;AN_male=134196;AN_nfe=111446;AN_nfe_bgr=2516;AN_nfe_est=228;AN_nfe_female=48800;AN_nfe_male=62646;AN_nfe_nwe=41230;AN_nfe_onf=30100;AN_nfe_seu=11352;AN_nfe_swe=26020;AN_oth=6044;AN_oth_female=2860;AN_oth_male=3184;AN_popmax=30514;AN_raw=250338;AN_sas=30514;AN_sas_female=7522;AN_sas_male=22992;BaseQRankSum=4.83;ClippingRankSum=0.158;DP=19795847;FS=1.102;InbreedingCoeff=0.0793;MQ=35.75;MQRankSum=0.603;QD=13.27;ReadPosRankSum=0.241;SOR=0.58;allele_type=snv;faf95=0;faf99=0;n_alt_alleles=2;nhomalt=0;nhomalt_afr=0;nhomalt_afr_female=0;nhomalt_afr_male=0;nhomalt_amr=0;nhomalt_amr_female=0;nhomalt_amr_male=0;nhomalt_asj=0;nhomalt_asj_female=0;nhomalt_asj_male=0;nhomalt_eas=0;nhomalt_eas_female=0;nhomalt_eas_jpn=0;nhomalt_eas_kor=0;nhomalt_eas_male=0;nhomalt_eas_oea=0;nhomalt_female=0;nhomalt_fin=0;nhomalt_fin_female=0;nhomalt_fin_male=0;nhomalt_male=0;nhomalt_nfe=0;nhomalt_nfe_bgr=0;nhomalt_nfe_est=0;nhomalt_nfe_female=0;nhomalt_nfe_male=0;nhomalt_nfe_nwe=0;nhomalt_nfe_onf=0;nhomalt_nfe_seu=0;nhomalt_nfe_swe=0;nhomalt_oth=0;nhomalt_oth_female=0;nhomalt_oth_male=0;nhomalt_popmax=0;nhomalt_raw=0;nhomalt_sas=0;nhomalt_sas_female=0;nhomalt_sas_male=0;popmax=sas;variant_type=multi-snv Sergey @naumenko-sa. @pvanheus followed your instructions. 
However, this time I'm getting the following error: OpenBLAS blas_thread_init: pthread_create failed for thread 31 of 32: Resource temporarily unavailable OpenBLAS blas_thread_init: RLIMIT_NPROC 31668 current, 386078 max Traceback (most recent call last): File "/tools/software/bcbio/anaconda/bin/py", line 11, in <module> load_entry_point('pythonpy==0.4.11', 'console_scripts', 'py')() File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pkg_resources/__init__.py", line 490, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2853, in load_entry_point return ep.load() File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2444, in load return self.resolve() File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2450, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pythonpy/__main__.py", line 142, in <module> lazy_imports(args.expression, args.pre_cmd, args.post_cmd) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pythonpy/__main__.py", line 45, in lazy_imports import_matches(query) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pythonpy/__main__.py", line 38, in import_matches import_matches(query, prefix='%s.' % module_name) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pythonpy/__main__.py", line 38, in import_matches import_matches(query, prefix='%s.' % module_name) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/pythonpy/__main__.py", line 36, in import_matches module = __import__(module_name) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/variation/vardict.py", line 25, in <module> from bcbio import bam, broad, utils File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/bam/__init__.py", line 8, in <module> import numpy File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/numpy/__init__.py", line 142, in <module> from . import core File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/numpy/core/__init__.py", line 24, in <module> from . import multiarray File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/numpy/core/multiarray.py", line 14, in <module> from . import overrides File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/numpy/core/overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( KeyboardInterrupt Failed to open -: unknown file type Failed to open -: unknown file type Traceback (most recent call last): File "<string>", line 1, in <module> File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/variation/freebayes.py", line 330, in call_somatic _write_header(header) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/variation/freebayes.py", line 313, in _write_header for hline in header[:-1] + new_headers + [header[-1]]: IndexError: list index out of range ' returned non-zero exit status 1. Not sure why numpy is complaining? Thanks Sergey Glad that we solved the original issue! 
We were dealing with numpy/openblas before (see openblas part): #1225 https://stackoverflow.com/questions/33506042/openblas-error-when-importing-numpy-pthread-creat-error-in-blas-thread-init-fu Does numpy work at all in your system (make sure you are running bcbio python)? import numpy as np Try to check out openblas versions, here is what I see on one of the working bcbio/Ubuntu systems: conda list | grep openblas libblas 3.8.0 14_openblas conda-forge libcblas 3.8.0 14_openblas conda-forge liblapack 3.8.0 14_openblas conda-forge libopenblas 0.3.7 h5ec1e0e_6 conda-forge Then try to upgrade blas library if it is outdated: bcbio_conda update numpy scipy openblas or bcbio_nextgen.py upgrade -u skip --tools SN @naumenko-sa. Thanks for your reply. I can confirm that bcbio numpy is installed. We did run the upgrade command but the error still the same error. I want to add that I'm running bcbio using slurm (sbatch command). Does the numpy problem has something to do with the parallelism of slurm? Thanks I think it is rather related to python than to slurm. What is your python? which python Are you able to import numpy in python? import numpy as np What are you libblas versions? conda list | grep openblas libblas 3.8.0 14_openblas conda-forge libcblas 3.8.0 14_openblas conda-forge liblapack 3.8.0 14_openblas conda-forge libopenblas 0.3.7 h5ec1e0e_6 conda-forge SN Thanks @naumenko-sa Here are the information you requested: Which python? /tools/software/bcbio/anaconda/bin/python Are you able to import numpy in python? Yes What are you libblas versions? libblas 3.8.0 14_openblas conda-forge libcblas 3.8.0 14_openblas conda-forge liblapack 3.8.0 14_openblas conda-forge libopenblas 0.3.7 h5ec1e0e_6 conda-forge Thanks! Everything looks fine. Sorry, I don't see any clues on the bcbio side. Maybe the issue is related to the difference between nodes of your cluster (main node vs the node where your job is actually running with sbatch?) Can you try to run it on the main node (where the environment looks ok) check the environment (python) on the compute nodes? S. Thanks @naumenko-sa. It was a memory problem. We allocated more memory. This time I'm getting the following issue. Not sure why bcbio is looking for dbsnp-153.vcf.gz in my working directory. Is it a configuration problem? I appreciate too much your help. 
Thanks Traceback (most recent call last): File "/tools/software/bcbio/anaconda/bin/bcbio_nextgen.py", line 245, in <module> main(**kwargs) File "/tools/software/bcbio/anaconda/bin/bcbio_nextgen.py", line 46, in main run_main(**kwargs) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/pipeline/main.py", line 50, in run_main fc_dir, run_info_yaml) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/pipeline/main.py", line 91, in _run_toplevel for xs in pipeline(config, run_info_yaml, parallel, dirs, samples): File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/pipeline/main.py", line 165, in variant2pipeline samples = run_parallel("postprocess_variants", samples) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/distributed/multi.py", line 28, in run_parallel return run_multicore(fn, items, config, parallel=parallel) File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/bcbio/distributed/multi.py", line 86, in run_multicore for data in joblib.Parallel(parallel["num_jobs"], batch_size=1, backend="multiprocessing")(joblib.delayed(fn)(*x) for x in items): File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/joblib/parallel.py", line 1017, in __call__ self.retrieve() File "/tools/software/bcbio/anaconda/lib/python3.6/site-packages/joblib/parallel.py", line 909, in retrieve self._output.extend(job.get(timeout=self.timeout)) File "/tools/software/bcbio/anaconda/lib/python3.6/multiprocessing/pool.py", line 670, in get raise self._value subprocess.CalledProcessError: Command 'set -o pipefail; vcfanno -p 8 /usr/people/hbendou/uct/bcbio/PD44692/test1/work/vardict/dbsnp.conf /usr/people/hbendou/uct/bcbio/PD44692/test1/work/vardict/bcbio-effects.vcf.gz | bcftools reheader -h /usr/people/hbendou/uct/bcbio/PD44692/test1/work/vardict/bcbio-effects-annotated-sample_header.txt | bcftools view | bgzip -c > /usr/people/hbendou/uct/bcbio/PD44692/test1/work/bcbiotx/tmpalcw8zs6/bcbio-effects-annotated.vcf.gz ============================================= vcfanno version 0.3.2 [built with go1.12.1] see: https://github.com/brentp/vcfanno ============================================= vcfanno.go:112: [Flatten] unable to open file: //usr/people/hbendou/uct/bcbio/PD44692/test1/variation/dbsnp-153.vcf.gz in [E::bcf_hdr_read] Input is not detected as bcf or vcf format Failed to read the header: - Failed to open -: unknown file type Hi @hocinebendou ! Could you try to trace the location of the annotation files in your installation? vcfanno: gemini triggers annotation with /data/genomes/Hsapiens/hg38/config/vcfanno/gemini.conf it has dbsnp record: file="variation/dbsnp-153.vcf.gz" check if you have the file: /data/genomes/Hsapiens/hg38/variation/dbsnp-153.vcf.gz check hg38 resources file: $ cat /data/genomes/Hsapiens/hg38/seq/hg38-resources.yaml | grep dbsnp dbsnp: ../variation/dbsnp-153.vcf.gz S Hi @naumenko-sa I checked. There is no file dbsnp-153.vcf.gz. Instead I found dbsnp-151.vcf.gz which I suppose is an old version. What you suggest to update it? Thanks Hocine Yes, try to update hg38 data bcbio_nextgen.py upgrade -u skip --data --genomes hg38 --datatarget variation Sergey Hi @naumenko-sa It worked without issues. Thanks
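For reference, the numpy/BLAS checks suggested in this thread can be consolidated into one small diagnostic script run with bcbio's own interpreter (e.g. /tools/software/bcbio/anaconda/bin/python). This is only a sketch; the OPENBLAS_NUM_THREADS workaround comes from the linked Stack Overflow discussion about blas_thread_init failures and is an assumption, not something bcbio itself requires.

```python
# Diagnostic sketch: run with bcbio's Python, not the system Python.
import os

# Assumed workaround from the linked OpenBLAS/numpy discussion: cap the
# OpenBLAS thread pool before numpy is imported, which avoids
# blas_thread_init/pthread_create failures on nodes with a low RLIMIT_NPROC.
os.environ.setdefault("OPENBLAS_NUM_THREADS", "1")

import numpy as np

print("numpy:", np.__version__)
np.__config__.show()  # reports which BLAS/LAPACK numpy is linked against

# Small functional check that linear algebra actually runs.
a = np.random.rand(200, 200)
print("matmul OK, trace =", float(np.trace(a @ a)))
```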
gharchive/issue
2020-06-08T07:56:19
2025-04-01T04:56:08.474217
{ "authors": [ "hocinebendou", "naumenko-sa" ], "repo": "bcbio/bcbio-nextgen", "url": "https://github.com/bcbio/bcbio-nextgen/issues/3256", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1596112263
CSS styling, buttons hover effect Thanks
gharchive/pull-request
2023-02-23T02:23:10
2025-04-01T04:56:08.477970
{ "authors": [ "Tunestring", "bcebel" ], "repo": "bcebel/Hot10", "url": "https://github.com/bcebel/Hot10/pull/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1031550001
Include a placeholder index.html on the initial deployment The web page will display an error message until the CronJob runs (scheduled for Sundays at 0:00). This can be resolved with a placeholder index.html file created on the initial deployment of the solution. The default web page will be overwritten by the CronJob. The initial index.html can be as simple as "Please wait until next Sunday 00:00 before using". Alternatively, look for a way to trigger the CronJob immediately on deployment rather than waiting until the next Sunday. Perhaps an init container in the web deployment that runs a job? Created a kind: Job in the template that runs an initial load.
gharchive/issue
2021-10-20T15:30:17
2025-04-01T04:56:08.488506
{ "authors": [ "michaelshire" ], "repo": "bcgov/AppAssessment", "url": "https://github.com/bcgov/AppAssessment/issues/13", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2085186786
Test - Province in Government RM Issue Province is a drop-down to select an option, but there are no options to select under 'Government Location'. Also made British Columbia the default value.
gharchive/issue
2024-01-17T00:35:36
2025-04-01T04:56:08.489777
{ "authors": [ "bferguso", "emjohnst" ], "repo": "bcgov/BCHeritage", "url": "https://github.com/bcgov/BCHeritage/issues/773", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2212011246
Sort my workplans by start date descending https://app.zenhub.com/workspaces/epictrack-63891ea941d309001fa292cf/issues/gh/bcgov/epic.track/1927 Details: change order_by to start_date.desc() Codecov Report All modified and coverable lines are covered by tests :white_check_mark: :exclamation: No coverage uploaded for pull request base (develop@d92d450). Click here to learn what that means. Additional details and impacted files @@ Coverage Diff @@ ## develop #2045 +/- ## ========================================== Coverage ? 76.27% ========================================== Files ? 283 Lines ? 9312 Branches ? 0 ========================================== Hits ? 7103 Misses ? 2209 Partials ? 0 Flag Coverage Δ epictrack-api 76.27% <100.00%> (?) Flags with carried forward coverage won't be shown. Click here to find out more. :umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
gharchive/pull-request
2024-03-27T22:31:57
2025-04-01T04:56:08.509550
{ "authors": [ "codecov-commenter", "jadmsaadaot" ], "repo": "bcgov/EPIC.track", "url": "https://github.com/bcgov/EPIC.track/pull/2045", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1008843788
Record attachment type for project and/or survey report attachments Overview Links to jira tickets https://quartech.atlassian.net/browse/BHBC-1382 This PR contains the following changes Record attachment type for project and/or survey report attachments This PR contains the following types of changes [x] New feature (change which adds functionality) [x] Enhancement (improvements to existing functionality) How Has This Been Tested? Locally Screenshots funny alignment There is a table called project_report_attachment .. should the project report be recorded in there?
gharchive/pull-request
2021-09-28T00:00:28
2025-04-01T04:56:08.530064
{ "authors": [ "anissa-agahchen", "sdevalapurkar" ], "repo": "bcgov/biohubbc", "url": "https://github.com/bcgov/biohubbc/pull/553", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1716103451
Investigate if teimpPaymentPercentage and teimpPaymentAmount calculated fields are still required[5] ###Description of the Tech Debt### These two calculated fields are on the TEIMP form but we don't show them on the form. Investigate if we still need these two fields on this form. ###Tech Debt Triage# The purpose of our technical debt triage process is to analyze technical debt to determine risk level of the technical debt and the value in tackling that technical debt. Risk Value Scoring: Level Value High 3 Medium 2 Low 1 Technical Debt - Risk Types Level Value Business Area Risk - Risk of business area visibility / damage to user experience 1 Developer Fault Risk - How likely will this tech debt cause a future error related to coding on top of it 1 System Fault Risk - Risk of system errors or application downtime 1 Time Scale Risk - Compound risk effect if left alone. How much more difficult to fix or dangerous will this become over time? 1 Time Sink Risk - How much will this tech debt slow the development process down 1 TOTAL SCORE: 5 Discussion with @nanyangpro and @Sepehr-Sobhani determined these calculations are no longer necessary There is nothing for the PO to test :) Thanks @BCerki !
gharchive/issue
2023-05-18T18:47:25
2025-04-01T04:56:08.539698
{ "authors": [ "BCerki", "Sepehr-Sobhani", "pbastia" ], "repo": "bcgov/cas-cif", "url": "https://github.com/bcgov/cas-cif/issues/1675", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1893531407
Incorporate Opt-in workflows into the design Describe the task Context (to be refined) below: Hanna: Hi Scrum Team, it seems like there's been a decision in the business area that industrial operations will be allowed to opt in starting in the first year, i.e. at the same time as the other OBPS registrants. We will need to account for this in the Registration App. The implications I can think of off the top of my head are: -we'll probably need some checkbox to indicate an opt-in (not sure at what stage, i.e. user access approval, operator, operation?) -we may need to collect additional data from an opt-in, e.g. the BA seems to indicate that they will need to demonstrate that they are ready to participate - we should find out how that demonstration should happen and whether we need to support that in the app. E.g. an additional form they need to fill out, or attach a document, or sign something, etc. These are initial thoughts ... we should add this to things to discuss with business area. Patricia: I was able to confirm that Adria made this decision official, so yes, Opt-ins (smaller industry under 10KT) can apply to participate in OBPS. The process will include first applying to the Director (Adria), who will confirm whether or not they can join. Once approved by the Director, they should be allowed to register in (mostly?) the same manner as Regulated Operations and will also received a carbon tax exemption the same way. Apparently Opt-ins will have until Feb 2024 to apply. More info to come on this as we learn more!! Dylan: Something to consider is an allow-list for operators. Not totally sure if it would work for our use case, but if some operators need to be approved by adria, we'll probably have to record that approval somehow (either online or offline). Acceptance Criteria [ ] Review relevant slide in the Sep 21 Registration Engagement deck [ ] Create wireframes for the features/workflows Opt-ins in Figma [ ] Get input from the business area [ ] Refine design based on the input gathered Additional context Hey team! Please add your planning poker estimate with Zenhub @jaimiebutton @nanyangpro @NicoleGovvy Hi @hannavovk - This ticket is ready for you to review and can be closed with the design updates below in Figma: (New) Workflow | Add Operator - for when operator profile doesn't exist (New) Screen | New Entrant and/or Opt-in - for when registering an operation (New) Screen | Carbon Tax Exemption - for when an operation is registered As described in the ticket, further refinements/iterations should be made based on future input we get/collect. The design updates above serve as a visual reference to support communications between us and the business area. (cc: @suhafa :) Thanks @nanyangpro !
gharchive/issue
2023-09-13T00:36:31
2025-04-01T04:56:08.548825
{ "authors": [ "hannavovk", "nanyangpro" ], "repo": "bcgov/cas-registration", "url": "https://github.com/bcgov/cas-registration/issues/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2520856390
new private land layer needed in layer picker [ ] add the private cadastral layer from the BC Datawarehouse to the layer picker and ensure it can be turned on/off like the other layers. Layer is called "PMBC Parcel CADSTRE - Private - Fully Attributed" (in imap).
gharchive/issue
2024-09-11T21:53:59
2025-04-01T04:56:08.596568
{ "authors": [ "CrystalChadburn" ], "repo": "bcgov/invasivesbc", "url": "https://github.com/bcgov/invasivesbc/issues/3481", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2695595850
feat: retrieve compliance report chain feat: Retrieve Full Report Chain and Enhance Frontend Display Thanks for fixing the test cases @kevin-hashimoto, but I found issues w.r.t. the BCeID user. Please fix the issue from both the BCeID and IDIR user perspectives and mask statuses as deemed necessary. Since we're using chain instead of history, is it necessary to have both history and chain data? We will need both, as the chain is a list of supplemental reports with the history of events for each report.
gharchive/pull-request
2024-11-26T17:49:15
2025-04-01T04:56:08.598463
{ "authors": [ "kevin-hashimoto" ], "repo": "bcgov/lcfs", "url": "https://github.com/bcgov/lcfs/pull/1293", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
483133934
Admin console User/Group management will not highlight group when selected There is an issue with the upgraded Keycloak consoles where it doesn't highlight which group you've clicked on when you're trying to manage groups for a user manually. The issue below happens in dev and test, but not prod at current time. (The group membership list component is a bit different in the old prod version) So if trying to remove a user from a group it looks like the UI isn't working (though it actually is) It seems to be that the UI will not highlight the item in the list if it is currently in both group panes (Group Membership and Available Groups). In the example from the screenshot below I have set up to reproduce In the left pane nothing will highlight In the right pane ACCESS_FOR_ALL and SAMPLE_GROUP will highlight, and the other 2 won't (this example is in https://sso-test.pathfinder.gov.bc.ca/auth/admin/jbd6rnxw/console/ and I've reproduced in another Keycloak realm as well) This has been tested in Chrome and Firefox, as well as on another team member's computer. So if you click on a group to leave it, it does successfully remove it from the list. So it's just that it's not highlighting in the UI. @junminahn is this still relevant as a known error for us? @junminahn is this still relevant as a known error for us? We can see if this issue is addressed in RH SSO 7.5 in the gold cluster.
gharchive/issue
2019-08-20T23:52:42
2025-04-01T04:56:08.606049
{ "authors": [ "junminahn", "loneil", "zsamji" ], "repo": "bcgov/ocp-sso", "url": "https://github.com/bcgov/ocp-sso/issues/46", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
268852636
Credit Trade screen to be built based on wireframe and including history Credit Transfer | Credit Trade screen to be built based on wireframe and including history |3| [v0.0.3-alpha] https://trello.com/c/1XtO2C6o https://trello.com/c/1XtO2C6o The issue was created for experimental use. We have started to actively use the issues and milestones. Close this issue for now and reopen it if needed.
gharchive/issue
2017-10-26T17:51:57
2025-04-01T04:56:08.614867
{ "authors": [ "dainetrinidad", "kuanfandevops" ], "repo": "bcgov/tfrs", "url": "https://github.com/bcgov/tfrs/issues/118", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
461642975
Compliance Report | Part 2 Summary Line 1 and Line 12 |2| Task: This card builds off of the work that was completed in Card 1250 by implementing the required functionality for the Part 2 Renewable Fuel Requirement Summary section of the Summary & Declaration Page. Line 1 - Volume of petroleum-based gasoline supplied: This line displays the total volume of petroleum-based gasoline reported in Schedule B. Schedule B: Fuel Type = Petroleum-based gasoline Line 12 - Volume of petroleum-based diesel supplied: This line displays the total volume of petroleum-based diesel reported in Schedule B and reported in Schedule C with the expected use of Heating Oil. Formula is Schedule B: Fuel Type = Petroleum-based diesel + Schedule C: Fuel Type = Petroleum-based diesel and expected use of "Heating Oil". With respect to Schedule C, the part 2 requirements for this field only include Petroleum-based diesel used for heating oil and excludes petroleum-based diesel used for other purposes (e.g. aviation, national defense, other, etc.) Other Requirements: The corresponding value should be displayed in the format that places a comma every 3 digits. We are dealing with large volumes and unformatted number values can be difficult to read at a glance. For example, instead of 4327423 have it display on the screen as 4,327,423. Non-editable cell https://trello.com/c/mSRrVzYC/1468-compliance-report-part-2-summary-line-1-and-line-12-2 Compliance Report | Part 2 Summary Line 1 and Line 12 |2| Resolved in Sprint Quasar
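To make the two summary-line formulas and the comma-formatting requirement above concrete, here is a minimal illustrative sketch in Python; the record layout, field names, and sample volumes are assumptions for illustration only, not the TFRS data model.

```python
# Illustrative sketch only: not the TFRS schema, just the described formulas.
schedule_b = [
    {"fuel_type": "Petroleum-based gasoline", "volume": 1_250_000},
    {"fuel_type": "Petroleum-based diesel", "volume": 2_000_000},
]
schedule_c = [
    {"fuel_type": "Petroleum-based diesel", "expected_use": "Heating Oil", "volume": 77_423},
    {"fuel_type": "Petroleum-based diesel", "expected_use": "Aviation", "volume": 10_000},
]

# Line 1: total petroleum-based gasoline reported in Schedule B.
line_1 = sum(r["volume"] for r in schedule_b
             if r["fuel_type"] == "Petroleum-based gasoline")

# Line 12: Schedule B diesel plus Schedule C diesel with expected use "Heating Oil".
line_12 = (
    sum(r["volume"] for r in schedule_b
        if r["fuel_type"] == "Petroleum-based diesel")
    + sum(r["volume"] for r in schedule_c
          if r["fuel_type"] == "Petroleum-based diesel"
          and r["expected_use"] == "Heating Oil")
)

# Display with a comma every 3 digits, e.g. 4327423 -> 4,327,423.
print(f"Line 1: {line_1:,}")    # Line 1: 1,250,000
print(f"Line 12: {line_12:,}")  # Line 12: 2,077,423
```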
gharchive/issue
2019-06-27T16:59:48
2025-04-01T04:56:08.620350
{ "authors": [ "KMenke", "amichard" ], "repo": "bcgov/tfrs", "url": "https://github.com/bcgov/tfrs/issues/1262", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1533549868
Hamburger 577 By default the sidebar shrinks and expands depending on screen width If the user clicks the hamburger button the sidebar toggles to either narrow or wide. If the user clicks the hamburger button it is assumed they want the sidebar to stay in the desired state no matter what the screen width. Innkeeper will need it as well. The innkeeperLayout is a separate file. (Originally these were split in case they differed but haven't so far so the innkeeperLayout might be able to be swapped with appLayout with an isInnkeeper parameter that sets the 1 class that makes them different.) Good to go ahead from the UX end. I like it!👍 We may want to adjust breakpoint widths for when the sidebar collapses to coordinate with the flexbox grid, so tables and stuff would collapse at the same breakpoint. But we'll probably want to test around to see what the best sizes are for that stuff later.
gharchive/pull-request
2023-01-14T23:27:19
2025-04-01T04:56:08.623279
{ "authors": [ "GurcharanjeetSingh", "loneil", "popkinj" ], "repo": "bcgov/traction", "url": "https://github.com/bcgov/traction/pull/386", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
140271460
Form_validation always accepts empty array See the following code: $this->load->library('form_validation'); $inputData = ['inp' => []]; $this->form_validation->set_data($inputData); $this->form_validation->set_rules('inp', 'Input field', 'is_natural|required'); //Will return true $this->form_validation->run(); This behaviour seems to be caused by a missing check in the following code: protected function _execute($row, $rules, $postdata = NULL, $cycles = 0) { // If the $_POST data is an array we will run a recursive call if (is_array($postdata)) { foreach ($postdata as $key => $val) { $this->_execute($row, $rules, $val, $key); } return; } There's no missing check. An empty array means no data, which in turn means no form was submitted - the library doesn't "accept" it; it ignores it. That's the intended behavior. Forgive me, but either I did not explain my problem well enough, or I have to reread the manual again. Another example: Two input fields, both required and `is_natural Forgive me, but either I did not explain my problem well enough, or I have to reread the manual again. Another example: Two input fields, both required and is_natural The input is accepted ($this->form_validation->run() returns true) even though the second value is an empty array instead of the required natural number. $this->load->library('form_validation'); $inputData = ['number1' => 5, 'number2' => []]; $this->form_validation->set_data($inputData); $this->form_validation->set_rules('number1', 'Input field 1', 'is_natural|required'); $this->form_validation->set_rules('number2', 'Input field 2', 'is_natural|required'); //Will return true $this->form_validation->run(); Hmm ... I didn't notice the 'inp' key in your initial post, my bad. Confirmed ... too bad you didn't report this a few hours earlier, the fix could've landed into 3.0.5. :/ Also, incidentally, this will provide progress towards #193. I'll try to release 3.0.6 ASAP (next week), this is pretty bad. Won't this return true as well if you use inp[] in the rule even in the 3.0.6 release? Does for me. $this->load->library('form_validation'); $inputData = ['inp' => []]; $this->form_validation->set_data($inputData); $this->form_validation->set_rules('inp[]', 'Input field', 'is_natural|required'); //Will return true $this->form_validation->run(); Also how do you post an empty array? Is this fix only relevant when using $this->form_validation->set_data? 'inp[]' is not the same as 'inp'; please post on our forums if you don't understand how it works.
gharchive/issue
2016-03-11T19:33:49
2025-04-01T04:56:08.647115
{ "authors": [ "Bitblade", "coldlamper", "narfbg" ], "repo": "bcit-ci/CodeIgniter", "url": "https://github.com/bcit-ci/CodeIgniter/issues/4516", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2252757709
6492 signature implementation Summary This PR is adding 6492 signature support to the SDK Change Type [ ] Bug Fix [ ] Refactor [x] New Feature [ ] Breaking Change [ ] Documentation Update [ ] Performance Improvement [ ] Other Checklist [x] My code follows this project's style guidelines [x] I've reviewed my own code [ ] I've added comments for any hard-to-understand areas [ ] I've updated the documentation if necessary [x] My changes generate no new warnings [x] I've added tests that prove my fix is effective or my feature works [x] All unit tests pass locally with my changes [x] Any dependent changes have been merged and published PR-Codex overview This PR updates test timeouts, adds signature verification tests, and refactors BiconomySmartAccountV2 with new methods and constants. Detailed summary Updated test timeouts Added signature verification tests Refactored BiconomySmartAccountV2 with new methods and constants ✨ Ask PR-Codex anything about this PR by commenting with /codex {your question} Size limit + lint @VGabriel45
gharchive/pull-request
2024-04-19T11:34:34
2025-04-01T04:56:08.652929
{ "authors": [ "VGabriel45", "joepegler" ], "repo": "bcnmy/biconomy-client-sdk", "url": "https://github.com/bcnmy/biconomy-client-sdk/pull/468", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
59231558
remove dependency on string The node package string modifies String.prototype which is a Javascript antipattern - some may agree/ disagree with this. However, when the same project includes the shelljs package as well, things get really messy - to and toEnd get invoked for no apparent reason: getNativeStringProperties in node_modules/yargs/node_modules/string/lib/string.js:681:32 calls String.toEnd in node_modules/shelljs/src/common.js:186:23 this really should not happen, and modifying String.prototype appears to be the root cause. This does not occur with yargs@1 or yargs@2, but it does indeed occur with yargs@3. @bguiz I agree that modifying String.prototype is an antipattern. Having said that, according to the string documentation it should only be modifying the string prototype if you ask it to: Originally, it modified the String prototype. But I quickly learned that in JavaScript, this is considered poor practice. https://github.com/bcoe/yargs/blob/master/lib/parser.js#L37 Having said this, there it sounds like you're seeing strange behavior around string? @bguiz I like the looks of this library as a drop in replacement: https://www.npmjs.com/package/morph give it a shot and see if you continue seeing shelljs problems? Yeah, don't get me wrong - both shelljs and string appear to be modifying String.prototype. SO the former is to blame too. In yargs@3, this function in string is the offender, getNativeStringProperties in node_modules/yargs/node_modules/string/lib/string.js:681:32. Despite the efforts to avoid it - highlighted in https://github.com/bcoe/yargs/blob/master/lib/parser.js#L37 - the getNativeStringProperties function appears to still be getting called; so I have a feeling that string's documentation might be misleading. If currently, the only use of the string is for camelise and decamelise, I say go for it! I've swapped out the string dependency in #108, let me know if this fixes the issues you were seeing with shelljs. Took a while to get around to it, but I have finally tested this fix with task-yargs and my target application - and this works like a charm! Consequently, task-yargs uses the latest yargs, as of now Thanks for the fix! W: http://bguiz.com On 28 February 2015 at 15:21, Benjamin E. Coe notifications@github.com wrote: I've swapped out the string dependency in #108, let me know if this fixes the issues you were seeing with shelljs. — Reply to this email directly or view it on GitHub. @bguiz awesome! Fun fact, I spent all weekend adding bash completion to yargs (and fixing a ton of bugs), would love for you to give it a shot: https://github.com/bcoe/yargs#completioncmd-description-fn I won't publish the feature to latest until I can get a few people to QA it. But you can test it out by running: npm install yargs@next. Adding bash completion was already on my radar: https://github.com/bguiz/task-yargs/issues/3 ... but I was looking at using complete or node-tabtab directly. Since yargs supports it out of the box now, that's even better! W: http://bguiz.com On 9 March 2015 at 17:27, Benjamin E. Coe notifications@github.com wrote: @bguiz awesome! Fun fact, I spent all weekend adding bash completion to yargs (and fixing a ton of bugs), would love for you to give it a shot: https://github.com/bcoe/yargs#completioncmd-description-fn I won't publish the feature to latest until I can get a few people to QA it. But you can test it out by running: npm install yargs@next. — Reply to this email directly or view it on GitHub.
gharchive/issue
2015-02-27T12:42:57
2025-04-01T04:56:08.666673
{ "authors": [ "bcoe", "bguiz" ], "repo": "bcoe/yargs", "url": "https://github.com/bcoe/yargs/issues/106", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
418149572
Fail to achieve concurrency masterFile.txt Above is the input file for the map. Now I was hoping that it would get split per line. Meaning, there shall be 10 lambda functions running concurrently, considering the input file has 10 new lines. Hence it fails to achieve concurrency. But it doesn't. Instead, Only a single lambda function is running which executes each line sequentially. Here is my code: func main() { job := corral.NewJob(wordCount{}, wordCount{}) options := []corral.Option{ corral.WithSplitSize(12), corral.WithMapBinSize(12), } driver := corral.NewDriver(job, options...) driver.Main() } It does show multiple maps (example 1/10, 2/10 etc) while I execute it locally. But when I deploy it to aws lambda, it only executes 1 map/function. Any kind of help would be really appreciated. I was able to use your example code/input and have 10 parallel lambdas using the following invocation: go run main.go -v --lambda --out s3://${AWS_TEST_BUCKET} s3://${AWS_TEST_BUCKET}/master_file.txt (Where ${AWS_TEST_BUCKET} is the bucket you've stored your input in) Can you provide the command you're invoking the job with as well as the output log? (The verbose log would be helpful) Firstly, Thanks alot for the quick reply Here is more information: Command: Main --lambda s3://{BuckName}/masterFile.txt --out s3://{BuckName}/ where Main is the exe file generated by go build command AWS Lambda Logs: START RequestId: 6e4992fe-43d3-454e-b7fe-629955cf57ab Version: $LATEST File - file_0.json File - file_1.json File - file_2.json File - file_3.json File - file_4.json File - file_5.json File - file_6.json File - file_7.json File - file_8.json File - file_9.json END RequestId: 6e4992fe-43d3-454e-b7fe-629955cf57ab REPORT RequestId: 6e4992fe-43d3-454e-b7fe-629955cf57ab Duration: 1762.65 ms Billed Duration: 1800 ms Memory Size: 1500 MB Max Memory Used: 112 MB START RequestId: 1b11b786-1df6-48f8-bed4-703186fcb955 Version: $LATEST END RequestId: 1b11b786-1df6-48f8-bed4-703186fcb955 REPORT RequestId: 1b11b786-1df6-48f8-bed4-703186fcb955 Duration: 266.66 ms Billed Duration: 300 ms Memory Size: 1500 MB Max Memory Used: 112 MB @bcongdon hoping and waiting for another quick reply :) Hi, I wish I had better news, but I'm still unable to reproduce the error you're having. 😕 (When I deploy, I get the correct number of jobs) The only thing I can think of would be to try undeploying everything (Main --undeploy), reuploading inputs to S3, and then redeploying. Also, I'd double check that you're using the latest version of corral, a reasonably up-to-date version of go (I tested using 1.11.4), etc. I'd also make the general suggestion that corral wasn't really built to handle really small input splits like this. Optimally, you'd want to give splits that are as big as possible while allowing the lambda to still process its entire split before timing out. In principle there's nothing preventing you from using 12-byte input splits, but be warned that corral wasn't designed to work optimally for this (and having such small splits probably won't scale well to large inputs) In your example, it looks like you might be loading files in the map phase? If your use-case supports it, it might be better to feed all the input files as inputs to the map phase. (i.e. Main file*.json). Without more context, I'm not sure if this is what you're trying to do. 🙂 Hi, Regarding "In your example, it looks like you might be loading files in the map phase?" 
- Yes, I am loading files in the map because I have thousands of file which I want to process parallelly. So that's the use-case. of using corral. Anyways, I am checking the versions and other possible ways to resolve the issue. Will let you know if I achieve success. Undeploying everything, reinstalling dependency and rebuilding everything from the scratch magically solved the concurrency issue. 🥇 But it is not creating the exact number of maps as the number of new lines, as you warned. But I wish there was a way to split the input merely on the basis on new lines and not per bytes. Hoping for an update :-) But thanks anyways! @bcongdon 1 more question - Main -v --lambda s3://{BuckName}/masterFile.txt --out s3://{BuckName}/ deploys to lambda function and executes it. IS there a way to execute the already deployed function with input file as argument? The --lambda flag only redeploys if your code has changed, so you can do something like Main -v --lambda --out s3://{OutputBucket}/ s3://{InputBucket}/masterFile1.txt ... followed immediately by Main -v --lambda --out s3://{OutputBucket}/ s3://{InputBucket}/masterFile2.txt and (assuming you didn't make any code changes) the same Lambda will be used. Note that the last argument is a list of input files, so you could even do something like this to act on both files with 1 run: Main -v --lambda --out s3://{OutputBucket}/ s3://{InputBucket}/masterFile1.txt s3://{InputBucket}/masterFile2.txt Currently there isn't a flag that explicitly skips the compilation/deployment step. I think this could be useful if you have really big jobs and don't want to compile everything each time you run it. I created #8 to track this idea.
gharchive/issue
2019-03-07T06:26:56
2025-04-01T04:56:08.679808
{ "authors": [ "ShagunParikh", "bcongdon" ], "repo": "bcongdon/corral", "url": "https://github.com/bcongdon/corral/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1733099343
prepare to add continuous state based predicates to spot envs also fix default spot IP to prevent hanging / bad behavior when running tests on network coauthor: @nkumar-bdai (tested on spot)
gharchive/pull-request
2023-05-30T22:19:01
2025-04-01T04:56:08.681643
{ "authors": [ "tsilver-bdai" ], "repo": "bdaiinstitute/predicators", "url": "https://github.com/bdaiinstitute/predicators/pull/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
458093601
Permissions issues with electron sandbox on archlinux Operating System: Archlinux (Manjaro) Beaker version: 0.8.8 Hi all, I'm trying to build Beaker from source and having some issues. First I run npm install which completes successfully then npm run rebuild also successful then npm start here's where i first get my error [8958:0619/111505.829308:FATAL:setuid_sandbox_host.cc(157)] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that ./node_modules/electron/dist/chrome-sandbox is owned by root and has mode 4755. a chmod and chown later npm start now succeeds and Beaker starts so now I successfully build the AppImage with npm run release but when running the built AppImage I get the same error but this time pointing at a temporary chrome-sandbox You need to make sure that /tmp/.mount_BeakerhN8lvV/chrome-sandbox is owned by root and has mode 4755. I can't however change that file's permissions as it seems to be deleted after the app crashes. Any suggestions? I'm not super familiar with electron so maybe I'm missing something obvious? I've noticed something similar on Debian 10. Haven't been able to research it in detail though. Thanks for filing. That's a pain! I'll check into it. @pfrazee let me know if I can help in anyway! Thanks! I just need to dig into electron docs and issues and look for some pointers. Facing similar issue (one with temporary path ) on CentOS7 as well. I confirm this issue on Manjaro (Archlinux). The fix #1524 that works for me on Debian sudo sysctl kernel.unprivileged_userns_clone=1returns the following error on Manjaro: sysctl: cannot stat /proc/sys/kernel/unprivileged_userns_clone: No such file or directory I am not able to install the last appimages on Manjaro linux.
gharchive/issue
2019-06-19T15:52:15
2025-04-01T04:56:08.732795
{ "authors": [ "brechtcs", "pfrazee", "raphaelbastide", "tulsileathers", "vaga007" ], "repo": "beakerbrowser/beaker", "url": "https://github.com/beakerbrowser/beaker/issues/1428", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
163554145
collapse-preserve-inline and throw I'm using js-beautify 1.6.3 on OSX for CLI formatting, the main reason being to stay consistent with editor formatting (Sublime). I have noticed that there is a difference even when using the same settings. After digging deeper I have found that running js-beautify twice from the CLI will give me what I expect. Digging deeper, 'throws' seem to cause trouble: echo 'if (a == 1) { a++; }' | js-beautify -b collapse-preserve-inline prints the expected if (a == 1) { a++; } however echo 'if (a == 1) { throw "aaa" }' | js-beautify -b collapse-preserve-inline prints unexpected output (neither the collapsed nor the expanded version) if (a == 1) { throw "aaa" } why? and now the funny part (double formatting): echo 'if (a == 1) { throw "aaa" }' | js-beautify -b collapse-preserve-inline | js-beautify -b collapse-preserve-inline prints: if (a == 1) { throw "aaa" } so it is unstable and the formatting result depends on how many times you run the formatter? Thanks for the help. Hello, excellent bug report and details. I've added this to the next milestone. Duplicate of #898
gharchive/issue
2016-07-03T09:14:16
2025-04-01T04:56:08.749880
{ "authors": [ "ainthek", "bitwiseman" ], "repo": "beautify-web/js-beautify", "url": "https://github.com/beautify-web/js-beautify/issues/962", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2140635221
Contribution guidelines section WHEN I enter a description, installation instructions, usage information, contribution guidelines, and test instructions THEN this information is added to the sections of the README entitled Description, Installation, Usage, Contributing, and Tests Done!
gharchive/issue
2024-02-17T22:06:43
2025-04-01T04:56:08.802203
{ "authors": [ "beckpull" ], "repo": "beckpull/readme-generator", "url": "https://github.com/beckpull/readme-generator/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2074876019
footer design provide a better design with node_env and version printed add social media links pr: https://github.com/bedirhanyildirim/fsmvu.com/pull/27
gharchive/issue
2024-01-10T17:47:02
2025-04-01T04:56:08.805020
{ "authors": [ "bedirhanyildirim" ], "repo": "bedirhanyildirim/fsmvu.com", "url": "https://github.com/bedirhanyildirim/fsmvu.com/issues/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
110597624
#Upgrade to React.js 0.14 Adds new react min version to package.json Adds new react-dom library to build dependencies Implementing new ReactDOM #3 Complete.
gharchive/pull-request
2015-10-09T06:46:49
2025-04-01T04:56:08.864555
{ "authors": [ "befreestudios" ], "repo": "befreestudios/Webpack-React-Flux", "url": "https://github.com/befreestudios/Webpack-React-Flux/pull/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
56364309
Faulty tx inputs produce undefined/None/Null input addresses There are "faulty" transactions like: txid = 2dc793134d1063a2f2505bba98c63441380658d25a707bed6a36f2b71d051aec https://blockchain.info/tx/2dc793134d1063a2f2505bba98c63441380658d25a707bed6a36f2b71d051aec txid = 8ebe1df6ebf008f7ec42ccd022478c9afaec3ca0444322243b745aa2e317c272 https://blockchain.info/tx/8ebe1df6ebf008f7ec42ccd022478c9afaec3ca0444322243b745aa2e317c272 which produce undefined/None/Null input addresses that mess up the entity graph mapping. This should not be a problem after commit 69e5da1b551b7a6c864ebff103598026affce41f, which adds code to catch this error, but it might be a better idea to address the root of the problem. This should be fixed by now
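A rough illustration of the "catch this error" approach mentioned above; the structure, field names, and helper names here are hypothetical and do not reflect bitcoingraph's actual API.

```python
# Hypothetical sketch only: field/function names are illustrative, not
# bitcoingraph's real data structures.
def resolved_input_addresses(tx):
    """Yield only input addresses that could be resolved, skipping the
    faulty/non-standard inputs that would otherwise surface as None."""
    for tx_input in tx.get("inputs", []):
        address = tx_input.get("address")
        if address is None:
            # Faulty input: skip it instead of letting a None node leak
            # into the entity graph mapping.
            continue
        yield address

# Usage sketch (add_edge_to_entity_graph is a made-up placeholder):
# for address in resolved_input_addresses(tx):
#     add_edge_to_entity_graph(address, tx["txid"])
```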
gharchive/issue
2015-02-03T11:14:42
2025-04-01T04:56:08.866953
{ "authors": [ "kernoelpanic" ], "repo": "behas/bitcoingraph", "url": "https://github.com/behas/bitcoingraph/issues/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
874954946
ConditionalExpression breaking weird with long condition When the Condition is long, this is how ConditionalExpression breaks. public string Value = someLongConditionddddddddd || someOtherConditionddddddddddd ? "yes" : "no"; This is working properly now.
gharchive/issue
2021-05-03T22:24:23
2025-04-01T04:56:08.885423
{ "authors": [ "belav" ], "repo": "belav/csharpier", "url": "https://github.com/belav/csharpier/issues/169", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
420551933
Support Size I see that you do not support annotation javax.validation.constraints.Size. Can you support it? From java doc: The annotated element size must be between the specified boundaries (included). Supported types are: CharSequence (length of character sequence is evaluated) Collection (collection size is evaluated) Map (map size is evaluated) Array (array length is evaluated) null elements are considered valid. @Size is supported and here is an example: Annotated bean and test. I see that you do not support annotation javax.validation.constraints.Size Where do you see that? Hmm I used it on List and it didn't work. It looks like only the String type is supported. Here is a failing test with v3.9.0: import java.util.List; import javax.validation.constraints.Size; import io.github.benas.randombeans.EnhancedRandomBuilder; import io.github.benas.randombeans.api.EnhancedRandom; import org.junit.Assert; import org.junit.Test; public class Issue348 { @Test public void testSizeOnList() { // given EnhancedRandom enhancedRandom = new EnhancedRandomBuilder() .build(); // when Person person = enhancedRandom.nextObject(Person.class); // then Assert.assertNotNull(person); int size = person.getNames().size(); Assert.assertTrue(size >= 2 && size <= 5); } static class Person { @Size(min = 2, max = 5) private List<String> names; public Person() { } public List<String> getNames() { return names; } public void setNames(List<String> names) { this.names = names; } } } We need to add support for all types as documented in the annotation (Collection, Map and Array) in addition to Strings. Thank you for reporting this issue! Maybe use Collection interface instead of List? Yes as I said here: We need to add support for all types as documented in the annotation (Collection, Map and Array) in addition to Strings. @magx2 A fix has been deployed in version 4.0.0.RC2-SNAPSHOT. Can you please give it a try? If you don't know how to use a snapshot version, please refer to the wiki here. As you might have noticed, the project has been renamed and you would need to adjust a couple of things (see migration guide). Looking forward for your feedback. Thank you upfront! It's working! Thanks!
gharchive/issue
2019-03-13T15:06:10
2025-04-01T04:56:08.903276
{ "authors": [ "benas", "magx2" ], "repo": "benas/random-beans", "url": "https://github.com/benas/random-beans/issues/348", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
205359813
Upgrade to Serde 0.9 There are a couple of breaking changes in both Serde itself and serde-json. I'm currently working on this. There are lots of changes, I'll see if I manage to solve them all :persevere:
gharchive/issue
2017-02-04T17:06:59
2025-04-01T04:56:08.906260
{ "authors": [ "antoine-de", "benashford" ], "repo": "benashford/rs-es", "url": "https://github.com/benashford/rs-es/issues/99", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
875661045
Duplicated whisper to yourself Describe the bug Sending a whisper generates a duplicated message on the chat To Reproduce Steps to reproduce the behavior: Go to https://dogehouse.tv Click on "new room" Create a whisper #@USERNAME for yourself Press enter and enjoy Expected behavior Only one message please Screenshots What device are you on? Laptop Additional context I can reproduce this you mean you can't? you mean you can't? no i said i CAN. i agree with you, this is a thing
gharchive/issue
2021-05-04T17:06:38
2025-04-01T04:56:08.911470
{ "authors": [ "Gers2017", "amitojsingh366" ], "repo": "benawad/dogehouse", "url": "https://github.com/benawad/dogehouse/issues/2550", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
284829213
WIP: Use CC0-1.0 license text from source Ensure that we recognize both text to be merged in https://github.com/github/choosealicense.com/pull/488 and text published at choosealicense.com and used by licensee since 2013. Added tests, naive stripping of optional line at beginning (from CC source) and end (from choosealicense-2013). Not sure how to deal with large block of optional text in source: CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER. Stripping that entire string regardless of how it is wrapped would be one straightforward option. Fixes #243 which is a duplicate of #172 I'm baffled as to why a wholly unrelated test just started failing (it doesn't locally): 1) command line invocation when given a repo URL detects the file's license Failure/Error: expect(stdout).to match('License: MIT License') expected "" to match "License: MIT License" # ./spec/bin_spec.rb:69:in `block (3 levels) in <top (required)>' I'll try restarting the CI test jobs later in hopes its something transient in the CI environment. I'm baffled as to why a wholly unrelated test just started failing (it doesn't locally): Ugh. I suspect it's hitting the API rate limit. I just added that test today, but it's largely an integration test. I think it's safe to remove. OK, it might be ugly, but it works. There are probably ways to make it slightly less ugly; review requested. Longer term and more generally I'd like to see if extracting <optional> texts from https://github.com/spdx/license-list-XML rather than hardcoding them here might be workable. Beyond that, explore a new matcher based entirely on following the markup in that repo. @benbalter just a ping to review this again when you get the chance. As mentioned at https://github.com/github/choosealicense.com/pull/488#issuecomment-379062833 SPDX 3.1 will account for the CC0-1.0 text variation introduced in choosealicense.com and licensee; it'd be nice to reciprocate by moving to the standard text in choosealicense.com and licensee. 😄 just because a PR is taking a long time doesn't mean it is no longer relevant though sometimes I do wonder why this one is taking so long bump Finally got around to making this PR consistent with the normalization improvements introduced in #342
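For illustration, the "strip that entire string regardless of how it is wrapped" idea could look roughly like the sketch below. Licensee itself is written in Ruby, so this Python version only demonstrates the whitespace-insensitive normalization approach, not the project's actual matcher code.

```python
# Illustration only: not licensee's real implementation.
import re

def normalize(text):
    """Collapse every run of whitespace so re-wrapped lines compare equal."""
    return re.sub(r"\s+", " ", text).strip()

CC0_PREAMBLE = normalize("""
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL
SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT
RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS.
CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE
INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES
RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
""")

def strip_optional_preamble(license_text):
    """Remove the optional CC0 disclaimer block wherever it occurs,
    regardless of how the lines were wrapped in the source file."""
    return normalize(license_text).replace(CC0_PREAMBLE, "").strip()
```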
gharchive/pull-request
2017-12-28T02:33:09
2025-04-01T04:56:08.921962
{ "authors": [ "anowlcalledjosh", "benbalter", "mlinksva" ], "repo": "benbalter/licensee", "url": "https://github.com/benbalter/licensee/pull/253", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
302217168
Implicitly re-order input dimensions? Currently if we provide an input n-d array with the right dimensions but in the wrong order it raises an error. Dimension order is important because xarray objects are used for the i/o model interface but not internally in processes. Maybe raising an error in this case is too strict and we could implicitly re-order the dimensions of given inputs before running a simulation. A downside is that users may not be aware of this change. The annoyance would be limited, though, as i/o data are xarray objects (the dimension order doesn't really matter). Or we could imagine keeping the dimension order given for the inputs somewhere and reusing that information later to revert back to this original order when building the outputs. Currently if we provide an input n-d array with the right dimensions but in the wrong order it raises an error. Not sure of this. Better explicit than implicit (see #76).
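As a rough sketch of what "implicitly re-order" could mean in practice with plain xarray; the variable names and expected order are illustrative, and this is not xarray-simlab's internal code.

```python
# Illustrative sketch only, not xarray-simlab internals.
import numpy as np
import xarray as xr

expected_dims = ("x", "y")  # the order a process expects internally

da = xr.DataArray(np.zeros((3, 2)), dims=("y", "x"))  # user gave ('y', 'x')

if set(da.dims) == set(expected_dims) and da.dims != tuple(expected_dims):
    # Same dimensions, wrong order: silently normalize instead of raising.
    da = da.transpose(*expected_dims)

print(da.dims)  # ('x', 'y')
```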
gharchive/issue
2018-03-05T08:59:09
2025-04-01T04:56:08.930357
{ "authors": [ "benbovy" ], "repo": "benbovy/xarray-simlab", "url": "https://github.com/benbovy/xarray-simlab/issues/30", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1213450563
total.wav is a single channel, why not align it in a multichannel fashion? Currently total.wav seems to contain the cumulative result. I think it would be much more valuable to have a total file that contains all tracks being correctly aligned. It could be even nicer to use a container format that does not require the 'shift' to be encoded, hence the offset would only be encoded as metadata (in an edit decision list fashion). Yes, total.wav contains all tracks averaged into a single channel. Audalign uses pydub to export files, so this kind of feature depends on pydub's capabilities. The write_extension argument in align lets you specify different formats than .wav. Are you suggesting that it also writes a total file with each aligned audio file being encoded as a separate channel? I'm not sure what kind of format could be used to encode the shift as metadata. Do you have some examples of what you're thinking of? Are you suggesting that it also writes a total file with each aligned audio file being encoded as a separate channel? When I align two mono files, my expectation was that I would receive a single "dual-mono" file. At this moment I get three files, channel1, channel2 and total. So I would actually want to choose how the export is done. Separate channels, "dubbed" or integrated. I'm not sure what kind of format could be used to encode the shift as metadata. Do you have some examples of what you're thinking of? I was looking at whether Matroska was capable of doing this. I am very sure that SMIL is capable of describing it. But nobody implements SMIL in an audio/video editor ;) I'll add a new option to specify if you want the output files to be encoded as channels in a single output file. Seems like a pretty simple addition from this StackOverflow post, so I could probably have it out in a day or two. Huh, I'm not very familiar with SMIL or mkv's, but that seems nifty! SMIL seems to be mostly a web thing? I see SMIL as a very advanced playlist format that does not have to be linear, so the takeaway is: it is not a container, but rather a standardised way to state how files are related to each other. I am also curious if MXF could do it, taking the original streams, and just placing them at a position in the timeline. That's some good-to-know info! I'm also not very familiar with MXF The align functions return the total results with the shifts and corresponding match strengths. It seems like you could process the output from the recognitions into one of those container formats. It's not a feature I would plan to support any time soon, but I'd be happy to accept PRs!
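For what it's worth, a rough pydub-based sketch of the "separate channels in one total file" option could look like this. File names are placeholders, and it assumes the per-track files have already been shifted and padded to equal length by the alignment step.

```python
# Sketch only: assumes the aligned, equally padded per-track files already
# exist; pydub is the library audalign already relies on for export.
from pydub import AudioSegment

left = AudioSegment.from_file("aligned_channel1.wav").set_channels(1)
right = AudioSegment.from_file("aligned_channel2.wav").set_channels(1)

# Interleave the two mono tracks as separate channels instead of averaging
# them into a single mono "total" track.
stereo = AudioSegment.from_mono_audiosegments(left, right)
stereo.export("total_multichannel.wav", format="wav")
```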
gharchive/issue
2022-04-23T20:41:31
2025-04-01T04:56:08.952928
{ "authors": [ "benfmiller", "skinkie" ], "repo": "benfmiller/audalign", "url": "https://github.com/benfmiller/audalign/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1693815224
Add Link to website in the description Thanks for the website! It would be helpful to have the website link in the description: https://2023elections.bengawalk.com 🙌 Added the link to the live version in the README.
gharchive/issue
2023-05-03T10:34:39
2025-04-01T04:56:08.962159
{ "authors": [ "akhil0001", "chaitanya-deep" ], "repo": "bengawalk/elections-2023", "url": "https://github.com/bengawalk/elections-2023/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2576387785
Task query seems to be broken I am experimenting with this plugin and trying to use the query functions to get all my tasks rendered into the kanban but, just like in #42, they are not getting rendered at all. Was there any change that was not reflected in the readme, maybe? Yeah, broken for me also. I'm using version 3.8.1
gharchive/issue
2024-10-09T16:20:07
2025-04-01T04:56:08.986002
{ "authors": [ "jezzaaa", "pgpais" ], "repo": "benjypng/logseq-kanban-plugin", "url": "https://github.com/benjypng/logseq-kanban-plugin/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
617475385
Upgraded Sentiment Analysis - question about sigmoid To get the predictions you do this: rounded_preds = torch.round(torch.sigmoid(preds)) I was wondering why I get very different accuracies when I add a sigmoid layer to the LSTM instance: self.sigmoid = nn.Sigmoid() and then return the following: self.sigmoid(self.fc(hidden)) where self.fc is still the Linear hidden layer that you are using, and then do not apply sigmoid in rounded_preds = torch.round(preds). I expected that the results would be the same because I am doing the same thing, but apparently it's something totally different. Can you explain why this is the case? And does it also mean that if you want to add another layer to the classifier (say, for instance, a ReLU layer) you have to do it before feeding the results to the linear layer (so between the LSTM and the linear layer)? Note: I realise though that if you have a sigmoid value as the prediction returned by the classifier instance itself, then this value will also be used to compute the loss and perform backpropagation, whereas if there is just a linear layer the value used to compute the loss will be the one provided by the linear layer. So it makes sense that the outcomes are different. So I guess that my question is then: why can't we let the classifier object return a value provided by the sigmoid function/layer (instead of using a value provided from a linear layer), or feed the output from the linear layer to a sigmoid layer and return that as the final prediction of the classifier object? I think that I know the answer: https://discuss.pytorch.org/t/bceloss-vs-bcewithlogitsloss/33586/4 You are using BCEWithLogitsLoss, so the sigmoid activation function will be applied internally when the loss is computed. Therefore, you should not feed probabilities into the loss function (in contrast to BCELoss). However, to interpret the results / make a final prediction, we still need to convert the values predicted by the linear layer to sigmoid probabilities, which is why you do that in the binary_accuracy function. Yep, that's exactly correct.
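For illustration, a small generic PyTorch sketch (not code from the notebook) of the equivalence: logits fed to BCEWithLogitsLoss give the same loss as sigmoid probabilities fed to BCELoss, and the final predictions are rounded probabilities either way.

# Sketch: logits + BCEWithLogitsLoss vs. probabilities + BCELoss.
import torch
import torch.nn as nn

logits = torch.tensor([1.2, -0.7, 0.3])   # raw outputs of the linear layer
labels = torch.tensor([1.0, 0.0, 1.0])

# Option 1: the model returns logits; sigmoid is applied inside the loss.
loss_logits = nn.BCEWithLogitsLoss()(logits, labels)

# Option 2: the model applies sigmoid itself, so plain BCELoss is used.
probs = torch.sigmoid(logits)
loss_probs = nn.BCELoss()(probs, labels)

assert torch.allclose(loss_logits, loss_probs)  # numerically equivalent

# Either way, accuracy is computed on probabilities rounded to 0 or 1.
preds = torch.round(probs)
accuracy = (preds == labels).float().mean()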
gharchive/issue
2020-05-13T14:01:25
2025-04-01T04:56:09.055410
{ "authors": [ "TalitaAnthonio", "bentrevett" ], "repo": "bentrevett/pytorch-sentiment-analysis", "url": "https://github.com/bentrevett/pytorch-sentiment-analysis/issues/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2226253316
Chart when viewed hourly (e.g. 24 hours or "Today") drops first 3 data points I noticed that if I select "Today" or "24 hours" the chart consistently shows 0 for the first 3 data points/hours. Unclear if this is just affecting the chart or also the other tables (they're separate queries). After some additional debugging, this appears to only be happening in production. Here's localhost for the last 24 hours (note this uses the production dataset): The same data rendered using the same version (b5159a5) deployed on Cloudflare: The data is the same, but the production worker is dropping the first 3 hours. I suspect – as always – this is a timezone thing. Okay, this happens when the server is running in a timezone that doesn't match the user's (which is pretty frequent). Fix incoming.
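For illustration, a minimal sketch (not the project's actual query code) of the underlying mismatch: if the worker computes the start of "today" in its own timezone, the visitor's first few hourly buckets can fall outside the queried window. The timezones and date here are hypothetical.

# Sketch: a "today" cutoff computed in the server's timezone excludes the
# first hours of the user's day when the two zones differ.
from datetime import datetime
from zoneinfo import ZoneInfo

user_tz = ZoneInfo("Europe/Athens")  # assumed visitor zone (UTC+3 in summer)
server_tz = ZoneInfo("UTC")          # assumed worker zone

user_midnight = datetime(2024, 4, 4, 0, 0, tzinfo=user_tz)      # user's start of day
server_midnight = datetime(2024, 4, 4, 0, 0, tzinfo=server_tz)  # cutoff the server queries with

gap = server_midnight - user_midnight
print(gap)  # 3:00:00 -> the user's first three hourly buckets fall before the cutoff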
gharchive/issue
2024-04-04T18:33:27
2025-04-01T04:56:09.058553
{ "authors": [ "benvinegar" ], "repo": "benvinegar/counterscale", "url": "https://github.com/benvinegar/counterscale/issues/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
57779611
Add Variant class Considering the example given in the README file, a variant class could be constructed. Consider supporting only C++14. Added Variant class. This issue can be closed.
gharchive/issue
2015-02-16T09:07:00
2025-04-01T04:56:09.074948
{ "authors": [ "berenoguz" ], "repo": "berenoguz/pera", "url": "https://github.com/berenoguz/pera/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
101113071
send experimenter email confirmation requests for unusual behavior if participants.bonus > exp.bonus_confirmation_threshold or if n.recruits > exp.recruit_confirmation_threshold ... This issue was moved to Dallinger/Dallinger#179
gharchive/issue
2015-08-14T22:13:29
2025-04-01T04:56:09.078940
{ "authors": [ "DallingerBot", "thomasmorgan" ], "repo": "berkeley-cocosci/Wallace", "url": "https://github.com/berkeley-cocosci/Wallace/issues/205", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1298166703
Issue 1 - Feature/create users - Berkeli Description Add functionality to see registered users and functionality to create a user Related to Create Users #1 Fixes #1 Checklist: [x] My code follows the style guidelines of this project [x] I have carefully reviewed my own code [x] I have commented my code [x] I have updated any documentation Heroku app: https://ldn8-cyf-breateau-11.herokuapp.com
gharchive/pull-request
2022-07-07T22:15:50
2025-04-01T04:56:09.087095
{ "authors": [ "berkeli" ], "repo": "berkeli/breteau-dashboard", "url": "https://github.com/berkeli/breteau-dashboard/pull/11", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
278748260
Broken links Seems like https://zeit.co/berzniz/react-overdrive-react-router-v4-demo/zketptspxf?redirect=1 and https://zeit.co/berzniz/react-overdrive-react-router-v4-demo/zketptspxf?redirect=1 are broken, at least for me. Thanks for reporting, I will try to re-deploy these
gharchive/issue
2017-12-03T06:08:46
2025-04-01T04:56:09.104505
{ "authors": [ "berzniz", "mehrdaad" ], "repo": "berzniz/react-overdrive", "url": "https://github.com/berzniz/react-overdrive/issues/40", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
668621622
Getting "component is null" error message on the console log Hi, First of all thank you for the great library, it really makes a huge difference! I faced one specific problem when dealing with Dropdown menus, both regular and Uncontrolled. Following the examples in https://sveltestrap.js.org gives the following error: Uncaught TypeError: component is null handleDocumentClick Dropdown.svelte:85 Looking at the code in Dropdown.svelte I indeed see a component variable declared let component; but it is never assigned a value. Any help is appreciated. Keep it up! Hi @kefahi, thanks. I'm unable to duplicate, here is a sample using the same code: https://svelte.dev/repl/4b7f32465b2c4980b9540d8b6ba23fde?version=3.24.0 Would you mind sharing the code you are using? Thank you @bestguy The error went away on its own. It could be a glitch in my local setup. But I'm now curious to understand how it works without errors given that the component variable in Dropdown.svelte is not assigned any value. Thank you again for following up. Hi @kefahi, I think it actually is assigned a value on line #97 by using: bind:this={component} https://svelte.dev/tutorial/bind-this @bestguy That clarifies it, many thanks for your patience and follow-up! I'm closing the ticket.
gharchive/issue
2020-07-30T11:17:05
2025-04-01T04:56:09.115900
{ "authors": [ "bestguy", "kefahi" ], "repo": "bestguy/sveltestrap", "url": "https://github.com/bestguy/sveltestrap/issues/168", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2141202624
ChatCompletionCreateResponse incorrectly deserializes multiple tool calls when streaming Describe the bug Our application uses completion request streaming alongside OpenAI's recent support for multiple parallel tool calls. However, we have found that while OpenAI correctly returns multiple tool call objects in the stream, ChatCompletionCreateResponse always batches them into a single call with multiple function argument objects. This causes the first tool call's arguments to be malformed, and ignores all other tool calls from the API. Your code piece var result = new OpenAIResponse(); using var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(90)); await foreach (var completion in completionResult.WithCancellation(cancellationTokenSource.Token)) { if (completion.Successful) { var choice = completion.Choices.First(); var tools = choice.Message.ToolCalls; Console.WriteLine(tools.Count.ToString()); Console.WriteLine(tools.First().FunctionCall.Arguments.ToString()); } } // Our prompt includes multiple tools, including "googleSearch" and "getURL". // Our message to the agent: "Please search Google for cats and download the contents of www.wired.com." Result The code returns a single tool call containg the arguments of both tool calls: "{\"SearchTerm\": \"cats\"}{\"URL\": \"www.wired.com\"}" Expected behavior The code should return two separate tool calls in tools, each with its own arguments. Desktop (please complete the following information): OS: Windows Server 2019 Language: C# Version: v7.4.6 Additional context Looking at a proxy log of the response from OpenAI, we can see that the API properly returns two separate tool call objects: data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"role":"assistant","content":null},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"id":"call_c4yp30JbgLn1lwxhAWjCCShC","type":"function","function":{"name":"googleSearch","arguments":""}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\"Se"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"archT"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"erm\": "}}]},"logprobs":null,"finish_reason":null}]} data: 
{"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"\"cat"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"s\"}"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":1,"id":"call_iU0YaC5UfziyZN4bTQEKpsGS","type":"function","function":{"name":"getURL","arguments":""}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":1,"function":{"arguments":"{\"UR"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":1,"function":{"arguments":"L\": \""}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":1,"function":{"arguments":"www.wi"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":1,"function":{"arguments":"red."}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{"tool_calls":[{"index":1,"function":{"arguments":"com\"}"}}]},"logprobs":null,"finish_reason":null}]} data: {"id":"chatcmpl-8tjMROEisBcC2XbXNdjGD89ydAywJ","object":"chat.completion.chunk","created":1708293099,"model":"gpt-4-0125-preview","system_fingerprint":"fp_f084bcfc79","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"tool_calls"}]} data: [DONE] This is what leads me to believe that this is an issue specifically with this library. I feel that this may be caused by the changes in #463, but I'm not familiar enough with the codebase to verify that. I'm experiencing similar issues Hello, it actually seems there's an use-case is not handled in changes i did, related to parallel tool calls responses in chat streaming completion mode API. I already have detected where the issue is and i will fix it asap ( this evening or at last tomorrow ). Thank you for your details.
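The affected client here is the C# Betalgo library, but the underlying fix is language-agnostic: streamed tool-call deltas must be merged by their index field rather than concatenated into a single call. Below is a hedged Python sketch of that merging; the function and field names are illustrative, not the library's API.

# Sketch: accumulate streaming tool-call deltas keyed by their "index" so that
# parallel calls keep separate ids, names, and argument strings.
def merge_tool_call_chunks(chunks):
    # chunks: the parsed delta.tool_calls entries from each streamed event
    calls = {}  # index -> {"id": ..., "name": ..., "arguments": ...}
    for delta in chunks:
        slot = calls.setdefault(delta["index"], {"id": None, "name": None, "arguments": ""})
        if delta.get("id"):
            slot["id"] = delta["id"]
        fn = delta.get("function", {})
        if fn.get("name"):
            slot["name"] = fn["name"]
        slot["arguments"] += fn.get("arguments", "")
    return [calls[i] for i in sorted(calls)]

# Applied to the proxy log above, this yields two separate calls:
#   googleSearch -> {"SearchTerm": "cats"}
#   getURL       -> {"URL": "www.wired.com"}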
gharchive/issue
2024-02-18T21:54:25
2025-04-01T04:56:09.213490
{ "authors": [ "David-Buyer", "oferavnery", "skyegallup" ], "repo": "betalgo/openai", "url": "https://github.com/betalgo/openai/issues/493", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1574798790
Consider allowing DV_TEXT 'object', even if only a value attribute is present For DV_TEXT, if the data is {comment: "some Text"}, currently {comment: {value: "some text"}} is not valid when committing data, only {comment: "some text"}. This makes handling constraints where both text and codedText are valid more awkward, since it should be possible to infer text from codedText by the presence/absence of terminology and code attributes. Hi. This is already possible but the syntax of the value attribute is incorrect. {"comment": "some Text"} {"comment": {"|value": "some text"}} Regards, Primož Ah yes, thanks. I was sure {"comment": {"|value": "some text"}} was rejected in the past - perhaps it has been fixed. I have tried this before answering you :) Can I close this issue or do you want to try this before closing? Regards, Primož Confirmed it works - closing
gharchive/issue
2023-02-07T18:00:11
2025-04-01T04:56:09.232641
{ "authors": [ "delopst", "freshehr" ], "repo": "better-care/web-template", "url": "https://github.com/better-care/web-template/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2420937670
Reflect dynamic components / type registrations for dynamic types What problem does this solve or what need does it fill? Components are not restricted to Rust types; rather, they are opaque blobs of data identified in a world via ComponentId. Reflect is not restricted to Rust types and provides dynamic introspection of otherwise opaque data. However, TypeRegistration (and TypeInfo) is tied to Rust types. This means it's impossible to register reflection for dynamic components, even when you have a Reflect implementation available. What solution would you like? Don't require TypeId for TypeRegistration & TypeInfo. Type registrations need to be identified by a different id, such as a type path. What alternative(s) have you considered? Keep restricting reflection to static components. This was actually recently discussed to an extent on Discord. Relying on TypeId doesn't just limit our usage of dynamically defined types, it also makes hot-reloading a bit more complicated. We should probably consider relying on TypePath for stable identification, and possibly a pre-computed hash of that value for performance-critical flows.
gharchive/issue
2024-07-20T15:07:53
2025-04-01T04:56:09.262202
{ "authors": [ "MrGVSV", "SpecificProtagonist" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/issues/14404", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1159913279
Enable wayland by default What problem does this solve or what need does it fill? Without enabling wayland bevy fails on systems that support wayland instead of falling back to x11, supported by xwayland. This is because wgpu won't support x11 windows, when wayland is available on the system. What solution would you like? simply add "wayland" to the default features list. What alternative(s) have you considered? You can of course enable it manually, but many will forget and simply assume that wayland will be supported via xwayland. It may also be nice to default to X11 (through Xwayland) on GNOME even when wayland is supported as GNOME refuses to properly implement some parts of wayland. For example they don't support server side decorations, so winit has to draw a rather ugly title bar itself instead. That would be a bad idea, because wgpu's gles backend will always use wayland when possible. There is some discussion in winit and smithay_client_toolkit about adding libdecor support, for wayland, but for now we have to live with not the prettiest solution. That would be a bad idea, because wgpu's gles backend will always use wayland when possible. Then that is already a problem when wayland support is entirely disabled in winit. gfx-rs has logic to recreate the egl context when creating a surface for a different wayland window. I feel like this logic could be extended to recreating the egl context when creating a surface for an xcb or xlib window. GNOME will never add SSD because they asume that everyone uses GTK and prefered to implement titlebars with additional buttons, menus etc. as CSD thing... About libdecor, I found out it as kind of library which isn't FFI friendly at all and getting it stabilized in SCTK may take months. What's the impact of enabling Wayland when running somewhere it's not available? Longer build times? Regarding the topic of SSD in Gtk: I have stumbled upon this issue in a Wayland compositor and there is a PR to fix this issue in Gtk by enabling optional SSD (judging from the description, that is what it does; correct me if I am wrong). Might be of interest to you @HeavyRain266. Regarding the topic of SSD in Gtk: I have stumbled upon this issue in a Wayland compositor and there is a PR to fix this issue in Gtk by enabling optional SSD (judging from the description, that is what it does; correct me if I am wrong). Might be of interest to you @HeavyRain266. That doesn't fix absence of SSD in Mutter itself (GNOME's compositor) but adds option that you can disable GTK CSD in your Compositor to use your own SSD implementation. What alternative(s) have you considered? You can of course enable it manually, but many will forget and simply assume that wayland will be supported via xwayland. For GNOME in Ubuntu, that would mean having to manually disable it to continue to use x11 (as this comment notes)? For GNOME in Ubuntu, that would mean having to manually disable it to continue to use x11 (as this comment notes)? XWayland is currently broken for wgpu with gles as it is will create an Instance that is only compatible with wayland if wayland is available. Was scratching my head for a long while trying to figure out why the program was not exiting after closing the window. Enabling the wayland backend to stop relying to xwayland fixed the issue. Brief testing revealed no other issues. For example they don't support server side decorations, so winit has to draw a rather ugly title bar itself instead. There's nothing improper about this. 
It's perfectly valid in the spec to not support SSDs, and in some cases, expected, even. Plus, now we have libdecor and adwaita-sctk. There's not much need for SSDs, as now we can still get a similar end result. In the process, this can make the creation of a compositor vastly simpler. Another reason for wayland by default would be proper fractional scaling support. On my system (scale factor 1.5) applications running via XWayland are blurry due to being upscaled.
gharchive/issue
2022-03-04T18:07:41
2025-04-01T04:56:09.272259
{ "authors": [ "HeavyRain266", "bjorn3", "keis", "kirusfg", "kobutri", "mcobzarenco", "mockersf", "orowith2os", "pkupper" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/issues/4106", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1986465293
Adjust bevy_asset::AssetMode::Processed to take an Option<bool>th… Adjust bevy_asset::AssetMode::Processed to take an Option<bool>that if None will default to whether the asset_processor feature is enabled or not. Objective Allowing the asset_processor feature to be user controllable at runtime, though default to the prior functionality Solution Added an Option<bool> to the bevy_asset::AssetMode::Processed head and dispatched on it falling back to the cfg!(feature = "asset_processor") value. Changelog Changed: bevy_asset::AssetMode::Processed acquired a new argument of bevy_asset::AssetMode::Processed(Option<boo>), if None then prior functionality else it will enable the processor if Some(true) or disable it if Some(false) overriding the feature. Migration Guide To continue with the prior functionality just adjust any AssetMode::Processed to bevy_asset's plugin loader to be AssetMode::Processed(None). One thing to consider for this impl: while the asset_processor currently is "just" a flag to enable the asset processor, but ultimately I think it should also determine whether or not to compile the asset processor (this was omitted for 0.12 because the changes are non-trivial and require some refactors). I think the behavior should be: If asset_processor is not present, it is not compiled and we assume that for AssetMode::Processed, assets have already been compiled. If asset_processor is present, it is compiled and (by default) we assume that we will start the asset processor (enabling the "standard" workflow as described). If asset_processor is present, we are in AssetMode::Processed and the user has opted out from starting the processor as a runtime configuration on AssetPlugin, we will not start the server and we will exhibit the same behavior as (1). I think Processed should not be a bool. I would like to keep that as a simple global "Processed vs Unprocessed" configuration so we can make this behavior work transparently. Instead I think we should probably adopt the AssetPlugin::watch_for_changes_override pattern. Ex: AssetPlugin::start_asset_processor_override: Option<bool>, which will be considered when AssetMode::Processed is enabled with the asset_processor feature. One thing to consider for this impl: while the asset_processor currently is "just" a flag to enable the asset processor, but ultimately I think it should also determine whether or not to compile the asset processor (this was omitted for 0.12 because the changes are non-trivial and require some refactors). I expected it to work like this and was surprised when it didn't, I would expect this in the future as well (especially with how heavy some preprocessors might be when compiled in). I think the behavior should be: ...snip ... This works quite well. Instead I think we should probably adopt the AssetPlugin::watch_for_changes_override pattern. Ex: AssetPlugin::start_asset_processor_override: Option<bool>, which will be considered when AssetMode::Processed is enabled with the asset_processor feature. Effected this change, just pushed. I do wonder if it should be AssetPlugin::start_asset_processor_override: Option<bool>, or if it should just be AssetPlugin::start_asset_processor_override: bool, that defaults to true though, that's basically what is already happening so the Option might be superfluous here. Should I make it just a bool instead?
gharchive/pull-request
2023-11-09T21:57:00
2025-04-01T04:56:09.281620
{ "authors": [ "OvermindDL1", "cart" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/pull/10481", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2044894549
Send SceneInstanceReady only once per scene Objective Send SceneInstanceReady only once per scene. Solution I assume that this was not intentional, so I just changed it to only be sent once per scene. Changelog Fixed Fixed SceneInstanceReady being emitted for every Entity in a scene. Did you notice receiving this event several times? It should already be sent only once. Yes, I received it multiple times, exactly as many times as I had entities in the scene. I'm not familiar with Bevy's codebase, but from how I understand the code it seems obvious that it calls send_event() once per entity. I'm on main and seeing this event only once, but this code shouldn't have changed since 0.12.1. Could you share the scene you're using? I tried with a few gltf files. "Could you share the scene you're using? I tried with a few gltf files." I added a unit test. Without the change in this PR the unit test will fail. Nice test, thanks! Unlike scenes from gltfs, yours has two root entities, which explains what you're seeing 👍 Rebased after cfcb6885e3b475a93ec0fe7e88023ac0f354bbbf.
gharchive/pull-request
2023-12-16T18:24:10
2025-04-01T04:56:09.286418
{ "authors": [ "daxpedda", "mockersf" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/pull/11002", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
842866636
Mesh: added static and dynamic mesh for less runtime memory usage This addresses the unloading issue in https://github.com/bevyengine/bevy/issues/756. Problem Any mesh that is created will keep it's data (vertex attributes and indices) in the memory. This data is however never needed, unless the user wishes to make changes at runtime. Changes All mesh data will be unloaded after beeing uploaded to the GPU. If the user want's runtime changes hes has to use mesh::new_dynamic(). Internally, a new enum called MeshDataState indicates wther the mesh is static or dynamic. mesh now keep direct track of changes to mesh::attributes and mesh::indices. Remove mesh::indices_mut and mesh::attribute_mut, because any changes to these can't be tracked. If the user wishes to make chages to existing data, he has to clone them first from the immutable getter. When this gets merged, we should probably also implement something similar for Texture data. There is no need to keep texture data (and any mipmaps) in system RAM after copying to the GPU, if the texture will not be modified. Forgot to add: The current API will just print an error if, for example,set_attribute is called but the mesh is GPU only. This is similary to how Unity deals with the problem, but might not be super rust idiomatic? I tried other API aproaches, like having a extra struct MeshData that you can borrow from mesh, but there is no way to check when that borrow returns to update vertex_layout or indices_count etc. . Could also let that struct MeshDataborrowable over a scope wiht fn(&mut : MeshData) though, but maybe afterall this non-idomatic problem isn't big enough to justify this amount of extra code... 🤷 Round 2! Separated Mesh internally into MeshData and MeshMetaInfo Creating any mesh requieres the MeshDatato be defined up-front. Example let mut mesh = MeshData::default(); mesh_data.set_attribute(MeshData::ATTRIBUTE_POSITION, vec![[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]); mesh_data.set_indices(Some(Indices::U32(vec![0,1,2]))); Mesh::new_static(PrimitiveTopology::TriangleList, mesh_data) Altering mesh data happens now via Mesh::get_mesh_data_mut() which returns a Option<MeshData>, whether the mesh is static or dynamic. Example for altering a dynamic mesh: myMesh.get_mesh_data_mut().unwrap().set_attribute("Vertex_Position", new_vertex_data). The vertice count, etc. is now acquired via myMesh.meta().get_vertex_count(). Adapted examples and bevy_shape crate to new Mesh API. Shapes are now always getting created as dynamically. Open Questions Indices and attributes can't be borrowed mutable anymore, because: The meta data is only updated uppon set_indices or set_attribute. (Changeable) Changes made can't be validated and could crash bevy. To work arround this, the user would need to get a temporarly copy of the mesh first, which needs additional resources. Is there a use case where this is a problem? Could only think of enviroments with low-RAM, but is there something else? Any futher feedback on this? I gave this some thoughts: We're always dealing with two representations of the Mesh on CPU side: CPU form: bunch of arrays for each attribute like MyNormals: Vec<Vec3>. GPU form: big array for all attributes in interleaved form like in MyVertexData: Vec<u8> + a vertex layout descriptor. In my current implementation we're (optionally) carrying 1. inside Mesh and convert it into 2. once the mesh-system is called. 
But I now see this as a problem (see your 100MB mesh example), because the user has it's own, likely more efficient, representation of f.e. a terrain system already. So if the user commits his changes to Mesh we would have up three different representations simultaneously in memory (user, CPU, GPU). :grimacing: I would therefore suggest to remove any possibility to store a CPU-Mesh in Mesh entirely and only allow GPU meshes in byte form. struct MeshBlob{ pub vertex_bytes: &[u8], pub index_bytes: &[u8], pub meta: MeshMeta, } Dealing with raw GPU data has their own pitfalls, so I wouldn't want most user to deal with them directly, and instead offer some API for common cases. let my_mesh_blob = MeshBlob::from_attributes(my_positions, Some(my_normals),/*...*/).unwrap(); mesh.set_gpu_data(my_mesh_blob); In the case of the dynamic terrain, the user would just write his own MeshBlob creator. The GLTF loader would use MeshBlob::from_attributes(). This approach would solve a ton of problems: Memory consumption is minimized Less data would be moved around (see User Mesh -> CPU Mesh -> GPU Mesh) Vertex data can easily be validated (either by MeshBlob::from_attributes or Mesh::set_gpu_data) Missing attributes can easily be filled with defaults (fixes many panics by missing attributes (sorry for that!)) Unloading of mesh data is done in a reasonable way (once the blob is uploaded to the GPU) Attribute packaging and compression can be realized (See #756) MeshBlobs can be serialized for faster loading times in builds Vertex-Layouts are more predictable Less Render-pipeline permutations Makes it easier to write a custom shader Cleaner API of Mesh This would mean that I remove all of the API that deals with the CPU-Mesh and also change the scope of the PR quite a bit. Downside (as far as I can see): Loaded meshes can't be manipulated, since the user will never get the chance to access it. (just offer additional asset API?) MeshBlob::from_attributes needs some internal logic to choose from multiple VertexLayouts. Otherwise we would get overkill meshes that define only f.e. positions and normals, but store four channels of UVs :smiley: . Hi @julhe, @cart! I'm looking at reducing memory usage for my prototype and the quickest win was picking this up, just looking for some feedback on the approach based on the comments above. I've added support for meshes with vertex data that is already interleaved and unloading that data after an upload to the GPU like was done here. Now I'm optimizing for this case: However I think the "editing a very large Mesh without massive copies" problem is real. If someone has a 100 MB terrain mesh that they want to dynamically update, we probably shouldn't be copying it every time it changes (although they also shouldn't be doing a full upload to the gpu each time, but thats a separate problem to solve). At the moment I've went with 3 types of VertexStorage strategies and extended RenderAsset to allow an attempt to update existing PreparedAssets: GpuAndLocal (buffer created with MAP_WRITE, limited to small meshes (my 1660 Ti only has 256 MiB of host coherent memory)) GpuCow (store changed data in a staging buffer and copy to the vertex buffer during extraction, changes to the underlying buffer are stored as buffer address ranges with and CPU buffers (Vec<u8>) of the modified data) Cpu (full CPU->GPU re-uploads) Under the hood in Mesh there's now an abstraction to determine where an attribute should be written to in its storage. This allows set_attributes() et al. 
to work with loose or interleaved data regardless of storage to preserve API usability, but I'm still improving the ergonomics of that (to e.g. avoid replacing the entire buffer when changing a handful of vertices and dealing with having no attribute names in interleaved data). Every mesh is created with loose attributes by default and is converted to interleaved data on the first GPU upload, but the loose -> interleaved -> GPU copy is avoided by going from loose -> interleaved GPU storage and draining each loose attribute buffer as we go. From is implemented for LooseVertexData on InterleavedVertexData and vice versa, so converting back to a flat format to e.g. add more vertices is still possible, but as mentioned will fail with a GpuCow mesh. To summarize: CPU vertex data is unloaded after upload, optionally converted to a reference to a CoW staging buffer or host-coherent buffer Vertex data can now be stored as interleaved or loose data This is somewhat transparent to the user, who is still able to modify attributes. Interleaved vertex data has several storage mechanisms Host-coherent buffer (GpuAndLocal) Staging buffer that tracks changed buffer regions (GpuCow) Same full CPU->GPU copy that happens today (Cpu) Some concerns: you can't serialize a mesh to disk after the CPU data has been unloaded care needs to be taken to modify GpuAndLocal meshes in the correct place, maybe it's better to expose dynamic mesh data through some other render system specific handle? storing CPU buffers for GpuCow could be avoided by assuming the mesh can directly map the storage buffer, but introduces the above problem GpuCow meshes can only be written to, not read efficiency of GpuCow uploads is ultimately down to access patterns on the underlying vertex data Small request: rather than "dynamic" and "static", I would rather "mutable" and "immutable". That rename might make sense. In the context of rendering, typically the terms "dynamic" and "static" are used for whether objects can move around the scene (whether their transform will change) or not. Hearing "static mesh optimization" makes my brain think about things like batching geometry and baking lightmaps, assuming that those meshes are at a fixed Transform forever. Please rebase this PR when you get a chance :) Hey @JMS55 sorry for the delay! I'm fine with you taking over, but I would like to also recevie credit for the contribution. 🙂 @julhe sorry, I just saw your response! Of course you will receive credit if I end up merging anything. I tried some things and came to the conclusion that it would be better to wait for the asset rework to get finished https://github.com/bevyengine/bevy/pull/8624 rather then attempt this now, however. Hey @JMS55, just a simple ping here, checking if there's any updates on this. I'm interested in this feature and I'm offering my help if needed. Let me know! 🚀 Assets v2 has been merged, which means it's now feasible to do something like this. However the way we handle meshes in general is likely to change soon as part of the ongoing rendering rewrites. I have no plans to work on this myself at the moment, but if you're interested in working on it feel free to join #rendering-dev in the bevy discord and talk to us on what needs to be done :) I just wanted to leave my two cents: This data is however never needed, unless the user wishes to make changes at runtime. 
I'm writing a Bevy-based path tracer and I need constant access to that raw data in order to correctly build the BVH (one could argue a BLAS/TLAS approach is better, but in my case I'm "flattening" meshes and instances into a list of triangles and building a BVH out of that). I'm not sure about the data flow, but if Bevy is to unload a mesh, it should at least give other systems (in particular inside RenderApp) one frame to extract the mesh for storage on their own side (maybe the current implementation already does that, I haven't analyzed the code thoroughly) 🙂 I believe there should be an easy API toggle to choose whether data should be kept in system RAM or freed. There are valid use cases for both. If you need CPU-side code that deals with the data of meshes/textures/etc., you want them to be kept (in an ideal world, though, they could be put into unified memory on hardware that supports it: consoles, integrated GPUs, etc.). For most games, you want them to be freed. The default for assets loaded from disk should be to free the data on the CPU side, but it should be possible to reconfigure that (both globally (like we can do with the default texture filtering, etc.) and per-asset). Assets that are added from code should default to keeping the data. Closing in favor of https://github.com/bevyengine/bevy/pull/10520
gharchive/pull-request
2021-03-29T00:11:41
2025-04-01T04:56:09.314576
{ "authors": [ "JMS55", "Patryk27", "Phyyl", "garyttierney", "inodentry", "jamadazi", "julhe" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/pull/1782", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1810932715
Add the Has world query to bevy_ecs::prelude Objective Addresses #9196 by adding query::Has to the bevy_ecs::prelude. @james7132 why no merge queue? Wanted to get @nicopap's review since a review was requested.
gharchive/pull-request
2023-07-19T00:43:12
2025-04-01T04:56:09.317701
{ "authors": [ "FlippinBerger", "alice-i-cecile", "james7132" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/pull/9204", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1856668461
Make GridPlacement's fields non-zero and add accessor functions. Objective There is no way to read the fields of GridPlacement once set. Values of 0 for GridPlacement's fields are invalid but can be set. A non-zero representation would be half the size. fixes #9474 Solution Add get_start, get_end and get_span accessor methods. ChangeGridPlacement's constructor functions to panic on arguments of zero. Use non-zero types instead of primitives for GridPlacement's fields. Changelog bevy_ui::ui_node::GridPlacement: Field types have been changed to Option<NonZeroI16> and Option<NonZeroU16>. This is because zero values are not valid for GridPlacement. Previously, Taffy interpreted these as auto variants. Constructor functions for GridPlacement that accept numeric values now return Result<GridPlacement, GridPlacementError>. Arguments of 0 are invalid, resulting in a GridPlacementError. Added accessor functions: get_start, get_end, and get_span. These return the inner primitive value (if present) of the respective fields. Migration Guide Constructor functions for GridPlacement that accept numeric values now return Result<GridPlacement, GridPlacementError>. Arguments of 0 are invalid, resulting in a GridPlacementError. Once there's docs for GridPlacementError this LGTM. Tests never hurt, but I'll leave that to you. @viridia can I get your review on this PR? If this looks good to you and meets your needs, I'll be able to merge it with your approval :) See the relevant section of CONTRIBUTING.md for more details on the community approval model we use. Once there's docs for GridPlacementError this LGTM. Tests never hurt, but I'll leave that to you. Added a few trivial tests but there isn't much here to check, the logic is all in the convert function and Taffy.
gharchive/pull-request
2023-08-18T12:47:35
2025-04-01T04:56:09.324291
{ "authors": [ "alice-i-cecile", "ickshonpe" ], "repo": "bevyengine/bevy", "url": "https://github.com/bevyengine/bevy/pull/9486", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1578207064
MGDSTRM-6199 Add rhoas CLI ACL tests. I split the review into two parts. It is good work, but there are some redundancies which make the code excessively large for the scope of the tested functionality. Also, creating a new profile and suite for testing the CLI is a great idea (just one class added here); once the mentioned things are resolved we can finish the reduction of the KafkaRhoasAclTest class in the way shown. Thank you for your feedback; adding @DataProvider and the ACLEntityType enum helped us improve readability. I addressed all your comments except @Test(priority = 1) and GITHUB_TOKEN (more info in the conversations). Hey, great job. Basically two main points: firstly, we are heavily depending on a concrete order of execution. Secondly, we are actually testing deletion too much while it is not so necessary. I added some suggestions here, but feel free to go with it any way you like. Thanks! I'll address some of the issues you shared in my next commit. @henryZrncik I committed new changes removing the test order execution, dependencies, and duplicated tests.
gharchive/pull-request
2023-02-09T16:24:16
2025-04-01T04:56:09.339042
{ "authors": [ "agullon" ], "repo": "bf2fc6cc711aee1a0c2a/e2e-test-suite", "url": "https://github.com/bf2fc6cc711aee1a0c2a/e2e-test-suite/pull/474", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2074147163
🛑 Hacker News is down In 0842f7d, Hacker News (https://news.ycombinator.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Hacker News is back up in 84b8c07 after 30 minutes.
gharchive/issue
2024-01-10T11:16:33
2025-04-01T04:56:09.365737
{ "authors": [ "bguivarch" ], "repo": "bguivarch/testuptime2", "url": "https://github.com/bguivarch/testuptime2/issues/101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2138097332
Update to package:web Is there any option to upgrade this package to the new package:web of Flutter 3.19? Is there any issue? Not really, but it is necessary if you want to compile your project to Wasm: "To run Flutter applications on the web using WebAssembly, you need to migrate all code — from the application and all dependencies — to use the new JavaScript Interop mechanism and the package:web. The legacy JavaScript and browser libraries remain unchanged and supported for compiling to JavaScript code. However, compiling to WebAssembly requires a migration." I think this is done with version 0.0.6.
gharchive/issue
2024-02-16T08:19:09
2025-04-01T04:56:09.371332
{ "authors": [ "JgomesAT", "bharathraj-e" ], "repo": "bharathraj-e/g_recaptcha_v3", "url": "https://github.com/bharathraj-e/g_recaptcha_v3/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }