1930555988
liableForVat false is not the case for folio-created vendor records

Our migrated vendors will have the liableForVat key in the data, but it seems folio-created vendor records will not have that key if it is false. We got a KeyError when trying to transform an invoice while running the payments DAG:

[2023-10-06, 15:42:46 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
+ Exception Group Traceback (most recent call last):
| File "/home/airflow/.local/lib/python3.10/site-packages/airflow/decorators/base.py", line 220, in execute
| return_value = super().execute(context)
| File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 181, in execute
| return_value = self.execute_callable()
| File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 198, in execute_callable
| return self.python_callable(*self.op_args, **self.op_kwargs)
| File "/opt/airflow/libsys_airflow/plugins/orafin/tasks.py", line 64, in transform_folio_data_task
| invoice, exclude = get_invoice(invoice_id, folio_client, converter)
| File "/opt/airflow/libsys_airflow/plugins/orafin/payments.py", line 93, in get_invoice
| invoice = converter.structure(invoice, Invoice)
| File "/home/airflow/.local/lib/python3.10/site-packages/cattrs/converters.py", line 334, in structure
| return self._structure_func.dispatch(cl)(obj, cl)
| File "<cattrs generated structure libsys_airflow.plugins.orafin.models.Invoice>", line 61, in structure_Invoice
| if errors: raise __c_cve('While structuring ' + 'Invoice', errors, __cl)
| cattrs.errors.ClassValidationError: While structuring Invoice (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "<cattrs generated structure libsys_airflow.plugins.orafin.models.Invoice>", line 45, in structure_Invoice
| res['vendor'] = __c_structure_vendor(o['vendor'], __c_type_vendor)
| File "<cattrs generated structure libsys_airflow.plugins.orafin.models.Vendor>", line 24, in structure_Vendor
| if errors: raise __c_cve('While structuring ' + 'Vendor', errors, __cl)
| cattrs.errors.ClassValidationError: While structuring Vendor (1 sub-exception)
| Structuring class Invoice @ attribute vendor
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "<cattrs generated structure libsys_airflow.plugins.orafin.models.Vendor>", line 20, in structure_Vendor
| res['liableForVat'] = __c_structure_liableForVat(o['liableForVat'])
| KeyError: 'liableForVat'
| Structuring class Vendor @ attribute liableForVat
+------------------------------------

Update this line in the model: https://github.com/sul-dlss/libsys-airflow/blob/1d5adfe1a499e74250cd2987d2f527d84e711655/libsys_airflow/plugins/orafin/models.py#L18 to liableForVat: Union[None, bool]. Also, check the API docs to see what is required vs. optional for the other properties in our models.

@jermnelson I tried to add this to models.py, but when I update the fixture data in test_payments.py to reflect that the vendor doesn't have the liableForVat key in the JSON data, the test still fails with the same error. See https://github.com/sul-dlss/libsys-airflow/commit/19655f6a919b42168e11f72496627119f03b7157

Hi Shelley, we need to specify a default value in the model definition in case liableForVat is missing (I should have looked at another example in models.py, i.e. line 55). So line 18 should be liableForVat: Union[bool, None] = None.
This fixed all of the testing errors for me except for https://github.com/sul-dlss/libsys-airflow/blob/1d5adfe1a499e74250cd2987d2f527d84e711655/tests/orafin/test_payments.py#L153, which should be changed from False to None to pass.

Hmm, I'm not sure this is working as intended. When I add "liableForVat": False back to the vendor object in https://github.com/sul-dlss/libsys-airflow/blob/19655f6a919b42168e11f72496627119f03b7157/tests/orafin/test_payments.py#L89-L93, the test fails, because the expected value is now the default None while the structured vendor has False. To me, this means that all the migrated vendor records that do have liableForVat: false in the JSON data will fail too.

@shelleydoljack, looking in the models at line 124, if liableForVat is False we would generate a TA line. Should we generate a TA if liableForVat is None? The other place liableForVat is used is the tax_code function, and liableForVat being set to False or None returns the same value (USE_CA) there.

I think I figured out what to do: we want liableForVat to default to False when it is missing from the organization JSON we get back. I created a PR with this logic. I think it fixes the issues you bring up in https://github.com/sul-dlss/libsys-airflow/issues/773#issuecomment-1756467401.
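For reference, the behavior the thread converges on can be shown with a minimal, self-contained sketch; the Vendor model here is a reduced, hypothetical stand-in for the real one in models.py:

```python
from typing import Union

from attrs import define
import cattrs


@define
class Vendor:
    code: str
    # Defaulting to False makes a missing key behave exactly like an
    # explicit "liableForVat": false in migrated vendor records.
    liableForVat: Union[bool, None] = False


# FOLIO-created vendor JSON with the key absent: no KeyError is raised.
vendor = cattrs.structure({"code": "AAP"}, Vendor)
assert vendor.liableForVat is False

# Migrated vendor JSON with an explicit false still structures as False.
migrated = cattrs.structure({"code": "AAP", "liableForVat": False}, Vendor)
assert migrated.liableForVat is False
```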
gharchive/issue
2023-10-06T16:23:02
2025-04-01T04:35:58.999309
{ "authors": [ "jermnelson", "shelleydoljack" ], "repo": "sul-dlss/libsys-airflow", "url": "https://github.com/sul-dlss/libsys-airflow/issues/773", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1174089587
🛑 SISTEMA ROYAL-HOLIDAY is down

In 23720db, SISTEMA ROYAL-HOLIDAY (https://www.royalholiday.com.ar/reservas/admin/) was down:
HTTP code: 0
Response time: 0 ms

Resolved: SISTEMA ROYAL-HOLIDAY is back up in a05cad0.
gharchive/issue
2022-03-18T22:44:45
2025-04-01T04:35:59.037698
{ "authors": [ "sumito74" ], "repo": "sumito74/upptime", "url": "https://github.com/sumito74/upptime/issues/327", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1985823960
4.2 international server: exporting from the cache file fails with "data is empty, error code: invalid request params"

Problem description
On the 4.2 international server, exporting from the cache file fails with "data is empty, error code: invalid request params".

System version
v4.2.0.11092107, Windows 11 22H2

Error log
2023-11-09 23:05:01.002 | INFO | __main__:<module>:225 - Project homepage: https://github.com/sunfkny/genshin-gacha-export
2023-11-09 23:05:01.002 | INFO | __main__:<module>:226 - Author: sunfkny
2023-11-09 23:05:01.002 | INFO | __main__:<module>:227 - Version: v4.2.0.11092107
2023-11-09 23:05:01.002 | INFO | updater:update:99 - Releases: https://github.com/sunfkny/genshin-gacha-export/releases
2023-11-09 23:05:01.002 | INFO | updater:update:100 - Coding artifact repository (recommended within China): https://sunfkny.coding.net/public-artifacts/genshin-gacha-export/releases/packages
2023-11-09 23:05:01.298 | INFO | updater:update:138 - Already on the latest version
2023-11-09 23:05:01.300 | INFO | __main__:<module>:240 - Checking the link in the config file
2023-11-09 23:05:01.730 | WARNING | __main__:check_api:210 - Link expired
2023-11-09 23:05:01.750 | INFO | __main__:<module>:256 - Using clipboard mode
2023-11-09 23:05:01.750 | INFO | __main__:<module>:264 - No link in the clipboard
2023-11-09 23:05:01.750 | INFO | __main__:<module>:283 - Cloud Genshin log does not exist
2023-11-09 23:05:01.750 | INFO | __main__:<module>:305 - Using the international-server log C:\Users\tenky\AppData\LocalLow\miHoYo\Genshin Impact\output_log.txt
2023-11-09 23:05:01.750 | INFO | __main__:<module>:326 - Cache file C:\Program Files\Genshin Impact\Genshin Impact game\GenshinImpact_Data\webCaches\2.18.0.0\Cache\Cache_Data\data_2
2023-11-09 23:05:01.750 | INFO | __main__:<module>:332 - Starting to read the cache
2023-11-09 23:05:01.781 | INFO | __main__:<module>:350 - Checking the latest link in the cache file
2023-11-09 23:05:05.388 | WARNING | __main__:check_api:214 - Data is empty, error code: invalid request params
2023-11-09 23:05:05.388 | INFO | __main__:<module>:367 - Falling back to packet-capture mode

Additional screenshots

Once it triggers, please upload the cache file data_2 and the log.txt so I can check whether this is an encoding issue or something else.

The international-server URL prefix also seems to have changed; updated in v4.2.0.11102342.

v4.2.0.11102342 still doesn't work and reports that the link is expired, even though I already opened the wish history in game manually. Attaching log.txt first. log.txt
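For context on what the tool does in cache mode, here is a rough, hypothetical sketch of extracting the newest wish-history link from the Chromium cache blob; the regex and endpoint name are illustrative of the approach, not the tool's actual code:

```python
import re


def latest_gacha_url(cache_path: str) -> str | None:
    # The cache file is a binary blob; scan it for wish-history API links
    # and return the most recent one found.
    data = open(cache_path, "rb").read()
    urls = re.findall(rb'https://[^\x00\s"]+getGachaLog[^\x00\s"]*', data)
    return urls[-1].decode(errors="ignore") if urls else None


print(latest_gacha_url(r"C:\Program Files\Genshin Impact\Genshin Impact game"
                       r"\GenshinImpact_Data\webCaches\2.18.0.0\Cache\Cache_Data\data_2"))
```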
gharchive/issue
2023-11-09T15:07:10
2025-04-01T04:35:59.051549
{ "authors": [ "TenkyuChimata", "sunfkny" ], "repo": "sunfkny/genshin-gacha-export", "url": "https://github.com/sunfkny/genshin-gacha-export/issues/73", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
390579760
A field mapped from a nullable enum always gets a default value on insert and can never be NULL

An entity property whose type is a nullable enum is mapped to a nullable integer column in the database, but during insert operations the field always receives a default value and can never be null. Is this a bug?

Enums are not handled for this case, because some people interpret NULL as 0.
gharchive/issue
2018-12-13T09:03:37
2025-04-01T04:35:59.061393
{ "authors": [ "liumanman", "sunkaixuan" ], "repo": "sunkaixuan/SqlSugar", "url": "https://github.com/sunkaixuan/SqlSugar/issues/179", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1181173110
Use of NFTables instead of IPTables (Step 18: configure firewall)

Guide
How to self-host hardened strongSwan IKEv2/IPsec VPN server for iOS and macOS

Summary
I am trying to avoid using iptables and have switched over to nftables. Could you provide the equivalent nftables commands (alongside the iptables ones) for Step 18? I have tried using nftables' auto-translate feature to convert the iptables commands to nftables syntax, but it does not translate all of the commands. Thank you!

Hey @gspannu, I agree one should use nftables (most other guides do), but if I remember my previous attempts correctly, it isn't straightforward in the context of this specific ruleset. The guide has been deprecated given I am no longer using it myself… that said, I am open to a peer-reviewed pull request if you know how to switch the firewall to nftables.

~Surjith is not involved here; signed, sunknudsen
gharchive/issue
2022-03-25T19:13:26
2025-04-01T04:35:59.063491
{ "authors": [ "gspannu", "sunknudsen" ], "repo": "sunknudsen/privacy-guides", "url": "https://github.com/sunknudsen/privacy-guides/issues/224", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1091880068
fix: update currentSession.user when GoTrueClient.update is called

Fixes #39
Addresses PR feedback in #40

Could you please add tests?

Absolutely. Can you approve my account to run checks so I can test my tests? I'm not sure how to run them successfully locally yet. https://github.com/supabase-community/gotrue-dart/issues/47

The new stable Dart SDK has new lints that are failing. I can resolve these as well.

Hey, just wondering if this can be merged and released? 😀
gharchive/pull-request
2022-01-01T19:55:48
2025-04-01T04:35:59.096237
{ "authors": [ "MisterJimson", "bdlukaa" ], "repo": "supabase-community/gotrue-dart", "url": "https://github.com/supabase-community/gotrue-dart/pull/46", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1305792024
Add createTable method on client

It would be useful to let developers create tables in a declarative way.

I don't think it's safe to create a table from the client.

@jakub-stefaniak As @bdlukaa suggested, it is not safe to be able to create tables from a client, and it is not how systems are designed on top of an RDBMS. Typically, you want to create the table in a safe environment, like the Supabase dashboard, and define/restrict how and which rows users can access. You can read more on how you can design tables from here, but I would like to close this issue for now. Feel free to open other issues if you have any questions regarding anything about Supabase!
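As a concrete illustration of the recommended flow (not code from this thread): the table and its row access are defined once server-side, e.g. in a migration or the dashboard's SQL editor. The table and policy below are hypothetical.

```sql
-- Define the table and its row access server-side, not from the client.
create table public.notes (
  id bigint generated always as identity primary key,
  owner uuid not null default auth.uid(),
  body text
);

alter table public.notes enable row level security;

-- Clients may then only read their own rows.
create policy "owners read their notes"
  on public.notes for select
  using (auth.uid() = owner);
```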
gharchive/issue
2022-07-15T09:17:58
2025-04-01T04:35:59.098440
{ "authors": [ "bdlukaa", "dshukertjr", "jakub-stefaniak" ], "repo": "supabase-community/postgrest-dart", "url": "https://github.com/supabase-community/postgrest-dart/issues/74", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2092055587
[Bug]: Logging in via Google as banned user results in a crash

General Info
[X] I checked for similar bug reports
[X] I am using the latest version
[X] I checked the troubleshooting page for similar problems
[X] I enabled logging and checked the logs

Version(s)
2.0.4

Target(s)
Android

What happened? (include your code)
Logging in as a banned user via the Google provider results in the API call responding with 500. That's certainly a Supabase issue, but there should probably be a mechanism to handle such exceptions and prevent them from bubbling up and crashing the app.

Steps To Reproduce (optional)
Ban a user via auth.adminApi
Log in as the banned user

Relevant log output (optional)
Fatal Exception: io.github.jan.supabase.exceptions.BadRequestRestException: server_error (Internal Server Error)
URL: https://redacted.supabase.co/auth/v1/token?grant_type=id_token
Headers: redacted
Http Method: POST
at io.github.jan.supabase.gotrue.AuthImpl.parseErrorResponse(AuthImpl.kt:489)
at io.github.jan.supabase.gotrue.AuthenticatedSupabaseApiKt$authenticatedSupabaseApi$3.invoke(AuthenticatedSupabaseApi.kt:58)
at io.github.jan.supabase.gotrue.AuthenticatedSupabaseApiKt$authenticatedSupabaseApi$3.invoke(AuthenticatedSupabaseApi.kt:58)
at io.github.jan.supabase.network.SupabaseApi.rawRequest$suspendImpl(SupabaseApi.kt:25)
at io.github.jan.supabase.network.SupabaseApi$rawRequest$1.invokeSuspend(:15)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:108)
at androidx.compose.ui.platform.AndroidUiDispatcher.performTrampolineDispatch(AndroidUiDispatcher.android.kt:81)
at androidx.compose.ui.platform.AndroidUiDispatcher.access$performTrampolineDispatch(AndroidUiDispatcher.android.kt:41)
at androidx.compose.ui.platform.AndroidUiDispatcher$dispatchCallback$1.run(AndroidUiDispatcher.android.kt:57)
at android.os.Handler.handleCallback(Handler.java:958)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loopOnce(Looper.java:205)
at android.os.Looper.loop(Looper.java:294)
at android.app.ActivityThread.main(ActivityThread.java:8248)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:552)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:971)

@brezinajn So how do you log in? Via Compose Auth?

Yes, via Compose Auth:

// client creation
install(Auth)
install(ComposeAuth) {
    googleNativeLogin(serverClientId = serverClientId)
}
...
// use site
composeAuth.rememberSignInWithGoogle()

Okay, does this crash happen on Android 14? (I'm asking because it seems like there is a try/catch missing in the Android 14 implementation.)

Yes

Alright, a fix will be included in the upcoming 2.1.0 beta.
gharchive/issue
2024-01-20T13:32:45
2025-04-01T04:35:59.104082
{ "authors": [ "brezinajn", "jan-tennert" ], "repo": "supabase-community/supabase-kt", "url": "https://github.com/supabase-community/supabase-kt/issues/432", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1884507254
Add in ability to call supabase.auth.resend()

Feature request

Is your feature request related to a problem? Please describe.
It seems that resend has not been set up in the SDK: https://supabase.com/docs/reference/javascript/auth-resend

Describe the solution you'd like
I would like to be able to call this to have the user resend the OTP email.

Describe alternatives you've considered
I will have to use an edge function instead, which isn't ideal.

+1 Would also love to see this implemented

Hi, I haven't had much time recently to work on Supabase due to some other projects. Feel free to send a PR over and I'll gladly review it.

Would love to pick this up as a first issue. I can try and find some time in the next couple of weeks.

@scottybobandy - any update on this?
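Until the Swift SDK gains a native resend, the edge-function workaround mentioned above could look roughly like the following. This is a hypothetical sketch (function shape, payload, and env handling are illustrative) that proxies through supabase-js, which already exposes auth.resend:

```ts
import { createClient } from "npm:@supabase/supabase-js@2";

Deno.serve(async (req) => {
  const { email } = await req.json();
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_ANON_KEY")!,
  );
  // Re-sends the signup confirmation / OTP email for this address.
  const { error } = await supabase.auth.resend({ type: "signup", email });
  return new Response(JSON.stringify({ ok: !error }), {
    status: error ? 400 : 200,
    headers: { "Content-Type": "application/json" },
  });
});
```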
gharchive/issue
2023-09-06T18:05:10
2025-04-01T04:35:59.107421
{ "authors": [ "bwship", "grsouza", "scottybobandy" ], "repo": "supabase-community/supabase-swift", "url": "https://github.com/supabase-community/supabase-swift/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1678271728
Typescript gen: invalid SQL function types

Bug report
[x] I confirm this is a bug with Supabase, not with my own application.
[x] I confirm I have searched the Docs, GitHub Discussions, and Discord.

Describe the bug
For both input and output types, TypeScript generation of SQL functions is invalid in some cases:
Primitive types are generated as non-nullable (type instead of type | null) even though the function may accept NULL as input or output.
Domain types are generated as unknown.

I think this bug is not caused by a PostgreSQL limitation, as views produce the correct typing: primitives are nullable, domains generate as their underlying type, and when domains are marked NOT NULL they are even non-nullable in TypeScript.

To Reproduce

CREATE DOMAIN strict_text AS text NOT NULL;

CREATE FUNCTION some_function(arg strict_text)
RETURNS table (nulltext text, stricttext strict_text)
LANGUAGE SQL AS $$
  SELECT NULL::text, arg
$$;

Generated type with supabase gen types typescript --local:

export interface Database {
  public: {
    Functions: {
      some_function: {
        Args: { arg: unknown }
        Returns: {
          nulltext: string
          stricttext: unknown
        }[]
      }
    }
  }
}

Expected behavior
Generated types should be:

export interface Database {
  public: {
    Functions: {
      some_function: {
        Args: { arg: string }
        Returns: {
          nulltext: string | null
          stricttext: string
        }[]
      }
    }
  }
}

System information
OS: [e.g. Linux NixOS]
Version of supabase CLI: 1.50.8
Version of Node.js: 18.15.0

Additional context
I :heart: the work being done and the mindset at Supabase. I believe strict typing is crucial to the success of a "backend in the DB", and I could help with a PR if I'm given some pointers to the code responsible for the TS generation :wave:

Thanks for the detailed bug report. The typescript generation code is in the postgres-meta repo if you are keen to take a look.

That would be a great addition to the client library tho :3 I would love to see it live in the near future!
gharchive/issue
2023-04-21T10:15:52
2025-04-01T04:35:59.129260
{ "authors": [ "mwoss", "ngasull", "sweatybridge" ], "repo": "supabase/cli", "url": "https://github.com/supabase/cli/issues/1030", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2660460451
feat: add minimum password length and password requirements config

What kind of change does this PR introduce?
This feature can be found in the Supabase hosted dashboard but doesn't have a way to be configured in config.toml.

What is the current behavior?
There is no way to set the minimum password length and password requirements.

What is the new behavior?
The minimum password length and password requirements can now be set in the config.toml file.

Additional context
Added a screen recording to show this working. Turn down your audio as I might be a bit loud in the recording.
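A config.toml sketch of what this enables; the key names below are my reading of the CLI's [auth] section as extended by this PR, so verify them against the released config reference:

```toml
[auth]
# Reject passwords shorter than this.
minimum_password_length = 12
# Character classes a password must contain; one of the CLI's
# predefined requirement strings (illustrative value).
password_requirements = "lower_upper_letters_digits_symbols"
```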
gharchive/pull-request
2024-11-15T01:11:10
2025-04-01T04:35:59.132410
{ "authors": [ "silentworks" ], "repo": "supabase/cli", "url": "https://github.com/supabase/cli/pull/2885", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1864350764
Signing in via native Google login, and then via native Apple login throws an exception

Bug report
[x] I confirm this is a bug with Supabase, not with my own application.
[x] I confirm I have searched the Docs, GitHub Discussions, and Discord.

Describe the bug
After setting up native Google and Apple login for my remote Supabase instance, I sign in a user via native Google login. After that, I sign out and then sign in via native Apple login with the same email address. Signing in with native Apple login throws an "Unacceptable audience in id_token" error from GoTrue.

To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
Set up native Google and Apple login.
Sign in via native Google login.
Sign out.
Sign in via native Apple login with the same email address and observe the exception thrown from GoTrue.

Expected behavior
Performing Google login and Apple login on the same email address works fine.

System information
OS: iOS
Version of supabase-flutter: 1.10.14

Additional context
Related https://github.com/supabase/supabase-flutter/issues/5#issuecomment-1687945257

Is there any solution? I am facing it with signInWithApple() on iOS. I have also implemented native login, obtained the idToken, and passed it to signInWithIdToken(), but I still get the same error: AuthException(message: Unacceptable audience in id_token, statusCode: 400)

Thank you for replying @dshukertjr. I implemented this 2 months ago, and I checked and tested it last month and it was working fine. It suddenly stopped working in the last week.

Thank you for the answer. It solves the issue and it's working fine now. ✅😀

Hey everyone. We did some digging on this; it appears that we had temporarily introduced a bug with the handling of these configs. If you had set up Apple or Google sign-in then, the configs were likely saved incorrectly. This appears to have affected a very low number of projects, and we've now fixed all of the affected projects' settings. Sorry about that! I'll close this issue for now.

Hello, I have been implementing Sign in with Apple since 2024/02/18, but I got the error "Unacceptable audience in id_token". I tried to turn off Sign in with Apple and turn it on again as mentioned in the comments, but the error remained the same. In my case, the address used to log in is an address that I have never registered, so I think the situation is different from what is outlined in the issue, but I am facing the same error. As for the login implementation, it is exactly the same as the new Apple login implementation at https://supabase.com/docs/reference/dart/upgrade-guide. We are proceeding on the understanding that we have the necessary setup for Apple login.

Flutter Doctor
[✓] Flutter (Channel stable, 3.16.5, on macOS 14.0 23A344 darwin-arm64 (Rosetta), locale ja-JP)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] VS Code (version 1.86.0)
[✓] Connected device (4 available)
[✓] Network resources

hello bro, I face the same issue, did you find a solution?
gharchive/issue
2023-08-24T03:46:51
2025-04-01T04:35:59.144196
{ "authors": [ "Dr-Usman", "dshukertjr", "hf", "hixcoder", "iseruuuuu" ], "repo": "supabase/gotrue", "url": "https://github.com/supabase/gotrue/issues/1233", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1325961244
feat(saml): Add basic SAML IdP initiated flow What kind of change does this PR introduce? Bug fix, feature, docs update, ... What is the current behavior? Please link any relevant issues here. What is the new behavior? Feel free to include screenshots if it includes visual changes. Additional context Add any other context or screenshots. Ah wrong branch!
gharchive/pull-request
2022-08-02T14:40:30
2025-04-01T04:35:59.146192
{ "authors": [ "hf" ], "repo": "supabase/gotrue", "url": "https://github.com/supabase/gotrue/pull/581", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2016119312
Function not exposed

Describe the bug
It seems a function returning a table from another schema is not exposed.

To Reproduce
Steps to reproduce the behavior:

begin;

create schema if not exists a1;
grant usage on schema a1 to public;
create table a1.foo(id int);
grant select on table a1.foo to public;
create or replace function a1.the_foo() returns a1.foo stable
  return (select f from a1.foo f where f.id = 1);
grant execute on function a1.the_foo() to public;

create schema if not exists a2;
grant usage on schema a2 to public;
create or replace function a2.get_the_foo() returns a1.foo stable as $$
  select * from a1.the_foo()
$$ language sql;
grant execute on function a2.get_the_foo() to public;

set local search_path to 'a1';
select graphql.resolve($${__type(name: "Query") {fields(includeDeprecated: false) {name args {name type {kind name ofType {kind name}}}}}}$$)->'data'->'__type'->'fields'->1;
-- {"args": [], "name": "the_foo"}

set local search_path to 'a2';
select graphql.resolve($${__type(name: "Query") {fields(includeDeprecated: false) {name args {name type {kind name ofType {kind name}}}}}}$$)->'data'->'__type'->'fields'->1;
-- null

rollback;

Expected behavior
I would expect fully-typed return values to be exposed based on permissions and regardless of search_path.

Versions:
PostgreSQL: 16.1
pg_graphql commit ref: ee8ef69

To narrow down the problem: I believe we should collect the array of entities used in exposed functions (arg_types || type_oid), filter them by schema usage permission, and expose them as types, and maybe as connections (if some function returns a setof of the entity). We should NOT expose collections/mutations for them, effectively obeying the search_path instruction. That way we won't need to proxy each and every piece of the hidden schemas into the exposed schema.

Yes, you're exactly right with the solution.

The solution would be to add a CTE under this to collect any tables or views referenced by functions on the search_path where the referenced table is not on the search_path, and updating the join here to include them. It might not actually require an update to the Rust source, but TBD.
gharchive/issue
2023-11-29T09:00:52
2025-04-01T04:35:59.151586
{ "authors": [ "dvv", "olirice" ], "repo": "supabase/pg_graphql", "url": "https://github.com/supabase/pg_graphql/issues/455", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2575235977
chore: remove examples

We already maintain examples in https://github.com/supabase/supabase/tree/master/examples and these keep triggering dependabot alerts.

Pull Request Test Coverage Report for Build 11251423380

Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 66.038%

Totals
Change from base Build 11125560527: 0.0%
Covered Lines: 99
Relevant Lines: 129

💛 - Coveralls
gharchive/pull-request
2024-10-09T08:44:59
2025-04-01T04:35:59.156907
{ "authors": [ "coveralls", "soedirgo" ], "repo": "supabase/supabase-js", "url": "https://github.com/supabase/supabase-js/pull/1277", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
538236271
GraphQL Final Closes #29 Closes #45 Closes #46 Closes #59
gharchive/pull-request
2019-12-16T07:33:46
2025-04-01T04:35:59.161799
{ "authors": [ "suparngp" ], "repo": "suparngp/kotlin-multiplatform-projects", "url": "https://github.com/suparngp/kotlin-multiplatform-projects/pull/58", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
55951114
While parsing files each line is unnecessarily copied

During tokenization the tokenizer copies each line to make parsing easier. This causes problems with long lines and/or embedded systems with low memory. A fix will reduce the maximum memory usage of the raw data by 50%.

Looks good, just made some formatting updates. I used the merge method you mentioned using hub - I love it! Nice tidy commit history :) I ran the benchmarking tests before/after and there was no difference, speed-wise at least. I'd be interested in running it through a profiler (e.g. VisualVM) to validate that it actually does use less memory, but agree that it should, as we're halving the number of Strings created.

Ok well, yeah that is nice, but I'm not sure whether, if some other person had done the PR, they would then be added as a contributor on GitHub. Think I had to do an explicit merge in one of my other projects. Another annoyance is that I have to clean up some branching locally now :) I have a bugfix branch sticking out... and another bugfix branch I never pushed that I don't know how to get rid of :) Happy git'in... I'd better use the command prompt rather than the GUI... the GUI really suxx for Git compared to Hg ;)

Regarding memory, I've just come to the conclusion that when using a buffered file reader, we really don't know how many lines are cached, i.e. how much memory is being consumed... this may be worse than the string copying for embedded systems. Anyhow, I would not think you could register 50000 string copies... but maybe we need another data set. A sparse one with many ints and decimals... and many more rows... maybe 200,000? It will really guide us when we do the encoding stuff.

No, the 'apply mail' method keeps the commit history (I was able to merge your pull request into my local master, update the formatting, and merge my commit into yours, then push back to GitHub with you still recorded as making the change). The same should go for anyone else who forks and creates a PR (they should still get credit).

Yeah, I had to git reset --hard upstream/master to get my fork in sync again. I have 2 remotes (origin, which points to my fork, and upstream, which points to super-csv/super-csv). Seems to work well.

I'm not too fussed about optimisation, so happy to let you take the lead on that. What are your goals? I'd think that most devs using the library only really care about how easy the API is to use, and how fast it is. Can you explain why a file with ints/decimals is better? (Isn't each line read in as a String anyway, so there's no difference??) I have the full CSV file used by the current benchmark (400k lines) if you're interested.

We will see a huge speed improvement when writing since we should not attempt to encode int/decimal/...; this is now taking a major part of the time when writing. Thus with the file I've submitted, we should see significant improvements with you removing the call to escape().

Actually if you look at the commit, the executed code is exactly the same :) The escape() method was removed because the exact same functionality is now available via a preference - but the code from that method is still used in writeRow().

Well, then we should make this a bit more intelligent. Before, the code was too generalized and thus everything was escaped. I guess we need to do so in the case that all values must be enclosed in "", but for 'normal' writing of data there is no need to attempt escaping certain types. I thought that was part of your change. But I can open an issue on this ample optimization opportunity.
gharchive/issue
2015-01-29T20:19:21
2025-04-01T04:35:59.171599
{ "authors": [ "jamesbassett", "kbilsted" ], "repo": "super-csv/super-csv", "url": "https://github.com/super-csv/super-csv/issues/20", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2498090004
Setting VALIDATE_ALL_CODEBASE: false does not work

Is there an existing issue for this?
[X] I have searched the existing issues

Are you using the latest Super-linter version available?
[X] I am using the latest Super-linter version.
[X] I can reproduce the issue running Super-linter using a complete version identifier (example: vX.Y.Z), and not just with a partial one (example: vX)
[X] I am using the super-linter/super-linter action or container image, and not the deprecated github/super-linter action or container image.

Are you reasonably sure that it's a Super-linter issue, and not an issue related to a tool that Super-linter runs?
[X] I think that this is a Super-linter issue.

Current Behavior
Super-linter checks JSCPD, YAML, and Gitleaks, even though VALIDATE_ALL_CODEBASE is set to false. The only changes in the branch are creating the configuration and adding a badge to the readme. I did initially push the branch with the default VALIDATE_ALL_CODEBASE: true. I have also tried creating a completely new branch with VALIDATE_ALL_CODEBASE: false, but the failures are the same. It is not clear to me why it is looking at Docker files (Checkov) or JavaScript files (JSCPD).

Expected Behavior
Only the appropriate linting runs against the changed files for the branch; in the case of this branch, that is creating the super-linter configuration and badge.

Super-Linter version
v7.1.0

Super-linter configuration

---
name: Lint

on: # yamllint disable-line rule:truthy
  push: null
  pull_request: null

permissions: {}

jobs:
  build:
    name: Lint
    runs-on: ubuntu-latest

    permissions:
      contents: read
      packages: read
      # To report GitHub Actions status checks
      statuses: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          # super-linter needs the full git history to get the
          # list of files that changed across commits
          fetch-depth: 0

      - name: Super-linter
        uses: super-linter/super-linter@v7.1.0 # x-release-please-version
        env:
          # To report GitHub Actions status checks
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          VALIDATE_ALL_CODEBASE: false
          DEFAULT_BRANCH: origin/prod
          # We have a conflict with doctrine/instantiator and are not interested in static code analysis for now
          VALIDATE_PHP_PSALM: false
          VALIDATE_PHP_PHPSTAN: false

Relevant log output

Run super-linter/super-linter@v7.1.0
env:
GITHUB_TOKEN: ***
VALIDATE_ALL_CODEBASE: false
DEFAULT_BRANCH: origin/prod
VALIDATE_PHP_PSALM: false
VALIDATE_PHP_PHPSTAN: false
/usr/bin/docker run --name ghcriosuperlintersuperlinterv710_0f7281 --label b64555 --workdir /github/workspace --rm -e "GITHUB_TOKEN" -e "VALIDATE_ALL_CODEBASE" -e "DEFAULT_BRANCH" -e "VALIDATE_PHP_PSALM" -e "VALIDATE_PHP_PHPSTAN" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e "ACTIONS_RESULTS_URL" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/backoffice/backoffice":"/github/workspace" ghcr.io/super-linter/super-linter:v7.1.0
Super-Linter initialization
2024-08-30 19:01:06 [INFO] Command output when running linters:
------ CHECKOV GITHUB_ACTIONS GITLEAKS JSCPD YAML YAML_PRETTIER ------
2024-08-30 19:01:06 [INFO] ----------------------------------------------
2024-08-30 19:01:06 [INFO] ----------------------------------------------
Error: -30 19:01:06 [ERROR] Errors found in CHECKOV
Notice: 30 19:01:06 [NOTICE] Successfully linted GITHUB_ACTIONS
Notice: 30 19:01:06 [NOTICE] Successfully linted GITLEAKS
Error: -30 19:01:07 [ERROR] Errors found in JSCPD
Notice: 30 19:01:07 [NOTICE] Successfully linted YAML
Error: -30 19:01:07 [ERROR] Errors found in YAML_PRETTIER
Error: -30 19:01:08 [ERROR] Super-linter detected linting errors
super-linter-output/super-linter-summary.md 44ms

Steps To Reproduce
Copy the example yaml file.
Push a new branch; the lint run fails against the whole codebase.
Disable the PHP and ALL_CODEBASE validations and push the change.
Failures outside of PHP and the modified files still occur.

Anything else?
No response

Sorry for the confusion, PHP checks are ignored correctly. That's a fair point for Jscpd and Checkov.

No worries! Thanks for taking the time to report a potential problem!
gharchive/issue
2024-08-30T20:02:47
2025-04-01T04:35:59.180144
{ "authors": [ "ferrarimarco", "jbanahan-bvm" ], "repo": "super-linter/super-linter", "url": "https://github.com/super-linter/super-linter/issues/6093", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2435914713
Add concept of "roles"

When fly.io is acting as a third party, it's hard for us to communicate to the first party whether the user is an admin. Internally, we do this with NoAdminFeatures, but third parties won't have the concept of organization features in their access structs.

This PR moves NoAdminFeatures into the top-level package and renames it NotAdmin. Instead of enforcing the caveat based on org features, the access must implement a method specifying what roles are required for the access. The logic of admin-only org features then lives in the access instead of in the caveat. This should all be backwards compatible and also allows for further role-based caveats.

The simpler alternative I considered was to just have the access implement a RequiresAdmin() bool method for the sole purpose of interacting with the NotAdmin caveat. This would work fine too, but is a bit less flexible and would require more effort if we ever want to add additional roles in the future. This raises the question, though, of whether adding RBAC concepts to macaroons is dumb. Thoughts?

I overthought this some more and switched from string-based roles to a bitmask where "admin" is the combination of all other roles.
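For illustration, the bitmask shape described in the last comment looks roughly like this; the role names are hypothetical, not the ones defined in this package:

```go
package main

import "fmt"

// Role is a bitmask: each role occupies one bit.
type Role uint32

const (
	RoleRead Role = 1 << iota
	RoleWrite
	RoleDeploy
	// Admin is the combination of all other roles, so an admin
	// token satisfies any role requirement.
	RoleAdmin = RoleRead | RoleWrite | RoleDeploy
)

// HasRole reports whether granted covers every bit in required.
func HasRole(granted, required Role) bool {
	return granted&required == required
}

func main() {
	fmt.Println(HasRole(RoleAdmin, RoleWrite)) // true
	fmt.Println(HasRole(RoleRead, RoleWrite))  // false
}
```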
gharchive/pull-request
2024-07-29T17:01:26
2025-04-01T04:35:59.207315
{ "authors": [ "btoews" ], "repo": "superfly/macaroon", "url": "https://github.com/superfly/macaroon/pull/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
296510832
Added scrollbar to overflow content in cloud accounts pages.

Fixes issue #421. A simple CSS style was added to the app body to allow viewing the extra content of cloud account pages, so that users can scroll down to fill out the rest of the form instead of zooming out, as referenced by @manigandham.

LGTM

Coverage increased (+0.02%) to 55.363% when pulling 690e273ff7e9b9c43d2c65b332d32bf8a130e7ab on TheKLARKEN:master into cbcf643521e87f8cbf48065c8d083ce1e1295d55 on supergiant:master.
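The PR doesn't quote the rule itself, so purely as an illustrative sketch of the kind of one-line fix described (the selector is hypothetical):

```css
/* Let the app body scroll when a cloud-account form overflows the viewport. */
.app-body {
  overflow-y: auto;
}
```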
gharchive/pull-request
2018-02-12T20:19:40
2025-04-01T04:35:59.213622
{ "authors": [ "TheKLARKEN", "coveralls", "gopherstein" ], "repo": "supergiant/supergiant", "url": "https://github.com/supergiant/supergiant/pull/422", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1528365847
🛑 Node - ca-179a516e.yalaso.top is down

In ee1cdb7, Node - ca-179a516e.yalaso.top (http://ca-179a516e.yalaso.top/api/v1/ping) was down:
HTTP code: 0
Response time: 0 ms

Resolved: Node - ca-179a516e.yalaso.top is back up in 2f2c219.
gharchive/issue
2023-01-11T03:31:29
2025-04-01T04:35:59.218812
{ "authors": [ "RealYalaSo" ], "repo": "superrr-vpn/status", "url": "https://github.com/superrr-vpn/status/issues/400", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1855745637
🛑 Node - tr-2d50afca.yalaso.top is down

In 6c6986c, Node - tr-2d50afca.yalaso.top (http://tr-2d50afca.yalaso.top/api/v1/ping) was down:
HTTP code: 0
Response time: 0 ms

Resolved: Node - tr-2d50afca.yalaso.top is back up in 2b5d85c.
gharchive/issue
2023-08-17T21:41:01
2025-04-01T04:35:59.221274
{ "authors": [ "RealYalaSo" ], "repo": "superrr-vpn/status", "url": "https://github.com/superrr-vpn/status/issues/691", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1453376468
Cli Command for npx create-supertokens-app@latest doesn't work.

npx create-supertokens-app@latest
Need to install the following packages:
create-supertokens-app@latest
Ok to proceed? (y) y

(SuperTokens ASCII-art banner)

create-supertokens-app (v0.0.20) lets you quickly get started with using SuperTokens! Choose your tech stack and the authentication method, and we will create a working project that uses SuperTokens for you.

? What is your app called? my-app
? Choose a frontend framework (Visit our documentation for integration with other frameworks): React
? Choose a backend framework (Visit our documentation for integration with other frameworks): Nest.js
? What type of authentication do you want to use? Social Login + Email Password

✅ Finished setting up folder structure!
⛔ Setup failed!

Error: npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! Found: react@18.2.0
npm ERR! node_modules/react
npm ERR! react@"^18.2.0" from the root project
npm ERR! peer react@"^18.0.0" from @testing-library/react@13.4.0
npm ERR! node_modules/@testing-library/react
npm ERR! @testing-library/react@"^13.4.0" from the root project
npm ERR! 7 more (react-dom, react-router-dom, react-scripts, ...)
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^16.8.0 || ^17.0.0" from react-select@5.2.1
npm ERR! node_modules/supertokens-auth-react/node_modules/react-select
npm ERR! react-select@"5.2.1" from supertokens-auth-react@0.27.1
npm ERR! node_modules/supertokens-auth-react
npm ERR! supertokens-auth-react@"latest" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /Users/RohitB/.npm/eresolve-report.txt for a full report.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/RohitB/.npm/_logs/2022-11-17T13_38_42_005Z-debug.log

Error: If you think this is an issue with the tool, please report this as an issue at https://github.com/supertokens/create-supertokens-app/issues

Hi @irohitb, thanks for reporting the issue. What version of node are you using?

@nkshah2 16.3.0

Also, if you are investigating, this might be helpful: clone https://github.com/supertokens/supertokens-auth-react or remove *.lock files and node_modules, then do yarn install or npm install. You should get the same error, probably with react-select first and then react-shadow. For react-shadow this might be helpful: https://github.com/Wildhoney/ReactShadow/pull/140

Are you running the cli from inside another project folder? Or is it in a standalone project folder?

Right, so I've tested with node 17 and 18 and it works fine. Can you try updating the Node version and trying again?

Okay, it works with v17.9.0 (npx create-supertokens-app@latest).

Alright, closing this issue in that case.
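Since the resolution was simply moving off Node 16, a typical upgrade path looks like this (assuming nvm; any Node version manager works):

```sh
# Install and switch to a newer Node, then re-run the scaffolder.
nvm install 18
nvm use 18
npx create-supertokens-app@latest
```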
gharchive/issue
2022-11-17T13:44:24
2025-04-01T04:35:59.228211
{ "authors": [ "irohitb", "nkshah2" ], "repo": "supertokens/create-supertokens-app", "url": "https://github.com/supertokens/create-supertokens-app/issues/42", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1968895354
🛑 Advanced Satellite is down

In 490feeb, Advanced Satellite (https://www.advancedsat.com) was down:
HTTP code: 0
Response time: 0 ms

Resolved: Advanced Satellite is back up in dbd7123 after 7 minutes.
gharchive/issue
2023-10-30T17:36:17
2025-04-01T04:35:59.240782
{ "authors": [ "jflores1" ], "repo": "superwebpros/uptime", "url": "https://github.com/superwebpros/uptime/issues/502", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2599238120
feat: Material Part 3 Chapter 1 - CSS Flexbox

Lesson 1: Introduction to Flexbox
Definition and purpose of Flexbox in modern layouts
Core concepts: flex container and flex items

Lesson 2: Flexbox Properties
The display: flex, flex-direction, and justify-content properties
The align-items, align-self, and flex-wrap properties

Lesson 3: Flexbox in Practice
Building a column layout with Flexbox
Building a responsive layout with Flexbox

Excuse me, could you please assign this issue to me?

@DimasZeava It's been assigned, happy contributing 🎉

Alright mas Huda, thank you.
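A compact example covering the Lesson 2 properties (class names illustrative):

```css
/* A row of cards that wraps on narrow screens and stays centered. */
.card-row {
  display: flex;
  flex-direction: row;
  flex-wrap: wrap;
  justify-content: center;
  align-items: center;
}

/* An individual item can override the cross-axis alignment. */
.card-row .featured {
  align-self: flex-start;
}
```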
gharchive/issue
2024-10-19T14:27:28
2025-04-01T04:35:59.263658
{ "authors": [ "DimasZeava", "iniakunhuda", "sendhyrama" ], "repo": "surabayadev/tutorial-css-lengkap", "url": "https://github.com/surabayadev/tutorial-css-lengkap/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
920904616
feat(compile): added exclude flag Addressing issue https://github.com/sure-thing/presta/issues/85 Will need to incorporate this soon, but this branch is out of date. Thanks again for noticing this issue, @cjenaro!
gharchive/pull-request
2021-06-15T01:39:20
2025-04-01T04:35:59.267930
{ "authors": [ "cjenaro", "estrattonbailey" ], "repo": "sure-thing/presta", "url": "https://github.com/sure-thing/presta/pull/86", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
285078581
Question about license

Hi, nice looking template! Can your documentation template be used for a themeforest/codecanyon item? Under what license? Should you and/or the Template-World user be credited? Thank you!

Hey, crediting is not required, but always appreciated (CC0)

~Surjith
gharchive/issue
2017-12-29T11:31:52
2025-04-01T04:35:59.295070
{ "authors": [ "ramonamorea", "surjithctly" ], "repo": "surjithctly/documentation-html-template", "url": "https://github.com/surjithctly/documentation-html-template/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2533667087
Queue RPC calls

This PR aims to fix #87, which occurs when multiple queries are executed in parallel; that is impossible because of the &mut self receiver on the execute function. The solution is to queue up RPC calls from JavaScript, ensuring execute is only ever called one at a time.

Closes #87

+1
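A generic sketch of the queueing pattern described (not the crate's actual binding code, and db.execute below is a hypothetical name): chain every call onto a single shared promise so the underlying execute is never entered concurrently.

```ts
// One shared tail; each task starts only after the previous one settles.
let tail: Promise<unknown> = Promise.resolve();

function enqueue<T>(task: () => Promise<T>): Promise<T> {
  // Run the task whether the previous call resolved or rejected.
  const run = tail.then(task, task);
  // Swallow errors on the tail so one failure doesn't poison the queue.
  tail = run.catch(() => {});
  return run;
}

// Usage: both calls resolve in order, never overlapping.
// enqueue(() => db.execute(query1));
// enqueue(() => db.execute(query2));
```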
gharchive/pull-request
2024-09-18T12:54:07
2025-04-01T04:35:59.300416
{ "authors": [ "limcheekin", "macjuul" ], "repo": "surrealdb/surrealdb.wasm", "url": "https://github.com/surrealdb/surrealdb.wasm/pull/107", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2117976633
Release metal3 0.6.0

This release includes the following changes:
- The Ironic images are rebased on the 2023.2 dependencies
- The Ironic images are now aligned with the upstream version
- The BMO CRDs are aligned with the upstream
- The VMedia TLS is now configurable
- The predictable NIC naming convention is now configurable

I will do a deployment on Sylva later today and update on whether it works there as well.

@Kristian-ZH since this release changes the CRDs, we need to decide how to handle that. These changes should be backwards compatible, so I think our options are either to document manually applying the crds subdir prior to helm upgrade, or to consider creating a new -crds chart. @ipetrov117 are you aware of any specific handling of this scenario in Sylva for other charts?

I was thinking to document that, and to leave the decision about extracting CRDs into a dedicated chart for later, because this should be aligned across all the charts, not only Metal3.

@hardys, I am not aware of any specific convention being followed. AFAIK Sylva uses whatever the main chart provides in terms of deployment logic. For instance, if the chart foo uses 2 charts for its deployment logic (foo-crds and foo), then 2 units are created in Sylva - foo-crds and foo, where foo depends on foo-crds. An example can be found here.

My 2 cents on the metal3 chart upgrade discussion: there doesn't seem to be a concrete approach when it comes to upgrading CRDs. Looking at the Helm docs regarding this, we have 2 possible approaches:
1. Have a crds directory in the chart and leave Helm to deploy the CRDs for us. This is the current approach that we and Sylva seem to be taking. While easier, this approach does not handle CRD upgrades well (at least from what I understand).
2. Have foo-crd and foo charts, where foo-crd is always deployed before foo.

IMO documenting the manual CRD sync steps is an okay workaround for the moment, but for the long term we should think about option (2). By having the CRDs in a chart outside of our main chart, we can always ensure that they have been upgraded to the desired state before running the main chart upgrade.

@Kristian-ZH I tested in the metal3-demo environment and I think we have an issue with UEFI; we see the BMH fail to start inspection like:

Normal InspectionError 54m metal3-baremetal-controller Failed to inspect hardware. Reason: unable to start inspection: Failed to download image http://192.168.125.10:6180/uefi_esp.img, reason: HTTPConnectionPool(host='192.168.125.10', port=6180): Max retries exceeded with url: /uefi_esp.img (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5602727c50>: Failed to establish a new connection: [Errno 113] EHOSTUNREACH'))

Looking at the Ironic config I think we can see the problem:

shardy@localhost:~> kubectl exec -n metal3-system metal3-metal3-ironic-677bc5c8cc-8cqdg -c ironic -- cat /etc/ironic/ironic.conf | grep uefi_esp
bootloader = http://192.168.125.10:6180/uefi_esp.img
shardy@localhost:~> kubectl get service -n metal3-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
baremetal-operator-controller-manager-metrics-service ClusterIP 10.43.95.163 <none> 8443/TCP 86m
baremetal-operator-webhook-service ClusterIP 10.43.210.62 <none> 443/TCP 86m
metal3-mariadb ClusterIP 10.43.136.41 <none> 3306/TCP 86m
metal3-metal3-ironic LoadBalancer 10.43.57.230 192.168.125.10 6185:31785/TCP,5050:30890/TCP,6385:30479/TCP 86m

An additional note: in the previous Ironic image we had

bootloader = {{ env.IRONIC_BOOT_BASE_URL }}/uefi_esp.img

but I think due to the upstream rebase that got lost, and it is now e.g.

bootloader = http://{{ env.IRONIC_URL_HOST }}:{{ env.HTTP_PORT }}/uefi_esp.img

I re-tested after https://build.opensuse.org/request/show/1144913 merged and all works OK now in the metal3-demo environment.
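If the documented-workaround route is taken, the manual sync before an upgrade would look roughly like this (release name, chart reference, and paths are illustrative):

```sh
# Apply the new CRDs first; kubectl apply handles the in-place update.
kubectl apply -f charts/metal3/crds/
# Then upgrade the chart itself.
helm upgrade metal3 suse-edge/metal3 --namespace metal3-system
```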
gharchive/pull-request
2024-02-05T08:44:14
2025-04-01T04:35:59.343565
{ "authors": [ "Kristian-ZH", "hardys", "ipetrov117" ], "repo": "suse-edge/charts", "url": "https://github.com/suse-edge/charts/pull/86", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
226431162
Current Stock page improvements

Build Number: 1.1.1

Improvements:
- Toggle for hiding stockouts, much like the one when managing a stock take.
- Column that shows the months of stock available: Item.quantity/(item.dailyUsage*30days).
- For the number of batches in the expansion, hide those with no stock (discuss if this is appropriate, I think it is).

Extra, though it could use its own issue: a way via 'Current Stock' to access a ledger view for an individual item. Would need designs done.

@Chris-Petty point three is appropriate, but also, it's happening!
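The formula in the second item as a tiny sketch; the function name is hypothetical, mirroring the issue's Item fields:

```ts
// Months of stock on hand: current quantity divided by one month's usage.
function monthsOfStock(quantity: number, dailyUsage: number): number {
  const monthlyUsage = dailyUsage * 30;
  return monthlyUsage > 0 ? quantity / monthlyUsage : Infinity;
}

monthsOfStock(600, 10); // 2 months of stock
```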
gharchive/issue
2017-05-04T22:55:52
2025-04-01T04:35:59.346000
{ "authors": [ "Chris-Petty", "edmofro" ], "repo": "sussol/mobile", "url": "https://github.com/sussol/mobile/issues/429", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1467958476
ebpf: Update CPU freq calculation to improve performance

This PR introduces a 2.4X performance improvement for Kepler's CPU profiling.

We have been discussing the performance regression of Kepler in #365, with further analysis in #391. The performance analysis showed that the garbage collector was the source of the performance degradation. After some evaluation, we identified that the functions calculating and collecting the CPU frequency were some of the most expensive calls.

To improve the performance, this PR first removes the userspace function that calculated the CPU frequency per container and implements the calculation in BPF instead. Second, the node CPU frequency is also collected by BPF; when BPF is not available, Kepler can still collect it by reading kernel files, although that is a very expensive operation.

Before the update, the pprof CPU profiling was showing a total of 4.89s of CPU utilization. After the update, it was showing a total of 1.96s. In summary:
Before the update, Kepler was using ~12% CPU; this dropped to ~1.5%.
The pprof CPU profile was showing ~2.62s for runtime.cgocall; after the update it dropped to 1.21s.

Example of the CPU frequency in the log, for the processes' CPU frequency:

I1129 10:11:43.393591 1 container_hc_collector.go:109] system_processes, comm:kworker/44:0, freq:1000000, inst:35614
I1129 10:11:43.393684 1 container_hc_collector.go:109] system_processes, comm:migration/48, freq:2300000, inst:88109
I1129 10:11:43.393734 1 container_hc_collector.go:109] system_processes, comm:migration/4, freq:1000000, inst:218511
I1129 10:11:43.393770 1 container_hc_collector.go:109] system_processes, comm:kworker/88:0, freq:1000000, inst:49713
I1129 10:11:43.393805 1 container_hc_collector.go:109] system_processes, comm:kworker/73:0, freq:1000000, inst:25600
I1129 10:11:43.393841 1 container_hc_collector.go:109] system_processes, comm:kworker/59:2, freq:1000000, inst:55641
I1129 10:11:43.393924 1 container_hc_collector.go:109] system_processes, comm:containerd-shim, freq:0, inst:2639840
I1129 10:11:43.393964 1 container_hc_collector.go:109] system_processes, comm:kworker/112:0, freq:1000000, inst:20193
I1129 10:11:43.394006 1 container_hc_collector.go:109] system_processes, comm:ksoftirqd/0, freq:1100000, inst:1203258
I1129 10:11:43.397728 1 container_hc_collector.go:109] 69ef7e11ba2f2a898bd30e248e8b722b03c6c21668e85bd152380afb4975bce3, comm:snmpd, freq:1000000, inst:6819013
I1129 10:11:43.397848 1 container_hc_collector.go:109] system_processes, comm:kworker/129:0, freq:1000000, inst:186264
I1129 10:11:43.397892 1 container_hc_collector.go:109] system_processes, comm:kworker/5:0, freq:0, inst:259759
I1129 10:11:43.397914 1 container_hc_collector.go:109] system_processes, comm:kworker/46:1, freq:1000000, inst:374557
I1129 10:11:43.397935 1 container_hc_collector.go:109] system_processes, comm:ksoftirqd/19, freq:1500000, inst:130141

And for the node CPU frequency:

I1129 11:03:53.467639 1 node_energy_collector.go:83] c.NodeCPUFrequency map[0:1400000 1:3900000 2:1000000 3:1400000 4:1000000 5:1000000 6:1000000 7:1000000 8:1000000 9:1000000 10:1000000 11:1000000 12:1000000 13:1000000 14:1000000 15:1000000 16:1000000 17:1000000 18:1000000 19:1000000 20:3900000 21:1900000 22:1000000 23:1000000 24:1000000 25:2900000 26:1000000 27:1000000 28:1000000 29:1000000 30:3900000 31:1000000 32:3900000 33:1000000 34:1000000 35:1000000 36:1000000 37:1000000 38:1000000 39:1000000 40:2900000 41:1000000 42:1000000 43:1000000 44:1900000 45:1300000 46:2200000 47:2600000 48:1300000 49:1100000 50:1600000 51:2600000 52:1000000 53:1100000 54:1100000 55:1000000 56:1800000 57:1000000 58:1000000 59:3500000 60:1000000 61:1000000 62:1000000 63:1000000 64:1000000 65:1000000 66:1000000 67:1000000 68:1000000 69:1000000 70:1000000 71:1000000 72:1000000 73:1000000 74:1000000 75:1000000 76:1000000 77:1000000 78:1000000 79:1900000 80:1000000 81:1000000 82:1000000 83:1400000 84:1000000 85:1000000 86:1000000 87:1000000 88:1000000 89:1000000 90:1000000 91:1000000 92:1000000 93:1000000 94:1000000 95:1000000 96:1000000 97:1000000 98:3900000 99:1000000 100:1000000 101:1100000 102:1000000 103:1000000 104:1000000 105:1000000 106:1000000 107:1000000 108:1000000 109:1000000 110:3900000 111:1000000 112:1000000 113:1000000 114:1000000 115:1000000 116:1000000 117:1000000 118:1000000 119:1000000 120:2400000 121:1000000 122:3200000 123:1000000 124:1000000 125:1100000 126:1000000 127:1000000 128:1300000 129:1000000 130:3500000 131:1100000 132:3900000 133:3700000 134:3200000 135:1000000 136:2100000 137:1000000 138:1000000 139:1000000 140:1000000 141:1000000 142:2100000 143:1000000 144:1000000 145:2200000 146:1000000 147:1000000 148:1000000 149:1000000 150:1000000 151:1000000 152:1000000 153:1000000 154:1000000 155:1000000 156:1000000 157:1000000 158:1000000 159:1000000]

The tracepoint doesn't work on my setup. To narrow down the scenarios, @marceloamaral can you try the following and see if you can capture any cpu frequency data?

# perf list 'power:cpu_frequency'
List of pre-defined events (to be used in -e):
power:cpu_frequency [Tracepoint event]
# perf record -e power:cpu_frequency -a sleep 10
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 1.943 MB perf.data ]
# perf script

@marceloamaral Since we have per-core CPU cycle readings from the PMU, we can convert the cycles into frequency.

@rootfs I also do not see data using perf:

# perf list 'power:cpu_frequency'
List of pre-defined events (to be used in -e):
Metric Groups:
# perf record -e power:cpu_frequency -a sleep 10
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.538 MB perf.data ]
# perf script

I think I cannot see it because this tracing is disabled for perf:

# cat /sys/kernel/debug/tracing/events/power/cpu_frequency/enable
0

But the BPF code can get the CPU frequency. Can you try to run the code of this PR?

Yes, if we can get the reference cycles: freq = cycles/ref_cycles. Some CPUs have ref_cycles = 100MHz, but it might change and I did not find how to discover it. The limitation of using cycles is that an environment that does not have a PMU will not collect the frequency. The tracepoint is a more generic solution.
@marceloamaral the frequency is always 0 with the patch:
[root@rhel kepler]# patch -p1 < 427.diff
patching file bpfassets/perf_event/perf_event.c
patching file pkg/bpfassets/attacher/bcc_attacher.go
patching file pkg/bpfassets/attacher/bcc_attacher_stub.go
patching file pkg/bpfassets/perf_event_bindata.go
patching file pkg/collector/container_hc_collector.go
patching file pkg/collector/metric/container_metric.go
patching file pkg/collector/metric/utils.go
patching file pkg/collector/metric/utils_test.go
patching file pkg/collector/metric_collector.go
patching file pkg/collector/node_energy_collector.go
patching file pkg/collector/prometheus_collector.go
patching file pkg/power/acpi/acpi.go
[root@rhel kepler]# make _build_local
gofmt -e -d -s -l -w pkg/ cmd/
[root@rhel kepler]# echo 1 > /sys/kernel/debug/tracing/events/power/cpu_frequency/enable
[root@rhel kepler]# _output/bin/_/kepler -v 5 2>&1 |grep avg
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
avgFreq: 0.00
@rootfs I found the problem that causes the CPU frequency to be 0. According to the BCC book:
> When the system is set to the performance governor, this tool shows nothing as there are no more frequency changes to instrument: the CPUs are pinned at the highest frequency.
But note that when the governor policy is set to performance, it does not mean that the CPU frequency does not change. It is just not being changed via the governor policy; something else changes it.
# for i in $(seq 0 5) ; do cat /sys/devices/system/cpu/cpufreq/policy0/scaling_cur_freq; sleep 1; done
1577682
3396041
1030914
1207322
1013511
I will continue investigating how to implement this via BPF.
I figured out how to calculate the CPU frequency using hardware counters. As I mentioned before, we need the reference cycles and the base frequency of the CPU, and I found out where to get this information.
There are some differences in the CPU frequency measured by the three approaches that we have:
cpu - tracepoint - cycle/ref_cycle - acpi-sysfs
0 - 1000 - 1005 - 1003
1 - 1000 - 999 - 1001
2 - 1000 - 1000 - 1000
3 - 1000 - 1000 - 1000
4 - 1000 - 1003 - 1000
5 - 1000 - 1000 - 1050
6 - 1000 - 1000 - 1005
7 - 1000 - 1000 - 1020
8 - 1050 - 2231 - 1000
9 - 1150 - 999 - 1020
cpu - tracepoint - cycle/ref_cycle - acpi-sysfs
0 - 1000 - 1000 - 1000
1 - 1000 - 1001 - 1000
2 - 1000 - 1000 - 1000
3 - 1000 - 1009 - 1000
4 - 1000 - 1000 - 1026
5 - 2450 - 1873 - 1000
6 - 1000 - 1002 - 1000
7 - 1000 - 1001 - 1000
8 - 1000 - 1002 - 999
9 - 1037 - 998 - 1011
cpu - tracepoint - cycle/ref_cycle - acpi-sysfs
0 - 1000 - 999 - 1001
1 - 1000 - 1030 - 1170
2 - 1000 - 997 - 1000
3 - 1000 - 1004 - 1000
4 - 1000 - 1009 - 1000
5 - 1000 - 1005 - 1000
6 - 1000 - 1000 - 1000
7 - 1000 - 1002 - 1000
8 - 1003 - 1003 - 1000
9 - 1000 - 1000 - 1000
cpu - tracepoint - cycle/ref_cycle - acpi-sysfs
0 - 1000 - 1001 - 1000
1 - 1000 - 1015 - 1000
2 - 1000 - 991 - 1000
3 - 1612 - 2299 - 1000
4 - 1000 - 1000 - 1000
5 - 1000 - 1203 - 1000
6 - 1000 - 1005 - 1000
7 - 1000 - 1006 - 1034
8 - 1000 - 1000 - 1000
9 - 1000 - 1032 - 1112
The tracepoint and acpi-sysfs values come from caches in the kernel, while the cycles/ref-cycles values are calculated using hardware counters.
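For reference, the relation behind the cycles/ref-cycles approach, in LaTeX notation (a sketch; the base/reference frequency has to be discovered separately, e.g. from CPUID or sysfs, and is assumed known here):

f_{\text{avg}} = f_{\text{base}} \cdot \frac{\Delta\,\text{cycles}}{\Delta\,\text{ref\_cycles}}

Intuitively, ref_cycles ticks at a fixed reference rate regardless of frequency scaling, while cycles ticks at the actual clock, so their ratio scales the known base frequency to the observed one.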
Note that tracepoint and acpi-sysfs use caches and may miss some hardware updates (e.g. BIOS energy policy). Additionally, the tracepoint and acpi-sysfs have different caches and are updated from different kernel calls. For example, tracepoint updates come from the CPU frequency governor policy and are never updated if the policy is performance. In this case, I would say that cycles/ref-cycles reports the most reliable results.
I will update this PR to use cycles/ref-cycles, but using more counters implies that the kernel will multiplex the counters, which requires applying normalization to the counter values; see here.
@rootfs It finally worked. The main challenge was adding support for hardware counters with multiplexing; the bpf_perf_event_read_value function only works when using kprobes, and we were using tracepoints.
Why do we need to normalize the counters: PMUs have a limited number of hardware counters (depending on the processor version). If we ask for more events than there are counters, the kernel starts time multiplexing and extrapolating the results from the sampling window to the entire lifetime of the process. Therefore, normalizing here means de-extrapolating the results to expose the actual values.
Also note that the documentation suggests always using bpf_perf_event_read_value:
> bpf_perf_event_read_value() is recommended over bpf_perf_event_read() in general. The latter has some ABI quirks where the error and the counter value are used as a return code (which is wrong to do since ranges may overlap). This problem is fixed with bpf_perf_event_read_value(), which at the same time provides more features over the bpf_perf_event_read() interface.
Tested on my setup; the magnitude of the frequency stats looks a bit off:
kepler_node_cpu_scaling_frequency_hertz{cpu="1",instance="rhel"} 3.970833e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="10",instance="rhel"} 3.916413e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="11",instance="rhel"} 3.615736e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="2",instance="rhel"} 3.967232e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="3",instance="rhel"} 3.980801e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="4",instance="rhel"} 3.976298e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="5",instance="rhel"} 3.765624e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="6",instance="rhel"} 3.246395e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="7",instance="rhel"} 3.978243e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="8",instance="rhel"} 3.953491e+06
kepler_node_cpu_scaling_frequency_hertz{cpu="9",instance="rhel"} 3.953808e+06
100 80130 0 80130 0 0 25.4M 0 --:--:-- --:--:-- --:--:-- 25.4M
[root@rhel kepler]# cpupower frequency-info
analyzing CPU 0:
driver: intel_pstate
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: Cannot determine or is not supported.
hardware limits: 800 MHz - 4.10 GHz
available cpufreq governors: performance powersave
current policy: frequency should be within 800 MHz and 4.10 GHz. The governor "performance" may decide which speed to use within this range.
current CPU frequency: Unable to call hardware
current CPU frequency: 3.81 GHz (asserted by call to kernel)
boost state support:
Supported: yes
Active: yes
The frequency is in kHz; the metrics label needs an update. @marceloamaral would you follow up with another PR? Thanks.
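The normalization discussed above is the standard scaling for time-multiplexed perf counters. In LaTeX notation (a sketch, assuming the raw, unscaled counter is read together with the enabled/running times, which is what bpf_perf_event_read_value() reports):

\text{estimated\_count} = \text{raw\_count} \cdot \frac{t_{\text{enabled}}}{t_{\text{running}}}

When an event is scheduled on a hardware counter only part of the time, this extrapolates the observed count to the full enabled window; comparing t_enabled with t_running also tells you how much multiplexing occurred.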
gharchive/pull-request
2022-11-29T12:13:38
2025-04-01T04:35:59.362413
{ "authors": [ "marceloamaral", "rootfs" ], "repo": "sustainable-computing-io/kepler", "url": "https://github.com/sustainable-computing-io/kepler/pull/427", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
48755785
Allocation API isn't as flexible as Lua's
You might just want to close this WONTFIX since duktape's allocation strategy is baked into its API at this point. I just wanted to at least raise it as a point of discussion.
In Lua, malloc/free/realloc are all routed via a single callback, namely:
typedef void * (*lua_Alloc) (void *ud, void *ptr, size_t osize, size_t nsize);
The important feature is that free/realloc requests always get passed the old size of the object. This made it very easy to track the total amount of space that a Lua instance had allocated. This is especially important for the small-memory environments that duktape is targeting.
It's certainly possible to get by without this information:
- Some allocators have APIs to find the size of an allocation given its pointer (for instance jemalloc has malloc_usable_size()), but this is non-portable.
- The allocator can allocate a bit of extra space before each returned memory region and stash the size there. In practice this means using 8 extra bytes for each allocation to preserve alignment guarantees.
Another smaller benefit of passing the size along to deallocators is that it may be faster if the allocator can use that hint. That's uncommon today, but Google has been talking about that optimization, for instance in this C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3536.html
Anyway, to change it now would require an API break since exposed interfaces like duk_free() don't take a size parameter. So this would only be worth fixing if you're planning any other API breaks. I just thought I'd bring it up since it was one of the only places where I had trouble while converting some existing Lua code to duktape.
Since there's been no activity I'll close this.
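To make concrete why the old size matters, here is the accounting that the single-callback design enables, sketched in TypeScript purely for illustration (the real callback is the C typedef above; the names here are invented and nothing in this sketch is Duktape or Lua API):

// Total bytes the VM currently holds, derivable from the callback arguments alone
let totalBytes = 0;

// Called for every allocation event the VM performs.
// malloc: osize === 0; free: nsize === 0; realloc: both non-zero.
function onAllocEvent(osize: number, nsize: number): void {
	totalBytes += nsize - osize;
}

Without osize, a free(ptr) event carries no size information, so the host would need its own ptr-to-size map (or allocator-specific queries such as malloc_usable_size) to maintain the same total.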
gharchive/issue
2014-11-14T08:55:26
2025-04-01T04:35:59.371645
{ "authors": [ "mitchblank", "svaarala" ], "repo": "svaarala/duktape", "url": "https://github.com/svaarala/duktape/issues/76", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1052379568
Add info Port 3000 is occupied for npm run dev Describe the problem If you run the default npm run dev without specifying a specific port (-p), you will get the following error, if the port is already in use: > listen EACCES: permission denied 127.0.0.1:3000 Error: listen EACCES: permission denied 127.0.0.1:3000 at Server.setupListenHandle [as _listen2] (node:net:1317:21) at listenInCluster (node:net:1382:12) at doListen (node:net:1520:7) at processTicksAndRejections (node:internal/process/task_queues:84:21) If you run npm run preview and the port is already in use, you will get a more beginner friendly error message: Port 3000 is occupied Terminate the process occupying the port or specify a different port with --port Describe the proposed solution Unify the error message. If a port is already in use, provide the same error message. Alternatives considered Importance nice to have Additional Information No response What OS are you running? Windows 10 FWIW, I can't reproduce this. I tried this on Windows 10 earlier today (with both Node 14 and Node 16) and running npm run dev simultaneously in two different projects, and the second one displayed the more friendly error, and not the EACCES mentioned in the issue. I just tried to reproduce this right after I started my PC. on the first run of npm run dev it started without problems (port was not occupied). Then after it was running, I started the command again (with the same project and another project) and got the user-friendly error message and not EACESS. Then I stopped it and started an express web server on port 3000 and after it, I started npm run dev. The Vite dev server started without problems, but it can't be accessed. I got 404 from express. I tried the same with a Go (gin) web server and it was the same. The expected behavior would be to see the user-friendly error message in this case. Besides that, I could not reproduce the EACESS error. My Svelte, Node, NPM etc. versions were the same as from this test https://github.com/sveltejs/kit/pull/2792#issuecomment-968368935 The only difference was, that I left the Windows Insider Program (Release Preview Channel) a few days ago. But I can't imagine that this was the cause. Thanks for the thorough investigation @Zerotask! Since the bug is in Vite, I'll close this in favour of https://github.com/vitejs/vite/issues/5801, which seems to be having similar issues. Your comment there would help too, and it seems like a workaround is also found there.
gharchive/issue
2021-11-12T20:18:37
2025-04-01T04:35:59.390326
{ "authors": [ "Conduitry", "Zerotask", "benmccann", "bluwy" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/issues/2789", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1275641484
create-svelte: Add descriptions to select options
closes #5217
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
[x] It's really useful if your PR references an issue where it is discussed ahead of time. In many cases, features are absent for a reason. For large changes, please create an RFC: https://github.com/sveltejs/rfcs
[x] This message body should clearly illustrate what problems it solves.
[x] Ideally, include a test that fails without this PR but passes with it.
Tests
[x] Run the tests with pnpm test and lint the project with pnpm lint and pnpm check
Changesets
[x] If your PR makes a change that should be noted in one or more packages' changelogs, generate a changeset by running pnpm changeset and following the prompts. All changesets should be patch until SvelteKit 1.0
Here's how the descriptions look
"Use TypeScript?" by itself would likely be interpreted as the TypeScript language itself.
Add TypeScript's type checking?
> Yes, using JavaScript with JSDoc comments
Yes, using TypeScript syntax
No
I tweaked @gtm-nayan's suggestion in https://github.com/sveltejs/kit/pull/5221#issuecomment-1162495411 very slightly and committed it, so that the diff is once again legible. I think we're in a good place so I'll approve the PR, but will leave it open in case we want to bikeshed further.
gharchive/pull-request
2022-06-18T02:34:42
2025-04-01T04:35:59.395598
{ "authors": [ "Rich-Harris", "gtm-nayan" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/pull/5221", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1396913081
Fix route sorting
This is a breaking change, though I expect its impact to be very minimal. #7051 made me realise that our route sorting algorithm is flawed: because it works left-to-right, given these two routes...
/[a]/b/c
/a/[b]/[c]
...a pathname like /a/b/c would match /a/[b]/[c] first, even though /[a]/b/c is more specific (or rather, feels more specific, since we don't have a clearly articulated theory of specificity). The optional params sorting introduces more confusion: https://github.com/sveltejs/kit/pull/7051#discussion_r984913063
This PR changes things by computing a 'score' based on the average 'value' of each segment/part. It results in what I think is a more helpful and intuitive sort order: https://github.com/sveltejs/kit/blob/cc4422c7ab3f52b4112e59cbd47788f40ad9c5be/packages/kit/src/core/sync/create_manifest_data/index.spec.js#L223-L245
The one bit I'm not sure about is [...rest]/[required] coming after [required], but a solution that changes that order without changing other things is eluding me.
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
[x] It's really useful if your PR references an issue where it is discussed ahead of time. In many cases, features are absent for a reason. For large changes, please create an RFC: https://github.com/sveltejs/rfcs
[x] This message body should clearly illustrate what problems it solves.
[x] Ideally, include a test that fails without this PR but passes with it.
Tests
[x] Run the tests with pnpm test and lint the project with pnpm lint and pnpm check
Changesets
[x] If your PR makes a change that should be noted in one or more packages' changelogs, generate a changeset by running pnpm changeset and following the prompts. All changesets should be patch until SvelteKit 1.0
I would actually expect /a/[b]/[c] to be matched before /[a]/b/c, because a is more specific than [a] and I'd expect matching to happen left-to-right. That also seems easier to explain than a scoring algorithm.
Do you have a proposal for a sorting algorithm that works that way but also ranks /[[optional]]/foo above /[required]? I went back and forth on /a/[b]/[c] vs /[a]/b/c, etc., and ultimately decided 🤷🏻‍♂️.
My vote is it might not matter, but the ClearlyArticulatedTheoryOfSpecificity™️ could be published somewhere (scoring or sorting) to help people not trip up.
> The one bit I'm not sure about is [...rest]/[required] coming after [required], but a solution that changes that order without changing other things is eluding me.
Actually I take that part back. If you do have both of those routes, then [required] must take precedence, otherwise it couldn't be matched.
Still trying to figure out what sort of algorithm satisfies all our intuitions about the correct order.
A rundown of (maybe conflicting) thoughts we have:
- we want [required] to be matched after [[optional]]/foo so that foo matches the optional route first
- similarly, we may want [required] to be matched after [...a]/foo so that foo matches the rest route first
- we want [required] to be matched before [[optional]] and both before [...rest] because of specificity
- we want [...a] to be matched after [...b]/foo so that foo matches the more specific rest route first.
If we did it the other way around, [...b]/foo could never be matched
- we want the sorting to be intuitive and easy to explain
- we want a sorting that requires as little folder depth consideration as possible, so it's easier for people to look at their directory tree (ideally only those at the same level) and say "oh yeah this matches first"
The first two seem like the most controversial (personally it feels wrong to me); the others seem sensible. If the first two were discarded, i.e. [required] always precedes [[optional]] which always precedes [...rest], then matching could stay greedy: compare the current segment; if it's different, order them hardcoded < [required] < [[optional]] < [...rest]. If they are the same, look at the next one (this also solves the [...a] vs [...b]/foo case).
What's left to figure out under this assumption is the specificity within a segment. To me it seems best to also sort them greedily (a sketch of this comparison follows at the end of this thread):
1. a-b-c
2. a-b-[[optional]]
3. a-[required]-b
4. [required]-b-c
5. [required]-[[optional]]-c
6. [[optional]]-b-c
This may feel slightly weird for 5 vs 6, the same way it may seem weird to people preferring the first two requirements in the section above, but it's easy to explain and understand, and consistent with the segment ordering.
To conclude, to me personally the current system (which AFAIK works greedily) is good, and the new optional params should be sorted between required and rest.
Closing in favour of #7278
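For concreteness, a minimal sketch of the greedy per-segment comparison proposed above, in TypeScript. This is purely illustrative: the Segment type, the names, and the prefix tie-break are invented here and this is not SvelteKit's actual implementation.

// One possible segment representation for the comparison
type Segment =
	| { kind: 'static'; value: string }
	| { kind: 'required' }
	| { kind: 'optional' }
	| { kind: 'rest' };

// hardcoded < [required] < [[optional]] < [...rest]
const rank = { static: 0, required: 1, optional: 2, rest: 3 } as const;

function compareRoutes(a: Segment[], b: Segment[]): number {
	for (let i = 0; i < Math.max(a.length, b.length); i++) {
		const sa = a[i];
		const sb = b[i];
		if (!sa) return -1; // one possible tie-break: a prefix route sorts first
		if (!sb) return 1;
		if (sa.kind !== sb.kind) return rank[sa.kind] - rank[sb.kind];
		// identical kinds: only static segments can still differ
		if (sa.kind === 'static' && sb.kind === 'static' && sa.value !== sb.value) {
			return sa.value < sb.value ? -1 : 1;
		}
	}
	return 0;
}

Because the comparison walks segment by segment and only falls through on ties, it reproduces the six-route ordering in the list above without ever needing a global score.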
gharchive/pull-request
2022-10-04T22:25:46
2025-04-01T04:35:59.410960
{ "authors": [ "Rich-Harris", "arxpoetica", "benmccann", "dummdidumm" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/pull/7149", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1605763100
Dedent
Per https://github.com/sveltejs/kit/pull/9270#issuecomment-1450855548, this adds a dedent utility for the places where we need to generate code from template literals. It allows our code to be much more readable, and makes the generated code more reliably readable as well.
Need to use it for the virtual modules declared inline, and figure out whether to use it in render.js (I think the answer is probably 'yes', but want to make sure we can do so without introducing any performance overhead)
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
[x] It's really useful if your PR references an issue where it is discussed ahead of time. In many cases, features are absent for a reason. For large changes, please create an RFC: https://github.com/sveltejs/rfcs
[ ] This message body should clearly illustrate what problems it solves.
[ ] Ideally, include a test that fails without this PR but passes with it.
Tests
[ ] Run the tests with pnpm test and lint the project with pnpm lint and pnpm check
Changesets
[ ] If your PR makes a change that should be noted in one or more packages' changelogs, generate a changeset by running pnpm changeset and following the prompts. Changesets that add features should be minor and those that fix bugs should be patch. Please prefix changeset messages with feat:, fix:, or chore:.
I think the render.js stuff should probably happen as part of a separate refactor, if at all
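For readers unfamiliar with the pattern, a minimal sketch of a dedent tagged-template helper in TypeScript. This is illustrative only, not the implementation this PR adds, and it assumes interpolated values do not themselves span multiple lines (a multi-line value would get dedented too).

function dedent(strings: TemplateStringsArray, ...values: unknown[]): string {
	// interleave the static chunks with the interpolated values
	let raw = '';
	strings.forEach((chunk, i) => {
		raw += chunk;
		if (i < values.length) raw += String(values[i]);
	});
	const lines = raw.split('\n');
	// find the smallest indentation across non-blank lines
	const indents = lines
		.filter((line) => line.trim() !== '')
		.map((line) => line.match(/^[ \t]*/)![0].length);
	const strip = indents.length > 0 ? Math.min(...indents) : 0;
	return lines.map((line) => line.slice(strip)).join('\n').trim();
}

Used like:

const code = dedent`
	export const answer = ${40 + 2};
`;
// -> "export const answer = 42;" with the template's indentation removed

The payoff is that code-generating template literals can be indented naturally inside the function that builds them, while the emitted string stays flush-left.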
gharchive/pull-request
2023-03-01T22:31:22
2025-04-01T04:35:59.415993
{ "authors": [ "Rich-Harris" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/pull/9273", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
855138066
Update CSS for default template counter button
Before submitting the PR, please make sure you do the following
[ ] It's really useful if your PR references an issue where it is discussed ahead of time. In many cases, features are absent for a reason. For large changes, please create an RFC: https://github.com/sveltejs/rfcs
[ ] This message body should clearly illustrate what problems it solves.
[ ] Ideally, include a test that fails without this PR but passes with it.
Tests
[ ] Run the tests with pnpm test and lint the project with pnpm lint
Changesets
[ ] If your PR makes a change that should be noted in one or more packages' changelogs, generate a changeset by running pnpx changeset and following the prompts
I've made two changes:
1. You should remove the hard-coded width and height from the example button. When you set an exact width and height on elements and users increase their font size in their browser, their text might become bigger than the width and height set for the element. When this happens, text can be cut off because it goes outside, for example, the button's 200 by 60 pixel area. Removing the hard-coded width is not as important in this example, because with an undefined height the text can at least move downwards. But since the hard-set width isn't necessary for this example, and this might be code that people re-use a lot, I believe it's best not to set an example of setting width and height.
2. I optimized the button:hover CSS since it's only changing the border width. This way the border color doesn't have to be managed twice.
Thanks! Could you change the padding to 1em 4em so the button roughly keeps its current aspect ratio? Also, could you add a changeset? Thanks.
We should probably hold off on merging this though, as it would be obsoleted by https://github.com/sveltejs/kit/pull/934 and would just create a merge conflict
@dummdidumm Padding changed and changeset added!
Thank you! Though I have to close this, as it is superseded by #1014
gharchive/pull-request
2021-04-10T19:20:44
2025-04-01T04:35:59.422089
{ "authors": [ "Rich-Harris", "bamadesigner", "benmccann", "dummdidumm" ], "repo": "sveltejs/kit", "url": "https://github.com/sveltejs/kit/pull/960", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1866907691
Apparent memory leak on Firefox on Linux
I just closed a learn.svelte.dev tab in Firefox and it reduced my RAM usage by ~6GB. I noticed the line https://staticblitz.com (<Local Process ID>, cross-origin isolated) in the Firefox process manager, and that it consumed 6GB of RAM. The line was associated with the learn.svelte.dev tab.
I have faced the same issue: https://staticblitz.com consumed around 3GB of RAM. Firefox profiler:
@Rich-Harris, would you take a look at this issue? https://staticblitz.com/ can easily reach 8GB of memory usage; this is not acceptable from any standpoint.
I would have noticed it on other websites if it was a Firefox-specific issue.
Confirmed! I opened the site and never touched it afterwards. Memory consumption started at ~500MB; two hours later it was already at 1GB.
PS: I was woken up today by a roaring notebook; the site was consuming 8GB of RAM and 100% CPU. I'll re-check my observations in the morning.
gharchive/issue
2023-08-25T11:53:22
2025-04-01T04:35:59.424975
{ "authors": [ "arnfaldur", "baznikin", "mortezadadgar" ], "repo": "sveltejs/learn.svelte.dev", "url": "https://github.com/sveltejs/learn.svelte.dev/issues/491", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1631200043
chore: add image for <svelte:document> example
Add thumbnail of example added in https://github.com/sveltejs/svelte/pull/8387
The examples have been moved to a new location: https://github.com/sveltejs/svelte/tree/master/documentation/examples
gharchive/pull-request
2023-03-20T00:58:30
2025-04-01T04:35:59.429531
{ "authors": [ "benmccann", "oekazuma" ], "repo": "sveltejs/sites", "url": "https://github.com/sveltejs/sites/pull/453", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1967801655
many console.logs
Open DevTools; this happens when something changes (state). Thanks for your work!
Gah, that was a development leftover that must've slipped through when preparing the bundle locally for the first submission. I'll try to automate the publishing steps using GH actions, now that the store link is available. Thanks for the report!
The new build v2.0.1 has been published to the store
gharchive/issue
2023-10-30T08:36:10
2025-04-01T04:35:59.432005
{ "authors": [ "cannap", "ignatiusmb" ], "repo": "sveltejs/svelte-devtools", "url": "https://github.com/sveltejs/svelte-devtools/issues/170", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1749525436
Incorrect code sample for use-move
The use-move code sample is a duplicate of use-click-outside
Thank you for reporting this!
gharchive/issue
2023-06-09T09:33:16
2025-04-01T04:35:59.451731
{ "authors": [ "BeeMargarida", "aviadmini" ], "repo": "svelteuidev/svelteui", "url": "https://github.com/svelteuidev/svelteui/issues/392", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
151773267
Set header bkgd col to clemson-orange to avoid hard coding the color
I tried to change the background color all in one go by changing clemson-orange in main.scss, but it didn't work properly. This fixes that.
Thanks for the contribution!
You're welcome.
gharchive/pull-request
2016-04-29T00:40:24
2025-04-01T04:35:59.493655
{ "authors": [ "svmiller", "tdmcarthur" ], "repo": "svmiller/steve-ngvb-jekyll-template", "url": "https://github.com/svmiller/steve-ngvb-jekyll-template/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1115870808
How to change bright red colour of links without target
In my PKM, I make frequent use of links that don't yet have a target page. These show up as a bright red in the markdown preview in the default dark VSCode mode and are pretty hard on the eye:
Is there a way to actually configure these to be a different colour? I played around with adding custom CSS via VSCode's workspace settings, but CSS doesn't have a method for targeting "broken" links, so I can only change the colour of the links that already have a target.
Hi, I tried this approach with the following CSS, and the colour override works. A similar question was asked here.
.memo-invalid-link {
  color: #a19700 !important;
  cursor: not-allowed;
}
I hope this helps.
Fantastic, that does the trick!
gharchive/issue
2022-01-27T07:34:33
2025-04-01T04:35:59.496344
{ "authors": [ "erikjalevik", "svsool" ], "repo": "svsool/vscode-memo", "url": "https://github.com/svsool/vscode-memo/issues/519", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
791049469
Unable to fetch complete data
Hi, First of all, thank you for such a tool. I tried to fetch the public scope for h1 using the following command - bbscope h1 -b --noToken -c url
Fetched the result but somehow it is missing the details for Mailru program (https://hackerone.com/mailru). Is it because of the different formatting of the scope? Thank you.
Hi there, thanks for raising this issue. You're right, and yes it's because of the different formatting of the scope (honestly, I blame HackerOne for this :upside_down_face: ). A while ago I added the --descToo flag as an attempt to mitigate this but it looks like that flag isn't helpful here either... I need to think about a proper way to deal with these edge cases...exporting everything from a program as json (#2) then grepping might be a good workaround :thinking: If you have any suggestion, feel free to write below
Thanks for coming back, I believe notifying users about such programs is crucial as they are missing major targets unless anything is figured out. As of now, updating the Description should be done to make the users aware of such things.
Hi there, I took a closer look and figured out what's actually wrong. Using the --proxy flag to send all requests through Burp I saw this:
{"node":{"id":"Z2lkOi8vaGFja2Vyb25lL1N0cnVjdHVyZWRTY29wZS80MDQxOA==","asset_type":"OTHER","asset_identifier":"Ext. A Scope","rendered_instruction":"\u003cp\u003eProductivity, e-commerce, B2B projects at \u003ccode\u003e*.mail.ru\u003c/code\u003e, \u003ccode\u003e*.my.com\u003c/code\u003e and some dedicated project domains, including \u003ccode\u003ecorp.mail.ru\u003c/code\u003e, \u003ccode\u003erb.mail.ru\u003c/code\u003e, \u003ccode\u003etop.mail.ru\u003c/code\u003e, \u003ccode\u003emoney.mail.ru\u003c/code\u003e, \u003ccode\u003etbank.mail.ru\u003c/code\u003e, \u003ccode\u003ecombo.mail.ru\u003c/code\u003e, \u003ccode\u003eapinotify.mail.ru\u003c/code\u003e, \u003ccode\u003eblog.mail.ru\u003c/code\u003e, \u003ccode\u003etarget.my.com\u003c/code\u003e, \u003ccode\u003etracker.my.com\u003c/code\u003e, \u003ccode\u003etarantool.io\u003c/code\u003e, \u003ccode\u003eyoula.ru\u003c/code\u003e, \u003ccode\u003epandao.ru\u003c/code\u003e, \u003ccode\u003eam.ru\u003c/code\u003e, \u003ccode\u003egibdd.mail.ru\u003c/code\u003e, \u003ccode\u003ehelp.mail.ru\u003c/code\u003e except delegated and externally hosted domains and branded partner services.\u003c/p\u003e\n\n\u003cp\u003e\u003cmark\u003eExtended scope only awards critical serverside vulnerabilities, if vulnerability compromises the infrastructure (e.g. RCE, SQLi, LFR, SSRF, etc) or data outside of project\u0026#39;s scope (e.g.
personal information) via serverside vector.\u003c/mark\u003e\u003c/p\u003e\n\n\u003cp\u003eClientside vulnerabilities (XSS, CSRF) and business logic specific bugs, including privilege escalations within the product are accepted without bounty. \u003cbr\u003e\n\u003cmark\u003eMitM and local attacks, user enumeration on registration/recovery, open redirections, insufficient session expiration, cookies working after logout etc are not accepted\u003c/mark\u003e unless there are additional vectors identified (e.g. ability to steal the session token via remote vector for open redirection).\u003c/p\u003e\n","max_severity":"critical","eligible_for_bounty":true},"cursor":"NQ"},
The --descToo flag is actually useful here, but as you can see these assets are marked as asset_type=OTHER, while you only selected the url category. To be sure to cover all cases, it's a good idea to select all categories.
I still blame h1 here, but you're right, adding a warning in the readme to make everyone understand these scenarios is a good idea :smiley:
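In practice, that advice translates to a run along these lines (hedged: flag spellings per current bbscope releases, and "all" as a category value should be confirmed against bbscope -h):
bbscope h1 -b --noToken -c all --descToo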
gharchive/issue
2021-01-21T12:54:50
2025-04-01T04:35:59.508153
{ "authors": [ "rudrapwn", "sw33tLie" ], "repo": "sw33tLie/bbscope", "url": "https://github.com/sw33tLie/bbscope/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
161788131
swagger-editor latest version breaks tests
swagger-editor v2.10.1 was released on May 11. It breaks the swagger-node tests. The tests still pass for v2.9.9. Here is the output of running npm test with v2.10.1:
@saharj @mohsen1 do you have any ideas on this?
What is swagger-engine?!
I think this is just for swagger-node, and for whatever reason it can't find swagger-editor. I've seen that some people are adding an empty index.js file in the swagger-editor dependency?
swagger-editor is a set of static files.
@mohsen1 My mistake, I've updated the issue to say swagger-editor
I don't remember most of this haha but I think this was the fix for the new version of editor: https://github.com/swagger-api/swagger-node/commit/103b833894965655765e157edd5ae3ca819f5ee9 Make sure you have that change as well
I tried updating config/index.js with the change in 103b833, but am still getting the same error.
gharchive/issue
2016-06-22T21:16:02
2025-04-01T04:35:59.569366
{ "authors": [ "fehguy", "mohsen1", "rydahhh" ], "repo": "swagger-api/swagger-node", "url": "https://github.com/swagger-api/swagger-node/issues/393", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
264418621
Path doesn't contain parameters after deserialization
Hello, I'm trying to read the defined path parameters from the Path object after deserialization, but I found out that the list is always null. In fact, path params are set to null at this line: https://github.com/swagger-api/swagger-parser/blob/e3d545b7e4cd20553023ad77a7298b0fc46a4a0e/modules/swagger-parser/src/main/java/io/swagger/parser/processors/PathsProcessor.java#L74
Can you clarify why this 'reset' is done? Thanks
Giulio
Hi, Can you provide the spec you are testing this with? Thanks 👍
@giuliopulina The list will always be null because the pathProcessor handles "shared" parameters: when the paths are being processed, the processor sets the parameters into each of the defined operations inside the path object. So without looking at the spec I say (maybe I'm mistaken) that if you need the parameters, you should look inside one of the operations of that path object. It should have them. Please send us the spec to look further.
@gracekarina I think that, when writing this issue, I didn't fully understand what Path.getParameters() was meant to return. I was trying to read the defined path parameters for a given path (in order to perform semantic validation): let's say we have a path like:
/users/{id}/addresses/{addressId}
I was expecting to get two PathParams ("id" and "addressId") from the Path object. After your explanation (and after reading about 'shared parameters' in the doc; I'm not using them in my spec), I understood that for my use case I can simply parse the path string and extract the params from there.
Thank you a lot, I think the issue can be closed.
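For reference, the string parsing the reporter settles on is a one-liner. A sketch in TypeScript purely for illustration (swagger-parser itself is Java, and this helper is invented here, not part of any API):

function pathParams(path: string): string[] {
	// Collect every {param} template token in the path, without the braces
	return [...path.matchAll(/\{([^{}\/]+)\}/g)].map((m) => m[1]);
}

pathParams('/users/{id}/addresses/{addressId}'); // ['id', 'addressId']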
gharchive/issue
2017-10-11T00:15:43
2025-04-01T04:35:59.573295
{ "authors": [ "giuliopulina", "gracekarina" ], "repo": "swagger-api/swagger-parser", "url": "https://github.com/swagger-api/swagger-parser/issues/545", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
496691371
Now runs with python3 (needs more testing) Short description of what this resolves: This should now actually run on python3, ALTHOUGH IT DOES REQUIRE TESTING AND UPDATES. Changes proposed in this pull request: Changed imports to work with Python3 Checked to make sure new imports have the same function. Ran on python3 jarvis.py after installing all modules with pip3 install -3 requirements.txt ** Need some help updating and testing further, as so far only imports have been updated to work. ** Didn't notice someone already started this here! The code is very similar in both, but the required.txt file is better suited there and will be implemented here. Credit to skelmdev for that.
gharchive/pull-request
2019-09-21T19:24:41
2025-04-01T04:35:59.624528
{ "authors": [ "PredatorFeesh" ], "repo": "swapagarwal/JARVIS-on-Messenger", "url": "https://github.com/swapagarwal/JARVIS-on-Messenger/pull/537", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
615457069
"Unable to add DRM framebuffer" with mpv mpv version: 0.32.0 wlroots version: 0.10.0.r124.gf72686c0-1 (up to f72686c) sway version: sway version 1.4-726d187d (May 10 2020, branch 'master') When mpv plays in fullscreen mode, lots of the following error messages are printed out. [ERROR] [backend/drm/util.c:210] Unable to add DRM framebuffer: Invalid argument mpv command: mpv --no-config --gpu-api=vulkan --gpu-context=waylandvk <something to play> sway-drm.log Not sure if it is wlroots' problem or mpv's though. Does everything still work fine even though error messages are printed? I didn't test many other apps. But it seems mpv and sway still work fine, despite the error message. So this is just Sway trying to use direct scan-out and failing (falling back to regular composition). This is harmless, but the error messages could be improved. https://github.com/swaywm/sway/pull/5010 is an attempt at reducing the noise. This happens with RetroArch and RPCS3 too, aren't they supposed to be able to use direct scan-out? Or maybe I'm confused as to what direct scan-out is :) Direct scan-out means that the display engine directly reads from the client's buffer, without the compositor needing to copy to an intermediate buffer. Whether direct scan-out is possible depends on the display engine capabilities and the buffer format/modifier used by the client. I think this didn't happen before though. Also if I switch to OpenGL in RetroArch it doesn't occur. This is still an issue, my sway.log grows to 2MB after a couple minutes session in RPCS3 or RetroArch. The error messages should have been downgraded to DEBUG now.
gharchive/issue
2020-05-10T19:15:48
2025-04-01T04:35:59.655777
{ "authors": [ "emersion", "escalade", "focus64" ], "repo": "swaywm/wlroots", "url": "https://github.com/swaywm/wlroots/issues/2181", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
298060284
Vulkan Support This would be nice for even better performance, and actually allowing real multi-gpu to happen. Currently, there are several things missing from Vulkan to truly make this a reality and be a first-class renderer for wlroots. It would probably be best for this to remain out-of-tree until they've been resolved. There are also things in wlroots that needs to change for a Vulkan (or any non-OpenGL) renderer to even be viable. These things would all be incredibly breaking changes, and may be more appropriate for a 2.0 release. Removal of hard-dependency on EGL Basically, Vulkan doesn't need EGL and manages its own window system integration. I think this would involve wrapping all of that functionality up in a wlr_wsi struct, and have loadable backends, just like everything else we do. This might also open up the possibility for other types of renderers to happen (e.g. software) and backends which don't have an EGL surface type (e.g. fbdev). I think this would be a very difficult interface to design, though. Removal of all other GL calls in wlroots It would really be nice to not need OpenGL at all, but there are a couple of places in wlroots where we need to ability to draw stuff. For things like software cursors, it might be better to do it with callbacks, and get the library user to do it for us. For the multi-gpu stuff in DRM, I think it can mostly be removed by the inclusion of Vulkan. If we're going to get true multi-gpu rendering, I'm going to remove the half-assed fake multi-gpu support. Current Issues Binding wl_display Right now, there is no equivalent to WL_bind_wayland_display. I guess an EGLDisplay would need to be kept around just for this, until it gets implemented. There is a github issue about it here: https://github.com/KhronosGroup/Vulkan-Docs/issues/294 No support for GBM surfaces Vulkan currently doesn't have a GBM surface type or any other surface type we can use for the DRM backend, unlike EGL. Oddly enough, there is VK_KHR_display, which does a whole bunch of the stuff DRM is doing, but it's not suitable for us. I think it's mainly for controlling VR headsets, not implementing full display servers. I think this may be addressed whenever this GBM replacement that Nvidia was supposedly developing is finished. We'll have to wait and see what happens. This might also open up the possibility for other types of renderers to happen (e.g. software) and backends which don't have an EGL surface type (e.g. fbdev). I think this would be a very difficult interface to design, though. As an aside, we already have a software backend (headless) and we could easily do an fbdev backend with the same approach. EGL lets you create surfaces that aren't backed by a display. Hey, i am still very interested in this idea and would be willing to help with it! @ascent12 Why would VK_KHR_display not work for this? Mesa does currently not implement it but there is work being done on this. From reading the spec i always felt like it would work for this use case (in a much simpler way than the gbm/drm backend). As a step towards WL_bind_display there are the new external memory extensions. Couldn't importing client side images (their memory) using the fd from wl_drm already work using those extensions (not per specification but with the current mesa implementation)? 
Other advantages of vulkan over using the current egl/gl system would be better debugging support (the vulkan layers are really great IMO), better performance (probably mainly on the CPU but it could really make a difference) and (if we could use VK_KHR_display), we could also support the proprietary nvidia driver (which supports the extension already IIRC) without implementing something exclusively for them. I am not sure though if this really is worth it for the amount of work and breaking changes it would require. Why would VK_KHR_display not work for this? We're definitely are keeping GLES2 support, because it has much more hardware support than Vulkan, so the GBM/EGL stuff is sticking around. Trying to graft a VKDisplayKHR on top of all of DRM would be a complete mess; it would only be viable as a completely new backend, which I wouldn't actually be opposed to someone doing. However, it just doesn't have all of the features we need. We can't do things like DPMS, gamma control, hotplugging(?), or many of the other nuanced things that DRM allows us to do. That's why I said I think it's for VR headsets; those kinds of features aren't important for those, but are expected from a proper display server. Couldn't importing client side images (their memory) using the fd from wl_drm already work using those extensions Yes, but not every wayland client is going to be using wl_drm. We need to support those using wl_egl. I've actually noticed some recent work in libwayland about wl-egl-backend. I'm not actually sure if this is what we need to fix this issue, or if it's just something for drivers. Other advantages of vulkan over using the current egl/gl system would be better debugging support I set up some of the EGL/GL debugging stuff in an old PR of mine, but it was never merged. But yes, the Vulkan debugging stuff appears to be a lot better. However, it just doesn't have all of the features we need. We can't do things like DPMS, gamma control, hotplugging(?), or many of the other nuanced things that DRM allows us to do. Good point, i didn't consider this. There won't probably ever be extensions to allow something like it either, it's out of vulkans scope. So what approach do you have in mind? Creating a new backend that uses a VkDisplayKHR to render and at the same time uses the drm api (independently from vulkan since there is just no connection between them) for those additional features? And to render buffers from hardware accelerated clients, first import them as EGLImage, then export them as fd and then import the fd as vulkan memory and get vulkan images to render? Or are there better ways? There are people in the wayland/weston team that also work on vulkan, maybe it's simply not possible (without massive hacks and workarounds) to implement a vulkan compositor backend yet. Our current wlroots design has separate backends which is used to set up the windowing system, and makes it easier to add new ones. Currently there is DRM, libinput (used in conjunction with DRM, but can be used with others), X11, Wayland, and a headless one. I think it would be possible for someone to write a new one of these backends purely based on VK_KHR_display, completely separate from DRM. However, it would have to be missing the extra DRM-only features, and wouldn't serve as a complete replacement for it. And to render buffers from hardware accelerated clients, first import them as EGLImage, then export them as fd and then import the fd as vulkan memory and get vulkan images to render? 
Yes, I think that would work, but it's quite a roundabout way to do it. I would prefer that EGL is not needed at all. If we can get the dmabuf fd straight from a wl_egl client, we could import it directly. I'm not sure if this can work, though. There are people in the wayland/weston team that also work on vulkan, maybe it's simply not possible (without massive hacks and workarounds) to implement a vulkan compositor backend yet. Yeah, that's what I think too. Which is why I think any of this work shouldn't hit master yet. https://cgit.kde.org/kwin.git/log/?h=fredrik/vulkan KWin branch in early stages of Vulkan work I've been looking around some of the Mesa source code seeing how WL_bind_wayland_display is implemented. I didn't realise that wl_drm is what Mesa it actually used. There isn't a tot of information about it around. According to https://lists.freedesktop.org/archives/wayland-devel/2017-November/035767.html, it sounds like wl_drm is being phased out, and we could just get away with the linux_dmabuf protocol (currently unstable in wayland-protocols). So assuming my understanding is correct, WL_bind_wayland_display may not end up being a problem? Hmm, that's a good find. That might work! We should definitely implement the dmabuf protocol either way, though. I've started to add linux_dmabuf protocol support to wlroots and am now at a point where I need to port over weston's import_simple_dmabuf. Hope to continue with this later this week (with a longer train ride coming up). The Mesa implementation of EGL_WL_bind_wayland_display advertises the wl_drm interface. The Mesa EGL client implementation currently uses wl_drm to discover which GPU the compositor is using to composite, but preferentially uses zwp_linux_dmabuf_v1 if available. Weaning Mesa off wl_drm completely would be great, and I'll happily talk anyone through it who wants to work on it. On the server side, KhronosGroup/Vulkan-Docs#294 has a reasonably complete sketch of what Vulkan compositor support looks like. You can use VK_KHR_display to drive KMS for you (also handling buffer allocation through swapchains); it is easier to implement, but you lose some control over display timing, and it also means you need to composite all clients, and can't directly display clients on planes. The other approach is to use GBM as a buffer allocator, wrap those buffers in VkImages, render to them, extract the dma-fence FD from the rendering's signal VkSemaphore, and pass that into KMS with IN_FENCE_FD, whilst continuing to drive KMS like you do now. IN_FENCE_FD does require atomic; not sure if you have that. There's also @chadversary's work on modifiers linked from the above Khronos issue, which helps with performance. Thanks for the information @fooishbar. Now that I knowzwp_linux_dmabuf_v1 is all we need , I'll probably drop EGL_WL_bind_wayland_display in favour of EGL_EXT_image_dma_buf_import(_modifiers) instead. I've looked at VK_KHR_display before, and as I mentioned in a previous comment, it's just missing too much compared to doing DRM/KMS ourselves. Not to mention that we not dropping EGL/GL support, so we're still keeping the GBM stuff around anyway. We do have atomic modesetting support, so I'll have to look into using dma-fences. Bear in mind that you do still need bind_wayland_display, until we make Mesa be able to live without wl_drm. Even when it isn't using it for actual buffer exchange, clients still rely on it being there to determine which GPU is in use. :\ Alright then. 
I'll have to keep an eye out on what Mesa is doing about that. It'll hopefully be split out for 18.1. But that still leaves VA-API as a holdout, though it has now got support for exporting dmabufs; you can use that with waylandsink in GStreamer which will go through the dmabuf interface. Anyway, if you enable dmabuf, you'll get a performance win on Intel through more optimal tiling modes and also compression. There are also things in wlroots that needs to change for a Vulkan (or any non-OpenGL) renderer to even be viable. With this issue being resolved, it is getting even more possible to implement Vulkan support in wlroots. Please also read this comment. Sorry for popping in and resurrecting this issue and thread without any introductions and sorry if this does not really mean anything about this issue or if this doesn't really contribute to anything related to this issue, but has anyone seen or heard of this Wayland Vulkan compositor by @st3r4g? Here is the link for it and its source code: https://github.com/st3r4g/swvkc Here's the directory for the backend code, by the way: https://github.com/st3r4g/swvkc/tree/master/src/backend Actually working on a library that is similar https://github.com/EasyIP2023/lucurious Not sure if wlroots could be used in this library. I think so, but I've already configured it so that anyone who wants to build a vulkan compositor can do so without typing all the Vulkan related source code vkwayland implements zwp_linux_dmabuf_v1 and IMHO that's the correct solution for WL_bind_wayland_display. dmabuf are device agnostic, but there is nothing preventing an implementation from optimizing device to device references/xfers, so there is no need for anything more restrictive like WL_bind_wayland_display. VK_KHR_display worked perfectly, IMHO. I know nothing about OGL or GBM so I could be missing something. This is still planned to be supported, but the big part currently is changing internal architecture so it can be supported cleanly (see renderer v6 and related work). This isn't something I've been pouring a lot of time into, and it's a massive set of changes, so progress is pretty slow. The plan is to drop all of the vkSurfaceKHR/eglSurface crap, and fling buffers around with GBM ourselves, which is what swvkc appears to be doing at a glance. vkwayland implements zwp_linux_dmabuf_v1 and IMHO that's the correct solution for WL_bind_wayland_display. dmabuf are device agnostic, but there is nothing preventing an implementation from optimizing device to device references/xfers, so there is no need for anything more restrictive like WL_bind_wayland_display. Sure, that was the plan here too. There is one thing mesa still needs wl_drm for, which is getting the GPU it's actually supposed to be rendering on. The yet-to-be-merged dmabuf-hints patch should solve that: https://patchwork.freedesktop.org/patch/263061/ VK_KHR_display worked perfectly, IMHO. I know nothing about OGL or GBM so I could be missing something. It's fine if your entire goal is just getting something to the screen, but it's a very restrictive and barebones interface beyond that. A fully featured display server really wants to call KMS themselves. There are a lot of KMS properties we handle now and want to handle in the future, and some wrapper interface isn't going to allow us to do that. 
It also has some overlay plane interface, but it's too awkward and too static to be useful, basically falling into the same trap that the equivalent EGL interfaces fall into, which are also pretty heavily tied to EGLStreams in practice. The yet-to-be-merged dmabuf-hints patch should solve that: https://patchwork.freedesktop.org/patch/263061/ A Vulkan Compositor would have trouble getting a FD from a VkDevice or most likely VkPhysicalDevice, FWIK. Though this should be an easy extension to add. A Vulkan Compositor would have trouble getting a FD from a VkDevice or most likely VkPhysicalDevice, FWIK. Though this should be an easy extension to add. FWIW, this can already be done using VK_EXT_pci_bus_info (implemented in the mesa drivers) and manually comparing that information to available drm devices. Pretty sure that's one of the reasons this extension exists, i doubt there would be huge support for a completely new "extract fd from VkPhysicalDevice" extension. I wouldn't do that, some GPUs aren't PCI devices (e.g. on ARM boards). hm good point, i guess in those cases such an extension (or the other way around: construct a VkDevice from a device fd, comparable to the way it's done with egl contexts?) would be needed to implement the proposed linux-dmabuf changes with a vulkan compositor without using guesswork. Ideally I'd want something which directly gives a VkPhysicalDevice from a gbm_device, but something which just gives us the path to the render node (as an extension to vkGetPhysicalDeviceProperties2) would be acceptable too. I am just guessing but I feel that the freedesktop path needs help(possibly rewriten). Even if a Wayland Compositor can get an FD, how does a Vulkan Implementation help a client(the code calling vkGetPhysicalDeviceSurfaceSupportKHR) discover the correct PhysicalDevice? Don't Implementations already have a solution to this?
gharchive/issue
2018-02-18T05:25:33
2025-04-01T04:35:59.687905
{ "authors": [ "EasyIP2023", "SirCmpwn", "agx", "ascent12", "berylline", "cheako", "emersion", "fooishbar", "nyorain", "yegorius" ], "repo": "swaywm/wlroots", "url": "https://github.com/swaywm/wlroots/issues/642", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
911086070
The conversion of React props value, unicode character is unnecessary Here is the swcrc config: { "jsc": { "parser": { "syntax": "typescript", "tsx": true } } } This is the source file: import React from 'react'; export const HelloWorld = () => { return (<div title="您好SWC"></div>) } Result: import React from 'react'; export var HelloWorld = function() { return(/*#__PURE__*/ React.createElement("div", { title: "\\u60a8\\u597dSWC" })); }; As we can see all the chinese character was converted to unicode. but that is not what we want. Other non-English strings have the same problem. #1792 @kdy1 It looks like it could have been caused by a #1732. @kdy1 hi this is a severe bug which seems to be caused by unicode in jsx props are being transformed twice in jsx transform and codegen. The prior is introduced in #1732 but why is unicode transform in codegen needed? Because some Unicode characters do not fit into the String of rust. I made some wrong design decisisions, and the it's a complement But wasn't data in node.value already a rust string? Oh I found a way to fix it. Currently, swc is depending on UB being not optimized by the compiler Just run in to this problem just to find out that it's already being fixed 😄 God's work @kdy1 . Please look at my PR that adds a test for it: https://github.com/swc-project/swc/pull/1817
gharchive/issue
2021-06-04T03:35:43
2025-04-01T04:35:59.697963
{ "authors": [ "Austaras", "fiture", "haydnhkim", "kdy1", "vovacodes" ], "repo": "swc-project/swc", "url": "https://github.com/swc-project/swc/issues/1782", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1995550075
perf(es/minifier): Improve format.inline_script

Improves on #8252: rewrites fn replace_close_inline_script, removing unsafe and improving performance by ~35%.

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it. Thank you!
gharchive/pull-request
2023-11-15T20:51:28
2025-04-01T04:35:59.700620
{ "authors": [ "CLAassistant", "ZakisM", "kdy1" ], "repo": "swc-project/swc", "url": "https://github.com/swc-project/swc/pull/8292", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
237272843
Matching people with trainees

When a trainee application is matched to a person, is all the trainee's information supposed to go into the person's record? See for example this training request: https://amy.software-carpentry.org/workshops/training_request/665/ was matched to this person: https://amy.software-carpentry.org/workshops/person/13594/ However, the information in the training request does not populate the person's record. Is there another step to make this happen?

The trainee's information goes into the person's record only if a new person record is created. If there is already an existing person record in the database, it doesn't happen. See relevant code.

Duplicate of #1270, fixed in #1313.
gharchive/issue
2017-06-20T16:34:40
2025-04-01T04:35:59.705618
{ "authors": [ "chrismedrela", "maneesha", "pbanaszkiewicz" ], "repo": "swcarpentry/amy", "url": "https://github.com/swcarpentry/amy/issues/1198", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
629743982
Possible to create potentially malformed URL via object key

Describe the bug
aws-sdk-swift's S3 has a method that calls return client.send(operation: "PutObject", path: "/{Bucket}/{Key+}", httpMethod: "PUT", input: input, on: eventLoop) (amongst other similar calls) that can result in an HTTP URL request formed with double slashes when the key starts with a /. I don't have an AWS account to confirm, but AWS probably doesn't care; I use Wasabi and it very much does care. We should get this: https://s3.us-west-1.amazonaws.com/bucketname/path/to/resource.ext but instead we get this: https://s3.us-west-1.amazonaws.com/bucketname//path/to/resource.ext

To Reproduce
Steps to reproduce the behavior: create an object request:

let objectRequest = S3.PutObjectRequest(bucket: "bucketname", key: "/path/to/resource.ext")
S3().putObject(objectRequest)
// ...putObject calls `AWSClient.send`
// ...send calls `AWSClient.createAWSRequest`
// ...createAWSRequest results in a request with the following:
// request.url.absoluteString = "https://s3.us-west-1.amazonaws.com/bucketname//path/to/resource.ext"

Expected behavior
We should get this: https://s3.us-west-1.amazonaws.com/bucketname/path/to/resource.ext

Additional context
I'm not 100% sure this shouldn't be expected behavior, but I thought it was worth notifying anyways. I have a PR ready if this isn't technically a PEBKAC issue.

This only happens when using a custom endpoint, so I am closing this one in favour of #285
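For illustration, here is a minimal sketch of the normalization idea behind the fix, written in TypeScript for brevity; the actual change lives in aws-sdk-swift-core's request building, and the helper name here is hypothetical:

// Hypothetical helper: when substituting the object key into a
// "/{Bucket}/{Key+}" path template, strip any leading slashes from the
// key so the joined URL never contains "//".
function buildObjectPath(bucket: string, key: string): string {
  const normalizedKey = key.replace(/^\/+/, ""); // "/path/to/x" -> "path/to/x"
  return `/${bucket}/${normalizedKey}`;
}

// buildObjectPath("bucketname", "/path/to/resource.ext")
// => "/bucketname/path/to/resource.ext"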
gharchive/issue
2020-06-03T06:58:12
2025-04-01T04:35:59.726221
{ "authors": [ "adam-fowler", "mredig" ], "repo": "swift-aws/aws-sdk-swift-core", "url": "https://github.com/swift-aws/aws-sdk-swift-core/issues/284", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2739226622
Fix StringOptimization to handle load_borrow

This is part of fixing performance regressions for OSSA modules.
rdar://140229560

@swift-ci test
gharchive/pull-request
2024-12-13T21:19:19
2025-04-01T04:35:59.753685
{ "authors": [ "meg-gupta" ], "repo": "swiftlang/swift", "url": "https://github.com/swiftlang/swift/pull/78176", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
676335407
Add Tokamak DevTools extension for Chrome

The tool itself is written in Tokamak. The extension might work in Firefox too, but I haven't tried it. So far it lets you browse the View tree, hover over Views to highlight them, and click on Views to view some basic info about them.

Todo:
[ ] Fix various crashes
[ ] Correctly update when the tree changes

New features can probably be done in separate PRs

Fails :no_entry_sign:
DevTools/src/TokamakDevTools/Sources/TokamakDevTools/Model/NodeInspector.swift#L4 - Initializing an optional variable with nil is redundant. (redundant_optional_initialization)
Generated by :no_entry_sign: Danger Swift against 086ae30ac32d9268bcf26810be356a9c42f6947d
gharchive/pull-request
2020-08-10T18:37:40
2025-04-01T04:35:59.756882
{ "authors": [ "carson-katri", "ie-ahm-robox" ], "repo": "swiftwasm/Tokamak", "url": "https://github.com/swiftwasm/Tokamak/pull/255", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2342203542
Adaptive default toolchain version based on Swift version on builder

Currently, the toolchain version that carton provides is hardcoded to a fixed value, and it is generally updated to track the current stable release. However, a user's development environment may still be on an older version that hasn't been upgraded yet, or conversely they may want to use a bleeding-edge pre-release version. In those cases, the language version used to edit the source code and the language version carton uses for the wasm build end up being different.

As a workaround, you can switch the version carton uses by providing a .swift-version file. But even without that, the version the user presumably expects carton to use is the version of the swift binary they invoke when running $ swift run carton bundle. It would be convenient if carton switched to this expected version without any file-based configuration.

This patch changes the default to work that way. Since carton is meant to be installed as a dependency plugin via SwiftPM, the Swift version can be detected with #if compiler checks inside carton's own code. The implementation uses this.

Prior discussion: https://discord.com/channels/291054398077927425/383442648012423179/1248990244888641648

Yay!
gharchive/pull-request
2024-06-09T12:26:16
2025-04-01T04:35:59.760442
{ "authors": [ "omochi" ], "repo": "swiftwasm/carton", "url": "https://github.com/swiftwasm/carton/pull/481", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2078043298
Update QuickSettings.vue for spelling

"Silence" spelling fix.

Hello @tpatchg, thank you for helping with this. Merging now.
PS: I saw your issue on the main repo. I'll drop a comment in a few.
gharchive/pull-request
2024-01-12T04:29:42
2025-04-01T04:35:59.767526
{ "authors": [ "cwilvx", "tpatchg" ], "repo": "swing-opensource/swingmusic-client", "url": "https://github.com/swing-opensource/swingmusic-client/pull/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1966594616
🛑 www.swipload.com is down In 1a71e95, www.swipload.com (https://www.swipload.com) was down: HTTP code: 406 Response time: 317 ms Resolved: www.swipload.com is back up in b548131 after 8 minutes.
gharchive/issue
2023-10-28T13:24:03
2025-04-01T04:35:59.771576
{ "authors": [ "AntiAliasing" ], "repo": "swipload/status", "url": "https://github.com/swipload/status/issues/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
414089024
radio and checkbox classes affect other inputs too

When an <input type="text"> is nested under an <input type="radio">, the input constraint in the following classes:

radio--left
radio--right

affects the nested text input's "top" attribute (and some more), which causes the input field to sit higher than the label. The same problem exists within the checkbox classes:

checkbox--left
checkbox--right

I'll make a PR that adds a constraint to the input class, scoping it to:

input[type=radio] in radios.scss
input[type=checkbox] in checkboxes.scss

as this would be more specific, ensuring that only the elements we really want are affected by the class.

Will be fixed with the Material Design migration. Issue will be closed.
gharchive/issue
2019-02-25T12:49:24
2025-04-01T04:35:59.777338
{ "authors": [ "bschaeublin", "gillerr" ], "repo": "swiss/styleguide", "url": "https://github.com/swiss/styleguide/issues/654", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
446923616
2.0: after the process exits abnormally, the server can't be started with php bin/swoft http:start

I'm using a self-built Docker environment, with version 2.0 installed via composer create-project. The Docker CMD is php bin/swoft http:start. Normally the application is restarted with docker restart. But on version 2.0, because docker restart does not shut down the swoft process cleanly, every subsequent Docker start fails to come back up properly. I also tried php bin/swoft http:restart, but it reports that it cannot bring the process up and exits with a timeout. On the previous 1.x versions, docker restart could restart the application normally at any time. What is the difference here?

There's probably a bug; we'll look into it.

Changing the return in the isRunning() method of vendor/swoft/server/src/Server.php to match the 1.x version makes it start normally:

return $masterPID > 0 && Process::kill($managerPID, 0);
// Process::kill($masterPID, 0) -> Process::kill($managerPID, 0)

Delete the swoft.pid file under the runtime directory, then run php bin/swoft http:start.
gharchive/issue
2019-05-22T04:08:30
2025-04-01T04:35:59.781951
{ "authors": [ "Ckeungkin", "f39516046", "inhere" ], "repo": "swoft-cloud/swoft", "url": "https://github.com/swoft-cloud/swoft/issues/646", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
706702802
Issue 601 topic services under ros2

Distro A, OPSEC #4584

Just checking in since it's been a while since I saw any updates for this, is it ready to go?

I think so, but I don't have a great test for it.

@danthony06 I was able to pull your branch and confirm it worked in my test case that generated the error in the first place. Without the fix, my code fails to compile with errors like:

/srv/DeleteRoute.h:4:10: fatal error: /msg/DeleteRouteResponse.hpp: No such file or directory
    4 | #include _msgs/msg/DeleteRouteResponse.hpp

With the fix, my code compiles and runs correctly. @pjreed Is that sufficient validation to move this patch forward? Thanks.

You bet, that looks good to me. Thanks!

Good deal @pjreed! Any guidance on when we can expect this fix to be rolled into the distribution that's accessed via package managers on Ubuntu? Just need to give the CI guys a heads-up on what to expect...

I've tagged a new release and opened PRs to get it released into Dashing, Eloquent, and Foxy:
https://github.com/ros/rosdistro/pull/27468
https://github.com/ros/rosdistro/pull/27467
https://github.com/ros/rosdistro/pull/27466

AFAIK there isn't a fixed release schedule for the official ROS repositories, but they usually push out new releases every 3-4 weeks. It looks like Dashing & Eloquent were both updated at the end of October and Foxy was updated early November, so it should be soon.
gharchive/pull-request
2020-09-22T21:23:42
2025-04-01T04:35:59.803760
{ "authors": [ "danthony06", "dvmartin999", "pjreed" ], "repo": "swri-robotics/marti_common", "url": "https://github.com/swri-robotics/marti_common/pull/604", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2433543682
🛑 BestQA_GO is down In 2c8244f, BestQA_GO (https://www.bestqa.net/go/bestqa_dev/up) was down: HTTP code: 0 Response time: 0 ms Resolved: BestQA_GO is back up in c96a4de after 17 minutes.
gharchive/issue
2024-07-27T16:22:19
2025-04-01T04:35:59.829582
{ "authors": [ "swuecho" ], "repo": "swuecho/upptime", "url": "https://github.com/swuecho/upptime/issues/2206", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1126730400
🛑 Random site is down In 9a6c608, Random site (https://www.southernjive.co.uk) was down: HTTP code: 0 Response time: 0 ms Resolved: Random site is back up in aec02b8.
gharchive/issue
2022-02-08T02:55:50
2025-04-01T04:35:59.834391
{ "authors": [ "sxa" ], "repo": "sxa/aoupptime", "url": "https://github.com/sxa/aoupptime/issues/480", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1878408937
view! failing when attribute name ends in -ref

Describe the bug
To integrate with Tailwind Elements, I need to create div (and other) elements with a data-te-dropdown-ref tag. I tried:

view! { cx,
    div(class="relative", data-te-dropdown-ref=true) {}
}

but this fails perseus serve with:

error: expected identifier
 --> src/components/twelements/dropdown.rs:7:48
  |
7 |     div(class="relative", data-te-dropdown-ref=true) {}
  |                                                ^^^

Environment
Sycamore: 0.8.1 with Perseus 0.4.2
OS: Linux

This should be a pretty easy fix in the view! macro. Right now, we just parse attributes as a sequence of identifiers, but ref is a keyword. Instead, we just need to use the Ident::parse_any function from syn instead of just Ident::parse.
gharchive/issue
2023-09-02T06:44:14
2025-04-01T04:35:59.847210
{ "authors": [ "lukechu10", "smessmer" ], "repo": "sycamore-rs/sycamore", "url": "https://github.com/sycamore-rs/sycamore/issues/620", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2124256513
Still active? The repo hasn't had a commit in three years.

Hi @alex3236, this project is no longer actively maintained, but there are plans to update it. At the moment I'm focusing on other projects, so it may be a month before the project is updated.

@syfxlin/tiptap-starter-kit is now updated, so I'm closing this issue, thanks!
gharchive/issue
2024-02-08T02:42:49
2025-04-01T04:35:59.852172
{ "authors": [ "alex3236", "syfxlin" ], "repo": "syfxlin/tiptap-starter-kit", "url": "https://github.com/syfxlin/tiptap-starter-kit/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1551733232
Add additional placement options, such as above and behind the car

If you only wanted one thing displayed, it'd be nice if it could be centered.

There are some config options to adjust this already, but tbh I'm not a fan of the implementation. I need to figure out how to make it friendlier. One idea I had was being able to save "presets" that you could assign to different cameras; having a few sensible defaults would be nice.
gharchive/issue
2023-01-21T08:14:06
2025-04-01T04:35:59.970434
{ "authors": [ "XertroV", "sylae" ], "repo": "sylae/DiegeticInfoDisplay", "url": "https://github.com/sylae/DiegeticInfoDisplay/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1937345379
🛑 Concrete Tree is down In ff072da, Concrete Tree (https://concretetree.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Concrete Tree is back up in 20f17b5 after 56 minutes.
gharchive/issue
2023-10-11T09:51:33
2025-04-01T04:35:59.978136
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/22016", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1941468936
🛑 Speedy Auto Clean is down In d1658be, Speedy Auto Clean (https://speedyautoclean.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Speedy Auto Clean is back up in d725a5f after 16 minutes.
gharchive/issue
2023-10-13T08:15:14
2025-04-01T04:35:59.980609
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/23871", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1943057610
🛑 Patio Pros Elpaso is down In d4b874c, Patio Pros Elpaso (https://patioproselpaso.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Patio Pros Elpaso is back up in d9ef2cd after 51 minutes.
gharchive/issue
2023-10-14T07:31:09
2025-04-01T04:35:59.982903
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/24643", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1962359091
🛑 Shiny Car Clean is down In d77fa67, Shiny Car Clean (https://shinycarclean.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Shiny Car Clean is back up in 7166ef3 after 30 minutes.
gharchive/issue
2023-10-25T22:28:24
2025-04-01T04:35:59.985168
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/33655", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1967108304
🛑 Clean Standards is down In 6e41042, Clean Standards (https://clean-standards.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Clean Standards is back up in 78f1336 after 10 minutes.
gharchive/issue
2023-10-29T18:37:27
2025-04-01T04:35:59.987443
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/36547", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1973191323
🛑 Cleanaholic is down In 143f6d4, Cleanaholic (https://cleanaholictn.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Cleanaholic is back up in d0aac43 after 43 minutes.
gharchive/issue
2023-11-01T22:17:12
2025-04-01T04:35:59.990513
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/39051", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1892201488
🛑 AZ Brick Frames is down In e1c955a, AZ Brick Frames (https://azbrickframes.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: AZ Brick Frames is back up in 69b1b54 after 47 minutes.
gharchive/issue
2023-09-12T10:25:58
2025-04-01T04:35:59.992806
{ "authors": [ "symapex" ], "repo": "symapex/upsite", "url": "https://github.com/symapex/upsite/issues/9479", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
236545267
Relationals and BooleanAtom

Please review and suggest improvements.

@isuruf What can be done to fix the build failure? See, for example, https://github.com/ShikharJ/symengine.py/blob/52fb3d2d7a8b7925b9207415178af7404739fdd5/symengine/lib/symengine.pxd#L463

Ping @isuruf.

In SymPy, x < y returns a "less than" object. We need to do the same.

@isuruf I couldn't find where this was being parsed in SymPy. Can you point that out to me?

In pure Python you need to define __lt__. For Cython, you need to use __richcmp__.

@isuruf Are the current changes satisfactory? Or should __lt__ and __richcmp__ be overloaded in all of the classes?

Ping @isuruf.

Looks good. I'll merge this after I've done a release over the weekend.

@isuruf Can this be merged? It would be required for the Logic classes.

@ShikharJ, if you need this PR for other PRs, please send the new PR with the commits here. I'll merge this after the release.
gharchive/pull-request
2017-06-16T17:26:40
2025-04-01T04:35:59.996799
{ "authors": [ "ShikharJ", "isuruf" ], "repo": "symengine/symengine.py", "url": "https://github.com/symengine/symengine.py/pull/159", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
850212995
Migrate to the new bundle directory structure

Q            | A
Bug fix?     | no
New feature? | yes
Tickets      | none
License      | MIT

Hello everyone! I'm happy to propose this refactoring according to the latest bundle best practices. I think it is a good time to do it because there are not so many packages yet, and we can standardize this little piece of the Symfony packages. Cheers!

I have one minor question about the assets/ directory: it looks like it's not quite correct to rename it fully to public/, or is it? From my point of view, this assets/ directory in UX packages has nothing in common with the public/ bundle directory, or am I missing something?

That's correct - it shouldn't go into public. And actually, I think the assets/ directory needs to NOT be moved from Resources/assets to assets. The reason is that part of Symfony Flex looks for Resources/assets. It also looks for assets/ - https://github.com/symfony/flex/blob/e472606b4b3173564f0edbca8f5d32b52fc4f2c9/src/PackageJsonSynchronizer.php#L153-L162 The problem is that, whatever the path, once a package is installed, the user's package.json is updated to point to the directory - e.g. "@symfony/ux-chartjs": "file:vendor/symfony/ux-chartjs/Resources/assets",. And so, moving the directory in an existing project would be a BC break :)

Yeah, a BC break is not good... However, because UX has an experimental status, and to achieve cleaner code built with the latest technologies, we can allow this little BC break here, also because that package.json file is owned by the user and can be easily edited. WDYT?

I'm also reluctant to do this change, because it diverges from what we do historically in other core Symfony bundles. Since this repo might end up being merged into symfony/symfony, we should apply its conventions here.

@nicolas-grekas so why not change these "historical" conventions and force everything to the new way, as was done for Symfony itself?

I'm mixed here. I also prefer the new structure, but Ryan's point is very valid, and I'm not sure the reasons to do this change are strong enough to outweigh the drawbacks and diverge from the current way in symfony/symfony...

Honestly, I'd like to push this structure to core Symfony bundles too 👼🏻 , and probably here we can start 😄
gharchive/pull-request
2021-04-05T08:58:15
2025-04-01T04:36:00.175694
{ "authors": [ "nicolas-grekas", "sadikoff", "weaverryan" ], "repo": "symfony/ux", "url": "https://github.com/symfony/ux/pull/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1711702709
Installation is not working

Hello @fabpot,

When I'm trying to install Croncape following your documentation (https://github.com/symfonycorp/croncape#installation), like this:

go get github.com/symfonycorp/croncape

I get this error:

go: go.mod file not found in current directory or any parent directory.
'go get' is no longer supported outside a module.
To build and install a command, use 'go install' with a version,
like 'go install example.com/cmd@latest'
For more information, see https://golang.org/doc/go-get-install-deprecation
or run 'go help get' or 'go help install'.

When I follow the recommendations found at https://go.dev/doc/go-get-install-deprecation, like this:

go install github.com/symfonycorp/croncape@v1.3.0

everything works as expected. I guess the command is not the same depending on the version of Go used locally. I'm using go version go1.18.1 linux/amd64.

Thank you,
Jérôme

Correct, fixed in 45a0157
gharchive/issue
2023-05-16T10:12:53
2025-04-01T04:36:00.179103
{ "authors": [ "fabpot", "jeromecx" ], "repo": "symfonycorp/croncape", "url": "https://github.com/symfonycorp/croncape/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
624857784
Fix download link

php -r "copy('https://github.com/symplify/easy-coding-standard-prefixed/blob/master/ecs.phar?raw=true', 'ecs.phar');"

Thanks :+1:

Need to revert this change, as it creates a file named ecs.phar?raw=true where it should be ecs.phar. Here's the output:

➜ cake4-demo git:(test) ✗ wget https://github.com/symplify/easy-coding-standard-prefixed/blob/master/ecs.phar\?raw\=true
--2020-07-21 16:39:46-- https://github.com/symplify/easy-coding-standard-prefixed/blob/master/ecs.phar?raw=true
Resolving github.com (github.com)... 13.234.176.102
Connecting to github.com (github.com)|13.234.176.102|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/symplify/easy-coding-standard-prefixed/raw/master/ecs.phar [following]
--2020-07-21 16:39:47-- https://github.com/symplify/easy-coding-standard-prefixed/raw/master/ecs.phar
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/symplify/easy-coding-standard-prefixed/master/ecs.phar [following]
--2020-07-21 16:39:50-- https://raw.githubusercontent.com/symplify/easy-coding-standard-prefixed/master/ecs.phar
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.152.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.152.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 35755159 (34M) [application/octet-stream]
Saving to: ‘ecs.phar?raw=true’

ecs.phar?raw=true 100%[=======================================================================>] 34.10M 2.03MB/s in 35s

2020-07-21 16:40:28 (985 KB/s) - ‘ecs.phar?raw=true’ saved [35755159/35755159]

Either the command should be:

php -r "copy('https://github.com/symplify/easy-coding-standard-prefixed/blob/master/ecs.phar?raw=true', 'ecs.phar');"

or what was there earlier:

wget https://github.com/symplify/easy-coding-standard-prefixed/blob/master/ecs.phar
gharchive/pull-request
2020-05-26T12:49:20
2025-04-01T04:36:00.213918
{ "authors": [ "TomasVotruba", "dxops", "ishanvyas22" ], "repo": "symplify/easy-coding-standard-prefixed", "url": "https://github.com/symplify/easy-coding-standard-prefixed/pull/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
910811055
[STATIC-DETECTOR] You have requested a non-existent service "Symfony\Component\Console\Application".

Related to this? https://github.com/symplify/symplify/issues/3188

Hi, thanks for reporting. I think the fix would be the same as in the linked issue: https://github.com/symplify/symplify/pull/3261

Closing as resolved, thank you :+1:
gharchive/issue
2021-06-03T19:59:06
2025-04-01T04:36:00.216331
{ "authors": [ "TomasVotruba", "gertvdb" ], "repo": "symplify/symplify", "url": "https://github.com/symplify/symplify/issues/3255", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1376738002
[monorepo-builder] vendor/bin/monorepo-builder init generates files with deprecated code

The vendor/bin/monorepo-builder init command generates

$parameters->set(Option::DATA_TO_APPEND, [
    ComposerJsonSection::REQUIRE_DEV => [
        'phpunit/phpunit' => '^9.5',
    ],
]);

Option::DATA_TO_APPEND is deprecated, though the deprecation message isn't enormously useful to first-time users:

/**
 * @var string
 * @deprecated Use MBConfig instead
 * @api
 */
public const DATA_TO_APPEND = 'data_to_append';

How does one actually get $mbConfig, in order to call packageDirectories() on it?

On a related note, why not generate the initial file with the packages directory set by default? Otherwise, if you're following the instructions, you immediately get an error.

It works like this:

<?php

declare(strict_types=1);

use Symplify\MonorepoBuilder\Config\MBConfig;
use Symplify\ComposerJsonManipulator\ValueObject\ComposerJsonSection;

return static function (MBConfig $mbConfig): void {
    $mbConfig->packageDirectories([
        __DIR__ . '/src',
    ]);

    $mbConfig->defaultBranch('main');

    $mbConfig->dataToAppend([
        ComposerJsonSection::REQUIRE_DEV => [
            'phpunit/phpunit' => '^9.5',
        ],
    ]);
};

The default files definitely should be updated.
gharchive/issue
2022-09-17T10:47:52
2025-04-01T04:36:00.218746
{ "authors": [ "bohanyang", "tacman" ], "repo": "symplify/symplify", "url": "https://github.com/symplify/symplify/issues/4405", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
775084986
TypeChecking - Type check function input and output.

Adds the ability to type check function inputs and outputs at runtime, with support for all typing primitives.

from typing import Union

@typecheck
def func(a: Union[int, bool], b: str) -> bool:
    ...

func("hello", 0)  # Throws error.

For implementation purposes it might be useful to have a function that takes in a Type[Any] and a Type to determine if the Type[Any] is of that type.

def check(obj, kind):
    ...

check(int, Union[int, str])            # >>> True
check(int, Optional[Union[int, str]])  # >>> True

Would be reinventing the wheel; typeguard does this well.
gharchive/issue
2020-12-27T18:17:51
2025-04-01T04:36:00.343029
{ "authors": [ "synchronizing" ], "repo": "synchronizing/toolbox", "url": "https://github.com/synchronizing/toolbox/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1501120880
[Bug]: Lightning send issues

Describe the bug
Can't send Lightning. I tried sending to Muun and then to my node, and it's not working.

To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

Screenshots / Recording

Operating system
iOS 16.2

Bitkit version
No response

Log output
No response

I think I had a wallet connectivity issue before; after fixing it, now it's telling me this. Then I got a problem with my node connectivity for Lightning, probably triggering the problem above. After making sure I was properly connected, it simply never sent the sats... it went into a loop, making me wait and wait and never confirming the delivery of the sats over Lightning.

We've been working on a lot of upgrades to Lightning, and performance should significantly improve in the next release. If your funds are stuck, please send an e-mail to support@synonym.to so we can help you retrieve your funds.

@nbourbon, this issue should be resolved as of the latest build if you would like to update and try again. Going to close this issue for now, but if you're still having issues please feel free to comment and we'll reopen it, or send us an email at support@synonym.to
gharchive/issue
2022-12-17T02:39:12
2025-04-01T04:36:00.445482
{ "authors": [ "JWBurgers", "coreyphillips", "nbourbon" ], "repo": "synonymdev/bitkit", "url": "https://github.com/synonymdev/bitkit/issues/785", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
129541151
subd support: n-sided polygons & Pixar crease tags

- optional parameter to triangulate polygons to maintain compatibility
- added crease tag test & example subd crease tag file from OpenSubdiv source code

Super cool! But there's some CI build error. I will check/fix it, then try to merge it into master.

I must have forked a while ago, as I made my changes on a very old revision of the library. I can reapply my changes to HEAD to save you the effort of merging them in by hand?

Actually, I am mostly finished applying your patch to recent master by hand, so you don't need to work on it.
gharchive/pull-request
2016-01-28T18:47:29
2025-04-01T04:36:00.448719
{ "authors": [ "dboogert", "syoyo" ], "repo": "syoyo/tinyobjloader", "url": "https://github.com/syoyo/tinyobjloader/pull/64", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1071883448
Implement grammar support for Function and Modular Model calls

Currently, the grammar does not support function and modular model calls; the parse tree recognizes those as errors. Specifically, the following grammar (example) needs to be supported:

S3 := quadratic(s1, k1, k2, k3);

Implemented in 55b8642329094c373a816758681ece8d480ee668
gharchive/issue
2021-12-06T08:49:34
2025-04-01T04:36:00.455138
{ "authors": [ "mastevb" ], "repo": "sys-bio/vscode-antimony", "url": "https://github.com/sys-bio/vscode-antimony/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
728248832
Todo List: Mobil2an

Assalamualaikum Wr. Wb. Here are a few things we'll be working on for our computer graphics project going forward:

[ ] Create the road assets
[ ] Create the car assets
[ ] Create the logic

Anything further can be added in the comments. Thank you.

I just added GithubBot to Telegram to get notifications whenever there's an update.
gharchive/issue
2020-10-23T14:06:36
2025-04-01T04:36:00.457659
{ "authors": [ "sysfdn" ], "repo": "sysfdn/Mobil2an", "url": "https://github.com/sysfdn/Mobil2an/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1613432387
Decode v2021 frames

Battery types
0xfaf1 - Ternary Lithium
0xfaf2 - Lithium Iron Phosphate
0xfaf3 - Lithium Titanate
0xfaf4 - Custom

Battery status
0 - Unknown
1 - Idle
2 - Charge
3 - Discharge
4 - Standby
5 - Error

Charge MOSFET status
0 - Off
1 - On
2 - Cell overvoltage
3 - Overcurrent protection
4 - Battery full
5 - Battery overvoltage
6 - Battery overtemperature
7 - Power overtemperature
8 - Current exception
9 - Balancer cable missing
10 - Board overtemperature
11 - Reserved
12 - Open failed
13 - Discharge MOSFET exception
14 - Waiting
15 - Manual off
16 - Two level exceed voltage
17 - Low temperature protection
18 - Voltage difference exceeded
19 - Reserved
20 - Self Detect Error

Discharge MOSFET status
0 - Off
1 - On
2 - Cell undervoltage
3 - Overcurrent protection
4 - Two current exceeded
5 - Battery overvoltage
6 - Battery overtemperature
7 - Power overtemperature
8 - Current exception
9 - Balancer cable missing
10 - Board overtemperature
11 - Charge open
12 - Short circuit protection
13 - Discharge MOSFET exception
14 - Open failed
15 - Manual off
16 - Two level low voltage
17 - Low temperature protection
18 - Voltage difference exceeded
19 - Self Detect Error

This is a proper way to decode new-style frames; maybe you will find it useful. I happen to have the entire code in C#, including sending commands, for new-style ANT BMSes.

public static RegisterData AntProtocol_FixedParse(byte[] dataBuf)
{
    RegisterData registerData = new RegisterData();
    registerData.MsgType = 1;
    registerData.SysOperationAuth = dataBuf[6];
    registerData.SystemState = dataBuf[7];
    registerData.Temperature_Num = dataBuf[8];
    registerData.PACK_Cell_Num = dataBuf[9];
    registerData.ProtectMack_Bit = BitConverter.ToUInt64(dataBuf, 10);
    registerData.WarningMack_Bit = BitConverter.ToUInt64(dataBuf, 18);
    registerData.PushMack_Bit = BitConverter.ToUInt64(dataBuf, 26);
    int index = 34; // Starting index
    for (byte i = 0; i < registerData.PACK_Cell_Num; ++i)
    {
        registerData.Voltage_Cell_Value[i] = BitConverter.ToUInt16(dataBuf, index);
        index += 2;
    }
    for (byte i = 0; i < registerData.Temperature_Num; ++i)
    {
        registerData.Temperature_Value[i] = BitConverter.ToInt16(dataBuf, index);
        index += 2;
    }
    registerData.Temperature_MOS = BitConverter.ToInt16(dataBuf, index); index += 2;
    registerData.Temperature_Balance = BitConverter.ToInt16(dataBuf, index); index += 2;
    registerData.Voltage_Pack = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Current_Value = BitConverter.ToInt16(dataBuf, index); index += 2;
    registerData.Pack_SOC = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Pack_SOH = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.DIS_MOS_State = dataBuf[index++];
    registerData.CH_MOS_State = dataBuf[index++];
    registerData.Balance_State = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Pack_Physics_AH = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.Pack_Remain_AH = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.Pack_All_AH = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.Pack_Power = BitConverter.ToInt32(dataBuf, index); index += 4;
    registerData.All_Timer_ms = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.Balance_State_Bit = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.Voltage_Cell_Max_Value = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_Cell_Max_Pos = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_Cell_Min_Value = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_Cell_Min_Pos = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_Cell_Difference = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_Cell_Average_Value = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_DS_Value = BitConverter.ToInt16(dataBuf, index); index += 2;
    registerData.Voltage_DIS_MOS_Value = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_CH_MOS_Value = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Voltage_NH_MOS_Value = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.PackCellType = BitConverter.ToUInt16(dataBuf, index); index += 2;
    registerData.Pack_All_DisAH = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.Pack_All_ChAH = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.All_DisTimer_s = BitConverter.ToUInt32(dataBuf, index); index += 4;
    registerData.All_ChTimer_s = BitConverter.ToUInt32(dataBuf, index); index += 4;
    if (dataBuf[5] > registerData.PACK_Cell_Num * 2 + registerData.Temperature_Num * 2 + 106)
    {
        registerData.CHG_duration = BitConverter.ToUInt32(dataBuf, index); index += 4;
        registerData.CHG_interval = BitConverter.ToUInt32(dataBuf, index); index += 4;
        registerData.CT_remaining = BitConverter.ToUInt16(dataBuf, index); index += 2;
        registerData.RT_discharge = BitConverter.ToUInt16(dataBuf, index); index += 2;
    }
    else if (dataBuf[2] == 18 || dataBuf[2] == 66)
    {
        registerData.MsgType = 2;
        byte[] numArray123 = new byte[16];
        byte num324 = dataBuf[2];
        ushort num325 = (ushort) (((uint) (ushort) dataBuf[4] << 8) + (uint) dataBuf[3]);
        byte num326 = dataBuf[5];
        ushort[] numArray124 = new ushort[256];
        for (int newIndex = 0; newIndex < (int) num326; newIndex += 2)
        {
            ushort num327 = (ushort) ((uint) (ushort) ((uint) (ushort) dataBuf[newIndex + 7] << 8) + (uint) dataBuf[newIndex + 6]);
            numArray124[(newIndex + (int) num325) / 2] = num327;
        }
        registerData.MsgData = numArray124;
    }
    if (dataBuf[2] == byte.MaxValue)
    {
        ushort num328 = (ushort) (((uint) (ushort) dataBuf[11] << 8) + (uint) dataBuf[12]);
        byte[] numArray = new byte[16]
        {
            (byte) ((uint) dataBuf[2] >> 4), (byte) ((uint) dataBuf[2] & 15U),
            (byte) ((uint) dataBuf[3] >> 4), (byte) ((uint) dataBuf[3] & 15U),
            (byte) ((uint) dataBuf[4] >> 4), (byte) ((uint) dataBuf[4] & 15U),
            (byte) ((uint) dataBuf[5] >> 4), (byte) ((uint) dataBuf[5] & 15U),
            (byte) ((uint) dataBuf[6] >> 4), (byte) ((uint) dataBuf[6] & 15U),
            (byte) ((uint) dataBuf[7] >> 4), (byte) ((uint) dataBuf[7] & 15U),
            (byte) ((uint) dataBuf[8] >> 4), (byte) ((uint) dataBuf[8] & 15U),
            (byte) ((uint) dataBuf[9] >> 4), (byte) ((uint) dataBuf[9] & 15U)
        };
        registerData.Warning = (int) num328;
    }
    return registerData;
}
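For readers not working in C#, here is a minimal TypeScript sketch of the first part of the same little-endian layout, using DataView. It only mirrors the cell-voltage and temperature fields of the C# above; the offsets are taken from that code and are otherwise untested assumptions:

// Hypothetical sketch mirroring AntProtocol_FixedParse above: decode the
// cell voltages and temperatures from a v2021 status frame. Assumes `buf`
// holds one complete frame and that multi-byte fields are little-endian
// (as BitConverter is on the platforms the C# tool targets).
function parseFixedFrame(buf: Uint8Array) {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const temperatureNum = buf[8]; // number of temperature sensors
  const packCellNum = buf[9];    // number of cells in the pack
  let index = 34;                // cell data starts after the bit masks

  const cellVoltages: number[] = [];
  for (let i = 0; i < packCellNum; i++) {
    cellVoltages.push(view.getUint16(index, true)); // unsigned, little-endian
    index += 2;
  }

  const temperatures: number[] = [];
  for (let i = 0; i < temperatureNum; i++) {
    temperatures.push(view.getInt16(index, true)); // signed, little-endian
    index += 2;
  }

  return { packCellNum, temperatureNum, cellVoltages, temperatures };
}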
gharchive/pull-request
2023-03-07T13:19:18
2025-04-01T04:36:00.484093
{ "authors": [ "pwilkowski", "syssi" ], "repo": "syssi/esphome-ant-bms", "url": "https://github.com/syssi/esphome-ant-bms/pull/52", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1852019910
*: add comments

As the title says. All comments are resolved, please review again.
gharchive/pull-request
2023-08-15T19:40:52
2025-04-01T04:36:00.749716
{ "authors": [ "Arlottang" ], "repo": "systemxlabs/tinysql", "url": "https://github.com/systemxlabs/tinysql/pull/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
410620695
Columns can no longer be dragged in the deck UI

Summary

Expected Behavior

Actual Behavior

Steps to Reproduce

Environment
Windows 10 / Firefox 65 => OK
Windows 10 / Chrome 71 => NG

@syuilo Please fill in the Environment section.

Windows 10 / Firefox 65 => dragging works, but it isn't reflected until you reload
Windows 10 / Chrome 72 => dragging doesn't work, and it doesn't change even after reloading, it seems

Confirmed the same behavior on Linux 4.19.31-1-lts / Awesome WM / Firefox 67. Again, it isn't reflected until you reload.

Confirmed on macOS 10.14.4 / Firefox 67.0b17 (Developer Edition) that it doesn't change until you reload.
gharchive/issue
2019-02-15T06:12:55
2025-04-01T04:36:00.797411
{ "authors": [ "acid-chicken", "mei23", "rinsuki", "silverscat-3", "syuilo" ], "repo": "syuilo/misskey", "url": "https://github.com/syuilo/misskey/issues/4273", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
276059903
Make push notifications use VAPID instead of GCM

We should eliminate external dependencies as much as possible.

Maybe take this opportunity to switch to building per server.
gharchive/issue
2017-11-22T13:03:10
2025-04-01T04:36:00.798230
{ "authors": [ "syuilo", "tamaina" ], "repo": "syuilo/misskey", "url": "https://github.com/syuilo/misskey/issues/940", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
434573823
Fix #4734 Summary Fix #4734 🙏
gharchive/pull-request
2019-04-18T03:28:12
2025-04-01T04:36:00.799080
{ "authors": [ "syuilo", "tamaina" ], "repo": "syuilo/misskey", "url": "https://github.com/syuilo/misskey/pull/4745", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
481069946
fix: objects cannot be selected in HiDPI environments

Summary
Within the same code, all properties of window are explicitly written as window., so I unified this accordingly.

🙏🙏🙏🙏🙏🙏
gharchive/pull-request
2019-08-15T09:28:46
2025-04-01T04:36:00.800094
{ "authors": [ "Xeltica", "syuilo" ], "repo": "syuilo/misskey", "url": "https://github.com/syuilo/misskey/pull/5268", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
320929612
Small Fix in README Examples + Angular2 wrapper

Hi, thanks for this module. Some points I found:

Code examples
The two examples in the README are not working because the JS link has changed to https://szimek.github.io/signature_pad/js/signature_pad.js

The links that are wrong:
Other demos
Erase feature: https://jsfiddle.net/szimek/jq9cyzuc/
Undo feature: https://jsfiddle.net/szimek/osenxvjc/

Smoothing
In the main demo, when I draw I can see some smoothing in the line thickness. But using this wrapper it does not work like that.. any ideas how this can be fixed? https://github.com/wulfsolter/angular2-signaturepad/issues/61

Thanks!

Thanks for reporting these issues! I've already fixed both examples. Regarding the Angular wrapper - I'm not really sure. The demo at http://lathonez.com/angular2-signaturepad-demo/ seems to work fine. You can play with the values of minWidth and maxWidth and see if changing them makes any difference.

Awesome!! Well.. I used the vanilla JS version and got the smooth behaviour.. so it's something related to the wrapper.. thanks!
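For anyone experimenting with the stroke smoothing mentioned above, signature_pad exposes minWidth and maxWidth options. A minimal sketch; the exact values are just a starting point to play with:

import SignaturePad from "signature_pad";

const canvas = document.querySelector("canvas") as HTMLCanvasElement;

// A wider min/max spread makes the velocity-based variation in line
// thickness (the "smooth" look of the main demo) more pronounced.
const signaturePad = new SignaturePad(canvas, {
  minWidth: 0.5, // thinnest stroke, in px
  maxWidth: 2.5, // thickest stroke, in px
});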
gharchive/issue
2018-05-07T19:39:51
2025-04-01T04:36:00.808872
{ "authors": [ "mariohmol", "szimek" ], "repo": "szimek/signature_pad", "url": "https://github.com/szimek/signature_pad/issues/363", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
348976847
Move to sphinx gallery

Waiting on https://github.com/sphinx-gallery/sphinx-gallery/issues/150

You may also be interested in https://github.com/ianhi/mpl-playback which I made to integrate sphinx-gallery with matplotlib widgets. Basically, you just record your interaction with the figure and it will play it back and draw a mouse cursor on top. Then you don't need to re-record examples of interacting if your styling changes or the behavior of the function changes slightly. Example of it in action here: https://mpl-playback.readthedocs.io/en/latest/gallery/index.html and the important lines of conf.py are here: https://github.com/ianhi/mpl-playback/blob/409e6fc68cdec4b59ce56231cbbd4d117647d3f4/doc/conf.py#L37-L45

If you want to use it, let me know and I'm happy to help.
gharchive/issue
2018-08-09T04:51:30
2025-04-01T04:36:00.822984
{ "authors": [ "ianhi", "t-makaro" ], "repo": "t-makaro/animatplot", "url": "https://github.com/t-makaro/animatplot/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1947690592
bug: NEXTAUTH_URL validation failing with vercel env pull

Provide environment information

System:
  OS: macOS 13.6
  CPU: (8) arm64 Apple M2
  Memory: 737.61 MB / 24.00 GB
  Shell: 5.9 - /bin/zsh
Binaries:
  Node: 18.17.1 - ~/.volta/tools/image/node/18.17.1/bin/node
  Yarn: 4.0.0-rc.50 - ~/.volta/tools/image/yarn/4.0.0-rc.50/bin/yarn
  npm: 9.6.7 - ~/.volta/tools/image/node/18.17.1/bin/npm
  pnpm: 8.7.4 - ~/.volta/bin/pnpm
  Watchman: 2023.10.02.00 - /opt/homebrew/bin/watchman
ct3aMetadata.initVersion: "7.19.0"

Describe the bug
Using Vercel, you can conveniently fetch the latest environment variables with vercel env pull. However, a caveat is that Vercel sets VERCEL_URL="" locally, causing .preprocess() to always use the empty VERCEL_URL instead of the explicitly set NEXTAUTH_URL. To resolve this, every time you pull the environment variables, you must manually remove VERCEL_URL="" from .env.local. This step is also necessary in the CI environment to pass the env.mjs validation.

I'm considering a logic switch from str => process.env.VERCEL_URL ?? str to str => str ?? process.env.VERCEL_URL, where VERCEL_URL becomes the fallback value rather than NEXTAUTH_URL. I'm prepared to implement this change, but I'd appreciate your input on whether this solution makes sense to you, or if you have a better suggestion.

Reproduction repo
I can't set it up, because it depends on a Vercel account. But you don't need any code changes to reproduce.

To reproduce
1. Run npm create t3-app@latest and install with NextAuth
2. Connect the repo to Vercel
3. Set NEXTAUTH_URL="http://localhost:3000" in Vercel for the development environment
4. Run vercel env pull (you may need to run vercel login first)
5. Run npm run dev

Now you should get the following error:

next dev
❌ Invalid environment variables: { NEXTAUTH_URL: [ 'String must contain at least 1 character(s)' ] }
Error: Invalid environment variables

Additional information
No response

Awesome! That's great. And I saw it's already in create-t3 🙌 Thank you @juliusmarminge. I will close the issue then.
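A minimal sketch of the proposed swap, in the style of the env.mjs that create-t3-app generates (the surrounding schema is abbreviated, and the "current" snippet is paraphrased):

import { z } from "zod";

// Current behavior (paraphrased): VERCEL_URL always wins, even when
// `vercel env pull` writes VERCEL_URL="" to .env.local:
//   (str) => process.env.VERCEL_URL ?? str
//
// Proposed behavior: only fall back to VERCEL_URL when NEXTAUTH_URL
// is not set at all, so an explicitly set NEXTAUTH_URL survives the pull.
const server = z.object({
  NEXTAUTH_URL: z.preprocess(
    (str) => str ?? process.env.VERCEL_URL,
    // VERCEL_URL doesn't include the protocol, so it can't pass z.string().url()
    process.env.VERCEL_URL ? z.string().min(1) : z.string().url(),
  ),
});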
gharchive/issue
2023-10-17T15:23:05
2025-04-01T04:36:00.847805
{ "authors": [ "danieldeichfuss" ], "repo": "t3-oss/create-t3-app", "url": "https://github.com/t3-oss/create-t3-app/issues/1602", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1326536711
fix(prisma): fix minor typo in prisma schema

Fix minor typo in prisma schema

[x] I reviewed linter warnings + errors, resolved formatting, types and other issues related to my work
[x] The PR title follows the convention we established conventional-commit
[x] I performed a functional test on my final commit

Fixed a small typo in auth-schema.prisma (changed @db.text to @db.Text) to address #277.

Reference: https://www.prisma.io/docs/concepts/database-connectors/postgresql

Would you mind also PRing this to the next branch?
gharchive/pull-request
2022-08-03T00:31:25
2025-04-01T04:36:00.850548
{ "authors": [ "RodSarhan", "juliusmarminge" ], "repo": "t3-oss/create-t3-app", "url": "https://github.com/t3-oss/create-t3-app/pull/278", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1464718104
fix: language select layout shift (closes #855)

Closes #855

✅ Checklist
[x] I have followed every step in the contributing guide (updated 2022-10-06).
[x] The PR title follows the convention we established conventional-commit
[x] I performed a functional test on my final commit

Changelog
✅ Updated navbar style to 'fixed' instead of 'flex' to prevent layout shift when selecting the language select dropdown
✅ Added a component type of 'div' to the Listbox.Button component within the LanguageSelect component to prevent scrolling when unselected / clicking outside of the language select options

Screenshots

💯 Prettier

Looks great. Not on my PC so I can't test, but from your preview it looks to be working 👌👌

The issue is that you fixed the navbar, so the index is not getting positioned according to the nav, I guess.

Awesome 👌🏻 I'll fix the Prettier check when I'm back at my PC a bit later.

Also, I can't navigate by tabbing anymore for some reason; it selects the theme toggle after the GitHub icon instead of selecting the language selector.

Interesting, I narrowed down the layout shift issue to this 'flex' (I'm on my phone, otherwise I would just link to the line in the repo). Removing this flex gets rid of the issue, but is likely causing these other issues. Tried a bunch of different overflow options but couldn't figure out how to keep it flex and not introduce the shift.

Okay, after some more digging I think I found the culprit class: https://github.com/t3-oss/create-t3-app/blob/next/www/src/styles/global.css#L40 Removing this line completely fixes the issue without any of the bugs introduced above. I don't see any adverse effects when removing it - @juliusmarminge I see you made this commit ~2 months ago, is there something I'm missing that this resolved? If not, I can update the PR, thanks!

Gitblame don't do me like that 😂 FR though, I don't remember putting that there, so not sure what it does.

Haha 🤣 I'm not seeing any weird behavior anywhere, but just wanted to double check I wasn't overlooking anything. Scrollbars can be nasty little buggers.
gharchive/pull-request
2022-11-25T16:00:27
2025-04-01T04:36:00.861700
{ "authors": [ "Seth-McKilla", "asrvd", "juliusmarminge" ], "repo": "t3-oss/create-t3-app", "url": "https://github.com/t3-oss/create-t3-app/pull/867", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2246067820
fix: identify Deno as server (#154)

@juliusmarminge We hit the faulty server detection explained in #154 when using t3-env on Netlify in a middleware in Next.js. Netlify uses Deno to bundle for the edge, and this resulted in builds failing with "❌ Attempted to access a server-side environment variable on the client".

I think Netlify + Next.js middleware + t3-env qualifies as "common setups widely used", as you put it in your comment. We would appreciate out-of-the-box support for this combo.

Can you fix lint? bun lint:fix
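For context, a minimal sketch of the kind of check this PR adds; the exact expression in the merged change may differ:

// Deno (which Netlify uses to bundle Next.js middleware for the edge)
// exposes a `window` global, so a bare `typeof window` test wrongly
// classifies it as a browser. Also checking for the `Deno` global
// makes the runtime count as a server.
const isServer =
  typeof window === "undefined" || "Deno" in window;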
gharchive/pull-request
2024-04-16T13:26:56
2025-04-01T04:36:00.863975
{ "authors": [ "juliusmarminge", "michaellopez" ], "repo": "t3-oss/t3-env", "url": "https://github.com/t3-oss/t3-env/pull/220", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
852426786
Pallet Registry

Outlined an onchain registry in a pallet. Also tried updating the circuit node and circuit-runtime to run with the latest Substrate master but got stuck at this error. Will investigate further...

Open points:
- Integration: storing successfully executed contracts
- Offchain registry ~ substrate-archive?
- Serving contracts through the fetch_contracts RPC

Related #32

Many thanks, @chiefbiiko! The final implementation done in #45 is heavily inspired by your input, but follows the FRAME v2 implementation like the rest of the Circuit pallets. Will close this in favour of #45 ✌️
gharchive/pull-request
2021-04-07T13:45:47
2025-04-01T04:36:00.866915
{ "authors": [ "MaciejBaj", "chiefbiiko" ], "repo": "t3rn/t3rn", "url": "https://github.com/t3rn/t3rn/pull/40", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
85873297
Touchable cannot transition from RESPONDER_INACTIVE_PRESS_IN to LONG_PRESS_DETECTED for responder .r[1]{TOP_LEVEL}[0].0.0.0.1.1.0.0"

Odd bug, I'm getting this error after spamming the router forward and back.

2015-06-07 15:53:43.681 [error][tid:com.facebook.React.JavaScript] "Error: Touchable cannot transition from `RESPONDER_INACTIVE_PRESS_IN` to `LONG_PRESS_DETECTED` for responder `.r[1]{TOP_LEVEL}[0].0.0.0.1.1.0.0`
stack:
_receiveSignal index.ios.bundle:32990
_handleLongDelay index.ios.bundle:32969
<unknown> index.ios.bundle:8186
callTimer index.ios.bundle:7974
callTimers index.ios.bundle:7997
jsCall index.ios.bundle:7423
_callFunction index.ios.bundle:7686
<unknown> index.ios.bundle:7713
<unknown> index.ios.bundle:7707
perform index.ios.bundle:6221
batchedUpdates index.ios.bundle:14065
batchedUpdates index.ios.bundle:4753
<unknown> index.ios.bundle:7706
applyWithErrorReporter index.ios.bundle:7458
guardReturn index.ios.bundle:7480
processBatch index.ios.bundle:7705
URL: http://localhost:8081/index.ios.bundle
line: 32993
message: Touchable cannot transition from `RESPONDER_INACTIVE_PRESS_IN` to `LONG_PRESS_DETECTED` for responder `.r[1]{TOP_LEVEL}[0].0.0.0.1.1.0.0`"

Is anyone else experiencing this?

2015-06-07 17:42:39.888 [error][tid:com.facebook.React.JavaScript] "Error: undefined is not an object (evaluating 'this.refs[UNDERLAY_REF].setNativeProps')
stack:
_showUnderlay index.ios.bundle:34905
touchableHandleActivePressIn index.ios.bundle:34878
_performSideEffectsForTransition index.ios.bundle:33057
_receiveSignal index.ios.bundle:32996
_handleDelay index.ios.bundle:32964
<unknown> index.ios.bundle:8186
callTimer index.ios.bundle:7974
callTimers index.ios.bundle:7997
jsCall index.ios.bundle:7423
_callFunction index.ios.bundle:7686
<unknown> index.ios.bundle:7713
<unknown> index.ios.bundle:7707
perform index.ios.bundle:6221
batchedUpdates index.ios.bundle:14065
batchedUpdates index.ios.bundle:4753
<unknown> index.ios.bundle:7706
applyWithErrorReporter index.ios.bundle:7458
guardReturn index.ios.bundle:7480
processBatch index.ios.bundle:7705
URL: http://localhost:8081/index.ios.bundle
line: 34905
message: undefined is not an object (evaluating 'this.refs[UNDERLAY_REF].setNativeProps')"
gharchive/issue
2015-06-07T07:56:04
2025-04-01T04:36:00.869212
{ "authors": [ "elsodev" ], "repo": "t4t5/react-native-router", "url": "https://github.com/t4t5/react-native-router/issues/32", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1296807205
[BUG] heal

txAdmin/FXServer versions: v4.14.2 atop FXServer 5652

Describe the bug
Healing yourself or the whole server isn't possible, as the Heal option on the in-game menu does nothing.

To Reproduce
Steps to reproduce the behavior: use the in-game menu and try healing yourself with the heal option.

Expected behavior
You get full health.

Screenshots
If applicable, add screenshots to help explain your problem.

Additional context
Add any other context about the problem here, like for example if it's a server issue, which OS is fxserver hosted on.

Do you have any more information? It works for everybody else... you might be using a framework or anticheat that is not dealing well with the txAdmin heal event.
gharchive/issue
2022-07-07T04:50:34
2025-04-01T04:36:00.905557
{ "authors": [ "InfoBlock", "tabarra" ], "repo": "tabarra/txAdmin", "url": "https://github.com/tabarra/txAdmin/issues/654", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
303608225
Does not extract data properly from tables with alternate row color and no line borders

Tried extracting data from tables that were formatted without any borders, just alternating row colors. It looks like, with either the stream or the lattice approach, it only pulled out the grey rows and ignored (stream) or left blank (lattice) the white rows.

Thanks for your report, @agaisin. We would need to take a look at the table that you were trying to extract. If the document can be shared publicly, please attach it to this issue.

I think this might be an example like agaisin intended (sorry for hijacking the thread if I'm wrong)... short_2015.pdf
gharchive/issue
2018-03-08T19:49:04
2025-04-01T04:36:00.922560
{ "authors": [ "adrennhoff", "agaisin", "jazzido" ], "repo": "tabulapdf/tabula", "url": "https://github.com/tabulapdf/tabula/issues/814", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }