Dataset columns:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 distinct values)
created: timestamp[s], from 2001-05-16 21:05:09 to 2025-01-01 03:38:30
added: string date, from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
metadata: dict
🛑 Software Center - Test 1 is down In cf0b9c7, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 565dccb after 10 minutes.
gharchive/issue
2023-12-26T22:28:55
2025-04-01T04:34:29.409327
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/26608", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2087011555
🛑 Auth-Bridge - Test 1 is down In f883410, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in d58cb92 after 21 minutes.
gharchive/issue
2024-01-17T21:26:31
2025-04-01T04:34:29.411660
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/27659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2140792712
🛑 Software Center - Test 1 is down In b518b24, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 48434a2 after 10 minutes.
gharchive/issue
2024-02-18T07:35:33
2025-04-01T04:34:29.413791
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/29249", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1142048152
🛑 Software Center - Test 1 is down In cca3dcb, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 0c00ac6.
gharchive/issue
2022-02-17T23:16:04
2025-04-01T04:34:29.415897
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/315", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2274197962
🛑 Software Center - Test 1 is down In d9afae9, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 03995da after 37 minutes.
gharchive/issue
2024-05-01T21:02:21
2025-04-01T04:34:29.418151
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/32946", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2306992172
🛑 Auth-Bridge - Test 1 is down In c5c7936, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 11bf419 after 10 minutes.
gharchive/issue
2024-05-20T23:48:29
2025-04-01T04:34:29.420311
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/33880", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2355315836
🛑 Software Center - Test 1 is down In c2872c5, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in d922a8c after 11 minutes.
gharchive/issue
2024-06-15T22:39:35
2025-04-01T04:34:29.422603
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/34956", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2595489020
🛑 Software Center - Test 1 is down In afabc3b, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 1f67af7 after 32 minutes.
gharchive/issue
2024-10-17T18:50:25
2025-04-01T04:34:29.424698
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/37650", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1246239550
🛑 Software Center - Test 1 is down In 0b93d9b, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 9072502.
gharchive/issue
2022-05-24T09:09:36
2025-04-01T04:34:29.426783
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/3897", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1268031052
🛑 Auth-Bridge - Test 1 is down In 70dd558, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 703effd.
gharchive/issue
2022-06-10T21:13:27
2025-04-01T04:34:29.428927
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/4503", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1316039740
🛑 Software Center - Test 1 is down In 280d74e, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 96f8439.
gharchive/issue
2022-07-24T23:39:11
2025-04-01T04:34:29.431010
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/6009", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1440626862
🛑 Software Center - Test 1 is down In 736940b, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 2835c0f.
gharchive/issue
2022-11-08T17:34:47
2025-04-01T04:34:29.433315
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/8824", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2566452406
separate quick-action highlighting & potion order tracking improvements. Station quick-action highlighting can now be configured separately; improved station highlighting when multiple orders share the same potion type; show (ready!) when you finish refining a potion that fulfills an order. I don't know if I have something wrong with my testing configuration, but the "ready!" text stays at the same potion location even after I turn in. For example, before I turn in an order, having a MML in my inventory, I see: After turn-in, I get new orders, but the "ready!" stays. @Dot145 please open an issue but I cannot reproduce it
gharchive/pull-request
2024-10-04T13:57:42
2025-04-01T04:34:29.443895
{ "authors": [ "Dot145", "hex-agon" ], "repo": "hex-agon/mastering-mixology", "url": "https://github.com/hex-agon/mastering-mixology/pull/40", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
667002313
relocate argparse and click packages This makes the dropins for argparse and click submodules. I placed them currently in argparse2tool/dropins, but maybe they could also go to argparse2tool directly. The intended use is then always PYTHONPATH=$(argparse2tool_check_path) python examples/example.py --generate_cwl_tool Do you know of some project using click? I would like to add a test in Travis and see if the new usage works in practice also for click. Sure, I can add a test case for that. Cool. Do you think we should drop the dropins/ subdir? I'm fine with it, less likely to cause issues later if they're sequestered in their own directory. (.venv) (topic/dropins✱) [hxr@mk:~/arbeit/galaxy/argparse2tool]$ PYTHONPATH=$(argparse2tool -) python examples/example-click.py --generate_cwl_tool Traceback (most recent call last): File "examples/example-click.py", line 1, in <module> import click File "/home/hxr/arbeit/galaxy/argparse2tool/argparse2tool/dropins/click/__init__.py", line 73, in <module> class Arg2CWLCommand(Arg2CWLMixin, click.Command): AttributeError: 'NoneType' object has no attribute 'Command' (.venv) (topic/dropins✱) [hxr@mk:~/arbeit/galaxy/argparse2tool]1$ PYTHONPATH=$(argparse2tool -) python examples/example-click.py --generate_galaxy_xml Traceback (most recent call last): File "examples/example-click.py", line 1, in <module> import click File "/home/hxr/arbeit/galaxy/argparse2tool/argparse2tool/dropins/click/__init__.py", line 73, in <module> class Arg2CWLCommand(Arg2CWLMixin, click.Command): AttributeError: 'NoneType' object has no attribute 'Command' click doesn't work as far as I can tell? weird. Oh, I was missing the click package. Looks good from my side. If you like we can have another release and I can continue on the bioconda integration https://github.com/bioconda/bioconda-recipes/pull/23347 .. (I see no better possibility than having a PyPI release first) Cool, sounds good. Feel free to bump in setup and tag appropriately.
gharchive/pull-request
2020-07-28T11:35:58
2025-04-01T04:34:29.490982
{ "authors": [ "bernt-matthias", "hexylena" ], "repo": "hexylena/argparse2tool", "url": "https://github.com/hexylena/argparse2tool/pull/68", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
620036058
Garbled characters in the database initialization script The data in the ModelBuilderExtensions file doesn't seem to take effect. ============================================== The systemusers initialization script contains no data, so there is no way to log in and you have to add a user yourself. The systemconfigs script contains garbled characters (the encoding is not utf-8, and utf-8 cannot be selected when restoring the database):
'Assembly_ImagePullPolicy', '³ÌÐò¼¯ÅäÖÃ', 'Îļþ°üÀ­È¡²ßÂÔ', 'IfNotPresent', '1', '1', 'Always-×ÜÊÇÀ­È¡£¬IfNotPresent-±¾µØÃ»ÓÐʱÀ­È¡£¬Ä¬ÈÏÊÇAlways', '2020-04-05 08:57:18.417000', '2020-04-05 17:12:09.487020', 'admin'
'Email_FromAccount', 'ÓʼþÅäÖÃ', '·¢¼þÈËÕ˺Å', '', '3', '1', 'seed by efcore auto migration', '2020-04-05 15:38:14.583060', '2020-04-05 17:12:09.492483', 'admin'
'Email_FromAccountPwd', 'ÓʼþÅäÖÃ', '·¢¼þÈËÕ˺ÅÃÜÂë', '', '4', '1', 'seed by efcore auto migration', '2020-04-05 15:38:14.583060', '2020-04-05 17:12:09.493020', 'admin'
'Email_SmtpPort', 'ÓʼþÅäÖÃ', 'Óʼþ·þÎñÆ÷¶Ë¿Ú', '25', '2', '1', 'seed by efcore auto migration', '2020-04-05 15:38:14.583053', '2020-04-05 17:12:09.491849', 'admin'
'Email_SmtpServer', 'ÓʼþÅäÖÃ', 'Óʼþ·þÎñÆ÷', '', '1', '1', 'seed by efcore auto migration', '2020-04-05 15:38:14.582863', '2020-04-05 17:12:09.491180', 'admin'
'Http_RequestTimeout', 'HTTPÅäÖÃ', 'ÇëÇó³¬Ê±Ê±¼ä', '10', '1', '1', 'µ¥Î»ÊÇÃ룬ĬÈÏÖµÊÇ10', '2020-04-08 06:48:48.201000', NULL, NULL
'System_WorkerUnHealthTimes', 'ϵͳÅäÖÃ', 'WorkerÔÊÐíÎÞÏìÓ¦´ÎÊý', '3', '1', '1', '½¡¿µ¼ì²éʧ°Ü´ïµ½×î´ó´ÎÊý»á±»ÏÂÏßÌÞ³ý£¬Ä¬ÈÏÖµÊÇ3', '2020-04-08 06:48:48.201000', NULL, NULL
For the garbled characters, check the database's default encoding; if the initial data was not created, you can manually drop the database and try the migration again. Just look directly at what is in the ModelBuilderExtensions file. Fixed it in the database.
gharchive/issue
2020-05-18T09:03:12
2025-04-01T04:34:29.495257
{ "authors": [ "JosonJiang", "hey-hoho" ], "repo": "hey-hoho/ScheduleMasterCore", "url": "https://github.com/hey-hoho/ScheduleMasterCore/issues/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
974779392
Run on format? I use SHIFT + OPTION + F to format my files because I have autosave turned on. Is there any way to get this extension to do its thing when I format the file? Nevermind I'm dumb. It runs even with autosave.
gharchive/issue
2021-08-19T15:17:33
2025-04-01T04:34:29.499331
{ "authors": [ "bastinald" ], "repo": "heybourn/headwind", "url": "https://github.com/heybourn/headwind/issues/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
652730183
Extract Sort Logic into Separate Package Is your feature request related to a problem? Please describe. I would like to write a Prettier plugin that automatically sorts my Tailwind classes, rather than needing to run the code through a VSCode plugin. This will allow me to ensure classes are sorted correctly as part of my CI configuration. Describe the solution you'd like Ideally, I would like to use the class sorting functionality from Headwind directly, rather than implementing something very similar from scratch. What I'm imagining is that the actual sort logic is extracted into something like a @headwind/core package that would allow other tools to do something like this: const { sortClasses } = require('@headwind/core'); /// whatever sortClasses(classString, options); where my own code will handle: file parsing and extracting the class string, reading the options from the file, and updating the file. Describe alternatives you've considered Some alternatives include Waiting for the Tailwind team to build something themselves Problem: Not really in the spirit of OSS; I'd love to see the community work on tools together, rather than waiting for them! Writing my own custom Tailwind sorter Problem: There would then be my own implementation, Headwind, and whatever Tailwind eventually makes first-party Just copying the code from here Problem: This package and my own would be in sync to start, but I'd be constantly trying to keep up with changes to this library Additional context If you're interested in the idea, I'd love to help out with getting this done! If you're not interested, that's OK too; I'm happy to just duplicate the logic if need be. What happened to this, did you abandon it @alexlafroscia? We are considering using Headwind in a project, but it doesn't really make sense IMO if we can't run it in CI. So I really like your idea here. Hey! Yeah, I did end up abandoning this I guess. I don't remember closing it necessarily, but it looks like I did 😅 I don't have a lot of open-source time these days but agree that this would be a really nice thing to support! Ah I see, the same situation here 😉 I'll give you a ping if I sometime in the future find some time to extract it into a prettier plugin! I think part of the problem that I ran into was the fact that there is not a good way to generate the list of actual Tailwind classes generated by a Tailwind config without running all of Tailwind (including the actual CSS generation). I believe that Headwind assumes your class names match the default, but for my apps that's not the case. You'd want a piece of light-weight software that allows you to do something like const listOfClassNames = generateClassNames(tailwindConfigObject); But... that doesn't actually exist in any way, including within Tailwind's source code itself. It's all tied up with PostCSS and actually running the transformations over the output CSS files.
This was true at the time I was looking into this, at least; it may have changed by now, and (hopefully) will in the future, too. IIRC I ran into a problem, then, where a Prettier plugin must run synchronously and, because Tailwind runs as part of PostCSS, it must be run (or I guess, is designed to be run?) asynchronously. With those things being incompatible I just put the whole thing down. If we had the kind of utility described above, we wouldn't have the sync/async problem, but... that's where things are right now. Okay interesting, thanks for the writeup! For my team just matching as Headwind does would probably be sufficient, at least better than nothing. To have something that would handle everything from the tailwind config would've been incredible, but running the generation of the tailwind classes and doing the matching and ranking sounds pretty heavy to run on save. Might be wrong tho, I have quite limited experience in this field.
gharchive/issue
2020-07-07T23:17:03
2025-04-01T04:34:29.510439
{ "authors": [ "alexlafroscia", "anthager" ], "repo": "heybourn/headwind", "url": "https://github.com/heybourn/headwind/issues/82", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
911046387
feat: add isActive prop Description Added an isActive prop that controls whether a submenu shows a background color on mouse hover. Type [ ] Bug fix (no breaking changes) [x] New feature (no breaking changes) [ ] Breaking change (a bug fix or new feature that keeps the existing version from working correctly) Please make sure the following are done [ ] Code style is consistent with this project and passes npm run lint [ ] Documentation is updated if needed (both the Chinese and English docs) [ ] The version number is bumped following Semantic Versioning 2.0.0 So the intent is to hide the background color on hover? Yes
gharchive/pull-request
2021-06-04T02:37:32
2025-04-01T04:34:29.526788
{ "authors": [ "heynext", "xiaoxian521" ], "repo": "heynext/v-contextmenu", "url": "https://github.com/heynext/v-contextmenu/pull/101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
170556634
adds specfem sample This PR adds a new sample, "samples/specfem". It mimics the SPECFEM3D_GLOBE stiffness computation for elastic domains (the heaviest routine in the code). The example implements 4th-order spectral-element computations (used by default). The various flavors (Deville, unrolled, dispatched, prefetch, static) are timed and speedup is compared to the Deville routine (compute_forces_Dev.F90). A step-by-step explanation is given in the README.md. best wishes, daniel Current coverage is 45.72% (diff: 100%) Merging #93 into master will decrease coverage by 0.01% @@ master #93 diff @@ ========================================== Files 32 32 Lines 5260 5260 Methods 223 223 Messages 0 0 Branches 868 868 ========================================== - Hits 2406 2405 -1 Misses 2579 2579 - Partials 275 276 +1 Powered by Codecov. Last update 5b2f1f4...a15b3c1 thanks! That's great that we have a specfem reproducer!
gharchive/pull-request
2016-08-11T01:54:37
2025-04-01T04:34:29.548178
{ "authors": [ "alheinecke", "codecov-io", "danielpeter" ], "repo": "hfp/libxsmm", "url": "https://github.com/hfp/libxsmm/pull/93", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
199095514
How to use ValidateCertificate in VB? Hi everybody, somehow I cant figure out how to validate and always accept TLS certificates when connecting to a FTPS server with Explicit TLS in .NET (plain FTP works perfectly). According to VS-Debugger "ValidateCertificate" must be handled with RaiseEvent >> but what's the correct syntax? Private Shared Sub TestServer() Using cl As New FtpClient() cl.Host = m_host cl.Credentials = New NetworkCredential(m_user, m_pass) cl.EncryptionMode = FtpEncryptionMode.Explicit cl.ValidateCertificate += Function(control, e) e.Accept = True End Function cl.connect() End Using End Sub Thx for your help! Hi AxelBinder, I think the code needs an AddHandler statement in VB.NET: Private Sub TestServer() Using cl As New FtpClient() cl.Host = "hiost" Dim m_user As String = "username" Dim m_pass As String = "passwords" cl.Credentials = New NetworkCredential(m_user, m_pass) cl.EncryptionMode = FtpEncryptionMode.Explicit AddHandler cl.ValidateCertificate, New FtpSslValidation(AddressOf Onvalidatecert) cl.Connect() End Using End Sub Private Sub Onvalidatecert(control As FtpClient, e As FtpSslValidationEventArgs) e.Accept = True End Sub Regards Thank you for this @Tharmkin. I didn't know how to do it myself! thanks so mucho!!!! it was very usefull
gharchive/issue
2017-01-06T00:30:45
2025-04-01T04:34:29.589993
{ "authors": [ "Tharmkin", "alvarofrean", "axelbinder", "hgupta9" ], "repo": "hgupta9/FluentFTP", "url": "https://github.com/hgupta9/FluentFTP/issues/29", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2055937687
1.20.2 support (if possible) Hey, I noticed that you skipped 1.20.2 support and went straight to 1.20.3 and 1.20.4, and I currently use mods that are so far only on 1.20.2. Hey, I have a fork of SongPlayer that's compatible with 1.20.2 on my GitHub if you want to use that temporarily. Yeah sure, that can work! There is now a 1.20.2 backport here.
gharchive/issue
2023-12-26T01:08:59
2025-04-01T04:34:29.592113
{ "authors": [ "Sk8kman", "hhhzzzsss", "hqnt" ], "repo": "hhhzzzsss/SongPlayer", "url": "https://github.com/hhhzzzsss/SongPlayer/issues/34", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
582864401
Route opening for REST api_user is a role for users who are allowed to use the REST API (see ILIAS) closed by #187
gharchive/issue
2020-03-17T09:12:01
2025-04-01T04:34:29.593166
{ "authors": [ "iTitus" ], "repo": "hhu-propra2/abschlussprojekt-mopse", "url": "https://github.com/hhu-propra2/abschlussprojekt-mopse/issues/174", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
278032510
HHH-12106 - Database name not quoted at schema update https://hibernate.atlassian.net/browse/HHH-12106 @dreab8 Do you mind reviewing it? Thanks. Here is a PR https://github.com/hibernate/hibernate-orm/pull/2091 with the change I proposed. @vladmihalcea what do you think? applied upstream Thanks @vladmihalcea
gharchive/pull-request
2017-11-30T08:15:46
2025-04-01T04:34:29.618170
{ "authors": [ "dreab8", "vladmihalcea" ], "repo": "hibernate/hibernate-orm", "url": "https://github.com/hibernate/hibernate-orm/pull/2074", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
988974446
HHH-14811 org.hibernate.AssertionFailure thrown instead of LazyInitializationException when trying to access a lazy property on a deleted entity https://hibernate.atlassian.net/browse/HHH-14811 Thanks! I do wonder if we shouldn't have a specific exception type for unexpected/illegal operations on deleted entities, but that's probably best left as a separate discussion.
gharchive/pull-request
2021-09-06T09:32:27
2025-04-01T04:34:29.619423
{ "authors": [ "Sanne", "yrodiere" ], "repo": "hibernate/hibernate-orm", "url": "https://github.com/hibernate/hibernate-orm/pull/4191", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1091589091
HQL doc rewrite/restructure Document new features of HQL (literals, functions, filter, rollup); rewrite parts of the section dealing with the Query API + execution; split out a new chapter about the query language, and reorder sections; remove material about deprecated/removed features; get rid of use of java.sql.Timestamp from the code!; make use of repeatable annotations in code examples. I'm going to merge this. We can easily make further cleanups later.
gharchive/pull-request
2021-12-31T14:29:51
2025-04-01T04:34:29.621340
{ "authors": [ "gavinking" ], "repo": "hibernate/hibernate-orm", "url": "https://github.com/hibernate/hibernate-orm/pull/4541", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
500796685
HSEARCH-3720 + HSEARCH-3722 Artifact renamings HSEARCH-3720: Rename the mapper-pojo artifact to make it clearer that it's just an abstract base HSEARCH-3722: Fix the artifact ID of the ORM mapper integration tests module I checked the sonar report: the "bugs" are false positives. Thanks! Will merge as soon as CI goes green.
gharchive/pull-request
2019-10-01T10:09:58
2025-04-01T04:34:29.623345
{ "authors": [ "yrodiere" ], "repo": "hibernate/hibernate-search", "url": "https://github.com/hibernate/hibernate-search/pull/2108", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2061474549
🛑 kiwifarms.st is down In 607400f, kiwifarms.st (https://kiwifarms.st) was down: HTTP code: 0 Response time: 0 ms Resolved: kiwifarms.st is back up in 47a27ed after 4 minutes.
gharchive/issue
2024-01-01T10:36:24
2025-04-01T04:34:29.632360
{ "authors": [ "hickoryhouse" ], "repo": "hickoryhouse/kf", "url": "https://github.com/hickoryhouse/kf/issues/2385", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2193928199
🛑 kiwifarms.st is down In 6ff93cc, kiwifarms.st (https://kiwifarms.st) was down: HTTP code: 0 Response time: 0 ms Resolved: kiwifarms.st is back up in 9bef346 after 4 minutes.
gharchive/issue
2024-03-19T03:28:28
2025-04-01T04:34:29.635238
{ "authors": [ "hickoryhouse" ], "repo": "hickoryhouse/kf", "url": "https://github.com/hickoryhouse/kf/issues/3245", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2659212284
Extend Module Patching to ensure all additional module-info.class files work at runtime Currently, we mostly use the Module Path at compile time. Not all patched modules, in org.hiero.gradle.base.jpms-modules, are tested for runtime correctness yet. Related considerations: https://github.com/hashgraph/hedera-services/issues/6242 Testing: https://github.com/hashgraph/hedera-services/issues/5275 Testing: https://github.com/hashgraph/hedera-services/issues/4525
gharchive/issue
2024-11-14T15:36:49
2025-04-01T04:34:29.673778
{ "authors": [ "jjohannes" ], "repo": "hiero-ledger/hiero-gradle-conventions", "url": "https://github.com/hiero-ledger/hiero-gradle-conventions/issues/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
465826917
Support gRPC health check endpoint by default Mu should support gRPC health checks with limited configuration (if not by default) out of the box. See https://github.com/grpc/grpc/blob/master/doc/health-checking.md for details. This would allow dependencies to wait for the service to become healthy before sending requests to it. For example https://github.com/grpc-ecosystem/grpc-health-probe/ This could be done with a default GrpcConfig implementation to be added as a parameter to configList in GrpcServer.netty[IO](port, myService :: healthService :: Nil). A first version of health check service: #630 Pending: Create new issue: "unary service part should support not only protobuf but avro" (https://github.com/higherkindness/mu/pull/630#issuecomment-516792561) Create healthcheck documentation (?)
gharchive/issue
2019-07-09T14:51:02
2025-04-01T04:34:29.735134
{ "authors": [ "TannerYoung", "mrtmmr" ], "repo": "higherkindness/mu", "url": "https://github.com/higherkindness/mu/issues/626", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1246739531
🛑 Frontend is down In f69a466, Frontend (https://beta.medusadistribution.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Frontend is back up in 466f374.
gharchive/issue
2022-05-24T15:31:30
2025-04-01T04:34:29.772252
{ "authors": [ "himalayadevo" ], "repo": "himalayadevo/medusa_monitoring", "url": "https://github.com/himalayadevo/medusa_monitoring/issues/148", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2622783386
🛑 Birthdays is down In 3c3ca7d, Birthdays (https://m.hinzwifi.xyz) was down: HTTP code: 0 Response time: 0 ms Resolved: Birthdays is back up in f0f32f1 after 11 minutes.
gharchive/issue
2024-10-30T03:34:17
2025-04-01T04:34:29.791677
{ "authors": [ "hinzwifi" ], "repo": "hinzwifi/uptime-hinz", "url": "https://github.com/hinzwifi/uptime-hinz/issues/2027", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
332544418
Adapter stopped connecting to hipchat This adapter was working perfectly for me for years... Now it won't connect at all. Perhaps Hipchat somehow broke compatibility. I've tried resetting the password for the Hipchat account I used, doing an npm update, updating packages on the host computer... no luck. Here are the logs: Jun 14 15:57:41 hubot systemd[1]: Starting Hubot... [Thu Jun 14 2018 15:57:45 GMT-0400 (EDT)] DEBUG Loading adapter hipchat [Thu Jun 14 2018 15:57:46 GMT-0400 (EDT)] DEBUG HipChat adapter options: {"jid":"xxxx_xxxx@chat.hipchat.com","password":"xxxxx","token":null,"rooms":"All","rooms_blacklist":"","host":null,"bosh":{"url":null},"autojoin":false,"xmppDomain":null,"reconnect":true} [Thu Jun 14 2018 15:57:46 GMT-0400 (EDT)] INFO Connecting HipChat adapter... [Thu Jun 14 2018 15:59:54 GMT-0400 (EDT)] DEBUG Disconnecting here [Thu Jun 14 2018 15:59:54 GMT-0400 (EDT)] INFO Connection went offline .... Here is my package.json: { "name": "testbot", "version": "0.0.0", "private": true, "author": "", "description": "Test bot", "dependencies": { "amqplib": "0.5.1", "hubot": "^2.19.0", "hubot-diagnostics": "0.0.1", "hubot-google-images": "^0.2.6", "hubot-google-translate": "^0.2.0", "hubot-help": "^1.0.1", "hubot-heroku-keepalive": "^1.0.2", "hubot-hipchat": "^2.12.0-6", "hubot-maps": "0.0.2", "hubot-pugme": "^0.1.0", "hubot-redis-brain": "0.0.3", "hubot-rules": "^0.1.1", "hubot-scripts": "^2.17.2", "hubot-shipit": "^0.2.0", "sprintf-js": "^1.0.3", "thumbor": "^0.1.3", "xml2js": "^0.4.17" }, "engines": { "node": "0.10.x" } } Same here, hope it starts working again soon Also having similar issues with our connection. I met the same issue. And I found following Workaround for Atlassian Server platform. https://confluence.atlassian.com/hipchatkb/hubot-stopped-working-on-hipchat-server-v2-0-7-and-later-867181545.html https://confluence.atlassian.com/hipchatkb/external-xmpp-ports-5222-5223-disabled-by-default-in-hipchat-server-2-0-7-859442760.html Unfortunately my hipchat is using Atlassian Cloud which I couldn't find any workaround for cloud so far. Same issue here. Sometimes it claims to reconnect successfully, but I'm never able to use robot.messageRoom() to send a message. It always gives me this error: 2018-06-19T22:40:25.296953+00:00 app[web.1]: [Tue Jun 19 2018 22:40:25 GMT+0000 (Coordinated Universal Time)] ERROR TypeError: Cannot read property 'message' of undefined 2018-06-19T22:40:25.296957+00:00 app[web.1]: at Robot.HipChat.send (/app/node_modules/hubot-hipchat/src/hipchat.coffee:35:7, <js>:50:38) 2018-06-19T22:40:25.296959+00:00 app[web.1]: at Robot.messageRoom (/app/node_modules/hubot/src/robot.js:608:23) 2018-06-19T22:40:25.296960+00:00 app[web.1]: at /app/scripts/github-workflow.coffee:157:70, <js>:111:14 2018-06-19T22:40:25.296962+00:00 app[web.1]: at Object.callback (/app/node_modules/githubot/lib/githubot.js:113:20) 2018-06-19T22:40:25.296963+00:00 app[web.1]: at next (/app/node_modules/async/lib/async.js:723:43) 2018-06-19T22:40:25.296965+00:00 app[web.1]: at /app/node_modules/async/lib/async.js:24:16 2018-06-19T22:40:25.296967+00:00 app[web.1]: at IncomingMessage.<anonymous> (/app/node_modules/scoped-http-client/src/index.js:95:22) 2018-06-19T22:40:25.296968+00:00 app[web.1]: at IncomingMessage.emit (events.js:187:15) 2018-06-19T22:40:25.296970+00:00 app[web.1]: at endReadableNT (_stream_readable.js:1081:12) 2018-06-19T22:40:25.296971+00:00 app[web.1]: at process._tickCallback (internal/process/next_tick.js:63:19) (github-workflow.coffee is our custom code.) 
Oddly enough plugins we didn't write still seem to work. But I can't see a difference between how they're created. Same issue here. :( I found that by using both the master branch of this repo (instead of the 2.12.0-6 release version) and changing the hipchat.coffee send method, it seems to be back to normal now. The repeated connection offline output is cleaned up in this master, but the messageRoom method was still broken afterward. The commit is https://github.com/slepp/hubot-hipchat/commit/126f87591bc5555c1df329fca91141b7c3ac189c I've pushed the little change to my own repo: https://github.com/slepp/hubot-hipchat @slepp if that fixes the issue, could you send a PR? I can merge and release a new version later today. This may be fixed partly by #308 and #304, those two combined fixed things for us. Solved by migrating team to Slack :| same...
gharchive/issue
2018-06-14T20:02:12
2025-04-01T04:34:29.799693
{ "authors": [ "OrganicPanda", "andrewmeissner", "bringink", "dangerbell", "dfraser", "rberrelleza", "rwinikates", "slepp" ], "repo": "hipchat/hubot-hipchat", "url": "https://github.com/hipchat/hubot-hipchat/issues/307", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2462200846
Formula simplifier using e-graphs Here's a first attempt using egglog to remove temporary existentials, e.g.: exists x. a=b/\res=x/\x=1 ==> a=b/\res=1 (datatype Value (Num i64)) (declare True Value) (declare False Value) (datatype VarType) (datatype Term (Val Value) (Var VarType) (And Term Term) (Eq Term Term) (Ex VarType Term)) (function V (String) VarType) (function From (Term) VarType) (rewrite (And ?a ?b) (And ?b ?a)) (rewrite (Eq ?a ?b) (Eq ?b ?a)) (rewrite (Ex ?v (And (Eq (Var ?v) ?x) (Eq (Var ?v) ?y))) (Eq ?x ?y)) (let e1 (Ex (V "x") (And (Eq (Var (V "x")) (Val (Num 1))) (Eq (Var (V "x")) (Var (V "res")))))) (push) (run 4) (check (= e1 (Eq (Var (V "res")) (Val (Num 1))))) (extract e1) (pop) Dealing with binders A simple solution (used above), where variables are represented with strings and assumed to be fresh, may be sufficient. If not, here are a few pointers. Blog post Sec 2 of this paper Slide 50 Next steps Figure out the full list of simplifications currently implemented Try above example using ego Here are some more simplifications which are currently implemented: simplify_existential_locations: ex x; ens x->...; ex y; ens y->... ==> ex x; ens x->...; ex y; ens x->... if x=y appears somewhere remove_temp_vars: ex x; ens x=c; S ==> ex x; S if x does not occur in S and c is a constant optimize_existentials: ex x; S ==> S if x does not occur in S ex x; S; S1 ==> S; ex x; S1 if x does not occur in S but occurs in S1 remove_vars_occurring_twice: ex x; ens x=0/\res=x ==> ex x; ens 0=0/\res=0 propagate_function_stage_equalities: ens b=(fun ...)/\a=b; a(...) ==> ens b=(fun ...); b(...) simplify_pure: ens !(x=true) ==> ens x=false ens x=x ==> ens true ens c=c ==> ens true ens true/\P ==> ens P ens false/\P ==> ens false ens c+c ==> ens c1 if c1=c+C
gharchive/issue
2024-08-13T01:47:59
2025-04-01T04:34:29.806007
{ "authors": [ "dariusf" ], "repo": "hipsleek/Heifer", "url": "https://github.com/hipsleek/Heifer/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1911674203
List Inscriptions API times out while looking for recursive=true Describe the bug The list Inscriptions API times out while looking for recursive=true. Below is the error: To Reproduce Steps to reproduce the behavior: Execute the below command: curl -L 'https://api.hiro.so/ordinals/v1/inscriptions?recursive=true&limit=40' \ -H 'Accept: application/json' Expected behavior Expected behaviour is to return a json list of inscriptions that has recursive set to true. Screenshots Below is the error it returns data: { statusCode: 500, code: '57014', error: 'Internal Server Error', message: 'canceling statement due to statement timeout' } Desktop (please complete the following information): NA Smartphone (please complete the following information): NA Additional context It does work, but not always. I'd say 10% success rate. :tada: This issue has been resolved in version 1.2.2 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
gharchive/issue
2023-09-25T14:44:19
2025-04-01T04:34:29.814798
{ "authors": [ "blockstack-devops", "ronykris" ], "repo": "hirosystems/ordinals-api", "url": "https://github.com/hirosystems/ordinals-api/issues/236", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1296533477
fix: error when selecting tx request fee Try out this version of the Hiro Wallet - download extension builds. This PR fixes the issue found testing ledger with selecting a tx request fee. There was one spot where we weren't converting the decimal value to microstacks. cc/ @kyranjamie @fbwoolf @kyranjamie you ok if I merge this into dev so I can get the fix rebased into the release branch today?
gharchive/pull-request
2022-07-06T21:59:22
2025-04-01T04:34:29.816570
{ "authors": [ "fbwoolf" ], "repo": "hirosystems/stacks-wallet-web", "url": "https://github.com/hirosystems/stacks-wallet-web/pull/2543", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1745479090
Change UI to Create new wallet / restore wallet, connect Ledger Sometimes users will create a new wallet, unknowingly, rather than restore one. I think the wallet having a screen like this could help make the difference between restoring and creating a new wallet more clear. Three buttons instead of 1 button and two URLs for the other options. Current Adjustments have been made based on this issue with the version coming next Tuesday.
gharchive/issue
2023-06-07T09:40:42
2025-04-01T04:34:29.818718
{ "authors": [ "314159265359879" ], "repo": "hirosystems/wallet", "url": "https://github.com/hirosystems/wallet/issues/3819", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1702609397
how to access score attention in max level Hello Kevin, I'm trying to access the attention scores at the highest level. But when I ask for A_2_aux I have 48 values, given that the top k = 12 and the 10x zoom. However, when I ask for select_1 or select_2, it only returns 12 positions. I've tried in every way to find the exact position of the zoom level, but I only get the position of the top-k. Could you help me? Hi Maira, Yes, 48 attention scores are expected and the model selects again k=12 from those. At a high magnification, ZoomMIL only computes attention values for patches that were selected at the preceding lower magnification, not for all patches. So you can get attention values for all patches only at the lowest magnification. For any higher magnification, attention values are only computed for the selected patches.
gharchive/issue
2023-05-09T19:04:55
2025-04-01T04:34:29.834903
{ "authors": [ "Mairafatoretto", "kevthan" ], "repo": "histocartography/zoommil", "url": "https://github.com/histocartography/zoommil/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1251688348
Factorial precomputation SRM 830 Div.1 1000 Double factorial Factorial nPr nCr A variant that checks against the upper bound implied by the mod and correctly returns 0 when it is exceeded. Regarding that last variant: I implemented the class rather abstractly, so in that setting it would be a fairly unnatural assumption, and that kind of handling can be done quickly during a contest anyway, so I won't do it......
gharchive/issue
2022-05-28T17:45:34
2025-04-01T04:34:29.840983
{ "authors": [ "hitonanode" ], "repo": "hitonanode/cplib-cpp", "url": "https://github.com/hitonanode/cplib-cpp/issues/204", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1914587159
Qwen-7B runs out of GPU memory on a single card. What could be the reason? Command arguments: CUDA_VISIBLE_DEVICES=0 python src/train_bash.py --stage sft --model_name_or_path /qwen-7b/base_models/Qwen-7B --do_train True --overwrite_cache False --finetuning_type lora --template default --dataset_dir data --dataset alpaca_gpt4_zh --cutoff_len 1024 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 1000 --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 5 --save_steps 100 --warmup_steps 0 --flash_attn False --lora_rank 8 --lora_dropout 0.1 --lora_target c_attn --resume_lora_training True --output_dir /qwen-7b/saves/Qwen-7B/lora/2023-09-27-10-10-00 --fp16 True --val_size 0.1 --evaluation_strategy steps --eval_steps 100 --load_best_model_at_end True --plot_loss True RTX 3090, single card with 24 GB of VRAM. Reduce per_device_train_batch_size. Use bf16. @Essence9999 bf16 does not solve the out-of-memory problem. Has this been solved?
gharchive/issue
2023-09-27T02:42:50
2025-04-01T04:34:29.848381
{ "authors": [ "Essence9999", "hiyouga", "liuyijiang1994", "wqainlpdl" ], "repo": "hiyouga/LLaMA-Efficient-Tuning", "url": "https://github.com/hiyouga/LLaMA-Efficient-Tuning/issues/1053", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2092014667
关于训练的问题,期待回复 Reminder [X] I have read the README and searched the existing issues. Reproduction 想问问大家在微调Qwen-14B-chat的时候,有出现显存达到峰值再下降,再上升的现象吗? 控制台输出了: 2 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time 例如在训练一个step的时候显存达到23G,然后训练完减小,再上升 训练脚本为: deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py --deepspeed /home/ftpai/code/LLaMA-Factory/ds_config_3.json --stage sft --do_train --model_name_or_path /webtt/weight/Qwen-14B-Chat --dataset Qwenmax_finetune_sft --template qwen --finetuning_type lora --lora_target c_attn,attn.c_proj,w1,w2,mlp.c_proj --output_dir /webtt/weight/LLaMA_factory_output_model/Qwen14B-chat-sft --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 1 --save_steps 1000 --learning_rate 1e-6 --num_train_epochs 1.0 --lora_rank 64 --lora_dropout 0.05 --lora_alpha 16 --plot_loss --bf16 --lora_bf16_mode True --cutoff_len 4096 --report_to tensorboard --overwrite_output_dir 我采用的zero3训练: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Expected behavior No response System Info No response Others No response Reminder [x] I have read the README and searched the existing issues. Reproduction 想问问大家在微调Qwen-14B-chat的时候,有出现显存达到峰值再下降,再上升的现象吗? 控制台输出了: 2 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. 
If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time 例如在训练一个step的时候显存达到23G,然后训练完减小,再上升 训练脚本为: deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py --deepspeed /home/ftpai/code/LLaMA-Factory/ds_config_3.json --stage sft --do_train --model_name_or_path /webtt/weight/Qwen-14B-Chat --dataset Qwenmax_finetune_sft --template qwen --finetuning_type lora --lora_target c_attn,attn.c_proj,w1,w2,mlp.c_proj --output_dir /webtt/weight/LLaMA_factory_output_model/Qwen14B-chat-sft --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 1 --save_steps 1000 --learning_rate 1e-6 --num_train_epochs 1.0 --lora_rank 64 --lora_dropout 0.05 --lora_alpha 16 --plot_loss --bf16 --lora_bf16_mode True --cutoff_len 4096 --report_to tensorboard --overwrite_output_dir 我采用的zero3训练: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Expected behavior No response System Info No response Others _No response Reminder [x] I have read the README and searched the existing issues. Reproduction 想问问大家在微调Qwen-14B-chat的时候,有出现显存达到峰值再下降,再上升的现象吗? 控制台输出了: 2 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. 
If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time 例如在训练一个step的时候显存达到23G,然后训练完减小,再上升 训练脚本为: deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py --deepspeed /home/ftpai/code/LLaMA-Factory/ds_config_3.json --stage sft --do_train --model_name_or_path /webtt/weight/Qwen-14B-Chat --dataset Qwenmax_finetune_sft --template qwen --finetuning_type lora --lora_target c_attn,attn.c_proj,w1,w2,mlp.c_proj --output_dir /webtt/weight/LLaMA_factory_output_model/Qwen14B-chat-sft --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 1 --save_steps 1000 --learning_rate 1e-6 --num_train_epochs 1.0 --lora_rank 64 --lora_dropout 0.05 --lora_alpha 16 --plot_loss --bf16 --lora_bf16_mode True --cutoff_len 4096 --report_to tensorboard --overwrite_output_dir 我采用的zero3训练: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Expected behavior No response System Info No response Others No response 之前训练遇到过控制台类似的输出,“pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time ”,这种情况会明显降低训练速度,将--gradient_accumulation_steps 减少,能缓解出现提示的频率,最终通过减小--per_device_train_batch_size 能避免出现这种情况,而且训练速度会更快些尽管batch变小了。 Reminder [x] I have read the README and searched the existing issues. Reproduction 想问问大家在微调Qwen-14B-chat的时候,有出现显存达到峰值再下降,再上升的现象吗? 控制台输出了: 2 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. 
If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time 例如在训练一个step的时候显存达到23G,然后训练完减小,再上升 训练脚本为: deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py --deepspeed /home/ftpai/code/LLaMA-Factory/ds_config_3.json --stage sft --do_train --model_name_or_path /webtt/weight/Qwen-14B-Chat --dataset Qwenmax_finetune_sft --template qwen --finetuning_type lora --lora_target c_attn,attn.c_proj,w1,w2,mlp.c_proj --output_dir /webtt/weight/LLaMA_factory_output_model/Qwen14B-chat-sft --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 1 --save_steps 1000 --learning_rate 1e-6 --num_train_epochs 1.0 --lora_rank 64 --lora_dropout 0.05 --lora_alpha 16 --plot_loss --bf16 --lora_bf16_mode True --cutoff_len 4096 --report_to tensorboard --overwrite_output_dir 我采用的zero3训练: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Expected behavior No response System Info No response Others _No response Reminder [x] I have read the README and searched the existing issues. Reproduction 想问问大家在微调Qwen-14B-chat的时候,有出现显存达到峰值再下降,再上升的现象吗? 控制台输出了: 2 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. 
If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time 例如在训练一个step的时候显存达到23G,然后训练完减小,再上升 训练脚本为: deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py --deepspeed /home/ftpai/code/LLaMA-Factory/ds_config_3.json --stage sft --do_train --model_name_or_path /webtt/weight/Qwen-14B-Chat --dataset Qwenmax_finetune_sft --template qwen --finetuning_type lora --lora_target c_attn,attn.c_proj,w1,w2,mlp.c_proj --output_dir /webtt/weight/LLaMA_factory_output_model/Qwen14B-chat-sft --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 1 --save_steps 1000 --learning_rate 1e-6 --num_train_epochs 1.0 --lora_rank 64 --lora_dropout 0.05 --lora_alpha 16 --plot_loss --bf16 --lora_bf16_mode True --cutoff_len 4096 --report_to tensorboard --overwrite_output_dir 我采用的zero3训练: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Expected behavior No response System Info No response Others No response 之前训练遇到过控制台类似的输出,“pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time ”,这种情况会明显降低训练速度,将--gradient_accumulation_steps 减少,能缓解出现提示的频率,最终通过减小--per_device_train_batch_size 能避免出现这种情况,而且训练速度会更快些尽管batch变小了。 Reminder [x] I have read the README and searched the existing issues. Reproduction 想问问大家在微调Qwen-14B-chat的时候,有出现显存达到峰值再下降,再上升的现象吗? 控制台输出了: 2 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. 
If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time 例如在训练一个step的时候显存达到23G,然后训练完减小,再上升 训练脚本为: deepspeed --num_gpus 4 --master_port=9901 src/train_bash.py --deepspeed /home/ftpai/code/LLaMA-Factory/ds_config_3.json --stage sft --do_train --model_name_or_path /webtt/weight/Qwen-14B-Chat --dataset Qwenmax_finetune_sft --template qwen --finetuning_type lora --lora_target c_attn,attn.c_proj,w1,w2,mlp.c_proj --output_dir /webtt/weight/LLaMA_factory_output_model/Qwen14B-chat-sft --per_device_train_batch_size 1 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --logging_steps 1 --save_steps 1000 --learning_rate 1e-6 --num_train_epochs 1.0 --lora_rank 64 --lora_dropout 0.05 --lora_alpha 16 --plot_loss --bf16 --lora_bf16_mode True --cutoff_len 4096 --report_to tensorboard --overwrite_output_dir 我采用的zero3训练: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Expected behavior No response System Info No response Others No response 之前训练遇到过控制台类似的输出,“pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time ”,这种情况会明显降低训练速度,将--gradient_accumulation_steps 减少,能缓解出现提示的频率,最终通过减小--per_device_train_batch_size 能避免出现这种情况,而且训练速度会更快些尽管batch变小了。 我的训练batch_size都为1了,梯度累加更新为4了已经是最小的了 我的训练batch_size都为1了,梯度累加更新为4了已经是最小的了 你可以把offload_optimizer也释放到cpu啊,或者增加下gpu的使用数量,总之减小每台gpu显存使用应该会有效果。我之前就是占的太满了,81G的显存用到了80.9G,很容易频繁出现这个提示,降下来之后就没了,速度还能快些(正常点) 我使用8张显存为40G的A100来全参数微调llama3-8B,--per_device_train_batch_size 和--gradient_accumulation_steps 都设置成1,按理来讲显存应该是足够的,但是依然出会出现这个问题,这个应该不太合理吧,请问这个问题有解决方法吗 我使用8张显存为40G的A100来全参数微调llama3-8B,--per_device_train_batch_size 和--gradient_accumulation_steps 都设置成1,按理来讲显存应该是足够的,但是依然出会出现这个问题,这个应该不太合理吧,请问这个问题有解决方法吗 显存是否够用,需要关注下运行程序时每张gpu实时的显存使用情况吧。另外,启用zero3+offload、flash attention2(支持的话),还能大幅降低显存需求,调整完这些可以再关注下是否还有这个问题出现 Hello, does this warning affects also model performance or training loss? or only training time? Hello, does this warning affects also model performance or training loss? or only training time? +1, i also want to know
gharchive/issue
2024-01-20T11:13:33
2025-04-01T04:34:29.952945
{ "authors": [ "AGI-player", "LiweiPE", "ddf62", "mfj9999", "xx-Jiangwen" ], "repo": "hiyouga/LLaMA-Factory", "url": "https://github.com/hiyouga/LLaMA-Factory/issues/2258", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2480102158
请问预训练数据tokenizer后可以存储吗,下次运行直接加载tokenizer后的数据,不用每次都在线tokenizer? Reminder [X] I have read the README and searched the existing issues. System Info 请问预训练数据tokenizer后可以存储吗,下次运行直接加载tokenizer后的数据,不用每次都在线tokenizer? Reproduction 请问预训练数据tokenizer后可以存储吗,下次运行直接加载tokenizer后的数据,不用每次都在线tokenizer? Expected behavior No response Others No response https://github.com/hiyouga/LLaMA-Factory/tree/main/examples#preprocess-dataset
gharchive/issue
2024-08-22T07:59:21
2025-04-01T04:34:29.956701
{ "authors": [ "DataNinja42298", "hiyouga" ], "repo": "hiyouga/LLaMA-Factory", "url": "https://github.com/hiyouga/LLaMA-Factory/issues/5240", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2627864124
🛑 Griffo is down In db9d361, Griffo (https://www.griffo.de) was down: HTTP code: 0 Response time: 0 ms Resolved: Griffo is back up in 4ca333d after 11 minutes.
gharchive/issue
2024-10-31T21:28:59
2025-04-01T04:34:29.963733
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/10022", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2629605366
🛑 Griffo is down In 2d3354b, Griffo (https://www.griffo.de) was down: HTTP code: 0 Response time: 0 ms Resolved: Griffo is back up in abafcbb after 22 minutes.
gharchive/issue
2024-11-01T18:12:32
2025-04-01T04:34:29.966538
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/10087", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2243739724
🛑 LM Bogen WSV is down In 47275c2, LM Bogen WSV (https://www.lmbogenwsv.de) was down: HTTP code: 0 Response time: 0 ms Resolved: LM Bogen WSV is back up in 3067c34 after 7 minutes.
gharchive/issue
2024-04-15T13:47:19
2025-04-01T04:34:29.968836
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/1772", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2277469912
🛑 LM Bogen WSV is down In 150e5d8, LM Bogen WSV (https://www.lmbogenwsv.de) was down: HTTP code: 0 Response time: 0 ms Resolved: LM Bogen WSV is back up in 8f70191 after 7 minutes.
gharchive/issue
2024-05-03T11:19:54
2025-04-01T04:34:29.971128
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/2847", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2204069444
🛑 HJStrauss is down In 0e14c98, HJStrauss (https://www.hjstrauss.de) was down: HTTP code: 0 Response time: 0 ms Resolved: HJStrauss is back up in c467c01 after 11 minutes.
gharchive/issue
2024-03-23T22:14:41
2025-04-01T04:34:29.973402
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/457", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2356011467
🛑 LM Bogen WSV is down In 73b4949, LM Bogen WSV (https://www.lmbogenwsv.de) was down: HTTP code: 0 Response time: 0 ms Resolved: LM Bogen WSV is back up in aabe2ed after 33 minutes.
gharchive/issue
2024-06-16T21:58:42
2025-04-01T04:34:29.975671
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/5165", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2391881654
🛑 Opa Hansi is down In 016da28, Opa Hansi (https://www.opahansi.de) was down: HTTP code: 0 Response time: 0 ms Resolved: Opa Hansi is back up in 2379889 after 28 minutes.
gharchive/issue
2024-07-05T06:00:43
2025-04-01T04:34:29.978149
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/6309", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2406796972
🛑 Opa Hansi is down In 9ddf616, Opa Hansi (https://www.opahansi.de) was down: HTTP code: 0 Response time: 0 ms Resolved: Opa Hansi is back up in 9fcb2f8 after 8 minutes.
gharchive/issue
2024-07-13T08:25:29
2025-04-01T04:34:29.980417
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/6849", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2412909054
🛑 HJStrauss is down In 2a83be5, HJStrauss (https://www.hjstrauss.de) was down: HTTP code: 0 Response time: 0 ms Resolved: HJStrauss is back up in 367a425 after 8 minutes.
gharchive/issue
2024-07-17T07:46:00
2025-04-01T04:34:29.982666
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/7121", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2216611163
🛑 Griffo is down In 20d45d9, Griffo (https://www.griffo.de) was down: HTTP code: 0 Response time: 0 ms Resolved: Griffo is back up in 1c82e2d after 12 minutes.
gharchive/issue
2024-03-30T18:15:44
2025-04-01T04:34:29.985064
{ "authors": [ "hjstrauss" ], "repo": "hjstrauss/MonitorMySites", "url": "https://github.com/hjstrauss/MonitorMySites/issues/863", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
347434478
Conflict during instal, have to use force I can add the plugin fine. but if i remove android platform, then add it back again, i get the following... Failed to install 'cordova-plugin-kiosk': Error: There was a conflict trying to modify attributes with <edit-config> in plugin cordova-plugin-kiosk. The conflicting plugin, undefined, already modified the same attributes. The conflict must be resolved before cordova-plugin-kiosk can be added. You may use --force to add the plugin and overwrite the conflicting attributes. I can use force and it will install locally, but then my ionicframework build will break, since i dont think i can force the install during the packaging process. What am I doing wrong? I believe your ionicframework build should not be affected, as it should start from empty project - no removing+re adding. Can you paste commands you use? Building from a blank project still creates this problem. So Ionic framework errors when packaging. Just a quick fyi ... i know it is a fork of this one, but https://github.com/guatedude2/cordova-plugin-kiosk-launcher suffers from the same problem described here. I am using android@6.4.0 when i add the platform due to using the cordova-camera-preview plugin. But i see this being used on https://github.com/batout/ionic3-full-kiosk-mode-demo/tree/master/src which is using 6.3.0. With regards to commands, I try various orders of adding/removing the platform and the plugin. I feel like i have tried all the permutations. here are a list of the plugins i am using... cordova-android-support-gradle-release 1.4.4 "cordova-android-support-gradle-release" cordova-plugin-android-permissions 1.0.0 "Permissions" cordova-plugin-camera-preview 0.10.0 "cordova-plugin-camera-preview" cordova-plugin-compat 1.2.0 "Compat" cordova-plugin-device 2.0.2 "Device" cordova-plugin-file 6.0.1 "File" cordova-plugin-fullscreen 1.1.0 "cordova-plugin-fullscreen" cordova-plugin-hotspot 1.2.10 "Cordova HotSpot Plugin" cordova-plugin-insomnia 4.3.0 "Insomnia (prevent screen sleep)" cordova-plugin-ionic 5.0.5 "cordova-plugin-ionic" cordova-plugin-ionic-keyboard 2.1.2 "cordova-plugin-ionic-keyboard" cordova-plugin-ionic-webview 2.0.2 "cordova-plugin-ionic-webview" cordova-plugin-nativeaudio 3.0.9 "Cordova Native Audio" cordova-plugin-splashscreen 5.0.2 "Splashscreen" cordova-plugin-statusbar 2.4.2 "StatusBar" cordova-plugin-unswipable-android-status-bar 1.0.0 "UnswipableAndroidStatusBarPlugin" cordova-plugin-whitelist 1.3.3 "Whitelist" cordova-sqlite-storage 2.3.3 "Cordova sqlite storage plugin" cordovarduino 0.0.9 "Serial" It seems the plugin is removing the engine line in the config.xml when i run it as force? I have the same problme, did you find a way to fix it ?
gharchive/issue
2018-08-03T15:10:48
2025-04-01T04:34:29.991542
{ "authors": [ "FBNtbrunet", "hkalina", "l0c0luke" ], "repo": "hkalina/cordova-plugin-kiosk", "url": "https://github.com/hkalina/cordova-plugin-kiosk/issues/60", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1524558369
Multiple BLE Scanners for multiple sensors I have about 10 BLE temp sensors and after setup 10 scanners for 10 different names only last one scanning even when it's not connected in flow. How to gather data from multiple sensors correctly? You need only one BLE scan node. The others should be BLE connects. The way you do the connection is that: scan -> connect -> stop scan -> start scan -> connect the other one.
gharchive/issue
2023-01-08T15:34:29
2025-04-01T04:34:29.993419
{ "authors": [ "Andrulius", "hkayann" ], "repo": "hkayann/node-red-contrib-ble-sense", "url": "https://github.com/hkayann/node-red-contrib-ble-sense/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2239263559
grad_norm is 0 Hello, I found that the gradient is 0 during training. Always 0? That has never happened in my training. Did you change something? Feel free to re-open if there are any updates.
gharchive/issue
2024-04-12T07:13:35
2025-04-01T04:34:29.994445
{ "authors": [ "hkchengrex", "wenzyan" ], "repo": "hkchengrex/Cutie", "url": "https://github.com/hkchengrex/Cutie/issues/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1971682748
Where did the factor 2 go in rotary embedding? Well, thanks for your great video on YouTube. I just have one question about your implementation, in the method precompute_theta_pos_frequencies. The equation for theta contains a factor 2, as you wrote in your comment: theta_i = 10000^(-2(i-1)/dim) for i = [1, 2, ... dim/2]. However, in your code, I don't see a factor 2 added. Why is that? @ZhichaoDuan theta_numerator = torch.arange(0, head_dim, 2).float() This line creates a tensor starting at 0 and stepping by 2. If head_dim is 6, for instance, theta_numerator would be [0, 2, 4]. These values correspond to 2(i-1) where i starts from 1. In other words, for i = 1, 2, 3, ..., theta_numerator directly represents 2(i-1). Thank you @OmPrasad93, that's correct.
gharchive/issue
2023-11-01T04:36:10
2025-04-01T04:34:29.996444
{ "authors": [ "OmPrasad93", "ZhichaoDuan", "hkproj" ], "repo": "hkproj/pytorch-llama", "url": "https://github.com/hkproj/pytorch-llama/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2224161567
RuntimeError: Error(s) in loading state_dict for VAE_Encoder: In fact, I faced this problem when I run the demo, it seems like the keys after converted cannot be found. What should I do? raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for VAE_Encoder: Unexpected key(s) in state_dict: "1.groupnorm_1.weight", "1.groupnorm_1.bias", "1.conv_1.weight", "1.conv_1.bias", "1.groupnorm_2.weight", "1.groupnorm_2.bias", "1.conv_2.weight", "1.conv_2.bias", "2.groupnorm_1.weight", "2.groupnorm_1.bias", "2.conv_1.weight", "2.conv_1.bias", "2.groupnorm_2.weight", "2.groupnorm_2.bias", "2.conv_2.weight", "2.conv_2.bias", "4.groupnorm_1.weight", "4.groupnorm_1.bias", "4.conv_1.weight", "4.conv_1.bias", "4.groupnorm_2.weight", "4.groupnorm_2.bias", "4.conv_2.weight", "4.conv_2.bias", "5.groupnorm_1.weight", "5.groupnorm_1.bias", "5.conv_1.weight", "5.conv_1.bias", "5.groupnorm_2.weight", "5.groupnorm_2.bias", "5.conv_2.weight", "5.conv_2.bias", "7.groupnorm_1.weight", "7.groupnorm_1.bias", "7.conv_1.weight", "7.conv_1.bias", "7.groupnorm_2.weight", "7.groupnorm_2.bias", "7.conv_2.weight", "7.conv_2.bias", "8.groupnorm_1.weight", "8.groupnorm_1.bias", "8.conv_1.weight", "8.conv_1.bias", "8.groupnorm_2.weight", "8.groupnorm_2.bias", "8.conv_2.weight", "8.conv_2.bias", "10.groupnorm_1.weight", "10.groupnorm_1.bias", "10.conv_1.weight", "10.conv_1.bias", "10.groupnorm_2.weight", "10.groupnorm_2.bias", "10.conv_2.weight", "10.conv_2.bias", "11.groupnorm_1.weight", "11.groupnorm_1.bias", "11.conv_1.weight", "11.conv_1.bias", "11.groupnorm_2.weight", "11.groupnorm_2.bias", "11.conv_2.weight", "11.conv_2.bias", "12.groupnorm_1.weight", "12.groupnorm_1.bias", "12.conv_1.weight", "12.conv_1.bias", "12.groupnorm_2.weight", "12.groupnorm_2.bias", "12.conv_2.weight", "12.conv_2.bias", "14.groupnorm_1.weight", "14.groupnorm_1.bias", "14.conv_1.weight", "14.conv_1.bias", "14.groupnorm_2.weight", "14.groupnorm_2.bias", "14.conv_2.weight", "14.conv_2.bias". So the naming convention of the model needs to match the state dict naming convention you have to check the for example in encoder you need to check is it either groupnorm or groupNorm and the naming should exactly match. I had also got the same error. Try checking your class UNET(nn.Module) and the SwitchSequential . You might have done some mistake there. It solved my issue. Let me know if it works for you as well.
gharchive/issue
2024-04-04T00:24:00
2025-04-01T04:34:30.002460
{ "authors": [ "jianingPeng0382", "meankitdas", "parth394" ], "repo": "hkproj/pytorch-stable-diffusion", "url": "https://github.com/hkproj/pytorch-stable-diffusion/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2126847599
reasonCode is optional for nursing see business rules v0.13. Modify also on test server. #128
gharchive/issue
2024-02-09T10:22:01
2025-04-01T04:34:30.004706
{ "authors": [ "bdc-ehealth" ], "repo": "hl7-be/referral", "url": "https://github.com/hl7-be/referral/issues/285", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
997414737
Questionnaire Text in German, ValueSets in English (Oliver Egger, ahdis ag) ch.fhir.ig.ch-rad-order#0.1.0 /questionnaire.html The questionnaire is defined with German text descriptions; the CodeSystem/ValueSet RequestedService, however, is in English: http://fhir.ch/ig/ch-rad-order/CodeSystem-ch-rad-order-requested-service.html Propose that the German translation be added. Guidance is also needed on whether there will be different questionnaires per language or one questionnaire with all languages included (possibly something which should also be defined in CH-ORF). Oliver Egger, ahdis ag Value Set changed to: #RequestForPrecedentReport "Befundbericht früherer Untersuchung(en)" #RequestForPrecedentReportAndImages "Bilder und Befundberichte früherer Untersuchung(en)" #ImagingRequest "Bildgebenden Diagnostik" #RadIntervention "Interventionelle Radiologie" #SecondOpinion "Zweitmeinung" #ImagingRequestWithIntervention "Bildgebende Diagnostik und Intervention" #RemoteReporting "Fernbefundung" Issue reviewed and closed.
gharchive/issue
2021-09-15T19:04:01
2025-04-01T04:34:30.008264
{ "authors": [ "JBleuer", "PeroGrgic", "ig-feedback" ], "repo": "hl7ch/ch-rad-order", "url": "https://github.com/hl7ch/ch-rad-order/issues/14", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
815806195
Wrong UTF-8 character spacing when unicode is enabled What did you expect to happen? With unicode disabled, all the unicode characters seems to have the same spacing: What actually happened? When I enabled the unicode, I get this: Notice how the alignment has changed. All the other option remained the same. The code to reproduce the problem is: # Compute the SGP4 algorithm considering all variables in a state vector. function _sgp4_si(Δt, sgp4_gc, epoch, r_TEME::AbstractVector, v_TEME::AbstractVector) orb_TEME = rv_to_kepler(r_TEME, v_TEME, epoch) # Obtain the required mean elements to initialize the SGP4. a₀ = orb_TEME.a/sgp4_gc.R0 # ......................... Semi-major axis [ER] e₀ = orb_TEME.e # ............................. Excentricity [ ] i₀ = orb_TEME.i # ............................ Inclination [rad] Ω₀ = orb_TEME.Ω # ................................... RAAN [rad] ω₀ = orb_TEME.ω # ........................ Arg. of perigee [rad] M₀ = f_to_M(e₀, orb_TEME.f) # ........................... Mean anomaly [rad] This is how the unicode-fonts package works, by mapping different fonts to different code points to cover as much of the unicode spec as possible. Please raise this upstream instead. As this is not a Doom issue I will close it. Oh I see, sorry then! Thanks for the detailed explanation.
gharchive/issue
2021-02-24T20:10:12
2025-04-01T04:34:30.011364
{ "authors": [ "hlissner", "ronisbr" ], "repo": "hlissner/doom-emacs", "url": "https://github.com/hlissner/doom-emacs/issues/4689", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
537679749
YT Rwnd '18 is still the most disliked YT Rwnd Yet Mentioned that YT Rewind '18 is still the most disliked YT Rewind Yet. I really don't think it going to get any better any time soon. I'd really like to see a Peertube instance which could afford not to put data caps on creators. Haha, good to know that it hasn't changed, but I cannot accept this PR. The YT Rewind '18 reference is only an analogy, and is secondary to the point that is being made in this paragraph (that there are many ways to set up your environment). This parenthetical doesn't add value to that point. Thanks for the PR, in any case!
gharchive/pull-request
2019-12-13T17:40:22
2025-04-01T04:34:30.013088
{ "authors": [ "avronr", "hlissner" ], "repo": "hlissner/doom-emacs", "url": "https://github.com/hlissner/doom-emacs/pull/2190", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
775019488
faq: document gating bindings in :config default Closes #4485, based on the comment in #2267. Hi! Thanks for the PR but I'm afraid I must turn it down. We're transitioning to Discourse later this week, and our FAQ along with it (faq.org will be deleted soon). Therefore I'm not accepting PRs to the docs for the time being. Thanks for the help in any case! Hi! Thanks for the PR but I'm afraid I must turn it down. We're transitioning to Discourse later this week, and our FAQ along with it (faq.org will be deleted soon). Therefore I'm not accepting PRs to the docs for the time being. Thanks for the help in any case!
gharchive/pull-request
2020-12-27T11:16:32
2025-04-01T04:34:30.015001
{ "authors": [ "asymmetric", "hlissner" ], "repo": "hlissner/doom-emacs", "url": "https://github.com/hlissner/doom-emacs/pull/4490", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
728880838
Add Ristretto filter Why did you close it? Why did you close it? Colors dont match yet. I will open a new PR with all remaining filters.
gharchive/pull-request
2020-10-24T21:27:54
2025-04-01T04:34:30.016340
{ "authors": [ "ema2159", "minikN" ], "repo": "hlissner/emacs-doom-themes", "url": "https://github.com/hlissner/emacs-doom-themes/pull/531", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2095046545
Update articleheader.css Please always provide the GitHub issue(s) your PR is for, as well as test URLs where your change can be observed (before and after): Fix # Test URLs: Before: https://main--famous-smoke-cigaradvisor--hlxsites.hlx.page/cigaradvisor/posts/2023/12/top-10-best-new-cigars-of-2023 After:https://feature-article-header--famous-smoke-cigaradvisor--hlxsites.hlx.page/cigaradvisor/posts/2023/12/top-10-best-new-cigars-of-2023 @bstopp I am already fetching these for indix(es) https://github.com/hlxsites/famous-smoke-cigaradvisor/blob/27d403eac0311b09cd02911c034a8068694232a1/cigaradvisor/blocks/articleheader/articleheader.js#L12 https://github.com/hlxsites/famous-smoke-cigaradvisor/blob/27d403eac0311b09cd02911c034a8068694232a1/cigaradvisor/blocks/articleheader/articleheader.js#L21
gharchive/pull-request
2024-01-23T00:57:30
2025-04-01T04:34:30.019592
{ "authors": [ "kailasnadh790" ], "repo": "hlxsites/famous-smoke-cigaradvisor", "url": "https://github.com/hlxsites/famous-smoke-cigaradvisor/pull/102", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2041184979
App sec jb delete non default repo Please always provide the GitHub issue(s) your PR is for, as well as test URLs where your change can be observed (before and after): Fix # Test URLs: Before: https://main--prisma-cloud-docs--hlxsites.hlx.page/ After: https://--prisma-cloud-docs--hlxsites.hlx.page/ Worker: https://prisma-cloud-docs-production.adobeaem.workers.dev/?branch= Commenting out a link to non-default repo content (content is ready but not ready to merge - with R&D)
gharchive/pull-request
2023-12-14T08:22:49
2025-04-01T04:34:30.022307
{ "authors": [ "JBakstPaloAlto" ], "repo": "hlxsites/prisma-cloud-docs", "url": "https://github.com/hlxsites/prisma-cloud-docs/pull/282", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2470312632
Record events as users navigate workflow Jira link Part of https://github.com/hmcts/slack-help-bot/issues/383 Change description Store events as user navigates workflow for analytics. Code is a bit complex as its trying to record events even when parts of it are skipped due to no results so that numbers are accurate for recording and analytics. Testing done Tested these scenarios manually: Knowledge store and related issues Related issues No related issues or knowledge store All is displaying the right results customEvents | where name !contains "Test" | summarize event_count = count() by bin(timestamp, 20m), name | render columnchart Checklist [ ] commit messages are meaningful and follow good commit message guidelines [ ] README and other documentation has been updated / added (if needed) [ ] tests have been updated / new tests has been added (if needed) [ ] Does this PR introduce a breaking change Plan Result (cftptl) No changes. Your infrastructure matches the configuration.
gharchive/pull-request
2024-08-16T13:46:07
2025-04-01T04:34:30.374090
{ "authors": [ "hmcts-platform-operations", "timja" ], "repo": "hmcts/slack-help-bot", "url": "https://github.com/hmcts/slack-help-bot/pull/425", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2142846949
Replaced incorrect messageType CC928C to CC025C in generateIE025Message Replaced incorrect messageType CC928C to CC025C in generateIE025Message Local Test Results:
gharchive/pull-request
2024-02-19T17:18:31
2025-04-01T04:34:30.375640
{ "authors": [ "rakeshshamantula" ], "repo": "hmrc/common-transit-convention-traders-test-support", "url": "https://github.com/hmrc/common-transit-convention-traders-test-support/pull/135", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2457381236
Feature/di 244 write unit tests for instances This pull request introduces a comprehensive suite of tests for the Actions and related components in the uk.gov.hmrc.saliabilitiessandpitapi project. The tests cover various aspects of functionality including validation actions, configuration handling, and response handling. The focus is on ensuring that our actions and services correctly interact and handle edge cases as expected. How to Test Run All Tests: Execute all tests using your build tool (e.g., sbt test) to verify that all scenarios pass as expected Looks like some formatting issues are causing the pr builder to fail
gharchive/pull-request
2024-08-09T08:18:19
2025-04-01T04:34:30.378745
{ "authors": [ "RobertBuczek", "Spectre99x" ], "repo": "hmrc/sa-liabilities-sandpit-api", "url": "https://github.com/hmrc/sa-liabilities-sandpit-api/pull/12", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1303863095
[BIOMAGE-1903] add two checkboxes for compliance Description Details URL to issue https://biomage.atlassian.net/browse/BIOMAGE-1903 Link to staging deployment URL (or set N/A) N/A Links to any PRs or resources related to this PR https://github.com/hms-dbmi-cellenics/api/pull/394 https://github.com/hms-dbmi-cellenics/iac/pull/490 Integration test branch master Merge checklist Your changes will be ready for merging after all of the steps below have been completed. Code updates Have best practices and ongoing refactors being observed in this PR [ ] Migrated any selector / reducer used to the new format. Manual/unit testing [ ] Tested changes using InfraMock locally or no tests required for change, e.g. Kubernetes chart updates. [ ] Validated that current unit tests for code work as expected and are sufficient for code coverage or no unit tests required for change, e.g. documentation update. [ ] Unit tests written or no unit tests required for change, e.g. documentation update. Integration testing You must check the box below to run integration tests on the latest commit on your PR branch. Integration tests have to pass before the PR can be merged. Without checking the box, your PR will not pass the required status checks for merging. [ ] Started end-to-end tests on the latest commit. Documentation updates [ ] Relevant Github READMEs updated or no GitHub README updates required. [ ] Relevant Wiki pages created/updated or no Wiki updates required. Optional [ ] Staging environment is unstaged before merging. [ ] Photo of a cute animal attached to this PR. is this not going to be staged? or it's not required? is this not going to be staged? or it's not required? It is staged
gharchive/pull-request
2022-07-13T19:44:11
2025-04-01T04:34:30.387242
{ "authors": [ "cosa65", "kafkasl" ], "repo": "hms-dbmi-cellenics/ui", "url": "https://github.com/hms-dbmi-cellenics/ui/pull/772", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
185765817
The state of DynamoDB support Sorry if this isn't appropriate for an issue, but I figured it may be useful to others using this library. What is the current state of DynamoDB support? The initial branch is nearly a year old. We're happy to work on this ourselves if necessary, but if there was specific problems you ran across when trying to make it work, that would be great info. Any advice on tackling this before we dive in? Since this initial work we have implemented data handlers as standalone projects: relationaldb (the most mature of the handlers and the one we're actively using), mongodb (quite mature) and elasticsearch. I believe the DynamoDB handler never progressed much further than the experimental stage (@theninj4 can provide further clarification). We could perhaps cherry pick that code into a standalone project and you could then take it from there? We're happy to accept external contributions. Hi pmcnr-hx, Pulling out the dynamodb branch into a separate project we could fork and PR would be great. We are currently using jsonapi pretty lightly in production, so we will probably start very thin (simple read and write support), but having a project to work off of would be great! The current state of the DynamoDB branch is... hmm. I took the empty jsonapi-server handler interface and started pushing around some generic dynamodb code to flesh out what it might eventually look like. I was going in somewhat blind to how DynamoDB should be used - the idea was to get some static code working, then alter it to be dynamic and conform to the resource specification. I shelved the idea when I found the limitation of 5x secondary indexes and ran out of enthusiasm for their vague error messages which really slowed progress. TLDR; DynamoDB is great for specifically tailored use cases - making a generic DynamoDB handler for resources of any shape/size is going to be a LOT of hassle. That is good info to have! Personally I'm new to DynamoDB too, and have only worked with the MongoDB handler. I took a look at the branch and kind of got that impression. Perhaps we will settle on the tailored DynamoDB handler for our use cases, and if we need a more generalized handler we can reassess our options then. Thanks for the reply!
gharchive/issue
2016-10-27T20:00:45
2025-04-01T04:34:30.493265
{ "authors": [ "dlras2", "pmcnr-hx", "theninj4" ], "repo": "holidayextras/jsonapi-server", "url": "https://github.com/holidayextras/jsonapi-server/issues/211", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
253735666
Fix node executable path The path to the node executable may be different between the Linux distributions, but it looks like /usr/bin/node is more standard. The safest is... #!/usr/bin/env node ...it looks for node in the PATH. https://stackoverflow.com/a/24253067/604296
gharchive/pull-request
2017-08-29T16:58:47
2025-04-01T04:34:30.496063
{ "authors": [ "forabi", "msafih" ], "repo": "hollowverse/common", "url": "https://github.com/hollowverse/common/pull/11", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1920117025
Make the examples discoverable by adding a Gallery Right now you have to browse around and read the code to identify the example you need. Or maybe even run them to figure out how they work. I think we should make them more discoverable. I would propose Short term solution: Document each example with a little bit of text and a .png, mp4 or .gif. This could be added to the README or in a seperate gallery.md document. Long term solution: Create a gallery like https://github.com/MarcSkovMadsen/sphinx-gallery-panel-example or deploy the examples This one has been solved
gharchive/issue
2023-09-30T04:41:09
2025-04-01T04:34:30.512470
{ "authors": [ "MarcSkovMadsen" ], "repo": "holoviz-topics/panel-chat-examples", "url": "https://github.com/holoviz-topics/panel-chat-examples/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1377557918
support enum form field Same as date/string: I want to define and use enum types. Expectation: enum value names displayed as dropdown Not as easy as other variables, the Bpmn/ModelInstance API does not support this, so we have to a) enhance the camunda core API before or b) work directly with the xml document model.
gharchive/issue
2022-09-19T07:59:06
2025-04-01T04:34:30.540600
{ "authors": [ "jangalinski" ], "repo": "holunda-io/camunda-admin-process-registry", "url": "https://github.com/holunda-io/camunda-admin-process-registry/issues/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1782455096
Python Matter Server doesn't show up over LAN via macvlan network. I've been discussing this in the HA Discord. But I'm just gonna write it here so it can get a bit more attention by others. I've been trying to commission a TP-Link Tapo P125M Wi-Fi Smart Plug to my Home Assistant Container instance running the docker image for the Python Matter Server exposed to my LAN over a macvlan network alongside a few other things on it. However, whenever I view all exisitng mDNS entries on my network, the Matter Server doesn't show up as a _matter._tcp. entry. And any attempts to commission the device fail with the Python Matter Server complaining about timing out with mDNS resolution. 2023-06-30 12:05:43 matter-server chip.CTL[1] DEBUG Key Found 97 2023-06-30 12:05:43 matter-server chip.CTL[1] DEBUG StorageAdapter::SetKeyValue: Key = f/1/k/0, Value = 0x7f11537fca30 (97) 2023-06-30 12:05:43 matter-server PersistentStorage[1] INFO SetSdkKey: f/1/k/0 = b'\x15$\x01\x00$\x02\x016\x03\x15$\x04\x00%\x05\x80\x070\x06\x10\xf2\x1b\xe1\xd8\x8c\xe3\xce\x18fH\x94O\xbe}m\x92\x18\x15$\x04\x00$\x05\x000\x06\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x18\x15$\x04\x00$\x05\x000\x06\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x18\x18%\x07\xff\xff\x18' 2023-06-30 12:05:43 matter-server PersistentStorage[1] INFO Committing... 2023-06-30 12:05:43 matter-server matter_server.server.device_controller[1] DEBUG CHIP Device Controller Initialized 2023-06-30 12:05:43 matter-server matter_server.server.storage[1] DEBUG Loading persistent settings from /data/3126079813310142521.json 2023-06-30 12:05:43 matter-server matter_server.server.storage[1] DEBUG Started. 2023-06-30 12:05:43 matter-server matter_server.server.device_controller[1] DEBUG Loaded 0 nodes 2023-06-30 12:05:43 matter-server matter_server.server.vendor_info[1] INFO Loading vendor info from storage. 2023-06-30 12:05:43 matter-server matter_server.server.vendor_info[1] INFO Loaded 81 vendors from storage. 2023-06-30 12:05:43 matter-server matter_server.server.vendor_info[1] INFO Fetching the latest vendor info from DCL. 2023-06-30 12:05:44 matter-server matter_server.server.vendor_info[1] INFO Fetched 81 vendors from DCL. 2023-06-30 12:05:44 matter-server matter_server.server.vendor_info[1] INFO Saving vendor info to storage. 2023-06-30 12:05:44 matter-server matter_server.server.server[1] DEBUG Webserver initialized. 
2023-06-30 12:05:45 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Connected from 192.168.1.2 2023-06-30 12:05:45 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Received: { "message_id": "4ede87e77e4547c08ea93ccbaf271016", "command": "start_listening", "args": null } 2023-06-30 12:05:45 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Received CommandMessage(message_id='4ede87e77e4547c08ea93ccbaf271016', command='start_listening', args=None) 2023-06-30 12:05:45 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Handling command start_listening 2023-06-30 12:06:59 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Received: { "message_id": "a412eea54196475798b185b14796e63b", "command": "commission_with_code", "args": { "code": "10232554859" } } 2023-06-30 12:06:59 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Received CommandMessage(message_id='a412eea54196475798b185b14796e63b', command='commission_with_code', args={'code': '10232554859'}) 2023-06-30 12:06:59 matter-server matter_server.server.client_handler[1] DEBUG [139712493335184] Handling command commission_with_code 2023-06-30 12:06:59 matter-server matter_server.server.helpers.paa_certificates[1] INFO Fetching the latest PAA root certificates from DCL. 2023-06-30 12:07:00 matter-server matter_server.server.helpers.paa_certificates[1] INFO Fetched 0 PAA root certificates from DCL. 2023-06-30 12:07:00 matter-server matter_server.server.helpers.paa_certificates[1] INFO Fetching the latest PAA root certificates from Git. 2023-06-30 12:07:00 matter-server matter_server.server.helpers.paa_certificates[1] INFO Fetched 0 PAA root certificates from Git. 2023-06-30 12:07:00 matter-server chip.CTL[1] INFO Setting attestation nonce to random value 2023-06-30 12:07:00 matter-server chip.CTL[1] INFO Setting CSR nonce to random value 2023-06-30 12:07:00 matter-server chip.CTL[1] DEBUG Stopping commissioning discovery over DNS-SD 2023-06-30 12:07:00 matter-server chip.CTL[1] INFO Starting commissioning discovery over BLE 2023-06-30 12:07:00 matter-server chip.CTL[1] INFO Starting commissioning discovery over DNS-SD 2023-06-30 12:07:00 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth0: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:00 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth1: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:00 matter-server matter_server.server.storage[1] DEBUG Saved data to persistent storage 2023-06-30 12:07:00 matter-server chip.DL[1] DEBUG TRACE: Bus acquired for name C-0001 2023-06-30 12:07:00 matter-server chip.DL[1] DEBUG PlatformBlueZInit init success 2023-06-30 12:07:00 matter-server chip.BLE[1] INFO BLE removing known devices. 2023-06-30 12:07:00 matter-server chip.BLE[1] INFO BLE initiating scan. 2023-06-30 12:07:00 matter-server chip.DL[1] ERROR Long dispatch time: 144 ms, for event type 2 2023-06-30 12:07:01 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 
2023-06-30 12:07:01 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth0: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:01 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth1: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:01 matter-server chip.DIS[1] DEBUG mDNS broadcast had only partial success: 2 successes and 2 failures. 2023-06-30 12:07:01 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:02 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:03 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:03 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth0: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:03 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth1: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:03 matter-server chip.DIS[1] DEBUG mDNS broadcast had only partial success: 2 successes and 2 failures. 2023-06-30 12:07:04 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:04 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:04 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:04 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:04 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:05 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:05 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:05 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:06 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:06 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:06 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:07 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:07 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:07 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth0: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:07 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth1: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:07 matter-server chip.DIS[1] DEBUG mDNS broadcast had only partial success: 2 successes and 2 failures. 2023-06-30 12:07:08 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:10 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 
2023-06-30 12:07:10 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:10 matter-server chip.BLE[1] DEBUG Device F8:A2:6D:D6:73:93 does not look like a CHIP device. 2023-06-30 12:07:10 matter-server chip.BLE[1] ERROR BLE scan error: src/platform/Linux/bluez/ChipDeviceScanner.cpp:154: CHIP Error 0x00000032: Timeout 2023-06-30 12:07:10 matter-server chip.BLE[1] INFO Scan complete. No matching device found. 2023-06-30 12:07:15 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth0: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:15 matter-server chip.DIS[1] DEBUG Warning: Attempt to mDNS broadcast failed on eth1: src/inet/UDPEndPointImplSockets.cpp:411: OS Error 0x02000063: Cannot assign requested address 2023-06-30 12:07:15 matter-server chip.DIS[1] DEBUG mDNS broadcast had only partial success: 2 successes and 2 failures. 2023-06-30 12:07:30 matter-server chip.CTL[1] ERROR Discovery timed out 2023-06-30 12:07:30 matter-server chip.CTL[1] DEBUG Stopping commissioning discovery over BLE 2023-06-30 12:07:30 matter-server chip.BLE[1] ERROR BleConnectionDelegate::CancelConnection is not implemented. 2023-06-30 12:07:30 matter-server chip.-[1] ERROR src/platform/Linux/BLEManagerImpl.cpp:732: CHIP Error 0x0000002D: Not Implemented at src/controller/SetUpCodePairer.cpp:551 2023-06-30 12:07:30 matter-server chip.CTL[1] DEBUG Stopping commissioning discovery over DNS-SD 2023-06-30 12:07:30 matter-server chip.ZCL[1] ERROR Secure Pairing Failed 2023-06-30 12:07:30 matter-server matter_server.server.client_handler[1] ERROR [139712493335184] Error handling message: CommandMessage(message_id='a412eea54196475798b185b14796e63b', command='commission_with_code', args={'code': '10232554859'}) Traceback (most recent call last): File "/usr/local/lib/python3.11/site-packages/matter_server/server/client_handler.py", line 188, in _run_handler result = await result ^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/matter_server/server/device_controller.py", line 172, in commission_with_code raise NodeCommissionFailed( matter_server.common.errors.NodeCommissionFailed: Commission with code failed for node 10 2023-06-30 12:07:31 matter-server chip.DIS[1] ERROR Timeout waiting for mDNS resolution. A few other things in these debug logs that concern me: The logs are claiming that there is an eth0 and an eth1, but at the moment. The only ethernet port on my host machine is an ethernet interface labeled enp2s0 The logs are complaining about being unable to assign the requested address Another thing, mainly because it seems to have caught Marcelveldt's eye in the Discord matter channel as being a big offender 2023-06-30 12:07:00 matter-server chip.DIS[1] DEBUG mDNS broadcast had only partial success: 2 successes and 2 failures. The smart plug is currently commissioned to Apple Home and Google Home without any issues. And both the Home Assistant Container instance and Python Matter Server are running on the same device, and are connected to the same LAN that the P125M is on. My network is currently managed by a Pfsense firewall running version 2.6.0 that has the WAN set to DHCP/DHCP6 and the LAN set to a static ipv6/track interface of WAN for ipv6. Avahi is currently installed and enabled, and set to repeat packets across subnets for a few extra IoT devices on another network. 
And here is my docker compose for the Matter server and macvlan:

networks:
  a-lan:
    driver: macvlan
    driver_opts:
      parent: enp2s0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
        - subnet: xxxx:xxxx:xxxx:948::/64
          gateway: fe80::xxxx:xxxx:fe70:c331

services:
  matter-server:
    hostname: matter-server
    mac_address: D2-C8-9C-B1-50-2A
    image: ghcr.io/home-assistant-libs/python-matter-server:stable
    container_name: matter-server
    command: "--log-level debug"
    restart: unless-stopped
    # Required for mDNS to work correctly
    security_opt:
      - apparmor:unconfined
    volumes:
      - /docker/home-assistant/python-matter-server/data:/data/
      - /run/dbus:/run/dbus:ro
    ports:
      - 5580:5580
    networks:
      homeassistant:
      a-lan:
        ipv4_address: 192.168.1.4
        ipv6_address: 'xxxx:xxxx:xxxx:948:1335:ad2d:8def:1b3f'

Ok, so I made sure to turn off mDNS repeating within the Avahi package, like the documentation says to. But I am still not getting any different results.
I've just gotten Matter working as a docker container running in a macvlan network, if you are still interested?
Hey @kelvtech-co-uk, could you share the docker compose file?
Have a look at my post here; it shows you the compose details plus a little further explanation, and it also shows you the clear warning message regarding support from one of the devs.
Further edit: I have been reading this and this. Changes to the sysctl interface files aren't needed on the container host, and I no longer change the .accept_ra or .accept_ra_rt_info_max_plen values on my unraid server. However, I believe docker defaults the container sysctl interface files to .forwarding=0 and .accept_ra=1, which is fine for receiving v6 routes, but .accept_ra_rt_info_max_plen is also set to 0, which means it's going to ignore and not take the Thread network routes being advertised, as they are 64 bits in length. As such this needs to be overridden when the container is built, so the lines below are still needed in the container compose file.

    sysctls:
      net.ipv6.conf.eth0.accept_ra_rt_info_max_plen: 64
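An aside that is not taken from the thread itself: once the container is up, a few lines of Python run inside it can confirm that the IPv6 sysctls discussed above actually took effect. The interface name eth0 and the key names simply mirror the compose snippet; adjust them if your container exposes a different interface.

# Hypothetical helper: print the effective IPv6 sysctls from inside the container.
from pathlib import Path

def read_sysctl(name: str) -> str:
    # e.g. net.ipv6.conf.eth0.accept_ra -> /proc/sys/net/ipv6/conf/eth0/accept_ra
    return (Path("/proc/sys") / name.replace(".", "/")).read_text().strip()

for key in (
    "net.ipv6.conf.eth0.accept_ra",
    "net.ipv6.conf.eth0.accept_ra_rt_info_max_plen",
    "net.ipv6.conf.eth0.forwarding",
):
    print(f"{key} = {read_sysctl(key)}")  # expect accept_ra_rt_info_max_plen = 64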
gharchive/issue
2023-06-30T12:35:37
2025-04-01T04:34:30.566482
{ "authors": [ "S0ulf3re", "cbezmen", "kelvtech-co-uk" ], "repo": "home-assistant-libs/python-matter-server", "url": "https://github.com/home-assistant-libs/python-matter-server/issues/347", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1057226100
Consider adding coreutils to images
Hello! I've been using scripts within Home Assistant to do various things, and I've become stuck when trying to use the -d argument with date. Turns out it's because coreutils is not installed on the Home Assistant images.
I've got around this for now by creating a Dockerfile to add it in, but this means that I will no longer get automatic updates (using watchtower) and will have to build an image each time a new release comes out.
Could you describe the use case for it? The container is not really meant as a command-line environment.
Ignore this. While running the modified image there appeared to be many issues with integrations.
The use case, by the way, was that I'm hitting an API in a bash script where I want to get a date range, so I was using date -dlast-monday to get last Monday's date. This doesn't work without the mentioned package, but date -d "-$24:00:00" does, so I've modified the script to use this.
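An aside not from the original thread: if pulling coreutils into the image stays awkward, the same "last Monday" arithmetic can be done with nothing but the Python standard library, for example from a small script invoked by a command_line sensor. The function below is a sketch of that idea, not part of Home Assistant.

from datetime import date, timedelta

def last_monday(today=None):
    # GNU `date -d last-monday` returns the most recent Monday strictly
    # before today, so a Monday "today" maps to seven days ago.
    today = today or date.today()
    days_back = today.weekday() or 7  # Monday has weekday() == 0
    return today - timedelta(days=days_back)

print(last_monday().isoformat())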
gharchive/issue
2021-11-18T11:08:07
2025-04-01T04:34:31.021440
{ "authors": [ "Shaun-Harrison", "frenck" ], "repo": "home-assistant/docker-base", "url": "https://github.com/home-assistant/docker-base/issues/153", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
532455381
Support encrypted communication with HA
Just for that extra bit of security, we should implement encrypted communication with HA over the webhook like iOS supports. Some notes on this:
- We need to use an updated and peer-reviewed library to do this. @JBassett says he looked at https://github.com/terl/lazysodium-android, one option I suggested. The other option is https://github.com/joshjdevl/libsodium-jni. Sadly, both are Java. The only Kotlin option hasn't been updated in 2+ years.
- Initial support has been added to HA Core to enable encryption on existing registrations (at https://github.com/home-assistant/home-assistant/pull/31743). Call webhook action enable_encryption on HA Core 0.106+ and receive a JSON body with secret as the response.
- The HA version can be derived via the get_config webhook action.
- The logic in the app should look like this (pseudocode): if app_has_no_encryption_key and ha_version > 0.106: attempt_to_enable_encryption()
- If enabling encryption fails due to an older HA version, then we should either keep doing that check every so often (whenever the app starts?) or have a button in settings to allow users to manually enable encryption.
Do we know the HA version that we talk to? Seems like something we should know in the app to be able to enable/disable functionality.
@balloob We can get that info from the get_config webhook action.
Initial work https://github.com/home-assistant/home-assistant-android/tree/feature/webhookEncryption
I am running into issues with HA decrypting my requests... If anyone wants to take a crack at it please do!
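A rough sketch of the flow described above, written against HA's mobile_app webhook as I understand it. The URL, the webhook id and the exact field names ("type", "data", "version", "secret") are assumptions to be checked against the mobile_app documentation, and this is of course not the Android implementation itself, just the decision logic expressed in plain Python.

import requests

WEBHOOK_URL = "https://example.local:8123/api/webhook/<webhook_id>"  # hypothetical

def webhook(action, data=None):
    payload = {"type": action}
    if data is not None:
        payload["data"] = data
    return requests.post(WEBHOOK_URL, json=payload, timeout=10).json()

config = webhook("get_config")
major, minor = (int(x) for x in config["version"].split(".")[:2])

# Only HA Core 0.106+ understands enable_encryption.
if (major, minor) >= (0, 106):
    secret = webhook("enable_encryption")["secret"]
    # persist `secret` and use it as the libsodium key for future payloads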
gharchive/issue
2019-12-04T05:15:30
2025-04-01T04:34:31.130066
{ "authors": [ "JBassett", "balloob", "robbiet480" ], "repo": "home-assistant/home-assistant-android", "url": "https://github.com/home-assistant/home-assistant-android/issues/97", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
344147226
0.74.1 - http component start failed with cors_allowed_origins configured
Home Assistant release with the issue: 0.74.1
Last working Home Assistant release (if known): 0.74.0
Operating environment (Hass.io/Docker/Windows/etc.):
Component/platform: https://www.home-assistant.io/components/http/
Description of problem: http start failed
Problem-relevant configuration.yaml entries (fill out even if it seems unimportant):

http:
  api_password: some_password
  cors_allowed_origins:
    - https://google.com
    - https://home-assistant.io

Traceback (if applicable):

2018-07-24 20:02:02 ERROR (MainThread) [homeassistant.core] Error doing job: Task exception was never retrieved
Traceback (most recent call last):
  File "/usr/src/app/homeassistant/components/http/__init__.py", line 132, in start_server
    await server.start()
  File "/usr/src/app/homeassistant/components/http/__init__.py", line 284, in start
    await self.app.startup()
  File "/usr/local/lib/python3.6/site-packages/aiohttp/web_app.py", line 278, in startup
    await self.on_startup.send(self)
  File "/usr/local/lib/python3.6/site-packages/aiohttp/signals.py", line 35, in send
    await receiver(*args, **kwargs)
  File "/usr/src/app/homeassistant/components/http/cors.py", line 53, in cors_startup
    cors.add(route)
  File "/usr/local/lib/python3.6/site-packages/aiohttp_cors/cors_config.py", line 263, in add
    return self._cors_impl.add(routing_entity, config)
  File "/usr/local/lib/python3.6/site-packages/aiohttp_cors/cors_config.py", line 137, in add
    routing_entity, parsed_config)
  File "/usr/local/lib/python3.6/site-packages/aiohttp_cors/urldispatcher_router_adapter.py", line 240, in set_config_for_routing_entity
    resource))
ValueError: CORS is already configured for <PlainResource /auth/token> resource.

Additional information: I've updated to the latest Docker image 0.74.1, but after the update HA wasn't working anymore (the frontend couldn't be reached). After a roll-back to 0.74.0 it was fine again. Not so critical to me, but I thought I'd share it here.
In 0.74.1, we auto-loaded the auth component with the http component, which caused this issue. I can reproduce it.
Version 0.74.0 is the last build which is working for me; if I update to the latest build, I get an HTTP 502 Bad Gateway error. What changed since v0.74.0 in terms of that?
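A purely illustrative aside, not the actual upstream fix: the failure boils down to the same resource being handed to aiohttp_cors twice, once via the auto-loaded auth component and once via http. One way to guard against double registration looks roughly like this; route.resource.canonical reflects my recollection of aiohttp's API and may differ between versions.

_cors_configured = set()

def add_cors_route(cors, route):
    """Register a route with aiohttp_cors at most once per canonical path."""
    path = route.resource.canonical
    if path in _cors_configured:
        return
    _cors_configured.add(path)
    cors.add(route)

# Tiny demo with stand-in objects (no aiohttp required):
class _Resource:
    def __init__(self, canonical):
        self.canonical = canonical

class _Route:
    def __init__(self, path):
        self.resource = _Resource(path)

class _Cors:
    def add(self, route):
        print("configured", route.resource.canonical)

cors = _Cors()
add_cors_route(cors, _Route("/auth/token"))
add_cors_route(cors, _Route("/auth/token"))  # skipped instead of raising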
gharchive/issue
2018-07-24T18:13:23
2025-04-01T04:34:31.171045
{ "authors": [ "awarecan", "darox" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/issues/15659", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
370993656
Cannot init HomeKit Controller – no entities added even if accessory was found I made a fresh installation of Homebridge which works as a bridge for eWeLink account (via plugin). All works fine if I add this bridge to Apple Home – all accessories are discovered and works fine. But when I try to configure: discovery: enable: - homekit My bridge is discovered fine, but when I click on "Configure" in UI, enter the PIN code (correct one) – it closes popup and gives no further info, including no accessories added. Log says: 2018-10-17 09:49:45 ERROR (MainThread) [homeassistant.core] Error executing service <ServiceCall configurator.configure (c:64b3d1c0028a445caf00508fd2f8e227): fields=code=031-45-154, configure_id=1735570544-1> Traceback (most recent call last): File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/core.py", line 1177, in _event_to_service_call await service_handler.func(service_call) File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/configurator.py", line 221, in async_handle_service_call call.data.get(ATTR_FIELDS, {})) File "/usr/lib/python3.5/asyncio/futures.py", line 380, in __iter__ yield self # This tells Task to wait for completion. File "/usr/lib/python3.5/asyncio/tasks.py", line 304, in _wakeup future.result() File "/usr/lib/python3.5/asyncio/futures.py", line 293, in result raise self._exception File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run result = self.fn(*self.args, **self.kwargs) File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/homekit_controller/__init__.py", line 218, in device_config_callback self.accessory_setup() File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/homekit_controller/__init__.py", line 139, in accessory_setup data = self.get_json('/accessories') File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/homekit_controller/__init__.py", line 166, in get_json response = self.securecon.get(target) AttributeError: 'NoneType' object has no attribute 'get' After restart I can see no further config of bridge and no accessories, but when I try to add bridge into Apple Home – it says that is it already paired with another device. What HASS discovers as homekit is: [homeassistant.components.discovery] Found new service: homekit {'host': '192.168.1.31', 'name': 'eWeLink bridge-CAD8', 'port': 51826, 'properties': {'id': 'CC:22:3D:E3:CE:30', 'pv': '1.0', 'md': 'eWeLink bridge', 'sf': '0', 's#': '1', 'ci': '2', 'sh': 'lGPgxg==', 'ff': '0', 'c#': '2'}, 'hostname': 'CC_22_3D_E3_CE_30.local.'} I downgraded HASS to 0.79.3 and re-paired bridge – now without error. 
Plus I enabled logging and I can see: Oct 17 11:07:25 raspberrypi hass[2637]: 2018-10-17 11:07:25 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Discovered unique device CC:22:3D:E3:CE:30 Oct 17 11:07:25 raspberrypi hass[2637]: 2018-10-17 11:07:25 INFO (Thread-7) [homeassistant.components.homekit_controller] Setting up Homekit device eWeLink bridge Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found Unknown Service: 49FB9D4D-0FEA-4BF1-8FA6-E7B18AB86DCE Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found switch Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found switch Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found switch Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found switch Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found switch Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found accessory-information Oct 17 11:07:26 raspberrypi hass[2637]: 2018-10-17 11:07:26 DEBUG (Thread-7) [homeassistant.components.homekit_controller] Found switch But still – no switches in UI and entities. Some more research. It looks like this https://github.com/home-assistant/home-assistant/blob/master/homeassistant/components/homekit_controller/__init__.py#L157 is returning None or discovery.load_platform is not working. Home assistant is using homekit library https://github.com/jlusiardi/homekit_python When you get JSON response from the device it looks something like this: { 'accessories': [ { 'aid': 1, 'services': [ { 'type': '3E', 'characteristics': [ { 'maxLen': 64, 'type': '20', 'perms': [ 'pr' ], 'format': 'string', 'value': 'Koogeek', 'iid': 3 }, { 'maxLen': 64, 'type': '21', 'perms': [ 'pr' ], 'format': 'string', 'value': 'O1EU', 'iid': 4 }, { 'format': 'bool', 'perms': [ 'pw' ], 'type': '14', 'iid': 6 }, { 'value': '2.3.6', 'type': '52', 'perms': [ 'pr' ], 'format': 'string', 'iid': 49 } ], 'iid': 1 } ... You can notice that there are listed services of that accessory and each service has its type. 
This id of type is being mapped in the homekit library: https://github.com/jlusiardi/homekit_python/blob/master/homekit/model/services/service_types.py
Then in Home Assistant we are mapping it using "HOMEKIT_ACCESSORY_DISPATCH" to Home Assistant's entities. So your service type may be "49" and it is being mapped to "switch" in the homekit library, but then in Home Assistant there is no mapping from switch to switch, so it doesn't get registered.
To fix it, try editing the file homeassistant/components/homekit_controller/__init__.py:
After line 26
'outlet': 'switch',
Add:
'switch': 'switch',
Restart Home Assistant.
@drndos I had the same issue; editing the file homeassistant/components/homekit_controller/__init__.py with the suggested change helped to fix it. Thanks!
Finally I found the same issue as mine. I solved this problem too via just adding a line. I think #17916 is good to merge.
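To make the suggested edit concrete, the dispatch table in homeassistant/components/homekit_controller/__init__.py ends up containing something like the following. The neighbouring entries are illustrative from memory and may differ in the actual release; the one-line fix is the added 'switch' key.

HOMEKIT_ACCESSORY_DISPATCH = {
    "lightbulb": "light",     # illustrative neighbours
    "outlet": "switch",
    "switch": "switch",       # the added mapping: HomeKit 'switch' services
                              # now register as Home Assistant switch entities
    "thermostat": "climate",
}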
gharchive/issue
2018-10-17T10:00:46
2025-04-01T04:34:31.180277
{ "authors": [ "cadavre", "drndos", "ey-", "nodeover" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/issues/17544", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
550191236
IHC integration seems to not load light (dimmers)
Home Assistant release with the issue: 0.103.6
Last working Home Assistant release (if known): Do not know
Operating environment (Hass.io/Docker/Windows/etc.): Docker, Raspbian Buster, RPi3
Integration: https://www.home-assistant.io/integrations/ihc/
Description of problem: The integration does not automatically populate with dimmers (light). I have tested and manually put a light id in the configuration; that id then shows up after a reboot of HA. This is a snapshot from my log file. Could it be that the integration somehow fails to issue a service=load_platform.light event?

2020-01-15 14:00:08 DEBUG (MainThread) [homeassistant.core] Bus:Handling <Event platform_discovered[L]: service=load_platform.binary_sensor, platform=ihc,
2020-01-15 14:00:09 DEBUG (MainThread) [homeassistant.core] Bus:Handling <Event platform_discovered[L]: service=load_platform.switch, platform=ihc,

Problem-relevant configuration.yaml entries (fill out even if it seems unimportant):

ihc:
  - url: 'http://192.168.1.x'
    username: aaa
    password: bbb
    info: True
    # light:
    #   - id: 17034077
    #     name: Erikdim
    #     dimmable: True

Traceback (if applicable):
Additional information:
This issue can be closed. It turned out that Swedish WL dimmers were not configured in the auto setup file. Look here for reference: https://www.dingus.dk/customizing-home-assistent-ihc-auto-setup/
gharchive/issue
2020-01-15T13:37:48
2025-04-01T04:34:31.187092
{ "authors": [ "deepspace1" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/issues/30789", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
571973388
Error in the rest.sensor. No status unavailable.
The problem: the sensor is never set to unavailable when the request fails; it keeps its previous state instead.
Environment
Home Assistant release with the issue: 106.0
Last working Home Assistant release (if known): 105.5
Operating environment (Hass.io/Docker/Windows/etc.): Docker
Integration causing this issue: rest
Link to integration documentation on our website: https://www.home-assistant.io/integrations/rest/
Problem-relevant configuration.yaml

sensors:
  - platform: rest
    resource: http://192.168.1.12:8123/api/states/sensor.ha_uptime
    name: ha1_uptime
    unit_of_measurement: minutes
    force_update: true
    headers:
      authorization: !secret ha1_token
    value_template: "{{ value_json.state }}"

Traceback/Error logs

2020-02-27 12:28:03 ERROR (SyncWorker_12) [homeassistant.components.rest.sensor] Error fetching data: http://192.168.1.12:8123/api/states/sensor.ha_uptime failed with HTTPConnectionPool(host='192.168.1.12', port=8123): Max retries exceeded with url: /api/states/sensor.ha_uptime (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8e25b490>: Failed to establish a new connection: [Errno 111] Connection refused'))
2020-02-27 12:28:03 ERROR (MainThread) [homeassistant.helpers.entity] Update for sensor.ha1_uptime fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 279, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 476, in async_device_update
    await self.hass.async_add_executor_job(self.update)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/src/homeassistant/homeassistant/components/rest/sensor.py", line 205, in update
    content_type = self.rest.headers.get("content-type")
AttributeError: 'NoneType' object has no attribute 'get'

Additional information
Possible solution to the problem:

--- sensor.py.org 2020-02-27 11:35:12.120159619 +0200
+++ sensor.py     2020-02-27 11:35:12.084160035 +0200
@@ -202,7 +202,10 @@
         self.rest.update()
         value = self.rest.data
         _LOGGER.debug("Data fetched from resource: %s", value)
-        content_type = self.rest.headers.get("content-type")
+        if self.rest.headers is not None:
+            content_type = self.rest.headers.get("content-type")
+        else:
+            content_type = None

@zvldz Would you please fill out the configuration.yaml section?
Filled out the configuration section.
Fixed here https://github.com/home-assistant/home-assistant/pull/32309
@springstan Sorry I didn't see your PR before I put in #32309. I added a test for this failure state.
@bdraco no worries ✌ thanks for fixing this issue and adding a test for it though :)
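For completeness, the guard the patch introduces can be shown in a few self-contained lines. RestData below is a stand-in for the component's helper object, not the real class; the point is only that a failed request (headers is None) must yield "no content type" instead of raising an AttributeError.

class RestData:
    def __init__(self, headers=None, data=None):
        self.headers = headers
        self.data = data

def content_type_of(rest):
    # Mirrors the patched lines in sensor.py: guard the attribute access.
    if rest.headers is not None:
        return rest.headers.get("content-type")
    return None

assert content_type_of(RestData()) is None                                     # failed request
assert content_type_of(RestData({"content-type": "text/xml"})) == "text/xml"   # normal case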
gharchive/issue
2020-02-27T10:31:11
2025-04-01T04:34:31.192899
{ "authors": [ "bdraco", "springstan", "zvldz" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/issues/32254", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
264697857
Trådfri ValueError on init with async code Home Assistant release (hass --version): 0.55.0 Python release (python3 --version): Python 3.5.3 Component/platform: pytradfri Description of problem: Upgrading to 0.55 with new async code results in ValueError on tradfri light init. Further, after HASS has been running for a while status is not shown in HASS (all marked as off). Expected: No ValueError and possibility to see trådfri lights status without restarting. Problem-relevant configuration.yaml entries and steps to reproduce: tradfri: host: 10.0.2.252 api_key: !secret tradfri_api_key allow_tradfri_groups: false Install new aiocoap libs (tried all fixes suggested here) Start HASS See error in log Traceback (if applicable): 2017-10-11 17:41:50 ERROR (MainThread) [homeassistant.core] Error doing job: Task exception was never retrieved Traceback (most recent call last): File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step result = coro.send(None) File "/home/homeassistant/hass/lib/python3.5/site-packages/homeassistant/helpers/entity_component.py", line 388, in async_add_entities yield from asyncio.wait(tasks, loop=self.component.hass.loop) File "/usr/lib/python3.5/asyncio/tasks.py", line 346, in wait raise ValueError('Set of coroutines/Futures is empty.') ValueError: Set of coroutines/Futures is empty. After HASS has been running I get the following error message: 2017-10-11 20:40:26 ERROR (MainThread) [homeassistant.core] Error doing job: Fatal read error on socket transport Traceback (most recent call last): File "/usr/lib/python3.5/asyncio/selector_events.py", line 723, in _read_ready data = self._sock.recv(self.max_size) OSError: [Errno 113] No route to host In those cases it is not possible to see status of Trådfri lights (all are marked as off). Additional info: I have tried to find any clues in the debug output (logger: default: debug) but with no success. However, I have a limited understanding of Python and HASS. I might therefore have missed something. Since 0.55 I'm also experiencing that after a few hours of running Home Assistant the lights status is not reported correctly. I can still turn them on and set their brightness, but their state will remain off. As a side effect the Flux Switch (and various automations) don't work. I think fixing the init problem should be solved by replacing line 388 in homeassistant/helpers/entity_component.py (at least I get no error messages and it passes testing) yield from asyncio.wait(tasks, loop=self.component.hass.loop) with if tasks: yield from asyncio.wait(tasks, loop=self.component.hass.loop) However, I am still trying to find the calling function for the second error. I should mention, that I do not get those errors in my log, but I am seeing the same behaviour where the light state does not update. Maybe this is a separate issue. In my case, I can also get lights with their state stuck to 'on'. The 'stuck state' issue seems to persist with the pytradfri update on the dev branch https://github.com/home-assistant/home-assistant/commit/d16c5f904668a035995d985a3ec372320d008459 Same problem here, after a few hours I'm unable to control my tradfri with homeassistant, but it works with tradfri apps or the tradfri remote. To solve the problem I need to restart homeassistant. Persists in HASS 0.57.0 with pytradfri 4.0.1. The problem is still here with hass 0.57.1 So, I get a slightly different error message after HASS 0.57 and pytradfri 4.0.x. 
I get the initial error message as stated before but not the "Error doing job: Fatal read error on socket transport" that I got after a few hours, instead I get the following directly after the first initial error message: ERROR (MainThread) [homeassistant.core] Error doing job: Task exception was never retrieved Traceback (most recent call last): File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step result = coro.send(None) File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/api/aiocoap_api.py", line 149, in request result = yield from self._execute(api_commands) File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/api/aiocoap_api.py", line 107, in _execute yield from self._observe(api_command) File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/api/aiocoap_api.py", line 169, in _observe api_command.result = _process_output(r) File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/command.py", line 71, in result self._result = self._process_result(value) File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/resource.py", line 46, in observe_callback callback(self) File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/light/tradfri.py", line 318, in _observe_update self._light_data.hex_color_inferred File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/device.py", line 281, in hex_color_inferred *xy_brightness_to_rgb(scale(x), scale(y), self.dimmer) File "/srv/homeassistant/lib/python3.5/site-packages/pytradfri/color.py", line 142, in xy_brightness_to_rgb brightness = ibrightness / 255. TypeError: unsupported operand type(s) for /: 'NoneType' and 'float' If it matters I have three TRÅDFRI-lamps and one Hue. Overall, HASS and tradfri works even more poorly now. When I (re)start HASS the status is off (bulbs that are on are stated as off and vice versa) and sometimes I cannot change state. I tried some debug (but I'm not an expert). 
When tradfri is working I have something like these in the logs: 2017-11-07 12:26:42 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Status: 2.05 Content, Received: {"3":{"0":"IKEA of Sweden","1":"TRADFRI bulb E27 WS opal 980lm","2":"","3":"1.2.217","6":1},"9001":"sala","9002":1493665166,"9020":1509983471,"9003":65540,"3311":[{"5850":0,"5851":254,"5711":370,"5709":30138,"5710":26909,"5706":"f1e0b5","9003":0}],"9054":0,"5750":2,"9019":1} 2017-11-07 12:26:42 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Status: 2.05 Content, Received: {"3":{"0":"IKEA of Sweden","1":"TRADFRI bulb E14 WS opal 400lm","2":"","3":"1.2.217","6":1},"9001":"sala piccola","9002":1499359276,"9020":1509983474,"9003":65542,"3311":[{"5850":0,"5851":254,"5711":370,"5709":30138,"5710":26909,"5706":"f1e0b5","9003":0}],"9054":0,"5750":2,"9019":1} 2017-11-07 12:28:22 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Executing 192.168.20.15 put ['15001', 65540]: {'3311': [{'5850': 1}]} 2017-11-07 12:28:22 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Status: 2.04 Changed, Received: when tradfri is NOT working I don't have the "Content, Received" line any more: 2017-11-07 12:25:39 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Executing 192.168.20.15 put ['15001', 65540]: {'3311': [{'5850': 1}]} 2017-11-07 12:25:39 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Status: 2.04 Changed, Received: 2017-11-07 12:25:39 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Executing 192.168.20.15 put ['15001', 65540]: {'3311': [{'5850': 1}]} 2017-11-07 12:25:39 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Status: 2.04 Changed, Received: 2017-11-07 12:25:39 DEBUG (MainThread) [pytradfri.api.aiocoap_api] Executing 192.168.20.15 put ['15001', 65540]: {'3311': [{'5850': 1}]} I hope this is helpful to resolve the problem I get this aswell, however not using trådfri. I am using rflink for lights, but get the same error in logs and not able to control lamps after error appears in logs... I solved the problem with an upgrade of python from 3.5 to 3.6 (I needed to install with pip cython and DTLSSocket) @mvivaldi, I also upgraded as I did not see the edit, the issue persists. I get the problem after four hours now. I get the following error on init: ERROR (MainThread) [homeassistant.core] Error doing job: Task exception was never retrieved Traceback (most recent call last): File "/usr/local/lib/python3.6/asyncio/tasks.py", line 180, in _step result = coro.send(None) File "/srv/homeassistant/lib/python3.6/site-packages/homeassistant/helpers/entity_component.py", line 405, in async_add_entities yield from asyncio.wait(tasks, loop=self.component.hass.loop) File "/usr/local/lib/python3.6/asyncio/tasks.py", line 304, in wait raise ValueError('Set of coroutines/Futures is empty.') ValueError: Set of coroutines/Futures is empty. and then the error message I had in the beginning is back (after 4 hours or so) ERROR (MainThread) [homeassistant.core] Error doing job: Fatal read error on socket transport Traceback (most recent call last): File "/usr/local/lib/python3.6/asyncio/selector_events.py", line 724, in _read_ready data = self._sock.recv(self.max_size) OSError: [Errno 113] No route to host I don't have any error in the logfile, I don't know what to do anymore, I restart homessistant every few hours via cron to "solve" the problem Solved (I hope!). The problem was my home assistant and my tradfri weren't in the same network (no firewall involved only a router). Now they are in the same network and everything is working. @mvivaldi, is it working for you? 
I've noticed something very, very strange that I wonder whether you can check. Before starting hass I run pip3 install --upgrade homeassistant pytradfri (all below in a virtual env). Everything updates to the latest version (homeassistant==0.59.2 and pytradfri==5.2.0), but when I launch hass (during startup) it reverts pytradfri to version 4.1.0. Could you please run:
1 - "pip3 install --upgrade homeassistant pytradfri"
2 - pip3 freeze (and copy it here)
3 - run hass
4 - stop hass
5 - pip3 freeze (and copy here).
I have no clue what happens, but the pytradfri directory is removed and replaced on hass startup. I just want to check if it is the same for you.
Same problem here on hass.io but not on hassbian. In hass.io trådfri gets unresponsive in approximately 2 days of uptime. Any fix for this?
@ggravlingen @lwis Does one of you have any insight into what's going on here? It's no longer clear from this thread what the problem is, if it's in relation to the unlimited observation; I have a connection to my Gateway open 24/7 without issues.
The problem is that if the gateway just stops emitting events without closing the socket, there's not much we can do to correct it. As you mentioned, I'm hesitant about putting in any workarounds for sporadic memory leaks on the hardware. Do you have a busy setup with many lights frequently changing?
I've the same issue on hass 0.69.1.
I am still experiencing the issue on 0.74, and my workaround still works. However, this issue is a mess of different problems with slightly different symptoms. My problem, the states stop updating after some time without any log output, seems to be the same as #14386, so I think it should be tracked there. If nobody experiences the error as described by @comra, I suggest closing this issue.
@max-te I'm having the same issue here. Your workaround should be working great, but I guess that a reset once every 2 minutes is quite a lot. I would suggest extending that duration to something like an hour, which would be easier on system load but still enough to reset the event system when something goes wrong. I mostly experience this issue and/or #14386 when I switch multiple lights or switches at once. And it is quite annoying having to restart HASS every time I'm going to sleep.
Should be fixed by https://github.com/home-assistant/home-assistant/pull/18708
@cgarwood #18708 was not merged and unfortunately does not fix this issue completely. Can you re-open this issue?
@max-te, did you find out if the fix you wrote relates to the increased resource use you reported?
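Going back to the if tasks: guard suggested earlier in this thread, here is a minimal, self-contained illustration of the startup ValueError and of that guard. The real change belongs in homeassistant/helpers/entity_component.py; this snippet only demonstrates the idea on a current Python, using async/await rather than the 3.5-era yield from.

import asyncio

async def demo():
    tasks = []                      # a platform that discovered no entities
    try:
        await asyncio.wait(tasks)   # asyncio.wait() raises on an empty collection
    except ValueError as err:
        print("without guard:", err)

    if tasks:                       # the proposed guard: simply skip the wait
        await asyncio.wait(tasks)
    print("with guard: no error")

asyncio.run(demo())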
gharchive/issue
2017-10-11T18:57:43
2025-04-01T04:34:31.215164
{ "authors": [ "Nicxe", "alex3305", "asosso", "cgarwood", "comra", "lwis", "max-te", "mvivaldi", "olskar", "sveip" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/issues/9822", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
266166118
AirVisual unable to retrieve data at this location Home Assistant release (hass --version): 0.55.1 Python release (python3 --version): 3.4.2 Component/platform: AirVisual Description of problem: After starting Home Assistant for a short while the AirVisual sensors starts to error (all is OK on boot). All sensors begin returning unknown states. I noticed the issue began with 0.55.0. At first I thought it might be an issue with their API, but it seems odd that it works for a short time. Error log: 2017-10-17 15:51:18 ERROR (MainThread) [homeassistant.components.sensor.airvisual] Unable to retrieve data on this location: {'pollution_info': {'maincn': 'o3', 'aqicn': 13, 'ts': '2017-10-17T11:00:00.000Z', 'mainus': 'o3', 'aqius': 17}, 'state': 'West Sussex', 'latitude': [blanked], 'longitude': [blanked], 'city': 'Lodsworth', '_client': <pyairvisual.client.Client object at 0x6c812030>, 'country': 'United Kingdom', '_throttle': {1842766864: [<_thread.lock object at 0x6c84a968>, datetime.datetime(2017, 10, 17, 14, 40, 58, 195423, tzinfo=<UTC>)]}, '_radius': 500} Expected: AirQuality should report the correct values for my location. Problem-relevant configuration.yaml entries and steps to reproduce: platform: airvisual api_key: !secret airvisual_key monitored_conditions: - us radius: 500 Your configuration doesn't mention whether you're using latitude/longitude or city/state/country. I assume that since you've blanked them, you're using latitude/longitude? Without knowing anything further, I'm guessing that you're inputting a latitude/longitude that AirVisual can't consistently resolve for some reason (I've experienced this at times with their API). If you're willing to input the location information directly, I just tried this config and it worked as expected – give it a shot: sensor: - platform: airvisual api_key: !secret airvisual_key monitored_conditions: - us show_on_map: false city: lodsworth state: west-sussex country: uk Thanks for the quick suggestion. My long and lat are currently set elsewhere as mentioned in the docs: if excluded, the longitude/latitude defined under the homeassistant key in configuration.yaml will be used I will add a location as you have suggested and will report back if it does the trick. Thanks, @scottsweb. Even if it does work, it would still be interesting to debug why your coordinates aren't working. I'm available in the forums, so feel free to shoot me a DM! Looks like switching the location style has not helped: It works for maybe 5-10 minutes. Will DM my lat/long. @scottsweb: Got it; will be on the lookout for your DM!
gharchive/issue
2017-10-17T15:15:09
2025-04-01T04:34:31.223395
{ "authors": [ "bachya", "scottsweb" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/issues/9923", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
386798446
Refactor script helper actions into their own methods Description: This cleans up the script helper and puts all actions into their own methods. Needed because now that services propagate exceptions, we need to update the edges to properly handle errors. Checklist: [x] The code change is tested and works locally. [x] Local tests pass with tox. Your PR cannot be merged unless tests pass [x] There is no commented out code in this PR. If the code does not interact with devices: [x] Tests have been added to verify that the new code works. Since it's just a refactor, no new logic, will merge and then continue work on #18965
gharchive/pull-request
2018-12-03T12:35:27
2025-04-01T04:34:31.226722
{ "authors": [ "balloob" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/pull/18962", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
544308782
Revert state standby back to state off Breaking Change: State Standby is reverted back to State Off. User defined scripts, automations, etc., will need to be updated. Description: Fixes unforeseen front end card issue with change to state_standby from #28261. Service turn_off called when state is standby instead of turn_on when power button is clicked. Related issue (if applicable): fixes #28891 Checklist: [x] The code change is tested and works locally. [x] Local tests pass with tox. Your PR cannot be merged unless tests pass [x] There is no commented out code in this PR. [x] I have followed the development checklist This is a workaround for a frontend bug. We should not change the backend for this. This is a workaround for a frontend bug. We should not change the backend for this. @ballob So what would be the best direction for this? Can we do a simple frontend change? Like home-assistant/homeassistant-polymer#4250
gharchive/pull-request
2020-01-01T04:36:15
2025-04-01T04:34:31.230561
{ "authors": [ "balloob", "ktnrg45" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/pull/30344", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
179286215
Correctly define requirements for emulated hue Description: Emulated hue sometimes starts without its components installed. Related issue (if applicable): fixes #3518 Checklist: If code communicates with devices, web services, or a: [x] Local tests with tox run successfully. Your PR cannot be merged unless tests pass Oh, and due to the CRLF fix you have a conflict @balloob I believe this is ready to merge now. 🐬 !
gharchive/pull-request
2016-09-26T17:15:32
2025-04-01T04:34:31.233028
{ "authors": [ "balloob", "lwis" ], "repo": "home-assistant/home-assistant", "url": "https://github.com/home-assistant/home-assistant/pull/3535", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1809615963
Fix recursive rule detection
This PR is an alternative to #1398 which still uses the entire stack of visited rules, only it doesn't "remember" them for each element in a sequence.
When the same rule is used multiple times in a sentence, recursion detection returns a false positive result. Here's an example:

expansion_rules:
  test_sentence: "<rule1> <rule2>"
  rule1: "test"
  rule2: "sentence"

The above example works. However, without the fix, the one below results in an error, although it shouldn't:

expansion_rules:
  test_sentence: "(<rule1> <rule2> | [<rule2>] <rule1>)"
  rule1: "test"
  rule2: "sentence"

@synesthesiam does this make #1398 obsolete?
Yes it does. Get the latest version and test with your test sentences; you'll see that it doesn't fail anymore.
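A conceptual sketch of the path-based check this PR describes (purely illustrative; the real implementation in the intents tooling handles alternatives and optionals and uses different names): a rule only counts as recursive if it appears twice on the current expansion path, not merely more than once in a sentence.

def expand(rule_name, rules, stack=()):
    if rule_name in stack:
        raise ValueError(f"recursive rule: {' -> '.join(stack + (rule_name,))}")
    out = []
    for token in rules[rule_name].split():
        if token.startswith("<") and token.endswith(">"):
            out.append(expand(token[1:-1], rules, stack + (rule_name,)))
        else:
            out.append(token)
    return " ".join(out)

rules = {
    "test_sentence": "<rule1> <rule2> <rule1>",  # the same rule used twice: fine
    "rule1": "test",
    "rule2": "sentence",
}
print(expand("test_sentence", rules))  # -> "test sentence test", no false positive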
gharchive/pull-request
2023-07-18T10:09:17
2025-04-01T04:34:31.236198
{ "authors": [ "mib1185", "tetele" ], "repo": "home-assistant/intents", "url": "https://github.com/home-assistant/intents/pull/1447", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1802001178
Email notifications do not support configuring a QQ mailbox as the sending address. Email notifications do not support configuring a QQ mailbox as the sending address. Is it possible to use my own SMTP server instead? Or to send with the mail command that ships with Linux?
gharchive/issue
2023-07-13T01:49:07
2025-04-01T04:34:31.351128
{ "authors": [ "OwnerCM", "StackExplode" ], "repo": "hongyonghan/Docker_Microsoft365_E5_Renew_X", "url": "https://github.com/hongyonghan/Docker_Microsoft365_E5_Renew_X/issues/72", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2080239244
SSG: Handling when response.text() can not parse This line uses response.text(). However, it is better to handle ArrayBuffer and other contents that cannot be parsed with text(). https://github.com/honojs/hono/blob/04b686ca39421d28d3aa3f8491c13e1f7d1c8de1/src/helper/ssg/index.ts#L37 Regarding the SSG issue, I'll tackle it here first
gharchive/issue
2024-01-13T10:54:05
2025-04-01T04:34:31.352570
{ "authors": [ "watany-dev", "yusukebe" ], "repo": "honojs/hono", "url": "https://github.com/honojs/hono/issues/1962", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1600178150
How to send a file with Hono under Node.JS? With the Node.JS adapter, how do you send a file stream to the Response? I found this in @hono/node-server/serve-static: const content = readFileSync(path) if (content) { // ... return c.body(content) } But this will read the entire file into memory (!) before sending it - that's not acceptable as more than a proof of concept. (and really should be fixed in that package as well.) I found this article, which talks about different ways to get a Web Stream in Node.JS. Looking at documentation here, it doesn't look like that's going to work, since "it will not close the FileHandle automatically", and I won't have any way to close it after returning the Response. (?) I looked through stream and web stream and file system docs, and there are so many things that sound like it's going to be the thing I'm looking for (Stream, Readable, ReadableStream) but I can't figure out how to convert or wrap any of these into a web stream. I had this working in Express only because Express provides a helper function to do it, I guess. Maybe we need some platform helper functions (as I've also proposed here) - will anyone be able to figure this out on their own? The 2ality.com article is 40 pages and doesn't seem to even fully answer the basic question of how to open a file. I am so lost. 😥 I gave this another shot today - it still doesn't work for my use-case. What I built in Express (and successfully ported to Koa) is an image server middleware - to implement that, I need: some means of reading/writing files (for caching) a production-ready means of creating a file-response with proper streaming, cache-control, resume, etc. The file-system API doesn't exist and the static file server middleware doesn't export createStreamBody, which I had to copy/paste, nor does it export a function to create a file-response from a given path. While I was able to implement these things myself with Hono, the whole thing ends up being Node.JS-dependent anyway, defeating the purpose of using with a cross-runtime framework. Hono compares itself to Express on multiple points (performance, file size, etc.) and comes off as an alternative to Express - I'm sure it's great for APIs and stuff, but it just doesn't seem like this is ready to replace either Express or Koa as a general purpose web server framework. There are significant limitations, and although you can work around these by stepping outside the framework and using Node.JS APIs, this won't result in cross-platform applications. You should feel free to close this issue, if these requirements are simply out of scope for this project - but you might should consider clarifying (in documentation etc.) the limitations compared to generic web servers. Uncovering these limitations on your own is very time consuming - if you're trying to switch from Express or Koa, it's likely you simply won't find what you need, because it doesn't exist. Hono looks cool for a lot of things! 🙂 and I wish it could replace Express and Koa, but it just can't do that for every use case in those frameworks at the moment. Still a very cool framework, just not for everyone. 🙂 Hi @mindplay-dk ! We are trying it here: honojs/node-server#18 After I take a look on the PR, I found out that you merge some PR. Does that mean this issue already being fix or is it not? And if it being fix, is it already documented? I'm using node 18. 
I don't know why, but this requires an explicit conversion:

import { Readable } from 'stream'

const stream = Readable.toWeb(nodeStream) as ReadableStream
return c.body(stream)

Example with the archiver library for generating a dynamic archive:

import archiver from 'archiver'
import { Readable } from 'stream'

const app = new Hono()

app.get('/file.zip', async (c) => {
  const zip = archiver('zip', {
    zlib: { level: 5 },
  })
  downloadFiles(zip).finally(() => zip.finalize())

  const stream = Readable.toWeb(zip) as ReadableStream
  return c.body(stream)
})

For my use case I was using the Vite build tool, and all I needed was to serve the correct file path. The file structure was:

|assets|
  |index-adsfioe.js
  |index-4oj03n.css
|index.html
|someSvg.svg

// arg one is the request URL, arg two is the location from the project root
app.use('/assets/*', serveStatic({ root: './dist' }))
app.use('*', prettyJSON())
app.get('/', serveStatic({ path: './dist' }))

This worked for everything but the SVG. How I got it to work was checking the path via console.logging the path in the serve-static function: https://github.com/honojs/node-server/blob/main/src/serve-static.ts#L47
This Issue will be closed for now, so if you have any problems, please create a new Issue.
gharchive/issue
2023-02-26T19:49:16
2025-04-01T04:34:31.364342
{ "authors": [ "Meleeman01", "ftoh", "krsbx", "mindplay-dk", "yusukebe" ], "repo": "honojs/hono", "url": "https://github.com/honojs/hono/issues/935", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2135579122
refactor: jsx streaming
There is no change in behavior, but it is not a useful refactoring for existing applications, so it is a question of whether to include it in a patch version. A minor version might be fine.
867f85722b33038606a8f3d7b826742d490a0134: Depending on the content, the output may look like the following, and in this case a JS error occurs.
<template id="E:0"></template><!--E:0-->
71c8ec52bbcedf1149420a81bb9fd0b53ce791ec: When handling https://github.com/honojs/honox/issues/47, it is useful to be able to retrieve the resolved value later, so the data attribute was added. The data will be slightly larger, but I don't think it will be a problem.
Author should do the following, if applicable
[x] Add tests
[x] Run tests
[x] yarn denoify to generate files for Deno
Hi @usualoma Looks great! I think there is no problem with this code. I'm trying to decide whether to put this in the patch release or the minor version. So, I'll merge this later. Maybe 4.1 will be released earlier without many features.
@usualoma I'm merging and releasing this in a patch version, as there are no clear feature additions in this PR and I don't want this to become a blocker. Thanks!
gharchive/pull-request
2024-02-15T03:43:07
2025-04-01T04:34:31.368744
{ "authors": [ "usualoma", "yusukebe" ], "repo": "honojs/hono", "url": "https://github.com/honojs/hono/pull/2216", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
797326205
[Feature request] A flag to change the contrast threshold I'm not sure if this is still being worked on, but I really like the idea. One thing I would love to see is a flag to change the contrast threshold. For example, consider the following the input image: Currently, this script produces this output: But if one could increase (or decrease?) the contrast threshold, then lighter lines would show up as solid. I would provide code suggestions but I've no idea if this is possible with the current libraries... Let me know! :-) Hi, this is a great idea! I'm actually fighting the same problem myself, but it didn't occur to me I could add an option to solve this. Currently I'm working around it by preprocessing the images and first giving them the desired threshold in GIMP. I still use the tool, but it's a while since I created it. Would you be willing to work on such improvement? I don't know from the top of my head how hard it would be to add this. I took a quick look and I see this in the code: # Threshold "$src_dir/lib/localthresh" -m 1 -r 65 -b 5 -n yes "$in_file" "$out.png" "$src_dir/lib/isonoise" -r 3 "$out.png" "$out.png" Those are scripts from here: http://www.fmwconcepts.com/imagemagick/ http://www.fmwconcepts.com/imagemagick/localthresh/ http://www.fmwconcepts.com/imagemagick/isonoise/ Both have certain parameters (if you scroll down the respective pages, you'll see full documentation), but I think it would require a bit of trial and error to see what we can do and what would make sense the most. Ideally the option would be something like cartoonist.sh -t 5? Where I could put different number instead of 5, like on a scale from 1-10, and see which one makes the best output for the picture at hand. What do you think? Hi, this is a great idea! I'm actually fighting the same problem myself, but it didn't occur to me I could add an option to solve this. Currently I'm working around it by preprocessing the images and first giving them the desired threshold in GIMP. I still use the tool, but it's a while since I created it. Would you be willing to work on such improvement? I don't know from the top of my head how hard it would be to add this. I took a quick look and I see this in the code: # Threshold "$src_dir/lib/localthresh" -m 1 -r 65 -b 5 -n yes "$in_file" "$out.png" "$src_dir/lib/isonoise" -r 3 "$out.png" "$out.png" Those are scripts from here: http://www.fmwconcepts.com/imagemagick/ http://www.fmwconcepts.com/imagemagick/localthresh/ http://www.fmwconcepts.com/imagemagick/isonoise/ Both have certain parameters (if you scroll down the respective pages, you'll see full documentation), but I think it would require a bit of trial and error to see what we can do and what would make sense the most. Ideally the option would be something like cartoonist.sh -t 5? Where I could put different number instead of 5, like on a scale from 1-10, and see which one makes the best output for the picture at hand. What do you think? Hi @honzajavorek. Thanks for the response! I actually do have some free time today, so I'll make a fork and see what I can do! Hi @honzajavorek. Thanks for the response! I actually do have some free time today, so I'll make a fork and see what I can do! Awesome! Please report back any findings, even if you end up in a dead end 🚀 Awesome! Please report back any findings, even if you end up in a dead end 🚀
gharchive/issue
2021-01-30T03:42:46
2025-04-01T04:34:31.377468
{ "authors": [ "honzajavorek", "jakewilliami" ], "repo": "honzajavorek/cartoonist", "url": "https://github.com/honzajavorek/cartoonist/issues/9", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
181806504
Sign out button border
Commented updated CSS after moving the sign out button into the ul element. It doesn't need a border any more.
I think it would make sense to combine this PR with the UI element?
gharchive/pull-request
2016-10-08T05:38:10
2025-04-01T04:34:31.388456
{ "authors": [ "Janatbek", "gr2m" ], "repo": "hoodiehq/hoodie-app-tracker", "url": "https://github.com/hoodiehq/hoodie-app-tracker/pull/96", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
182380601
Provide instructions on how to unregister service workers in Readme Readme should include instructions on how to unregister service workers once the tutorial is over. I was attempting to use http://localhost:8080 for a different project and the Hoodie Offline Tutorial was still on it since I forgot to unregister the service workers. I'd be happy to edit the readme and make a PR with the instructions. yes that’d be very helpful, thanks!
gharchive/issue
2016-10-11T21:35:14
2025-04-01T04:34:31.389900
{ "authors": [ "gr2m", "nongaap" ], "repo": "hoodiehq/hoodie-camp-tutorial", "url": "https://github.com/hoodiehq/hoodie-camp-tutorial/issues/23", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
16514813
headless testing: find alternative to testem
One that is fast, so it's good to run tests on file changes.
Just found this thread from Google. I have used testem to a greater extent than karma. @gr2m @svnlto any comparison insights between karma and testem from your experience?
this thread is over 3 years old :) I haven't used either since then
Okay, thanks for the reply.
gharchive/issue
2013-07-09T09:13:35
2025-04-01T04:34:31.391917
{ "authors": [ "g-patel", "gr2m" ], "repo": "hoodiehq/hoodie", "url": "https://github.com/hoodiehq/hoodie/issues/99", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2426276432
🛑 FWAw is down In ef8237b, FWAw ($FWA_SITE) was down: HTTP code: 0 Response time: 0 ms Resolved: FWAw is back up in 175fa7a after 13 minutes.
gharchive/issue
2024-07-23T23:23:52
2025-04-01T04:34:31.408133
{ "authors": [ "hoopybe" ], "repo": "hoopybe/uptime", "url": "https://github.com/hoopybe/uptime/issues/1291", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
514651765
specify data-reflex-allow on an input element to allow reflex operations Type of PR (feature, enhancement, bug fix, etc.) Enhancement Description Adds a data-reflex-allow attribute to the API, which can be put on an input element to opt-out of SSoT handling. Why should this be added While I still strongly believe SSoT is the responsible default, scenarios do emerge when you need an escape hatch. Checklist [x] My code follows the style guidelines of this project [x] Checks (StandardRB & Prettier-Standard) are passing Superceded by #157
gharchive/pull-request
2019-10-30T13:02:26
2025-04-01T04:34:31.423758
{ "authors": [ "leastbad" ], "repo": "hopsoft/stimulus_reflex", "url": "https://github.com/hopsoft/stimulus_reflex/pull/89", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1419468159
os.environ["CUDA_LAUNCH_BLOCKING"] = "1" is set in trainer.py This should not have been left set. I presume it will impact performance. I should add an argument parser and allow an option that triggers this for debugging. Note: possibly also add an argument for enabling anomaly detection Issue addressed in #29
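A sketch of what the proposed change could look like; the flag names below are suggestions rather than the project's actual CLI.

import argparse
import os

import torch

parser = argparse.ArgumentParser()
parser.add_argument("--cuda-launch-blocking", action="store_true",
                    help="synchronous CUDA launches for easier debugging (slow)")
parser.add_argument("--detect-anomaly", action="store_true",
                    help="enable torch.autograd anomaly detection (slow)")
args = parser.parse_args()

if args.cuda_launch_blocking:
    # Must be set before the first CUDA call to take effect.
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

torch.autograd.set_detect_anomaly(args.detect_anomaly)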
gharchive/issue
2022-10-22T19:24:07
2025-04-01T04:34:31.425561
{ "authors": [ "horenbergerb" ], "repo": "horenbergerb/BernoulliDiffusion", "url": "https://github.com/horenbergerb/BernoulliDiffusion/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1562810072
CB-18643: Migrate to AWS SDK v2
In this commit, we've replaced the AWS SDK v1 with v2. This version change touches several things in our codebase:
- Import changes
- You must create all clients using the client builder method. Constructors are no longer available.
- Request and response object names
- All client class names are now fully camel-cased and no longer prefixed by "Amazon". Exception class names, and their structures and relationships, have also changed.
- Client configurations
- Waiters
- Clients and operation request and response objects are now immutable and cannot be changed after creation.
- The setter method names don't include the "set" or "with" prefixes.
retest this please
retest this please
retest this please
gharchive/pull-request
2023-01-30T16:54:17
2025-04-01T04:34:31.503472
{ "authors": [ "keyki", "tiborpopovics" ], "repo": "hortonworks/cloudbreak", "url": "https://github.com/hortonworks/cloudbreak/pull/14193", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
479731421
CDPSDX-716 - suppress warning for single zookeeper
CDPSDX-716 - suppress warning for single zookeeper. Second attempt at the zookeeper config change to suppress the warning. The previous pull request was denied due to changes in the 2 HA blueprints; this pull request only includes changes to the NON-HA blueprint.
Closes #CDPSDX-716
Is there anything I need to do to assist in the pull request here?
Verified the config is available in CM:
Version: Cloudera Enterprise 7.x.0 (#1312036 built by jenkins on 20190730-0355 git: 1ef7869f5dc580724ffbb376211fe15acda70a5a)
Thank you for including the CM version. It's hard to track which version has the right configs exposed.
gharchive/pull-request
2019-08-12T16:11:48
2025-04-01T04:34:31.505444
{ "authors": [ "jasonwmcswain", "keyki" ], "repo": "hortonworks/cloudbreak", "url": "https://github.com/hortonworks/cloudbreak/pull/5930", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
548951216
Remove duplicated word Nothing specific, just a small correction. Can one of the admins verify this patch? Thanks!
gharchive/pull-request
2020-01-13T14:03:46
2025-04-01T04:34:31.506409
{ "authors": [ "jenkins-cloudbreak", "keyki", "peterroth" ], "repo": "hortonworks/cloudbreak", "url": "https://github.com/hortonworks/cloudbreak/pull/7059", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }